Dataset schema: repo (string, 26–115 chars) · file (string, 54–212 chars) · language (2 classes) · license (16 classes) · content (string, 19–1.07M chars)
https://github.com/crd2333/crd2333.github.io
https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/Reading/跟李沐学AI(论文)/CLIP.typ
typst
// ---
// order: 17
// ---
#import "/src/components/TypstTemplate/lib.typ": *

#show: project.with(
  title: "d2l_paper",
  lang: "zh",
)

= Learning Transferable Visual Models From Natural Language Supervision
- Date: 2021.02
== Title & Abstract & Introduction
- Learning *Transferable* Visual Models From Natural Language *Supervision*: use natural-language supervision to train a transferable vision-language multimodal model
  - zero-shot: the model transfers without any training samples (no fine-tuning), i.e. it is transferable
  - multimodal features are well suited to zero-shot transfer
- The model is named CLIP: Contrastive Language-Image Pre-Training
- CV still faces many limits from its reliance on labeled data, while NLP has long used unsupervised learning. CLIP trains on a large-scale unsupervised signal: image-text matching, which needs almost no annotation
- The input is image-text pairs; each modality goes through its own encoder, and training is contrastive (all that is needed is a definition of positives and negatives: paired samples are positives, i.e. the *diagonal of the similarity matrix*)
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-21-17-35-21.png")
- A plain CLIP has no classification head, so a prompt template is needed: turn each class into a sentence (e.g. `This is a <class>`), extract text features from it, extract image features, then compute similarities
- At prediction time, encode the image, encode the predefined classes, compute similarities, and take the highest-scoring class as the prediction
== Method
- Dataset: WIT (WebImageText)
- The idea itself is not new, but earlier work used muddled terminology, too little data, and weaker NLP models; once the Transformer unified NLP, good semantic features became easy to obtain
- CLIP is a simplified version of ConVIRT, scaled up greatly in both data and model size
- Earlier pre-training needed dozens of GPU/TPU-years even on ImageNet-1K; OpenAI cares about efficiency, hence the choice of contrastive learning. All their other work is GPT-based; only CLIP uses contrastive learning
- They first tried a VirTex-like setup (CNN for images, Transformer for text) that predicts the caption of an image. But an image has many plausible captions, so pre-training was very slow
- Switching from a predictive task to a contrastive one, judging whether an image and a text are paired (generative to discriminative), is much simpler and improved efficiency by $4 times$
- The authors dropped the non-linear projection head (worth nearly $10$ points in earlier contrastive work) because it brought little gain in the multimodal setting, and used random cropping as the only data augmentation
- Pseudocode
  + extract features
  + multiply by a matrix $W$; this projection turns unimodal features into multimodal ones, then normalize
    - a very common practice in multimodal learning: fusion learns a joint representation space (projecting both modalities into the same semantic space)
  + compute similarities
  + compute the loss; positives lie on the diagonal, with a symmetric loss function
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-22-13-01-48.png")
- On training large models, see: #link("https://lilianweng.github.io/posts/2021-09-25-train-large")[How to Train Really Large Models on Many GPUs? | Lil'Log]
== Experiments
- Earlier self-supervised or unsupervised methods learn features that still need supervised fine-tuning on downstream tasks
- prompt engineering and ensembling
  - why prompts: ImageNet has both construction crane and crane (the bird), so bare class words are ambiguous; also the input distribution should match between training and inference to avoid a distribution gap
  - the authors use prompt templates like `A photo of {label}`, possibly extended, e.g. `A photo of {label}, a type of pet`
  - ensembling: use several prompt templates; CLIP uses $80$ of them
- Large-scale transfer results:
  - Linear Probe means freezing the backbone, extracting features, and training only an FC classifier. Zero-shot does well on common objects but is mediocre on harder tasks (DTD texture classification, CLEVRCounts object counting)
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-22-13-39-33.png")
- Few-shot results: BiT (Big Transfer) was designed by Google specifically for transfer learning
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-22-13-39-43.png")
- Full-data results: omitted
- Comparison against humans: just for fun
- Deduplication experiment: confirms the strong generalization is genuine
== Discussion & Limitations
- CLIP roughly matches baseline models but still trails the SOTA, and scaling CLIP up enough to reach SOTA looks impractical
- It performs poorly on some fine-grained tasks and abstract concepts, e.g. it cannot tell what is anomalous or what is safe
- Zero-shot works on some datasets, but fails when inference data is too far from the training data (out-of-distribution), e.g. MNIST: the $400$M training images are natural photos, with no synthetic digits
- CLIP needs a set of candidate classes; it cannot generate a caption for an image. Combining the contrastive and generative objectives is a possible future direction
- CLIP is data-inefficient; moreover the data is scraped from the web and uncleaned, so it carries biases and potential for misuse
- CLIP was tuned against ImageNet test and the $27$ evaluation datasets along the way; a dedicated zero-shot benchmark would be better
- Some complex tasks or concepts cannot be expressed in language, so supplying a few training samples (few-shot) is still useful for downstream transfer; but CLIP was not built for few-shot, so giving it a few samples can actually do worse than zero-shot, unlike humans
== Summary
- Text goes through a text encoder, images through an image encoder, and text-image pairs are trained contrastively
- For classification: pass the classes through prompt templates and the text encoder to form many queries, encode the image into one feature, then ask each query in turn by computing image-text similarities

= Works improving on CLIP (an overview)
== Segmentation
- Segmentation is essentially classification at pixel granularity, so every breakthrough in classification is quickly followed up in segmentation; CLIP is no exception
- After CLIP, works either take CLIP's pre-trained weights and connect them to a downstream task with small changes (LSeg), or reuse CLIP's objective (GroupViT) or its other properties
=== Language-Driven Semantic Segmentation (LSeg)
- Model overview
  - It looks like CLIP but is not really zero-shot; it essentially injects the text branch into traditional supervised training. It is not unsupervised, and the objective is not contrastive
  - The main difference from CLIP: the image branch takes one image and extracts a dense feature map, not CLIP's single vector per image
  - The image passes through an encoder (DPT: ViT + Transformer) to give an $tilde(H) times tilde(W) times C$ feature map; the text side gives $N times C$ features. Multiplying the two gives $tilde(H) times tilde(W) times N$ logits, which brings us back to classic supervised segmentation (a toy sketch of this step follows)
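- The per-pixel classification step is just a matrix product between the dense image features and the $N$ text embeddings; a minimal numpy sketch of that step (shapes and names are illustrative, not LSeg's actual code):

```python
import numpy as np

H, W, C, N = 32, 32, 512, 5            # feature map size, embed dim, number of class names
img_feat = np.random.randn(H, W, C)    # stand-in for the dense DPT image features
txt_feat = np.random.randn(N, C)       # stand-in for the per-class text embeddings

# L2-normalize both sides so the product is a cosine similarity
img_feat /= np.linalg.norm(img_feat, axis=-1, keepdims=True)
txt_feat /= np.linalg.norm(txt_feat, axis=-1, keepdims=True)

logits = np.einsum("hwc,nc->hwn", img_feat, txt_feat)  # H x W x N score map
pred = logits.argmax(-1)                               # per-pixel class, as in supervised segmentation
print(pred.shape)                                      # (32, 32)
```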
  - the text encoder is exactly CLIP with frozen parameters; the image encoder could be too, but training your own works better (an empirical finding)
  - finally there is an extra spatial regularization block to learn a bit more (not very important; any MLP would do)
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-23-12-04-50.png")
- Experiments: split PASCAL, COCO, etc. into folds, train on one and zero-shot to the others. The results beat previous zero-shot and few-shot methods by a good margin, but are still well behind supervised training (ResNet, not a large model)
=== Semantic Segmentation Emerges from Text Supervision (GroupViT)
- Introduction
  - Grouping is an existing technique in CV segmentation, based on region growing (expanding bottom-up from cluster centers)
  - GroupViT adds group blocks and learnable group tokens to an ordinary ViT
- Training
  - As usual the image is split into patches and the patch embeddings go into a Transformer, but group tokens are appended
    - group tokens are like the old cls token, except that instead of one feature per image we now want multiple features for segmentation, pulling semantically similar points into these $64$ clusters
  - After several Transformer layers of mutual interaction, the group tokens are ready; group blocks then assign the image patch embeddings to the group tokens (merging them into larger groups, which also shrinks the sequence length)
    - an attention-like mechanism computes similarities, and gumbel softmax makes the cluster assignment differentiable (argmax is not), achieving the merge and the dimensionality reduction (a toy sketch follows after this training overview)
  - These new tokens are treated like the earlier patch embedding tokens, more group tokens are appended, and the process repeats, ending with $8$ large group tokens
  - (To align with CLIP) the $8$ learned segment features are average-pooled (fusing the $8$ object features to represent the whole image) and passed through an MLP; the paired text goes through an encoder exactly as in CLIP, and the rest is CLIP-style contrastive learning
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-30-11-08-57.png")
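- The group block's differentiable assignment can be mimicked in a few lines; a toy torch sketch (an assumed simplification — the real GroupViT adds projections, normalization, and more):

```python
import torch
import torch.nn.functional as F

B, L, G, C = 2, 196, 64, 256                 # batch, patch tokens, group tokens, dim
patch_tokens = torch.randn(B, L, C)
group_tokens = torch.randn(B, G, C)

# attention-like similarity between group tokens and patch tokens
logits = group_tokens @ patch_tokens.transpose(1, 2)          # B x G x L

# gumbel softmax over the group axis: a differentiable (hard=True gives one-hot) assignment
assign = F.gumbel_softmax(logits, tau=1.0, hard=True, dim=1)  # each patch -> one group

# merge patches into their groups: the sequence shrinks from L to G tokens
grouped = assign @ patch_tokens                               # B x G x C
print(grouped.shape)                                          # torch.Size([2, 64, 256])
```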
- Inference
  - For zero-shot inference, the image passes through the left encoder to give $8$ segment features; on the right, each class goes through a prompt and the text encoder to give text features; then compute similarities
  - To separate out a background class, a threshold is needed: anything below it counts as background. The value is delicate; too low causes misclassification, too high turns everything into background
  - Training and inference feel somewhat inconsistent here. It is also unclear how the final groups map back onto the original image in segmentation terms
    - (?) mapping group tokens back to the image works by turning the gumbel softmax back into an argmax, which tells you exactly which pixels each group token covers
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-30-10-24-57.png")
- Results
  - Visualization: stage-1 segments are small (eyes, limbs); stage-2 segments are larger (grass, dog)
  - Numerically it beats earlier self-supervised methods by a wide margin but still trails supervised methods by a lot (about 30 points); still, it is the first work to bring the text signal into self-supervised segmentation
  - Current limitations
    + it is mostly an image encoder and does not exploit the structure of dense prediction
      - dilated convolutions, pyramid pooling, U-Net, etc. would bring more context and multi-scale information
    + the classification threshold is hard to set, and (confirmed experimentally) the model often segments correctly but classifies wrongly (mostly CLIP's fault: it learns clear object semantics well but handles the vague, many-class background poorly)
      - possible fixes: per-class thresholds, learnable thresholds, adjusting the zero-shot inference procedure, adding a background concept as a training constraint, etc.
== Object detection
=== Open-vocabulary Object Detection via Vision and Language Knowledge Distillation (ViLD)
- Date: 2021.04
- Detection is usually more complex than classification or segmentation, which did not slow this paper down at all (two months for large-scale experiments plus submission — brutally competitive)
- The title already says it: CLIP is used as a teacher for distillation
- The introduction is excellent: it opens with a figure that shows off the model while posing the problem the paper wants to solve: open-vocabulary object detection
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-30-10-25-32.png")
- Model overview
  - The paper effectively decouples proposal generation from classification; the figures only draw the second stage, i.e. everything after the proposals are obtained
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-30-10-28-06.png")
- Method: a vanilla detector as the baseline, then ViLD-text and ViLD-image, combined into ViLD, and finally ViLD-ensemble
  1. the *baseline* is just a Mask R-CNN, a two-stage detector: stage one proposes N region proposals, which pass through a detection head for region embeddings and a classifier for the class
  2. *ViLD-text* follows the CLIP recipe: extract image and text features separately and compute similarities. On the image side, compared to the baseline it only adds a projection layer and L2-norm to get the region embeddings; on the text side, a frozen CLIP text encoder produces embeddings, concatenated with a learnable background embedding. The dot product of the two gives similarities, trained against the ground truth with a cross-entropy loss. So far this only classifies base classes (CB); anything outside them is dumped into background
  - How to extend to novel classes (CN)? Hence ViLD-image. The idea: CLIP's image and text encoders are both strong; since we already use its text encoder, we just need our image-side embeddings to match CLIP's as closely as possible — which leads to knowledge distillation
  3. *ViLD-image*: concretely, crop and resize the region proposals and feed them to the CLIP image encoder as the teacher; its predictions replace the ground truth under an L1 loss. Predictions are no longer limited to CB but cover CB+CN
  - One does suspect the paper mostly distills CLIP and wraps the application up nicely...
  - To speed up training and avoid repeatedly pushing region proposals through the CLIP image encoder (a huge cost), the image embeddings are precomputed: RPN proposals are extracted, cropped, resized, and encoded in advance, so distillation only loads them from disk
  - *ViLD*: finally ViLD-text and ViLD-image are merged, i.e. training uses two objectives (at test time the image branch on the right is dropped)
- The full ViLD model diagram
#fig("/public/assets/Reading/limu_paper/CLIP/2024-09-30-10-33-37.png")
- Experimental results
  - Experiments are on LVIS (a long-tailed dataset built on COCO): the common and frequent classes (measured by $A P_c$, $A P_f$; $A P$ is the standard detection metric, the area under the precision-recall curve) are used as base classes CB for training, while classes seen only once or twice ($A P_r$) are treated as zero-shot inference
  - With a ResNet-50 backbone, the Supervised + RFS (repeated factor sampling) baseline gets only $12.3%$, while ViLD reaches $16.1%$
    - This is understandable though: even with resampling the baseline barely sees the new classes, so supervision cannot really kick in, and ViLD beats it only slightly — a bit of a sleight of hand. On overall $A P$, ViLD is actually a little weaker than the baseline
  - With a stronger backbone it improves a lot, only slightly behind the 2020 leader, so it does have real strength
  - As a zero-shot model it can of course be applied to other datasets directly: on PASCAL VOC 2007, COCO, and Objects365, there is still a clear gap to the supervised models trained on those datasets
- Assessment
  - ViLD is the first work to do open-vocabulary detection on a dataset as hard as LVIS, a genuine milestone
  - It uses CLIP's pre-trained parameters, borrows several of CLIP's ideas, and gets good results
=== Grounded Language Image Pre-training (GLIP)
#let Enc = math.text("Enc")
#let Img = math.text("Img")
#let cls = math.text("cls")
#let loc = math.text("loc")
#let Lf = math.cal("L")
- Date: 2021.12
- The detection counterpart to segmentation's GroupViT; compared with CLIP, the name only swaps contrastive for grounded
- Motivation
  - As in segmentation, careful annotation is expensive; we want one huge pre-trained model to handle the open-vocabulary case, exploiting image-text pairs the way CLIP does
  - Vision-language has a downstream task where, given a sentence, the corresponding objects are boxed in the image; combining this phrase grounding task with object detection, plus pseudo-label-style self-training, greatly enlarges the usable data
- Unifying the losses
  - A detection loss is generally $Lf = Lf_cls + Lf_loc$; the localization term is similar everywhere, determined mainly by the model and anchor scheme. The classification loss differs, and object detection (where a label is a one-hot word) and visual grounding (where a label is a sentence) handle it differently; we need to unify them in one framework
  - *object detection* is the simpler side: $O in RR^(N times d), W in RR^(c times d), S_cls in RR^(N times c), T in {0, 1}^(N times c)$ are the region embeddings, classifier weights, scores, and ground truth
  $ O = Enc_I (Img), S_cls = O W^T, Lf_cls = "loss"(S_cls;T) $
    - the image goes through the image encoder for region features $O$, then through a classification head (multiplying by the weight matrix $W$) for class logits, then cross-entropy against the true classes
  - *visual grounding* is a bit more involved: $P in RR^(M times d), S_"ground" in RR^(N times M)$ are the text embeddings and the dot-product similarities
  $ O = Enc_I (Img), P = Enc_L ("Prompt"), S_"ground" = O P^T, Lf_cls = "loss"(S_"ground";T') $
    - this works like the ViLD-text branch: image and sentence go through their own encoders and a matching score is computed. But the region-word alignment scores $S_"ground"$ no longer match the ground truth: the number of (sub-)word tokens $M$ is always larger than the number of phrases $c$ in the text prompt
    - the authors list $4$ reasons, omitted here. The fix: if a phrase is a positive match, all of its sub-words are positive matches, and the extra added tokens are negative matches for all image features; this converts $T$ into $T' in {0, 1}^(N times M)$ (?)
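- A minimal sketch of the grounding branch, $S_"ground" = O P^T$ with the expanded targets $T'$ (toy shapes and a made-up phrase span; an assumed simplification, not GLIP's actual loss):

```python
import torch
import torch.nn.functional as F

N, M, d = 10, 16, 256        # regions, (sub-)word tokens, embed dim
O = torch.randn(N, d)        # region embeddings from the image encoder
P = torch.randn(M, d)        # token embeddings from the language encoder

S_ground = O @ P.T           # N x M region-word alignment scores

# T': phrase-level ground truth expanded to sub-word level. Every sub-word of a
# matched phrase is a positive; added/padding tokens are negative for all regions.
T_prime = torch.zeros(N, M)
T_prime[0, 3:6] = 1.0        # pretend region 0 matches a phrase spanning tokens 3..5

loss_cls = F.binary_cross_entropy_with_logits(S_ground, T_prime)
print(loss_cls.item())
```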
- The two formulations are really close; a small change to how positive and negative matches are defined joins them. The authors validated the method on small datasets, then transferred it to a large one
- #wrap-content(
    [How exactly is it scaled up? Object-detection datasets and grounding datasets can be used directly; to also use image-text (caption) data, which has no bounding-box labels, pseudo-labels are used — the inference results of the smaller model trained earlier serve as ground truth, greatly enlarging the dataset],
    fig("/public/assets/Reading/limu_paper/CLIP/2024-10-03-15-44-32.png"),
    columns: (50%, 50%)
  )
- Now the model overview
#fig("/public/assets/Reading/limu_paper/CLIP/2024-10-01-00-02-18.png")
  - First, both modalities pass through their encoders to get embeddings
  - Deep Fusion: in principle the similarities could be computed directly from those embeddings, but the joint image-text embedding space is not yet well learned (LSeg kept learning via convolutions). Adding some fusion layers makes the final similarity more targeted; concretely, cross-attention exchanges information between the two branches
  - Then $Lf_cls$ is computed as above, $Lf_loc$ separately with an L1 loss, and the model trains on both
- Results
  - Contemporary pure-vision works (DyHead, SoftTeacher) have no zero-shot ability but reach around $60$ AP on COCO after fine-tuning. GLIP-L has zero-shot ability, reaching nearly $50$ AP, and slightly above $60$ after fine-tuning. Overall quite good
  - GLIPv2 followed, adding more tasks (Object Detection, Instance Segmentation, Vision Grounding, Visual Question Answering, Image Captioning, etc. all together, with a fancier text encoder); in a sense this is where multimodal work is heading: more data to train models that are both big and good
#fig("/public/assets/Reading/limu_paper/CLIP/2024-10-03-17-04-04.png")
== AIGC
=== CLIPasso: Semantically-Aware Object Sketching
- Date: 2022.02
- Actually a computer-graphics paper (a CG-CV crossover)
- Motivation: the title means object sketching that preserves semantics; CLIPasso = CLIP + Picasso. Turn a *photo into a line sketch*, generating the *main contours at various levels of abstraction* while keeping the *key visual features*. The model must both simplify the object drastically and capture its most essential characteristics
- Related work
  - Earlier approaches trained on collected datasets with a fixed abstraction level — a data-driven recipe. The resulting sketches are constrained in style and form, defeating the point of generation (varying levels), and the category coverage is narrow
  - CLIP, by contrast, captures object semantics extremely well thanks to its image-text pairing, has excellent zero-shot ability, and is robust to image style, so it encodes image features very well
- Main method
#fig("/public/assets/Reading/limu_paper/CLIP/2024-10-03-17-19-37.png")
  - The task: randomly place Bezier curves on a white canvas, then train until the curves compose the sketch
  - Concretely, first *initialize the parameters*, then map the strokes onto the 2D canvas through a rasterizer
    - this rasterizer is not quite the one I know... isn't a rasterizer supposed to go from 3D objects, camera, lights $->$ 2D image?
  - The paper's two main contributions
    + a better initialization (saliency): take a trained ViT, build a saliency map from the weighted average of the last multi-head self-attention, and sample points in the salient regions (sampling there means the Bezier curves start roughly on an object or its boundary)
    + a better objective (distilling CLIP as a teacher) and loss: $L_g$ pulls the outputs of the early layers — low-level spatial information such as pose, position, structure — as close as possible; $L_s$ pulls the sketch's and the original image's features as close as possible
  - For better results, several sketches are generated, each scored by the loss, and the best one is returned
- Highlights
  + fast training: one V100 finishes 2000 iterations in 6 minutes
  + not limited to common categories; it can sketch unusual objects
  + abstraction is controlled freely through the number of strokes
- Limitations
  + with a background present, quality degrades badly; another model can mask out the background in a 2-step pipeline, but that is not end-to-end; future work could build such a mask into the loss function
  + strokes are generated simultaneously rather than sequentially as a human would; an autoregressive variant could help
  + the preset stroke count is flexible on one hand, but inconvenient across different images on the other; it could become a learnable parameter
== Video understanding
=== CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval and Captioning
- Date: 2021.04
- Video-text retrieval: retrieve the most relevant video clip from text
- The title is a pun: the CLIP model applied to video clips
- CLIP suits retrieval naturally, since it models image-text similarity, and similarity supports ranking, matching, retrieval, etc. With its two-tower design (separate image and text encoders), one dot product between the image and text embeddings gives the similarity, so it scales easily (parallelism, pre-extraction, and so on)
- Model
#fig("/public/assets/Reading/limu_paper/CLIP/2024-10-03-19-08-34.png")
  - The text side is unchanged and yields a text embedding; the video side adds a time dimension. Each of the $N$ frames is patchified and encoded, giving $N$ CLS tokens. Each text feature now corresponds to $N$ image features, so how should similarity be computed? Being an empirical study, the paper tries the three existing approaches and picks the best
    + *Parameter-free type*: the simplest — mean pooling over the time dimension. It ignores the temporal ordering between frames (e.g. a person sitting down vs. standing up). Even so, it is the most widely accepted approach
    + *Sequential Type*: temporal information + positional encoding, modeled with an LSTM / Transformer encoder. This is late fusion (text and image features are fused after extraction)
    + *Tight Type*: early fusion. A Transformer encoder fuses the text and the $N$ frames directly, and an MLP outputs the similarity. It is a bit like using the text feature as the cls token, fusing temporal and cross-modal information at once
  - Results and insights
    + CLIP's pre-trained model is strong: fine-tuned or zero-shot, it beats earlier methods by $20$-odd points
    + With modest training data, the non-learned mean pooling is actually best; Tight Type does worst, probably overfitting the small downstream data
    + Going from images to video there is a domain gap; pre-training on enough video data first would transfer better
    + 2D vs. 3D patches for video ViTs: 2D patches work better here, though 3D looks promising
    + For CLIP on video-text retrieval, the learning rate is a very sensitive parameter
=== ActionCLIP: A New Paradigm for Video Action Recognition
- Date: 2021.09
- Action recognition is essentially classification with temporal information added, so CLIP applies naturally
- Traditional action recognition models
  - a video enters an encoder (2D/3D) and the loss is computed against labeled ground truth
  - being supervised, it is hard to scale to large datasets
  - unlike the one-to-one labels of image classification, action labels are awkward: "open the door" is one phrase of three words, and open alone can describe many actions. Hence a trade-off:
    + labeling many classes raises annotation cost, and softmax over a huge label set performs poorly, so ordinary classifiers may all struggle
    + labeling only coarse classes loses the fine-grained ones
  - Ideally we escape labels altogether, learn good features from masses of video, then transfer zero-shot or few-shot downstream — which leads naturally to CLIP
- So what does the paper change?
  + first, how the image encoder handles video, i.e. how per-frame features are compared with the text features — very similar to CLIP4Clip
  + second, the label matrix of action recognition: with a relatively small dataset and a large batch, off-diagonal entries can also be positives (one batch may contain several descriptions of running). Replacing cross-entropy with a KL divergence (which measures the similarity of two distributions) solves this
#grid(
  columns: (30%, 70%),
  column-gutter: 4pt,
  align: horizon,
  fig("/public/assets/Reading/limu_paper/CLIP/2024-10-03-19-46-03.png"),
  fig("/public/assets/Reading/limu_paper/CLIP/2024-10-03-19-46-39.png")
)
- Architecture
  - video and text are tokenized, pass through their encoders, and the resulting similarities are trained against the ground truth with a KL-divergence loss
  - it adds a Textual Prompt plus Pre-network, In-network, and Post-network Prompts to speed up transfer; only the text one is a prompt in the proper sense — the other three are called prompts mostly for narrative consistency
    + Pre-network Prompt: Joint — temporal information at the input, i.e. temporal encoding
    + In-network Prompt: Shift — a shift module between ViT blocks that shuffles the feature map around, gaining modeling power at zero cost
    + Post-network Prompt: Transf — exactly the three similarity schemes from CLIP4Clip
  - feels a bit like stacking building blocks (
- Results (ablations)
  + the multimodal framework (ActionCLIP) does well, 2-3 points above the unimodal one; language guidance is the more sensible design
  + does pre-training matter? Unquestionably: without it the score plummets; notably the vision side depends on pre-training much more than the language side
  + do prompts matter? Dropping the text prompt barely costs anything, but on the vision side dropping joint costs $2.74$ points and dropping shift costs $5.38$ (both with MeanP). Also, mean pooling is no longer the best post-network choice here, presumably because the data is relatively larger
  + fine-tune vs. zero-shot: earlier models have zero zero-shot ability; with zero-shot everyone gains, but ActionCLIP still dominates
== Others
- A quick pass over other papers
  + How Much Can CLIP Benefit Vision-and-Language Tasks? (CLIP-ViL): the first large-scale empirical study plugging CLIP as the pre-trained model into all kinds of downstream tasks; the answer is that it works well — nothing novel beyond that
  + AudioCLIP: Extending CLIP to Image, Text and Audio
#fig("/public/assets/Reading/limu_paper/CLIP/2024-10-03-20-20-07.png")
    - in audio-visual data, language, video, and audio mostly come in aligned pairs, so it is easy to mimic CLIP, add an audio branch, compute pairwise similarity matrices between modalities, and sum the losses
  + PointCLIP: Point Cloud Understanding by CLIP
    - applies 2D-image CLIP to 3D point clouds by rendering the points into depth maps (depth in the color channel). Because CLIP has seen everything, this also transfers, with decent results
  + Can Language Understand Depth?
    - CLIP is strong at grasping objects, but contrastive learning is weaker at learning concepts, and depth is somewhat concept-like, so this is worth testing
    - same idea as the paper above, but depth is hard-binned into giant, close, far, etc. (turning regression into classification) — a very clever use of CLIP
== Summary
- CLIP is used in basically $3$ ways
  + use CLIP to extract features, keeping the original framework and simply strengthening training with better features — the smallest change
  + use CLIP as a teacher for distillation; whatever the modality or 2D/3D, CLIP helps convergence
  + skip CLIP's pre-trained weights but borrow its multimodal contrastive idea, defining your own positives and negatives
- All in all, in the current era of large models one cannot train a large model per task; leaning on a large model plus small learnable modules is likely the more useful and *more tractable* direction
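- As a concrete reference for the zero-shot recipe summarized above, a minimal sketch assuming the openai/clip package is installed and that some local `example.jpg` exists:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["dog", "cat", "construction crane"]
text = clip.tokenize([f"a photo of a {c}" for c in labels]).to(device)  # prompt template
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    probs = (100 * image_feat @ text_feat.T).softmax(dim=-1)

print(labels[probs.argmax().item()])  # the highest-similarity prompt wins
```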
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/chronos/0.1.0/src/sequence.typ
typst
Apache License 2.0
#import "@preview/cetz:0.2.2": draw, vector #import "consts.typ": * #import "participant.typ" #import "note.typ" #let get-arrow-marks(sym, color) = { if type(sym) == array { return sym.map(s => get-arrow-marks(s, color)) } ( "": none, ">": (symbol: ">", fill: color), ">>": (symbol: "straight"), "\\": (symbol: ">", fill: color, harpoon: true, flip: true), "\\\\": (symbol: "straight", harpoon: true, flip: true), "/": (symbol: ">", fill: color, harpoon: true), "//": (symbol: "straight", harpoon: true), "x": none, "o": none, ).at(sym) } #let reverse-arrow-mark(mark) = { if type(mark) == array { return mark.map(m => reverse-arrow-mark(m)) } let mark2 = mark if type(mark) == dictionary and mark.at("harpoon", default: false) { let flipped = mark.at("flip", default: false) mark2.insert("flip", not flipped) } return mark2 } #let is-tip-of-type(type_, tip) = { if type(tip) == str and tip == type_ { return true } if type(tip) == array and tip.contains(type_) { return true } return false } #let is-circle-tip = is-tip-of-type.with("o") #let is-cross-tip = is-tip-of-type.with("x") #let _seq( p1, p2, comment: none, comment-align: "left", dashed: false, start-tip: "", end-tip: ">", color: black, flip: false, enable-dst: false, create-dst: false, disable-dst: false, destroy-dst: false, disable-src: false, destroy-src: false, lifeline-style: auto, slant: none ) = { return (( type: "seq", p1: p1, p2: p2, comment: comment, comment-align: comment-align, dashed: dashed, start-tip: start-tip, end-tip: end-tip, color: color, flip: flip, enable-dst: enable-dst, create-dst: create-dst, disable-dst: disable-dst, destroy-dst: destroy-dst, disable-src: disable-src, destroy-src: destroy-src, lifeline-style: lifeline-style, slant: slant ),) } #let render(pars-i, x-pos, participants, elmt, y, lifelines) = { let shapes = () y -= Y-SPACE let h = 0 // Reserve space for comment if elmt.comment != none { h = calc.max(h, measure(box(elmt.comment)).height / 1pt + 6) } if "linked-note" in elmt { h = calc.max(h, note.get-size(elmt.linked-note).height / 2) } y -= h let i1 = pars-i.at(elmt.p1) let i2 = pars-i.at(elmt.p2) let start-info = ( i: i1, x: x-pos.at(i1), y: y, ll-lvl: lifelines.at(i1).level * LIFELINE-W / 2 ) let end-info = ( i: i2, x: x-pos.at(i2), y: y, ll-lvl: lifelines.at(i2).level * LIFELINE-W / 2 ) let slant = if elmt.slant == auto { DEFAULT-SLANT } else if elmt.slant != none { elmt.slant } else { 0 } end-info.y -= slant if elmt.p1 == elmt.p2 { end-info.y -= 10 } if elmt.disable-src { let src-line = lifelines.at(i1) src-line.level -= 1 src-line.lines.push(("disable", start-info.y)) lifelines.at(i1) = src-line } if elmt.destroy-src { let src-line = lifelines.at(i1) src-line.lines.push(("destroy", start-info.y)) lifelines.at(i1) = src-line } if elmt.disable-dst { let dst-line = lifelines.at(i2) dst-line.level -= 1 dst-line.lines.push(("disable", end-info.y)) lifelines.at(i2) = dst-line } if elmt.destroy-dst { let dst-line = lifelines.at(i2) dst-line.lines.push(("destroy", end-info.y)) lifelines.at(i2) = dst-line } if elmt.enable-dst { let dst-line = lifelines.at(i2) dst-line.level += 1 lifelines.at(i2) = dst-line } if elmt.create-dst { let par = participants.at(i2) let m = measure(box(par.display-name)) let f = if i1 > i2 {-1} else {1} end-info.x -= (m.width + PAR-PAD.last() * 2) / 2pt * f shapes += participant.render(x-pos, par, y: end-info.y - CREATE-OFFSET) } end-info.ll-lvl = lifelines.at(i2).level * LIFELINE-W / 2 // Compute left/right position at start/end start-info.insert("lx", start-info.x) if 
start-info.ll-lvl != 0 { start-info.lx -= LIFELINE-W / 2 } end-info.insert("lx", end-info.x) if end-info.ll-lvl != 0 { end-info.lx -= LIFELINE-W / 2 } start-info.insert("rx", start-info.x + start-info.ll-lvl) end-info.insert("rx", end-info.x + end-info.ll-lvl) // Choose correct points to link let x1 = start-info.rx let x2 = end-info.lx if (start-info.i > end-info.i) { x1 = start-info.lx x2 = end-info.rx } let style = ( mark: ( start: get-arrow-marks(elmt.start-tip, elmt.color), end: get-arrow-marks(elmt.end-tip, elmt.color), scale: 1.2 ), stroke: ( dash: if elmt.dashed {(2pt,2pt)} else {"solid"}, paint: elmt.color, thickness: .5pt ) ) let y0 = start-info.y if "linked-note" in elmt { let shps = note.render(pars-i, x-pos, elmt.linked-note, start-info.y, lifelines).last() shapes += shps } let flip-mark = end-info.i <= start-info.i if elmt.flip { flip-mark = not flip-mark } if flip-mark { style.mark.end = reverse-arrow-mark(style.mark.end) } let pts let comment-pt let comment-anchor let comment-angle = 0deg if elmt.p1 == elmt.p2 { if elmt.flip { x1 = start-info.lx } else { x2 = end-info.rx } let x-mid = if elmt.flip { calc.min(x1, x2) - 20 } else { calc.max(x1, x2) + 20 } pts = ( (x1, start-info.y), (x-mid, start-info.y), (x-mid, end-info.y), (x2, end-info.y) ) if elmt.comment != none { comment-anchor = ( start: if x-mid < x1 {"south-east"} else {"south-west"}, end: if x-mid < x1 {"south-west"} else {"south-east"}, left: "south-west", right: "south-east", center: "south", ).at(elmt.comment-align) comment-pt = ( start: pts.first(), end: pts.at(1), left: if x-mid < x1 {pts.at(1)} else {pts.first()}, right: if x-mid < x1 {pts.first()} else {pts.at(1)}, center: (pts.first(), 50%, pts.at(1)) ).at(elmt.comment-align) } } else { pts = ( (x1, start-info.y), (x2, end-info.y) ) if elmt.comment != none { let start-pt = pts.first() let end-pt = pts.last() if elmt.start-tip != "" { start-pt = (pts.first(), COMMENT-PAD, pts.last()) } if elmt.end-tip != "" { end-pt = (pts.last(), COMMENT-PAD, pts.first()) } comment-pt = ( start: start-pt, end: end-pt, left: if x2 < x1 {end-pt} else {start-pt}, right: if x2 < x1 {start-pt} else {end-pt}, center: (start-pt, 50%, end-pt) ).at(elmt.comment-align) comment-anchor = ( start: if x2 < x1 {"south-east"} else {"south-west"}, end: if x2 < x1 {"south-west"} else {"south-east"}, left: "south-west", right: "south-east", center: "south", ).at(elmt.comment-align) } let (p1, p2) = pts if x2 < x1 { (p1, p2) = (p2, p1) } comment-angle = vector.angle2(p1, p2) } // Start circle tip if is-circle-tip(elmt.start-tip) { shapes += draw.circle(pts.first(), radius: CIRCLE-TIP-RADIUS, stroke: elmt.color, fill: none, name: "_circle-start-tip") pts.at(0) = "_circle-start-tip" // Start cross tip } else if is-cross-tip(elmt.start-tip) { let size = CROSS-TIP-SIZE let cross-pt = (pts.first(), size * 2, pts.at(1)) shapes += draw.line( (rel: (-size, -size), to: cross-pt), (rel: (size, size), to: cross-pt), stroke: elmt.color + 1.5pt ) shapes += draw.line( (rel: (-size, size), to: cross-pt), (rel: (size, -size), to: cross-pt), stroke: elmt.color + 1.5pt ) pts.at(0) = cross-pt } // End circle tip if is-circle-tip(elmt.end-tip) { shapes += draw.circle(pts.last(), radius: 3, stroke: elmt.color, fill: none, name: "_circle-end-tip") pts.at(pts.len() - 1) = "_circle-end-tip" // End cross tip } else if is-cross-tip(elmt.end-tip) { let size = CROSS-TIP-SIZE let cross-pt = (pts.last(), size * 2, pts.at(pts.len() - 2)) shapes += draw.line( (rel: (-size, -size), to: cross-pt), (rel: (size, size), to: 
cross-pt), stroke: elmt.color + 1.5pt ) shapes += draw.line( (rel: (-size, size), to: cross-pt), (rel: (size, -size), to: cross-pt), stroke: elmt.color + 1.5pt ) pts.at(pts.len() - 1) = cross-pt } shapes += draw.line(..pts, ..style) if elmt.comment != none { shapes += draw.content( comment-pt, elmt.comment, anchor: comment-anchor, angle: comment-angle, padding: 3pt ) } if elmt.enable-dst { let dst-line = lifelines.at(i2) dst-line.lines.push(("enable", end-info.y, elmt.lifeline-style)) lifelines.at(i2) = dst-line } if elmt.create-dst { end-info.y -= CREATE-OFFSET let dst-line = lifelines.at(i2) dst-line.lines.push(("create", end-info.y)) lifelines.at(i2) = dst-line } if "linked-note" in elmt { let m = note.get-size(elmt.linked-note) end-info.y = calc.min(end-info.y, y0 - m.height / 2) } let r = (end-info.y, lifelines, shapes) return r }
https://github.com/Myriad-Dreamin/shiroa
https://raw.githubusercontent.com/Myriad-Dreamin/shiroa/main/contrib/typst/tidy-book/lib.typ
typst
Apache License 2.0
// https://github.com/Jollywatt/arrow-diagrams
https://github.com/fuchs-fabian/typst-template-aio-studi-and-thesis
https://raw.githubusercontent.com/fuchs-fabian/typst-template-aio-studi-and-thesis/main/README.md
markdown
MIT License
# aio-studi-and-thesis: All-in-one template for students and theses <p align="center"> <a href="https://github.com/fuchs-fabian/typst-template-aio-studi-and-thesis/blob/main/docs/manual-de.pdf"> <img alt="Manual DE" src="https://img.shields.io/website?down_message=offline&label=manual%20de&up_color=007aff&up_message=online&url=https%3A%2F%2Fgithub.com%2Ffuchs-fabian%2Ftypst-template-aio-studi-and-thesis%2Fblob%2Fmain%2Fdocs%2Fmanual-de.pdf" /> </a> <a href="https://github.com/fuchs-fabian/typst-template-aio-studi-and-thesis/blob/main/docs/manual-en.pdf"> <img alt="Manual EN" src="https://img.shields.io/website?down_message=offline&label=manual%20en&up_color=007aff&up_message=online&url=https%3A%2F%2Fgithub.com%2Ffuchs-fabian%2Ftypst-template-aio-studi-and-thesis%2Fblob%2Fmain%2Fdocs%2Fmanual-en.pdf" /> </a> <a href="https://github.com/fuchs-fabian/typst-template-aio-studi-and-thesis/blob/main/docs/example-de-thesis.pdf"> <img alt="Example DE" src="https://img.shields.io/website?down_message=offline&label=example%20de&up_color=007aff&up_message=online&url=https%3A%2F%2Fgithub.com%2Ffuchs-fabian%2Ftypst-template-aio-studi-and-thesis%2Fblob%2Fmain%2Fdocs%2Fexample-de-thesis.pdf" /> </a> <a href="https://github.com/fuchs-fabian/typst-template-aio-studi-and-thesis/blob/main/docs/example-en-thesis.pdf"> <img alt="Example EN" src="https://img.shields.io/website?down_message=offline&label=example%20en&up_color=007aff&up_message=online&url=https%3A%2F%2Fgithub.com%2Ffuchs-fabian%2Ftypst-template-aio-studi-and-thesis%2Fblob%2Fmain%2Fdocs%2Fexample-en-thesis.pdf" /> </a> <a href="https://github.com/fuchs-fabian/typst-template-aio-studi-and-thesis/blob/main/LICENSE"> <img alt="MIT License" src="https://img.shields.io/badge/license-MIT-brightgreen"> </a> </p> This template can be used for extensive documentation as well as for final theses such as bachelor theses. It is characterised by the fact that it is highly customisable despite the predefined design. Initially, all template parameters are optional by default. It is then suitable for documentation. To make it suitable for theses, only one parameter needs to be changed. ## ⚠️ **Disclaimer - Important!** - It is a template and does not have to meet the exact requirements of your university - It is only supported in German and English (Default setting: German) ## Getting Started You can use this template in the Typst web app by clicking “Start from template” on the dashboard and searching for `aio-studi-and-thesis`. Alternatively, you can use the CLI to kick this project off using the command ```bash typst init @preview/aio-studi-and-thesis ``` Typst will create a new directory with all the files needed to get you started. ## Usage The template ([rendered PDF (DE)](docs/example-de-thesis.pdf)) contains thesis writing advice (in German) as example content. If you are looking for the details of this template package's function, take a look at the [german manual](docs/manual-de.pdf) or the [english manual](docs/manual-en.pdf). > Roboto is used as the default font. Please note accordingly if you want to use exactly this font. 
## Example configuration

```typ
#import "@preview/aio-studi-and-thesis:0.1.0": *

#show: project.with(
  lang: "de",
  authors: (
    (name: "<NAME>"),
  ),
  title: "Title",
  subtitle: "Subtitle",
  cover-sheet: (
    cover-image: none,
    description: []
  )
)
```

## Donate with [PayPal](https://www.paypal.com/donate/?hosted_button_id=4G9X8TDNYYNKG)

If you think this template is useful, saves you a lot of work and nerves (Word and LaTeX can be very tiring), and lets you sleep better, a small donation would be very nice.

[![Paypal](https://www.paypalobjects.com/de_DE/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/donate/?hosted_button_id=4G9X8TDNYYNKG)
https://github.com/The-Notebookinator/notebookinator
https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/themes/linear/entries.typ
typst
The Unlicense
#import "format.typ": * #import "./colors.typ": * #import "/utils.typ" #let cover = utils.make-cover(ctx => { v(50pt) line( length: 100%, stroke: 2pt, ) h(5pt) rect( inset: 30pt, fill: surface-0, width: 100%, )[ #grid( columns: (1fr, 3fr), gutter: 2fr, [ #set text(72pt) #ctx.team-name ], [ #align( right, [ #set text(20pt) #ctx.season Engineering Design Notebook ], ) ], ) ] h(5pt) line( length: 100%, stroke: 2pt, ) place( center + bottom, dy: -50pt, [ #set text(20pt) #box( width: 150pt, stroke: ( top: white, bottom: white, left: black, right: black, ), ctx.year, ) ], ) }) #let frontmatter-entry = utils.make-frontmatter-entry(( ctx, body, ) => { show: page.with(header: { set text(size: 25pt) set line(stroke: 1.5pt) set align(center + horizon) grid( columns: (1fr, auto, 1fr), line(length: 100%), { h(20pt) ctx.title h(20pt) }, line(length: 100%), ) }) set-border(ctx.type) body }) #let body-entry = utils.make-body-entry(( ctx, body, ) => { show: page.with( margin: (top: 88pt), header: { set text(size: 30pt) set line(stroke: 1.5pt) set align(center + horizon) grid( columns: (1fr, auto, 1fr), line(length: 100%), { h(20pt) box( fill: entry-type-metadata.at(ctx.type), outset: 10pt, ctx.title, ) h(20pt) }, line(length: 100%), ) }, footer: { grid( columns: (2fr, 2fr, 1fr), [Written by: #h(10pt) #ctx.author], [Witnessed by: #h(10pt) #ctx.witness], align( right, box( fill: surface-1, outset: 8pt, context counter(page).display(), ), ), ) }, ) set-border(ctx.type) show heading: it => { set-heading( it, ctx.type, ) } show raw.where(block: false): box.with( fill: surface-1, inset: ( x: 4pt, y: 0pt, ), outset: ( x: 0pt, y: 4pt, ), ) show raw.where(block: true): block.with( fill: surface-1, inset: 8pt, width: 100%, ) body }) #let appendix-entry = utils.make-appendix-entry(( ctx, body, ) => { show: page.with(header: [ #set text(size: 25pt) #set line(stroke: 1.5pt) #align( center + horizon, grid( columns: ( 1fr, auto, 1fr, ), [ #line(length: 100%) ], [ #h(20pt) #ctx.title #h(20pt) ], [ #line(length: 100%) ], ), ) ]) set-border(ctx.type) body })
https://github.com/MatheSchool/typst-g-exam
https://raw.githubusercontent.com/MatheSchool/typst-g-exam/develop/docs-shiroa/g-exam-doc/about.typ
typst
MIT License
#import "mod.typ": * #show: book-page.with(title: "About") = About
https://github.com/Coekjan/parallel-programming-learning
https://raw.githubusercontent.com/Coekjan/parallel-programming-learning/master/ex-1/report.typ
typst
#import "../template.typ": * #import "@preview/cetz:0.2.2" as cetz #import "@preview/codelst:2.0.1" as codelst #show: project.with( title: "并行程序设计第 1 次作业(OpenMP 编程)", authors: ( (name: "叶焯仁", email: "<EMAIL>", affiliation: "ACT, SCSE"), ), ) #let data = toml("data.toml") #let lineref = codelst.lineref.with(supplement: "代码行") #let sourcecode = codelst.sourcecode.with( label-regex: regex("//!\s*(line:[\w-]+)$"), highlight-labels: true, highlight-color: lime.lighten(50%), ) #let data-time(raw-data) = raw-data.enumerate().map(data => { let (i, data) = data (i + 1, data.sum() / data.len()) }) #let data-speedup(raw-data) = data-time(raw-data).map(data => { let time = data-time(raw-data) let (i, t) = data (i, time.at(0).at(1) / t) }) #let data-table(raw-data) = table( columns: (auto, 1fr, 1fr, 1fr), table.header([*线程数量*], table.cell([*运行时间(单位:秒)*], colspan: 3)), ..raw-data.enumerate().map(e => { let (i, data) = e (str(i + 1), data.map(str)) }).flatten() ) #let data-chart(raw-data, width, height, time-max, speedup-max) = cetz.canvas({ cetz.chart.columnchart( size: (width, height), data-time(raw-data), y-max: time-max, x-label: [_线程数量_], y-label: [_平均运行时间(单位:秒)_], bar-style: none, ) cetz.plot.plot( size: (width, height), axis-style: "scientific-auto", plot-style: (fill: black), x-tick-step: none, x-min: 0, x-max: 17, y2-min: 1, y2-max: speedup-max, x-label: none, y2-label: [_加速比_], y2-unit: sym.times, cetz.plot.add( axes: ("x", "y2"), data-speedup(raw-data), ), ) }) = 实验一:矩阵乘法 == 实验内容与方法 使用 OpenMP 并行编程实现矩阵乘法的并行加速,并在不同线程数量下进行实验,记录运行时间并进行分析。 - 矩阵大小:8192 #sym.times 8192 - 矩阵乘法算法:经典三重循环 - 线程数量:1 \~ 16 在程序构造过程中,有以下要点: + 为记录矩阵乘法计算时间(含 OpenMP 线程创建、同步、销毁开销),使用 OpenMP 的 ```c omp_get_wtime()``` 函数; + 依据环境变量 `OMP_NUM_THREADS` #footnote("https://www.openmp.org/spec-html/5.0/openmpse50.html") 来决定线程数量; + 为简要地记录矩阵乘法结果(双精度浮点阵列),使用 OpenSSL 的 SHA1 算法计算其指纹。 代码如 @code:matmul-code 所示,其中 #lineref(<line:omp-matmul>) 使用 OpenMP API 进行并行化。 #figure( sourcecode( raw(read("matmul/matmul.c"), lang: "c"), ), caption: "并行矩阵乘法 OpenMP 实现代码", ) <code:matmul-code> == 实验过程 在如 @chapter:platform-info 所述的实验平台上进行实验,分别使用 1 至 16 个线程进行矩阵乘法实验,记录运行时间,测定 3 次取平均值,原始数据如 @table:matmul-raw-data 所示。 == 实验结果与分析 #let matmul-speedup-max = data-speedup(data.matmul).sorted(key: speedup => speedup.at(1)).last() 矩阵乘法实验测定的运行时间如 @figure:matmul-chart 中的条柱所示,并行加速比如 @figure:matmul-chart 中的折线所示,其中最大加速比在 CPU 数量为 #matmul-speedup-max.at(0) 时达到,最大加速比为 #matmul-speedup-max.at(1)。 可见随着线程数量的增加,运行时间逐渐减少,但在线程数量达到 8 时,运行时间几乎不再减少,甚至有所增加。这可能有多方面的因素: + 线程数量过多时,线程创建、同步、销毁的开销超过了并行计算的收益,导致运行时间增加。 + 线程划分矩阵内存空间时,可能存在线程间共享 cache 行的情况,随着线程数量增加,cache 访问冲突增多,导致 cache 命中率降低,进而影响运行时间。 #figure( data-chart(data.matmul, 12, 8.5, 200, 6), caption: "矩阵乘法运行时间", ) <figure:matmul-chart> 矩阵乘法实验中的原始数据如 @table:matmul-raw-data 所示。 #figure( data-table(data.matmul), caption: "矩阵乘法实验原始数据", ) <table:matmul-raw-data> = 实验二:正弦计算 == 实验内容与方法 使用 OpenMP 并行编程,利用泰勒展开 @equation:sine-taylor 实现任意精度正弦函数的计算,并在不同线程数量下进行实验,记录运行时间并进行分析。 $ sin (x) = x - x^3/3! + x^5/5! - ... + (-1)^n x^{2n+1}/(2n+1)! + ... 
$ <equation:sine-taylor> - $x$ 取值:0.2306212 - 计算泰勒展开项数:2#super[17],即 131072 - 线程数量:1 \~ 16 程序构造过程中有如下要点: + 为实现任意精度的正弦函数计算,使用 GMP 库中的 ```cpp class mpf_class```; + 为避免重复计算阶乘、幂运算,同时为使线程的计算负载相当,在正式计算前,预先计算并存储阶乘、幂运算结果; + 为缓解分支预测错误,使用 ```cpp (1 - ((i & 1) << 1))``` 的方式来实现 $(-1)^i$; + 为记录正弦函数计算时间(含 OpenMP 线程创建、同步、销毁开销),使用 OpenMP 的 ```c omp_get_wtime()``` 函数; + 依据环境变量 `OMP_NUM_THREADS` #footnote("https://www.openmp.org/spec-html/5.0/openmpse50.html") 来决定线程数量; + 为简要地记录正弦函数计算结果(字符串),使用 OpenSSL 的 SHA1 算法计算其指纹。 代码如 @code:sincal-code 所示,其中 #lineref(<line:omp-fact-powx>) 与 #lineref(<line:omp-sincal>) 使用 OpenMP API 进行并行化。 #figure( sourcecode( raw(read("sincal/sincal.cpp"), lang: "cpp"), ), caption: "正弦函数计算 OpenMP 实现代码", ) <code:sincal-code> == 实验过程 在如 @chapter:platform-info 所述的实验平台上进行实验,分别使用 1 至 16 个线程进行正弦函数计算实验,记录运行时间,测定 3 次取平均值,原始数据如 @table:sincal-raw-data 所示。 == 实验结果与分析 #let sincal-speedup-max = data-speedup(data.sincal).sorted(key: speedup => speedup.at(1)).last() 正弦计算实验测定的运行时间如 @figure:sincal-chart 中的条柱所示,并行加速比如 @figure:sincal-chart 中的折线所示,其中最大加速比在 CPU 数量为 #sincal-speedup-max.at(0) 时达到,最大加速比为 #sincal-speedup-max.at(1)。 可见随着线程数量的增加,运行时间逐渐减少,但在线程数量达到 8 时,运行时间几乎不再减少,甚至有所增加。这可能有多方面的因素: + 线程数量过多时,线程创建、同步、销毁的开销超过了并行计算的收益,导致运行时间增加。 + 线程划分计算任务时,可能存在线程间共享 cache 行的情况,随着线程数量增加,cache 访问冲突增多,导致 cache 命中率降低,进而影响运行时间。 #figure( data-chart(data.sincal, 12, 8.5, 72, 6), caption: "正弦函数计算运行时间", ) <figure:sincal-chart> 正弦计算实验中的原始数据如 @table:sincal-raw-data 所示。 #figure( data-table(data.sincal), caption: "正弦计算实验原始数据", ) <table:sincal-raw-data> = 附注 == 编译与运行 代码依赖 GMP、OpenSSL、OpenMP 库,若未安装这些库,需手动安装。在准备好依赖后,可使用以下命令进行编译与运行: - 编译:```sh make```; - 运行:```sh make run```; - 可通过环境变量 ```OMP_NUM_THREADS``` 来指定线程数量,例如:```sh OMP_NUM_THREADS=8 make run```; - 运行结束后若提示错误(检测到指纹错误),则说明运行结果不正确,该检测机制的大致逻辑由 @code:makefile-fingerprint 中的 Makefile 代码给出; #figure( sourcecode( ```make # The fingerprint of the result FINGERPRINT := 00 11 22 33 44 55 66 77 88 99 99 88 77 66 55 44 33 22 11 00 # Run the program `app` and check the fingerprint .PHONY: run run: exec 3>&1; stdbuf -o0 ./app | tee >(cat - >&3) | grep -q $(FINGERPRINT) ``` ), caption: "Makefile 中的指纹检测代码" ) <code:makefile-fingerprint> - 清理:```sh make clean```。 == 实验平台信息 <chapter:platform-info> 本实验所处平台的各项信息如 @table:platform-info 所示。 #figure( table( columns: (auto, 1fr), table.header([*项目*], [*信息*]), [CPU], [11th Gen Intel Core i7-11800H \@ 16x 4.6GHz], [内存], [DDR4 32 GB], [操作系统], [Manjaro 23.1.4 Vulcan(Linux 6.6.19)], [编译器], [GCC 13.2.1(OpenMP 5.0)], ), caption: "实验平台信息", ) <table:platform-info>
https://github.com/7sDream/fonts-and-layout-zhCN
https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/06-features-2/positioning/single.typ
typst
Other
#import "/template/template.typ": web-page-template #import "/template/components.typ": note #import "/lib/glossary.typ": tr #show: web-page-template // ### Single adjustment === #tr[single adjustment] // A single adjustment rule just repositions a glyph or glyph class, without contextual reference to anything around it. In Japanese text, all glyphs normally fit into a standard em width and height. However, sometimes you might want to use half-width glyphs, particularly in the case of Japanese comma and Japanese full stop. Rather than designing a new glyph just to change the width, we can use a positioning adjustment: #tr[single adjustment]用于在不需要访问上下文时,对一个#tr[glyph]或#tr[glyph class]进行重#tr[positioning]。在日文中,所有的#tr[glyph]通常都会占据一个标准宽高的#tr[em square]。但有时某些#tr[glyph]也会想要使用半宽形式,特别是日文逗号和句号。我们不需要重新设计一个只是调整了宽高的#tr[glyph],只要用#tr[positioning]规则处理即可: ```fea feature halt { pos uni3001 <-250 0 -500 0>; pos uni3002 <-250 0 -500 0>; } halt; ``` // Remember that this adjusts the *placement* (placing the comma and full stop) 250 units to the left of where it would normally appear and also the *advance*, placing the following character 500 units to the left of where it would normally appear: in other words we have shaved 250 units off both the left and right sidebearings of these glyphs when the `halt` (half-width alternates) feature is selected. 这段代码既会调整#tr[glyph]的放置位置也会调整它的#tr[advance]。在放置上,会让它相对原始位置向左移动250单位;而对#tr[advance]的调整,会让下一个#tr[character]出现的位置往左移500单位。换句话说,就是当开启`halt`特性时,我们会削掉这些#tr[glyph]的左右#tr[sidebearing]各250单位。
https://github.com/mismorgano/UG-FunctionalAnalyisis-24
https://raw.githubusercontent.com/mismorgano/UG-FunctionalAnalyisis-24/main/tareas/Tarea-10/Tarea-10.typ
typst
#import "../../config.typ": config, exercise, proof #show: doc => config([Tarea 10], doc) #exercise[2.26][ Muestra que se cumple el Teorema de la gráfica cerrada ssi se cumple el principio del mapeo abierto. ] #exercise[2.27][ Sean, $X, Y$ espacios normados, $T in cal(B)(X, Y)$. Muestra que $hat(T) : X slash "Ker"(T) -> Y $ definido por $hat(T)(hat(x)) = T(x)$, es un o.l acotado sobre $T(X)$. ] #exercise[2.28][ + Prueba directamente que si $X$ es un e.B y $f$ es un f.l no cero sobre $X$, entonces $f$ es un mapeo abierto sobre $KK$. + Sea $T:c_0 -> c_0$ el operador definido por $T((x_i)) = (1/i x_i)$. ¿Es $T$ un o.l acotado?, ¿Es $T$ un mapeo abierto?, ¿T mapea $c_0$ en un subconjunto denso en $c_0$? ] #exercise[2.29][ Sea $T$ un o.l (no necesariamente acotado) de un e.n $X$ sobre un e.n $Y$. Muestra que lo siguiente es equivalente: + $T$ es un mapeo abierto. + Existe $delta >0$ tal que $delta B_Y subset T(B_X)$. + Existe $M >0$ tal que para todo $y in Y$ existe $x in T^(-1)(y)$ que satisface $norm(x)_X <= M norm(y)_Y$. ] #exercise[2.30][ Sean $X, Y$ espacios normados, $T in cal(B)(X, Y)$. Muestra que si $X$ es completo y $T$ es un mapeo abierto, entonces $Y$ es completo. ] #exercise[2.31][ Sean $X, Y$ espacios de Banach, $T in cal(B)(X, Y)$. Muestra que si $T$ es uno a uno y $B_y^circle subset T(B_Y) subset B_Y$, entonces $T$ es una isometria sobre $Y$. ] #exercise[2.32][ Sean $X, Y$ espacios de Banach y $T in cal(B)(X, Y)$. Muestra que lo siguiente es equivalente: + $T(X)$ es cerrado. + $T$ es un mapeo abierto cuando se considera sobre su imagen. + Existe $M > 0$ tla que para todo $y in T(X)$ existe $x in T^(-1)(y)$ que satisface $norm(x)_X <= M norm(y)_Y$. ]
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/field-10.typ
typst
Other
// Error: 9-13 cannot access fields on type boolean #{false.true}
https://github.com/jneug/typst-codetastic
https://raw.githubusercontent.com/jneug/typst-codetastic/main/manual.typ
typst
MIT License
#import "@local/mantys:0.0.3": * #import "@local/tidy:0.1.0" #import "bits.typ" #import "bitfield.typ" #import "checksum.typ" #import "ecc.typ" #import "util.typ" #import "qrutil.typ" #import "codetastic.typ" #show: mantys.with( ..toml("typst.toml"), title: [C#box(baseline:25%, inset:(x:-.1em), codetastic.qrcode("O", width:1em))detast#box(baseline:10%, inset:(left:-.05em,right:.1em), rotate(90deg, codetastic.ean5("00000", scale:(.8,.5))))c], abstract: [ CODETASTIC draws different kinds of codes in your Typst documents. Supported codes include EAN-13 barcodes and QR-Codes. The codes are created and drawn in pure Typst. ], titlepage: titlepage.with(toc: false), index: none ) #let show-module(name, scope: (:)) = { tidy-module( read(name + ".typ"), name: name, scope: (bits: bits, util: util, codetastic: codetastic) + scope, tidy: tidy ) } = Codes #show-module("codetastic") // #show-module("bits") //#show-module("qrutil", scope:(qrutil: qrutil))
https://github.com/EpicEricEE/typst-based
https://raw.githubusercontent.com/EpicEricEE/typst-based/master/assets/example.typ
typst
MIT License
#import "../src/lib.typ": base64, base32, base16 #set text(size: 14pt) #set page( width: auto, height: auto, margin: 1em, background: pad(0.5pt, box( width: 100%, height: 100%, radius: 4pt, fill: white, stroke: white.darken(10%), )), ) #table( columns: 3, inset: 0.5em, table.header[*Base64*][*Base32*][*Base16*], raw(base64.encode("Hello world!")), raw(base32.encode("Hello world!")), raw(base16.encode("Hello world!")), str(base64.decode("SGVsbG8gd29ybGQh")), str(base32.decode("JBSWY3DPEB3W64TMMQQQ====")), str(base16.decode("48656C6C6F20776F726C6421")) )
https://github.com/danfunc/UseTYPST.cmake
https://raw.githubusercontent.com/danfunc/UseTYPST.cmake/main/example.typ
typst
#emph[Hello] \ #emoji.face \ #"hello".len() enjoy! typst with cmake! #pagebreak() #include "subdir/subdir_example.typ"
https://github.com/barrel111/readings
https://raw.githubusercontent.com/barrel111/readings/main/classes/math6710/notes.typ
typst
#import "@local/preamble:0.1.0": * #import "@preview/fletcher:0.4.3" as fletcher: diagram, node, edge #import "@preview/commute:0.2.0": node, arr, commutative-diagram #show: project.with( course: "MATH6710", sem: "Summer", title: "Probability Theory I", subtitle: "Notes", authors: ( "<NAME>", ), ) = Introduction & Motivation We will be covering the following material + Measure Theory, + Random Variables, + Law of Large Numbers, + Weak Convergence, Central Limit Theorems. The main textbook is _<NAME>, Probability, Theory & Examples (5th edition)_. See the following books for an alternate perspective - _<NAME>, Probability with Martingales_ - _<NAME>, A Course in Probability_ In _naive_ probability theory, we consider a countable sample space $Omega subset.eq NN$ of possible outcomes and a function $PP: cal(P)(Omega) -> [0, 1]$ such that + $PP(Omega) = 1$ + If we have disjoint _events_ $A, B in cal(P)(Omega)$ then $PP(A union B) = PP(A) + PP(B)$ _(finite additivity)_ We can deduce a few properties of $PP$ immediately. #prop($"Properties of " PP$)[ Let $A, B subset.eq Omega$. Then + If $A subset.eq B$ then $PP (A) <= PP(B)$ #h(1fr) _(monotonicity)_ + $PP(A^c) = 1 - PP(A) $ #h(1fr) _(complements)_ + $PP(A union B) = PP(A) + PP(B) - PP(A sect B)$ #h(1fr) _(inclusion exclusion)_ ] #proof[ + Consider the disjoint decomposition $A = A union (B \\ A)$, #align(center)[$ PP(B) = PP(A) + PP(B \\ A) >= PP(A). $] + Consider the disjoint decomposition $A = A union (Omega \\ A)$, $ 1 = PP(Omega) &= PP(Omega \\ A) + PP(A) \ implies PP(A^c) &= 1 - PP(A). $ + Consider the disjoint decomposition $A union B = (A \\ B) union (B \\ A) union (A sect B)$, $ PP(A union B) &= PP(A \\ B) + PP(B \\ A) + PP(A sect B) $ Since $A = (A \\ B) union (A sect B)$ and $B = (B \\ A) union (A sect B)$, $ &= PP(A) + PP(B) - 2 PP(A sect B) + PP(A sect B) \ &= PP(A) + PP(B) - PP(A sect B) . $ ] #corollary("finite subadditivity")[For any events $A, B subset.eq Omega$, we have $ PP(A union B) <= PP(A) + PP(B) $] == Motivating Countable Additivity Consider an infinite sequence of independent, random variables $ X_j = cases(+1 "w.p." 1/2, -1 "w.p." 1/2) $ We consider the random walk defined by $S_n = sum_(j = 1)^N X_j$. Then, define the event $ A &= {S "visits" 0 "infinitely often"} \ &= sect.big_(n >= 1) underbrace({exists k >= n, S_ k = 0}, A_n) \ &= sect.big_(n >= 1) A_n $ We want to be able to use 'finite observations' or 'approximations', $A_n$ to compute $A$. Ideally, #numbered_eq($ PP(A) = lim_(N -> oo) PP(A_N) $) Note that for $n' >= n$, we have $A_(n') subset.eq A_n$. Note also that $ PP(sect.big_(n >= 1) A_n) = 1 - PP(union.big_(n >= 1) A_n^C) $ The sequence of complements is increasing $A_1^c subset.eq A_2^c subset.eq dots$. Thus, note that eq. (1) is equivalent to saying $ PP(B) = lim_(n -> oo) PP(B_n) $ for $B_1 subset.eq B_2 subset.eq dots$ and $B = union.big_(n >= 1) B_n$. We convert ${B_n}$ to a disjoint family by considering the family given by $B_i \\ B_(i - 1)$ (for convenience, take $B_0 = emptyset$). Then, using finite additivity, we would like to say $ PP(B) = lim_(N -> oo) sum_(j = 1)^N PP(B_j \\ B_(j - 1)) $ Equivalently, in our theory of probability we would want, for a _countable_, disjoint family ${C_i}$ the following holds $ PP(union.big_(j = 1)^oo C_j) = sum_(j = 1)^oo PP(C_j). $ == Problems with Arbitrary Additivity Consider a probability measure on $[0, 1]$. 
== Problems with Arbitrary Additivity
Consider a probability measure on $[0, 1]$. If we had arbitrary additivity,
$ PP(union.big_(j in I) A_j) = sum_(j in I) PP(A_j), $
it would lead to contradictions of the following form:
$ PP(union.big_(j in [0, 1]) {j}) = sum_(j in [0, 1]) PP({j}) $
The left hand side must be $1$, whereas the equality can hold only under very particular probability measures. In fact, this already rules out, for example, the uniform measure on $[0, 1]$.

Now, we move on to actually developing a measure theory that incorporates these ideas.

#definition[$ mu^star (A) = inf {sum_(j = 1)^n |B_j| "for" {B_j}_(j = 1)^n "such that" A subset.eq union.big B_j} \ mu_star (A) = sup {sum_(j = 1)^n |B_j| "for" {B_j}_(j = 1)^n "such that" A supset.eq union.big B_j} $]

#definition("Jordan-measurable")[A set $A$ is _Jordan measurable_ if $mu^star (A) = mu_star (A)$.]

#prop[The Jordan measure is finitely additive.]
#proof[Consider disjoint sets $A, B$. For the forward direction, consider finite covers ${A_i}_(i = 1)^n$ and ${B_i}_(i = 1)^n$ of $A$ and $B$ respectively. Note that they give rise to a finite cover ${C_i}_(i = 1)^n$ of $A union B$ defined by $C_i = A_i union B_i$. From this, taking the infimum over all such covers, we immediately have
$ mu(A union B) <= sum_(i = 1)^n abs(C_i) <= sum_(i = 1)^n abs(A_i) + sum_(i = 1)^n abs(B_i) \ implies mu(A union B) <= mu(A) + mu(B). $
For the converse, suppose ${C_i}_(i = 1)^n$ is such that $union_(i = 1)^n C_i subset.eq A union B$. We define finite collections of sets ${A_i}_(i = 1)^n$ and ${B_i}_(i = 1)^n$ by $A_i = A sect C_i$ and $B_i = B sect C_i$. Then, clearly $union_(i = 1)^n A_i subset.eq A$ and $union_(i = 1)^n B_i subset.eq B$. So, taking the supremum over all such families,
$ mu(A union B) >= sum_(i = 1)^n abs(C_i) = sum_(i = 1)^n abs(C_i sect A) + sum_(i = 1)^n abs(C_i sect B) \ implies mu(A union B) >= mu(A) + mu(B). $
]

= Algebras and Measure
== Definitions
#definition("semialgebra")[A collection $cal(S) subset.eq cal(P)(X)$ of sets is said to be a _semialgebra_ over $X$ if
+ $nothing, X in cal(S)$
+ $S, T in cal(S)$ implies $S sect T in cal(S)$
+ $S, T in cal(S)$ implies $S - T$ is a finite disjoint union of sets in $cal(S)$
]
#definition("algebra")[A collection $cal(A) subset.eq cal(P)(X)$ is an _algebra_ over $X$ if
+ $X in cal(A)$
+ $S, T in cal(A)$ implies $S \\ T in cal(A)$
]
#prop[A collection $cal(A) subset.eq cal(P)(X)$ is an algebra if and only if it satisfies
+ $X in cal(A)$
+ $S in cal(A)$ implies $S^c in cal(A)$
+ $S, T in cal(A)$ implies $S sect T in cal(A)$]
#proof[
  - $==>$ Suppose $cal(A)$ is an algebra over $X$. Then note that $X in cal(A)$. Furthermore, $S in cal(A)$ implies $S^c = X \\ S in cal(A)$. Finally, note that for $S, T in cal(A)$, we have $S sect T = S \\ T^c in cal(A).$
  - $<==$ Suppose $cal(A) subset.eq cal(P)(X)$ satisfies the given properties. So, $X in cal(A)$. Furthermore, for $S, T in cal(A)$ note that $S \\ T = S sect T^c in cal(A)$. Thus, $cal(A)$ is an algebra over $X$.
]
#corollary[An algebra over $X$ is also a semialgebra over $X$.]
#definition($sigma"-algebra"$)[A collection $Sigma subset.eq cal(P)(X)$ is a $sigma$-algebra over $X$ if
+ $Sigma$ is an algebra over $X$
+ ${F_i}_(i in NN) subset.eq Sigma$ implies $union_(i in NN) F_i in Sigma$]
#lemma[If $cal(F)_i$ for $i in I$ are $sigma$-algebras then $sect_(i in I) cal(F)_i$ is also a $sigma$-algebra.]
#definition[Let $X$ be a set and $cal(F)$ be a collection of subsets of $X$. Then, the _$sigma$-algebra generated by $cal(F)$_ is the intersection of all $sigma$-algebras containing $cal(F)$.]
#remark[The collection of $sigma$-algebras containing $cal(F)$ is non-empty since $cal(P)(X)$ is a $sigma$-algebra containing it. Thus, this is a well-defined concept.]

== Example: Borel Algebra
#definition[The _Borel $sigma$-algebra_ on a topological space $X$ is the $sigma$-algebra generated by the open sets of $X$.]

= Carathéodory's Extension Theorem
= Lebesgue Measure
https://github.com/mem-courses/linear-algebra
https://raw.githubusercontent.com/mem-courses/linear-algebra/main/functions.typ
typst
#let mem_equations(..args) = { let arr = () for line in args.pos() { let cur = "" let isFirst = true for (i, v) in line.enumerate() { if i + 1 == line.len() { cur += "=" if v >= 0 { cur += str(v) } else { cur += math.minus cur += str(-v) } } else { if v != 0 { if not isFirst and v >= 0 { cur += math.plus } isFirst = false if v >= 0 { if (v != 1) { cur += str(v) } } else { cur += math.minus if (v != -1) { cur += str(-v) } } cur += math.attach("x", br: str(i + 1)) } } } arr.push(math.display(cur)) } return math.display(math.cases(..arr)) }
https://github.com/Mojashi/ppl2024
https://raw.githubusercontent.com/Mojashi/ppl2024/main/ppl.typ
typst
#import "@preview/diagraph:0.2.1": * #import "poster.typ": * // #show regex("[\p{scx:Han}\p{scx:Hira}\p{scx:Kana}]"): set text(font: "Noto Sans CJK JP") // 漢字かなカナのみ指定(ゴシック体=サンセリフ体) #show: poster.with( size: "18x24", title: "Postの対応問題に対する様々なアプローチ", authors: "大森章裕, 南出靖彦", departments: "東京工業大学情報理工学院", univ_logo: "./images/logo.png", // Use the following to override the default settings // for the poster header. num_columns: "2", univ_logo_scale: "100", title_font_size: "34", authors_font_size: "20", univ_logo_column_size: "4", title_column_size: "10", footer_url_font_size: "15", footer_text_font_size: "24", ) #let pcp_tile(upper, lower) = { box(stack( dir: ttb, [#set align(center) #block(stroke: black, width: 1.5cm, inset: 0.2cm, [#upper])], [#set align(center) #box(stroke: black, width: 1.5cm, inset: 0.2cm, [#lower])], ), baseline: 15pt) } #let section( title, number, content, stroke_color: rgb("#d8dddd"), font_color: rgb("#0053d6"), ) = { block( width: 100%, stroke: (paint: stroke_color, thickness: 3pt), stack(dir: ttb, [ #set align(left) #block(inset: 0.5cm, width: 100%, fill: stroke_color, [#text( fill: font_color, font: "Hiragino Kaku Gothic ProN", weight: "bold", size: 20pt, [ #if number != none [#box(text(number, size: 0.9em), baseline: 2pt, stroke: font_color, inset: 2pt)] #title], )]) ], [ #set align(left) #block(inset: 0.5cm, [#content]) ]), ) } #let sub_title(title, fill: rgb("#309930")) = { text( font: "Hiragino Kaku Gothic ProN", weight: "bold", fill: fill, size: 18pt, [#title], ) } #let indent = { h(1em) } #section( "Postの対応問題とは", "A", [ #indent「上下に文字列が書いてある #text("s", fill: red, weight: "bold") 種類のタイルを好きな枚数好きな順番で並べて, 上下で同じ文字列を作れるか」という問題で, 決定不能. - *PCP[#text("s", fill: red, weight: "bold"),#text("w", fill: blue, weight: "bold")]* ... #text("s", fill: red, weight: "bold") 種類のタイル $and$ 各タイルは最長で #text("w", fill: blue, weight: "bold") 文字 #v(20pt) #sub_title([例: PCP[#text("3", fill: red, weight: "bold"),#text("4", fill: blue, weight: "bold")]のインスタンス]) #h(30pt) #box(stack( [#set align(center) #stack([#pcp_tile("100", "1")], [(1)], dir: { ttb }, spacing: 5pt)], [#set align(center) #stack([#pcp_tile("0", "100")], [(2)], dir: { ttb }, spacing: 5pt)], [#set align(center) #stack([#pcp_tile("1", "00")], [(3)], dir: { ttb }, spacing: 5pt)], dir: { ltr }, spacing: 5pt, ), baseline: 25pt) これの解は “1311322” #pcp_tile("100", "1") #pcp_tile("1", "00") #pcp_tile("100", "1") #pcp_tile("100", "1") #pcp_tile("1", "00") #pcp_tile("0", "100") #pcp_tile("0", "100") ], ) #section("貢献", "B", [ #sub_title("これまでの状況") - PCP[2,n]は決定可能である@pcp2n - PCP[3,3]は残り1個@tacklepcp - PCP[3,4]は3170個残っている@pcp2n@RAHN2008115 #sub_title(fill: rgb("#ee3030"), "本研究による更新") - PCP[3,3]は*完全解決* (以前発見されていた75タイルが最長) - PCP[3,4]は残り*9個* ], stroke_color: rgb("#0053d6"), font_color: rgb("#ffffff")) #section( stroke_color: rgb("#ee5050"), font_color: rgb("#ffffff"), "未解決インスタンスの例", none, [ #box(width: 100%, [ #set align(center) #pcp_tile("1111", "110") #pcp_tile("1110", "1") #pcp_tile("1", "1111") #h(50pt) #box(baseline: 30pt, figure( numbering: none, image("./images/qr.png", width: 70pt, height: 70pt), )) ]) ], ) #section( "文字列制約としてのアプローチ", "C", [ #sub_title("問題の定式化") $h,g: Delta^* -> Sigma^*$ ... 上段・下段を表す写像 *例*. $h(\"1\")=\"100\"$ #h(10pt) $h(\"132\")=\"10010\"$ #h(10pt) $g(\"13\")=\"100\"$ #set align(center) #block([*$h(x)=g(x)$ となる $x$ が存在するか?*]) #set align(left) #v(20pt) $h, g$ はトランスデューサとして表現できる. 整数ベクトルを出力するようなトランスデューサ $v$ を考えると, 次の式は決定可能である. これは緩和になっているから, 非存在性を示せる. 
#set align(center) #block( [*$v circle.stroked.tiny h(x)=v circle.stroked.tiny g(x)$ となる$x$が存在するか?*], ) #set align(left) #sub_title([$v$ の作り方]) 単語 $w$ の出現回数を数えるようなトランスデューサ$v_w$を使うことができる. 1. 単語集合 $W=\{\}$ 2. 各単語の出現回数を出力するトランスデューサ $v$ を作る. $v(x) = (|x|_w_1,|x|_w_2,...)$ #h(70pt) ($|x|_w_1$ は$x$における$w$の出現回数) 3. $v circle.stroked.tiny h(x)=v circle.stroked.tiny g(x)$ を解く 4. $x$ が存在し, $h(x) eq.not g(x)$ なら, この $x$ をブロックするような単語を見つけ, $W$に追加. ], ) #set align(left) #section("実験", "F", [ #table( columns: (1fr, auto, auto, auto), inset: 10pt, align: horizon, [], [*ベクトルの一致による緩和*], [*部分文字列パターン*], [*PDR*], [#text([PCP[3,4] #cite(<tacklepcp>)の残り], size: 0.8em)], [3041], [2167], [???], ) ]) #section(text("参考文献", size:0.8em), none, [ #set text(size: 12pt) #show bibliography: none #cite(<pcp2n>, form: "full") #cite(<tacklepcp>, form: "full") #cite(<RAHN2008115>, form: "full") #bibliography("pcp.bib") ]) #section( "遷移システムとしてのアプローチ", "D", [ 左から一枚ずつタイルを並べることを考える. - 状態集合: $Q={"upper", "lower"} #symbol("⨯") Sigma^*$ - 遷移関数: $T: Q -> Q$ ... タイルを一枚並べる操作 - 初期状態: $I = T(epsilon)$ - $italic("Bad") = {epsilon}$ #figure(image("./images/trsystem.png")) // 例えば, #pcp_tile("101", "1") の一枚だけが並んでいるとき, この状態は $("\"upper\"", "\"01\"")$ 次に, // もう一枚並べて #pcp_tile("101", "1") #pcp_tile("1", "0111") とすると, 状態は $("\"lower\"", "\"1\"")$ となる. #v(20pt) この遷移システムにおいて, $T$ で閉じ, $I$ を含み, $epsilon$ を含まない*正規言語* $italic("Inv")$ を発見したい. (正確には, 上下でそれぞれ) この $italic("Inv")$ の発見方法に幾つかの方法を検討した. - 正規言語を述語とするPDRによる方法 - Interpolationが簡単に計算できるわけではないが, 反例のblockingなどは可能 - Predicate Abstraction系は, 具体的な反例の構成が時間的に難しい - SATソルバによる $italic("Inv")$ の発見 - n状態のDFAで定式化すると $n^4$ 個の変数 - 幅優先的な場合分けによる探索 ], ) #section( "幅優先的な場合分けによる探索","E", [ - 幅優先的に解を探索する方法をベースに, 各状態を積極的に正規言語で抽象化して, 閉じることを目指す. - $italic("Bad")$ に到達した場合, refinementはせず, バックトラック. - 各操作の複雑さは状態数に対して線形. スケールしやすい. #figure( raw-render( width: 100%, ```dot digraph G { ratio="fill"; size="7,2!"; rankdir="LR" 13 [label="UP,.*0.*", shape="ellipse", style="filled", fillcolor="white"] 13 -> 7 [style="solid"] 13 -> 13 [style="solid"] 12 [label="UP,1", shape="box", style="filled", fillcolor="white"] 12 -> 7 [style="dotted"] 20 [label="DN,11001100", shape="box", style="filled", fillcolor="white"] 20 -> 25 [style="dotted"] 1 [label="DN,100", shape="box", style="filled", fillcolor="white"] 1 -> 6 [style="solid"] 10 [label="DN,11", shape="box", style="filled", fillcolor="white"] 10 -> 11 [style="solid"] 10 -> 12 [style="solid"] 11 [label="DN,11100", shape="box", style="filled", fillcolor="white"] 11 -> 20 [style="solid"] 9 [label="DN,00", shape="box", style="filled", fillcolor="white"] 9 -> 10 [style="solid"] 0 [label="UP,111", shape="box", style="filled", fillcolor="white"] 0 -> 7 [style="dotted"] 6 [label="DN,001100", shape="box", style="filled", fillcolor="white"] 6 -> 25 [style="dotted"] 25 [label="DN,.*0110.*", shape="ellipse", style="filled", fillcolor="white"] 25 -> 25 [style="solid"] 7 [label="UP,.*1.*", shape="ellipse", style="filled", fillcolor="white"] 7 -> 7 [style="solid"] 7 -> 8 [style="solid"] 7 -> 9 [style="solid"] 8 [label="UP,.*00.*", shape="ellipse", style="filled", fillcolor="white"] 8 -> 13 [style="dotted"] start -> 1 [style="solid"] start [label="", shape="point"] start -> 0 [style="solid"] start [label="", shape="point"] } ```, ), caption: [(3)により閉じた遷移の例 #pcp_tile(1111, 1) #pcp_tile(00, 11) #pcp_tile(1, 1100)], ) #sub_title("抽象化の方針") #set enum(numbering: "(1)") + 正規言語すべて許す + *$.*r.*$* か *具体的な文字列* のみを許す - (1) より優秀. 
(1)は無駄な抽象化が多すぎるか + *部分文字列パターン* か *具体的な文字列* のみを許す - *部分文字列パターン*: $.*1101.*, .*011.*$ - 状態数が増えても, 各操作の計算量は変わらない! #sub_title("(3)の利点") - 包含判定が単純な文字列の包含として判定できるという性質の良さ. - ex.#h(30pt) $".*1101.*" supset 11 #text("1101", fill: red) 00$ #h(30pt) $".*1101.*" supset .*11 #text("1101", fill: red) 10.*$ - 例えば, Badに行く状態の文字列で動的にGeneralized-Suffix-Treeを構築することで - ある状態sが, 他のBadな状態を含んでいるかの判定: $O(|s|)$ - Badスペースから逆向きの探索を大量に行っておいて, 枝刈りできる - ある状態を抽象化する方法が$|s|^2$個しかない - これによって, 抽象化して同じノードを指しやすい(閉じやすい). - 既存のノードの中から, 抽象化にあたるノードを探すのが, $O(|s|^2*c)$ *洞察: 局所的に悪い部分列があって, それが消せないことが多い* ], )
https://github.com/giZoes/justsit-thesis-typst-template
https://raw.githubusercontent.com/giZoes/justsit-thesis-typst-template/main/resources/utils/custom-tablex.typ
typst
MIT License
#import "@preview/tablex:0.0.8": *
https://github.com/HarryLuoo/sp24
https://raw.githubusercontent.com/HarryLuoo/sp24/main/431/hw/1/hw6.typ
typst
#set math.equation(numbering: "(1)") #set page(margin: (x: 1cm, y: 1cm)) = HW 6, <NAME> == ex4.6 recall from lec the normal approximation formula, where $ P (|hat(p)-p|< epsilon)>= 2 Phi(2epsilon sqrt(n))-1 $ <eq1> For this problem, we have $epsilon = 0.02, 2 Phi(2epsilon sqrt(n))-1 >= 0.95.$ We can solve for n when $n = n_min$ with the following: $ 2Phi(2epsilon sqrt(n)) -1= 0.95\ Phi(2*0.02sqrt(n))=1.95/(2) $ accroding to the table of Phi values, we have $ 0.04sqrt(n) = 1.96\ => #rect(inset: 8pt)[ $ display(n = 2401)$ ] $ therefore the smallest size should be 2401 == ex4.8 Rolling a biased die can be modeled as a binomial distribution as either "rolling the number 6" or not. We denote an unknown probability of rolling a 6 as p, and denote the number of getting 6 as X. We write $X~"Bin"(1000000,p)$.\ We want to find a confidence interval for p with 0.999 confidence. Using @eq1, we have $n= 1000000, P|(hat(p) - p| < epsilon) = 0.99$. We need to solve for $epsilon$ at the lower bound, where: $ 2Phi(2epsilon sqrt(n))-1 =0.999\ => Phi(2*1000epsilon)= 0.9995 \ => 2000epsilon approx 3.32\ epsilon = 0.00166 $ Since the number 6 shows up 180000 times when rolling 1000000 times, $ hat(p) = (180000)/(1000000) = 0.18$.\ #rect( inset: 8pt, )[ Therefore, the confidence interval is $ display([hat(p) -epsilon,hat(p) +epsilon]=[0.1783,0.1817])$ ] == ex4.10 We assume that scoring a goal in a certain game is a rare event for the player, we can approximate the r.v. X corresponding to the number of goals scored by the player as a Poisson distribution $ P(X = k) = (e^(-lambda) lambda^(k) )/(k!) $ probability of player scoring 0 goals is $P(X=0) = e^(-lambda) lambda^(0) /0! = e^(-lambda)$\ Thus the probability of scoring at least 1 goal is $1 - e^(-lambda) = 0.5 => lambda = ln(2) approx 0.693$\ We can now calculate the approximation for scroing 3 goals as #rect(inset: 8pt)[ $ display(P(X= 3) =(e^(-lambda) lambda^(k) )/(k!) = (e^(-0.693) 0.693^3)/(3*2*1) = 0.028)$ ] == ex4.34 Assume that accidents happen rarely and independently. We can model the number of accidents happen in a week with a Poisson distribution. We denote the r.v. X as the number of accidents in a week, and we have $X~"Poisson"(lambda)$, where lambda is the average number of accidents in a week, given as $lambda = 3$ Therefore, the probability of *at most* 2 accidents happening next week can be calculated as $ P(X=1) +P(X=2) +P(X=0) = (e^(-3)) ((3^(1) )/(1) + (3^(2) )/(2*1)+(3^(3) )/(3*2*1)) = #rect(inset: 8pt)[ $ display(0.59744)$ ] $ = ex4.46 We can consider the series of trials of "flipping a coin 5 times each day for 30 days" as a binomial distribution, where we either get 5 tails each day or not. We denote the r.v. X as the number of days that we get 5 tails. The probability of having 5 tails in a day is $p=(1)/(2^5) = 1/32$. Therefore, $X~"Bin"(30,1/32)$\ Since $n p(1-p) = 465/512$, the normal approximation is not valid. Poisson approximation is a bettor choice, especially when our $n p = 15 slash 512 $ is small. We approximate the distribution of X with r.v. $Y ~"Poisson"(lambda)"where" lambda = E(X)=n p = 30/32 = 0.9375$. Thus, $ P(X=2) approx P(Y=2)=(e^(-0.9375) 0.9375^(2) )/(2) approx #rect(inset: 8pt)[ $ display( 0.1721)$ ] $ == ex5.2 - (a ) Given the MGF, we can calculate its derivatives as $ M'(t) = -4/3 e^(-4t) +5/6 , M''(t) = 16/3 e^(-4t) 25/6 e^(5t) $ We can get $ E(X ) = M'(0) = 1/2, E(X^2) = M''(0)=19/2 \ => "Var"(X)=E(X^2)-E(X)^2 = 37/4 $ - (b) Given the MGF, we observe that the possible values for r.v. 
are 0, -4, 5; and the corresponding probabilities are 1/2, 1/3,1/6. Thus the discrete probability mass function is $P(X=0) = 1/2, P(X=-4) = 1/3, P(X=5) = 1/6$. From which we can calculate We can calculate $ E(X) = -4*1/3 + 5*1/6 = 1/2; E(X^2) = 1/3*16 + 1/6 * 25 = 19/2\ "Var"(X) = E(X^2)-E(X)^2 = 37/4 $ As calculated in (a). == ex5.18 - (a) Given $X~"Geom"(p)$, the probability mass function is $P(X=k) = p(1-p)^(k-1)$, where k=1,2,3,... $ M_X (t)=E(e^(t X) )=&sum_(k=1)^(infinity)e^(t k) P(X=k) = sum_(k=1)^(infinity)e^(t k)p(1-p)^(k-1) = p e^(t) sum_(k=1)^(infinity)(e^(t) (1-p))^(k-1) \ =& p e^(t) sum_(k=0)^(infinity)(e^(t) (1-p))^(k) $ when $e^(t) (1-p)<1, "i.e." t < ln(1/(1-p)),$ the series converges, and $ #rect(inset: 8pt)[ $ display(M_X (t) = (p e^(t))/(1-e^(t)(1-p) ) )$ ] $ while $t >= ln(1/(1-p))$ , the series diverges, and $ M_X (t) = +infinity $ - (b) $ E(X)=M'_X(0)= (p e^(t) )/((1-e^(t)(1-p) )^2) bracket.r _(t=0) = 1/p.\ E(X^2)=M''_X (0)=(p e^(t) )/((1-e^(t) (1-p))^2)|_(t=0) = 2/p^2 - 1/p \ "Var"(X) = E(X^2)-E(X)^2 = 1/p^2 - 1/p $ == ex 5.20 - (a) by def, we know $ M_X (t) =& integral_(-infinity)^(infinity)e^(t x) * 1/2 e^(-|x|) d x = 1/2integral_(0)^(infinity)e^((-1-t)x)d x + 1/2integral_(-infinity)^(0)e^((t+1)x) dif x \ && "x -> -x" \ =& 1/2integral_(0)^(infinity)e^((-1-t)x)d x + 1/2integral_(0)^(infinity)e^(-(t+1)x) dif x $ Noticing that $integral_(0)^(infinity)e^(-c x) dif x $ converges to $1/c$ iff c>0, we can get $ #rect(inset: 8pt)[ $ display(M_X (t) = cases(display(1/2(1/(1-t))+1/2(1/(1+t)) = 1/(2(1-t^2)) ", when" -1 <t<1 ),infinity "O.W." ))$ ] $ - (b) Taylor expanding $M_X (t)$ at t=0 when $-1 <t<1$, we have $ M_X (t) = 1/(2(1-t^2)) = 1/2 + t^2/2 + t^4/2 + t^6/2 + ... = sum_(k=0)^(infinity) 1/2t^(2k) $ Therefore, #rect(inset: 8pt)[ $ display("odd-numbered moments are all 0, and the 2k-th moment is" 1/2 t^(2k) )$ ]
https://github.com/Ombrelin/adv-java
https://raw.githubusercontent.com/Ombrelin/adv-java/master/Slides/2-tests.typ
typst
#import "@preview/polylux:0.3.1": * #import "@preview/sourcerer:0.2.1": code #import themes.clean: * #show: clean-theme.with( logo: image("images/efrei.jpg"), footer: [<NAME>, EFREI Paris], short-title: [EFREI LSI L3 ALSI62-CTP : Java Avancé], color: rgb("#EB6237") ) #title-slide( title: [Java Avancé], subtitle: [Cours 2 : Tests Unitaires], authors: ([<NAME>]), date: [22 Janvier 2024], ) #slide(title: "Tests automatiques")[ - Du code qui vérifie le fonctionnement de l'application - Détecter le bugs majeurs en 1 clic - Plus safe de changer le code #text(style:"italic", "Meilleure qualité & fiabilité du logiciel") ] #slide(title: "Tests unitaires")[ - Fin niveau de granularité - Teste une *unité* de code * Unité* : méthode, classe, petit groupe de classe ayant un fort lien logique. #text(style:"italic", "Aide a trouver les bug, mais ne permet pas de dire qu'il n'y en a pas") ] #slide(title: "Structure d'un test")[ - *Arrangement* : mis en place - *Action* : exécuter le code que l'on veut tester - *Affirmation* : vérifier que le résultat est bien le bon \ #text(style:"italic","Etant donné <arrangement>, quand <action>, alors <affirmation>") ] #slide(title: "Tests avec JUnit 5")[ - Méthode de tests: Méthode avec l'annotation `@Test` - Suite de tests: Classe contenant des méthodes de test - Une suite de test par *Unité* - Classes rangées sous `src/test/java` ] #slide(title: "Méthode de test")[ #code( lang: "Java", ```java @Test void wordCount_whenMultipleWords_returnsRightCount(){ // Given var input = "bonjour le monde"; // When var result = App.countWords(input); // Then assertEquals(3, result); } ``` ) ] #slide(title: "Cas de test")[ Des points pivots divisent les flux d'exécution potentiels. Points pivots : - `if` - `switch` _Analyser ce qui a du sens d'un PDV fonctionnel_ ] #slide(title: "Conception de tests et couplage")[ Ecrire un test -> sanctuariser une interface (si on refactor on doit refactorer le test aussi) Cela a du sens pour : - Algorithmes - IO - Frontière des unités _La conception des unités est importante_ ] #slide(title: "Pseudo Entités")[ Utilité : remplacer une dépendance pour faciliter les tests. - *Faux (fake)* : implémentation cohérente mais simplifiée - *Simulacre (mock)* : coquille vide avec un comprtement paramètré ] #slide(title: "Faux (fake)")[ #code( lang: "Java", ```java public class FakeRepository implements UserRepository { private final Map<String, User> data; public FakeUsersRepository(Map<String, User> data){ this.data = data; } public User findByUsername(String username){ return this.data.get(username); } } ``` ) ] #slide(title: "Simulacre (mock) avec Mockito")[ #code( lang: "Java", ```java public interface GithubApiClient { List<GithubRepo> getUserRepository(String username); } ``` ) #code( lang: "Java", ```java final var apiClientMock = mock(GithubApiClient.class); when(apiClientMock.getUserRepository(testUsername)) .thenReturn( List.of( new GithubRepo("test repo 1", 32), new GithubRepo("test repo 2", 12) ) ); ``` ) ] #slide(title: "Code testable")[ *Injection de dépendances :* Externaliser des comportement dans des classes (dépendances), et les fournir en paramètre du constructeur. *Inversion de dépendance :* Abstraire les dépendance par un contrat de service (une interface) - Code modulaire - Couplages moins forts _-> Le code est plus facile et test, maintenir, refactorer_ ] #slide(title: "Développement dirigé par les tests (TDD)")[ *TDD (tests driven development) :* Méthode de développment "test-first". 
Cycle RED-GREEN-REFACTOR : - RED: écrire un test qui ne passe pas - GREEN: écrire le code minimal qui suffit à faire passer le test - REFACTOR: retravailler le code écrit pour l'améliorer ] #slide(title: "Pour le TDD ?")[ - *Vitesse:* valider plus vite les idées, passer moins de temps à débugger manuellement - *Confiance:* tests + fiables et pertinents, vraie spécification exécutable. Meilleure sécurité contre la régression - *Qualité :* force la réflexion autour des interfaces, on détecte ainsi les problèmes de conception plus tôt. On est forcé à refactorer plus souvent, donc on produit du meilleur code ]
https://github.com/ludwig-austermann/qcm
https://raw.githubusercontent.com/ludwig-austermann/qcm/main/README.md
markdown
MIT License
# qcm Qualitative Colormaps for Typst Qualitative colormaps contain a fixed number of distinct and easily differentiable colors. They are suitable to use for e.g. categorical data visualization. ## Source The following colormaps are available: - all [colorbrew](https://github.com/axismaps/colorbrewer/) qualitive colormaps, for discovery and as documentation visit [colorbrewer2.org](https://colorbrewer2.org) ## Usage Usage is very simple: ```typst #import "@preview/qcm:0.1.0": colormap #colormap("Set1", 5) ```
https://github.com/ClazyChen/Table-Tennis-Rankings
https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history_CN/2019/MS-09.typ
typst
#set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (1 - 32)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [1], [马龙], [CHN], [3546], [2], [樊振东], [CHN], [3422], [3], [许昕], [CHN], [3374], [4], [林高远], [CHN], [3328], [5], [林昀儒], [TPE], [3293], [6], [王楚钦], [CHN], [3249], [7], [蒂姆 波尔], [GER], [3215], [8], [梁靖崑], [CHN], [3207], [9], [张本智和], [JPN], [3144], [10], [雨果 卡尔德拉诺], [BRA], [3128], [11], [周雨], [CHN], [3120], [12], [迪米特里 奥恰洛夫], [GER], [3080], [13], [于子洋], [CHN], [3052], [14], [方博], [CHN], [3027], [15], [闫安], [CHN], [3025], [16], [张禹珍], [KOR], [3023], [17], [马蒂亚斯 法尔克], [SWE], [3018], [18], [马克斯 弗雷塔斯], [POR], [3014], [19], [乔纳森 格罗斯], [DEN], [3013], [20], [郑荣植], [KOR], [2995], [21], [金光宏畅], [JPN], [2988], [22], [孙闻], [CHN], [2975], [23], [水谷隼], [JPN], [2971], [24], [安宰贤], [KOR], [2967], [25], [帕特里克 弗朗西斯卡], [GER], [2964], [26], [赵胜敏], [KOR], [2956], [27], [赵子豪], [CHN], [2955], [28], [刘丁硕], [CHN], [2952], [29], [弗拉基米尔 萨姆索诺夫], [BLR], [2950], [30], [周启豪], [CHN], [2946], [31], [#text(gray, "丁祥恩")], [KOR], [2927], [32], [徐晨皓], [CHN], [2925], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (33 - 64)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [33], [夸德里 阿鲁纳], [NGR], [2924], [34], [托米斯拉夫 普卡], [CRO], [2914], [35], [#text(gray, "郑培峰")], [CHN], [2901], [36], [克里斯坦 卡尔松], [SWE], [2900], [37], [吉村和弘], [JPN], [2897], [38], [李尚洙], [KOR], [2894], [39], [吉村真晴], [JPN], [2890], [40], [卢文 菲鲁斯], [GER], [2889], [41], [陈建安], [TPE], [2886], [42], [丹羽孝希], [JPN], [2884], [43], [神巧也], [JPN], [2881], [44], [#text(gray, "马特")], [CHN], [2868], [45], [达科 约奇克], [SLO], [2866], [46], [#text(gray, "大岛祐哉")], [JPN], [2865], [47], [艾曼纽 莱贝松], [FRA], [2865], [48], [西蒙 高兹], [FRA], [2864], [49], [#text(gray, "朱霖峰")], [CHN], [2863], [50], [薛飞], [CHN], [2851], [51], [林钟勋], [KOR], [2849], [52], [HIRANO Yuki], [JPN], [2847], [53], [黄镇廷], [HKG], [2845], [54], [及川瑞基], [JPN], [2842], [55], [贝内迪克特 杜达], [GER], [2841], [56], [PISTEJ Lubomir], [SVK], [2840], [57], [SHIBAEV Alexander], [RUS], [2835], [58], [庄智渊], [TPE], [2831], [59], [利亚姆 皮切福德], [ENG], [2828], [60], [田中佑汰], [JPN], [2827], [61], [吉田雅己], [JPN], [2820], [62], [SKACHKOV Kirill], [RUS], [2817], [63], [安东 卡尔伯格], [SWE], [2815], [64], [森园政崇], [JPN], [2812], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (65 - 96)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [65], [<NAME>], [GER], [2810], [66], [汪洋], [SVK], [2810], [67], [上田仁], [JPN], [2806], [68], [<NAME>], [KOR], [2804], [69], [PERSSON Jon], [SWE], [2803], [70], [KOU Lei], [UKR], [2803], [71], [ZHAI Yujia], [DEN], [2801], [72], [PLETEA Cristian], [ROU], [2800], [73], [GNANASEKARAN Sathiyan], [IND], [2796], [74], [赵大成], [KOR], [2796], [75], [帕纳吉奥迪斯 吉奥尼斯], [GRE], [2796], [76], [王臻], [CAN], [2790], [77], [TAKAKIWA Taku], [JPN], [2787], [78], [宇田幸矢], [JPN], [2785], [79], [塞德里克 纽廷克], [BEL], [2782], [80], [GERELL Par], [SWE], [2782], [81], [特鲁斯 莫雷加德], [SWE], [2778], [82], [雅克布 迪亚斯], [POL], [2778], [83], [卡纳克 贾哈], [USA], [2773], [84], [WEI Shihao], [CHN], [2769], [85], [周恺], [CHN], [2762], [86], [村松雄斗], [JPN], [2761], [87], [巴斯蒂安 斯蒂格], [GER], [2752], [88], [沙拉特 卡马尔 阿昌塔], [IND], [2749], [89], [詹斯 伦德奎斯特], [SWE], [2748], [90], [WANG Zengyi], [POL], [2743], [91], [HWANG Minha], [KOR], [2741], [92], [DRINKHALL Paul], [ENG], [2738], [93], [罗伯特 加尔多斯], [AUT], [2729], [94], [MONTEIRO Joao], [POR], [2727], [95], [安德烈 加奇尼], [CRO], [2723], [96], [户上隼辅], [JPN], [2723], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) 
#figure( caption: "Men's Singles (97 - 128)", table( columns: 4, [排名], [运动员], [国家/地区], [积分], [97], [诺沙迪 阿拉米扬], [IRI], [2721], [98], [松平健太], [JPN], [2718], [99], [ROBLES Alvaro], [ESP], [2716], [100], [#text(gray, "金珉锡")], [KOR], [2708], [101], [徐瑛彬], [CHN], [2708], [102], [NORDBERG Hampus], [SWE], [2706], [103], [邱党], [GER], [2706], [104], [牛冠凯], [CHN], [2705], [105], [<NAME>], [TPE], [2704], [106], [哈米特 德赛], [IND], [2696], [107], [WALKER Samuel], [ENG], [2694], [108], [WU Jiaji], [DOM], [2693], [109], [HABESOHN Daniel], [AUT], [2689], [110], [蒂亚戈 阿波罗尼亚], [POR], [2688], [111], [廖振珽], [TPE], [2688], [112], [GERALDO Joao], [POR], [2687], [113], [MACHI Asuka], [JPN], [2686], [114], [斯特凡 菲格尔], [AUT], [2682], [115], [木造勇人], [JPN], [2680], [116], [ORT Kilian], [GER], [2680], [117], [特里斯坦 弗洛雷], [FRA], [2678], [118], [SIPOS Rares], [ROU], [2672], [119], [LANDRIEU Andrea], [FRA], [2672], [120], [AKKUZU Can], [FRA], [2671], [121], [<NAME>], [JPN], [2670], [122], [<NAME>], [ALG], [2670], [123], [<NAME>], [SLO], [2667], [124], [LIU Yebo], [CHN], [2666], [125], [<NAME>], [FRA], [2664], [126], [#text(gray, "SEO Hyundeok")], [KOR], [2664], [127], [博扬 托基奇], [SLO], [2662], [128], [奥马尔 阿萨尔], [EGY], [2658], ) )
https://github.com/xsro/xsro.github.io
https://raw.githubusercontent.com/xsro/xsro.github.io/zola/typst/Control-for-Integrator-Systems/4PTC.typ
typst
#import "@preview/cetz:0.2.0" #import cetz.plot #import cetz.draw: * #import "lib/lib.typ": op,sig,ode #let mu1(t,T:1,h:1,k1:1,k2:0)={ if t>=T { 0 }else{ k1/calc.pow(T - t,h)+k2 } } #let table_eles(profiles)={ let eles=() for p in profiles{ eles.push($T=#p.T,\ h=#p.h,\ k_1=#p.k1$) let mu0=(t)=>mu1(t,T:p.T,h:p.h,k1:p.k1,k2:p.k2) let pic=cetz.canvas({ plot.plot( size: (2,2), axis-style: "school-book", x-tick-step: 1, y-tick-step: none, { plot.add(domain: (0,2), mu0,style: (stroke: green)) }, y-label:$mu(t)$, x-label:$t$ ) }) eles.push(pic) let rhs=(t,x)=>(-mu0(t)*x) if ("sat" in p){ rhs=(t,x)=>(-sat(mu0(t)*x,p.sat)) } let (xout,dxout)=ode(rhs,4,1,p.step) let odepic=cetz.canvas({ plot.plot( size: (4,2), axis-style: "school-book", x-tick-step: 1, y-tick-step:none, { plot.add(xout,label:$x$) plot.add(dxout,label:$dot(x)$) }, y-label:$x$, x-label:$t$, ) }) eles.push(odepic) if p.h == 1{ eles.push([Prescribed Time\ Stable with $T$]) } else{ eles.push([unstable \ x(t)=#xout.at(-1).at(1) \ $t>=T$]) } } eles } #let profiles1=( (T:2,h:1,k1:1,k2:0,step:0.01), (T:2,h:1,k1:2,k2:0,step:0.01), (T:2,h:1,k1:1/2,k2:0,step:0.01), ) #let main_tvg=figure( table( align: horizon, columns: (auto,auto,auto,auto), ..([system],$mu(t)$,"numerical solution","stability"), ..table_eles(profiles1) ), caption:[ time-varying gain control with prescribed time stability ] ) = Time-varying Gain == Prescribed Time Stabiliztion of Single Integrator Systems by Time-varying Gain Generally prescribed/preassigned/pre-appointed time stability is reached by time-varying gain (time-varying scaling function, time-base generator). Following table gives the basic example, we see that the solution for the first case is the same as $dot(x)=-"sign"(x)$. @songPrescribedtimeControlIts2023 . The system is: $ dot(x)=-mu(t) x , mu(t)=cases( k_1/(T-t)^h quad 0<t<T, 0 quad t>=T), $ with $T> 1$ to be prescribed and $k_1>0,k_2>0,h=1$.\ The analytical solution with $h=1$ can be found easily as: $ x(t)=x(0)((T- t)/T)^(k_1), t in [0,T) \ x(t)=0, t in [T,infinity) $. #main_tvg #pagebreak() == Discussion: Time-varying Gain with different power The analytical solution with $h!=1$ can be found easily as: $ x(t)=x(0)exp(-k_1/(-h+1)(T^(-h+1)-(T-t)^(-h+1)))\ x(t)=x(0)exp(-k_1/(-h+1) T^(-h+1)), t in [T,infinity) $. These system can be nearly stable but we can always observe some error. #let profiles2=( // (T:2,h:2,k1:1/2,k2:0,step:0.01,sat:10), //the first simulation is quite "ill" (T:2,h:-1,k1:1/2,k2:0,step:0.01), (T:2,h:-3,k1:1/2,k2:0,step:0.01), ) #figure( table( align: horizon, columns: (auto,auto,auto,auto), ..([system],$mu(t)$,"numerical solution","stability"), ..table_eles(profiles2) ), caption: [time-varying gain control with similar form without stability ] ) #pagebreak()
https://github.com/andreasKroepelin/lovelace
https://raw.githubusercontent.com/andreasKroepelin/lovelace/main/examples/booktabs-title.typ
typst
MIT License
#import "../lib.typ": * #set page(width: auto, height: auto, margin: 1em) #set text(font: "TeX Gyre Pagella") #show math.equation: set text(font: "TeX Gyre Pagella Math") #pseudocode-list(booktabs: true, title: [My cool title])[ + do something + do something else + *while* still something to do + do even more + *if* not done yet *then* + wait a bit + resume working + *else* + go home + *end* + *end* ]
https://github.com/davidmasp/naturelike
https://raw.githubusercontent.com/davidmasp/naturelike/main/main.typ
typst
#import "naturelike.typ": * #show: naturelike.with( title: lorem(10), article_type: "Preprint", abstract: [ #lorem(150) ], authors: ( ( name: "<NAME>", affiliation: ("Junts","Consell per la Republica") ), ( name: "<NAME>", affiliation: ("Esquerra Republicana de Catalunya (ERC)") ), ( name: "<NAME>", affiliation: ("Junts") ), ( name: "<NAME>", affiliation: ("Esquerra Republicana de Catalunya (ERC)") ), ), bibliography-file: "refs.bib", ) = Introduction #lorem(300) @wagner2019 = Methods #lorem(100) $ a + b = gamma $ #lorem(120) = Results == #lorem(5) #lorem(300) == #lorem(7) #lorem(300)
https://github.com/tiankaima/typst-notes
https://raw.githubusercontent.com/tiankaima/typst-notes/master/2bc0c8-2024_spring_TA/main.typ
typst
#set text( font: ("linux libertine", "Source Han Serif SC", "Source Han Serif"), size: 10pt, ) #show math.equation: set text(11pt) #show math.equation: it => [ #math.display(it) ] #let dcases(..args) = { let dargs = args.pos().map(it => math.display(it)) math.cases(..dargs) } #show image: it => [ #set align(center) #it ] #align(center)[ = 习题课 7 讲义 2024 Spring 数学分析 B2 PB21000030 马天开 ] == 小测答案 === 1 计算 $I = integral.double_D x(1+y e^(x^4 y^6)) dif x dif y$, 其中 $D: y=sin x,x=-pi/2, y=1$ $ I&=integral_(-pi / 2)^(pi / 2) dif x integral_(sin x)^1 x (1+y e^(x^4 y^6)) dif y\ &=integral_(-1)^1 dif y integral_(-pi / 2)^(arcsin y) x(1+y dot e^(x^4 y^6)) dif x\ $ 显然这两个方向上都无法得到初等函数解, 做如下处理 先对 $f_1(x,y) = x$ 在 $D$ 上积分: $ I_1&=integral_(-pi / 2)^(pi / 2) dif x integral_(sin x)^1 x dif y\ &=integral_(-pi / 2)^(pi / 2) x (1-sin(x)) dif x\ &=integral_(-pi / 2)^(pi / 2) x dif x - integral_(-pi / 2)^(pi / 2) x sin(x) dif x\ &=0 - 2 integral_(-pi / 2)^(pi / 2) x sin(x) dif x\ &=-2 $ 对于 $f_2(x,y)=x y dot e^(x^4 y^6)$ 有比较强的对称性, 但是处理起来容易出错, 我们这里处理如下: $ I_2 &= integral_(-pi / 2)^(pi / 2) dif x integral_(sin(x))^1 f_2(x,y) dif y\ &= integral_(pi / 2)^(-pi / 2) dif (-x) integral_(sin(-x))^1 f_2(-x,y) dif y\ &= integral_(-pi / 2)^(pi / 2) dif x integral_1^(sin(x)) f_2(x,y) dif y\ &= 1 / 2 integral_(-pi / 2)^(pi / 2) dif x integral_(sin(x))^(-sin(x)) f_2(x,y) dif y\ &= 1 / 2 integral_(-pi / 2)^(pi / 2) dif x integral_(-sin(x))^(sin(x)) f_2(x,-y) dif (-y)\ &= 1 / 2 integral_(-pi / 2)^(pi / 2) dif x integral_(sin(x))^(-sin(x)) - f_2(x,y) dif (y)\ &= 0 \ $ === 2 $ F(t) = integral_1^t dif y integral_y^t e^(-x^2) dif x $ 计算 $(dif F)/(dif t)$ 在 B1 中我们讲过含参变上限求导的方法, 例如: $ F(x,t) = integral_(x_0)^t f(x,t) dif x\ (dif F) / (dif t) = f(t,t) + integral_(x_0)^t (diff f) / (diff t) dif x $ 对于本题, 只需要应用两遍: $ (dif F) / (dif t) &= dif / (dif t) integral_1^t dif y integral_y^t e^(-x^2) dif x\ &= [integral_y^t e^(-x^2) dif x]_(y=t) + integral_1^t (diff / (diff t) integral_y^t e^(-x^2) dif x) dif y\ &= 0 + integral_1^t e^(-t^2) dif y\ &= e^(-t^2) (t-1) $ === 3 求 $ (x^2+y^2)^2 + z^4 = y $ 内部的体积 考虑球坐标变换: $ dcases( x = r sin theta cos phi, y = r sin theta sin phi, z = r cos theta, ) quad => quad dif x dif y dif z = r^2 sin theta dif r dif theta dif phi $ $ &r^4 sin^4 theta (cos^2 phi + sin^phi)^2 + r^4 cos^4 theta = r sin theta sin phi\ =>&r^3 = (sin theta sin phi)/(sin^4 theta + cos^4 theta) > 0 $ 注意到 $0<theta<pi => sin(theta) > 0$, 只需要 $sin phi > 0 => phi in (0,pi)$ 所以转变为求下面的三重积分: $ integral.triple_V dif x dif y dif z &= integral.triple_(V^') r^2 sin theta dif r dif theta dif phi\ &= integral_0^pi dif phi integral_0^pi dif theta integral_0^R r^2 sin theta dif r\ &= integral_0^pi dif phi integral_0^pi R^3 / 3 dif theta\ &= 1/3 integral_0^pi dif phi integral_0^pi (sin^2 theta sin phi)/(sin^4 theta + cos^4 theta) dif theta\ &= 1/3 (integral_0^pi sin phi dif phi)(integral_0^pi (sin^2 theta)/(sin^4 theta + cos^4 theta) dif theta)\ $ 前面 $integral_0^pi sin phi dif phi = 2$ 容易得到, 后面的处理也不算容易, 我们展开讨论: $ integral_0^pi &= integral_0^(pi/2) + integral_(pi/2)^pi\ &= integral_0^(pi/2) (sin^2 theta)/(sin^4 theta + cos^4 theta) dif theta + integral_(pi/2)^pi (sin^2 theta)/(sin^4 theta + cos^4 theta) dif theta\ &= integral_0^(pi/2) (sin^2 theta)/(sin^4 theta + cos^4 theta) dif theta + integral_0^(pi/2) (cos^2 theta)/(cos^4 theta + sin^4 theta) dif theta\ &= integral_0^(pi/2) 1/(sin^4 theta + cos^4 theta) dif theta $ 接下来是三角换元, 所有都是平方项, 考虑 $tan$ 相关的方向: $ & quad integral 1/(sin^4 theta + cos^4 theta) dif theta\ &=integral (sec^4 theta)/(tan^4 theta + 1) dif 
theta\ &=integral (sec^2 theta (tan^2 theta + 1))/(tan^4 theta + 1) dif theta\ &= integral (t^2 + 1)/(t^4 + 1) dif t\ &= integral (1/t^2 + 1)/(1/t^2 + t^2) dif t\ &= integral (dif (t - 1/t))/((t - 1/t)^2 + 2)\ &= 1/sqrt(2) arctan((t-1/t)/sqrt(2))\ $ 当 $theta = 0 -> pi/2$ 时, $t= 0->oo$, $t-1/t = -oo -> oo$, 反常积分存在, 代入即可: $I = 2/3 dot pi/sqrt(2) = sqrt(2)/3 pi$ === 4 $ D = {x^2 + y^2 <=a }\ f: D->RR^+, quad f in C^1(D), quad f|_(diff D) = 0\ $ 证明: $ abs(integral.double_D f(x,y) dif x dif y) <= 1/3 pi a^3 max_((x,y) in D) sqrt(((diff f)/(diff x))^2 + ((diff f)/(diff y))^2 ) $ 我们把问题转化为极坐标系下面的问题: $ & (diff f)/(diff r) = (diff f)/(diff x) (diff x)/(diff r) + (diff f)/(diff y) (diff y)/(diff r) = cos theta (diff f)/(diff x) + sin theta (diff f)/(diff y)\ => & abs((diff f)/(diff r)) <= sqrt(((diff f)/(diff x))^2 + ((diff f)/(diff y))^2)\ & quad quad space.thin <= max_((x,y) in D) sqrt(((diff f)/(diff x))^2 + ((diff f)/(diff y))^2) = M\ $ 对于每个 $f(r,theta)$, 考虑到边界上函数值已知, 我们可以通过单变量的中值定理给出它的一个估计: $ &k_theta (r) = f(r, theta) \ &k_theta (a) - k_theta (r) = (dif k_theta)/(dif r)|_(r = xi) (a-r) quad xi in (a,r) \ =>&k_theta(r) = - (dif k_theta)/(dif r)|_(r = xi) (a-r)\ & quad quad space.thin <= M (a-r)\ $ 因此 $ abs(integral.double_D f(x,y) dif x dif y) &<= integral.double_D abs(f(x,y)) dif x dif y\ &<= M integral.double_D (a-r) dot r dif r dif theta\ &= M integral_0^(2pi) dif theta integral_0^a (a-r) r dif r\ &= M dot 2pi dot (1/2 a^3 - 1/3 a^3)\ & = 1/3 pi a^3 quad qed $ #pagebreak() == 作业答案 #text(weight: "bold")[ - 4.1 P125 7(3)(4)(5) - 4.3 P125 9 10(3)(4) 11(1)(3) 13 18 19 20 21 - 4.8 P156 2(1)(2)(3)(6)(7)(8) - 4.10 P156 1(2)(3)(5)(6) 3 5 6 7 - 4.12 P166 1(1)(4)(5) 2(2)(5)(8) 3(2)(3) 4 6 7 ] === 7(3) $ &space.quad f(x,y) = e^(2x) (x+2y+y^2) \ &dcases( (diff f)/(diff x) &= e^(2x)(2x+4y+2y^2+1), (diff f)/(diff y) &= e^(2x)(2+2y), (diff^2 f)/(diff x^2) &= e^(2x)(4x+8y+4y^2+4), (diff^2 f)/(diff y^2) &= e^(2x) dot 2, (diff^2 f)/(diff x diff y) &= e^(2x)(4+4y), )\ &dcases( (diff f)/(diff x) &= 0 , (diff f)/(diff y) &= 0 , )space.quad => space.quad dcases( x = 1/2 , y = -1 , )\ &dcases( A = 2e^1 , B = 0 , C = 2e^1 , ) space.quad => space.quad dcases( Delta = A C - B^2 = 4e^2 > 0, A > 0 ) $ 正定, 极小值 $f(1/2, -1) = -e/2$ #image("imgs/1.png", width: 50%) #pagebreak() === 7(4) $ (x^2+y^2)^2 = a^2(x^2-y^2) \ $ $ &=> [4x(x^2+y^2)-2a^2x]dif x +[4y(x^2+y^2)+2a^2y]dif y = 0 \ &=> (dif y) / (dif x) = -(4x(x^2+y^2)-2a^2x) / (4y(x^2+y^2)+2a^2y) \ $ $ (dif y) / (dif x) = 0 space.quad => space.quad x = 0 "or" x^2+y^2=1 / 2a^2 \ $ 考虑 $x=0 => (x,y) = (0,0)$ 此处不可微, 舍去 $ dcases( x^2+y^2=1/2a^2 , x^2-y^2=1/4a^2 , ) space.quad => space.quad y^2 = 1 / 8 $ 接下来可以用更一般的做法判断是否是极大值/极小值, 我们这里推荐一种更加初等但是高效的做法: 考虑关于 $x^2$ 的二次方程: $ x^4+(2y^2-a^2)x^2+(y^4+a^2y^2) = 0 $ 其中 $Delta=(4y^4-4a^2y^2+a^4)-4(a^2y^2+y^4)=-8a^2y^2+a^4>=0$ 因此 $y^2<=1/8a^2$, 两个$ y_1=-sqrt(2)/4a, space.quad y_2=sqrt(2)/4a $分别为极小值、极大值(考虑到在其附近光滑.) #image("imgs/2.png", width: 50%) 这个例子上学期将隐函数、参数方程求导的时候提到过: 有些参数方程的形式能给出 $(dif y)/(dif x)$ 的值, 但我们认为隐函数在 $(0,0)$ 附近不存在. 
#pagebreak() === 7(5) $ &x^2+y^2+z^2-2x+2y-4z-10=0 \ => & (2x-2)dif x +(2y+2)dif y+(2z-4)dif z=0 \ => & (diff z) / (diff x) = -(x-1) / (z-2) space.quad (diff z) / (diff y) = -(y+1) / (z-2) \ => & (diff^2 z) / (diff x^2) = -((x-1)^2+(z-2)^2) / (z-2)^3 space.quad (diff^2 z) / (diff y^2) = -((z-2)^2+(y+1)^2) / ( z-2 )^3 space.quad (diff^2 z) / (diff x diff y) = 0 \ $ $ ((diff z) / (diff x), (diff z) / (diff y)) = (0, 0) => (x, y) = (1, -1) \ $ 此时对应 $z_1 = 6, z_2=-2$, 我们分别在两个点的局部判断这是极大值/极小值, 即 $(1,-1,6)$ 和 $(1,-1,-2)$ - $z_1 = 6$ 时 $Delta=A C-B^2>0, A<0$, 正定, 极大值 - $z_2 = -2$ 时 $Delta=A C-B^2>0, A>0$, 负定, 极小值 #image("imgs/3.png", width: 50%) #pagebreak() === 9 #image("imgs/4.png", width: 50%) #pagebreak() === 10(3) $ u(x,y,z)&=sin x sin y sin z\ U(x,y,z,phi)&=sin x sin y sin z -phi dot.c(x+y+z-pi/2)\ &dcases( (diff U)/(diff x) &= cos x sin y sin z -phi=0 , (diff U)/(diff y) &= sin x cos y sin z -phi=0 , (diff U)/(diff z) &= sin x sin y cos z -phi=0 , &x+y+z=pi/2 , ) $ 可以解出 $ P_0=(pi / 6,pi / 6,pi / 6) quad P_1=(pi / 2,0,0)\ P_2=(0,pi / 2,0) quad P_3=(0,0,pi / 2)\ $ 分别代入 $u$ 可以得到 $u(P_0)=1/8, u(P_1)=0, u(P_2)=0, u(P_3)=0$ 极大值极小值的判断不能直接从拉格朗日乘子法中得到, 应该通过如下方法判断: - *降为二元函数* $ u(x,y)=u(x,y,z)=u(x,y,pi / 2-x-y)=sin x sin y cos(x+y)\ $ 接下来继续处理 $Delta=(diff^2 u)/(diff x^2) dot.c (diff^2 u)/(diff y^2)-((diff^2 u)/(diff x diff y))^2$, 按照一般的二元函数的处理 (#strike[也许可以从头开始就按照这样的做法]), 最终可以获得结果. #image("./imgs/6.png", width: 50%) #box[ 可以做如下处理: $ u&=sin x sin y sin z \ dif u&=cos x sin y sin z dif x + sin x cos y sin z dif y + sin x sin y cos z dif z\ $ ] #align(center)[ #rect[ $ &x+y+z=pi / 2 space.quad=>space.quad dif x + dif y + dif z = 0\ $ ] ] $ => & dif u = (cos x sin y sin z - sin x sin y cos z)dif x + (sin x cos y sin z - sin x sin y cos z)dif y\ => & (diff u) / (diff x)=cos x sin y sin z - sin x sin y cos z=0\ => & (diff u) / (diff y)=sin x cos y sin z - sin x sin y cos z=0\ => & (diff^2 u) / (diff x^2)= -2sin x sin y sin z - 2cos x sin y cos z =-1\ => & (diff^2 u) / (diff y^2) = -2sin x sin y sin z - 2sin x cos y cos z =-1\ => & (diff^2 u) / (diff x diff y) = cos x cos y sin z - sin x cos y cos z - cos x sin y cos z - sin x sin y sin z =-1 / 2\ $ 所以有 $ Delta = A C - B^2 = 3 / 4 >0 space.quad A=-1<0 $ 正定, 最大值 $ u(pi/6,pi/6,pi/6)=1/8 $ 下图是 $u(x,y,z)=sin x sin y sin z $ 的热力图,通过颜色来反应无法画出的另一维度的信息. #image("imgs/5.png", width: 50%) - *紧集最值定理*: - 考虑 $f: D -> RR, D subset RR^d$ $D$ 在 $RR^d$ 上 compact, $f$ 在 $D$ 上连续, 则 $f$ 在 $D$ 上有最大值和最小值. - 讨论在 $diff D$ 上 $f$ 的取值, 在这个问题中 $f|_(diff D) eq.triple 0$ - 在 $D^o$ 中, 最值必定在驻点中取得 $=> f(D^o)= [0,1\/8]$ - 那么 $f(D) = [0, 1\/8] quad forall x in D$, 在 $P_0$ 的一个局部$U(P_0) subset D$ 内, $forall x in P_0, f(x) <= 1\/8 = f(P_0)$, 因此是极大值 #pagebreak() === 10(4) $ &u=x y z space.quad x+y+z=0 space.quad x^2+y^2+z^2=1\ $ 按照上文的处理思路, $ &dif u = y z dif x + x z dif y + x y dif z\ $ #align(center)[ #rect[ $ dif x + dif y + dif z = 0\ x dif x + y dif y + z dif z = 0\ $ ] ] 在后两个方程中, 解出 $dif y, dif z$ 关于 $dif x$的表达式为: $ dif y = (z-x) / (y-z) dif x quad dif z = (x-y) / (y-z) dif x $ 接下来可以类似处理得到 $(dif u)/(dif x) , (dif^2 u)/(dif x^2)$, 按照一元函数的极值点处理即可. 
#rect(width: 100%, inset: 1em)[ 这样的思路主要源自这样的几何直观: 一个过原点的平面截一个单位球总会得到一个闭曲线: #image("imgs/9.png", width: 50%) ] #pagebreak() 按照一般的 Lagrange 乘子法: $ &U(x,y,z, lambda, mu) = x y z - lambda(x+y+z) - mu(x^2+y^2+z^2-1) \ &dcases( (diff U)/(diff x) &= y z - lambda - 2mu x = 0 , (diff U)/(diff y) &= x z - lambda - 2mu y = 0 , (diff U)/(diff z) &= x y - lambda - 2mu z = 0 , &x+y+z = 0 , &x^2+y^2+z^2 = 1 , )\ $ 对上面式子的处理较为依赖对称性, 我们提供一种比较简便、标准的做法: 首先考虑 $(diff U)/(diff x),(diff U)/(diff y),(diff U)/(diff z)$ 中 $mu x,mu y,mu z$的对称性, 我们将这三项对应加起来: $ &quad (diff U) / (diff x) + (diff U) / (diff y) + (diff U) / (diff z)\ &= x z + y z + x y - 3 lambda - 2 mu (x+y+z)\ &= x z + y z + x y - 3 lambda \ &= 0 $ 前面一项是已知的, 得益于这样的关系: $ (x+y+z)^2&=x^2+y^2+z^2+2(x y+y z+z x)\ &=1+2(x y+y z+z x)=0 \ => & x y+y z+z x = -1 / 2 $ 因此得到 $lambda = - 1/6$, 接下来只需要关心前两项 $(diff U)/(diff x), (diff U)/(diff y)$: $ &dcases( y z + 1/6 - 2 mu x = 0 , x z + 1/6 - 2 mu y = 0 , ) => dcases( y^2 z + 1/6 y - 2 mu x y = 0 , x^2 z + 1/6 x - 2 mu y x = 0 , )\ &=> [z(x+y) + 1 / 6](x-y) = 0 , &=> [(x+y)^2 - 1 / 6](x-y) = 0 , $ 接下来我们分别讨论 $x=y$ 和 $(x+y)^2=1/6$. 其实他们反映的是一种情况的对称. #box(width: 100%)[ - $x=y$ $ x=y => z=-2x\ x^2 + y^2 + z^2 = 6 x^2 = 1 => x = plus.minus sqrt(6) / 6 \ P_1(sqrt(6) / 6,sqrt(6) / 6, -sqrt(6) / 3) quad P_2(-sqrt(6) / 6, -sqrt(6) / 6, sqrt(6) / 3) $ ] #box(width: 100%)[ - $(x+y)^2=1 / 6$ $ (x+y)^2=1 / 6 => z^2 = 1 / 6 \ => x^2 + y^2 = 5 / 6 \ (x-y)^2 = 9 / 6 \ $ $ dcases( (x-y) = plus.minus 3/6sqrt(6) , (x+y) = plus.minus 1/6sqrt(6) , )\ $ $ &P_3(sqrt(6) / 3, -sqrt(6) / 6, -sqrt(6) / 6) quad &P_4(-sqrt(6) / 3, sqrt(6) / 6, sqrt(6) / 6) \ &P_5(sqrt(6) / 6, -sqrt(6) / 3, sqrt(6) / 6) quad &P_6(-sqrt(6) / 6, sqrt(6) / 3, -sqrt(6) / 6) \ $ ] 因此 $ u_max = sqrt(6)/18 quad u_min = -sqrt(6)/18 $ 考虑到讨论的取值范围是光滑的闭曲线, 光滑函数在此上的最大(小)值点也显然是局部的极大(小)值点. #pagebreak() === 11(1) 在做这道题之前也许就应该想到 $z=x^2-y^2$ 的曲面图像: #image("imgs/7.png", width: 50%) 即便没有这样的几何直观, 处理起来也是固定的模式, 先讨论内部的极值点、再讨论边界上的条件极值. $ (diff z) / (diff x) = 2x quad (diff z) / (diff y) = -2y \ (diff^2 z) / (diff x^2) = 2 quad (diff^2 z) / (diff y^2) = -2 quad (diff^2 z) / (diff x diff y) = 0 $ Hessian 矩阵总是负定的, 函数在${(x,y)mid("|") x^2+y^2 <4}$内部不存在极值点. (事实上也说明在任何区域内部都不存在极值点) #rect(fill: red.lighten(70%), width: 100%, inset: 1em)[ 实际上我们有更一般的结论: - *弱极值原理*: 考虑调和函数 $u$($Delta u=0$), 在区域 $overline(Omega)$ 上连续, 则函数的最值总能在边界上取得: $ max_(overline(Omega)) u = max_(partial Omega) u $ - *极值原理*: 非常数调和函数的最值总在边界上取得. 
$ max_(Omega) u = max_(partial Omega) u \ max_(Omega^o) u < max_(partial Omega) u $ ] 回到本题, 我们继续讨论边界上的极值点, 问题转化为一个一般的拉格朗日乘子法问题: $ L(x,y, mu) = x^2-y^2 + mu(x^2+y^2-4) \ dcases( (diff L)/(diff x) = 2x(1+mu) = 0 , (diff L)/(diff y) = -2y(1-mu) = 0 , x^2+y^2 = 4 \ )\ P_1(0,2) quad P_2(0,-2) quad P_3(2,0) quad P_4(-2,0) $ 容易得到 $z_min = -4, z_max = 4$ #pagebreak() === 11(3) $ &z=sin x +sin y -sin(x+y) quad D={(x,y)mid("|")x>=0,y>=0,x+y<=2pi} \ $ 区域内的情况: $ &dcases( (diff z)/(diff x) &= cos x -cos(x+y) = 0 , (diff z)/(diff y) &= cos y -cos(x+y) = 0 , ) \ &P_1(0,0) quad P_2(2pi,0) quad P_3(0,2pi) quad P_4(2 / 3 pi, 2 / 3 pi) $ $ => z_min = 0, z_max=3 / 2 sqrt(3) quad forall x in D^o $ 边界上的情况,我们分成三段: $ &l_1: {0}times[0,2pi] \ &l_2: [0,2pi]times{0} \ &l_3: {(x,y)mid("|")x+y=2pi,x>=0,y>=0} \ $ 其中 $l_1, l_2$ 是对称的,我们只考虑 $l_1$: $ &z = sin 0 + sin y - sin(0+y) eq.triple 0 quad &(x,y) in l_1 \ &z = sin x + sin(2pi-x) - sin(2pi) eq.triple 0 quad &(x,y) in l_3 \ $ #image("imgs/11.png", width: 50%) #pagebreak() === 13 $ dcases( z=x^2+2y^2 , z=6-2x^2-y^2 , ) quad => quad x^2 + y^2 = 2 $ #image("imgs/10.png", width: 50%) 问题转为考虑 $z=x^2+2y^2$ 在 $x^2+y^2=2$ 上的最值问题, 应用拉格朗日乘子法: $ L(x,y, mu) = x^2+2y^2 + mu(x^2+y^2-2) \ dcases( (diff L)/(diff x) = 2x(1+mu) = 0 , (diff L)/(diff y) = 4y(1+mu) = 0 , x^2+y^2 = 2 , )\ P_1(0,sqrt(2)) quad P_2(0,-sqrt(2)) quad P_3(sqrt(2),0) quad P_4(-sqrt(2),0) $ 因此 $z_min = 2, z_max = 4$ #pagebreak() === 14 #rect(width: 100%, inset: 1em)[ #columns(2)[ $ f(x,y)=3x^2y-x^4-2y^2\ dcases( (diff f)/(diff x)|_((0,0)) = 6x y-4x^3 = 0 , (diff f)/(diff y)|_((0,0)) = 3x^2-4y = 0 , (diff^2 f)/(diff x^2)|_((0,0)) = 6y-12x^2 = 0 , (diff^2 f)/(diff y^2)|_((0,0)) = -4 , (diff^2 f)/(diff x diff y)|_((0,0)) = 6x = 0 , )\ $ #colbreak() $Delta = A C-B^2=0$ 欠定, 我们继续往下算: $ dcases( (diff^3 f)/(diff x^3)|_((0,0)) = -24x = 0 , (diff^3 f)/(diff x^2 diff y)|_((0,0)) = 6 , (diff^3 f)/(diff x diff y^2)|_((0,0)) = 0 , (diff^3 f)/(diff y^3)|_((0,0)) = 0 = 0 , ) $ ] ] 因此 $f(x,y)$的三阶 Taylor 展开为: $ f(x,y) = -2y^2+3x^2y + o(rho^3) $ #align(center)[ #image("imgs/12.png", width: 70%) (蓝色是级数逼近) ] 显然 $f(0,0)=0$ 不是极值点. 我们考虑任意过 $(0,0)$ 的直线 $y=k x$, 有: $ f(x) = 3k x^3 - x^4 - 2k^2 x^2\ dcases( (dif f)/(dif x)|_((0,0)) = 9k x^2 - 4x^3 - 4k^2 x = 0 , (dif^2 f)/(dif x^2)|_((0,0)) = 18k x - 12x^2 - 4k^2 = - 4k^2<0 , ) $ 因此在每条过原点的直线上 $(0,0)$ 都是极大值点. #pagebreak() === 18 $ f(x,y) = (x - x_1)^2 + (y - y_1)^2 + dots.c + (x - x_n)^2 + (y - y_n)^2\ dcases( (diff f)/(diff x) = 2(x - x_1) + dots.c + 2(x - x_n) = 0 , (diff f)/(diff y) = 2(y - y_1) + dots.c + 2(y - y_n) = 0 , (diff^2 f)/(diff x^2) = 2n , (diff^2 f)/(diff y^2) = 2n , (diff^2 f)/(diff x diff y) = 0 , )\ $ 因此解出 $ => x = (x_1 + dots.c + x_n) / n quad y = (y_1 + dots.c + y_n) / n\ $ $ Delta = A C - B^2 = 4n^2 - 0 = 4n^2 > 0 quad A = 2n > 0 $ 因此是极小值点. #pagebreak() === 19 $ f(x,y, mu) = x y z - mu(x^2/a^2 + y^2/b^2 + z^2/c^2 - 1)\ dcases( (diff f)/(diff x) = y z - 2 mu x / a^2 = 0, (diff f)/(diff y) = x z - 2 mu y / b^2 = 0, (diff f)/(diff z) = x y - 2 mu z / c^2 = 0, x^2/a^2 + y^2/b^2 + z^2/c^2 = 1, )\ (x,y,z) = cal(k) (a,b,c) quad=>quad cal(k)=sqrt(3) / 3 $ 此时 $V = 8x y z = 8/9 sqrt(3) a b c$, 为最大值. 
#image("imgs/13.png", width: 50%) #pagebreak() === 20 $ f(x,y,z,mu)=abs(x+y+2z-9) / (sqrt(1^2+1^2+2^2)) - mu(x^2/4+y^2+z^2-1)\ dcases( (diff f)/(diff x) = -1/(sqrt(6)) - mu x / 2 = 0 , (diff f)/(diff y) = -1/(sqrt(6)) - 2mu y = 0 , (diff f)/(diff z) = -2/(sqrt(6)) - 2mu z = 0 , x^2/4 + y^2 + z^2 = 1 , )\ P_1(4 / 3,1 / 3,2 / 3) quad P_2(-4 / 3,-1 / 3,-2 / 3) $ #image("imgs/14.png", width: 60%) #pagebreak() === 21 $ F(x,y,z)=sqrt(x)-sqrt(y)-sqrt(z)-sqrt(a) eq.triple 0\ dif F = 1 / (2sqrt(x))dif x - 1 / (2sqrt(y))dif y - 1 / (2sqrt(z))dif z = 0\ 1 / (2sqrt(x_0))(x-x_0) - 1 / (2sqrt(y_0))(y-y_0) - 1 / (2sqrt(z_0))(z-z_0) = 0\ x / (2sqrt(x_0)) - y / (2 sqrt(y_0)) - z / (2 sqrt(z_0)) = sqrt(x_0) / 2 - sqrt(y_0) / 2 - sqrt(z_0) / 2\ x / (sqrt(x_0)) - y / (sqrt(y_0)) - z / (sqrt(z_0)) = sqrt(x_0) - sqrt(y_0) - sqrt(z_0)\ $ 所有截距之和: $ l_1+l_2+l_3=(sqrt(x_0)-sqrt(y_0)-sqrt(z_0))^2 = a $ 四面体体积: $ 1 / 6 l_1 dot l_2 dot l_3 = 1 / 6 a^(3 / 2) dot sqrt(x_0) dot sqrt(y_0) dot sqrt(z_0) $ $ f( x_0, y_0, z_0, mu ) = a^(3 / 2) dot sqrt(x_0) dot sqrt(y_0) dot sqrt(z_0) - mu(sqrt(x_0) - sqrt(y_0) - sqrt(z_0) - sqrt(a)) \ f(l,m,n,mu) = a^(3 / 2) dot l m n - mu(l - m - n - sqrt(a))\ dcases( (diff f)/(diff l) = a^(3/2) dot m n - mu = 0 , (diff f)/(diff m) = a^(3/2) dot l n - mu = 0 , (diff f)/(diff n) = a^(3/2) dot l m - mu = 0 , l m n = a , )\ => quad l = m = n = 1 / 3 sqrt(a) $ 所以最大四面体面积为 $a^3/162$, 截面: $x-y-z+1/9 a=0$ #pagebreak() === P156 1 - (2) $ integral_0^2 dif x integral_(2x)^(6-x)f(x,y)dif y = integral_0^4 dif y integral_0^(y / 2) f( x,y )dif x + integral_4^6 dif y integral_0^(6-y) f(x,y)dif x\ $ - (3) $ integral_0^a dif y integral_(a-sqrt(a^2-y^2))^(a+sqrt(a^2-y^2))f( x,y )dif x = integral_0^(2a) dif x integral_(0)^(sqrt(a^2-(x-a)^2))f(x,y)dif y\ $ - (5) $ integral_0^1 dif x integral_0^x f(x,y) dif y+integral_1^2 dif x+integral_0^(2-x) f( x,y ) dif y = integral_0^2 dif y integral_y^(2-y) f(x,y) dif x\ $ - (6) $ integral_0^1dif y integral_(1 / 2)^1f(x,y)dif x+integral_1^2dif y integral_(1 / 2)^(1 / y)f( x,y )dif x = integral_(1 / 2)^1dif x integral_(0)^(1 / x)f(x,y)dif y\ $ #align(center)[ #box(width: 95%)[ #table( columns: (auto, auto, auto), align: bottom, stroke: none, [ #image("imgs/15.png") (2) ], [ #image("imgs/16.png") (3) ], [ #image("imgs/17.png") (5) #image("imgs/18.png") (6) ], ) ] ] #pagebreak() === 2 - (1) $ &quad integral.double_D y / (1+x^2+y^2)^(3 / 2) dif x dif y quad D=[0,1]times[0,1]\ &=integral_0^1 dif x integral_0^1 y / (1+x^2+y^2)^(3 / 2) dif y\ &=integral_0^1 dif x integral_0^1 (1 / 2 dif y^2) / (1+x^2+y^2)^(3 / 2)\ &=integral_0^1 [-(1+x^2+y^2)^(-1 / 2)]_0^1 dif x\ &=integral_0^1 -1 / sqrt(2+x^2) + 1 / sqrt(1+x^2) dif x\ &=[-ln(x+sqrt(2+x^2)) + ln(x+sqrt(1+x^2))]_0^1\ &=-ln(1+sqrt(3)) + ln(1+sqrt(2)) + 1 / 2ln 2\ &=quad ln(-1+sqrt(3))+ln(1+sqrt(2))-1 / 2ln 2 $ - (2) $ &quad integral.double_D sin(x+y) dif x dif y quad D=[0,pi]times[0,pi]\ &=integral_0^pi dif x integral_0^pi sin(x+y) dif y\ &=integral_0^pi (-cos(pi+x)+cos x) dif x\ &=-sin 2pi + sin pi + sin pi - sin 0\ &=0 $ - (3) $ &quad integral.double_D cos(x+y) dif x dif y\ &=integral_0^pi dif x integral_x^pi cos(x+y) dif y\ &=-integral_0^pi sin(2x)+sin x dif x\ &=-2 $ #align(center)[ #rect[ $ integral_0^pi sin(2x) dif x = 1 / 2 integral_0^(2pi) sin x dif x = 0 $ ] ] #pagebreak() - (6) $ &quad integral.double_D (sin y) / y dif x dif y\ &= integral_0^1 dif y integral_(y^2)^y (sin y) / y dif x\ &= integral_0^1 (1-y)sin y dif y\ &= 1-sin 1 $ - (7) $ &quad integral.double_D x^2 / y^2 dif x dif y\ &= integral_1^2dif x integral_(1 / 
x)^x x^2 / y^2 dif y\ &= integral_1^2 [-x^2 / y]_(y=1 / x)^(y=x) dif x\ &= integral_1^2 (-x+x^3) dif x\ &= -2^2 / 2 + 2^4 / 4 + 1 / 2 - 1 / 4\ &= 9 / 4 $ - (8) $ &quad integral.double_D abs(cos(x+y)) dif x dif y\ &= integral_0^(pi / 4) dif x integral_0^x dif y + integral_(pi / 4)^(pi / 2) dif x integral_0^(pi / 2-x) dif y - integral_(pi / 4)^(pi / 2) dif x integral_(pi / 2 - x)^(x) dif y \ &= pi / 2 - 1 $ #pagebreak() === 3 - (1) $ &quad integral.double_D (x^2+y^2) dif x dif y quad D = [-1,1]times [-1,1]\ &= 4 integral.double_(D^') (x^2+y^2) dif x dif y quad D^' = [0,1]times [0,1]\ &= 8 integral.double_(D^') x^2 dif x dif y\ &= 8 integral_0^1 x^2 dif x\ &= 8 / 3 $ - (2) #image("imgs/8.png", width: 50%) $ integral.double_D sin x sin y dif x dif y &= integral sin x dot dif x integral_(y_2(x))^(y_1(x)) sin y dif y\ (forall x, quad y_1(x) + y_2(x) = 0) &=> integral 0 dot sin x dif x\ &=0 $ #pagebreak() === 5 $ integral_0^a dif x integral_0^x f(x)f(y) dif y &= integral_0^a dif y integral_0^x f(y)f(x) dif x \ => integral_0^a dif x integral_0^x f(x)f(y) dif y &= 1 / 2 integral.double_D f(x)f(y) dif x dif y quad &D = [0,a]times[ 0,a ]\ &= 1 / 2 (integral_0^a f(x) dif x)^2 quad qed \ integral_0^a dif x integral_0^x f(y) dif y &= integral_0^a dif y integral_y^a f(y) dif x\ &=integral_0^a (a-y) f(y) dif y quad qed $ // #pagebreak() === 6 $ integral.double_D (diff^2 f) / (diff x diff y) dif x dif y & = integral_a^b dif x integral_c^d (diff^2 f) / (diff x diff y) dif y\ &= integral_a^b ((diff f) / (diff x)(x,d) - (diff f) / (diff x)(x,c)) dif x\ &= f(b,d) - f(a,d) - f(b,c) + f(a,c) quad qed $ // #pagebreak() === 7 $ 1 / (2pi r_0) integral.double_(r<r_0) f( r,theta ) r dif r dif theta = 1 / (2pi r_0) integral_0^(2pi) dif theta integral_0^r_0 f(r,theta) r dif r\ \ forall epsilon>0, exists r_0 = r_0(theta) quad s.t. 
quad r<r_0 => |f(r,theta)-f(0,0)|<epsilon\ \ => 1 / (2pi r_0) integral_0^(2pi) dif theta integral_0^r_0 f( r,theta ) r dif r < 1 / (2pi r_0) integral_0^(2pi) dif theta integral_0^r_0 (f(0, 0) + epsilon) r dif r = f(0,0) + epsilon\ => 1 / (2pi r_0) integral_0^(2pi) dif theta integral_0^r_0 f( r,theta ) r dif r > 1 / (2pi r_0) integral_0^(2pi) dif theta integral_0^r_0 (f(0, 0) - epsilon) r dif r = f(0,0) - epsilon\ \ \ => ( 1 / (2pi r_0) integral.double_(r<r_0) f(r,theta) r dif r dif theta - f(0,0) ) < epsilon quad forall epsilon, exists r, forall r_0<r quad qed $ #pagebreak() === P166 1 - (1) $ &quad integral_0^R dif x integral_0^(sqrt(R^2-x^2)) ln(1+x^2+y^2) dif y\ &= integral_0^(pi / 2) dif theta integral_0^R ln(1+r^2) r dif r\ &= pi / 2 integral_0^R ln(1+r^2) r dif r\ &= pi / 4 integral_0^R^2 ln(1+t) dif t\ &= pi / 4 dot [(t+1)ln(1+t)-t]_0^R^2\ &= pi / 4 dot [(R^2+1)ln(1+R^2)-R^2] $ - (4) $ &quad integral_0^(1 / (sqrt(2))) dif x integral_x^(sqrt(1-x^2)) x y(x+y) dif y \ &=integral_(pi / 4)^(pi / 2) dif theta integral_0^1 r cos theta dot r sin theta dot (r cos theta + r sin theta) r dif r\ &=1 / 5 integral_(pi / 4)^(pi / 2) (cos^2 theta sin theta + cos theta sin^2 theta) dif theta\ &=1 / 15 $ - (5) $ &quad integral_0^(R/(sqrt(1+R^2))) dif x integral_0^(R x)(1+(y^2)/(x^2)) dif y + integral_(R/sqrt(1+R^2))^(R) dif x integral_0^(sqrt(R^2-x^2))(1+(y^2)/(x^2)) dif y\ &=integral_0^(arctan R) dif theta integral_0^R (r^2)/(r^2 cos^2 theta) r dif r\ &=integral_0^(arctan R) dif theta integral_0^R sec^2 theta dot r dif r\ // &=integral_0^(arctan R) dif theta sec^2 theta r^2/2 |_0^R\ &=R^2/2 integral_0^(arctan R) sec^2 theta dif theta\ &=R^2/2 [tan theta]_0^(arctan R)\ // &=1/2 R^2 tan(arctan R)\ &=1/2 R^3 $ #pagebreak() === 2 - (2) $ &quad integral.double_D sqrt(x^2/a^2+y^2/b^2) dif x dif y\ (x=a dot r sin theta, y=b dot r cos theta) & = integral.double_D r dot a b dot r dif r dif theta\ &= a b integral_0^(arctan(a\/b)) dif theta integral_0^2 r^2 dif r\ &= 8 / 3 a b dot arctan(a/b) $ - (5) $ (x y = u, x^2 / y=v) => (x=root(3,u v), y = root(3,u^2/v))\ \ (diff (x,y)) / (diff ( u,v )) = abs(mat( 1/3 u^(-2/3) v^(1/3), 1/3 u^(1/3) v^(-2/3); 2/3 u^(-1/3) v^(-1/3), -1/3 u^(2/3) v^(-4/3) )) = abs( -1/9v^(-1) - 2/9 v^(-1)) = 1 / 3 v^(-1) quad u,v > 0\ \ $ $ integral.double_D x y dif x dif y &= 1 / 3integral.double_D u^1 v^(-1) dif u dif v\ &= 1 / 3integral_a^b dif u integral_c^d u / v dif v\ &= 1 / 3(integral_a^b u dif u)(integral_c^d (dif v) / v)\ &= 1 / 6 (b^2-a^2)(ln d - ln c) $ - (8) $ (x+y = u, y=v) => (x=u-v,y=v)\ (diff (x,y)) / (diff (u,v)) = abs(mat(1,-1;0, 1)) = 1 $ $ integral.double_D sin y / (x+y) dif x dif y &=integral_0^1 dif t integral_0^t sin y / t dif y\ &=integral_0^1 dif t dot (t cos t / t - t cos 0 / t)\ &=integral_0^1 (-t cos 1 + t) dif t\ &=1 / 2 - 1 / 2 cos 1\ &=sin^2 1 / 2 $ #pagebreak() === 3 - (2) 考虑变换 $u=x-y,v=y, (diff (x,y))/(diff (u,v))=1$ 因此在 $x-y$ 下的面积与 $u-v$下相等, 均为 $pi a^2$ - (3) 考虑变换 $u=x+y,v=y/x quad => quad x = u/(1+v), y=(u v)/(1+v)$ $ (diff (x,y)) / (diff (u,v)) = u / (1+v) $ $ integral.double_D dif x dif y &= integral.double_D u / (1+v) dif u dif v\ &=(integral_a^b u dif u)(integral_k^m (dif v) / (1+v))\ &=1 / 2(b^2-a^2)(ln(1+m) - ln (1+ k)) $ // #pagebreak() === 4 证明:$integral.double_(x^2+y^2<=1) e^(x^2+y^2) dif x dif y<= (integral_(-sqrt(pi)/2)^(sqrt(pi)/2)e^(x^2)dif x)^2$ $ "LHS"-"RHS" = integral.double_D - integral.double_(D^') = 4 integral.double_(D_1) - 4 integral.double_(D_2) $ 注意到 $ dcases( forall (x,y) in D_1 quad x^2+y^2 <=1 , forall (x,y) in D_2 quad x^2+y^2 >= 
1 , )\ => forall (x_1,y_1) in D_1, (x_2,y_2) in D_2 quad f(x_1,y_1) <= f(x_2,y_2)\ \ sigma(D_1) = sigma(D_2) \ => integral.double_(D_1) - integral.double_(D_2) <=0 quad=>quad "LHS"-"RHS" <= 0 $ #image("imgs/19.png", width: 40%) #rect(width: 100%)[ #strike[骗分做法:] $ integral.double_(x^2+y^2<=1) e^(x^2+y^2) dif x dif y &= integral_0^(2pi) dif theta integral_0^1 r e^r dif r\ &= 2pi integral_0^1 r e^(r^2) dif r\ &= 2pi [1 / 2 e^(r^2)]_0^1\ &= pi(e-1) $ $ (integral_(-sqrt(pi) / 2)^(sqrt(pi) / 2)e^(x^2)dif x)^2 &= (2 integral_0^(sqrt(pi) / 2)e^(x^2)dif x)^2\ &>= (2 integral_0^(sqrt(pi) / 2) (1+ x^2 + x^4 / 2)dif x)^2\ &= (pi^(1 / 2) + 1 / 12 pi^(3 / 2)+1 / 160 pi^(5 / 2))^2\ &>= pi + 1 / 6 pi^2 + 1 / 144 pi^3 + 1 / 80 pi^3 + 1 / 960 pi^4 $ #strike[这不可能是正常做法, 这样放缩至少需要把 $pi > 22/7$ 代入, 然后至少计算到小数点后三位.] ] #pagebreak() === 6 考虑把关于原点对称的两个区域合并计算, 即我们现在在右半平面上计算, 左侧的区域对称到 $(-x,-y)$ 计算: $ & quad integral.double_(abs(x)+abs(y)<=1)e^(f(x,y)) dif x dif y\ & = integral.double_(abs(x)+abs(y)<=1, x>0)e^(f(x,y))+e^(f(-x,-y)) dif x dif y\ & = integral.double_(abs(x)+abs(y)<=1, x>0)e^(f(x,y))+e^(-f(x,y)) dif x dif y\ & >= 2integral.double_(abs(x)+abs(y)<=1, x>0) dif x dif y\ & = 2 quad qed $ === 7 考虑变换 $s=x+y, t=x-y, (diff (x,y))/(diff (s,t)) = 1/2$ $ integral.double_D f(x-y) dif x dif y &= integral.double_(D^') 1 / 2 f(t) dif s dif t\ &=1 / 2 integral_(-A)^A dif t integral_(abs(t) - A)^(A-abs(t)) f(t) dif s\ &=integral_(-A)^A (A-abs(t)) f(t) dif t quad qed $ #image("imgs/20.png", width: 50%)
https://github.com/jxpeng98/quarto-ssrn-scribe
https://raw.githubusercontent.com/jxpeng98/quarto-ssrn-scribe/main/_extensions/ssrn-scribe/typst-show.typ
typst
#show: doc => paper( $if(font)$ font: ["$font$"], $endif$ $if(fontsize)$ fontsize: $fontsize$, $endif$ maketitle: $maketitle$, $if(title)$ title: [$title$], $endif$ $if(by-author)$ authors: ( $for(by-author)$ ( name: "$it.name.literal$", $for(it.affiliations/first)$ department: [$it.department$], affiliation: [$it.name$], location: [$it.city$, $it.country$], $endfor$ email: "$it.email$", note: "$it.note$" ), $endfor$ ), $endif$ $if(date)$ date: [$date$], $endif$ $if(abstract)$ abstract: [$abstract$], $endif$ $if(bibliography)$ bibliography: bibliography($bibliography$, title: "References", style: "chicago-author-date"), $endif$ doc )
https://github.com/SWATEngineering/Docs
https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/3_PB/PianoDiProgetto/sections/PianificazioneSprint/DodicesimoSprint.typ
typst
MIT License
#import "../../functions.typ": glossary === Dodicesimo #glossary[sprint] *Inizio*: Venerdì 08/03/2024 *Fine*: Giovedì 14/03/2024 *Obiettivi dello #glossary[sprint]*: - Proseguire la stesura del _Piano di Progetto_: - Aggiornare pianificazione e preventivo pertinenti allo #glossary[sprint] 12 e inserire il consuntivo pertinente allo #glossary[sprint] 11; - Aggiornare ed aggiungere rispettivamente per gli #glossary[sprint] 13 e 14, pianificazione e preventivo; - Perfezionare il documento _Specifica Tecnica_; - Integrare nel cruscotto della qualità del _Piano di Qualifica_ le metriche relative alla qualità della codifica e alla qualità del prodotto; - Inizio della stesura del documento _Manuale Utente_. - Continuazione della codifica del prodotto: - Rimozione della tecnologia Pydantic dalla componente di simulazione; - Implementazione dei simulatori di dati urbanistici; - Introduzione della componente di allarmistica.
https://github.com/EpicEricEE/typst-marge
https://raw.githubusercontent.com/EpicEricEE/typst-marge/main/tests/overflow/multiple/test.typ
typst
MIT License
#import "/src/lib.typ": sidenote #set par(justify: true) #set page(width: 8cm, height: 18cm, margin: (outside: 4cm, rest: 5mm)) #let sidenote = sidenote.with(numbering: "1") #lorem(30) #for n in range(10) [ do #sidenote[ This note is moved up to prevent overlap. ] ] #lorem(30) #sidenote[ This sidenote would overflow out the bottom of the page and is thus moved upwards. ]
https://github.com/cskeeters/novela-typst
https://raw.githubusercontent.com/cskeeters/novela-typst/master/README.md
markdown
This repository contains instructions for modifying [Novela] to be used with [Typst]. # Issues Novela as it comes from [atipo] has a few issues when being used with Typst. ## Styles Typst cannot distinguish font styles "Regular" and "Display Regular". If you run `typst fonts --variants`, you'll find duplicate entries for Novela with *Typst* Style (italics) **Normal** Weight **400** and Stretch **1000**. This is because Novela sees the *OTF* styles **Regular** and **Display Regular** as having the same properties. > Novela > - Style: Italic, Weight: 900, Stretch: FontStretch(1000) > - Style: Normal, Weight: 600, Stretch: FontStretch(1000) > - Style: Normal, Weight: 900, Stretch: FontStretch(1000) > - Style: Italic, Weight: 600, Stretch: FontStretch(1000) > - Style: Italic, Weight: 400, Stretch: FontStretch(1000) > - Style: Normal, Weight: 700, Stretch: FontStretch(1000) > - Style: Italic, Weight: 700, Stretch: FontStretch(1000) > - Style: Italic, Weight: 400, Stretch: FontStretch(1000) > - Style: Normal, Weight: 400, Stretch: FontStretch(1000) > - Style: Normal, Weight: 400, Stretch: FontStretch(1000) > - Style: Italic, Weight: 400, Stretch: FontStretch(1000) This can lead to Typst using **Display Regular** instead of **Regular** when it's uncalled for. Typst may address [issue #2098](https://github.com/typst/typst/issues/2098) at some point to make this correction unnecessary. ## Ornaments Novela comes with three ornaments (fleuron001.ornm, fleuron002.ornm, and fleuron003.ornm), but they are unassociated with any Unicode value. This makes them impossible to use as long as [issue #4393](https://github.com/typst/typst/issues/4393) is unresolved. This can be confirmed with `otfinfo -u` and `otfinfo -g` from `lcdf-typetools`. # Modifying the Font We can modify the fonts as follows. We can move *Display Regular* and *Display Italic* into their own font family called *Novela Display*. Note that with this change, any other files which use a *Display* style of Novela may need to be change to use the *Regular* or *Italic* style of *Novela Display*. Second, we can associate the fleurons with Unicode values. I chose 0x273E, 0x273F, and 0x2740. There was nothing there, so this has no side effect. ## Procedure To modify the font you must have: * make (Tested with GNU) * [FontTools] FontTools comes with a program called `ttx` that can convert `.otf` files to `.ttx` which is XML. This XML can be modified to make corrections needed to address the above issues. Then the `.ttx` files can be converted back into `.otf` files. The Makefile automates this process. git clone [email protected]:cskeeters/novela-typst.git cd novela-typst cp ~/Downloads/Novela-Complete-Desktop/* . make After this is complete, three modified files will be located in a sub-folder named `modified`. If you have already installed the font, you need to remove the following styles: * Regular * Display Regular * Display Italic Then install the modified versions. # Testing You should be able to confirm that the changes worked in Typst by examining the output after compiling `test.typ`. [atipo]: https://www.atipofoundry.com/ [FontTools]: https://github.com/fonttools/fonttools [Novela]: https://www.atipofoundry.com/fonts/novela [Typst]: https://github.com/typst/typst
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/grid-3_02.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Test two columns in the same row overflowing by a different amount. #set page(width: 5cm, height: 2cm) #grid( columns: 3 * (1fr,), row-gutter: 8pt, column-gutter: (0pt, 10%), [A], [B], [C], [Ha!\ ] * 6, [rofl], [\ A] * 3, [hello], [darkness], [my old] )
https://github.com/sitandr/typst-examples-book
https://raw.githubusercontent.com/sitandr/typst-examples-book/main/src/snippets/chapters/page-numbering.md
markdown
MIT License
# Page numbering ## Separate page numbering for each chapter ```typ /// author: tinger // unnumbered title page if needed // ... // front-matter #set page(numbering: "I") #counter(page).update(1) #lorem(50) // ... // page counter anchor #metadata(()) <front-matter> // main document body #set page(numbering: "1") #lorem(50) #counter(page).update(1) // ... // back-matter #set page(numbering: "I") // must take page breaks into account, may need to be offset by +1 or -1 #context counter(page).update(counter(page).at(<front-matter>).first()) #lorem(50) // ... ```
https://github.com/satoqz/dhbw-template
https://raw.githubusercontent.com/satoqz/dhbw-template/main/template.typ
typst
MIT License
#let template( /// the title of your thesis /// type: str title: "Thesis Title", /// the subtitle, such as "Bachelor's Thesis", "T1000", "T2000" etc. /// type: str subtitle: "Bachelor's Thesis", /// single author name or list of author names /// type: str | array<str> author: "<NAME>", /// date of submission /// type: datetime date: datetime(year: 1970, month: 1, day: 1), /// formatting template for the date /// type: str date-format: "[day].[month].[year]", /// List of logos to show at the top of the titlepage /// type: array<image> logos: (), /// Additional details to show at the bottom of the titlepage /// type: dictionary<str, str> details: (:), /// Your abstract /// type: content abstract: lorem(100), /// Dictionary of acronyms /// type: dictionary<str, str> acronyms: (:), /// Rest of the document /// type: content body, ) = { let author = if type(author) != array { (author,) } else { author } set document(title: title, author: author, date: date) set page(paper: "a4", margin: 2.5cm) set pagebreak(weak: true) set text( size: 12pt, font: ("New Computer Modern Sans", "CMU Sans Serif"), ) set par(leading: 1em, justify: true) show par: set block(spacing: 1.5em) set list(indent: 0.75em) set enum(indent: 0.75em) set ref(supplement: it => { if it.func() == heading and it.level == 1 { "Chapter" } else { it.supplement } }) set math.equation(numbering: "1") set bibliography(title: "References") set outline(indent: auto, depth: 2, fill: repeat(" . ")) show outline: set heading(outlined: true) show heading.where(level: 1): set block(above: 2em, below: 2em) show heading.where(level: 2): set block(above: 2em, below: 1.5em) show heading.where(level: 3): set block(above: 1.5em, below: 1em) show heading.where(level: 1): set text(size: 24pt) show heading.where(level: 2): set text(size: 20pt) show heading.where(level: 3): set text(size: 16pt) show heading: it => { set par(justify: false) if it.level == 1 { pagebreak() } if it.numbering == none { it } else { grid( columns: (auto, auto), box(width: 48pt, counter(heading).display()), it.body, ) } } show outline.entry.where(level: 1): it => [ #if it.element.func() != heading { return it } #show ".": "" #v(4pt) #strong(it) ] show raw.where(block: true): set align(left) show raw.where(block: true): set par(justify: false) show raw.where(block: true): set text(size: 8pt) show raw.where(block: false): it => { text(size: 4pt, " ") + box( radius: 2pt, outset: 3pt, fill: luma(240), it ) + text(size: 4pt, " ") } show raw.where(block: true): it => { show: block.with( width: 100%, radius: 2pt, inset: 8pt, fill: luma(240), stroke: luma(128), ) grid( columns: 2, align: (right, left), row-gutter: 0.75em, column-gutter: 1em, ..it.lines .enumerate() .map(((i, line)) => (text(fill: luma(120), [#(i + 1)]), line)) .flatten(), ) } { set align(center) set par(justify: false) let author = author.join(" & ") let date = date.display(date-format) stack( spacing: 1fr, stack(dir: ltr, spacing: 1fr, ..logos), 2fr, text(size: 20pt, strong(title)), text(size: 14pt, strong(subtitle)), [presented to the \ *Department of Computer Science*], [for the \ *Bachelor of Science*], [at the \ *DHBW Stuttgart*], [by \ #text(size: 16pt, strong(author))], [submitted on \ #strong(date)], 2fr, table( columns: (auto, 1fr, auto), align: left, stroke: none, ..details.keys().map(it => (strong(it), none)).zip(details.values()).flatten(), ), ) } align(horizon, [ #set text(lang: "de") #heading(outlined: false, level: 2, [Selbstständigkeitserklärung]) Ich versichere hiermit, dass ich meine Bachelorarbeit mit 
dem Thema: #emph(title) selbstständig verfasst und keine anderen als die angegebenen Quellen und Hilfsmittel benutzt habe. Ich versichere zudem, dass die eingereichte elektronische Fassung mit der gedruckten Fassung, falls vorhanden, übereinstimmt. #v(16pt) Stuttgart, #date.display(date-format) #v(32pt) #line(length: 256pt) #author.join(", ") #pagebreak() ]) align(horizon + center, { heading(outlined: false, level: 2, [Abstract]) abstract pagebreak() }) set page(numbering: "I") counter(page).update(1) outline(target: heading, title: "Table of Contents") outline(target: figure.where(kind: image), title: "List of Figures") outline(target: figure.where(kind: raw), title: "List of Listings") heading(level: 1, [Acronyms]) table( columns: (auto, auto), inset: (left: 0em, right: 2em), stroke: none, ..acronyms.keys().map(strong).zip(acronyms.values()).flatten(), ) set heading(numbering: "1.1") [#box() <end-frontmatter>] set page(numbering: "1") counter(page).update(1) body set page(numbering: "I") context counter(page).update(counter(page).at(<end-frontmatter>).first() + 1) bibliography("references.yml") }
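// A minimal usage sketch (not part of the template itself; it assumes this
// file is saved as `template.typ` next to your thesis entry point, and that a
// `references.yml` exists for the bibliography call above):
//
// #import "template.typ": template
// #show: template.with(
//   title: "Example Thesis",
//   author: "Jane Doe",                          // hypothetical author
//   date: datetime(year: 2024, month: 6, day: 1),
//   details: ("Matriculation number": "123456"), // hypothetical detail row
// )
//
// = Introduction
// Chapters follow as ordinary Typst content; level-1 headings start new pages.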
https://github.com/HarumiKiyama/resume
https://raw.githubusercontent.com/HarumiKiyama/resume/master/README.md
markdown
To change the text size, you can uncomment the lines in `cv.typ` and set it to your choice. (The recommended text size for a CV is from 10pt to 12pt.) You can also change the page margin in `cv.typ` to fit more content on a single page. The margin defaults to `(x: 0.9cm, y: 1.3cm)`. Don't forget to include `#chiline()` every time you open a new section; this line acts as a clean visual divider. For basic Typst syntax, check this template as a reference; it's super easy to understand and use! For advanced topics, please refer to the [official reference](https://typst.app/docs/reference/) by Typst.
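For a quick start, the relevant settings look roughly like this (a sketch; the exact commented-out lines in `cv.typ` may differ):

```typst
#set text(size: 11pt) // recommended range for a CV: 10pt to 12pt
#set page(margin: (x: 0.9cm, y: 1.3cm)) // shrink to fit more on one page
```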
https://github.com/dashuai009/dashuai009.github.io
https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/044.typ
typst
#let date = datetime(
  year: 2023,
  month: 6,
  day: 17,
)
#metadata((
  title: "How to turn Bluetooth back on and connect a Bluetooth mouse on Windows using only the keyboard?",
  subtitle: [windows],
  author: "dashuai009",
  description: "",
  pubDate: date.display(),
))<frontmatter>

#import "../__template/style.typ": conf
#show: conf

== What happened

I slipped up today: I used my Bluetooth mouse to switch Bluetooth off, which cut the mouse off instantly. The only thing still connected to the computer was a wired keyboard.

== Recovery

I'm on Windows 11. Press Win+I to open Settings, move to the Bluetooth toggle with Tab and the arrow keys, and turn Bluetooth on with the #strong[spacebar].

Pressing Enter did nothing no matter how hard I tried; only after searching online did I learn that it has to be the spacebar.
https://github.com/WinstonMDP/math
https://raw.githubusercontent.com/WinstonMDP/math/main/knowledge/vector_spaces.typ
typst
#import "../cfg.typ": cfg
#show: cfg

= Vector spaces

$(V, +, dot)$ is a vector space over a field $F :=$
+ $(V, +)$ is an abelian group.
+ $dot in V^(F times V)$.
+ $V$ is closed under $dot$.
+ $lambda(a + b) = lambda a + lambda b$.
+ $(lambda + mu)a = lambda a + mu a$.
+ $(lambda mu)a = lambda (mu a)$.
+ $1a = a$.

A linear combination of vectors $a_1, ..., a_n := sum_i lambda_i a_i$.

A system of vectors $:=$ a sequence of vectors.

$b$ is linearly expressed through vectors $a_1, ..., a_n :=$ it's their linear combination.

A system of vectors $e_1, e_2, ..., e_n$ is a basis of a vector space $V :=$
+ ${e_1, e_2, ..., e_n} subset.eq V$.
+ $exists!$ linear expression of a vector $v$ through $e_1, ..., e_n$.

Coordinates of a vector $v$ in a basis $e_1, e_2, ..., e_n :=$ coefficients of a linear expression of $v$ through $e_1, e_2, ..., e_n$.

A vector space with a basis of $n$ vectors $tilde.eq F^n$.

A dimension of a vector space $:= hash$ its basis vectors.

A linear combination is trivial $:=$ the coefficients are zero.

Vectors $a_1, ..., a_n$ are linearly dependent $:= exists$ a nontrivial linear combination equal to zero.

A vector $b$ is linearly expressed through vectors $a_1, ..., a_n -> ("the expression is unique" <-> a_1, ..., a_n "are linearly independent")$.

A span of a set $S := angle.l S angle.r :=$ a set of finite linear combinations of vectors $in S$.

A vector space $V$ is generated by a set $S := V = angle.l S angle.r$.

A vector space is finite-dimensional $:=$ it's generated by a finite set of vectors.

A linearly independent system of vectors generating a space is a basis of the space.

A finite-dimensional vector space has a basis.

Bases of a finite-dimensional vector space have the same $\#$ vectors.

Two finite-dimensional vector spaces are isomorphic $<->$ they have the same dimension.

A rank of a system of vectors $:=$ its span dimension.

A system of vectors $a_1, ..., a_n$ is equivalent to a system of vectors $b_1, ..., b_m := angle.l a_1, ..., a_n angle.r = angle.l b_1, ..., b_m angle.r$.

$a_1, ..., a_n$ is linearly expressed through $b_1, ..., b_m$ and vice versa $<-> angle.l a_1, ..., a_n angle.r = angle.l b_1, ..., b_m angle.r$.
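A worked example: in $F^2$, the vectors $(1, 2)$ and $(2, 4)$ are linearly dependent, since $2(1, 2) - (2, 4) = 0$ is a nontrivial linear combination equal to zero; the vectors $(1, 0), (0, 1)$ form a basis, and the coordinates of $(3, 5)$ in it are $3$ and $5$.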
https://github.com/Area-53-Robotics/53B-Notebook-Over-Under-2023-2024
https://raw.githubusercontent.com/Area-53-Robotics/53B-Notebook-Over-Under-2023-2024/master/templates/headers.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#let entry_header(title: [], color: black) = { text( fill: color, size: 16pt, [=== #title] ) } #let box_header(title: [], color: luma(230)) = { box( fill: color, radius: 2pt, inset: 6pt )[ #text( size: 16pt, [=== #title] ) ] }
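// Usage sketch (the titles and colors below are hypothetical):
//
// #entry_header(title: [Tournament Recap], color: blue)
// #box_header(title: [Build Notes])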
https://github.com/katamyra/Notes
https://raw.githubusercontent.com/katamyra/Notes/main/Compiled%20School%20Notes/CS2110/Modules/LC3.typ
typst
#import "../../../template.typ": *

= The LC-3

== Memory Organization

The LC-3 memory has an address space of $2^(16)$ locations and an addressability of 16 bits. For the LC-3, we refer to 16 bits _as one word_, and we say that the LC-3 is _word addressable_.

#note[
  Reminder (because I forgot lol):

  *addressability*: the number of bits stored at each location

  *address space*: the number of unique storage locations
]

== Registers

#definition[
  *Registers*: used to store data temporarily, because it often takes more than one clock cycle to access memory or do other tasks.
]

The LC-3 has 8 unique registers, each identifiable by a three-bit register number.

== The Instruction Set

Instructions are made of two things: their *opcode* (what the instruction is asking the computer to do) and their *operands* (who the computer is expected to do it to).

#definition[
  *Instruction Set*: defined by its opcodes, datatypes, and addressing modes.
]

#example[
  The instruction [ADD R2, R0, R1] has an opcode of ADD, one addressing mode (register mode), and one data type (2's complement). In this case we specify two registers whose values are added and a register to store the result in.
]

Other instructions: AND, BR, JMP, JSR, LD, STI, etc. There are too many; maybe I'll describe them if required.

== Opcodes

The LC-3 ISA has 15 instructions, each defined by its unique opcode, meaning 4 bits are used for the opcode.

#note[
  The LC-3 only has 15 opcodes, even though there are 16 possibilities. The code 1101 has been left unspecified.
]

There are three different kinds of opcodes:
+ *Operates*: process information
+ *Data movement*: move information between memory and the registers, and between registers and I/O
+ *Control*: change the sequence of instructions that will be executed

== Data Types

Every opcode will interpret the bit patterns of its operands according to the data type it is designed to support. For the ADD opcode, this is 2's complement.

== Addressing Modes

#definition[
  *Addressing Modes*: a mechanism for specifying where the operand is located.
]

An operand can generally be found in one of three places:
+ In memory
+ In a register
+ As a part of the instruction
  - If part of the instruction, we refer to it as a _literal_ or as an _immediate operand_.

The LC-3 supports five addressing modes: immediate, register, and three memory addressing modes (PC-relative, indirect, and Base+offset).

#definition[
  *PC-relative* operands are calculated relative to the Program Counter value. For example, `LD R0, 50(PC)` means to load the content of memory at an address calculated by adding 50 to the PC into register R0.
]

#definition[
  *Indirect Addressing* operands involve accessing memory indirectly through a pointer.
]

#definition[
  *Base+offset Addressing* operands are calculated by adding a base value (usually in a register) to some offset. Useful for accessing elements of arrays/structures in memory.
]

== Condition Codes

The LC-3 has three single-bit registers that are individually set each time one of the 8 general purpose registers (GPRs) is written into as a result of executing one of the operate or load instructions.

#definition[
  The three single-bit registers are the *N, Z, and P* registers, corresponding to negative, zero, and positive. These three are referred to as *condition codes* because the condition of those bits is used to change the sequence of execution of instructions in a program.
]
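To tie the addressing modes and condition codes together, a small sketch (the labels `VALUE`, `PTR` and `DONE` are hypothetical):

#example[
  `LD R0, VALUE` loads PC-relative; `LDI R2, PTR` loads through the address stored at `PTR`; `LDR R1, R2, #5` loads from the address in R2 plus 5. Each of these loads writes a register, so it also sets the condition codes; a following `BRz DONE` branches only if the value just loaded was zero.
]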
https://github.com/Ligandlly/typst-templates
https://raw.githubusercontent.com/Ligandlly/typst-templates/main/note.typ
typst
MIT License
#let note(title, body) = { align(center)[#text(title, 1.75em, weight: "bold")] set text(font: ("Times", "Songti SC")) set par(justify: true) set heading(numbering: "1.1") show heading: set text(orange) show "TODO": it => box(fill: yellow.lighten(60%))[ #text(it, red) ] show strong: set text(font: ("Times", "Pingfang SC"), weight: "bold") columns(body) } #let highlight(body) = {block(body, fill: yellow.lighten(30%))}
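// Usage sketch (assumes this file is saved as `note.typ` next to your notes):
//
// #import "note.typ": note, highlight
// #show: note.with("Operating Systems, Lecture 3")
// Any literal "TODO" in the body is boxed in yellow by the show rule above.
// #highlight[This block gets a light yellow background.]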
https://github.com/RodolpheThienard/typst-template
https://raw.githubusercontent.com/RodolpheThienard/typst-template/main/reports/1/example.typ
typst
MIT License
#import "template1.typ": template
#show: doc => template(
  title: [
    *Template 1*
  ],
  subtitle: "Subtitle",
  subsubtitle: none,
  authors: (
    (
      name: "<NAME>",
    ),
    (
      name: "<NAME>",
    ),
  ),
  supervisors: (
    (
      name: "<NAME>",
    ),
    (
      name: "<NAME>",
    ),
  ),
  abstract: [#lorem(100)],
  doc,
)
https://github.com/jamesrswift/blog
https://raw.githubusercontent.com/jamesrswift/blog/main/assets/2024-07-03-sparklines/code.typ
typst
MIT License
#import "../packages/sparklines/lib.typ" as sparkline
#import "data.typ"

#set par(justify: true)
#set text(size: 12pt)

#let par-wrap(body) = {
  lorem(20)
  body
  lorem(20)
}

#par-wrap[
  Owing to the good $#`ETH`->#`GBP`$ #sparkline.make(
    data.series("Adj Close").filter(it=>it.first()>300),
    width: 2.5em,
    height: 1.2em
  ) exchange rate.
]

#par-wrap[
  #sparkline.make(
    (1,4,2,5).enumerate(),
    width: 0.8em,
    height: 1em,
    style: (stroke: 0.92pt),
  )
]

#par-wrap[
  After the event #sparkline.make(
    data.series("Adj Close").filter(it=>(it.first()>290 and it.first() < 320)),
    width: 1.7em,
    height: 1.2em,
    draw: (
      sparkline.line,
      sparkline.vline(313, stroke: (thickness: 0.4pt, paint: red))
    )
  ) the exchange rate
]

#import "../packages/booktabs/lib.typ" as booktabs

#let column(key, display, width: 1em) = (
  key: key,
  header: sparkline.make(
    width: width,
    data.series(key)
  ) + display,
  width: 1fr,
  align: right,
)

#booktabs.rigor.make(
  columns: (
    (
      key: "Date-Display",
      header: [Date],
      gutter: 1em
    ),
    column("Open", width: 2.3em)[Open],
    column("High", width: 2.3em)[High],
    column("Low", width: 2.6em)[Low],
    column("Close", width: 2.1em)[Close],
    column("Volume")[Volume]
  ),
  data: data.converted
)
https://github.com/EgorGorshen/scripts-for-typst
https://raw.githubusercontent.com/EgorGorshen/scripts-for-typst/main/lib.typ
typst
MIT License
#import "matrix.typ": *
#import "truth-table.typ": *
#import "utils.typ": *
#import "gause-algo.typ": *

/* A utility for styling questions, prompts, etc.:
https://github.com/jomaway/typst-gentle-clues/tree/main */
#import "@preview/gentle-clues:0.9.0": *

#import "@preview/cetz:0.1.2": *
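// Usage sketch (hypothetical; assumes this file lives at the project root as `lib.typ`):
//
// #import "lib.typ": *
// This brings in the matrix, truth-table, utils and Gauss-elimination helpers
// above, plus the re-exported gentle-clues and cetz packages.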
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10C80.typ
typst
Apache License 2.0
#let data = ( ("OLD HUNGARIAN CAPITAL LETTER A", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER AA", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EB", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER AMB", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EC", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ENC", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ECS", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ED", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER AND", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER E", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER CLOSE E", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EE", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EF", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EG", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EGY", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EH", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER I", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER II", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EJ", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EK", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER AK", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER UNK", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EL", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ELY", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EM", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EN", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ENY", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER O", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER OO", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER NIKOLSBURG OE", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER RUDIMENTA OE", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER OEE", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EP", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EMP", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ER", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER SHORT ER", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ES", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ESZ", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ET", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ENT", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ETY", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ECH", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER U", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER UU", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER NIKOLSBURG UE", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER RUDIMENTA UE", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EV", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EZ", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER EZS", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER ENT-SHAPED SIGN", "Lu", 0), ("OLD HUNGARIAN CAPITAL LETTER US", "Lu", 0), (), (), (), (), (), (), (), (), (), (), (), (), (), ("OLD HUNGARIAN SMALL LETTER A", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER AA", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EB", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER AMB", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EC", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ENC", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ECS", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ED", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER AND", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER E", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER CLOSE E", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EE", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EF", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EG", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EGY", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EH", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER I", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER II", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EJ", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EK", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER AK", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER UNK", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EL", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ELY", "Ll", 0), 
("OLD HUNGARIAN SMALL LETTER EM", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EN", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ENY", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER O", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER OO", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER NIKOLSBURG OE", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER RUDIMENTA OE", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER OEE", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EP", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EMP", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ER", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER SHORT ER", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ES", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ESZ", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ET", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ENT", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ETY", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ECH", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER U", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER UU", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER NIKOLSBURG UE", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER RUDIMENTA UE", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EV", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EZ", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER EZS", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER ENT-SHAPED SIGN", "Ll", 0), ("OLD HUNGARIAN SMALL LETTER US", "Ll", 0), (), (), (), (), (), (), (), ("OLD HUNGARIAN NUMBER ONE", "No", 0), ("OLD HUNGARIAN NUMBER FIVE", "No", 0), ("OLD HUNGARIAN NUMBER TEN", "No", 0), ("OLD HUNGARIAN NUMBER FIFTY", "No", 0), ("OLD HUNGARIAN NUMBER ONE HUNDRED", "No", 0), ("OLD HUNGARIAN NUMBER ONE THOUSAND", "No", 0), )
https://github.com/marcustut/machinelearning
https://raw.githubusercontent.com/marcustut/machinelearning/main/spotify/report.typ
typst
#import "template.typ": *
#import "@preview/cetz:0.1.2": canvas, plot, palette, draw

#let legend-item(point, name, style) = {
  draw.content(
    (point.at(0)+2, point.at(1)),
    (point.at(0)+2, point.at(1)),
    frame: "rect",
    padding: .3em,
    fill: style.fill,
    stroke: none,
    [],
  )
  draw.content(
    (point.at(0)+2.4, point.at(1)+0.1),
    (point.at(0)+5.5, point.at(1)),
    [
      #v(.2em)
      #text(name, size: .7em, weight: "bold")
    ]
  )
}

#show: project.with(
  title: "Machine Learning with Spotify dataset",
  authors: (
    (name: "<NAME>", email: "<EMAIL>"),
  ),
  date: "November 22, 2023",
  font: "CMU Serif",
  monofont: "CMU Typewriter Text",
)

= Individual classifiers vs Ensemble (vote) of classifiers

== Evaluation measure(s)

I chose *F1-Measure* because it is a combination of both precision and recall; hence it reflects both the true positive rate and the false positive rate of the classifiers' performance, which is why this measure depicts a more complete picture of each classifier's strengths and weaknesses compared to other measures such as accuracy. Hence, it is easier for us to compare and determine the trade-offs between classifiers. \ \
To clarify, Weka labels it as `F-Measure`, but it is the same as F1-Measure because it uses the harmonic mean ($beta = 1$) in the calculation of:

$ F = frac((1 + beta^2) dot.c text("Precision") dot.c text("Recall"), beta^2 dot.c text("Precision") + text("Recall")) $

Note that in Weka, the weighted average of F-Measure is calculated by:

$ F_("weighted avg") = frac(sum_(k=1)^n F_k dot.c "count(k)", sum_(k=1)^n "count(k)") $

where $n$ is the number of classes, $F_k$ is the F-Measure of class $k$ and $"count"(k)$ is the number of instances of class $k$.

== Evaluation of the three classifiers

The Weka classifiers used for the evaluation are:
#list(
  indent: 1em,
  [*Decision Tree* $arrow.r$ `weka.classifiers.trees.J48`],
  [*Neural Network* $arrow.r$ `weka.classifiers.functions.MultiLayerPerceptron`],
  [*k-NN* $arrow.r$ `weka.classifiers.lazy.IBk`]
)

#align(center)[
  #figure(
    table(
      columns: (auto, auto, auto, auto),
      inset: 6pt,
      align: horizon,
      fill: (col, row) => if row == 0 or row == 6 or col == 0 { silver } else { white },
      [], [*Decision Tree*], [*Neural Network*], [*k-NN (k=1)*],
      [*edm*], [0.593], [0.608], [0.542],
      [*latin*], [0.408], [0.375], [0.411],
      [*pop*], [0.343], [0.360], [0.331],
      [*rap*], [0.608], [0.629], [0.549],
      [*rock*], [0.653], [0.679], [0.608],
      [*Weighted Average*], [*0.522*], [*0.532*], [*0.488*],
    ),
    caption: [F1-Measure of the three classifiers on the dataset]
  )
]

The table above shows the results of the classifiers; they were run with default classifier settings and a test option of 10-fold cross validation. In this case, k-fold cross validation was preferred over a train/test split so that the classifiers are trained on as much data as possible, and because the dataset is small, the incurred computational overhead is acceptable.

As can be seen from the results, the _neural network_ classifier performed the best with an average F1-Measure of *0.532*. The _decision tree_ classifier performed the second best with an average F1-Measure of *0.522*. The _k-NN_ classifier performed the worst with an average F1-Measure of *0.488*. One possible cause is that the neural network performed best because it is capable of learning complex non-linear relationships across the feature set.
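The weighted-average formula above can also be reproduced in a few lines of Typst as a sanity check; the per-class instance counts below are hypothetical, since the report does not list them:

#let weighted-f1(f1s, counts) = (
  f1s.zip(counts).map(p => p.at(0) * p.at(1)).sum() / counts.sum()
)
// hypothetical counts for (edm, latin, pop, rap, rock):
// #weighted-f1((0.593, 0.408, 0.343, 0.608, 0.653), (1200, 1000, 1400, 1100, 1300))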
== Evaluation of the ensemble of the three classifiers

The combination rules used for the ensemble are:
#list(
  indent: 1em,
  [*Average of Probabilities*],
  [*Majority Voting*],
  [*Minimum Probability*]
)

=== Results

To ensure the fairness of the test, the ensemble was also run with 10-fold cross validation. The results for the different combination rules are as follows:

#align(center)[
  #figure(
    table(
      columns: (auto, auto, auto, auto),
      inset: 6pt,
      align: horizon,
      fill: (col, row) => if row == 0 or row == 6 or col == 0 { silver } else { white },
      [], [*Average of Probabilities*], [*Majority Voting*], [*Minimum Probability*],
      [*edm*], [0.614], [0.621], [0.540],
      [*latin*], [0.435], [0.432], [0.402],
      [*pop*], [0.366], [0.378], [0.345],
      [*rap*], [0.644], [0.650], [0.551],
      [*rock*], [0.696], [0.703], [0.618],
      [*Weighted \ Average*], [*0.552*], [*0.557*], [*0.491*],
    ),
    caption: [F1-Measure of the ensemble of the three classifiers]
  )
]

Looking at the table above, we can see that _Majority Voting_ produces the best result with an average F1-Measure of *0.557*. _Average of Probabilities_ produces the second best result with an average F1-Measure of *0.552*. _Minimum Probability_ produces the worst result with an average F1-Measure of *0.491*.

=== Results justification

_Majority Voting_ performed the best because it utilises a simple voting scheme, and since the dataset has discrete class labels such as `edm`, `latin`, `pop`, `rap` and `rock`, it is robust to noisy predictions: even if a few classifiers produce incorrect predictions, the influence on the final decision is minor as long as the majority of the classifiers produce correct predictions.

As for _Minimum Probability_, it performed the worst due to its conservativeness, in that it looks only at the minimum probability. This is especially not ideal for this dataset because the dataset is small, hence a lot of information is discarded when looking solely at the minimum probability.

The justification for _Average of Probabilities_ is that although it does not perform as well as _Majority Voting_, it is a safe go-to approach since it factors in both the strengths and weaknesses of all classifiers by averaging their output.

In conclusion, comparing the overall results of the ensemble with the individual classifiers, we can see that the ensemble performed better than the individual classifiers. This is because the ensemble is able to combine the strengths of the individual classifiers and mitigate their weaknesses.

= Ensemble with bagging

== Results with increasing bag size

The configuration for the results is as follows:
#list(
  indent: 1em,
  [*Decision Tree* $arrow.r$ `weka.classifiers.trees.J48` (10-fold cross validation)],
  [*Neural Network* $arrow.r$ `weka.classifiers.functions.MultiLayerPerceptron` (80/20 train/test split)],
  [*k-NN* $arrow.r$ `weka.classifiers.lazy.IBk` (10-fold cross validation)]
)

The results below were run using `weka.classifiers.meta.Bagging` with the above configuration and increasing `numIterations` (equivalent to bag size) from $2$ to $20$. Note that the neural network does not use 10-fold cross validation because it takes too long to run. For the sake of simplicity, only the results for the _Weighted Average_ are included.
#let bg_dtree = (
  (2, 0.495), (4, 0.548), (6, 0.563), (8, 0.572), (10, 0.579),
  (12, 0.583), (14, 0.584), (16, 0.587), (18, 0.588), (20, 0.592)
)
#let bg_nn = (
  (2, 0.559), (4, 0.560), (6, 0.565), (8, 0.565), (10, 0.566),
  (12, 0.566), (14, 0.567), (16, 0.566), (18, 0.567), (20, 0.566)
)
#let bg_knn = (
  (2, 0.467), (4, 0.484), (6, 0.489), (8, 0.492), (10, 0.492),
  (12, 0.494), (14, 0.494), (16, 0.494), (18, 0.495), (20, 0.495)
)

#align(center)[
  #figure(
    table(
      columns: (auto, auto, auto, auto, auto, auto, auto, auto, auto, auto, auto),
      inset: 6pt,
      align: horizon,
      fill: (col, row) => if row == 0 or col == 0 { silver } else { white },
      [], [*2*], [*4*], [*6*], [*8*], [*10*], [*12*], [*14*], [*16*], [*18*], [*20*],
      [*Decision Tree (J48)*], ..bg_dtree.map(it => [#it.at(1)]),
      [*Neural \ Network*], ..bg_nn.map(it => [#it.at(1)]),
      [*k-NN*], ..bg_knn.map(it => [#it.at(1)])
    ),
    caption: [F1-Measure of the ensemble (bagging) with increasing bag size]
  )
]

#align(center)[
  #figure(
    canvas(length: 1cm, {
      plot.plot(
        size: (8, 6),
        x-tick-step: 2,
        x-label: "Bag Size",
        y-min: 0.4,
        y-max: 0.6,
        y-tick-step: 0.1,
        y-label: "Weighted F1-Measure",
        {
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: blue)),
            bg_dtree,
          )
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: red)),
            bg_nn,
          )
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: green)),
            bg_knn,
          )
        })

      let legend = (
        "k-NN": (color: black, fill: green),
        "Neural Network": (color: black, fill: red),
        "Decision Tree": (color: black, fill: blue),
      );

      let x = 3
      let y = 0.8
      for (key, style) in legend {
        legend-item((x, y), key, style)
        y = y + 0.5
      }
    }),
    caption: [Plot of ensemble (bagging) with increasing bag size]
  )
]

From the table and figure above, we can see a general trend for all three classifiers: as the bag size increases, the F1-Measure also increases. Hence the best performing ensemble size is the largest bag size, $20$. This phenomenon can be explained by the _Condorcet Jury Theorem_: given that the probability of each voter being correct is $p$ and the probability of a majority of voters being correct is $M$, then:

$ "if" p > 0.5, "then" M > p $
$ "if " p "always" > 0.5, "then" M "approaches" 1.0 "as the number of voters approaches" infinity $

In our case, as the bag size approaches infinity, the F1-Measure of the ensemble approaches $1.0$, assuming each base classifier does better than chance (a numerical sketch of this theorem appears at the end of this report). However, the computation cost of the ensemble also increases linearly with the bag size. Hence, there is a trade-off between the computation cost and the performance of the ensemble, so the advisable approach is to keep increasing the bag size until the improvements become too small to matter or computation power runs out.

Additionally, we can observe that for each classifier there seems to be a threshold bag size beyond which further increments have little impact on the result. This is a phenomenon called "level-off" and will be discussed in the next section, where the effect is more observable.

= Ensemble with random subspacing

== Results with increasing subspace size

The configuration for the results is as follows:
#list(
  indent: 1em,
  [*Decision Tree* $arrow.r$ `weka.classifiers.trees.J48` (10-fold cross validation)],
  [*Neural Network* $arrow.r$ `weka.classifiers.functions.MultiLayerPerceptron` (80/20 train/test split)],
  [*k-NN* $arrow.r$ `weka.classifiers.lazy.IBk` (10-fold cross validation)]
)

The results below were run using `weka.classifiers.meta.RandomSubSpace` with the above configuration and increasing `subSpaceSize` from $2$ to $20$.
Note that the neural network does not use 10-fold cross validation because it takes too long to run. For the sake of simplicity, only the results for the _Weighted Average_ are included.

#let rss_dtree = (
  (2, 0.482), (4, 0.527), (6, 0.556), (8, 0.572), (10, 0.573),
  (12, 0.522), (14, 0.522), (16, 0.522), (18, 0.522), (20, 0.522)
)
#let rss_nn = (
  (2, 0.451), (4, 0.507), (6, 0.533), (8, 0.559), (10, 0.558),
  (12, 0.557), (14, 0.557), (16, 0.557), (18, 0.567), (20, 0.567)
)
#let rss_knn = (
  (2, 0.415), (4, 0.480), (6, 0.517), (8, 0.527), (10, 0.514),
  (12, 0.488), (14, 0.488), (16, 0.488), (18, 0.488), (20, 0.488)
)

#align(center)[
  #figure(
    table(
      columns: (auto, auto, auto, auto, auto, auto, auto, auto, auto, auto, auto),
      inset: 6pt,
      align: horizon,
      fill: (col, row) => if row == 0 or col == 0 { silver } else { white },
      [], [*2*], [*4*], [*6*], [*8*], [*10*], [*12*], [*14*], [*16*], [*18*], [*20*],
      [*Decision Tree (J48)*], ..rss_dtree.map(it => [#it.at(1)]),
      [*Neural \ Network*], ..rss_nn.map(it => [#it.at(1)]),
      [*k-NN*], ..rss_knn.map(it => [#it.at(1)])
    ),
    caption: [F1-Measure of the ensemble (random subspacing) with increasing subspace size]
  )
]

#align(center)[
  #figure(
    canvas(length: 1cm, {
      plot.plot(
        size: (8, 6),
        x-tick-step: 2,
        x-label: "Subspace Size",
        y-min: 0.4,
        y-max: 0.6,
        y-tick-step: 0.1,
        y-label: "Weighted F1-Measure",
        {
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: blue)),
            rss_dtree,
          )
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: red)),
            rss_nn,
          )
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: green)),
            rss_knn,
          )
        })

      let legend = (
        "k-NN": (color: black, fill: green),
        "Neural Network": (color: black, fill: red),
        "Decision Tree": (color: black, fill: blue),
      );

      let x = 3
      let y = 0.8
      for (key, style) in legend {
        legend-item((x, y), key, style)
        y = y + 0.5
      }
    }),
    caption: [Plot of ensemble (random subspacing) with increasing subspace size]
  )
]

From the table and figure above, we see that from subspace size *$2$* to *$8$* all three classifiers showed an upward trend. However, after subspace size *$8$* the F1-Measure of all three classifiers levels off and eventually plateaus. This phenomenon is due to new ensemble members starting to produce results similar to previous members; no additional diversity is added, causing the final output to be similar, if not identical, to the previous runs.

= Suitable classifiers for each ensemble method

== Suitable classifiers for bagging ensemble

Based on the lectures, bagging is suitable for classifiers that are unstable. Unstable classifiers are classifiers that are sensitive to small changes in the input data, or in other words have high variance. This is a sign that these classifiers are more prone to overfitting. Hence, bagging is suitable for unstable classifiers because it builds new models using the same classifier on variants of the data; if the classifier is very stable, it will just get similar results each time and therefore not gain much from the ensemble. Two examples of unstable classifiers are decision trees and neural networks.

== Suitable classifiers for random subspacing ensemble

On the other hand, random subspacing is suitable for classifiers that are stable. Stable classifiers are classifiers that are not sensitive to small changes in the input data, or in other words have low variance.
Hence, random subspacing is suitable for stable classifiers because it introduces randomness by selecting a random subset of features each time, and by doing so makes the base models more diverse, reducing model correlation. One such stable classifier is k-NN.

== Best ensemble method for each classifier in this dataset

=== Decision Tree

#align(center)[
  #figure(
    canvas(length: 1cm, {
      plot.plot(
        size: (8, 6),
        x-tick-step: 2,
        x-label: "Ensemble Size",
        y-min: 0.4,
        y-max: 0.6,
        y-tick-step: 0.1,
        y-label: "Weighted F1-Measure",
        {
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: blue)),
            bg_dtree,
          )
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: red)),
            rss_dtree,
          )
        })

      let legend = (
        "Random Subspacing": (color: black, fill: red),
        "Bagging": (color: black, fill: blue),
      );

      let x = 2.4
      let y = 0.8
      for (key, style) in legend {
        legend-item((x, y), key, style)
        y = y + 0.5
      }
    }),
    caption: [Performance of bagging ensemble against random subspacing ensemble]
  )
]

The figure above shows the performance of the bagging ensemble against the random subspacing ensemble for the decision tree. As can be seen from the figure, the bagging ensemble performed better than the random subspacing ensemble. This is in line with expectation, because the decision tree is an unstable classifier and hence the bagging ensemble is more suitable for it.

=== Neural Network

#align(center)[
  #figure(
    canvas(length: 1cm, {
      plot.plot(
        size: (8, 6),
        x-tick-step: 2,
        x-label: "Ensemble Size",
        y-min: 0.4,
        y-max: 0.6,
        y-tick-step: 0.1,
        y-label: "Weighted F1-Measure",
        {
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: blue)),
            bg_nn,
          )
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: red)),
            rss_nn,
          )
        })

      let legend = (
        "Random Subspacing": (color: black, fill: red),
        "Bagging": (color: black, fill: blue),
      );

      let x = 2.4
      let y = 0.8
      for (key, style) in legend {
        legend-item((x, y), key, style)
        y = y + 0.5
      }
    }),
    caption: [Performance of bagging ensemble against random subspacing ensemble]
  )
]

From the comparison figure above, we can see that the bagging ensemble performed better than the random subspacing ensemble. This is in line with expectation, because the neural network is an unstable classifier and the bagging ensemble takes advantage of this characteristic.

=== k-NN

#align(center)[
  #figure(
    canvas(length: 1cm, {
      plot.plot(
        size: (8, 6),
        x-tick-step: 2,
        x-label: "Ensemble Size",
        y-min: 0.4,
        y-max: 0.6,
        y-tick-step: 0.1,
        y-label: "Weighted F1-Measure",
        {
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: blue)),
            bg_knn,
          )
          plot.add(
            mark: "o",
            mark-size: .1,
            style: (mark: (stroke: red)),
            rss_knn,
          )
        })

      let legend = (
        "Random Subspacing": (color: black, fill: red),
        "Bagging": (color: black, fill: blue),
      );

      let x = 2.4
      let y = 5.1
      for (key, style) in legend {
        legend-item((x, y), key, style)
        y = y + 0.5
      }
    }),
    caption: [Performance of bagging ensemble against random subspacing ensemble]
  )
]

From the figure above, the performance of both strategies seems similar, but if we average each measure, we get *#{calc.round(bg_knn.fold(0, (acc, it) => acc + it.at(1)) / bg_knn.len(), digits: 4)}* for bagging and *#{calc.round(rss_knn.fold(0, (acc, it) => acc + it.at(1)) / rss_knn.len(), digits: 4)}* for random subspacing. Hence, on average the bagging ensemble performed better than the random subspacing ensemble, although the gain is almost negligible. Interestingly, random subspacing performed better than bagging for ensemble sizes from *$6$* to *$10$*.
This is somewhat unexpected: since k-NN is a stable classifier, I would expect random subspacing to outperform bagging. One possible cause is that I am using $k=1$, which might be too sensitive to noise, so the ensemble is not able to mitigate the noise.

= Linear Regression and Stochastic Gradient Descent

== Results for Linear Regression

The source code for the linear regression can be found at `src/LinearRegression.java`. To run the code, use the following command:

```sh
make run-linear-regression
```

This will compile the code and run linear regression on the dataset. The model was trained with 80% of the data and evaluated with 10-fold cross validation using 20% of the data; the results are as follows:

#align(center)[
  #figure(
    table(
      columns: (auto, auto),
      inset: 6pt,
      align: horizon,
      fill: (col, row) => if row == 0 or col == 0 { silver } else { white },
      [*Metrics*], [*Value*],
      [*Correlation coefficient*], [0.6922],
      [*Mean absolute error*], [0.0986],
      [*Root mean squared error*], [0.1248],
      [*Relative absolute error*], [71.2496%],
      [*Root relative squared error*], [72.1565%],
    ),
    caption: [Evaluation metrics for linear regression]
  )
]

Looking at the result, it seems that there is indeed a linear relationship between the features `tempo`, `loudness` and `liveness` and the target variable `energy`, as indicated by the correlation coefficient of *0.6922*, showing a positive correlation. However, the model is not doing very well, since the relative absolute error (RAE) is *71.2496%* and the root relative squared error (RRSE) is *72.1565%*. This is probably due to the fact that the dataset is small, and hence the model is not able to learn the relationship between the features and the target variable well enough.

== Results for Stochastic Gradient Descent

The source code for stochastic gradient descent can be found at `src/StochasticGradientDescent.java`. To run the code, use the following command:

```sh
make run-sgd
```

This will compile the code and run stochastic gradient descent with Squared Loss as the loss function on the dataset. The model was trained with 80% of the data and evaluated with 10-fold cross validation using 20% of the data; the results are as follows:

#align(center)[
  #figure(
    table(
      columns: (auto, auto),
      inset: 6pt,
      align: horizon,
      fill: (col, row) => if row == 0 or col == 0 { silver } else { white },
      [*Metrics*], [*Value*],
      [*Correlation coefficient*], [0.6876],
      [*Mean absolute error*], [0.0992],
      [*Root mean squared error*], [0.1256],
      [*Relative absolute error*], [71.6696%],
      [*Root relative squared error*], [72.6127%],
    ),
    caption: [Evaluation metrics for stochastic gradient descent]
  )
]

Surprisingly, the result for SGD is very similar to linear regression, with its RAE and RRSE also in the 71% - 73% range. I would also consider this model to be of poor quality: using the standard definition

$ "RAE" = (sum_i abs(hat(y)_i - y_i)) / (sum_i abs(overline(y) - y_i)) $

an RAE of about 71% means the model's total absolute error is still roughly 71% of that of a naive predictor that always outputs the mean, i.e. only about a 29% improvement over that baseline.

== Differences between Linear Regression and Stochastic Gradient Descent

From the results above, we can see that the performance of both linear regression and stochastic gradient descent (SGD) is very similar, with only subtle differences.
There are several possible reasons for this:
#list(
  indent: 1em,
  [*Similar Loss Function* - The loss function that the SGD model uses is Squared Loss, and Weka's linear regression might be optimising a similar underlying loss function.],
  [*Dataset Size* - The dataset is small, and therefore both models might not be able to learn the relationship between the features and the target variable well enough to make a difference.],
  [*Linearity* - The linear relationship between the features and the target variable might be dominant, so SGD is essentially optimising the same problem as linear regression and converging to a similar solution. One piece of evidence supporting this is that the correlation coefficients of both models are moderate and similar to each other.]
)
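As a closing aside, the Condorcet Jury Theorem invoked in the bagging section can be checked numerically. Below is a minimal Typst sketch (the function name is mine; it assumes independent voters and an odd ensemble size $n$):

#let majority-correct(p, n) = (
  range(calc.ceil(n / 2), n + 1)
    .map(k => calc.binom(n, k) * calc.pow(p, k) * calc.pow(1 - p, n - k))
    .sum()
)
// e.g. majority-correct(0.6, 21) evaluates to roughly 0.83,
// and the value keeps growing toward 1.0 as n increases.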
https://github.com/EGmux/TheoryOfComputation
https://raw.githubusercontent.com/EGmux/TheoryOfComputation/master/unit2/tese-church-turing.typ
typst
#set heading(numbering: "1.")

=== 3.1
This exercise concerns TM $M_2$, whose description and state diagram appear in Example 3.7. In each of the parts, give the sequence of configurations that $M_2$ enters when started on the indicated input string.
\
#figure(image("../assets/diag.png", width: 80%), caption: []) <fig-diag>

==== a) 0
\
#math.equation(block: true, $ q_1 0 $)

🚨 Note that a computation is a sequence of configurations, and such a sequence must be finite!

🚨 Recognizability of a Turing machine only requires the machine to have a language; halting on every input is not required.

💡 A TM is a decider iff it halts on every input; the decidable languages are a subset of the recognizable languages.

💡 We don't cross off the starting zero, because the machine crosses off "every other 0".

#math.equation(block: true, $ bracket.b q_2 bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b q_"accept" bracket.b $)

==== b) 00
\
#math.equation(block: true, $ bracket.b q_1 00 bracket.b $)
#math.equation(block: true, $ bracket.b q_2 0 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_3 bracket.b $)
#math.equation(block: true, $ bracket.b q_5 "x" bracket.b $)
#math.equation(block: true, $ bracket.b q_5 bracket.b "x" bracket.b $)
#math.equation(block: true, $ bracket.b q_2 "x" bracket.b $)
#math.equation(block: true, $ bracket.b x q_2 bracket.b $)
#math.equation(block: true, $ bracket.b x bracket.b q_"accept" bracket.b $)

==== c) 000
\
#math.equation(block: true, $ bracket.b q_1 000 bracket.b $)
#math.equation(block: true, $ bracket.b q_2 00 bracket.b $)
#math.equation(block: true, $ bracket.b x q_3 0 bracket.b $)
#math.equation(block: true, $ bracket.b x 0 q_4 bracket.b $)
#math.equation(block: true, $ bracket.b x 0 bracket.b q_"reject" bracket.b $)

==== d) 000000
\
#math.equation(block: true, $ bracket.b q_1 000000 bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b q_2 "00000" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b x q_3 "0000" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x0" q_4 "000" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x0x" q_3 "00" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x0x0" q_4 "0" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x0x0x" q_3 bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x0x0" q_5 "x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x0x" q_5 "0x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x0" q_5 "x0x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x" q_5 "0x0x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b q_5 "x0x0x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "x" q_2 "0x0x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "xx" q_3 "x0x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "xxx" q_3 "0x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "xxx0" q_4 "x" bracket.b $)
#math.equation(block: true, $ bracket.b bracket.b "xxx0x" q_4 bracket.b $)
#math.equation(
  block: true,
  $ bracket.b bracket.b "xxx0x" bracket.b q_"reject" bracket.b $,
)

=== 3.2
This exercise concerns TM *$M_1$*, whose description and state diagram appear in Example 3.9. In each of the parts, give the sequence of configurations that *$M_1$* enters when started on the indicated input string.
\
#figure(image("../assets/tur.png", width: 80%), caption: []) <fig-tur>

==== a) 11
\
#math.equation(block: true, $ bracket.b q_1 11 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_3 1 bracket.b $)
#math.equation(block: true, $ bracket.b "x1" q_3 bracket.b $)
#math.equation(block: true, $ bracket.b "x1" bracket.b q_"reject" $)

==== b) 1\#1
\
#math.equation(block: true, $ bracket.b q_1 1\#1 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_3 \#1 bracket.b $)
#math.equation(block: true, $ bracket.b "x#" q_5 \1 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_6 "#x" bracket.b $)
#math.equation(block: true, $ bracket.b q_7 "x#x" bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_1 "#x" bracket.b $)
#math.equation(block: true, $ bracket.b "x#" q_8 "x" bracket.b $)
#math.equation(block: true, $ bracket.b "x#x" q_8 bracket.b $)
#math.equation(block: true, $ bracket.b "x#x" bracket.b q_"accept" bracket.b $)

==== c) 1\#\#1
\
#math.equation(block: true, $ bracket.b q_1 1\#\#1 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_3 \#\#1 bracket.b $)
#math.equation(block: true, $ bracket.b "x#" q_5 \#1 bracket.b $)
#math.equation(block: true, $ bracket.b "x#" \# q_"reject" 1 bracket.b $)

==== d) 10\#11
\
#math.equation(block: true, $ bracket.b q_1 10\#11 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_3 0\#11 bracket.b $)
#math.equation(block: true, $ bracket.b "x0" q_3 \#11 bracket.b $)
#math.equation(block: true, $ bracket.b "x0#" q_5 11 bracket.b $)
#math.equation(block: true, $ bracket.b "x0#x" q_6 1 bracket.b $)
#math.equation(block: true, $ bracket.b "x0#" q_6 "x1" bracket.b $)
#math.equation(block: true, $ bracket.b "x0#x" q_"reject" 1 bracket.b $)

==== e) 10\#10
\
#math.equation(block: true, $ bracket.b q_1 10\#10 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_3 0\#10 bracket.b $)
#math.equation(block: true, $ bracket.b "x0" q_3 \#10 bracket.b $)
#math.equation(block: true, $ bracket.b "x0#" q_5 10 bracket.b $)
#math.equation(block: true, $ bracket.b "x0" q_6 "#x"0 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_7 "0#x"0 bracket.b $)
#math.equation(block: true, $ bracket.b q_7 "x0#x"0 bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_1 "0#x"0 bracket.b $)
#math.equation(block: true, $ bracket.b "xx" q_2 "#x"0 bracket.b $)
#math.equation(block: true, $ bracket.b "xx#" q_4 "x"0 bracket.b $)
#math.equation(block: true, $ bracket.b "xx#x" q_4 ""0 bracket.b $)
#math.equation(block: true, $ bracket.b "xx#" q_6 "xx" bracket.b $)
#math.equation(block: true, $ bracket.b "xx" q_6 "#xx" bracket.b $)
#math.equation(block: true, $ bracket.b "x" q_7 "x#xx" bracket.b $)
#math.equation(block: true, $ bracket.b "xx" q_1 "#xx" bracket.b $)
#math.equation(block: true, $ bracket.b "xx#" q_8 "xx" bracket.b $)
#math.equation(block: true, $ bracket.b "xx#x" q_8 "x" bracket.b $)
#math.equation(block: true, $ bracket.b "xx#xx" q_8 "" bracket.b $)
#math.equation(block: true, $ bracket.b "xx#xx" bracket.b q_"accept" bracket.b $)

=== 3.3
Modify the proof of Theorem 3.16 to obtain Corollary 3.19, showing that a language is decidable iff some nondeterministic Turing machine decides it. (You may assume the following theorem about trees: if every node in a tree has finitely many children and every branch of the tree has finitely many nodes, the tree itself has finitely many nodes.)
\
==== language decidable *$=>$* some nondeterministic Turing machine decides it
\
Suppose $L$ is a decidable language. Then some deterministic Turing machine $M$ decides it, and since every deterministic TM is a special case of a nondeterministic TM, some NDTM decides the language as well.

==== some nondeterministic Turing machine decides a language *$=>$* language decidable
\
If an NDTM decides a language, then every branch of its computation halts. We'll convert such an NDTM to a DTM using 3 tapes:
- the first tape holds the input
- the second tape holds the current computation
- the third tape holds the current node address

A node address is a sequence of indexed nondeterministic choices: for example, if each step offers at most 4 choices, such as a, b, c and d, each choice corresponds to an index from 0 to 3.

The machine works as follows:
- first, put the input on the first tape
- copy the input from tape 1 to tape 2
- set the third tape to $epsilon$, the root node address
- run the computation on tape 2 until a reject or accept state happens
- to run the current TM configuration on tape 2, use the positions of tape 3 as the decisions of which nondeterministic choice to take next, that is, as input to the transition function
- if the computation is rejected, modify tape 3 to visit another possible branch in a BFS fashion
- if the computation is accepted, accept and end the computation

Because the set of node addresses in the computation tree is finite (every node has finitely many children and every branch halts), eventually all the possibilities will be seen, so the computation must halt. We successfully created a DTM that has the same language as the original NDTM, and the following is true as well:
- if all branches of computation are exhausted, the string is rejected
- otherwise it is accepted

Thus the language must be decidable. $qed$

=== 3.4
Give a formal definition of an enumerator. Consider it to be a type of two-tape Turing machine that uses its second tape as the printer. Include a definition of the enumerated language.
\
#math.equation(
  block: true,
  $ E:= <Gamma, Sigma,q_"start",Q_"print",Q_"accept",Q_"reject", Q, delta :: Q times Gamma -> Q times Gamma times {L,R,S} times Sigma_epsilon > $,
)

💡 We use S as a special motion where the machine does nothing; it is equivalent to $L->R "or" R->L$.

The machine works as follows:
- for an alphabet *$Sigma$*, the machine generates the list of every possible string, ordered by size
- for each generated string, run the machine and check if the *$Q_"print"$* state is reached; the strings are separated by markers, such as '\#'
- if the answer is yes, move the cursor to the closest marker to the left of the *$Q_"print"$* position and copy the string between the marker and that position to the next tape
- mark each end of a string on the second tape with a symbol; each string between such symbols is a printed value
- move the work-tape cursor to the next marker, move the printer cursor to the end of its tape, and create a marker there as well

The enumerated language is the set of strings that ever appear between markers on the printer tape.

💡 Repetition may occur, but that's allowed.

=== 3.5
Examine the formal definition of a Turing machine to answer the following questions, and explain your reasoning.
\
==== a. Can a Turing machine ever write the blank symbol $bracket.b$ on its tape?
\
Note that *$Sigma$* doesn't contain the *$bracket.b$* symbol, so the symbol must be in the *$Gamma$* alphabet, and every symbol in this alphabet is allowed to be written on the tape. So the answer is yes: a Turing machine can write the *$bracket.b$* symbol.

==== b.
Can the tape alphabet *$Gamma$* be the same as the input alphabet *$Sigma$*?
\
Never, because *$bracket.b$* always belongs to *$Gamma$* but never to *$Sigma$*; as such the two can never be the same set.

==== c. Can a Turing machine's head ever be in the same location in two successive steps?
\
Yes. If the head is on the leftmost cell and an L-transition occurs, the head stays where it is, so it is in the same location in two successive steps. (An R-transition followed by an L-transition also returns the head to the same cell, but only after two steps.)

==== d. Can a Turing machine contain just a single state?
\
No. Even if $q_"accept"$ were the same as $q_"start"$, the definition imposes that $q_"reject" != q_"accept"$, and both must be elements of the state set $Q$; as such a Turing machine must always have at least two states.

=== 3.6
In Theorem 3.21, we showed that a language is Turing-recognizable iff some enumerator enumerates it. Why didn't we use the following simpler algorithm for the forward direction of the proof? As before, $s_1, s_2, ...$ is a list of all strings in $Sigma^*$.
\
#math.equation(block: true, $E &= "\"Ignore the input." & & \
& 1. "Repeat the following for " i = 1,2,3, ... & & \
& 2. "Run M on" s_i. & & \
& 3. "If it accepts, print out" s_i "\"" $)

The trick here is noticing that the enumerator needs to deal with non-halting while still printing every valid string. If we run $M$ sequentially on every possible string in $Sigma^*$, the machine might never halt on some string, and then no later string would ever be tested or printed, so we can't conclude that $E$ enumerates the language. By employing the trick of testing all inputs in parallel, a branch that never halts is not a problem, because the others continue to make progress.

=== 3.7
Explain why the following is not a description of a legitimate Turing machine.
\
#math.equation(
  block: true,
  $ M_"bad" &= "\"On input <p>, a polynomial over variables " x_1, ..., x_k: && \
  & 1. "Try all possible settings of" x_1, .., x_k "to integer values" && \
  & 2. "Evaluate p on all of these settings." && \
  & 3. "If any of these settings evaluates to 0, accept; otherwise reject. \"" && \ $,
)

Note that stage 1 asks the machine to try all possible integer settings of $x_1, ..., x_k$, but there are infinitely many of them. A stage of a Turing machine description must complete in finite time; an infinite search can never finish, so the machine could never move past stage 1 when $p$ has no root, and it would never halt. Hence this is not an algorithm, and because legitimate TM descriptions and algorithms are the same thing, such a device can't possibly be a legitimate Turing machine.

=== 3.8
Give implementation-level descriptions of Turing machines that decide the following languages over the alphabet {0,1}.
\
==== a. {w | w "contains an equal number of " 0s "and" 1s}
\
#math.equation(
  block: true,
  $ M &= "\"On input <w>," w in {0,1}^*: && \
  & 1. "Copy all the 1's to a tape and all the 0's to another tape" && \
  & 2. "Run over each tape in parallel, always going to the right" && \
  & 3. "If at any moment a cursor hits a " bracket.b ", halt that tape; if the other didn't hit that symbol at the same time, reject, otherwise accept\"" && \ $,
)

==== b. {w | w "contains twice as many 0s as 1s"}
\
#math.equation(
  block: true,
  $ M &= "\"On input <w>," w in {0,1}^*: && \
  & 1. "Copy all the 1's to a tape", T_a, "and all the 0's to another tape" T_b && \
  & 2. "Consider the following possible transitions" && \
  & 2.1. "If " T_b "hits a 0, execute an R-transition;" T_a "doesn't move the cursor, an S-transition happens to " T_a "instead" && \
  & 2.2. "If" T_b "hits another 0, " T_a "now executes an R-transition" && \
  & 2.3 "If " T_b "hits" bracket.b "check if " T_a "would also hit the same symbol with an R-transition; if yes then accept, otherwise reject" && \
  & 2.4 "cursor of" T_b "moves back to 2.1\"" && \ $,
)

==== c. {w | w does not contain twice as many 0s as 1s}
\
#math.equation(
  block: true,
  $ M &= "\"On input <w>," w in {0,1}^*: && \
  & 1. "Copy all the 1's to a tape", T_a, "and all the 0's to another tape" T_b && \
  & 2. "Consider the following possible transitions" && \
  & 2.1. "If " T_b "hits a 0, execute an R-transition;" T_a "doesn't move the cursor, an S-transition happens to " T_a "instead" && \
  & 2.2. "If" T_b "hits another 0, " T_a "now executes an R-transition" && \
  & 2.3 "If " T_b "hits" bracket.b "check if " T_a "would also hit the same symbol with an R-transition; if yes then reject, otherwise accept" && \
  & 2.4 "cursor of" T_b "moves back to 2.1\"" && \ $,
)

=== 3.10
Say that a write-once Turing machine is a single-tape TM that can alter each tape square at most once (including the input portion of the tape). Show that this variant Turing machine model is equivalent to the ordinary Turing machine model. (Hint: As a first step, consider the case whereby the Turing machine may alter each tape square at most twice. Use lots of tape.)
\
To prove this, we need to show that a write-once TM recognizes the same language as an ordinary TM. The constraint imposed in the question implies the following:

#math.equation(block:true,
  $ delta_"wo" (q_i,b_u) = (q_j,b_v,T) "and" delta_"wo" (q_i,b_u) != (q_j, b_w,T) forall w != v $
)

We need a way to allow rewriting a square multiple times when needed. Note that it is the position that constrains us, but we can copy the symbol and paste it elsewhere. Now let's consider a transition function for the ordinary TM:

#math.equation(block:true,
  $ delta (q_i,b_u) = (q_j,b_v,T) $
)

where T is an R-, L- or S-transition.

💡 An S-transition means: don't move the cursor.

Let's consider a single tape position after a sequence of configurations:

#math.equation(block:true,
  $ bracket.b "ab0" q_j "def" bracket.b && \
  bracket.b "ab" q_r "1def" bracket.b && \
  bracket.b "ab2" q_j "def" bracket.b && \ $
)

Note that for 0, 1, 2 the position is the same. To get equivalent behavior in the write-once TM, all we have to do for any modified symbol is as follows:
+ when starting (or resuming), move every string one position to the right and place a mark '\#' in the starting position
+ for every modified symbol that is not a mark, put a dot on top of the position
+ if an extra modification is required, that is, a symbol with a dot would be overwritten, pause the main loop and do as follows:
  - move the cursor back to the closest mark and copy everything to the right of the mark until a $bracket.b$ is found
  - now go back to step 1
+ if the cursor hits $q_"accept"$ or $q_"reject"$, stop the computation

=== 3.11
A Turing machine with doubly infinite tape is similar to an ordinary Turing machine, but its tape is infinite to the left as well as to the right. The tape is initially filled with blanks except for the portion that contains the input. Computation is defined as usual except that the head never encounters an end to the tape as it moves leftward. Show that this type of Turing machine recognizes the class of Turing-recognizable languages.
\
We need to show that the doubly infinite TM is equivalent to an ordinary recognizer TM. To see what that requires, recall what it means for a TM to be a recognizer: the machine might not halt on every input.

Such a doubly infinite TM may likewise fail to halt on some inputs, so it behaves as a recognizer as well. We know that an enumerator TM can be used to classify a language as Turing-recognizable; as such, we could create an enumerator for the doubly infinite TM, and if that is possible, it must mean such a TM is a recognizer, according to the theorem that every enumerator TM is equivalent to a recognizer TM.

First, let's identify the constraint for this TM:

#math.equation(block:true,
  $ delta_"q0" (q_i,b_u) = (q_j,b_v,L) "where " q_0 "is the first tape position" $
)

In other words, an L-transition at the beginning of the tape must reach a new tape position that is not the beginning, whereas in a normal TM such a transition results in the cursor staying at the same position (with the possibility of the symbol being overwritten as well).

To convert a doubly infinite TM to an ordinary recognizer, do as follows:
- move the input one step to the right, mark the initial position with '\#', and mark the final position with the same marker but with a dot on top
- whenever an L-transition happens in the position immediately to the right of the marker, do as follows:
  - copy everything from the right of the marker until the dotted marker is hit by the cursor
  - paste the copied contents to the left of the marker, add a dot to the marker without a dot, and create a new marker, without a dot, in the position immediately to the left of the first copied position
- make sure to offset the cursor enough from the marker so as not to overwrite it!

=== 3.12
A Turing machine with left reset is similar to an ordinary Turing machine, but the transition function has the form
\
#math.equation(block:true,
  $ delta: Q times Gamma -> Q times Gamma times {R, "RESET"} $
)

*If $delta(q,a) = (r,b,"RESET")$, when the machine is in state q reading an a, the machine's head jumps to the left-hand end of the tape after it writes b on the tape and enters state r. Note that these machines do not have the usual ability to move the head one symbol left. Show that Turing machines with left reset recognize the class of Turing-recognizable languages.*

We need to convert a left-reset TM to a normal recognizer. We'll do as follows: let's first identify the constraint. The machine can't move its head one square to the left like a normal Turing machine, so we need to build that feature with this machine. The modification is as follows, for each transition function of the form

// #math.equation(block: true, $ case(delta, 1) $)
https://github.com/7sDream/fonts-and-layout-zhCN
https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/06-features-2/positioning/pair.typ
typst
Other
#import "/template/template.typ": web-page-template #import "/template/components.typ": note #import "/lib/glossary.typ": tr #show: web-page-template // ### Pair adjustment === 字偶对调整 // We've already seen pair adjustment rules: they're called kerns. They take two glyphs or glyphclasses, and move glyphs around. We've also seen that there are two ways to express a pair adjustment rule. First, you place the value record after the two glyphs/glyph classes, and this adjusts the spacing between them. 我们已经在介绍字偶矩时见过字偶对调整规则了。这些规则对两个#tr[glyph]或#tr[glyph]类进行移动调整。它也有两种写法,第一种是直接在最后写上一个数值记录,这会调整它们两个之间的间隙。 ```fea pos A B -50; ``` // Or you can put a value record after each glyph, which tells you how each of them should be repositioned: 第二种是在每个#tr[glyph]后面写数值记录,来分别对它们进行重新#tr[positioning]: ```fea pos @longdescenders 0 uni0956 <0 -90 0 0>; ```
https://github.com/ern1/typiskt
https://raw.githubusercontent.com/ern1/typiskt/main/resume.typ
typst
#import "resume-content.typ": eduEntries, expEntries, projEntries #import "templates/typiskt.typ": resume, mainEntry, article, util, colors, icons, fonts #show: resume.with( firstName: "Tommy", lastName: "Ernsund", description: "Software Developer", //profile-image: "assets/dog1-vt_svgo.svg" profile-image: "img/profile1-1.png", // TODO: Change? maybe stamp "temp" on top of it.. ) /* Main content stuff */ = Education #mainEntry(..eduEntries.at("gu")) #mainEntry(..eduEntries.at("mdh")) = Professional Experience #mainEntry(..expEntries.at("evtest")) #mainEntry(..expEntries.at("noteab")) #mainEntry(..expEntries.at("voltair")) = Projects #mainEntry(..projEntries.at("gea")) #mainEntry(..projEntries.at("app")) #mainEntry(..projEntries.at("unityAR")) #mainEntry(..projEntries.at("volvoce")) /* Sidebar content */ #article[ = Contact // & Profi #set text(size: 1em-1pt) #util.iconWithText(icons.getPath("phone"), link("tel:+46123456789", "+46 12 345 67 89")) \ #util.iconWithText(icons.simplePath("gmail-color"), link("mailto:<EMAIL>", "<EMAIL>")) \ #util.iconWithText(icons.simplePath("linkedin-color"), link("https://linkedin.com/TommyErnsund", "<NAME>")) \ #util.iconWithText(icons.simplePath("github-color"), link("https://github.com/ern1", "ern1")) \ //#util.iconWithText(icons.simplePath("steam-color"), link("https://steamcommunity.com/id/_t_o_m_m_y_/", "_t_o_m_m_y_")) ] #article[ // TODO: var lägga game design, web, embedded/realtime systems ux/ui etc? = Technical Skills #set text(size: 1em-0.5pt) #set terms(separator: [:]) #let skill(ico, name, color: rgb("#eee")) = [ //#util.iconWithText(icons.getPath(ico), name) //#util.iconWithTextHighlight(icons.getPath(ico), color.lighten(50%), name) #util.iconWithTextHighlight(icons.simplePath(ico), color.lighten(60%), name) ] //#show "|": it => [~#it~] /*#let highlight(color, body) = { place(dx: -3pt, strike(extent: 3pt, stroke: (thickness: 1.2em, paint: rgb(color), cap: "round"), background: false, body)) body }*/ / Languages: #v(-5pt) //#util.iconWithText(icons.getPath(""), ""), //#set text(spacing: 2pt) #skill("python-color", "Python", color: colors.md.yellow-100) //#highlight(colors.md.yellow-100, skill("python", "Python")) \ #skill("cplusplus-color", "C/C++", color: colors.md.blue-100) #skill("csharp-color", "C#", color: colors.md.purple-100) #skill("javascript-color", "Javascript", color: colors.md.yellow-100) #skill("typescript-color", "Typescript", color: colors.md.blue-100) #skill("html5-color", "HTML5", color: colors.md.orange-100) #skill("css3-color", "CSS", color: colors.md.yellow-100) #skill("sqlite-color", "SQL", color: colors.md.blue-100) #skill("powershell-color", "Powershell", color: colors.md.blue-100) //F\#, //Matlab //Bash, /*#util.textColorMap("Python", 80%), #util.textColorMap("C++", 60%), #util.textColorMap("PHP", 18%),*/ / Frameworks, etc: #v(-5pt) #skill("nodedotjs-color", "Node.js", color: colors.md.green-100) #skill("react-color", "React.js/Native", color: colors.md.blue-100) #skill("vuedotjs-color", "Vue", color: colors.md.green-100) #skill("amazonaws-color", "AWS", color: colors.md.orange-100) //NumPy, //UML / Tools: #v(-5pt) #skill("git-color", "Git", color: colors.md.red-100) #skill("gradle-color", "Gradle", color: colors.md.blue-100) #skill("cmake-color", "CMake", color: colors.md.blue-100) #skill("unity-color", "Unity", color: colors.md.grey-300) #skill("visualstudio-color", "Visual Studio", color: colors.md.purple-100) ] #article[ = Languages #set text(size: 1em-1pt) *Swedish*: Native \ *English*: Fluent \ //*Norwegian*: Almost 
fluent \ *Chinese*: Want to learn ]
https://github.com/Brndan/formalettre
https://raw.githubusercontent.com/Brndan/formalettre/main/template/src/exemple.typ
typst
BSD 3-Clause "New" or "Revised" License
#import "@preview/formalettre:0.1.1": * #set text(lang: "fr") #show: lettre.with( expediteur: ( nom: "de La Boétie", prenom: "Étienne", voie: "145 avenue de Germignan", complement_adresse: "", code_postal: "33320", commune: "Le Taillan-Médoc", telephone: "01 23 45 67 89", email: "<EMAIL>", signature: false, // indiquez true si ajout d’une image comme signature ), destinataire: ( titre: "<NAME>", voie: "17 butte Farémont", complement_adresse: "", code_postal: "55000", commune: "Bar-le-Duc", sc: "", ), lieu: "<NAME>", objet: [Ceci est un objet de courrier.], date: [le 7 juin 1559], pj: "", ) // Le corps du document remplace cette fonction #lorem(200) // Décommenter ces deux lignes pour ajouter la signature sous forme d’image //#set align(right + horizon) //#image("Signature.png")
https://github.com/Joelius300/hslu-typst-template
https://raw.githubusercontent.com/Joelius300/hslu-typst-template/main/additional-outlines.typ
typst
MIT License
#import "@preview/big-todo:0.2.0": todo #import "@preview/i-figured:0.2.4" #import "@preview/acrostiche:0.3.1": print-index = Weitere Inhaltsverzeichnisse == Abkürzungsverzeichnis #print-index(title: "") // Table of images == Abbildungsverzeichnis #i-figured.outline(title: v(-1.3em)) // Tables of tables == Tabellenverzeichnis #i-figured.outline(title: v(-1.3em), target-kind: table) // Table of equations == Formelverzeichnis #i-figured.outline(title: v(-1.3em), target-kind: math.equation)
https://github.com/Functional-Bus-Description-Language/Specification
https://raw.githubusercontent.com/Functional-Bus-Description-Language/Specification/master/src/functionalities/stream.typ
typst
== Stream The stream functionality represents a stream of data to a provider (downstream), or a stream of data from a provider (upstream). An empty stream (stream without any param or return) is always a downstream. It is useful for triggering cyclic action with constant time interval. A downstream must not have any return. An upstream shall not have any param, and must have at least one return. The stream functionality is very similar to the proc functionality, but they are not the same. There are two main differences. The first one is that the stream must not contain both param and return. The second one is that the code for the stream, generated for the requester, shall take into account the fact that access to the stream is multiple and access to the proc is single. For example, lets consider the following bus description: #block(breakable:false)[ #pad(left: 1em)[ ```fbd Main bus P proc p param S stream p param ``` ] ] The code generated for the requester, implemented in the C language, might include following function prototypes: #block(breakable:false)[ #pad(left: 1em)[ ``` int Main_P(const uint32_t p); int Main_S(const uint32_t * p, size_t count); ``` ] ] The stream has associated strobe signal at the provider side. The strobe signal must be driven active for one clock cycle after all registers storing the parameters of a downstream have been written. It also must be driven active for one clock cycle after all registers storing the returns of an upstream have been read. The stream functionality has following properties. *`delay`*` time (None) {definitive}` #pad(left: 1em)[ The delay property defines the time delay between writing/reading consecutive datasets for a downstream/upstream. ]
https://github.com/JerryYin777/Jerry_CV
https://raw.githubusercontent.com/JerryYin777/Jerry_CV/main/resume.typ
typst
MIT License
#import "chicv.typ": * #show: chicv = Congrui (<NAME> #fa[#envelope] <EMAIL> | #fa[#github] #link("https://github.com/JerryYin777")[github.com/JerryYin777] | #fa[#globe] #link("https://jerrysys.top")[jerrysys.top] | #fa[#google] #link("https://scholar.google.com/citations?user=7gsdLw4AAAAJ&hl=en")[Google Scholar] | #fa[#linkedin] #link("https://www.linkedin.com/in/congrui-yin-a21314292/")[Congrui Yin] == Education #cventry( tl: "University of Minnesota Twins Cities", tr: "2024/01 - 2025/06 (Expected)", bl: "Bachelor of Arts in Computer Science, Minor in Statistics", br: "Minneapolis, MN, USA" )[ ] #cventry( tl: "Nanchang University", tr: "2021/09 - 2023/12", bl: "Candidate for B. Eng in Artificial Intelligence", br: "Nanchang, Jiangxi, China" )[ - Enterprise Special Scholarship, 2023. *(Only 30 in School)* - School Special Academic Scholarship, 2023. *(1%)* - School First-Class Academic Scholarship, 2022. *(8%)* ] == Research Interests I am broadly interested in the intersection between natural language processing and efficient machine learning system (Mainly based on GPU). My previous work was in building *efficient computation systems for NLP training & inference and supercomputer scientific applications.* == Publications - *F-PABEE: Flexible-Patience-Based Early Exiting For Single-Label and Multi-Label Text Classification Tasks*. <NAME>, <NAME>, <NAME> and *<NAME>*. (2023). _2023 IEEE International Conference on Acoustics, Speech and Signal Processing_ (*ICASSP 2023*). #link("https://ieeexplore.ieee.org/abstract/document/10095864")[[Paper]] - *Multi-scale and multi-task learning for human audio forensics based on convolutional networks*. *<NAME>*. (2023). _International Conference on Image, Signal Processing, and Pattern Recognition_ (*ISPP 2023*). #link("https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12707/127074I/Multi-scale-and-multi-task-learning-for-human-audio-forensics/10.1117/12.2681344.short#_=_")[[Paper]] == Research Experience #cventry( tl: "TsinghuaNLP", tr: "2023/07 - 2023/09", bl: "LLM System Algorithm Research Assistant, advised by Prof. <NAME>", br: "Beijing, China" )[ - My research focuses on distributed AI training systems, specifically addressing training methods for neural networks at a *scale of trillions of parameters*. As the size of AI models increases, the time complexity of Transformer architecture models escalates, prompting the need to explore more effective architectures for training. In this context, I endeavor to train models using the RetNet architecture within the *BMTrain* distributed training framework that I have developed. ] #cventry( tl: "GOOD Lab, Nanchang University", tr: "2021/12 - 2023/12", bl: "High-Performance System Algorithm Research Assistant, advised by Prof. <NAME>", br: "Nanchang, Jiangxi, China" )[ - I have been engaged in long-term research on Serverless Computing (Function as a Service) at GOOD Lab, to simplify AI deployment, enhance efficiency, and reduce costs through cloud computing. As part of this research, I undertook stress experiments on AI services utilizing Kubernetes and Docker on supercomputers. // - Implemented *Zone-Aware Garbage Collection* in *TerrakDB* for Zoned Namespace SSDs, which reduced 3-4x of space amplification caused by interleaving write lifetime in a single ZNS zone. 
#iconlink("https://github.com/bytedance/terarkdb", icon: github) // - Added observability facilities to *ZenFS* (by Western Digital) to analyze bottlenecks and implemented a *WAL-Aware Zone Allocator*, which reduced the p999 tail latency by 100x. #iconlink("https://github.com/bzbd/zenfs", icon: github) ] == Work Experience #cventry( tl: "Zhihu & ModelBest Ltd.", tr: "2023/07 - 2023/09", bl: "Algorithm Intern", br: "Beijing, China" )[ - In collaboration with TsinghuaNLP, I simultaneously worked on Zhihu (Chinese Quora) and ModelBest Ltd. During this partnership, I utilized the highest-quality Chinese corpus available on Zhihu for training an 80b multimodal large model. I also played a significant role in creating the distributed LLM training framework *BMTrain*, which successfully addressed communication bottlenecks during the training of large language models. When compared to DeepSpeed, this tool achieved a *1.6x increase in throughput* for training Zhihu LLM. ] == Open-Source Contributions #cventry( tl: "OpenBMB Community" + " " + iconlink("https://github.com/OpenBMB", icon: github), tr: "2023/07 - 2023/09" )[ - *Contributor of *#iconlink(icon: github, text: "BMTrain", "https://github.com/OpenBMB/BMTrain")* (#fa[#star]454)*. BMTrain is an efficient large model training toolkit that can be used to train large models with tens of billions of parameters. It can train models in a distributed manner while keeping the code as simple as stand-alone training, used by ChatGLM-6b, CPM-20b, Luca-80b LLMs. - I implemented the Zero-offload method based on Triton and CUDA within BMTrain. This allows memory occupancy to replace GPU memory usage, reducing the computational load for training large language models. The successful implementation of distributed training was achieved on a cluster of 512 A100 GPUs. - Additionally, I added support for bf16 and fp8 data types specifically for the A100 and H100 architectures and implemented optimizations for the corresponding Adam Optimizer and learning rate schedule. ] #cventry( tl: "CGCL-Codes" + " " + iconlink("https://github.com/CGCL-codes", icon: github), tr: "2023/03 - 2023/06", )[ - *Contributor of *#iconlink(icon: github, text: "NaturalCC", "https://github.com/CGCL-codes/naturalcc")* (#fa[#star]225)*. NaturalCC is a sequence modeling toolkit designed to bridge the gap between programming and natural languages through advanced machine learning techniques. It allows researchers and developers to train custom models for a variety of software engineering tasks, e.g., code generation, code completion, code summarization, code retrieval, code clone detection, and type inference. - I enhanced the Transformer architecture based on AST syntax tree principle, making the construction of large-scale code language models more abstract at a lower level. - Additionally, I extended its compatibility from only using Fairseq to supporting Transformers, including popular large code models from Hugging Face such as Codellama, CodeT5, CodeGen, and StarCoder. ] #cventry( tl: "Personal Projects", tr: "130 followers, 800+ Stars " + iconlink(text: "JerryYin777", icon: github, "https://github.com/JerryYin777") )[ - *#iconlink(icon: github, text: "NanoGPT-Pytorch2.0-Implementation", "https://github.com/JerryYin777/NanoGPT-Pytorch2.0-Implementation")* (#fa[#star]52) NanoGPT Implementation based on Pytorch 2.0 (when Pytorch 2.0 released soon on Mar. 2023), faster and simpler, a good tutorial learning GPT. 
- *#iconlink(icon: github, text: "Intelligent-Creator", "https://github.com/JerryYin777/IntelligentCreator")* (#fa[#star]2) I implemented the Intelligent Creation Platform Creator, which comprises a front-end and back-end separation architecture software for generating titles and summaries based on Chinese news text using the GPT-2 model. This implementation predates the ChatGPT era. - *#iconlink(icon: github, text: "Cr‘s Research Toolchain", "https://github.com/JerryYin777/Cr_Research_Toolchain")* (#fa[#star]52) Share my research toolchain ] == Selected Awards I was the leader of Nanchang University Student Cluster Competition Team (#link("https://hpc.ncuscc.tech/")[*Team NCUSCC*]), participating in world's largest supercomputer competition *ASC22* and *SC23.* #cventry( tl: "ASC22 (5-people Group)", tr: "2022/01 - 2022/03" )[ - Ranking 23/500+, Second Prize in World Class - *#iconlink(icon: github, text: "Yuan-LLM", "https://github.com/NCUSCC/ASC22-Yuan")* (#fa[#star]6) I employed parallel frameworks and methods such as Megatron-LM, ZeRO, and DeepSpeed for distributed training of the largest Chinese language model (2022.2) YUAN-18B with 8 V100 GPUs in two servers. ] #cventry( tl: "SC23 (6-people Group)", tr: "2023/03 - 2023/11" )[ - Ranking 7/15 (Among MIT, Brown, Tsinghua U, Gatech, Peking U, etc.) - I utilized the SLURM management tool to successfully execute HPL benchmark tests on *300 servers* using parallel methods such as MPI, OpenMP. ] == Technical Skills - *Languages:* Python, C/C++, CUDA, Go, Rust, Shell, LaTeX - *Frameworks and Tools:* Pytorch, JAX, Triton, Docker, Kubernetes, MPI, OpenMP, AWS, Sklearn, Numpy, RISC-V - *AI:* Natural language Processing (llama-2, ChatGLM-3, CPM-Bee) | MLSys (Flash attention & ZeRO Series) | Computer Vision (YOLO Series, OpenCV, Simple Ray Tracing) | Multimodal Pretrained Model (BLIP-2, LLAVA) #align(right, text(fill: gray)[Last Updated on #today()])
https://github.com/duwenba/typst-pkgs
https://raw.githubusercontent.com/duwenba/typst-pkgs/master/packages/local/choices/0.1.0/lib.typ
typst
#let choice( content, number-style: "1.", number-align: alignment.left, opt-style : "A.", space : 8pt, options : (:), ans : (), show-ans : true, ) = { let arr = (:);let opts = (:); let l = 0; if not show-ans { ans = () } for e in content.children { if (e.has("body")) { l += 1; let op = 0; // 有选项 if e.body.has("children") { let option = (:) arr.insert(str(numbering(number-style,l)),e.body.children.at(0)) for opt in e.body.children { // 提取选项 if opt.has("body") { op += 1; option.insert(str(numbering(opt-style,op)),opt.body) } } opts.insert(str(numbering(number-style,l)),option) } else { // 无选项 arr.insert(str(numbering(number-style,l)),e.body) opts.insert(str(numbering(number-style,l)),(:)) } } } let xuanxiang = (:) for i in opts.keys() { let t = grid( columns : 2, align : (left,left), column-gutter: 8pt, row-gutter : 0.8em, ..opts.at(i).values().map(grid.cell.with(x:1)), ..opts.at(i).keys().map(grid.cell.with(x:0)), ) xuanxiang.insert(i,t) } let timu(n:"1.") = { let t = grid( columns : 2, row-gutter: 0em, grid.cell(arr.at(n),colspan: 2,y:0), if not xuanxiang.at(n).children == () { // grid.cell(arr.at(n),colspan: 2,y:0); grid.cell(pad(xuanxiang.at(n),y: 1em),colspan: 2); } else { } ) return t } let problems = for i in arr.keys() { timu(n:i) } grid( columns : (auto,1fr,auto,1em-2*space,auto), align : (left,left,right,center,left), column-gutter: 8pt, row-gutter : 0.8em, ..problems.children.map(grid.cell.with(x:1)), ..arr.keys().map(grid.cell.with(x:0)), ..(grid.cell("(", x:2 ),) * l, ..(grid.cell(")", x:4 ),) * l, ..ans ) }
https://github.com/An-314/Notes-of-Probability_and_Stochastic_Processes
https://raw.githubusercontent.com/An-314/Notes-of-Probability_and_Stochastic_Processes/main/chap1.typ
typst
#import "@preview/physica:0.9.2": * = 概率与概率空间 == 随机事件与概率 === 随机事件与概率 ==== 事件与样本空间 一般地,我们把实验的每一种可能的结果称为一个*基本事件*(或称*样本点*),称所有基本事件的全体为该试验的*样本空间*,记为$Omega$。 $Omega$的一个子集合$A$,可以看作是一个随机事件。 _从数学的角度看,与试验相关的每个“事件”都可以描述称样本空间$Omega$的一个子集$A$,反之亦然。_ 在一次试验中,我们得到了一个结果$omega in Omega$。 - 如果$omega in A$,我们就称*事件$A$发生了*;否则就说*$A$没有发生*。 - 如果$omega in Omega$恒成立,我们称$Omega$为必然事件,其反面为$emptyset$,称为不可能事件。 ==== 古典概型 *古典概型*描述了一个随机试验所包含的单位事件都是有限的,且每个单位事件发生的可能性均相等的情况。 $ P(A) = (|A|)/(|Omega|) $ ==== 事件之间的关系与运算 - 事件之间的关系 - 事件的包含:$A subset B$ - 事件的相等:$A = B$ - 事件的对立:$A sect A' = emptyset$,$A union A' = Omega$ - 事件之间的运算 - 并:$A union B$ - 交:$A sect B = A B$ - 差:$A - B$ - 有限个事件的并:$union.big_(i=1)^n A_i$ - 有限个事件的交:$sect.big_(i=1)^n A_i$ - 事件的运算定律 - 交换律:$A union B = B union A$,$A sect B = B sect A$ - 结合律:$(A union B) union C = A union (B union C)$,$(A sect B) sect C = A sect (B sect C)$ - 分配律:$A sect (B union C) = (A sect B) union (A sect C)$,$A union (B sect C) = (A union B) sect (A union C)$ - 对偶律(De Morgan):$(A')' = A$,$(A B)' = A' union B'$,$(A union B)' = A' sect B'$ ==== 几何概型 *几何概型*:每个事件发生的概率只与构成该事件区域的长度(面积或体积)成比例,即 $ P(A) = (L(A))/(L(Omega)) $ 这里$Ω$为可以度量的区域,$A$为$Ω$的可度量子集,$L(A)$表示$A$的度量。 _一些经典的例子:超几何分布、约会问题_ _Bucal(F)on投针问题:一根长度为$l$的针随机地抛向一块地板,地板上有一些平行线,间距为$d$,针与线相交的概率为$P = (2l)/(pi d)$。用Monte Carlo方法可以求$pi$的值。_ *非等概率问题和几何概型对古典概型提出了挑战,下面我们将用概率空间来解决这个问题。* === 概率空间 事件族($Ω$的子集族)$cal(F)$ 称为*$σ−$域*(也称为$σ-$代数或事件体),如果它满足下列条件: - $Ω ∈ cal(F)$ - 若$A ∈ cal(F)$,则$A^c ∈ cal(F)$ - 若$A_1, A_2, dots ∈ cal(F)$,则$union.big_(i=1)^oo A_i ∈ cal(F)$ 由此定义的$cal(F)$称为*$σ−$域*。 *Kolmogorov概率公理化定义*:$P$是$cal(F)$上的非负值函数,即对每一事件$A in cal(F)$ ,都可定义一个数$P(A)$,满足下列条件: - 非负性:$P(A) >= 0$ - 规范性:$P(Ω) = 1$ - 可数可加性: 若$A_1, A_2, dots ∈ cal(F)$,且$A_i A_j = emptyset$,则 $ P(union.big_(i=1)^oo A_i) = sum_(i=1)^oo P(A_i) $ 则称$P(A)$为事件$A$的*概率*。 试验的样本空间$Ω$、事件$σ−$域$cal(F)$ 以定义在$cal(F)$ 上的概率$P$所构成的*三元组*$(Ω; cal(F) ; P$),称为描述该随机试验的*概率空间*。 _注:_ - 由上述公理体系,易见P(∅) = 0。 - 由可数可加性可得有限可加性,即 $ P(A_1 A_2 dots A_n) = sum_(i=1)^n P(A_i) $ 上式中$A_1, A_2,..., A_n in cal(F) $为两两互不相容的事件。 - 古典概型仅仅是Kolmogorov模型中的一个非常小的子模型。 - 如果$Ω$包含可数个点,我们就不能对基本事件作等可能假设,但仍然可以对每个${omega}$指定概率,之后依然有$P(A) = sum_(omega in A) P(omega)$。 - 如果Ω包含不可数多个点,每个单点集是一个基本事件,但我们不能简单指定每个基本事件的概率,否则会破坏公理$P(Ω) = 1$。 _一个例子:将一均匀硬币连续的投掷,直到首次出现正面。令$omega^((i))$表示“首次正面出现在第$i$次投掷”;并以$omega^((oo))$表示“正面永远不出现”。因此该试验的样本空间为_ $ Ω = {omega^((1)), omega^((2)), dots, omega^((oo))} $ _我们可以定义概率$P(omega^((i))) = 1/2^i$,$P(omega^((oo))) = 0$。_ === 概率的基本性质 ==== 概率的基本性质 - *求逆公式*:$P(A) = 1 - P(A')$ _证明:$A = A' union (A - A')$,由可加性得_ $ P(A) = P(A' union (A - A')) = P(A') + P(A - A') = 1 - P(A') $ - *减法公式*:$P(A - B) = P(A) - P(A B)$ _证明:考虑$A - B = A - A B = A (1 - B)$,由求逆公式得_ $ P(A - B) = 1 - P(A (1 - B)) = 1 - P(A) + P(A B) $ - *一般的加法公式*:$P(A union B) = P(A) + P(B) - P(A B) <= P(A) + P(B)$ _证明:$A B = A - (A - B)$,由减法公式得_ $ P(A B) = P(A) - P(A - B) = P(A) - P(A) + P(B) = P(A) + P(B) - P(A B) $ 一般地: $ P(union.big_(i=1)^n A_i) = sum_(i=1)^n P(A_i) - sum_(1 <= i < j <= n) P(A_i A_j) + sum_(1 <= i < j < k <= n) P(A_i A_j A_k) - dots + (-1)^(n-1) P(A_1 A_2 dots A_n) $ - *有限可加性*:若$A_1, A_2, dots in cal(F)$,且$A_i A_j = emptyset$,则 $ P(union.big_(i=1)^oo A_i) = sum_(i=1)^oo P(A_i) $ - *下连续性*:设${A_n}$是$cal(F)$中的非减事件序列(即$A_n in cal(F)$,并且$A_n ⊂ A_(n+1)$),则 $ P(union.big_(n=1)^oo A_n) = lim_(n->oo) P(A_n) $ - *上连续性*:设${A_n}$是$cal(F)$中的非增事件序列(即$A_n in cal(F)$,并且$A_(n+1) ⊂ A_n$),则 $ P(sect.big_(n=1)^oo A_n) = lim_(n->oo) P(A_n) $ _注:实际上,概率的上连续性与下连续性是等价的。因此,概率既有上连续性也有下连续性,统称为连续性。_ == 条件概率 === 条件概率的定义 
*条件概率*:$P(A|B)$将原有的样本空间$Ω$缩减为$Ω_B$。在$Ω$中计算事件$A$的概率就是$P(A)$,而在$Ω_B$中记为$P(A|B)$或$P_B(A)$。 从古典概型入手: $ P(A|B) = (|A B|)/(|B|) = ((|A B|)/(|Omega|))/((|B|)/(|Omega|)) = P(A B)/P(B) $ 在Kolmogorov模型中,我们可以定义条件概率为:设$(Ω, cal(F) , P)$为概率空间,$B in cal(F) $,且$P(B) > 0$,则对任何$A in cal(F)$ ,定义 $ P(A|B) = P(A B)/P(B) $ 验证$P(A|B)$满足概率的三个公理,即可证明$P(A|B)$是$Ω$上的概率。 设$(Ω, cal(F) , P)$为概率空间,并且设$B in cal(F)$满足$P(B) > 0$,则$(Ω, cal(F) , P_B)$也是一个概率空间。 ==== 乘法公式 设$A$与$B$是两个事件,满足$P(A) > 0, P(B) > 0$,则由条件概率定义,可推出 $ P(A B) = P(A)P(B|A) = P(B)P(A|B): $ 推广到多个事件的情况,我们可以得到下面概率的乘法公式。 *乘法公式*:设$(Ω, cal(F) , P)$是个概率空间,并且$A_1, A_2, dots, A_n in cal(F)$,并且有$P(sect.big_(i=1)^n A_i) > 0$,则 $ P(sect.big_(i=1)^n A_i) = P(A_1)P(A_2|A_1)P(A_3|A_1 A_2) dots P(A_n|sect.big_(i=1)^(n-1) A_i) $ ==== 全概率公式 考虑样本空间$Ω$及事件族$B_i(1 ≤ i ≤ n)$,满足 - $B_i B_j = emptyset$,$i ≠ j$ - $union.big_(i=1)^n B_i = Ω$ 则称$B_i$为$Ω$的一个*划分*。如果 - $P(B_i) > 0$,$1 ≤ i ≤ n$ 则称$B_i$为$Ω$的一个*正划分*。 *全概率公式*:设$B_i$为$Ω$的一个正划分,$A in cal(F)$,则 $ P(A) = sum_(i=1)^n P(A|B_i)P(B_i) $ _证明:_ $ A &= A sect Omega\ &= A sect (union.big_(i=1)^n B_i)\ &= union.big_(i=1)^n (A sect B_i) $ 且其中 $ A B_i sect A B_j = emptyset, i ≠ j $ 因此 $ P(A) &= P(union.big_(i=1)^n (A sect B_i))\ &= sum_(i=1)^n P(A sect B_i)\ &= sum_(i=1)^n P(A|B_i)P(B_i) $ 如果我们把$B_i$看成是导致事件$A$发生的各种可能原因,根据全概率公式,事件$A$发生的概率即为该事件$A$在各种原因$B_i$下发生的条件概率的加权平均,其权重即为$P(B_i)$。 _例子:Polya坛子模型_ ==== 逆概率公式:Bayes公式 *Bayes公式*:设$B_i$为$Ω$的一个正划分,$A in cal(F)$,且$P(A) > 0$,则 $ P(B_i|A) = (P(A|B_i)P(B_i))/P(A) = (P(A|B_i)P(B_i))/(sum_(j=1)^n P(A|B_j)P(B_j)) $ 在直观上,我们把$B_i$看成是导致事件$A$发生的各种可能原因,$P(B_i)$可以看作事件$B_i$发生的*先验概率*。如果我们知道$A$发生了,那么这个新的信息可以用于对事件$B_i$发生的概率做重新评估,即利用条件$P(B_i|A)$作为得到“$A$发生”之后的重估,称为事件$B_i$的*后验概率*。 === 事件的独立性和相关性 ==== 两个事件的独立性和相关性 设$(Ω, F , P)$是一概率空间,事件$A,B in cal(F) $满足$P(B) > 0$。一般来说, 事件$A$发生的概率$P(A)$和事件$B$发生条件下事件$A$发生的条件概率$P(A|B)$是有差异的,这反映了$B$发生影响着$A$发生的可能性。 - 若P$(A|B) > P(A)$,则表明$B$发生使$A$发生的可能性增大; - 若P$(A|B) < P(A)$,则表明$B$发生使$A$发生的可能性减小; - 若P$(A|B) = P(A)$,则表明$B$发生与否对$A$发生的可能性没有影响——*独立*。 $ P(A|B) = P(A) <=> P(A B) = P(A)P(B) $ *事件A与事件B相互独立。* 设$0 < P(A) < 1$; $0 < P(B) < 1$,称*相关系数* $ r(A, B) = (P(A B) - P(A)P(B))/sqrt(P(A)(1 - P(A))P(B)(1 - P(B))) $ 我们有如下结论: - $r(A, B) = 0$当且仅当A与B相互独立。 - $|r(A, B)| ≤ 1$。 _证明:利用$P(A B)-P(A)P(B) <= P(A)(1-P(B))$(前者为正);$P(A B)-P(A)P(B) <= (1-P(A))(1-P(B))$(前者为负)即可。_ - $r(A, B) = 1$当且仅当$P(A) = P(A B) = P(B)$,即$A$与$B$几乎处处相同; - $r(A, B) = −1$当且仅当P(A) = P(AB^c) = P(B^c),即$A$与$B^c$几乎处处相同。 - $r(A, B) > 0 <=> P(A|B) > P(A) <=> P(B|A) > P(B)$,称为$A,B$正相关; - $r(A, B) < 0 <=> P(A|B) < P(A) <=> P(B|A) < P(B)$,称为$A,B$负相关。 _注:这个相关系数对于随机变量就是线性相关系数,并不是刻画两个事件之间的相关性的最好指标。_ ==== 事件的条件独立性 若事件$A, B, C$满足 $ P(A B|C) = P(A|C)P(B|C) $ 则称事件$A$与事件$B$在事件$C$下*条件独立*。 下列命题相互等价: - $P(A B|C) = P(A|C)P(B|C)$ - $P(A|B C) = P(A|C)$ - $P(A|B^c C) = P(A|C)$ - $P(B|A C) = P(B|C)$ - $P(B|A^c C) = P(B|C)$ - $P(A B^c|C) = P(A|C)P(B^c|C)$ ==== 多个事件的独立性 设$A_1, A_2, dots, A_n in cal(F)$,若对于任意$i,j$,都有 $ P(A_i A_j) = P(A_i)P(A_j) $ 则称事件$A_1, A_2, dots, A_n$*两两独立*。 若对任意的$1 ≤ i_1 < i_2 < dots < i_k ≤ n$,都有 $ P(A_(i_1) A_(i_2) dots A_(i_k)) = P(A_(i_1))P(A_(i_2)) dots P(A_(i_k)) $ 则称事件$A_1, A_2, dots, A_n$*相互独立*。 显然,若$n$个事件相互独立,则必然蕴含这$n$个事件两两相互独立;反之,则未必正确。 如果事件$A, B, C$相互独立,则 - $A$和$B union C$相互独立; - $A$和$B - C$相互独立。 === 系统的可靠性 独立性的作用在系统的可靠性分析中体现的最为完美。假设某系统由若干个元件联接而成,而每个元件可能正常工作,也可能失效,我们称元件能正常工作的概率为该元件的可靠性。而系统的可靠性就是该系统能正常工作的概率。这个系统的可靠性取决于各个元件的可靠性以及系统的构型。两个最简单的系统为串联系统和并联系统,更复杂的系统往往是这两个简单系统的复合。 考虑串联系统,它由n个元件串联而成,故只要有一个元件失效,该系统就失效;同时,对于并联系统,它由n个元件并联而成,故只要有一个元件正常工作,该系统就不会失效。 对于事件$A_i ="第i个元件正常工作"$,$B = "串联系统正常工作"$,$C = "并联系统正常工作"$,则有 $ P(B) = P(A_1 A_2 
dots A_n) = P(A_1)P(A_2) dots P(A_n)\ P(C) = P(union.big _(i=1)^n A_i) = 1 - P(A_1^c A_2^c dots A_n^c) $
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-A800.typ
typst
Apache License 2.0
#let data = ( ("SYLOTI NAGRI LETTER A", "Lo", 0), ("SYLOTI NAGRI LETTER I", "Lo", 0), ("SYLOTI NAGRI SIGN DVISVARA", "Mn", 0), ("SYLOTI NAGRI LETTER U", "Lo", 0), ("SYLOTI NAGRI LETTER E", "Lo", 0), ("SYLOTI NAGRI LETTER O", "Lo", 0), ("SYLOTI NAGRI SIGN HASANTA", "Mn", 9), ("SYLOTI NAGRI LETTER KO", "Lo", 0), ("SYLOTI NAGRI LETTER KHO", "Lo", 0), ("SYLOTI NAGRI LETTER GO", "Lo", 0), ("SYLOTI NAGRI LETTER GHO", "Lo", 0), ("SYLOTI NAGRI SIGN ANUSVARA", "Mn", 0), ("SYLOTI NAGRI LETTER CO", "Lo", 0), ("SYLOTI NAGRI LETTER CHO", "Lo", 0), ("SYLOTI NAGRI LETTER JO", "Lo", 0), ("SYLOTI NAGRI LETTER JHO", "Lo", 0), ("SYLOTI NAGRI LETTER TTO", "Lo", 0), ("SYLOTI NAGRI LETTER TTHO", "Lo", 0), ("SYLOTI NAGRI LETTER DDO", "Lo", 0), ("SYLOTI NAGRI LETTER DDHO", "Lo", 0), ("SYLOTI NAGRI LETTER TO", "Lo", 0), ("SYLOTI NAGRI LETTER THO", "Lo", 0), ("SYLOTI NAGRI LETTER DO", "Lo", 0), ("SYLOTI NAGRI LETTER DHO", "Lo", 0), ("SYLOTI NAGRI LETTER NO", "Lo", 0), ("SYLOTI NAGRI LETTER PO", "Lo", 0), ("SYLOTI NAGRI LETTER PHO", "Lo", 0), ("SYLOTI NAGRI LETTER BO", "Lo", 0), ("SYLOTI NAGRI LETTER BHO", "Lo", 0), ("SYLOTI NAGRI LETTER MO", "Lo", 0), ("SYLOTI NAGRI LETTER RO", "Lo", 0), ("SYLOTI NAGRI LETTER LO", "Lo", 0), ("SYLOTI NAGRI LETTER RRO", "Lo", 0), ("SYLOTI NAGRI LETTER SO", "Lo", 0), ("SYLOTI NAGRI LETTER HO", "Lo", 0), ("SYLOTI NAGRI VOWEL SIGN A", "Mc", 0), ("SYLOTI NAGRI VOWEL SIGN I", "Mc", 0), ("SYLOTI NAGRI VOWEL SIGN U", "Mn", 0), ("SYLOTI NAGRI VOWEL SIGN E", "Mn", 0), ("SYLOTI NAGRI VOWEL SIGN OO", "Mc", 0), ("SYLOTI NAGRI POETRY MARK-1", "So", 0), ("SYLOTI NAGRI POETRY MARK-2", "So", 0), ("SYLOTI NAGRI POETRY MARK-3", "So", 0), ("SYLOTI NAGRI POETRY MARK-4", "So", 0), ("SYLOTI NAGRI SIGN ALTERNATE HASANTA", "Mn", 9), )
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/math/op.typ
typst
// Test text operators. --- math-op-predefined --- // Test predefined. $ max_(1<=n<=m) n $ --- math-op-call --- // With or without parens. $ &sin x + log_2 x \ = &sin(x) + log_2(x) $ --- math-op-scripts-vs-limits --- // Test scripts vs limits. #set page(width: auto) #set text(font: "New Computer Modern") Discuss $lim_(n->oo) 1/n$ now. $ lim_(n->infinity) 1/n = 0 $ --- math-op-custom --- // Test custom operator. $ op("myop", limits: #false)_(x:=1) x \ op("myop", limits: #true)_(x:=1) x $ --- math-op-styled --- // Test styled operator. $ bold(op("bold", limits: #true))_x y $ --- math-non-math-content --- // With non-text content $ op(#underline[ul]) a $
https://github.com/kpindur/rinko.typst
https://raw.githubusercontent.com/kpindur/rinko.typst/main/template.typ
typst
MIT License
#import "rinko.typ": conf, abstract #show: body => conf(body) #abstract(abstract: lorem(50)) = Introduction #lorem(90) = Related Works #lorem(120) = Methods #lorem(240) == Details #lorem(120) = Results #lorem(30) == Experiment 1 #lorem(100) == Experiment 2 #lorem(100) = Discussion #lorem(30) == Experiment 1 #lorem(100) == Experiment 2 #lorem(100) = Conclusions and Future Works #lorem(50)
https://github.com/jcrist/cv
https://raw.githubusercontent.com/jcrist/cv/main/cv.typ
typst
#let par_space = 0.5em #let sep_space = par_space + 0.2em #let color = rgb("#336699") #let format_date(start_date: none, end_date: none) = { let date = { if end_date == none { start_date } else if start_date == none { end_date } else { start_date + " - " + end_date } } [*#date*] } #let format_detail(el) = { let display = box(image(height: 0.7em, "images/" + el.image + ".svg")) + " " + el.text if el.link != none { link(el.link, display) } else { display } } #let entry(name, location, start_date: none, end_date: none, description) = { grid( columns: (3fr, 1fr), column-gutter: 1cm, { set align(left) [*#name*] + [ --- #location] }, { set align(right) format_date(start_date: start_date, end_date: end_date) } ) if description != none { block(above: sep_space, description) } } #set page(margin: (x: 1.5cm, y: 0.5cm)) #set text(11pt, font: "Fira Sans") #set par(leading: par_space) #set list(indent: 1em) #show link: underline #show heading.where(level: 1): i => { set align(left) let title = smallcaps(i.body) set block(above: 1em) set text(weight: "light", size: 1.2em, fill: color) stack( dir: ttb, spacing: 2mm, title, line(length: 100%, stroke: color + 2pt) ) } #show heading.where(level: 2): i => { set align(left) let title = smallcaps(i.body) set block(above: 0.8em) set text(weight: "light", size: 1em, fill: color) title } #let contact_data = ( ( "image": "email", "text": "<EMAIL>", "link": "mailto://<EMAIL>" ), ( "image": "github", "text": "jcrist", "link": "https://github.com/jcrist" ), ( "image": "website", "text": "jcristharif.com", "link": "https://jcristharif.com" ), ( "image": "location", "text": "Saint Paul, MN", "link": none ) ) #align(center)[ #smallcaps(text(size: 2.5em, fill: color)[<NAME>]) \ #{ if contact_data != none and contact_data.len() > 0 { contact_data.map(format_detail).join(" | ") } } ] = Summary An experienced software engineer and manager with a proven track record of developing OSS data analytics tools that users love. #grid(columns: (1fr, 1fr))[ == Languages - Proficient: Python | Cython | C - Capable: SQL | Go | Java == Technologies - K8s | Apache YARN | HPC Schedulers - DuckDB | Polars | Dask | Ibis ][ == Interests - Data Analytics Tooling - API Design / Developer Experience - Distributed Systems - Open Source Community Health ] = Experience #entry( "Engineering Manager", "Voltron Data", start_date: "Jun 2022", end_date: "Oct 2024", )[ - Led the team developing `ibis`, an ergonomic dataframe API wrapping 20+ SQL databases. - Interfaced with Product and Leadership to turn product requirements into engineering tasks. - Authored Ibis's backend wrapping Voltron Data's distributed GPU engine. - Created `ibis-ml`, an ML framework for preprocessing and feature engineering using Ibis. ] #entry( "Senior Software Engineer", "Coiled", start_date: "Aug 2021", end_date: "May 2022", )[ - Led an effort to revamp dask's parquet support, improving performance by up to 8x. - Optimized dask's networking layer, reducing RPC latency by 2x. ] #entry( "Senior Software Engineer", "Prefect", start_date: "Apr 2020", end_date: "May 2021", )[ - Led development of Prefect's core workflow framework. - Interfaced actively with users and customers, helping resolve issues and improve user experience. - Optimized Prefect's `dask` integration, allowing for resilient and scalable execution of large workflows. 
] #entry( "Senior Software Engineer", "Anaconda", start_date: "Jun 2015", end_date: "Apr 2020", )[ - Core contributor to `dask` & `distributed`, a parallel and distributed compute framework for Python. Wrote much of the original implementation of `dask` and `dask-ml`. - Created #link("https://datashader.org/")[datashader], a high performance plotting library for large data. - Identified and resolved issues deploying Python in enterprise Hadoop environments. This resulted in the development of several new tools, including a #link("https://jupyterhub-on-hadoop.readthedocs.io/en/latest/")[complete JupyterHub deployment]. - Created and led development of #link("https://gateway.dask.org/")[dask-gateway], a tool for managing and deploying Dask in multi-user enterprise environments. Helped several large organizations deploy `dask-gateway` to serve hundreds of users. - Executed `dask` consulting engagements, helping several Fortune 500 companies use `dask` in production. ] = Open Source - #link("https://dask.org")[Dask] steering council member - #link("https://ibis-project.org")[Ibis] steering council member - Creator and lead maintainer of #link("https://jcristharif.com/msgspec/")[msgspec], currently the fastest JSON validation library for Python. - Presented #link("https://jcristharif.com/talks.html")[talks and tutorials] at 30+ past conferences including Strata, PyCon, SciPy & PyData. - Past mentor for #link("https://summerofcode.withgoogle.com/")[Google Summer of Code]. = Education #entry( "MSc. Mechanical Engineering (unfinished)", "University of Minnesota", start_date: "Sep 2013", end_date: "May 2015", "Designed novel state estimation algorithm for linear actuator system." ) #entry( "BSc. Mechanical Engineering", "University of Minnesota", start_date: "Sep 2009", end_date: "May 2013", none, )
https://github.com/TechnoElf/mqt-qcec-diff-presentation
https://raw.githubusercontent.com/TechnoElf/mqt-qcec-diff-presentation/main/content/conclusion.typ
typst
#import "@preview/tablex:0.0.8": tablex #import "../template/conf.typ": slide #import "data.typ": * #slide(title: "")[ #box(width: 100%, height: 80%, align(center + horizon)[ #text(size: 60pt)[*Conclusion*] ]) ] #slide(title: "Conclusion")[ #figure( box(width: 90%, height: 90%, align(horizon, { tablex( columns: (2fr, 1fr, 1fr, 1fr), [*Algorithm*], [*Average Run Time Improvement (%)*], [*Maximum Run Time Improvement (%)*], [*Maximum Run Time Regression (%)*], [Myers' Diff], align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyers-p.mu / r.cprop.mu * 100 - 100)).sum() / unclip(results-r1-b5q16).len(), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyers-p.mu / r.cprop.mu * 100 - 100)).fold(0, calc.max), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyers-p.mu / r.cprop.mu * 100 - 100)).fold(0, calc.min), digits: 2)]), [Myers' Diff (Processed)], align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyers-pmismc.mu / r.cprop.mu * 100 - 100)).sum() / unclip(results-r1-b5q16).len(), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyers-pmismc.mu / r.cprop.mu * 100 - 100)).fold(0, calc.max), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyers-pmismc.mu / r.cprop.mu * 100 - 100)).fold(0, calc.min), digits: 2)]), [Myers' Diff (Reversed)], align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyersrev-p.mu / r.cprop.mu * 100 - 100)).sum() / unclip(results-r1-b5q16).len(), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyersrev-p.mu / r.cprop.mu * 100 - 100)).fold(0, calc.max), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyersrev-p.mu / r.cprop.mu * 100 - 100)).fold(0, calc.min), digits: 2)]), [Myers' Diff (Reversed, Processed)], align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyersrev-pmismc.mu / r.cprop.mu * 100 - 100)).sum() / unclip(results-r1-b5q16).len(), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyersrev-pmismc.mu / r.cprop.mu * 100 - 100)).fold(0, calc.max), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cmyersrev-pmismc.mu / r.cprop.mu * 100 - 100)).fold(0, calc.min), digits: 2)]), [Patience Diff], align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cpatience-p.mu / r.cprop.mu * 100 - 100)).sum() / unclip(results-r1-b5q16).len(), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cpatience-p.mu / r.cprop.mu * 100 - 100)).fold(0, calc.max), digits: 2)]), align(right, [#calc.round(unclip(results-r1-b5q16).map(r => -(r.cpatience-p.mu / r.cprop.mu * 100 - 100)).fold(0, calc.min), digits: 2)]), [*Myers' Diff (Processed) \ with filter*], align(right, [*#calc.round(filter(unclip(results-r1-b5q16)).map(r => -(r.cmyers-pmismc.mu / r.cprop.mu * 100 - 100)).sum() / filter(unclip(results-r1-b5q16)).len(), digits: 2)*]), align(right, [*#calc.round(filter(unclip(results-r1-b5q16)).map(r => -(r.cmyers-pmismc.mu / r.cprop.mu * 100 - 100)).fold(0, calc.max), digits: 2)*]), align(right, [*#calc.round(filter(unclip(results-r1-b5q16)).map(r => -(r.cmyers-pmismc.mu / r.cprop.mu * 100 - 100)).fold(0, calc.min), digits: 2)*]), ) })) ) ]
https://github.com/mrcinv/nummat-typst
https://raw.githubusercontent.com/mrcinv/nummat-typst/master/08_spektralno_grucenje.typ
typst
#import "admonitions.typ": opomba = Spektralno razvrščanje v gruče <spektralno-razvrščanje-v-gruče> Pokazali bomo metodo razvrščanja v gruče, ki uporabi spektralno analizo Laplaceove matrike podobnostnega grafa podatkov, zato da podatke preslika v prostor, kjer jih je lažje razvrstiti. == Podobnostni graf in Laplaceova matrika <podobnostni-graf-in-laplaceova-matrika> Podatke \(množico točk v $bb(R)^n$) želimo razvrstiti v več gruč. Najprej ustvarimo #emph[podobnostni uteženi graf], ki povezuje točke, ki so si v nekem smislu blizu. Podobnostni graf lahko ustvarimo na več načinov: - #strong[$ε$-okolice]: s točko $x_k$ povežemo vse točke, ki ležijo v ε-okolici te točke - #strong[$k$ najbližji sosedi]: $x_k$ povežemo z $x_i$, če je $x_i$ med $k$ najbližjimi točkami. Tako dobimo usmerjen graf, zato ponavadi upoštevmo povezavo v obe smeri. - #strong[poln utežen graf]: povežemo vse točke, vendar povezave utežimo glede na razdaljo. Pogosto uporabljena utež je nam znana #link("https://en.wikipedia.org/wiki/Radial_basis_function")[radialna bazna funkcija] $ w lr((x_i comma x_k)) eq exp lr((minus frac(parallel x_i minus x_k parallel^2, 2 sigma^2))) $ pri kateri s parametrom $sigma$ lahko določamo velikost okolic. Grafu podobnosti priredimo matriko uteži $ W eq lr([w_(i j)]) comma $ in Laplaceovo matriko $ L eq D minus W comma $ kjer je $D eq lr([d_(i j)])$ diagonalna matrika z elementi $d_(i i) eq sum_j w_(i j)$. Laplaceova matrika $L$ je simetrična, nenegativno definitna in ima vedno eno lastno vrednost 0 za lastni vektor iz samih enic. == Algoritem <algoritem> Velja naslednji izrek, da ima Laplaceova matrika natanko toliko lastnih vektorjev za lastno vrednost 0, kot ima graf komponent za povezanost. Na prvi pogled se zdi, da bi lahko bile komponente kar naše gruče, a se izkaže, da to ni najbolje. - Poiščemo #emph[k] najmanjših lastnih vrednosti za Laplaceovo matriko in izračunamo njihove lastne vektorje. - Označimo matriko lastnih vektorjev $Q=[v_1, v_2, dots,v_k]$. Stolpci $Q^T$ ustrezajo koordinatam točk v novem prostoru. - Za stolpce matrike $Q^T$ izvedemo nek drug algoritem gručenja (npr. algoritem $k$ povprečij). #opomba(naslov: [Algoritem $k$ povprečij])[ Izberemo si število gruč $k$. Najprej točke naključno razdelimo v $k$ gruč. Nato naslednji postopek ponavljamo, dokler se rezultat ne spreminja več - izračunamo center posamezne gruče $bold(c)_i= 1/(|G_i|)sum_(j in G_i) bold(x)_i$, - vsako točko razvrstimo v gručo, ki ima najbližji center. ] == Primer <primer> Algoritem preverimo na mešanici treh gaussovih porazdelitev ```julia using Plots using Random m = 100; Random.seed!(12) x = [1 .+ randn(m, 1); -3 .+ randn(m,1); randn(m,1)]; y = [-2 .+ randn(m, 1); -1 .+ randn(m,1); 1 .+ randn(m,1)]; scatter(x, y, title="Oblak točk v ravnini") savefig("06_oblak.png") ``` #figure([], caption: [ Oblak točk ] ) Izračunamo graf sosednosti z metodo $epsilon$-okolic in poiščemo laplaceovo matriko dobljenega grafa. ```julia using SparseArrays tocke = [(x[i], y[i]) for i=1:3*m] r = 0.9 G = graf_eps_okolice(tocke, r) L = LaplaceovaMatrika(G) spy(sparse(Matrix(L)), title="Porazdelitev neničelnih elementov v laplaceovi matriki") savefig("06_laplaceova_matrika_grafa.png") ``` #figure([], caption: [ Neničelni elementi Laplaceove matrike ] ) Če izračunamo lastne vektorje in vrednosti laplaceove matrike dobljenega grafa, dobimo 4 najmanjše lastne vrednosti, ki očitno odstopajo od ostalih. 
```julia import LinearAlgebra.eigen razcep = eigen(Matrix(L)) scatter(razcep.values[1:20], title="Prvih 20 lastnih vrednosti laplaceove matrike") savefig("06_lastne_vrednosti.png") ``` #figure([], caption: [ Lastne vrednosti laplaceove matrike ] ) ```julia scatter(razcep.vectors[:,4], razcep.vectors[:,5], title="Vložitev s komponentami 4. in 5. lastnega vektorja") savefig("06_vlozitev.png") ``` #figure([], caption: [ Vložitev točk v nov prostor ] ) == Inverzna potenčna metoda <inverzna-potenčna-metoda> Ker nas zanima le najmanjših nekaj lastnih vrednosti, lahko njihov izračun in za izračun lastnih vektrojev uporabimo #link("https://en.wikipedia.org/wiki/Inverse_iteration")[inverzno potenčno metodo]. Pri inverzni potenčni metodi zgradimo zaporedje približkov z rekurzivno formulo $ bold(x)^(lr((k plus 1))) eq frac(A^(minus 1) bold(x)^(lr((n))), parallel A^(minus 1) bold(x)^(lr((n))) parallel) $ in zaporedje približkov konvergira k lastnemu vektorju za najmanjšo lastno vrednost matrike $A$. !!! warning "Namesto inverza uporabite razcep" ``` Računanje inverza je časovno zelo zahtevna operacija, zato se jo razen v nizkih dimenzijah, če je le mogoče izognemo. Namesto inverza raje uporabimo enega od razcepov matrike $A$. Če naprimer uporabimo LU razcep $A=LU$, lahko $A^{-1}\mathbf{b}$ izračunamo tako, da rešimo sistem $A\mathbf{x} = \mathbf{b}$ oziroma $LU\mathbf{x} = \mathbf{b}$ v dveh korakih $$ \begin{aligned} L\mathbf{y}&=\mathbf{b}\cr U\mathbf{x}&=\mathbf{y} \end{aligned} $$ Programski jezik `julia` ima za ta namen prav posebno metodo [factorize](https://docs.julialang.org/en/v1/stdlib/LinearAlgebra/index.html#LinearAlgebra.factorize), ki za različne matrike, izračuna najbolj primeren razcep. ``` Laplaceova matrika je simetrična, zato so lastne vrednosti ortogonalne. Lastne vektorje lahko tako poiščemo tako, da iteracijo izvajamo na več vektorjih hkrati in nato na dobljeni bazi izvedemo ortogonalizacijo \(QR razcep), da zaporedje lastnih vektorjev za lastne vrednosti, ki so najbližje najmanjši lastni vrednosti. Laplaceova matrika grafa okolic je simetrična in diagonalno dominantna. Poleg tega je zelo veliko elementov enakih 0. Zato za rešitev sistema uporabimo metodo #link("https://en.wikipedia.org/wiki/Conjugate_gradient_method")[konjugiranih gradientov]. Za uporabo metode konjugiranih gradientov zadošča, da učinkovito izračunamo množenje matrike z vektorjem. Težava je, ker so je laplaceova matrika grafa izrojena, zato metoda konjugiranih gradientov ne konvergira. Težavo lahko rešimo s premikom. Namesto, da računamo lastne vreednosti in vektorje matrike $L$, iščemo lastne vrednosti in vektorje malce premaknjene matrike $L plus epsilon I$, ki ima enake lastne vektorje, kot $L$. !!! note ``` Programski jezik julia omogoča polimorfizem v obliki [večlične dodelitve](https://docs.julialang.org/en/v1/manual/methods/index.html). Tako lahko za isto funkcijo definiramo različne metode. Za razliko od polmorfizma v objektno orientiranih jezikih, se metoda izbere ne le na podlagi tipa objekta, ki to metodo kliče, ampak na podlagi tipov vseh vhodnih argumentov. To lastnost lahko s pridom uporabimo, da lahko pišemo generično kodo, ki deluje za veliko različnih vhodnih argumentov. Primer je funkcija [`conjgrad`](@ref), ki jo lahko uporabimo tako za polne matrike, matrike tipa `SparseArray` ali pa tipa `LaplaceovaMatrika` za katerega smo posebej definirali operator množenja [`*`](@ref). 
``` $ L bold(x^(lr((k plus 1)))) eq bold(x^(lr((k)))) $ Primerjajmo inverzno potenčno metodo z vgrajeno metodo za iskanje lastnih vrednosti s polno matriko ```julia import Base:*, size struct PremikMatrike premik matrika end *(p::PremikMatrike, x) = p.matrika*x + p.premik.*x size(p::PremikMatrike) = size(p.matrika) Lp = PremikMatrike(0.01, L) l, v = inverzna_iteracija(Lp, 5, (Lp, x) -> conjgrad(Lp, x)[1]) ``` == Algoritem k-povprečij <algoritem-k-povprečij> ```julia nove_tocke = [tocka for tocka in zip(razcep.vectors[:,4], razcep.vectors[:,5])] gruce = kmeans(nove_tocke, 3) p1 = scatter(tocke[findall(gruce .== 1)], color=:blue, title="Originalne točke") scatter!(p1, tocke[findall(gruce .== 2)], color=:red) scatter!(p1, tocke[findall(gruce .== 3)], color=:green) p2 = scatter(nove_tocke[findall(gruce .== 1)], color=:blue, title="Preslikane točke") scatter!(p2, nove_tocke[findall(gruce .== 2)], color=:red) scatter!(p2, nove_tocke[findall(gruce .== 3)], color=:green) plot(p1,p2) savefig("06_gruce.png") ``` #figure([], caption: [ Gruče ] ) == Literatura <literatura> - <NAME> #link("https://arxiv.org/abs/0711.0189")[A Tutorial on Spectral Clustering] - <NAME> #link("http://people.inf.ethz.ch/arbenz/ewp/Lnotes/lsevp.pdf")[Lecture Notes on Solving Large Scale Eigenvalue Problems] - Knjižnica #link("http://danspielman.github.io/Laplacians.jl/latest/index.html")[Laplacians.jl]
https://github.com/GYPpro/Java-coures-report
https://raw.githubusercontent.com/GYPpro/Java-coures-report/main/Report/10.typ
typst
#set text(font:("Times New Roman","Source Han Serif SC")) #show raw.where(block: false): box.with( fill: luma(240), inset: (x: 3pt, y: 0pt), outset: (y: 3pt), radius: 2pt, ) // Display block code in a larger block // with more padding. #show raw.where(block: true): block.with( fill: luma(240), inset: 10pt, radius: 4pt, ) #set math.equation(numbering: "(1)") #set text( font:("Times New Roman","Source Han Serif SC"), style:"normal", weight: "regular", size: 13pt, ) #set page( paper:"a4", number-align: right, margin: (x:2.54cm,y:4cm), header: [ #set text( size: 25pt, font: "KaiTi", ) #align( bottom + center, [ #strong[暨南大学本科实验报告专用纸(附页)] ] ) #line(start: (0pt,-5pt),end:(453pt,-5pt)) ] ) #show raw: set text( font: ("consolas", "Source Han Serif SC") ) = 实现一个瘟疫传播的可视化模拟 \ #text("*") 实验项目类型:设计性\ #text("*")此表由学生按顺序填写\ #text( font:"KaiTi", size: 15pt )[ 课程名称#underline[#text(" 面向对象程序设计/JAVA语言 ")]成绩评定#underline[#text(" ")]\ 实验项目名称#underline[#text(" 实现一个瘟疫传播的可视化模拟 ")]\ 指导老师#underline[#text(" 干晓聪 ")]\ 实验项目编号#underline[#text(" 1 ")]实验项目类型#underline[#text(" 设计性 ")]实验地点#underline[#text(" 数学系机房 ")]\ 学生姓名#underline[#text(" 郭彦培 ")]学号#underline[#text(" 2022101149 ")]\ 学院#underline[#text(" 信息科学技术学院 ")]系#underline[#text(" 数学系 ")]专业#underline[#text(" 信息管理与信息系统 ")]\ 实验时间#underline[#text(" 2023年11月1日上午 ")]#text("~")#underline[#text(" 2023年11月1日中午 ")]\ ] #set heading( numbering: "一、" ) #set par( first-line-indent: 1.8em) = 实验目的 \ #h(1.8em) 在`JPanel`的基础上实现面板类`MyPanel`,练习`swing`库的使用,并且写出有实用价值的可视化程序。 = 实验环境 \ #h(1.8em)计算机:PC X64 操作系统:Windows 编程语言:Java IDE:IntelliJ IDEA = 程序原理 \ #h(1.8em) 实现`Hospital`、`Persion`、`Person Poll`、`Bed`等类进行数据计算,随后传递给`myPanal`进行绘图。在`myPanal`中申请`Timer`进行数据的周期性重绘和刷新。 = 程序代码 文件`sis9\Bed.java`实现了一个`Bed`类,用于计算医院床位相关数据 ```java package sis9; public class Bed extends Point { public Bed(int x, int y) { super(x, y); } private boolean isEmpty=true; public boolean isEmpty() { return isEmpty; } public void setEmpty(boolean empty) { isEmpty = empty; } } ``` 文件`sis9\City.java`实现了一个`City`类,用于确定城市位置 ```java package sis9; public class City { private int centerX; private int centerY; public City(int centerX, int centerY) { this.centerX = centerX; this.centerY = centerY; } public int getCenterX() { return centerX; } public void setCenterX(int centerX) { this.centerX = centerX; } public int getCenterY() { return centerY; } public void setCenterY(int centerY) { this.centerY = centerY; } } ``` 文件`sis9\Constants.java`实现了一个`Constants`类,用于存放常量 ```java package sis9; public class Constants { public static int ORIGINAL_COUNT = 50;//初始感染数量 public static float BROAD_RATE = 0.8f;//传播率 public static float SHADOW_TIME = 140;//潜伏时间 public static int HOSPITAL_RECEIVE_TIME = 10;//医院收治响应时间 public static int BED_COUNT = 1000;//医院床位 public static float u = -0.99f;//流动意向平均值 public static int CITY_PERSON_SIZE = 5000;//城市总人口数量 } ``` 文件`sis9\Hospital.java`实现了一个`Hospital`类,用于计算医院相关数据 ```java package sis9; import java.util.ArrayList; import java.util.List; public class Hospital { private int x = 800; private int y = 110; private int width; private int height = 606; public int getWidth() { return width; } public int getHeight() { return height; } public int getX() { return x; } public int getY() { return y; } private static Hospital hospital = new Hospital(); public static Hospital getInstance() { return hospital; } private Point point = new Point(800, 100); private List<Bed> beds = new ArrayList<>(); private Hospital() { if (Constants.BED_COUNT == 0) { width = 0; height = 0; } int column = Constants.BED_COUNT / 100; width = column * 6; 
for (int i = 0; i < column; i++) { for (int j = 10; j <= 610; j += 6) { Bed bed = new Bed(point.getX() + i * 6, point.getY() + j); beds.add(bed); } } } public Bed pickBed() { for (Bed bed : beds) { if (bed.isEmpty()) { return bed; } } return null; } } ``` 文件`sis9\Main.java`实现了程序入口 ```java package sis9; import javax.swing.*; import java.util.List; import java.util.Random; public class Main { public static void main(String[] args) { initPanel(); initInfected(); } private static void initPanel(){ MyPanel p = new MyPanel(); Thread panelThread = new Thread(p); JFrame frame = new JFrame(); frame.add(p); frame.setSize(1000, 800); frame.setLocationRelativeTo(null); frame.setVisible(true); frame.setTitle("瘟疫传播模拟"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); panelThread.start(); } private static void initInfected() { List<Person> people = PersonPool.getInstance().getPersonList(); for (int i = 0; i < Constants.ORIGINAL_COUNT; i++) { Person person; do { person = people.get(new Random().nextInt(people.size() - 1)); } while (person.isInfected()); person.beInfected(); } } } ``` 文件`sis9\MoveTarget.java`实现了一个`MoveTarget`类,用于计算点云移动 ```java package sis9; public class MoveTarget { private int x; private int y; private boolean arrived=false; public MoveTarget(int x, int y) { this.x = x; this.y = y; } public int getX() { return x; } public void setX(int x) { this.x = x; } public int getY() { return y; } public void setY(int y) { this.y = y; } public boolean isArrived() { return arrived; } public void setArrived(boolean arrived) { this.arrived = arrived; } } ``` 文件`sis9\MyPanel.java`实现了一个`MyPanel`类,用于绘图 ```java package sis9; import javax.swing.*; import java.awt.*; import java.util.List; import java.util.Timer; import java.util.TimerTask; public class MyPanel extends JPanel implements Runnable { private int pIndex = 0; public MyPanel() { super(); this.setBackground(new Color(0x444444)); } @Override public void paint(Graphics g) { super.paint(g); g.setColor(new Color(0x00ff00));//设置医院边界颜色 //绘制医院边界 g.drawRect(Hospital.getInstance().getX(), Hospital.getInstance().getY(), Hospital.getInstance().getWidth(), Hospital.getInstance().getHeight()); g.setFont(new Font("微软雅黑", Font.BOLD, 16)); g.setColor(new Color(0x00ff00)); g.drawString("医院", Hospital.getInstance().getX() + Hospital.getInstance().getWidth() / 4, Hospital.getInstance().getY() - 16); List<Person> people = PersonPool.getInstance().getPersonList(); if (people == null) { return; } people.get(pIndex).update(); for (Person person : people) { switch (person.getState()) { case Person.State.NORMAL: { g.setColor(new Color(0xdddddd)); break; } case Person.State.SHADOW: { g.setColor(new Color(0xffee00)); break; } case Person.State.CONFIRMED: g.setColor(new Color(0xff0000)); break; case Person.State.FREEZE: { g.setColor(new Color(0x48FFFC)); break; } } person.update(); g.fillOval(person.getX(), person.getY(), 3, 3); } pIndex++; if (pIndex >= people.size()) { pIndex = 0; } //显示数据信息 g.setColor(Color.WHITE); g.drawString("城市总人数:" + Constants.CITY_PERSON_SIZE, 16, 40); g.setColor(new Color(0xdddddd)); g.drawString("健康者人数:" + PersonPool.getInstance().getPeopleSize(Person.State.NORMAL), 16, 64); g.setColor(new Color(0xffee00)); g.drawString("潜伏者人数:" + PersonPool.getInstance().getPeopleSize(Person.State.SHADOW), 16, 88); g.setColor(new Color(0xff0000)); g.drawString("感染者人数:" + PersonPool.getInstance().getPeopleSize(Person.State.CONFIRMED), 16, 112); g.setColor(new Color(0x48FFFC)); g.drawString("已隔离人数:" + PersonPool.getInstance().getPeopleSize(Person.State.FREEZE), 
16, 136); g.setColor(new Color(0x00ff00)); g.drawString("空余病床:" + (Constants.BED_COUNT - PersonPool.getInstance().getPeopleSize(Person.State.FREEZE)), 16, 160); } public static int worldTime = 0; public Timer timer = new Timer(); class MyTimerTask extends TimerTask { @Override public void run() { MyPanel.this.repaint(); worldTime++; } } @Override public void run() { timer.schedule(new MyTimerTask(), 0, 100); } } ``` 文件`sis9\Person.java`实现了一个`Person`类,用于计算人相关数据 ```java package sis9; import java.util.List; import java.util.Random; public class Person { private City city; private int x; private int y; private MoveTarget moveTarget; int sig = 1; double targetXU; double targetYU; double targetSig = 50; public interface State {//市民状态 int NORMAL = 0;//未被感染 int SHADOW = NORMAL + 1;//潜伏者 int CONFIRMED = SHADOW + 1;//感染者 int FREEZE = CONFIRMED + 1;//已隔离 } public Person(City city, int x, int y) { this.city = city; this.x = x; this.y = y; targetXU = 100 * new Random().nextGaussian() + x; targetYU = 100 * new Random().nextGaussian() + y; } public boolean wantMove() { double value = sig * new Random().nextGaussian() + Constants.u; return value > 0; } private int state = State.NORMAL; public int getState() { return state; } public void setState(int state) { this.state = state; } public int getX() { return x; } public void setX(int x) { this.x = x; } public int getY() { return y; } public void setY(int y) { this.y = y; } int infectedTime = 0; int confirmedTime = 0; public boolean isInfected() { return state >= State.SHADOW; } public void beInfected() { state = State.SHADOW; infectedTime = MyPanel.worldTime; } public double distance(Person person) { return Math.sqrt(Math.pow(x - person.getX(), 2) + Math.pow(y - person.getY(), 2)); } private void freezy() { state = State.FREEZE; } private void moveTo(int x, int y) { this.x += x; this.y += y; } private void action() { if (state == State.FREEZE) { return; } if (!wantMove()) { return; } if (moveTarget == null || moveTarget.isArrived()) { double targetX = targetSig * new Random().nextGaussian() + targetXU; double targetY = targetSig * new Random().nextGaussian() + targetYU; moveTarget = new MoveTarget((int) targetX, (int) targetY); } int dX = moveTarget.getX() - x; int dY = moveTarget.getY() - y; double length = Math.sqrt(Math.pow(dX, 2) + Math.pow(dY, 2)); if (length < 1) { moveTarget.setArrived(true); return; } int udX = (int) (dX / length); if (udX == 0 && dX != 0) { if (dX > 0) { udX = 1; } else { udX = -1; } } int udY = (int) (dY / length); if (udY == 0 && udY != 0) { if (dY > 0) { udY = 1; } else { udY = -1; } } if (x > 700) { moveTarget = null; if (udX > 0) { udX = -udX; } } moveTo(udX, udY); } private float SAFE_DIST = 2f; public void update() { if (state >= State.FREEZE) { return; } if (state == State.CONFIRMED && MyPanel.worldTime - confirmedTime >= Constants.HOSPITAL_RECEIVE_TIME) { Bed bed = Hospital.getInstance().pickBed(); if (bed == null) { } else { state = State.FREEZE; x = bed.getX(); y = bed.getY(); bed.setEmpty(false); } } if (MyPanel.worldTime - infectedTime > Constants.SHADOW_TIME && state == State.SHADOW) { state = State.CONFIRMED; confirmedTime = MyPanel.worldTime; } action(); List<Person> people = PersonPool.getInstance().personList; if (state >= State.SHADOW) { return; } for (Person person : people) { if (person.getState() == State.NORMAL) { continue; } float random = new Random().nextFloat(); if (random < Constants.BROAD_RATE && distance(person) < SAFE_DIST) { this.beInfected(); } } } } ``` 
文件`sis9\PersonPool.java`实现了一个`PersonPool`类,用于计算人口池相关数据
```java
package sis9;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class PersonPool {
    private static PersonPool personPool = new PersonPool();

    public static PersonPool getInstance() {
        return personPool;
    }

    List<Person> personList = new ArrayList<Person>();

    public List<Person> getPersonList() {
        return personList;
    }

    public int getPeopleSize(int state) {
        if (state == -1) {
            return Constants.CITY_PERSON_SIZE;
        }
        int i = 0;
        for (Person person : personList) {
            if (person.getState() == state) {
                i++;
            }
        }
        return i;
    }

    private PersonPool() {
        City city = new City(400, 400);
        for (int i = 0; i < Constants.CITY_PERSON_SIZE; i++) {
            Random random = new Random();
            int x = (int) (100 * random.nextGaussian() + city.getCenterX());
            int y = (int) (100 * random.nextGaussian() + city.getCenterY());
            if (x > 700) {
                x = 700;
            }
            personList.add(new Person(city, x, y));
        }
    }
}
```
文件`sis9\Point.java`实现了一个`Point`类,用于计算点数据
```java
package sis9;

public class Point {
    private int x;
    private int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() {
        return x;
    }

    public void setX(int x) {
        this.x = x;
    }

    public int getY() {
        return y;
    }

    public void setY(int y) {
        this.y = y;
    }
}
```

= 出现的问题、原因与解决方法
\ #h(1.8em) 编码过程中大量参考`JPanel`的`reference`,并且结合一些开源项目的实例,因此非常顺利,没有出现什么问题。

#pagebreak()

= 测试数据与运行结果

刚开始模拟
#image(
  "1.png",
  width: 80%
)
数十秒后,医院接近满员
#image(
  "2.png",
  width: 80%
)
https://github.com/mrtz-j/typst-thesis-template
https://raw.githubusercontent.com/mrtz-j/typst-thesis-template/main/template/utils/caption.typ
typst
MIT License
#import "../chapters/global.typ": in-outline #let dynamic-caption(long, short) = ( context { if in-outline.get() { short } else { long } } )
https://github.com/ist199211-ist199311/homeworks-nss
https://raw.githubusercontent.com/ist199211-ist199311/homeworks-nss/master/hw1.typ
typst
#import "@preview/tablex:0.0.6": tablex, hlinex, colspanx #import "common/template.typ": cover_page, header, footer, setup_page #cover_page(title: "Homework 1", date: "November 2023") #pagebreak() #show: setup_page #set page("a4", header: header(title: "HW1"), footer: footer) #counter(page).update(1) #outline() #pagebreak() = Key Exchange #set enum(numbering: "a)") + Two possible attacks would be the following: #set enum(numbering: "(i)") + The ticket ${A, K_"AB"}_(K_"BS")$ provided by $S$ to $B$ (through $A$) does not include any nonces, compromising its freshness, so an attacker $T$ that cracks a previous session key $K_"AB"$ can use it to encrypt $N_B$ and then forge a message $ T -> B: quad {A, K_"AB"}_(K_"BS"), space.quarter {N_B}_(K_"AB") $ which makes use of a replay attack to re-use the old session's ticket that had been encrypted with the $K_"BS"$ key unknown to the attacker. This message would trick $B$ into believing they were communicating with $A$, as they trust $S$ for authentication. The attacker $T$ could then continue impersonating $A$ using the previous, compromised $K_"AB"$. + Given that nonces are 32 bits in length and keys 64 bits, two nonces concatenated together might be misinterpreted as a key, since $ underbrace("len"(N_A||N_B), 32 + 32) = underbrace("len"(K_"AB"), 64) $ As both $N_A$ and $N_B$ are public values, the attacker can compute ${N_B}_(N_A||N_B)$, eavesdrop on message 2 of the protocol to obtain ${A, N_A || N_B}_(K_"BS")$, and use both to forge a message $ T -> B: quad {A, N_A || N_B}_(K_"BS"), space.quarter {N_B}_(N_A||N_B) $ which could be interpreted by $B$ to mean $ T -> B: quad {A, K'_"AB"}_(K_"BS"), space.quarter {N_B}_(K'_"AB") $ where $K'_"AB" = N_A || N_B$. This would again trick $B$ into believing they were communicating with $A$ (authenticated by $S$, who $B$ trusts). The attacker $T$ could then continue impersonating $A$ using the session key $K'_"AB"$. This attack assumes, of course, that no field separators of any kind are used, with adjacent values simply being concatenated together - i.e., ${alpha, beta} = {alpha || beta}$. + Possible countermeasures to the attacks above would be: #set enum(numbering: "(i)") + In order to guarantee ticket _freshness_, the nonces should be included in the ticket, allowing $B$ to verify that it is only used once. This could mean, for example, changing messages 3 and 4 of the protocol to: $ &3. space.quarter S -> A: quad {B, K_"AB", N_A, N_B}_(K_"AS"), space.quarter {A, K_"AB", bold(N_A\, N_B)}_(K_"BS") \ &4. space.quarter A -> B: quad {A, K_"AB", bold(N_A\, N_B)}_(K_"BS"), space.quarter {N_B}_(K_"AB") $ This would no longer allow the described attack to succeed, as each ticket will only be accepted by $B$ once, and the attacker cannot compute a new one (even for a compromised session key) without knowing $K_"BS"$. Although in theory only $N_B$ would be necessary, it might be worth it to include $N_A$ as well (per above) in order to halve the probability of nonce collisions that could be exploited by the attacker with a replay attack. 
This countermeasure could also be considered to render unnecessary sending ${N_B}_(K_"AB")$ in message 4, as the ticket is already scoped to $N_B$ and only $A$ would have $K_"AB"$ to encrypt and decrypt subsequent communications, but requiring $A$ to send ${N_B}_(K_"AB")$ simultaneously with the ticket prevents $B$ from falsely believing they have established a valid session with an attacker that could forward the ticket faster than $A$ but did not have $K_"AB"$ to actually communicate any further. If done repeatedly, and if for example $B$ allocated memory for each open session (by remembering, at the very least, $K_"AB"$), an attacker could potentially perform these steps repeatedly as part of a Denial of Service attack. + One way to solve this problem would be to promote _explicitness_ by specifying intent, in order to avoid any possible misinterpretations. For example, messages 3 and 4 of the protocol could be changed to: $ &3. space.quarter S -> A: quad {B, K_"AB", N_A, N_B}_(K_"AS"), space.quarter {bold("\"Ticket\""), A, K_"AB"}_(K_"BS") \ &4. space.quarter A -> B: quad {bold("\"Ticket\""), A, K_"AB"}_(K_"BS"), space.quarter {N_B}_(K_"AB") $ Changing the ticket to explicitly state it consists of a ticket prevents it from having a structure too similar to other messages or message components, regardless of the serialization implementation. Finally, one should note that the countermeasure presented for attack (i) would also prevent this attack, since the length coincidence preconditions would no longer be true. #pagebreak() = Eavesdropping #set enum(numbering: "1)") + Considering the worst case scenario of $d(A, E) = 1.5 dot.c d(A, B)$, we can use the provided formula to calculate the secure rate between $A$ and $B$: $ R_s (A, B) =& R(A, B) - R(A, E) \ =& k/(d^2 (A, B)) - k/(d^2 (A, E)) \ =& k/(d^2 (A, B)) - k/(1.5^2 dot.c d^2 (A, B)) \ =& k/(d^2 (A, B)) (1 - 1/(1.5^2)) \ =& 5/9 dot.c k/(d^2 (A, B)) $ where $k$ is some proportionality constant. We can then use that result to calculate the percentage of the transmission that can be communicated confidentially: $ (R_s (A, B))/(R(A, B)) = (5/9 dot.c cancel(k/(d^2 (A, B))))/cancel(k/(d^2 (A, B))) = 5/9 approx 55.56% $ + Using the percentage calculated in the previous question, we can determine the probability of transmitting data confidentially, on average, if we analyze the behavior over an arbitrarily large number of transmissions: $ P = lim_(n -> +oo) (5/9 dot.c cancel(n))/cancel(n) = 5/9 $ where $5/9 dot.c n$ is the percentage of $n$ transmissions that can be communicated confidentially. + #set enum(numbering: "a)") + With channel hopping every 250ms, a transmission that spans 1 second will last $(1000"ms")/(250"ms") = 4$ slots. We can therefore calculate the probability of such a transmission being completely confidential as: $ P = sum_(i=0)^4 binom(4, i) (7/10)^i (3/10 dot.c 5/9)^(4-i) approx 56.4% $ where, with $i = 0..4$ channels being free from any eavesdroppers, for the 4 slots: - $binom(4, i)$ represents choosing which $i$ channels are free; - $(7/10)^i$ is the probability of choosing $i$ free channels $(7 = 10 - 3)$; and - $(3/10 dot.c 5/9)^(4-i)$ is the probability of choosing $4-i$ eavesdropped channels $(3/10)$ and of communicating confidentially in each of them $(5/9)$ --- here, it is assumed that the circumstances of the previous questions still apply, with $A$ and $B$ still using (only) the proper physical-layer secure coding technique to transmit the data. 
#v(1fr) // force page break, for style (next item would not fully fit) + A transmission spanning 0.5 seconds will last $(500"ms")/(250"ms") = 2$ slots. For at least $78%$ to be confidential, there are 2 possible cases: - either both channels chosen (for each of the 2 slots) are free and not being eavesdropped --- in this case, $100% >= 78%$ is transmitted confidentially; or - one of the channels selected (for either of the 2 slots) is free, but the other is being eavesdropped --- in this case, $1/2 dot.c (100% + 5/9) = 7/9 approx 78%$ is transmitted confidentially. We can therefore calculate the probability with: $ P = 1 - (3/10)^2 = 91% $ where $(3/10)^2$ is the probability of choosing an eavesdropped channel twice (which is the only case that would lead to less than $78%$ of the transmission being confidential, as $1/2 dot.c (5/9 + 5/9) = 5/9 approx 55.56% < 78%$). #pagebreak() = Distributed Denial of Service #set enum(numbering: "1)") + Assuming that the first (all-zeroes host ID) and last (all-ones host ID) addresses of each network cannot be hosts due to representing the network and broadcast (respectively), we have, for each network: #let net_sizes = (28, 25, 23, 27) #let host_counts = () #for (i, size) in net_sizes.enumerate(start: 1) [ #host_counts.push(calc.pow(2, 32 - size) - 2) - Network \##i: $2^(32 - #size) - 2 = #(host_counts.last())$ hosts ] #let total_hosts = host_counts.sum() In total, this means #total_hosts hosts can participate in the attack. + #let host_uplink = 2 // Mbit/s #let total_uplink = host_uplink * total_hosts With each host having #host_uplink Mbit/s of bandwidth, and considering #total_hosts hosts, in aggregate they can generate a total bandwidth of $#host_uplink dot.c #total_hosts = #total_uplink "Mbit/s"$. + #let server_downlink = 2000 // Mbit/s At the peak of the attack, $ (#total_uplink "Mbit/s")/(#server_downlink "Mbit/s") = #(100*total_uplink/server_downlink)% $ of the webserver's link is used. + #let syn_size = 60 // Bytes #let host_syn_rate = calc.floor(host_uplink*1e6/(syn_size*8)) Each host can generate $ floor((#host_uplink "Mbit/s")/(#syn_size "Bytes")) = floor((#(host_uplink*1e6) "bits")/(#syn_size dot.c 8 "bits")) = #host_syn_rate "SYN/s" $ This means that each network can generate #for (i, count) in host_counts.enumerate(start: 1) [ - Network \##i: $#count "hosts" times #host_syn_rate "SYN/s" = #(count * host_syn_rate) "SYN/s"$ ] + #let server_mem = 8 // GBytes (metric) #let server_connection_alloc = 256 // Bytes #let max_syn_segs = server_mem * 1e9 / server_connection_alloc Assuming that each SYN segment received causes the webserver to allocate #server_connection_alloc Bytes, the number of segments required to fill up the web server's available memory is: $ (#server_mem "GBytes")/(#server_connection_alloc "Bytes") = (#server_mem times 10^9)/#server_connection_alloc = #max_syn_segs "segments" $ Please note that it is assumed that the server's memory size is #server_mem GB and not #server_mem GiB. 
+ #let host_clog_time = calc.round(max_syn_segs / host_syn_rate) One host can clog the webserver's memory within $ (#max_syn_segs "SYN")/(#host_syn_rate "SYN/s") approx #host_clog_time "s" $ + Using the previous question's result, each network can clog the webserver's memory within #let net_clog_times = () #for (i, count) in host_counts.enumerate(start: 1) [ #net_clog_times.push(calc.round(host_clog_time/count)) - Network \##i: $(#host_clog_time "s")/(#count "hosts") approx #net_clog_times.last() "s" $ ] #v(1fr) // force page break, for style (next item would not fully fit) + #let total_clog_time = calc.round(host_clog_time/total_hosts) All the networks together can clog the webserver in $ (#host_clog_time "s")/(#total_hosts "hosts") approx #total_clog_time "s" $ + #let ids_detection_percentage = 0.3 // 0..1 #let ids_detection_time = ids_detection_percentage * total_clog_time The attack would be detected by the IDS in $#(ids_detection_percentage*100)% dot.c #total_clog_time "s" approx #ids_detection_time "s"$. + In order to prevent against SYN flooding attacks, there are several measures that can be taken, such as: - Using SYN cookies, preventing the server from allocating memory when receiving an initial SYN segment, only doing so after receiving the client's second message (this is also effective against origin spoofing, as the client will only be able to send a second message if they receive the server's response); - Blocking TCP traffic and using SCTP _(Stream Control Transmission Protocol)_ instead, as it natively supports 4-way handshakes with cookies; - Installing an Intrusion Prevention System (IPS) that has a certain probability of detecting SYN flooding attacks and dynamically adjusting firewall rules in order to block that traffic; and - Mediating all connections with a proxy or load balancing server that is dedicated to bearing this load and delegating jobs to the actual web server(s). #pagebreak() = Firewalls #set enum(numbering: "1)") + #let requirement = counter("requirement") #let req() = { requirement.step() requirement.display("(a)") } #let rule = counter("rule") #let r() = { rule.step() rule.display() } Such stateful firewall rules could be: #align(center)[ #tablex( columns: (auto, auto, 2fr, 2fr, 2fr, 1fr, 1fr, auto, auto), align: center + horizon, [*\#*], [*Direction*], [*Source*], [*Destination*], [*Protocol*], [*Src. Port*], [*Dest. 
Port*], [*State*], [*Action*], hlinex(stroke: 2pt), colspanx(9)[_Requirement #req()_], [#r() <udp-vpn-in>], [IN], [#sym.star], [172.16.17.32/24], [UDP], [#sym.star], [1194], [N / E], [ACCEPT], [#r() <udp-vpn-out>], [OUT], [172.16.17.32/24], [#sym.star], [UDP], [1194], [#sym.star], [EST.], [ACCEPT], [#r() <udp-drop-in>], [IN], [#sym.star], [172.16.17.32/24], [UDP], [#sym.star], [#sym.star], [N / E], [DROP], [#r() <udp-drop-out>], [OUT], [172.16.17.32/24], [#sym.star], [UDP], [#sym.star], [#sym.star], [N / E], [DROP], colspanx(9)[_Requirement #req()_], [#r() <attacker-in>], [IN], [203.0.113.0/24], [172.16.17.32/24], [#sym.star], [#sym.star], [#sym.star], [N / E], [DROP], [#r() <attacker-out>], [OUT], [172.16.17.32/24], [203.0.113.0/24], [#sym.star], [#sym.star], [#sym.star], [N / E], [DROP], colspanx(9)[_Requirement #req()_], [#r() <ftp-plain-out>], [OUT], [172.16.17.32/24], [192.168.3.11], [TCP], [#sym.star], [20, 21], [N / E], [REJECT], [#r() <ftp-secure-out>], [OUT], [172.16.17.32/24], [192.168.3.11], [TCP], [#sym.star], [22], [N / E], [ACCEPT], [#r() <ftp-secure-in>], [IN], [192.168.3.11], [172.16.17.32/24], [TCP], [22], [#sym.star], [EST.], [ACCEPT], colspanx(9)[_Requirement #req()_], [#r() <ssh-in>], [IN], [198.51.100.0/24], [172.16.17.32/24], [TCP], [#sym.star], [22], [NEW], [ACCEPT], [#r() <ssh-out>], [OUT], [172.16.17.32/24], [172.16.17.32/24], [TCP], [22], [#sym.star], [EST.], [ACCEPT], colspanx(9)[_Requirement #req()_], [#r() <icmp-out>], [OUT], [172.16.17.32/24], [#sym.star], [ICMP], [---], [---], [N / E], [ACCEPT], [#r() <icmp-est-in>], [IN], [#sym.star], [172.16.17.32/24], [ICMP], [---], [---], [EST.], [ACCEPT], [#r() <icmp-new-in>], [IN], [#sym.star], [172.16.17.32/24], [ICMP], [---], [---], [NEW], [DROP], colspanx(9)[_Requirement #req()_], [#r() <dns-out>], [OUT], [172.16.17.32/24], [#sym.star], [UDP, TCP], [#sym.star], [53], [N / E], [ACCEPT], [#r() <dns-in>], [IN], [#sym.star], [172.16.17.32/24], [UDP, TCP], [53], [#sym.star], [EST.], [ACCEPT], colspanx(9)[_Requirement #req()_], [#r() <other-in> ], [IN], [#sym.star], [172.16.17.32/24], [#sym.star], [#sym.star], [#sym.star], [N / E], [DROP], [#r() <other-out> ], [OUT], [172.16.17.32/24], [#sym.star], [#sym.star], [#sym.star], [#sym.star], [N / E], [DROP], ) ] where "N / E" denotes "NEW / ESTABLISHED", and "EST." only the latter connection state. + #let rnum(target) = "#" + locate(loc => rule.at(query(target, loc).first().location()).first() + 1) #let rnums(..targets) = targets.pos().map(rnum).join(", ") In order to fulfill all the requirements, a possible rule ordering could be, for each default policy: - *DROP-ALL:* #rnums(<attacker-in>, <attacker-out>, <udp-vpn-in>, <udp-vpn-out>, <ftp-plain-out>, <ftp-secure-out>, <ftp-secure-in>, <ssh-in>, <ssh-out>, <icmp-out>, <icmp-est-in>, <dns-out>, <dns-in>). - *ACCEPT-ALL:* #rnums(<attacker-in>, <attacker-out>, <udp-vpn-in>, <udp-vpn-out>, <ftp-plain-out>, <ftp-secure-out>, <ftp-secure-in>, <ssh-in>, <ssh-out>, <icmp-out>, <icmp-est-in>, <dns-out>, <dns-in>, <other-in>, <other-out>). #v(1fr) // force page break, for style (next item would not fully fit) + The two default policies work in opposite fashion and can be considered to be more appropriate for different contexts. 
Example use cases could be: #set enum(numbering: "(i)") - *DROP-ALL:* + A sensitive network that must be as isolated as possible, with any potentially permitted traffic pattern having to be identified, analyzed, vetted, and approved for security purposes + An environment pertaining to a highly-regulated economic sector, where certain requirements (such as forbidding any outbound traffic flows, for data protection reasons) are paramount and must be easily audited by government authorities - *ACCEPT-ALL:* + A server that frequently launches services on different ports, with a dynamicity that makes it harder to constantly adjust firewall rules to allow traffic to and from those services (or if doing so would introduce too much overhead) + A development environment where flexibility and ease of service/configuration deployment take precedence over strict, in-depth access control (especially if it is already part of a larger, stricter network that safeguards it from most external interference) It is also worth noting that it is trivial to emulate the opposite policy when using a given default policy, by simply appending the chain with a rule ACCEPT'ing or DROP'ing all traffic (respectively for DROP-ALL and ACCEPT-ALL). #pagebreak() = Password Management #set enum(numbering: "a)") + If the alphabet being considered is the same, then yes: $N = \# Sigma$ is constant, therefore so will $log_2 N$ be, and thus $H = L dot.c log_2 N prop L$. Otherwise, if the alphabet is different, no conclusions can be inferred. For example, if $N_1 = \# Sigma_1 = 4$, $L_1 = 10$, $N_2 = \# Sigma_2 = 64$, $L_2 = 4$, we have $L_1 > L_2$, but #set math.cases(reverse: true) $ cases(H_1 &= L_1 dot.c log_2 N_1 &= 10 dot.c log_2 4 &= 20 "bits", H_2 &= L_2 dot.c log_2 N_2 &= 4 dot.c log_2 64 &= 24 "bits") ==> H_1 < H_2 $ This can have subtle implications, as several different alphabets may be at play, and the password's entropy (as a measure of unpredictability) would then be the minimum of all those partial entropy values (each with regard to a different alphabet). For instance, although `qwerty` has the same length as `mqprhx` and so would have the same entropy value with respect to an alphabet such as the set of all ASCII lowercase letters, `qwerty` is much more likely to be susceptible to a dictionary attack, wherein $L_D = 1$ and $N_D$ would be the dictionary's length (which could be quite small and still include this word, if per usual it contained the most common passwords), leading to a much smaller $H = H_D$. + #set enum(numbering: "(i)") + 16 symbols: $H = 16 dot.c log_2 256 = 128 "bits"$ + 20 symbols: $H = 20 dot.c log_2 256 = 160 "bits"$ + #set math.cases(reverse: true) $ cases("Naïvely," &H_N &= 10 dot.c log_2 256 &= 80 "bits", "Using dictionary," &H_D &= 1 dot.c log_2 2000 &approx 11 "bits") ==> H = min{H_N, H_D} approx 11 "bits" $ + KDFs _(Key Derivation Functions)_ are used to derive cryptographic keys from a secret. The primary purpose of having several hash iterations in KDFs is to make it slower for attackers to brute-force cracking the secret key by trying different possible secret combinations and checking whether the final keys match the expected value. If the function has more hash iterations, it becomes slower to compute, therefore potentially making it infeasible for attackers to try many combinations. 
This does not change the asymptotic time complexity associated with the attack, but it does introduce a large constant that can nonetheless have severe practical consequences for the attack's feasibility. From a security standpoint, more iterations make the KDF slower and are therefore better (magnifying the effect described above), but there is a trade-off to be considered with _usability_: if the KDF is _too_ slow, it may prove to be a limitation to legitimate users and impact normal system usage. In summary, more iterations are better but only up to a certain point, after which the impact on user experience is non-negligible. + Considering peppers as described in the question, and assuming (per its wording) that they are kept by the user but generated securely (per-user, perhaps pseudo-randomly): - Advantage: if the system (including the authentication database) is fully compromised, the pepper is not stored anywhere, so attackers still need to bruteforce each user's pepper, which may be computationally infeasible (especially when paired with a slow hash function); - Disadvantage: there can be usability concerns with regards to requiring users to store their pepper value and submitting it on every login request, which can be a burden if the pepper is sufficiently long. #v(1fr) // force page break, for style (next item would not fully fit) + We can quantify password cracking difficulty using entropy: - *64-bit salt:* since the salt value is known, using it is trivial if the scheme is public, and only ensures the calculation happens in the first place rather than using rainbow tables with pre-cracked hashes. This means that the entropy will be the same as the password's own entropy: $ H = log_2 1 + H_P = 0 + H_P = 10 "bits" $ - *64-bit pepper:* we now need to consider the unpredictability associated with the need for bruteforcing each of the 64 bits in the pepper: $ H = 64 dot.c log_2 2 + H_P = 64 + 10 = 74 "bits" $ Here, we assume that both the salt/pepper and the password are independent, and therefore $H = H_("salt/pepper") + H_P$. + In terms of unpredictability, $ H_P &= 32 dot.c log_2 16 &= 128 "bits" \ H_F &= 128 dot.c log_2 2 &= 128 "bits" $ the entropy is the same, so in theory it does not matter which of them is used. The original password should therefore be preferred, in order to avoid unnecessary calculations associated with the KDF. However, this is no longer the case if other factors are present, such as if the KDF is not deterministic (based exclusively on the secret input) and generates a session-specific key, in which case the KDF output should be used to promote forward secrecy across sessions, assuming that the KDF is one-way. #pagebreak() = Byzantine Link #set enum(numbering: "1)") + Communicating exclusively in-band through LSAs _(Link State Advertisements)_, it is not possible for $G$ and $E$ to introduce a fake link among themselves, as the advertisement would have to necessarily pass by one or more other routers, who would discard it. For example, if $G$ generated and signed an advertisement $AA = {"'I am G'", "'Next hop is E'"}_("Priv"_G)$ and then sent it to $E$ through $F$, the latter would realize that $AA$ is invalid (the next hop field should be $F$, not $E$) and would drop it. Conversely, if $G$ and $E$ can communicate out-of-band, it is possible for them to pretend a fake link exists between them. 
For example, $G$ can generate $AA = {"'I am G'", "'Next hop is E'"}_("Priv"_G)$ as before, but now send it encoded as a regular data message addressed to $E$ (rather than announcing it as a control LSA to $F$). As $AA$ would now be disguised as a regular, inconspicuous data stream, any intermediary routers would be oblivious to it representing an LSA and would not validate it, simply forwarding it to $E$. On arrival, $E$ could then generate $AA' = {AA, "'I am E'", "'Next hop is D'"}_("Priv"_E)$ and only now advertise $AA'$ as an LSA that would necessarily be considered valid by other routers. Using this technique, $G$ and $E$ would be able to trick others into believing a link exists between them. Evidently, this could also be simplified if both routers had each other's private keys and could sign initial advertisements on behalf of each other, therefore being able to forge valid LSAs claiming a link exists between them. + As asymmetric cryptography primitives can be computationally intensive and introduce undesired overhead to every announcement hop, it would be beneficial to reduce this cost, while maintaining the desirable properties of authenticated announcements. One solution would be to use the protocol described in Papadimitratos & Haas (2002)#footnote[<NAME> and <NAME>, "Securing the Internet routing infrastructure," in IEEE Communications Magazine, vol. 40, no. 10, pp. 60-68, Oct. 2002, doi: 10.1109/MCOM.2002.1039858.]: - Each router $R$ sends their initial LSA as normal, with the exception that for each link $j$ it chooses a random value $N_(R,j)$ and includes $H^n (N_(R,j))$ in that initial LSA (which is authenticated with $R$'s private key and can be verified to be so by a receiving router $S$) --- here, $H^n (x) = H(H(...(H(x))))$ represents computing $n$ successive iterations of a hash function $H$, for some large $n$ - For each subsequent LSA transmitted by $R$, instead of signing it with its private key (which would be computationally expensive), it simply attaches to the LSA the next value in the hash chain, i.e., $H^(n-p) (N_(R,j))$, where $p$ is a counter between $0$ and $n$ of how many LSAs (including the initial one) have been sent by $R$ --- this means that each value $H^(n-p) (H_(R,j))$ is only sent once, as $p$ increases immediately afterwards - When another router $S$ receives an LSA from $R$ that is not signed with the latter's private key but rather has some hash value $HH$, $S$ can verify the LSA was legitimately sent by $R$ by checking whether $H(HH) = HH'$, where $HH'$ is the previous hash value received from $R$ (in the previous authenticated LSA) --- since $H$ is one-way, only $R$ (that has the original secret) can generate $HH$ such that $H(HH) = HH'$. $S$ then stores $HH$ as the new value for $HH'$, so that it can verify the next LSA in the same fashion - When $p = n$, $R$ chooses a new $N'_(R,j)$ and sends another LSA authenticated with its private key, including $H^n (N'_(R,j))$ and re-starting the hash chain from another secret This protocol has the advantage of only requiring asymmetric cryptography computations every $n$ LSAs sent, greatly reducing the overhead associated with advertisement security. It also centralizes most of the computational burden on just one node (the LSA sender), as all others only need to compute one hash iteration (though the sender must compute $n-p$). 
+ Yes, if $A$ and $D$ wish to communicate with each other, the shortest real path $A - H - C - D$ has cost $2 + 4 + 3 = 9$, but if $G$ and $E$ advertise a fake link between them with cost $alpha <= 4$, the path $A - G - E - D$ with cost $3 + alpha + 1 <= 8 < 9$ would become the shortest, therefore tricking $A$ and $D$ into communicating through the malicious routers and attracting traffic. + In order for $G$ and $E$ to control all communications between $A$ and $D$ regardless of what cost is advertised for the $G-E$ fake link (allowing it even to be arbitrarily large, $alpha >> 4$), they could recruit router $C$ to also become malicious. As all traffic passes through $C$ (except for that which already passes through $G$ and/or $E$), if that router can be controlled by a malicious actor, it could advertise all its paths as costing a very high value (or not advertise them at all), leading $A$ and $D$ to always choosing to route through $G$/$E$. $D$ would receive $C$'s new advertisement (either inflated or non-existent) directly, and $A$ would indirectly feel its consequences through the information propagated by $B$ and $H$. #pagebreak() = RPKI and ROA #set enum(numbering: "1)") + Considering the following language and procedures from the Internet Engineering Task Force: #block(stroke: (left: 2pt + rgb("#888")), quote(block: true, attribution: [RFC6811])[ - (...) - Covered: A Route Prefix is said to be Covered by a VRP when the VRP prefix length is less than or equal to the Route prefix length, and the VRP prefix address and the Route prefix address are identical for all bits specified by the VRP prefix length. (That is, the Route prefix is either identical to the VRP prefix or more specific than the VRP prefix.) - Matched: A Route Prefix is said to be Matched by a VRP when the Route Prefix is Covered by that VRP, the Route prefix length is less than or equal to the VRP maximum length, and the Route Origin ASN is equal to the VRP ASN. Given these definitions, any given BGP Route will be found to have one of the following validation states: - NotFound: No VRP Covers the Route Prefix. - Valid: At least one VRP Matches the Route Prefix. - Invalid: At least one VRP Covers the Route Prefix, but no VRP Matches it. 
]) For each announcement: #set enum(numbering: "(1):") + `172.16.31.10/16`, originally announced by `AS234` (`/16` means a netmask of `255.255.0.0`) - `172.16.31.10/16` is not compatible with any known VRP, so no VRP Covers the announcement - Conclusion: #underline([_unknown_]) + `192.168.127.12/8`, originally announced by `AS213` (`/8` means a netmask of `255.0.0.0`) - VRP \#2 Covers the announcement, as $8_"(VRP.length)" <= 8_"(announcement)"$ and the first 8 bits of `192.168.127.12` (VRP) and `192.168.127.12` (announcement) are identical - VRP \#2 Matches the announcement, as it Covers it, $8_"(announcement)" <= 16_"(VRP.maxlength)"$, and the announcement's origin ASN (`AS213`) is equal to the VRP's ASN (`AS213`) - Conclusion: #underline([_valid_]) (at least one VRP Matches the announcement) + `192.168.127.12/16`, originally announced by `AS213` (`/16` means a netmask of `255.255.0.0`) - VRP \#1 Covers the announcement, as $16_"(VRP.length)" <= 16_"(announcement)"$ and the first 16 bits of `192.168.127.12` (VRP) and `192.168.127.12` (announcement) are identical - VRP \#1 does not Match the announcement, as the latter's origin ASN (`AS213`) is not equal to the VRP's ASN (`AS234`) - No other VRPs Cover the announcement, as `192.168.127.12/16` is not compatible with the only other one - Conclusion: #underline([_invalid_]) (at least one VRP Covers the announcement, but no VRP Matches it) + `172.16.17.32/16`, originally announced by `AS213` (`/16` means a netmask of `255.255.0.0`) - VRP \#2 Covers the announcement, as $8_"(VRP.length)" <= 16_"(announcement)"$ and the first 8 bits of `192.168.127.12` (VRP) and `172.16.17.32` (announcement) are identical - VRP \#2 Matches the announcement, as it Covers it, $16_"(announcement)" <= 16_"(VRP.maxlength)"$, and the announcement's origin ASN (`AS213`) is equal to the VRP's ASN (`AS213`) - Conclusion: #underline([_valid_]) (at least one VRP Matches the announcement) #v(1fr) // force page break, for style (next item would not fully fit)
+ `172.16.31.10/24`, originally announced by `AS234` (`/24` means a netmask of `255.255.255.0`) - VRP \#1 Covers the announcement, as $16_"(VRP.length)" <= 24_"(announcement)"$ and the first 16 bits of `192.168.127.12` (VRP) and `172.16.31.10` (announcement) are identical - VRP \#1 does not Match the announcement, as $24_"(announcement)" lt.eq.not 16_"(VRP.maxlength)"$ - No other VRPs Cover the announcement, as `172.16.31.10/24` is not compatible with the only other one - Conclusion: #underline([_invalid_]) (at least one VRP Covers the announcement, but none Match it) + #set enum(numbering: "1)") + Yes, VRP \#2 is susceptible to a forged-origin sub-prefix hijack, as it makes use of the maxlength property. Announcement \#4 could be an example of such an attack, with `AS431` forging an announcement (which, as described in the previous question, would be accepted as valid) as if it had originated from `AS213`, who is authorized to issue it. If `AS213` does not make any analogous announcement, perhaps because it only uses (and announces) another subnet such as `192.168.3.11/16`, the attacker could announce `172.16.17.32/16` and be uncontested (this would be the only route for that subnet, as no other would be announced by the real AS), effectively giving them control over the subnet. Even with `AS213` announcing `192.168.127.12/8`, the attacker's `172.16.17.32/16` announcement would still prevail as it is more specific than the former, and routers will choose the route with the longest prefix. 
+ Yes, the solution would be to only use minimal ROAs - that is, only issue ROAs for exactly the routes that will be announced, rather than relying on the maxlength attribute for the flexibility it offers (which comes as a trade-off for security). In this case, the following two ROAs could be used: $ &("AS213", 192.168.127.12, 8, -) \ &("AS213", 192.168.3.11, 16, -) $ This means that an attacker could no longer announce `172.16.17.32/16` (even if allegedly on behalf of `AS213`), as none of the ROAs would Cover it. It should be noted, however, that this solution prevents forged-origin *sub-*prefix hijacks, but it does not secure against forged-origin *prefix* hijacks, as an attacker can still forge an announcement such as `(192.168.127.12/8; ASPATH: AS[attacker], AS213)`, which would conflict with the real announcement made by the legitimate `AS213`. These attacks are of a lower severity, however, as then the attacker would no longer be presenting the _only_ route to `AS213`, just an additional one that would attract less traffic (especially since the fake announcement would always be one hop longer than the legitimate announcements coming directly from `AS213`). + With a global view of the network topology, we can consider and validate the `ASPATH` attribute in each announcement. In this manner, we can determine that announcement \#4 is invalid as there is no path connecting `AS213` to `AS431` directly, so it must have been forged. The remaining announcements are plausible as they refer to existing router paths in accordance with the known network layout.
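As an illustrative addition (not part of the original answers), the Covered/Matched/validity definitions quoted from RFC6811 above can be mechanized in a few lines of Typst. Prefix addresses are assumed to be pre-parsed into 32-bit integers, and the field names (`addr`, `plen`, `maxlen`, `asn`) are made up for this sketch:

```typst
// Does a VRP Cover a route? (top `plen` bits of the addresses agree)
#let covers(vrp, route) = (
  vrp.plen <= route.plen
    and vrp.addr.bit-rshift(32 - vrp.plen) == route.addr.bit-rshift(32 - vrp.plen)
)
// Does a VRP Match a route? (Covered, within maxlength, same origin ASN)
#let matches(vrp, route) = (
  covers(vrp, route) and route.plen <= vrp.maxlen and route.asn == vrp.asn
)
// Validation state of a route against a set of VRPs, per RFC6811
#let validity(vrps, route) = {
  let covering = vrps.filter(v => covers(v, route))
  if covering.len() == 0 { "NotFound" } else if covering.any(v => matches(v, route)) { "Valid" } else { "Invalid" }
}
```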
https://github.com/jakobjpeters/Typstry.jl
https://raw.githubusercontent.com/jakobjpeters/Typstry.jl/main/docs/source/references/internals.md
markdown
MIT License
# Internals This reference documents non-public utilities. ```@docs Typstry.compile_workload ``` ## Strings ```@docs Typstry.Strings Typstry.Strings.examples Typstry.Strings.typst_mime Typstry.Strings.backticks Typstry.Strings.block Typstry.Strings.code_mode Typstry.Strings.depth Typstry.Strings.enclose Typstry.Strings.indent Typstry.Strings.join_with Typstry.Strings.math_mode Typstry.Strings.math_pad Typstry.Strings.maybe_wrap Typstry.Strings.mode Typstry.Strings.parenthesize Typstry.Strings.show_array Typstry.Strings.show_parameters Typstry.Strings.show_raw Typstry.Strings.show_vector Typstry.Strings.escape ``` ### Dates.jl !!! info A Dates.jl package extension would currently print warnings during precompilation. See also the [Julia issue #52511](https://github.com/JuliaLang/julia/issues/52511) ```@docs Typstry.Strings.date_time Typstry.Strings.duration Typstry.Strings.dates ``` ## Commands ```@docs Typstry.Commands Typstry.Commands.typst_compiler Typstry.Commands.apply Typstry.Commands.format ```
https://github.com/f7ed0/typst-template
https://raw.githubusercontent.com/f7ed0/typst-template/master/cv.typ
typst
#import "lib/blocks.typ" : * #let educationElement(dates : [FROM - TO], title : [TITLE], school : [school], location : [location], GPA : [GPA : 5.0], coursework : ()) = [ #set block(spacing : 0.5em) #text(dates, weight : 500, fill : gray.darken(30%)) #text(title, weight : 500) : #text(school) #text(location) #text(GPA, weight : 500) #text("Relevant Coursework :", weight : 600) #pad(left : 5pt,list(..coursework)) ] #let workElement(dates : [FROM - TO], title : [TITLE], enterprise : [enterprise], location : [location], tasks : ()) = [ #set block(spacing : 0.5em) #text(dates, weight : 500, fill : gray.darken(30%)) #text(title, weight : 500) : #text(enterprise) #text(location) #pad(left : 5pt,list(..tasks)) ] #let ExCuElement(title : [TITLE], subtitle : [SUBTITLE], info : [INFO]) = [ #set block(spacing : 0.5em) #text(title, weight : 500) : #text(subtitle) #pad(left : 5pt,info) #block(height : 5pt) ] #let init( doc, name : [BASIC CV], title : [Student at university], desc : lorem(50), telephone : [tel], email : [email], linkedIn : [linkedIn], location : [location], website : [website], github : [github], cqual : ([a],[b]), educations : (educationElement(),), workexp : (), ExCu : (), col : blue ) = { set block(spacing : 1em) set page(paper: "a4", margin : 0pt) set text(size : 10pt, font : "Fira Sans") block(inset : 20pt,width : 100%, fill : col, below: 0pt)[ #align(center + horizon)[ #text(name, size : 24pt, fill : white, weight : 800) #text(title, size : 13pt, fill : white, weight : 600) #line(length : 45%, stroke : white) #align(left, text(desc, size : 10pt, fill : white)) ] ] block(width : 100%, inset : 7pt, fill : col.lighten(50%), below: 5pt)[ #align(center, table(columns : (1fr,1fr,1fr), stroke : none, inset : 2pt, table.cell(align(horizon, grid( columns : (20pt, 1fr),image("assets/pin.png", height : 15pt),text(location)))), table.cell(align(horizon, grid( columns : (20pt, 1fr),image("assets/mail.png", height : 15pt),text(email)))), table.cell(align(horizon, grid( columns : (20pt, 1fr),image("assets/call.png", height : 15pt),text(telephone)))), table.cell(align(horizon, grid( columns : (20pt, 1fr),image("assets/in.png", height : 15pt),text(linkedIn)))), table.cell(align(horizon, grid( columns : (20pt, 1fr),image("assets/link.png", height : 15pt),text(website)))), table.cell(align(horizon, grid( columns : (20pt, 1fr),image("assets/gh.png", height : 15pt),text(github)))) )) ] block(inset : 5pt)[ #block(width : 100%, inset : 7pt, height : 14%, below : 0pt)[ #text("CORE QUALIFICATION", size : 11pt, weight : 600) #pad(left : 20pt,columns(2, gutter : 5%,list(..cqual))) ] #block(width : 100%, inset : 7pt, below : 0pt)[ #text("EDUCATION", size : 11pt, weight : 600) #pad(left : 20pt, grid(columns : (1fr), ..educations)) ] #block(width : 100%, inset : 7pt, below : 0pt)[ #text("WORK EXPERIENCE", size : 11pt, weight : 600) #pad(left : 20pt, grid(columns : (1fr), ..workexp)) ] #block(width : 100%, inset : 7pt, below : 0pt)[ #text("EXTRACURRICULAR EXPERIENCE", size : 11pt, weight : 600) #pad(left : 20pt, grid(columns : (1fr), ..ExCu)) ] ] doc }
https://github.com/AU-Master-Thesis/thesis
https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/sections/3-methodology/study-3/global-planning.typ
typst
MIT License
#import "../../../lib/mod.typ": * === Global Planning <s.m.global-planning> #let gp = ( robot: text(theme.peach, $bold(R)$), A: text(theme.lavender, $bold(A)$), B: text(theme.mauve, $bold(B)$), A1: boxed(text(weight: 900, "GP-1")), A2: boxed(text(weight: 900, "GP-2")), ) Global planning has been made as an extension to the original GBP Planner software developed by @gbpplanner. The original algorithm works very well on a local level, and lacks a global overview of how to get from #gp.A to #gp.B. In order to solve this problem; a pathfinding algorithm has to be leveraged. In this thesis, the optimal #acr("RRT*") path planning algorithm has been utilized. The theory behind #acr("RRT*")@sampling-based-survey@erc-rrt-star can be found in @s.b.rrt-star, which builds on the original #acr("RRT")@original-rrt algorithm in @s.b.rrt. However, do note that the method described here is algorithm-agnostic, that is; as long at the path-finding algorithm in use outputs a series of points that avoid obstacles, it can be swapped in instead of #acr("RRT*"). The global planning procedure follows @ex.global-planning. The environment for each experiment scenario is generated from an configuration file, which is described in @s.m.configuration, and the experimentation scenarios can be seen in @s.r.scenarios. #pagebreak(weak: true) #example( caption: [Global Planning Procedure], )[ #set par(first-line-indent: 0em) #[ Algorithm-agnostic global planning procedure. Any time PF is mentioned, it refers to the path-finding algorithm in use, e.g. #acr("RRT*"). ] #grid( columns: (2.3fr, 3fr), blocked( title: [*Setup*], color: none, divider-stroke: 0pt, )[ - A robot #gp.robot needs to get from point #gp.A to point #gp.B. - The path from #gp.A to #gp.B is convoluted, and includes more than simple obstacles to go around. An exemplification of such an environment could be a maze-like structure, see @f.m.maze-env. ], blocked( title: [*Steps*], color: none, divider-stroke: 0pt, )[ #set enum(numbering: box-enum.with(prefix: "Step ")) + The PF algorithm is used to find a path from #gp.A to #gp.B. + The result of PF is a series of points that can now be leveraged to navigate through the environment. + The GBP local planning will still be in effect, in order to avoid obstacles and other robots in a local context. ] ) ]<ex.global-planning> === ECS Integration <s.m.robot-mission> // #jonas[this is new, please read] To extend the simulation to both handle local planners of fixed number of waypoints given as part of the formation description and global planners that find new waypoints during run-time. A `Mission` component is created that is assigned to each robot entity. It abstracts away which strategy is used. Several systems monitor and an update the component individually, in order to maintain and update its state. For example a system checks each entity with a `Mission` component if they are within reach of their next waypoint. If they are the next waypoint or path planning task is scheduled, and once finished the new waypoints are extended into the mission. Another system checks for when the final waypoint has been reached, and then transition the robots state to being finished and sends a snapshot of the robots historical data to the exporting module. // The simulator is made capable of handling both local planners with a fixed number of waypoints and global planners that find new waypoints in real-time, a `Mission` component is created and assigned to each robot entity. This component abstracts the strategy used. 
Several systems run to monitor and update the component. For example, one system checks if each entity with a `Mission` component is within reach of its next waypoint. If it is, the next waypoint or path planning task is scheduled, and upon completion, new waypoints are added to the mission. Another system checks when the final waypoint is reached, transitions the robot's state to "finished," and sends a snapshot of the robot's historical data to the exporting module.

// as seen in @f.m.rrt-colliders.
// An example of the RRT algorithm in action can be seen in @f.m.rrt-colliders.

#note.jens[more about the collider, and the environment representation.]

=== Environment Integration <s.m.planning.environment-integration>
// #todo[Something something `parry`]
As mentioned earlier, the environment is built from several #acr("AABB") colliders. This is done using the 2D collision detection library `parry2d`@parry2d. @f.m.rrt-colliders shows how the environment is broken into smaller #acr("AABB") collider rectangles. The #acr("RRT*") algorithm defines a collision problem between all the environment colliders and a collision circle with radius $r_C$. Every time the #acr("RRT*") algorithm wants to place a new node, it checks if the collision circle, placed at the position of the new node, intersects with any of the environment colliders. If an intersection is found, the new node is abandoned, and the algorithm tries again by sampling a new point.

The radius of the collision circle is important, as it defines how close the path will get to the obstacles. If the radius is small, the path will tend to get closer to the obstacles, as seen in @f.m.rrt-colliders#text(accent, "A"). With a larger $r_C$, the path will tend towards the middle of the free space, staying far from the environment, as seen in @f.m.rrt-colliders#text(accent, "B"). Furthermore, to ensure that the algorithm does not accept any corner cutting of the environment, the radius of the collision circle has to be at least half of the step length, $r_C gt.eq s"/"2$. The step length is the distance between the current node and where #acr("RRT*") wants to place the new node in the direction of the sampled point.

#figure(
  {
    // set text(font: "JetBrainsMono NF", size: 0.85em)
    grid(
      // columns: (30%, 30%),
      columns: 2,
      std-block(
        breakable: false,
      )[
        #pad(x: 2mm, scale(y: -100%, image("../../../figures/out/rrt-colliders.svg")))
        #v(0.5em)
        #text(theme.text, [A: Small Collision Radius])
      ],
      std-block(
        breakable: false,
      )[
        #pad(x: 2mm, scale(y: -100%, image("../../../figures/out/rrt-colliders-expand.svg")))
        #v(0.5em)
        #text(theme.text, [B: Large Collision Radius])
      ],
    )
  },
  caption: [RRT algorithm and environment avoidance integration, where #acr("RRT*") is tasked with finding a path from the blue#sl to the purple#sp point. A) shows a small collision radius#sg, which results in a path that tends to get closer to the obstacles#sr. B) shows a collision radius equal to half the step length, $r_C = s"/"2$, which results in a path that tends towards the middle of the free space, staying far from the environment. The collision radius for each node is green#sg when no intersection is detected, and yellow#sy when an intersection is detected.],
)<f.m.rrt-colliders>

Again, do note that even though #acr("RRT*") is used here, the collision detection is a detached module, which can also be used with other path-finding algorithms.
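To make the check above concrete, it could be phrased with `parry2d` roughly as follows. This is an illustrative sketch rather than the simulator's actual code; the collider-list representation and the helper name are assumptions made for the example.

```rust
use parry2d::math::Isometry;
use parry2d::query::intersection_test;
use parry2d::shape::{Ball, Cuboid};

/// Sketch: is a collision circle of radius `r_c`, centred at `(x, y)`, clear
/// of every environment AABB collider? (collider list layout is assumed)
fn node_is_collision_free(x: f32, y: f32, r_c: f32, env: &[(Isometry<f32>, Cuboid)]) -> bool {
    let circle = Ball::new(r_c);
    let at_node = Isometry::translation(x, y);
    env.iter().all(|(pos, aabb)| {
        // Treat an "unsupported shape pair" error conservatively as a collision.
        !intersection_test(&at_node, &circle, pos, aabb).unwrap_or(true)
    })
}
```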
The `rrt` crate@rrt-crate has been extended for the purposes of this thesis: as it only provided an #acr("RRT") implementation, the authors have extended it to include the #acr("RRT*") algorithm as well. This is done through the `rrtstar`#footnote[Found in the #source-link("https://github.com/AU-Master-Thesis/rrt", "rrt") crate at #source-link("https://github.com/AU-Master-Thesis/rrt/blob/d4384c7ef96cde507f893d8953ce053659483f85/src/rrtstar.rs#L159", "src/rrtstar.rs:159")]<footnote.rrtstar> function, which provides an interface taking two $N$-dimensional points, a step length, a neighbourhood radius, and a maximum number of iterations; see @lst.rrtstar. Furthermore, it is a higher-order function which takes two closures: `is_collision_free` and `random_sample`.

#listing(
  [
    ```rust
    pub fn rrtstar<N>(
        start: &[N],
        goal: &[N],
        mut is_collision_free: impl FnMut(&[N]) -> bool,
        mut random_sample: impl FnMut() -> Vec<N>,
        extend_length: N,
        max_iters: usize,
        neighbourhood_radius: N,
        stop_when_reach_goal: bool,
    ) -> RRTStarResult<N, f32>
    where
        N: Float + Debug,
    {
        ...
    }
    ```
  ],
  caption: [The `rrtstar`@footnote.rrtstar function signature.],
)<lst.rrtstar>

=== Path Adherence <s.m.planning.path-adherence>
// #jonas[Take a look at this section and then at the top, where I have described a separate study (4) for the path tracking stuff. Does it make sense to separate it out, or is it good to have it in this context. I feel like you are biased towards wanting it in this context, but I am not too sure, since the path tracking is a general factor graph addition, which works no matter how the path is found or given, where the global planning is path-finding stuff, which is separate from figuring out how to follow the path right?]
In general the path-finding algorithm chosen does not matter for either of the approaches described here. What is required is that 1) the found path places waypoints near most of its bends, and 2) the path avoids big obstacles. The possible approaches for the path adherence are as follows:

#set enum(numbering: box-enum.with(prefix: "GP-"))
+ *Waypoint Tracking:* Simply perform a #acr("RRT*"), and use the resulting points as waypoints. The only difference from the original@gbpplanner is that the waypoints are not placed by hand, but by the #acr("RRT*") algorithm, which has information about the environment.
+ *Path Tracking:* From the points in the #acr("RRT*") path, introduce a new kind of factor, $f_t$; a tracking factor. This factor will be used to _track_ the robot along the path, by _pulling_ the prediction horizon towards the found path. This approach leverages the already-existing factorgraph structure to follow a global path on a local level.

The main difference between the two approaches is that the second approach is more likely to adhere to the path, as it is _pulled_ towards the path instead of simply tracking directly towards the next waypoint. Both global planning approaches #gp.A1 and #gp.A2 are generalizable over any sequence of waypoints; in effect these waypoints could be generated by a different path-finding algorithm such as A\* or Dijkstra. However, one could argue that #acr("RRT*") and other sample-based algorithms are more suited for this task, as the found path has not been discretized into a grid, which might need a level of post-processing to be useful. Solutions that use these grid-based algorithms are not explored by this thesis, but not ruled out as a possibility either.
// #jens[Maybe mention this in future work?]
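Before turning to the two approaches, the following shows a hypothetical, minimal invocation of the `rrtstar` interface from @lst.rrtstar. It is a sketch only: the import path, the use of the `rand` crate, the toy obstacle-free collision closure, and all numeric parameters are assumptions made for illustration.

```rust
use rand::Rng;
use rrt::rrtstar; // assumed export path within the extended crate

fn main() {
    let mut rng = rand::thread_rng();
    let result = rrtstar(
        &[0.0f32, 0.0],     // start
        &[10.0, 10.0],      // goal
        |_p: &[f32]| true,  // is_collision_free: toy world without obstacles
        || vec![rng.gen_range(-1.0f32..11.0), rng.gen_range(-1.0f32..11.0)],
        0.5,    // extend_length: the step length s
        5_000,  // max_iters
        1.5,    // neighbourhood_radius
        true,   // stop_when_reach_goal
    );
    // `result` holds the search outcome; its exact shape is defined by RRTStarResult.
    let _ = result;
}
```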
==== Approach 1: Waypoint Tracking <s.m.planning.waypoint-tracking>
The steps to perform this approach are visualized in @f.m.waypoint-tracking, and are:

#[
  #set enum(numbering: box-enum.with(prefix: "Step "))
  + Perform the #acr("RRT*") algorithm to find a path from #gp.A to #gp.B.
  + Where usually #gp.A and #gp.B would be used as waypoints, instead use the points of the resulting path as waypoints.
  + Follow the waypoints with the already existing local planning algorithm without any modifications.
]

#figure(
  [],
  caption: [Waypoint tracking steps.],
)<f.m.waypoint-tracking>

#[
  #set par(first-line-indent: 0em)
  *Expectation:* The waypoints from the path will be followed, however without any guarantees or attempts to adhere to the known obstacle-free line that the resulting #acr("RRT*") path represents. However, as the #acr("RRT*") path is obstacle-free, the original difficulty with more complex environments without global planning is solved. Furthermore, without any path adherence measures other than aiming for the next waypoint, the robots will have much more freedom to cut corners, and also to move around each other in more creative ways.
]

#pagebreak(weak: true)

==== Approach 2: Path Tracking <s.m.planning.path-tracking>
To achieve a level of adherence to the path given to each robot, the factor graph structure can be utilized. A new factor, namely the tracking factor, $f_t$, has been designed to reach this goal. The tracking factor is designed to attach to each variable in the prediction horizon, except for the first and last, which already have anchoring pose factors and thus cannot be influenced either way. In @s.m.tracking-factor, the design of the tracking factor is explained in detail, while @f.m.tracking-factor visualizes the inner workings.

In @f.m.tracking-factor#text(accent, "A") an array of isolated variables with corresponding tracking factors is shown. These variables are spread over a path, with a likely trajectory, as if showing different timesteps of the same variable. Both in @f.m.tracking-factor#text(accent, "A") and @f.m.tracking-factor#text(accent, "B") it can be seen how the tracking factor measures perpendicularly onto the path and then pulls slightly forwards as well, where, close to the corner, the tracking factor essentially pulls towards the corner.

// #figure(
//   {
//     // set text(font: "JetBrainsMono NF", size: 0.85em)
//     grid(
//       columns: (30%, 30%),
//       std-block(
//         breakable: false,
//       )[
//         #image("../../../figures/out/rrt-optimization-no-env.svg")
//         #v(0.5em)
//         #text(theme.text, [A: Path Optimization])
//       ],
//       std-block(
//         breakable: false,
//       )[
//         #image("../../../figures/out/rrt-path-tracking.svg")
//         #v(0.5em)
//         #text(theme.text, [B: Path Tracking])
//       ],
//     )
//   },
//   caption: [Path tracking on the pseudo-optimal path found with #acr("RRT*").],
// )<f.m.path-tracking>

#figure(
  {
    include "figure-tracking.typ"
  },
  caption: [Visualization of the tracking factor's measurements in orange#so. On A) it is visualized how the tracking factor pulls the variable towards the path, while also trying to keep the variable moving along the path. Furthermore, a green area#swatch(theme.green.lighten(35%)) is shown close to the second waypoint $w_1$. Within this area, the tracking factor will track towards the corner. On B) tracking factors are visualized for a robot, #text(theme.lavender, font: "JetBrainsMono NF", [*R*]), moving from $w_0$ to $w_1$.
  _Note that the underlying measurement math of the tracking factor does not exactly pull towards the corner, as is shown here, but in bends that are close to 90#sym.degree it is close._],
)<f.m.tracking-factor>

// #jens[make figure for each approach described above.]
// #jens[make figure that matches the end result of the path tracking.]

#[
  #set par(first-line-indent: 0em)
  // *Expectation:* The path tracking approach will be more likely to adhere to the path, as the tracking factors will _pull_ the prediction horizon towards the found path. This will result in a more _stable_ path, that does not necessarily guarantee strict adherence to the path, but it is expected that the average deviation error from the path will be lower for this approach than for #gp.A1. Lastly, this method will not provide as much freedom to the robots, which could be a good or a bad thing, depending on the context. One might need stronger path adherence guarantees, e.g. a system could exploit the found #acr("RRT*") paths to reason about each robot's whereabouts within some timeframe.
  *Expectation:* The path tracking approach, influenced by the tracking factor $f_t$, will likely adhere more closely to the prescribed path $#m.P$. This factor $f_t$ consistently pulls the prediction horizon towards the path, akin to but opposite of how the interrobot factor $f_i$ pushes the variables apart. This results in a more _stable_ path with lower average _perpendicular deviation error_ compared to approach #gp.A1. The effect should be particularly noticeable around corners, where $f_t$ reduces the tendency to cut corners and helps maintain consistent speed by minimizing the time spent correcting deviations or _catching up_ to the horizon variable. While this approach provides less freedom to the robots, it ensures stronger adherence to the path, which could be beneficial in contexts where precise path tracking is critical, such as in systems exploiting #acr("RRT*") paths for reasoning about each robot's whereabouts within specific timeframes.
]
https://github.com/zagoli/simple-typst-thesis
https://raw.githubusercontent.com/zagoli/simple-typst-thesis/main/README.md
markdown
Apache License 2.0
# simple-typst-thesis

This template defines a frontpage with a centered title and author information, and an optional logo. Each page of the main body has a custom header. The header shows what the current page is about. It does this in three ways:

- If the current page has a main heading, the header uses that.
- If not, the header combines the last main and secondary headings from previous pages.
- Sometimes, there are no secondary headings on previous pages. In that case, the header only uses the last main heading.

A rough sketch of this logic is included at the end of this README.

### PDF

[main.pdf](https://github.com/zagoli/simple-typst-thesis/blob/main/main.pdf)
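### Header rule sketch

The behaviour described above corresponds roughly to a Typst header rule like the following. This is an illustrative sketch, not the template's actual code:

```typst
#set page(header: context {
  let h1 = heading.where(level: 1)
  let h2 = heading.where(level: 2)
  // A main heading on the current page wins.
  let on-page = query(h1).find(h => h.location().page() == here().page())
  if on-page != none {
    emph(on-page.body)
  } else {
    // Otherwise fall back to the last main (and secondary) heading so far.
    let last1 = query(h1.before(here())).at(-1, default: none)
    let last2 = query(h2.before(here())).at(-1, default: none)
    if last1 != none and last2 != none {
      emph([#last1.body / #last2.body])
    } else if last1 != none {
      emph(last1.body)
    }
  }
})
```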
https://github.com/alberto-lazari/cns-report
https://raw.githubusercontent.com/alberto-lazari/cns-report/main/future-work.typ
typst
= Future work <future_work>
Future work should move along two main axes: the improvement of the current project and the parallelization of fuzzing.

== Improving the project
The current project can be improved in several different ways.

Firstly, expanding the project with additional inspectors would significantly broaden its applicability, enabling it to fuzz a more extensive range of applications. Furthermore, introducing features such as the ability to use a specific subset of dictionaries and to adjust logging levels through command flags would enhance the flexibility and customization options for users. Additionally, future development efforts could focus on simplifying the process of downloading the required APKs, thereby streamlining the setup process and enhancing the overall automation.

== Parallelizing the fuzzing
Leveraging the abstraction and automation capabilities already integrated into the system, researchers can now readily replicate experiments within containerized environments. This opens the way for creating Docker images of the system, facilitating seamless testing within controlled, deterministic environments. This approach makes it possible to conduct parallel tests through orchestration services and platforms like Ansible, enabling more efficient and scalable fuzz testing.
https://github.com/stepbrobd/cv
https://raw.githubusercontent.com/stepbrobd/cv/master/readme.md
markdown
MIT License
# Curriculum Vitae ![Status](https://github.com/stepbrobd/curriculum-vitae/actions/workflows/release.yml/badge.svg) CV with [Typst](https://github.com/typst/typst) and [Nix](https://nixos.org). ```shell curl https://api.github.com/repos/stepbrobd/cv/releases/latest | jq -r ".assets[].browser_download_url" ``` ## Build With Nix: ```shell nix build github:stepbrobd/cv ``` ## Provenance Download the latest release (`cv.pdf`), then execute the following command: ```shell gh attestation --repo stepbrobd/cv verify cv.pdf ``` ## License The contents inside this repository, excluding all submodules, are licensed under the [MIT License](license.md). Third-party file(s) and/or code(s) are subject to their original term(s) and/or license(s).
https://github.com/0x1B05/algorithm-journey
https://raw.githubusercontent.com/0x1B05/algorithm-journey/main/practice/note/content/贪心.typ
typst
#import "../template.typ": * #pagebreak() = 贪心 - 狭义的贪心 - 每一步都做出在当前状态下最好或最优的选择,从而希望最终的结果是最好或最优的算法 - 广义的贪心 - 通过分析题目自身的特点和性质,只要发现让求解答案的过程得到加速的结论,都算广义的贪心 == 专题 1 有关贪心的若干现实 & 提醒 1. 不要去纠结严格证明,每个题都去追求严格证明,浪费时间、收益很低,而且千题千面。玄学! 2. 一定要掌握用对数器验证的技巧,这是解决贪心问题的关键 3. 解答几乎只包含贪心思路的题目,代码量都不大 4. 大量累积贪心的经验,重点不是证明,而是题目的特征,以及贪心方式的特征,做好总结方便借鉴 5. 关注题目数据量,题目的解可能来自贪心,也很可能不是,如果数据量允许,能不用贪心就不用(稳) 6. 贪心在笔试中出现概率不低,但是面试中出现概率较低,原因是 淘汰率 vs 区分度 7. 广义的贪心无所不在,可能和别的思路结合,一般都可以通过自然智慧想明白,依然不纠结证明 === 题目1: 字典序最小 `strs` 中全是非空字符串,要把所有字符串拼接起来,形成字典序最小的结果 #code(caption: [题目1: 字典序最小])[ ```java // 暴力方法, 为了验证 // 生成所有可能的排列 // 其中选出字典序最小的结果 public static String way1(String[] strs) { ArrayList<String> ans = new ArrayList<>(); f(strs, 0, ans); ans.sort((a, b) -> a.compareTo(b)); return ans.get(0); } // 全排列代码,讲解038,常见经典递归过程解析 public static void f(String[] strs, int i, ArrayList<String> ans) { if (i == strs.length) { StringBuilder path = new StringBuilder(); for (String s : strs) { path.append(s); } ans.add(path.toString()); } else { for (int j = i; j < strs.length; j++) { swap(strs, i, j); f(strs, i + 1, ans); swap(strs, i, j); } } } public static void swap(String[] strs, int i, int j) { String tmp = strs[i]; strs[i] = strs[j]; strs[j] = tmp; } // strs中全是非空字符串,要把所有字符串拼接起来,形成字典序最小的结果 // 正式方法 // 时间复杂度O(n*logn) public static String way2(String[] strs) { Arrays.sort(strs, (a, b) -> (a + b).compareTo(b + a)); StringBuilder path = new StringBuilder(); for (int i = 0; i < strs.length; i++) { path.append(strs[i]); } return path.toString(); } // 为了验证 // 生成长度1~n的随机字符串数组 public static String[] randomStringArray(int n, int m, int v) { String[] ans = new String[(int) (Math.random() * n) + 1]; for (int i = 0; i < ans.length; i++) { ans[i] = randomString(m, v); } return ans; } // 为了验证 // 生成长度1~m,字符种类有v种,随机字符串 public static String randomString(int m, int v) { int len = (int) (Math.random() * m) + 1; char[] ans = new char[len]; for (int i = 0; i < len; i++) { ans[i] = (char) ('a' + (int) (Math.random() * v)); } return String.valueOf(ans); } // 为了验证 // 对数器 public static void main(String[] args) { int n = 8; // 数组中最多几个字符串 int m = 5; // 字符串长度最大多长 int v = 4; // 字符的种类有几种 int testTimes = 2000; System.out.println("测试开始"); for (int i = 1; i <= testTimes; i++) { String[] strs = randomStringArray(n, m, v); String ans1 = way1(strs); String ans2 = way2(strs); if (!ans1.equals(ans2)) { // 如果出错了 // 可以增加打印行为找到一组出错的例子 // 然后去debug System.out.println("出错了!"); } if (i % 100 == 0) { System.out.println("测试到第" + i + "组"); } } System.out.println("测试结束"); } ``` ] 其中暴力方法的解析, 见#link("经典递归流程.md")[经典递归流程] === #link("https://leetcode.cn/problems/largest-number/")[ 题目2: 最大数 ] 给定一组非负整数 nums,重新排列每个数的顺序(每个数不可拆分)使之组成一个最大的整数。 #tip("Tip")[ 输出结果可能非常大,所以你需要返回一个字符串而不是整数。 ] #code(caption: [题目2: 最大数])[ ```java public static String largestNumber(int[] nums) { int n = nums.length; String[] strs = new String[n]; for (int i = 0; i < n; i++) { strs[i] = String.valueOf(nums[i]); } Arrays.sort(strs, (a, b) -> (b + a).compareTo(a + b)); if (strs[0].equals("0")) { return "0"; } StringBuilder path = new StringBuilder(); for (String s : strs) { path.append(s); } return path.toString(); } ``` ] === #link("https://leetcode.cn/problems/two-city-scheduling/")[ 题目3: 两地调度 ] 公司计划面试 `2n` 人。给你一个数组 `costs` ,其中 `costs[i] = [aCosti, bCosti]` 。第 `i` 人飞往 `a` 市的费用为 `aCosti` ,飞往 `b` 市的费用为 `bCosti` 。 返回将每个人都飞到 `a` 、`b` 中某座城市的最低费用,要求每个城市都有 `n` 人抵达。 #example("Example")[ - 输入:`costs = [[10,20],[30,200],[400,50],[30,20]]` - 输出:`110` - 解释: - 第一个人去 a 市,费用为 10。 - 第二个人去 a 市,费用为 30。 - 第三个人去 b 市,费用为 50。 - 第四个人去 b 市,费用为 20。 ] 思路, 先让所有人都去 a, 
接着算 a 改到 b 的差值, 差值最小的前 n 个去 b. #code(caption: [ 题目3: 两地调度 ])[ ```java public static int twoCitySchedCost(int[][] costs) { int n = costs.length; int[] arr = new int[n]; int sum = 0; for (int i = 0; i < n; i++) { arr[i] = costs[i][1] - costs[i][0]; sum += costs[i][0]; } Arrays.sort(arr); int m = n / 2; for (int i = 0; i < m; i++) { sum += arr[i]; } return sum; } ``` ] #tip("Tip")[ 这里面先算都去 `a`, 再进行修正的技巧值得学习. ] === #link("https://leetcode.cn/problems/minimum-number-of-days-to-eat-n-oranges/")[ 题目4: 吃掉 N 个橘子的最少天数 ] 厨房里总共有 n 个橘子,你决定每一天选择如下方式之一吃这些橘子: - 吃掉一个橘子。 - 如果剩余橘子数 n 能被 2 整除,那么你可以吃掉 n/2 个橘子。 - 如果剩余橘子数 n 能被 3 整除,那么你可以吃掉 2\*(n/3) 个橘子。 请你返回吃掉所有 n 个橘子的最少天数。 #tip("Tip")[ - 输入:n = 10 - 输出:4 - 解释:你总共有 10 个橘子。 - 第 1 天:吃 1 个橘子,剩余橘子数 `10 - 1 = 9`。 - 第 2 天:吃 6 个橘子,剩余橘子数 `9 - 2*(9/3) = 9 - 6 = 3`。(9 可以被 3 整除) - 第 3 天:吃 2 个橘子,剩余橘子数 `3 - 2*(3/3) = 3 - 2 = 1`。 - 第 4 天:吃掉最后 1 个橘子,剩余橘子数 `1 - 1 = 0`。 ] ==== 解答 1. 吃掉一个橘子 2. 如果 `n` 能被 2 整除,吃掉一半的橘子,剩下一半 3. 如果 `n` 能被 3 正数,吃掉三分之二的橘子,剩下三分之一 因为方法 2 和 3,是按比例吃橘子,所以必然会非常快, 所以,决策如下: - 可能性 1:为了使用 2 方法,先把橘子吃成 2 的整数倍,然后直接干掉一半,剩下的 `n/2` 调用递归, 即,`n % 2 + 1 + minDays(n/2)` - 可能性 2:为了使用 3 方法,先把橘子吃成 3 的整数倍,然后直接干掉三分之二,剩下的 `n/3` 调用递归, 即,`n % 3 + 1 + minDays(n/3)` 这两个中选择一个最小的. 至于方法 1,完全是为了这两种可能性服务的,因为能按比例吃,肯定比一个一个吃快(显而易见的贪心) ```java public static HashMap<Integer, Integer> map = new HashMap<>(); public static int minDays(int n) { if (n <= 1) { map.put(n, n); return n; } if (map.containsKey(n)) { return map.get(n); } else { int ans = Math.min(n % 2 + 1 + minDays(n / 2), n % 3 + 1 + minDays(n / 3)); map.put(n, ans); return ans; } } ``` 复杂度分析: log2(n)+log3(n) === #link("https://www.nowcoder.com/practice/1ae8d0b6bb4e4bcdbf64ec491f63fc37")[题目5: 最多线段重合问题] 每一个线段都有 `start` 和 `end` 两个数据项,表示这条线段在 X 轴上从 `start` 位置开始到 `end` 位置结束。给定一批线段,求所有重合区域中最多重合了几个线段,首尾相接的线段不算重合。 例如:线段`[1,2]`和线段`[2.3]`不重合。 线段`[1,3]`和线段`[2,3]`重合 - 输入描述: - 第一行一个数`N`,表示有`N`条线段 - 接下来`N`行每行`2`个数,表示线段起始和终止位置 - 输出描述: - 输出一个数,表示同一个位置最多重合多少条线段 ==== 解答 === #link("https://leetcode.cn/problems/course-schedule-iii/")[题目6: 课程表 III ] 这里有 `n` 门不同的在线课程,按从 `1` 到 `n` 编号。给你一个数组 `courses` ,其中 `courses[i] = [durationi, lastDayi]` 表示第 `i` 门课将会持续上 `durationi` 天课,并且必须在不晚于 `lastDayi` 的时候完成。你的学期从第 1 天开始。且不能同时修读两门及两门以上的课程。 返回你最多可以修读的课程数目。 #example("Example")[ - 输入:`courses = [[100, 200], [200, 1300], [1000, 1250], [2000, 3200]]` - 输出:`3` - 解释: - 这里一共有 4 门课程,但是你最多可以修 3 门: - 首先,修第 1 门课,耗费 100 天,在第 100 天完成,在第 101 天开始下门课。 - 第二,修第 3 门课,耗费 1000 天,在第 1100 天完成,在第 1101 天开始下门课程。 - 第三,修第 2 门课,耗时 200 天,在第 1300 天完成。 - 第 4 门课现在不能修,因为将会在第 3300 天完成它,这已经超出了关闭日期。 ] === 解答 早结束的课程,优先考虑. 晚结束的课程, 后面考虑. 当前时间+代价<=截止日期 == 专题2 === #link("https://leetcode.cn/problems/jian-sheng-zi-ii-lcof/")[题目1: 砍竹子II] 现需要将一根长为正整数 `n` 的竹子砍为若干段,每段长度均为 正整数。请返回每段竹子长度的 最大乘积 是多少。 答案需要取模 `1e9+7`。 #tip("Tip")[ 2 <= n <= 1000 ] ==== 解答 #code(caption: [题目1: 砍竹子])[ ```java public class Code01_CuttingBamboo { // 快速幂,求余数 // 求x的n次方,最终得到的结果 % mod public static long power(long x, int n, int mod) { long ans = 1; while (n > 0) { if ((n & 1) == 1) { ans = (ans * x) % mod; } x = (x * x) % mod; n >>= 1; } return ans; } public static int cuttingBamboo(int n) { if (n == 2) { return 1; } if (n == 3) { return 2; } int mod = 1000000007; // n = 4 -> 2 * 2 // n = 5 -> 3 * 2 // n = 6 -> 3 * 3 // n = 7 -> 3 * 2 * 2 // n = 8 -> 3 * 3 * 2 // n = 9 -> 3 * 3 * 3 // n = 10 -> 3 * 3 * 2 * 2 // n = 11 -> 3 * 3 * 3 * 2 // n = 12 -> 3 * 3 * 3 * 3 int tail = n % 3 == 0 ? 1 : (n % 3 == 1 ? 4 : 2); int power = (tail == 1 ? 
n : (n - tail)) / 3; return (int) (power(3, power, mod) * tail % mod); } } ``` ] === 题目2: 分成k份的最大乘积 一个数字n一定要分成k份,得到的乘积最大是多少? #tip("Tip")[ - 数字`n`和`k`,可能非常大,到达`10^12`规模, 结果可能更大,所以返回结果对`1000000007`取模 - 来自真实大厂笔试,没有在线测试,对数器验证 ] ==== 解答 #code(caption: [题目2: 分成k份的最大乘积])[ ```java public class Code02_MaximumProduct { // 快速幂 public static long power(long x, int n, int mod) { long ans = 1; while (n > 0) { if ((n & 1) == 1) { ans = (ans * x) % mod; } x = (x * x) % mod; n >>= 1; } return ans; } // 暴力递归 public static int maxValue1(int n, int k) { return f1(n, k); } // 剩余的数字rest拆成k份 // 返回最大乘积 // 暴力尝试一定能得到最优解 public static int f1(int rest, int k) { if (k == 1) { return rest; } int ans = Integer.MIN_VALUE; for (int cur = 1; cur <= rest && (rest - cur) >= (k - 1); cur++) { int curAns = cur * f1(rest - cur, k - 1); ans = Math.max(ans, curAns); } return ans; } // 贪心 // 如果结果很大,那么求余数 public static int maxValue2(int n, int k) { int mod = 1000000007; long a = n / k; int b = n % k; long part1 = power(a + 1, b, mod); long part2 = power(a, k - b, mod); return (int) (part1 * part2) % mod; } // 对数器 public static void main(String[] args) { int N = 30; int testTimes = 2000; System.out.println("测试开始"); for (int i = 1; i <= testTimes; i++) { int n = (int) (Math.random() * N) + 1; int k = (int) (Math.random() * n) + 1; int ans1 = maxValue1(n, k); int ans2 = maxValue2(n, k); if (ans1 != ans2) { // 如果出错了 // 可以增加打印行为找到一组出错的例子 // 然后去debug System.out.println("出错了!"); } if (i % 100 == 0) { System.out.println("测试到第" + i + "组"); } } System.out.println("测试结束"); } } ``` ]
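As referenced in the solution to Problem 5 above, here is a runnable sketch of the two-pointer sweep. It is written in Typst's scripting layer (the dominant language of this dump) rather than the source's Java, and the function name is mine:

```typst
// Max number of overlapping segments (Problem 5): sort starts and ends
// separately and advance two pointers. A start strictly before the
// smallest unmatched end opens one more overlap; ties (start == end)
// close first, so merely touching segments do not count as overlapping.
#let max-overlap(segs) = {
  let starts = segs.map(s => s.at(0)).sorted()
  let ends = segs.map(s => s.at(1)).sorted()
  let i = 0
  let j = 0
  let cur = 0
  let best = 0
  while i < starts.len() {
    if starts.at(i) < ends.at(j) {
      cur += 1
      i += 1
      best = calc.max(best, cur)
    } else {
      cur -= 1
      j += 1
    }
  }
  best
}

// #max-overlap(((1, 3), (2, 3), (2, 4))) gives 3;
// #max-overlap(((1, 2), (2, 3))) gives 1, since touching is not overlap.
```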
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/layout/container.typ
typst
// Test the `box` and `block` containers. --- box --- // Test box in paragraph. A #box[B \ C] D. // Test box with height. Spaced \ #box(height: 0.5cm) \ Apart --- block-sizing --- // Test block sizing. #set page(height: 120pt) #set block(spacing: 0pt) #block(width: 90pt, height: 80pt, fill: red)[ #block(width: 60%, height: 60%, fill: green) #block(width: 50%, height: 60%, fill: blue) ] --- box-fr-width --- // Test fr box. Hello #box(width: 1fr, rect(height: 0.7em, width: 100%)) World --- block-fr-height --- #set page(height: 100pt) #rect(height: 10pt, width: 100%) #align(center, block(height: 1fr, width: 20pt, stroke: 1pt)) #rect(height: 10pt, width: 100%) --- block-fr-height-auto-width --- // Test that the fr block can also expand its parent. #set page(height: 100pt) #set align(center) #block(inset: 5pt, stroke: green)[ #rect(height: 10pt) #block(height: 1fr, stroke: 1pt, inset: 5pt)[ #set align(center + horizon) I am the widest ] #rect(height: 10pt) ] --- block-fr-height-first-child --- // Test that block spacing is not trimmed if only an fr block precedes it. #set page(height: 100pt) #rect(height: 1fr) #rect() --- block-fr-height-multiple --- #set page(height: 100pt) #rect(height: 1fr) #rect() #block(height: 1fr, line(length: 100%, angle: 90deg)) --- block-multiple-pages --- // Test block over multiple pages. #set page(height: 60pt) First! #block[ But, soft! what light through yonder window breaks? It is the east, and Juliet is the sun. ] --- block-box-fill --- #set page(height: 100pt) #let words = lorem(18).split() #block(inset: 8pt, width: 100%, fill: aqua, stroke: aqua.darken(30%))[ #words.slice(0, 13).join(" ") #box(fill: teal, outset: 2pt)[tempor] #words.slice(13).join(" ") ] --- block-spacing-basic --- #set par(spacing: 10pt) Hello There #block(spacing: 20pt)[Further down] --- block-above-below-context --- #context test(block.above, auto) #set block(spacing: 20pt) #context test(block.above, 20pt) #context test(block.below, 20pt) --- block-spacing-context --- // The values for `above` and `below` might be different, so we cannot retrieve // `spacing` directly // // Error: 16-23 function `block` does not contain field `spacing` #context block.spacing --- block-spacing-table --- // Test that paragraph spacing loses against block spacing. #set block(spacing: 100pt) #show table: set block(above: 5pt, below: 5pt) Hello #table(columns: 4, fill: (x, y) => if calc.odd(x + y) { silver })[A][B][C][D] --- block-spacing-maximum --- // While we're at it, test the larger block spacing wins. #set block(spacing: 0pt) #show raw: set block(spacing: 15pt) #show list: set block(spacing: 2.5pt) ```rust fn main() {} ``` - List Paragraph --- block-spacing-collapse-text-style --- // Test spacing collapsing with different font sizes. #grid(columns: 2)[ #text(size: 12pt, block(below: 1em)[A]) #text(size: 8pt, block(above: 1em)[B]) ][ #text(size: 12pt, block(below: 1em)[A]) #text(size: 8pt, block(above: 1.25em)[B]) ] --- block-fixed-height --- #set page(height: 100pt) #set align(center) #lines(3) #block(width: 80%, height: 60pt, fill: aqua) #lines(2) #block( breakable: false, width: 100%, inset: 4pt, fill: aqua, lines(3) + colbreak(), ) --- block-consistent-width --- // Test that block enforces consistent width across regions. Also use some // introspection to check that measurement is working correctly. 
#block(stroke: 1pt, inset: 5pt)[ #align(right)[Hi] #colbreak() Hello @netwok ] #show bibliography: none #bibliography("/assets/bib/works.bib") --- block-sticky --- #set page(height: 100pt) #lines(3) #block(sticky: true)[D] #block(sticky: true)[E] F --- block-sticky-alone --- #set page(height: 50pt) #block(sticky: true)[A] --- block-sticky-many --- #set page(height: 80pt) #set block(sticky: true) #block[A] #block[B] #block[C] #block[D] E #block[F] #block[G] --- block-sticky-colbreak --- A #block(sticky: true)[B] #colbreak() C --- box-clip-rect --- // Test box clipping with a rectangle Hello #box(width: 1em, height: 1em, clip: false)[#rect(width: 3em, height: 3em, fill: red)] world 1 Space Hello #box(width: 1em, height: 1em, clip: true)[#rect(width: 3em, height: 3em, fill: red)] world 2 --- block-clip-text --- // Test clipping text #block(width: 5em, height: 2em, clip: false, stroke: 1pt + black)[ But, soft! what light through ] #v(2em) #block(width: 5em, height: 2em, clip: true, stroke: 1pt + black)[ But, soft! what light through yonder window breaks? It is the east, and Juliet is the sun. ] --- block-clip-svg-glyphs --- // Test clipping svg glyphs Emoji: #box(height: 0.5em, stroke: 1pt + black)[🐪, 🌋, 🏞] Emoji: #box(height: 0.5em, clip: true, stroke: 1pt + black)[🐪, 🌋, 🏞] --- block-clipping-multiple-pages --- // Test block clipping over multiple pages. #set page(height: 60pt) First! #block(height: 4em, clip: true, stroke: 1pt + black)[ But, soft! what light through yonder window breaks? It is the east, and Juliet is the sun. ] --- box-clip-radius --- // Test clipping with `radius`. #set page(height: 60pt) #box( radius: 5pt, stroke: 2pt + black, width: 20pt, height: 20pt, clip: true, image("/assets/images/rhino.png", width: 30pt) ) --- box-clip-radius-without-stroke --- // Test clipping with `radius`, but without `stroke`. #set page(height: 60pt) #box( radius: 5pt, width: 20pt, height: 20pt, clip: true, image("/assets/images/rhino.png", width: 30pt) ) --- container-layoutable-child --- // Test box/block sizing with directly layoutable child. // // Ensure that the output respects the box size. #let check(f) = f( width: 40pt, height: 25pt, fill: aqua, grid(rect(width: 5pt, height: 5pt, fill: blue)), ) #stack(dir: ltr, spacing: 1fr, check(box), check(block)) --- issue-2128-block-width-box --- // Test box in 100% width block. #block(width: 100%, fill: red, box("a box")) #block(width: 100%, fill: red, [#box("a box") #box()])
https://github.com/noahjutz/AD
https://raw.githubusercontent.com/noahjutz/AD/main/components/admonition.typ
typst
#import "/config.typ": theme #let admonition(title, body) = { block( stroke: 1pt + theme.fg_light, width: 100%, radius: 4pt, clip: true, stack( block( fill: theme.bg_light, inset: 6pt, width: 100%, below: 0pt, title ), block( inset: 6pt, body ) ) ) }
https://github.com/typst-community/valkyrie
https://raw.githubusercontent.com/typst-community/valkyrie/main/src/lib.typ
typst
Other
#import "types.typ": * #import "ctx.typ": z-ctx #import "base-type.typ": base-type #import "assertions-util.typ" as advanced #import "assertions.typ" as assert #import "coercions.typ" as coerce #import "schemas.typ" as schemas #let parse( object, schemas, ctx: z-ctx(), scope: ("argument",), ) = { // don't expose to external import "assertions-util.typ": assert-base-type // Validate named arguments if (type(schemas) != type(())) { schemas = (schemas,) } advanced.assert-base-type-array(schemas, scope: scope) for schema in schemas { object = (schema.validate)( schema, ctx: ctx, scope: scope, object, ) } return object }
https://github.com/OCamlPro/ppaqse-lang
https://raw.githubusercontent.com/OCamlPro/ppaqse-lang/master/src/étude/OCamlPro_PPAQSE-COTS_rapport.typ
typst
#import "../base.typ": * #import "defs.typ": * #import "links.typ": * #show: report.with( title: [Ecosystèmes COTS de développement et de vérification des logiciels critiques et temps réel], version: sys.inputs.at("git_version", default: "<unknown>"), authors: ( ( firstname: "Julien", lastname: "Blond", email: "<EMAIL>" ), ( firstname: "Arthur", lastname: "Carcano", email: "<EMAIL>", ), ( firstname: "Pierre", lastname: "Villemot", email: "<EMAIL>", ) ), reference: [DLA-SF-0000000-194-QGP], abstract: [ Ce rapport présente une étude des langages #C, #Cpp, #Ada, #Scade, #OCaml et #Rust du point de vue de la sûreté. Il suit les clauses techniques #cite(<ctcots>) relatives au projet «COTS de qualité : logiciels critiques et temps réel» par et pour le #CNES. ] ) #show raw.where(block: true): code => { show raw.line: it => { let size = calc.ceil(calc.log(it.count)) let total = measure([#size]) let num = measure([#it.number]) let space = total.width - num.width + 0.5em h(space) text(fill: gray)[#it.number] h(0.5em) it.body } code } #show heading.where( level: 1, ): it => [ #pagebreak(weak: true) #align(center, it) #v(1cm) ] #include "introduction.typ" #include "C.typ" #include "C++.typ" #include "Ada.typ" #include "Scade.typ" #include "OCaml.typ" #include "Rust.typ" #include "conclusion.typ" #bibliography("bibliography.yml", title: "Références") #set heading(numbering: "A.1") #show heading.where(level: 1): set heading( numbering: (..nums) => "Annexe " + numbering("A.1.", ..nums.pos()) ) #counter(heading).update(0) #include "paradigmes.typ" #include "analyseurs.typ" #include "precision.typ" #include "pointeurs.typ" #include "mesures.typ" #include "concurrence.typ" #include "formalisation.typ" #include "tests.typ"
https://github.com/Kasci/LiturgicalBooks
https://raw.githubusercontent.com/Kasci/LiturgicalBooks/master/SK/zalmy/Z_PaneJaVolam.typ
typst
#import "/style.typ": * #set par(first-line-indent: 1em) === Žalm 140 #note[(... Pokračovanie)] Bože, stráž nad mojimi rečami \* a dozeraj na moje slová. Nedovoľ, aby sa mi srdce naklonilo k zlému, \* aby som sa dal na bezbožné skutky so zločincami a zasadal si s nimi na hostinách. Nech ma radšej statočný človek uderí a priateľ pokarhá, \* ale nech mi bezbožníci olejom nekropia hlavu na privítanie, lebo tým by som sa stal \* účastným na ich zločinoch. Budú vydaní Božej spravodlivosti \* a ľudia uznajú že moje slová boli pravdivé. Ich kosti budú pohodené na okraji hrobu \* ako kusy dreva a úlomky skál roztrúsené po zemi. Ale moje oči sa obracajú k tebe, Pane a Bože môj, \* k tebe sa utiekam, ochraňuj môj život. Ochráň ma pred osídlom, čo mi nastrojili, \* pred nástrahami zlých ľudí. Nech bezbožní popadajú do vlastných sietí, \* zatiaľ čo ja prejdem bezpečne pomimo. === Žalm 141 Zo všetkých síl volám k Bohu o pomoc, \* hlasite prosím Boha o zľutovanie. Predkladám mu svoje ťažkosti, \* rozprávam mu o svojich úzkostiach. Ja klesám na duchu, \* ale ty, Bože, vieš, kade mám ísť. Na chodníku, ktorým sa uberám, \* nastavili mi osídlo. Obzerám sa napravo \* a nik sa ku mne nepriznáva, nemám výhľad na útek, \* nikomu nezáleží na mojom živote. Bože, k tebe úpenlivo volám. \* Ty si moje útočisko, ty si všetko, čo mám na tomto svete. Vypočuj moje volanie o pomoc, \* lebo som úplne vyčerpaný. Zachráň ma pred mojimi prenasledovateľmi, \* lebo sú silnejší odo mňa.
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/scripting/modules/cycle2.typ
typst
// SKIP #import "cycle1.typ": * #let val = "much cycle" This is the second element of an import cycle.
https://github.com/PeiPei233/typst-template
https://raw.githubusercontent.com/PeiPei233/typst-template/main/test.typ
typst
#import "zju-exp-report/report-template.typ": project #let (cover,report-title,doc) = project( course: "Hello", name: "233", id: "11111111", ) #cover(name:"hhhh") #show: doc #report-title(course: "World") #report-title()
https://github.com/Mc-Zen/zero
https://raw.githubusercontent.com/Mc-Zen/zero/main/src/state.typ
typst
MIT License
#let default-state = ( digits: auto, fixed: none, product: sym.times, decimal-separator: ".", tight: false, omit-unity-mantissa: false, positive-sign: false, positive-sign-exponent: false, base: 10, uncertainty-mode: "separate", math: true, group: ( size: 3, separator: sym.space.thin, threshold: 5 ), round: ( mode: none, precision: 2, pad: true, direction: "nearest", ) ) #let num-state = state("num-state", default-state) #let group-state = state("group-state", default-state.group) #let round-state = state("round-state", default-state.round)
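A sketch of how these `state` values are typically read and updated elsewhere in such a package; this is illustrative only (the package's public API presumably wraps these calls), and it assumes the bindings above are in scope:

```typst
// Merge user overrides into the shared defaults; `+` on dictionaries
// keeps existing keys and overwrites the ones given on the right.
#num-state.update(s => s + (digits: 3, tight: true))

// Read the effective settings wherever a number is rendered.
#context {
  let s = num-state.get()
  [Current decimal separator: #s.decimal-separator]
}
```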
https://github.com/PgBiel/typst-improv-tables-planning
https://raw.githubusercontent.com/PgBiel/typst-improv-tables-planning/main/pkg/codly.typ
typst
Other
// Lets you set a line number offset. #let codly-offset(offset: 0) = { state("codly-offset").update(offset) } // Lets you set a range of line numbers to highlight. #let codly-range( start: 1, end: none, ) = { state("codly-range").update((start, end)) } // Disables codly. #let disable-codly() = { state("codly-config").update(none) } // Default language-block style #let language-block(name, icon, color) = { let content = icon + name locate(loc => { let config = state("codly-config").at(loc) style(styles => { let height = measure(content, styles).height box( radius: config.radius, fill: color.lighten(60%), inset: config.padding, height: height + config.padding * 2, stroke: config.stroke-width + color, content, ) }) }) } // Configures codly. #let codly( // The list of languages, allows setting a display name and an icon, // it should be a dict of the form: // `<language-name>: (name: <display-name>, icon: <icon-content>, color: <color>)` languages: (:), // Whether to display the language name. display-name: true, // Whether to display the language icon. display-icon: true, // The default color for a language not in the list. // Only used if `display-icon` or `display-name` is `true`. default-color: rgb("#283593"), // Radius of a code block. radius: 0.32em, // Padding of a code block. padding: 0.32em, // Fill color of lines. // If zebra color is enabled, this is just for odd lines. fill: white, // The zebra color to use or `none` to disable. zebra-color: luma(240), // The stroke width to use to surround the code block. // Set to `none` to disable. stroke-width: 0.1em, // The stroke color to use to surround the code block. stroke-color: luma(240), // The width of the numbers column. // If set to `none`, the numbers column will be disabled. width-numbers: 2em, // Format of the line numbers. // This is a function applied to the text of every line number. numbers-format: text, // A function that takes 3 positional parameters: // - name // - icon // - color // It returns the content for the language block. language-block: language-block, // Whether this code block is breakable. breakable: true, // Whether each raw line in a code block is breakable. // Setting this to true may cause problems when your raw block is split across pagebreaks, // so only change this setting if you're sure you need it. 
breakable-lines: false, ) = locate(loc => { let old = state("codly-config").at(loc); if old == none { state("codly-config").update(( languages: languages, display-name: display-name, display-icon: display-icon, default-color: default-color, radius: radius, padding: padding, fill: fill, zebra-color: zebra-color, stroke-width: stroke-width, width-numbers: width-numbers, numbers-format: numbers-format, breakable: breakable, breakable-lines: breakable-lines, stroke-color: stroke-color, language-block: language-block )) } else { let folded_langs = old.languages; for (lang, def) in languages { folded_langs.insert(lang, def) } state("codly-config").update(( languages: folded_langs, display-name: display-name, display-icon: display-icon, default-color: default-color, radius: radius, padding: padding, zebra-color: zebra-color, fill: fill, stroke-width: stroke-width, width-numbers: width-numbers, numbers-format: numbers-format, breakable: breakable, breakable-lines: breakable-lines, stroke-color: stroke-color, language-block: language-block )) } }) #let codly-init( body, ) = { show raw.where(block: true): it => locate(loc => { let config = state("codly-config").at(loc) let range = state("codly-range").at(loc) let in_range(line) = { if range == none { true } else if range.at(1) == none { line >= range.at(0) } else { line >= range.at(0) and line <= range.at(1) } } if config == none { return it } let language_block = if config.display-name == false and config.display-icon == false { none } else if it.lang == none { none } else if it.lang in config.languages { let lang = config.languages.at(it.lang); let name = if config.display-name { lang.name } else { [] } let icon = if config.display-icon { lang.icon } else { [] } (config.language-block)(name, icon, lang.color) } else if config.display-name { (config.language-block)(it.lang, [], config.default-color) }; let offset = state("codly-offset").at(loc); let start = if range == none { 1 } else { range.at(0) }; let border(i, len) = { let end = if range == none { len } else if range.at(1) == none { len } else { range.at(1) }; let stroke-width = if config.stroke-width == none { 0pt } else { config.stroke-width }; let radii = (:) let stroke = (x: config.stroke-color + stroke-width) if i == start { radii.insert("top-left", config.radius) radii.insert("top-right", config.radius) stroke.insert("top", config.stroke-color + stroke-width) } if i == end { radii.insert("bottom-left", config.radius) radii.insert("bottom-right", config.radius) stroke.insert("bottom", config.stroke-color + stroke-width) } radii.insert("rest", 0pt) (radius: radii, stroke: stroke) } let width = if config.width-numbers == none { 0pt } else { config.width-numbers } show raw.line: it => if not in_range(it.number) { none } else { block( width: 100%, height: 1.2em + config.padding * 2, inset: (left: config.padding + width, top: config.padding + 0.1em, rest: config.padding), breakable: config.breakable-lines, fill: if config.zebra-color != none and calc.rem(it.number, 2) == 0 { config.zebra-color } else { none }, radius: border(it.number, it.count).radius, stroke: border(it.number, it.count).stroke, { if it.number == start { place( top + right, language_block, dy: -config.padding * 0.66666, dx: config.padding * 0.66666 - 0.1em, ) } set par(justify: false) if config.width-numbers != none { place( horizon + left, dx: -config.width-numbers, (config.numbers-format)[#(offset + it.number)] ) } it } ) } let stroke = if config.stroke-width == 0pt or config.stroke-width == none { none } else { 
config.stroke-width + config.zebra-color }; block( breakable: config.breakable, clip: false, width: 100%, radius: config.radius, fill: config.fill, stack(dir: ttb, ..it.lines) ) codly-offset() codly-range(start: 1, end: none) }) body }
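A usage sketch for the two entry points defined above, with a made-up language entry; the `(name, icon, color)` shape follows the comment on `codly`:

```typst
#import "codly.typ": codly-init, codly

// Install the raw-block show rule for the rest of the document.
#show: codly-init

// Register a display entry for blocks tagged `rust`; zebra striping and
// the line-number column come from the defaults documented above.
#codly(
  languages: (
    rust: (name: "Rust", icon: [🦀 ], color: rgb("#CE412B")),
  ),
)

// From here on, every ```rust fenced block gets the language chip;
// `codly-range(start: 2, end: 5)` would restrict the next block to
// lines 2 through 5.
```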
https://github.com/DieracDelta/presentations
https://raw.githubusercontent.com/DieracDelta/presentations/master/polylux/book/src/dynamic/cover.typ
typst
#import "../../../polylux.typ": * #set page(paper: "presentation-16-9") #set text(size: 30pt) #polylux-slide[ #uncover(3, mode: "transparent")[abc] #one-by-one(start: 2, mode: "transparent")[def ][ghi] #line-by-line(mode: "transparent")[ - jkl - mno ] #enum-one-by-one(mode: "transparent", tight: false)[pqr][stu][vwx] ]
https://github.com/EpicEricEE/typst-marge
https://raw.githubusercontent.com/EpicEricEE/typst-marge/main/tests/template.typ
typst
MIT License
#import "/src/lib.typ": sidenote #set par(justify: true) #set page(width: 8cm, height: auto, margin: (outside: 4cm, rest: 5mm))
https://github.com/TypstApp-team/typst
https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/compute/foundations.typ
typst
Apache License 2.0
// Test foundational functions. // Ref: false --- #test(type(1), int) #test(type(ltr), direction) #test(type(10 / 3), float) --- #test(repr(ltr), "ltr") #test(repr((1, 2, false, )), "(1, 2, false)") --- // Test panic. // Error: 7-9 panicked #panic() --- // Test panic. // Error: 7-12 panicked with: 123 #panic(123) --- // Test panic. // Error: 7-24 panicked with: "this is wrong" #panic("this is wrong") --- // Test failing assertions. // Error: 8-16 assertion failed #assert(1 == 2) --- // Test failing assertions. // Error: 8-51 assertion failed: two is smaller than one #assert(2 < 1, message: "two is smaller than one") --- // Test failing assertions. // Error: 9-15 expected boolean, found string #assert("true") --- // Test failing assertions. // Error: 11-19 equality assertion failed: value 10 was not equal to 11 #assert.eq(10, 11) --- // Test failing assertions. // Error: 11-55 equality assertion failed: 10 and 12 are not equal #assert.eq(10, 12, message: "10 and 12 are not equal") --- // Test failing assertions. // Error: 11-19 inequality assertion failed: value 11 was equal to 11 #assert.ne(11, 11) --- // Test failing assertions. // Error: 11-57 inequality assertion failed: must be different from 11 #assert.ne(11, 11, message: "must be different from 11") --- // Test successful assertions. #assert(5 > 3) #assert.eq(15, 15) #assert.ne(10, 12) --- // Test the `type` function. #test(type(1), int) #test(type(ltr), direction) #test(type(10 / 3), float) --- // Test the eval function. #test(eval("1 + 2"), 3) #test(eval("1 + x", scope: (x: 3)), 4) #test(eval("let x = x + 1; x + 1", scope: (x: 1)), 3) --- // Test evaluation in other modes. // Ref: true #eval("[_Hello" + " World!_]") \ #eval("_Hello" + " World!_", mode: "markup") \ #eval("RR_1^NN", mode: "math", scope: (RR: math.NN, NN: math.RR)) --- // Error: 7-12 expected identifier #eval("let") --- #show raw: it => text(font: "PT Sans", eval("[" + it.text + "]")) Interacting ``` #set text(blue) Blue #move(dy: -0.15em)[🌊] ``` --- // Error: 7-17 cannot continue outside of loop #eval("continue") --- // Error: 7-32 cannot access file system from here #eval("include \"../coma.typ\"") --- // Error: 7-30 cannot access file system from here #eval("image(\"/tiger.jpg\")") --- // Error: 23-30 cannot access file system from here #show raw: it => eval(it.text) ``` image("/tiger.jpg") ``` --- // Error: 23-42 cannot access file system from here #show raw: it => eval("[" + it.text + "]") ``` #show emph: _ => image("/giraffe.jpg") _No relative giraffe!_ ``` --- // Error: 7-12 expected semicolon or line break #eval("1 2")
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-1D000.typ
typst
Apache License 2.0
#let data = ( ("BYZANTINE MUSICAL SYMBOL PSILI", "So", 0), ("BYZANTINE MUSICAL SYMBOL DASEIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PERISPOMENI", "So", 0), ("BYZANTINE MUSICAL SYMBOL OXEIA EKFONITIKON", "So", 0), ("BYZANTINE MUSICAL SYMBOL OXEIA DIPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL VAREIA EKFONITIKON", "So", 0), ("BYZANTINE MUSICAL SYMBOL VAREIA DIPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL KATHISTI", "So", 0), ("BYZANTINE MUSICAL SYMBOL SYRMATIKI", "So", 0), ("BYZANTINE MUSICAL SYMBOL PARAKLITIKI", "So", 0), ("BYZANTINE MUSICAL SYMBOL YPOKRISIS", "So", 0), ("BYZANTINE MUSICAL SYMBOL YPOKRISIS DIPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL KREMASTI", "So", 0), ("BYZANTINE MUSICAL SYMBOL APESO EKFONITIKON", "So", 0), ("BYZANTINE MUSICAL SYMBOL EXO EKFONITIKON", "So", 0), ("BYZANTINE MUSICAL SYMBOL TELEIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL APOSTROFOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL APOSTROFOS DIPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL SYNEVMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL THITA", "So", 0), ("BYZANTINE MUSICAL SYMBOL OLIGON ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORGON ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL PSILON", "So", 0), ("BYZANTINE MUSICAL SYMBOL CHAMILON", "So", 0), ("BYZANTINE MUSICAL SYMBOL VATHY", "So", 0), ("BYZANTINE MUSICAL SYMBOL ISON ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMATA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL SAXIMATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PARICHON", "So", 0), ("BYZANTINE MUSICAL SYMBOL STAVROS APODEXIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL OXEIAI ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL VAREIAI ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL APODERMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL APOTHEMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL KLASMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL REVMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PIASMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL TINAGMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL ANATRICHISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL SEISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL SYNAGMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL SYNAGMA META STAVROU", "So", 0), ("BYZANTINE MUSICAL SYMBOL OYRANISMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL THEMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL LEMOI", "So", 0), ("BYZANTINE MUSICAL SYMBOL DYO", "So", 0), ("BYZANTINE MUSICAL SYMBOL TRIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TESSERA", "So", 0), ("BYZANTINE MUSICAL SYMBOL KRATIMATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL APESO EXO NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL IMIFTHORA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TROMIKON ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL KATAVA TROMIKON", "So", 0), ("BYZANTINE MUSICAL SYMBOL PELASTON", "So", 0), ("BYZANTINE MUSICAL SYMBOL PSIFISTON", "So", 0), ("BYZANTINE MUSICAL SYMBOL KONTEVMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL CHOREVMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL RAPISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PARAKALESMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL PARAKLITIKI ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL ICHADIN", "So", 0), ("BYZANTINE MUSICAL SYMBOL NANA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PETASMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL KONTEVMA ALLO", "So", 0), ("BYZANTINE MUSICAL SYMBOL TROMIKON ALLO", "So", 0), ("BYZANTINE MUSICAL SYMBOL STRAGGISMATA", "So", 0), ("BYZANTINE MUSICAL 
SYMBOL GRONTHISMATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL ISON NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL OLIGON NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL OXEIA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL PETASTI", "So", 0), ("BYZANTINE MUSICAL SYMBOL KOUFISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PETASTOKOUFISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL KRATIMOKOUFISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PELASTON NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMATA NEO ANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMA NEO ANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL YPSILI", "So", 0), ("BYZANTINE MUSICAL SYMBOL APOSTROFOS NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL APOSTROFOI SYNDESMOS NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL YPORROI", "So", 0), ("BYZANTINE MUSICAL SYMBOL KRATIMOYPORROON", "So", 0), ("BYZANTINE MUSICAL SYMBOL ELAFRON", "So", 0), ("BYZANTINE MUSICAL SYMBOL CHAMILI", "So", 0), ("BYZANTINE MUSICAL SYMBOL MIKRON ISON", "So", 0), ("BYZANTINE MUSICAL SYMBOL VAREIA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL PIASMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL PSIFISTON NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL OMALON", "So", 0), ("BYZANTINE MUSICAL SYMBOL ANTIKENOMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL LYGISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PARAKLITIKI NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL PARAKALESMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL ETERON PARAKALESMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL KYLISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL ANTIKENOKYLISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TROMIKON NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL EKSTREPTON", "So", 0), ("BYZANTINE MUSICAL SYMBOL SYNAGMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL SYRMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL CHOREVMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL EPEGERMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL SEISMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL XIRON KLASMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TROMIKOPSIFISTON", "So", 0), ("BYZANTINE MUSICAL SYMBOL PSIFISTOLYGISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TROMIKOLYGISMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TROMIKOPARAKALESMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PSIFISTOPARAKALESMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TROMIKOSYNAGMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL PSIFISTOSYNAGMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORGOSYNTHETON", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARGOSYNTHETON", "So", 0), ("BYZANTINE MUSICAL SYMBOL ETERON ARGOSYNTHETON", "So", 0), ("BYZANTINE MUSICAL SYMBOL OYRANISMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL THEMATISMOS ESO", "So", 0), ("BYZANTINE MUSICAL SYMBOL THEMATISMOS EXO", "So", 0), ("BYZANTINE MUSICAL SYMBOL THEMA APLOUN", "So", 0), ("BYZANTINE MUSICAL SYMBOL THES KAI APOTHES", "So", 0), ("BYZANTINE MUSICAL SYMBOL KATAVASMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL ENDOFONON", "So", 0), ("BYZANTINE MUSICAL SYMBOL YFEN KATO", "So", 0), ("BYZANTINE MUSICAL SYMBOL YFEN ANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL STAVROS", "So", 0), ("BYZANTINE MUSICAL SYMBOL KLASMA ANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIPLI ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL KRATIMA ARCHAION", "So", 0), ("BYZANTINE MUSICAL SYMBOL KRATIMA ALLO", "So", 0), ("BYZANTINE MUSICAL SYMBOL KRATIMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL APODERMA NEO", "So", 0), ("BYZANTINE MUSICAL SYMBOL APLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL TRIPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL TETRAPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL KORONIS", "So", 0), ("BYZANTINE 
MUSICAL SYMBOL LEIMMA ENOS CHRONOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL LEIMMA DYO CHRONON", "So", 0), ("BYZANTINE MUSICAL SYMBOL LEIMMA TRION CHRONON", "So", 0), ("BYZANTINE MUSICAL SYMBOL LEIMMA TESSARON CHRONON", "So", 0), ("BYZANTINE MUSICAL SYMBOL LEIMMA IMISEOS CHRONOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORGON NEO ANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORGON PARESTIGMENON ARISTERA", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORGON PARESTIGMENON DEXIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIGORGON", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIGORGON PARESTIGMENON ARISTERA KATO", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIGORGON PARESTIGMENON ARISTERA ANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIGORGON PARESTIGMENON DEXIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL TRIGORGON", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARGON", "So", 0), ("BYZANTINE MUSICAL SYMBOL IMIDIARGON", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIARGON", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI POLI ARGI", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI ARGOTERI", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI ARGI", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI METRIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI MESI", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI GORGI", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI GORGOTERI", "So", 0), ("BYZANTINE MUSICAL SYMBOL AGOGI POLI GORGI", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA PROTOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA ALLI PROTOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA DEYTEROS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA ALLI DEYTEROS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA TRITOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA TRIFONIAS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA TETARTOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA TETARTOS LEGETOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA LEGETOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA PLAGIOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL ISAKIA TELOUS ICHIMATOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL APOSTROFOI TELOUS ICHIMATOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL FANEROSIS TETRAFONIAS", "So", 0), ("BYZANTINE MUSICAL SYMBOL FANEROSIS MONOFONIAS", "So", 0), ("BYZANTINE MUSICAL SYMBOL FANEROSIS DIFONIAS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA VARYS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA PROTOVARYS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL MARTYRIA PLAGIOS TETARTOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORTHMIKON N APLOUN", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORTHMIKON N DIPLOUN", "So", 0), ("BYZANTINE MUSICAL SYMBOL ENARXIS KAI FTHORA VOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL IMIFONON", "So", 0), ("BYZANTINE MUSICAL SYMBOL IMIFTHORON", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA ARCHAION DEYTEROU ICHOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA DIATONIKI PA", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA DIATONIKI NANA", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA NAOS ICHOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA DIATONIKI DI", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA SKLIRON DIATONON DI", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA DIATONIKI KE", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA DIATONIKI ZO", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA DIATONIKI NI KATO", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA DIATONIKI NI ANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA MALAKON CHROMA DIFONIAS", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA MALAKON CHROMA MONOFONIAS", "So", 0), ("BYZANTINE 
MUSICAL SYMBOL FHTORA SKLIRON CHROMA VASIS", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA SKLIRON CHROMA SYNAFI", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA NENANO", "So", 0), ("BYZANTINE MUSICAL SYMBOL CHROA ZYGOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL CHROA KLITON", "So", 0), ("BYZANTINE MUSICAL SYMBOL CHROA SPATHI", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA I YFESIS TETARTIMORION", "So", 0), ("BYZANTINE MUSICAL SYMBOL FTHORA ENARMONIOS ANTIFONIA", "So", 0), ("BYZANTINE MUSICAL SYMBOL YFESIS TRITIMORION", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIESIS TRITIMORION", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIESIS TETARTIMORION", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIESIS APLI DYO DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIESIS MONOGRAMMOS TESSERA DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIESIS DIGRAMMOS EX DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIESIS TRIGRAMMOS OKTO DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL YFESIS APLI DYO DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL YFESIS MONOGRAMMOS TESSERA DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL YFESIS DIGRAMMOS EX DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL YFESIS TRIGRAMMOS OKTO DODEKATA", "So", 0), ("BYZANTINE MUSICAL SYMBOL GENIKI DIESIS", "So", 0), ("BYZANTINE MUSICAL SYMBOL GENIKI YFESIS", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIASTOLI APLI MIKRI", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIASTOLI APLI MEGALI", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIASTOLI DIPLI", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIASTOLI THESEOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS THESEOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS THESEOS DISIMOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS THESEOS TRISIMOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS THESEOS TETRASIMOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS ARSEOS", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS ARSEOS DISIMOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS ARSEOS TRISIMOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL SIMANSIS ARSEOS TETRASIMOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIGRAMMA GG", "So", 0), ("BYZANTINE MUSICAL SYMBOL DIFTOGGOS OU", "So", 0), ("BYZANTINE MUSICAL SYMBOL STIGMA", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARKTIKO PA", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARKTIKO VOU", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARKTIKO GA", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARKTIKO DI", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARKTIKO KE", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARKTIKO ZO", "So", 0), ("BYZANTINE MUSICAL SYMBOL ARKTIKO NI", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMATA NEO MESO", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMA NEO MESO", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMATA NEO KATO", "So", 0), ("BYZANTINE MUSICAL SYMBOL KENTIMA NEO KATO", "So", 0), ("BYZANTINE MUSICAL SYMBOL KLASMA KATO", "So", 0), ("BYZANTINE MUSICAL SYMBOL GORGON NEO KATO", "So", 0), )
https://github.com/kdog3682/2024-typst
https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/typography.typ
typst
#import "base-utils.typ": * #import "classes.typ": classes #let line(class: "") = { let base = classes.at(class).at("line") let value = "" let line-attrs = base.line value = _line(..line-attrs) if "spacing" in base { v(resolve-pt(base.spacing.above)) value v(resolve-pt(base.spacing.below)) } } #let text(s, class: "") = { let content = resolve-content(s) if class in classes.default { let text-attrs = classes.default.at(class) return _text(..text-attrs, content) } let base = classes.at(class).at("text") let text-attrs = base.at("text", default: none) let value = _text(..text-attrs, content) let align-arg = base.at("align", default: none) if align-arg != none { value = align(align-arg, value) } return value } #let title(s, class: "hawaii") = { text(s, class: class) line(class: class) } #let header(s) = { } #let footer(s) = { } #let bold = text.with(class: "bold") #let md-text = text.with(class: "md-text") #let sm-text = text.with(class: "sm-text") #let lg-text = text.with(class: "lg-text")
https://github.com/WinstonMDP/math
https://raw.githubusercontent.com/WinstonMDP/math/main/exers/g.typ
typst
#import "../cfg.typ": * #show: cfg $ "Find" lim_(x -> +oo) x^(1/x) $ $x^(1/x) = e^((ln x)/x)$ $(ln x)/x = (ln x)/(e^(ln x))$ $x/e^x ->_(x -> +oo) 0$ $(ln x)/(e^(ln x)) ->_(x -> +oo) 0$ $e^((ln x)/x) ->_(x -> +oo) 1$
https://github.com/qujihan/typst-beamer
https://raw.githubusercontent.com/qujihan/typst-beamer/main/readme_zh.md
markdown
MIT License
# A Beamer written in [Typst](https://typst.app/)

[中文](https://github.com/qujihan/typst-beamer/blob/main/readme_zh.md) | [English](github.com/qujihan/typst-beamer)

[Bilibili](https://www.bilibili.com/video/BV1Nk4y157fu/)

## What it looks like

![pic_1](./example/example_pic_1.png)
![pic_2](./example/example_pic_2.png)

You can also download the [PDF](https://github.com/qujihan/typst-beamer/blob/main/example/example.pdf) to view it.

## How to compile the example

**Note:** If you use VSCode with the Typst LSP plugin, it may report errors. This is because the plugin has not been updated yet; just wait for a plugin update!

Linux/macOS
```
typst --root . c ./example/example.typ
```

Windows
```
typst --root . c .\example\example.typ
```

## Quick start

```
#import "beamer.typ": beamer
#show: beamer.with(
    title: "Write a Beamer Template in Typst",
    author: "<EMAIL>",
    date: "2023-07-17",
)

= First

== Second

...
```

` = ` creates a new divider page, and ` == ` creates a new slide; the title at the top of each slide is determined by the ` = ` section it belongs to.

## Thanks

[diapo](https://github.com/lvignoli/diapo)
https://github.com/topdeoo/NENU-Thesis-Typst
https://raw.githubusercontent.com/topdeoo/NENU-Thesis-Typst/master/lib.typ
typst
#import "layout/document.typ": doc #import "fonts/fonts.typ": font-size, font-family #import "pages/bachelor-cover.typ": bachelor-cover #import "pages/bachelor-declare.typ": bachelor-declare #import "pages/bachelor-abstract.typ": bachelor-abstract #import "pages/toc-page.typ": toc #import "layout/mainmatter.typ": mainmatter #import "@preview/kouhu:0.1.0": kouhu #import "pages/reference.typ": nenu-bibliography #import "pages/ackonwledge.typ": acknowledgement #let thesis( // TODO 新增 "master" 和 "phd" 类型 thesis-type: "bachelor", // TODO 新增学位类型的选项(只在硕士和博士学位生效) degree: "academic", // TODO 双面模式,在封面与封底插入空白页 two-side: false, // 参考文献函数 bibliography: none, // 字体 fonts: (:), // 额外信息 info: (:), // 关键词 keywords-cn: (), keywords-en: (), ) = { fonts = font-family + fonts info = ( title: ("毕业论文中文题目"), title-en: ("毕业论文英文题目"), student-id: "123456", author: "张三", department: "信息科学与技术学院", major: "计算机科学与技术", supervisor: "李四", submit-date: datetime.today(), ) + info ( thesis-type: thesis-type, degree: degree, two-side: two-side, fonts: fonts, info: info, //TODO 分发更多函数 doc: (..args) => { doc( ..args, info: info + args.named().at("info", default: (:)), thesis-type: thesis-type ) }, mainmatter: (..args) => { mainmatter(..args) }, cover: (..args) => { if thesis-type == "bachelor" { bachelor-cover( two-side: two-side, fonts: fonts + args.named().at("fonts", default: (:)), info: info + args.named().at("info", default: (:)), ..args ) } else { panic("Not Implemented Yet!") } }, declare: (..args) => { if thesis-type == "bachelor" { bachelor-declare( two-side: two-side, fonts: fonts + args.named().at("fonts", default: (:)), ..args ) counter(page).update(0) } else { panic("Not Implemented Yet!") } }, abstract-cn: (..args) => { if thesis-type == "bachelor" { bachelor-abstract( two-side: two-side, fonts: fonts + args.named().at("fonts", default: (:)), display-lang: "cn", keywords: keywords-cn, ..args ) } else { panic("Not Implemented Yet!") } }, abstract-en: (..args) => { if thesis-type == "bachelor" { bachelor-abstract( two-side: two-side, fonts: fonts + args.named().at("fonts", default: (:)), display-lang: "en", keywords: keywords-en, ..args ) } else { panic("Not Implemented Yet!") } }, toc: (..args) => { toc( two-side: two-side, fonts: fonts + args.named().at("fonts", default: (:)), ..args ) }, nenu-bibliography: (..args) => { nenu-bibliography( bibliography: bibliography, ..args ) }, acknowledgement: (..args) => { acknowledgement( two-side: two-side, ..args ) } ) }
https://github.com/freefrancisco/typst-resume
https://raw.githubusercontent.com/freefrancisco/typst-resume/main/README.md
markdown
# Instructions

This uses `typst` to create a resume. The nix flake loads the needed environment, and direnv does that automatically. Without direnv, enter the environment manually with `nix develop`. Without nix shells enabled, just load a new shell with typst like this: `nix-shell -p typst`. Without nix, refer to [the github repo](https://github.com/typst/typst) for installation instructions.

To develop, use `typst watch resume.typ`

To compile into `resume.pdf`, use `typst compile resume.typ`

To have `vscode` do that for you, use Shift+Command+P to open the command palette and run `Typst preview: Preview current file` with the Typst Preview extension installed. Note that the preview doesn't always match the output layout exactly, so it's better to use the watch method.
https://github.com/jijinbei/typst_template
https://raw.githubusercontent.com/jijinbei/typst_template/main/manual/template.typ
typst
// Settings
#let TITLE_HEIGHT = 22mm
#let TITLE_SIZE = 36pt
#let HEADING1_SIZE = 18pt
#let HEADING2_SIZE = 14pt
#let BODY_SIZE = 13pt
// paper size (default: A4)
#let WIDTH = 210mm
#let HEIGHT = 297mm

#let conf(title: [], cols: 2, doc) = {
  set page(width: WIDTH, height: HEIGHT, margin: 8mm)
  // title
  box(
    width: 100%,
    height: TITLE_HEIGHT,
    inset: 0pt,
    fill: black.lighten(20%),
    align(center + horizon, text(
      font: "Noto Sans JP",
      fill: white,
      size: TITLE_SIZE,
      weight: 600,
      title,
    )),
  )
  // body text defaults
  set text(font: "<NAME>", size: BODY_SIZE)
  set par(justify: true)
  // heading
  show heading: it => {
    if it.level == 1 {
      block(
        stroke: black,
        width: 100%,
        inset: 6pt,
        outset: 1pt,
        above: 15pt,
        radius: 2pt,
        text(size: HEADING1_SIZE, weight: 600, it),
      )
    } else {
      block(
        width: 100%,
        below: 13pt,
        stroke: (bottom: 1pt + black),
        inset: (bottom: 0.2em),
        it,
      )
    }
  }
  set enum(numbering: (..nums) => text(font: "Noto Sans JP", weight: 800, nums.pos().map(str).join(".") + "."))
  // body
  columns(cols, doc)
}

#let caution(doc) = {
  box(fill: black.lighten(80%), width: 100%, inset: 6pt, radius: 4pt, grid(
    columns: (5%, 95%),
    gutter: 1%,
    // forward slash: Typst path strings use "/" on all platforms
    image("icon/caution_mark.svg"),
    block(inset: (x: 7pt, y: 3pt), text(size: 12pt, doc)),
  ))
}

#let QandA(Q: [], A: []) = {
  grid(
    columns: (10%, 90%),
    gutter: 3%,
    align: horizon,
    block(
      width: 30pt,
      height: 30pt,
      fill: black,
      inset: 12pt,
      align(center + horizon, text(fill: white, size: 18pt, weight: 800, "Q")),
    ),
    block(inset: 6pt, outset: 1pt, above: 15pt, radius: 2pt, text(size: 13pt, Q)),
  )
  grid(
    columns: (10%, 90%),
    gutter: 3%,
    block(
      width: 30pt,
      height: 30pt,
      fill: black,
      align(center + horizon, text(fill: white, size: 18pt, weight: 800, "A")),
    ),
    block(inset: 6pt, outset: 1pt, above: 15pt, radius: 2pt, text(size: 13pt, A)),
  )
}

#let encloseText(doc) = {
  block(
    stroke: black,
    inset: 6pt,
    outset: 1pt,
    above: 14pt,
    below: 14pt,
    radius: 2pt,
    text(size: 13pt, doc),
  )
}
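A usage sketch for `conf`, `caution`, and `QandA` above; the titles and sample content are mine:

```typst
#import "template.typ": conf, caution, QandA

// Wrap the whole document in the two-column manual layout.
#show: doc => conf(title: [User Manual], cols: 2, doc)

#caution[Unplug the device before opening the case.]

#QandA(
  Q: [Why does the preview differ from the compiled PDF?],
  A: [Preview rendering is approximate; always check the compiled output.],
)
```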
https://github.com/EpicEricEE/typst-plugins
https://raw.githubusercontent.com/EpicEricEE/typst-plugins/master/united/src/unit.typ
typst
#import "data.typ" // Format a list of atoms. // // Parameters: // - atoms: List of // - prefix: Prefix of the atom, or `none`. // - name: Base name. // - space: Whether a space is needed before the atom. // - exponent: Exponent of the atom. // - unit-sep: The separator between atoms. // - per: How to format fractions. #let format-atoms( atoms, unit-sep: math.thin, per: "reciprocal" ) = { // Format a single atom. let format-atom(atom) = { let result = [#atom.prefix#atom.name] if atom.exponent != "1" { result = math.attach(result, tr: atom.exponent) } result } // Remove atoms with zero exponent, as they are equivalent to 1. atoms = atoms.filter(atom => atom.exponent != "0") // Join atoms into a sequence with the unit space as separator. let join(atoms) = atoms.map(format-atom).join[#unit-sep] if per == "reciprocal" { // Format as sequence of atoms with positive and negative exponents. return join(atoms) } // Partition atoms into normal and inverted ones and adjust exponents. let (normal, inverted) = atoms.fold(((), ()), (acc, unit) => { let (index, exponent) = if unit.exponent.first() == str(math.minus) { (1, unit.exponent.slice(str(math.minus).len())) } else { (0, unit.exponent) } acc.at(index).push((..unit, exponent: exponent)) acc }) if inverted == () { // No inverted units, so just join the normal ones. return join(normal) } let numerator = join(normal) let denominator = join(inverted) if per == "fraction" { if normal == () { numerator = "1" } return math.frac(numerator, denominator) } else { if inverted.len() > 1 { denominator = math.lr[(#denominator)] } return [#numerator#per#denominator] } } // Parses an atom string using the shorthand notation. // // Parameters: // - atom: String containing the atom. // - inverted: Whether the atom is inverted. // // Returns: // - prefix: Prefix of the atom, or `none`. // - name: Base name. // - space: Whether a space is needed before the atom. // - exponent: Exponent of the atom. #let expect-short-atom(atom, inverted: false) = { let split = atom.split("^") let base = split.first() let exponent = split.at(1, default: "1").replace("-", math.minus) let unit = data.units-short.at(base, default: none) let space = data.units-short-space.at(base, default: true) if unit == none { // Check if unit is given in quotes. let match = base.match(regex("^'(.+)'$")) if match != none { unit = math.upright(match.captures.first()) } } if unit != none { // Base is a unit without a prefix. return ( prefix: none, name: unit, space: space, exponent: if inverted { math.minus } + exponent ) } // Find first matching prefix + unit let prefix = "" for char in base.clusters() { prefix += char if prefix in data.prefixes-short { let unit = base.slice(prefix.len()) if unit in data.units-short { return ( prefix: data.prefixes-short.at(prefix), name: data.units-short.at(unit), space: data.units-short-space.at(unit), exponent: if inverted { math.minus } + exponent ) } } } // No matching prefix + unit name found. panic("invalid unit: " + base) } // Parses a unit string using the shorthand notation into a list of atoms. // // Parameters: // - string: String containing the unit. // // Returns: List of // - prefix: Prefix of the atom, or `none`. // - name: Base name. // - space: Whether a space is needed before the atom. // - exponent: Exponent of the atom. 
#let parse-short-unit(string) = { let factors = string .replace(regex("\s*/\s*"), "/") .replace(regex("\s+"), " ") .split(regex(" ")) let atoms = factors .map(factor => factor.split("/")) .map(((first, ..rest)) => ( expect-short-atom(first), ..rest.map(expect-short-atom.with(inverted: true)) )) .flatten() atoms } // Parses the next atom in a list of words using the long notation. // // Parameters: // - words: List of words. // // Returns: // - prefix: Prefix of the atom, or `none`. // - name: Base name. // - space: Whether a space is needed before the atom. // - exponent: Exponent of the atom. // - index: Index of the next word. #let expect-long-atom(words) = { let word-count = 0 // Check if atom is inverted. let next = words.at(word-count, default: none) assert.ne(next, none, message: "expected unit") let inverted = next == "per" if inverted { word-count += 1 } // Check if prefix is given. let (prev, next) = (next, words.at(word-count, default: none)) assert.ne(next, none, message: "expected unit after \"" + prev + "\"") let prefix = data.prefixes.at(next, default: none) if prefix != none { word-count += 1 } // Expect base atom. let (prev, next) = (next, words.at(word-count, default: none)) assert.ne(next, none, message: "expected unit after \"" + prev + "\"") let unit = data.units.at(next, default: none) if unit == none { // Check if unit is given in quotes. let match = next.match(regex("^'(.+)'$")) if match != none { unit = math.upright(match.captures.first()) } } let space = data.units-space.at(next, default: true) assert.ne(unit, none, message: "invalid unit: " + next) word-count += 1 // Check if exponent is given. let next = words.at(word-count, default: none) let exponent = if next != none { data.postfixes.at(next, default: none) } if exponent != none { word-count += 1 } else { exponent = "1" } return ( prefix: prefix, name: unit, space: space, exponent: if inverted { math.minus } + exponent, word-count: word-count ) } // Parses a unit string using the long notation into a list of atoms. // // Parameters: // - string: String containing the unit. // // Returns: List of // - prefix: Prefix of the atom, or `none`. // - name: Base name. // - space: Whether a space is needed before the atom. // - exponent: Exponent of the atom. // - inverted: Whether the atom is inverted. #let parse-long-unit(string) = { let words = lower(string) .replace(regex("\s+"), " ") .split(regex(" ")) let i = 0 let atoms = () while i < words.len() { let (word-count, ..atom) = expect-long-atom(words.slice(i)) atoms.push(atom) i += word-count } atoms } // Format the given unit string. // // The unit can be given in shorthand notation (e.g. "m/s^2") or as written-out // words (e.g. "meter per second squared"). // // Parameters: // - string: String containing the unit. // - unit-sep: The separator between units. // - per: How to format unit fractions. #let format-unit( string, unit-sep: math.thin, per: "reciprocal", prefix-space: false, ) = { // Check whether any word is characteristic for the long notation. let long = lower(string).trim().split(" ").any(word => { (("per",), data.prefixes, data.units, data.postfixes).any(d => word in d) }) let atoms = if long { parse-long-unit(string) } else { parse-short-unit(string) } // Prefix with thin space if required. let result = format-atoms(atoms, unit-sep: unit-sep, per: per) if prefix-space and atoms.first().space { result = math.thin + result } result }
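A usage sketch for `format-unit`, exercising both notations the parsers above accept. The exact rendering depends on the entries in `data.typ`, so treat the calls and outputs as assumptions:

```typst
#import "unit.typ": format-unit

// Shorthand notation: prefixed unit symbols, "^" exponents, "/" fractions.
$ 9.81 #format-unit("m/s^2") $

// Long notation, detected by words like "per", written-out prefixes
// ("kilo"), and postfixes ("squared"); rendered as a stacked fraction.
$ 120 #format-unit("kilo meter per hour", per: "fraction") $
```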
https://github.com/FilipSolich/CV-Template
https://raw.githubusercontent.com/FilipSolich/CV-Template/main/example.typ
typst
The Unlicense
#import "template.typ": CV, chips #let lang = "en" #let personalInfo = ( name: "<NAME>", degrees: ("Ing.",), photo: "photos/me.jpg", links: ( ( icon: "icons/email.svg", link: link("mailto:<EMAIL>", "<EMAIL>") ), ( icon: "icons/phone.svg", link: link("tel:+420123456789", "+420 123 456 789") ), ( icon: "icons/github.svg", link: link("https://github.com/GitHub", "John") ), ( icon: "icons/linkedin.svg", link: link("https://www.linkedin.com", "in/john") ), ( icon: "icons/location.svg", link: link("https://www.openstreetmap.org/relation/435514", "Prague / Czech republic") ), ) ) #let jobs = ( title: "Experience", jobs: ( ( title: "Senior developer", company: "Company 3", description: [ - #lorem(15) - #lorem(11) - #lorem(18) ], chips: (chips.python, chips.docker), dateFrom: "july 2023", dateTo: "december 2025", logo: "logos/3.png" ), ( title: "Medior developer", company: "Company 2", description: [ - #lorem(20) - #lorem(12) - #lorem(8) ], chips: (chips.python, chips.fastapi, chips.mongo, chips.grpc), dateFrom: "january 2023", dateTo: "july 2023", logo: "logos/2.png" ), ( title: "Junior developer", company: "Company 1", description: [ - #lorem(12) - #lorem(13) ], chips: (chips.cpp, chips.postgres, chips.redis), dateFrom: "june 2010", dateTo: "january 2023", logo: "logos/1.png" ), ), ) #let education = ( title: "Education", education: ( ( title: "Master degree", company: "University name", description: [#lorem(10)], dateFrom: "2008", dateTo: "2010", logo: "logos/2.png" ), ( title: "Bachelor degree", company: "University name", description: [#lorem(12)], dateFrom: "2005", dateTo: "2008", logo: "logos/1.png" ), ), ) #let projects = ( title: "Projects", projects: ( ( title: "Proj 1", description: [ #lorem(10) - #lorem(10) - #lorem(15) - #lorem(13) ], link: [#box(baseline: 0.15em, height: 1em, image("icons/github.svg")) #link("https://github.com")], chips: (chips.go, chips.rabbitmq, chips.github), logo: "logos/1.png" ), ), ) #let certifications = ( title: "Certifications", certs: ( ( title: "Cisco CCNA: Routing and switching", issuer: "Cisco Network Academy", date: "2010", logo: "logos/1.png" ), ), ) #let languages = ( title: "Languages", langs: ( ("Czech", "C2"), ("English", "B2"), ), ) #CV(lang, personalInfo, languages, jobs, education, projects, certifications)
https://github.com/DawnEver/mcm-icm-typst-template
https://raw.githubusercontent.com/DawnEver/mcm-icm-typst-template/main/template.typ
typst
MIT License
#let fontsize = 12pt
#let title-fontsize = 16pt
#let fonttype = "Times New Roman"

// Workaround for the first-line-indent problem. Please refer to https://github.com/typst/typst/issues/311#issuecomment-1556115270
#let first-line-indent = 20pt

#let cover(
  body,
  title: "",
  abstract: [],
  keywords: (),
  team-number: "",
  problem-chosen: "",
  year: "",
)={
  // Team information
  pad(left:-first-line-indent)[#grid(columns: (9fr, 11fr, 11fr), rows: (auto,), gutter: 0pt,
    align(center)[
      #block(text("Problem Chosen", size: title-fontsize, weight: "bold"))
      #block(text(problem-chosen, size: 30pt, fill: red, weight: "bold"))
    ],
    align(center)[
      #text(year, size: title-fontsize, weight: "bold")
      #text("\nMCM/ICM\nSummary Sheet", size: title-fontsize, weight: "bold")
    ],
    align(center)[
      #block(text("Team Control Number", size: title-fontsize, weight: "bold"))
      #block(text(team-number, size: 30pt, fill: red, weight: "bold"))
    ])]

  // Dividing line
  pad(left:-first-line-indent)[#line(length: 100%, stroke: black)]

  // Title row.
  align(
    center,
  )[
    #block(text(title, size: 24pt), below: 20pt, above: 20pt)
    #block(text("Summary", size: title-fontsize, weight: "bold"), below: 20pt, above: 20pt)
  ]

  set par(
    // first-line-indent: first-line-indent,
    hanging-indent: -first-line-indent,
    linebreaks: "optimized",
  )

  // Abstract
  abstract

  // Keywords
  if keywords != () [
    #v(5pt)
    #text("Keywords: ", weight: "bold")
    #keywords.join(", ")
  ]

  body
}

#let project(
  title: "",
  abstract: [],
  keywords: (),
  team-number: "",
  problem-chosen: "",
  year: "",
  bibliography-file: none,
  body,
) = {
  // Set the document's basic properties.
  set document(title: title)
  set page(margin: (left: 80pt, right: 50pt, top: 40pt, bottom: 40pt))
  set text(font: fonttype, size: fontsize)

  // Configure equation numbering and spacing.
  set math.equation(numbering: "(1)", supplement: [])
  show math.equation: set block(spacing: 0.65em)

  // Configure figures and tables.
  set figure(supplement: [])
  show figure: it => {
    set text(fontsize)
    set align(center)
    let has_caption = it.caption != none
    if it.kind == image [
      #box[
        #it.body
        #v(fontsize, weak: true)
        Figure #it.caption
      ]
    ] else if it.kind == table [
      #box[
        #if has_caption {
          text("Table")
          it.caption
        }
        #v(fontsize, weak: true)
        #it.body
      ]
    ] else [
      ...
    ]
  }
  set table(stroke: 0.5pt)
  show table: set text(fontsize)
  // Configure lists.
// set enum(indent: first-line-indent) // set list(indent: first-line-indent) set heading(numbering: "1.") show heading: it => { if it.body in ([Appendices],[Acknowledgment], [Acknowledgement]){ text(it.body, size: title-fontsize, weight: "bold") }else if it.body in ([Contents], ){ align(center)[ #text(it.body, size: title-fontsize, weight: "bold") ] } else{ pad(left: -first-line-indent)[#it] } } show: cover.with(title: title, abstract: abstract, keywords: keywords, team-number: team-number, problem-chosen:problem-chosen, year:year) // contents set page(margin: (left: 60pt, right: 40pt, top: 60pt, bottom: 40pt)) set par( first-line-indent: 0pt, hanging-indent: 0pt, ) show outline.entry.where(level: 1): it => { v(fontsize, weak: true) strong(it) } pagebreak() outline(indent: auto) // Settings for main body set par( // first-line-indent: first-line-indent, hanging-indent: -first-line-indent, linebreaks: "optimized", ) let page_counter = counter(page) page_counter.update(0) set page( header: pad(left:-first-line-indent)[#box(stroke: (bottom: 1pt), inset: 5pt)[ #text("Team #") #text(team-number) #h(1fr) #text("Page") #page_counter.display("1 of 1", both: true) ]], header-ascent: 20%, margin: (left: 80pt, right: 50pt, top: 60pt, bottom: 40pt), ) // Display the paper's main body. body // Display bibliography. if bibliography-file != none { show bibliography: set text(fontsize) bibliography(bibliography-file, title: "References", style: "ieee") } }
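A minimal usage sketch added for illustration (the file name `template.typ` and every field value below are hypothetical, not from the repo):

#import "template.typ": project

#show: project.with(
  title: "An Example Paper Title",
  abstract: [#lorem(40)],
  keywords: ("modeling", "optimization", "simulation"),
  team-number: "2400001",
  problem-chosen: "A",
  year: "2024",
)

= Introduction
#lorem(30)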
https://github.com/Daillusorisch/HYSeminarAssignment
https://raw.githubusercontent.com/Daillusorisch/HYSeminarAssignment/main/template/components/tlt.typ
typst
#let tlt_frame(stroke) = (x, y) => ( left: 0pt, right: 0pt, top: if y < 2 { stroke } else { 0pt }, bottom: stroke, ) #let tlt(..args) = { set table( stroke: tlt_frame(rgb("000000")) ) table(..args) }
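A usage sketch added for illustration (assumes the file above is importable as `tlt.typ`): `tlt` forwards its arguments to `table` and draws only the three horizontal rules of a booktabs-style table.

#import "tlt.typ": tlt

#tlt(
  columns: 3,
  [Quantity], [Symbol], [Unit],
  [Length], [$l$], [m],
  [Mass], [$m$], [kg],
)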
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/layout/table.typ
typst
// Test tables. --- table-empty --- #table() --- table-newlines --- #set page(height: 70pt) #set table(fill: (x, y) => if calc.even(x + y) { rgb("aaa") }) #table( columns: (1fr,) * 3, stroke: 2pt + rgb("333"), [A], [B], [C], [], [], [D \ E \ F \ \ \ G], [H], ) --- table-fill-basic --- #table(columns: 3, stroke: none, fill: green, [A], [B], [C]) --- table-fill-bad --- // Error: 14-19 expected color, gradient, pattern, none, array, or function, found string #table(fill: "hey") --- table-align-array --- // Test alignment with array. #table( columns: (1fr, 1fr, 1fr), align: (left, center, right), [A], [B], [C] ) // Test empty array. #set align(center) #table( columns: (1fr, 1fr, 1fr), align: (), [A], [B], [C] ) --- table-inset --- // Test inset. #table( columns: 3, inset: 10pt, [A], [B], [C] ) #table( columns: 3, inset: (y: 10pt), [A], [B], [C] ) #table( columns: 3, inset: (left: 20pt, rest: 10pt), [A], [B], [C] ) #table( columns: 2, inset: ( left: 20pt, right: 5pt, top: 10pt, bottom: 3pt, ), [A], [B], ) #table( columns: 3, fill: (x, y) => (if y == 0 { aqua } else { orange }).darken(x * 15%), inset: (x, y) => (left: if x == 0 { 0pt } else { 5pt }, right: if x == 0 { 5pt } else { 0pt }, y: if y == 0 { 0pt } else { 5pt }), [A], [B], [C], [A], [B], [C], ) #table( columns: 3, inset: (0pt, 5pt, 10pt), fill: (x, _) => aqua.darken(x * 15%), [A], [B], [C], ) --- table-inset-fold --- // Test inset folding #set table(inset: 10pt) #set table(inset: (left: 0pt)) #table( fill: red, inset: (right: 0pt), table.cell(inset: (top: 0pt))[a] ) --- table-gutters --- // Test interaction with gutters. #table( columns: (3em, 3em), fill: (x, y) => (red, blue).at(calc.rem(x, 2)), align: (x, y) => (left, right).at(calc.rem(y, 2)), [A], [B], [C], [D], [E], [F], [G], [H] ) #table( columns: (3em, 3em), fill: (x, y) => (red, blue).at(calc.rem(x, 2)), align: (x, y) => (left, right).at(calc.rem(y, 2)), row-gutter: 5pt, [A], [B], [C], [D], [E], [F], [G], [H] ) #table( columns: (3em, 3em), fill: (x, y) => (red, blue).at(calc.rem(x, 2)), align: (x, y) => (left, right).at(calc.rem(y, 2)), column-gutter: 5pt, [A], [B], [C], [D], [E], [F], [G], [H] ) #table( columns: (3em, 3em), fill: (x, y) => (red, blue).at(calc.rem(x, 2)), align: (x, y) => (left, right).at(calc.rem(y, 2)), gutter: 5pt, [A], [B], [C], [D], [E], [F], [G], [H] ) --- table-contextual-measurement --- // Test that table cells with varying contextual results are properly // measured. 
#let c = counter("c") #let k = context square(width: c.get().first() * 5pt) #let u(n) = [#n] + c.update(n) #table( columns: 3, u(1), k, u(2), k, u(4), k, k, k, k, ) --- table-header-citation --- #set page(height: 60pt) #table( table.header[@netwok], [A], [A], ) #show bibliography: none #bibliography("/assets/bib/works.bib") --- table-header-counter --- #set page(height: 60pt) #let c = counter("c") #table( table.header(c.step() + context c.display()), [A], [A], ) --- table-header-footer-madness --- #set page(height: 100pt) #let c = counter("c") #let it = context c.get().first() * v(10pt) #table( table.header(c.step()), [A], [A], [A], [A], [A], [A], [A], table.footer(it), ) --- table-cell-override --- // Cell override #table( align: left, fill: red, stroke: blue, columns: 2, [AAAAA], [BBBBB], [A], [B], table.cell(align: right)[C], [D], align(right)[E], [F], align(horizon)[G], [A\ A\ A], table.cell(align: horizon)[G2], [A\ A\ A], table.cell(inset: 0pt)[I], [F], [H], table.cell(fill: blue)[J] ) --- table-cell-show --- // Cell show rule #show table.cell: it => [Zz] #table( align: left, fill: red, stroke: blue, columns: 2, [AAAAA], [BBBBB], [A], [B], table.cell(align: right)[C], [D], align(right)[E], [F], align(horizon)[G], [A\ A\ A] ) --- table-cell-show-and-override --- #show table.cell: it => (it.align, it.fill) #table( align: left, row-gutter: 5pt, [A], table.cell(align: right)[B], table.cell(fill: aqua)[B], ) --- table-cell-set --- // Cell set rules #set table.cell(align: center) #show table.cell: it => (it.align, it.fill, it.inset) #set table.cell(inset: 20pt) #table( align: left, row-gutter: 5pt, [A], table.cell(align: right)[B], table.cell(fill: aqua)[B], ) --- table-cell-folding --- // Test folding per-cell properties (align and inset) #table( columns: (1fr, 1fr), rows: (2.5em, auto), align: right, fill: (x, y) => (green, aqua).at(calc.rem(x + y, 2)), [Top], table.cell(align: bottom)[Bot], table.cell(inset: (bottom: 0pt))[Bot], table.cell(inset: (bottom: 0pt))[Bot] ) --- table-cell-align-override --- // Test overriding outside alignment #set align(bottom + right) #table( columns: (1fr, 1fr), rows: 2em, align: auto, fill: green, [BR], [BR], table.cell(align: left, fill: aqua)[BL], table.cell(align: top, fill: red.lighten(50%))[TR] ) --- table-cell-various-overrides --- #table( columns: 2, fill: green, align: right, [*Name*], [*Data*], table.cell(fill: blue)[J.], [Organizer], table.cell(align: center)[K.], [Leader], [M.], table.cell(inset: 0pt)[Player] ) --- table-cell-show-emph --- #{ show table.cell: emph table( columns: 2, [Person], [Animal], [John], [Dog] ) } --- table-cell-show-based-on-position --- // Style based on position #{ show table.cell: it => { if it.y == 0 { strong(it) } else if it.x == 1 { emph(it) } else { it } } table( columns: 3, gutter: 3pt, [Name], [Age], [Info], [John], [52], [Nice], [Mary], [50], [Cool], [Jake], [49], [Epic] ) } --- grid-cell-in-table --- // Error: 8-19 cannot use `grid.cell` as a table cell // Hint: 8-19 use `table.cell` instead #table(grid.cell[]) --- issue-183-table-lines --- // Ensure no empty lines before a table that doesn't fit into the first page. #set page(height: 50pt) Hello #table( columns: 4, [1], [2], [3], [4] ) --- issue-1388-table-row-missing --- // Test that a table row isn't wrongly treated like a gutter row. #set page(height: 70pt) #table( rows: 16pt, ..range(6).map(str).flatten(), )
https://github.com/rabotaem-incorporated/algebra-conspect-1course
https://raw.githubusercontent.com/rabotaem-incorporated/algebra-conspect-1course/master/sections/03-polynomials/09-alg-closed-field.typ
typst
Other
#import "../../utils/core.typ": * == Алгебраически замкнутые поля. Каноническое разложение над $CC$ и над $RR$. #def[ Поле $K$ называется _алгебраически замкнутым_, если любой $f in K[x]$ имеет корень в $K$. ] #th(name: [Основная теорема алгебры.])[ $CC$ алгебраически замкнуто. ] #proof[ Не будет в курсе. Идея доказательства: $f = a_n x^n + ... + a_1x + a_0, space z in CC, space f(z) = 0$. $r > max\{abs(a_0), ..., abs(a_n)\}$. $f(r(cos(phi) + i sin(phi))) = r^n (cos(n phi) + i sin(n phi)) + g(r(cos(phi) + i sin(phi)))$. $abs(g(r(cos(phi) + i sin(phi)))) < r^(n - 1)(abs(a_(n - 1)) + ... + abs(a_1) + abs(a_0)) < r^n$. $==> Delta arg f(r(cos(phi) + i sin(phi))) = 2 pi n$. $D = \{z in CC divides abs(z) <= r\}$ $limits(==>)^("Топология") f(D)$ --- односвязная область. $==> 0 in f(D) ==> exists z space f(z) = 0$. ] #notice[ Любое поле можно вложить в алгебраически замкнутое поле. Всегда есть минимальное такое поле. Для $QQ$ это поле алгебраических чисел. Алгебраическое число --- комплексный корень многочлена над $QQ$. ] #pr[ $K$ --- алгебраически замкнутое поле, $f in K[x]$. Тогда $f$ --- неприводим $<==> deg f = 1$. ] #proof[ Все многочлены степени $1$ неприводимы. $deg f eq.not 1 ==> exists x in K: f(x) = 0$ $limits(==>)^("Т. Безу") (x - c) divides f ==>$ он приводим Таким образом если $f in K[x], space deg f >= 1$, то его каноническое разложение имеет вид: $ f = c_0 limits(product)_(i = 1)^n (x - c_i)^(d_i), $ где $c_i in K, space d_i in ZZ_+$. ] #pr[ $f in RR[x], space a in CC$ --- его корень. Тогда $overline(a)$ --- корень $f$ той же кратности. ] #proof[ Пусть $l$ --- кратность корня $a$. В $CC[x]$ имеем $f = (x - a)^l g, quad g in CC[x], space g(a) eq.not 0$. Пусть $g = b_n x^n + ... + b_1 x + b_0$. Рассмотрим $overline(g) = overline(b_n) x^n + ... + overline(b_1) x + overline(b_0)$. Тогда $f = overline(f) = overline((x - a)^l) overline(g) = (x - overline(a))^l overline(g) ==> f(overline(a)) = 0$ $0 eq.not g(a) = overline(overline(g)(overline(a))) ==> (x - overline(a)) divides.not overline(g)$ $==> overline(a)$ --- корень $f$ кратности $l$ $==>$ все корни разбиваются на пары сопряженных, тогда каноническое разложение в $CC[x]$ имеет вид: $ f = r_0(limits(product)_(i = 1)^n (x - r_i)^(d_i)) dot (limits(product)_(i = 1)^m ((x - c_i)(x - overline(c_i)))^(p_i)), $ где $r_i in RR, space d_i in Z_+, space c_i,overline(c_i) in CC, space p_i in ZZ_+$. $ <==> f = r_0(limits(product)_(i = 1)^n (x - r_i)^(d_i)) dot (limits(product)_(i = 1)^m B_i^(p_i)), $ $B_i$ --- квадратичные многочлены, неприводимые в $RR$. $B_i = (x - c_i)(x - overline(c_i)) = x^2 - (c_i + overline(c_i))x + c_i overline(c_i) = x^2 - 2 Re c_i x + abs(c_i)^2 in RR[x]$ ] #pr[ Унитарные неприводимые многочлены в $RR$ --- это: 1. $x - a, quad a in RR$ 2. $x^2 + a x + b, quad a, b in RR, space b^2 - 4a c < 0$ ] #proof[ С многочленами степени 1 и 2 все ясно. Если степень многочлена больше 2, то справедливо разложение по предыдущему предложению значит он приводим. ]
https://github.com/mroberts1/fsu-smt-su24
https://raw.githubusercontent.com/mroberts1/fsu-smt-su24/main/w3-selfies.typ
typst
MIT License
// Some definitions presupposed by pandoc's typst output. #let blockquote(body) = [ #set text( size: 0.92em ) #block(inset: (left: 1.5em, top: 0.2em, bottom: 0.2em))[#body] ] #let horizontalrule = [ #line(start: (25%,0%), end: (75%,0%)) ] #let endnote(num, contents) = [ #stack(dir: ltr, spacing: 3pt, super[#num], contents) ] #show terms: it => { it.children .map(child => [ #strong[#child.term] #block(inset: (left: 1.5em, top: -0.4em))[#child.description] ]) .join() } // Some quarto-specific definitions. #show raw.where(block: true): block.with( fill: luma(230), width: 100%, inset: 8pt, radius: 2pt ) #let block_with_new_content(old_block, new_content) = { let d = (:) let fields = old_block.fields() fields.remove("body") if fields.at("below", default: none) != none { // TODO: this is a hack because below is a "synthesized element" // according to the experts in the typst discord... fields.below = fields.below.amount } return block.with(..fields)(new_content) } #let empty(v) = { if type(v) == "string" { // two dollar signs here because we're technically inside // a Pandoc template :grimace: v.matches(regex("^\\s*$")).at(0, default: none) != none } else if type(v) == "content" { if v.at("text", default: none) != none { return empty(v.text) } for child in v.at("children", default: ()) { if not empty(child) { return false } } return true } } #show figure: it => { if type(it.kind) != "string" { return it } let kind_match = it.kind.matches(regex("^quarto-callout-(.*)")).at(0, default: none) if kind_match == none { return it } let kind = kind_match.captures.at(0, default: "other") kind = upper(kind.first()) + kind.slice(1) // now we pull apart the callout and reassemble it with the crossref name and counter // when we cleanup pandoc's emitted code to avoid spaces this will have to change let old_callout = it.body.children.at(1).body.children.at(1) let old_title_block = old_callout.body.children.at(0) let old_title = old_title_block.body.body.children.at(2) // TODO use custom separator if available let new_title = if empty(old_title) { [#kind #it.counter.display()] } else { [#kind #it.counter.display(): #old_title] } let new_title_block = block_with_new_content( old_title_block, block_with_new_content( old_title_block.body, old_title_block.body.body.children.at(0) + old_title_block.body.body.children.at(1) + new_title)) block_with_new_content(old_callout, new_title_block + old_callout.body.children.at(1)) } #show ref: it => locate(loc => { let suppl = it.at("supplement", default: none) if suppl == none or suppl == auto { it return } let sup = it.supplement.text.matches(regex("^45127368-afa1-446a-820f-fc64c546b2c5%(.*)")).at(0, default: none) if sup != none { let target = query(it.target, loc).first() let parent_id = sup.captures.first() let parent_figure = query(label(parent_id), loc).first() let parent_location = parent_figure.location() let counters = numbering( parent_figure.at("numbering"), ..parent_figure.at("counter").at(parent_location)) let subcounter = numbering( target.at("numbering"), ..target.at("counter").at(target.location())) // NOTE there's a nonbreaking space in the block below link(target.location(), [#parent_figure.at("supplement") #counters#subcounter]) } else { it } }) // 2023-10-09: #fa-icon("fa-info") is not working, so we'll eval "#fa-info()" instead #let callout(body: [], title: "Callout", background_color: rgb("#dddddd"), icon: none, icon_color: black) = { block( breakable: false, fill: background_color, stroke: (paint: icon_color, thickness: 0.5pt, cap: "round"), width: 
100%,
    radius: 2pt,
    block(
      inset: 1pt,
      width: 100%,
      below: 0pt,
      block(
        fill: background_color,
        width: 100%,
        inset: 8pt)[#text(icon_color, weight: 900)[#icon] #title]) +
      block(
        inset: 1pt,
        width: 100%,
        block(fill: white, width: 100%, inset: 8pt, body)))
}

#let article(
  title: none,
  authors: none,
  date: none,
  abstract: none,
  cols: 1,
  margin: (x: 1.25in, y: 1.25in),
  paper: "us-letter",
  lang: "en",
  region: "US",
  font: (),
  fontsize: 11pt,
  sectionnumbering: none,
  toc: false,
  toc_title: none,
  toc_depth: none,
  toc_indent: 1.5em,
  doc,
) = {
  set page(
    paper: paper,
    margin: margin,
    numbering: "1",
  )
  set par(justify: true)
  set text(lang: lang,
           region: region,
           font: font,
           size: fontsize)
  set heading(numbering: sectionnumbering)

  if title != none {
    align(center)[#block(inset: 2em)[
      #text(weight: "bold", size: 1.5em)[#title]
    ]]
  }

  if authors != none {
    let count = authors.len()
    let ncols = calc.min(count, 3)
    grid(
      columns: (1fr,) * ncols,
      row-gutter: 1.5em,
      ..authors.map(author =>
          align(center)[
            #author.name \
            #author.affiliation \
            #author.email
          ]
      )
    )
  }

  if date != none {
    align(center)[#block(inset: 1em)[
      #date
    ]]
  }

  if abstract != none {
    block(inset: 2em)[
    #text(weight: "semibold")[Abstract] #h(1em) #abstract
    ]
  }

  if toc {
    let title = if toc_title == none {
      auto
    } else {
      toc_title
    }
    block(above: 0em, below: 2em)[
    #outline(
      title: title,
      depth: toc_depth,
      indent: toc_indent
    );
    ]
  }

  if cols == 1 {
    doc
  } else {
    columns(cols, doc)
  }
}

#set table(
  inset: 6pt,
  stroke: none
)
#show: doc => article(
  title: [W3: Smile for the Camera: Selfies],
  margin: (x: 1.25in,y: 1.25in,),
  paper: "us-letter",
  font: ("Source Sans",),
  fontsize: 14pt,
  toc_title: [Table of contents],
  toc_depth: 3,
  cols: 1,
  doc,
)

- Gurfinkel, "#link("https://medium.com/@socialcreature/ai-and-the-american-smile-76d23a0fbfaf")[AI and the American Smile];" (#strong[Medium];, 17 March 2023)
- <NAME>, "#link("https://www.theguardian.com/global-development/2021/mar/03/china-positive-energy-emotion-surveillance-recognition-tech")[Smile for the Camera: The Dark Side of China’s Emotion-Recognition Tech];" (#strong[The Guardian];, 3 March 2021)

== AI and the American Smile
<ai-and-the-american-smile>
#quote(block: true)[
In flattening the diversity of facial expressions of civilizations around the world AI had collapsed the spectrum of history, culture, photography, and emotion concepts into a singular, monolithic perspective. It presented a false visual narrative about the universality of something that in the real world — where real humans have lived and created culture, expression, and meaning for hundreds of thousands of years — is anything but uniform.
]

#quote(block: true)[
—Gurfinkel, "AI and the American Smile"
]

So this week we are tackling everybody’s favorite social-media topic: the selfie, or, if you’ve read the reading assignment, what might more accurately be called the #strong[smilie];. I initially assigned only Gurfinkel’s article for this week, because the article itself is packed with references to other sources that you are presumably also going to want to explore. But as I was rereading it, I decided to add one other recent article that came to mind, which adds a different perspective to the discussion of social media, cameras, and smiling: a #strong[Guardian] article about the use of emotion-recognition technology as a tool of social control in contemporary China. So in some ways, this week we could think about not just the #strong[smile] but the #emph[face] in social media, as a focus of social, cultural, and political power.
The trickiest concept referenced in the AI-selfie article is "uncertainty avoidance," a rather self-contradictory notion that may need clarification. My favorite example of this comes from country walking. As I’m sure you know, it’s an unspoken convention when out walking in a forest or some other remote location that if you pass someone walking in the other direction, you are expected to make eye contact, say "Hi!," and above all, #strong[smile] at the other person(s)! Why do we all do this? We don’t do it when walking around town, for example. The answer is: to avoid uncertainty - uncertainty, that is, about the potentially malevolent intentions of the other we are encountering in this remote place. In short, eye contact, a brief verbal greeting, and, especially, a smile are a way of reassuring the other person that you are #strong[not a psycho];. Of course, given that, as the article also explains (in the reference to the University of Rochester study), smiling is also a sign of duplicity, there is no reason to be reassured by the smiling stranger we may encounter alone in a forest; in such cases, indeed, the smile may actually be more a source of anxiety than reassurance.

Regardless of whether smiles and a cheerful demeanor are reassuring or not, the central point of the article is that smiling is a #strong[cultural] practice: even though everyone smiles (universal humanism), we do so - or not - for very different reasons (cultural specificity). The Russian examples discussed in the article are perhaps the clearest illustration of this. The central question raised by Gurfinkel’s argument, however, stems from its title: how far can smiling, at least selfie-type smiling, be considered specifically "American"? Did Americans invent this kind of smiling? In answer to this, I would add one other example to the discussion of the cultural dimension of smiling: what I would call the #strong[subaltern smile];. ("Subaltern" here refers to a form of social subjectivity that has internalized structures of domination and accepts, even welcomes, its position of subservience.) From this standpoint, the smile is a sign of subservience, of #strong[eagerness to be of service] - is there anything else I can do for you? As the example of the Russian bank teller in the US shows, such smiling is often not just a social but a professional requirement, a form of coercion. In his book #strong[Black Skin, White Masks];, the postcolonial theorist Frantz Fanon writes about how, under imperialism, Africans were expected to #strong[smile] for their colonial masters, as a sign of their willing acceptance of their subservient role. There are numerous examples of this, but the most famous one is the ubiquitous image of the WW1/WW2 African infantryman in the French army (the #emph[tirailleur sénégalais];) that was used for the chocolate milk brand #link("https://en.wikipedia.org/wiki/Banania")[Banania];.

#box(image("../img/banania.jpg", width: 200pt))

We see from this example that Gurfinkel’s point that smiling is a sign of confidence or power does not always apply, and that smiling has many other cultural meanings. What if AIs were trained not on models of American smiles but on African ones? In light of the history of slavery, how awkward does this make the AI selfie of "Ancient African Tribal Warriors"?
#box(image("../img/african-ai.webp", width: 200pt))

#horizontalrule

== Selfie Studies
<selfie-studies>
The discussion of "smiling for the camera" is itself part of the larger study of the selfie in social media, a field that is much more developed than you would ever believe. To get an idea of the scope of this field, you could start with the #link("https://selfieresearchers.com/")[Selfies Research Network];, founded (as far as I know) by #link("https://researchers.mq.edu.au/en/persons/theresa-m-senft")[Theresa M. Senft] and other colleagues, notably #link("https://tiara.org/")[Alice Marwick];. In light of this week’s reading, you may be struck by Alice’s authentically American smile!

#box(image("../img/alice-marwick.jpeg", width: 200pt))

If you’re interested in exploring selfie culture for your research paper, there are many directions you could go, including music video:

#link("https://youtu.be/kdemFfbS5H0")

I’ll be interested to see what other examples are brought to mind both by this week’s readings and my lecture!

#horizontalrule
https://github.com/gym-8/solving-physics
https://raw.githubusercontent.com/gym-8/solving-physics/master/lib.typ
typst
MIT License
#import "@preview/wrap-it:0.1.0": wrap-content #let task = (given: "", find: "", stroke: "partially", fig: none, fig-align: top + right, given-width: auto, body) => { grid( columns: (given-width, auto), column-gutter: 1.2em, table( inset: 0.6em, stroke: (x, y) => { if (stroke == "full") { return black } else if (stroke == "partially") { return ( right: black, bottom: black ) } else if (stroke == "find") { if (y == 0) { return ( bottom: black ) } } }, given, if find != "" { find } ), pad(top: 0.6em, if (fig == none) { body } else { if (fig-align == top + center) { align(center, fig) body } else { wrap-content( fig, body, align: top + right, column-gutter: 1em ) } } ) ) }
https://github.com/noahjutz/AD
https://raw.githubusercontent.com/noahjutz/AD/main/notizen/algorithmen/ggt_proof.typ
typst
#import "@preview/cetz:0.2.2" #cetz.canvas(length: 5%, { import cetz.draw: * let g = rgb(0, 0, 0, 10%) let r = rgb(255, 0, 0, 30%) set-style(stroke: (thickness: 1, paint: g)) line((0, 0), (20, 0), name: "a") line((0, -3), (16, -3), name: "b") set-style(stroke: (thickness: 1pt, paint: black, dash: "dashed")) for i in range(0, 11) { line((2 * i, .5), (2 * i, -.5)) line((2 * i, -3.5), (2 * i, -2.5)) line((2 * i, -.5), (2 * i, -4.5), stroke: (paint: rgb(0, 0, 0, 15%))) } set-style(stroke: (dash: none)) cetz.decorations.brace((2,-4.5),(0,-4.5)) content((1, -5.5))[$x$] cetz.decorations.brace((20,-4.5),(16,-4.5)) content((18, -5.5))[$d$] content("a.mid", anchor: "south", padding: (bottom: 1))[$a$] content("b.mid", anchor: "south", padding: (bottom: 1))[$b$] })
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-10E80.typ
typst
Apache License 2.0
#let data = ( ("YEZIDI LETTER ELIF", "Lo", 0), ("YEZIDI LETTER BE", "Lo", 0), ("YEZIDI LETTER PE", "Lo", 0), ("YEZIDI LETTER PHE", "Lo", 0), ("YEZIDI LETTER THE", "Lo", 0), ("YEZIDI LETTER SE", "Lo", 0), ("YEZIDI LETTER CIM", "Lo", 0), ("YEZIDI LETTER CHIM", "Lo", 0), ("YEZIDI LETTER CHHIM", "Lo", 0), ("YEZIDI LETTER HHA", "Lo", 0), ("YEZIDI LETTER XA", "Lo", 0), ("YEZIDI LETTER DAL", "Lo", 0), ("YEZIDI LETTER ZAL", "Lo", 0), ("YEZIDI LETTER RA", "Lo", 0), ("YEZIDI LETTER RHA", "Lo", 0), ("YEZIDI LETTER ZA", "Lo", 0), ("YEZIDI LETTER JA", "Lo", 0), ("YEZIDI LETTER SIN", "Lo", 0), ("YEZIDI LETTER SHIN", "Lo", 0), ("YEZIDI LETTER SAD", "Lo", 0), ("YEZIDI LETTER DAD", "Lo", 0), ("YEZIDI LETTER TA", "Lo", 0), ("YEZIDI LETTER ZE", "Lo", 0), ("YEZIDI LETTER EYN", "Lo", 0), ("YEZIDI LETTER XHEYN", "Lo", 0), ("YEZIDI LETTER FA", "Lo", 0), ("YEZIDI LETTER VA", "Lo", 0), ("YEZIDI LETTER VA ALTERNATE FORM", "Lo", 0), ("YEZIDI LETTER QAF", "Lo", 0), ("YEZIDI LETTER KAF", "Lo", 0), ("YEZIDI LETTER KHAF", "Lo", 0), ("YEZIDI LETTER GAF", "Lo", 0), ("YEZIDI LETTER LAM", "Lo", 0), ("YEZIDI LETTER MIM", "Lo", 0), ("YEZIDI LETTER NUN", "Lo", 0), ("YEZIDI LETTER UM", "Lo", 0), ("YEZIDI LETTER WAW", "Lo", 0), ("YEZIDI LETTER OW", "Lo", 0), ("YEZIDI LETTER EW", "Lo", 0), ("YEZIDI LETTER HAY", "Lo", 0), ("YEZIDI LETTER YOT", "Lo", 0), ("YEZIDI LETTER ET", "Lo", 0), (), ("YEZIDI COMBINING HAMZA MARK", "Mn", 230), ("YEZIDI COMBINING MADDA MARK", "Mn", 230), ("YEZIDI HYPHENATION MARK", "Pd", 0), (), (), ("YEZIDI LETTER LAM WITH DOT ABOVE", "Lo", 0), ("YEZIDI LETTER YOT WITH CIRCUMFLEX ABOVE", "Lo", 0), )
https://github.com/EstebanMunoz/typst-template-informe
https://raw.githubusercontent.com/EstebanMunoz/typst-template-informe/main/custom-outline.typ
typst
MIT No Attribution
// Function that builds an outline for the given target
#let custom-outline(
  title: auto,
  target: heading
) = context {
  // Find every element that matches the target
  let queried = query(selector(target)).filter(q => q.outlined == true)

  // If there are no matching elements, the outline produces nothing and we stop here
  if queried.len() == 0 {return}

  // Location of each element
  let locations = queried.map(q => q.location().position())

  // Type of element being queried
  let queried-type = queried.at(0).func()

  // When querying headings, a few extra variables are needed for that particular case
  let (depths, max-depth) = if queried-type == heading {
    (
      queried.map(h => h.depth),
      calc.max(..queried.map(h => h.depth))
    )
  } else {(none, none)}

  // Body of each element found
  let bodies = if queried-type == heading {
    queried.map(h => h.body)
  } else {
    queried.map(q => q.caption.body)
  }

  // Number of columns in the grid
  let num-cols = if queried-type == heading {
    max-depth + 2
  } else {
    3
  }

  // Array with the numbering of every element found
  let queried-numbering = queried.map(q => {
    if q.numbering != none {
      numbering(q.numbering, ..counter(target).at(q.location()))
    }
  })

  // Array with the page numbers of the elements found
  let page-numbers = queried.map(q => {
    let loc = q.location()
    let page-numbering = if loc.page-numbering() != none {loc.page-numbering()} else {"1"}
    numbering(page-numbering, ..counter(page).at(loc))
  })

  // Maximum width of a numbering before it is merged with the content
  let max-numb-width = 43pt

  // Helper function that fills in each cell
  let populate-cells() = {
    // Array that will hold every cell used in the grid
    let cells = ()

    // One row is added for each element found
    for (i, body) in bodies.enumerate() {
      // If the element is a heading, its level is needed
      let current-depth = if queried-type == heading {depths.at(i)}

      // Numbering and page number of the element
      let current-numbering = queried-numbering.at(i)
      let current-page = page-numbers.at(i)

      // Filler between the content and the page number; for a level-1 heading the value is `none`
      let fill = if current-depth != 1 {box(width: 1fr, repeat[~~.])}

      // Content that will be shown in the outline
      let content = [#body #fill]

      // Inset added to level-1 headings
      let inset = auto

      // Level-1 headings are shown in bold
      if current-depth == 1 {
        content = strong(content)
        current-numbering = strong(current-numbering)
        current-page = strong(current-page)
        inset = (top: 10pt)
      }

      // If a numbering exceeds the limit, it is merged with the content
      let exceeds-numb-width = measure(current-numbering).width > max-numb-width
      if exceeds-numb-width {
        content = [#current-numbering #h(5pt) #content]
      }

      // Record whether this element lacks a dedicated numbering cell
      let has-no-number-cell = queried-numbering.at(i) == none or exceeds-numb-width

      // Number of columns spanned by the content
      let colspan = if queried-type == heading {
        max-depth - current-depth + 1 + int(has-no-number-cell)
      } else {
        1 + int(has-no-number-cell)
      }

      // Column where the numbering is placed
      let numb-x-pos = if queried-type == heading {
        current-depth - 1
      } else {
        0
      }

      // Column where the content starts
      let x-pos = if queried-type == heading {
        current-depth - int(has-no-number-cell)
      } else {
        1 - int(has-no-number-cell)
      }

      // If the element has a numbering cell, add that numbering to the grid
      if not has-no-number-cell {
        cells.push(grid.cell(y: i, x: numb-x-pos, inset: inset, link(locations.at(i), current-numbering)))
      }

      // Add the content cell to the array
      cells.push(grid.cell(y: i, colspan: colspan, x: x-pos, inset: inset, link(locations.at(i), content)))

      // Add the page-number cell to the array
      cells.push(grid.cell(y: i, x: num-cols - 1, align: right + bottom, inset: inset, link(locations.at(i), current-page)))
    }

    // Return the array of cells
    return cells
  }

  let shown-title = if title == auto {if text.lang == "es" {"Índice"} else {"Contents"}} else {title}

  // Content produced by the outline: a heading and a grid
  heading(numbering: none)[#shown-title]
  grid(
    columns: (auto,) * (num-cols - 1) + (1fr,),
    column-gutter: (10pt,) * (num-cols - 2) + (20pt,),
    row-gutter: 6pt,
    ..populate-cells(),
  )
}
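A usage sketch added for illustration (the import path `custom-outline.typ` is an assumption):

#import "custom-outline.typ": custom-outline

// Outline of headings (the default target).
#custom-outline()

// Outline of image figures, with an explicit title.
#custom-outline(title: "List of Figures", target: figure.where(kind: image))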
https://github.com/Myriad-Dreamin/tinymist
https://raw.githubusercontent.com/Myriad-Dreamin/tinymist/main/docs/tinymist/frontend/zed.typ
typst
Apache License 2.0
#import "/docs/tinymist/frontend/mod.typ": * #show: book-page.with(title: "Tinymist Zed Extension") See #link("https://github.com/WeetHet/typst.zed")[typst.zed];.
https://github.com/dashuai009/dashuai009.github.io
https://raw.githubusercontent.com/dashuai009/dashuai009.github.io/main/src/content/blog/030.typ
typst
#let date = datetime(
  year: 2022,
  month: 8,
  day: 28,
)

#metadata((
  title: "Implementing Clone for a struct that stores Box<dyn T>",
  subtitle: [rust],
  author: "dashuai009",
  description: "Implementing Clone for a struct that stores Box<dyn T>: how to clone a struct storing a boxed trait object",
  pubDate: date.display(),
))<frontmatter>

#import "../__template/style.typ": conf
#show: conf

== Problem description
<问题描述>

Clone cannot be derived for the following struct:

```rust
pub trait GetName {
    fn get_name(&self) -> String;
}

#[derive(Clone)]
struct Node{
    name: String,
    children: Vec<Box<dyn GetName>>
}
```

== References
<参考链接>

#link("https://stackoverflow.com/questions/30353462/how-to-clone-a-struct-storing-a-boxed-trait-object")[stackoverflow how-to-clone-a-struct-storing-a-boxed-trait-object]

== Implementation
<具体实现>

```rust
// First make GetName require a helper trait that can clone the object
pub trait GetName: NodeClone{
    fn get_name(&self) -> String;
}

// A custom clone trait that wraps the ordinary Clone inside
pub trait NodeClone {
    fn clone_box(&self) -> Box<dyn GetName>;
}

// Blanket impl of the custom clone; the where clause restricts this
// NodeClone to types that implement GetName. Without this helper,
// writing `pub trait GetName: Clone {}` directly does not work:
// such a trait cannot be put inside a Box<>.
impl<T> NodeClone for T
where
    T: 'static + GetName + Clone,
{
    fn clone_box(&self) -> Box<dyn GetName> {
        Box::new(self.clone())
    }
}

// Implement Clone for Box<dyn GetName> by delegating to the custom clone
impl Clone for Box<dyn GetName> {
    fn clone(&self) -> Box<dyn GetName> {
        self.clone_box()
    }
}

#[derive(Clone)]
pub struct Node {
    name: String,
    children: Vec<Box<dyn GetName>> // children only need to implement GetName
}

impl GetName for Node{
    fn get_name(&self) -> String{
        return self.name.clone();
    }
}

impl Node{
    // Join the names of the children for printing
    pub fn print_children(&self)->String{
        let s = self.children.iter()
            .map(|n|n.get_name())
            .collect::<Vec<_>>()
            .join(",");
        format!("[{}]",s)
    }
}

fn main() {
    let a1= Box::new(Node{
        name: "a1111".to_string(),
        children:vec![]
    });
    let a2= Box::new(Node{
        name: "a2".to_string(),
        children:vec![]
    });
    let b = Node{
        name: "bbb".to_string(),
        // Check that to_vec() calls Clone on every element: the concrete
        // type of a1 and a2 is Node, which implements GetName and Clone.
        children:[a1 as Box<dyn GetName>,a2 as Box<dyn GetName>].to_vec()
    };
    println!("b={}",b.print_children());
}
```