Dataset schema: repo (string, 26–115 chars) · file (string, 54–212 chars) · language (2 classes) · license (16 classes) · content (string, 19–1.07M chars)
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/053%20-%20Wilds%20of%20Eldraine/001_Episode%201%3A%20Pure%20of%20Heart.typ
typst
#import "@local/mtgstory:0.2.0": conf
#show: doc => conf(
  "Episode 1: Pure of Heart",
  set_name: "Wilds of Eldraine",
  story_date: datetime(day: 08, month: 08, year: 2023),
  author: "<NAME>",
  doc
)

There was a king who dwelled in Eldraine, a good king, who had at his side a good queen. Together they had four good children, and those who lived within the kingdom lived happily, knowing they would remain in good hands for generations to come. But the good king is dead—slain defending his family to the last—and the queen is dead, too. All of their superstitions, all of their wards, all of their goodness meant nothing in the face of the Phyrexian invasion. The generations that should have lived in peace now lie in mass graves below upturned heaths and meadows. The knights who repelled the invasion—both those gone to seed as mercenaries and those yet clinging to valor—call Will the Boy King. And, no matter how much she wishes it were otherwise, Rowan cannot blame them. The knight they've come to see provides easy comparison. Dents and rends mark her armor, telling the story of her valor as surely as letters on a page. Her handsome face is silvered with scars earned in valiant service. Her hammer alone is near to Will's size. The arm she lost in the fight against the Phyrexians has been replaced by enchanted wood—a gift from the fae that begs as many questions as it answers. And there are many questions surrounding this woman. For the past six months she's been demanding tribute from nearby villages in exchange for her services driving off "raiders." But the raiders in question, well, they always seem to wear her colors. In spite of this, the townsfolk have a fondness for her—and it is this fondness that drove Will to seek her out for parlay. "<NAME>," Will says. He inclines his head, offering a hand to the knight. "Glad tidings to you. I'd like to thank you for welcoming me among you and yours." The knight does not budge from her makeshift throne. 
Legend had it she'd crafted it from the bodies of fallen Phyrexians—and it certainly looked the part, all sharp angles and edges. She sits with one leg draped over her lap, her eyes narrowed at Will. "Queen Imodane," she says. "Ah, a queen. Then we can make arrangements as equals," Will says. He offers a friendly smile, though Rowan can see the cracks in his mask. Imodane's riders laugh. She does, too, her shoulders rising and falling. "Oh, we're beyond talking, Boy King. The only reason I agreed to this little meeting was to see if you were as pathetic as I'd heard. You are." "Watch your—" Rowan starts, but Will raises a hand to cut her off. Anger boils in the pit of her stomach. Her brother's smile never quite leaves his face. "Pathetic, is that what you think of me?" "You've given me no reason to think otherwise," says Imodane. "Where were you during the Invasion? Certainly not on the field." "Watch your tongue," Rowan cuts in. They may not have been on the field, but they'd fought their own battles within the castle. Will waves her off. "Then how about a duel? If I give you reason to think otherwise, you bend the knee. No more raiding, no more pretending to the throne. In honor of your service to the crown you can remain one of our vassals and champions, provided you act accordingly." His calm only makes Rowan angrier. Power prickles in her blood. She flexes her fingers, palm to fist, palm to fist, trying to bury her feelings. Imodane scratches at one of the scars along her jaw. "And if I win?" Will gestures to the heralds behind them. She knows what he is going to say, and she already hates that he is going to say it. "Me and mine follow you, instead. I'll surrender the crown of Eldraine. You will be High Queen in name and deed." He hadn't consulted her about this. If he had, she would have told him how foolish it is. Will could hold his own in some fights, sure. But against a woman like Imodane he had about as much chance as an ant before a lion. 
Their mother could have done this, even their father—but Will? "Let me do it," she whispers to her brother. "I can handle her." "I'll be all right," Will says. "Her hammer's bigger than you are. Will, please. There's no need for more of us to get hurt." She will grant him one thing—his gaze has more steel in it than it did a few months ago. "If it brings us stability, I don't mind shedding my own blood," he says. "Besides, she'll come around when she realizes I don't back down from a fight." #emph[You will lose more than your blood if you do this.] #emph[She will not respect you if she sees you broken before her.] #emph[I'm right here, why won't you trust me?] Death is thick in the air on Eldraine; family ties bind her in place. She cannot make a fool of her brother. Not in so public a place as this. Besides, he's been training tirelessly every morning. He's come a long way from the awkward boy she once knew. A raiding knight like Imodane has land cleared for battles. How else are her underlings to work out their rage between campaigns? The grass here is well worn, the earth packed tight below. On one side, Imodane's rebels sit staring out at them in their cobbled together armor. Nothing unites them, save their faith in Imodane, and yet to her they seem happier than her own brothers and sisters in arms. The Ardenvale knights may wear finer cloth, yes, and they've a place to sleep when many don't—but their faith and loyalty lay with the old king. The Good King. Rowan takes a breath. Her brother takes his place. A bugler sounds the horn. #figure(image("001_Episode 1: Pure of Heart/01.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none) Long have knights tilted at one another on fields of battle and fields of glory. So many of her memories see her bouncing on her father's lap as she watched them, asking questions about everything she saw, asserting with perfect confidence that she'd number among them one day. 
Her father always assured her that she was right. When at last she tilted for the first time, her joy sparked in the hearts of all her family and thus, like kindling to flame, grew stronger. Phyrexia took that from her. Now, when she watches Will fall into a fighting stance, she sees their father's face shadowing his. Imodane becomes a barbed monstrosity intent on destruction. Rowan tightens her grip on the sword. She tries to root herself to the present moment through its heft, through the sensation of leather against her fingers. It's going to be all right. This time is not that time. Imodane makes the first move, rushing toward Will with her great hammer in tow. Rowan flinches—but Will has this under control. He blasts the ground with ice, leaving it slick. Imodane's momentum carries her to a pratfall. Unable to recover, she falls face-first onto the ice. Even her rebels cannot help but laugh. Whatever hope they had for an honorable duel is gone. Imodane doesn't take kindly to being made a fool. Flame bursts from the head of her hammer. The ice coating the field melts, the thirsty ground drinking up the meager moisture with gusto. Imodane raises herself up and—with one mighty arm—swings the hammer overhead. Will manages to avoid the shattering blow, but only just, throwing himself to the side. The move of a complete and utter novice: he cannot regain his balance before he, too, falls to the ground. And Imodane can raise her hammer faster than Will can get back up. Rowan's throat goes tight. Fear screws her to the sticking spot. Each second of indecision burns her from the inside. She hates this. It isn't who she is. She won't let it be. All of the anger she'd felt then, watching her father die, all of the sorrow she'd felt after—as current through a wire she lets them course through her, unimpeded. But there is something else coming along with the anger, the sorrow. Something new and terrible. 
Rowan knows it not, yet like poison it courses through her veins, setting her afire. To name what leaves her fingertips a bolt of lightning is to name a cauldron a thimble. The heavens themselves tremble at the sight; dark clouds recede to allow the king of elements its regal charge. By the time the thundercrack brings them all to their knees, it has been full five seconds. Only when the dust settles does she realize what she's done. #figure(image("001_Episode 1: Pure of Heart/02.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none) Generations from now they will call this Stormcutter Mountain. With lightning her blade, Rowan's cut a massive rift into the side of the nearest peak. Giants could not hope to match it, not for all their trying. Her fingertips tingle, her heart catches in her chest. She stares at her hand, at the massive rift, in disbelief. Power like this isn't. Where had she found it? "Rowan?" Will sounds horrified. He looks it, too. Even Imodane's gone pale with terror. The way she's looking at her is the way people looked at ... They're afraid of her? Rowan's tongue sticks to the roof of her mouth. She can't think of anything to say, so she stands tall, instead. If she reaches for her sword, she'll still project power— But the moment she makes the gesture, Imodane drops the hammer, turns tail, and runs. The woods swallow her up before either of them can figure out how to stop her. That isn't quite true. Will could have. A single ice bolt would have done it, but he remains on the ground, staring up at Rowan. Even when she helps him up, he never takes his eyes away from her. "What have you done?" he asks her. She isn't ready to answer that. "You should have let me fight. You never should have done it yourself; you know you don't have the training—" Eyes on her back. Swords drawn behind them. Her warrior's senses are alight. Imodane may have fled, but her rebels haven't. 
And without any clear direction, all of them are looking for a chance to make names for themselves. "We can talk about this later," she says. "When we're out of this mess." #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) There was once a good and noble knight who served at <NAME> with her fellows, who drank deep of festival wine and boasted as loud as any man could boast. Stout of arm she was, but stouter of heart. That woman died months ago. Imodane is all that remains. She runs, fear lending her fleet feet, through the thick brambles and over fallen boughs. But here is the way of things: whenever one flees the past, one must watch carefully the future. Imodane does not. Nor does she realize what has happened until her foot lands, beyond all thought and reason, on cold stone. Hewed stone in the middle of the wilds. Sense returns. Her spine ashiver, she looks around for what feels like the first time. Wherever she is, the woods are gone. Into a palace she has wandered, a throne room glittering and gossamer. Music in strange keys beguiles her ears; she smells wine, ripe fruit, and perfume. All around, the landscape shifts as easily as the music—walls become windows into a realm of plenty; windows become doorways to who knows where. If she tries, she thinks she could see straight through the misty structures, but she doesn't want to try. There are things mortals are not yet meant to know. Though the throne before her is shrouded in shadow, she knows upon seeing it where she must have ended up. Imodane falls to her knees. "Forgive me, Your Majesty, I had no intent to trespass." Two eyes, gold as mead, glow from the dark. "#emph[There is no need for apologies. You were summoned.] " She wishes to answer—but the sight of this delicate sovereign has robbed her of any sense. A gentle, cruel laugh caresses her cheeks. #emph["Would-be Queen. Once-brave adventurer. Tell me ...] " The fae lord's hand cups Imodane's chin, tilts her face up. 
"#emph[Are you pure of heart?"] #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) There is a village far from all of this. It lies at an edge of the Realm so remote that, in day-to-day life, the names of kings and queens never cross the lips of its residents. Yearly visits from a single traveling merchant serve as a holiday all in their own. Whatever road the merchant takes to find this place, he has not shared it with the world, for even the Phyrexians neglected this place. Perhaps they were not fond of sheep. There are more sheep in the village than people by at least five-fold. When people say the word Orrinshire, the word "wool" inevitably follows. Kellan doesn't like it here. And as he slinks through the door of his family's small home, he knows the feeling is mutual. He just hopes his mother doesn't notice the signs. But mothers are gifted with many magical talents, among them the unnatural ability to ask questions their children would rather go unasked. As Kellan walks through the door, his mother looks up from her spinning—and when she does, her face drops from joy to concern. "Welcome home, sweet—oh, no. Are you hurt?" He tries to wave her off before she can stand up, but there's no use. She's crossed the meager distance in the blink of an eye. Already she is looking at the scratches on his cheek, the pricks of blood on his forearms. Kellan decides to look at the floor rather than up at his mother. "It isn't a big deal," he mumbles. "It isn't a big deal?" she repeats. From the folds of his hood she produces a nail. "Kellan, what is this? What did they do to you out there?" He winces. He thought he'd gotten all of them, but he should have known there'd be one hiding somewhere. "It was just ... do we have to talk about this?" He does not need to see his mother's face to know her heart is sinking. She smooths the yew shavings from Kellan's hair with a sniff. "Oh, honey, I'm sorry. We don't need to talk if you don't want to." 
After a breath to steady herself, she turns her head and gives a shout. "Ronald! Ronald, get me some water from the well!" Kellan winces as his stepfather shouts in answer. When his mother leads him to sit by the table, he plops down into the chair with a pout, slumping like a marionette whose strings have been cut. Yes, quite like a marionette, he is wiry and small for sixteen. All the more reason for the other boys to have chosen him as their victim. He still doesn't meet his mother's gaze, not even when she fetches a clean rag and starts dabbing away the blood from his brown skin. "Was it the Cotter boys?" she asks. "I owe Matilda five skeins, I can give her a talking-to while I drop them off—" Kellan sighs. He can't find it in himself to lie. "It's not their fault." "If they're the ones who hurt you, I can't see how it wouldn't be," his mother answers. Wide grins. Laughter and jeers as he ran from them. #emph[You never belonged here, half-blood.] "They asked me a question, I answered wrong, that's all it is," Kellan says. He hears his stepfather's thumping footsteps, the open of the door. "What kind of question warrants this sort of treatment?" his mother says. "Kellan, honey, whatever happened, none of this is your fault. You didn't answer wrong. These boys, they've got ..." "They're afraid of me, I think," Kellan says. "They think the Slumber's my fault." His stepfather arrives; the bucket sloshes to a stop beside them. "Who's afraid of our Kellan? Whoa—what happened?" "It isn't a big deal," Kellan says. He wants to get up and hide, so they stop staring at him and the cuts on his face, but he knows that isn't going to happen. "The Cotter boys. Look what they threw at him," his mother says, plucking another nail from among his clothes. "And look at his hair! I've no idea what's gotten into their heads ..." A soft #emph[hrm] from Ronald. He plucks a wood shaving from Kellan's wavy brown hair, then holds it to his nose. "Yew, and I'd bet that nail is cold iron. 
That so, Kellan?" Biting his lip, Kellan nods. His mother stops mid-gesture. "The question they asked you ..." He still doesn't look up. "They asked if it was true my real dad was a faerie." The nail drops between the three of them. Ronald is the first to break the silence. He lays a hand on Kellan's shoulder. "It doesn't matter what they say, son. All that matters is who you are, not where you're from. And who you are is our boy." Kellan swallows. The question's almost too frightening to ask, but he has to be brave. Heroes in all the stories are brave. "But ... But what if it's true, and that is who I am? Don't I belong in the woods?" "The woods aren't the way you think," his mother says. "There are dangers there you can't yet imagine, my sweet boy. When you're older, we can face them together. But for now ..." His mother throws her arms around him. For a moment, he's not sure who's embracing who. "You belong here," his mother says. "With us. No matter what anyone else says." But it isn't the first time she's said this to him, nor the first time they've all embraced. And as much as Kellan loves his family, when he looks to the woods ... When he looks to the woods, all he feels is longing. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) Castle Ardenvale lies in ruins. Half-burned and abandoned, it is no proper home for a would-be High King and his court. Will's taken up residence at Castle Vantress, instead. Perhaps he hopes the knowledge that's seeped into the stone will lend him wisdom. Rowan's not so sure of that. Although she's been standing in her brother's makeshift war room for fifteen minutes, this is the first time he realizes she's there. No matter that the guards announced her, no matter how many times she's cleared her throat, his papers have interested him more. She can't blame him for it, not entirely; as acting king, Will's buried beneath a mound of paperwork taller than the two of them put together. 
Alliances, arrangements for taxes, oaths of fealty and fiery condemnations—it is impossible to tell which is which when the stack is so high. Of course, she can blame him for taking the title in the first place. It's clear to see how much all this has worn on him. There are bags under his eyes and stubble on his chin. The black eye he sustained during the fight with Imodane hasn't yet healed. Either Will can't be bothered to ask Cerise to heal it for him or he's trying to make a statement. Must be the latter—if Cerise #emph[had] gotten a look at him, it'd be gone, regardless of what he wanted. "We're leaving," she says. Will squints at her. His own twin, and he can't recognize her. He thinks he can rule the realm like this? "Don't think with your sword arm, Rowan," he says, sounding far more like a beleaguered parent than their father ever did. "Our siblings need us. Our people need us." "I've already told Hazel and Erec I'll be away for a while, and I think this is the best thing we can do for the Realm," she says. She had a speech in mind before coming here, but she finds now that the words have changed. "Look at yourself, Will. You're exhausted. The soldiers tell me you haven't slept in two days, and looking at you right now, I believe it. Word's going to spread throughout the kingdom about what happened at the cliffs—" "—A situation we could have avoided if you'd trusted me," he cuts in, sharp as ice. Will sits up and sets his jaw. Not breaking eye contact, he picks up a letter. "The Marquess of Roxburgh wrote to me today. He says he will not bend the knee to a man who lets his sister inflict such harm on others. ‘A coward cannot be High King of Eldraine,' he says. It isn't the only letter of its type I've received. I wish you'd trusted me more." There is a spike of pain in Rowan's temple, a headache she's been dealing with of late, one that's eroded her patience. She presses her eyes closed. "You'd be dead if I hadn't interfered. 
But he is right about one thing: you aren't the real High King of Eldraine. You didn't go on the High Quest." "Don't rake me over the coals for a technicality. The Realm#emph[ needs] a High King; I did what I had to do. And I would have done that at the cliffs, too. I had a #emph[plan, ] Rowan. I don't always need you to save me," he says. "We have to be careful about the impression we're making. People want to be united, and I want to unite them. #emph[Blasting a hole in a mountain ] is no one's idea of unity. I could have talked to her, found some way forward, but now she's gone off into the woods and her rebels have reason to fear us." "So? Let them be afraid. I doubt any of them will be raiding the countryside any time soon with the beating we gave them. I'd rather have a thousand brigands living in fear of me than a dozen farmers living in fear of brigands," Rowan says. Her brother clenches his jaw, pinches the bridge of his nose. "That's not what our parents would have done." The headache pounding at her temple, her own bottled anger, the spark of her blood—who can say what it is that causes her to burst out at him? But burst she does. "That's rich, Will. Our parents wouldn't ignore a curse that's spreading through the kingdom. Or is 'unity' going to solve the Wicked Slumber, too? I didn't know all those people needed was a handshake and a cup of ale. And before you forget, our parents #emph[earned] their titles. You just decided to call yourself High King because you thought it suited you, no matter how much I told you it didn't." She's gone too far, she knows she has. But that's fine. They don't have to talk about this anymore. All they have to focus on is finding a way to solve the problem. The Wicked Slumber might have stopped the Phyrexians in their tracks, but the Realm struck a foul bargain to pay for it. Now it's spreading among the citizens of Eldraine with no end in sight. 
Nothing can wake the dreamers—neither true love's kiss, nor a bucket of ice water. So long as they can solve the problem of the Wicked Slumber, the people will rally behind them. Vantress's finest minds have not cracked it in the months they've had to study the issue—but Vantress's finest minds don't have access to the Multiverse. The twins do. Besides, it gets them away from here. From the castle that is not quite theirs, from the memories. And for all their differences, they share at least one thing in common: their spark. Rowan reaches for its power as she has so many times before. Will tenses. "Rowan, we can't just leave—" "We aren't going to sit here either," she says. "Strixhaven taught us to find magical solutions for our problems. That's what we need to do." "I'm #emph[High King] . I have to stay here!" Strange. Shouldn't they have left by now? It must be Will's fault—his petulance is keeping them in place. Or maybe his annoying insistence on an unearned title. "Your duty is to Eldraine, and duty's calling. You're ruining my focus." This time, she puts all of her focus into Walking—closes her eyes, forces herself to look past the stabbing pain in her head, her own frustrations. But closing her eyes is a mistake. Once more she sees them down the long, curved halls of Castle Ardenvale: her father, sword in hand; the Phyrexian behemoth he's fighting. Her stepmother and her siblings running away, straight toward Will and Rowan, fear in the children's eyes and determination in her stepmother's. "Keep them safe, and live well," Linden says. She knows how this story ends. She doesn't want to see it. "... Rowan?" Will says. For the first time since they started this conversation he sounds concerned. "Are you all right?" Her chest feels tight, her head might as well have a spike through it, and whenever she closes her eyes she sees their father dead on the end of a Phyrexian's blade. 
And, as if taking away her parents and ruining her relationship with her brother wasn't enough, the Phyrexians seem to have taken something else from her. She can't clear her mind enough to planeswalk. The spark—it doesn't seem to respond. In fact, she can't feel it at all. "No," she says, flatly. "Fine. Stay if you want to. I'm going." #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) Every new moon, Kellan and his mother walk to an old willow tree on the edge of the woods. With its bark against their backs and its leaves shading their eyes, Kellan's mother tells him stories. The stars dance in front of his eyes with every word. Fireflies become the gleaming shields of knights; swaying blades of grass their swords. Lately, instead of a new set of heroes every time, he hears of two worthies in particular: a young woman who fled her training as a hedgewitch, and a young man she saved from a troll's rampage. Through the wilds they've journeyed together, facing all manner of beast and canny mage. He has the feeling he knows who they are, but he's enjoying getting to know them this way. On this night, as any other new moon night, he is near-running to the top of the hill. The family sheepdog's following in his wake, bounding along through the grass, full of energy despite the hour. "Think you can beat me there, boy?" Kellan calls. Hex barks, drool flying from his prodigious jowls. Kellan grins. He gives Hex a running pat, but pulls ahead of him all the same. There's no mercy when it comes to racing your sheepdog. When at last they reach the tree he is panting for breath, but happier than he's been all day. From here on the hill, the rest of the village seems as far as Castle Ardenvale. He lays a hand on the comforting bark of the willow and turns. His mother said she'd be along in a moment—he could see her from here. But when he looks out, it isn't the village he sees. Rather, it isn't #emph[just] the village. 
Ahead of him there is an archway made of ethereal, translucent stone. His mother's stories have prepared him a little. He knows precisely what it is: An invitation to speak with one of the High Faeries. As for why it's here ... Kellan's breath catches in his chest. To the right of the archway he can see his mother running up the hill. If she sees it, she hasn't said a word. He could stay here. He could wait for her, ignoring the door until it fades away. But his scratches still ache, and the words of his so-called fellows echo in his mind. #emph[You don't belong here. ] If they're right ... Could it be his father's finally taken notice of him? His real father? The moment Kellan has the thought, his hand is on the strange doorknob. Hex barks up a storm. Each one feels in time with the hammer of Kellan's heart. But he can't falter— this might be his only chance. If his mother catches up, she'd never let him go through. Kellan passes through the archway. A hero never hesitates. An unseen gust of wind throws him the rest of the way through and he lands on a cool, mossy floor. Only when he props himself up does he realize that the grass here is all silver; the twisting trees overhead bear jeweled fruit. In the distance he sees thatch-roof houses large as mountains, while all around him there are miniature castles populated by moving miniature knights. #figure(image("001_Episode 1: Pure of Heart/03.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none) When he sets his eyes—a little afraid, now—on the horizon once more, he spots the staircase, and at its top the throne. There is a figure upon it. Humans are as fond of calling things beautiful as they are of drawing breath. In doing this the meaning of the word has been worn away, as a mountain may, over eons, become a shore. The reason for this is simple: true beauty, unadulterated and pure, is enough to strike the viewer senseless. 
The figure who sits on the throne is as beautiful as the stars themselves. Kellan, who has never ventured far from his village, can't comprehend what he's seeing. The planes of the figure's face beguile him; the flash of their wicked smile sees him robbed of all thought. #emph["Tell me, brave hero ... Are you true of heart?"] It's only a passing veil of clouds—clouds that have no right being so low to the ground—that dispel Kellan's fatal fascination. What was it the stories said? Best to avoid looking at the fae directly. He stares down at the ground. "I don't know. I think I'd like to be," he says. #emph["That's no answer at all," ] says the figure. They sigh, much in the way his mother did when imitating princes. #emph["Are you truly your father's son? Bearing such wounds as that, without having dealt twice as many in turn?"] His heart skips a painful beat. "So, it's true? I'm half-fae? D-do you know my father? Wait, are you—?" Maybe if he got a better look at the figure's face, he'd know. He steps forward—only for roses to lash his feet in place. #emph["Careful, child. The blood that compels hatred from mortals offers you some protection here. But that protection is finite," ] they say. #emph["Remain where you are and I shall make no move to stop you, but take another step, and you forsake your realm for my own] ." Oh. This was the Faerie Lord. Who else could it have been? Kellan's knees knock together. He tries to kneel, like all the knights do. He feels silly. "Y-Your Majesty." "#emph[<NAME>,"] they answer him. "<NAME>," he says. "Do you know my father?" #emph["I know many things. Yet if you know who I am, and to whence you have come, you know that our kind surrender nothing,"] Talion answers. They lean forward on the throne, perching their head on their hand. #emph["We have our own laws. Render me a service, child, and you shall have your answers."] Our kind. Our own laws. 
This place, with its jeweled fruit, with strange animals slinking between stranger trees. To stand here is to stand in the home of a long-lost relative, uncertain of what significance anything holds. Yet the fae do not lie. His mother's always been clear about that. #emph[When you deal with fae, the straighter the answer the better. ] And this seemed pretty straightforward to him. "What do you need?" Talion hums a strange tune, as lovely as bird song. They snap their fingers and two fae appear on either side of Kellan, each with a bowl of glistening fruit. Kellan's stomach rumbles at the sight; his throat feels dry. #emph["You must be hungry."] But his mother taught him well, and Talion said it themself: the fae do nothing for free. "No, thank you." Talion smirks. With a wave of their hands, they dismiss the other fae. #emph["To business, then. Witches three have this land with slumber plagued. Agatha, the Hungry, lays in wait near her great cauldron, in search of heroes to eat. <NAME> has taken winter's crown for her own. Wherever there are lovers and lords, you shall find the beguiling Eriette. Whosoever is brave enough to defeat them shall break the curse upon the Realm, and for that service, earn a boon from my ever-full treasury."] A curse upon the Realm? Three witches? Talion's in need of a real hero. Kellan's palms sweat. The bravest thing he's ever done is go through that archway. He's never fought a battle, nor completed a quest. But how can he say no? This place, these people ... they're his blood too, aren't they? Maybe his father's a faerie knight, strapping and bold; or perhaps a mage, cunning and clever. Whoever he was, he was someone Talion respected. Shouldn't that mean something? Kellan wants to know more about him. Wants to be more like him, this man who dwells among the silver grass, in a land of impossible beauty. His mother glimpsed it once and left—but Kellan only wants more. If he fails, he fails. 
But if he can do it, he'll finally know the truth. "I'll do it. I'll go."
https://github.com/The-Notebookinator/notebookinator
https://raw.githubusercontent.com/The-Notebookinator/notebookinator/main/themes/radial/components/team.typ
typst
The Unlicense
#import "../colors.typ": *

/// Display information about your team.
///
/// Example Usage:
/// ```typ
/// #team(
///   (
///     name: "<NAME>",
///     picture: image("./path-to-image.png", width: 90pt, height: 90pt),
///     about: [
///       Likes Coding
///     ],
///   ),
/// )
/// ```
/// - ..members (dictionary): A list of members in your team. Each dictionary must contain the following fields:
///   - name `<string>`: The name of the team member
///   - picture `<content>`: An image of the team member
///   - about `<content>`: About the team member
/// -> content
#let team(..members) = {
  set align(center)
  grid(
    columns: (1fr, 1fr),
    gutter: 20pt,
    ..for member in members.pos() {
      (
        rect(
          fill: surface-1,
          inset: 20pt,
          radius: 1.5pt,
        )[
          *#member.name*
          #line(
            length: 100%,
            stroke: (cap: "round", dash: "solid", thickness: 1.5pt),
          )
          #v(8pt)
          #grid(
            columns: (1fr, 1fr),
            gutter: 20pt,
            align(center, member.picture),
            align(left, member.about),
          )
        ],
      )
    },
  )
}
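Because each member renders as one card in a two-column grid, the layout is easiest to see when more than one member dictionary is passed. A minimal usage sketch — the import path, names, and image paths below are placeholders, not part of the theme:

```typ
// Placeholder import path; point it at wherever the component actually lives.
#import "team.typ": team

#team(
  (
    name: "Ada",
    picture: image("./ada.png", width: 90pt, height: 90pt),
    about: [Writes the autonomous routines],
  ),
  (
    name: "Grace",
    picture: image("./grace.png", width: 90pt, height: 90pt),
    about: [Designs the drivetrain],
  ),
)
```

With `columns: (1fr, 1fr)`, the first two cards sit side by side and a third member would wrap onto the next row.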
https://github.com/Gekkio/gb-ctr
https://raw.githubusercontent.com/Gekkio/gb-ctr/main/chapter/cartridges/mbc3.typ
typst
Creative Commons Attribution Share Alike 4.0 International
#import "../../common.typ": *

== MBC3 mapper chip

MBC3 supports ROM sizes up to 16 Mbit (128 banks of #hex("4000") bytes), and RAM sizes up to 256 Kbit (4 banks of #hex("2000") bytes). It also includes a real-time clock (RTC) that can be clocked with a quartz crystal on the cartridge even when the Game Boy is powered down.

The information in this section is based on my MBC3 research, and Pan Docs @pandocs.
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/scripting/modules/chap2.typ
typst
// SKIP
#import "chap1.typ"
#let name = "Peter"

== Chapter 2
#name had not yet conceptualized their consequences.
https://github.com/dssgabriel/master-thesis
https://raw.githubusercontent.com/dssgabriel/master-thesis/main/src/chapters/2-context.typ
typst
Apache License 2.0
#show raw.where(block: true): it => {
  set text(font: "Intel One Mono", size: 8pt)
  set align(left)
  set block(fill: luma(240), inset: 10pt, radius: 4pt, width: 100%)
  it
}
#show raw.where(block: false): box.with(
  fill: luma(240),
  inset: (x: 3pt, y: 0pt),
  outset: (y: 3pt),
  radius: 2pt,
)
#show raw.where(block: false): text.with(font: "Intel One Mono")

= Context of the internship

== Hardware accelerators

#h(1.8em) In this section, we give an overview of hardware accelerators. We introduce the architecture of a GPU, illustrated with current NVIDIA cards, and how they integrate into modern heterogeneous systems. Then, we present the existing programming models used to write code targeting such hardware. Finally, we cover their performance benefits and relevant use cases in HPC.

=== GPU architecture

#h(1.8em) While CPUs are optimized to compute serial tasks as quickly as possible, GPUs are instead designed to share work between many small processing units that run in parallel. They trade reduced hardware capabilities in program logic handling for much higher core counts, emphasizing parallel data processing. As a result, GPUs prioritize high throughput over low latency, allowing them to outperform CPUs in compute-intensive workloads that can be trivially parallelized, making them particularly well suited to compute-bound applications. In this section, we will use the Ampere architecture as an example, as it is described in NVIDIA's whitepaper @noauthor_nvidia_nodate. We will also provide the terminology for equivalent hardware components on AMD GPUs.

#v(1em)
#figure(
  image("../../figures/2-context/ga100-full.png", width: 100%),
  caption: [Block diagram of the full NVIDIA GA100 GPU implementation],
) <ga100-full>

@ga100-full shows the compute and memory resources hierarchy available on NVIDIA's data center GA100 GPU, designed for HPC and machine learning workloads. @a100-sm presents the Streaming Multiprocessor (SM) of the GA100 GPU. 
SMs are the fundamental building block of NVIDIA GPUs and are comparable to the Compute Unit (CU) in AMD terminology. Each SM is a highly parallel processing unit containing multiple CUDA cores (or shader cores on AMD) and various specialized hardware units. It achieves parallel execution through the Single Instruction, Multiple Thread (SIMT) technique, allowing multiple CUDA cores within the SM to execute the same instruction on different data simultaneously. Threads are scheduled and executed in groups of 32, called "warps" (or "wavefronts" in AMD terminology), thus promoting data parallelism. The SM also manages various types of memory, including fast on-chip registers, instruction and data caches, and shared memory for intra-thread block communication. Additionally, it provides synchronization mechanisms to coordinate threads within a block. Starting from the Volta architecture in 2017 @noauthor_inside_2017, NVIDIA SMs introduced an acceleration unit called the Tensor Core, purposefully built for high-performance matrix multiplication and accumulation operations (MMA), which are crucial in AI and machine learning workloads. However, these specialized cores only provide noticeable performance improvements for mixed-precision data types, reducing their usefulness for most HPC applications that work with double-precision (64-bit) floating-point values. On the full implementation of NVIDIA's GA100 GPU, there are 128 SMs and 64 single-precision (32-bit) floating-point CUDA cores per SM, enabling the parallel execution of up to 8192 threads. #figure( image("../../figures/2-context/a100-sm.png", width: 40%), caption: [Streaming Multiprocessor (SM) of NVIDIA A100 GPU] ) <a100-sm> NVIDIA GPUs expose multiple levels of memory, each with different capacities, latencies and throughputs. We present them hereafter from fastest to slowest: #linebreak() 1. 
*Registers* are the fastest and smallest type of memory available on the GPU, located on the individual CUDA cores (SM units), providing very low latency and high-speed data access. Registers store local variables and intermediate values during thread execution. 2. *L1 Cache/Shared Memory* is a small, fast, on-chip memory shared among threads within the same thread block (see @defs and @gpu_memory). This cache level can either be managed automatically by the GPU or managed manually by the programmer and treated as shared memory between threads. This allows them to communicate and cooperate on shared data. Shared memory is particularly useful when threads need to exchange information or access contiguous data with reduced latency compared to accessing global memory. 3. *L2 Cache* is a larger on-chip memory that serves as a cache for global memory accesses. It is shared among all the SMs in the GPU. L2 cache helps reduce global memory access latency by storing recently accessed data closer to the SMs. 4. *Global Memory* is the largest and slowest memory available on the GPU as it is located off-chip in the GPU's dedicated Video Random Access Memory (VRAM). Global memory serves as the main memory for the GPU and is used to store data that needs to be accessed by all threads and blocks. However, accessing global memory has higher latency than on-chip memories described above. Global memory generally comprises either Graphic Double Data Rate (GDDR) or High-Bandwidth Memory (HBM), which provides higher throughput in exchange for higher latencies. 5. *Host Memory* refers to the system's main memory, i.e., the CPU's RAM. Data transfers between the CPU and the GPU are necessary for initializing data, transferring results back to the host, or when data does not fit within the GPU's global memory. 
Data transfers between host and GPU memory often involve much higher latencies because of the reduced bus bandwidth between the two hardware components (implemented using, e.g., PCIe buses). @gpu_memory showcases how these kinds of memory are typically organized on an NVIDIA chip. #figure( image("../../figures/2-context/gpu_memory.svg", width: 96%), caption: [Generic memory hierarchy of NVIDIA GPUs] ) <gpu_memory> #v(1em) #h(1.8em) The increased integration of GPUs in modern HPC systems requires fast interconnect networks that enable the use of distributed programming models. As most supercomputers use a combination of 2-4 GPUs per CPU (or per socket), there need to be two levels of interconnect fabric: 1. Inter-GPU networks, generally comprised of proprietary technologies (e.g., NVLink on NVIDIA, Infinity Fabric on AMD, etc.), ensuring the fastest possible data transfers between nearby GPUs. 2. Inter-node networks allowing fast, OS-free Remote Direct Memory Accesses (RDMA) between faraway GPUs. === Programming models #h(1.8em) GPU programming models refer to the different approaches and methodologies used to program and utilize GPUs for general-purpose computation tasks beyond their traditional use cases in graphics rendering. This section introduces some of the programming models used in HPC based on their abstraction level. Firstly, we present low-level models that closely map to the underlying hardware architecture. Secondly, we showcase higher-level programming styles that offer more expressiveness and portability, often at the expense of a high degree of fine-tuned optimization. We start by introducing common concepts for GPU programming. We define the terms starting from the most high-level view and gradually refine them toward smaller components of GPU programming. ==== Definitions <defs> 1. 
*Kernel:* A kernel is a piece of device code, generally composed of one or multiple functions, that leverages data parallelism (SIMD) and is meant to execute on one or multiple GPUs. A kernel must be launched from the host code, although it can be split into multiple smaller kernels that are called from the device.
2. *Grid:* A grid is the highest-level organizational structure of a kernel's execution. It encompasses a collection of blocks, also called work groups, and manages their parallel execution. Kernels are launched with parameters that define the configuration of the grid, such as the number of blocks on a given axis. Multiple grids --- i.e., kernels --- can simultaneously exist on a GPU to efficiently utilize the available resources.
3. *Block:* A block, also called thread block (CUDA), workgroup (OpenCL), or team/league (OpenMP), is a coarse-grain unit of parallelism in GPU programming. It is the main component of grids and represents a collection of threads working together on parts of the data operated on by a kernel. Like grids, the dimensions of blocks can be configured when launching a kernel. Threads in a block can share memory in the L1 cache (see @gpu_memory), which enables better usage of this resource by avoiding expensive, repeated reads to the GPU's global memory.
4. *Warp:* A warp (also called wavefront in AMD terminology) is a fine-grain unit of parallelism in GPU programming, very much related to the hardware implementation of a GPU. However, it also appears at the software level in some programming models. On NVIDIA devices, threads inside a block are scheduled in groups of 32, which programmers can take advantage of in their kernels (e.g., warp-level reductions).
5. *Thread:* A thread is the smallest unit of parallelism in GPU programming. Threads are grouped in blocks and concurrently perform the operations defined in a kernel. Each thread executes the same instruction as the others but operates on different data (SIMD parallelism). 
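For a 1-D launch, the grid/block/thread decomposition above reduces to simple index arithmetic. The following CPU-side Rust sketch illustrates it; the launch dimensions are illustrative assumptions, not tied to any real device:

```rust
// CPU-side sketch of the index arithmetic behind a 1-D kernel launch.
// BLOCK_DIM and GRID_DIM are illustrative values, not real device limits.
const BLOCK_DIM: usize = 256; // threads per block
const GRID_DIM: usize = 4; // blocks per grid

/// Global id of a thread, as a CUDA-like runtime would compute it:
/// `blockIdx.x * blockDim.x + threadIdx.x`.
fn global_thread_id(block_idx: usize, thread_idx: usize) -> usize {
    block_idx * BLOCK_DIM + thread_idx
}

fn main() {
    // Enumerate every (block, thread) pair of the launch.
    let ids: Vec<usize> = (0..GRID_DIM)
        .flat_map(|b| (0..BLOCK_DIM).map(move |t| global_thread_id(b, t)))
        .collect();

    // Each logical thread gets a unique, contiguous global id.
    assert_eq!(ids, (0..GRID_DIM * BLOCK_DIM).collect::<Vec<_>>());
    println!("launched {} logical threads", ids.len());
}
```

With this mapping, each thread typically guards its memory access with a bound check such as `if id < n`, so the launch configuration need not divide the problem size exactly.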
@compute_model summarizes these structures as exposed to programmers in a CUDA-style programming model, which we introduce in the next section.

#figure(
  image("../../figures/2-context/compute_model.svg", width: 100%),
  caption: [General compute model for CUDA-like GPU programming],
) <compute_model>

==== Low-level <low_lvl_gpu_prog>

#h(1.8em) Low-level programming models strive to operate as closely as possible to the underlying hardware. Consequently, kernels are articulated using specialized subsets of programming languages like C or C++. Such frameworks provide developers with the essential tools and abstractions to accurately represent the accelerator's architecture, enabling them to write highly optimized kernels.

#h(-1.8em) *Compute Unified Device Architecture (CUDA)* @noauthor_contents_nodate is NVIDIA's proprietary tool for GPU programming. Using a superset of C/C++, it provides a parallel programming platform and several APIs, allowing developers to write kernels that will execute on NVIDIA GPUs, locking users into the vendor's ecosystem. However, CUDA is one of the most mature environments for GPU programming, offering a variety of highly optimized computing libraries and tools. Thanks to the vast existing codebase, CUDA is often the default choice for GPU programming in HPC.

#h(-1.8em) *Heterogeneous-Compute Interface for Portability (HIP)* @noauthor_hipdocsreferencetermsmd_nodate is the equivalent of CUDA for AMD GPUs. It is part of the RadeonOpenCompute (ROCm) software stack. Contrary to its NVIDIA equivalent, HIP is open-source, making it easier to adopt as it does not lock users into a specific vendor ecosystem. It provides basic compute libraries (BLAS, DNN, FFT, etc.) optimized for AMD GPUs and several tools to port CUDA code to HIP's syntax automatically. It is quickly gaining traction as AMD is investing a lot of resources into its GPU programming toolkit in order to catch up with NVIDIA in this space. 
From an HPC standpoint, AMD's hardware advantage over NVIDIA is also helping it improve adoption in the domain.

#h(-1.8em) *Open Computing Language (OpenCL)* @noauthor_opencl_2013 is developed as an open standard by the Khronos Group for low-level, vendor-agnostic hardware-accelerator programming. Unlike CUDA and HIP, OpenCL supports offloading to GPUs and FPGAs and can even fall back to the CPU if no devices are available. It provides APIs and tools that allow programmers to interact with devices, manage memory, launch parallel executions on the GPU, and write kernels using a superset of C's syntax. The standard defines common mechanisms that make the programming model portable, similar to what @compute_model showcases. Because of its focus on portability, OpenCL implementations can have performance limitations compared to specialized programming models such as CUDA or HIP. Most GPU vendors (NVIDIA, AMD, Intel) supply their own implementation of the OpenCL standard optimized for their hardware. Some open-source implementations of OpenCL and OpenCL compute libraries also exist.

==== High-level

#h(1.8em) In contrast to low-level programming models, high-level programming models focus on portability, ease of use, and expressiveness. They are much more tightly integrated into the programming language they are used in --- generally C++ --- and offer an intuitive way of writing GPU code. Kernels' syntax and structure closely resemble typical CPU code, simplifying the process of porting them to various target architectures. Most of the "hardware mapping" (i.e., translating CPU constructs to suit the architecture of hardware accelerators) and optimization work is delegated to the compiler and runtime.

#h(-1.8em) *SYCL* @noauthor_sycl_2014 is a recent standard developed by the Khronos Group that provides high-level abstractions for heterogeneous programming. 
Based on recent iterations of the C++ standard (C++17 and above), it aims to replace OpenCL by simplifying the process of writing kernels and abstracting the usual low-level compute model (see @compute_model). Kernels written in SYCL look very much like standard CPU code does. However, they can be offloaded to hardware accelerators as the user desires, in an approach similar to what programmers are used to with OpenCL. #h(-1.8em) *Open Multi-Processing (OpenMP)* @noauthor_home_nodate is an API specification for shared-memory parallel programming that also supports offloading to hardware accelerators. It is based on compiler directives for C/C++ and Fortran programming languages. Similarly to SYCL, standard CPU code can be automatically offloaded to the GPU by annotating the relevant section using OpenMP `pragma omp target` clauses. OpenMP is well-known in the field of HPC as it is the primary tool for fine-grain, shared-memory parallelism on CPUs. #h(-1.8em) *Kokkos* @CarterEdwards20143202 is a modern C++ performance portability programming ecosystem that provides mechanisms for parallel execution on hardware accelerators. Unlike other programming models, Kokkos implements GPU offloading using a variety of backends (CUDA, HIP, SYCL, OpenMP, etc.). Users write their kernels in standard C++ and can choose their preferred backend for code generation. Kokkos also provides useful memory abstractions, tools, and compute libraries targeted for HPC use cases. === Performance benefits and HPC use cases #h(1.8em) Historically, GPUs have primarily been used for graphics-intensive tasks like 3D modeling, rendering, or gaming. However, their highly parallelized design makes them appealing for HPC workloads, which often induce many computations that can be performed concurrently. Applications that are primarily compute-bound can benefit from significant performance improvements when executed on GPUs. 
Modern HPC systems have already taken advantage of GPUs by tightly incorporating them into their design. Around 98% of the peak performance of modern supercomputers such as Frontier (\#1 machine on the June 2023 TOP500 ranking @noauthor_june_nodate) comes from GPUs, making it crucial to use them efficiently. Moreover, nine systems in the top 10 are equipped with GPUs, further demonstrating their importance. The convergence between HPC and AI contributes to the hardware-accelerator trend, especially in exascale-class systems. However, it is essential to note that while both fields can benefit from GPUs, their hardware uses are entirely different. Indeed, most AI workloads can profit from reduced floating-point (FP) precision. Notably, they can leverage specialized tensor cores found in modern GPUs to enhance the performance of AI workloads even further, which predominantly depend on dense matrix operations. This is not the case for HPC, which, most of the time, requires the use of double-precision floating-point arithmetic. As AI continues to gain significant momentum and attract a growing user base, it may impact the design of the next generation of GPUs. This influence could lead to a shift towards prioritizing more tensor cores and reduced floating-point precision, potentially at the expense of HPC's interests. As massive reliance on hardware accelerators is becoming the norm within heterogeneous systems, it is crucial to efficiently program GPUs to exploit the performance benefits they offer correctly. To this end, the industry is investing a considerable amount of resources in software engineering to encourage and facilitate the development of GPU-accelerated applications. We are witnessing significant efforts, particularly in the field of programming languages, that concentrate on ensuring the safety and performance of massively parallel code. The Rust programming language targets those goals and will be our focus in the next section. 
== The Rust programming language

#h(1.8em) This section introduces the Rust programming language, its notable features, and its possible usage in HPC software that leverages hardware accelerators like GPUs.

=== Language features

Rust is a compiled, general-purpose, multi-paradigm, imperative, strongly and statically typed language designed for performance, safety, and concurrency. Its development started in 2009 at Mozilla, and its first stable release was announced in 2015. As such, it is a relatively recent language that aims at becoming the new gold standard for systems programming. Its syntax is based on C and C++ but with a modern twist and heavily influenced by functional languages.

Rust's primary feature distinguishing it from other compiled languages is its principle of ownership and _borrow-checker_. Ownership rules @noauthor_what_nodate state the following:
1. Each value has an owner.
2. There can only be _one_ owner at a time.
3. When the owner goes out of scope, the value's associated memory is _automatically_ dropped.

Contrary to C++, Rust is a move-by-default language. This means that instead of creating implicit deep copies of heap-allocated objects, Rust destructively moves the data between objects --- i.e., any subsequent use of a moved value is detected and rejected by the compiler (see @ownership). Furthermore, variables are constant by default and must be explicitly declared mutable using the `mut` keyword.

#figure(caption: "Rust's ownership in action")[
  ```rs
  let s1 = String::from("foo");
  let s2 = s1; // `s1` ownership is moved to `s2`, which now owns the value "foo"

  println!("{s1}"); // ERROR! value of `s1` has been moved to `s2`
  ```
]<ownership>

In @ownership, declaring `s2` by assigning `s1` to it takes ownership of the value held by `s1` (ownership rule \#2). This invalidates any later use of `s1`, and the Rust compiler can statically catch such mistakes. This guarantees that the compiled code cannot contain use-after-free bugs. 
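For contrast, the two standard ways to make such a snippet compile are an explicit deep copy or a borrow. A brief sketch (our own example, not taken from the original report):

```rust
fn main() {
    // Explicit deep copy: `clone` duplicates the heap allocation,
    // so both variables own independent values.
    let s1 = String::from("foo");
    let s2 = s1.clone();
    println!("{s1} {s2}"); // OK: `s1` was cloned, not moved

    // Borrow: `s3` reads the value without taking ownership of it.
    let s3 = &s2;
    assert_eq!(*s3, "foo");
    println!("{s3}");
}
```

The trade-off is explicitness: the allocation cost of the deep copy is visible in the source as `clone()`, whereas C++ performs the equivalent copy silently.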
In order to share values between multiple variables, the language also provides references that implement a borrowing mechanism. There are two kinds of references in Rust:
- *Immutable (shared) references* are read-only. Multiple immutable references to the same value can exist simultaneously.
- *Mutable references* allow modifying a value that has been borrowed. However, a mutable reference is unique. There cannot be other references (mutable or shared) to a value that has been mutably borrowed while the reference remains in scope.

#h(1.8em) The Rust "borrow-checker" enforces these rules at compile time (see @read_only and @read_write in @appendix). It can also check that any given reference remains valid while in use (i.e., the object it points to is still in scope). This statically guarantees no dangling pointers/references in the code. @borrowing demonstrates these rules, annotated with comments summing up the compiler errors, where relevant.

#figure(caption: "Rust's borrowing in action")[
  ```rs
  let s2;
  {
      let s1 = String::from("bar");
      s2 = &s1; // Borrowing the value held by `s1`
      println!("{s1}"); // OK! `s1` has only been borrowed; thus it is still valid
      println!("{s2}"); // OK! `s2` holds a reference to "bar" but does not own it
  }
  println!("{s2}"); // ERROR! `s2` held a reference that is not valid anymore
                    // because the owner `s1` went out of scope
  ```
  ```rs
  let mut s1 = String::from("Hello, "); // `s1` needs to be declared as mutable
                                        // so we can mutably borrow it
  {
      let s2 = &mut s1; // Mutably borrowing the value held by `s1`
      let s3 = &s1; // ERROR! cannot borrow because `s2` is a mutable
                    // reference in scope, later used to modify `s1`
      s2.push_str("world!"); // OK! Modifying `s1` through `s2`
  } // `s2` falls out of scope, and the mutable reference is dropped

  let s3 = &s1; // OK! There are no mutable references to `s1`
  println!("{s3}"); // Prints "Hello, world!"
  ```
]<borrowing>

#v(1em)
#h(1.8em) Rust's ownership and borrowing rules eliminate the need for a garbage collector, thus maintaining high performance comparable to other compiled languages such as C, C++, or Fortran. Moreover, they also prevent an entire class of memory safety bugs that plague C/C++ codebases, often causing crashes, memory leaks, or even opening vulnerabilities to cyber-attacks. The advantages of these features do not stop there either. By leveraging the rules of ownership and Rust's strict type system, the compiler can catch most concurrency-related bugs, such as race conditions or data races.

@cpp_thread_safety implements a simple parallel vector sum using C++ standard library threads and a lambda function. The vector contains a million values, all initialized to `1`. Compiling the code with the following command does not produce any warnings whatsoever:
```
$ g++ -std=c++17 -Wall -Wextra parallel_vector_sum.cpp
```
However, when running the code a few times, we get the following results:
```
$ for i in $(seq 0 5); do ./a.out; done
RESULT: 639810
RESULT: 641278
RESULT: 619719
RESULT: 1235839
RESULT: 590743
```
We obtain different results for each run and never get the expected `1,000,000` result. This is because @cpp_thread_safety contains a race condition that happens when we try to increment the `result` variable.

#figure(caption: "Multi-threaded vector sum in standard C++17")[
  ```cpp
  constexpr size_t NELEMENTS = 1'000'000;
  constexpr size_t NTHREADS = 8;
  constexpr size_t QOT = NELEMENTS / NTHREADS;
  constexpr size_t REM = NELEMENTS % NTHREADS;

  std::vector<int> vector(NELEMENTS, 1);
  std::vector<std::thread> threads(NTHREADS);
  int result = 0;

  for (size_t t = 0; t < NTHREADS; ++t) {
      size_t const start = t * QOT;
      size_t const end = t == NTHREADS - 1 ? start + QOT + REM : start + QOT;
      threads[t] = std::thread([&]() {
          for (size_t i = start; i < end; ++i) {
              result += vector[i];
          }
      });
  }
  for (auto& t: threads) {
      t.join();
  }
  printf("RESULT: %d\n", result);
  ```
]<cpp_thread_safety>

#v(3em)
#figure(
  image("../../figures/2-context/race_cond.svg", width: 96%),
  caption: "Illustration of a race condition",
)<race_cond>
#v(1em)

@race_cond shows how two threads, A and B, can cause a race condition while trying to update the value of `result` concurrently. Both threads load the same value and increment it before storing it again. Thread B overwrites the value stored by thread A without considering thread A's changes, therefore losing information and producing the wrong sum. In contrast, Rust's ownership rules allow the compiler to notice such race conditions, making it reject the following equivalent code:

#figure(caption: "Multi-threaded vector sum in standard Rust")[
  ```rs
  const NELEMENTS: usize = 1_000_000;
  const NTHREADS: usize = 8;
  const QOT: usize = NELEMENTS / NTHREADS;
  const REM: usize = NELEMENTS % NTHREADS;

  let vector = vec![1; NELEMENTS];
  let mut threads = Vec::with_capacity(NTHREADS);
  let mut result = 0;

  for t in 0..NTHREADS {
      let start = t * QOT;
      let end = if t == NTHREADS - 1 { start + QOT + REM } else { start + QOT };
      threads.push(std::thread::spawn(|| {
          for i in start..end {
              result += vector[i]; // thread `t` mutably borrows `result`
          }
      }));
  }
  for t in threads {
      t.join().unwrap();
  }
  println!("RESULT: {result}");
  ```
]<rs_thread_safety>

#h(1.8em) Indeed, in @rs_thread_safety, although Rust automatically infers that it must mutably borrow `result` in the thread's lambda (called a "closure" in Rust), it cannot guarantee that the thread will finish executing before `result` goes out of scope. Furthermore, when a thread `t` mutably borrows `result`, it prevents the other threads from borrowing it, resulting in a compiler error. The entire error message is available in @error_race_cond in the @appendix. 
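One way to repair the sum is to make the accumulation atomic, which the compiler then accepts. Below is a minimal std-only sketch of our own (using scoped threads, stable since Rust 1.63; the chunking differs slightly from the manual index computation above):

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::thread;

const NELEMENTS: usize = 1_000_000;
const NTHREADS: usize = 8;

fn parallel_sum(vector: &[i32]) -> i32 {
    let result = AtomicI32::new(0);
    // Scoped threads may borrow `vector` and `result`, because the scope
    // guarantees every thread is joined before the borrows end.
    thread::scope(|s| {
        for chunk in vector.chunks((NELEMENTS + NTHREADS - 1) / NTHREADS) {
            let result = &result;
            s.spawn(move || {
                // Each thread sums its chunk privately, then performs a
                // single atomic add: no data race, minimal contention.
                let partial: i32 = chunk.iter().sum();
                result.fetch_add(partial, Ordering::Relaxed);
            });
        }
    });
    result.into_inner()
}

fn main() {
    let vector = vec![1; NELEMENTS];
    println!("RESULT: {}", parallel_sum(&vector)); // always prints RESULT: 1000000
}
```

Accumulating into a thread-private `partial` first keeps the hot loop free of atomic operations; only the final per-thread add is synchronized.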
#linebreak() In some cases, Rust can even propose the relevant changes to make the code valid. E.g., @rs_thread_safety can be fixed by either: - Making the `result` variable atomic to guarantee shared-memory communication of the updated value between threads; - Wrapping the `result` variable with a lock (e.g., a mutex) to ensure that increments of the value are protected atomically. Rust's ownership and borrowing rules make it an excellent fit for parallel programming, as the compiler can assert that the code is semantically correct and will produce the expected behavior. Being a compiled language, it is able to match and even sometimes surpass the performance of its direct competitors, C and C++. #h(-1.8em) Hereafter, we exhaustively list other valuable features that the language includes but that are not worth exploring in detail in the available space of this report: - Smart pointers and Resource Acquisition Is Initialization (RAII) mechanisms for automatic resource allocation and deallocation; - Powerful enum types that can both encode meaning and hold values, paired with pattern-matching expressions to handle variants concisely; - Optional datatypes that enable compact error handling for recoverable and unrecoverable errors; - A generic typing system, and _traits_ that provide ways to express shared behaviors between objects; - A robust documenting, testing, and benchmarking framework integrated into the language; - A broad standard library that provides an extensive set of containers, algorithms, OS and I/O functionalities, asynchronous and networking primitives, concurrent data structures and message passing features, etc. 
#pagebreak() Rust also comes with a vast set of tools to aid software development: - A toolchain manager that handles various hardware targets, `rustup`; - A package manager and build system, `cargo`; - A registry for sharing open-source libraries, `crates.io`; - A comprehensive language and standard library documentation, `docs.rs`; - First-class programming tools for improved development efficiency: a language server, `rust-analyzer`, a linter `clippy`, a code formatter `rustfmt`, etc. === HPC use cases Rust's accent on performance, safety and concurrency makes the language a fitting candidate for becoming a first-tier choice for HPC applications. Its focus on thread safety, in particular, empowers programmers to write fast, robust, and safe code that will be easily maintainable and improvable over its lifetime. Rust avoids many of the pitfalls of C++, especially in terms of language complexity, and its modern features and syntax make it a lot easier to work with than Fortran. Its adoption into many of the top companies that operate in fields related to HPC (Amazon, Google, Microsoft Azure, etc.), and its acceptance as the second language of the Linux kernel helped it gain a lot of traction in low-level, high-performance programming domains. Not only is it well suited to writing scientific software that relies on efficient parallelism, Rust is also a formidable language for writing HPC tools, such as profilers, debuggers, or even low-level libraries that power abstractions in higher-level languages (e.g., Python, Julia, etc.). Rust's robust memory and thread safety features position it as an excellent candidate for GPGPU programming. 
Should the language's properties ensure the elimination of common bugs in parallel programming (e.g., race conditions, data races, or accesses to invalid memory regions within GPU kernels), Rust emerges as a highly attractive choice for developing the next generation of scientific applications harnessing the heterogeneous architecture of modern supercomputers.

== Goals

#h(1.8em) This internship aims to establish an exhaustive state of the art of Rust's capabilities for GPU programming. The goal is to explore what the language is currently able to support natively and what frameworks or libraries exist for writing GPGPU software. As the CEA is involved in developing critical applications for simulation purposes, Rust's focus on high performance and its guarantees in type, memory, and thread safety are compelling assets for writing fast, efficient, and robust code. As a primary actor in research and industry, the CEA could benefit from using Rust for hardware acceleration purposes in scientific computing.

Several crates, e.g., `rayon` @noauthor_rayon_2023, enable trivial parallelization of code sections for CPU use cases, similar to OpenMP's ease of use in C, C++, and Fortran. This library provides parallel implementations of Rust's standard iterators that fully leverage the language's thread-safety features. Code is guaranteed to be correct at compile time and unlocks the processor's maximum multi-core performance. Rayon also implements automatic work-stealing techniques that keep the CPU busy, even when the application's load balancing is not optimal. We want to investigate whether something similar exists for GPU computing and, if not, to determine what the limitations would be if we tried to build one.

#linebreak() In a secondary stage, we want to assess Rust's ability to keep up with C and C++ GPGPU programming performance. 
This comparison would be primarily based on common compute kernels and should aim at evaluating the best options for GPU code generation in Rust. #linebreak() Finally, we would like to research the limits of Rust for GPU computing by porting parts of real-world CEA applications. This work involves evaluating both the effort necessary for such ports, and the performance improvements that we can expect for industrial-grade software. This work's ultimate purpose is to determine if it is possible to leverage Rust's properties for writing efficient code whose concurrent correctness is asserted by the compiler.
https://github.com/ShapeLayer/ucpc-solutions__typst
https://raw.githubusercontent.com/ShapeLayer/ucpc-solutions__typst/main/tests/presets_difficulties/test.typ
typst
Other
#import "/lib/lib.typ" as ucpc
#import ucpc.presets: difficulties as lv

#lv.easy
#lv.normal
#lv.hard
#lv.challenging
#lv.bronze
#lv.silver
#lv.gold
#lv.platinum
#lv.diamond
#lv.ruby
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/layout/enum-numbering-03.typ
typst
Other
// Test numbering with closure and nested lists. #set enum(numbering: n => super[#n]) + A + B + C
https://github.com/Arrata-TTRPG/Arrata-TTRPG
https://raw.githubusercontent.com/Arrata-TTRPG/Arrata-TTRPG/main/src/sections/quirks.typ
typst
Other
#import "../typst-boxes.typ": * = Quirks Quirks are the backbone of any character. They help you as a player or GM define who exactly a character is, how they operate, and how you should be representing them. The point of Quirks is to allow a degree of freedom in roleplaying a character without letting you lose what makes that character unique as a person. Quirks are usually a single word or a very short phrase that defines a particular characteristic of someone. They are not intended to be stereotypes or absolute rules of how a character works. Instead, they define the boundaries, biases, beliefs, and any aspect of the character that's relevant to the story. == Quirk Types Quirks are divided into three categories under the three classical rhetorical appeals: *Ethos*, *Pathos*, and *Logos*. Each category defines a set of Quirks and what they usually do for a character. By building a character with at least one or two Quirks in each category, you're almost guaranteed to have at least a half-interesting person. === Ethos Ethos expresses a character's Ethical, Moral, Societal, and Religious beliefs and context. Often they contain information about their past and how they're currently seen by the society they live in today. Ethos Quirks are usually what gets a person into trouble; what they use to stir the pot and cause conflict. === Pathos Pathos deals with a character's emotional situation - how they act around other people and with what level of apathy or empathy they approach different tasks. Pathos Quirks tend to define things that may seem simple or stereotypical but can be used in much more nuanced ways when combined with other Quirks. === Logos Logos is how a character makes decisions; it's their inner voice that drives their actions step by step through whatever mess the other Quirks put them into. == Boons and Flaws Quirks can offer _Boons_ and _Flaws_ which allow for relevant rolls to be modified. 
When a check is made such that a Quirk's Boons seem relevant, then that roll will gain a level of advantage. In opposition, if a Quirk's Flaws seem more relevant, they will gain a level of disadvantage. Note that this is not exclusive; Quirks that have a Boon in a scenario could also have a relevant Flaw, and therefore the roll would have a level of advantage and disadvantage.

#slantedColorbox(
  title: "Example Quirk: Veritable",
  width: auto,
  radius: 0pt,
  color: black,
)[
  #set text(size: 8pt)
  _Note: Bring your GM treats; they may consider your PC's Boons._

  Here's an example of the _Veritable_ Quirk:

  *Name*: Veritable

  *Defines*:
  - Being a genuine article or item.
  - A real instance of something believed gone or impossible.
  - Acting genuinely or truthfully.

  *Boons*: People will often trust or believe you. They might see you as an ally when things are wrong in the world.

  *Flaws*: You may often disclose things you should not. When there is great abundance, you may be seen as archaic.
]

== Fighting Quirks

Doing something that isn't what a character would normally do is incredibly interesting, but only if such an event is justifiable, otherwise Quirks would have no meaning other than to provide you with advantage and disadvantage. In the event where a character might reasonably consider and even act in a way contrary to a Quirk, we say they're _Fighting the Quirk_. To clarify, Fighting the Quirk is an event where a character might say:

#align(center)["_Do I want to be me? Do I accept who I am, or should I change?_"]

Fighting against Quirks is the key to _Change_ in Arrata. You as a player are the controller of your character and are ultimately the one who pilots the fate of your character. Part of that fate is deciding if a character _Accepts_ or _Rejects_ their Quirks.

=== Acceptance and Rejection

_Acceptance_ and _Rejection_ are measures of how much a character likes or dislikes a particular Quirk. 
Utilizing a Quirk in ways that demonstrate not only a reliance on the Quirk, but trust and belief in that aspect of the character is likely to increase your character's Acceptance of that Quirk. Doing the opposite, Fighting a Quirk (and succeeding), increases its Rejection.

Acceptance and Rejection function like stats, although they don't have a Quality, and aren't rolled. Instead, they act in opposition to each other: for every level of Acceptance, acts that fight against the Quirk gain a level of disadvantage. For every level of Rejection, acts that utilize the Quirk's Boons gain a level of disadvantage. But they also offer relief when used to further themselves: acts that are faithful to a Quirk and use its Boons gain levels of advantage equal to the Quirk's Acceptance, and acts that fight the Quirk gain levels of advantage equal to its Rejection.

== Intuition

Intuition is a point system that seeks to reward good character crafting and storytelling: when Quirks are roleplayed well, and when the conflict in the story is dealt with in interesting ways. A single "Intuition point" is awarded to a PC by the GM when the player of that character does one of the following:
- Roleplays a Quirk well,
- Roleplays a scenario well,
- Creates a particularly funny or interesting scenario,
- Fights against a Quirk successfully.

It's also important to note that for the given methods of gaining Intuition, if a particular Quirk is a reason why Intuition is being gained, then the Intuition will go into that Quirk's Intuition category. If there isn't a Quirk that caused it, the player may choose which category they want the Intuition to go to freely. How often and in what volume Intuition is given out is dependent on the GM, but every player should be earning 1-2 Intuition points per 3-4 hours of play. 
=== Spending Intuition

== Argos

#slantedColorbox(
  title: "Argos - Etymology",
  width: auto,
  radius: 0pt,
  color: black,
)[
  #set text(size: 8pt)
  _Argos is a city in the Greek Peloponnese, the same island Sparta is situated on. Argo, the ship Jason used with his Argonauts, was the vessel by which he carried out his journey. Many terrible things happen on this adventure to find the Golden Fleece, but ultimately, they retrieve the Fleece and return to Greece, where Jason assumes his father's throne. This story is short-lived though, and the people reject Jason and his wife, driving them into solitude. Jason breaks his vows to his wife in exile, and she takes his new wife's life as well as their child's, ascending to Mount Olympus. Jason returns to his land, where the Argo is on display. As he rests next to it, a part of it breaks loose, crushing and killing Jason._
]

*Argos* in Arrata is the drive a character has. Their goal, mission, and ultimate weakness. It is their source of power and the destination where they're "retired"; their final resting place. It's from their Argos that a story is driven.

Argos is often a sentence or short phrase, written from the perspective of the character who has that Argos. The written Argos should be short, astute, and clear in its goal; it should be a stopping point the character deeply desires and wishes for. You should write a character's Argos as if it were their final words, and in a way only they could fully understand. This is because an Argos is incredibly dangerous for a character. It can drive them into stupidly dangerous situations, taking on foes far more powerful than themselves. It can also be corrupted and turned against them, used to manipulate a character for the benefit of another.

Argos should incorporate a character's Quirks; if they're _Caliban_, then so should their Argos, if they're _Corrupt_, then their Argos should be underhanded, if they're _Cursed_, then they should be fighting with or against that curse. 
Argos provides your characters with a special power most others don't have: _Purpose_. With this purpose, their actions will become more likely to succeed, and actions that directly serve towards moving that character closer to their Argos should be offered a level of advantage by the GM.

This is the power that Argos provides, but actions that go against Argos are severely punished. If a character moves against their Argos, ignoring the goal they've sought, they will be considered _Failing_, and it's the job of everyone at the table to notify their player that they're not playing their character to their Argos. If still, they continue to ignore or fight against their Argos, it's the GM's job to intervene. Characters who have a purpose but don't care for it will lose it, and revert to their original, purposeless existence. Remove these characters (and probably their player) from the table immediately.

This is one rule that is immutable in Arrata: *Evict purposeless characters and the players who lead them to this*. If the situations and events that have occurred make it such that they lose their Argos, that is acceptable. But if a player's actions drag their character time and again away from their purpose, then they're actively breaking the rules of this system. Take necessary action and remove them.

Not all Argos are noble. People fight and die for incredibly stupid reasons constantly. A hope with Arrata is to allow you to see those purposes and their reasons, even if you might disagree with them, and understand them better.

=== Completing Argos

If the Argos of a character is made true, then we say that the character has _Completed their Argos_. Their journey in this story is over, the control of the player over the character is no longer needed for them to live in this world. Discuss with your group then; what should we do with this character? Do they retire, move out somewhere nice and settle down? Do we send them off on another, tangential adventure? 
Will the characters remaining hear of them again? Wonder what happy, or bittersweet ending would be appropriate for them. Ultimately, this is a decision between a player and the GM, with a heavy lean towards the player's opinion. As a player, you're bargaining for the best end for this person you've made come true. As a GM, you're trying to retire and fit that character into the background as well as possible. Some characters might choose to stay and see the others' journeys through to completion. They might turn and abandon them in a bleak moment of vengeance. As the character's player, you should choose an option best fitting the character and see them through to the end well: it's your last responsibility to that character. === Breaking Argos In opposition, there might come a time when an Argos is made void; a family seeking to be reunited is destroyed, a wish for peace is turned into an eternal war, and vengeance is proved wrong in its assumptions. When this happens, we say that the character's _Argos is Broken_. Often, this is a traumatic event for them, where emotions take precedence, and stupid decisions are made in desperation. When Argos is broken, a decision is to be made immediately: what does the character do? Do they sink into endless despair, do they go on a rampage, do they turn and silently leave? Do any of the other characters stop them? A broken Argos can be mended, and a new one can be found. This is a critical part of _change_, but before that can be done, the trial of overcoming such a deep loss should be difficult. The GM should assign a few checks to see in what ways the character degenerates and what they lose. If it seems to be a total loss, _toss the character aside_. They're gone, and there's nothing you can do about it. #pagebreak()
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/meta/document-07.typ
typst
Other
#box[ // Error: 4-15 pagebreaks are not allowed inside of containers #pagebreak() ]
https://github.com/vimkat/typst-ohm
https://raw.githubusercontent.com/vimkat/typst-ohm/main/src/lib/elements.typ
typst
MIT License
#import "vars.typ" #let pipe = text(fill: vars.red, "|")
https://github.com/hugo-s29/typst-algo
https://raw.githubusercontent.com/hugo-s29/typst-algo/master/README.md
markdown
MIT License
# The `typst-algo` package.

This package helps you typeset algorithms in Typst. To learn more about this package, take a look at the [manual](docs/manual.pdf).

## Examples of algorithms (extracted from the manual)

This is Bogosort, a _very efficient_ sorting algorithm.

![First example](docs/example-1.png)

This is a Monte-Carlo algorithm to approximate $\pi$.

![Second example](docs/example-2.png)

This is the Quine–McCluskey algorithm for solving SAT.

![Third example](docs/example-3.png)

## Contributing

This project is open-source (MIT-licensed). Feel free to contribute if you think a feature is missing, the code could be improved, or anything else. Also, feel free to correct any typo you find.
https://github.com/Meisenheimer/Notes
https://raw.githubusercontent.com/Meisenheimer/Notes/main/src/IVP.typ
typst
MIT License
#import "@local/math:1.0.0": *

= Initial Value Problem

#env("Notation")[
  To numerically solve the IVP, we are given the initial condition $mathbf(u)_0 = mathbf(u) (t_0)$, and want to compute approximations ${ mathbf(u)_n, n = 1, 2, dots }$ such that
  $ mathbf(u)_n approx mathbf(u)(t_n), $
  where $k$ is the uniform time step size and $t_n = n k$.
]

== Linear Multistep Method

#env("Definition")[
  For solving the IVP, an s-step *linear multistep method* (LMM) has the form
  $ sum_(j=0)^s alpha_j mathbf(u)_(n+j) = k sum_(j=0)^s beta_j mathbf(f) (mathbf(u)_(n+j), t_(n+j)), $
  where $alpha_s = 1$ is assumed WLOG.
]

#env("Definition")[
  An LMM is *explicit* if $beta_s = 0$, otherwise it is *implicit*.
]

== Runge-Kutta Method

#env("Definition")[
  An s-stage *Runge-Kutta method* (RK) is a one-step method of the form
  $ & mathbf(y)_i & & = mathbf(f) (mathbf(u)_n + k sum_(j=1)^s a_(i j) mathbf(y)_j, t_n + c_i k), \
    & mathbf(u)_(n+1) & & = mathbf(u)_n + k sum_(j=1)^s b_j mathbf(y)_j, $
  where $i = 1, dots, s$ and $a_(i j), b_j, c_i in RR$.
]

#env("Definition")[
  The *Butcher tableau* is one way to organize the coefficients of an RK method as follows
  #align(center)[
    #table(
      columns: (auto, auto, auto, auto),
      stroke: none,
      align: center + horizon,
      $c_1$, table.vline(), $a_(11)$, $dots.c$, $a_(1s)$,
      $dots.v$, table.vline(), $dots.v$, "", $dots.v$,
      $c_s$, table.vline(), $a_(s 1)$, $dots.c$, $a_(s s)$,
      table.hline(), table.hline(), table.hline(), table.hline(),
      "", table.vline(), $b_1$, $dots.c$, $b_s$,
    )
  ]
  The matrix $A = (a_(i j))_(s times s)$ is called the RK matrix and $mathbf(b) = (b_1, dots, b_s)^T, mathbf(c) = (c_1, dots, c_s)^T$ are called the RK weights and the RK nodes. 
]

#env("Definition")[
  An s-stage *collocation method* is a numerical method for solving the IVP, where we
  + choose $s$ distinct collocation parameters $c_1, dots, c_s$,
  + seek an $s$-degree polynomial $p$ satisfying
    $ forall i = 1, 2, dots, s, #h(1em) mathbf(p) (t_n) = mathbf(u)_n " and " mathbf(p)^prime (t_n + c_i k) = mathbf(f) (mathbf(p) (t_n + c_i k), t_n + c_i k), $
  + set $mathbf(u)_(n+1) = mathbf(p) (t_(n+1))$.
]

#env("Theorem")[
  The s-stage collocation method is an s-stage IRK method with
  $ a_(i j) = integral_0^(c_i) l_j (tau) upright("d") tau, #h(1em) b_j = integral_0^1 l_j (tau) upright("d") tau, $
  where $i, j = 1, dots, s$ and $l_k (tau)$ is the elementary Lagrange interpolation polynomial.
]

== Theoretical analysis

#env("Definition")[
  A function $mathbf(f): RR^n times [0, +infinity) -> RR^n$ is *Lipschitz continuous* in its first variable over some domain
  $ Omega = { (mathbf(u), t): ||mathbf(u) - mathbf(u)_0|| <= a, t in [0, T] } $
  iff
  $ exists L >= 0, " s.t. " forall (mathbf(u), t), (mathbf(v), t) in Omega, #h(1em) ||mathbf(f) (mathbf(u), t) - mathbf(f) (mathbf(v), t)|| <= L ||mathbf(u) - mathbf(v)||. $
]

=== Error analysis

#env("Definition")[
  The *local truncation error* $tau$ is the error caused by replacing continuous derivatives with numerical formulas.
]

#env("Definition")[
  A numerical formula is *consistent* if $limits(lim)_(k -> 0) tau = 0$.
]

=== Stability

#env("Definition")[
  The *region of absolute stability* (RAS) of a numerical method, applied to
  $ mathbf(u)^prime = lambda mathbf(u), #h(1em) mathbf(u)_0 = mathbf(u) (t_0), $
  is the region $Omega$ such that
  $ forall mathbf(u)_0, #h(1em) forall lambda k in Omega, #h(1em) lim_(n -> +infinity) mathbf(u)_n = 0. $
]

#env("Definition")[
  The *stability function* of a one-step method is a function $R: CC -> CC$ that satisfies
  $ mathbf(u)_(n+1) = R(z) mathbf(u)_n $
  for the test equation $mathbf(u)^prime = lambda mathbf(u)$, where $upright("Re") (lambda) <= 0$ and $z = k lambda$. 
]

#env("Definition")[
  A numerical method is *stable* or *zero stable* iff its application to any IVP with $mathbf(f) (mathbf(u), t)$ Lipschitz continuous in $mathbf(u)$ and continuous in $t$ yields
  $ forall T > 0, #h(1em) lim_(k -> 0, n k = T) ||mathbf(u)_n|| < infinity. $
]

#env("Definition")[
  A numerical method is *A($bold(alpha)$)-stable* if the region of absolute stability $Omega$ satisfies
  $ {z in CC: pi - alpha <= arg(z) <= pi + alpha} subset.eq Omega. $
]

#env("Definition")[
  A numerical method is *A-stable* if the region of absolute stability $Omega$ satisfies
  $ { z in CC: upright("Re") (z) <= 0 } subset.eq Omega. $
]

#env("Definition")[
  A one-step method is *L-stable* if it is A-stable, and its stability function satisfies
  $ lim_(z -> infinity) |R(z)| = 0. $
]

#env("Definition")[
  A one-step method is *I-stable* iff its stability function satisfies
  $ forall y in RR, |R(y mathbf(i))| <= 1. $
]

#env("Definition")[
  A one-step method is *B-stable* (or *contractive*) if for any contractive ODE system, every pair of its numerical solutions $mathbf(u)_n$ and $mathbf(v)_n$ satisfies
  $ forall n in NN, ||u_(n+1) - v_(n+1)|| <= ||u_n - v_n||. $
]

#env("Definition")[
  An RK method is *algebraically stable* iff the RK weights $b_1, dots, b_s$ are nonnegative and the *algebraic stability matrix* $M = (b_i a_(i j) + b_j a_(j i) - b_i b_j)_(s times s)$ is positive semidefinite.
]

#env("Theorem")[
  The order of accuracy of an implicit A-stable LMM satisfies $p <= 2$. An explicit LMM cannot be A-stable.
]

#env("Theorem")[
  No ERK method is A-stable.
]

#env("Theorem")[
  An RK method is A-stable if and only if it is I-stable and all poles of its stability function $R(z)$ have positive real parts.
]

#env("Theorem")[
  If an A-stable RK method with a nonsingular RK matrix $A$ is stiffly accurate, then it is L-stable.
]

#env("Theorem")[
  If an A-stable RK method with a nonsingular RK matrix $A$ satisfies
  $ forall i in {1, dots, s}, #h(1em) a_(i 1) = b_1, $
  then it is L-stable. 
]

#env("Theorem")[
  B-stable one-step methods are A-stable.
]

#env("Theorem")[
  An algebraically stable RK method is B-stable and A-stable.
]

=== Convergence

#env("Definition")[
  A numerical method is convergent iff its application to any IVP with $mathbf(f) (mathbf(u), t)$ Lipschitz continuous in $mathbf(u)$ and continuous in $t$ yields
  $ forall T > 0, #h(1em) lim_(k -> 0, n k = T) mathbf(u)_n = mathbf(u) (T). $
]

#env("Theorem")[
  A numerical method is convergent iff it is consistent and stable.
]

== Important Methods

=== Forward Euler's method

#env("Definition")[
  The *forward Euler's method* solves the IVP by
  $ mathbf(u)_(n+1) = mathbf(u)_n + k mathbf(f) (mathbf(u)_n, t_n). $
]

#env("Theorem")[
  The region of absolute stability for forward Euler's method is
  $ { z in CC: |1 + z| <= 1 }. $
]

=== Backward Euler's method

#env("Definition")[
  The *backward Euler's method* solves the IVP by
  $ mathbf(u)_(n+1) = mathbf(u)_n + k mathbf(f) (mathbf(u)_(n+1), t_(n+1)). $
]

#env("Theorem")[
  The region of absolute stability for backward Euler's method is
  $ { z in CC: |1 - z| >= 1 }. $
]

=== Trapezoidal method

#env("Definition")[
  The *trapezoidal method* solves the IVP by
  $ mathbf(u)_(n+1) = mathbf(u)_n + k/2 (mathbf(f) (mathbf(u)_n, t_n) + mathbf(f) (mathbf(u)_(n+1), t_(n+1))). $
]

#env("Theorem")[
  The region of absolute stability for the trapezoidal method is
  $ { z in CC: abs((2 + z)/(2 - z)) <= 1 }. $
]

=== Midpoint method (Leapfrog method)

#env("Definition")[
  The *midpoint method (Leapfrog method)* solves the IVP by
  $ mathbf(u)_(n+1) = mathbf(u)_(n-1) + 2 k mathbf(f) (mathbf(u)_n, t_n). $
]

#env("Theorem")[
  The region of absolute stability for the midpoint method is
  $ { z in CC: abs(z plus.minus sqrt(1 + z^2)) <= 1 } eq.quest { 0 }. 
$ ] === Heun's third-order RK method #env("Definition")[ The *Heun's third-order formula* is an ERK method of the form $ cases( & mathbf(y)_1 & = mathbf(f)(mathbf(u)_n, t_n)\,, & mathbf(y)_2 & = mathbf(f)(mathbf(u)_n + k/3 mathbf(y)_1, t_n + k/3)\,, & mathbf(y)_3 & = mathbf(f)(mathbf(u)_n + (2 k)/3 mathbf(y)_2, t_n + (2 k)/3)\,, & mathbf(u)_(n+1) & = mathbf(u)_n + k/4 (mathbf(y)_1 + 3 mathbf(y)_3). ) #h(1em) #table( columns: (auto, auto, auto, auto), stroke: none, align: center + horizon, $0$, table.vline(), $0$, $0$, $0$, $1/3$, table.vline(), $1/3$, $0$, $0$, $2/3$, table.vline(), $0$, $2/3$, $0$, table.hline(), "", table.vline(), $1/4$, $0$, $3/4$, ) $ ] === Classical fourth-order RK method #env("Definition")[ The *classical fourth-order RK method* is an ERK method of the form $ cases( & mathbf(y)_1 & = mathbf(f)(mathbf(u)_n, t_n)\,, & mathbf(y)_2 & = mathbf(f)(mathbf(u)_n + k/2 mathbf(y)_1, t_n + k/2)\,, & mathbf(y)_3 & = mathbf(f)(mathbf(u)_n + k/2 mathbf(y)_2, t_n + k/2)\,, & mathbf(y)_4 & = mathbf(f)(mathbf(u)_n + k mathbf(y)_3, t_n + k)\,, & mathbf(u)_(n+1) & = mathbf(u)_n + k/6 (mathbf(y)_1 + 2 mathbf(y)_2 + 2 mathbf(y)_3 + mathbf(y)_4). ) #h(1em) #table( columns: (auto, auto, auto, auto, auto), stroke: none, align: center + horizon, $0$, table.vline(), $0$, $0$, $0$, $0$, $1/2$, table.vline(), $1/2$, $0$, $0$, $0$, $1/2$, table.vline(), $0$, $1/2$, $0$, $0$, $1$, table.vline(), $0$, $0$, $1$, $0$, table.hline(), "", table.vline(), $1/6$, $1/3$, $1/3$, $1/6$, ) $ ] === Third-order strong-stability preserving RK method #env("Definition")[ The *third-order strong-stability preserving RK method* is an ERK method of the form $ cases( & mathbf(y)_1 & = mathbf(u)_n + k mathbf(f)(mathbf(u)_n, t_n)\,, & mathbf(y)_2 & = 3/4 mathbf(u)_n + 1/4 mathbf(y)_1 + 1/4 k mathbf(f)(mathbf(y)_1, t_n + k)\,, & mathbf(u)_(n+1) & = 1/3 mathbf(u)_n + 2/3 mathbf(y)_2 + 2/3 k mathbf(f)(mathbf(y)_2, t_n + k/2)\. 
) #h(1em) $
which can also be written as
$ cases(
  & mathbf(y)_1 & = mathbf(f)(mathbf(u)_n, t_n)\,,
  & mathbf(y)_2 & = mathbf(f)(mathbf(u)_n + k mathbf(y)_1, t_n + k)\,,
  & mathbf(y)_3 & = mathbf(f)(mathbf(u)_n + 1/4 k mathbf(y)_1 + 1/4 k mathbf(y)_2, t_n + k/2)\,,
  & mathbf(u)_(n+1) & = mathbf(u)_n + k/6 (mathbf(y)_1 + mathbf(y)_2 + 4 mathbf(y)_3).
) #h(1em) #table(
  columns: (auto, auto, auto, auto),
  stroke: none,
  align: center + horizon,
  $0$, table.vline(), $0$, $0$, $0$,
  $1$, table.vline(), $1$, $0$, $0$,
  $1/2$, table.vline(), $1/4$, $1/4$, $0$,
  table.hline(),
  "", table.vline(), $1/6$, $1/6$, $2/3$
) $ ]

=== TR-BDF2 method

#env("Definition")[
  The *TR-BDF2 method* is a one-step method of the form
  $ cases(
    & mathbf(u)_* & = mathbf(u)_n + k/4 (mathbf(f) (mathbf(u)_n, t_n) + mathbf(f) (mathbf(u)_*, t_n + k/2))\,,
    & mathbf(u)_(n+1) & = 1/3 (4 mathbf(u)_* - mathbf(u)_n + k mathbf(f) (mathbf(u)_(n+1), t_(n+1))).
  ) $
]
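The stability regions of the two Euler methods can be sanity-checked numerically. The following sketch (our addition for illustration, not part of the original notes) applies both methods to the test equation $u' = lambda u$ with $z = k lambda = -3$, which lies outside forward Euler's region ${|1 + z| <= 1}$ but inside backward Euler's ${|1 - z| >= 1}$:

```python
# Sketch: forward vs. backward Euler on u' = lam * u, u(0) = 1.
# z = k * lam = -3: forward Euler should blow up, backward Euler should decay.
lam, k, steps = -10.0, 0.3, 50
fe = be = 1.0
for _ in range(steps):
    fe = (1 + k * lam) * fe       # forward Euler:  u_{n+1} = (1 + z) u_n
    be = be / (1 - k * lam)       # backward Euler: u_{n+1} = u_n / (1 - z)
print(abs(fe), abs(be))  # |fe| grows like 2^n, |be| decays like 4^{-n}
```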
https://github.com/crd2333/crd2333.github.io
https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/AI/Deep%20Learning/LLM.typ
typst
#import "/src/components/TypstTemplate/lib.typ": *
#show: project.with(
  title: "Deep Learning for Language Models",
  lang: "zh",
)

#let softmax = math.op("softmax")
#let QKV = $Q K V$
#let qkv = QKV
#let Concat = math.op("Concat")
#let MultiHead = math.op("MultiHead")
#let Attention = math.op("Attention")
#let head = $"head"$
#let dm = $d_"model"$
#let FFN = math.op("FFN")

= LLM
- For the basics of the Transformer, see the #link("http://crd2333.github.io/note/Reading/%E8%B7%9F%E6%9D%8E%E6%B2%90%E5%AD%A6AI%EF%BC%88%E8%AE%BA%E6%96%87%EF%BC%89/Transformer")[reading notes on the original paper]
#hide[
  - #link("https://lhxcs.github.io/note/AI/EfficientAI/LLM/")[lhx's notes]
]

== Transformer Design Variants

=== Encoder-Only: BERT
- BERT: #strong[B]idirectional #strong[E]ncoder #strong[R]epresentations from #strong[T]ransformers
- BERT is, as the name suggests, encoder-only. Its motivation is to provide, as in CV, a pre-trained feature-extraction model on top of which downstream tasks can fine-tune
  #fig("/public/assets/AI/AI_DL/LLM/2024-09-21-17-14-54.png")
- It uses two pre-training tasks:
  1. Masked Language Model (MLM): $15%$ of the tokens get special treatment; of these, $80%$ are replaced with `<mask>`, $10%$ with a random word, and $10%$ are left unchanged
  2. Next Sentence Prediction (NSP): with $50%$ probability an adjacent sentence pair is used as a positive example, and with $50%$ probability a random sentence as a negative example; the feature extracted at the leading `<cls>` token is fed into a fully connected layer for the prediction
- In my understanding, the MLM design has two motivations:
  + It is unclear exactly how Google preprocessed the pre-training data, but in the pipeline Mu Li presents, the masks are fixed before training begins (which tokens are masked does not change between epochs). If masking always substituted the `<mask>` marker, some tokens might be masked in such a way that the model keeps predicting them without ever seeing them, which would hurt downstream fine-tuning. Hence the $10%$ chance of leaving a token unchanged
  + Among the selected $15%$ of words, $10%$ are replaced by arbitrary words while the correct word must still be predicted. The authors discuss the benefit of this masking strategy in the paper: the Transformer encoder cannot know which words it will be asked to predict (not only `mask` positions; other tokens may also need correcting), which forces it to learn a distributional contextual representation of every input token. Moreover, since random replacement affects only $1.5%$ of all tokens in a sentence ($15% * 10%$), it does not harm the model's language understanding
- The input combines three embeddings:
  1. Token Embedding
  2. Segment Embedding: distinguishes the two sentences (`[[0,0,0,0,1,1,1,], ...]`)
  3. Positional Encoding: learnable position embeddings
- BERT fine-tuning
  - As a model that cannot generate text, BERT is somewhat limited in its downstream tasks, which generally fall into *sequence-level* and *token-level* applications
  - Sequence-level applications: single-text classification (e.g., grammatical acceptability), text-pair classification or regression (e.g., sentiment analysis)
  #grid2(
    fig("/public/assets/AI/AI_DL/LLM/2024-09-22-15-53-32.png"),
    fig("/public/assets/AI/AI_DL/LLM/2024-09-22-15-53-48.png")
  )
  - Token-level applications: text tagging (e.g., part-of-speech tagging), question answering (e.g., treat the passage and the question as a sentence pair, and for every passage token predict whether it is the start or the end of the answer)
  #grid2(
    fig("/public/assets/AI/AI_DL/LLM/2024-09-22-15-55-57.png"),
    fig("/public/assets/AI/AI_DL/LLM/2024-09-22-15-56-28.png")
  )

=== Decoder-Only: GPT
- The pre-training objective is next-word prediction
- For small models (GPT-2), the pre-trained model is fine-tuned for each downstream task. Large models can run in zero-shot/few-shot mode.

== Positional Encoding
- Absolute Positional Encoding
  - Adds the position information directly to the embeddings, which affects the values of #qkv simultaneously. The information propagates along the whole Transformer
- Relative Positional Encoding
  - Adds the position information to the attention score and does not affect $V$. It can generalize to sequence lengths unseen during training (train short, test long)
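- The $80%$/$10%$/$10%$ MLM corruption described above can be sketched as follows (an illustrative snippet with a toy vocabulary, added by us; it is not from the original notes):

```python
import random

MASK = "<mask>"
VOCAB = ["cat", "dog", "sun", "sky", "run"]  # toy vocabulary for the sketch

def mlm_corrupt(tokens, p_select=0.15, seed=0):
    """Select ~15% of tokens; replace 80% of those with <mask>,
    10% with a random word, and leave 10% unchanged."""
    rng = random.Random(seed)
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() >= p_select:
            continue
        targets[i] = tok  # the model must predict the original token here
        r = rng.random()
        if r < 0.8:
            out[i] = MASK
        elif r < 0.9:
            out[i] = rng.choice(VOCAB)
        # else: token kept unchanged
    return out, targets
```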
https://github.com/monaqa/typst-easytable
https://raw.githubusercontent.com/monaqa/typst-easytable/master/src/elem.typ
typst
MIT License
/// Sets column widths.
#let cwidth(..columns) = {
  ((
    _kind: "easytable.set_column",
    length: columns.pos().len(),
    value: columns.pos(),
  ),)
}

/// Sets column styles.
#let cstyle(..columns) = {
  let layout_func = columns.pos().map((e) => {
    if type(e) == "alignment" {
      return _content => align(e, _content)
    } else {
      return e
    }
  })
  ((
    _kind: "easytable.set_layout",
    length: layout_func.len(),
    layout: layout_func,
  ),)
}

/// Adds a table row.
#let tr(trans: none, trans_by_idx: none, cell_style: none, ..columns) = {
  let cell_trans = if trans != none {
    (x: none, y: none, c) => trans(c)
  } else if trans_by_idx != none {
    trans_by_idx
  } else {
    none
  }
  ((
    _kind: "easytable.push_row",
    length: columns.pos().len(),
    data: columns.pos(),
    cell_style: cell_style,
    cell_trans: cell_trans,
  ),)
}

/// Adds a horizontal line.
#let hline(..args) = ((_kind: "easytable.push_hline", args: args),)

/// Adds a vertical line.
#let vline(..args) = ((_kind: "easytable.push_vline", args: args),)

/// Adds a table header row (bold by default).
#let th(
  trans: text.with(weight: 700),
  trans_by_idx: none,
  cell_style: none,
  ..columns,
) = (..tr(
  trans: trans,
  trans_by_idx: trans_by_idx,
  cell_style: cell_style,
  ..columns,
), ..hline(stroke: 0.5pt, expand: -2pt),)
https://github.com/ShapeLayer/ucpc-solutions__typst
https://raw.githubusercontent.com/ShapeLayer/ucpc-solutions__typst/main/lib/utils/make-prob-meta.typ
typst
Other
#import "/lib/i18n.typ": en-us #let __make-answer-stat(stat, i18n) = { let keys = stat.keys() let builder = () if "submit-count" in keys { builder.push(i18n.submitted_prefix + str(stat.submit-count) + i18n.submitted_postfix) } if "ac-count" in keys { builder.push(i18n.passed_prefix + str(stat.ac-count) + i18n.passed_postfix) } if "ac-ratio" in keys { builder.push(i18n.ac-ratio_prefix + str(stat.ac-ratio) + i18n.ac-ratio_postfix) } if builder.len() == 3 [ - #builder.at(0)\, #builder.at(1)\ #builder.at(2) ] else if builder.len() > 0 [ - #builder.join(", ") ] builder = () if "first-solver" in keys { builder.push(i18n.first-solver_prefix + stat.first-solver + i18n.first-solver_postfix) } if "first-solve-time" in keys { builder.push(i18n.first_solved_at_prefix + str(stat.first-solve-time) + i18n.first_solved_at_postfix) } if builder.len() > 0 [ - #builder.join(", ") ] } #let make-prob-meta( tags: (), difficulty: none, authors: (), stat-open: ( submit-count: -1, ac-count: -1, ac-ratio: -1, first-solver: "", first-solve-time: -1, ), stat-onsite: none, i18n: en-us.make-prob-meta ) = [ // Tags #text(size: .8em)[#tags.map(each => raw("#" + each)).join(", ") \ ] #i18n.difficulty_prefix#difficulty#i18n.difficulty_postfix #align(horizon)[ #if type(authors) == array [ #if authors.len() == 1 [ - #i18n.author: #authors.at(0) ] else [ - #i18n.authors: #authors.join(", ") ] ] else if (type(authors) == content) or (type(authors) == str) [ - #i18n.author: #authors ] #if stat-onsite == none { __make-answer-stat(stat-open, i18n) } else [ #table( columns: (1fr, 1fr), stroke: none, inset: (x: 0pt), [ - #i18n.online-open-contest #__make-answer-stat(stat-open, i18n) ], [ - #i18n.offline-onsite-contest #__make-answer-stat(stat-onsite, i18n) ] ) ] ] ]
https://github.com/ymgyt/techbook
https://raw.githubusercontent.com/ymgyt/techbook/master/programmings/js/typescript/specification/keyof.md
markdown
# keyof

* Takes a type and returns the union of that type's property names as string literal types.

```typescript
interface Person {
  name: string
  age: number
  location: string
}

// SomeNewType ( "name" | "age" | "location" )
type SomeNewType = keyof Person
```
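A common follow-on use of `keyof` (an added sketch; `getProp` is our illustrative name, not from the original note) is to constrain a generic key parameter so that property access is checked by the compiler:

```typescript
// `K extends keyof T` restricts `key` to the property names of T,
// and the return type T[K] is the exact type of the accessed property.
function getProp<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const person = { name: "Ada", age: 36, location: "London" };
const age = getProp(person, "age");   // inferred as number
// getProp(person, "height");         // compile-time error: not a key
```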
https://github.com/htlwienwest/da-vorlage-typst
https://raw.githubusercontent.com/htlwienwest/da-vorlage-typst/main/lib/pages/arbeitsaufteilung.typ
typst
MIT License
#let arbeitsaufteilung(aufteilungen: ()) = [ = Arbeitsaufteilung #{ set heading(outlined: false) set heading(numbering: none) table( columns: (auto, auto), [*Person*], [*Folgende Punkte der Diplomarbeit wurden inklusive aller Unterpunkte von folgenden Personen verfasst:*], ..for (p, a) in aufteilungen { (p, a) } ) } ]
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/unichar/0.1.0/ucd/block-2200.typ
typst
Apache License 2.0
#let data = ( ("FOR ALL", "Sm", 0), ("COMPLEMENT", "Sm", 0), ("PARTIAL DIFFERENTIAL", "Sm", 0), ("THERE EXISTS", "Sm", 0), ("THERE DOES NOT EXIST", "Sm", 0), ("EMPTY SET", "Sm", 0), ("INCREMENT", "Sm", 0), ("NABLA", "Sm", 0), ("ELEMENT OF", "Sm", 0), ("NOT AN ELEMENT OF", "Sm", 0), ("SMALL ELEMENT OF", "Sm", 0), ("CONTAINS AS MEMBER", "Sm", 0), ("DOES NOT CONTAIN AS MEMBER", "Sm", 0), ("SMALL CONTAINS AS MEMBER", "Sm", 0), ("END OF PROOF", "Sm", 0), ("N-ARY PRODUCT", "Sm", 0), ("N-ARY COPRODUCT", "Sm", 0), ("N-ARY SUMMATION", "Sm", 0), ("MINUS SIGN", "Sm", 0), ("MINUS-OR-PLUS SIGN", "Sm", 0), ("DOT PLUS", "Sm", 0), ("DIVISION SLASH", "Sm", 0), ("SET MINUS", "Sm", 0), ("ASTERISK OPERATOR", "Sm", 0), ("RING OPERATOR", "Sm", 0), ("BULLET OPERATOR", "Sm", 0), ("SQUARE ROOT", "Sm", 0), ("CUBE ROOT", "Sm", 0), ("FOURTH ROOT", "Sm", 0), ("PROPORTIONAL TO", "Sm", 0), ("INFINITY", "Sm", 0), ("RIGHT ANGLE", "Sm", 0), ("ANGLE", "Sm", 0), ("MEASURED ANGLE", "Sm", 0), ("SPHERICAL ANGLE", "Sm", 0), ("DIVIDES", "Sm", 0), ("DOES NOT DIVIDE", "Sm", 0), ("PARALLEL TO", "Sm", 0), ("NOT PARALLEL TO", "Sm", 0), ("LOGICAL AND", "Sm", 0), ("LOGICAL OR", "Sm", 0), ("INTERSECTION", "Sm", 0), ("UNION", "Sm", 0), ("INTEGRAL", "Sm", 0), ("DOUBLE INTEGRAL", "Sm", 0), ("TRIPLE INTEGRAL", "Sm", 0), ("CONTOUR INTEGRAL", "Sm", 0), ("SURFACE INTEGRAL", "Sm", 0), ("VOLUME INTEGRAL", "Sm", 0), ("CLOCKWISE INTEGRAL", "Sm", 0), ("CLOCKWISE CONTOUR INTEGRAL", "Sm", 0), ("ANTICLOCKWISE CONTOUR INTEGRAL", "Sm", 0), ("THEREFORE", "Sm", 0), ("BECAUSE", "Sm", 0), ("RATIO", "Sm", 0), ("PROPORTION", "Sm", 0), ("DOT MINUS", "Sm", 0), ("EXCESS", "Sm", 0), ("GEOMETRIC PROPORTION", "Sm", 0), ("HOMOTHETIC", "Sm", 0), ("TILDE OPERATOR", "Sm", 0), ("REVERSED TILDE", "Sm", 0), ("INVERTED LAZY S", "Sm", 0), ("SINE WAVE", "Sm", 0), ("WREATH PRODUCT", "Sm", 0), ("NOT TILDE", "Sm", 0), ("MINUS TILDE", "Sm", 0), ("ASYMPTOTICALLY EQUAL TO", "Sm", 0), ("NOT ASYMPTOTICALLY EQUAL TO", "Sm", 0), ("APPROXIMATELY EQUAL TO", "Sm", 
0), ("APPROXIMATELY BUT NOT ACTUALLY EQUAL TO", "Sm", 0), ("NEITHER APPROXIMATELY NOR ACTUALLY EQUAL TO", "Sm", 0), ("ALMOST EQUAL TO", "Sm", 0), ("NOT ALMOST EQUAL TO", "Sm", 0), ("ALMOST EQUAL OR EQUAL TO", "Sm", 0), ("TRIPLE TILDE", "Sm", 0), ("ALL EQUAL TO", "Sm", 0), ("EQUIVALENT TO", "Sm", 0), ("GEOMETRICALLY EQUIVALENT TO", "Sm", 0), ("DIFFERENCE BETWEEN", "Sm", 0), ("APPROACHES THE LIMIT", "Sm", 0), ("GEOMETRICALLY EQUAL TO", "Sm", 0), ("APPROXIMATELY EQUAL TO OR THE IMAGE OF", "Sm", 0), ("IMAGE OF OR APPROXIMATELY EQUAL TO", "Sm", 0), ("COLON EQUALS", "Sm", 0), ("EQUALS COLON", "Sm", 0), ("RING IN EQUAL TO", "Sm", 0), ("RING EQUAL TO", "Sm", 0), ("CORRESPONDS TO", "Sm", 0), ("ESTIMATES", "Sm", 0), ("EQUIANGULAR TO", "Sm", 0), ("STAR EQUALS", "Sm", 0), ("DELTA EQUAL TO", "Sm", 0), ("EQUAL TO BY DEFINITION", "Sm", 0), ("MEASURED BY", "Sm", 0), ("QUESTIONED EQUAL TO", "Sm", 0), ("NOT EQUAL TO", "Sm", 0), ("IDENTICAL TO", "Sm", 0), ("NOT IDENTICAL TO", "Sm", 0), ("STRICTLY EQUIVALENT TO", "Sm", 0), ("LESS-THAN OR EQUAL TO", "Sm", 0), ("GREATER-THAN OR EQUAL TO", "Sm", 0), ("LESS-THAN OVER EQUAL TO", "Sm", 0), ("GREATER-THAN OVER EQUAL TO", "Sm", 0), ("LESS-THAN BUT NOT EQUAL TO", "Sm", 0), ("GREATER-THAN BUT NOT EQUAL TO", "Sm", 0), ("MUCH LESS-THAN", "Sm", 0), ("MUCH GREATER-THAN", "Sm", 0), ("BETWEEN", "Sm", 0), ("NOT EQUIVALENT TO", "Sm", 0), ("NOT LESS-THAN", "Sm", 0), ("NOT GREATER-THAN", "Sm", 0), ("NEITHER LESS-THAN NOR EQUAL TO", "Sm", 0), ("NEITHER GREATER-THAN NOR EQUAL TO", "Sm", 0), ("LESS-THAN OR EQUIVALENT TO", "Sm", 0), ("GREATER-THAN OR EQUIVALENT TO", "Sm", 0), ("NEITHER LESS-THAN NOR EQUIVALENT TO", "Sm", 0), ("NEITHER GREATER-THAN NOR EQUIVALENT TO", "Sm", 0), ("LESS-THAN OR GREATER-THAN", "Sm", 0), ("GREATER-THAN OR LESS-THAN", "Sm", 0), ("NEITHER LESS-THAN NOR GREATER-THAN", "Sm", 0), ("NEITHER GREATER-THAN NOR LESS-THAN", "Sm", 0), ("PRECEDES", "Sm", 0), ("SUCCEEDS", "Sm", 0), ("PRECEDES OR EQUAL TO", "Sm", 0), ("SUCCEEDS OR EQUAL TO", 
"Sm", 0), ("PRECEDES OR EQUIVALENT TO", "Sm", 0), ("SUCCEEDS OR EQUIVALENT TO", "Sm", 0), ("DOES NOT PRECEDE", "Sm", 0), ("DOES NOT SUCCEED", "Sm", 0), ("SUBSET OF", "Sm", 0), ("SUPERSET OF", "Sm", 0), ("NOT A SUBSET OF", "Sm", 0), ("NOT A SUPERSET OF", "Sm", 0), ("SUBSET OF OR EQUAL TO", "Sm", 0), ("SUPERSET OF OR EQUAL TO", "Sm", 0), ("NEITHER A SUBSET OF NOR EQUAL TO", "Sm", 0), ("NEITHER A SUPERSET OF NOR EQUAL TO", "Sm", 0), ("SUBSET OF WITH NOT EQUAL TO", "Sm", 0), ("SUPERSET OF WITH NOT EQUAL TO", "Sm", 0), ("MULTISET", "Sm", 0), ("MULTISET MULTIPLICATION", "Sm", 0), ("MULTISET UNION", "Sm", 0), ("SQUARE IMAGE OF", "Sm", 0), ("SQUARE ORIGINAL OF", "Sm", 0), ("SQUARE IMAGE OF OR EQUAL TO", "Sm", 0), ("SQUARE ORIGINAL OF OR EQUAL TO", "Sm", 0), ("SQUARE CAP", "Sm", 0), ("SQUARE CUP", "Sm", 0), ("CIRCLED PLUS", "Sm", 0), ("CIRCLED MINUS", "Sm", 0), ("CIRCLED TIMES", "Sm", 0), ("CIRCLED DIVISION SLASH", "Sm", 0), ("CIRCLED DOT OPERATOR", "Sm", 0), ("CIRCLED RING OPERATOR", "Sm", 0), ("CIRCLED ASTERISK OPERATOR", "Sm", 0), ("CIRCLED EQUALS", "Sm", 0), ("CIRCLED DASH", "Sm", 0), ("SQUARED PLUS", "Sm", 0), ("SQUARED MINUS", "Sm", 0), ("SQUARED TIMES", "Sm", 0), ("SQUARED DOT OPERATOR", "Sm", 0), ("RIGHT TACK", "Sm", 0), ("LEFT TACK", "Sm", 0), ("DOWN TACK", "Sm", 0), ("UP TACK", "Sm", 0), ("ASSERTION", "Sm", 0), ("MODELS", "Sm", 0), ("TRUE", "Sm", 0), ("FORCES", "Sm", 0), ("TRIPLE VERTICAL BAR RIGHT TURNSTILE", "Sm", 0), ("DOUBLE VERTICAL BAR DOUBLE RIGHT TURNSTILE", "Sm", 0), ("DOES NOT PROVE", "Sm", 0), ("NOT TRUE", "Sm", 0), ("DOES NOT FORCE", "Sm", 0), ("NEGATED DOUBLE VERTICAL BAR DOUBLE RIGHT TURNSTILE", "Sm", 0), ("PRECEDES UNDER RELATION", "Sm", 0), ("SUCCEEDS UNDER RELATION", "Sm", 0), ("NORMAL SUBGROUP OF", "Sm", 0), ("CONTAINS AS NORMAL SUBGROUP", "Sm", 0), ("NORMAL SUBGROUP OF OR EQUAL TO", "Sm", 0), ("CONTAINS AS NORMAL SUBGROUP OR EQUAL TO", "Sm", 0), ("ORIGINAL OF", "Sm", 0), ("IMAGE OF", "Sm", 0), ("MULTIMAP", "Sm", 0), ("HERMITIAN CONJUGATE 
MATRIX", "Sm", 0), ("INTERCALATE", "Sm", 0), ("XOR", "Sm", 0), ("NAND", "Sm", 0), ("NOR", "Sm", 0), ("RIGHT ANGLE WITH ARC", "Sm", 0), ("RIGHT TRIANGLE", "Sm", 0), ("N-ARY LOGICAL AND", "Sm", 0), ("N-ARY LOGICAL OR", "Sm", 0), ("N-ARY INTERSECTION", "Sm", 0), ("N-ARY UNION", "Sm", 0), ("DIAMOND OPERATOR", "Sm", 0), ("DOT OPERATOR", "Sm", 0), ("STAR OPERATOR", "Sm", 0), ("DIVISION TIMES", "Sm", 0), ("BOWTIE", "Sm", 0), ("LEFT NORMAL FACTOR SEMIDIRECT PRODUCT", "Sm", 0), ("RIGHT NORMAL FACTOR SEMIDIRECT PRODUCT", "Sm", 0), ("LEFT SEMIDIRECT PRODUCT", "Sm", 0), ("RIGHT SEMIDIRECT PRODUCT", "Sm", 0), ("REVERSED TILDE EQUALS", "Sm", 0), ("CURLY LOGICAL OR", "Sm", 0), ("CURLY LOGICAL AND", "Sm", 0), ("DOUBLE SUBSET", "Sm", 0), ("DOUBLE SUPERSET", "Sm", 0), ("DOUBLE INTERSECTION", "Sm", 0), ("DOUBLE UNION", "Sm", 0), ("PITCHFORK", "Sm", 0), ("EQUAL AND PARALLEL TO", "Sm", 0), ("LESS-THAN WITH DOT", "Sm", 0), ("GREATER-THAN WITH DOT", "Sm", 0), ("VERY MUCH LESS-THAN", "Sm", 0), ("VERY MUCH GREATER-THAN", "Sm", 0), ("LESS-THAN EQUAL TO OR GREATER-THAN", "Sm", 0), ("GREATER-THAN EQUAL TO OR LESS-THAN", "Sm", 0), ("EQUAL TO OR LESS-THAN", "Sm", 0), ("EQUAL TO OR GREATER-THAN", "Sm", 0), ("EQUAL TO OR PRECEDES", "Sm", 0), ("EQUAL TO OR SUCCEEDS", "Sm", 0), ("DOES NOT PRECEDE OR EQUAL", "Sm", 0), ("DOES NOT SUCCEED OR EQUAL", "Sm", 0), ("NOT SQUARE IMAGE OF OR EQUAL TO", "Sm", 0), ("NOT SQUARE ORIGINAL OF OR EQUAL TO", "Sm", 0), ("SQUARE IMAGE OF OR NOT EQUAL TO", "Sm", 0), ("SQUARE ORIGINAL OF OR NOT EQUAL TO", "Sm", 0), ("LESS-THAN BUT NOT EQUIVALENT TO", "Sm", 0), ("GREATER-THAN BUT NOT EQUIVALENT TO", "Sm", 0), ("PRECEDES BUT NOT EQUIVALENT TO", "Sm", 0), ("SUCCEEDS BUT NOT EQUIVALENT TO", "Sm", 0), ("NOT NORMAL SUBGROUP OF", "Sm", 0), ("DOES NOT CONTAIN AS NORMAL SUBGROUP", "Sm", 0), ("NOT NORMAL SUBGROUP OF OR EQUAL TO", "Sm", 0), ("DOES NOT CONTAIN AS NORMAL SUBGROUP OR EQUAL", "Sm", 0), ("VERTICAL ELLIPSIS", "Sm", 0), ("MIDLINE HORIZONTAL ELLIPSIS", "Sm", 0), ("UP RIGHT 
DIAGONAL ELLIPSIS", "Sm", 0), ("DOWN RIGHT DIAGONAL ELLIPSIS", "Sm", 0), ("ELEMENT OF WITH LONG HORIZONTAL STROKE", "Sm", 0), ("ELEMENT OF WITH VERTICAL BAR AT END OF HORIZONTAL STROKE", "Sm", 0), ("SMALL ELEMENT OF WITH VERTICAL BAR AT END OF HORIZONTAL STROKE", "Sm", 0), ("ELEMENT OF WITH DOT ABOVE", "Sm", 0), ("ELEMENT OF WITH OVERBAR", "Sm", 0), ("SMALL ELEMENT OF WITH OVERBAR", "Sm", 0), ("ELEMENT OF WITH UNDERBAR", "Sm", 0), ("ELEMENT OF WITH TWO HORIZONTAL STROKES", "Sm", 0), ("CONTAINS WITH LONG HORIZONTAL STROKE", "Sm", 0), ("CONTAINS WITH VERTICAL BAR AT END OF HORIZONTAL STROKE", "Sm", 0), ("SMALL CONTAINS WITH VERTICAL BAR AT END OF HORIZONTAL STROKE", "Sm", 0), ("CONTAINS WITH OVERBAR", "Sm", 0), ("SMALL CONTAINS WITH OVERBAR", "Sm", 0), ("Z NOTATION BAG MEMBERSHIP", "Sm", 0), )
https://github.com/ofurtumi/formleg
https://raw.githubusercontent.com/ofurtumi/formleg/main/h06/H6.typ
typst
#import "@templates/ass:0.1.1": * #import "@preview/finite:0.3.0" #import "@preview/cetz:0.1.1" #import cetz.draw: set-style #import finite.draw: state, transition, loop #show: doc => template( project: "Homework 6", class: "TÖL301G", doc ) #set heading(numbering: "1.a)") #set enum(numbering: "i.") = Four languages Let $Sigma = {x,y}$ == ${x y x | n > 0}$ Since neither $x$ nor $y$ relies on $n$ we can draw a very simple *DFA*, therefore the language is regular: #align(center, cetz.canvas({ state((0,0), "q0", initial: true) state((2,0), "q1") state((4,0), "q2") state((6,0), "q3", final: true) transition("q0", "q1", label: "x", curve: 0) transition("q1", "q2", label: "y", curve: 0) transition("q2", "q3", label: "x", curve: 0) }) ) == ${x^n y x^n | n>=0}$ This language requires knowledge about the first $x$ group to guarantee the later $x$ group is the same size; let's make a simple *CFG* for this non-regular language: $ S -> x S x | y $ == ${x y^n x | n>=0}$ This language is very clearly regular and we can use a slightly modified *DFA* from *a)* to show this: #align(center, cetz.canvas({ state((0,0), "q0", initial: true) state((3,0), "q1") state((6,0), "q2", final: true) transition("q0", "q1", label: "x", curve: 0) transition("q1", "q1", label: "y",) transition("q1", "q2", label: "x", curve: 0) }) ) == ${(x y x)^n | n>=0}$ This language is regular and we can use a *DFA* to show this: #align(center, cetz.canvas({ state((0,0), "q0", initial: true, final: true) state((3,0), "q1") state((6,0), "q2") transition("q0", "q1", label: "x", curve: 0) transition("q1", "q2", label: "y", curve: 0) transition("q2", "q0", label: (text: "x", dist: -0.33), curve: -1) }) ) #pagebreak() = Stacks on stacks For ease of both reading and writing I'll be using $lambda$ to represent both $a$ and $b$, since I won't ever be using them both in the same transition. 
The transitions will always be some version of $a, (epsilon, epsilon) -> (a, epsilon)$ and never $a, (epsilon, epsilon) -> (b, epsilon)$ #align(center, cetz.canvas({ state((0,0), "q0", initial: true) state((0,5), "q1") state((5,5), "q2") state((10,5), "q3") state((10,0), "q4") transition( "q0", "q1", curve: 0, label: ( text: $epsilon, (epsilon, epsilon) -> (\$,epsilon)$, dist: -0.3, angle: 270deg ) ) loop( "q1", label: ( text: $lambda, (epsilon, epsilon) -> (lambda, epsilon)$, ) ) transition( "q1", "q2", curve: 0, label: ( text: $epsilon, (epsilon, epsilon) -> (epsilon, \$)$, dist: -0.3 ) ) loop( "q2", label: ( text: $epsilon, (lambda, epsilon) -> (epsilon, lambda)$, ) ) transition( "q2", "q3", curve: 0, label: ( text: $epsilon, (\$, epsilon) -> (epsilon, epsilon)$, dist: -0.3 ) ) loop( "q3", label: ( text: $epsilon, (epsilon, lambda) -> (epsilon, epsilon)$, ) ) transition( "q3", "q4", curve: 0, label: ( text: $epsilon, (epsilon, \$) -> (epsilon, epsilon)$, dist: -0.3, angle: 90deg ) ) }) ) The transition $q_1->q_2$ is a _magic_ transition that happens when we have finished the first half of tokens. = Checksum continuation Suppose that *A* is a *CFL*, then there exists a number $p$ where, if $S$ is any string in *A* of length at least $p$, then $S$ may be divided into five pieces $S = u v x y z$ satisfying the following rules: + $u v^i x y^i z in A$ for each $i >= 0$ + $|v y| > 0$ + $|v x y| <= p$ == pump it up (or down) - We let $p$ denote the pumping length - We let $S = 0^p 1^i 1^(p-2i) 1^i 2^p 00$ According to the pumping lemma $S$ splits into $u v x y z$ so that i-iii hold $ u:0^p, v:1^i, x:1^(p-2i), y:1^i, z:2^p 00 $ + We offset the number of $x$'s by the number of both $v$'s and $y$'s so our string holds for each $i >= 0$ + $| v y |$ is $2 > 0$ + We make $|v x y| = i + (p-2i) + i = p$ Now we pump down and set $i = 0$, then $S' = 0^p 1^(p-2) 2^p 00 in.not A$, and since the checksum should now be $10$ the language $A$ is not context free.
https://github.com/chubetho/Bachelor_Thesis
https://raw.githubusercontent.com/chubetho/Bachelor_Thesis/main/chapters/conclusion.typ
typst
= Evaluation <section_evaluation> In this chapter, the results of the experiment are evaluated and compared with those of a monolithic architecture using a single-page application (SPA). The decision to use the monolithic @spa approach for comparison stems from its close alignment with the experiment’s solution, as both rely on client-side composition and routing. Additionally, MULTA MEDIO is currently working on another rewrite project for a lottery platform using the same monolithic @spa version, which already offers several advantages. This consistency provides developers within the organization with a unified perspective. It is also worth noting that the @spa version is essentially a simplified adaptation of the micro frontends version. Following this comparison, the four key aspects affected by adopting micro frontend architecture, as they relate to the first research question, are discussed. == Comparison of Development Cycle The table below outlines the key differences in the development cycle between the micro frontend and monolithic @spa approaches. #{ show table.cell.where(x: 0): strong show table.cell.where(y: 0): strong let flip = c => table.cell(align: horizon, rotate(-90deg, reflow: true)[#c]) table( columns: (auto, 1fr, 1fr), inset: 10pt, align: (top, left), table.header( [], align(center, [Micro frontends]), align(center, [Monolithic SPA]), ), flip[Setup Stage], [ Although the project structure of micro frontends is organized to ensure a clear overview, it remains complex due to the presence of numerous directories. All remotes, the host application, and the UI library must be properly configured to ensure seamless integration. Managing dependencies between micro frontends introduces additional complexities. ], [ The directories `apps`, `packages`, and `tools` mentioned in @figure_project_structure are redundant. Instead, a single `app` directory is used to store the entire frontend. 
Both dependency management and configurations are simplified, as the application utilizes a single `package.json` file for all dependencies and a single `vite.config.ts` file for all configurations. ], flip[Implementation Stage], table.cell(colspan: 2)[ As discussed in @section_implementation, Module Federation with client-side composition offers a development experience similar to that of the @spa approach, leading to comparable implementations for both the host and remote applications in each method. However, the routing challenges and the integration of the UI library encountered in the micro frontend version are significantly easier to manage in the @spa version. ], flip[Build Stage], [ The UI library must be built first before the host and remote applications can be successfully bundled. The process of building the server application remains identical in both approaches. ], [ No special considerations are necessary, as the entire frontend can be built in a single process. ], flip[Testing Stage], table.cell(colspan: 2)[ Unit testing and end-to-end testing are the same for both approaches. Unit testing occurs at the component level, while end-to-end testing primarily simulates user interactions in a real browser environment. Both types of testing focus on verifying functionality and user workflows, rather than on how components are composed into the view. ], flip[Deployment Stage], [ Deployment with micro frontends is more complex, as it necessitates the creation of key configuration files depending on the number of applications involved. However, the more effort invested during this stage, the less work will be required in the @ci/@cd process. ], [ The configuration files can be written once and require minimal modifications thereafter, as there will consistently be two applications running in parallel: the frontend and the server application. ], flip[@ci Stage], [ The number of configuration files for @ci increases with the number of micro frontends. 
However, only the pipeline responsible for a specific micro frontend will be triggered when changes are made to that micro frontend. ], [ The @ci steps are mostly identical in both approaches. In the @spa approach, having a single configuration file for @ci results in a shorter pipeline runtime. However, any modification to any part of the application will trigger the pipeline for the entire application. ], flip[@cd Stage], table.cell(colspan: 2)[ The @cd pipeline is identical for both approaches, with each configured to run after code changes are merged into the main branch, triggering redeployment on the virtual private server. ], ) } Overall, the @spa approach is simpler, with fewer directories, configurations, and a single build process. In contrast, micro frontends add complexity in setup, build, and deployment, requiring more configuration. == Impact on Flexibility, Maintainability, Scalability, and Performance Based on the results of the experiment, this section will discuss how these four aspects of the implemented web application are impacted by adopting micro frontend architecture. === Flexibility The `home` and `lotto` micro frontends offer the flexibility to use different dependencies during development. For example, the `home` micro frontend can use `zod` for schema validation, while the `lotto` micro frontend can utilize `valibot`. Furthermore, if a new frontend framework is chosen in the future to replace Vue.js, it can be applied incrementally in the `home` application, while the `lotto` application continues using Vue.js, ensuring that the overall functionality of the application remains unaffected during the transition. Additionally, new features or patches can be quickly applied or rolled back at runtime without disrupting the other micro frontend. 
If, during an update, the updated micro frontend becomes temporarily unavailable, the host application will detect the issue and navigate the user to an error page, providing clear and appropriate information about the disruption. However, this flexibility introduces complexity in maintaining unified functionality across micro frontends, as different libraries may not behave consistently. Additionally, if the choice of a UI library is not carefully planned from the planning stage, the application may suffer from inconsistent styling. These drawbacks can result in a poor user experience. === Maintainability The frontend is divided into `home` and `lotto` modules, with each module located in its directory. This modular structure simplifies the management and maintenance of the overall system by allowing developers to focus on specific micro frontends without needing to understand the entire application. It also facilitates the isolation of bugs within a specific micro frontend, making them easier to identify and resolve, while minimizing the risk of introducing unintended errors or inconsistencies caused by changes made by other teams. While this separation offers advantages, it can also lead to redundancy in some areas. For example, shared logic, such as the fetch function for retrieving gaming history quotes, may be replicated across different micro frontends, conflicting with the DRY (Don't Repeat Yourself) principle. Moreover, ensuring consistency becomes more difficult, as refactoring or updating shared logic may not be uniformly applied across all micro frontends. This inconsistency, coupled with the complexity of managing multiple build processes and deployment pipelines, tends to increase maintenance efforts over time. === Scalability Due to the isolation provided by the architecture, two teams can work simultaneously on the `home` and `lotto` micro frontends without impacting each other's progress. 
Additionally, if the `lotto` module experiences a spike in traffic, it can be scaled independently, optimizing resource usage by ensuring that only the necessary parts of the system receive additional resources. Independent scaling can introduce increased infrastructure overhead. Each micro frontend may require its own hosting and monitoring, which adds complexity and raises operational costs. Additionally, common backend services, such as databases, may become bottlenecks if not properly optimized to handle the demands of independently scaled micro frontends. This can result in performance issues that impact the entire application, despite the modularity of the frontend components. === Performance The Module Federation approach allows micro frontends to be loaded on demand, meaning that `lotto` is only loaded when the user navigates to it, reducing initial load times for the `home` micro frontend. However, the integration of multiple micro frontends at runtime, along with potential duplications in each micro frontend, introduces latency because the bundle size increases. As a result, performance can be impacted, particularly when handling larger bundles during runtime. To better evaluate performance, the open-source tool Sitespeed.io is used to analyze website speed based on performance best practices @_SiteSpeedIO_. The table below compares the micro frontends and @spa versions, with metrics gathered from the homepage of the application using the Chrome browser over five iterations. The results are color-coded: blue for informational data, green for passing, yellow for warnings, and red for poor performance. 
#{ let tred = c => text(weight: "bold", fill: red, c) let tgreen = c => text(weight: "bold", fill: green, c) let tblue = c => text(weight: "bold", fill: blue, c) let tyellow = c => text(weight: "bold", fill: rgb(225, 164, 0), c) show table.cell.where(y: 0): strong show table.cell.where(x: 0): strong figure( caption: [Comparison between the micro frontends and monolithic SPA versions.], table( columns: (1.5fr, 1fr, 1fr), inset: 10pt, align: left, table.header([], [Micro frontends], [Monolithic SPA]), [First Contentful Paint], tgreen[60 ms], tgreen[44 ms], [Fully Loaded], tblue[80 ms], tblue[58 ms], [Page Load Time], tblue[7 ms], tblue[16 ms], [Largest Contentful Paint], tgreen[185 ms], tgreen[168 ms], [Total Requests], tgreen[26], tgreen[15], [JavaScript Requests], tblue[15], tblue[6], [CSS Requests], tblue[3], tblue[1], [HTML Transfer Size], tblue[563 B], tblue[458 B], [JavaScript Transfer Size], tred[299.6 KB], tyellow[143.3 KB], [CSS Transfer Size], tblue[24.2 KB], tblue[16.8 KB], [Total Transfer Size], tgreen[333.3 KB], tgreen[167.8 KB], ), ) } #v(1em) An important metric to consider is the JavaScript Transfer Size, which accounts for approximately 85-90% of the Total Transfer Size. In the micro frontends implementation, this transfer size is nearly double that of the @spa version, leading to longer page load times. The primary reason for this increase is the requirement for the host application to fetch the `remoteEntry.js` files from its remote modules. These entry files play a crucial role in Module Federation, containing essential information about the remote module, such as its name and the components it exposes @_ModuleFederation_. This additional overhead will slow down the initial load, as the host must retrieve and process these files to properly display the micro frontends and manage their interactions. 
== Limitations Due to the limited scope of the experiment, the implemented application is relatively small, making it difficult to fully examine the advantages of micro frontend architecture for larger, more complex applications. Additionally, the experiment focused on a single implementation approach, leaving several key aspects unexplored. For instance, the potential benefits of using native Web Components instead of Module Federation, as well as the impact of integrating Module Federation with server-side composition, were not examined. These alternatives could offer valuable insights into how micro frontends might perform in different scenarios. #pagebreak(weak: true)
https://github.com/LDemetrios/Typst4k
https://raw.githubusercontent.com/LDemetrios/Typst4k/master/src/test/resources/suite/layout/grid/cell.typ
typst
// Test basic styling using the grid.cell element. --- grid-cell-override --- // Cell override #grid( align: left, fill: red, stroke: blue, inset: 5pt, columns: 2, [AAAAA], [BBBBB], [A], [B], grid.cell(align: right)[C], [D], align(right)[E], [F], align(horizon)[G], [A\ A\ A], grid.cell(align: horizon)[G2], [A\ A\ A], grid.cell(inset: 0pt)[I], [F], [H], grid.cell(fill: blue)[J] ) --- grid-cell-show --- // Cell show rule #show grid.cell: it => [Zz] #grid( align: left, fill: red, stroke: blue, inset: 5pt, columns: 2, [AAAAA], [BBBBB], [A], [B], grid.cell(align: right)[C], [D], align(right)[E], [F], align(horizon)[G], [A\ A\ A] ) --- grid-cell-show-and-override --- #show grid.cell: it => (it.align, it.fill) #grid( align: left, row-gutter: 5pt, [A], grid.cell(align: right)[B], grid.cell(fill: aqua)[B], ) --- grid-cell-set --- // Cell set rules #set grid.cell(align: center) #show grid.cell: it => (it.align, it.fill, it.inset) #set grid.cell(inset: 20pt) #grid( align: left, row-gutter: 5pt, [A], grid.cell(align: right)[B], grid.cell(fill: aqua)[B], ) --- grid-cell-folding --- // Test folding per-cell properties (align and inset) #grid( columns: (1fr, 1fr), rows: (2.5em, auto), align: right, inset: 5pt, fill: (x, y) => (green, aqua).at(calc.rem(x + y, 2)), [Top], grid.cell(align: bottom)[Bot], grid.cell(inset: (bottom: 0pt))[Bot], grid.cell(inset: (bottom: 0pt))[Bot] ) --- grid-cell-align-override --- // Test overriding outside alignment #set align(bottom + right) #grid( columns: (1fr, 1fr), rows: 2em, align: auto, fill: green, [BR], [BR], grid.cell(align: left, fill: aqua)[BL], grid.cell(align: top, fill: red.lighten(50%))[TR] ) --- grid-cell-various-overrides --- #grid( columns: 2, fill: red, align: left, inset: 5pt, [ABC], [ABC], grid.cell(fill: blue)[C], [D], grid.cell(align: center)[E], [F], [G], grid.cell(inset: 0pt)[H] ) --- grid-cell-show-emph --- #{ show grid.cell: emph grid( columns: 2, gutter: 3pt, [Hello], [World], [Sweet], [Italics] ) } --- 
grid-cell-show-based-on-position --- // Style based on position #{ show grid.cell: it => { if it.y == 0 { strong(it) } else if it.x == 1 { emph(it) } else { it } } grid( columns: 3, gutter: 3pt, [Name], [Age], [Info], [John], [52], [Nice], [Mary], [50], [Cool], [Jake], [49], [Epic] ) } --- table-cell-in-grid --- // Error: 7-19 cannot use `table.cell` as a grid cell // Hint: 7-19 use `grid.cell` instead #grid(table.cell[])
https://github.com/thomasschuiki/thomasschuiki
https://raw.githubusercontent.com/thomasschuiki/thomasschuiki/main/cv/lib-impl.typ
typst
#let fa-icon( /// The name of the icon. /// /// This can be used with the ligature feature or the unicode of the glyph. name, /// Whether the icon is solid or not. solid: false, ..args ) = { text( font: ( "Font Awesome 6 Free" + if solid { " Solid" }, "Font Awesome 6 Brands", ), weight: if solid { 900 } else { 400 }, name, ..args ) }
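A hypothetical usage sketch of `fa-icon` (the icon names below are illustrative, and assume the Font Awesome 6 Free/Brands fonts are installed and visible to the compiler):

```typst
// Illustrative only — names resolve to glyphs via font ligatures.
#fa-icon("github")                // brand glyph, regular weight 400
#fa-icon("heart", solid: true)    // solid: true selects the Solid font and weight 900
#fa-icon("\u{f015}", size: 14pt)  // by unicode; extra args are forwarded to text()
```

Because the function spreads `..args` into `text()`, any `text` parameter (such as `size` or `fill`) can be passed through.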
https://github.com/Lucas-Wye/tech-note
https://raw.githubusercontent.com/Lucas-Wye/tech-note/main/src/Zotero.typ
typst
= Zotero #label("zotero") Zotero is a free, easy-to-use tool to help you collect, organize, cite, and share research. == Tips #align(center)[#table( columns: 2, align: (col, row) => (auto, auto).at(col), inset: 6pt, [Requirements], [Operations], [View which collections an item belongs to], [Select the item and hold Ctrl/Option/Alt; the folders containing it will be highlighted in yellow], [Move items between collections], [Select the items to move, hold the shortcut for your OS (macOS: `Cmd`, Windows/Linux: `Shift`), and drag them to the target collection], [Quickly view recently added items], [Right-click #strong[My Library], choose #strong[New Saved Search], and configure the search criteria], ) ] == Plugin === #link("https://retorque.re/zotero-better-bibtex/")[Better Bibtex] - #link("https://retorque.re/zotero-better-bibtex/citation-keys/")[Set the citation key format] In `Zotero Preferences` -\> `Better Bibtex` -\> `Citation Keys`, change the `citation key format` to ``` [auth:lower][year][veryshorttitle:lower] ``` === #link("http://zotfile.com/")[ZotFile] - Change the attachment naming format: in `Zotero Tools` -\> `ZotFile Preferences` -\> `Renaming`, set the format to `{%y_}{%t_}{%a}` === #link("https://github.com/beloglazov/zotero-scholar-citations")[Zotero Scholar Citations] == More #link("https://www.zotero.org/")[Zotero Website]
https://github.com/Blezz-tech/math-typst
https://raw.githubusercontent.com/Blezz-tech/math-typst/main/test/template.typ
typst
#import "/lib/my.typ": * === Task #task("Task") // #include "/Картинки/0000-0000.typ" #answer("Solution") Answer: $$
https://github.com/adelhult/typst-hs-test-packages
https://raw.githubusercontent.com/adelhult/typst-hs-test-packages/main/README.md
markdown
MIT License
# Clone the Typst packages repo ``` git submodule update --init --recursive ``` # Run the test-suite ```sh cabal test --test-show-details=streaming ``` # Results Running the `typst-hs` parser on all `.typ` files in the packages repo results in: ``` Cases: 1246 Tried: 1246 Errors: 0 Failures: 76 ``` See `test/counter-examples` for shrunken tests that work in the Rust Typst compiler but not in the Haskell implementation.
https://github.com/JakMobius/courses
https://raw.githubusercontent.com/JakMobius/courses/main/mipt-os-basic-2024/sem06/utils.typ
typst
#let cell-color(base-color) = { if base-color == none { base-color = blue } let background-color = color.mix((base-color, 20%), (white, 80%)) let stroke-color = color.mix((base-color, 50%), (black, 50%)) ( base-color: base-color, background-color: background-color, stroke-color: stroke-color, ) }
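A hypothetical usage sketch of the palette returned above (the `box` styling here is an assumption for illustration, not taken from the repo):

```typst
// cell-color returns a dictionary; pick the derived shades out of it.
#let palette = cell-color(red)
#box(
  fill: palette.background-color,     // 20% red mixed with 80% white
  stroke: 1pt + palette.stroke-color, // 50% red mixed with 50% black
  inset: 6pt,
)[Red-tinted cell]

// Passing none falls back to the default blue palette.
#let fallback = cell-color(none)
```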
https://github.com/katamyra/Notes
https://raw.githubusercontent.com/katamyra/Notes/main/Compiled%20School%20Notes/CS3001/Sections/Section4.typ
typst
#import "../../../template.typ": * #set page( header: align(right)[ <NAME> ] ) #align(center)[ = Section 3 Guide ] #set text( font: "New Computer Modern", size: 11pt ) #set heading( numbering: "1." ) = Quote #blockquote[ "To possess a virtue is to be a certain sort of person with a certain complex mindset." ] The quote underscores the idea that virtues are not superficial qualities that can be acquired or discarded at will, but rather integral aspects of a person's identity. *It suggests that virtues shape how individuals perceive and respond to the world around them, influencing their thoughts, feelings, desires, and choices in a consistent and integrated manner.* Furthermore, the notion of a "complex mindset" highlights the multifaceted nature of virtues, encompassing a broad range of considerations and reasons for action. Virtuous individuals are depicted as possessing a deep understanding of the principles underlying their virtues and applying them across various domains of life. = Hate Speech == For Banning Hate Speech + Protection of Vulnerable Groups: Hate speech can target and marginalize vulnerable groups such as ethnic minorities, religious communities, LGBTQ+ individuals, and others. Making it illegal helps to protect these groups from discrimination, harassment, and violence. + Promotion of Social Cohesion: Prohibiting hate speech can foster a more inclusive and cohesive society by discouraging the spread of divisive and harmful ideologies that seek to divide communities along lines of race, religion, ethnicity, or other characteristics. + Prevention of Incitement to Violence: Hate speech often contains inflammatory rhetoric that can incite violence or discrimination against targeted groups. Banning hate speech can help prevent such incitement and mitigate the risk of hate crimes. 
+ Preservation of Human Dignity: Everyone has the right to dignity and respect, and hate speech undermines these fundamental values by dehumanizing and demeaning individuals based on their identity. Making hate speech illegal reinforces the principle of treating all individuals with dignity and equality. == Against Banning Hate Speech + Protection of Free Speech: Freedom of speech is a foundational principle in many democratic societies, including the United States. Allowing hate speech to be legal ensures that individuals have the right to express their opinions and beliefs, even if they are unpopular or offensive to others. + Avoidance of Slippery Slope: Prohibiting hate speech could set a precedent for restricting other forms of expression deemed offensive or controversial. Some argue that allowing hate speech to remain legal protects against the erosion of free speech rights more broadly. + Facilitation of Debate and Discourse: Permitting hate speech, within certain limits, can facilitate open debate and discussion on contentious issues. It allows individuals to challenge and criticize prevailing beliefs, ideologies, and social norms, which can ultimately lead to greater understanding and progress. + Empowerment of Counter-Speech: Instead of banning hate speech outright, some argue for promoting counter-speech and robust societal responses to hateful rhetoric. This approach empowers individuals and communities to challenge hateful ideas through education, dialogue, and advocacy, rather than relying on legal restrictions. = Term Proposal Bring up the issue of how social platforms such as Twitter raise many different moral issues, such as AI algorithms that amplify hate speech or allow hate speech to roam free, and the idea that, as technological artifacts, they can cause a lot of social good yet harm at the same time. 
So I would focus on how, in the future, we can have laws or proposals that balance the power of these social platforms with ensuring they stay ethical. + Whether and how companies should moderate hate speech + Hate speech being amplified and radicalized by AI algorithms + AI algorithms filtering out people's opinions
https://github.com/olligobber/Matrixst
https://raw.githubusercontent.com/olligobber/Matrixst/master/matrix.typ
typst
// verifies a matrix is valid and returns (height, width)
#let dimension(m) = {
  // check input is valid types
  if type(m) != array {
    panic("matrix is not valid: not an array")
  }
  for row in m {
    if type(row) != array {
      panic("matrix is not valid: rows are not arrays")
    }
    for val in row {
      if type(val) != int and type(val) != float {
        panic("matrix is not valid: values are not numbers")
      }
    }
  }
  // get input dimensions
  let numrows = m.len()
  if numrows == 0 {
    panic("matrix is not valid: must have at least one row")
  }
  let numcols = m.at(0).len()
  // verify all rows are same length
  for row in m {
    if row.len() != numcols {
      panic("matrix is not valid: rows are different lengths")
    }
  }
  if numcols == 0 {
    panic("matrix is not valid: must have at least one column")
  }
  return (numrows, numcols)
}

// show a matrix in a math environment
#let render(m) = {
  // check input is valid
  let _ = dimension(m)
  return math.mat(..m)
}

// multiply one or more matrices in the given order
#let multiply(..ms) = {
  if ms.named() != (:) {
    panic("multiply does not take named arguments")
  }
  let ms = ms.pos()
  if ms.len() < 1 {
    panic("cannot multiply zero matrices: unclear dimensions of result")
  }
  // A single matrix is its own product
  if ms.len() == 1 {
    return ms.at(0)
  }
  // Deal with more than two inputs by iteration
  if ms.len() > 2 {
    let result = ms.at(0)
    for i in range(1, ms.len()) {
      result = multiply(result, ms.at(i))
    }
    return result
  }
  // Deal with exactly two inputs by direct computation
  let (m, n) = ms
  // check input is valid and get dimensions
  let (firstrows, firstcols) = dimension(m)
  let (secondrows, secondcols) = dimension(n)
  // check dimensions match
  if firstcols != secondrows {
    panic("cannot multiply matrices: mismatched dimensions")
  }
  // build result
  let resultrows = firstrows
  let resultcols = secondcols
  let interdim = firstcols
  let result = ()
  for i in range(resultrows) {
    result.push(())
    for j in range(resultcols) {
      let sum = 0
      for k in range(interdim) {
        sum += m.at(i).at(k) * n.at(k).at(j)
      }
      result.at(-1).push(sum)
    }
  }
  if (resultrows, resultcols) != dimension(result) {
    panic("error when multiplying matrices, result has wrong size")
  }
  return result
}

// get the identity matrix of a given size
#let identity(n) = {
  if int(n) != n {
    panic("error when generating identity matrix, size must be an integer")
  }
  if n < 1 {
    panic("error when generating identity matrix, size must be positive")
  }
  let result = ()
  for i in range(n) {
    result.push(())
    for j in range(n) {
      if i == j {
        result.at(-1).push(1)
      } else {
        result.at(-1).push(0)
      }
    }
  }
  if (n, n) != dimension(result) {
    panic("error when generating identity matrix, result has wrong size")
  }
  return result
}

// Get a column vector of length n with its ith component set to 1 and the rest 0
#let column_basis(n, i) = {
  if int(n) != n {
    panic("error when generating column basis, size is not an integer")
  }
  if n < 1 {
    panic("error when generating column basis, size must be positive")
  }
  if int(i) != i {
    panic("error when generating column basis, index is not an integer")
  }
  if i < 1 or i > n {
    panic("error when generating column basis, index must be between 1 and n")
  }
  let result = ()
  for j in range(1, n + 1) {
    if i == j {
      result.push((1,))
    } else {
      result.push((0,))
    }
  }
  if (n, 1) != dimension(result) {
    panic("error when generating column basis, result has wrong size")
  }
  return result
}

// Get a row vector of length n with its ith component set to 1 and the rest 0
#let row_basis(n, i) = {
  if int(n) != n {
    panic("error when generating row basis, size is not an integer")
  }
  if n < 1 {
    panic("error when generating row basis, size must be positive")
  }
  if int(i) != i {
    panic("error when generating row basis, index is not an integer")
  }
  if i < 1 or i > n {
    panic("error when generating row basis, index must be between 1 and n")
  }
  let row = ()
  for j in range(1, n + 1) {
    if i == j {
      row.push(1)
    } else {
      row.push(0)
    }
  }
  let result = (row,)
  if (1, n) != dimension(result) {
    panic("error when generating row basis, result has wrong size")
  }
  return result
}

// Transpose a matrix
#let transpose(m) = {
  let (in_rows, in_cols) = dimension(m)
  let out_rows = in_cols
  let out_cols = in_rows
  let result = ()
  for i in range(out_rows) {
    result.push(())
    for j in range(out_cols) {
      result.at(-1).push(m.at(j).at(i))
    }
  }
  if dimension(result) != (out_rows, out_cols) {
    panic("error when transposing matrix, result has wrong size")
  }
  return result
}

// Get the minor of a matrix by deleting row i and column j
#let minor(m, i, j) = {
  let (in_rows, in_cols) = dimension(m)
  if int(i) != i {
    panic("error when getting minor of matrix, row index must be an integer")
  }
  if int(j) != j {
    panic("error when getting minor of matrix, column index must be an integer")
  }
  if i < 1 or i > in_rows {
    panic("error when getting minor of matrix, row index must be between 1 and height of matrix")
  }
  if j < 1 or j > in_cols {
    panic("error when getting minor of matrix, column index must be between 1 and width of matrix")
  }
  if in_rows == 1 or in_cols == 1 {
    panic("error when getting minor of matrix, matrix must be at least 2x2 to get non-empty minor")
  }
  let out_rows = in_rows - 1
  let out_cols = in_cols - 1
  let result = ()
  for k in range(in_rows) {
    if k == i - 1 { continue }
    result.push(())
    for l in range(in_cols) {
      if l == j - 1 { continue }
      result.at(-1).push(m.at(k).at(l))
    }
  }
  if dimension(result) != (out_rows, out_cols) {
    panic("error when getting minor of matrix, result has wrong size")
  }
  return result
}

// Get the determinant of a matrix by cofactor expansion along the first row
#let determinant(m) = {
  let (rows, cols) = dimension(m)
  if rows != cols {
    panic("error when calculating determinant, matrix must be square")
  }
  let n = rows
  if n == 1 {
    return m.at(0).at(0)
  }
  let det = 0
  for i in range(n) {
    det += calc.pow(-1, i) * m.at(0).at(i) * determinant(minor(m, 1, i + 1))
  }
  return det
}

// Get the inverse of a matrix via the adjugate
#let invert(m) = {
  let (rows, cols) = dimension(m)
  if rows != cols {
    panic("error inverting matrix: matrix is not square")
  }
  let n = rows
  let det = determinant(m)
  if det == 0 {
    panic("error inverting matrix: matrix with zero determinant has no inverse")
  }
  let mt = transpose(m)
  let result = ()
  for i in range(1, n + 1) {
    result.push(())
    for j in range(1, n + 1) {
      result.at(-1).push(calc.pow(-1, i + j) * determinant(minor(mt, i, j)) / det)
    }
  }
  if dimension(result) != (n, n) {
    panic("error inverting matrix: result is wrong size")
  }
  return result
}

// raise a matrix to a given power by repeated squaring
#let power(m, k) = {
  if int(k) != k {
    panic("power must be integer")
  }
  if k < 0 {
    return invert(power(m, -k))
  }
  let (rows, cols) = dimension(m)
  if rows != cols {
    panic("cannot raise matrix to power if it is not square")
  }
  let n = rows
  if k == 0 {
    return identity(n)
  }
  if k == 1 {
    return m
  }
  if int(k / 2) * 2 == k {
    let x = power(m, k / 2)
    return multiply(x, x)
  } else {
    let x = power(m, k - 1)
    return multiply(x, m)
  }
}

// Build a column vector from the given numbers
#let column_vector(..l) = {
  if l.named() != (:) {
    panic("column vector does not take named arguments")
  }
  let l = l.pos()
  if l.len() == 0 {
    panic("cannot make column vector: input array has no elements")
  }
  for x in l {
    if type(x) != int and type(x) != float {
      panic("cannot make column vector: elements must all be numbers")
    }
  }
  return l.map(x => (x,))
}

// Build a row vector from the given numbers
#let row_vector(..l) = {
  if l.named() != (:) {
    panic("row vector does not take named arguments")
  }
  let l = l.pos()
  if l.len() == 0 {
    panic("cannot make row vector: input has no elements")
  }
  for x in l {
    if type(x) != int and type(x) != float {
      panic("cannot make row vector: elements must all be numbers")
    }
  }
  return (l,)
}

// Build a diagonal matrix from the given numbers
#let diagonal(..l) = {
  if l.named() != (:) {
    panic("diagonal matrix does not take named arguments")
  }
  let l = l.pos()
  if l.len() == 0 {
    panic("cannot make diagonal matrix: input has no elements")
  }
  for x in l {
    if type(x) != int and type(x) != float {
      panic("cannot make diagonal matrix: elements must all be numbers")
    }
  }
  let n = l.len()
  let result = ()
  for i in range(n) {
    result.push(())
    for j in range(n) {
      if i == j {
        result.at(-1).push(l.at(i))
      } else {
        result.at(-1).push(0)
      }
    }
  }
  return result
}

// Concatenate matrices side by side
#let horizontal_cat(..ms) = {
  if ms.named() != (:) {
    panic("horizontal concatenation does not take named arguments")
  }
  let ms = ms.pos()
  if ms.len() < 1 {
    panic("cannot concatenate zero matrices: matrices must have positive size")
  }
  if ms.len() == 1 {
    return ms.at(0)
  }
  if ms.len() > 2 {
    let result = ms.at(0)
    for i in range(1, ms.len()) {
      result = horizontal_cat(result, ms.at(i))
    }
    return result
  }
  let (m, n) = ms
  let (m_rows, m_cols) = dimension(m)
  let (n_rows, n_cols) = dimension(n)
  if m_rows != n_rows {
    panic("cannot concatenate matrices horizontally, mismatched number of rows")
  }
  let rows = m_rows
  let cols = m_cols + n_cols
  let result = ()
  for i in range(rows) {
    result.push(m.at(i) + n.at(i))
  }
  return result
}

// Stack matrices on top of each other
#let vertical_cat(..ms) = {
  if ms.named() != (:) {
    panic("vertical concatenation does not take named arguments")
  }
  let ms = ms.pos()
  if ms.len() < 1 {
    panic("cannot concatenate zero matrices: matrices must have positive size")
  }
  if ms.len() == 1 {
    return ms.at(0)
  }
  if ms.len() > 2 {
    let result = ms.at(0)
    for i in range(1, ms.len()) {
      result = vertical_cat(result, ms.at(i))
    }
    return result
  }
  let (m, n) = ms
  let (m_rows, m_cols) = dimension(m)
  let (n_rows, n_cols) = dimension(n)
  if m_cols != n_cols {
    panic("cannot concatenate matrices vertically, mismatched number of columns")
  }
  let rows = m_rows + n_rows
  let cols = m_cols
  let result = m + n
  return result
}

// Typeset a product of matrices together with its result
#let show_multiply(..ms) = $ #ms.pos().map(render).join() = #render(multiply(..ms)) $

// Typeset a matrix power together with its result
#let show_power(m, i) = $ #render(m)^#i = #render(power(m, i)) $
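Since this file is a Typst library, a short usage sketch may help show how the pieces fit together. This is a minimal sketch under stated assumptions: the import path `"matrix.typ"` and the matrix values are illustrative, not part of the original repository.

```typst
// Hypothetical usage of the matrix library; file path and values are illustrative.
#import "matrix.typ": render, multiply, invert, power, show_multiply

#let a = ((1, 2), (3, 4))
#let b = ((0, 1), (1, 0))

// Product of two 2x2 matrices: row i of a dotted with column j of b,
// giving ((2, 1), (4, 3)).
$ #render(multiply(a, b)) $

// Inverse via the adjugate; a has determinant 1*4 - 2*3 = -2 here.
$ #render(invert(a)) $

// Matrix power computed by repeated squaring.
$ #render(power(a, 3)) $

// Convenience helper that typesets the factors and their product in one equation.
#show_multiply(a, b)
```

Note that `multiply`, `horizontal_cat`, and `vertical_cat` all accept any number of positional matrices and fold over them pairwise, so `multiply(a, b, a)` works the same way as `multiply(multiply(a, b), a)`.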
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/001%20-%20Magic%202013/007_The%20Stonekiller.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "The Stonekiller", set_name: "Magic 2013", story_date: datetime(day: 08, month: 08, year: 2012), author: "<NAME>", doc ) #figure(image("007_The Stonekiller/01.jpg", height: 40%), caption: [], supplement: none, numbering: none) The pebble was a hazy green, like the eye of a dead fish. Like the eyes of the girl in the village yesterday. "Runt," the green-eye girl had said. At first, Lia had no idea who the girl was talking about. And then she saw those fishy eyes looking straight into her own. "Worthless runt." Even girls she thought of as friends joined in to mock Lia: "No one wants you here... your fingers are freakish... Are you stupid #emph[and] lame?" It was true, her fingers were curled like claws. Try as she might, Lia couldn't straighten them completely. Even her mother, who was the village healer, couldn't fix them. Lia had never cared, at least not until yesterday. She balanced the pebble on her knuckle. She didn't even know who the green-eyed girl was. She'd just asked to join their skipping game. #emph[Stupid fingers. Stupid runt.] Lia dug her toes into the sandy riverbank and stared hard at the sparkling river. Today was the first time her parents let her play by herself near the water. Her father and older brother were just over the rise in the field, but she couldn't see them, so she #emph[felt] alone. Lia stared at the pebble. #emph[Make the green-eyed girl disappear] . Instead, there was a popping noise, and the pebble crumbled on her knuckle. Despite herself, Lia smiled as the green dust swirled away in the warm summer breeze. #figure(image("007_The Stonekiller/02.jpg", width: 100%), caption: [], supplement: none, numbering: none) Unfortunately, she didn't have the power to make the girl disappear. She could make rocks crumble, and that was it. Mages were rare in her village, and none of the other children had any casting abilities. Her mother said it was a gift. 
Lia wasn't sure; it wasn't like the world needed any more dust. But her mother insisted she was special. #emph[All great mages started somewhere, and pebbles are as fine a place as any] . The village was about a mile away from Lia's family farm, and her mother was tending people there again today. Usually Lia went along, but not after what happened with the girls yesterday. #emph[I'll never go there again] . The village was a ragtag collection of houses and shops built among the ruins of a castle. Before the Conflux came and remade the landscape, the castle had been one of the jewels of Bant. Lia was too young to remember the hellish years of torment and war that followed the merging of Alara, but she often wished she could have seen the castle in its glory days. All that was left was its high tower and the four corners of its outer wall. The elders said they were blessed because the village was in a region that was much like the old Bant. Only the southern horizon had changed during the upheaval. An unnatural mountain range had clawed its way out of the earth and forever blocked passage to the sea. #figure(image("007_The Stonekiller/03.jpg", width: 100%), caption: [], supplement: none, numbering: none) #emph[Bant had been a vast realm. A beautiful land of floating castles, seas of lush grass, and the bluest skies you can imagine. ] Lia loved the elders' stories about old Bant, especially about brave knights fighting hordes of undead monsters. Suddenly, she resolved not to waste any more time thinking about the green-eyed girl. Instead, she leaped to her feet and searched for sticks, which could serve as Grixis hordes. With a fistful of twigs, Lia recited the story in her mind: #emph[It was a crisp autumn day when Eos Castle was besieged by creatures too horrible to imagine.] #figure(image("007_The Stonekiller/04.jpg", width: 100%), caption: [], supplement: none, numbering: none) Lia mounded up sand for a castle, and then smashed it with her fist.
The stick hordes poured into the courtyard! #emph[They broke through the wall!] #emph[Knight Aran fought valiantly atop his horse!] She was just about to unleash the ballista when something moved in one of the trees on the other side of the river. The sunlight glinting off the water made her squint, but she glimpsed someone perched on a tree branch, hidden among the leaves. Just then, a gust of wind rattled the branches, and Lia saw her watcher. Its body was covered in spotted fur. Pointed ears stuck up from its head, and its face was more animal than human. "Mami!" Lia screamed, even though her mother was far away. When Lia looked back, the creature was gone. #figure(image("007_The Stonekiller/05.jpg", width: 100%), caption: [], supplement: none, numbering: none) Usually, Lia's family ate together and then told stories until bedtime. But two of the village hunters were missing, and her father and brother joined a search party. Lia ate her stew alone on the little stool by the iron stove while her mother comforted the young wife of one of the missing hunters. Lia knew better than to interrupt, although she desperately wanted to tell someone about the thing she'd seen in the tree. "...strange signs on the road to the mountains," her mother was saying to Cele, as the young woman nervously twisted the end of her braid. "They were tracking a herd that way," Cele said, her eyes were brimming with tears. "Maybe they went up into the mid-lands." Her mother noticed Lia was watching them and motioned her over. As the firelight danced across the rafters, her mother slipped an arm around Lia and pulled her close. #figure(image("007_The Stonekiller/06.jpg", width: 100%), caption: [], supplement: none, numbering: none) "Did you have fun by the river today?" her mother asked. "It was such a pretty day." Lia nodded. "Are there cats who walk like people?" Her mother's brow furrowed. "In other lands, Lia. Why do you ask?" 
"I saw a thing that had a cat face but a body like us," Lia said, half-expecting her mother wouldn't believe her. "In the trees along the river." Cele's eyes grew enormous and suddenly her mother was saying how late it was and bundling Lia off to bed. And maybe they could have sweet bread for breakfast? Lia fell asleep and dreamed of girls with pebbles for eyes and floating sand castles. The next morning, her father's eyes were sunken with tiredness. He hugged Lia and wanted to hear her story from the day before. People were trying to act normal, but Lia could tell something was very wrong. Everyone's faces seemed pinched too tight and she heard them whispering about the missing hunters. At mid-day, Lia's mother sent her outside after she promised not to wander beyond the shadow of the cottage. But she grew tired of playing by herself under the eaves and decided to run circles around the house. #emph[The eagle flew over Eos Castle] ...With her arms outstretched like wings, Lia ran around the corner and bashed into something. She stumbled backward and was caught by strong hands. As a dark bag fell across her eyes, she glimpsed a cat-like face. They'd been waiting for her behind the cottage, where there were no windows or doors, and no one to see her disappear. #figure(image("007_The Stonekiller/07.jpg", width: 100%), caption: [], supplement: none, numbering: none) That night, the demon came to the village. He came while the hunters' wives were weeping for their husbands. He came while Lia's parents frantically searched for their daughter. Just as the crimson sun disappeared behind the unnatural mountains, the demon seemed to materialize in the starry sky. His presence immediately afflicted the villagers. They became weak and ill and fell to their knees. A ring of black-clad servants encircled the village, and as the noose closed around them, none had the strength to raise their hands in defense. 
#figure(image("007_The Stonekiller/08.jpg", width: 100%), caption: [], supplement: none, numbering: none) By morning, a sickly wind blew through the open door of Lia's cottage, which was as empty as the rest of the village. The Sculptor surveyed his work with a critical eye. To the uninitiated, it must look like chaos. But to him, every clink and jangle of bone was a perfect harmony to the breathing of his master, Nefarox, who slumbered in the tunnels below the arena. It was early morning, the servants still in their cages. Seventeen minutes until sunrise, and then the supervisors would have them working again. But for a few precious moments, the world was wonderfully peaceful. The hum of locusts in the trees on the ridges that surrounded the worksite was the loudest sound. For once there was no screaming, no wailing, no scraping of bloody meat off bone. #figure(image("007_The Stonekiller/09.jpg", width: 100%), caption: [], supplement: none, numbering: none) His worksite had once been a massive #emph[Matca] arena where Nayan humans fought for sport. Everything in Alara had a former life. Even him. He had crafted bodies from etherium in Esper before he realized his work had been a perverted lie. The Sculptor sighed, angry at his misspent youth. Now, he was an old man, but at least the master had given him a purpose, a reason to keep on living. He took a deep breath and placed one foot on the bottom rung of the ladder. Time to tally the signs. The ritual could only begin when the numbers aligned. If something was one tick off, the project would fail. The Sculptor felt a ripple of panic at the thought of disappointing his master. If he failed, it would be better to cut his own throat than face punishment. The Sculptor counted the rungs as he climbed. Seventy-six steps, and he was at the top. From this vantage point, he could assess how his great work was progressing from a bird's-eye view.
Months ago, the Sculptor had removed the stone benches that encircled the arena—five-hundred sixty-six benches. The servants dug deep pits to hold the carcasses before they were skinned. One-hundred forty-two pits. Most were now brimming with discarded meat. Ninety-two. The number of knife strokes to skin a behemoth. #figure(image("007_The Stonekiller/10.jpg", width: 100%), caption: [], supplement: none, numbering: none) Feeling very kingly, the Sculptor eased himself onto the walkway, which creaked and shifted under his weight. It was constructed from the bones of a hellkite the master had slaughtered in the high peaks. The dragon's beautiful corpse had moved the Sculptor to tears. Indeed, it was the seed of inspiration for the entire project. Etherium had no life inside of it. But bone? Bone was imbued with blood and power—energy he would harness for his master. The servants had carried the skeleton down precarious mountain paths. Once installed, the ribs branched out and down to form a cage around the arena floor. The spine was the walkway on which he now stood. When he bent down and touched the bones, he could still feel the immense power of the hellkite pulsing through the marrow. One-hundred twelve. Number of total hands needed to move the hellkite's corpse. Three fingers lost. #figure(image("007_The Stonekiller/11.jpg", width: 100%), caption: [], supplement: none, numbering: none) The Sculptor enjoyed a gust of wind. It brought a scent of honeysuckle from the golden lowlands. The warm air rattled the bones hanging from ropes beneath his feet. Seven-hundred sixty-nine silk ropes. Seven-hundred sixty-nine bones. Sometimes he wished he were a puppet master and could make those bones dance like marionettes. But that would be the master's pleasure. And all the power derived from the ritual? That was the master's reward. An abrasive metallic screech tore the Sculptor away from his reverie. The twisted metal gate swung open, and new recruits filed into the arena.
Bant humans from the grasslands, probably that miserable little village near the ruined castle. They were bound together with rope, and the Sculptor counted carefully as they passed under the hellkite's spine. Forty-seven bodies. Plus the two hunters they'd caught spying on them earlier. Forty-nine bodies. The Sculptor's breathing quickened. Frantically, he tallied the figures in his mind again. Was it possible? Yes, all the numbers aligned. It was perfect. And after such a long wait, it would be tonight. The Sculptor dug his fingernails into the hellkite's ribs and prayed that the master liked his gift.
https://github.com/kdog3682/2024-typst
https://raw.githubusercontent.com/kdog3682/2024-typst/main/src/examples.typ
typst
#import "@preview/cetz:0.1.2"

#cetz.canvas({
  import cetz.draw: *
  import cetz.tree
  tree.tree(
    spread: 3,
    grow: 4,
    ([root], [A], [B]),
    draw-node: (node, parentnode) => {
      content((), text(4pt, [#node]), padding: .1, name: "content")
      rect("content.top-left", "content.bottom-right")
    }
  )
})

---

#let pinyin(doc) = {
  doc
}

#let zhuyin(doc, ruby, scale: 0.7, gutter: 0.3em, delimiter: none, spacing: none) = {
  if delimiter == none {
    return box(align(
      bottom,
      table(
        columns: (auto,),
        align: (center,),
        inset: 0pt,
        stroke: none,
        row-gutter: gutter,
        text(1em * scale, pinyin(ruby)),
        doc,
      ),
    ));
  }
  let extract-text(thing) = if type(thing) == "string" { thing } else { thing.text };
  let chars = extract-text(doc).split(delimiter);
  let aboves = extract-text(ruby).split(delimiter);
  if chars.len() != aboves.len() {
    panic("count of character and zhuyin is different")
  }
  chars.zip(aboves).map(((c, above)) => [#zhuyin(scale: scale)[#c][#above]]).join(if spacing != none [#h(spacing)])
}

#set text(
  lang: "zh",
  region: "cn",
  font: "Noto Sans CJK HK",
  fallback: false,
)

#let per-char(f) = [#f(delimiter: "|")[汉|语|拼|音][ha4n|yu3|pi1n|yi1n]]
#let per-word(f) = [#f(delimiter: "|")[汉语|拼音][ha4nyu3|pi1nyi1n]]
#let all-in-one(f) = [#f[汉语拼音][ha4nyu3pi1nyi1n]]
#let example(f) = (per-char(f), per-word(f), all-in-one(f))

// arguments of scale and spacing
#let arguments = ((0.5, none), (0.7, none), (0.7, 0.1em), (1.0, none), (1.0, 0.2em))

#table(
  columns: (auto, auto, auto, auto),
  align: (center + horizon, center, center, center),
  [arguments], [per char], [per word], [all in one],
  ..arguments.map(((scale, spacing)) => (
    text(font: "Crimson Pro", size: 0.9em)[#scale,#repr(spacing)],
    ..example(zhuyin.with(scale: scale, spacing: spacing))
  )).flatten(),
)
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/layout/pad_03.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": *
#show: test-page

// Test that padding adding up to 100% does not panic.
#pad(50%)[]
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/051%20-%20March%20of%20the%20Machine/012_Episode%207%3A%20Divine%20Intervention.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "Episode 7: Divine Intervention", set_name: "March of the Machine", story_date: datetime(day: 23, month: 04, year: 2023), author: "<NAME>", doc ) #emph[Tell her not to follow me. None of you should. Not ever.] The first time Karn tried to solve Mirrodin's Phyrexian problem, he'd left word behind that he wasn't to be pursued. It was a conscious decision. The corruption was taking hold of him. Mirrodin fell because of Karn. In his arrogance, he'd shaped the plane; in his hubris, he'd left one of his own creations in charge of it; in his ignorance, he'd tracked Phyrexian oil throughout the plane. If he'd been more present, he might have realized Memnarch had lost his way. If he'd paid attention, he might have seen the oil dripping in his wake. But he wasn't present, and he wasn't paying attention, and Mirrodin's fall crushed anyone who lived within it. #emph[Don't follow me] , he'd told the others—because all of this was his problem, and solving it was going to kill him. And he was right. If it wasn't for Venser sacrificing his spark, Karn would be dead. A brilliant inventor, teller of awful jokes, and general thorn in the sides of most who knew him, Venser was part of the group that'd come to find Karn when he was deep in the throes of phyresis. Koth and Elspeth beat back the enemy's legions long enough to buy Venser time to find him deep within Mirrodin's core. Melira had made Venser immune to corruption, and Venser~ Time and time again Karn swore that he'd do honor to Venser's memory. Venser had seen something in him, something worth dying for. If Karn let himself die he'd be betraying that hope. Which makes his current predicament even more painful. Lashed to a floating piece of slag held aloft by Norn's choir, made of the same material that had arrested his planeswalking in the Caves of Koilos what felt like years ago, he has the perfect view to the end of the Multiverse. 
Most of his body has been taken for scrap. Karn used to wonder why he could feel pain. "People are less likely to hurt something that screams," Urza said. What a shame Phyrexians aren't people. Karn's in agony. He has no choice but to embrace it, reshape it, make it something useful: an anchor that will keep him tied to what remains of this body. So long as he can feel that pain, he is himself. And surrounded by the triumph of his failures, it only feels appropriate. This is the end—of his creations, of the Multiverse, of him. Knowing Norn, it won't come quickly. Between Vorinclex's endless taunts and Jin-Gitaxias's prodding, Karn has no illusions about what's going to be done with him. What #emph[is ] being done to him. The Phyrexians have been taking him apart, piece by piece, and repurposing his silver body. Vorinclex and Jin-Gitaxias have different ideas on how best to do that—but the core idea is the same. And Norn? Norn wants him to suffer. He sees that in her fang-ridden smile. "False Patriarch," she says to Karn, "isn't this a blessed sight? After all your years of stumbling, to see the heights we've climbed without you." Karn does not look at her. Cannot. He has precious little power left within him. With whatever remains, he wants to remember his friends. It is the least he can do for them. Koth sits straight-backed even as Jin-Gitaxias advances toward him. Of the captives, he is the only one to meet Karn's eyes. The others all have their reasons. Chandra is too beaten to kneel upright. And Melira? Melira can't bear to look, either. Though his heart aches, he understands. After all that they'd worked toward, all their time struggling against the impossible, the sacrifices and the dreams, they are all going to die here. Because of his mistakes all that time ago. If he were in her position, he wouldn't want to look, either. His Mirran would-be rescuers have lost limbs; some are already being spliced into new monstrosities. 
When they first arrived, there were dozens of them. Now there's only a handful left—Koth, Melira, Wrenn, Chandra, and perhaps ten or twenty survivors. One by one the rest of them had been dragged off for experiments. Those that remain here Norn has kept for her own special reasons. One of the chorus members extends herself, her spine unfolding like an accordion to accommodate her new growth. She takes Karn's head in hand and holds it in place—forces him to confront Norn's face. "Phyrexia has posed you a question. You must answer it. It is no wonder you failed to lead us if you cannot do as much as this." Karn is weary. He cannot think of something to say. In the end, he does not have to. Jin-Gitaxias raises an arm to strike. In the gleam of his wicked claws, Karn sees the ghosts of his past. Who better to offer him comfort in a time like this? Soon he will be among them, wherever they might have gone. Will it hurt? Will it be like falling asleep? He's always been envious of sleep. Time to rest. He closes his eyes—for a flash of gold to blare across his eyelids. A clarion call shatters the chitter-skitter of Phyrexia's great machine. The gathered forces have only an instant's warning for what is to come—and no idea what it might be. As golden light swallows the onlookers, Karn hears the clashing of metal. And because he was built to function in the strangest environments, he can see what has appeared in the center of the shockwave now rocking the bridge. An angel in gleaming armor, a golden blade now raised to counter Jin-Gitaxias's strike. She descends from on high like the lance of an angry god toward them—and when she lands, she craters the metal beneath her. Her impact sends dozens of Jin-Gitaxias's legions tumbling into the abyss. The choir's delicate bodies aren't meant for an impact like this, either. Soon they plummet into the dark as well, sending Karn's chunk of slag hurtling to the ground. Still, he watches. 
"You will not strike this man down," says the angel. Wait. Doesn't he~ ? That voice~ Karn isn't the only one to recognize it. Only an arm's length away, <NAME> lets out a keening screech. "#emph[You?!]" #figure(image("012_Episode 7: Divine Intervention/01.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none) When the light fades, <NAME> stands in the center of the crater. Yet this is Elspeth as Karn has never seen her: flanked by radiant golden wings. The many wounds of her past no longer mark her serene face. #figure(image("012_Episode 7: Divine Intervention/02.png", height: 40%), caption: [], supplement: none, numbering: none) Jin-Gitaxias scrambles backward, his servitors closing ranks to protect his escape. Elspeth lets him go. She's busy elsewhere—placing a hand on Chandra's swollen face. Healing light flows from her into the pyromancer. Flesh knits back together. Already Norn has shot up from her throne; in her anger, she's thrown it over. Two of her choir are crushed beneath its bulk. To Karn's eye, their deaths don't seem to have bothered her. "You!~ You were not meant to trouble us anymore!" Elspeth does not dignify this with a response, does not look up at her; she keeps her attention on Chandra, then on Koth. The shock on his face is plain to see—but there is hope there, too. That in turn kindles some within Karn. When was the last time he saw hope in those eyes? "We, the might and heart of Phyrexia, address you!" Norn flings a chunk of her throne at Elspeth. Karn braces himself for Elspeth to come tumbling down—but it doesn't seem to bother Elspeth at all when the rock shatters against her wing. Something ripples through the assembled ranks of Phyrexia, something like fear, something like shock. Whatever it is, they do not like it. Like animals before fire, they begin to pull back, to scatter. The Mirrans see their chance. The moment Koth's healed, he drives his fist into the ground.
Magma shoots up from a burning orange crack in the bridge, running all the way to the base of the Invasion Tree.

"Mirrodin!" says Koth. "With me!"

But Norn does not seem to hear them. Whatever she can get a hold of will serve as a weapon, it seems: more chunks of her throne, a horn she snaps from a howling Vorinclex, the severed head of an unlucky choir member. She hurls them all through the air at Elspeth. Elspeth dodges, cuts, and parries—none of the blows land true. Norn screeches again.

Jin-Gitaxias crawls up to Norn's side. "The prisoners—"

"You and Nissa deal with them," Norn snaps. "We have something more important to attend to."

"That angel?" Jin-Gitaxias asks.

"Preposterous. She's only one among many. My legions can handle them, and Vorinclex will eat whatever we leave behind. It would be wiser for you to retreat and leave the matter—"

Norn grabs him by the throat. "Dissent is a blasphemy, #emph[praetor]. It does not stain the tongue of the faithful. Our will is Phyrexia's will. See it done."

It is ridiculous that they are having this conversation. Norn must be losing control—especially if she does not notice Melira running to Karn's platform. A simple wave to Koth, and suddenly Karn is aloft again.

"You're going to be okay," Melira says.

So much of this is hard to believe. Once, long ago, he almost died on New Phyrexia. It was only the intervention of his friends—Venser, Koth, Elspeth, and Melira—that saved him. Now nearly all of them are here to save him again, and Venser's spark lends him strength.

#emph[Don't follow me], he once said to Venser. But Venser's spark was in him still, and it followed him all the way here.

He couldn't give up. Not yet.

#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)

Wrenn couldn't give up. Not yet.

Though there isn't much of Wrenn left, though their entire fighting force has been reduced to only a few broken survivors, she can't give up. What does it matter if she no longer has legs?
The weight of the world is still on her shoulders.

The angel's arrival isn't a surprise to her but a confirmation. Anything else would have been unacceptable, because anything else meant they might all die. Someone came to save them, and it was Elspeth in her new autumn colors, of course. She looks splendid, though there isn't time to appreciate it. Humans are often distracted by bright, shiny things. She hopes Phyrexians will be the same.

"Chandra," she rasps. "Chandra, we need to go."

Gold dances in the pyromancer's eyes—she's as distracted by the goings-on as the others are. It isn't until Wrenn bites Chandra's sleeve and tugs that she looks down.

"I can't walk anymore. I need your help," says Wrenn.

It's all the explanation Chandra needs. Reality seems to set in for her again. She scoops Wrenn up. "You've got it. Let's go."

Together they take off. The Mirrans follow, looking back every other step toward the woman who's saved them all—and the army they'll have to evade.

At least, that's what Wrenn thought they were admiring.

"Karn!" shouts Melira. "We have to save him, too."

"Got it!" says Koth. The slab Karn's on is stone like any other—it answers his call the way wood heeds hers. Karn's slab flies out to meet them. A barrage of arrows and spears bounces off the back of the slab. That's Koth's work, too: he's using it to shield their retreat.

Wrenn frowns. The weapons weren't actually hurting Karn, but this still strikes her as a callous thing to do. How long have they been fighting that they make decisions like this? They deserve peace. Wrenn wants to bring it to them, but she won't be able to do it alone. Teferi will know what to do, if only she can reach him. And he's gone somewhere she won't be able to reach without the Invasion Tree's help. She can't reach it without Chandra, and Chandra…

"Drop the dryad and there's still hope for you, Chandra. You're smart enough to know I'll kill you otherwise."

Chandra has Nissa to deal with. They all do.
No matter how fast they run, none of it is going to mean anything if Nissa catches them. And she means to catch them. The elf's flinging the bodies of the fallen back at them, her steps certain and inevitable. Wrenn wishes she hadn't turned back to get a look. There's no compassion in those eyes, no mercy, no trace of the woman who was once there.

Koth has his hands full keeping the others from getting hurt. Karn's as torn apart as she is. The fleeing resistance—they're doing what they can, but what they can do to a Phyrexianized elven Planeswalker isn't much. Elspeth's distracting Norn. And Chandra? Chandra can't bring herself to hurt Nissa. Wrenn knows that without having to ask.

They need to get to the tree. And they will. Wrenn's sure of it, because if they don't, everyone will die, and that can't happen. What she can't see from here is how.

All she has is faith.

#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)

Phyrexia rages, yet it cannot break Elspeth Tirel's peace. It is a peace as certain and solid as her golden armor, as hard-won as her battle scars. Chunk after chunk of porcelain comes flying at her; she does not so much as flinch. These are the desperate actions of a person who knows they are going to lose. Elspeth is above all that now.

Once, she'd found Norn frightening. Once, those needle-like teeth had haunted her. Norn's uncanny voice narrated her nightmares with a false god's bravado. #emph[Remember always your humility, for it was Phyrexia that brought you low.]

Elspeth does not find her frightening anymore. She's no longer brought low. In fact, with a single flap of her wings, Elspeth can soar above her. From here Norn is more of an oversized doll than a threat to the Multiverse.

Everything seems smaller now. Further away. All the dross of Elspeth's life has been cut away, leaving only the truth.
#figure(image("012_Episode 7: Divine Intervention/03.jpg", width: 100%), caption: [Art by: awanqi (<NAME>)], supplement: none, numbering: none)

And the truth is that Phyrexia will not win this day.

Beneath her, the Mirrans flee toward the tree. Koth covers the retreat with Chandra as the vanguard, Wrenn in her arms. Strapped to a hunk of slag is Karn—who watches her with naked admiration. Though he is in a pitiful state, she nevertheless finds herself smiling at him. After all these years, they're finally going to put everything right.

So long as Elspeth can see them safely there. She has to stop Nissa. Wrenn #emph[must] reach that tree. But there's a more pressing matter to attend to—someone who doesn't want her running off.

Furious, Norn lunges for Elspeth, claws outstretched. She yanks Elspeth from the air by a dangling leg and slams her against the ground.

"You shall not ruin our moment of triumph!"

Elspeth's ears ring; her vision blurs. She blinks. Norn towers above her once more.

"We have dedicated ourselves to this cause without reservation. The salvation of the Multiverse is our righteous calling. How dare you stand against it?"

"I have my own calling," Elspeth answers. She stands, dust falling from her cloak. "You won't keep me from it."

Norn's laughter is enough to chill the blood. "Your calling is false," she begins. As she speaks, the bodies of the fallen Phyrexians lift and swarm around her. Pieces fly from their limp forms: shards of metal, shards of bone; blades and razors; teeth and tubes. Warp through weft, Norn weaves herself a hideous new suit of armor. "On all of the planes, there is only one eternal, untarnished truth: all will be one. Any who stand in the way of unity stand in the way of a perfect future."

Elspeth looks over her shoulder. The others are fleeing, and Nissa has broken off to stalk toward them. She can't afford to stay here and listen to Norn's grandstanding.

Elspeth concentrates on her blade: the crackling, golden Godsend.
This is only a facsimile of the real thing—but it's her facsimile. She knows it'll work for her. A little focus is all it takes to send a searing beam of light at Norn. Chunks of armor fall away, incinerated by the sword's purifying rays. A smoking pit opens in the praetor's shoulder.

This time, Norn doesn't scream. Instead, she raises a clawed hand. The bodies of the fallen soldiers around them—those already shucked of their useful parts—rise anew to encircle the two fighters.

"Phyrexia shall never fall," Norn says. "Look around you. There is no death, Elspeth Tirel, only Phyrexia."

She won't have long to act. Before the risen ranks can pin her down, Elspeth takes to the air once more. Yet as she turns toward the fleeing Mirrans, walls shoot up from the ground to block her path—walls that rise to the endless heights of the sanctuary's ceiling.

"You cannot run from us," Norn says. "We are the ground beneath your feet, the air in your lungs. Everything that you lay eyes upon is Phyrexia, and Phyrexia is us. We are whole."

Elspeth strikes at the wall. Sparks are the only sign of progress: the porcelain plating does not yield to her blade.

Up ahead, Nissa is closing on the Mirrans. Chandra is with them—the two of them were close, weren't they? Would Chandra be able to strike her down?

Elspeth hesitates. If Chandra falters, Nissa will stop them. They need Elspeth. This fight is a distraction. She needs to get through that wall. If the others can hold out for just a few seconds…

Once more she concentrates on the blade, every breath setting it more brightly aglow. An aurora gleams across her armor. Behind and beneath her, the risen legions of Vorinclex and Jin-Gitaxias are on the attack. Lashes close around her wings. As one, they pull back. Her muscles strain under the pressure.

"Why must you struggle so?" Norn asks. "You've always struggled against us. What is it you desire? If you have longed for a home, find home with us.
If you have need of friends or lovers, there are numberless legions of them among our ranks. You can still join us, if you submit."

Elspeth looks over her shoulder. Norn's standing taller than ever, the added plates from the bridge and the fallen serving to stretch her even further. Bright viscera shines beneath the surface: the flayed flesh of which she's so proud. From the size of her, the cruel shapes of her armor, and the casque-like grill of her new carapace, she looks nothing like home. Elesh Norn is war and death.

A second set of lashes shoots from Norn's outstretched hand. Elspeth doesn't have any other choice: if she wants to stay aloft, she's going to have to get through Norn. A single chop of her blade severs both sets of lashes; momentum topples the Gitaxians onto their backs.

Elspeth flies toward Norn. "You don't understand me." Another slash comes her way; she dodges and repays Norn with a slice across the arm. Smoke rises from the wound. The smell of burning flesh sticks to the roof of Elspeth's mouth. "I'm nothing like you."

Norn grabs one of Elspeth's wings. In a foul parody of a child holding a bird, she dangles Elspeth aloft. "You longed for a purpose—for something greater than yourself. Your dearest heart's desire is a place where you might belong, a place of endless peace, where those you value are never far. A bright future. A #emph[Phyrexian] future."

Norn's voice is joyously sick, and sickly joyous. Elspeth cuts at Norn's grasping fingers, but though she draws blood, the praetor does not let go.

"What does this form offer you that Phyrexia cannot? Peace. Purpose. Unity. Yet they cannot grant you the last, not in truth. Skin still binds you together. Weakens you. To be Phyrexian is to be free from all such boundaries. What you've gained is a pale imitation of what we've perfected. Look around you!"

She does. And though she is loath to admit it, there is truth to what Norn is saying.
The eyes that stare back at her from the army's ranks are all the same. Those that breathe do so in unison—and with those breaths, the sanctuary clicks and whirrs, a machine keyed to the lives of its denizens. Nissa, Nahiri, Ajani… none of them seemed upset with their new states. On every one of them she'd seen nothing but ecstasy.

Home could be whatever you made it and whomever you made it with. If she joined, she wouldn't be short of friends. She and Ajani could forge Theros into its best possible form. Even Daxos could join them. Undying, ageless, all one—forever.

"Angels are a pale shadow of divinity. We are its true light. From the heights of this sanctum, we see all things exactly as they are. After this battle you will no longer exist as your self—you will become one of them. All those years you looked with horror on phyresis, and here you are, embracing it by another name."

"It isn't the same!" Elspeth answers.

Norn holds Elspeth upside down in front of her. They are eye to porcelain, Elspeth dangling meters over the ground. Norn's teeth gleam with the refracted light from Elspeth's blade.

"Then name a single difference."

"My purpose is divine."

"My evangels act as the swords of our divinity. Try again."

"This transformation hasn't changed anything about me." The lie stains her tongue the second she's told it.

"Those new wings of yours tell a different truth. Is this so difficult for you to understand?"

"I…" Elspeth starts.

Another voice from behind—a familiar one. Jin-Gitaxias calls out to his liege. "Haven't we spent enough time on this? Compleat her and let us be on our way."

"Quiet!" Norn shouts. At once her mood drops to furious rage. She turns toward Jin-Gitaxias.

A clash of metal, the sound of tearing flesh. Jin-Gitaxias gurgles behind Elspeth. She realizes he was right: they've spent enough time bickering like children. Her purpose is greater than this.
And Norn's reaction to insubordination tells Elspeth all she needs to know about their differences. She drives her sword into Norn's wounded shoulder—the only place she can reach from here. A spurt of blood slicks Elspeth's armor as Norn, at last, lets go. Elspeth takes to the air again.

Jin-Gitaxias's arm lies in a pool of oil not far from Norn. If she hadn't escaped, it might have been hers. Instead, Elspeth concentrates her power on her sword. Golden light floods the platform.

#figure(image("012_Episode 7: Divine Intervention/04.jpg", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none)

"You're right, Norn," she says. "We aren't so different. We argue. We make mistakes. We have our own wants, dreams, and desires."

Norn's mouth wrenches in confusion and disgust. "What blasphemy is this? We speak only Phyrexia's will—"

Norn swings, but Elspeth ducks out of the way. "You disagreed with Jin-Gitaxias, didn't you? Phyrexia wants you to ignore me, but #emph[you] want differently."

A shout peals from Norn's throat. Shards of fallen soldiers slice through the air, blades of the dead, each aimed at Elspeth. "You!… You understand nothing of Phyrexia!"

"No, the problem is I understand you far too well," says Elspeth. The blade hums with power. She raises it high overhead. This is it. After all these years and all these dead—it is finally time to strike down Elesh Norn. Jin-Gitaxias will do nothing to help her. Already his legions are charging…

Toward the tree. Thousands of them for only a handful of Mirrans. Spears land like hail across the bridge's surface.

That person crumpling to the ground—is that Melira? Born without a trace of metal in her body, immune to the horrors of phyresis, the girl once represented hope to the entire plane. Is that her crumpling to the ground?

Koth's scream confirms it.

It is not right to linger. Elspeth looks down on the praetor before her. A storm of blades swirls around Norn like the petals of a porcelain flower.
"We are beyond your comprehension, beyond your reach! When we have conquered the Multiverse you've held so dear, you will kneel at our feet and bask in the glory of our creation! #emph[You] will not ruin everything we've achieved. Eons from now you will be forgotten, and we will remain the eternal hierophant, Elesh Norn!"

"That's just what I mean. You want people to worship #emph[Elesh Norn], don't you? Phyrexia doesn't matter to you. It never has. Power's the only thing you care about."

The swords around Elesh Norn hang still and silent. A sanguine glow of rage builds from behind them.

"You… #emph[I] hate you!"

Like the arrows of an army, the swords come flying toward her—shards pulled now from the bridge, from the walls, from the very body of Phyrexia. So she's finally learned to speak for herself, has she? Well, that's no longer Elspeth's concern. The swords, however…

Only one shot at this. If Elspeth aligns everything right…

She flies straight for the wall. At the very last moment she pulls back. Momentum turns her stomach, holds her in a vise, but she makes the turn up and away. The blades don't have the space. With all her problems concentrated in one place, Elspeth at last unleashes a ray of light.

When the light fades, she's already halfway down the bridge, toward the tree. The hopes of the Multiverse rest on her plumed shoulders. She does not hear Jin-Gitaxias get to his feet—but she does hear Norn's scream.

"Come back here! I wasn't finished with you!"

She's dawdled too long. It is time to do the right thing. Wrenn and Chandra are almost to the tree. Elspeth's got to make sure they make it.
#import "@docs/bmstu:1.0.0":*
#import "@preview/tablex:0.0.8": tablex, rowspanx, colspanx, cellx

#show: student_work.with(
    caf_name: "Computer Systems and Networks",
    faculty_name: "Informatics and Control Systems",
    work_type: "laboratory work",
    work_num: "1",
    discipline_name: "Software Systems Development Technology",
    theme: "Choosing data structures and processing methods (Variant 11)",
    author: (group: "ИУ6-42Б", nwa: "<NAME>"),
    adviser: (nwa: "<NAME>"),
    city: "Moscow",
    table_of_contents: true,
)

= Objective

Determine the main criteria for evaluating a data structure and its processing methods as applied to a specific task.

= Assignment description

== Tasks

1. Based on the theory, identify the evaluation criteria for data structures, and the working principles and evaluation criteria of the search, sorting, and update operations.
2. In accordance with the assignment variant (Variant 11), propose a concrete data structure scheme (the assignment specifies an abstract data structure) and a way to implement it in the chosen programming language.
3. Determine the qualitative evaluation criteria (generality, access type, etc.) of the data structure obtained in step 2, taking into account the specifics of the task in the assigned variant.
4. Determine the quantitative evaluation criteria of the data structure obtained in step 2: memory required per unit of information, for the data structure as a whole, etc.
5. Conduct a comparative analysis of the data structure proposed in step 2, based on the evaluations obtained in steps 3 and 4, against other possible implementations, in order to find a better data structure for the assignment variant.
6. If the goal of step 5 is achieved, repeat step 2 for the new abstract data structure, stating its qualitative and quantitative criteria.
7. Evaluate the applicability of the search method specified in the assignment variant, taking the data structure into account.
8. If the search method is applicable, state its advantages and disadvantages using qualitative and quantitative criteria: generality, resources required for implementation, average number of comparisons, execution time (in cycles), etc.
9. Propose an alternative, more efficient search method (different from the assignment), if one exists, taking into account the specifics of the task variant and the data structures obtained in the previous steps. Justify the choice of the alternative search method using qualitative and quantitative criteria.
10. Evaluate the applicability of the ordering method specified in the assignment variant, taking the data structure into account.
11. If the ordering method is applicable, state its advantages and disadvantages using qualitative and quantitative criteria: generality, resources required for implementation, average number of comparisons, execution time (in cycles), etc.
12. Propose an alternative ordering method, more efficient and different from the assignment, if one exists. The task variant and the data structure must be taken into account. Justify the choice of the alternative ordering method using qualitative and quantitative criteria.
13. Evaluate the applicability of the update method specified in the assignment to the data structure.
14. If the update method is applicable, state its advantages and disadvantages using qualitative and quantitative criteria: generality, resources required for implementation, execution time (in cycles), etc.
15. Propose an alternative, more efficient update method different from the assignment, if one exists. The task variant and the data structure must be taken into account. Justify the choice of the alternative update method using qualitative and quantitative criteria.
16. Determine how the update method affects the search and ordering operations.
17. Determine the program's main mode of operation, draw conclusions with this in mind, and enter the final results into a table. It should follow from this table that the proposed alternative solution is better. There must be at least one improvement, but all processing methods and the data structure itself may be replaced.

#pagebreak()

== Main requirements

The main requirements are given in the table below:

#align(center)[
#table(
    columns: 5,
    inset: 10pt,
    align: horizon,
    [Task\ number],[Data\ structure],[Search],[Ordering],[Update],
    [2],[Table],[Address\ calculation],[Bubble sort],[Deletion\ by shift],
)
]

== Task

Given: a table of material-consumption standards consisting of K fixed-length records of the form: part code, material code, unit of measurement, workshop number, consumption rate.

= Main variant

== Data structure

The assignment specifies an abstract data structure: a table. This means we must explicitly choose a data structure whose elements are linked implicitly. From the task it follows that the table columns will be part code, material code, unit of measurement, workshop number, and consumption rate. The rows will hold the records, and every row will occupy the same amount of memory.

Since the columns have different data types, we will use structs to implement the rows. A struct occupies the same amount of memory as its variables would if stored separately. Let us define a data type for each column depending on the information it must store.
#align(center)[
#table(
    columns: 2,
    inset: 10pt,
    align: horizon,
    [Column],[Data type],
    [Part code],[```cpp unsigned int```],
    [Material code],[```cpp unsigned int```],
    [Unit of measurement],[```cpp String (char[])```],
    [Workshop number],[```cpp unsigned int```],
    [Consumption rate],[```cpp unsigned int```],
    [Record number],[```cpp unsigned int```],
)
]

Everywhere (except the unit of measurement) we use an unsigned integer type, which doubles the range of storable values compared to a signed type. Codes and numbers cannot be negative by common sense. The consumption rate is the maximum planned amount of raw material, fuel, or energy allowed for producing one unit of some product, so this value cannot be negative either; depending on the unit of measurement, however, it could stop being an integer. The task does not say this number can be fractional, so we adopt a rule: if a fractional consumption rate must be entered, enter an integer rate and switch to a smaller unit of measurement. The record number will be described in more detail later, when we discuss the algorithms themselves.

Having defined the record structure, we must decide how to store the rows in memory. The simplest way to implement this is to store them one after another, in an array. Thus the overall data structure is an array of records. Here is its scheme:

#img(image("dstruct.svg", width:94%), [Data structure scheme])

=== Implementation of the structure in C++

#let lab1 = parsercpp(read("lab1.cpp"))
#code(funcstr(lab1, "struct norm {") + "}", "cpp", [The ```cpp norm``` data structure])

=== Calculating the memory occupied by the array

The memory occupied by the array is $V = k V_э$, where $k$ is the number of elements and $V_э$ is the size of one element. The factor $k$ is set by the user and cannot be changed dynamically. The element size is the sum of the sizes of its fields.
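As a quick cross-check of this field arithmetic, the unpadded sum can be computed mechanically. This is only a hedged sketch: the struct below mirrors the table with illustrative English field names (the real struct lives in lab1.cpp), it assumes a 4-byte ```cpp unsigned int```, and the actual ```cpp sizeof``` of the struct is usually larger than the raw sum because of alignment padding.

```cpp
#include <cstddef>

// Illustrative mirror of the record from the table above; the field names
// are assumptions, not the ones used in lab1.cpp.
struct NormSketch {
    unsigned int part_code;     // part code
    unsigned int material_code; // material code
    char unit[3];               // unit of measurement: ~2 chars + '\0'
    unsigned int workshop_no;   // workshop number
    unsigned int rate;          // consumption rate
    unsigned int record_no;     // record number
};

// Unpadded sum of the field sizes: 5 * 4 + 3 = 23 bytes on platforms with a
// 4-byte unsigned int, matching V_э from the report. sizeof(NormSketch)
// itself is typically larger (e.g. 28) because of padding after unit[3].
constexpr std::size_t unpadded_field_sum() {
    return 5 * sizeof(unsigned int) + 3 * sizeof(char);
}
```

The gap between the raw sum and the padded ```cpp sizeof``` is why the 23-byte figure below should be read as a lower bound per record.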
Let us calculate these sizes, given that one character occupies 1 byte (abbreviated units of measurement take 2 characters on average):

$ V_э = l_"det" + l_"mat" + l_"mea" + l_"ws" + l_"cn" + l_"no" tilde.eq 4 times 5 + 3 = 23 $

We get $V = 23k$ bytes. If the array is static and $k$ is unknown, this leads to inefficient use of RAM, as well as a limit on the number of records that can only be changed in the program code (for this we declare ```cpp #define max 300```).

It should be noted that ```cpp std::string``` is a more complex structure than ```cpp char[]```, if only because it easily supports dynamic appending of elements; but since the program never needs this, we will assume that ```cpp std::string``` occupies the same amount of memory as ```cpp char[]```.

=== Estimating the access time to the $i$-th element

In an array, access is performed by index. For convenience we store only the address of the first element of the array, so to access an element we must add $i$ to this address, which takes $t_"++" = 1$ cycle; a RAM access takes $t_arrow = 1$ processor cycle.

$ T_д = t_"++" + t_arrow = 1 + 1 = 2 "cycles." $

Note that, because of how the address search works, access to the $i$-th element in this implementation takes $T_"ди" = 2 i "cycles"$. But the search and sorting operations work by array index ($T_д = 2$ cycles), so access by record number rather than by index is needed only when printing or deleting elements.

=== Estimating the time to delete the $i$-th element

On deletion we rewrite the whole array one cell to the left, starting from the $i$-th element, so deletion is a rewrite of $k-i$ elements (see 3.4.3 for details).

== Search method

The search must be implemented as an address search. Address search is one of the most time-efficient search methods: it assumes a link between the sought cell and the record's index.

Since the values are not known in advance, a value-to-index search function can only be built by indexing the array dynamically. This can be done with key-value structures. We take a hash table, into which values are inserted as keys at write time; the value stored under a hash is a singly linked list, because one value can occur in several records. We use a list because the entire search result must be printed, i.e. there is no need for out-of-order access to its elements. A separate hash must be generated for each searchable value, so for convenience we put the hash tables of the same type into an array. A separate hash table is created for the "unit of measurement" field.

#code("typedef unordered_map<unsigned int, list<int> > uimap;
typedef unordered_map<string, list<int> > smap;
uimap maps[4] = {{},{},{},{}};
smap measureMap = {};
", "cpp", "Declaring new types and variables for the search")

Now a word about the indices inside a record. Because sorting and deletion must also be implemented, array indices are not suitable for uniquely identifying a record's position: every sort or deletion would require rewriting the indices of all (or many) records in the hash tables, i.e. re-indexing the table. That would affect the running time of the algorithms, so we add an extra field holding the record's own index, which does not change when other records are deleted or when the array is sorted. This enlarges the structure but lets all three algorithms run as efficiently as possible.

The search algorithm returns a list with the indices of the found elements; since the elements themselves are not ordered by this index, accessing them becomes more expensive (equal to $T_"ди"$).

We use the following function to access elements:

#code(funcstr(lab1, "unsigned int idxSearch(unsigned int idx){")+"}", "cpp", [Finding the record with index `idx`])

This function takes the "static" record index and returns the record's index in the array, using a sequential scan of the array.

=== Implementation of the search method

#code(funcstr(lab1, "list<int> adressSearch(string icol, bool wo){")+"}", "cpp", [The address search method])

Here we pass in the number of the column to search by, and `wo`, a flag indicating whether to print exit information after the results. We ask the user for a search key; when searching units of measurement, the separate hash table is used. The search itself is simply the hash lookup.

=== Average number of comparisons

The table is indexed before the search, so nothing needs to be compared: we simply take the indices of the records matching the key.

$ C = 0 $

=== Estimating the search time

In the best case, when no collisions occur, the search takes the time needed to generate a hash from the key (hash functions are used for this). There are many hash functions, but their main job is the same: using some mathematical basis, provide a one-way mapping from key to hash (the inverse mapping is usually not needed). The result is a structure similar to a dynamic array, but accessed by hashes instead of indices. When collisions occur, the program allocates a larger memory region for the whole hash table and moves the entire table there. Collisions cannot be avoided while inserting values into the table, but on key lookup they occur rarely (or never, depending on the hash function). Taking a division-based hash function as an example, a division takes about 28 cycles in total, so one key lookup takes about 30 cycles.
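The indexing mechanism described above can be sketched independently of lab1.cpp. The names below (`IndexMap`, `index_insert`, `address_search`) are illustrative, not taken from the report's code; the point is only that a lookup costs one hash computation, never a scan of the record array.

```cpp
#include <list>
#include <unordered_map>

// One hash table per integer column: key value -> record numbers holding it.
using IndexMap = std::unordered_map<unsigned int, std::list<int>>;

// Maintain the index while a record is written: register rec_no under key.
void index_insert(IndexMap& idx, unsigned int key, int rec_no) {
    idx[key].push_back(rec_no);
}

// Address search: zero comparisons over the array, just one hash lookup.
std::list<int> address_search(const IndexMap& idx, unsigned int key) {
    auto it = idx.find(key);
    return it == idx.end() ? std::list<int>{} : it->second;
}
```

With records 0 and 2 sharing, say, material code 7, `address_search(idx, 7)` returns the list `{0, 2}` without touching the record array, which is exactly why the comparison count above is $C = 0$.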
=== Estimating the memory used

Hash tables are a compromise between time (deletion, access, insertion) and memory. A table always contains unfilled space, which grows with the table itself to avoid collisions. The ratio of the memory filled with data to the total memory allocated for the table is called the load factor. A value of $0.5$ or lower is considered optimal; at such values the probability of a collision is small. Since every record must have values in all fields, its indices necessarily end up in all 5 tables.

$ V_"таб" = 1/"load factor" times 5 times v_"int" times k = 40 k "bytes" $

== Ordering method

Bubble sort works by letting the largest elements "float up" (hence the name) toward the top of the array. Two adjacent elements are compared, and if the element with the smaller array index has the larger value, they swap places (their array indices change; their record numbers stay the same). The same element is then compared with the next one; if it is larger, the elements swap again, and so on. Since two elements are always being compared, the implementation needs a nested loop: the first element, the one that will "float up," has index `i` (`i = range(0, n)`); the second element, the one it is compared against, has index `j` (`j = range(i+1, n)`). Thus in the worst case, when the array is sorted in descending order, all elements are visited $n^2$ times, and in the best case, when the array is already sorted, $n$ times.

=== Implementation of the ordering method

#code(funcstr(lab1, "void bubbleSortUi(int nocol){")+"}\n\n"+funcstr(lab1, "void bubbleSortStr(){")+"}", "cpp", [The bubble sort method])

Since comparison of integers and of strings differ considerably, we create two functions to cover all fields of the table.
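The integer variant of this loop shape can be sketched as follows. This is a hedged sketch with illustrative names, not the `bubbleSortUi` from lab1.cpp; the key detail it demonstrates is that the whole record is swapped, so `record_no` travels with its key and the indices stored in the hash tables stay valid.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Minimal record: the sort key plus the immutable record number.
struct Rec {
    unsigned int key;
    unsigned int record_no;
};

// Sort with the loop shape described above: element i "floats up" while
// being compared against every j in (i, n).
void bubble_sort(std::vector<Rec>& a) {
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = i + 1; j < a.size(); ++j)
            if (a[i].key > a[j].key)
                std::swap(a[i], a[j]);  // swap whole records, not just keys
}
```

After sorting, the array is ordered by `key` while each `record_no` still names the same logical record, so no re-indexing of the hash tables is needed.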
Only the condition itself differs: for the integer type we use the greater-than operator, while for strings we use a function that compares the characters of the two strings one by one until their character codes (in the encoding) differ, and then returns the difference between those codes. If this difference is greater than 0, the whole record containing that string "floats up", which yields records sorted alphabetically by unit of measurement.

=== Average number of comparisons

Inside the loop we compare 2 values (of whatever type). Since a value must not be compared with itself, for each `i`-th element there are `k-1` values, and there are `k` such elements in total, so we get:

$ C_"bub" = k times (k-1) $

=== Ordering time estimate

Let us count the cycles executed inside the loop; we will count for sorting integer values -- string comparisons will, on average, take about twice as long.

$ T_"if" = 2 times (T_"acc" + T_"get") + T_> + t_"if" = 2 times (2 + 2) + 2 + 1 = 11 "cycles" $

$ T_"in_if" = 3 times t_= + 4 times T_"acc" = 3 times 2 + 4 times 2 = 14 "cycles" $

We enter the branch with probability $p = 0.5$, so the total ordering time is:

$ T_"bub" = C_"bub" times (T_"if" + p times T_"in_if") =\ = k times (k-1) times (11+0.5 times 14) = 18 k (k-1) "cycles" $

=== Memory usage estimate

Since all changes happen directly in the array, memory is used only for the indices and for swapping records.

$ V = 31 "bytes" $

#pagebreak()

== Correction method

Deletion by shifting: all elements starting from the one to be deleted are shifted left in memory, thereby overwriting the deleted element.

=== Implementation of the correction method

#code(funcstr(lab1, "void removeOffsetIndex(unsigned int index, bool wo){")+"}", "cpp", [Deletion by shifting])

In deletion we use the same flag as in output, so that the user can be given a chance to exit when necessary.
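The shifting step itself can be sketched as follows (a hypothetical simplification of the lab's `removeOffsetIndex`, reduced to one integer field):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Deletion by shifting: every element after position idx moves one slot to
// the left, overwriting the deleted element. In the lab's static array the
// capacity would stay allocated; here we just shrink the logical size.
void removeByShift(std::vector<int>& a, std::size_t idx) {
    for (std::size_t i = idx + 1; i < a.size(); ++i)
        a[i - 1] = a[i];   // shift left, erasing a[idx]
    a.pop_back();          // drop the now-duplicated last slot
}
```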
Deletion happens in 3 stages: the record's array index is found from its record number; the record number is removed from all the hash tables whose lists contained it, so that the address search can no longer find the index of the record being deleted; and the record itself is removed from the record array by shifting. But deleting a record only by its number is not very convenient, so I also added deletion of all records in a list that the address search can return, thereby supporting deletion of records by any field.

#code(funcstr(lab1, "void removeOffsetArray(list<int> indexes){")+"}", "cpp", [Deletion by shifting (by fields)])

Here it becomes clear why the previous function needs the flag: the user does not need an exit prompt after every deleted record.

We also need to show the implementation of finding a record number within a list. The lists are assumed to have an average length $n << k$. For $k = 300$ the average list length will be $5$-$10$, for logical reasons. A sequential search over the list is therefore sufficient, and deletion will still be fast.

#code(funcstr(lab1, "list<int>::const_iterator whereList(const list<int>& l, int a) {")+"}", "cpp", [Sequential search for an index in a list])

The search is implemented so that it returns an iterator, so `erase` does not traverse the list a second time: it frees the element immediately and relinks the previous and next elements (`std::list` is a doubly linked list, but a custom singly linked list implementation could be used instead; then the search would need to keep track of the previous node and return it. For this implementation that is essentially the only difference between a doubly and a singly linked list).

=== Average number of comparisons

Sequential search is used both to find values in a list and to find the array index of the element being deleted, so the comparisons come mainly from these two searches.
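The iterator-returning search and single-pass erase described above can be sketched like this (illustrative names, assuming `std::list`; not the lab's actual `whereList`):

```cpp
#include <cassert>
#include <list>

// Sequential search over a short list that returns an iterator, so erase()
// can unlink the node directly instead of walking the list a second time.
std::list<int>::const_iterator findInList(const std::list<int>& l, int value) {
    auto it = l.cbegin();
    while (it != l.cend() && *it != value) ++it; // plain sequential scan
    return it; // cend() if not found
}

// Erase `value` from `l` in one pass using the returned iterator.
bool eraseFromList(std::list<int>& l, int value) {
    auto it = findInList(l, value);
    if (it == l.cend()) return false;
    l.erase(it); // relinks prev/next nodes, no second traversal
    return true;
}
```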
$ C_"del" = (5 times n)/2 + k/2 = (5 times k/42)/2 + k/2 = (5 times k + 42 times k)/84 = (47k)/84 $

=== Deletion time estimate

Deletion, as noted above, happens in 3 stages; the first 2 together take

$ T_"search" = (2 times T_"acc") times k/2 + (2 times T_arrow) times (5 k)/84 = 2k + (5k)/84 = 173/84 k "cycles" $

The search alone already takes a considerable amount of time; this problem is solved by choosing an alternative data structure (see the alternative variant).

The deletion itself also requires a number of cycles, namely accessing and writing $k-i$ (on average $k/2$) elements.

$ T_"del" = (T_"acc" + T_=) times k/2 = 4 times k/2 = 2k "cycles" $

$ T_"del_total" = T_"search" + T_"del" = 173/84 k + 2k = 341/84 k tilde.eq 4k "cycles" $

Time-efficient deletion of data from an array reaches 3-4 cycles in the best cases; we will aim for that in the alternative variant.

=== Freed memory estimate

Deletion frees the memory occupied by the lists in the hash tables, but because the array is static, overwriting elements does not shorten it: even after an element is deleted, the memory allocated for it remains. Since the assignment does not require inserting new elements, this memory simply stays empty at the end of the array until the program terminates. Each deletion clears 23 bytes of data, but only the memory occupied by the record number in the lists, i.e. 20 bytes, is fully released.

= Alternative variant

== Data structure

The main variant presented a decent way of organizing the table. The search runs in constant time, which for a large number of elements is still better than any dependence on that number. The ordering algorithm has a good memory footprint. The deletion algorithm turned out to be the least efficient: first, memory is not released after an element is deleted; second, deletion requires several passes over all elements.
All the problems come down to the two kinds of access: by array index and by record number. If only one kind of access were kept, the whole table would have to be reindexed on every deletion and sort, which would increase the running time of the algorithms and make them inefficient. Since execution speed has the higher priority these days, we can increase the amount of memory occupied by the table in order to speed up sorting and deletion.

One of the most efficient deletion algorithms is deletion by marking: it only marks the record in memory as deleted, and only once every several (tens, hundreds or even thousands of) deletions does it release all the memory occupied by the already-deleted elements.

So, for the alternative solution we need to choose a data structure that has indices which do not change when other elements are sorted or deleted, whose access by index takes constant time, and which preserves an output order for the records -- so that the result of sorting can be displayed.

We will use a dynamic array that stores the record numbers; during sorting these numbers are swapped, and if the $i$-th element ($i$ is the element's number, not its array index) is deleted, we set its unit of measurement to "\~" as a deleted-element flag (the garbage collector will later walk this array and remove all records marked "\~" both from the array and from memory). The number is removed from the records themselves -- it is now used directly as the key in a separate hash table whose values are the records. This way we get fast access to elements both by array index and by record number.

The previous sort is memory-efficient, but in time it loses to quicksort, so in this solution we will use Hoare's sort. The address search remains unchanged, since it is time-efficient.
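The combination just described -- an order-defining array of record numbers plus a hash table keyed by record number, with "\~" tombstones and a periodic purge -- can be sketched as follows (all names are illustrative, not the lab's actual code):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct Record { std::string unit; int value; };

struct Table {
    std::vector<int> order;                  // record numbers, reorderable by sorting
    std::unordered_map<int, Record> records; // record number -> record

    void insert(int num, Record r) {
        order.push_back(num);
        records[num] = std::move(r);
    }
    void markDeleted(int num) { records[num].unit = "~"; } // tombstone flag
    void purge() {                                         // the "garbage collector"
        std::vector<int> kept;
        for (int num : order) {
            if (records[num].unit == "~") records.erase(num); // free the record
            else kept.push_back(num);                         // compact the order
        }
        order = std::move(kept);
    }
};
```

Both accesses stay fast: `order[i]` gives a record number in constant time, and `records[num]` is one hash lookup.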
Here is the diagram of the data structure for the alternative solution:

#img(image("dastruct.svg", width:80%), [Data structure diagram for the alternative solution])

=== Implementation of the structure in C++

#let lab1a = parsercpp(read("lab1a.cpp"))

#code(funcstr(lab1a, "struct norm {") + "}", "cpp", [The record structure ```cpp norm```])

#code("typedef unordered_map<unsigned int, norm > tytbl; tytbl tbl; vector<int> stbl; ", "cpp", [Declaration of the record hash table and the dynamic index array])

=== Calculating the memory occupied by the structure

The memory occupied by the structure consists of 2 parts: the array's memory and the hash table's memory.

$ V_"hash" = 1/"load factor" times V_"rec" times k = k/"load factor" times \ times (l_"det" + l_"mat" + l_"mea" + l_"ws" + l_"cn") tilde.eq k/0.5 times (4 times 4 + 3) = 38k "bytes" $

$ V_"arr" = V_"int" times k = 4k "bytes" $

$ V_"struct" = V_"hash" + V_"arr" = 38k + 4k = 42k "bytes" $

We got $14k$ bytes more than in the previous implementation, but here the data is stored dynamically, so $k$ is entered while the program runs and can change. When the number of records varies widely, this fact can compensate for the increase in memory usage.

=== Access time estimate for the $i$-th element

The increase in memory occupied by the structure leads to a decrease in element access time. Elements can be accessed through the array index ($T_"acc"$) or through the element's number ($T_"num"$). Since these two indices are now linked in one array, the following equality holds:

$ T_"acc" = t_"++" + t_arrow + T_"num" = 2 + T_"num" "cycles" $

Section 3.2.3 estimated the approximate time of accessing one value by key:

$ T_"num" = 30 "cycles" $

As we can see, both kinds of access to the values are now constant, which is good for a large number of values.

=== Deletion time estimate for the $i$-th element

Since not every call actually removes elements, we introduce ```cpp #define ev 10```, where 10 is the number of calls after which the elements are physically removed.
The bulk deletion is done by shifting: the shift distance increases every time the current element has to be skipped (overwritten), so that by the end of the pass over the array the shift equals `ev`. We clean both the hash table and the array; in total, $46 "ev"$ bytes have to be cleared per purge (see 4.4.3 for details).

#pagebreak()

== Search method

=== Implementation of the search method

#code(funcstr(lab1a, "list<int> adressSearch(string icol, bool wo){") + "}", "cpp", [The address search method])

Since the shared indexing structures have not changed from the previous solution, the search algorithm itself has not changed either. So essentially everything said in section 3.2 holds here as well.

=== Average number of comparisons

$ C = 0 $

=== Search time estimate

Retrieving one value by key takes, on average, about 30 cycles.

=== Memory usage estimate

$ V_"tbl" = 1/"load factor" times 5 times v_"int" times k = 40 k "bytes" $

== Ordering method

Quicksort is organized as follows: the central element of the array is taken, along with the two outermost elements. While the outer elements are smaller (for the left one) or larger (for the right one) than the central element, we move the pointer toward the center without touching the elements. After this operation there are 2 possibilities: either the pointers have crossed each other, or numbers satisfying all the conditions have been found. If the second is true, we swap the elements and continue by the same scheme, swapping elements along the way, until the first becomes true. After all these operations, to the left of the center there remain only elements smaller than the central one (in unsorted order), and to the right only larger ones. In the next step the array is split into 4 parts (in half, and then each part in half again) and the same steps are applied to the halves; we continue like this until no halves remain. After that we obtain a sorted array.
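The Hoare scheme described above can be sketched as a self-contained function (a simplified stand-in for the lab's `qSortI`, reduced to plain integers):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hoare-style quicksort: pointers move inward past elements already on the
// correct side of the pivot, out-of-place pairs are swapped, and the two
// resulting halves are sorted recursively.
void quickSort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[lo + (hi - lo) / 2]; // the central element
    int i = lo, j = hi;
    while (i <= j) {
        while (a[i] < pivot) ++i;  // smaller elements stay left of the pivot
        while (a[j] > pivot) --j;  // larger elements stay right of it
        if (i <= j) std::swap(a[i++], a[j--]); // fix an out-of-place pair
    }
    quickSort(a, lo, j);  // pointers have crossed: recurse on both halves
    quickSort(a, i, hi);
}
```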
=== Implementation of the ordering method

#code(funcstr(lab1a, "void qSortI(vector<int>::iterator stbl, int k){") + "}", "cpp", [Hoare's sort for record numbers])

#code(funcstr(lab1a, "void qSortUi(int nocol, vector<int>::iterator stbl, int k) {") + "}\n"+ funcstr(lab1a, "void qSortStr(vector<int>::iterator stbl, int k){") + "}", "cpp", [Hoare's sort for the integer and string types])

In this implementation I decided to create 3 different sort functions: for integer fields, for string fields, and for the record numbers in the array.

=== Average number of comparisons

This sort uses 2 kinds of comparisons: comparisons with the central element and comparisons of indices. First let us count everything for one pass. Here $n$ is the number of elements being processed; on the first iteration it equals $k$, on the second it is half that, and so on. The loop performs between 1 and $n/2$ index comparisons, but statistically this loop is re-entered with a 40% probability (K. Dean, "A Simple Expected Running Time Analysis for Randomized 'Divide and Conquer' Algorithms"). After the loop come the conditions for entering the recursion.

$ C_"idx" = 1 + 0.4 + 2 = 3.4 $

The inner loop is entered twice, with $n/4$ values each on average, so the main loop performs

$ C_"piv" = 1.4 times (n/4 + n/4) = 1.4 times n/2 = 0.7n $

$ C_"total" = C_"idx" + C_"piv" = 3.4 + 0.7n $

Since we keep splitting the array in two, the function is called about $2 k$ times in total.

$ C_"total_rec" = (3.4 + 0.7n) times 2 k $

where $n = log_2 k$, giving

$ C_"total_rec" = 6.8k + 2k log_2 k $

=== Ordering time estimate

First we count the cycles for one function call, then multiply that value by $2k$ and substitute $n = log_2 k$. Since the function itself is rather complex, we count only the cycles directly involved in the comparisons, because the remaining cycles are roughly the same for all methods based on direct exchange.
$ T_"idx" = 4 times C_"idx" = 13.6 $

$ T_"piv" = T_"acc" times C_"piv" = 30 times 0.7n = 21n $

$ T_"total" = T_"idx" + T_"piv" = 13.6 + 21n $

$ T_"total_rec" = 27.2k + 42k log_2 k $

=== Memory usage estimate

The function is recursive, so until it finishes it occupies some space on the stack; but it allocates nothing in the data segment and works only with the current array, so we can assume that space is used only to store the array address, 2 indices and the middle element -- and so on for each of the $2k$ recursively invoked calls.

$ V = 2k times (4+2 times 4 + 4) = 32k "bytes" $

As a result, organizing the recursion uses $32k$ bytes of stack. However, any recursion can be rewritten as a loop, so this memory can be neglected.

== Deletion by marking and shifting

=== Implementation of deletion by marking and shifting

#code(funcstr(lab1a, "void removeMarkIndex(unsigned int index, bool wo){") + "}", "cpp", [Deletion by marking, by index])

The global variable ```cpp evi``` now counts the number of deleted elements; once there are as many as the configured limit, all of them are removed completely in a single loop. Otherwise we simply replace the unit of measurement with "\~", and the elements are no longer displayed (the display function is written accordingly). The idea of deletion by columns stays the same; only the deletion code has changed. Removal from the hash tables (for the address search) has not changed either, but was moved into a separate function for better code readability.

=== Average number of comparisons

Comparisons are needed only during the purge, and there are exactly $k$ of them, so amortized per deletion:

$ C_"del" = k/"ev" $

=== Deletion time estimate

First consider the time without the purge: it is spent only on writing the "\~".

$ T_"del" = T_"acc" + t_"++" + t_"=" = 30 + 2 + 2 = 34 "cycles" $

During the purge we traverse the whole array, just as the main solution did on every deletion.
$ T_"clean" = 2k $

$ T_"total" = T_"del" + T_"clean"/"ev" = 34 + (2k)/"ev" $

=== Freed memory estimate

Clearly, deletion is fastest when `ev` is large; but on the other hand, the larger `evi` is allowed to grow, the more unneeded memory we hold on to. One record in the table occupies 42 bytes, the index stored in the array occupies 4 bytes, and the address-search hash tables store 20 bytes of indices.

$ V_"freed" = (42+4+20) times "ev" = 66 "ev" "bytes" $

= Results table and conclusion

#align(center)[
#tablex(
  columns: 5,
  inset: 10pt,
  align: center + horizon,
  map-cells: cell => {
    if (cell.x == 3 and cell.y == 3 or
      cell.x == 1 and cell.y == 3 or
      cell.x == 4 and cell.y == 8 or
      cell.x == 3 and cell.y == 9 or
      cell.x == 4 and cell.y == 9 or
      cell.x == 1 and cell.y == 10 or
      cell.x == 3 and cell.y == 10 or
      cell.x == 4 and cell.y == 10
    ){
      cell.content = {
        let text-color = green.darken(20%)
        set text(text-color)
        strong(cell.content)
      }
    }
    if (cell.x == 2 and cell.y == 3 or
      cell.x == 2 and cell.y == 4 or
      cell.x == 2 and cell.y == 5 or
      cell.x == 2 and cell.y == 8 or
      cell.x == 2 and cell.y == 9 or
      cell.x == 2 and cell.y == 10
    ){
      cell.content = {
        let text-color = yellow.darken(20%)
        set text(text-color)
        strong(cell.content)
      }
    }
    cell
  },
  [],[Data\ structure],[Search\ method],[Ordering\ method],[Correction\ method],
  colspanx(5)[*Main variant*],
  [Name], [Record\ array],[Address\ search],[Bubble sort],[Deletion by shifting],
  [Occupied/\ freed memory (bytes)], [$23k$], [$40k$],[$31$], [$23(p) + 20$],
  [Average\ number of\ comparisons], [--], [$0$], [$k times (k - 1)$],[$ 47/84 k $],
  [Time\ taken\ (cycles)], [$2$/$2i$],[$30$],[$18k(k-1)$],[$ 341/84 k $],
  colspanx(5)[*Alternative variant*],
  [Name], [Index array\ and hash table],[Address\ search],[Hoare\ sort],[Deletion by marking and shifting],
  [Occupied/\ freed memory (bytes)], [$42k$],[$40k$],[$32$],[$66"ev"$],
  [Average\ number of\ comparisons], [--],[$0$],[$6.8k + 2k log_2 k$],[$ k/"ev" $],
  [Time\ taken\ (cycles)], [$32$/$30$],[$30$],[$27.2k + 42k log_2 k$],[$32$],
)
]

As the table shows, the alternative variant beats the main one in time (in all methods) and in freed memory, but it takes up more space in RAM.

== Conclusion

As a result of this lab work, qualitative and quantitative evaluations of data structures and of the methods for processing them were carried out in accordance with the assignment variant. For the alternative variant, solutions were proposed that provide more efficient search, sorting and deletion of data.

= Appendices

While developing this lab work, two console applications were created for debugging the methods. Their code is given below.

== Main variant

#let lab1 = read("lab1.cpp")

#show raw: block.with(
  fill: luma(240),
  inset: 9pt,
  radius: 4pt,
)

#align(left+top)[
#raw(writeft(lab1, 0, 38),lang:"cpp")
#raw(writeft(lab1, 44, 84),lang:"cpp")
#raw(writeft(lab1, 85, 125),lang:"cpp")
#raw(writeft(lab1, 126, 166),lang:"cpp")
#raw(writeft(lab1, 167, 208),lang:"cpp")
#raw(writeft(lab1, 209, 249),lang:"cpp")
#raw(writeft(lab1, 250, 290),lang:"cpp")
#raw(writeft(lab1, 291, 331),lang:"cpp")
#raw(writeft(lab1, 332, 365),lang:"cpp")
#raw(writeft(lab1, 366, 382),lang:"cpp")
]

== Alternative variant

#let lab1 = read("lab1a.cpp")

#align(left+top)[
#raw(writeft(lab1, 0, 18),lang:"cpp")
#raw(writeft(lab1, 20, 63),lang:"cpp")
#raw(writeft(lab1, 66, 110),lang:"cpp")
#raw(writeft(lab1, 111, 155),lang:"cpp")
#raw(writeft(lab1, 156, 195),lang:"cpp")
#raw(writeft(lab1, 196, 239),lang:"cpp")
#raw(writeft(lab1, 240, 279),lang:"cpp")
#raw(writeft(lab1, 280, 323),lang:"cpp")
#raw(writeft(lab1, 324, 356),lang:"cpp")
#raw(writeft(lab1, 358, 382),lang:"cpp")
]
#import "template.typ": *
#import "@preview/cetz:0.2.2": canvas, draw, tree
#import "@preview/pintorita:0.1.1"

#show: project.with(
  title: "Design and development of an OWL 2 manchester syntax language server",
  authors: ((name: "<NAME>", email: "<EMAIL>"),),
  date: "December 6, 2023",
  topleft: [
    Otto-von-Guericke-University Magdeburg \
    Faculty of Computer Science\
    Research Group Theoretical Computer Science
  ],
)

// Pintoria setup
#show raw.where(lang: "pintora"): it => pintorita.render(it.text)

#heading(outlined: false, numbering: none)[Abstract]
As the number of code editors and programming languages rises, language servers, which communicate with the editor to provide language-specific smarts, are becoming more relevant. Traditionally this hard work has been repeated for each editor, as each editor's API was different. This can be avoided with a standard. The _de facto_ standard for realizing editing support for languages is the language server protocol (LSP). This work implements an LSP-compatible language server for the OWL 2 Manchester syntax/notation (OMN) using the incremental parser generator tree sitter. It provides language features like auto-complete, go to definition and inlay hints, which are critical in large OMN files, where editing would be tedious and error-prone without them. I also evaluated the practical relevance of the LSP.

#pagebreak()

#outline(indent: auto, fill: repeat(" . "))
#pagebreak()

= Introduction
== The Research Objective
// How to design and implement an efficient language server for the owl language
The aim of my research is to find out how best to implement a language server for a language previously unknown to the author. Which data structures, techniques and protocols are best suited, and what are the performance characteristics of the different alternatives?
== The Structure of the Thesis
// first explain what the work i am doing is and
The thesis begins with background information about OWL 2, the Manchester syntax, IDEs and language servers. This broad background is followed by detailed information about my implementation of a language server: what my decisions were and why. It involves translating a grammar, creating a language server crate and a plugin example for Visual Studio Code. The third big chapter is about testing the created program by running grammar tests, unit tests, end-to-end tests and benchmarks, then analyzing and evaluating the results in the categories of speed, correctness and usability.

= Related work
#lorem(100)

= Background
In this chapter I will explain the programs, libraries, frameworks and techniques that are important to this work. You can skip the parts that you are familiar with. We start with the ontology language this language server will support. Then we go over how IDEs used to work and what modern text editors do differently. Afterwards I will say something about tree sitter, the parser generator that was used.

== What is OWL, OWL 2 and OWL 2 Manchester syntax
// TODO What is OWL 1
#quote(
  block: true,
  attribution: [w3.org #cite(<OWLWebOntologya>, supplement: [abstract])],
)[The OWL 2 Web Ontology Language, informally OWL 2, is an ontology language for the Semantic Web with formally defined meaning. OWL 2 ontologies provide classes, properties, individuals, and data values and are stored as Semantic Web documents. OWL 2 ontologies can be used along with information written in RDF, and OWL 2 ontologies themselves are primarily exchanged as RDF documents.]

// TODO why owl2 not owl1
#quote(
  block: true,
  attribution: [w3.org #cite(<OWLWebOntologya>, supplement: [chapter 1 introduction])],
)[
  The Manchester OWL syntax is a user-friendly syntax for OWL 2 descriptions, but it can also be used to write entire OWL 2 ontologies.
  The original version of the Manchester OWL syntax was created for OWL 1 [...]. The Manchester syntax is used in Protégé 4 and TopBraid Composer®, particularly for entering and displaying descriptions associated with classes. Some tools (e.g., Protégé 4) extend the syntax to allow even more compact presentation in some situations (e.g., for explanation) or to replace IRIs by label values [...].

  The Manchester OWL syntax gathers together information about names in a frame-like manner, as opposed to RDF/XML, the functional-style syntax for OWL 2, and the XML syntax for OWL 2. It is thus closer to the abstract syntax for OWL 1, than the above syntaxes for OWL 2. Nevertheless, parsing the Manchester OWL syntax into the OWL 2 structural specification is quite easy, as it is easy to identify the axioms inside each frame.

  As the Manchester syntax is frame-based, it cannot directly handle all OWL 2 ontologies. However, there is a simple transform that will take any OWL 2 ontology that does not overload between object, data, and annotation properties or between classes and datatypes into a form that can be written in the Manchester syntax.
]

== How IDEs work
IDEs use syntax trees to deliver language smarts to the programmer. The problem with IDEs is that they are focused on specific languages or platforms. They are usually slow due to not using incremental parsing; this means that on every keystroke the IDE parses the whole file. This can take 100 milliseconds or longer, getting slower with larger files. This delay can be felt by programmers while typing. @loopTreesitterNewParsing

== What is a language server
// https://www.thestrangeloop.com/2018/tree-sitter---a-new-parsing-system-for-programming-tools.html 4:05
#lorem(100)

== What is tree sitter
Tree-sitter is a parser generator and query tool for incremental parsing. It builds a deterministic parser for a given grammar that can parse a source file into a syntax tree and update that syntax tree efficiently.
It aims to be general enough for any programming language, fast enough for text editors to act upon every keystroke, robust enough to recover from previous syntax errors, and dependency-free, meaning that the resulting runtime library can be embedded or bundled with any application. @TreesitterIntroduction

It originated from <NAME> and was built at GitHub in C and C++, and it is designed to be used in applications like Atom: light text editors that need plugins to become as useful as an IDE. Its core functionality is to parse many programming languages into coherent syntax trees that all have the same interface. The incremental parsing is "superfast" and needs very little memory, because it shares nodes with the previous version of the syntax tree. This makes it possible to parse on every keystroke and run parsers in parallel. Another important feature is the error recovery. Tree-sitter can, unlike other common parsers that error out when parsing fails, find the start and end of a wrong syntax snippet by "inspecting" the code. @loopTreesitterNewParsing All these features make it extremely useful for parsing code that is constantly modified and contains syntactical errors, like source code written inside code editors.

== What makes a parser a GLR parser
GLR parsers (generalized left-to-right rightmost derivation parsers) are generalized LR parsers that handle non-deterministic or ambiguous grammars. Deterministic LR parsers #cite(<ahoTheoryParsingTranslation1972>, supplement: [chapter 4]) have been well studied and optimized, yielding very efficient parsers. But they are limited to a subset of grammars, which must be unambiguous. GLR parsers do not produce non-deterministic automata in the theoretical sense; rather, they produce an algorithm simulating them, keeping track of all possible states in parallel. Backtracking, on the other hand, is extremely inefficient.
Parallel parsers #cite(<ahoTheoryParsingTranslation1972>, supplement: [chapter 4.1 and 4.2]) on the other hand have, in the worst case, a time complexity of $O(n^3)$, e.g. on random grammars. But on a large class of grammars they are linear in time. This makes them extremely useful for design and research, because they avoid the grammatical constraints that LR parsing otherwise comes with @ironsExperienceExtensibleLanguage1970 #cite(<langDeterministicTechniquesEfficient1974>, supplement: [introduction]).
// TODO (work the term "context-free grammars" in somewhere)
// secondary source @langDeterministicTechniquesEfficient1974
// TODO write more
// TODO replace secondary source with primary source
//TODO paper Deterministic Techniques for Efficient Non-Deterministic Parsers DOI:10.1007/3-540-06841-4_65

= Implementation
This chapter explains what was implemented and how it was done. I will also show why I chose the tools that I did, what alternatives exist and when to use those.

== Parsing
A language server needs a good parser, and when there is no incremental error-recovering parser, one needs to be built. The parser generator chosen for this language server is tree sitter.

=== Why use tree sitter
I chose tree sitter because it is an incremental parsing library. This is a must, because OMN files can be very large. Parsing a complete file after changing only one character would be inefficient, in some cases unusable. The parser I built takes about 490ms for the initial parse of a 2M file. The parser then only needs about 150ms for a changed character in the same file, using the resulting tree of the previous parse.

The next big reason why I chose tree sitter is the error recovery. In the presence of syntax errors the parser can recover to a valid state and continue parsing with a prior rule. For example the following OMN file

```omn
Ontology:
    Class: ???????
    Class: other_thing
```

results in the S-expression (see @how_to_read_s_expression for how to read them)

```lisp
(source_file [0, 0] - [4, 0]
  (ontology [0, 0] - [3, 22]
    (ERROR [1, 4] - [1, 18])
    (class_frame [3, 4] - [3, 22]
      (class_iri [3, 11] - [3, 22]
        (simple_iri [3, 11] - [3, 22])))))
```

You can see that the first class frame contains a syntax error. The second class frame is valid, and the parser can pick up parsing after the erroneous first frame. Without this error recovery, the source code after a syntax error would not be checked for errors or would become invalid too. It would be impossible to show all syntax errors in a file.

// Rust bindings
Tree sitter comes with Rust bindings but also supports a number of other programming languages. I chose Rust. The programming language could offer me the safety, speed and comfort needed. Some notable alternatives are TypeScript and C++. I chose Rust over TypeScript because of performance. Rust compiles to native machine code and runs without a garbage collector, while TypeScript first gets transpiled into JavaScript and then runs on a virtual-machine-like JavaScript engine, e.g. V8. Modern JavaScript engines are fast enough, and this language server could be ported. One other benefit of TypeScript is the fact that it can be packaged into a Visual Studio Code plugin.// TODO is it possible to do the same thing using a rust binary? I will try the same thing using a rust binary.// TODO
C++ on the other hand is very fast but lacks the safety and comfort. This is not a strict requirement, and it would be a viable implementation language for this language server.// More about that in @rust_over_cpp.
But the Rust bindings, the cargo package manager and memory safety are excellent and guaranteed an efficient implementation. In hindsight, it was a good choice, and I recommend Rust for writing language servers.
// TODO reference rust book, typescript and c++ stuff
I came across tree sitter when I was researching what my own text editor uses for its syntax highlighting. It turned out that the editor Helix also uses tree sitter, just like GitHub.// TODO Reference for this?
It is popular, and the standardized syntax tree representation and grammar files make it ideal.

// Alternatives
Initially I wanted to work with Haskell and use the parser from spechub's Hets, but it uses Parsec and is sadly not an incremental parser. Also, it has no error recovery functionality that would be sufficient for text editors. There are similar reasons not to use the owlapi for parsing. //TODO better sentence than: Nobody would like to have a completely red underlined document in case of a syntax error in line one.
I then read about the Happy parser generator, which Haskell uses, and Alex, the tool for generating lexical analyzers. But the complexity of these tools put me off, and I also didn't know how to connect the different libraries with one another. The Protégé project uses the parser of the owlapi, which does no error recovery or incremental parsing.// TODO ref owl api
The package responsible for parsing in the owlapi is `org.semanticweb.owlapi.manchestersyntax.parser`.
// TODO more alternatives

For these reasons I ended up writing a custom parser. The next chapter will show how this was done.

=== Writing the grammar
Starting with the official reference of the OWL 2 Manchester syntax @OWLWebOntologya, I transformed the rules into tree sitter rules, replacing each construct with the corresponding tree sitter one. For example, I rewrote the following rule from

```bnf
ontologyDocument ::= { prefixDeclaration } ontology
```

into

```js
ontology_document: $ => seq(repeat($.prefix_declaration), $.ontology),
```

Tree sitter rules are always read from the `$` object and named in snake_case. Some are prefixed with `_`. We call these "hidden rules".
We call rules that are not prefixed "named rules", and we call terminal symbols (literals and regular expressions) "anonymous rules". For example, the rule

```javascript
_frame: $ =>
    choice(
        $.datatype_frame,
        $.class_frame,
        $.object_property_frame,
        $.data_property_frame,
        $.annotation_property_frame,
        $.individual_frame,
        $.misc,
    )
```

is a hidden rule, because `_frame` is a supertype of `class_frame`. These rules are hidden because they would add substantial depth and noise to the syntax tree.// TODO reference https://tree-sitter.github.io/tree-sitter/creating-parsers#hiding-rules

The transformations were done using the following table for reference. Each construct has a rule in the original reference and in the new tree sitter grammar.

#table(
  columns: (1fr, auto, auto),
  table.header([*Construct*], [*OWL BNF*], [*tree sitter grammar*]),
  // ---------------------
  [sequence], [```'{' literalList '}'```], [```js seq('{', $.literal_list, '}')```],
  // ---------------------
  [non-terminal symbols], [`ClassExpression`], [```js $.class_expression```],
  // ---------------------
  [terminal symbols], [```'PropertyRange'```], [```js 'PropertyRange'```],
  // ---------------------
  [zero or more], [```{ ClassExpression }```], [```js repeat($.class_expression)```],
  // ---------------------
  [zero or one], [```[ ClassExpression ]```], [```js optional($.class_expression)```],
  // ---------------------
  [alternative], [`Assertion | Declaration`], [```js choice($.assertion, $.declaration)```],
  // ---------------------
  [grouping], [```( restriction | atomic )```], [```js choice($.restriction, $.atomic)```],
)

I also, in a second step, transformed typical BNF constructs into more readable tree sitter rules.
These include

#table(
  columns: (2fr, 3fr, 3fr),
  table.header([*Construct*], [*OWL BNF*], [*tree sitter grammar*]),
  // ---------------------
  [separated by comma], [```<NT> { ',' <NT> }```], [```js sep1(<NT>, ',')```],
  // ---------------------
  [separated by "or"], [```<NT> { 'or' <NT> }```], [```js sep1(<NT>, 'or')```],
  // ---------------------
  [one or more], [```<NT> ',' <NT>List```], [```js repeat1(<NT>)```],
  // ---------------------
  [annotated list], [```[a] <NT> { ',' [a] <NT> }```], [```js annotated_list(a, <NT>)```],
)

Where `<NT>` is a non-terminal and `a` is the non-terminal called `annotations`. This is used for `<NT>List`, `<NT>2List`, `<NT>AnnotatedList` and every derivative that replaces `<NT>` with a real non-terminal. This results in the following example transformation:

```bnf
annotationAnnotatedList ::= [annotations] annotation { ',' [annotations] annotation }
```

will become

```js
annotation_annotated_list: $ => annotated_list($.annotations, $.annotation)
```

// TODO regex and where to stop parsing
There are limits on how precise your parser should be. The IRI rfc3987 format is part of the OWL2-MS specification but not simple in any way. I skipped parts of the IRI specification and put in some regexes that worked for me but not necessarily for you. For example, the IRI specification defines many small non-terminals.// TODO write more

// TODO i wrote tests to check that the parsing is correct
I wrote tests to see if my grammar and the resulting parser would produce the correct syntax tree for the given source code. This is done with tree sitter's CLI. More about tree sitter query testing in @query-tests.

I did have to change the grammar while developing. This was, of course, time-consuming, as I had to adapt all queries for the language server. Unfortunately, there is no type checking or other tool support. Everything is based on magic strings.
The simplest is syntax highlighting. Because the language server was developed with helix, a tree sitter focused editor, in mind, the syntax highlighting uses tree sitter queries. They use a modified version of the s-expression syntax and look something like this.

```scm
; highlights.scm

"func" @keyword
"return" @keyword
(type_identifier) @type
(int_literal) @number
(function_declaration name: (identifier) @function)
```

The file contains multiple queries that each have specially named captures. These arbitrary highlight names map to colors inside the editor theme. Some common highlight names are `keyword`, `function`, `type`, `property`, and `string`, but this is editor dependent. This language server uses `punctuation.bracket`, `punctuation.delimiter`, `keyword`, `operator`, `string`, `number`, `constant.buildin`, `variable` and `variable.buildin`.

Queries can also be used to extend the functionality of a text editor by supplying useful syntactic information. The grammar can provide text objects, indentations and other editor specific queries. The following queries are for folding in the owl-ms grammar.

```scm
[
  (ontology)
  (class_frame)
  (datatype_frame)
  (object_property_frame)
  (data_property_frame)
  (annotation_property_frame)
  (individual_frame)
  (misc)
  (sub_class_of)
  (equivalent_to)
  (disjoint_with)
  (disjoint_union_of)
  (has_key)
] @fold
```

They define which nodes are foldable. In this case it's where frames or properties of frames start. The same query, replacing the `fold` capture with `indent` and `extend` captures, is used for indents.

While developing the language server, the grammar had to change because the queries were getting more and more complicated. Changing the grammar is time-consuming because the language server depends on it. All relevant queries have to be adapted, and unfortunately there is no good static analysis for this.
// === Why use rust and not c or c++ <rust_over_cpp>
//
// TODO
// - memory safety
// - pointers
// - compile time
// - ease of use
// - arithmetic type system
// https://www.educative.io/blog/rust-vs-cpp#compare

The grammar distribution works through the GitHub repository https://github.com/janekx21/tree-sitter-owl-ms. It contains an NPM package, queries and bindings. The queries are for folds, highlights and indents; the bindings are for node and rust. The latter defines a crate which the language server imports as a submodule and uses as a local dependency.

== Getting started with the LSP specification and its Rust implementation

// TODO who defines the specification
Microsoft defines the LSP specification (Version 3.17) on a GitHub page./* TODO reference LSP specification */ The page describes the base protocol that is used and its RPC methods using typescript types. @lsp_lifecycle shows a typical LSP lifecycle.

#figure(
  caption: [An overview of the LSP lifecycle],
  kind: image,
)[
  ```pintora
  sequenceDiagram
    @param noteMargin 20
    Client->>Server: Start the Server
    activate Server
    @start_note left of Server
    The client creates a sub process and opens a stdio pipe for communication.
    The following communication is via json-rpc.
    @end_note
    Client->>+Server: initialize(params: InitializeParams)
    Server-->>-Client: result: InitializeResult
    Client->>Server: initialized(params: InitializedParams)
    Client->>Server: textDocument/didOpen(params: DidOpenTextDocumentParams)
    Server->>Client: textDocument/publishDiagnostics(params: PublishDiagnosticsParams)
    == Using the Language Server ==
    Client->>+Server: shutdown()
    Server-->>-Client: result: null
    Client-xServer: exit()
    deactivate Server
  ```
  // TODO ref springer book
]<lsp_lifecycle>

Not all LSP features can be listed here because many require more than one RPC call or are not relevant for this language server. That said, let us begin with the start of the language server.
The client creates a sub process of the LSP server executable using the executable's path and arguments. Some language servers use the `--stdio` command line argument to indicate that the communication will be done via the stdin and stdout pipes. The owl-ms-language-server uses the stdio pipes by default and does not support sockets.

When a message needs to be sent between server and client, they just write into the stdio pipe. The client uses stdin; the server uses stdout. They communicate using a simple protocol called json-rpc. The so-called base protocol consists of a header part and a content part.

```
Content-Length: ...\r\n
\r\n
{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        ...
    }
}
```

The `method` string and `params` object contain the data that is used for all language smarts. This format is used for requests, responses and notifications. However, the client should not expect a reply to a message. These are only remote procedure calls, not classic endpoints as we know them from servers.

// TODO maybe errors, notification, progress and cancelation
// TODO reference https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/

The first thing that happens in the communication after starting the language server is that the client sends an initialize request to the server. This contains meta information — who the client is, etc. — but the most important part is the client capabilities. Not every editor supports all LSP features, so it must be clarified here which features are relevant for the server. Until then, all other messages are prohibited, and a response to them is not guaranteed. The server responds with the initialize result, which also contains the server capabilities. Not every language server supports all features, so it must be communicated at this point which features the client can use. The client then sends an initialized notification to the server.
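To make the framing concrete, here is a small stand-alone sketch — not taken from the owl-ms code base — that wraps a content part in the base protocol's header; `frame_message` is a hypothetical helper name:

```rust
// Sketch (illustrative, not the owl-ms implementation): framing a
// json-rpc message with the base protocol's Content-Length header.
fn frame_message(content: &str) -> String {
    // The header counts the bytes of the content part, not characters.
    format!("Content-Length: {}\r\n\r\n{}", content.len(), content)
}

fn main() {
    let content = r#"{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}"#;
    let framed = frame_message(content);
    assert_eq!(framed, format!("Content-Length: 58\r\n\r\n{content}"));
    println!("{framed}");
}
```

Note that `Content-Length` counts bytes, so multi-byte characters in the content must be counted as bytes.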
Now the server and client are ready to start working. The handshake is complete.

A communication preferably ends with a shutdown request from the client. After an empty response from the server, the client sends a final exit notification, and this concludes the communication.

// TODO maybe dynamic client capabilities, set trace, log trace

The good thing is that we can leave out all these technical details when building a language server, because packages provide these functions. In the case of this server, the rust crate used is called "tower-lsp". // TODO ref tower-lsp

The next big point is synchronizing our document. The following chapters will deal with this, followed by documentation, diagnostics, hints and auto-completion.

== Text Document Synchronization

=== The `TextDocumentSyncKind` <sync_kind_incremental>

According to the specification, the text document can be synchronized in two ways. The first kind is `Full`: the entire document is always sent. The second kind is `Incremental`: only changes in the content of a document are sent.

It must be remembered here that an ontology file can become very large. The file I am working with, `oeo-full.omn`, is about 2 megabytes in size. When deciding whether to synchronize the entire document or just the changed parts, incremental changes should be preferred. Sending only the changes means less data, and the server learns where content changed for free, without diffing anything. This is very useful for incremental parsing and updating diagnostics, because it reduces the search space.

The content changes are sent in small snippets. Each snippet has a range and content. If something is added to the document, for example, the start and end of the range are the same and the content contains what has been added. If something is removed, the content is empty and the range indicates what is deleted. If something is replaced, there is a range and a content that contains the replacement.
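As a stand-alone illustration of these semantics, the following sketch applies one such change to a plain string. The real server applies changes to a rope and the old syntax tree, and LSP actually counts `character` in UTF-16 code units, which this ASCII-only sketch ignores; `Position`, `offset_of` and `apply_change` are illustrative names, not the lsp-types API.

```rust
// Sketch: applying a single incremental content change to a plain String.
// `Position`, `offset_of` and `apply_change` are illustrative names.
// Note: LSP counts `character` in UTF-16 code units; this ASCII-only
// sketch treats it as a byte offset within the line.

#[derive(Clone, Copy)]
struct Position {
    line: usize,
    character: usize,
}

/// Convert a line/character position into a byte offset into `text`.
fn offset_of(text: &str, pos: Position) -> usize {
    let line_start: usize = text
        .split_inclusive('\n') // keeps the '\n' so line lengths add up
        .take(pos.line)
        .map(str::len)
        .sum();
    line_start + pos.character
}

/// Replace the range `start..end` with `new_text`. An insertion has
/// `start == end`; a deletion has an empty `new_text`.
fn apply_change(text: &str, start: Position, end: Position, new_text: &str) -> String {
    let (a, b) = (offset_of(text, start), offset_of(text, end));
    format!("{}{}{}", &text[..a], new_text, &text[b..])
}

fn main() {
    let doc = "Class: pizza:Margherita\n";
    let start = Position { line: 0, character: 13 };
    let end = Position { line: 0, character: 23 };
    assert_eq!(apply_change(doc, start, end, "Napoletana"), "Class: pizza:Napoletana\n");
}
```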
Marginal amounts of data are transferred in most cases (single character insertion or deletion), compared to a full document. This is faster than transferring the entire document. You can find an example in @did_change_example.

=== `textDocument/didOpen` Notification

Now let's explore our first language server endpoint, the simplest one after the initialization. This notification is sent from the client to the server to inform the language server that a new text document was opened on the client. The parameter contains the text document. Those documents are identified by a URI (Uniform Resource Identifier). They are just strings and in most cases start with the file protocol, like `file:///home/janek/project/readme.md`. The format is specified by rfc3986, which is the same format that OWL uses.

```ts
interface DidOpenTextDocumentParams {
    textDocument: TextDocumentItem;
}

interface TextDocumentItem {
    uri: DocumentUri; // just a string
    languageId: string; // "owl-ms" in this case
    version: integer; // 0 in this case
    text: string;
}
```

The `textDocument/didOpen` endpoint is handled in the rust language server as a method of a `LanguageServer` trait called `did_open`. This trait is implemented for the struct `Backend`, which contains the whole state of the language server.

```rust
struct Backend {
    client: Client,
    parser: Mutex<Parser>,
    document_map: DashMap<Url, Document>, // DashMap is an async hash map
    // ...
}

#[tower_lsp::async_trait] // traits can not be async by default
impl LanguageServer for Backend {
    // ...
    async fn did_open(&self, params: DidOpenTextDocumentParams) {
        // Parse the document (locking the Mutex) from borrowed params.text_document.text
        // Create rope from params.text_document.text
        // Insert document into self.document_map
        // Create diagnostics from the parse
        // Extract and save information from the parse (eg. used for gotodefinition)
        // Send diagnostics to client
        // ...
    }
    // ...
}
```

This method does not return anything; it is a notification and exists only so that the language server registers that a file has been opened. The language server then parses the file and creates a rope data structure. It also creates diagnostics and sends them to the client. The interesting thing here is that rust's ownership rules show exactly where data is copied and where it is not. For example, the parsing happens on the original buffer/document, while the creation of the rope consumes the original. Then the rope and the tree are moved into the document map. This also consumes them, so it is a move, not a copy.

=== `textDocument/didChange` Notification <did_change>

This notification is sent from the client to the server to inform the language server about changes in a text document. Its parameter contains the text document identification and the content changes made to the document. The version should increase after each change. The content changes are ranges with texts. They work as explained in @sync_kind_incremental.

```ts
interface DidChangeTextDocumentParams {
    textDocument: VersionedTextDocumentIdentifier; // URI and version
    contentChanges: TextDocumentContentChangeEvent[];
}

export type TextDocumentContentChangeEvent = {
    range: Range;
    text: string;
}
```

The first thing that happens is that the server retrieves the document from its model using the URI from the parameter. Then the rope and the old syntax tree are modified. The rope then contains the same text document as the client. The old syntax tree moves its node positions. A reparse does not yet take place, but this step is necessary so that the old, unmodified nodes later fit into their new positions. Incidentally, the syntax tree does not store any text. It only contains the nodes. To get the text from the nodes, you still need a rope that fits the tree.
The old syntax tree (with the new node positions) and the new rope can then generate a new syntax tree using reparsing. The IRI map and diagnostics are then adapted and published. This is done by first removing the entries that overlap the changed positions and then generating and inserting new information in these positions (prune and extend). If the number of changes exceeds a threshold value, the IRI map and diagnostics are completely removed and regenerated. I don't know if this is really necessary. You can find an example in @did_change_example.

```rust
// inside impl LanguageServer
async fn did_change(&self, params: DidChangeTextDocumentParams) {
    // Get the document from self.document_map
    // Update the document's rope and syntax tree with the changes
    // Parse using the old tree with changed ranges (incremental parse)
    if use_full_replace {
        // Do a full replace instead of incremental when worth
    }
    // Prune and extend iri info map
    // Prune and extend diagnostics
    // Publish diagnostics async
}
```

=== `textDocument/didClose` Notification <did_close>

This notification is sent from the client to the server when a text document was closed on the client.

```rust
// inside impl LanguageServer
async fn did_close(&self, params: DidCloseTextDocumentParams) {
    self.document_map.remove(&params.text_document.uri);
}
```

== `textDocument/semanticTokens/full` Request

This request is sent from the client to the server to resolve all semantic tokens in a text document. The parameter contains the URI of the file, and the result contains a list of semantic tokens. The language server protocol does not define how syntax highlighting is done. Most editors do the highlighting with regular expressions; some use tree sitter queries.
Visual Studio Code uses TextMate grammars.// TODO ref https://code.visualstudio.com/api/language-extensions/syntax-highlight-guide
IntelliJ IDEA uses TextAttributeKeys2.// TODO ref https://plugins.jetbrains.com/docs/intellij/syntax-highlighting-and-error-highlighting.html
Helix uses the tree sitter queries of `highlights.scm`, a common syntax highlighting file that a grammar can optionally define.// TODO ref https://docs.helix-editor.com/guides/adding_languages.html

But the language server protocol does define how semantic tokens can be resolved. This is similar to syntax highlighting, with one big advantage: semantic tokens can capture language-specific semantic meaning that regular expressions cannot (it would be nearly impossible, and certainly infeasible).// TODO ref https://link.springer.com/content/pdf/10.1007/978-1-4842-7792-8.pdf
That said, the owl-ms-language-server uses semantic tokens for syntax highlighting.

// TODO <------------------------------ continue working here

```rust
// inside impl LanguageServer
async fn semantic_tokens_full(
    &self,
    params: SemanticTokensParams,
) -> Result<Option<SemanticTokensResult>> {
    // Get the document from self.document_map
    // Load the highlights query from the tree_sitter_owl_ms crate
    // Query the whole text document
    // Convert each match into a semantic token, using the capture name as the token type
    // Sort the tokens
    // Convert the token ranges from absolute to relative
    // Return the tokens
}
```

#figure(
  image("assets/screenshot_vscode_just_opened.svg", width: 80%),
  caption: [
    Visual Studio Code (editor-container) with the owl-ms plugin after opening the pizza ontology
  ],
)

== `hover`

#figure(
  image("assets/screenshot_vscode_hover.svg", width: 80%),
  caption: [
    Visual Studio Code (editor-container) with the owl-ms plugin after hovering the `pizza:NamedPizza` IRI
  ],
)

#lorem(100)

== `diagnostics`

#figure(
  image("assets/screenshot_vscode_diagnostics_1.svg", width: 80%),
  caption: [
    TODO
  ],
)

#figure(
  image("assets/screenshot_vscode_diagnostics_2.svg", width: 80%),
  caption: [
    TODO
  ],
)

// TODO diagnostics with multiple errors

#lorem(100)

//TODO: statically generated

== `inlay_hint`

#lorem(100)

// TODO screenshot

== `completions`

#lorem(100)

// TODO how the node kinds are converted into completion items
// TODO how the parent node is queried and used with static nodes

#figure(
  image("assets/screenshot_vscode_completion_iri.svg", width: 80%),
  caption: [
    TODO
  ],
)

#figure(
  image("assets/screenshot_vscode_completion_keyword.svg", width: 80%),
  caption: [
    TODO
  ],
)

== Used data structures

#lorem(50)

=== Rope

Strings are traditionally fixed-length arrays of characters, with or without additional space for expansion. These data structures are occasionally appropriate, but common operations do not scale on them. Ideally, performance should not be impacted much by long strings. Strings stored as one contiguous array of characters violate this requirement: every copy allocates a large chunk of memory, and character insertion, deletion or any operation that shifts the characters results in a copy, which would make any text editor intolerably slow. To obtain acceptable performance, special purpose data structures are needed to represent these strings.

Ropes make this practical, because they allow the concatenation of strings to be efficient in both space and time, by sharing the data structure of the result with its arguments. This is done with a tree in which each node represents the concatenation of all its child nodes left-to-right. Leaf nodes consist of flat strings; thus the tree represents the concatenation of all its leaf nodes. @rope_quick_brow_fox shows an example with the string "The quick brown fox".
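The sharing idea can be sketched with a minimal, deliberately unbalanced rope type; real rope implementations additionally balance the tree and cache subtree lengths, so this stand-alone sketch only shows how concatenation reuses structure instead of copying characters:

```rust
// Minimal illustration of the concatenation tree behind ropes (not a
// production rope: no balancing, no cached lengths, no editing).
use std::rc::Rc;

enum Rope {
    Leaf(String),
    // Children are reference counted, so a concatenation reuses the
    // existing subtrees instead of copying their contents.
    Concat(Rc<Rope>, Rc<Rope>),
}

impl Rope {
    fn leaf(s: &str) -> Rc<Rope> {
        Rc::new(Rope::Leaf(s.to_string()))
    }

    fn concat(left: Rc<Rope>, right: Rc<Rope>) -> Rc<Rope> {
        Rc::new(Rope::Concat(left, right))
    }

    /// Collect the leaf strings left-to-right.
    fn collect(&self) -> String {
        match self {
            Rope::Leaf(s) => s.clone(),
            Rope::Concat(l, r) => l.collect() + &r.collect(),
        }
    }
}

fn main() {
    // The example tree from the text: ("The qui" + "ck brown") + " fox"
    let left = Rope::concat(Rope::leaf("The qui"), Rope::leaf("ck brown"));
    let rope = Rope::concat(left, Rope::leaf(" fox"));
    assert_eq!(rope.collect(), "The quick brown fox");
}
```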
@boehmRopesAlternativeStrings1995

#figure(
  caption: "Rope representation of \"The quick brown fox\"",
)[
  #set align(center)
  #let data = ([concat], ([concat], [`"The qui"`], [`"ck brown"`]), ([`" fox"`]))
  #canvas(
    {
      import draw: *
      set-style(
        content: (padding: .2),
        fill: rgb("#f5f5f5"),
        line: (fill: gray.lighten(60%), stroke: gray.lighten(60%)),
      )
      tree.tree(
        data,
        spread: 4,
        grow: 1.5,
        draw-node: (node, ..) => {
          content(
            (),
            box(par(node.content), fill: rgb("#f5f5f5"), inset: 8pt, radius: 4pt),
          )
        },
        draw-edge: (from, to, ..) => {
          line((a: from, number: .8, b: to), (a: to, number: .8, b: from))
        },
        name: "tree",
      )
      // Draw a "custom" connection between two nodes
      // let (a, b) = ("tree.0-0-1", "tree.0-1-0",)
      // line((a, .6, b), (b, .6, a), mark: (end: ">", start: ">"))
    },
  )
]<rope_quick_brow_fox>

=== DashMap

#lorem(100)

== Optimizations

#lorem(100)

=== Rust Async with Tokio

#lorem(100)

=== LS State vs. on promise tree query = Analysis

#lorem(50)

== Automated Testing

//TODO why i tested
#lorem(100)

=== Query tests in tree sitter <query-tests>

#lorem(100)

=== Unit tests in rust

#lorem(100)

=== E2E tests using an LSP client

#lorem(100)

// TODO using vscode?

== Benchmarks

#lorem(100)

=== Experimental Setup

#lorem(100)

=== Results

#lorem(100)

== Evaluation of the usability

=== Experimental Setup

#lorem(100)

=== Results

#lorem(100)

//TODO who are the users
//TODO describe the usability
//TODO is the LSP fast enough

= Conclusion

// TODO
- It was hard to track each syntax thing like keywords and rules.
- Changing the grammar has a large impact.

#lorem(100)

== Performance

#lorem(100)

== Future Work

#lorem(100)

= Appendix

== How to read S-expressions <how_to_read_s_expression>

Symbolic expressions are expressions that represent tree structures. They were invented for and used in lisp languages, where they are data structures and source code. Tree sitter uses them, with some extended syntax, to display syntax trees and for queries.
An S-expression is either an atom like `x` or an S-expression of the form `(x y)`. A long list would be written as `(a (b (c (d NIL))))`, where `NIL` is a special end-of-list atom, but tree sitter unrolls those lists into `(a b c d)`.

```lisp
(root (leaf) (node (leaf) (leaf)))
```

This is an S-expression with abbreviated notation to represent lists with more than two members. The `root` is the root of the tree, `node` is a node with one parent and two children, and `leaf` nodes are tree leaves. This results in the following tree.

#figure(
  caption: "TODO",
)[
  #set align(center)
  #let data = ([root], ([leaf]), ([node], [leaf], [leaf]))
  #canvas(
    {
      import draw: *
      set-style(
        content: (padding: .2),
        fill: rgb("#f5f5f5"),
        line: (fill: gray.lighten(60%), stroke: gray.lighten(60%)),
      )
      tree.tree(
        data,
        spread: 4,
        grow: 1.5,
        draw-node: (node, ..) => {
          circle((), radius: .5, stroke: none)
          content((), node.content)
        },
        draw-edge: (from, to, ..) => {
          line(
            (a: from, number: .6, b: to),
            (a: to, number: .6, b: from),
            mark: (end: ">"),
          )
        },
        name: "tree",
      )
      // Draw a "custom" connection between two nodes
      // let (a, b) = ("tree.0-0-1", "tree.0-1-0",)
      // line((a, .6, b), (b, .6, a), mark: (end: ">", start: ">"))
    },
  )
]

Tree sitter uses a range syntax to show where the syntax tree nodes lie within the source code. The range is represented using a start and an end position, both written as zero-based row and column positions. For example, the following S-expression could be a syntax tree result from tree sitter.

```lisp
(root [0, 0] - [4, 0]
  (leaf [1, 4] - [1, 18])
  (node [3, 4] - [3, 22]
    (leaf [3, 4] - [3, 8])
    (leaf [3, 9] - [3, 21])))
```

// TODO ref https://tree-sitter.github.io/tree-sitter/using-parsers#query-syntax
A query is written with S-expressions and some special syntax for fields, anonymous nodes, capturing, quantification, alternations, wildcards, anchors and predicates. Here are some examples to give a short overview.

```lisp
(leaf)
```

Matches every `leaf` node.
```lisp
(node (leaf))
```

Matches every `node` node that has a `leaf` node.

```lisp
(node (leaf) @leaf-one (leaf) @leaf-two)
```

Matches every `node` that has two `leaf` nodes and captures the leaf nodes in `leaf-one` and `leaf-two`.

```lisp
(node [
  (leaf)
  (node)
])
```

Matches every `node` that has either a `leaf` node child or a `node` node child.

```lisp
(node (_ (leaf)))
```

Matches every `node` that has a child with a child that is a `leaf` node.

== Incremental change example <did_change_example>

Imagine a text document with the content below.

```owl-ms
Ontology: <http://www.co-ode.org/ontologies/pizza>
Class: pizza:Margherita
Annotations: rdfs:label "Margherita"@en
```

When the language server gets a request with the `textDocument/didChange` message from @did_change, the language server's model of the document is updated.

```json
{
    textDocument: <the identifier of the text file (like a URL)>,
    contentChanges: [
        {
            range: {
                start: { line: 3, character: 14 },
                end: { line: 3, character: 24 } // end position is exclusive
            },
            text: "Napoletana",
        },
        {
            range: {
                start: { line: 5, character: 21 },
                end: { line: 5, character: 35 } // end position is exclusive
            },
            text: "Napoletana",
        }
    ]
}
```

Applying this change will result in the following text document.

```owl-ms
Ontology: <http://www.co-ode.org/ontologies/pizza>
Class: pizza:Napoletana
Annotations: rdfs:label "Napoletana"@en
```

#heading(outlined: false, numbering: none)[Acknowledgments]

#bibliography("lib.bib", style: "ieee")

/*
Notes:

Possible Features:
- Auto completion with label
- Goto definition of keywords inside the lsp directory

Why did you build something like this, what would the alternatives have been, and why didn't you use them?
- At the beginning, research question: can you work with it fluently?
- At the end, answer the question
*/
https://github.com/rabotaem-incorporated/algebra-conspect-1course
https://raw.githubusercontent.com/rabotaem-incorporated/algebra-conspect-1course/master/sections/04-linear-algebra/03-permutations-cont.typ
typst
Other
#import "../../utils/core.typ": *

== Permutations

#ticket[Decomposition of a permutation into a product of transpositions and elementary transpositions]

#def[
    $M$ --- a set.

    A _permutation_ of the set $M$ is a bijection of $M$ onto itself.

    $S(M)$ = ${"permutations of" M}$

    $S(M) times S(M) -> S(M)$

    $(g, f) |-> g compose f$
]

#pr[
    $(S(M), compose)$ --- a group.
]

#proof[
    + Associativity is obvious.
    + $id_M$ --- the neutral element.
    + $f in S(M) ==> f^(-1) in S(M)$ --- the inverse element.
]

#def[
    $S_n$ --- the _symmetric group_ of degree $n$ _(the group of permutations of an $n$-element set)_.
]

#notice[
    $abs(S_n) = n!$
]

#example[
    $S_3 = {(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)}$
]

#def[
    _Cycles_ are permutations that move some subset of elements "around a circle". More formally: a _cycle_ is a $sigma in S$ such that:

    $ sigma(i_1) = i_2, space sigma(i_2) = i_3, space ..., sigma(i_(k - 1)) = i_k, space sigma(i_k) = i_1 $

    and also $sigma(i_j) = i_j$ for all $j in.not {1, 2, ..., k}$, where $k >= 2$ is the _length of the cycle_.
]

#examples[
    - $(123)$ --- the permutation $display(mat(1, 2, 3; 2, 3, 1))$,
    - $(34)$ --- the permutation $display(mat(1, 2, 3, 4; 1, 2, 4, 3))$.
]

#def[
    Cycles $(i_1 i_2 ... i_k)$ and $(j_1 j_2 ... j_l)$ are called _independent_ if $forall r, s : i_r eq.not j_s$.
]

#pr[
    Every permutation is a product of several pairwise independent cycles.
]

#proof[
    $i, sigma(i), sigma(sigma(i)), ...$ are all distinct, since $sigma$ is a bijection, so this gives an independent cycle.
]

#def[
    A cycle of length 2 is called a _transposition_.
]

#def[
    The transposition $(i, i + 1)$ is called an _elementary transposition_.
]

#pr[
    Every cycle $(i_1 i_2 ... i_k)$ decomposes into a product of transpositions $(i_1 i_(k)) dot (i_k i_(k - 1)) dot ... dot (i_3 i_2)$.
]

#pr[
    Every permutation is a product of elementary transpositions.
]

#proof[
    A permutation can be decomposed into a product of cycles, and a cycle into a product of transpositions.
    Transpositions, in turn, can be decomposed into a product of elementary transpositions.
]

#ticket[Parity and sign of a permutation]

#def[
    An _inversion_ in a permutation $sigma$ is a pair $(i, j)$, $i < j$, for which $sigma(i) > sigma(j)$.

    The number of inversions in a permutation is denoted $Inv(sigma)$.
]

#example[
    $Inv((123)) = 2$.
]

#def[
    A permutation is called _even_ if it has an even number of inversions, and _odd_ otherwise.
]

#def[
    The _sign_ of a permutation is
    $ sgn(sigma) = cases(1\, & "if" sigma "is even", -1\, space & "if" sigma "is odd") $
]

#lemma[
    If a permutation is multiplied on the left by a transposition, its sign changes to the opposite, that is, $sgn((i, j) compose sigma) = -sgn(sigma)$.
]

#proof[
    The parity of the number of inversions involving $sigma(i)$ and $sigma(j)$ does not change, since every element between $i$ and $j$ changes its number of inversions an even number of times. Accordingly, only the inversion between $i$ and $j$ changes.
]

#follow[
    The parity of a permutation equals the parity of the number of transpositions in its decomposition.
]

#proof[
    $sgn((i_1 j_1)(i_2 j_2)...(i_k j_k)) = (-1)^k$
]

#exercise[
    Prove that $sgn(sigma compose (i, j)) = -sgn(sigma)$.
]

#follow[
    Let $sigma, tau in S_n$. Then:

    $sgn(sigma tau) = sgn(sigma) dot.c sgn(tau)$

    $sgn(sigma^(-1)) = sgn(sigma)$
]

#def[
    The _set of even permutations_ $A_n = {sigma in S_n bar space.hair sgn sigma = 1}$.

    Note that $A_n$ is closed under composition, and since $sgn(A_n) = sgn(A_n^(-1))$, $A_n$ is a subgroup of $S_n$.
]

#pr[
    Let $n >= 2$. Then $abs(A_n) = (n!) / 2$.
]

#proof[
    Consider
    $ A_n &limits(-->)^Phi S_n without A_n & #h(3em) S_n without A_n &limits(-->)^Psi A_n \ sigma & arrow.long.bar sigma (1 2) & sigma & arrow.long.bar sigma(1 2) $

    Note that
    $ cases(Psi compose Phi = sigma(1 2)(1 2) = id_(A_n), Phi compose Psi = id_(S_n without A_n)) $

    So $Phi$ is a bijection and $abs(S_n without A_n) = abs(A_n) ==> abs(A_n) = (n!) / 2$.
]
https://github.com/donRumata03/aim-report
https://raw.githubusercontent.com/donRumata03/aim-report/master/lib/presentation-template.typ
typst
#import "@preview/polylux:0.3.1": * #import "pdfpc.typ" #import themes.simple: title-slide, logic #import "presentation-utilities.typ": timeblock, codeblock, superblock, set-code, set-code-for-bad-projector #import "todos.typ": * #import "generic-utils.typ": * #let simple-theme( aspect-ratio: "16-9", background: white, foreground: black, body ) = { set page( paper: "presentation-" + aspect-ratio, margin: 1em, header: none, footer: none, fill: background, ) set text(fill: foreground, size: 25pt) show footnote.entry: set text(size: .6em) // show heading.where(level: 2): set block(below: 2em) // set outline(target: heading.where(level: 1), title: none, fill: none) //show outline.entry: it => it.body // show outline: it => block(inset: (x: 1em), it) show emph: set text(fill: red.darken(20%)) show raw: set text(font: "Fira Code") body } #let slide(body) = { body = if "heading" in repr(body){ show heading: it => it + v(1fr) body } else { v(1fr) + body } let deco-format(it) = text(size: .6em, fill: gray, it) set page( footer: deco-format({ locate(loc => { let sections = query(heading.where(level: 1, outlined: true).before(loc), loc) if sections == () [] else { set text(size: 2em) deco-format(sections.last().body) } h(1fr); logic.logical-slide.display() + [/] + str(logic.logical-slide.final(loc).at(0)) }); }), footer-descent: -0.1em, header-ascent: 1em, ) logic.polylux-slide(body + v(1fr)) } #let authors_d = ("V": "<NAME>", "I": "<NAME>", "A": "<NAME>") #let comm(body) = text(gray, body) #let template(body, name: "", authors: "AIV", time: 75, bad-projector: true) = { set text(lang: "ru") show link: set text(blue) show: simple-theme set raw(lang: "rs") show raw.where(block: false): box.with(fill: gray.lighten(90%), outset: (y: 5pt, x: 1pt), radius: 5pt) show raw.where(block: true, lang: "rs"): codeblock show raw.where(block: true, lang: "cpp"): codeblock show raw.where(lang: "error"): it => codeblock(nums: false, raw(it.text, lang: "rs")) show: set-code-for-bad-projector 
let comm(body) = text(gray.darken(50%), body) pdfpc.config(duration-minutes: time, last-minutes: 10) title-slide[ #heading(level: 1, name) #v(2em) #for a in authors { authors_d.at(a) h(1em) } #datetime.today().display("[day]-[month]-[year]") #place(bottom+right, comm[NSS Lab ITMO]) ] body } #let png-ferris = ("point", "laptop") #let ferris(size: 40%, type: "point") = { let ext = if type in png-ferris {".png" } else {".svg"}; box(image("../res/ferris-" + type + ext, height: size), baseline: 50%, inset: 1em) } #let notice(body, head: "", size: 0.5, fill: red.lighten(90%), inset: 0.2, padding: 1, type: "point") = align(center, block( stroke: red, fill: fill, radius: size * 1em, width: 100%, inset: size * inset * 1em, grid(columns: 2, ferris(size: size * 35%, type: type), pad(rest: size * padding * 1em, { set align(center + horizon) strong(head) " " body } ) ) )) #let _s_align = align; // #let question(ask: "", size: 1.0, align: right, dx: 0pt, dy: 0pt) = place(align, dx: dx, dy: dy, grid( _s_align(right, ferris(size: 25%*size, type: "question")), text(red)[#ask]) ) #let panics(ask: "", size: 1.0, align: right, dx: 0pt, dy: 0pt) = place(align, dx: dx, dy: dy, grid( _s_align(right, ferris(size: 25%*size, type: "panic")), text(red)[#ask]) ) #let livecode(size: 1.3, align: right, dx: 0pt, dy: 0pt) = place(align, dx: dx, dy: dy, ferris(size: 25%*size, type: "laptop"))
https://github.com/Shuenhoy/modern-zju-thesis
https://raw.githubusercontent.com/Shuenhoy/modern-zju-thesis/master/utils/structure.typ
typst
MIT License
#let frontmatter(s) = { set page(numbering: "I") counter(page).update(1) s } #let mainmatter(s) = { set page(numbering: "1") counter(page).update(1) s }
https://github.com/DaAlbrecht/thesis-TEKO
https://raw.githubusercontent.com/DaAlbrecht/thesis-TEKO/main/content/Requirements.typ
typst
#import "@preview/tablex:0.0.5": tablex, cellx #import "@preview/codelst:1.0.0": sourcecode #show figure.where(kind: raw): set block(breakable: true) The following Stakeholders are identified: #figure( tablex( columns: (auto, 1fr, 2fr), rows: (auto), align: (center + horizon, center + horizon, left), [*ID*], [*Stakeholder*], [*Description*], [DEV], [Developer], [The microservice developer], [CUS], [Customer], [The customer of Integon, who wants to use the microservice], [OPS], [Operations], [The team who is responsible for deploying and operating the microservice], [INT], [Integon], [The company Integon], [OSS], [Open source software Community], [The microservice could be published as open-source software], ), kind: table, caption: [Stakeholders and their abbreviation], ) #pagebreak() == Stakeholder interview #include "../personal/interview.typ" == Stakeholder requirements<stakeholder_requirements> #figure( tablex( columns: (auto, auto, 1fr), rows: (auto), align: (center + horizon, center + horizon, left), [*ID*], [*Trace from*], [*Description*], [STR-1], [DEV], [The communication with RabbitMQ should be done with an existing library], [STR-2], [DEV], [The microservice should be written in a programming language that is supported by Integon], [STR-3], [OPS], [The microservice should be deployable in a containerized environment], [STR-4], [OPS], [The microservice should be resource-efficient], [STR-5], [OPS], [The microservice should be easy to operate], [STR-6], [OPS], [The microservice should be easy to monitor], [STR-7], [OPS,INT], [The microservice should be easy to integrate into existing monitoring systems], [STR-8], [OPS,INT], [The microservice should be easy to integrate into existing logging systems], [STR-9], [OPS,INT], [The microservice should be easy to integrate into existing alerting systems], [STR-10], [OPS,CUS,INT], [The microservice should be easy to integrate into existing deployment systems], [STR-11], [CUS], [The microservice can replay messages 
from a specific queue and time range], [STR-12], [INT], [The microservice should be easy to maintain], [STR-13], [INT], [The microservice should be easy to extend], [STR-14], [INT], [The microservice should be easy to test], [STR-15], [INT], [The microservice should fit into the existing architecture of different customers], [STR-16], [INT,OPS], [The microservice should be easy to integrate into existing CI/CD pipelines], [STR-17], [OSS], [The microservice should be easy to contribute to], [STR-18], [OSS], [The microservice should be easy to understand], [STR-19], [OPS], [The microservice should be traceable], [STR-20], [CUS], [The microservice can replay messages from a specific transaction ID and queue], [STR-21], [CUS, OSS], [The microservice can list all messages in a given time range and queue], ), kind: table, caption: [Stakeholder Requirements], ) #pagebreak() == System architecture and design In this section, a high-level overview of the system architecture and design is given. This is not the implementation architecture of the microservice itself, but the architecture of the microservice and its points of contact with other potential systems according to the stakeholder requirements. This high-level overview of the architecture is used to derive the concrete system requirements. #figure( image("../assets/high_level_design.svg"), kind: image, caption: [System architecture], ) === External interfaces The microservice needs to interact with different systems to be compliant with the stakeholder requirements @stakeholder_requirements. #linebreak() The following table identifies the external interfaces of the microservice. 
#figure( tablex( columns: (auto, auto, 1fr, 2fr), rows: (auto), align: (center + horizon, center + horizon, center + horizon, left), [*ID*], [*Trace from*], [*Name*], [*Description*], [EXT-1], [STR-1], [RabbitMQ], [RabbitMQ is used as the messaging broker], [EXT-2], [STR-2], [RabbitMQ client], [The microservice uses a RabbitMQ client library to communicate with RabbitMQ], [EXT-3], [STR-3], [OCI], [The microservice is deployed in a containerized environment, and therefore should be compliant with the OCI specification], [EXT-4], [STR-7], [Prometheus], [Prometheus needs to be supported as a metrics backend], [EXT-5], [STR-8], [Stdout], [Stdout needs to be supported as a logging target], [EXT-6], [STR-19], [Tracing], [Tracing needs to be supported], ), kind: table, caption: [External interfaces], ) === Data flow The API expects to receive a request with a message ID. The message ID is used to identify the message in the RabbitMQ queue. The microservice then sends a request to RabbitMQ to requeue the message. The following figure shows the request flow. #figure( image("../assets/request_flow.svg", width: 90%), kind: image, caption: [Request flow], ) In both the in- and outflow, the microservice needs to aggregate observability data according to the following table. 
#figure( tablex( columns: (auto,auto,auto, 1fr), rows: (auto), align: (center + horizon,center + horizon,center + horizon,left), [*ID*], [*Trace from*], [*Category*], [*Description*], [OBS-1], [EXT-4], [Metrics], [CPU usage], [OBS-2], [EXT-4], [Metrics], [Memory usage], [OBS-3], [EXT-4], [Metrics], [Network usage], [OBS-4], [EXT-4], [Metrics], [Request duration], [OBS-5], [EXT-4], [Metrics], [Request size], [OBS-6], [EXT-4], [Metrics], [Response size], [OBS-7], [EXT-4], [Metrics], [Response duration], [OBS-8], [EXT-4], [Metrics], [Response code], [OBS-9], [EXT-4], [Metrics], [Response error], [OBS-10], [EXT-5], [Logs], [Request body], [OBS-11], [EXT-5], [Logs], [Response body], [OBS-12], [EXT-5], [Logs], [Request headers], [OBS-13], [EXT-5], [Logs], [Response headers], [OBS-14], [EXT-5], [Logs], [Request message ID], [OBS-15], [EXT-5], [Logs], [Response message ID], [OBS-16], [EXT-6], [Tracing], [Request trace], [OBS-17], [EXT-6], [Tracing], [Response trace], ), kind: table, caption: [Observability data], )<observability_data> #pagebreak() === OpenAPI specification<openapi_specification> The microservice needs to have the following API specification: #figure( sourcecode()[ ```yaml openapi: 3.0.1 info: version: 1.0.0 title: RabbitMQ Replay API paths: /replay: get: summary: Retrieve data from a specified time range and queue. parameters: - name: from in: query description: Start timestamp (inclusive). required: false schema: type: string format: date-time - name: to in: query description: End timestamp (exclusive). required: false schema: type: string format: date-time - name: queueName in: query description: Name of the queue. required: true schema: type: string responses: '200': description: Successful retrieval of data. content: application/json: schema: $ref: '#/components/schemas/Message' '500': description: Internal server error. 
content: application/json: schema: type: object properties: error: type: string post: summary: Submit timestamps, a transaction ID, and a queue for replay. requestBody: description: Data to submit for replay. required: true content: application/json: schema: oneOf: - type: object properties: from: type: string format: date-time to: type: string format: date-time queueName: type: string - type: object properties: transactionId: type: string queueName: type: string responses: '201': description: Successful replay. content: application/json: schema: $ref: '#/components/schemas/Message' '400': description: Bad request. Neither timestamps nor transactionId submitted. '404': description: Transaction ID not found. '500': description: Internal server error. content: application/json: schema: type: object properties: error: type: string components: schemas: TransactionHeader: type: object properties: name: type: string value: type: string Message: type: object properties: offset: type: integer format: int64 transaction: $ref: '#/components/schemas/TransactionHeader' timestamp: type: string format: date-time data: type: string ```], caption: [OpenAPI specification], ) #pagebreak() == System Requirements #figure( tablex( columns: (auto, auto, 1fr), rows: (auto), align: (center + horizon, center + horizon, left), [*ID*], [*Trace from*], [*Description*], [REQ-1], [STR-1], [The RabbitMQ client library needs to be actively maintained and supported], [REQ-2], [STR-2, STR-12, STR-13, STR-16], [The microservice needs to be written in Rust, Go or Java], [REQ-3], [STR-3, STR-10, STR-15, STR-16], [The microservice needs to be compliant with the OCI specification], [REQ-4], [STR-4], [The microservice should not use more than 500MB of memory and 0.5 CPU cores when idle], [REQ-5], [STR-5, STR-11], [The microservice provides an OpenAPI specification for its API], [REQ-6], [STR-6, STR-7, STR-15], [The microservice provides Prometheus metrics according to @observability_data], [REQ-7], 
[STR-8, STR-15], [The microservice logs to stdout according to @observability_data], [REQ-8], [STR-9, STR-15], [The microservice provides a health endpoint], [REQ-9], [STR-14], [The microservice provides unit tests], [REQ-10], [STR-16], [The microservice has no dependencies on other systems], [REQ-11], [STR-17], [The microservice is published as open-source software, and a contribution guide is provided], [REQ-12], [STR-18], [The microservice provides a README.md file with a description of the microservice and its API], [REQ-13], [STR-19], [The microservice provides a trace header for tracing according to @observability_data], [REQ-14], [STR-20], [A transaction ID can be submitted to the microservice to replay messages from a specific transaction ID and queue], [REQ-15], [STR-11], [A time range can be submitted to the microservice to replay messages from a specific time frame and queue], [REQ-16], [STR-21], [A time range and queue can be submitted to the microservice to list all messages in a given time range and queue], ), kind: table, caption: [System Requirements], ) #pagebreak()
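The OpenAPI specification above encodes three validation rules for `POST /replay`: the body must carry either a time range or a transaction ID (400 otherwise), an unknown transaction ID yields 404, and a valid request yields 201. As an illustration only — the function name and the in-memory transaction set are assumptions, not part of the thesis code — the rules can be sketched as a pure function:

```python
def validate_replay(payload, known_transactions=frozenset()):
    """Map a POST /replay body to an HTTP status code,
    mirroring the rules stated in the OpenAPI specification."""
    if "queueName" not in payload:
        return 400  # queueName is required in both request variants
    has_range = "from" in payload and "to" in payload
    has_txn = "transactionId" in payload
    if not has_range and not has_txn:
        return 400  # neither timestamps nor transactionId submitted
    if has_txn and payload["transactionId"] not in known_transactions:
        return 404  # transaction ID not found
    return 201  # successful replay
```

A real handler would of course also parse the timestamps and talk to RabbitMQ; this sketch only captures the branching described by the response codes.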
https://github.com/Functional-Bus-Description-Language/Specification
https://raw.githubusercontent.com/Functional-Bus-Description-Language/Specification/master/src/main.typ
typst
#set text( //font: "Open Sans", size: 11pt ) #set document( title: [Functional Bus Description Language - Specification], author: "<NAME>" ) #import "vars.typ" #include "cover.typ" #set page( numbering: "1", header: [ #text(9pt)[Rev. #vars.rev] #h(1fr) #text(9pt)[FBDL Specification] ] ) #set heading( numbering: "1.1.1" ) #set page(footer: { set text(9pt) align(center, context counter(page).display()) }) #set par(justify: true) #outline(indent: 1em) #set raw(syntaxes: "fbdl.sublime-syntax") #include "participants.typ" #include "glossary.typ" #include "overview.typ" #include "references.typ" #include "concepts.typ" #include "lexical-elements.typ" #include "data-types.typ" #include "expressions.typ" #include "functionalities/functionalities.typ" #include "parametrization.typ" #include "scope-and-visibility.typ" #include "grouping.typ"
https://github.com/TOMATOFQY/MyChiCV
https://raw.githubusercontent.com/TOMATOFQY/MyChiCV/main/chicv.chinese.typ
typst
MIT License
#import "fontawesome.typ": * #let italic_fonts = ("Source Han Serif SC VF") #let title_fonts = ("FZJinLS-B-GB") #let fonts = ( // "Source Han Serif SC VF", // "FZJinLS-B-GB", "Songti SC", "Avenir Next LT Pro", // original chi-cv font\ "Manrope", // a font available in the typst environment and looks similar to Avenir "Apple Color Emoji", "SF Pro", "Font Awesome 6 Free Solid" ) #let chiline() = { v(-3pt); line(length: 100%, stroke: gray); v(-10pt) } #let iconlink( uri, word: "", icon: link-icon) = { link(uri)[ #fa[#icon] #word ] } #let landr( tl: lorem(2), tr: "YYYY/MM - YYYY/MM" ) = { text(font:fonts,weight: "bold" ,tl) + h(1fr) + tr } #let cventry( tl: lorem(2), tr: "2333/23 - 2333/23", bl: "", br: "", content ) = { show text:it => { text(font:fonts, it) } show strong: it => { text(font:fonts,weight:"bold", it.body) } show emph : it => { text(font:italic_fonts,weight:"regular", it.body) } block( inset: (left: 0pt), text(font:fonts,weight: "black" ,tl) + h(1fr) + tr + linebreak() + if bl != "" or br != "" { bl + h(1fr) + br + linebreak() } + content ) } #let chicv(body) = { set par(justify: true) show heading.where( level: 1 ): set text( size: 22pt, font: fonts, weight: "black", ) show heading.where( level: 2 ): it => text( size: 12pt, font: fonts, // customed for chinese heading weight: "black", block( chiline() + it, ) ) set list(indent: 0pt) show link: it => underline(stroke:1pt,evade: true,offset: 2pt,extent:0pt, it) set page( margin: (x: 0.9cm, y: 1.1cm), ) set par(justify: true) body }
https://github.com/tingerrr/hydra
https://raw.githubusercontent.com/tingerrr/hydra/main/src/util.typ
typst
MIT License
#import "/src/util/assert.typ" #import "/src/util/core.typ": *
https://github.com/bejaouimohamed/labs-EI2
https://raw.githubusercontent.com/bejaouimohamed/labs-EI2/main/Lab%20%233%20Web%20Application%20with%20Genie/Lab-3.typ
typst
#set heading(numbering: "1.") #import "Class.typ": * #show: ieee.with( title: [#text(smallcaps("Lab #3: Web Application with Genie"))], /* abstract: [ #lorem(10). ], */ authors: ( ( name: "<NAME>", department: [Dept. of EE], organization: [ISET Bizerte --- Tunisia], profile: "bejaouimohamed", ), ( name: "<NAME>", department: [Dept. of EE], organization: [ISET Bizerte --- Tunisia], profile: "Gharbijamila", ), ) // index-terms: (""), // bibliography-file: "Biblio.bib", ) = Introduction In this report, we will explain what we have done to add to the previous basic web application two extra sliders to change phase and offset which also modify the behaviour of the sine wave graph. = Sine Wave Control == Julia coding To the previous app.jl file , we have add two inputs *phase* and *offset*.Its types are Float64 and Float32 and default value is 0. Also,we have add their names after *onchange* so we can control them as we wish. This work is shown in code below : #let code=read("../Codes/web-app/app.jl") #raw(code, lang: "julia") == HTML coding For app.jl.html file, we have add two sliders : === The phase - Firstly , we have links the slider's value to a variable named *ph*. - Secondly , we have sets the minimum value of the slider to $-pi$. - Thridly , we have sets the maximum value of the slider to $pi$. - Fourthly , we have sets the step increment of the slider to $pi/100$. - In the end ,we specifies that labels should be displayed on the slider. === The offset - Firstly , we have links the slider's value to a variable named *off*. - Secondly , we have sets the minimum value of the slider to $-0.5$. - Thridly , we have sets the maximum value of the slider to $1$. - Fourthly , we have sets the step increment of the slider to $0.1$. - In the end ,we specifies that labels should be displayed on the slider. 
This is shown in the HTML code below: #let code=read("../Codes/web-app/app.jl.html") #raw(code, lang: "html") == Graphical interface After checking the app.jl and app.jl.html code, we opened the VS Code terminal, started Julia, and entered the commands below to run the web application with the GenieFramework. ```julia julia> using GenieFramework julia> cd("C:/Users/bejao/OneDrive/Bureau/infodev-main/Codes/web-app") julia> Genie.loadapp() julia> up() ``` `using GenieFramework` imports the GenieFramework module into the Julia environment, giving access to the functionality provided by the Genie web framework. `cd("C:\\Users\\bejao\\OneDrive\\Bureau\\infodev-main\\Codes\\web-app")` sets the current working directory in Julia to the path where our web application is located. `Genie.loadapp()` loads the web application defined in the current directory into the Genie framework: it applies the necessary configuration and initializes the application. `up()` starts the web server, making our web application accessible through a web browser. Once the server is up and running, we can navigate to the specified URL to interact with the web application and control its parameters. We can now open the browser and navigate to the link #highlight[#link("http://127.0.0.1:8000")[http://127.0.0.1:8000]]. We get the updated graphical interface where, in addition to the amplitude and frequency of the sine wave, we can now also control the phase and the offset, as shown in @fig:genie-updated. #figure( image("Images/Genie-sinewave.png", width: 100%, fit: "cover"), caption: "Genie -> Old Sine Wave", ) <fig:genie-webapp> #figure( image("Images/2.png", width: 100%), caption: "Genie -> Updated Sine wave", ) <fig:genie-updated>
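Independent of Genie, the effect of the four slider parameters on the plotted curve is simply y(t) = amp · sin(2πf·t + phase) + offset. A minimal, language-neutral sketch of that relationship (the function name and sampling grid are ours; the lab's real computation lives in app.jl):

```python
import math

def sine_samples(amp, freq, phase, offset, n=8, t_max=1.0):
    # Sample y(t) = amp * sin(2*pi*freq*t + phase) + offset at n points in [0, t_max),
    # i.e. exactly the quantities the four sliders control.
    return [amp * math.sin(2 * math.pi * freq * (i * t_max / n) + phase) + offset
            for i in range(n)]
```

Changing `offset` shifts every sample vertically, while `phase` shifts the wave horizontally — which is what the updated interface shows.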
https://github.com/thanhdxuan/dacn-report
https://raw.githubusercontent.com/thanhdxuan/dacn-report/master/report-week-5/contents/05-use-case-diagram.typ
typst
= Use-case diagram == Use-case diagram for the whole system #block( width: 100%, inset: 10pt, radius: 3pt, stroke: rgb(87, 127, 230), fill: rgb(87, 127, 230, 40), par( justify: true, "See the team's detailed use-case diagram at: " + link("https://drive.google.com/file/d/1EtNV2Po7mtwW4zg2rioz8HqAY8JeO-W2/view?usp=sharing") ) ) #figure( image("../images/DACN-Whole-system.jpg", width: 100%), caption: [ Use-case diagram for the whole system. ], ) #pagebreak() == Use-case diagram and specification for LOAN APPLICATION MANAGEMENT #figure( image("../images/DACN-manage-loan.jpg", width: 100%), caption: [ Use-case diagram for the LOAN APPLICATION MANAGEMENT feature. ], ) #let use-case-header = ( "Use-case Name", "Actors", "Description", "Trigger", "Pre-Conditions", "Post-Conditions", "Normal Flow", "Alternative Flow", "Exception Flow", "Constraints" ) #let use-cases = ( new-: ( name: "<NAME>", actor: "", des: "", trigger: "", preconds: "", postconds: "", norflow: "", alterflow: "", exceptflow: "", constraint: "" ), new-loan: ( name: "<NAME>", actor: "Admin", des: "The admin creates a new loan application based on the profile the user filed at the bank.", trigger: "The admin clicks the 'Create new application' button in the 'Loan application management' menu.", preconds: [ - The admin has successfully logged into the system and has permission to access the "<NAME>" feature. ], postconds: "The data of the new application created by the admin is saved to the database and shown in the 'Unprocessed applications' list", norflow: [ + The admin selects 'Create new application' in the 'Loan application management' menu. + The system opens the 'Create new application' window, which contains a form requesting the required information. + The admin fills the required information into the form and clicks 'Create'. + The system validates the input, inserts the new application into the database, and uses the model selected by the admin to make a prediction based on the provided information. + The system reports that the new application was created successfully. ], alterflow: "None", exceptflow: [ *E1: At step 2* \ 2.1 The admin clicks the 'Cancel' button. \ 2.2 The system hides the new-application form. \ *E2: At step 4* \ 4.1 The system detects that the admin has not selected a prediction model / has not filled in all required fields. \ 4.2 The system shows a warning and asks the admin to complete all required steps. \ ], constraint: [ The newly created application must appear first in the list of unprocessed applications. ] ), manage-unprocessed-loan: ( name: "Manage unprocessed applications", actor: "Admin", des: "As an admin, I want to review applications that have been created but not yet processed, so that new applications can be managed easily.", trigger: "The admin selects the 'Unprocessed applications' tab in the 'Loan application management' menu.", preconds: [ - The admin has successfully logged into the system and has permission to access the "Loan application management" feature. ], postconds: [ The admin can view the information of the applications that exist in the system but have not yet been processed. ], norflow: [ + The admin selects "Unprocessed applications" in the "Loan application management" menu. + The system displays the list of applications in the system whose status is "Unprocessed", sorted by creation date. \ For each application, besides the basic information, the system shows key fields such as the status, the creation date, and the model's prediction result. ], alterflow: [None], exceptflow: [None], constraint: [None] ), processing-loan: ( name: "<NAME>", actor: "Admin", des: "The admin processes the loan applications (approve the loan / reject it)", trigger: [ The admin clicks the "Process" button on an unprocessed application in the "Unprocessed applications" tab of the "Loan application management" menu. ], preconds: [ - The admin has successfully logged into the system and has permission to access the "Loan application management" feature. ], postconds: [ - The admin processes the application successfully; its status changes from "Unprocessed" to "Approved" or "Rejected" - The data is updated in the database. - The application moves from the "Unprocessed applications" tab to the "Processed applications" tab. ], norflow: [ + The admin clicks the "Process" button of any application in the list shown in the "Unprocessed applications" tab. + The system opens the "Process application + application ID" window and shows the application's basic information. + The admin clicks the "Approve" or "Reject" button in the window + The system shows a message that the application was processed successfully. ], alterflow: [ *A2: At step 2* \ 2.1. The admin clicks the "Change prediction model" button.\ 2.2. The system lists the available models.\ 2.3. The admin selects the desired model.\ 2.4. The system recomputes and returns the prediction result to the screen.\ ], exceptflow: [ *E1: The admin clicks the "Cancel" button* \ The system closes the window and returns to the "Unprocessed applications" tab. ], constraint: [ - Newly processed applications appear at the top of the "Processed applications" tab. ] ), ) *CREATE A NEW LOAN APPLICATION* #table( columns: (0.5fr, 2fr), inset: 10pt, align: horizon, use-case-header.at(0), use-cases.new-loan.name, use-case-header.at(1), use-cases.new-loan.actor, use-case-header.at(2), use-cases.new-loan.des, use-case-header.at(3), use-cases.new-loan.trigger, use-case-header.at(4), use-cases.new-loan.preconds, use-case-header.at(5), use-cases.new-loan.postconds, use-case-header.at(6), use-cases.new-loan.norflow, use-case-header.at(7), use-cases.new-loan.alterflow, use-case-header.at(8), use-cases.new-loan.exceptflow, use-case-header.at(9), use-cases.new-loan.constraint, ) *MANAGE UNPROCESSED APPLICATIONS* #table( columns: (0.5fr, 2fr), inset: 10pt, align: horizon, use-case-header.at(0), use-cases.manage-unprocessed-loan.name, use-case-header.at(1), use-cases.manage-unprocessed-loan.actor, use-case-header.at(2), use-cases.manage-unprocessed-loan.des, use-case-header.at(3), use-cases.manage-unprocessed-loan.trigger, use-case-header.at(4), use-cases.manage-unprocessed-loan.preconds, use-case-header.at(5), use-cases.manage-unprocessed-loan.postconds, use-case-header.at(6), use-cases.manage-unprocessed-loan.norflow, use-case-header.at(7), use-cases.manage-unprocessed-loan.alterflow, use-case-header.at(8), use-cases.manage-unprocessed-loan.exceptflow, use-case-header.at(9), use-cases.manage-unprocessed-loan.constraint, ) *PROCESS APPLICATION* #table( columns: (0.5fr, 2fr), inset: 10pt, align: horizon, use-case-header.at(0), use-cases.processing-loan.name, use-case-header.at(1), use-cases.processing-loan.actor, use-case-header.at(2), use-cases.processing-loan.des, use-case-header.at(3), use-cases.processing-loan.trigger, use-case-header.at(4), use-cases.processing-loan.preconds, use-case-header.at(5), use-cases.processing-loan.postconds, use-case-header.at(6), use-cases.processing-loan.norflow, use-case-header.at(7), use-cases.processing-loan.alterflow, use-case-header.at(8), use-cases.processing-loan.exceptflow, use-case-header.at(9), use-cases.processing-loan.constraint, ) *UPDATING ...*
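The "process application" use case above boils down to a one-step status machine: only an unprocessed application may be approved or rejected, and a processed one never changes again. A hypothetical sketch of that invariant (the function and status names are ours, chosen to match the use-case wording, not taken from the project's code):

```python
# Allowed transitions per the processing-loan use case:
# only "Unprocessed" applications can be decided on.
VALID_TRANSITIONS = {
    "Unprocessed": {"Approved", "Rejected"},
}

def process_application(status, decision):
    """Return the application's new status, rejecting any transition
    the use case does not allow (e.g. re-deciding a processed one)."""
    if decision not in VALID_TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move {status!r} to {decision!r}")
    return decision
```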
https://github.com/refparo/24xx-typst
https://raw.githubusercontent.com/refparo/24xx-typst/master/24xx.typ
typst
Creative Commons Attribution 4.0 International
// inline styling #set text( 9pt, top-edge: 9pt, luma(10%), font: "Barlow", stretch: 87.5%, number-width: "tabular", ) #show "₡": set text(font: "IBM Plex Sans", stretch: 100%) #show "➡": set text(7.5pt, font: "Noto Sans CJK SC") // block styling #let leading = 3pt #let top-edge = 9pt #let line-height = leading + top-edge #let block-spacing = line-height / 2 + leading #set block(spacing: block-spacing) #set par(leading: leading) #show heading.where(level: 1): upper #show heading.where(level: 1): set text( 36pt, top-edge: 25.5pt, weight: 500, stretch: 75%, ) #show heading.where(level: 1): it => { set block(below: 4 * line-height - leading - text.top-edge) colbreak(weak: true) + it } #let marked(marker, gap, content) = context block( inset: (left: -(gap + measure(marker).width)), spacing: block-spacing, grid( columns: (auto, 1fr), column-gutter: gap, marker, content ) ) #show heading.where(level: 2): it => { set text(9pt, rgb(255, 47, 23), weight: 400) marked( text( 6pt, baseline: -0.75pt, font: "Noto Sans CJK SC", "▶" ), 2pt, it.body ) } #let design-note(text-fill: rgb(50, 165, 194), content) = { set text(text-fill, style: "italic") marked(text(tracking: -1pt, "//"), 2.5pt, content) } #set enum(numbering: "1", body-indent: 1em) #let columns-full(count, body) = context { let line-height = text.size + par.leading let line-count = calc.ceil( (measure(body).height + par.leading) / (line-height * count) ) block( height: line-height * line-count - par.leading, spacing: block-spacing, columns(count, gutter: (100% - measure(body).width * count) / count, block(body)) ) } #let footer(body) = { set text( 7pt, top-edge: 6pt, rgb(133, 142, 140), style: "italic" ) set par(leading: 3pt) body } // document styling #set document( title: "24XX System Reference Document", author: "<NAME>" ) #set page(paper: "us-statement") #set columns(gutter: 18pt) // =================== // content begins here // =================== #set page(margin: 0pt) #place(image("cover.png")) // Typst currently 
doesn't support blending modes // so this is a rough emulation #block( width: 100%, height: 100%, inset: 18pt, fill: rgb(92.5%, 92.5%, 100%, 6%), )[ #set par(leading: 0pt) #set text(fill: rgb(70%, 70%, 100%, 22.5%)) = #stack( { set text(125pt, top-edge: 89pt) h(-6pt) text(stretch: 87.5%)[24] text(stretch: 75%)[XX] }, text(21pt, top-edge: 23.5pt)[ #upper[System Reference Document] ] ) ] // page 1 #set page( margin: 18pt, columns: 2, ) = Rules *PLAY:* Players describe what their characters do. The GM advises when their action is impossible, demands a cost or extra steps, or presents a risk. Players can revise plans before committing so as to change goal/stakes. Only roll to _avoid risk_. *ROLLING:* Roll a _skill die_ — d6 by default, higher with a relevant skill, or d4 if _hindered_ by injury or circumstances. If _helped_ by circumstances, roll an extra d6; if _helped_ by an ally, they roll their skill die and share the risk. Take the highest die. #grid( columns: (auto, 1fr), column-gutter: 0.5em, row-gutter: leading, [*1–2*], [*Disaster.* Suffer the full risk. GM decides if you succeed at all. If risking death, you die.], [*3–4*], [*Setback.* A lesser consequence or partial success. If risking death, you’re injured.], [*5+*], [*Success.* The higher the roll, the better.] ) If success can’t get you what you want (_you make the shot, but it’s bulletproof!_), you’ll at least get useful info or set up an advantage. *LOAD:* Carry as much as makes sense, but more than one _bulky_ item may hinder you at times. *ADVANCEMENT:* After a job, each character increases a skill (_none ➡ d8 ➡ d10 ➡ d12_) and gains d6 credits (₡). *DEFENSE:* Say how one of your items _breaks_ to turn a hit into a brief _hindrance_. _Broken_ gear is useless until repaired. *HARM:* Injuries take time and/or medical attention to heal. If killed, make a new character to introduce ASAP. Favor inclusion over realism. 
// allow this paragraph to overflow a bit to the right to ensure that it doesn't go to the second column #block(inset: (right: -1.5pt))[*GM:* Describe characters in terms of behaviors, risks, and obstacles, not skill dice. Lead the group in setting lines not to cross in play. _Fast-forward, pause,_ or _rewind/redo_ for pacing and safety; invite players to do likewise. Present dilemmas you don’t know how to solve. Move spotlight to give all time to shine. Test as needed for bad luck (e.g., run out of ammo, or into guards) — roll a die to check for (1–2) trouble or (3–4) signs of it. Improvise rulings to cover gaps in rules; on a break, revise unsatisfactory rulings as a group.] = Characters #design-note[SRD design notes start with two slashes, like this. Other paragraphs are player/GM-facing text.] #design-note[Characters start with 6ish skill increases and/or credits in items, possibly combining “specialty” and “origin” (or “3 skill increases” as a stand-in).] == Choose your character’s *specialty*. *FACE:* Skilled in _Reading People_ (d8), _Deception_ (d8). Take an _extensive disguise wardrobe_. *MUSCLE:* Skilled in _Intimidation_ (d8) and either _Hand-to-hand_ (d8) or _Shooting_ (d8). Take a _sword_, _firearm_, or _cyber-arm_. *PSYCHIC:* Skilled in _Telepathy_ (d8, sense surface thoughts), _Telekinesis_ (d8, as strong as your arms), or pick one at d10. Take a _bottle of PsychOut_ (amplify powers; addictive). *MEDIC:* Skilled in _Medicine_ (d8), _Electronics_ (d8). Take a _medkit_ and _cyber-surgery tools_ (_bulky_). *SNEAK:* Skilled in _Climbing_ (d8), _Stealth_ (d8). Take _climbing gear_ and _night vision goggles_. *TECH:* Skilled in _Hacking_ (d8), _Electronics_ (d8). Take _repair tools_ and a _custom computer_ (_bulky_). == Choose your character’s *origin*. *ALIEN:* Invent 2 traits, like _electric current_, _wings_, _natural camouflage_, or _six-limbed_. *ANDROID:* You have an upgrade-ready cyber-body. 
Take _synth skin_ (looks human) or a _case_ (break harmlessly for _defense_). Increase 1 skill. *HUMAN:* Apply 3 skill increases (from _no skill ➡ d8 ➡ d10 ➡ d12_). You can take new skills and/or increase skills you already have. == Choose or invent *skills* (if prompted by origin). _Climbing, Connections, Deception, Electronics, Engines, Explosives, Hacking, Hand-to-hand, Intimidation, Labor, Persuasion, Piloting, Running, Shooting, Spacewalking, Stealth, Tracking_ #design-note[Characters who start with broader skills should start with fewer skills, or with less useful skills.] // page 2 = Gear #design-note[If an item costs less than a new video game system, the only cost is the time it takes to get it.] == Take a *_comm_* (smartphone) and ₡2. Most items and upgrades cost ₡1 each. Ignore microcredit transactions like a knife or a meal. *ARMOR:* _Vest_ (break once for _defense_), _battle armor_ (₡2, _bulky_, break up to 3×), _hardsuit_ (₡3, _bulky_, break up to 3×, vacuum-rated, mag boots). *CYBERNETICS:* _Cyber-ear_ (upgrade with _echolocation, vocal stress detector_), _cyber-eye_ (upgrade with _infrared, telescopic, x-ray_), _cyber-limb_ (upgrade with _fast, strong, compartments, tool or weapon implant_), _cranial jack_ (upgrade with _sensory data backup, skill increase_), _healing nanobots, toxin filter, voice mimic_. *TOOLS:* _Flamethrower (bulky), low-G jetpack, med scanner, mini drone, repair tools, survey pack_ (climbing gear, flare gun, tent; _bulky_). *WEAPONS:* _Grenades_ (4, any of _fragmentation, flashbang, smoke, EMP_), _pistol, rifle (bulky), shotgun (bulky), stun baton, tranq gun_. #block(inset: (right: -1.5pt))[== *Starships* have basic versions of these functions; upgrades cost ₡10 each. In an emergency, players pick an action to perform or _help_ with.] *COMMS:* Upgrade with _eavesdropper, jammer, tachyon burst_ (no lag in-system). *CRAFTS:* Comes with _escape pod_. Upgrade with _fighter, shuttle_ (reentry-rated). 
*DRIVE:* FTL jump and sublight speeds. Upgrade with _longer jumps, faster speed, greater agility_.
*EQUIPMENT:* _Vac suits_ for crew. Upgrade with _armory, heavy loader, mining gear, tow cable_.
*HULL ARMOR:* Break harmlessly for _defense_. Upgrade with _reentry-rated, sun shielding_.
*SENSORS:* Upgrade with _deep-space, life-sign scan, planetary survey, tactical vessel scan_.
*WEAPONS:* Deflector turrets. Upgrade with _laser cutter, military-grade turret, torpedoes_.

= Details
#design-note[Additional character and setting details often need to be customized for specific settings (especially when aliens and fashion are involved). Feel free to draw from these options, which should work in a range of sci-fi settings.]

== Invent or roll for *personal details*.
*SURNAME*
#columns-full(4)[ + Acker + Black + Cruz + Dallas + Engel + Fox + Gee + Haak + Iyer + Joshi + Kask + Lee + Moss + Nash + Park + Qadir + Singh + Tran + Ueda + Zheng ]
*NICKNAME*
#columns-full(4)[ + Ace + Bliss + Crater + Dart + Edge + Fuse + Gray + Huggy + Ice + Jinx + Killer + Lucky + Mix + Nine + Prof + Red + Sunny + Treble + V8 + Zero ]
*DEMEANOR*
#columns-full(2)[ + Anxious + Appraising + Blunt + Brooding + Calming + Casual + Cold + Curious + Dramatic + Dry + Dull + Earnest + Formal + Gentle + Innocent + Knowing + Prickly + Reckless + Terse + Weary ]
*SHIP NAME*
#columns-full(2)[ + _Arion_ + _Blackjack_ + _Caleuche_ + _Canary_ + _Caprice_ + _Chance_ + _Darter_ + _Falkor_ + _Highway Star_ + _Moonshot_ + _Morgenstern_ + _Phoenix_ + _Peregrine_ + _Restless_ + _Silver Blaze_ + _Stardust_ + _Sunchaser_ + _Swift_ + _Thunder Road_ + _Wayfarer_ ]

// back page
#set page( margin: (top: 18pt - block-spacing), footer-descent: 6pt, footer: footer[ Version 1.41.typ.2 • Text, layout & 24XX logo all CC BY <NAME> • Art CC BY Beeple (<NAME>) • Typst version CC BY Paro ], )
#place(hide[= The Back Page])
#block( fill: rgb(37, 101, 136), inset: (bottom: line-height), outset: (x: 7pt, top: block-spacing) )[ #set
text(white, style: "italic") #text(tracking: -1pt, "//") *THE PREMISE:* Explain the basics of the setting. If it’s not made clear elsewhere, give a reason for the characters to stick together, and hint at what they’ll spend their time doing. ] #design-note[*THE BACK PAGE:* If you’d like to mimic the style of the original micro RPGs this SRD is based on, the back page (or the left half of one side of a letter-sized sheet of paper) can fit 4 tables of 20 items each. A GM can use these to generate ideas for an improvised session, like, “[Name] has hired you for [Job] at [Location], but there’s a [Twist]!” An example table is offered below.] #design-note[*ADDING TO RULES:* This SRD is very brief, with the hope experienced RPG players will fill in the gaps confidently, and RPG newcomers will be free of too many preconceived notions. Anything left vague is deliberately open to interpretation. (Like: Can you get help dice from an ally AND circumstances on one roll? Your call!) Expand or clarify as needed. My own principles for new rules are to minimize addition and subtraction, avoid too much bookkeeping (on top of tracking credits, hindrances, number of bulky items, and which items are broken), and strive to use terms either self-evident in meaning or invitingly vague.] 
== *Roll d20 for a contact, client, rival, or target.*
+ Arcimboldo, quirky tech dealer & tinkerer
+ Aurora, wealthy collector of unique items
+ Blackout, quiet evidence removal specialist
+ Bleach, wry janitor android turned assassin
+ Bron, dour security chief with a metal arm
+ Bullet, no-nonsense android gun runner
+ Carryout, cocky courier with fast cyber-legs
+ Fisher, eager street kid looking for a crew
+ Ginseng, people-loving drug dealer
+ Hot Ticket, extremely cautious fence
+ Kaiser, grinning loan shark in a silver suit
+ Osiris, tired, street-level sawbones
+ Powder Blue, android fixer, generous rates
+ Reacher, sharp mercenary tac squad leader
+ Rhino, thickheaded, bighearted bodyguard
+ Sam, plucky journalist, likely to get killed
+ Shifter, hard-working chop-shop owner
+ Walleye, businesslike information broker
+ Whistler, smiling cabbie/getaway driver
+ “X,” unflappable broker for an unnamed corp

== *Roll d6 to try to find a job. Spend ₡1 to re-roll.*
#grid( columns: (auto, 1fr), column-gutter: 0.5em, row-gutter: leading, [1–2], [_Nothing. Owe somebody to get in on a job._], [3–4], [_Found a job, but something seems off._], [5–6], [_Choose between 2 jobs._] )
#v(6pt)
#block(inset: (right: 1.5pt))[#design-note[*FINDING JOBS:* Many teams don’t need to look for paying work (e.g., military units). If your game does use this setup, though, dangerous jobs should pay more to cover 1–3 credits in “expenses” for medical treatment, fixing/replacing broken gear, re-rolling unsavory jobs, or getting through dry spells with no jobs. Also, in the table above, the phrase “owe somebody” is intentionally vague, but may be worth clarifying or alluding to elsewhere (e.g., put a loan shark in your “Contacts” table).]]
#design-note[*JOBS:* The list of jobs (or missions, situations, quests, etc.) should be tailored for your setting, and suggest scenarios where every character’s skills will be useful.
Common job templates include “deal with an unusual threat,” “investigate something seemingly inexplicable,” or “retrieve a thing from a location for a person.” They serve as “gameable lore” — elements that hint at a setting, ready-made for use in play.] #design-note(text-fill: rgb(50, 145, 73))[*LICENSE:* This SRD is released under a Creative Commons Attribution 4.0 license (#underline[CC BY 4.0]). You’re welcome to use this text and layout in your own game, provided you do the following:] #design-note(text-fill: rgb(50, 145, 73))[GIVE CREDIT: See that tiny text along the bottom of the page? That’s where I cram the version number and credit for licensed content (like the cover art). You’re welcome to put it elsewhere in your game, but be sure to include it somewhere — like, “24XX rules are CC BY Jason Tocci.”] #design-note(text-fill: rgb(50, 145, 73))[USE 24XX, NOT 2400: You can say your game is “compatible with 2400” or “for use with 2400,” but please don’t use material directly from 2400, or name your game so it looks like it’s part of my 2400 series (unless you have explicit approval).] #design-note(text-fill: rgb(50, 145, 73))[NO BIGOTRY: Please don’t use any text from this game, the 24XX logo, or my name in any product that promotes or condones white supremacy, racism, misogyny, ableism, homophobia, transphobia, or other bigotry against marginalized groups.]
= Message Authentication Codes (MAC)
== Question 1: Use the CrypTool software to compute the HMAC of a message, following the steps below
=== Computing the HMAC with MD5
- H(k,m): key in front of message #image("/images/md5_b1.jpg") #pagebreak()
- H(m,k): key in back of message #image("/images/md5_b2.jpg") #pagebreak()
- H(k,m,k): key in front and at the back of message #image("/images/md5_b3.jpg") #pagebreak()
- H(k,m,k’): different keys #image("/images/md5_b4.jpg") #pagebreak()
- H(k,H(k,m)): double hashing (RFC 2104) #image("/images/md5_b5.jpg")
=== Computing the HMAC with SHA-256
#pagebreak()
- H(k,m): key in front of message #image("/images/sha256_b1.jpg") #pagebreak()
- H(m,k): key in back of message #image("/images/sha256_b2.jpg") #pagebreak()
- H(k,m,k): key in front and at the back of message #image("/images/sha256_b3.jpg") #pagebreak()
- H(k,m,k’): different keys #image("/images/sha256_b4.jpg") #pagebreak()
- H(k,H(k,m)): double hashing (RFC 2104) #image("/images/sha256_b5.jpg")
== Question 2: List the forms of attack against message authentication
- Birthday attack
- Meet-in-the-middle attack
- Brute-force attack
== Question 3: Explain the differences between a message authentication code (MAC) and a hash function
#table( columns: (auto, 1fr, 1fr), inset: 10pt, align: horizon, table.header( [], [*MAC*], [*Hashing*], ), [Properties], [Authentication, integrity], [Integrity], [Scheme], [Condenses a variable-length message M, using a secret key K, into a fixed-length authentication code.], [Condenses a variable-length message M into a fixed-length hash value.], [Requirements], [ - Uniformly distributed - Depends equally on every bit of the message - Given a message and its MAC, it is infeasible to find another message with the same MAC ], [ - Given h, it is infeasible to find x with H(x)=h - Given x, it is infeasible to find y with H(y)=H(x) - It is infeasible to find any x, y with H(y)=H(x) ] )
// Compiled with Typst 0.11.1
#import "../template_zusammenf.typ": *
#import "@preview/wrap-it:0.1.0": wrap-content
/*#show: project.with( authors: ("<NAME>", "<NAME>"), fach: "BSys2", fach-long: "Betriebssysteme 2", semester: "FS24", tableofcontents: (enabled: true), language: "de" )*/

= Threads
#wrap-content( image("img/bsys_22.png"), align: top + right, columns: (75%, 25%), )[
== Process Model
Each process virtually has the _entire machine_ to _itself_. _Processes_ are well suited for _independent applications_. Drawbacks: realizing _parallel activities_ within the same application is _laborious_, the _overhead_ is too large for short subtasks, and _sharing resources_ is _made difficult_.
]
#wrap-content( image("img/bsys_23.png"), align: top + right, columns: (85%, 15%), )[
== Thread Model
Threads are _activities that run in parallel within one process_ and have equal access to _all_ resources of that process #hinweis[(code, global variables, heap, open files, MMU data)]
=== Thread as stack + context
Each thread needs its _own context_ and its _own stack_, because it has its own chain of function calls. This information is often stored in a _thread control block_. #hinweis[(Linux: copy of the PCB with its own context)]
]

== Amdahl's Law
#wrap-content( image("img/bsys_24.png"), align: top + right, columns: (70%, 30%), )[
Certain parts of an algorithm _cannot_ be parallelized because they _depend on one another_. For each part of an algorithm one can state whether it can be _parallelized_ or not.
]
#wrap-content( image("img/bsys_25.png"), align: top + right, columns: (65%, 35%), )[
/ $T$: execution time when run _entirely serially_\ #hinweis[In the figure: $T = T_0 + T_1 + T_2 + T_3 + T_4 $]
/ $n$: number of processors
/ $T'$: execution time when _maximally parallelized_ #hinweis[the quantity we want]
/ $T_s$: execution time of the part that _must_ be executed _serially_\ #hinweis[In the figure: $T_s = T_0 + T_2 + T_4$]
/ $T - T_s$: execution time of the part that _can_ be executed _in parallel_\ #hinweis[In the figure: $T - T_s = T_1 + T_3$]
/ $(T - T_s) / n$: parallel part spread over all $n$ processors\ #hinweis[In the figure: $(T_1 + T_3) / n$]
/ $T_s + (T - T_s) / n$: serial part + parallel part #hinweis[$= T'$]
The _serial variant_ therefore takes at most _$f$ times as long_ as the _parallel variant_ #hinweis[(only $<=$ because of overhead)]:
]
#block($ f <= T / T^' = T / (T_s + (T - T_s) / n) $)
$f$ is also called the _speedup factor_: the parallel variant is at most $f$ times faster than the serial one. Defining $s = T_s/T$, the serial share of the algorithm, gives $s dot T = T_s$. This yields $f$ independently of the time:
#block($ f <= T / (T_s + (T - T_s) / n) = T / (s dot T + (T - s dot T) / n) = T / (s dot T + (1 - s) / n dot T) => f <= 1 / (s + (1 - s) / n) $)
#wrap-content( image("img/bsys_26.png"), align: top + right, columns: (60%, 40%), )[
=== Interpretation
- Estimates an _upper bound_ for the maximum speedup
- Only if _everything_ is parallelizable is the speedup _proportional_ and _maximal_ #hinweis[$f(0,n) = n$]
- Otherwise the speedup becomes ever _smaller_ as the _number of processors_ grows #hinweis[(the curve flattens)]
- $f(1,n)$: purely serial
=== Limit
With a growing number of processors the speedup approaches $1/s$:
]
#grid( columns: (1fr, 1fr, 1fr), [$ lim_(n -> infinity) (1 - s) / n = 0 $], [$ lim_(n -> infinity) s + (1 - s) / n = s $], [$ lim_(n -> infinity) 1 / (s + (1 - s) / n) = 1 / s $], )

== POSIX Thread API
=== `pthread_create()`
```c
int pthread_create(
  pthread_t *thread_id,
  pthread_attr_t const *attributes,
  void * (*start_function) (void *),
  void *argument
)
```
_Creates a thread_ and returns _0 on success_, otherwise an error code. The _ID_ of the new thread is returned in the _out parameter `thread_id`_. _`attributes`_ is an _opaque object_ that can be used to specify e.g. the _stack size_. The _first instruction_ the new thread executes is a _call of the function_ whose address is passed in _`start_function`_; that function must have exactly this signature. The thread additionally passes `argument` to this function, typically a pointer to a data structure on the heap. #hinweis[(*Caution:* if this structure is allocated on the stack, you must make sure the stack is not unwound during the lifetime of the thread.)]
#grid( columns: (50%, 60%), gutter: 11pt, [
```c
// creation
struct T { int value; };

void * my_start (void * arg) {
  struct T * p = arg;
  printf ("%d\n", p->value);
  free (arg);
  return 0;
}
```
], [
```c
// usage
void start_my_thread (void) {
  struct T * t = malloc (
    sizeof (struct T));
  t->value = 109;
  pthread_t tid;
  pthread_create (
    &tid,
    0, // default attributes
    &my_start,
    t
  );
}
```
], )

==== Thread attributes
To _specify_ attributes, follow the pattern below, since `pthread_attr_t` may need _additional memory_ depending on the implementation:
```c
pthread_attr_t attr; // create variable
pthread_attr_init (&attr); // initialize variable
pthread_attr_setstacksize (&attr, 1 << 16); // 64 KiB stack size
pthread_create (..., &attr, ...); // create thread
pthread_attr_destroy (&attr); // destroy attributes
```

=== Lifetime of a thread
A thread _lives_ until one of the following happens:
- It returns from the function _`start_function`_
- It calls _`pthread_exit`_ #hinweis[(a plain `exit` terminates the process)]
- Another thread calls _`pthread_cancel`_
- Its _process_ is _terminated_.

=== ```c void pthread_exit (void *return_value)```
_Terminates_ the thread and returns `return_value`. This is equivalent to _returning from `start_function` with that return value_.

=== ```c int pthread_cancel (pthread_t thread_id)```
Sends a _request_ that the thread with `thread_id` be _terminated_. The function does _not wait_ for the thread to _actually terminate_. The return value is 0 if the thread exists, or `ESRCH` #hinweis[(error_search)] if not.

=== ```c int pthread_detach (pthread_t thread_id)```
_Releases the memory_ occupied by a thread once it _has already terminated_. Does _not_ terminate the thread itself. #hinweis[(Creates a daemon thread)]

=== ```c int pthread_join (pthread_t thread_id, void **return_value)```
_Waits_ until the thread with `thread_id` has _terminated_. Receives the thread's _return value_ in the out parameter _`return_value`_, which may be _`NULL`_ if the value is not wanted. Calls _`pthread_detach`_.

=== ```c pthread_t pthread_self (void)```
Returns the _ID_ of the _currently running_ thread.

== Thread-Local Storage (TLS)
In C, many system functions do not return their error code directly but via `errno`, e.g. the `exec` functions. If `errno` were a _global variable_, the following code would show _unexpected behavior_ with multiple threads:
```c
void f (void) {
  int result = execl (...);
  if (result == -1) {
    int error = errno; // may come from a different thread
    printf ("Error %d\n", error);
  }
}
```
TLS is a mechanism that provides _global variables per thread_. It requires several explicit steps:\
*Before any threads are created:*
- Create a _key_ that _identifies_ the TLS variable
- _Store_ the key in a _global variable_
*Inside the thread:*
- _Read_ the key from the global variable
- _Read/write_ the value via the key using dedicated functions

=== ```c int pthread_key_create( pthread_key_t *key, void (*destructor) (void*) )```
Creates a _new key_ in the out parameter `key`. _`pthread_key_t`_ is an _opaque data structure_. For every thread and every key, the OS keeps one value of type _`void *`_, always initialized to _`NULL`_. At the end of a thread, the OS calls the _`destructor`_ with the respective _thread-specific value_, provided it is not `NULL` at that point. Returns 0 on success, otherwise an error code.

=== ```c int pthread_key_delete( pthread_key_t key)```
_Removes the key_ and the corresponding values from all threads. The key must _no longer be used_ after this call. It should only be called once all associated threads have terminated. The program itself must take care to _free any memory_ that may have been allocated in addition. Returns 0 on success, otherwise an error code.

=== `pthread_setspecific` and `pthread_getspecific`
```c int pthread_setspecific( pthread_key_t key, const void * value )```\
```c void * pthread_getspecific( pthread_key_t key )```
_write_ and _read_, respectively, the value associated with the key in this thread. Typically the value is used as a _pointer to a memory region_, e.g.:
```c
// setup
typedef struct { int code; char *message; } error_t;
pthread_key_t error;

void set_up_error (void) { // called at the start of the thread
  pthread_setspecific( error, malloc( sizeof( error_t )));
}

// reading and writing inside the thread
void print_error (void) {
  error_t * e = pthread_getspecific (error);
  printf("Error %d: %s\n", e->code, e->message);
}

int force_error (void) {
  error_t * e = pthread_getspecific (error);
  e->code = 98;
  e->message = "file not found";
  return -1;
}

// main and thread
void *thread_function (void *arg) {
  set_up_error();
  if (force_error () == -1) {
    print_error ();
  }
  return NULL;
}

int main (int argc, char **argv) {
  pthread_key_create (&error, NULL); // create key
  pthread_t tid;
  pthread_create (&tid, NULL, &thread_function, NULL); // create threads
  pthread_join (tid, NULL);
}
```
#import "@preview/fontawesome:0.1.0": * #let contact-entries = ( (icon-path: fa-envelope(), content: link("mailto:<EMAIL>")), (icon-path: fa-phone(), content: link("tel:+39 3402876022")), (icon-path: fa-location-dot(), content: [Bertinoro, Italy]), (icon-path: fa-icon("github", fa-set: "Brands"), content: link("https://github.com/nicolasfara")[nicolasfara]), ) #let current-position-entries = ( ( title: [PhD Student], subtitle: [Department of Computer Science and Engineering], subtitle-aside: [University of Bologna, Italy], date: [November 2023 --- Today], ), ( title: [Teaching Tutor], subtitle: [Department of Computer Science and Engineering], subtitle-aside: [University of Bologna, Italy], date: [September 2020 --- Today], ), ) #let work-experience-entries = ( ( title: [Teaching Tutor --- Algorithms and Data Structures], subtitle: [University of Bologna], subtitle-aside: [Campus of Cesena, Italy], date: [A.Y. 2022/2023], ), ( title: [Teaching Tutor --- Algorithms and Data Structures], subtitle: [University of Bologna], subtitle-aside: [Campus of Cesena, Italy], date: [A.Y. 2021/2022], ), ( title: [Teaching Tutor --- Algorithms and Data Structures], subtitle: [University of Bologna], subtitle-aside: [Campus of Cesena, Italy], date: [A.Y. 2020/2021], ), ( title: [HPC Internship], subtitle: [University of Bologna], subtitle-aside: [Campus of Cesena, Italy], date: [June 2019 --- November 2019], more: [ / Supervisor: Prof. <NAME> / Scope: Study of the TensorCore architecture and its application to simple parallel algorithms for matrix multiplication and the Dirac operator. ] ), ( title: [High School Internship], subtitle: [General System s.r.l.], subtitle-aside: [Cesena, Italy], date: [June 2015 --- August 2015], more: [ / Scope: Implementation and validation of several electronic circuits for the control of industrial machinery and PLCs programming. 
] ), ) #let education-entries = ( ( title: [Master's degree in Computer Science and Engineering], subtitle: [University of Bologna], subtitle-aside: [Campus of Cesena, Italy], date: [September 2020 --- March 2023], more: [ / Final result: 110/110 cum laude / Thesis: _Design and Implementation of a Portable Framework for Application Decomposition and Deployment in Edge-Cloud Systems_ / Supervisors: Prof. <NAME>, Prof. <NAME> / Area of Study: Pervasive Computing ] ), ( title: [Bachelor's degree in Computer Science and Engineering], subtitle: [University of Bologna], subtitle-aside: [Campus of Cesena, Italy], date: [September 2016 --- March 2020], more: [ / Final result: 103/110 / Thesis: _Optimized Implementation of the Dirac Operator on GPGPU_ / Supervisor: Prof. <NAME> / Area of Study: High Performance Computing ] ), ( title: [High School in Electrotechnics and Electronics], subtitle: [Istituto Tecnico Tecnologico "Blaise Pascal"], subtitle-aside: [Cesena, Italy], date: [September 2011 - June 2016] ), ) #let programming-languages = ( "Programming Languages": ( [Bash], [C], [C++], [C\#], [Kotlin], [Java], [Javascript], [Prolog], [Protelis], [Python], [Rust], [Scala], [Typescript] ), "Other Languages": ( [CSS], [CSS3], [HTML5], [JSON], [LaTeX], [Markdown], [YAML], [SQL] ), "Software Tools": ( [Cargo], [Git], [Gradle], [GH Actions], [Hugo], [Inkscape], [Markdown], [sbt], [SQL], [npm], [IntelliJ], [VS Code] ) ) #let languages = ( [Italian - native speaker], [English - B1] ) #let extra-curricular-activities-entries = ( ( title: [E-Powertrain Division Member], subtitle: [Unibo Motorsport], subtitle-aside: [Bologna, Italy], date: [September 2020 --- February 2021], more: [ / Scope: Development of BMS boards for the electric vehicle of the team; implemented #emph[LabView] software for the test of the battery pack and development of the vehicle's wiring system. ] ), ( title: [Ce.Se.N.A. 
Security Team], subtitle: [Core Team Member], subtitle-aside: [Cesena, Italy], date: [October 2016 --- June 2019], more: [ / Scope: Learned reverse engineering techniques and security analysis of software and hardware systems. Attended several CTFs and presentations with a focus on security topics. ] ), ( title: [FabLab Romagna], subtitle: [Core Member], subtitle-aside: [Cesena, Italy], date: [May 2014 --- September 2017], more: [ / Scope: Responsible for the management of the laboratory; speaker at a few seminars on electronics and 3D printing. Involved in the organization of the #emph[Rimini Beach Mini Maker Faire 2015]. Developed several prototypes with #emph[Arduino] and #emph[Raspberry Pi]. ] ), ) #let projects-contributions-entries = ( ( title: [Maintainer of collektive], subtitle: [Compiler plugin developer], date: [2023 --- Today], more: [ #link("https://github.com/collektive/collektive")[#fa-icon("github", fa-set: "Brands") collektive/collektive] ] ), ( title: [Lead designer and maintainer of PulvReAKt], subtitle: [Kotlin multiplatform framework for pulverized application development], date: [2022 --- Today], more: [ #link("https://github.com/pulvreakt/pulvreakt")[#fa-icon("github", fa-set: "Brands") pulvreakt/pulvreakt] ] ), ( title: [Major Maintainer of MDM], subtitle: [Mambelli Domain Model --- Pure functional domain modelling], date: [2022], more: [ #link("https://github.com/atedeg/mdm")[#fa-icon("github", fa-set: "Brands") atedeg/mdm] ] ), ( title: [Major Maintainer of ECScala], subtitle: [Entity Component System for Scala], date: [2021], more: [ #link("https://github.com/atedeg/ecscala")[#fa-icon("github", fa-set: "Brands") atedeg/ecscala] ] ), ( title: [Lead designer and maintainer of conventional-commit], subtitle: [Gradle plugin for enforcing conventional commit messages], date: [2022 --- Today], more: [ #link("https://github.com/nicolasfara/conventional-commits")[#fa-icon("github", fa-set: "Brands") nicolasfara/conventional-commits] ] ), ( title:
[Lead designer and maintainer of sbt-conventional-commit], subtitle: [SBT plugin for enforcing conventional commit messages], date: [2022 --- Today], more: [ #link("https://github.com/nicolasfara/sbt-conventional-commits")[#fa-icon("github", fa-set: "Brands") nicolasfara/sbt-conventional-commits] ] ), ( title: [Lead designer and maintainer of pfeeder], subtitle: [Cloud-based software system for a smart pet feeder management], date: [2020], more: [ #link("https://github.com/nicolasfara/pfeeder")[#fa-icon("github", fa-set: "Brands") nicolasfara/pfeeder] ] ), ) #let pubblications = ( ( title: [Scalability through Pulverisation: Declarative deployment reconfiguration at runtime], subtitle: [ <NAME>., <NAME>., <NAME>., & <NAME>. (2024). Scalability through Pulverisation: Declarative deployment reconfiguration at runtime. Future Generation Computer Systems, 161, 545–558. http://dx.doi.org/10.1016/j.future.2024.07.042], date: [2024], ), ( title: [Proximity-based Self-Federated Learning], subtitle: [ <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2024). Proximity-based Self-Federated Learning. arXiv. https://doi.org/10.48550/ARXIV.2407.12410 ], date: [2024], ), ( title: [Middleware Architectures for Fluid Computing], subtitle: [ <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2024). Middleware Architectures for Fluid Computing (<NAME>, <NAME>, <NAME>, & <NAME>.; pp. 49–63). Springer International Publishing. https://doi.org/10.1007/978-3-031-62146-8_3 ], date: [2024], ), ( title: [Towards Intelligent Pulverized Systems: a Modern Approach for Edge-Cloud Services], subtitle: [ <NAME>., <NAME>., <NAME>., & <NAME>. (2024). Towards Intelligent Pulverized Systems: a Modern Approach for Edge-Cloud Services. In <NAME>, <NAME>, <NAME>, <NAME>, & <NAME> (Eds.), Proceedings of the 25th Workshop “From Objects to Agents”, Bard (Aosta), Italy, July 8-10, 2024 (Vol. 3735, pp. 233–251). CEUR-WS.org. https://ceur-ws.org/Vol-3735/paper_19.pdf ], date: [2024], ) )
#set text( font: "New Computer Modern", size: 6pt )
#set page( paper: "a5", margin: (x: 1.8cm, y: 1.5cm), )
#set par( justify: true, leading: 0.52em, )
#set heading(numbering: "1.")

= Row Space, Column Space, and the Rank of a Matrix
== Definitions
=== Row and column spaces of a matrix
The row space of a matrix is the space spanned by its row vectors; the row rank of a matrix is the rank of the set of its row vectors.
The column space of a matrix is the space spanned by its column vectors; the column rank of a matrix is the rank of the set of its column vectors.
=== Theorem 1. For a matrix J in echelon form, the row rank and the column rank are equal; both equal the number of nonzero rows of J, and the pivot columns of J form a maximal linearly independent set.
#let jarray = $j_1, j_2, ..., j_r$
#let jvectors = $bold(j_1), bold(j_2), ..., bold(j_r)$
#let rank=$r a n k$
By the definition of pivots, the number of pivots equals the number of nonzero rows (in echelon form, a pivot is the first nonzero entry of its row, so there are exactly as many pivots as nonzero rows).

Proof: Let J be an $s times n$ echelon matrix; without loss of generality let it have r pivots ($r<=s$), located in columns $(#jarray )$, so that J has the form:
$ J = mat(0, ..., 0, c_(j 1)_1, ..., c_(j 2)_1, ..., c_(j_r)_1; 0,..., 0, 0, ..., c_(j 2)_2, ..., c_(j_r)_2; dots.v; 0,0, ...., ...,...,...,...,c_(j_r)_r; 0,0,...,0, ..., ..., ..., 0,;) $
Extracting all pivot columns $#jarray$ gives a matrix of the form:
$ mat(c_(j 1)_1, c_(j 2)_1, ..., c_(j_r)_1; 0, c_(j 2)_2, ..., c_(j_r)_2; dots.v; 0,0, ...., c_(j_r)_r; 0,0,...,0; ) <- "the trailing zero rows may or may not be present" $
Ignoring the trailing zero rows, we obtain the upper triangular matrix $mat(c_(j 1)_1, c_(j 2)_1, ..., c_(j_r)_1; 0, c_(j 2)_2, ..., c_(j_r)_2; dots.v; 0,0, ...., c_(j_r)_r; ) $ whose determinant is $product_(i=1)^(r) c_(j i)_i$. Since each $c_(j i)_i$ is a pivot of J, $c_(j i)_i != 0$, hence $product_(i=1)^(r) c_(j i)_i != 0$, so the vectors $bold(a_1) = (c_(j 1)_1, 0, ..., 0)^T, bold(a_2) = (c_(j 2)_1, c_(j 2)_2, 0, ..., 0)^T ,..., bold(a_r) = (c_(j r)_1, c_(j r)_2, ..., c_(j r)_r)^T $ are linearly independent. By the extension corollary for linearly independent sets,
#let agrps=$bold(a_1), bold(a_2), ..., bold(a_r)$; the extended set of $(#agrps)$, namely the pivot columns #jvectors of J, is also linearly independent, so $#rank (#jvectors) = dim <#jvectors> = r$.
#let eps=$bold(epsilon)$
#let eps_vectors=$eps _1, eps _2, ..., eps _r$
#let all_vecotrs = $bold(alpha)_1, bold(alpha)_2, ..., bold(alpha)_n$
Every column vector of J can be written as $bold(alpha)_i = [(a_1, a_2, ..., a_r,0, ..., 0)^T | a_k in K]$, so the columns of J form the set $(#all_vecotrs)$. Let U contain all such vectors, so the column space W of J satisfies $W subset.eq U$. Every vector of U can be written as $bold(beta)_i = a_1 * #eps _1 + a_2 * #eps _2 + ...+ a_r * #eps _r$, and since $(#eps_vectors)$ is linearly independent, it is a standard basis of U, with $rank (#eps_vectors) = r -> dim U = r$.
Hence $W subset.eq U -> dim W <= dim U = r => #rank (#all_vecotrs) <= r$. Moreover $(#jvectors) subset.eq (#all_vecotrs) => <#jvectors> subset.eq <#all_vecotrs> ==> r = #rank (#jvectors) <= #rank ( #all_vecotrs) <= r $, so $rank(#jvectors) = rank(#all_vecotrs) = r$.
Since $#jvectors$ and $#all_vecotrs$ have equal rank and $(#jvectors) subset.eq (#all_vecotrs)$, the corollary on ranks of vector sets shows that (#jvectors) is a maximal linearly independent subset of (#all_vecotrs), and its rank equals the number r of nonzero rows of J.
The proof for the row rank and the row space is analogous: transpose the matrix $mat(c_(j 1)_1, c_(j 2)_1, ..., c_(j_r)_1; 0, c_(j 2)_2, ..., c_(j_r)_2; dots.v; 0,0, ...., c_(j_r)_r; ) $ constructed above. By the properties of determinants, the determinant of the transpose equals that of the original matrix; since the original determinant is nonzero, so is the transposed one, hence the row vectors (columns after transposing) are linearly independent. The remaining steps mirror the column-space proof.
=== Theorem 2. Elementary row operations do not change the row rank of a matrix.
#let a_vecs = $(bold(a_1), bold(a_2), ..., bold(a_n))$
#let b_vecs = $(bold(b_1), bold(b_2), ..., bold(b_n))$
1. Row addition: let the rows of A be (#a_vecs) and let B arise from A by adding k times row i to row j. The rows of B can be written $(#b_vecs) = (bold(a_1), bold(a_2), ..., bold(a_j) + k bold(a_i), ...,bold(a_i),..., bold(a_n))$; note that here the rows of A and B are not required to be linearly independent. It is easy to see that #a_vecs can be expressed linearly in terms of the rows of B, with $bold(a_j)=bold(b_j) - bold(b_i) k$ and all other rows equal, so #a_vecs is equivalent to #b_vecs (the row rank is unchanged).
2. Row swap: if B arises by swapping rows i and j, then $bold(b_j) = bold(a_i) <=> bold(a_i) = bold(b_j); bold(b_i) = bold(a_j) <=> bold(a_j) = bold(b_i)$, with all other rows equal, so #a_vecs is equivalent to #b_vecs.
3. Row scaling: if B arises by multiplying row i by k, then $bold(b_i)=k bold(a_i) <=> bold(a_i) = 1/k bold(b_i)$, with all other rows equal, so #a_vecs is equivalent to #b_vecs.
Combining 1, 2, 3 proves the theorem.
#highlight(fill: green)[Review: equivalence $=>$ equal rank; the converse does not hold]
=== Theorem 3. #highlight(fill: blue)[Elementary row operations change neither the linear (in)dependence of the column vectors nor the column rank]
The proof of this theorem is very important. It consists of two parts:
1. If matrix A is transformed into B by elementary row operations, then the columns of B are linearly dependent if and only if the columns of A are linearly dependent.
2. If A is transformed into B by elementary row operations and columns $#jarray$ of B form a maximal linearly independent set, then the corresponding columns $#jarray$ of A also form a maximal linearly independent set, so A and B have equal column rank.
Proof of 1: Let the columns of A be $#a_vecs$ and the columns of B be #b_vecs. The homogeneous systems $x_1 bold(a_1) + x_2 bold(a_2) + ... + x_n bold(a_n) = 0$ and $x_1 bold(b_1) + x_2 bold(b_2) + ... + x_n bold(b_n) = 0$ have the same solutions (recall that Gaussian elimination is equivalent to elementary row operations). If the first system has a nonzero solution, the columns of A are dependent; likewise, if the first has a nonzero solution, so does the second, hence the columns of B are dependent as well. The converse holds too (A ... independent $=>$ B ... independent).
Proof of 2: Let $A_1$ be the matrix formed by the columns #jvectors of A; when A is row-reduced to B, $A_1$ becomes $B_1$. By the conclusion of part 1, the columns of $B_1$ and of $A_1$ have the same (in)dependence. Since ($#jvectors$) is a maximal independent set, the columns of $B_1$ are independent, and hence so are the columns of $A_1$. Therefore (#jvectors) is a linearly independent subset of the columns of A.
Next, append to the matrix $A_1$ formed by #jvectors one further column of A outside #jvectors, say column l. The matrix $A_2$ formed by columns A(#jvectors, $l$) is row-reduced to the matrix $B_2$ formed by B(#jvectors, $l$). By assumption #jvectors is a maximal independent set of the columns of B, so the columns of $B_2$ must be dependent, hence the columns of $A_2$ are dependent as well. That is,
#highlight(fill:red)[ for any column of A outside $#jvectors$, the columns of the matrix formed by #jvectors together with that column are linearly dependent, while the columns #jvectors themselves are linearly independent; hence #jvectors forms a maximal independent set of the columns of A ]
It follows further that #highlight(fill:green)[the column rank of A equals the column rank of B].
=== Theorem 4. For any matrix A, the row rank equals the column rank.
Proof:
1. In Theorem 1 we proved that the row rank and column rank of $J$ are equal, both equal to the number of nonzero rows.
2. A can be brought to $J$ by elementary row operations; by Theorem 2 the row rank is unchanged, and by Theorem 3 the column rank is unchanged.
3. Hence: row rank of A = row rank of J = column rank of J = column rank of A = number of nonzero rows of J. QED
== Definition 2. Rank of a matrix
By Theorem 4, the common value of the row rank and the column rank is called the rank of the matrix, written $#rank (A)$.
=== Corollary 1. The rank of A equals the number of nonzero rows of its echelon form $J$, and the columns of A corresponding to the pivot columns $#jvectors$ of $J$ form a maximal linearly independent set of the columns of A.
Proof: the first half follows immediately from step 3 of the proof of Theorem 4; the second half follows from the second part of the proof of Theorem 3.
Corollary 1 shows that although #highlight()[a matrix can have more than one echelon form], the number of nonzero rows is fixed and equals the rank of the matrix.
=== Corollary 2. Elementary column operations do not change the rank of a matrix (omitted).
=== Theorem 5. The rank of any nonzero matrix A equals the largest order among its nonzero minors.
Proof: Let A be an $s times n$ matrix of rank r. By the properties of rank, A has r linearly independent rows and r linearly independent columns. Take those r independent rows to form a new matrix $A_1$; its rank is also r (row rank r), and since row rank = rank = column rank, $A_1$ also has r independent columns. Taking those r independent columns of $A_1$ yields an $r$-order minor of A whose column vectors are linearly independent, so its value is nonzero.
{Note: can we instead directly pick r independent rows and r independent columns of A, take their intersection, and claim the resulting r-order minor is nonzero? No: selecting the r rows already truncates those r columns, and the corollaries about independence only survive extension while those about dependence only survive shortening. From that angle, the rows and columns of an r-order minor chosen this way need not be (in)dependent. Of course, the theorem does not require finding specific rows and columns; a proof of existence suffices.
From the second part of the proof of Theorem 3 we know: if A is row-reduced to $J$ with pivot columns $#jarray$, and $J(#jvectors)$ is a maximal independent set of the columns of J, then the corresponding columns $A(#jvectors)$ form a maximal independent set of the columns of A. This tells us which columns to take. In the matrix $A_1$ formed by these r columns the row rank is $r$; choosing r independent rows as above yields an $r$-order minor. (Which rows to take could be analyzed via elementary column operations; we omit that here.) }
Now take any m-order minor with $m>r and m<=s$:
$A_3=A mat(k_1, k_2, ..., k_m; l_1, l_2, ..., l_m;) $
#let lvectors=$(bold(l_1), bold(l_2),..., bold(l_m))$
Since the rank of A is r, the columns #lvectors must be linearly dependent, and the columns of $A_3$ are shortenings of #lvectors, hence also linearly dependent (equivalently: the corresponding homogeneous system has a nonzero solution $->$ determinant 0, or a column expansion hits a zero column...). Therefore its determinant is 0. Thus every m-order minor with $m>r$ vanishes.
Theorems 4 and 5 express what the rank of a matrix means:
1. the row rank
2. the column rank
3. the largest order of a nonzero minor
=== Corollary 3. For an $s times n$ matrix A of rank r, the rows and the columns passing through a nonzero r-order minor form maximal linearly independent sets of the row vectors and column vectors of A, respectively.
1. Rows: with $#rank (A)= r$, the rows of a nonzero r-order minor are linearly independent, hence their extensions (the full rows of A) are independent as well. That extended set has rank r = rank of the matrix = row rank $=>$ the rows through the minor form a maximal independent set of the rows of A.
2. Columns: analogous to the rows.
== Definition. An n-order square matrix A is called a full-rank matrix if its rank equals its order n.
=== Corollary 4. An n-order square matrix A has full rank if and only if $|A| != 0$.
Necessity: by Theorem 5, if the rank of A is n, then the largest order of a nonzero minor of A is n, and the only n-order minor of A is |A|, so $|A|!=0$.
Sufficiency: if $|A|!=0$, then the homogeneous system with coefficient matrix A has only the zero solution, so the columns of A are linearly independent; the column rank is n, hence the rank of A is n.

= Solvability Criterion for Linear Systems
== Theorem 1. Solvability criterion for linear systems
A linear system over a field K has a solution if and only if the rank of its augmented matrix equals the rank of its coefficient matrix.
(The corollary at this point of the source book may contain an error; following the book, this text gives a different proof.)
#let avec = $bold(a_1),bold(a_2), ..., bold(a_n)$
#let ainv = $bold(alpha_1), bold(alpha_2), ..., bold(alpha_r)$
#let x_expr= $x_1 bold(a_1) + x_2 bold(a_2) + ... + x_n bold(a_n)$
That is, $#x_expr = bold(b) <=> rank mat(bold(a_1), bold(a_2), ..., bold(a_n);) = rank mat(bold(a_1), bold(a_2), ..., bold(a_n), bold(b);)$
Proof: If $ #x_expr= bold(b)$ has a solution, then $bold(b)$ can be expressed linearly by $#avec$. Let $(#ainv)$ be a maximal independent subset of $(#avec)$. Every vector of the coefficient matrix's columns $#avec$ can be expressed by this independent set, and the columns of the augmented matrix, $#avec ,bold(b)$, are equivalent to $#avec, [sum_i x_i bold(a_i)] = #avec, [sum_i x_i sum_j p_j bold(alpha_j)] = #avec, [sum_j sum_i x_i bold(alpha_j)]$, i.e. the columns of the augmented matrix can be expressed linearly in terms of $(ainv)$, so the rank of the augmented matrix is also r. Necessity is proved.
Sufficiency: equal ranks of the augmented and coefficient matrices $=>$ dim <#avec, $bold(b)$> = dim <#avec>. Since <#avec> $subset.eq$ <#avec, $bold(b)$>, Proposition 4 on dimensions of vector spaces (two nested vector spaces of equal dimension coincide) gives <#avec> = <#avec, $bold(b)$>. Let $(#ainv)$ be a maximal independent subset of <#avec>; then it is also a basis of $<#avec> i.e. <#avec, bold(b)>$, so $bold(b)$ can be expressed linearly by $(#ainv)$, i.e. by $(#avec)$, and the system has a solution. Sufficiency is proved.
== Theorem 2. Number of solutions of a linear system
#highlight()[When] an n-variable linear system #highlight()[has a solution]:
1. if the rank of its coefficient matrix is n, the system has a unique solution;
2. if the rank of its coefficient matrix is less than n, the system has infinitely many solutions.
Proof: Reduce the coefficient matrix A to an echelon form $J$ by a sequence of elementary row operations. By the corollaries about rank, the number of nonzero rows of $J$ equals the rank of A; if that is n, there are no free variables and the solution is unique. If the rank of A is less than n, $J$ has fewer than n nonzero rows, free variables exist, and the system has infinitely many solutions.
=== Corollary 1. An n-variable homogeneous linear system has a nonzero solution if and only if the rank of its coefficient matrix is less than n.
1. Sufficiency follows from part 2 of Theorem 2.
2. Necessity: when an n-variable homogeneous system has a nonzero solution, the number of nonzero rows of its echelon form $J$ is less than the number n of unknowns, hence its row rank, and thus the rank of the coefficient matrix, is less than n.

= Structure of the Solutions of a Homogeneous Linear System
#let KSpace =$K^n$
#let ap_v = $bold(alpha)$
#let bt_v = $bold(beta)$
A solution of an n-variable homogeneous linear system $#x_expr = bold(0)$ (1) over the field K is a vector of $K^n$, called a solution vector of (1). The set of solutions of (1) is a nonempty subset W of $K^n$, and W has the following properties:
== Property 1. If $#ap_v, #bt_v in W$, then $#ap_v + #bt_v in W$.
Proof: Let $#ap_v = vec(c_1, c_2, ..., c_n), #bt_v = vec(d_1, d_2, ..., d_n)$; then $c_1 bold(a_1) + c_2 bold(a_2) + ... + c_n bold(a_n) = bold(0)$ and $d_1 bold(a_1) + d_2 bold(a_2) + ... + d_n bold(a_n) = bold(0)$, hence $(c_1+d_1) bold(a_1) + (c_2+d_2) bold(a_2) + ... + (c_n+d_n) bold(a_n) = bold(0)$ (associativity), so $#ap_v + #bt_v in W$.
== Property 2. If $#ap_v in W$, then for every $k in K$, $k #ap_v in W$.
Keeping the assumptions of Property 1: $k c_1 bold(a_1) + k c_2 bold(a_2) + ... + k c_n bold(a_n) = k (c_1 bold(a_1) + c_2 bold(a_2) + ... + c_n bold(a_n)) = k bold(0) = bold(0)$, so $k #ap_v in W$.
Since W is closed under addition and scalar multiplication, W is a subspace of #KSpace. For a linear subspace, the next question is whether a basis can be found to describe it.
#let eta_vectors=$bold(eta_1), bold(eta_2), ..., bold(eta_s)$
== Definition 1. Fundamental system of solutions of a homogeneous linear system
When a homogeneous linear system has nonzero solutions, if some of its solutions $#eta_vectors$ satisfy
1. $(#eta_vectors)$ is linearly independent, and
2. every other vector of $W$ (i.e. every solution of the system) can be expressed linearly by $(#eta_vectors)$,
then $(#eta_vectors)$ is called a fundamental system of solutions of the system (i.e. a basis of W).
== Theorem 1. Dimension of the solution space W
$dim W = n - rank(A)$, where n is the number of unknowns and A is the coefficient matrix.
Proof:
#let r=$r$
#let n=$n$
#let beta = $bold(eta)$
#let x_mod_r = $bold(x_(n \\ r))$
Case 1. If $#x_expr = bold(0)$ has only the zero solution, then W contains only $bold(0)$, and $dim W = 0 = n - #rank (A) = n - n$.
Case 2. If it has nonzero solutions, let $#rank (A) = r$. Reducing A to the reduced echelon form $hat(J)$ by elementary row operations gives $r$ nonzero rows and #r pivots, hence #r leading variables and $n-r$ free variables; the leading variables can be written as expressions in the free variables:
$cases(x_1 = -b_(1, r+1)x_(r+1) - ... - b_(1, n) x_(n), x_2 = -b_(2, r+1)x_(r+1) - ... - b_(2, n) x_(n), dots.v, x_r = -b_(r, r+1)x_(r+1) - ... - b_(r, n) x_(n),) --(1) $
Two remarks: 1. in $-b_(k, m)x_(m)$ the coefficient denotes the coefficient of the k-th leading variable with respect to the m-th free variable; 2.
$x_1, x_2, ..., x_r$ 表示的是 r个主变量, 主变量下标是否连续不影响该定理的证明,只要主变量不出现等式右侧即可,但为了叙述简单假设其连续。 可见自由变量部分可构成一个$n-r$维的向量,每个向量代入上述表达式中可得一个完整的解,不妨让自由变量的向量取以下值:$(#x_mod_r) = vec(1,0,dots.v, 0), vec(0,1,dots.v, 0), ..., vec(0,0,dots.v, 1) <-$ 共(#n - #r)项。将其代入主变量表达式中,可得(#n - #r) 个解: $ #beta _1=vec(-b_(1, r+1), -b_(2, r+1), dots.v, -b_(r, r+1), 1, 0, dots.v, 0),#beta _2=vec(-b_(1, r+2), -b_(2, r+2), dots.v, -b_(r, r+2), 0, 1, dots.v, 0), #beta _(n-r)=vec(-b_(1, n), -b_(2, n), dots.v, -b_(r, n), 0, 0, dots.v, 1) $ 这样的向量集可以视作$(#beta _1, #beta _2, ..., #beta _(n-r))$ 可以视作$#x_mod_r _1, #x_mod_r _2,...,#x_mod_r _(n-r)$的扩展组,而$#x_mod_r _1, #x_mod_r _2,...,#x_mod_r _(n-r)$ 向量组线性无关,则$(#beta _1, #beta _2, ..., #beta _(n-r))$线性无关 #let eta_vectors=$#beta _1, #beta _2, ..., #beta _(n-r)$ 任取该方程组的一个解$#beta = vec(c_1,c_2, ..., c_r, ..., c_n)$ , $#beta $ 必满足 表达式(1), 即$ cases(c_1 = -b_(1, r+1)c_(r+1) - ... - -b_(1, n) c_(n), c_2 = -b_(2, r+1)c_(r+1) - ... - -b_(2, n) c_(n), dots.v, c_r = -b_(r, r+1)c_(r+1) - ... - -b_(r, n) c_(n),)$ 从而$#beta = vec(-b_(1, r+1)c_(r+1) - ... - -b_(1, n) c_(n),-b_(2, r+1)c_(r+1) - ... - -b_(2, n) c_(n), dots.v, -b_(r, r+1)c_(r+1) - ... - -b_(r, n) c_(n), c_(r+1),dots.v, c_n) = c_(r+1) vec(-b_(1, r+1), -b_(2, r+1), ..., -b_(r, r+1), 1, 0,0,...,0 ) + ...+c_(n) vec(-b_(1, n), -b_(2, n), ..., -b_(r, n), 0, 0,0,...,1 ) $ 即$#beta $ 可被$(#eta_vectors)$ 线性表示 $<=>$ $forall #beta in W, #beta$ 可被 $(#eta_vectors)$ 线性表出,从而$(#eta_vectors)$ 是W的一个基。进而$#rank (#eta_vectors) = n-r = dim W$ 。 这里证明时,也可以从(1)直接出发,由主变量$x_i$ 与自由变量的$x_k$的关系,写出解向量的表示,因为自由变量可以取任何值,直接让自由变量依次取$K^(n-r)$空间的标准基(因为自由变量的向量是#n - #r 维的),从而就能得出W的秩以及求得任意一个基础解系 = 非齐次线性方程组的解的结构 #let x_exp = $x_1 bold(a_1) + x_2 bold(a_2) + ... + x_n bold(a_n)$ #let bb = $bold(b)$ #let by = $bold(y)$ #let bc = $bold(c)$ #let bd = $bold(d)$ #let beta = $bold(eta)$ 对于$#x_exp = #bb, #bb != bold(0) --(1)$ 的非齐次线性方程组,设其解集为U, 忽略其右侧的常数项,则其构成的齐次线性方程组$#x_exp = bold(0) -- (2)$ 称为(1)的#highlight()[导出组],设#highlight()[导出组]的解集为W, 则有以下性质(方程组有解时) == 性质1. 
任取 #by , #bc $in U$, 则$#by - #bc in W$ 证: 对于任意#by, #bc, 则(#by - #bc) 代入方程组 可得:$(y_1 - c_1)bold(a_1) + ... + (y_n - c_n) bold(a_n) = y_1bold(a_1) + ... + y_n bold(a_n) - (c_1 bold(a_1) + ... + c_n bold(a_n)) = #bb - #bb = bold(0)$ 则#by - #bc 是导出组的一个解 == 性质2. 任取#by $in U$, 任取 $#bd in W$, 则$#by + #bd in U$ 证: $#by in U, #bd in W, => (y_1 + d_1)bold(a_1) + ... + (y_n + d_n)bold(a_n) = (y_1 bold(a_1) + ... + y_n bold(a_n)) + (d_1 bold(a_1) + ... + d_n bold(a_n)) = #bb + bold(0) = #bb => #by + #bd in U$ == 定理1. 如果非齐次线性方程组有解,则其解集为: $U = {#by + #beta | #beta in W}$ 其中#by 是 方程组的一个特解, #beta 是其导出组解集中的任意向量 由性质1. 可以得出,U可以被W中的某些(设其集合为L)向量加上特解向量表示;由性质2. 可得W中的所有向量加上特解向量都属于U。则联合可得L=W , 从而得证 $U = {#by + #beta | #beta in W}$ 称为$K^n$ 上的一个线性流形,其不是一个线性子空间。(最直接的,其对数量乘法不封闭) == 推论1. 如果非齐次线性方程组有解,则其导出组只有0解时,该方程组有唯一解 由定理1. 取一个特解$#by$, 而导出组的解集W={$bold(0)$}, 因此$U = {#by}$ 至此,线性方程组解的情况,就研究完了
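As an illustration (not part of the original notes), the two central facts above, row rank = column rank (Theorem 4) and $dim W = n - rank(A)$, can be checked with a short exact Gaussian elimination; the helper function `rank` below is our own sketch:

```python
from fractions import Fraction

def rank(mat):
    """Rank via Gaussian elimination over exact rationals:
    the number of nonzero rows of the echelon form J."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0  # number of pivots found so far
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # clear column c in every other row
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, 3, 1],
     [2, 4, 6, 2],   # = 2 * row 1, so the rows are dependent
     [1, 0, 1, 0]]
At = [list(col) for col in zip(*A)]  # transpose

assert rank(A) == rank(At) == 2      # Theorem 4: row rank = column rank
# Solution-space dimension: dim W = n - rank(A) = 4 - 2 = 2 free variables
```

Running the elimination on `A` and on its transpose gives the same rank, and the difference `n - rank(A)` counts the free variables that index a fundamental system of solutions.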
https://github.com/vEnhance/1802
https://raw.githubusercontent.com/vEnhance/1802/main/mockmt2.typ
typst
MIT License
#import "@local/evan:1.0.0":* #show: evan.with( title: [18.02 Mock Midterm 2], author: "<NAME>", date: [10 October 2024], maketitle: false, ) #block[ #show heading: set align(center) #set heading(numbering: none) = 18.02 Mock Midterm 2 Instructions - Don't turn the page until the signal to start is given (3:05 PM in 4-370 on October 21, 2024). - You have 50 minutes to answer five questions. We're not grading anything, so write your solutions anywhere (the space below, other loose paper, notebook, iPad, etc.). - Like the real exam, I suggest not referring to any notes/calculators/etc. - Solutions are posted in Section 29 of my LAMV book at #url("https://web.evanchen.cc/1802.html"). ] #pagebreak() #block[ #show heading: set align(center) #set heading(numbering: none) = 18.02 Mock Midterm 2 Questions / Question 1.: A butterfly is fluttering in the $x y$ plane with position given by $bf(r)(t) = angle.l cos(t), cos(t) angle.r$, starting from time $t = 0$ at $bf(r)(0) = angle.l 1,1 angle.r$. - Compute the speed of the butterfly at $t = pi/3$. - Compute the arc length of the butterfly's trajectory from $t = 0$ to $t = 2 pi$. - Sketch the butterfly's trajectory from $t = 0$ to $t = 2pi$ in the $x y$ plane. / Question 2.: Let $k > 0$ be a fixed real number and let $f(x,y) = x^3 + k y^2$. Assume that the level curve of $f$ for the value $21$ passes through the point $P = (1,2)$. Compute the equation of the tangent line to this level curve at the point $P$. / Question 3.: Let $f(x,y) = x^(5y)$ for $x,y > 0$. Use linear approximation to estimate $f(1.001, 3.001)$ starting from the point $(1,3)$. / Question 4.: Consider the function $f : RR^2 -> RR$ defined by $ f(x,y) = cos(pi x) + y^4/4 - y^3/3 - y^2. $ - Compute all the critical points and classify them as saddle point, local minimum, or local maximum. - Find the global minimums and global maximums of $f$, if they exist. 
/ Question 5.: Compute the minimum and maximum possible value of $x + 2 y + 2 z$ over real numbers $x$, $y$, $z$ satisfying $x^2 + y^2 + z^2 <= 100$. / Question 6.: Consider the level surface of $f(x,y,z) = (x-1)^2 + (y-1)^3 + (z-1)^4$ that passes through the origin $O = (0,0,0)$. Let $cal(H)$ denote the tangent plane to this surface at $O$. Give an example of two nonzero tangent vectors to this surface at $O$ whose span is $cal(H)$. #v(3em) The solutions to all the problems are now posted in Section 29 of my LAMV book: #align(center)[ #url("https://web.evanchen.cc/upload/1802/lamv.pdf"). ] ]
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/meta/link-11.typ
typst
Other
Text <hey> Text <hey> // Error: 2-20 label occurs multiple times in the document #link(<hey>)[Nope.]
https://github.com/soul667/typst
https://raw.githubusercontent.com/soul667/typst/main/PPT/MATLAB/touying/docs/i18n/zh/docusaurus-plugin-content-docs/version-0.2.x/start.md
markdown
---
sidebar_position: 2
---

# Getting Started

Before you begin, make sure you have a Typst environment set up. If you don't, you can use the [Web App](https://typst.app/) or the VS Code extensions [Typst LSP](https://marketplace.visualstudio.com/items?itemName=nvarner.typst-lsp) and [Typst Preview](https://marketplace.visualstudio.com/items?itemName=mgt19937.typst-preview).

To use Touying, you only need to add the following to your document:

```typst
#import "@preview/touying:0.2.1": *

#let (init, slide, slides) = utils.methods(s)
#show: init

#show: slides

= Title

== First Slide

Hello, Touying!

#pause

Hello, Typst!
```

![image](https://github.com/touying-typ/touying/assets/34951714/6f15b500-b825-4db1-88ff-34212f43723e)

It's that simple. You've created your first Touying slides, congratulations! 🎉

## A more complex example

In fact, Touying offers several styles for writing slides. The example above relies on first- and second-level headings to start new slides, but you can also use the `#slide[..]` syntax to access more of Touying's powerful features.

```typst
#import "@preview/touying:0.2.1": *

#let s = (s.methods.enable-transparent-cover)(self: s)
#let (init, slide) = utils.methods(s)
#show: init

// simple animations
#slide[
  a simple #pause *dynamic* #pause slide. #meanwhile meanwhile #pause with pause.
][
  second #pause pause.
]

// complex animations
#slide(setting: body => {
  set text(fill: blue)
  body
}, repeat: 3, self => [
  #let (uncover, only, alternatives) = utils.methods(self)

  in subslide #self.subslide

  test #uncover("2-")[uncover] function

  test #only("2-")[only] function

  #pause

  and paused text.
])

// math equation animations
#slide[
  == Touying Equation

  #touying-equation(`
    f(x) &= pause x^2 + 2x + 1 \
         &= pause (x + 1)^2 \
  `)

  #meanwhile

  Touying equation is very simple.
]

// multiple pages for one slide
#slide[
  == Multiple Pages for One Slide

  #lorem(200)
]

// appendix by freezing last-slide-number
#let s = (s.methods.appendix)(self: s)
#let (slide,) = utils.methods(s)

#slide[
  == Appendix
]
```

![image](https://github.com/touying-typ/touying/assets/34951714/192b13f9-e3fb-4327-864b-fd9084a8ca24)

Besides this, Touying ships with many built-in themes that make it easy to write beautiful slides. Basically, you only need to add one line at the top of your document:

```
#let s = themes.metropolis.register(s, aspect-ratio: "16-9")
```

to use the metropolis theme. For a more detailed tutorial, see the following chapters.
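Piecing together the fragments shown on this page, a minimal metropolis-themed file could look as follows (this assembly is our own illustration built only from the snippets above; consult the Touying 0.2.x documentation for the exact API):

```typst
#import "@preview/touying:0.2.1": *

// register the theme first, then derive the show rules from `s`
#let s = themes.metropolis.register(s, aspect-ratio: "16-9")
#let (init, slide, slides) = utils.methods(s)
#show: init
#show: slides

= Title

== First Slide

Hello, Touying!
```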
https://github.com/Tetragramm/flying-circus-typst-template
https://raw.githubusercontent.com/Tetragramm/flying-circus-typst-template/main/src/FlyingCircus.typ
typst
MIT License
#import "Impl.typ" : FlyingCircus, FCPlane, FCShip, FCVehicleFancy, FCVehicleSimple, FCWeapon, KochFont, HiddenHeading, FCPlaybook, FCPRule, FCPSection, FCPStatTable, FCShortNPC, FCShortAirship
https://github.com/kdog3682/typkit
https://raw.githubusercontent.com/kdog3682/typkit/main/0.1.0/src/typography.typ
typst
#import "resolve.typ": * #import "patterns.typ" #import "ao.typ": build-attrs #let factory(x, newline: false, ..sink) = { if x == none { hide(text("hi")) } else { text(resolve-content(x), ..sink) } if newline == true { parbreak() } } #let sm-text = factory.with(size: 0.8em) #let md-text = factory.with(size: 1.1em) #let lg-text = factory.with(size: 1.5em) #let h4 = factory.with(size: 1em, weight: "bold", newline: true) #let h3 = factory.with(size: 1.2em, weight: "bold", newline: true) #let h2 = factory.with(size: 1.5em, weight: "bold", newline: true) #let h1 = factory.with(size: 2em, weight: "bold", newline: true) #let bold = factory.with(weight: "bold") #let big = factory.with(size: 1.5em) #let medium = factory.with(size: 1.3em) #let small = factory.with(size: 0.8em) #let italic = factory.with(style: "italic") #let math-bold(s) = { math.equation(math.bold(text(str(s)))) } #let strike-and-replace(a, b) = { strike(a) h(3pt) resolve-content(b) } #let pattern-text(s, style: "criss-cross", size: 32pt, ..sink) = { let fill = dictionary(patterns).at(style) let attrs = build-attrs( weight: "bold", fill: fill, stroke: 1pt, key: "text", size: size, sink, ) text(str(s), ..attrs) }
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/math/delimited-07.typ
typst
Other
// Test predefined delimiter pairings. $floor(x/2), ceil(x/2), abs(x), norm(x)$
https://github.com/cadojo/correspondence
https://raw.githubusercontent.com/cadojo/correspondence/main/src/dear/dear.typ
typst
MIT License
#import "src/letter.typ": * #import "src/cover.typ": * #import "src/statement.typ": *
https://github.com/thanhdxuan/dacn-report
https://raw.githubusercontent.com/thanhdxuan/dacn-report/master/Lab03/contents/03-datahandling.typ
typst
#pagebreak()
= Digital signatures
== Question 1: Simulate a digital signature with the CrypTool program, following the steps in the reference guide below, with the input file msg.txt containing your full name and student ID. Capture a screenshot of each step, as in the reference.

_Example: a student named <NAME> with student ID 123456789 would use an msg.txt file with the following content:_

```
Ten: <NAME>
MSSV: 123456789
Lab 04 Digital Signature
```

- Step 1: From the CrypTool interface, choose the menu "Digital Signatures/PKI" 🡪 "Signature Demonstration (Signature Generation)"
#image("/images/dig_b1.jpg")
#pagebreak()
- Step 2: Choose "Select hash function". Choose MD5 (or another hash algorithm) and click OK.
#image("/images/dig_b2.jpg")
#pagebreak()
- Choose "Generate Key" and "Generate prime numbers" in the step-by-step Signature Generation dialog
#image("/images/dig_b3.png")
#pagebreak()
- Enter the lower bound 2^150 and the upper bound 2^151, then click "Generate prime numbers" and "Apply primes".
#image("/images/dig_b4.jpg")
#pagebreak()
- Click "Store key"
#image("/images/dig_b5.jpg")
#pagebreak()
- Click "Provide certificate" and enter:
  - Name (the student's family and middle names): Smith
  - First name (the student's given name): Mary
  - Key identifier (<name> key): Mary key
  - PIN: cryptool
  - PIN verification: cryptool
#image("/images/dig_b6.jpg")
#pagebreak()
- Click "Create Certificate and PSE".
#image("/images/dig_b7.jpg")
#pagebreak()
- Choose "Compute hash value".
#image("/images/dig_b8.jpg")
#pagebreak()
- Choose "Encrypt hash value".
#image("/images/dig_b9.png")
#pagebreak()
- Choose "Generate signature"
#image("/images/dig_b10.jpg")
#pagebreak()
- Choose "Store signature"
#image("/images/dig_b11.jpg")
#pagebreak()
- Click OK; we obtain the message and its digital signature as shown below.
#image("/images/dig_b12.jpg")

== Question 2: What are the requirements of a digital signature?
- It must depend on the message being signed.
- It must use information unique to the sender, to prevent both forgery and repudiation.
- It must be relatively easy to produce.
- It must be relatively easy to recognize and verify.
- It must be computationally infeasible to forge:
  - a new message for an existing digital signature;
  - a digital signature for a given message.
- It must be practical to retain a copy of the digital signature in storage.
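The hash-then-sign flow that CrypTool demonstrates above (compute hash value, then encrypt it with the private key) can be sketched in a few lines of Python. The toy RSA parameters below are our own illustration: tiny textbook primes, SHA-256 in place of MD5, and the digest reduced mod n, so this offers no real security:

```python
import hashlib

# Toy RSA key (textbook sizes; real keys use primes of 1024+ bits)
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # Euler's totient
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def toy_sign(message: bytes) -> int:
    # 1. compute the hash value; 2. "encrypt" it with the private key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def toy_verify(message: bytes, signature: int) -> bool:
    # recompute the hash and compare it with the decrypted signature
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"Lab 04 Digital Signature"
sig = toy_sign(msg)
assert toy_verify(msg, sig)  # a genuine signature verifies
# toy_verify(b"tampered", sig) will almost surely fail,
# since the recomputed hash no longer matches
```

Because the signature depends on the message hash and on the sender's private key, it satisfies the first two requirements listed above; the infeasibility requirements are what force real key sizes and real hash functions.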
https://github.com/hojelse/typst
https://raw.githubusercontent.com/hojelse/typst/main/README.md
markdown
# typst Trying out typst
https://github.com/swablab/documents
https://raw.githubusercontent.com/swablab/documents/main/datenschutz.typ
typst
Creative Commons Zero v1.0 Universal
#import "templates/tmpl_page.typ": tmpl_page

#show: doc => tmpl_page(
  title: "Privacy Notice",
  version: "v2.0",
  change_date: "16.10.2024",
  doc,
)

== Controller
The controller for data processing is swablab e. V., represented by its board: <NAME>, <NAME>, the other board members authorized to represent the association individually, and the members working in the administration.

Association address: \
swablab e. V., Katharinenstr. 1, 72250 Freudenstadt; email: #link("<EMAIL>").

== Collection and processing of personal data
For members, we store all personal data provided on the membership application, including name, address, email address, phone number, date of birth, and the information required for SEPA.
For persons who have signed the liability waiver, we store all personal data provided on the waiver, including name, address, email address, phone number, and date of birth.

== Purpose of data processing
The data are processed for the following purposes:
- membership administration
- communication about events and offers
- legal protection
- publication of photos and videos on the website (https://swablab.de), on social media, and in print media

== Legal basis of processing
The data are processed on the basis of consent pursuant to Art. 6(1)(a) GDPR.

== Storage period
The data are stored for as long as necessary for the purposes stated above. Personal data relating to treasury administration are retained for up to 10 years after leaving the association, in accordance with tax law.
For members, the data remain stored until the membership ends. Photos remain published until consent is revoked.

== Rights of the data subject
Persons have the right to request information about the personal data we store about them, and to request rectification, erasure, or restriction of processing. Consent may also be revoked at any time.

== Violations
Members have the right to complain to the supervisory authority (the State Commissioner for Data Protection of Baden-Württemberg) about violations of data protection law by swablab e. V. in the processing of their personal data.
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/036%20-%20Guilds%20of%20Ravnica/006_The%20Ascension%20of%20Reza.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "The Ascension of Reza", set_name: "Guilds of Ravnica", story_date: datetime(day: 27, month: 02, year: 2019), author: "<NAME>", doc ) #emph[Quiet mind, quiet steps.] I ascend the staircase to the Old Laws Library. I've practically memorized the route now—one hundred twelve steps across the Verity Promenade, two hundred twelve steps up to reach the Pavilion of Justice, eighty-seven steps through the stoic Halls of Reason. And now, I've only thirty-three steps left to climb, through the mists from the Cascades of Justice. Water droplets dapple my robes as the fifteen-story waterfall turns to vapor right before it reaches the main floor of the Jelenn Column complex. If I looked down, I would see hundreds of Azorius bureaucrats and legislative members walking across the atrium in orderly lines, but I dare not look. That's the surest way to lose my footing, and with no handrails to steady me if I stumble~ #emph[Quiet mind, quiet steps. Quiet mind, quiet steps.] The sprawling archway of the law library finally greets me, and I exhale the softest sigh of relief to be on level ground. I'm immediately enveloped in the aroma of dusty law books, leather-bound treasures of order and righteousness. Most lawmages in my cohort do their research in the runes library, but with histories that run as deeply as Ravnica's, it is prudent to study the origins upon which our laws were built. Here, I can see the first draft of the Guildpact—pressed under three inches of magic-treated glass and authored by Azor himself. If you look hard enough, you'll even see a fine strand of blue fur from his mane on the fifth page. The original draft had been plagued with loopholes large enough to run a fully grown wurm through, but slowly and methodically, Azor had scratched them out, red notes in the margins the color of old blood. 
In my own quest toward finding perfection through law, I have come to appreciate this process of rooting through the past to find weak spots so we can afford ourselves the most orderly of futures. #figure(image("006_The Ascension of Reza/01.jpg", width: 100%), caption: [Tome of the Guildpact | Art by: <NAME>], supplement: none, numbering: none) "Pull these for me," I whisper to the homunculus who tends to the library. I hand him a list of the texts I'll be working from today. As he scampers off, I crane my neck, peeking over the walls of the study hutches in search of Tagan. She hadn't been in her chambers when I'd checked, and my impatience was too thin to wait for her return. The inaccessibility of the Old Laws Library makes it a favorite place for sphinxes, so chances are I'll find her here. Finally, I spot my mentor's blue and brown brindle fur, then surreptitiously enter the adjacent hutch. The homunculus lays the books upon my desk along with a translation sigil to sort out antiquated terms. He signs to ask me whether I'd like page-turning service as well, but I wave him away and pour myself into the section where I'd left off during my last visit. It's hard to concentrate with Tagan so close, knowing she knows how my latest law rune was received by the Senate. The loophole I'd closed was a major one, and the new law I'd drafted consisted of three pages of the most judiciously convoluted legalese, including fifteen double negatives, twelve triple negatives, seven footnotes, and twenty-eight qualifications, all fitting into a single, perfect sentence. I fight back my nerves and my desires to pry Tagan for answers, and then lose myself in an old map of the Tenth District as I wait for my mentor to notice me. I trace my finger along the Transguild Promenade noting the difference five hundred years has made. Many of the neighborhoods featured on the map have now fallen at the hands of the Gruul. The Ghost Quarter was three times the size it is now. 
Zonot Seven was but an unassuming lake. And further upstream is a fully functioning Azorius enclave—a once-thriving community that now sits in ruins, thanks to a thirty-block tract of land run by rogue Izzet chemisters called the Thinktank. The Thinktank Jurisdictional Fallacy was a favorite problem set given to first-year lawmages. No one in my cohort had solved it, and no one has in the many years since. Four guilds have expressed their claim to the area the Thinktank rests upon: It's a fruitless exercise. There will never be an agreement on who has proper jurisdiction. The last time someone tried to lay a lawful claim to it, a war had nearly broken out. So now it sits, largely ungoverned, unpoliced, and unserviced through a series of unmendable loopholes. I turn the page, and as if to spite me, the edge of the paper rips through my skin—one of the many hazards of being a lawmage. "By Azor's immaculate mane!" I curse, a whole two levels above a whisper. Practically shouting in library tones. "Reza?" comes Tagan's voice from the hutch over. She pops her head up, hangs her paws over the divider and looks down at me. "Peace and order to you," she whispers in greeting. "Tranquil tidings to you as well," I say, and then we leave a moment for the silence to stir so we can order our thoughts. The rules of etiquette dictate that during a non-arranged meeting within an institution of learning between lawmages of disparate rankings, the inferior should be the first to engage the conversation after greetings, but from the way Tagan's tail is whipping, I can tell she is eager to bestow news upon me, so I defer to her with a nod of my head. "The Senate has ruled upon your law rune concerning the closure of the identity loophole," she says. "And?" I ask, my heart beating so loudly in my chest, I think the homunculus will come over and shush me. "They adored it. So intricate. So comprehensive. <NAME> said it was the most brilliant law he's seen this month. 
It's being sent to the sky scribes as we speak." #figure(image("006_The Ascension of Reza/02.jpg", width: 100%), caption: [Dovin's Acuity | Art by: <NAME>], supplement: none, numbering: none) "He said those words? Those exact words?" I can feel my cheeks warming, the blue of my skin flushing purple with humbled honor. "I wouldn't dare paraphrase <NAME> without qualifying it first." A wave of nausea overcomes me. This is my first law to be written in the skies above New Prahv. It was my most complicated find, and the one of which I was most proud. I knew from the moment I found the loophole that it'd get attention, but a skyscribing? So quickly? Adulations from my cohort will be forthcoming. Thanks to all the hours I'd put into writing that law, the streets would become more orderly. Citizens would feel safer walking the streets of Ravnica, even at night. Perfection would be one step closer. "You've got Baan's attention." She hops over the wall and lands, perfectly silent. Then she draws up a privacy spell around us. If I weren't her mentee, I wouldn't have noticed her casting it—an ever so slight twitch in her front right paw. "Now's the time to follow up with something equally impressive. What else are you working on?" I'd given so much of myself getting the last law written, I hadn't had time to spare a thought about what was next. "Well," I say grasping for ideas. "There's always the Thinktank Jurisdictional Fallacy~" She arches her back in a stretch, looking bored. "First-year riddles aren't going to impress Baan," she says. "What else?" I rattle off some ideas to her, but I've already lost her attention. She's more taken with the translation sigil sitting on the edge of my desk. She bats at it with her paw until it slides over the edge. I catch it before it hits the floor. I keep the sigil clenched in my fist. If I put it back, she'd just swat it off again, but that thought nudges an idea forth about repeat offenders. 
"I noticed a possible loophole when I was researching last week~a clause that ties the average length of prison sentences to recidivism rates. Theoretically, we could end up having negative-term sentences should the rate fall low enough. I would have followed up earlier, but it referenced an ancient Azorius Law, 394-H, and I'd need to have someone fetch the corresponding scrolls from the Historical Archives to confirm."

Tagan perks at this. "Theoretical loopholes are easy to sensationalize. We can get the populace riled up about how we averted near disaster within the prison system, and it'll be easier to justify our salaries. It'll take days for the librarians to approve the interlibrary transfer, though. You should visit the Historical Archives yourself, while your rune is still new in the sky."

She sees my hesitation. Not the reaction she was expecting.

"Don't tell me you've never been outside of New Prahv," Tagan says.

"Of course, I have!" I say. It's been several years. Eight, to be exact, but sometimes I get so entrenched in the pressings of order, I forget that Ravnica exists as more than a theoretical world on which I enact laws. The Historical Archives aren't far. And it would be spectacular to see the lumbering archive golems that haunt those long-abandoned stacks. Talk about walking law history. But then numbers start spinning in my head: two laws in the sky in the same week. Twenty minutes' ride by griffin. Two-hundred feet above the ground. Flying over the heads of thousands and thousands of Ravnicans.

#emph[Quiet mind. Quiet mind.] No need to panic. Everything will be fine.

#v(0.35em)
#line(length: 100%, stroke: rgb(90%, 90%, 90%))
#v(0.35em)

The flight requisitions officer takes my paperwork, verifies my identity rune, and then escorts me to the griffin stable located in one of the highest domes in New Prahv.
Seven open bays lead out to the city, serving as an aerohub for archons, sphinxes, and the silver and blue surveillance thopters whooshing in and out, soundlessly flapping wings of pulsating rune light.

"There's a lot of chaos out there," the requisitions officer says to me when she notices I've stopped walking forward, stunned by the sheer vastness of the city below. "Is this your first time flying?"

I nod.

"You'll do fine. It is a solemn duty we must uphold, but it is one worthy of our time and our efforts."

At the mention of duty, my legs stop wobbling, and I'm able to climb upon the griffin. I'm unsteady at first, but the officer assures me that this beast does well with inexperienced handlers. I find my confidence and my balance as I make sure both of my satchels sit perfectly aligned, containing the necessary reference texts that Tagan let me borrow from her personal library. Now I'm ready to strike out for the Archive and make a name for myself in this guild.

Seconds later, I'm whipping out of the bay and into the sky. The griffin dips sharply, then banks left and rises. It cuts right through one of the new law runes above the Guildhall. There are so many, it'd be impossible to avoid them all. I look around for mine and shiver when I see it.

#emph[Azorius Law 3455-J]

#emph[Failure to submit proper identity] ~

And then the runes thin, and Ravnica comes into view, taking my breath away. The city stretches as far as the eye can see, a patchwork of color and styles, buildings ranging from massive and bulky to thin and stately and everything in between. But as diverse as its people are, they are all bound together by the same laws under the same sky. Yes, the Azorius Senate hasn't many friends among the other guilds, but it is not our duty to nurture friendships. Instead, we must focus upon preserving order, lest the entire city fall victim to disarray.
#figure(image("006_The Ascension of Reza/03.jpg", width: 100%), caption: [Mountain | Art by: <NAME> Ro], supplement: none, numbering: none) Ten minutes into the flight, my path is blocked by an odd swarm of thopters hanging in the air like a cloud. The griffin maneuvers around them, but then a bolt of purple electricity erupts from the ground and cuts through the sky, striking the thopter closest to us. Another thopter goes down, and now my griffin is spooked. It swerves left, right, rears back. I try to compensate so I can steady it, but my efforts make things worse, and I lose my grip. And then I'm falling. Frantic and running on sheer instinct, I reach for one of the thopters as I fall past it, grabbing it on the side. It slows my fall some, but not enough. It struggles with my weight, one magic-fueled wing giving out after the next, until we're both free falling. But instead of pavement breaking my descent, my landing is cushioned—oh, I still hurt all over, and my mind is pounding, but I'm alive. The first clear thought I have is that my robes are stained. The second thought is that they're stained with my blood. Those two bits of awful news are dwarfed when I realize exactly #emph[what] I've landed in. A pile of trash. A giant pile of trash. I feel the collective horror of all my vedalken ancestors screaming out in unison. I'll have my bathers scrub my skin raw. I'll incinerate these clothes and have the ashes bundled up and tossed into the deepest zonot. But I'm quite sure I'll never be able to wipe clean this memory from my mind. "Help!" I cry out, but it is a library whisper. "Help!" I try again, and the word breaks through my throat as I thrash about. "You're okay," rumbles a deep, reassuring voice. I look up and see a large face—all scraggly red beard and oversized brass goggles—human, though if he told me he had a giant somewhere in his family tree, I'd believe him. "That was a mighty fall you took. You're lucky to be alive." 
He offers a grease-covered hand. At least I #emph[hope] it's grease. I reluctantly take it. "It doesn't feel much like luck," I say, peeling a strip of gelatinous ooze from my cheek. "Ah, you're right. Seems we've got a mad genius around here who's been shooting thopters out of the sky. Didn't hurt anything, did you?" "Just my pride, I suspect. Where am I?" I ask. "Thinktank," the guy says. "I'm Hendrik. My friends call me Hennie. Or <NAME>. Or B.H. Or Benny Two-Clocks on account of an incident with a fuzzy-headed blastseeker with a flair for misreading dials. Stopped my heart dead." He pounds his chest. "But Ol' Doc jiggered me a new one. Keeps time better than a clock in the Continuism lab!" "I'm Reza," I say slowly, not sure if it's this guy that's running my mind in circles, or just a concussion. "My associates call me Reza." I look around beyond the pile of trash. #emph[This] is the Thinktank? Mizzium-plated boilers coil up and down the streets, like a never-ending maze of intestines. They haven't weathered well, and layers upon layers of metal patches are welded to each building. Dozens of pressure valves release steam and other, more nefarious, vapors into the streets, covering the neighborhood with an awful yellow haze. I can't understand why any guild would fight over this. "All right, Reeze. Why don't you come home with me? We'll get you all cleaned up and back in the sky in no time." "It's Reza. And no offense, but I think it'd be more prudent if I returned to New Prahv immediately, seeing as I have no idea of your intentions toward me." "Suit yourself," Hendrik says, then shuffles off down the trash pile. "You might want to mind the compost wurms, though." I jump up. "Compost wurms?" "No trash service here, so we find ways to make do." I scuttle down the trash hill, then scrutinize my thoroughly soiled robes. I can't exactly return to New Prahv looking like this. 
If my associates ever caught wind that I'd fouled myself in such a manner, I'd never regain their respect. "Can you guarantee me your intentions are virtuous?" I ask Hendrik, masking the desperation in my voice with formal airs. "I won't consent to being the test subject to any sort of mad experiment." "I promise no additional misfortunes will fall upon you." He seems like a man of honor, and I'm running low on options, so I follow him home. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) For some reason, I thought the industrial boilerworks design of the Thinktank was just a poorly conceived facade, and that Hendrik's apartment would boast comfortable living and dining quarters and all the comforts of home. But it's worse inside. Brass piping and valve wheels jut out into every conceivable space, creating trip hazards and burn hazards wherever I turn. Their entire home is so engulfed in steam that it's taken the wrinkles out of my robes. I start sweating profusely, and Hendrik motions me over to a slightly less steamy corner. "B.H.? Is that you?" comes a voice from beyond the clanking of metal and the grinding of worn gears. "Me and a guest!" Hendrik cries. "Apparently, it's raining men. That maniac with the ball lightning generator is shooting thopters out of the sky again." He nudges me in the ribs. "Reezey here was the latest casualty." #figure(image("006_The Ascension of Reza/04.png", width: 100%), caption: [Art by: <NAME>], supplement: none, numbering: none) "Reza," I correct him again as a lithe human walks out, so thin and graceful, he could be vedalken, if it weren't for the hue of his skin and the thick mound of curls on his head. "B.H.'s given you a nickname already. Means you're in trouble. He likes you." He smiles. "I'm Janin. I keep the gears from falling off in this little hovel. Master Chemister, if you're the sort that likes respectable titles." "Get him cleaned up and fed, eh, Moonie?" 
Hendrik says to Janin as he slings a tool satchel over his shoulder. "I'll arrange a way for him to get back home." "Moonie?" I ask Janin once Hendrik is gone. "He says my eyes glow like the moons," Janin says with a shrug. "No one else calls me that. B.H. is a bit~eccentric. Even for a scatterbrained chemister with a slight death wish. So Reza~is that short for Rezajaelis?" I stare at him, marveling at how he knew that, and how he'd pronounced it so effortlessly. "Yes~how—" "I was raised by vedalken. My biological parents were killed in a lab explosion a few blocks from here. Mum and Pap felt partially responsible for the faulty blistercoils." "I'm sorry," I say, though in the back of my mind, I can't help but think that if only they'd had proper oversight, perhaps the accident wouldn't have occurred at all. "That's just how it works in the Thinktank. If your inventions hurt someone, you do your best to make it right. They took me in without a moment of hesitation. We can't count on anyone else, so we have to rely on each other." He gestures to an opening between copper pipes. "Let me show you to the bath." I grit my teeth and follow along, hoping the tub won't leave me dirtier than I am already. But when Janin opens the door, it's a little oasis inside. The porcelain gleams. He hands me a washcloth, towel, and a vial of vedalken cleansing oils. "I was going to give these to Mum to celebrate her purification rites but seems like you need them more." I must have a confused look on my face, because he smiles his disarming smile at me again. "I forgot, you're from New Prahv. Probably used to your own personal bathers and all that? Here, I'll get you started." He pulls a brass knob and water starts flowing. Then he opens the vial and lets a few drops of oil fall into the tub. A light blue mist coils upon the water's surface. "You can leave your robes outside the door. I can make those stains disappear." 
He practically disappears as well, quickly slamming the door shut behind him. The cleansing oils are potent, bordering on toxic, especially to humans and others with less refined senses. But to vedalken, the astringent smell is next to godliness. I tuck the vial in my satchel, then place my robes outside. Janin wasn't wrong about the bathers. Nevertheless, I don't intend for this excursion to get the best of me, so I render my skin clean the best I can, then submerge myself under the water, and spend several minutes in deep thought. When I come up, the air hits my face, and I rest for a moment, letting my body reacclimate to breathing through my lungs. Janin's still scrubbing at stains when I rejoin him in what passes for the living quarters. He holds the robes up, and sure enough, the fabric is nearly pristine. Most humans would stop now and call it clean, but Janin gets back to work until there's nary a sign of imperfection. "Your parents raised you well," I say. He laughs, and we chat about our favorite vedalken customs, and time just slips away from us. But as the light from outside starts to change, Janin's posture does, too. "B.H. should be back by now," he says. "It's getting dark." The way he said "dark" definitely made it sound like something undesirable. "We should go check the workshop. He's got an obsession with that place." So, we venture off a couple streets over, where the machination that is the Thinktank doubles in size and complexity. The mizzium is so dense here, I can feel it in my teeth. We enter through a big brass hatch, and inside, hundreds of tinkerers gather, showing off their inventions. A swarm of ratchet faeries cuts in front of us, each carrying a gleaming bolt. Sparks fly from all directions. Captive elementals peer out from a collection of glass globes. A crowd forms around a woman who claims she's able to conjure rifts from tainted electrical magic. I stop and watch, safety violations ticking away in my head. 
She's broken twenty-eight laws in the three minutes I've watched her. Purple electricity gathers in the glass retaining bell of her invention and then blazes down a long rod. A warbling hum fills my ears, and sure enough, a small rift opens in front of her, so dark, it hurts my eyes. "She's going to hurt someone with that," I say to Janin. He just shrugs and says, "Probably." "But shouldn't we—" "We shouldn't. Come on. Stick close." But the crowd is thick. Too thick. I start to feel queasy and need to calm myself. I make a run for the exit, and Janin calls after me, but I need silence like I need air. #emph[Quiet mind, quiet steps.] The streets are better, wide and open, and I'm able to breathe again. A long, thin shadow falls on the ground beside me. I think that Janin's found me, but when I look up, I see a vedalken. He comes closer, and I try to smile through my nervousness, but then he's lunging at me. He swipes at the strap of one of my satchels. It comes free, and then he's running off with my precious reference texts. I can't imagine how disappointed Tagan will be with me if I return to New Prahv without them, so I give chase, running nearly the whole length of the Thinktank before I lose him in a tangle of brass piping. Exhausted, I take a moment to catch my breath, then realize I'll need help to get those books back. Slowly, steadily, I climb over the piping, venturing into non-disputed territory, where Azorius law is undeniable. Three arresters approach me, and I breathe a sigh of relief when I see them. From the crinkle on their brows, I suspect they're not quite as happy to see me. "You there," one of the arresters says to me. "What's your purpose here?" My purpose? "I'm sorry~I was seeking you out for—" "What's your name? Do you live around here?" The questions keep coming, and I'm stunned by their brusque demeanor. The arresters I've encountered at New Prahv are nothing but pleasant. 
"We've got reports of a mugger who's been wreaking havoc around here," he says, and finally, I think we're getting somewhere, but then he says, "You fit the description. Tall. Blue. Bald." #figure(image("006_The Ascension of Reza/05.jpg", width: 100%), caption: [High Alert | Art by: Daarken], supplement: none, numbering: none) "So basically vedalken?" I say. "That could be anyone!" "He was last seen with a satchel~just like that one. Let's have a look, shall we? What's inside?" "My personal property!" I know there are laws to protect me, but all that knowledge drains out of my head when a sense of raw vulnerability overcomes me. I fight back against those feelings, steadying my logic and my nerves. "I'm Rezajaelis, lawmage at New Prahv. I had an accident with my griffin and had the extreme misfortune of being stranded in the Thinktank where I was mugged by some hooligan, and now I'm trying to recover my lawful property, so I can return home. I'd hoped for your assistance, but you've done nothing but harass me from the moment you saw me. Now, let me get #emph[your] names, so that I can pass them along to my superiors as soon as I'm back at the Jelenn Column complex." The arresters' body language changes immediately. They look me over once, and one of them starts to speak, but then a blood-curdling scream comes from down the street. Two of the arresters take off in response, and one remains. "Sorry for bothering you," she says. "If you'll just present your identity rune, we'll wrap this up, and you'll be free to go." "Free to go!" I say, reaching into my satchel for my identification. "Aren't you going to help me find who did this?" "If it happened in the Thinktank, I'm afraid we have no jurisdiction there." I growl as I continue to fish around in my satchel for my identity rune, but then slowly realize that it was packed in my other satchel. My eyes lock with the arrester's. "Problem?" she asks, her posture shifting back to the offensive. "No. No problem," I mutter. 
That new law I'd put in the sky enters my mind. Failure to produce proper identification will result in detention for an indefinite amount of time—as long as it takes for some overworked public servant to determine if I am who I say I am~In other words, I'd be sitting in an Azorius jail for a long, long time. I couldn't let an arrest tarnish my reputation at New Prahv. It'd be like letting everything I've worked so hard for wash right into an open sewer. My hand touches the vial of cleansing oils Janin had given me. I retrieve it from the satchel, then throw it down at the arrester's feet. Glass breaks, and a caustic odor fills the air. The arrester begins coughing and wheezing, and then I'm running, running. The arrester hails her partners, and then they're all after me, eyes bleary and red from the oil, snot trickling from their noses like busted faucets. It slows them down, but not by much. At every street, I keep looking for nooks and crannies that will take me back into the relative safety of the Thinktank, trying to ignore the fact that I've made things a million times worse for myself. There's no escape. I'd have to climb back inside, and I couldn't get up fast enough. I'm cornered, caught at the end of an alleyway. I turn, watching my pursuers as they close in. They stop in their tracks as an ominous blue light cuts through the steam. Their jaws drop. I turn and see it too, an unwieldy flying contraption that looks like it's held together with a combination of a little bit of piping tape and a whole lot of sheer will. Hendrik peeps his head out. "Come on, Reezemeister," he says, motioning to the back of the vehicle with his thumb. Janin leans out to give me a hand up. Then something familiar strikes me—the vehicle consists of sleek white metal interspersed with blue glass domes. I squint harder and see that dozens of Azorius Senate emblems have been filed away. The rune magic has been tampered with and now glows purple, but the truth is undeniable. 
My honorable savior is not so honorable after all. #figure(image("006_The Ascension of Reza/06.jpg", width: 100%), caption: [Deploy | Art by: <NAME>], supplement: none, numbering: none) "It's you!" I say to Hendrik. "You're the 'mad genius' who's been shooting down thopters! I almost died because of you!" "Yeah, sorry about that. Not the thopter part, just the part where you fell out of the sky. Now get in before these rule sniffers lay some spells on us." "This is stolen property!" I scream. I can't. I can't. I look back at the arresters, gaining steadily on me now. Violations keep stacking up in my mind:   #emph[Azorius Law 2795-V, Non-compliance with arresters] ~ #emph[Azorius Law 3343-J, Traveling in a stolen vehicle] ~ #emph[Azorius Law—]   "You've got about three seconds before those rule sniffers are here," Hendrik warns. My survival instincts finally kick in. I grab Janin's hand and lunge for my life. Hendrik flies up and over the heads of the arresters, and soon they're nothing but specks below us. "Where are we headed?" Hendrik asks. "Back to New Prahv? Can't take you all the way there, of course, but I can get you close enough to walk." I ignore his question, too anxious to deal with it right now. "Why?" I ask him. "Why would anyone want to live like this? Breaking laws. Shooting down thopters?" "Whose laws? And whose thopters?" Hendrik asks. "I get your reluctance to trust the Azorius," I say, remembering the predatory look in those arresters' eyes. "But wouldn't the Thinktank be better off if you conceded to oversight? We could make the streets safer, establish utility services so you wouldn't have to rely on man-eating wurms to get rid of your garbage. And you'd be able to petition the Izzet League for real funding for your workshop." Hendrik shakes his head. "We'll manage on our own. We always have. It might not be perfect, but it's home." "At least promise me no more shooting down thopters," I say. 
"Sure, if you can get Azorius to quit sending them to spy on us," Hendrik says. The inside of the thopter-vehicle goes quiet with awkward tension, but it's soon broken by a deep warbling noise that rattles the bolts on this flying heap. The sound rises in pitch, and then lightning flashes, turning the entire sky bright purple. I look down and see an enormous rift throbbing, blacker than black. It sizzles, blue-white light twisting along its mouth, right where the Thinktank workshop used to be. "Hendrik!" I shout. "The Thinktank is under siege by some~some kind of electrical elemental." More lightning lashes out of the rift as the elemental starts to take on a distinct form, looking less like a collection of electricity and more a monstrous beast—arms, claws, teeth. It swings at a building, but its touch is so hot, it melts everything in its path. Static saturates the air. If I had any hair on my body, it'd be standing straight right now. Azorius archons are on alert and sweep toward the Thinktank, stopping just short of attacking. They'll have to wait until the elemental crosses over the boundary before they can attempt to subdue it, but the whole of the Thinktank might be destroyed before that can happen. "Please tell me you have some invention powerful enough to fend that thing off," I say to Hendrik. "We do," Hendrik says. "A manifestation matrix converter with a dually optimized cascade link." "Oh, thank Azor's infinite foresight!" I exclaim. "But~" Hendrik continues. #emph[Buts] are never good in these sorts of situations. "~it's down there, sitting under about ten feet of molten mizzium." The citizens of the Thinktank are doing what they can to defend themselves, but it's a losing battle. Help is #emph[right there] ~if it weren't for the Thinktank Jurisdictional Fallacy, hundreds of lives could be saved. 
But if the problem was impossible to solve in the quiet sanctum of the Jelenn Column complex with every resource I could want at my fingertips, how could I possibly hope to solve it now—in the company of these lawless people, in emergency conditions, with about forty-five seconds left before the elemental notices us and swats us out of the sky? I sit bolt upright and start conjuring. I realize I do have something all those other lawmages didn't. I've seen the Thinktank. I've talked to its residents. And now, with this elemental wreaking havoc, I can call upon emergency law to make a declaration. I may not have the authority to resolve the jurisdiction dispute—that part of the fallacy is unsolvable—but if I grant the Thinktank sovereignty, making it its own little city within the city, they'd have the authority to contract with other entities, namely the growing Azorius army standing at the ready. "How'd you like to be Grand Arbiter of the Thinktank?" I say to Hendrik. He opens his mouth, but there's no time to answer, so I continue. "All Thinktank citizens in favor of declaring Hendrik~what's your surname?" "Azmerak," Hendrik says. "~declaring Hendrik Azmerak as Grand Arbiter #emph[pro tem] , raise their hands." I nudge Janin, and his hand shoots straight up. "Against?" I keep conjuring as I talk, forming the law rune that will hopefully save the day. I explain my law to Hendrik and Janin. It's efficient at seven sentences long. It's not convoluted. There are no double negatives, footnotes, or extensive qualifications. It's by no means perfect, but it is perfectly clear. Instead of trying to solve the problem of five adversaries squabbling over a stretch of land, we'll have five neighbors, helping to protect each other's best interests. "By the prerogative writ of emergency, and by a unanimous vote, I hereby declare Hendrik Azmerak Grand Arbiter #emph[pro tem] of the Thinktank Enclave. As the leader of your people, do I have your permission to put the following law into effect?" 
He looks the law rune over, taking his time. It's wonderful he's being so thorough, the sign of a competent leader, but we've only got seconds left to act. #figure(image("006_The Ascension of Reza/07.jpg", width: 100%), caption: [Deputy of Detention | Art by: G-Host Lee], supplement: none, numbering: none) "Yes!" Hendrik finally says, and then I release the rune, and it shoots up into the sky, shining more brightly than any law rune ought to. Maybe I'd overdone the magic, but I couldn't risk it going unnoticed or unread. The request for help is immediately acted upon, and archons and knights file over the borders, slashing their swords and staffs through the elemental. The bolts of electricity sever from the blows, but seconds later, they regenerate, becoming thicker and brighter. The elemental shrieks, then strikes three archons from the sky. But reinforcements have finally arrived, two dozen nullmages on griffinback. They work together to cast a blue dome of magic over the elemental, and in a coordinated effort, they tighten it down, bit by bit, until it is subdued. The static slowly fades from the air, as does any remaining tension between Hendrik, Janin, and myself. There is no formal writ that binds us together, but the connection between us goes deeper than mere acquaintances. "You did good, Reza," Hendrik says, slapping me on my back. "Thanks, B.H.," I say, trying the nickname out. Nope, nope, nope. It just doesn't feel right in my mouth—grating against my palate like a mouthful of sand, but that doesn't mean I feel Hendrik is any less of a friend to me. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) "You're sure you want to go through with this?" my mentor asks, looking at the draft of my proposed law: fifty-seven pages of concessions and sanctions and easements. I'd solved the Fallacy. For real this time. It took months of negotiations between the Thinktank Enclave and the guilds, but it's done. "It's unprecedented. It's reckless. 
And I'm sure Baan won't be happy." "It's what's right and just. The Thinktank deserves more than a temporary writ. It wouldn't be fair to offer them that taste of freedom, only to take it back." Maybe the Thinktank was just a nuisance in the margins to Azorius before, but as it turns out, people start paying attention when an eighty-foot-tall electrical elemental threatens to consume several city blocks. I expect Tagan to reprimand me or lecture me on how putting this law in front of the Senate will ruin my career, but her tail just swishes back and forth. Back and forth. "I don't think I can be your mentor anymore," Tagan finally says. "What? Why?" I ask, eager to do whatever I can to stay under her supervision. I plead my case. "You have to believe in me. I know I can make a difference. I've been so focused on burying myself in the laws of the past, but I'm just now learning how to reach out to citizens so we can create new laws that are relevant to the current needs of Ravnica. You can't give up on me now!" She smiles. "I'm not giving up on you. I can't be your mentor anymore, because I think it's time you become a mentor yourself. I believe in you, but what you want, it's going to be a hard sell to people like Baan. But if you can start changing the minds of those who'll come after us, maybe we can get more people on our side. And who knows?" She leaves the thought open, letting the silence stir. Getting four guilds and an enclave to agree on a small plot of land was an enormous amount of work, but it pales in comparison to the larger issues plaguing Ravnica. But with justice on our side, #emph[true justice] ~who knows? Maybe one of the lawmages I'll mentor will be the author of the next Guildpact.
https://github.com/7sDream/fonts-and-layout-zhCN
https://raw.githubusercontent.com/7sDream/fonts-and-layout-zhCN/master/chapters/06-features-2/anchor/mark-to-lig.typ
typst
Other
#import "/lib/draw.typ": * #import "/template/lang.typ": thai #let start = (0, 0) #let end = (500, 350) #let f0nt = text.with( font: ("F0nt",), weight: 500, lang: "tha", region: "TH", script: "thai", ) #let graph = with-unit((ux, uy) => { // mesh(start, end, (50, 50)) txt(f0nt[\u{0E19}\u{0E15}], (20, 20), anchor: "lb", size: 420 * ux) point((200, 267), radius: 5, color: gray) shape((180, 256), (191, 326), (228, 328), (218, 257), closed: true, stroke: 1 * ux + theme.main) txt([anchor of component #thai[\u{0E19}]], (150, 290), anchor: "rc", size: 20 * ux) bezier((150, 290), (170, 290), none, (185, 279), stroke: 2 * ux + theme.main) arrow-head((185, 279), 8, theta: -40deg, point-at-center: true) point((422, 273), radius: 5, color: gray) shape((400, 255), (410, 327), (448, 329), (438, 257), closed: true, stroke: 1 * ux + theme.main) txt([anchor of component #thai[\u{0E15}]], (380, 297), anchor: "rc", size: 20 * ux) bezier((380, 297), (400, 297), none, (410, 285), stroke: 2 * ux + theme.main) arrow-head((410, 285), 8, theta: -40deg, point-at-center: true) }) #canvas(end, width: 50%, graph)
https://github.com/rousbound/typst-compiler
https://raw.githubusercontent.com/rousbound/typst-compiler/main/tests/test.typ
typst
Apache License 2.0
#set text(font:"New Computer Modern") Hello World!
https://github.com/LordJatonyas/B1-Project
https://raw.githubusercontent.com/LordJatonyas/B1-Project/main/report/report.typ
typst
#import "tablex.typ": * // document setting #set document(title: "B1 Project: Optimisation", author: "<NAME>") // Require 20mm margins #set page(paper: "a4", numbering: "1", margin: (x: 20mm, y: 20mm)) // require Arial, 11pt font #set text(lang: "en", region: "UK", size: 11pt, hyphenate: false, font: ( "Arial", "Twitter Color Emoji"), ) #set par(leading: 1em, justify: true) #set enum(indent: 1em) #set table(inset: 8pt) #set heading(numbering: "1.1.") #set math.mat(delim: "[", column-gap: 0.5em) #show heading: it => { if it.numbering == none { block(inset: (y: 1em))[ #text(font: ("Arial"), weight: "bold")[#it.body] ] } else { block(inset: (y: 0.4em))[ #text(font: ("Arial"), weight: "bold")[#counter(heading).display() #it.body] ] } } #show math.equation: set block(spacing: 1em) #show par: set block(spacing: 1.5em) #show raw.where(block: false): box.with( fill: luma(240), inset: (x: 3pt, y: 0pt), outset: (y: 3pt), radius: 2pt, ) #show raw.where(block: true): block.with( fill: luma(240), inset: 10pt, radius: 6pt, ) #show ref: it => { let el = it.element if el != none and el.func() == heading { numbering( el.numbering, ..counter(heading).at(el.location()) ) } else { it } } #show link: it => { underline(offset: 3pt, text(blue, it.body)) } #show emph: it => { text(font: "Arial", style: "italic", it.body) } #show figure.where(kind : table): set figure.caption(position: top) #let derive(body) = { block(fill:rgb(250, 250, 250), width: 100%, inset: 14pt, radius: 4pt, stroke: rgb(50, 50, 50), body) } // Cover Page (Not counted) #set page(numbering: none) // title #align(center, text(size: 24pt, font: ("Arial"))[ *B1 Project: Optimisation* ]) #align(center, [<NAME>]) // content #align(left, text(17pt)[*Introduction*]) This is the report for B1 Engineering Computation - Project B: Optimisation for regression and classification models. 
The project investigates how to apply different optimisation methods for learning optimal parameters of a model that can predict a value of interest for a given input data point. A total of 6 tasks were given and completed using the MATLAB programming language with the "Statistics and Machine Learning Toolbox" (version 23.2) and the "Optimization Toolbox" (version 23.2). Results shown in this report are generated using ```matlab rng(12345)``` unless otherwise stated. #outline(title: "Content", depth: 1, indent: auto) #pagebreak() // Start counting page numbers #set page(numbering: "1") #counter(page).update(1) // Task 1 = Task 1: Linear Regression via Analytical Solution to MSE <Analytical_Regression> == Optimal Parameters <task1-optimal_param> With 1000 training samples, the resulting $bold(w)$ and $b$ are sensible given that they are roughly equal to the coefficients of the equation within the data generating function: $upright(y) = 1.5 + 0.6x#super[(1)] + 0.35x#super[(2)]$ #grid( columns: (1fr, 1fr), gutter: 10pt, align(horizon)[ #figure( tablex( columns: 4, inset: 4pt, align: center, auto-lines: false, [], [$b$], [$w^((1))$], [$w^((2))$], hlinex(), [Learnt], [1.4862], [0.6083], [0.3435], [Actual], [1.5], [0.6], [0.35], ), kind: table, caption: [Training with 1000 samples], ) ], align(horizon)[ #figure( tablex( columns: 3, inset: 4pt, align: center, auto-lines: false, [], [Training], [Test], hlinex(), [MSE], [0.0478], [0.0495], ), kind: table, caption: [MSE for 1000 training samples], ) ] ) == Changing the Training Sample Size <task1-thousand_to_ten> This set of parameters differs from the one obtained in @task1-optimal_param. With too small a training sample size, there is insufficient data to accurately capture the coefficients for the linear regression. 
#grid( columns: (1fr, 1fr), gutter: 10pt, align(horizon)[ #figure( tablex( columns: 4, inset: 4pt, align: center, auto-lines: false, [], [$b$], [$w^((1))$], [$w^((2))$], hlinex(), [Learnt], [1.1395], [0.6049], [0.5678], [Actual], [1.5], [0.6], [0.35], ), kind: table, caption: [Training with 10 samples], ) ], align(horizon)[ #figure( tablex( columns: 3, inset: 4pt, align: center, auto-lines: false, [], [Training], [Test], hlinex(), [MSE], [0.0548], [0.0927], ), kind: table, caption: [MSE for 10 training samples], ) ] ) == Experimentation with Training Sample Sizes and RNG Seeds <task1-experiment> Prior to this, everything was done with ```matlab rng(12345)```. Now, we will vary seed values and training sample sizes to observe how the number of training samples affects `Training MSE` and `Test MSE`.\ #figure( tablex( columns: (auto, 17em, auto), inset: 5pt, align: center, auto-lines: false, hlinex(), [Training Samples], [Training MSE], [Test MSE], hlinex(stroke: 0.3pt), [4], [0.0083 $plus.minus$ 0.0093], [0.2205 $plus.minus$ 0.2989], [10], [0.0355 $plus.minus$ 0.0188], [0.0808 $plus.minus$ 0.0571], [20], [0.0414 $plus.minus$ 0.0147], [0.0586 $plus.minus$ 0.0068], [100], [0.0493 $plus.minus$ 0.0078], [0.0518 $plus.minus$ 0.0012], [1000], [0.0504 $plus.minus$ 0.0028], [0.0503 $plus.minus$ 0.0004], [10000], [0.0499 $plus.minus$ 0.0007], [0.0501 $plus.minus$ 0.0003], hlinex(), ), kind: table, caption: [Experimentation MSE Means and Standard Deviations], ) As the training sample size increases, both `Training MSE` and `Test MSE` converge to the variance of the regression target noise, as set in the data generating function by ```matlab r_noise_var = 0.05```. 
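This convergence can be reproduced with a short sketch. The code below is in Python rather than the report's MATLAB, purely for illustration; the uniform input distribution on $[0, 1]$ is an assumption, since the report specifies only the target equation and ```matlab r_noise_var = 0.05```. The parameters are fitted with the same closed-form least-squares solution used in Task 1.

```python
import numpy as np

rng = np.random.default_rng(0)  # any seed; the report uses rng(12345)

def make_data(n, noise_var=0.05):
    # y = 1.5 + 0.6*x1 + 0.35*x2 + eps, matching the report's generator;
    # the uniform input distribution is an assumption.
    X = rng.uniform(0, 1, size=(n, 2))
    y = 1.5 + 0.6 * X[:, 0] + 0.35 * X[:, 1] + rng.normal(0, np.sqrt(noise_var), n)
    return X, y

def fit_analytic(X, y):
    # closed-form least squares: augment with a bias column and solve
    A = np.hstack([np.ones((len(X), 1)), X])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta  # [b, w1, w2]

def mse(theta, X, y):
    A = np.hstack([np.ones((len(X), 1)), X])
    return np.mean((A @ theta - y) ** 2)

X_test, y_test = make_data(20000)
for n in (10, 100, 10000):
    X_tr, y_tr = make_data(n)
    th = fit_analytic(X_tr, y_tr)
    print(n, round(mse(th, X_tr, y_tr), 3), round(mse(th, X_test, y_test), 3))
```

With large $n$, both the recovered coefficients and the two MSE values settle near their true values, i.e. the MSEs approach the irreducible noise variance of 0.05.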
To understand this, we firstly introduce the $epsilon$ term as random error independent of $bold(upright(X))$ with mean of zero $upright(E) [epsilon] = 0$ such that $bold(upright(y))$ can be expressed as #align(center)[ $bold(upright(y)) = f(bold(upright(X))) + epsilon$ ] Now we can re-express the MSE in terms of an expectation and a variance #derive[ $upright(M S E) = upright(E)[bold((upright(y - accent(y,hat))))^2] &= upright(E)[(f(bold(upright(X))) + epsilon - accent(f,hat)(bold(upright(X))))^2] \ &= upright(E) [f(bold(upright(X)))^2 + 2epsilon f(bold(upright(X))) - 2 f(bold(upright(X))) accent(f,hat)(bold(upright(X))) + epsilon^2 - 2accent(f,hat)(bold(upright(X))) epsilon + accent(f,hat)(bold(upright(X)))^2] \ &= upright(E) [(f(bold(upright(X))) - accent(f,hat)(bold(upright(X))))^2] + cancel(2upright(E)[epsilon]upright(E)[f(bold(upright(X)))], angle: #80deg) + upright(E)[epsilon^2] - cancel(2upright(E)[epsilon]upright(E)[accent(f,hat)(bold(upright(X)))], angle: #80deg) \ &= upright(E) [(f(bold(upright(X))) - accent(f,hat)(bold(upright(X))))^2] + upright(E)[epsilon^2] = upright(E) [(f(bold(upright(X))) - accent(f,hat)(bold(upright(X))))^2] + (upright(E)[epsilon^2] - upright(E)[epsilon]^2) \ &= upright(E) [(f(bold(upright(X))) - accent(f,hat)(bold(upright(X))))^2] + upright(V a r)(epsilon)$ ] /* #derive[ $upright(M S E) = upright(E)[bold((upright(y - accent(y,hat))))^2] &= upright(E)[(f(bold(upright(X))) + epsilon - accent(f,hat)(bold(upright(X))))^2] = upright(E) [(f(bold(upright(X))) - accent(f,hat)(bold(upright(X))))^2] + upright(E)[epsilon^2]\ &= upright(E) [(f(bold(upright(X))) - accent(f,hat)(bold(upright(X))))^2] + (upright(E)[epsilon^2] - upright(E)[epsilon]^2) = upright(E) [(f(bold(upright(X))) - accent(f,hat)(bold(upright(X))))^2] + upright(V a r)(epsilon)$ ]*/ Given this re-expression, we observe that as the error between prediction and target goes to zero, there remains an irreducible error source in the form of the regression target's variance, 
explaining the convergence to 0.05 in agreement with ```matlab r_noise_var = 0.05```.

// Task 2
= Task 2: Linear Regression via Gradient Descent <Gradient_Descent_Linear_Regression>
== Optimal Learning Rate 1 <task2-optimal_lr_1>
To find the optimal learning rate $lambda$, we keep ```matlab n_iters = 1000``` and perform the experiment for ```matlab lambdas = [0.00001 0.0001 0.001 0.01 0.1 1]```, picking the one with the lowest corresponding `Training MSE` and `Validation MSE` $==>$ ```matlab lambda = 0.1```.
#figure(
  tablex(
    columns: (auto, 17em, auto),
    inset: 4pt,
    align: center,
    auto-lines: false,
    hlinex(),
    [$lambda$], [Training MSE], [Validation MSE],
    hlinex(stroke: 0.3pt),
    [0.00001], [6.5561], [6.8993],
    [0.0001],[1.2430],[1.2485],
    [0.001], [0.1428], [0.1279],
    [0.01], [0.0472], [0.0532],
    vlinex(start: 5, end: 6, stroke: red), hlinex(stroke: red), vlinex(start: 5, end: 6, stroke: red),
    [0.1], [0.0465], [0.0528],
    hlinex(stroke: red),
    [1], [NaN], [NaN],
    hlinex(),
  ),
  kind: table,
  caption: [Training and Validation MSEs for Different Learning Rates at 1000 iterations],
)
== Test Set <task2-testing>
With the optimal parameters derived via ```matlab lambda = 0.1```, we calculate the `Test MSE`. There is a slight difference between the validation and test MSE values, which is simply a consequence of the order in which the ```matlab rng(12345)``` random number stream is consumed. The MSE values so far were obtained with the test set being generated after the train-val set; the results would differ if the ordering were swapped.
#grid(
  columns: (1fr, 1fr),
  gutter: 10pt,
  align(horizon)[
    #figure(
      tablex(
        columns: 3,
        inset: 4pt,
        align: center,
        auto-lines: false,
        [], [Validation], [Test],
        hlinex(),
        [MSE], [0.0528], [0.0496],
      ),
      kind: table,
      caption: [200 validation and 20000 test samples],
    )
  ],
  align(horizon)[
    #figure(
      tablex(
        columns: 3,
        inset: 4pt,
        align: center,
        auto-lines: false,
        [], [Validation], [Test],
        hlinex(),
        [MSE], [0.0486], [0.0497],
      ),
      kind: table,
      caption: [Reversed order of data generation],
    )
  ]
)
== Optimal Learning Rate 2 <task2-optimal_lr_2>
Changing from ```matlab iters_total = 1000``` $arrow.r$ ```matlab iters_total = 10000```, we get ```matlab lambda = 0.01```.
#figure(
  tablex(
    columns: (auto, 17em, auto),
    inset: 4pt,
    align: center,
    auto-lines: false,
    hlinex(),
    [$lambda$], [Training MSE], [Validation MSE],
    hlinex(stroke: 0.3pt),
    [0.00001], [1.2439], [1.2495],
    [0.0001],[0.1428],[0.1279],
    [0.001], [0.0473], [0.0532],
    vlinex(start: 4, end: 5, stroke: red), hlinex(stroke: red), vlinex(start: 4, end: 5, stroke: red),
    [0.01], [0.0465], [0.0528],
    hlinex(stroke: red),
    [0.1], [0.0465], [0.0528],
    [1], [NaN], [NaN],
    hlinex(),
  ),
  kind: table,
  caption: [Training and Validation MSEs for Different Learning Rates at 10000 iterations],
)
== Relationship between $bold(n#sub[iters])$ and $bold(lambda)$ <task2-n_and_lambda>
Based on the experiments in @task2-optimal_lr_1 and @task2-optimal_lr_2, we can see that as $n#sub[iters]$ decreases, the smallest learning rate $lambda$ that still reaches the optimum increases. This is reasonable, since a suitably larger $lambda$ leads to faster convergence, thereby reducing the number of iterations necessary. Between the two, it is $n#sub[iters]$ that affects runtime, because it dictates the number of loop iterations in the learning function, while $lambda$ appears in only one multiplication step within each loop. Therefore, in practice, it is preferable to use a small $n#sub[iters]$ coupled with the optimal $lambda$ so that runtime is kept short.
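The gradient-descent loop discussed above can be sketched outside MATLAB as well. Below is a minimal NumPy version — an illustration, not the report's actual code: the true parameters (1.5, 0.6, 0.35) and the noise variance 0.05 are taken from the Task 1 tables, while the uniform feature distribution is an assumption.

```python
import numpy as np

def gd_linear_regression(X, y, lam=0.1, n_iters=1000):
    """Batch gradient descent on the MSE loss of a linear model."""
    n, d = X.shape
    Xh = np.hstack([np.ones((n, 1)), X])         # bias-augmented design matrix
    theta = np.zeros(d + 1)                      # theta = [b, w1, ..., wd]
    for _ in range(n_iters):
        grad = 2 / n * Xh.T @ (Xh @ theta - y)   # gradient of the mean squared error
        theta -= lam * grad
    return theta

# Synthetic data mimicking Task 1's target y = 1.5 + 0.6*x1 + 0.35*x2 + noise
# (parameters and noise variance from the report; feature distribution assumed)
rng = np.random.default_rng(12345)
X = rng.uniform(0.0, 1.0, size=(1000, 2))
y = 1.5 + X @ np.array([0.6, 0.35]) + rng.normal(0.0, np.sqrt(0.05), size=1000)
theta = gd_linear_regression(X, y, lam=0.1, n_iters=1000)
```

With `lam=0.1` the loop converges well within 1000 iterations; a smaller learning rate would need proportionally more iterations, matching the trade-off described above.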
== Comparison with Task 1 <task2-comparison> Running the analytical solution from Task 1 on the Task 2 training data produces the same optimal parameters, which is reasonable since Gradient Descent should converge to the exact solution. #figure( tablex( columns: 4, inset: 4pt, align: center, auto-lines: false, [], [$b$], [$w^((1))$], [$w^((2))$], hlinex(), [Task 1], [1.4899], [0.6089], [0.3382], [Task 2], [1.4899], [0.6089], [0.3382], ), kind: table, caption: [Comparison of optimal parameters between Task 1 and Task 2], ) // Task 3 = Task 3: Logistic Regression using Gradient Descent <Gradient_Descent_Logistic_Regression> == Derivation of Log-Loss Gradient <task3-derivation> #derive[ $ nabla#sub[$bold(theta)$] cal(L)#sub[C] &= nabla#sub[$bold(theta)$] {-y log(sigma (bold(upright(accent(x, hat)))^T bold(theta))) - (1 - y) log(1 - sigma (bold(upright(accent(x, hat)))^T bold(theta)))} \ &= nabla#sub[$bold(theta)$] {-y log(1/(1 + exp(-bold(upright(accent(x, hat))^T bold(theta))))) - (1 - y) log(1 - 1/(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))))}\ &= nabla#sub[$bold(theta)$] {y log(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))) - (1 - y) log(exp(-bold(upright(accent(x, hat))^T bold(theta)))/(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))))}\ &= nabla#sub[$bold(theta)$] {y log(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))) + (1 - y) log(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))) - (1- y) log(exp(-bold(upright(accent(x, hat))^T bold(theta))))}\ $ $#h(34.5pt) &= nabla#sub[$bold(theta)$] {log(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))) - (1 - y) log(exp(-bold(upright(accent(x, hat))^T bold(theta))))}\ &= nabla#sub[$bold(theta)$] {log(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))) + (1 - y) bold(upright(accent(x, hat))^T bold(theta))}\ &= -exp(-bold(upright(accent(x, hat))^T bold(theta)))/(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))) bold(upright(accent(x, hat))) + (1 - y) bold(upright(accent(x, 
hat)))\ &= ((-1 - exp(-bold(upright(accent(x, hat))^T bold(theta))))/(1 + exp(-bold(upright(accent(x, hat))^T bold(theta)))) + 1/(1 + exp(-bold(upright(accent(x, hat))^T bold(theta))))) bold(upright(accent(x, hat))) + (1 - y) bold(upright(accent(x, hat)))\ &= (-1 + accent(y, macron)) bold(upright(accent(x, hat))) + (1 - y) bold(upright(accent(x, hat))) = (accent(y, macron) - y) bold(upright(accent(x, hat))) #h(200pt) qed $ ] == Optimal Learning Rate 1 <task3-optimal_lr_1> To find the optimal learning rate $lambda$, we keep ```matlab n_iters = 1000``` and perform the experiment for ```matlab lambdas = [0.00001 0.0001 0.001 0.01 0.1 1]```, picking the one with lowest corresponding `Training Error` and `Validation Error` $==>$ ```matlab lambda = 1```. #figure( tablex( columns: (auto, 17em, auto), inset: 4pt, align: center, auto-lines: false, hlinex(), [$lambda$], [Training $e$], [Validation $e$], hlinex(stroke: 0.3pt), [0.00001], [0.5000], [0.4450], [0.0001],[0.4988],[0.4450], [0.001], [0.4537], [0.4400], [0.01], [0.0963], [0.1050], [0.1], [0.0275], [0.0350], vlinex(start: 6, end: 7, stroke: red), hlinex(stroke: red), vlinex(start: 6, end: 7, stroke: red), [1], [0.0200], [0.0300], hlinex(stroke: red), ), kind: table, caption: [Training and Validation $e$ for Different Learning Rates at 1000 iterations], ) == Optimal Learning Rate 2 <task3-optimal_lr_2> With ```matlab n_iters = 1000``` and ```matlab lambda = 1```, the loss plateaus at the end of training, suggesting convergence of mean log-loss. 
#figure(
  image("task3-mean_loglosses.png",
  width: 60%,
  ),
  caption: [Mean Log-loss for ```matlab lambda = 1``` against number of iterations],
)
== Test Set <task3-test>
#figure(
  tablex(
    columns: 3,
    inset: 4pt,
    align: center,
    auto-lines: false,
    [], [Validation], [Test],
    hlinex(),
    [$e$], [0.0300], [0.0242],
  ),
  kind: table,
  caption: [Validation and Test $e$ with Optimal Parameters],
)
== Performance <task3-performance>
Using seeds from ```matlab rng(1)``` to ```matlab rng(20)```, we collect 20 classification error ratios for each dataset, over which we obtain their means and standard deviations.
#figure(
  tablex(
    columns: 4,
    inset: 5pt,
    align: center,
    auto-lines: false,
    hlinex(),
    [Training Samples], [Training Error], [Validation Error], [Test Error],
    hlinex(stroke: 0.3pt),
    vlinex(start: 1, end: 3, stroke: red), hlinex(stroke: red), vlinex(start: 1, end: 3, stroke: red),
    [10], [0.0000 $plus.minus$ 0.0000], [0.0500 $plus.minus$ 0.1500], [0.0600 $plus.minus$ 0.0248],
    [20], [0.0000 $plus.minus$ 0.0000], [0.0250 $plus.minus$ 0.0750], [0.0399 $plus.minus$ 0.0119],
    hlinex(stroke: red),
    [100], [0.0150 $plus.minus$ 0.0135], [0.0250 $plus.minus$ 0.0296], [0.0285 $plus.minus$ 0.0052],
    [1000], [0.0247 $plus.minus$ 0.0039], [0.0255 $plus.minus$ 0.0086], [0.0253 $plus.minus$ 0.0013],
    [10000], [0.0249 $plus.minus$ 0.0016], [0.0254 $plus.minus$ 0.0032], [0.0251 $plus.minus$ 0.0014],
    hlinex(),
  ),
  kind: table,
  caption: [Experimentation $e$ Means and Standard Deviations],
)
Over-fitting occurs when the error ratio on one dataset is much lower than on the other two. This is observed in the first two rows, where `Training Error` deviates greatly from both `Validation Error` and `Test Error`. In fact, `Training Error` reaches 0, meaning the model fits the training data perfectly. This over-fitting is likely due to the training sample size being too small (10 and 20 samples). Another observation from the table is that `Validation Error` and `Test Error` are very similar.
This suggests that the model's performance on the validation samples gives us a good indication of that for the test samples. = Task 4: Logistic Regression with Stochastic Gradient Descent <Stochastic_Gradient_Descent_Logistic_Regression> == Configuring Hyperparameters <task4-hyperparameters> Using seeds from ```matlab rng(1)``` to ```matlab rng(20)```, we collect 20 classification error ratios for each dataset, over which we obtain their means and standard deviations. From there, we pick the batch size that yields the lowest corresponding `Training Error` and `Validation Error` $==>$ ```matlab batch_size = 100``` #figure( tablex( columns: (auto, 17em, auto), inset: 5pt, align: center, auto-lines: false, hlinex(), [Batch Size], [Training Error], [Validation Error], hlinex(stroke: 0.3pt), [1], [0.0276 $plus.minus$ 0.0058], [0.0410 $plus.minus$ 0.0041], [10], [0.0218 $plus.minus$ 0.0025], [0.0370 $plus.minus$ 0.0029], [20], [0.0210 $plus.minus$ 0.0017], [0.0355 $plus.minus$ 0.0035], [50], [0.0209 $plus.minus$ 0.0015], [0.0345 $plus.minus$ 0.0035], vlinex(start: 5, end: 6, stroke: red), hlinex(stroke: red), vlinex(start: 5, end: 6, stroke: red), [100], [0.0207 $plus.minus$ 0.0009], [0.0328 $plus.minus$ 0.0029], hlinex(stroke: red), ), kind: table, caption: [Training and Validation $e$ Means and Standard Deviations], ) == Convergence <task4-convergence> With ```matlab n_iters = 1000```, ```matlab lambda = 1```, and ```matlab batch_size = 100```, the loss plateaus at the end of training, suggesting convergence of mean log-loss. #figure( image("task4-mean_loglosses.png", width: 60%, ), caption: [Mean Log-loss for ```matlab batch_size = 100``` and ```matlab lambda = 1``` against number of iterations], ) == Comparison with Task @Gradient_Descent_Logistic_Regression <task4-comparison> By zooming into Figure 2, we can see that the plot is not smooth. 
This "noise" exists because the actual training sample size is smaller than the full training dataset, which leads to greater variance in mean log-loss when checked against the full training dataset. == Test Set <task4-test> The resulting classification error ratios are very similar to those in @task3-test, with test error ratio outperforming validation error ratio by a similar margin given the same order of data generation. #figure( tablex( columns: (auto, 10em, auto), inset: 4pt, align: center, auto-lines: false, [], [Validation], [Test], hlinex(), [$e$], [0.0328 $plus.minus$ 0.0029], [0.0251 $plus.minus$ 0.0011], ), kind: table, caption: [Validation and Test Accuracy with Optimal Learning Rate and Batch Size], ) == Runtime and Memory Usage Differences <task4-performance_diff> The number of steps within each of the $n#sub[iters]$ loops directly affect model training runtime. Unlike in GD, SGD only considers a subset of the full training dataset. As such, the number of calculations in each loop is smaller, resulting in shorter runtime. However, to continually get batches of random elements within the full training dataset, the algorithm requires extra memory as large as the `batch_size`. The larger the `batch_size`, the slower the training and the larger the extra memory needed. For large databases in real-world applications, applying an SGD makes sense because it has the potential of greatly shortening model training runtime. To realise this potential, however, there must be sufficient RAM to store the batches. = Task 5: Optimizing SVM via Linear Programming <SVM_Linear_Programming> == Linear Programming implementation <task5-linear_prog_imp> To be compatible with MATLAB's ```matlab linprog``` solver, $theta$ and $xi$ can be combined into $psi$ where #align(center, text(size: 15pt)[ $bold(psi) = mat(xi_1, xi_2, xi_3, ... 
, xi_(n-1), xi_n, b, w^((1)), w^((2)))^T$
])
and since $bold(upright(f)^T psi) = sum_(i=1)^n xi_i$
#align(center, text(size: 15pt)[
  $bold(upright(f)) = mat(1, 1, 1, ... , 1, 1, 0, 0, 0)^T$
])
Another necessary part is to form $bold(upright(A)), bold(beta), bold(upright(A)#sub[eq]), bold(beta#sub[eq]), bold(upright(l b)), bold(upright(u b))$ such that they map to the original conditions of the optimisation equation. These then can be used in ```matlab linprog(f, A, b, Aeq, beq, lb, ub)```. Four of these are trivial to set up due to the lack of an equality condition, as well as there being only a lower bound for $bold(xi)$. Parameters $bold(upright(A))$ and $bold(beta)$ can be derived by first manipulating the original condition into
#align(center, [
  $-(bold(upright(accent(x, hat))_i^T)theta) y_i - xi_i <= -1 $
])
#align(center, [
```matlab
function theta_opt = train_SVM_linear_progr(X, y)
    f = [ones(length(y), 1); 0; 0; 0]; % [1 1 1 ... 1 1 0 0 0]^T with n + 3 elements
    b = -ones(length(y), 1); % [-1 -1 -1 ... -1 -1]^T with n elements
    A = -eye(length(y)); % Only the ith slack param appears in the ith inequality
    % Encode -y_i * x_i, assuming labels y_i in {-1, +1}: negate rows where
    % y_i == 1 and leave rows where y_i == -1 (since -(-1) * x_i = x_i)
    for i = 1:length(y)
        if y(i, 1) == 1
            X(i, :) = -X(i, :);
        end
    end
    A = [A X];
    Aeq = []; % No equality condition
    beq = []; % No equality condition
    lb = [zeros(length(y), 1); -inf; -inf; -inf]; % Lower bound applies to slack params only
    ub = inf(length(lb), 1); % No upper bound
    theta_opt = linprog(f, A, b, Aeq, beq, lb, ub);
    theta_opt = theta_opt(end-2:end, 1); % The last 3 elements are the optimal params
end
```
])
== Performance <task5-performance>
Compared to the `Test Error` in both the GD (0.0242) and SGD (0.0251 $plus.minus$ 0.0011) cases for logistic regression, the SVM method yields very similar results.
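For reference, the same linear program can be set up with SciPy's `linprog`. This is a sketch assuming labels $y_i in {-1, +1}$ and a bias-augmented design matrix, run on synthetic data rather than the report's dataset:

```python
import numpy as np
from scipy.optimize import linprog

def train_svm_linprog(X, y):
    """Soft-margin SVM as a linear program: minimise sum(xi)
    subject to -y_i * (xhat_i^T theta) - xi_i <= -1 and xi_i >= 0."""
    n, d = X.shape
    Xh = np.hstack([np.ones((n, 1)), X])          # augment with a bias column
    # Decision variables: [xi_1 .. xi_n, b, w_1 .. w_d]
    c = np.concatenate([np.ones(n), np.zeros(d + 1)])
    A_ub = np.hstack([-np.eye(n), -y[:, None] * Xh])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * n + [(None, None)] * (d + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[n:]                              # theta = [b, w_1 .. w_d]
```

On linearly separable data, all slack variables go to zero at the optimum, so every training point satisfies the unit-margin constraint and is classified correctly.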
#figure( tablex( columns: (auto, 5em, auto), inset: 4pt, align: center, auto-lines: false, [], [Train], [Test], hlinex(), [$e$], [0.0200], [0.0241], ), kind: table, caption: [Training and Test Accuracy with SVM Linear Programming], ) == Decision Boundary Derivation <task5-decision_boundary> Given the form $accent(y, macron) = b + w^((1))x^((1)) + w^((2))x^((2))$, making $x^((2))$ the subject of formula generates the following form: $x^((2)) = alpha x^((1)) + beta$ where $alpha = (-w^((1)) / w^((2)))$ and $beta = (accent(y, macron) - b) / w^((2))$. #figure( tablex( columns: (5em, auto, 10em, auto), inset: 4pt, align: center, auto-lines: false, hlinex(), [], [SVM ($accent(y, macron) = 0$)], [SGD LR ($accent(y, macron) = 0.5$)], [GD LR ($accent(y, macron) = 0.5$)], hlinex(stroke: 0.3pt), [$b$], [-9.6567], [-10.9964], [-10.5063], [$w^((1))$], [1.9256], [1.9981], [1.9365], [$w^((2))$], [5.4760], [6.5872], [6.2485], hlinex(stroke: 0.5pt), [$alpha$], [-0.3516], [-0.3033], [-0.3099], [$beta$], [1.7635], [1.7453], [1.7614], hlinex(), ), kind: table, caption: [Optimal Parameters and Resultant $alpha$ and $beta$ for SVM, SGD and GD Logistic Regression], ) The $alpha$ and $beta$ values are very similar in all 3 cases, and their graphs reflect the similarity too. 
#figure(
  image("task5-decision_boundaries.png",
  width: 60%,
  ),
  caption: [2D Plots of Decision Boundaries for SVM, SGD, and GD Logistic Regression],
)

= Task 6: Optimizing SVM via Gradient Descent and Hinge Loss <SVM_GD_Hinge_Loss>
== Hinge Loss Gradient Computation <task6-hinge_loss_grad>
#align(center, [```matlab
for i = 1:iters_total
    grad_loss = zeros(n_features, 1); % Initialise Hinge Loss Gradients
    hinge_losses = hinge_loss_per_sample(X_train, y_train, theta_curr);
    for j = 1:length(hinge_losses)
        if hinge_losses(j) > 0 % Hinge loss active (margin < 1), so the sub-gradient is non-zero
            grad_loss = grad_loss - y_train(j) * X_train(j, :)';
        end
    end
    theta_curr = theta_curr - learning_rate / length(y_train) * grad_loss;
end
```
])
== Configuring Hyperparameters <task6-hyperparameters>
To find the optimal learning rate $lambda$, we keep ```matlab n_iters = 10000``` and perform the experiment for ```matlab lambdas = [0.00001 0.0001 0.001 0.01 0.1 1 10]```, picking the one with the lowest corresponding `Training Error` and `Validation Error` $==>$ ```matlab lambda = 1```.
#figure(
  tablex(
    columns: (auto, 17em, auto),
    inset: 4pt,
    align: center,
    auto-lines: false,
    hlinex(),
    [$lambda$], [Training $e$], [Validation $e$],
    hlinex(stroke: 0.3pt),
    [0.00001], [0.5000], [0.4450],
    [0.0001],[0.4250],[0.4050],
    [0.001], [0.0438], [0.0400],
    [0.01], [0.0200], [0.0350],
    [0.1], [0.0175], [0.0300],
    vlinex(start: 6, end: 7, stroke: red), hlinex(stroke: red), vlinex(start: 6, end: 7, stroke: red),
    [1], [0.0163], [0.0300],
    hlinex(stroke: red),
    [10], [0.0163], [0.0300],
    hlinex(),
  ),
  kind: table,
  caption: [Training and Validation $e$ for Different Learning Rates at 10000 iterations],
)
== Average Loss <task6-ave_loss>
The optimal combination of ```matlab n_iters = 10000``` and ```matlab lambda = 1``` generates the following average loss behaviour against the number of iterations. We can see that the plot plateaus, suggesting convergence of the average loss.
#figure(
  image("task6-average_loss.png",
  width: 60%,
  ),
  caption: [Average Loss for ```matlab n_iters = 10000``` and ```matlab lambda = 1``` against number of iterations],
)
== Performance <task6-performance>
Compared with @task5-performance, the `Test Error` values are very similar, implying similar performance of the learnt optimal parameters in both techniques.
#grid(
  columns: (1fr, 1fr),
  gutter: 10pt,
  align(horizon)[
    #figure(
      tablex(
        columns: (auto, 5em, auto, auto),
        inset: 4pt,
        align: center,
        auto-lines: false,
        [], [Train], [Validation], [Test],
        hlinex(),
        [$e$], [0.0163], [0.0300], [0.0238],
      ),
      kind: table,
      caption: [Errors with SVM Hinge Loss GD],
    )
  ],
  align(horizon)[
    #figure(
      tablex(
        columns: 3,
        inset: 4pt,
        align: center,
        auto-lines: false,
        [], [Hinge Loss GD], [Linear Programming],
        hlinex(),
        [$e$], [0.0238], [0.0241],
      ),
      kind: table,
      caption: [Errors for Different SVM methods],
    )
  ]
)
== Comparison of Optimal Parameters <task6-compare_svm_params>
When compared with the Linear Programming method, we do not get exactly the same parameters, which explains the slight real-world performance difference between the two sets of optimal parameters.
#figure(
  tablex(
    columns: (auto, 8em, auto),
    inset: 4pt,
    align: center,
    auto-lines: false,
    hlinex(),
    [], [Hinge Loss GD], [Linear Programming],
    hlinex(stroke: 0.3pt),
    [$b$], [-10.0675], [-9.6567],
    [$w^((1))$], [2.0265], [1.9256],
    [$w^((2))$], [5.7141], [5.4760],
    hlinex(),
  ),
  kind: table,
  caption: [Optimal Parameters learnt from Different methods],
)
== Decision Boundary Derivation <task6-decision_boundary>
Referring to the altered form: $x^((2)) = alpha x^((1)) + beta$ where $alpha = (-w^((1)) / w^((2)))$ and $beta = (accent(y, macron) - b) / w^((2))$, since the comparison is only made between different SVM methods, $accent(y, macron) = 0$, allowing for $beta = (- b) / w^((2))$ and yielding the following $alpha$ and $beta$ values.
#figure(
  tablex(
    columns: (5em, auto, 10em),
    inset: 4pt,
    align: center,
    auto-lines: false,
    hlinex(),
    [], [Hinge Loss GD], [Linear Programming],
    hlinex(stroke: 0.3pt),
    [$alpha$], [-0.3546], [-0.3516],
    [$beta$], [1.7619], [1.7635],
    hlinex(),
  ),
  kind: table,
  caption: [Resultant $alpha$ and $beta$ for Hinge Loss GD and Linear Programming],
)
The $alpha$ and $beta$ values are very similar for both cases, and their graphs reflect this too.
#figure(
  image("task6-decision_boundaries.png",
  width: 60%,
  ),
  caption: [2D Plots of Decision Boundaries for Hinge Loss GD and Linear Programming]
)
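The hinge-loss sub-gradient step from Task 6 and the boundary rearrangement above can likewise be sketched in NumPy. This uses synthetic data and illustrative names, not the report's MATLAB code:

```python
import numpy as np

def svm_hinge_gd(X, y, lam=1.0, n_iters=2000):
    """Sub-gradient descent on the mean hinge loss max(0, 1 - y * xhat^T theta),
    for labels y in {-1, +1}."""
    n, d = X.shape
    Xh = np.hstack([np.ones((n, 1)), X])          # bias-augmented design matrix
    theta = np.zeros(d + 1)
    for _ in range(n_iters):
        active = y * (Xh @ theta) < 1             # samples with positive hinge loss
        grad = -(y[active, None] * Xh[active]).sum(axis=0)
        theta -= lam / n * grad
    return theta

def decision_boundary(theta, y_bar=0.0):
    """Rearrange y_bar = b + w1*x1 + w2*x2 into x2 = alpha*x1 + beta."""
    b, w1, w2 = theta
    return -w1 / w2, (y_bar - b) / w2
```

Once trained, `decision_boundary(theta)` gives the $(alpha, beta)$ pair used to plot the separating line, matching the derivation above with $accent(y, macron) = 0$.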
https://github.com/floriandejonckheere/utu-thesis
https://raw.githubusercontent.com/floriandejonckheere/utu-thesis/master/thesis/chapters/06-automated-modularization/01-introduction.typ
typst
#import "@preview/acrostiche:0.3.1": * #import "@preview/sourcerer:0.2.1": code #import "/helpers.typ": * = Automated modularization <automatedmodularization> In this chapter, we investigate the state of the art in (semi-)automated technologies for modularization of monolith codebases. Using a systematic literature review, we identify and categorize existing literature on (semi-)automated modularization of monolith codebases. We focus in particular on the identification of microservices candidates in monolith codebases, as this is a crucial step in the migration from monolith to microservices architecture. Using the systematic literature review, we answer the following research question: #link(<research_question_2>)[*Research Question 2*]: What are the existing approaches for (semi-)automated microservice candidate identification in monolith codebases? The motivation behind the research question is discussed in @introduction. == Plan In the current literature, several systematic mapping studies related to microservices architecture have been conducted @pahl_jamshidi_2016, @alshuqayran_etal_2016, as well as systematic literature reviews related to microservice candidate identification @abgaz_etal_2023, @schmidt_thiry_2020. However, the methods discussed in these studies are mostly aimed at assisting the software architect in identifying microservice candidates, rather than providing automated processes. Therefore, we believe that there is a need for a systematic literature review aimed at summarizing existing literature regarding (semi-)automated methods for modularization of monolith codebases. Automated methods for modularization are techniques that autonomously perform the decomposition process, without requiring intervention of a software architect. The resulting architecture can then be validated and implemented by the software architect. 
Semi-automated methods for modularization are techniques that assist the software architect in the decomposition process, but do not perform the entire process autonomously. The software architect is required to make decisions during the process, and is often left with several final proposals to choose from. Automated methods are of particular interest, as they take away the manual effort required from the software architect to analyze and decompose the monolith codebase. As a search strategy, the following platforms were queried for relevant publications: + IEEE Xplore#footnote[#link("https://ieeexplore.ieee.org/")[https://ieeexplore.ieee.org/]] + ACM Digital Library#footnote[#link("https://dl.acm.org/")[https://dl.acm.org/]] The platforms were selected based on their academic relevance, as they contain a large number of publications in the field of software engineering. Furthermore, the platforms also contain only peer-reviewed publications, which ensures a certain level of quality in the publications. Based on a list of relevant topics, we used a combination of related keywords to formulate the search query. We refrained from using more generic keywords, such as "architecture" or "design", as they would yield too many irrelevant results. #pagebreak() The topics relevant for the search query are: - *Architecture*: the architectural styles discussed in the publications #linebreak() Keywords: _microservice, monolith, modular monolith_ - *Modularization*: the process of identifying and decomposing modules in a monolith architecture #linebreak() Keywords: _service identification, microservice decomposition, monolith modularization_ - *Technology*: the methods or technologies used for modularization #linebreak() Keywords: _automated tool, machine learning, static analysis, dynamic analysis, hybrid analysis_ We formulated the search query by combining the keywords related to the topics. @slr_search_query represents the search query as a boolean expression. 
#figure(
  code(
    ```sql
    (('microservice*' IN title OR abstract) OR ('monolith*' IN title OR abstract))
    AND (('decompos*' IN title OR abstract) OR ('identificat*' IN title OR abstract))
    AND ('automate*' IN title OR abstract)
    ```
  ),
  caption: [Search query]
) <slr_search_query>

The search query was adapted to the specific search syntax of the platform.

In addition to search queries on the selected platforms, we used snowballing to identify additional relevant publications. Snowballing is a research technique used to find additional publications of interest by following the references of the selected publications @wohlin_2014.

#pagebreak()

Based on the inclusion/exclusion criteria in @slr_criteria, the results were filtered, and the relevant studies were selected for inclusion in the systematic literature review.

#figure(
  table(
    columns: (14%, auto),
    inset: 10pt,
    stroke: (x: none),
    align: (center, left),
    [], [*Criteria*],
    "Inclusion",
    [
      - Title, abstract or keywords include the search terms
      - Conference papers, research articles, blog posts, or other publications
      - Publications addressing (semi-)automated methods or technologies
    ],
    "Exclusion",
    [
      - Publications in languages other than English
      - Publications not available in full text
      - Publications using the term "microservice", but not referring to the architectural style
      - Publications aimed at greenfield #footnote[Development of new software systems lacking constraints imposed by prior work @gupta_2011] or brownfield #footnote[Development of new software systems in the presence of legacy software systems @gupta_2011] development of systems using microservices architecture
      - Publications published before 2014, as the definition of "microservices" as an architectural style is inconsistent before 2014 @pahl_jamshidi_2016
      - Publications addressing manual methods or technologies
      - Surveys, opinion pieces, or other non-technical publications
    ]
  ),
  caption: [Inclusion and exclusion criteria]
) <slr_criteria>

As a final step, the
publications were subjected to a validation scan to ensure relevance and quality. To assess the quality, we mainly focused on the technical soundness of the method or approach described in the publication. The quality of the publication was assessed based on the following criteria: - The publication is peer-reviewed or published in a respectable journal - The publication thoroughly describes the technical aspects of the method or approach - The publication includes a validation phase or case study demonstrating the effectiveness of the method or approach This step is necessary to ensure that the selected publications are relevant to the research question and that the results are not biased by low-quality publications. Once a final selection of publications was made, the resulting publications were reviewed, relevant information was extracted, and the publications were categorized based on the methods or approaches described. #pagebreak() == Conduct Using the search strategy outlined in the previous section, we queried the selected platforms and found a total of #slr.platforms.values().map(p => p.total).sum() publications. @slr_search_results gives an overview of the search results. 
#figure( table( columns: (auto, auto, auto), inset: 10pt, stroke: (x: none), align: (left, center, center), [*Platform*], [*Search results*], [*Selected publications*], // (("All Metadata":"microservices" OR "All Metadata":"monolith") AND ("All Metadata":"decomposition" OR "All Metadata":"identification")) [IEEE Xplore], [#slr.platforms.ieee.total], [#slr.platforms.ieee.selected], // Title:(microservice) AND AllField:(microservice OR monolith) AND AllField:(decomposition OR identification OR refactor) AND AllField:(automated) [ACM Digital Library], [#slr.platforms.acm.total], [#slr.platforms.acm.selected], [Snowballing], none, [#slr.snowballing.total], [*Total*], [#slr.platforms.values().map(p => p.total).sum()], [#(slr.platforms.values().map(p => p.selected).sum() + slr.snowballing.total)], ), caption: [Summary of search results] ) <slr_search_results> After applying the inclusion/exclusion criteria, we selected #slr.platforms.values().map(p => p.selected).sum() publications for inclusion in the systematic literature review. Of these publications, #slr.platforms.values().map(p => p.primary.len()).sum() are primary studies, and #slr.platforms.values().map(p => p.secondary.len()).sum() are secondary studies. The secondary studies were used as a starting point for the snowballing process, which resulted in #slr.snowballing.total additional publications being included in the systematic literature review. For a list of the selected publications, see @slr_publications. #pagebreak() The selected publications range in publication date from 2014 to 2024, with a peak in 2022. Fewer publications were selected during the first part of the interval, but the number of publications selected increased significantly in the second part of the decade. @slr_by_year gives an overview of the distribution of publications by year. 
#figure( include("/figures/06-automated-modularization/publications-by-year.typ"), caption: [Distribution of publications by year] ) <slr_by_year> From the selected publications, we extracted relevant information, such as: - The type of method or approach described (automated, semi-automated) - The input data used for the microservice candidate identification process - The algorithms used in the microservices candidate identification process - The quality metrics used in the evaluation of the decomposition #cite_full(<kitchenham_charters_2007>) suggest that the data extraction process should be performed by at least two researchers to ensure the quality and consistency of the extracted data. However, due to resource constraints, the data extraction was performed only by one researcher. To prevent bias and ensure the quality of the data extraction, the results were validated by a re-test procedure where the researcher performed a second extraction from a random selection of the publications to check the consistency of the extracted data. #pagebreak() == Report The publications selected for inclusion in the systematic literature review were qualitatively reviewed and categorized in three dimensions. The categorization was only performed on the primary studies, as the secondary studies already aggregate and categorize primary studies. The secondary studies were used to perform the snowballing process, which resulted in additional primary studies being included in the systematic literature review. First, we categorized the publications based on the #acr("SDLC") artifact used as input for the microservice candidate identification process. Each artifact category has an associated collection type: either static, dynamic, or hybrid @bajaj_etal_2021. Static collection describes a #acr("SDLC") artifact that was collected without executing the software (e.g. 
source code or binary code), while dynamic collection describes a #acr("SDLC") artifact that was collected after or during execution of the software (e.g. call trace or execution logs). Some publications describe methods or algorithms that use a combination of #acr("SDLC") artifacts, which is categorized as hybrid. Second, we categorized the publications based on the class of algorithm(s) used for microservice candidate identification. We based the classification of the algorithms on the work of #cite_full(<abdellatif_etal_2021>), who identified six types of microservice candidate identification algorithms. Third, the publications were also categorized by the quality metrics used for evaluation of the proposed decompositions.
https://github.com/pncnmnp/typst-poster
https://raw.githubusercontent.com/pncnmnp/typst-poster/master/examples/example.typ
typst
MIT License
#import "../poster.typ": * #show: poster.with( size: "36x24", title: "A typesetting system to untangle the scientific writing process", authors: "<NAME>, <NAME>, <NAME>, <NAME>", departments: "Department of Computer Science", univ_logo: "./images/ncstate.png", footer_text: "Conference on Typesetting Systems, 2000", footer_url: "https://www.example.com/", footer_email_ids: "<EMAIL>", footer_color: "ebcfb2", // Modifying the defaults keywords: ("Typesetting", "Scientific Writing", "Typst"), ) = #lorem(3) #lorem(100) #figure( image("../images/Women_operating_typesetting_machines.png", width: 50%), caption: [#lorem(10)] ) #lorem(60) = #lorem(2) #lorem(30) + #lorem(10) + #lorem(10) + #lorem(10) #lorem(50) #set align(center) #table( columns:(auto, auto, auto), inset:(10pt), [#lorem(4)], [#lorem(2)], [#lorem(2)], [#lorem(3)], [#lorem(2)], [$alpha$], [#lorem(2)], [#lorem(1)], [$beta$], [#lorem(1)], [#lorem(1)], [$gamma$], [#lorem(2)], [#lorem(3)], [$theta$], ) #set align(left) #lorem(80) $ mat( 1, 2, ..., 8, 9, 10; 2, 2, ..., 8, 9, 10; dots.v, dots.v, dots.down, dots.v, dots.v, dots.v; 10, 10, ..., 10, 10, 10; ) $ == #lorem(5) #lorem(65) #figure( image("../images/Standard_lettering.png", width: 100%), caption: [#lorem(8)] ) = #lorem(3) #block( fill: luma(230), inset: 8pt, radius: 4pt, [ #lorem(80), - #lorem(10), - #lorem(10), - #lorem(10), ] ) #lorem(75) ```rust fn factorial(i: u64) -> u64 { if i == 0 { 1 } else { i * factorial(i - 1) } } ``` = #lorem(5) #lorem(100) - #lorem(10) - #lorem(5) - #lorem(8) - #lorem(15) - #lorem(9) - #lorem(7) $ sum_(k=1)^n k = (n(n+1)) / 2 = (n^2 + n) / 2 $ #block( fill: luma(230), inset: 8pt, radius: 4pt, [ #lorem(30), ] ) #figure( image("../images/Rosetta_stone.png", width: 85%), caption: [#lorem(30)] )
https://github.com/crd2333/crd2333.github.io
https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/Courses/计算机图形学.typ
typst
#import "/src/components/TypstTemplate/lib.typ": * #show: project.with( title: "计算机图形学", lang: "zh", ) - 主要是 Games 101 的笔记,然后如果 ZJU 课上有新东西的话可以加进来 #quote()[ - 首先上来贴几个别人的笔记,#strike[自己少记点] + #link("https://www.bilibili.com/read/readlist/rl709699?spm_id_from=333.999.0.0")[B站笔记] + #link("https://iewug.github.io/book/GAMES101.html#01-overview")[博客笔记] + #link("https://www.zhihu.com/column/c_1249465121615204352")[知乎笔记] + #link("https://blog.csdn.net/Motarookie/article/details/121638314")[CSDN笔记] ] = Overview 图形学概述 - 课程内容主要包含 + 光栅化(rasterization):把三维空间的几何形体显示在屏幕上。实时(30fps)是一个重要的挑战 + 几何表示(geometry):如何表示一条光滑的曲线、一个曲面,如何细分以得到更复杂的曲面,形变时如何保持拓扑结构 + 光线追踪(ray tracing):慢但是真实。实时是一个重要的挑战。本节课还介绍实时光线追踪 + 动画/模拟(animation/simulation):譬如扔一个球到地上,球如何反弹、挤压、变形等 - 这节课不包含:OpenGL, DirectX, Vulcan 等计算机图形学 API、Maya, Blender, Unity, Unreal Engine 等 3D 建模、计算机视觉(一切需要猜测的事情,识别)、硬件编程 - CG 和 CV 的区别 - 实际上并没有明显的界限 #fig("/public/assets/Courses/CG/img-2024-07-25-10-25-57.png") = Math 数学基础 == Linear Algebra 向量与线性代数 - 以右手坐标系讲解,但引擎可能是左手坐标系 - 点乘判断前后;叉乘判断左右和点在凸多边形内外 - Vector - 如何表示 - 相加、点积、叉积 == Transformation 变换 #info()[ + 二维与三维 + 模型、视图、投影 ] #let gt = $hat(g) times hat(t)$ - 各种变换矩阵,后统一归为仿射变换(在齐次坐标的概念下) - 平移 - 缩放(反直觉是这是对原点而言的) - 旋转(2D 相对于原点,3D 相对于轴) - 齐次坐标的引入:用矩阵乘向量统一表示平移、旋转、缩放等操作 - $(x,y,z)$ $->$ $(x,y,z,1)$; $(x,y,z,w)$ $->$ $(x/w,y/w,z/w)$ - point 和 vector 的不同表示 - 例子 - 绕任意点旋转 - 绕任意轴 $(x_1, y_1, z_1) - (x_2, y_2, z_2)$ 旋转 $theta$ 角 + 把这跟轴平移过原点 + 旋转使其跟某个轴重合,以 $z$ 轴为例,那么就先后绕 $x$ 轴和 $y$ 轴旋转 + 绕重合轴($z$ 轴)旋转 $theta$ 角 + 再做 1, 2 步的逆变换回去 - Viewing (观测) transformation - View (视图) / Camera transformation - Projection (投影) transformation - Orthographic (正交) projection - Perspective (透视) projection - 在现实生活中如何照一张照片? 
+ 找个好地方摆 pose(Model 变换) + 把相机放个好角度(View 变换) + 按快门(Projection 变换) - MVP: model, view, projection - 模型变换(Model) - 就是之前的几种仿射变换(如果线性的话) $ M_"model" = mat(a, b, c, t_x; d, e, f, t_y; g, h, i, t_z; 0, 0, 0, 1)\ "for any axis: "~~~ upright(R) = E cos theta + (1 - cos theta) mat(k_(x) ; k_(y) ; k_(z)) (k_(x), k_(y), k_(z)) + sin theta mat(0, -k_(z), k_(y) ; k_(z), 0, -k_(x) ; -k_(y), k_(z), 0) $ - 视图/相机变换(View) - 也被叫做 ModelView transformation,因为对模型也要做(保持相对关系不变),我们要把整个世界变换到相机坐标系(View Reference Coordinate System)下 - 定义相机位姿(SLAM 中的外参):position, lookat, up - 看起来有 $9$ 个参数,实际上是 $6$ 个自由度(6-DOF),因为后两个方向可以用叉乘合并表示 - 一般把相机转到原点,看向(*gaze at*) $-z$,up direction(*top*) 为 $y$ 轴 - 先平移对齐远点,再旋转对齐轴($g$ to $-Z$, $t$ to $Y$, $(g times t)$ to $X$) $ T_"view" = mat(1,0,0,-x_e;0,1,0,-y_e;0,0,1,-z_e;0,0,0,1), ~~~~~~~ R_"view" = mat(x_gt, y_gt, z_gt, 0; x_t, y_t, z_t, 0; x_(-g), y_(-g), z_(-g), 0; 0, 0, 0, 1) =^("typically") mat(1,0,0,0;0,1,0,0;0,0,1,0;0,0,0,1) \ M_"view" = R_"view" times T_"view" $ - 投影变换(Projection) - 正交投影 - 视口是个 $[l,r] [b,t] [f,n]$(注意这里 z 轴是小的 far,大的 near)的长方体 - 但一般 $f,n$ 通过 aspect ratio(width / height) 和 field of view(FOV,视野角)计算 $ tan ("fovY") / 2=t / (|n|), ~~~~~~~~ "aspect"=r / t $ - 先将立方体的中心平移到原点,再将立方体缩放到$[-1,1]^3$中(也就是一个平移矩阵+一个缩放矩阵),方便之后的计算 $ M_"ortho" = mat(2/(r-l), 0, 0, 0; 0, 2/(t-b), 0, 0; 0, 0, 2/(n-f), 0; 0, 0, 0, 1) mat(1, 0, 0, -(r+l)/2; 0, 1, 0, -(t+b)/2; 0, 0, 1, -(n+f)/2; 0, 0, 0, 1) = mat(2/(r-l), 0, 0, -(r+l)/(r-l); 0, 2/(t-b), 0, -(t+b)/(t-b); 0, 0, 2/(n-f), -(n+f)/(n-f); 0, 0, 0, 1) $ - 透视投影(近大远小) - PRP: Projection Reference Point = Eye position - 如何做?从理解的角度看,首先在远平面上挤(squish)一下,然后做正交投影,也就是分为两步 - 这里的推导很妙,利用了两个性质:近平面的点不会发生变化;远平面的点 z 的值不会发生变化 $ M_"persp" = M_"ortho" times mat(n, 0, 0, 0; 0, n, 0, 0; 0, 0, n+f, -n f; 0, 0, 1, 0) $ = Rasterization 光栅化 #info()[ + 三角形的离散化 + 深度测试与抗锯齿 ] - 上节课操作后,所有物体都处在$[-1,1]^3$的立方体中,接下来把他画在屏幕上,这一步就叫做光栅化 - 光栅化算法得名于光栅化显示器(CRT),生成二维图像的过程是从上到下,从左到右一个像素一个像素来。 - Raster == screen in German, rasterization == drawing onto the screen == 画线算法 - 
DDA(Digital Differential Analyzer):数字微分分析器,用来画直线,但受限于浮点数精度 - Breseham's Line Drawing Algorithm:更高效的画线算法,只需要整数运算 - 注意到每次 $x$ 加一,$y$ 的变化不会超过 $1$ #fig("/public/assets/Courses/CG/2024-09-18-09-19-46.png") - 画圆算法 #fig("/public/assets/Courses/CG/2024-09-18-09-22-15.png") - 但 Further acceleration 可能会有误差累积的问题 - Breseham's Circle Drawing Algorithm - 把 360° 分成 8 个部分,每次画一个点,然后对称画出另外 7 个点 - 以 $x=0,y=R$ 开始为例 $d_1 = (x+1)^2 + y^2 - R^2, d_2 = (x+1)^2 + (y-1)^2 - R^2, ~ d = d_1 + d_2$,判断 $d$ 的正负 - 多边形填充 - 一种方法是判断每个像素在多边形内外(叉积或奇偶检验) - 另一种方法是 scan-line method,从上到下、左到右扫描 - 找到扫描线与多边形的*交点*,然后按照扫描线的方向*排序*,每两个交点之间*填充* - 也可以用 Breseham 的思想加速 #hline() - 屏幕空间 - *视口变换*:(暂时忽略 z 轴)将$[-1,1]^2$变换到屏幕空间的矩阵 - 一是要考虑屏幕的宽高比,二是要考虑将坐标系原点移到左上角 $ M_"viewport"=mat(w/2,0,0,w/2; 0,h/2,0,h/2; 0,0,1,0; 0,0,0,1) $ - 显示方式 + Cathode Ray Tube (CRT):电子束扫描,优化——隔行扫描 + Frame Buffer:存储屏幕上每个像素的颜色值 + Flat Panel Display:LCD(液晶显示器), OLED + LED + Electrophoretic Display:如 Kindle - 三角形:光栅化的基本单位 - 原因:三角形是最基本(最少边)的多边形;其他多边形都可以拆分为三角形(建模常用四边形,但到了引擎里拆成三角形);三角形必定在一个平面内;容易定义三角形的里外;三角形内任意一点可以通过三个定点的线性插值计算得到 - 把三角形转化为像素:简单近似采样 - 使用内积计算内点外点 - 使用*包围盒*(Bounding Box)减小计算量,或者 incremental triangle traversal 等 - 问题:Jaggies(锯齿)/ Aliasing(走样) - 需要抗锯齿、抗走样的方法,为此先介绍采样理论:把到达光学元件上的光,产生的信息,离散成了像素,对这些像素采样,形成了照片 - 采样产生的问题(artifacts):走样、摩尔纹、车轮效应,本质都是信号变化频率高于采样频率 - 反走样方法:Blurring(Pre-Filtering) Before Sampling - 不能 Sample then filter, or called blurred aliasing - 为什么不行?为此介绍*频域、时域*的知识 - 傅里叶、滤波、卷积 - 卷积定理:时域的卷积 = 频域的乘积,反过来也成立 - 采样不同的间隔,会引起频谱不同间隔进行复制,所相交的部分就是走样 - 所谓反走样就是把高频信息砍掉,砍掉虚线方块以外,再以原始采样频率进行采样,这样就不易交叉了 - MSAA(Multi-Sample Anti-Aliasing):多重采样抗锯齿 - 把一个像素划分为几个小点,用小点的覆盖率来模拟大点的颜色 - 增大计算负担,但也有一些优化,比如只在边缘处采样、复用采样点等 - 另外的一些里程碑式的抗锯齿方案:FXAA, TAA - Super resolution - 把图片从低分辨率放大到高分辨率 - 本质跟抗锯齿类似,都是采样频率不够的问题 - 使用 DLSS(Deep Learning Super Sampling) 技术 - 考虑远近物体一起被画在屏幕上的问题 - 画家算法(Painter's Algorithm):油画家,先画远的,再画近的覆盖掉 - 但并不总是有效,为此引入 Z-buffering(深度缓冲):既然没办法判断三角形整体的深度,那么就判断每个像素的深度,在渲染时存一张深度图 - 暂不考虑相同深度,处理不了透明物体,另外 z-buffering 可能会与 MSAA 结合 == \*OpenGL - OpenGL 是一个跨平台的图形 
API,用于渲染 2D 和 3D 图形 - OpenGL Ecosystem - OpenGL, WebGL, OpenGL ES, OpenGL NG, Vulkan, DirectX ... - #link("https://blog.csdn.net/qq_23034515/article/details/108283747")[WebGL,OpenGL和OpenGL ES三者的关系] - OpenGL 是做什么的? - 定义对象形状、材质属性和照明 - 从简单的图元、点、线、多边形等构建复杂的形状。 - 在3D空间中定位物体并解释合成相机 - 将对象的数学表示转换为像素 - 光栅化 - 计算每个物体的颜色 - 遮光 - Three Stages in OpenGL - Define Objects in World Space - Set Modeling and Viewing Transformations - Render the Scene - 跟我们学的顺序类似 - OpenGL Primitives - GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_TRIANGLES, GL_QUADS, GL_POLYGON, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUAD_STRIP - 放到 `glBegin` 里决定如何解释,具体见 PPT - OpenGL 的命令基本遵守一定语法 - 所有命令以 `gl` 开头 - 常量名基本是大写 - 数据类型以 `GL` 开头 - 大多数命令以两个字符结尾,用于确定预期参数的数据类型 - OpenGL 是 Event Driven Programming - 通过注册回调函数(callbacks)来处理事件 - 事件包括键盘、鼠标、窗口大小变化等 #fig("/public/assets/Courses/CG/2024-09-22-16-45-55.png", width: 70%) - Double buffering - 隐藏绘制过程,避免闪烁 - 有时也会用到 Triple buffering - 后来看了看 OpenGL 的相关教程,感觉现在的实现和这里不太一样(可能过时了)。。。还是以网络教程为准 - WebGL Tutorial / OpenGL ES = Shading 着色 #info()[ + 光照和基本着色模型 + 着色频率、图形管线、纹理映射 + 插值、高级纹理映射 ] - 计算机图形学领域定义为:为物体赋予*材质*的过程(把材质和着色合在一起),但不涉及 shadow(shading is local!) 
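- As a numerical preview of the shading model developed in the next part of these notes, the classic Blinn-Phong sum (diffuse + specular + ambient, with inverse-square light falloff and the half-vector trick) can be sketched in a few lines. This is a minimal illustration assuming numpy; the coefficients `kd`, `ks`, `ka`, the shininess `p`, and the light/view geometry are made-up example values, not anything prescribed by the course.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(n, l, v, light_intensity, r, kd, ks, ka, ambient_intensity, p):
    """Blinn-Phong shading at one point: diffuse + specular + ambient.

    n, l, v are unit vectors (surface normal, toward light, toward viewer);
    r is the distance to the light; p is the shininess exponent.
    """
    arriving = light_intensity / (r * r)   # inverse-square falloff
    h = normalize(l + v)                   # half vector between l and v
    diffuse = kd * arriving * max(0.0, float(n @ l))
    specular = ks * arriving * max(0.0, float(n @ h)) ** p
    ambient = ka * ambient_intensity       # constant ambient approximation
    return diffuse + specular + ambient

# Example: light and viewer both straight above a horizontal surface.
n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.0, 0.0, 1.0]))
v = normalize(np.array([0.0, 0.0, 1.0]))
L = blinn_phong(n, l, v, light_intensity=4.0, r=2.0,
                kd=0.8, ks=0.5, ka=0.1, ambient_intensity=1.0, p=32)
```

- Note how shading stays local, exactly as the definition above says: nothing in this computation knows whether another object blocks the light, so shadows need a separate mechanism.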
- 输入:Viewer direction $v$, Surface normal $n$, Light direction $I$(for each of many lights, Surface parameters(color,shininess) - 一些物理的光学知识 - Lambertian(Diffuse) Term(漫反射) - 在某一 shading point,有 $L_d = k_d I_d \/ r^2 max(0, n dot l)$ - $k_d$ 是漫反射系数,$I_d$ 是光源强度,$n$ 是法向量,$l$ 是光线方向,$r$ 是光源到点的距离 - 也可以用半球积分来推导 - Specular Term(高光) - 亮度也取决于观察角度,用一个 trick 转化计算:半程向量(half vector) $h = (v + l) \/ ||v + l||$ - $L_s = k_s I_s \/ r^2 max(0, n dot h)^p$ - 注意简化掉了光通量项,以及 $p$ 是高光系数,$v$ 是观察方向 - Ambient Term(环境) - 过于复杂,一般用一个常数来代替 - $L_a = k_a I_a$ - 综合得到 Blinn-Phong 模型 #fig("/public/assets/Courses/CG/img-2024-07-26-23-37-29.png") - 着色频率 - Flat Shading, Gouraud Shading, Phong Shading,平面、顶点、像素,依次增加计算量 - 平面着色的法向量很好理解,但顶点和像素就绕一些,需要插值方法得到平滑效果 - Graphics(Real-time Rendering) Pipeline - 顶点处理(Vertex Processing) $->$ 三角形处理(Triangle Processing) $->$ 光栅化(Rasterization) $->$ 片元处理(Fragment Processing) $->$ 帧缓冲处理(Framebuffer Processing) - 需要理解之前讲解的各个操作归属于哪个阶段 + 顶点处理:作用是对所有顶点数据进行MVP变换,最终得到投影到二维平面的坐标信息(同时为Zbuffer保留深度z值)。超出观察空间的会被剪裁掉 + 三角形处理:容易理解,就是将所有的顶点按照原几何信息变成三角面,每个面由3个顶点组成 + 光栅化:得到了许许多多个三角形之后,接下来的操作自然就是三角形光栅化了,涉及到抗锯齿、采样方法等 + 片元处理:在进行完三角形的光栅化之后,知道了哪些在三角形内的点可以被显示,通过片元处理阶段的着色问题确定每个像素点或者说片元(Fragement)的颜色[注:片元可能比像素更小,如 MSAA 抗拒齿操作的进一步细分得到的采样点]。该阶段部分工作可能在顶点处理阶段完成,因为我们需要顶点信息对三角形内的点进行属性插值(e.g. 
在顶点处理阶段就算出每个顶点的颜色值,如 Gouraud Shading),当然这一阶段也少不了Z-Buffer来帮助确定深度。另外,片元处理涉及纹理映射的步骤 - Shaders - 着色器,运行于 GPU 上 - GLSL 语言 - 每个顶点执行一次 $->$ 顶点着色器 vertex shader - 每个像素执行一次 $->$ 片元着色器 fragment shader,或者像素着色器 pixel shader - 纹理映射 - 3D世界的物体的表面是2D的,将其展开就是一张图,纹理映射就是将这张图映射到物体上 - 如何知道2D的纹理在3D的物体上的位置?通过纹理坐标。有手动(美工)和自动(参数化)的方法,这里我们就认为已经知道3D物体的每一个三角形顶点对应的纹理 $u v(in [0,1])$ 坐标 - 四方连续纹理:tiled texture,保证纹理拼接时的连续性 - 三角形内插值: 重心坐标(Barycentric Coordinates) - 重心坐标:$(x,y)=al A + beta B + ga C ~~~ (al+beta+ga=1)$,通过 $al, beta, ga >= 0$ 可以任意表示三角形内的点,且与三个顶点所在坐标系无关。这个重心坐标跟三角形重心不是一回事,三角形重心的重心坐标为 $(1/3, 1/3, 1/3)$ - 对什么运用插值:插值的属性:纹理坐标,颜色,法向量等等,统一记为 $V$,插值公式为 $V = al V_A + beta V_B + ga V_C$ - 重心坐标没有投影不变性,所以要在三维中插值后再投影(特殊的如深度,要逆变换回3D插值后再变换回来) - 纹理过大过小 - Texture Magnification:如果纹理过小 - 比如一个4K分辨率的墙贴上一个$256*256$的纹理,那么就会出现 uv 坐标非整数的情况(a pixel on texture,简记为 *texel*,纹理元素、纹素,不可以取非整数值) - 使用 nearest(四舍五入), bilinear(双线性插值, 4), bicubic(双三次插值, 16) - Texture Minification:如果纹理过大 - 问题:走样、摩尔纹、锯齿等(且越远越严重)。原因在于屏幕上 pixel 越远就对应越大面积的 texel(footpoint 现象);或者说,采样的频率跟不上信号的频率 - 一个很自然的想法是类似之前抗走样所采用的超采样方法,但这里提出另一种 mipmap(image pyramid) 方法:Allowing (fast, approx, square) range queries - 离线预处理(在渲染前生成)每个 footprint 对应纹理区域(不同 level)里的均值 - 开销:$1+1/4+dots approx 4/3$,仅仅是额外的 33% 开销 - 计算 level:用 pixel 的相邻点投影到 texel 估算 footpoint 大小(近似为正方形)再取对数;然后因为 level 是不连续的,通过三线性插值(两个层内双线性再一个层之间线性)得到连续性,避免突兀 - 优化:真实情况屏幕空间上的区域对应的 footprint 并不一定是正方形,导致 overblur,为此提出各向异性滤波(*Anisotropic Filtering*),开销为 $3$ 倍。进一步,依旧无法解决斜着的区域,用 EWA Filtering - Environment Map 环境光映射 - 前面说的纹理可以扩展到其它概念(数据,而非仅仅是图像),比如将某一点四周的环境光(光源、反射 or anything else)也存储在一个贴图上 - 这样的做法是一种假设:环境光与物体的位置无关,只与观察方向有关(即环境光不随距离衰减)。然后各个方向的光源可以用一个球体进行存储,即任意一个 3D 方向,都标志着一个 texel - 适用于什么场景呢?比如说一个大屋子里面的小茶壶,它相对整个屋子是很小的,如果假设环境光没有衰减,那它反射的光就只和方向有关 - 但是,类比地球仪,展开后把球面信息转换到平面上,从而得到环境 texture,同时存在拉伸和扭曲问题 - 解决办法:Cube Map(天空盒),将球面一一映射到立方体的六个面上,这样展开后得到的纹理面就是均匀的 - Bump Mapping 凹凸贴图 - 在不改变物体本身几何形状的情况下,通过纹理来模拟物体表面的凹凸不平 - Displacement Mapping 位移贴图 - 与之相对,位移贴图的输入同样是一张纹理图,但它的输出真的对物体的几何形状进行了改变,从而对物体边缘和物体阴影有更好的效果 - 
这要求建模的三角形足够细到比纹理的采样频率还高。但又引申出一个问题,为什么不直接在建模上体现其位移?因为这样便于修改、特效;另外,DirectX 中的动态曲面细分:开始先用粗糙的三角形,应用 texture 的过程中检测是否需要把三角形拆分的更细 - 三维纹理 - 前面说的纹理局限于 2D,但可以扩展到 3D - 三维纹理,定义空间中任意一点的纹理:并没有真正生成纹理的图,而是定义一个三维空间的噪声函数经过各种处理,变成需要的样子(如地形生成) - 阴影纹理 - 阴影可以预先计算好,直接写在 texture 里,然后把着色结果乘以阴影纹理的值 - 3D Texture 和体渲染 - (讲完 Geometry 后回来),阴影映射(Shadow Mapping) - 图像空间中计算,生成阴影时不需要场景的几何信息;但会产生走样现象 - Key idea:不在阴影里则能被光源和摄像机同时看道;在阴影里则能被相机看到,但不能被光源看到 - Step1:在光源处做光栅化,得到深度图(shadow map,二维数组,每个像素存放深度);Step2:从摄像机做光栅化,得到光栅图像;Step3:把摄像机得到图像中的每个像素所对应的世界空间中物体的点投影回光源处,得到该点的深度值,将此数值跟该 点到光源的距离做比较(注意用的是世界空间中的坐标,即透视投影之前的) - 浮点数精度问题,shadow map 和光栅化的分辨率问题(采样频率) - 硬阴影、软阴影(后者的必要条件是光源有一定大小) = Geometry 几何 #info()[ + 基本表示方法(距离函数SDF、点云) + 曲线与曲面(贝塞尔曲线、曲面细分、曲面简化) + 网格处理、阴影图 ] - 主要分为两类:隐式几何、显式几何 - 隐式几何:不告诉点在哪,而描述点满足的关系,generally $f(x,y,z)=0$ - 好处:很容易判断给定点在不在面上;坏处:不容易看出表示的是什么 - Constructive Solid Geometry(CSG):可以对各种不同的几何做布尔运算,如并、交、差 - Signed Distance Function(SDF):符号距离函数:描述一个点到物体表面的最短距离,外表面为正,内表面为负,SDF 为 $0$ 的点组成物体的表面 - 对两个“规则”形状物体的 SDF 进行线性函数混合(blend),可以得到一个新的 SDF,令其为 $0$ 反解出的物体形状将变得很“新奇” - 水平集(Level Set):与 SDF 很像,也是找出函数值为 $0$ 的地方作为曲线,但不像 SDF 会空间中的每一个点有一种严格的数学定义,而是对空间用一个个格子去近似一个函数。通过 Level Set 可以反解 SDF 为 $0$ 的点,从而确定物体表面 - 显式几何:所有曲面的点被直接给出,或者可以通过参数映射关系直接得到 - 好处:容易直接看出表示是什么;坏处:很难判断内/外 - 以下均为显式表示法 - 点云,很基础,不多说,但其实不那么常见(除了扫描出来的那种) - 多边形模型 - 用得最广泛的方法,一般用三角形或者四边形来建模 - 在代码中怎么表示一个三角形组成的模型?用 wavefront object file(.obj) - v 顶点;vt 纹理坐标;vn 法向量;f 顶点索引(哪三个顶点、纹理坐标、法线) - 贝塞尔曲线 - 用三个控制点确定一条二次贝塞尔曲线,de Casteljau 算法。三次、四次等也是一样的思路。用伯恩斯坦多项式 - 贝塞尔曲线好用的性质 + 首/尾两个控制点一定是起点/终点 + 对控制点做仿射变换,再重新画曲线,跟原来的一样,不用一直记录曲线上的每个点 + 凸包性质:画出的贝塞尔曲线一定在控制点围成的线之内 - piecewise Bezier Curve:每 $4$ 个顶点为一段,定义多段贝塞尔曲线,每段的终点是下一段的起点 - Splines 样条线:一条由一系列控制点控制的曲线 - B-splines 基样条:对贝塞尔曲线的一种扩展,比贝塞尔曲线好的一点是:局部性,可以更局部的控制变化 - NURBS:比B样条更复杂的一种曲线,了解即可 - 贝塞尔曲面:将贝塞尔曲线扩展到曲面 - 用 $4 times 4$ 个控制点得到三次贝塞尔曲面。每四个控制点绘制出一条贝塞尔曲线,这 $4$ 条曲线上每一时刻的点又绘制出一条贝塞尔曲线,得到一个贝塞尔曲面 - 几何操作:Mesh operations(mesh subdivision, mesh simplification, mesh regularization),下面依次展开 - 曲面细分 - Loop 细分:分两步,先增加三角形个数,后调整位置 - 新顶点:$3/8 
* (A+B) + 1/8 * (C+D)$ - 旧顶点:$(1-n*u)*"priginal_position"+u*"neighbor_position_sum"$,其中 $n$ 为顶点的度,$u=3/16 (n=3) "or" 3/(8n)$($n$ 越大越相信自己) - Catmull-Clark 细分 - 非四边形面、奇异点 - 一次细分后:每个非四边形面引入一个奇异点;非四边形面全部消失 - 顶点更新规则:新的边上的点、新的面内的点、旧的点 - 曲面简化 - 多细节层次(LOD):如果说 MipMap 是纹理上的层次结构——根据不同距离(覆盖像素区域的大小)选择不同层的纹理;那么 LOD 就是几何的层次结构——根据不同距离(覆盖像素区域的大小)选择不同的模型面数 - 边坍缩:把某一条边坍缩成一个点,要求这个点距离原先相邻面的距离平方和最小(优化问题) - 贪心算法,小顶堆 = Ray Tracing 光线追踪 #info()[ + 基本原理 + 加速结构 + 辐射度量学、渲染方程与全局光照 + 蒙特卡洛积分与路径追踪 ] == 光线追踪原理 - 光栅化:已知三角形在屏幕上的二维坐标,找出哪些像素被三角形覆盖(物体找像素点);光线追踪:从相机出发,对每个像素发射射线去探测物体,判断这个像素被谁覆盖。(像素点找物体) - 为什么要有光线追踪,光栅化不能很好的模拟全局光照效果:难以考虑 glossy reflection(反射性较强的物体), indirect illuminaiton(间接光照);不好支持 soft shadow;是一种近似的效果,不准确、不真实 - 首先定义图形学中的光线:光沿直线传播;光线之间不会相互影响、碰撞;光路可逆(reciprocity),从光源照射到物体反射进入人眼,反着来变成眼睛发射光线照射物体 - Recursive (Whitted-Style) Ray Tracing - 两个假设前提:人眼是一个点;场景中的物体,光线打到后都会进行完美的反射/折射; - 每发生一次折射或者反射(弹射点)都计算一次着色,前提是该点不在阴影内,如此递归计算 + 从视点从成像平面发出光线,检测是否与物体碰撞 + 碰撞后生成折射和反射部分 + 递归计算生成的光线 + 所有弹射点都与光源计算一次着色,前提是该弹射点能被光源看见 + 将所有着色通过某种加权叠加起来,得到最终成像平面上的像素的颜色 - 为了后续说明方便,课程定义了一些概念: + primary ray:从视角出发第一次打到物体的光线 + secondary rays:弹射之后的光线 + shadow rays:判断可见性的光线 - 那么问题的重点就成了求交点。接下来对其中的技术细节进行讲解 - Ray Equation:$r(t)=o+t d ~~~ 0 =< t < infty$,光线由点光源和方向定义 - Ray Intersection With Implicit Surface 光线与隐式表面求交 - General implicit surface: $p: f(p)=0$ - Substitute ray equation: $f(o+t d)=0$ - Solve for real,positive roots - Ray Intersection With Explicit Triangle 光线与显式表面(三角形)求交 - 通过光线和三角形求交可以实现:渲染(判断可见性,计算阴影、光照);几何(判断点是否在物体内,通过光源到点的线段与物体交点数量的奇偶性) - 求交方法一:遍历物体每个三角形,判断与光线是否相交 + 光线-平面求交 + 计算交点是否在三角形内 - 求交方法二:Möller-Trumbore射线-三角形求交算法 - 直接结合重心坐标计算 #fig("/public/assets/Courses/CG/img-2024-07-30-14-36-34.png") == Accelerating Ray-Surface Intersection - 空间划分与包围盒 Bounding Volumes - 常用 Axis-Aligned-Bounding-Box(AABB) 轴对齐包围盒 - 算 $t_"enter"$ 和 $t_"exit"$,光线与 box 有交点的判定条件:$t_"enter" < t_"exit" && t_"exit" >= 0$ - AABB 
盒的好处就在于光线与盒子的交点很容易计算,于是我们将复杂的三角形与光线求交问题部分转化为了简单的盒子与光线求交问题:首先做预处理对空间做划分(均匀或非均匀),然后剔除不包含任何三角形的盒子,计算一条光线与哪些盒子有交点,在这些盒子中再计算光线与三角形的交点 - 非均匀空间划分 + Oct-Tree:类似八叉树结构,注意下面省略了一些格子的后续划分,格子内没有物体或物体足够少时,停止继续划分 + BSP-Tree:空间二分的方法,每次选一个方向砍一刀,不是横平竖直(并非 AABB),所以不好求交,维度越高越难算 + *KD-Tree*:每次划分只沿着某个轴砍一刀,XYZ 交替砍,不一定砍正中间,每次分出两块,类似二叉树结构 - KD-tree 的缺陷:不好计算三角形与包围盒的相交性(不好初始化);一个三角形可能属于多个包围盒导致冗余计算 - 对象划分 Bounding Volume Hierarchy(BVH) - 逐步对物体进行分区:所有物体分成两组,对两组物体再求一个包围盒(xyz 的最值作为边界)。这样每个包围盒可能有相交(无伤大雅)但三角形不会有重复,并且求包围盒的办法省去了三角形与包围盒求交的麻烦 - 分组方法:启发式——总是选择最长的轴,或选择处在中间(中位数意义上)的三角形 - 课上讲的 BVH 是宏观上的概念,没有细讲其实现,可以看 #link("https://www.cnblogs.com/lookof/p/3546320.html")[这篇博客] == 辐射度量学(Basic radiometry)、渲染方程与全局光照 - Motivation:Whitted styled 光线追踪、Blinn-phong 着色计算不够真实 - 辐射度量学:在物理上准确定义光照的方法,但依然在几何光学中的描述,不涉及光的波动性、互相干扰等 - 几个概念:Radiant Energy 辐射能 $Q$, Radiant Flux(Power) 辐射通量$Phi$, Radiant Intensity 辐射强度 $I$, Irradiance 辐照度 $E$, Radiance 辐亮度 $L$ + Radiance Energy:$Q[J = "Joule"]$,基本不咋用 + Radiant Flux:$Phi = (dif Q)/(dif t) [W = "Watt"][l m="lumen"]$,有时也把这个误称为能量 + 后面三个细讲 - Radiant Intensity: Light Emitted from a Source - $I(omega) = (dif Phi)/(dif omega) [W/(s r)][(l m)/(s r) = c d = "candela"]$ - solid angle 立体角 - Irradiance: Light Incident on a Surface - $E = (dif Phi)/(dif A cos theta) [W/m^2]$,其中 $A$ 是投影后的有效面积 - 注意区分 Intensity 和 Irradiance,对一个向外锥形,前者不变而后者随距离减小 - Radiance: Light Reflected from a Surface - $L = (dif^2 Phi(p, omega))/(dif A cos theta dif omega) [W/(s r ~ m^2)][(c d)/(m^2)=(l m)/(s r ~ m^2)=n i t]$,$theta$ 是入射(或出射)光线与法向量的夹角 - Radiance 和 Irradiance, Intensity 的区别在于是否有方向性 - 把 Irradiance 和 Intensity 联系起来,Irradiance per solid angle 或 Intensity per projected unit area - $E(p)=int_(H^2) L_i(p, omega) cos theta dif omega$ - 双向反射分布函数(Bidirectional Reflectance Distribution Function, BRDF):描述了入射($omega_i$)光线经过某个表面反射后在各个可能的出射方向($omega_r$)上能量分布(反射率)——$f_r(omega_i -> omega_r)=(dif L_r (omega_r))/(dif E_i (omega_i)) = (dif L_r (omega_r)) / (L_i (omega_i) cos theta_i dif omega_i) [1/(s r)]$ - 反射方程:$ L_r(p, 
omega_r)=int_(H^2) f_r (p, omega_i -> omega_r) L_i (p, omega_i) cos theta_i dif omega_i $ - 注意,入射光不止来自光源,也可能是其他物体反射的光。递归思想,反射出去的光 $L_r$ 也可被当做其他物体的入射光 $E_i$ - 推广为渲染方程(绘制方程):$ L_o (p, omega_o)=L_e (p, omega_o) + int_(Omega^+) f_r (p, omega_i, omega_o) L_i (p, omega_i) (n dot omega_i) dif omega_i $ - 把式子通过“算子”概念简写为 $L=E+K L$,然后移项泰勒展开得到 $L=E+K E+K^2 E+...$,如下图,光栅化一般只考虑前两项,这也是为什么我们需要光线追踪 #fig("/public/assets/Courses/CG/img-2024-07-31-23-36-46.png") - 全局光照 = 直接光照(Direct Light) + 间接光照(Indirect Light) == 蒙特卡洛路径追踪(Path Tracing) - 概率论基础 - 回忆 Whitted-styled 光线追踪:摄像机发射光线,打到不透明物体,则认为是漫反射,直接连到光源做阴影判断、着色计算;打到透明物体,发生折射、反射。总之光线只有三种行为——镜面反射、折射、漫反射 + 难以处理毛面光滑材质? + 忽略了漫反射物体之间的反射影响 - 采样蒙特卡洛方法解渲染方程:直接光照;全局光照,采用递归 #fig("/public/assets/Courses/CG/img-2024-08-01-22-25-38.png") - 问题一:$"rays"=N^"bounces"$,指数级增长。当 $N=1$ 时,就称为 *path tracing* 算法 - $N=1$ 时 noise 的问题:在每个像素内使用 $N$ 条 path,将 path 结果做平均(同时也解决了采样频率,解决锯齿问题) - 问题二:递归算法的收敛条件。如果设置终止递归条件,与自然界中光线就是弹射无数次相悖。如何不无限递归又不损失能量? - 俄罗斯轮盘赌 RussianRoulette(RR),以一定的概率停止追踪(类似神经网络的 dropout) - 期望停止次数为 $1/(1-P)$ - 而结果的正确性由 $E=P times L_o / P + (1-P) times 0 = L_o$ 保证 - 问题三:低采样数的情况下噪点太多,而高采样率又费性能(当光源越小,越多的光线被浪费) - _*重要性采样*_:直接采样光源的表面(其它方向概率为 $0$),这样就没有光线被浪费 - 蒙特卡洛在(单个像素内)立体角 $omega$ 上采样,在 $omega$ 上积分;现在对光源面采样,就需要把公式写成对光源面的积分 $ L_(o) (x, omega_(o)) & = integral_(Omega^(+)) L_(i) (x, omega_(i)) f_(r) (x, omega_(i), omega_(o)) cos theta dif omega_(i) \ & = integral_(A) L_(i) (x, omega_(i)) f_(r) (x, omega_(i), omega_(o)) (cos theta cos theta') / norm(x^(prime) - x)^2) dif A $ - 这样又只考虑了直接光照,对间接光照依旧按原本方式处理 - 最终着色计算伪代码为: ``` // 如果 depth 为 0,wo 为从像素打出的光线的出射方向,与物体的第一个交点为 p // 如果 depth 不为 0,从之前的交点投出反射光线或光源光线作为 wo,p 为新的交点 Shade(p, wo) { // 1、来自光源的贡献 对光源均匀采样,即随机选择光源表面一个点x'; // pdf_light = 1 / A shoot a ray form p to x'; L_dir = 0.0; if (the ray is not blocked in the middle) // 判断是否被遮挡 L_dir = L_i * f_r * cosθ * cosθ' / |x' - p|^2 / pdf_light; // 2、来自其他物体的反射光 L_indir = 0.0; Test Russian Roulette with probability P_RR; Uniformly sample the hemisphere toward wi; 
//pdf_hemi = 1 / 2π Trace a ray r(p,wi); if (ray r hit a non-emitting object at q) L_indir = shade(q, -wi) * f_r * cosθ / pdf_hemi / P_RR; return L_dir + L_indir; } ``` - 最后的结语与拓展 - Ray tracing: Previous vs. Modern Concepts - 过去:Ray tracing == Whitted-style ray tracing - 现在:一种 light transport 的广泛方法,包括 (Unidirectional & bidirectional) path tracing, Photon mapping, Metropolis light transport, VCM / UPBP - 如何对半球进行均匀采样,更一般地,如何对任意函数进行这样的采样? - 随机数的生成(low discrepancy sequences) - multiple importance sampling:把对光源和半球的采样相结合 - 对一个像素的不同 radiance 是直接平均还是加权平均(pixel reconstruction filter)? - 算出来的 radiance 还不是最终的颜色(而且并非线性对应),还需要 gamma correction,curves, color space 等 = Materials and Appearances 材质与外观 - 自然界中的材质 == 计算机图形学中的材质 - 材质 == BRDF(Bidirectional Reflectance Distribution Function,双向反射分布函数) - 漫反射材质(Diffuse)的 BRDF - Light is equally reflected in each output direction - 如果再假设入射光也是均匀的,并且有能量守恒定律 $L_o = L_i$,那么: #fig("/public/assets/Courses/CG/img-2024-08-04-11-39-26.png") - 定义反射率 $rho$ 来表征一定的能量损失,还可以对 RGB 分别定义 $rho$ - 抛光/毛面金属(Glossy)材质的 BRDF - 这种材质散射规律是在镜面反射方向附近,一小块区域进行均匀的散射 - 代码实现上,算出*镜面*反射方向$(x,y,z)$,就以$(x,y,z)$为球心(或圆心),内部随机生成点,以反射点到这个点作为真的光线反射方向。在较高 SPP(samples per pixel)下,就能均匀的覆盖镜面反射方向附近的一块小区域 - 完全镜面反射+折射材质(ideal reflective/refractive)的 BRDF - (完全)镜面反射(reflect) - 方向描述 + 直接计算:$omega_o = - omega_i + 2 (omega_i dot n) n$ + 用天顶角 $phi$ 和方位角 $theta$ 描述:$phi_o = (phi_i+pi) mod 2 pi, ~~ theta_o = theta_i$ + 还有之前讲过的半程向量描述(简化计算) - 镜面反射的 BRDF 不太好写,因为它是一个 delta 函数,只有在某个方向上有值,其它方向上都是 $0$(?) 
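- The mirror-reflection direction formula above, $omega_o = -omega_i + 2(omega_i dot n) n$, is a one-liner in code. A minimal numpy sketch, using the same convention as the notes ($omega_i$ points from the shading point toward the incoming light; the 45-degree test vector is a made-up example):

```python
import numpy as np

def reflect(w_i, n):
    """Mirror-reflection direction: w_o = -w_i + 2 (w_i . n) n.

    w_i points from the shading point toward the incoming light,
    n is the unit surface normal; the result is again a unit vector.
    """
    return -w_i + 2.0 * float(w_i @ n) * n

n = np.array([0.0, 0.0, 1.0])
w_i = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)  # 45-degree incidence
w_o = reflect(w_i, n)
# w_o mirrors w_i about the normal: same polar angle, opposite azimuth
```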
- 折射(refract) - Snell's Law(斯涅耳定律,折射定律):$n_1 sin theta_1 = n_2 sin theta_2$ - $cos theta_t = sqrt(1 - (n_1/n_2)^2 (1 - (cos theta_i)^2))$,有全反射现象(光密介质$->$光疏介质) - 折射无法用严格意义上的 BRDF 描述,而应该用 BTDF(T: transmission),可以把二者统一看作 BSDF(S: scattering) = BRDF + BTDF。不过,通常情况下,当我们说 BRDF 时,其实就指的是 BSDF - 反射与折射能量的分配与入射角度物体属性有关,用 Fresnel Equation 描述 - 菲涅尔项(Fresnel Term) - 精确计算菲涅尔项(复杂,没有必要),只要知道这玩意儿跟 出/入射角度、介质反射率 $eta$ 有关j就行 - 近似计算:Schlick’s approximation(性价比更高):$R(th)=R_0+(1-R_0)(1-cos th)^5$,其中 $R_0=((n_1-n_2)/(n_1+n_2))^2$ == 微表面材质(Microfacet Material) - 微表面模型:微观上——凹凸不平且每个微元都认为只发生镜面反射(bumpy & specular);宏观上——平坦且略有粗糙(flat & rough)。总之,从近处看能看到不同的几何细节,拉远后细节消失 - 用法线分布描述表面粗糙程度 #fig("/public/assets/Courses/CG/img-2024-08-04-12-48-55.png") - 微表面的 BRDF #fig("/public/assets/Courses/CG/img-2024-08-04-12-52-57.png") - 其中 $G$ 项比较难理解。 当入射光以非常平(Grazing Angle 掠射角度)的射向表面时,有些凸起的微表面就会遮挡住后面的微表面。$G$项 其实对这种情况做了修正 - 微表面模型效果特别好,是 sota,现在特别火的 PBR(physically Based Rendering)一定会使用微表面模型 == 各向同性(Isotropic)和各向异性(Anisotropic)材质 - 各向同性——各个方向法线分布相似;各项异性——各个方向法线分布不同,如沿着某个方向刷过的金属 - 用 BRDF 定义,各向同性材质满足 BRDF 与方位角 $phi$ 无关($f_r (th_i,phi_i; th_r, phi_r) = f_r (th_i, th_r, |phi_r - phi_i|)$) - BRDF 的性质总结 + 非负性(non-negativity):$f_r (omega_i -> omega_r) >= 0$ + 线性(linearity):$L_r (p, omega_r) = int^(H^2) f_r (p, omega_i -> omega_r) L_i (p, omega_i) cos theta_i dif omega_i$ + 可逆性(reciprocity):$f_r (omega_i -> omega_r) = f_r (omega_r -> omega_i)$ + 能量守恒(energy conservation):$forall omega_r int^(H^2) f_r (omega_i -> omega_r) cos theta_i dif omega_i =< 1$ + 各向同性和各向异性(Isotropic vs. 
anisotropic) - 测量 BRDF - 前面对于 BRDF 的讨论都隐藏了 BRDF 的定义细节,即使我们对微表面模型的 BRDF 给出了一个公式,但其中比如菲涅尔项是近似计算的,不够精确。有时候,我们不需要给出 BRDF 的精确模型(公式),只需要测量后直接用即可 - 一般测量方法:遍历入射、出射方向,测量 radiance(入射出射可以互换,因为光路可逆),复杂度为 $O(n^4)$ - 一些优化 + 各向同性的材质,可以把 4D 降到 3D + 由于光的可逆性,工作量可以减少一半 + 不用采样那么密集,就采样若干个点,其中间的点可以插值出来 + $dots$ - 测量出来 BRDF 的存储,应该挺热门的方向是用神经网络压缩数据 - MERL BRDF Database 是一个很好的 BRDF 数据库 = Advanced Topics in Rendering 渲染前沿技术介绍 - 偏概述和思想介绍,具体技术细节不展开 - *有偏*、*无偏*,以及有偏中的*一致* == 无偏光线传播方法 - 普通的 Path Tracing 也是无偏的 - 双向路径追踪 (Bidirectional Path Tracing, BDPT) - 从摄像机出发投射子路径,从光源出发投射子路径,把两者的端点相连(在技术上比较复杂) - 之前学的路径追踪对于某些用间接光照亮的场景不太好用(由于光源角度苛刻,成功采样概率小),而 BDPT 可以提高采样效率从而减少噪点,但会导致计算速度下降 - Metropolis Light Transport(MLT) - 马尔可夫链蒙特卡洛(Markov Chain Monte Carlo, MCMC)的应用 - 马尔可夫链可以根据一个样本,生成跟这个样本靠近的下一个样本,使得这些样本的分布跟被积函数曲线相似,这样的 variance 较小。用在路径追踪里面,就可以实现“局部扰动现有路径去获取一个新的路径”(在现有采样点附近生成新采样点,连起来得到新路径) - 适用于复杂场景(间接光照、Caustics 现象),只要找到一条,我就能生成很多条 - 缺陷:难以估计收敛速度,不知道跑多久能产生没有噪点的渲染结果图;不能保证每像素的收敛速度相等,通常会产生“肮脏”的结果,因此一般不用于渲染动画 == 有偏光线传播方法 - 光子映射(Photon Mapping) - 适用于渲染焦散(caustics)、Specular-Diffuse-Specular(SDS)路径 - 实现方法(两步) + Stage 1——photon tracing:光源发射光子,类似光线一样正常传播(反射、折射),打到 Diffuse 表面后停止并记录 + Stage 2——photon collection(final gathering):摄像机出发打出子路径,正常传播,打到 Diffuse 表面后停止 + Calculation——local density estimation:对于每个像素,找到它附近的 $N$ 个光子(怎么找?把光子排成加速结构如 k 近邻),计算它们的密度为 $N/A$ - 这种渲染方法,往往是模糊和噪声(bias & variance)之间的平衡:$N$ 取小则噪声大,$N$ 取大则变模糊(BTW,有偏 == 模糊;一致 == 样本接近无穷则能收敛到不模糊的结果) - 由于局部密度估计应该估计每个着色点的密度 $(di N) / (di A)$,但是实际计算的是 $(Delta N) / (Delta A)$,只有加大 $N$ 使 $Delta A$ 趋近于 $0$ 才能使估计值趋近于真实值,因此是一个有偏但一致的方法 - 此时我们也能明白为什么用固定 $N$ 计算 $A$ 的方法而不是固定 $A$,因为后者永远有偏 - 光子映射 + 双向路径追踪 (Vertex Connection and Merging, VCM) - 很复杂,但是想法很简单,依旧是提高采样效率 - 在 BDPT 的基础上,如果光源的子路径和摄像机的子路径最后交点非常接近但又不可能反射折射到对方,那么就把光源子路径认为是发射光子的路径,从而把这种情况也利用起来 - 实时辐射度算法(Instant Radiosity, IR) - 有时也叫 many-light approaches - 关键思想: 把光源照亮的点(经过 $1$ 次或多次弹射)当做一堆新的点光源(Vritual Point Light) (VPL),用它们照亮着色点。然后用普通的光线追踪算法计算 - 从相机发射光线击中的每个着色点,都连接到这些光源计算光照。对于那些 
VPL,是从真正光源发射后经过弹射形成,某种意义上也是一种双向路径追踪。宏观上看,这个方法实现了用直接光照的计算方法得到的间接光照的结果 - 优点是计算速度快,通常在漫反射场景会有很好的表现;缺点是不能处理 Glossy 材质,以及当光源离着色点特别近时会出现异常亮点(因为渲染方程中有 $1/r^2$ 项) == 非表面模型(Non-Surface Models) === 参与介质(Participating Media)或散射介质 - 类似云、雾霾等,显然不是定义在一个表面上的,而是定义在空间中的。当光线穿过,介质会吸收一定的能量,并且朝各个方向散射能量 - 定义参与介质以何种方式向外散射的函数叫相位函数(Phase Function),很像 3D 的 BRDF #fig("/public/assets/Courses/CG/img-2024-08-04-20-58-32.png") - 如何渲染:随机选择一个方向反弹(决定散射);随机选择一个行进距离(决定吸收);每个点都连到光源(感觉有点像 Whitted-Styled),但不再用渲染方程而是用新的 3D 的方程来算着色 - 事实上我们之前考虑的很多物体都不算完美的表面,只是光线进入多跟少的问题 === 毛发、纤维模型 - 考虑光线如何跟一根曲线作用 - Kajiya-Kay Model(不常用,比较简单、不真实):光线击中细小圆柱,被反射到一个圆锥形的区域中,同时还会进行镜面反射和漫反射。 - Marschner Model(计算量爆炸,但真实) - 把光线与毛发的作用分为三个部分 + R:在毛发表面反射到一个锥形区域 + TT:光线穿过毛发表面,发生折射进入内部,然后穿出再发生一个折射,形成一块锥形折射区域 + TRT:穿过第一层表面折射后,在第二层的内壁发生反射,然后再从第一层折射出去,也是一块锥形区域 - 把人的毛发认为类似于玻璃圆柱体,分为表皮(cuticle)和皮质(cortex)。皮质层对光线有不同程度的吸收,色素含量决定发色,黑发吸收多,金发吸收少 #grid2( fig("/public/assets/Courses/CG/img-2024-08-04-21-10-19.png"), fig("/public/assets/Courses/CG/img-2024-08-04-21-13-52.png") ) - 动物皮毛(Animal Fur Appearance) - 如果直接把人头发的模型套用到动物身上效果并不好 - 从生物学的角度发现,皮毛最内层还可以分出*髓质*(medulla),人头发的髓质比动物皮毛的小得多。而光线进去这种髓质更容易发生散射 - 双层圆柱模型(Double Cylinder Model):某些人(闫)在之前的毛发模型基础上多加了两种作用方式 TTs, TRTs,总共五种组成方式 #fig("/public/assets/Courses/CG/img-2024-08-04-21-25-48.png") === 颗粒状材质(Granular Material) - 由许多小颗粒组成的物体,如沙堡等 - 计算量非常大,因此并没有广泛应用 == 表面模型(Surface Models) === 半透明材质(Translucent Material) - 实际上不太应该翻译成“半透明”(semi-transparent),因为它不仅仅是半透明所对应的吸收,还有一定的散射 - *次表面散射*(Subsurface Scattering):光线从一个点进入材质,在表面的下方(内部)经过多次散射后,从其他一些点射出 - 双向次表面散射反射分布函数(BSSRDF):是对 BRDF 概念的延伸,某个点出射的 Radiance 是其他点的入射 Radiance 贡献的 #fig("/public/assets/Courses/CG/img-2024-08-04-21-35-28.png") - 计算比较复杂,因此又有一种近似的方法被提出 - Dipole Approximation:引入两个点光源来近似达到次表面散射的效果 #fig("/public/assets/Courses/CG/img-2024-08-04-21-38-45.png") === 布料材质(Cloth) - 布料有一系列缠绕的纤维组成 - 三个层级:纤维(fiber)缠绕形成股(ply),股缠绕形成线(thread),线编织形成布料(cloth) - 有时当做一个表面,忽略细节使用 BRDF 进行渲染 - 有时看做参与介质进行渲染,计算量巨大 - 有时直接把每一根纤维都进行渲染,计算量巨大 == 细节模型 - 
微表面模型中最重要的是它的法线分布(NDF),但是我们描述这个分布用的都是很简单的模型,比如正态分布之类的,真实的分布要更复杂(基本符合统计规律的同时包含一些细节,比如划痕之类) - 如果使用法线贴图来把这些起伏细节都定义出来,会非常耗时。使用路径追踪困难的点在于,微表面的镜面反射在法线分布复杂的情况下,很难建立有效的的光线通路从相机出发打到光源(反之也是一样) - 我们可以让一个像素对应一块小区域(patch),用 patch 的统计意义的法线分布来反射光线。当 patch 变得微小时,一样能够显示出细节(感觉又是速度和细度的 trade-off) #grid2( fig("/public/assets/Courses/CG/img-2024-08-05-10-44-43.png"), fig("/public/assets/Courses/CG/img-2024-08-05-10-45-02.png") ) - 另外,在深入到这么微小的尺度后,波动光学效应也变得明显。这方面的公式完全没有提到(涉及复数域上的积分等),波动光学的 BRDF 结果与几何光学类似,但由于干涉出现不连续的特点 == 程序化生成外观 - 纹理这种东西的存储是个大问题,Can we define details without textures? - 因此有一种方法是不存,把它变成一个 noise 函数(3D),什么时候要用就去动态查询,生成的噪声可能需要经过 thresholding 二值化处理 - 应用:车绣效果、程序化地形、水面、木头纹理 = Cameras, Lenses and Light Fields 相机与透镜 - *图像* = *合成* + *捕捉*(*捕捉*,比如拿个相机把真实的物体拍下来,之后用到你的*图像*里) - 一些部件 + 快门:可以控制光在一个极短的时间内进入相机 + 传感器:在曝光过程中,在传感器每个点上记录其接受到的 irradiance(没有方向信息) + 针孔相机和透镜相机(为什么要有针孔或者透镜,正因为记录的是 irradiance) - 针孔相机:没有景深,任何地方都是锐利的而不是虚化的 - 视场(Field of Vied, FOV) - 定义针孔相机的 $h$ 和 $f$,$"FOV" = 2 * arctan(0.5 * h / f)$ #fig("/public/assets/Courses/CG/img-2024-08-05-12-08-38.png") - 通常描述焦距都会换算到 $h=35"mm"$ 所对应的焦距长度 - 如果改传感器大小,涉及到传感器和胶片的关系,一般认为混淆着使用二者概念 - 曝光(Exposure) - H = T x E - T:曝光时间(time),通过快门控制多长时间光可以进入(明亮和昏暗的场景中) - E:辐照度(irradiance),感光器的单位面积上接收到的辐射通量总和,通过光圈大小(aperture)和焦距控制 - 摄影中的曝光影响因素 - 快门(Shutter speed):改变传感器每个像素吸收光的时间,快门打开时间长,拍摄运动的物体就会拖影(运动模糊),因为物体在光圈打开这段时间内可能每刻都在运动,而相机把每一刻的信息都记录下来了 - 光圈大小(Aperture size):通过开关光圈改变光圈级数(F-Number, F-Stop)。写作 $F N$ 或 $F \/ N$,其中 $N$ 就是 $F$ 数,可以简单形象的理解为光圈直径的倒数(实际上 F-Stop 的数值为 焦距与光圈直径之比,即 $f/D$),*基本上也就等同于透镜的大小*。大光圈会模糊(浅景深),小光圈更清晰。原因见后面章节 CoC 介绍 - 感光度(ISO gain):可以简单的理解成后期处理,把结果乘上一个数。在信号的角度理解,这样的操作同时将噪声放大 - 薄透镜近似(Thin Lens Approximation) - 理想的薄透镜应该有以下性质 + 任意平行光穿过透镜会聚焦在焦点处 + 任意光通过焦点射向透镜,会变为互相平行的光 + 假设薄透镜的焦距可以任意改变(用透镜组来实现) - 薄透镜公式:$1/f = 1/z_i + 1/z_o$ - Circle of Confusion(CoC):可以看出C和A成正比——光圈越大越模糊 #fig("/public/assets/Courses/CG/img-2024-08-05-13-53-49.png") - 渲染中模拟透镜(Ray Tracing Ideal Thin Lenses) - 一般光线追踪和光栅化使用的是针孔摄像机模型,但是如果想做出真实相机中的模糊效果,需要模拟薄透镜相机(而且不再需要 MVP 等) - (One 
possible setup)定义成像平面尺寸、透镜焦距 $f$、透镜尺寸(光圈影响模糊程度)、透镜与相机成像平面的距离 $z_i$,根据公式$1/f = 1/z_o + 1/z_i$,算出 focal plane 到透镜的距离 $z_o$ - 渲染 + 遍历每个感光器上的点 $x'$(视锥体近平面的每个像素),连接 $x'$ 和透镜中心,与 focal plane 相交于 $x'''$,则 $x'$ 对应的所有经过透镜的光线必然都要相交于这一点 + 在透镜平面随机采样 SPP 个点 $x''$,*以 $x''$ 作为光线的起点*。一般 SPP 不为 $1$(e.g. 50, cover the whole len) + 以 $(x'''-x'')/(|x'''-x''|)$ 得到光线方向$arrow(d)$ + 计算最近交点,最终得到 radiance,记录到 $x'$ - 好像还有简化的方法,参考 #link("https://blog.csdn.net/Motarookie/article/details/122998400#:~:text=简化实现方法")[根据我抄的笔记] - 景深(Depth of Field) - 在 focal point 附近的一段范围内的 CoC 并不大(比一个像素小或者差不多大),如果从场景中来的光经过理想透镜后落在这一段内,可以认为场景中的这段深度的成像是清晰、锐利的 #fig("/public/assets/Courses/CG/img-2024-08-05-14-27-55.png") = Color and Perception 光场、颜色与感知 - 光场(Light Field / Lumigraph) - 一个案例:人坐在屋子里,用一张画布将人眼看到的东西全部画下来。然后在人的前面摆上这个画布,以此 2D 图像替代 3D 场景以假乱真(这其实就是VR的原理) == 全光函数与光场 - 全光函数是个 $7$ 维函数,包含任意一点的位置 $(x, y, z)$、方向 (极坐标 $th, phi$)、波长$(la)$(描述颜色)、时间$(t)$ - 全光函数描述了摄像机在任何位置,往任何方向看,在任何时间上看到的不同的颜色,描述了整个空间(全息空间) - 而光场是全光函数的一小部分,描述任意一点向任意方向的光线的强度 - 光线的定义 - 一般空间中,我们用 5D 来描述:3D 位置 $(x, y, z)$ + 2D 方向 $(th, phi)$(这里似乎隐含了固定极坐标轴朝向的意思,可能默认轴对齐了) - 光场中用 4D 来描述:2D 位置 + 2D 方向 $(th, phi)$,这是怎么理解呢? 
- 黑盒(包围盒)思想与光场 - 我们是怎么看物体的?就像前面的案例一样,我们其实可以不关心物体是什么、怎么组成,当做黑盒。我只需要知道,从某个位置看某个方向过去,能看到什么。 - 用一个包围盒套住物体。从任何位置、任何方向看向物体,与包围盒有一个交点;由于光路可逆,也可以描述为:从包围盒上这个交点,向任意方向发射光线。如果我们知道包围盒(2D)上任意一点向任意方向(2D)发射光线的信息(radiance),这就是光场(个人理解:有点往 Path Tracing 里面引入纹理映射的感觉) - 再升级一步,由于两点确定一条直线:2D 位置 + 2D 方向 $->$ 2D 位置 + 2D 位置。于是,我们可以用两个平面(两个嵌套的盒子)来描述光场 - 双平面参数化后的两种视角,物体在 st 面的右侧。图 a 从 uv 面看 st,描述了从不同位置能看到什么样的物体;图 b 从 st 面看 uv,描述了对物体上的同一个点,从不同方向看到的样子(神经辐射场理解方式:每个像素存的是 irradiance ,遍历 uv 面所有点就是把 irradiance 展开成 radiance) #fig("/public/assets/Courses/CG/img-2024-08-05-15-37-39.png") - 双平面参数化后在实现上也变得更好理解,直接用一排摄像机组成一个平面就好 == 光场照相机 - Lytro 相机,原理就是光场。它最重要的功能:先拍照,后期动态调节聚焦 - 原理(事实上,昆虫的复眼大概就是这个原理): - 一般的摄像机传感器的位置在下图那一排透镜所在的平面上,每个透镜就是一个像素,记录场景的 irradiance。现在,光场摄像机将传感器后移一段距离,原本位置一个像素用透镜替换,然后光穿过透镜后落在新的传感器上,击中一堆像素,这一堆像素记录不同方向的 radiance - 从透镜所在平面往左看,不同的透镜对应不同的拍摄位置,每个透镜又记录了来自不同方向的 radiance。总而言之,原本一个像素记录的 irradiance,通过替换为透镜的方法,拆开成不同方向的 radiance 用多个“像素”存储 #fig("/public/assets/Courses/CG/img-2024-08-05-17-51-25.png") - 变焦:对于如何实现后期变焦比较复杂,但思想很简单,首先我已经得到了整个光场,只需算出应该对每个像素查询哪条“像素”对应光线,也可能对不同像素查询不同光线 - 不足之处:分辨率不足,原本 $1$ 个像素记录的信息,需要可能 $100$ 个像素来存储;高成本,为了达到普通相机的分辨率,需要更大的胶片,并且仪器造价高,设计复杂 == 颜色的物理、生物基础 - 光谱:光的颜色 $approx$ 波长,不同波长的光分布为光谱,图形学主要关注可见光光谱 - 光谱功率分布(Spectral Power Distribution, SPD) - 自然界中不同的光对应不同的 SPD - SPD 有线性性质 - 从生物上,颜色是并不是光的普遍属性,而是人对光的感知。不同波长的光 $!=$ 颜色 - 人眼跟相机类似,瞳孔对应光圈,晶状体对应透镜,视网膜则是传感器(感光元件) - 视网膜感光细胞:视杆细胞(Rods)、视锥细胞(Cones) - Rods 用来感知光的强度,可以得到灰度图 - Cones 相对少很多,用来感知颜色,它又被分为 $3$ 类(S-Cone, M-Cone, L-Cone),SML 三类细胞对光的波长敏感度(回应度)不同 - 事实上,不同的人这三种细胞的比例和数量呈现很大的差异(也就是颜色在不同人眼中是不一样的,只是定义统一成一样) - 人看到的不是光谱,而是两种曲线积分后得到 SML 再叠加的结果。那么一定存在一种现象:两种光,对应的光谱不同,但是积分出来的结果是一样的,即同色异谱(Metamerism);事实上,还有同谱异色 == 色彩复制 / 匹配 - 计算机中的成色系统成为 Additive Color(加色系统) - 所谓加色法,是指 RGB 三原色按不同比例相加而混合出其他色彩的一种方法 - 而自然界采用减色法,因此许多颜色混合最后会变成黑色而不是计算机中的白色 - CIE sRGB 颜色匹配 - 利用 RGB 三原色匹配单波长光,SPD 表现为集中在一个波长上(如前所述,有其它 SPD 也能体现出同样的颜色,但选择最简单的) - 然后,给定任意波长的*单波长光*(目标测试光),我们可以测出它需要上述 RGB 的匹配(可能为负,意思是加色系统匹配不出来,但可以把目标也加个色),得到*匹配曲线* 
#fig("/public/assets/Courses/CG/img-2024-08-05-18-32-22.png") - 然后对于自然界中并非单波长光的任意 SPD,我们可以把它分解成一系列单波长光,然后分别匹配并加权求和,也就是做积分 == 颜色空间 - Standardized RGB(sRGB):多用于各种成像设备,上面介绍的就是 sRGB。色域有限(大概为 CIE XYZ 的一半)。 - CIE XYZ - 这种颜色空间的匹配函数,对比之前的sRBG,没有负数段 - 匹配函数不是实验测出,而是人为定义的 - 绿色 $y$ 的分布较为对称,用这三条匹配函数组合出来的 $Y$(类比之前的 $G$) 可以一定程度上表示亮度 - HSV - 基于感知而非 SPD 的色彩空间,对美工友好 - H 色调(Hue):描述颜色的基本属性,如红、绿、蓝等 - S 饱和度(Saturation):描述颜色的纯度,越不纯越接近白色 - V 亮度(Value) or L(Light):描述颜色的明暗程度 - CIE LAB - 也是基于感知的颜色空间 - L 轴是亮度(白黑),A 轴是红绿,B 轴是黄蓝 - 轴的两端是互补色,这是通过实验得到的,可以用视觉暂留效果验证 - 减色系统:CMYK - 蓝绿色(Cyan)、品红色(Magenta)、黄色(Yellow)、黑色(Key) - CMY 本身就能表示 K,加入 K 是经济上的考量(颜料生产成本和需求) = Animation 动画与模拟 #info()[ + 基本概念、质点弹簧系统、运动学 + 求解常微分方程,刚体与流体 ] == 基本概念 - 动画历史 - 关键帧动画(Keyframe Animation) - 关键位置画出来,中间位置用线性插值或 splines 平滑过渡 - 物理模拟(Physical Simulation) - 核心思想就是真的构建物理模型,分析受力,从而算出某时刻的加速度、速度、位置 - 物理仿真和渲染是分开的两件事 == 质点弹簧系统 - 质点弹簧系统(Mass Spring System) - $f_(a->b)=k_s (b-a)/norm(b-a)(norm(b-a)-l)$,存在的问题,震荡永远持续 - 如果简单的引入阻尼(damping):$f=-k_d dot(b)$,问题在于它会减慢一切运动(而不只是弹簧内部的震荡运动) - 引入弹簧内部阻尼:$f_b=-k_d underbrace((b-a)/norm(b-a) dot (dot(b)-dot(a)), "相对速度在弹簧方向投影") dot underbrace((b-a)/norm(b-a), "重新表征方向")$ - 用弹簧结构模拟布料 #grid( columns: (.3fr, 1fr) * 2, row-gutter: 6pt, fig("/public/assets/Courses/CG/img-2024-08-06-13-54-14.png", width: 3em),[1. 不能模拟布料,因为它不具备布的特性(不能抵抗切力、不能抵抗对折力)], fig("/public/assets/Courses/CG/img-2024-08-06-13-54-32.png", width: 4em),[2. 改进了一点,虽然能抵抗图示对角线的切力,但是存在各向异性。另外依然不能抵抗折叠], fig("/public/assets/Courses/CG/img-2024-08-06-13-54-45.png", width: 4em),[3. 可以抵抗切力,有各向同性,不抗对折], fig("/public/assets/Courses/CG/img-2024-08-06-13-54-52.png", width: 4em),[4. 
红色 skip connection 比较小,仅起辅助作用。现在可以比较好的模拟布料], ) - Aside: FEM(Finite Element Method) instead of Springs 也能很好地模拟这些问题 - 粒子系统(Particle Systems) - 建模一堆微小粒子,定义每个粒子会受到的力(粒子之间的力、来自外部的力、碰撞等),在游戏和图形学中非常流行,很好理解、实现 - 实现算法,对动画的每一帧: + 创建新的粒子(如果需要) + 计算每个粒子的受力 + 根据受力更新每个粒子的位置和速度 + 结束某些粒子生命(如果需要) + 渲染 - 应用:粒子效果、流体模拟、兽群模拟 == 运动学 - 正向运动学(Forward Kinematics) - 以骨骼动画为例,涉及拓扑结构(Topology: what’s connected to what)、关节相互的几何联系(Geometric relations from joints)、树状结构(Tree structure: in absence of loops) - 关节类型 + 滑车关节(Pin):允许平面内旋转 + 球窝关节(Ball):允许一部分空间内旋转 + 导轨关节(Prismatic joint):允许平移 - 正向运动学就是——给定关节的角度与位移,求出尖端的位置 - 控制方便、实现直接,但不适合美工创作动画 - 逆运动学(Inverse Kinematics) - 通过控制尖端位置,反算出应该旋转多少 - 有多解、无解的情况,是典型的最优化问题,用优化方法求解,比如梯度下降 - 动画绑定(Rigging) - rigging 是一种对角色更高层次的控制,允许更快速且直观的调整姿势、表情等。皮影戏就有点这个味道,但是提线木偶对表情、动作的控制更贴切一些 - 在角色身体、脸部等位置创造一系列控制点,美工通过调整控制点的位置,带动脸部其他从点移动,从而实现表情变化,动作变化等 - Blend Shapes:直接在两个不同关键帧之间做插值,注意是对其表面的控制点做插值 - 动作捕捉(Motion capture) - 在真人身上放置许多控制点,在不同时刻对人进行拍照,记录控制点的位置,同步到对应的虚拟人物上 == 求解常微分方程 - 单粒子模拟(Single Particle Simulation) - 之前讲的多粒子系统只是宏观上的描述,现在我们对单个粒子进行具体方法描述,这样才能扩展到多粒子 - 假设粒子的运动由*速度矢量场*决定,速度场是关于位置和时间的函数(定义质点在任何时刻在场中任何位置的速度):$v(x, t)$,从而可以解常微分方程来得到粒子的位置 - 怎么解?使用欧拉方法(a.k.a 前向欧拉或显示欧拉) - 欧拉方法 - 简单迭代方法,用上一时刻的信息推导这一时刻的信息 $x^(t+Delta t)=x^t + Delta t dot(x)^t$ - 误差与不稳定性:用更小的 $Delta t$ 可以减小误差,但无法解决不稳定性(比如不管采用多小的步长,圆形速度场中的粒子最终都会飞出去,本质上是误差的阶数不够导致不断累计) - 定义稳定性:局部截断误差(local truncation error)——每一步的误差,全局累积误差(total accumulated error)——总的累积误差。但真正重要的是步长 $h$ 跟误差的关系(阶数) - 对抗误差和不稳定性的方法 - 中点法(or Modified Euler):质点在时刻 $t$ 位置 $a$ 经过 $De t$ 来到位置 $b$,取 $a b$ 中点 $c$ 的速度矢量回到 $a$ 重新计算到达位置 $d$ - 每一步都进行了两次欧拉方法,公式推导后可以看作是加入了二次项 - 自适应步长(Adaptive Step Size):先用步长 $T$ 做一次欧拉计算 $X_T$,再用步长 $T/2$ 做两次欧拉得到 $X_T/2$,比较两次位置误差 $"error" = norm(X_T - X_T/2)$,如果 error > threshold,就减少步长,重复上面步骤 - 隐式欧拉方法(Implicit Euler Method):用下一个时刻的速度和加速度来计算下一个时刻的位置和速度,但事实上并不知道下一时刻的速度和加速度,因此需要解方程组。 - 局部误差为 $O(h)$,全局误差为 $O(h^2)$ - 龙格库塔方法(Runge-Kutta Families):求解一阶微分方程的一系列方法,特别擅长处理非线性问题,其中最常用的是一种能达到 $4$ 阶的方法,也叫做 RK4 - 初始化 $(di y)/(di t)=f(t,y), ~~ 
y(t_0)=y_0$ - 求解方法(下一时刻等于当前值加上步长乘以四个斜率的加权平均):$t_(n+1)=t_n+h, ~~ y_(n+1)=y_n+1/6 h(k_1+2k_2+2k_3+k_4)$ - 其中 $k_1 \~ k_4$ 为:$k_1=f(t_n, y_n), ~~ k_2=f(t_n+h/2, y_n+h/2 k_1), ~~ k_3=f(t_n+h/2, y_n+h/2 k_2), ~~ k_4=f(t_n+h, y_n+h k_3)$,具体推导为什么是四阶就略过(可以参考《数值分析》) - 非物理的方法 - 基于位置的方法(Position-Based)、Verlet 积分等方法 - Idea:使用受限制的位置来更新速度,可以想象成一根劲度系数无限大的弹簧 - 优点是快速而且简单;缺点是不基于物理,不能保证能量守恒 == 刚体与流体 - 刚体:不会发生形变,且内部所有粒子以相同方式运动 - 刚体的模拟中会考虑更多的属性 $ di/(di t) vec(X, th, dot(X), omega) = vec(dot(X), omega, F/M, Gamma/I) $ - 有了这些属性就可以用欧拉方法或更稳定的方法求解 - 流体,使用基于位置的方法(Position-Based Method) - 前面已经说过流体可以用粒子系统模拟,然后我们用基于位置的方法求解 - 主要思想:水是由一个个刚体小球组成的;水不能被压缩,即任意时刻密度相同;任何一个时刻,某个位置的密度发生变化,就必须通过移动小球的位置进行密度修正;需要知道任何一个位置的密度梯度(小球位置的变化对其周围密度的影响),用机器学习的梯度下降优化;这样简单的模拟最后会一直运动停不下来,我们可以人为地加入一些能量损失 - 模拟大量物体运动的两种思路: - 拉格朗日法(质点法):以每个粒子为单位进行模拟 - 欧拉法(网格法):以网格为单位进行分割模拟(跟前面解常微分方程不是一回事) - 混合法(Material Point Method, MPM):粒子将属性传递给网格,模拟的过程在网格里做,然后把结果插值回粒子
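上面的显式欧拉与 RK4 的精度差异可以用一个极简的数值实验对比(以下为示意性 Python 代码,测试方程 $dot(y)=y$ 与步长取值均为笔者自行假设,并非课程内容):

```python
import math

# 显式欧拉:y_{n+1} = y_n + h * f(t_n, y_n)
def euler_step(f, t, y, h):
    return y + h * f(t, y)

# RK4:按照上面 k1~k4 的公式做加权平均
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# 用固定步长从 t0 积分到 t1
def integrate(step, f, t0, y0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

# 测试方程 dy/dt = y,精确解 y(1) = e
f = lambda t, y: y
err_euler = abs(integrate(euler_step, f, 0.0, 1.0, 1.0, 100) - math.e)
err_rk4 = abs(integrate(rk4_step, f, 0.0, 1.0, 1.0, 100) - math.e)
```

相同步长下,显式欧拉的全局误差为 $O(h)$,RK4 为 $O(h^4)$,因此两者误差相差多个数量级。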
https://github.com/HarryLuoo/sp24
https://raw.githubusercontent.com/HarryLuoo/sp24/main/math321/briefAppliedAnalysis.typ
typst
#set page("us-letter") #set heading(numbering: "I.1") #set page(margin: (x: 3cm, y: 1cm)) #set text(12pt) #show math.equation: set text(13pt) #text(font: "Cambria",size: 14pt,weight: "black")[Awesome applied analysis\ Notes on MATH 321] <NAME> The course contents could be better had it been Fabien's class, but probably Trinh saved my GPA. #line(length:100%, stroke:(thickness: 2pt)) #outline(indent: auto,) #pagebreak() = Vector algebra == Coordinate Transformation === cylindrical $ x = rho cos phi\ y = rho sin phi\ z = z $ reverse $ rho = sqrt(x^2 + y^2)\ cos phi = x/rho\ sin phi = y/rho $ === spherical $ x= rho sin phi cos theta\ y= rho sin phi sin theta\ z= rho cos phi $ reverse $ rho = sqrt(x^2 + y^2 + z^2)\ cos phi = z/rho\ cos theta = x/r\ sin theta = y/r $ == Dot product - commutative - positive definite - distributive - Cauchy-Schwarz inequality == cross product - anticommutative $arrow(u)times arrow(v)= -(arrow(v) times arrow(u))$ - distributive $arrow(u) times (arrow(v)+ arrow(w ))= arrow(u) times arrow(v) + arrow(u) times arrow(w)$ - scalar multiplication - triple scalar product $arrow(u) dot (arrow(v) times arrow(w)) = (arrow(u)times arrow(v)) dot arrow(w) $ - triple vector product $arrow(a) times( arrow(b) times arrow(c))=(arrow(b)dot arrow(a))arrow(c)- (arrow(c)dot arrow(a))arrow(b)$ == Projection The projection of $arrow(a)$ onto $arrow(b)$ is given by $ #rect(inset: 8pt)[ $ display( (arrow(a) dot arrow(b))/(norm(arrow(b))^2) med arrow(b) = (arrow(a) dot hat(b)) hat(b) )$ ] $ #line(length: 100%) = Vector calculus == Arc length - Def: Given a curve $arrow(r)(u)=(x(u) , y(u), z(u))$ for $a<=u<=b$, the length of the curve $S$, as a function of $t$, is given by #rect(inset: 8pt)[ $ display( S(t) = integral_(a)^(t) norm(dot(r)(u)) dif u) \ "where" norm(dot(r)(u)) = sqrt(((dif x )/(dif u))^2 + ( (dif y)/(dif u) )^2 + ( (dif z)/(dif u)) ^2) $ ] - Curvature: $ K(t) = (norm(dot(T)(t)) )/ (norm(dot(r)(t)))=norm( (dot(r)(t) times dot.double(r)(t) ) ) / (norm(dot(r)(t)))^3 ,
"where" T(t) =( dot(r)(t)) / norm(dot(r)(t)) $ == Line integration - for curve $arrow(r)(t) =(x(t),y(t))$ $ #rect(inset: 8pt)[ $ display( integral_(a)^(b) f(x,y) dif s = integral_(a)^b f[x(t),y(t)] sqrt(((dif x)/(dif t))^2+ ( (dif y)/(dif t))^2) dif t)$ ] $ - center of mass $(overline(x), overline(y), overline(z))$, where $ cases( overline(x) = (1/M) integral_(C) rho(x,y,z) x d s, overline(y) = (1/M) integral_(C) y rho(x,y,z) d s, overline(z) = (1/M) integral_(C) z rho (x,y,z) d s) $ - Work done by force F along curve, $arrow(r)(t) $ , which can be generalized into the formula for line integration, $ W = integral_(C) F dot dif arrow(r) = integral_(C) arrow(F) dot arrow(T) dif s = #rect(inset: 8pt)[ $ display(integral_(a)^(b) F[x(t),y(t)] dot (dot(r)(t)) dif t )$ ] $ - When vector field $arrow(F)=arrow(F)(x,y,z)=(P,Q,R)$, $ integral_(C) arrow(F) dot dif arrow(r) = integral_(C) P d x + Q d y + R d z $ == Surface integration - Parametric representation of surface: $ cases(x = x(u,v) , y = y(u,v) , z = z(u,v) ) $ - Use normal vector at a point $(u_0,v_0)$ of surface to represent tangent plane. 
$ arrow(r_v) = (partial arrow(r))/ (partial v)(u_0,v_0), arrow(r_u) = (partial arrow(r))/ (partial u)(u_0,v_0) \ arrow(N) = arrow(r_u) times arrow(r_v) $ - Surface area of a surface S with $(u,v) in D$ $ A(S) = integral.double_(D) norm(arrow(r_u) times arrow(r_v)) dif u dif v $ == Jacobian - Def: Given a transformation $(u,v) in D --> [x(u,v) , y(u,v) ] in S$, the Jacobian is given by $ #rect(inset: 8pt)[ $ display( J(u,v)=(diff (x,y))/(diff (u,v)) equiv det mat( (diff x)/(diff u) , (diff x)/(diff v) ; (diff y)/(diff u) , (diff y)/(diff v) ) )$ ] $ - Jacobian in coordinate transformation Upon evaluating an integral, we can change the coordinates of the integral from ${x,y} -> {u,v}$ by parametrizing the variables: $ x = x(u,v) quad y= y(u,v) $ Then the integral becomes $ integral.double_(S) f(x,y) dif A = integral.double_(D) f(x(u,v),y(u,v)) |J(u,v)| dif u dif v $ == Gradient - Nabla operation: $ #rect(inset: 8pt)[ $ display( nabla = (partial)/(partial x) hat(x) + (partial)/(partial y) hat(y) + (partial)/(partial z) hat(z)) $ ] $ - Gradient in Cartesian: scalar field $f = f(x,y,z)$ $ nabla f = ( (diff f)/(diff x) , (diff f)/(diff y) , (diff f)/(diff z) ) $ - Gradient in polar coordinates $f = f(r,theta)$ $ nabla f = arrow(e_r) (diff f)/(diff r) +arrow(e_theta) 1/r (diff f)/(diff theta)\ "where" arrow(e_r) = (x)/(norm(x)) = (cos theta, sin theta) arrow(e_theta) = (-sin theta, cos theta)\ nabla = arrow(e_r) partial_r + arrow(e_theta) 1/r partial_theta $ - Gradient in spherical $ nabla = hat(rho) partial_rho + hat(phi) 1/rho partial_phi + hat(theta) 1/(rho sin phi) partial_theta $ - Gradient of scalar field in spherical coordinates $ #image("scalar_field_spherical.png") $ == Divergence - div of vec field: 3D: $ nabla dot arrow(F) = (diff F_1)/(diff x ) + (diff F_2)/(diff y) + (diff F_3)/(diff z) $ - Div in polar 2D $ arrow(U) = U_r hat(r) + U_theta hat(theta), "where" U_r = arrow(U) dot hat(r), U_theta = arrow(U) dot hat(theta)\ nabla dot arrow(U) = (1/r) (partial (r U_r))/(partial r) + 
1/r (partial U_theta)/(partial theta) $ - Div in spherical coord $ arrow(U)=U_rho hat(rho) + U_theta hat(theta) + U_phi hat(phi), \ nabla dot arrow(U) = 1/rho^2 (partial (rho^2 U_rho))/(partial rho) + 1/(rho sin phi) (partial (U_phi sin phi))/(partial phi) + 1/(rho sin phi) (partial U_theta)/(partial theta) $ == Green's theorem For $P(x,y), Q(x,y)$, and a simple closed curve C, $ #rect(inset: 8pt)[ $ display( integral_(C) P d x + Q d y = integral.double_(D) ((partial Q)/(partial x) - (partial P)/(partial y)) d A=integral_(C) arrow(F) dot dif arrow(r) )$ ] $ where $arrow(F) = (P,Q)$. == Flux - for a surface, $ arrow(r)(u,v) = (x(u,v), y(u,v), z(u,v))\ => integral.double_(S) arrow(F) dot dif arrow(S) = integral.double_(S) arrow(F) dot hat(n) dif S = integral.double_(D) arrow(F)(arrow(r)(u,v)) dot (arrow(r_u) times arrow(r_v)) dif A $ - if the surface is a graph of a function $z=g(x,y) $ where $ (x,y) in D, arrow(F) = (P,Q,R)$, then $ integral.double_(S) arrow(F) dot dif arrow(S) = integral.double_(D) (P,Q,R) dot (-partial_x g, -partial_y g, 1) dif A $ == Stokes' theorem Let $F: R^3 -> R^3$ be a vector field, and let $S$ be a surface with unit normal $hat(n)$, boundary curve $C$, and projection $A$ onto the ${u,v}$ parameter plane, then $ #rect(inset: 8pt)[ $ display( integral_(C) arrow(F) dot dif arrow(r) = integral.double_(S) "curl"(arrow(F)) dot hat(n) dif S = integral.double_(S) ( nabla times arrow(F)) dot arrow(n) dif A)$ ] ,\ "where" "curl"(arrow(F)) = nabla times arrow(F) $ - Discussion on Stokes' theorem: for a surface with tangent vectors $arrow(r)_(u), arrow(r)_(v)$, we have $ dif arrow(S) = hat(n) dif S =arrow(n) dif A = arrow(n) dif u dif v $ Therefore, when using Stokes' theorem, we can either turn it into a surface integral with respect to the actual surface $S$ using $hat(n) dif S$, or into a double integral over the parameter domain using $arrow(n) dif u dif v$. #line(length: 100%) = Complex analysis == Complex numbers and basic operations === Definitions - Def: $i^2 = -1$ - Complex number: $z = x + i y$ - Conjugate: $z^* = x - i y$ - Real part: $Re(z) = x$, Imaginary part: $Im(z) = y$ - Modulus/ Norm/
Magnitude: $|z| = sqrt(x^2 + y^2)$ - Polar form: $z = |z| (cos theta + i sin theta) = r e ^(i theta) $ - Argument (angle): $arg(z) = theta$ such that $z = |z| (cos theta + i sin theta)$. Angle between vector $(x,y)$ with real axis === operations - addition: $z_1+z_2 = (x_1+x_2) + i(y_1+y_2)$ - multiplication: $z_1 z_2 = (x_1 x_2 - y_1 y_2) + i(x_1 y_2 + x_2 y_1)$ \ (normal multiplication with $i^2=-1$ ) - Division:$ z_1/z_2 = (z_1 z_2^*)/(z_2 z_2^*) =(x_1 x_2 + y_1 y_2)/(x_2^2 + y_2^2) + i (x_2 y_1 - x_1 y_2)/(x_2^2 + y_2^2) $ - Commutativity: $z_1 z_2 = z_2 z_1 quad z_1+z_2 = z_2 + z_1$ - associativity: $(z_1 z_2) z_3 = z_1 (z_2 z_3) quad (z_1+z_2)+z_3=z_1+(z_2+z_3)$ - distributivity: $z_1(z_2+z_3) = z_1 z_2 + z_1 z_3$ - Triangle inequality: $|z_1+z_2| <= |z_1| + |z_2|$ == Differentiation === open sets in $bb(C)$ - Def: Let $z_0 in bb(C), r >0$. Disk $B_r(z_0) = {z in bb(C)| abs(z-z_0)<r}$ It is very important to note that it's not "less or equal" Given a set $Omega subset bb(C)$, a point $z_0 in Omega$ is called an interior point of $Omega$ if there exists $r>0$ s.t. $B_r(z_0) subset Omega$. \ A set $Omega$ is *open* if every point of $Omega$ is an interior point of $Omega$. In other words, there are no points on the boundary of $Omega$ that are included in $Omega$. === Holomorphic function Let $Omega$ be an open set in $bb(C)$. A function $f: Omega -> bb(C)$ is called *holomorphic* at $z_0 in Omega$ if the limit $ f'(z_0) = lim_(h -> 0) (f(z_0+h) - f(z_0))/h (h in bb(C), h eq.not 0) $ exists. - The said function $f(z)$ is holomorphic on $Omega$ if it is holomorphic at every point of $Omega$. - In the special case that $f$ is holomorphic on $bb(C)$, $f$ is an *entire* function. - Holomorphic to first order guarantees holomorphic and analytic to any order, and thus continuous.
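As a quick numerical sanity check of this limit definition (a Python sketch of my own, not part of the course; the test point $z_0 = 1 + 2i$ and step $h = 10^(-6)$ are arbitrary choices): the difference quotient of the holomorphic $f(z) = z^2$ agrees along the real and imaginary directions, while $f(z) = z^*$ does not.

```python
def diff_quotient(f, z0, h):
    # complex difference quotient (f(z0 + h) - f(z0)) / h
    return (f(z0 + h) - f(z0)) / h

z0 = 1 + 2j
h = 1e-6

# f(z) = z^2 is holomorphic: both directions approach f'(z0) = 2*z0
d_real = diff_quotient(lambda z: z * z, z0, h)       # h along the real axis
d_imag = diff_quotient(lambda z: z * z, z0, 1j * h)  # h along the imaginary axis

# f(z) = conj(z) is not holomorphic: the two directions disagree (1 vs. -1)
c_real = diff_quotient(lambda z: z.conjugate(), z0, h)
c_imag = diff_quotient(lambda z: z.conjugate(), z0, 1j * h)
```

The direction-dependence of the conjugate's quotient is exactly the failure of the Cauchy-Riemann equations introduced below.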
=== Differentiation operations If $f$ and $g$ are holomorphic on $Omega$, then - $f+g$ is holomorphic on $Omega$, $ (f+g)' = f' + g' $ - $f g$ is analytic on $Omega$, $ (f g)' = f'g + f g' $ - $f/g$ is analytic where $g(z) eq.not 0$, and $ (f/g)' = (f'g - f g')/g^2 $ === Cauchy-Riemann equations for complex function $f: Omega -> bb(C) , f(z) = u(x,y)+i v(x,y)$ that is holomorphic at $z_0 = x_0 + i y_0$, then the partial derivatives of $u$ and $v$ exist and satisfy the Cauchy-Riemann equations: $ #rect(inset: 8pt)[ $ display( partial_x u = partial_y v quad partial_y u = -partial_x v )$ ] $ Conversely, if $u$ and $v$ are continuously differentiable on an open set $Omega$ and satisfy the Cauchy-Riemann equations, then $f(z) = u(x,y) + i v(x,y)$ is holomorphic on $Omega$. In the language of logic, let C be "satisfying cauchy-riemann equations", and H be "function is holomorphic", then $H -> C$. If D is "u and v have continuous partial derivatives with respect to x and y", then $(C "&" D) <-> H$ == Cauchy's integral theorem (closed loop) For a closed curve $C$ in an open set $Omega$ and a holomorphic function $f: Omega -> bb(C)$, we have $ integral.cont_(C) f(z) dif z = 0 $ == Fundamental theorem of calculus for complex analysis If $f$ is holomorphic on an open set $Omega$ and $a,b in Omega$, and for $f(z) = F'(z)$ , and $a,b$ are the start and end points of curve $C$, we have $ #rect(inset: 8pt)[ $ display( integral_(C) f(z) dif z = F(b) - F(a))$ ] $ == Cauchy's integral formula This relates a contour integral around a curve to the derivatives of $f$ inside it. $ #rect(inset: 8pt)[ $ display( f^(n) (z_0) = (n!)/(2 pi i) integral_(C) (f(z))/((z - z_0)^(n + 1) ) dif z )$ ] $ Oftentimes, we are concerned with finding the value of an integral of the form $ integral_(C) (f(z))/((z-z_0)^(n+1) ) dif z , $ so we would like to take the $n$th derivative of the function $f(z)$ at $z_0$, and find the desired integral by $ (2 pi i)/(n!)
f^(n) (z_0) $ == Cauchy's residue theorem === Poles Simply find where the fraction is not defined, i.e. where the denominator is 0. This is normally done by first using $(a^2 + z^2) = ( z + a i) (z - a i)$ to factor the denominator, and then setting the denominator to 0 to find poles $z_i $ . === Residue If the factored denominator has the form $(z + a i) (z + b i)$, then it has two poles of order 1. If it has the form $(z + a i)^2 (z + b i )^2$, then it has 2 poles of order 2. If it has poles of order one, for each pole $z_i$ , find the residue by $ "Res"(f,z_i) = lim_(z -> z_i) (z - z_i) f(z) $ If it has a pole of order $n$, for each pole $z_0$, find the residue by $ "Res"(f,z_0) = lim_(z -> z_0) (1)/((n-1)!) (d/(d z))^(n-1) ((z-z_0)^n f(z)) $ === Cauchy's residue theorem For a simple closed curve $C$ in an open set $Omega$ and a function $f$ holomorphic on $Omega$ except at isolated poles, we have $ integral.cont_(C) f(z) dif z = 2 pi i sum_(k=1)^(n) "Res"(f,z_k) $ where $z_k$ are the poles of $f$ inside $C$. Oftentimes, we want to find the value of the integral $ integral_(-infinity)^infinity f(x) dif x $ which we cannot easily solve in the real domain. Cauchy suggests that we can take a detour via the complex domain by using the substitution $f(z) = f(x)$ where $z in bb(C)$. By the residue theorem we have $ integral.cont_(C) f(z) dif z = integral_(-R)^(R) f(z) dif z + integral_(gamma) f(z) dif z = 2 pi i sum_(k=1)^(n) "Res"(f,z_k) $ Normally, this looks like #image("assets/2024-04-30-23-11-37.png", width: 50%) where $gamma$ is the semicircle in the complex domain. We thus get $ integral_(-R)^(R) f(z) dif z = 2 pi i sum_(k=1)^(n) "Res"(f,z_k) - underbrace(integral_(gamma) f(z) dif z , *) $ we notice that $(*) <= "max"_(abs(z) = R) abs(f(z)) dot "length of" gamma = "max"_(abs(z) = R) abs(f(z)) dot pi R =^(R-> infinity) 0$, provided $abs(f)$ decays faster than $1\/R$. In English this means $(*)$ is smaller than the product of the maximal value of $abs(f(z))$ on the semicircle and the length of the semicircle, which goes to 0 as $R$ goes to infinity.
Thus, letting $R -> infinity$, the above integral becomes $ integral_(-infinity)^(infinity) f(x) dif x = 2 pi i sum_(k=1)^(n) "Res"(f,z_k) $
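This whole recipe can be verified numerically. Below is a Python sketch (my own example, not from the course): for $f(x) = 1/(1+x^2)$, the only pole in the upper half-plane is $z = i$ with residue $1/(2i)$, so the residue theorem predicts the integral equals $2 pi i dot 1/(2i) = pi$, and a brute-force quadrature on a large interval agrees.

```python
import math

# f has simple poles at z = +-i; only z = i lies in the upper half-plane.
# Res(f, i) = 1/(2i), so the residue theorem predicts 2*pi*i * 1/(2i) = pi.
predicted = (2 * math.pi * 1j * (1 / 2j)).real

# Crude trapezoid approximation of the real integral on [-R, R].
R, n = 100.0, 200_000
h = 2 * R / n
total = 0.0
prev = 1 / (1 + R * R)  # value at x = -R (f is even)
for i in range(1, n + 1):
    x = -R + i * h
    cur = 1 / (1 + x * x)
    total += (prev + cur) * h / 2
    prev = cur
```

The small residual gap between the two values is the truncated tail beyond $[-R, R]$, which is about $2\/R$ here.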
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/text/quote_02.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page // Spacing with other blocks #set quote(block: true) #set text(8pt) #lorem(10) #quote(lorem(10)) #lorem(10)
https://github.com/fenjalien/metro
https://raw.githubusercontent.com/fenjalien/metro/main/tests/num/zero-decimal-as-symbol/test.typ
typst
Apache License 2.0
#import "/src/lib.typ": num, metro-setup #set page(width: auto, height: auto) #num[123.00] #metro-setup(zero-decimal-as-symbol: true) #num[123.00] #num(zero-symbol: sym.dash.wave)[123.00]
https://github.com/daniel-eder/typst-template-jku
https://raw.githubusercontent.com/daniel-eder/typst-template-jku/main/src/template/template.typ
typst
// SPDX-FileCopyrightText: 2023 <NAME> // // SPDX-License-Identifier: Apache-2.0 #import "./styles/default.typ": default as default_style #import "./styles/content.typ": content as content_style #import "./pages/title.typ": title as title_page #import "./pages/abstract.typ": abstract as abstract_page #import "./pages/dedication.typ": dedication as dedication_page #import "./pages/acknowledgements.typ": acknowledgements as acknowledgements_page #import "./pages/toc.typ": toc as toc_page #import "./definitions/thesis_types.typ": thesis_types #import "./definitions/programme_types.typ": programme_types #import "./definitions/programmes.typ": programmes #let project( title: "<Title>", subtitle: none, //optional author: "<Firstname> <Lastname>", department: "<The Department>", first_supervisor: "<Professor's Name>", second_supervisor: none, //optional assistant_supervisor: none, //optional submission_date: "<Month> <Year>", //For Information: Year of submission to Examination and Recognition Services copyright_year: none, thesis_type: thesis_types.doctorate, degree: "<Degree>", programme_type: programme_types.doctorate, programme: programmes.law, abstract: none, //optional acknowledgements: none, //optional dedication: none, //optional body ) = { set document(author: author, title: title) //set default style that is applied to all pages show: default_style //render title page title_page(title, subtitle, author, department, first_supervisor, second_supervisor, assistant_supervisor, submission_date, copyright_year, thesis_type, degree, programme_type, programme) //the "meta" = non content pages will use a separate roman numeral counter. 
//set to 0, because each page calls #set page with the numbering style, which causes an increment counter(page).update(0) //render abstract if(abstract != none){ abstract_page(abstract) } //render acknowledgements if(acknowledgements != none){ acknowledgements_page(acknowledgements) } //render dedication if(dedication != none){ dedication_page(dedication) } //render table of contents toc_page() //now set style for content pages show: content_style //for the content pages start at one, because we already set numbering above and the first body page will use the last counter value counter(page).update(1) body }
https://github.com/ClazyChen/Table-Tennis-Rankings
https://raw.githubusercontent.com/ClazyChen/Table-Tennis-Rankings/main/history/2013/MS-07.typ
typst
#set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (1 - 32)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [1], [MA Long], [CHN], [3420], [2], [ZHANG Jike], [CHN], [3373], [3], [XU Xin], [CHN], [3291], [4], [WANG Hao], [CHN], [3283], [5], [BOLL Timo], [GER], [3141], [6], [YAN An], [CHN], [3130], [7], [#text(gray, "WANG Liqin")], [CHN], [3099], [8], [HAO Shuai], [CHN], [3029], [9], [<NAME>], [BLR], [3013], [10], [OVTCHAROV Dimitrij], [GER], [3009], [11], [#text(gray, "MA Lin")], [CHN], [3004], [12], [CHEN Qi], [CHN], [2997], [13], [CHUANG Chih-Yuan], [TPE], [2966], [14], [NIWA Koki], [JPN], [2951], [15], [BAUM Patrick], [GER], [2945], [16], [#text(gray, "RY<NAME>ungmin")], [KOR], [2925], [17], [MIZUTANI Jun], [JPN], [2912], [18], [SHIONO Masato], [JPN], [2897], [19], [GAO Ning], [SGP], [2892], [20], [MATSUDAIRA Kenta], [JPN], [2890], [21], [MAZE Michael], [DEN], [2889], [22], [<NAME>], [IRI], [2874], [23], [JOO Saehyuk], [KOR], [2862], [24], [<NAME>], [TPE], [2857], [25], [FANG Bo], [CHN], [2856], [26], [TAN Ruiwu], [CRO], [2847], [27], [<NAME>], [ROU], [2845], [28], [<NAME>], [CHN], [2844], [29], [<NAME>], [AUT], [2821], [30], [<NAME>], [GER], [2818], [31], [<NAME>], [HKG], [2816], [32], [FREITAS Marcos], [POR], [2807], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (33 - 64)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [33], [KISHIKAWA Seiya], [JPN], [2797], [34], [SMIRNOV Alexey], [RUS], [2797], [35], [LEE Jungwoo], [KOR], [2791], [36], [KIM Minseok], [KOR], [2790], [37], [OH Sangeun], [KOR], [2778], [38], [ZHAN Jian], [SGP], [2764], [39], [<NAME>], [CRO], [2756], [40], [<NAME>], [SLO], [2748], [41], [MURAMATSU Yuto], [JPN], [2746], [42], [SHIBAEV Alexander], [RUS], [2735], [43], [<NAME>], [CHN], [2735], [44], [<NAME>], [GER], [2734], [45], [<NAME>], [GRE], [2731], [46], [<NAME>], [POR], [2727], [47], [<NAME>], [KOR], [2721], [48], [LIVENTSOV 
Alexey], [RUS], [2718], [49], [TAKAKIWA Taku], [JPN], [2717], [50], [<NAME>], [CAN], [2714], [51], [SKACHKOV Kirill], [RUS], [2712], [52], [<NAME>], [CHN], [2712], [53], [<NAME>], [CHN], [2707], [54], [KREANGA Kalinikos], [GRE], [2702], [55], [<NAME>], [PRK], [2699], [56], [YOSHIDA Kaii], [JPN], [2678], [57], [HE Zhiwen], [ESP], [2677], [58], [<NAME>], [BRA], [2677], [59], [<NAME>], [SWE], [2676], [60], [YOSHIMURA Maharu], [JPN], [2675], [61], [FRANZISKA Patrick], [GER], [2673], [62], [SALIFOU Abdel-Kader], [FRA], [2672], [63], [JIANG Tianyi], [HKG], [2670], [64], [CHAN Kazuhiro], [JPN], [2669], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (65 - 96)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [65], [<NAME>], [TUR], [2657], [66], [LUNDQVIST Jens], [SWE], [2654], [67], [OYA Hidetoshi], [JPN], [2652], [68], [<NAME>], [AUT], [2649], [69], [#text(gray, "YOON Jaeyoung")], [KOR], [2648], [70], [<NAME>], [HKG], [2646], [71], [<NAME>], [SWE], [2643], [72], [<NAME>], [KOR], [2641], [73], [CHTCHETININE Evgueni], [BLR], [2639], [74], [<NAME>], [CZE], [2638], [75], [<NAME>], [AUT], [2637], [76], [<NAME>], [IND], [2635], [77], [MONTEIRO Joao], [POR], [2633], [78], [JAKAB Janos], [HUN], [2632], [79], [#text(gray, "JANG Song Man")], [PRK], [2631], [80], [ASSAR Omar], [EGY], [2631], [81], [<NAME>], [AUT], [2629], [82], [PITCHFORD Liam], [ENG], [2625], [83], [<NAME>], [KOR], [2624], [84], [<NAME>], [SGP], [2623], [85], [<NAME>], [SVK], [2622], [86], [<NAME>], [FRA], [2621], [87], [<NAME>], [POL], [2617], [88], [TSUBOI Gustavo], [BRA], [2615], [89], [<NAME>], [FRA], [2605], [90], [<NAME>], [SWE], [2604], [91], [<NAME>], [TUR], [2596], [92], [KIM Junghoon], [KOR], [2596], [93], [GROTH Jonathan], [DEN], [2595], [94], [<NAME>], [DOM], [2594], [95], [<NAME>], [SWE], [2594], [96], [JEOUNG Youngsik], [KOR], [2593], ) )#pagebreak() #set text(font: ("Courier New", "NSimSun")) #figure( caption: "Men's Singles (97 - 
128)", table( columns: 4, [Ranking], [Player], [Country/Region], [Rating], [97], [MATSUDAIRA Kenji], [JPN], [2590], [98], [HOU Yingchao], [CHN], [2589], [99], [KANG Dongsoo], [KOR], [2587], [100], [<NAME>], [ESP], [2587], [101], [CHEN Feng], [SGP], [2585], [102], [PATTANTYUS Adam], [HUN], [2582], [103], [<NAME>], [CRO], [2580], [104], [<NAME>], [IND], [2577], [105], [<NAME>], [CHN], [2577], [106], [<NAME>], [GER], [2574], [107], [<NAME>], [SRB], [2572], [108], [<NAME>], [SRB], [2565], [109], [<NAME>], [JPN], [2563], [110], [<NAME>], [CZE], [2561], [111], [<NAME>], [FRA], [2557], [112], [<NAME>], [FRA], [2553], [113], [<NAME>], [POL], [2550], [114], [<NAME>], [CRO], [2549], [115], [<NAME>], [RUS], [2548], [116], [<NAME>], [JPN], [2545], [117], [<NAME>], [BRA], [2545], [118], [<NAME>], [HKG], [2545], [119], [<NAME>], [SVK], [2545], [120], [<NAME>], [SCO], [2543], [121], [YOSHIDA Masaki], [JPN], [2539], [122], [<NAME>], [POL], [2537], [123], [<NAME>], [GER], [2534], [124], [<NAME>], [MEX], [2532], [125], [<NAME>], [QAT], [2531], [126], [KIM Donghyun], [KOR], [2530], [127], [KORBEL Petr], [CZE], [2524], [128], [SAHA Subhajit], [IND], [2523], ) )
https://github.com/rlpundit/typst
https://raw.githubusercontent.com/rlpundit/typst/main/Typst/en-Report/chaps/outro.typ
typst
MIT License
/* --------------------------------- DO NOT EDIT -------------------------------- */ #import "../Class.typ": * #show: report.with(isAbstract: false) #chap("General Conclusion") // GC #set page(header: smallcaps(title) + h(1fr) + emph("General Conclusion") + line(length: 100%)) /* ------------------------------------------------------------------------------ */ // Summarize the key findings and conclusions of your capstone project. *Discussion* // Analyze and discuss the implications of your results, including any limitations or challenges encountered. #lorem(64) *Future Work* // Suggest potential future work and improvements that can be made based on your capstone project. #lorem(32)
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compiler/ops-11.typ
typst
Other
// Test destructuring assignments. #let a = none #let b = none #let c = none #((a,) = (1,)) #test(a, 1) #((_, a, b, _) = (1, 2, 3, 4)) #test(a, 2) #test(b, 3) #((a, b, ..c) = (1, 2, 3, 4, 5, 6)) #test(a, 1) #test(b, 2) #test(c, (3, 4, 5, 6)) #((a: a, b, x: c) = (a: 1, b: 2, x: 3)) #test(a, 1) #test(b, 2) #test(c, 3) #let a = (1, 2) #((a: a.at(0), b) = (a: 3, b: 4)) #test(a, (3, 2)) #test(b, 4) #let a = (1, 2) #((a.at(0), b) = (3, 4)) #test(a, (3, 2)) #test(b, 4) #((a, ..b) = (1, 2, 3, 4)) #test(a, 1) #test(b, (2, 3, 4)) #let a = (1, 2) #((b, ..a.at(0)) = (1, 2, 3, 4)) #test(a, ((2, 3, 4), 2)) #test(b, 1)
https://github.com/Mc-Zen/quill
https://raw.githubusercontent.com/Mc-Zen/quill/main/docs/images/create-image.typ
typst
MIT License
#set page(width: auto, height: auto, margin: 0pt) #let scale-factor = 100% #let content = include("/examples/composition.typ") #context { let size = measure(content) rect( stroke: none, radius: 3pt, inset: (x: 6pt, y: 6pt), fill: white, box( width: size.width * scale-factor, height: size.height * scale-factor, scale(scale-factor, content, origin: left + top) ) ) }
https://github.com/darkMatter781x/OverUnderNotebook
https://raw.githubusercontent.com/darkMatter781x/OverUnderNotebook/main/entries/auton/pre-skills/pre-skills.typ
typst
#import "../auto-util.typ": * #auton( "Skills Matchloading Macro", datetime(year: 2024, month: 2, day: 25), "matchload.cpp", directory: "src/auton/actions/", [ Before driver skills and auton skills we run a matchload macro to first push alliance triballs into the blue goal and then position the bot such that it is ready to be matchloaded. Then it fires all 44 matchloads while maintaining an accurate angle towards the goal. Another feature is that if at any time the driver moves the joysticks or presses a button, then the macro will exit and the driver control will take over. This is implemented with the// typstfmt::off ```cpp checkDriverExit()```, ```cpp betterIsMotionRunning()```, ```cpp betterWaitUntilDone()```, and ```cpp exitBecauseDriver()``` // typstfmt::on functions. ], )[ = Starting position We also use a weird starting position that is angled. This enables us to optimize our route for speed and efficiency. To ensure the accuracy of our odd set position, we use a setting tool: #figure( grid(columns: 2, gutter: 2mm, image("set1.png"), image("set2.png")), caption: [how we use the setting tool to align the robot], ) #block(breakable: false)[ = Visual Route #figure(image("./route.svg", width: 100%), caption: [ auton route ]) ] ```cpp /** * @brief * Lemlib uses heading, which is like a compass, whereas trig functions like sine and cosine use trig angles. * Therefore we must convert the trig angle to a heading.
* * @param angle trig angle * @return heading angle */ float trigAngleToHeading(float angle) { return -(angle * 180 / M_PI) + 90; } /** * @brief whether the driver has pressed any buttons or moved the joysticks */ bool exitForDriver = false; /** * @brief how much the driver must move the joysticks for the macro to exit */ const int inputThreshold = 24; /** * @brief exits the macro because the driver has exited and ensure motion stops */ void exitBecauseDriver() { exitForDriver = true; Robot::chassis->cancelMotion(); } /** * @brief checks that the driver has not exited the macro by checking all the controller inputs * * @return true if the driver has exited the macro * @return false if the driver has not exited the macro */ bool checkDriverExit() { if (exitForDriver) { exitBecauseDriver(); return true; } exitForDriver |= std::abs(Robot::control.get_analog(pros::E_CONTROLLER_ANALOG_LEFT_Y)) > inputThreshold; exitForDriver |= std::abs(Robot::control.get_analog(pros::E_CONTROLLER_ANALOG_RIGHT_Y)) > inputThreshold; const static std::vector<pros::controller_digital_e_t> buttons = { pros::E_CONTROLLER_DIGITAL_A, pros::E_CONTROLLER_DIGITAL_B, pros::E_CONTROLLER_DIGITAL_X, pros::E_CONTROLLER_DIGITAL_Y, pros::E_CONTROLLER_DIGITAL_L1, pros::E_CONTROLLER_DIGITAL_L2, pros::E_CONTROLLER_DIGITAL_R1, pros::E_CONTROLLER_DIGITAL_R2, pros::E_CONTROLLER_DIGITAL_UP, pros::E_CONTROLLER_DIGITAL_DOWN, pros::E_CONTROLLER_DIGITAL_LEFT, pros::E_CONTROLLER_DIGITAL_RIGHT}; if (exitForDriver) { exitBecauseDriver(); return true; } for (const auto& button : buttons) { exitForDriver |= Robot::control.get_digital(button); if (exitForDriver) { exitBecauseDriver(); return true; } } return false; } /** * @returns whether a motion currently is running and the driver has not exited the macro */ bool betterIsMotionRunning() { return isMotionRunning() && !checkDriverExit(); } /** * @brief waits until driver exits or motion is done */ void betterWaitUntilDone() { while (betterIsMotionRunning()) 
pros::delay(10); } void auton::actions::matchload(int triballs, int until) { // weird angled set (see discord) Robot::chassis->setPose(fieldDimensions::skillsStartingPose, false); // push alliance balls into goal Robot::chassis->moveToPose(MIN_Y + TILE_RADIUS, -TILE_RADIUS, DOWN, 8000, {.forwards = false, .minSpeed = 127}); if (checkDriverExit()) return; pros::delay(1000); Robot::chassis->cancelMotion(); // POSES // where the robot should be shooting to const lemlib::Pose shootingTarget {MAX_X - TILE_LENGTH, -4}; // where the robot should go to to matchload lemlib::Pose matchloadTarget = {MIN_X + TILE_LENGTH - 5.5, MIN_Y + TILE_LENGTH - 2}; const float trigMatchloadTargetTheta = matchloadTarget.angle(shootingTarget); // calculate the angle to the shooting target matchloadTarget.theta = trigAngleToHeading(trigMatchloadTargetTheta); // where the robot should go to to smoothly go to matchload bar const float stagingTargetDistance = 12; lemlib::Pose stagingTarget = matchloadTarget + lemlib::Pose {cos(trigMatchloadTargetTheta) * stagingTargetDistance, sin(trigMatchloadTargetTheta) * stagingTargetDistance}; stagingTarget.theta = matchloadTarget.theta; // calculate the angle to the shooting target stagingTarget.theta = trigAngleToHeading(stagingTarget.angle(shootingTarget)); printf("stage x: %f\n", stagingTarget.x); printf("stage y: %f\n", stagingTarget.y); printf("stage theta: %f\n", stagingTarget.theta); printf("load x: %f\n", matchloadTarget.x); printf("load y: %f\n", matchloadTarget.y); printf("load theta: %f\n", matchloadTarget.theta); // move in front of matchload bar Robot::chassis->moveToPose(stagingTarget.x, stagingTarget.y, stagingTarget.theta, 3000, {.minSpeed = 64}); // wait until robot is angled correctly waitUntil([stagingTarget] { return !betterIsMotionRunning() || robotAngDist(stagingTarget.theta) < 5; }); if (checkDriverExit()) return; printf("staging done\n"); Robot::chassis->cancelMotion(); Robot::Subsystems::catapult->matchload(until - pros::millis(), 
triballs); // then move to matchload bar Robot::chassis->moveToPose(matchloadTarget.x, matchloadTarget.y, matchloadTarget.theta, 750, {.forwards = false}); betterWaitUntilDone(); if (checkDriverExit()) return; printf("matchload boomerang touch done\n"); // make sure the robot is touching the matchload bar tank(-48, -48, 0, 0); const float startingTheta = Robot::chassis->getPose().theta; // when the robot touches the bar it should begin to turn waitUntil([startingTheta] { return robotAngDist(startingTheta) > 1 || checkDriverExit(); }); if (checkDriverExit()) return; printf("matchload touch done\n"); // switch to IMU further from cata Robot::Actions::switchToMatchloadingIMU(); // run turn pid until done matchloading or driver exits lemlib::PID turnPID {Robot::Tunables::angularController.kP, Robot::Tunables::angularController.kI, Robot::Tunables::angularController.kD}; // prevent robot from turning for too fast const float maxSpeed = 48; while (Robot::Subsystems::catapult->getIsMatchloading() && !checkDriverExit()) { const float targetTheta = trigAngleToHeading(Robot::chassis->getPose().angle(shootingTarget)); const float error = lemlib::angleError(targetTheta, Robot::chassis->getPose().theta, false); float output = turnPID.update(error); printf("target: %f\n", targetTheta); printf("error: %f\n", error); printf("output: %f\n", output); // prevent the robot from turning too fast output = std::clamp(output, -maxSpeed, maxSpeed); tank(output, /*ensure that we are touching the matchload bar*/ -16, 0, 0); pros::delay(10); }; stop(); printf("exit\n"); // switch to IMU further from cata Robot::Actions::switchToNormalIMU(); } ``` ]
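The `trigAngleToHeading` conversion at the core of this macro is easy to sanity-check off the robot. Below is a hypothetical Python port (function name and test angles are mine, not from the notebook):

```python
import math

def trig_angle_to_heading(angle_rad):
    # Mirrors the C++ helper: convert radians to degrees, negate
    # (compass headings increase clockwise), and rotate by 90 so that
    # a trig angle measured from +x becomes a heading measured from north.
    return -(angle_rad * 180.0 / math.pi) + 90.0

# trig angle 0 (pointing along +x, "east")  -> heading 90
# trig angle pi/2 (pointing along +y, "north") -> heading 0
```

This matches Lemlib's convention of heading 0 at north, increasing clockwise.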
https://github.com/polarkac/MTG-Stories
https://raw.githubusercontent.com/polarkac/MTG-Stories/master/stories/007%20-%20Theros/012_Asphodel.typ
typst
#import "@local/mtgstory:0.2.0": conf #show: doc => conf( "Asphodel", set_name: "Theros", story_date: datetime(day: 11, month: 12, year: 2013), author: "<NAME>", doc ) Maia picked her way through the ruined building with bare and careful feet. They were already covered in soot, but she could wash them off in the stream. It was her sandals she couldn't get dirty, along with her tunic. She'd only made that mistake once, and mother had sent her to bed without dinner and forbidden her from going back. She ducked under a creaking beam and stepped into the forge itself. The stone anvil still sat in its familiar place. Papa never let her go back there when he was casting molten bronze or smelting iron. #emph[Much too dangerous] , he'd say gruffly, and then, with a twinkle in his eye: #emph[Maybe next year] . So, most of the time she'd spent back there had been on days when papa was cold-working the swords and shields after they'd cooled, hammering their edges to harden the bronze. She'd sit and listen to the clang, clang, clang of hammer and stone. Sometimes papa would tell her about what he was doing, and why, and every time she learned a little bit more about the trade. In her hand she clutched a bundle of delicate white and purple flowers, fresh picked in the hills beyond the forge. Mother brought asphodels, the flowers of the dead, to the spot up on the hill where they had buried him. She said that was proper. But asters had always been papa's favorite, and he had never even been up on that hill that Maia knew of. This was papa's place. So she brought these flowers, to this place, as often as she could. Maia closed her eyes and saw the forge as it had been, before the fire, filled with tools and smoke and the clang of the hammer. Charms bearing prayers to Purphoros hung on the walls, imploring the god of the forge to fill the place with the passion of things being made. She opened her eyes and blinked back tears. The roof had collapsed, and sunlight streamed in. 
The light should have been cheery, but it was wrong, all wrong. It had been half a year since they had placed a clay mask on his face and laid him in the ground. She laid the bundle of asters on the anvil, like she always did. Usually, by the time she was able to make it back to the forge, days or weeks later, the flowers were gone. She knew they'd probably been eaten by deer or blown away by a gust of wind. But she thought of papa coming and taking them away just the same. The angle of the sun shining through the slats of the ruined roof told her it was time to go. She said a quick prayer to Erebos, god of the dead, and hurried out of the forge. She picked up her sandals and ran to the stream, washed her feet, dried them in the grass, and made her way back to their little house on the outskirts of Meletis. Maia unlaced her sandals by the door and washed her feet in the basin. The thick smell of lentil stew wafted out of the kitchen. And if there was less spicing than there had been this time last year, and smoked fish instead of fresh, could her mother be blamed? Without the forge, and without papa's strong hands around the house, times were harder, and food was food. She made her quiet way to the kitchen. Mother was slowly stirring the pot of stew, and little Kadmos, who was only four, was sitting on the floor playing with a straw soldier. He looked up at her, blinked, and went back to his play. "Hello, Mama," she said. Her mother turned from the stew pot. Her hair was streaked with gray, and her eyes were always sad, but she was as pretty as ever. "Hello, Maia," said her mother. "How were lessons?" Mother still paid to send Maia to a scholar in the city for lessons. Maia had offered to give them up, when she realized that they cost money the family no longer had, but her mother wouldn't hear of it. Maia no longer complained about the lessons, even when they were dull. "Fine," she said, although in truth this had been one of the dull ones. 
"We learned about triangles, and the lengths of their sides, and... and numbers." "Very good," said her mother absently. She had gone back to stirring the stew. Dinner was quiet, the stew bland but filling. Kadmos fussed and spilled his bowl, but Mother only sighed. Maia went to bed early and dreamed of the ring of hammers. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) #figure(image("012_Asphodel/01.jpg", width: 100%), caption: [Plains | Art by <NAME>], supplement: none, numbering: none) It was over a week before lessons finished early enough that she could stop by the forge on the way home again. The growing season was coming to a close and the days growing shorter, and she had to range farther into the hills to find asters, but she pulled together a decent bundle and judged that she still had enough daylight left. Maia was just cresting the last hill, bundle of flowers in hand, when she heard it. #emph[Clang] . Hammering. Someone was in papa's forge! She hurried down the slope, still clutching the flowers, and didn't stop to take off her sandals. They'd get dirty. She could wash them. She had to know who it was. #emph[Clang] . As soon as she was inside the ruin, quiet but for slow, rhythmic sound of hammering, she stopped. Apprehension set in. If somebody from town had asked to use the forge, she'd have heard about it. Mother would have told her. That left outlaws, or invaders, or... Maia took a deep breath, ducked under the fallen beam, and peered into the forge. She saw a broad back in a leather apron and strong arms as big around as she was. One arm raised a charred hammer and struck. #emph[Clang] . Maia crept around the edge of the room, picking her way through the debris and staying in the shadows. She had to see his face. In his left hand was the charred remnant of a sword in progress. Even she could see that it was beyond help. It had warped in the fire. 
She'd seen swords like that, and papa had always melted them down and started over. The sword's blackened edges had begun to flake under the hammer. Again the arm rose and fell. #emph[Clang]. She crept closer. The figure wore a gold mask with broad features, stylized but recognizable: a broad nose, a great bushy beard, and close-set eyes that had twinkled with life, now dead and cold and crafted. The arm rose again. "Papa?" she said. The figure paused with its arm upraised and turned its head to regard her. It lowered its arm, slowly, but did not move from the stone anvil. She remembered the bundle of asters at her side, held it up, and stepped forward. The figure remained still. With no sudden motions or threatening movements, she walked toward the anvil. Soon she was close enough to touch it, closer than papa ever let her get when he was working. She held out the bouquet of asters. The golden mask betrayed no expression whatsoever. The hammer rose. She dropped the flowers and jumped back with a gasp. The hammer fell, then again, and again, pounding the delicate purple flowers against the ruined sword. #emph[Clang]. #emph[Clang]. #emph[Clang]. Maia turned and ran out of the forge, away from the pounding of the hammer. She didn't stop until she reached home, wild-eyed, covered in soot and sweat. Mother was furious. She marched Maia to the back of the house and fairly well threw her into a bath. She asked Maia what had happened, why she had gone back. Maia didn't answer, and mother sent her to bed without dinner again. She didn't mind. She wasn't hungry. She lay awake well into the night, certain she could hear the rhythmic pounding of hammer on stone. #v(0.35em) #line(length: 100%, stroke: rgb(90%, 90%, 90%)) #v(0.35em) #figure(image("012_Asphodel/02.jpg", width: 100%), caption: [Traveling Philosopher | Art by <NAME>], supplement: none, numbering: none) Over the next several days, Maia threw herself into her lessons and her chores around the house. 
If her mother noticed her newly quiet intensity, she didn't mention it. She was probably just glad of the extra help. On the fourth day after her encounter at the forge, as lessons ended, she waited as the other students filed out. Her teacher, a matronly woman named Pylia, turned to her after the others were gone. "You've been quiet lately," she said. "Is there something I can do for you?" "I... I have an unusual question," said Maia. "I am your instructor," said Pylia. "It's hardly unusual for you to ask me a question." "It's about the Returned," said Maia. "The Noston are a sad lot," said Pylia. "What is your question?" "Do they... remember?" "As a rule, no," said Pylia. "They retain their skills and their knowledge of the world. A Returned navigator could still sail up the coast. But their memories of life are left behind in the Underworld. It is a sad irony: they loved life enough to return to it, but they cannot bring that love back with them." "What happens to them?" asked Maia. "Some wander. Some are violent. Many gather in the necropolises—quiet Asphodel and bitter Odunos—to be among their own kind. They live shadowed lives, filled with sorrow and anger." Maia nodded and blinked back tears, and the teacher's expression softened. "Maia," she said. "It's very rare for someone to join the ranks of the Noston. And of those who do come back, the things that made them who they were—their memories, their relationships, the things they treasure—are lost forever. No one truly returns from the Underworld." "I understand," said Maia. "Thank you." Pylia nodded, and Maia rose and left as quickly as she could. She left the city and headed home. She steeled herself to hurry past the path that led to the forge, as she had the last few days, but as she came around a bend in the road, she saw that the grass around the little path was trampled. She looked closer and saw a great jumble of footprints, heavy in the style of military gear. She ran for the forge. 
A group of about twenty hoplites had surrounded the forge, some pointing their spears inward while others searched the area nearby. "No!" cried Maia. #figure(image("012_Asphodel/03.jpg", width: 100%), caption: [Phalanx Leader | Art by <NAME>umbo], supplement: none, numbering: none) The heavily armed men and women turned, but most of them went back to their duty when they saw she was just a child. Their captain, a strong young man with a high-crested helm, walked toward her. She tried to slip past him, but he barred her way with the shaft of his spear. "Stay back," he said. "Someone spotted a Returned in the area. We're to send it back to the Underworld before it vandalizes something or snatches a child. Like you." "The building's clear!" yelled one of his troops. "Fan out!" said the captain. "If it's here, we'll find it." Before he could turn back to Maia, she bolted. She ran past him, ignoring him when he yelled after her. She ran past his troops and out into the hills. She searched for what felt like hours, until at last she saw a hunched figure loping away through the brush, hammer in hand. "Wait!" she yelled. The figure stopped, then turned, its mask frozen in an expression of sorrow. She walked toward it, but stopped out of its reach. "They're looking for you," she said. "They want to hurt you." The figure nodded. "Do... Do you know who I am?" Slowly, sadly, the figure shook its head. Tears tumbled from Maia's eyes. The Returned reached out a hand, and she didn't flinch away. With one thumb, it wiped the tears from her face—a gentle gesture, shocking in its familiarity. "Don't cry," it said, in dull monotone. "Don't cry." She stepped back, and the thing turned to leave. "Where are you going?" she asked. The great hunched shoulders rose in a shrug. "Away." Maia's lip trembled. She looked down to see a patch of flowers at their feet, long white flowers with sturdy green stems. She bent down and picked one, and offered it to the figure. 
"Asphodel," she said, pointing inland. "Go to Asphodel, the necropolis. There are more there like you." The Returned took the flower from her shaking hand and nodded. It turned to follow her pointing finger and walked, hammer in hand, away from the ruined forge. To Asphodel.
https://github.com/fenjalien/metro
https://raw.githubusercontent.com/fenjalien/metro/main/tests/unit/test.typ
typst
Apache License 2.0
#import "/src/lib.typ": unit, units, prefixes #import units: * #import prefixes: * #set page(width: auto, height: auto) $unit(kg m/s^2)$ #unit($kilo gram metre / second^2$) $unit(joule / mole / kelvin)$ #unit("kilo gram metre per square second") #unit("kilo gram metre second^2") #unit("per square becquerel") #unit("/becquerel^2") #unit("square becquerel") #unit("joule squared per lumen") #unit("cubic lux volt tesla cubed") #unit("henry tothe(5)") #unit("henry^5") #unit("raiseto(4.5) radian") #unit("kilogram of(metal)") #unit("kilogram_metal")
https://github.com/Myriad-Dreamin/typst.ts
https://raw.githubusercontent.com/Myriad-Dreamin/typst.ts/main/fuzzers/corpora/meta/ref_03.typ
typst
Apache License 2.0
#import "/contrib/templates/std-tests/preset.typ": * #show: test-page #set heading(numbering: "1.", supplement: [Chapter]) #set math.equation(numbering: "(1)", supplement: [Eq.]) = Intro #figure( image("/assets/files/cylinder.svg", height: 1cm), caption: [A cylinder.], supplement: "Fig", ) <fig1> #figure( image("/assets/files/tiger.jpg", height: 1cm), caption: [A tiger.], supplement: "Tig", ) <fig2> $ A = 1 $ <eq1> #set math.equation(supplement: none) $ A = 1 $ <eq2> @fig1, @fig2, @eq1, (@eq2) #set ref(supplement: none) @fig1, @fig2, @eq1, @eq2
https://github.com/TypstApp-team/typst
https://raw.githubusercontent.com/TypstApp-team/typst/master/tests/typ/text/quote.typ
typst
Apache License 2.0
// Test the quote element.

---
// Text direction affects author positioning
And I quote: #quote(attribution: [<NAME>])[cogito, ergo sum].

#set text(lang: "ar")
#quote(attribution: [عالم])[مرحبًا]

---
// Text direction affects block alignment
#set quote(block: true)
#quote(attribution: [<NAME>])[cogito, ergo sum]

#set text(lang: "ar")
#quote(attribution: [عالم])[مرحبًا]

---
// Spacing with other blocks
#set quote(block: true)
#set text(8pt)

#lorem(10)
#quote(lorem(10))
#lorem(10)

---
// Inline citation
#set text(8pt)
#quote(attribution: <tolkien54>)[In a hole in the ground there lived a hobbit.]

#set text(0pt)
#bibliography("/files/works.bib")

---
// Citation-format: label or numeric
#set text(8pt)
#set quote(block: true)
#quote(attribution: <tolkien54>)[In a hole in the ground there lived a hobbit.]

#set text(0pt)
#bibliography("/files/works.bib", style: "ieee")

---
// Citation-format: note
#set text(8pt)
#set quote(block: true)
#quote(attribution: <tolkien54>)[In a hole in the ground there lived a hobbit.]

#set text(0pt)
#bibliography("/files/works.bib", style: "chicago-notes")

---
// Citation-format: author-date or author
#set text(8pt)
#set quote(block: true)
#quote(attribution: <tolkien54>)[In a hole in the ground there lived a hobbit.]

#set text(0pt)
#bibliography("/files/works.bib", style: "apa")
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/README.md
markdown
Other
# Typst-hs

Typst-hs is a Haskell library for parsing and evaluating typst syntax. Typst (<https://typst.app>) is a document formatting and layout language, like TeX.

Currently this library targets v0.12.0 of typst, and offers only partial support.

There are two main components:

- a parser, which produces an AST from a typst document
- an evaluator, which evaluates the typst expressions in the AST
https://github.com/ajayfoo/math-notes
https://raw.githubusercontent.com/ajayfoo/math-notes/main/college_algebra.typ
typst
MIT License
#import "templates/plain.typ": * #show: project.with(title: "College Algebra", authors: ("Ajay",), date: "March 6, 2024") = Prerequisites == Real Numbers: Algebra Essentials === Number Sets + *Natural Numbers(#sym.NN)*: Counting Numbers. ${1,2,3,...}$ + *Whole Numbers(#sym.WW)*: Natural Numbers including the zero. ${0,1,2,3,...}$ + *Integers(#sym.ZZ)*: A union of set of opposites of Natural Numbers and set of Whole Numbers. ${...,-3,-2,-1,0,1,2,3,...}$. *Positive Integers($#sym.ZZ^+$)* are #sym.NN. *Negative Integers($#sym.ZZ^-$)* are opposites of #sym.NN, i.e. ${...,-3,-2,-1}$. + *Rational Numbers(#sym.QQ)*: A set that can be defined as follows... $ {a/b | a "and" b in ZZ "and" b eq.not 0} $ Since rational numbers are fractions they can also be represented as decimals. Decimals are numbers containing a whole and a fractional part(the number(s) succeeding the decimal separator "."), #"eg." 1.89, 0.$overline(6666)$. Decimals must either terminate or form a repeating pattern, thus, there are two types of decimals: + Terminating decimals: $15/8 = 1.875$ + Non-terminating/repeating decimals: $-4/11=-0.36363636...=-0.overline(36)$ + *Irrational Numbers(#sym.PP or $QQ^'$)*: Numbers that cannot be expressed as fractions #"eg." #sym.pi, $e$ : Euler's number, $sqrt(2)$, etc.. A set of *Irrational Numbers* can be defined as... $ {h | h "is not a rational number"} $ #align(center, [or]) $ {h | h#sym.in.not #sym.QQ} $ + *Real Numbers(#sym.RR)*: A set of *Rational Numbers* and *Irrational Numbers*. A set of *Real Numbers* can be defined as... $ {n | n "is either a rational number or irrational number"} $ #align(center, [or]) $ {n | n in QQ or n in PP} $ === Order Of Operations + *Exponential Notation*: - Factors: Positive integers that divide a number evenly. - $a^n$ is an Exponential Notation it means that _a_ is multiplied by itself _n_ times. It's read as "a to the nth" or "a raised to n". 
    <exponent_definition>#align(center, [#"\"n\" number of times"])
    $ a^n = a times a times a times ... times a $
    _a_ is called the *base* and _n_ is called the *exponent*. The result after evaluating $a^n$ for some _n_ is called the *power* of _a_.
  - Exponentiation is a type of math operation that tells us to take the *base* and multiply it by itself *exponent* number of times. Exponents are also called *indices*.
  - Powers of 2 are 1, 2, 4, 8, 16, 32, ..., 256 and so on. Note: $2^0=1$.
  - Power is often used interchangeably with exponent, but they are distinct. In $a^n = b$, _a_ is the *base*, _n_ is the *exponent* and _b_ is the *power*.
+ *#"PEMDAS"*
  - #[*P*]#"arentheses"
  - #[*E*]#"xponents"
  - #[*M*]#"ultiplication" and #[*D*]#"ivision"
  - #[*A*]#"ddition" and #[*S*]#"ubtraction"

=== Properties Of Real Numbers
+ *Commutative Properties*: Commutative means relating or involving substitution.
  + Addition: Numbers may be added in any order without affecting the sum. $ a + b = b + a $ $ "eg." 3 + (-4) = (-4) + 3 $
  + Multiplication: Numbers may be multiplied in any order without affecting the product. $ a times b=b times a $ $ "eg." 3 times -4 = -4 times 3 $
  *NOTE*: Neither subtraction nor division is commutative. $"eg." 1-2 eq.not 2-1$ and $1 div 2 eq.not 2 div 1$.
+ *Associative Properties*: Numbers can be grouped differently without affecting the result.
  + Addition: $a + (b + c) = (a + b) + c$
  + Multiplication: $a times (b times c) = (a times b) times c$.
  *NOTE*: Neither subtraction nor division is associative. $"eg." 1-(2-3) eq.not (1-2)-3$ and $1 div (2 div 4) eq.not (1 div 2) div 4$. Solve expressions containing multiple subtraction or division operations left to right, just like addition and multiplication.
+ *Distributive Property*: The product of a factor and a sum of terms is equal to the sum of the products of the factor and each term. 
  $ a times (b+c) = a times b + a times c $
+ *Identity Properties*
  + *Addition*: There is a unique (there's no other like it) number called the additive identity, which is 0, that when added to a number results in the original number. $ a+0=a $ $ "eg." 9 + 0 = 9 $
  + *Multiplication*: There is a unique (there's no other like it) number called the multiplicative identity, which is 1, that when multiplied by a number results in the original number. $ a times 1 = a $ $ "eg." 9 times 1 = 9 $
+ *Inverse Properties*: The opposite of something.
  + *Addition*: For every real number $a$ there is an additive inverse, which is $-a$, that, when added to $a$, yields the additive identity, 0. $ a + (-a) = 0 $
  + *Multiplication*: If $a$ is any real number other than 0, then there exists a multiplicative inverse (also known as the reciprocal), denoted by $1/a$, that, when multiplied by $a$, yields the multiplicative identity, 1. $ a times 1/a = 1 $

=== Algebraic Expressions
+ *Variables or Indeterminates*: Symbols representing a value that is subject to change. Usually represented by lowercase letters, #"eg." $x,y,a$ etc.
+ *Constants*: The opposite of variables; symbols representing a fixed value. #"Eg." #sym.pi, $e$, 79, etc.
+ *Expressions*<expression_definition>: A well-formed (according to the rules of context) and finite combination of mathematical symbols. Mathematical symbols can be numbers (constants), variables, operators, parentheses, etc.
+ *Definition of Algebraic Expression*: A collection of constants and variables joined together by the algebraic operations of addition, subtraction, multiplication and division. To evaluate an algebraic expression means to determine its value for a given value of all the variables.

=== Formulas
+ *Equation*: A mathematical statement indicating that two expressions are equal. It may or may not be true; it's only a proposition. Example: $1 + x = 89$.
+ *Formula*: An equation expressing a relationship between some variables and constants. 
  Most often used to find the value of one quantity in terms of another or other quantities. Example: $c^2 = a^2 + b^2$ [the Pythagorean Theorem].

== Exponents and Scientific Notation

=== Rules Of Exponent
+ *Product Rule*: $ a^m times a^n = a^(m+n), forall a in RR "and" m,n in NN $
+ *Quotient Rule*: $ a^m/a^n = a^(m-n), forall a in RR "and" m,n in NN "and" m>n $
+ *Power Rule*: $ (a^m)^n = a^(m times n), forall a in RR "and" m,n in ZZ^+ $
+ *Zero Exponent Rule*: $ a^0 = 1, forall a in RR, a eq.not 0 $
+ *Negative Exponent Rule*: $ a^(-n) = 1/a^n $
+ *Power Of A Product Rule*: $ (a times b)^n = a^n times b^n $
+ *Power Of A Quotient Rule*: $ (a/b)^n = a^n/b^n $

=== Scientific Notation
- A shorthand for writing very small or very large numbers in terms of the exponent or index of 10 (in this context, exponentiation is the operation).
- A number is written in scientific notation if it's written in the form... $ a times 10^n, "where" a in [1,10) and n in ZZ $

== Radicals and Rational Exponents

=== Square Roots
+ Roots are the inverse of exponents, i.e. they undo the results of exponents. In $x^n=b$, _x_ is the _nth_ root of _b_.
+ *Principal Square Root*: The non-negative square root of a number. $sqrt(2)$ is a *Principal Square Root* whereas $-sqrt(2)$ is not.
+ Roots are represented using the radical symbol $sqrt(space)$, and the expression sitting inside the radical symbol is called the radicand. If a radical expression doesn't have an object at its top left (unlike $root(n, a)$), then it's assumed we are taking the square root of the radicand. The object to the top left of the radical symbol is called the index; it tells us which root we have to find. For example, in $root(n, a)$ we are taking the _nth_ root of _a_.
+ $root(n, a)=a^(1/n)$, $(root(n, a))^p=a^(p/n)=root(n, (a^p))$.

=== Rationalizing Denominators
+ The process of removing the radical from the denominator by multiplying the numerator and denominator by the radical in the denominator. 
  Example: To rationalize the denominator of the expression $1/(2 sqrt(a))$ we'd multiply the expression by $sqrt(a)/sqrt(a)$, which results in $sqrt(a)/(2 a)$.
+ If our denominator has two terms, then we'd first have to find its conjugate and then multiply it with the denominator and the numerator. The conjugate of a two-term expression is the same two terms with the opposite sign in the middle. Example: the conjugate of $root(6, 32)-32$ is $root(6, 32)+32$.

== Polynomials
Mathematical #link(<expression_definition>, [expressions]) consisting of variables (indeterminates) and constants, where only addition, subtraction, multiplication and exponentiation with positive integer #link(<exponent_definition>, [exponents]) of the variables are involved. *Polynomials* are typically written in the following form: $ a_n x^n + ... + a_2 x^2 + a_1 x + a_0 $ Where $a_i$ is called the coefficient, $a_0$ is called the constant and each product $a_i x^i$ is called a term. *Example*: $x+3$, $0$ and $root(79, a pi)+32$ are polynomials, whereas $x^(-1)$, $-x^(12/68) + b d$, $sqrt(y)/y$, $(z x)/y + x^3 + 1/x^2$ are not.

=== Related Terminology
+ *Term*: A product of variables and/or constants. Example: $a^2, 2 a b "and" b^2$ in the polynomial $a^2 + 2 a b + b^2$ are called terms.
+ *Coefficient*: A number multiplied by a variable. Example: 2 in the term $2 a b$.
+ *Degree*: The highest exponent of a variable present in the expression. Example: The degree of the expression $a^78 + c^32 + d^2$ is 78.
+ *Leading Term*: The term providing the degree of the expression. It is called the _*leading term*_ because it is usually written at the beginning of the expression. Example: $327 pi b^88$ in the expression $327 pi b^88 + c^2 + d + 45$.
+ *Leading Coefficient*: The constant part of the *leading term*.
+ *Fundamental Theorem of Algebra*: Every polynomial equation of degree "n" with complex number coefficients has "n" roots or solutions in the complex numbers (#sym.CC). 
+ *Root Or Zero Or Solution Of A Polynomial*: The value of the variable(s) in the polynomial for which it equals zero.

=== Special Forms of Polynomials
+ *Perfect Square Trinomials*: The square of a binomial. Typical form: $ (x+a)^2 = (x^2 + 2 a x + a^2) $
+ *Difference Of Squares*: The product of two binomials that are identical except that the sign of exactly one term in the second binomial is the opposite of the corresponding term in the first binomial. Examples: $ p^2 - q^2 = (p+q)(p-q) $ $ (-1)(a^2) + b^2 = b^2 - a^2 = (-a+b)(a+b) $

== Factoring

=== Greatest Common Factor
The Greatest Common Factor of two or more numbers is the greatest number that divides them evenly. Example: The GCF of 12, 48, 9 and 36 is 3, as 3 is the greatest number that can divide 12, 48, 9 and 36 evenly (without leaving any remainder).

=== Greatest Common Factor Of a Polynomial
The GCF of two or more polynomials is the greatest polynomial that can evenly divide them. Example: The GCF of $x^4 + 6x^3 + x$ and $x^10 + x^5$ is _x_.\
*Example Problem*: Factorize $49 m b^2 - 35 m^2 b a + 77 m a^2$\
*Solution*:\
$ &= 7m (7b^2 - 5m b a + 11a^2) \ $

=== Trinomial with Leading Coefficient 1
A trinomial of the form $x^2 + b x + c$ can be factorized as $(x + p)(x + q)$ if there exists a pair of _p_ and _q_ such that $p + q = b$ and $p q = c$; otherwise the polynomial is a prime polynomial or irreducible polynomial.

=== Factoring by Grouping
To factor a polynomial in the form $a x^2 + b x + c$ we need to...
+ Find two numbers _p_ and _q_ such that $p q = a c$ and $p + q = b$.
+ Write the entire expression as $a x^2 + p x + q x + c$
+ Pull out the GCF of $a x^2 + p x$ and $q x + c$.
+ Further factorize the entire expression.
*Example*: Factorize $2x^2 + 9x + 9$ via grouping.\
*Solution*: We can take $p = 6$ and $q = 3$, $because 6 times 3 = 2 times 9$ and $6 + 3 = 9$. 
$ &=2x^2 + 6x + 3x + 9\ &=2x(x + 3) + 3(x + 3) &&"Pull out GCF"\ &=(2x+3)(x+3) &&"Factorize further"\ $

=== Factoring A Perfect Square Trinomial
A perfect square trinomial is a trinomial that can be represented as the square of a binomial, such as $a^2 + 2a b + b^2 = (a + b)^2$ and $a^2 - 2a b + b^2 = (a - b)^2$.
*Example*: Factor the perfect square trinomial $49x^2 - 14x + 1$.\
*Solution*: $ &=49x^2 -7x -7x + 1\ &=7x(7x -1) -1(7x - 1)\ &=(7x-1)(7x-1)\ &=(7x-1)^2\ $

=== Factoring A Difference Of Squares
We can use the property of difference of squares to find factors of any binomial that follows the form $a^2 - b^2$.
*Example*: Find factors of the difference of squares $81y^2 - 100$.\
*Solution*: $ &=(9y)^2 - 10^2\ &=(9y - 10)(9y + 10) &&"Property of difference of squares"\ $

=== Factoring The Sum And Difference Of Cubes
The sum and difference of cubes can be factored into a binomial and a trinomial like so...\
$ a^3 + b^3 = (a+b)(a^2 - a b + b^2)\ a^3 - b^3 = (a-b)(a^2 + a b + b^2) $

== Rational Expressions
The quotient of two polynomial expressions is called a rational expression.

== Reference
+ #link( "https://openstax.org/details/books/college-algebra-2e", )[College Algebra 2e by Openstax]

#pagebreak()

= Equations And Inequalities

== Prerequisite
+ *Space*: A set with a definition of relationships among its members.
+ *Euclidean Space*: The basic space of geometry.
+ *Coordinate System*: A system that uses one or more numbers (called coordinates) to uniquely identify the position of points or other geometric objects on a manifold(?) such as Euclidean space.
+ *Dimension*: The minimum number of coordinates needed to specify a point within it.
+ *Plane*: A Euclidean space of dimension two, denoted by $EE^2$.
+ *Cartesian*: Of or relating to the works of Rene Descartes.
+ *Cartesian/Rectangular Coordinate System*: A Cartesian coordinate system in a plane is a coordinate system that uniquely identifies a point in the plane by a pair of ordered real numbers called *coordinates*. 
  These coordinates are signed distances from two fixed lines, which must be perpendicular to each other, called the *coordinate axes* or *coordinate lines*. The *coordinate axes* are typically called the X axis and the Y axis. The point where the two axes meet is called the *origin*, and its coordinates are (0,0). The X and Y axes divide the plane into 4 *quadrants*.
+ *Abscissa & Ordinate*: The *abscissa* is the x-coordinate and the *ordinate* is the y-coordinate.
+ *Slope*: Slope or gradient is a number that describes both the direction and steepness (the extent of rising or falling quickly) of a line. Slope is usually denoted by 'm' and expressed as follows in relation to two points $(x_1,y_1)$ and $(x_2, y_2)$. $ m = (y_2 - y_1)/(x_2 - x_1) = (y_1 - y_2)/(x_1 - x_2) $

=== Distance Formula
The distance formula is used to find the distance between two points in a plane. The distance between points $P(x_1,y_1)$ and $Q(x_2,y_2)$ can be expressed as follows. $ d = sqrt((x_1 - x_2)^2 + (y_1 - y_2)^2) = sqrt((x_2 - x_1)^2 + (y_2 - y_1)^2) $

=== Midpoint Formula
The midpoint is the point that is equidistant from the two ends of a line segment. The midpoint of a line segment having end points $P(x_1,y_1)$ and $Q(x_2,y_2)$ can be expressed as follows. $ M = ((x_1 + x_2)/2, (y_1 + y_2)/2) $

== Linear Equations In One Variable
A *linear equation in one variable* is an equation used to represent a straight line using only one variable. The degree of the variable must be 1. The *standard form* of a *linear equation* is as follows... $ a x + b = 0 $ where _a_ and _b_ are constant real numbers and $a eq.not 0$.

=== Types Of Linear Equation In One Variable
+ *Identity*: True for all values of the variable. *Example*: $8x - 5x + 4 = 3x + 4$, $322x = 322x$
+ *Conditional*: True only for some values of the variable. *Example*: $2x + 6 = 3$
+ *Inconsistent*: False for all values of the variable. A false statement. 
  *Example*: $2x = 7x$

=== Rational Equations
An equation that consists of at least one rational expression containing at least one variable in its denominator is called a *rational equation*.\
*Example*: $ 7/(2x) - 5/(3x) = 22/7, (x^2)/(9x^3) - 73 = (3 pi)/x $

=== Different Forms Of Line Equation
+ *Vertical Lines* $ x = c $ where _c_ is the x-intercept. The slope is undefined.
+ *Horizontal Lines* $ y = c $ where _c_ is the y-intercept. The slope is 0.
+ *Standard Form* $ A x + B y = C $ where _A_, _B_ and _C_ are integer constants.
+ *Slope Intercept Form* $ y = m x + b $ where _m_ is the slope and _b_ is the y-intercept (the ordinate of the point where the line intersects the Y axis) of the line.
+ *Point-Slope Form* $ y - y_0 = m(x - x_0) $ where $(x_0,y_0)$ is a point the line passes through and _m_ is the slope.
+ *Two Point Form* $ y - y_1 = ((y_2 - y_1)/(x_2 - x_1))(x - x_1) = ((y_1 - y_2)/(x_1 - x_2))(x - x_1) $ where $(x_1, y_1)$ and $(x_2, y_2)$ are two distinct points the line passes through.
+ *Intercept Form* $ x/a + y/b = 1 $ where _a_ is the x-intercept and _b_ is the y-intercept.

=== Properties of a Pair of Lines
+ The slopes of two parallel lines are the same: $m_1 = m_2$, where $m_1$ is the slope of line _l_, $m_2$ of line _p_, and _l_ $parallel$ _p_.
+ The slopes of two lines that are perpendicular to each other are negative reciprocals of each other: $m_1 = -1/m_2$, where $m_1$ is the slope of line _l_, $m_2$ of line _p_, and _l_ $perp$ _p_.

== Complex Numbers

=== Prerequisites
+ *Imaginary Number*: A multiple of a quantity called "_i_", which is defined by $i^2 = -1$. *Example*: $-i, 5i "and" (32/72)i$
A *Complex Number* is the sum of a real number and an imaginary number. The standard form of a complex number is $a + b i$, where $a$ is the real part and $b$ is the imaginary part. If $a=0 and b eq.not 0$ then it's called a *Pure Imaginary Number*. 
*Example*: $5 + 3i$, $-89 + (10/22)i$

=== Complex Plane
The *Complex Plane* is a coordinate system for plotting complex numbers. In the *Complex Plane* the real part is represented along the horizontal axis and the imaginary part along the vertical axis.

=== Complex Conjugate
The *Complex Conjugate* of a complex number $a+b i$ is $a-b i$. It is the same as the original number except that the sign of the imaginary part is reversed.

=== Powers of _i_
$ i^2 = -1, i^3 = -i, i^4 = 1 $
*Problem*: Evaluate $i^35$ $ i^35 =& i^32 times i^3\ = & (i^4)^8 times -i\ = & 1^8 times -i\ = & -i $

== Quadratic Equations
An equation containing a second-degree polynomial is called a *Quadratic Equation*. The standard form of a quadratic equation is $a x^2 + b x + c = 0$, where $a, b "and" c in RR and a eq.not 0$

=== Solving By Factorization
*Problem*: Factor and solve the equation $4x^2 + 15x + 9 = 0$.
*Solution*: $ 4x^2 + 12x + 3x + 9 &= 0\ 4x(x+3) + 3(x+3) &= 0\ (4x+3)(x+3) &= 0\ 4x+3 = 0 "and" x+3 &=0\ 4x &=-3\ therefore x &=-3/4\ x+3 &=0\ therefore x &=-3\ $ $ therefore x &= -3, -3/4 $

=== Solving By Completing The Square
*Problem*: Solve by completing the square: $x^2 - 6x = 13$.
*Solution*: $ x^2 + 2 times x times (-3) + (-3)^2 &= 13 + (-3)^2\ (x-3)^2 &= 13 + 9\ (x-3)^2 &= 22\ x-3 &= plus.minus sqrt(22)\ x &= 3 plus.minus sqrt(22)\ therefore x &= 3 + sqrt(22), 3 - sqrt(22) $

=== Quadratic Formula
For $a x^2 + b x + c = 0$ where $a,b "and" c in RR and a eq.not 0$. $ x = (-b plus.minus sqrt(b^2 - 4a c))/(2a) $
*Problem*: Solve the quadratic equation using the quadratic formula: $9x^2 + 3x - 2 = 0$. 
*Solution*: Comparing the equation with the standard form of a quadratic equation we get,
$ a=9, b=3 "and" c=-2\ therefore x&=(-3 plus.minus sqrt(3^2 - (4 times 9 times (-2))))/(2 times 9)\ therefore x&=(-3 plus.minus sqrt(9 - (-72)))/18\ therefore x&=(-3 plus.minus sqrt(9+72))/18\ therefore x&=(-3 plus.minus sqrt(81))/18\ therefore x&=(-3 plus.minus 9)/18\ therefore x&=(-1 plus.minus 3)/6\ therefore x&=(-1 + 3)/6,(-1-3)/6\ therefore x&=2/6,-4/6\ therefore x&=1/3,-2/3\ $

=== The Discriminant
The expression "$b^2 - 4a c$" under the radical in the quadratic formula is called the *discriminant*; it is often denoted by "D" or "$Delta$" (upper-case Greek delta). The *discriminant* tells us the nature of the roots/solutions like so:
$ Delta &= 0 && arrow.l.r.double.long "one repeated rational solution"\ Delta &> 0 and Delta "is a perfect square" && arrow.l.r.double.long "two distinct rational solutions"\ Delta &> 0 and Delta "is not a perfect square" && arrow.l.r.double.long "two distinct irrational solutions"\ Delta &< 0 && arrow.l.r.double.long "two complex solutions"\ $

== Other Types Of Equations
=== Radical Equations
*Radical Equations* are equations where the variable is located in a radicand.
*Example*: $sqrt(4x - 33) = 9$, $sqrt(9/(78x)+ 2) = 53 - sqrt(3x + 11)$.
While solving *radical equations* we may encounter extraneous solutions, that is, solutions that don't actually satisfy the equation. We need to check whether each potential solution satisfies the original equation.

=== Absolute Value Equations
*Absolute value* can be thought of as the distance from one point to another. The *absolute value* of $x$ is written as $abs(x)$, where\
$ abs(x) = cases(-x wide & forall x<0, x wide & forall x>=0) $
*Example*: $abs(-32)=32, abs(9)=9$

*Absolute Value Equations* are equations containing absolute values of variables. For *absolute value equations* of the form $abs(a x + b) = c$:
$ c &< 0 arrow.r.double.long "no solution exists"\ c &= 0 arrow.r.double.long "one solution exists"\ c &> 0 arrow.r.double.long "two solutions exist"\ $

== Linear Inequalities And Absolute Value Inequalities
=== Interval Notation
An *interval* is a continuous set of all the real numbers lying between two fixed endpoints. *Interval Notation* is a way to represent intervals. In *interval notation* "(" and ")" are used to represent open intervals and "[" and "]" are used to represent closed intervals. Open intervals don't include their endpoints, whereas closed intervals do.
*Example*:
$ 1 &<=x<=5 arrow.r.l.double.long && x in [1,5]\ 1 &<x<5 arrow.r.l.double.long && x in (1,5)\ 1 &<=x<5 arrow.r.l.double.long && x in [1,5)\ 1 &<x<=5 arrow.r.l.double.long && x in (1,5]\ -infinity&<x<=5 arrow.r.l.double.long && x in (-infinity,5]\ $
#pagebreak()
=== Properties Of Inequalities
These properties apply to all inequalities.
+ *Addition*: $a<b arrow.r.double.long a+c<b+c$
+ *Multiplication*: $ a<b and c>0 arrow.r.double.long a c<b c\ a<b and c<0 arrow.r.double.long a c>b c $
*Problem*: Solve $-3/4x >= -5/8 + 2/3x$
*Solution*:
$ -3/4x - 2/3x &>= -5/8\ (-9 - 8)/12x &>= -5/8\ -17/12x &>= -5/8\ 17/12x &<= 5/8 wide "multiplication property"\ 17/3x &<= 5/2\ x &<= (5 times 3)/(2 times 17)\ therefore x &<= 15/34\ therefore x &in (-infinity,15/34]\ $

=== Solving Compound Inequalities
*Compound Inequalities* are inequalities containing two inequalities in one statement.
*Problem*: Solve $3<=2x+2<6$
*Solution*:
$ 1 &<=2x &&<4\ 1/2 &<=x &&<2\ therefore &x in &&[1/2,2) $

=== Absolute Value Inequalities
Inequalities containing absolute values.
$ abs(X) &< k arrow.r.l.double.long -k < X < k\ abs(X) &> k arrow.r.l.double.long X < -k or X > k\ $
where $X$ is an algebraic expression and $k > 0$. These statements also apply to $<= "and" >=$.
*Problem*: Describe all values $x$ within a distance of 3 from the number 2.
*Solution*:
$ abs(x - 2) &<=3\ therefore -3&<=x-2&&<=3\ -3+2 &<=x &&<=3+2\ -1 &<=x &&<=5\ therefore x &in [-1,5] $
#pagebreak()
= Functions
== Introduction
=== Prerequisite
+ *Cartesian Product*: $ {(a,b) | a in A and b in B and A eq.not emptyset and B eq.not emptyset} $
+ *Relation*: A subset of the cartesian product of two non-empty sets. If $R$ is a relation from set $A$ to set $B$ then $R subset.eq A times B$. Depending on the type of relation we want to create between sets, we can set conditions that need to be met for an element from set $A$ to be related to an element from set $B$; such conditions could be $x<=y$ or "there must be a bidirectional edge connecting the two nodes", etc. $x R y arrow.double.l.r.long "relation" R "holds for" x "and" y $. The set of first components of all the elements in a relation is called the *domain* and the set of second components of all the elements is called the *range*. Each element of the *domain* set is called an *input value* or *independent variable*. Each element of the *range* is called an *output value* or *dependent variable*.\
*Example*: $ A = { "black", "gray", "white" }\ B = {0,1,2,3,...,255}\ R = { (a,b) | a = "black" forall med 0 <= b <= 85 and\ a = "gray" forall med 86<=b<=170 and\ a="white" forall med 171<=b<=255\ a in A and b in B }\ R= {("black",0),("black",1),...,("gray",86),...,("white",171),...} $
Set $B$ is called the *codomain*; it includes all the range elements and may include other elements that aren't in the range.\

=== Definition
A *Function* is a special *relation* where for each input in the domain there is strictly only one output in the range. A *function* where for each output there is exactly one corresponding input is called a *one-to-one function*.\
*Example*: $ A &= {1,2,3,4,5,...}\ B &= {0.5,1.0,1.5,2.0,...}\ R &= {(a,b) | b/2 = a and a in A and b in B}\ therefore R &= {(1,2),(2,4),(3,6),(4,8),...}\ $ $R$ is a function.
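The defining condition (each input in the domain maps to exactly one output) can be checked mechanically for any finite relation. A small Python sketch; the helper name is illustrative, not from the text:

```python
def is_function(relation):
    """True if each first component maps to exactly one second component."""
    outputs = {}
    for a, b in relation:
        if a in outputs and outputs[a] != b:
            return False  # one input with two different outputs
        outputs[a] = b
    return True

# A finite slice of a relation satisfying b/2 = a, i.e. pairs (a, 2a).
R = {(1, 2), (2, 4), (3, 6), (4, 8)}
```

Running `is_function` on `R` confirms it is a function, while a relation such as `{("x", 1), ("x", 2)}` fails the test because one input has two outputs.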
*Functions* are denoted by small letters (usually $f$) with the input(s), separated by commas, placed inside a pair of parentheses, like $y=f(a,b,c)$, where $f$ is the function name, $a,b "and" c$ are the inputs and $y$ is the output. It is read as "$y$ is a function of $a,b "and" c$" or "$y$ is $f$ of $a,b "and" c$". There are also other notations for functions; for example, the function $g$ from $X "to" Y$ can be denoted by $g:X arrow.r Y$.
*Problem*: Given $h(p) = p^2 + 2p$, solve $h(p)=3$.
*Solution*:
$ h(p) &= 3\ p^2 + 2p &= 3\ p^2 + 2p - 3 &= 0\ p^2 + 3p - p - 3 &= 0\ p(p+3)-1(p+3) &= 0\ (p-1)(p+3) &= 0\ therefore p &= 1\ "or" p &= -3\ $

=== Vertical Line Test
If a vertical line drawn on a graph intersects more than one point on the graph, then that graph does not represent a function.

=== Horizontal Line Test
If a horizontal line drawn on a graph intersects more than one point on the graph, then that graph does not represent a one-to-one function.

=== Toolkit Functions
$x in RR$ and $f(x) in RR$ for all the *toolkit functions*, unless specified otherwise.
+ *Constant*: $f(x)=c$, where $c$ is a constant. Domain: $(-infinity,infinity)$, Range: $[c,c]$.
+ *Identity*: $f(x)=x$. Domain: $(-infinity,infinity)$, Range: $(-infinity,infinity)$.
+ *Absolute Value*: $f(x)=abs(x)$. Domain: $(-infinity,infinity)$, Range: $[0,infinity)$.
+ *Quadratic*: $f(x)=x^2$. Domain: $(-infinity,infinity)$, Range: $[0,infinity)$.
+ *Cubic*: $f(x)=x^3$. Domain: $(-infinity,infinity)$, Range: $(-infinity,infinity)$.
+ *Reciprocal*: $f(x)=1/x$, where $x in RR\\{0}$. Domain: $(-infinity,0) union (0,infinity)$. Range: $(-infinity,0) union (0,infinity)$.
+ *Reciprocal Squared*: $f(x)=1/x^2$, where $x in RR\\{0}$. Domain: $(-infinity,0) union (0,infinity)$. Range: $(0,infinity)$.
+ *Square Root*: $f(x)=sqrt(x)$. Domain: $[0,infinity)$. Range: $[0,infinity)$.
+ *Cube Root*: $f(x)=root(3, x)$. Domain: $(-infinity,infinity)$. Range: $(-infinity,infinity)$.
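The horizontal line test has a simple numeric analogue: sample a function at several inputs and check whether any output repeats. A rough sketch (sampling can only disprove one-to-oneness on the sampled points, not prove it in general; the function name is ours):

```python
def fails_horizontal_line_test(f, xs):
    """True if two sampled inputs share an output, so f is not one-to-one."""
    ys = [f(x) for x in xs]
    return len(set(ys)) < len(ys)
```

Applied to the toolkit functions over a symmetric sample of integers, the quadratic and absolute value functions fail (they repeat outputs at $x$ and $-x$), while the cubic does not.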
=== Piecewise-Defined Functions
A *Piecewise-Defined* function is a function that requires more than one formula to define it, depending on the value of its input. *Example*: absolute value functions.

== Rates Of Change And Behavior Of Graphs
*Rate Of Change* describes how the output value changes relative to the input value. The Greek letter delta ($Delta$) denotes the change in a quantity. *Example*: the change in $x$ is denoted $Delta x$.
*Average Rate Of Change* = $(Delta y)/(Delta x) = (y_2 - y_1)/(x_2 - x_1)$, from $(x_1,y_1) "to" (x_2,y_2)$. Same as the slope.
*Problem*: Find the average rate of change of $f(x)=x^2 + 2x - 8$ on the interval $[5,a]$ in simplest form in terms of $a$.\
*Solution*:
$ f(5) &=25+10-8\ &=27\ x_1=5 &"and" y_1=27\ f(a) &=a^2 + 2a - 8\ therefore x_2=a &"and" y_2= a^2 + 2a -8\ therefore "Avg rate of change" &= (y_2 - y_1)/(x_2 - x_1)\ &=(a^2 + 2a - 8 - 27)/(a - 5)\ &=(a^2 + 2a -35)/(a-5)\ &=(a^2 +7a -5a -35)/(a-5)\ &=(a(a +7) -5(a +7))/(a-5)\ &=((a-5)(a+7))/(a-5)\ &=(a+7)\ therefore "Avg rate of change" &= a+7 $

=== Extrema
- *Increasing Function*: If $f$ is a function defined on the interval $Q$ and for every $a<b in Q$, $f(a)<=f(b)$, then $f$ is an *Increasing Function*.
- *Decreasing Function*: If $f$ is a function defined on the interval $Q$ and for every $a<b in Q$, $f(a)>=f(b)$, then $f$ is a *Decreasing Function*.
- *Strictly Increasing Function*: If $f$ is a function defined on the interval $Q$ and for every $a<b in Q$, $f(a)<f(b)$, then $f$ is a *Strictly Increasing Function*.
- *Strictly Decreasing Function*: If $f$ is a function defined on the interval $Q$ and for every $a<b in Q$, $f(a)>f(b)$, then $f$ is a *Strictly Decreasing Function*.
- *Local/Relative Minimum*: For a function $f$, $f(b)$ is a *local minimum* on the interval $(a,c)$ if $a<b<c$ and for every $x in (a,c)$, $f(x)>=f(b)$. A value of a function where it changes from decreasing to increasing. The plural form is *Local Minima*.
- *Local/Relative Maximum*: For a function $f$, $f(b)$ is a *local maximum* on the interval $(a,c)$ if $a<b<c$ and for every $x in (a,c)$, $f(x)<=f(b)$. A value of a function where it changes from increasing to decreasing. The plural form is *Local Maxima*.
- *Local Extrema*: *Local Minima* and *Local Maxima* together make up the *Local Extrema*.
- *Absolute Minimum*: $f(c)$ is the absolute minimum of $f$ if $f(c)<=f(x)$ for all $x$ in the domain of $f$.
- *Absolute Maximum*: $f(c)$ is the absolute maximum of $f$ if $f(c)>=f(x)$ for all $x$ in the domain of $f$.
- *Absolute Extrema*: The *absolute maximum* and *absolute minimum* together make up the *Absolute Extrema*.
#linebreak()
== Composition of Functions
The process of combining two or more functions such that the output of one function becomes the input of another function.
$ (f compose g)(x)=f(g(x)) $
The LHS is read as "$f$ composed with $g$ at $x$". The RHS is read as "$f$ of $g$ of $x$". The open circle "$compose$" is called the composition operator. The result of a composition is a composite function.

=== Properties
- Composition of functions is not necessarily commutative.
- The domain of $(f compose g)(x)$ or $f(g(x))$ is the set of values in the domain of $g$ for which $g$ produces values that are in the domain of $f$.

=== Problems
+ The gravitational force on a planet a distance $r$ from the sun is given by the function $G(r)$. The acceleration of a planet subjected to any force $F$ is given by the function $a(F)$. Form a meaningful composition of these two functions, and explain what it means.\
*Solution:* $a(G(r))$ will give the acceleration of a planet that is a distance $r$ from the sun.
+ Find the domain of $(f compose g)(x)$ where $f(x)=5/(x-1)$ and $g(x)=4/(3x-2)$.\
*Solution:*\
Step one: Find the domain of the innermost function ($g$).\
Find where $g(x)$ is undefined, i.e. $g(x)arrow.t$.\
$g(x)arrow.t$ when its denominator $3x-2=0$.\
$ 3x-2 &=0 quad ("say")\ 3x &=2\ x &=2/3\ therefore g(x) arrow.t "when" x&=2/3\ $\
Step two: Find the domain of the outer function ($f$).\
Find $x$ where $f(x)$ is undefined, i.e. $f(x)arrow.t$.\
$f(x)arrow.t$ when its denominator $x-1=0$.\
$ x-1 &=0 quad ("say")\ x &=1\ therefore f(x) arrow.t "when" x&=1\ therefore x &in (-infinity,1) union (1,infinity)\ $\
Step three: Find the values of $x$ for which $g$ is defined and $g(x)$ is in the domain of $f$.\
$therefore$ Find the values of $x$ for which $g(x)$ is defined, i.e. $g(x)arrow.b$, and $g(x) eq.not 1$.\
$ 4/(3x-2) &=1 quad ("say")\ 4 &=3x-2\ 4+2 &=3x\ 3x &=6\ x &=2\ therefore "the domain of" (f compose g)(x) &= (-infinity, infinity) - {2/3,2} $
+ Find the domain of $(f compose g)(x)$ where $f(x)=1/(x-2)$ and $g(x)=sqrt(x+4)$.\
*Solution:*\
Step 1: Find the domain of the innermost function, i.e. $g$.\
$g(x)arrow.t$ when $x + 4 < 0$.\
$ therefore g(x)arrow.t "when" x < -4\ therefore g(x)arrow.b forall x in [-4,infinity)\ $\
Step 2: Find the domain of the next innermost function, i.e.
$f$.\
$f(x)arrow.t "when" x - 2 = 0$.\
$ therefore f(x)arrow.t "when" x &=2\ therefore f(x)arrow.b forall x &in (-infinity,2) union (2,infinity) $\
Step 3: Find what values of $x$ will yield 2 for $g(x)$.\
$ g(x) &=2 quad ("say")\ therefore sqrt(x+4)&=2\ therefore x+4 &= 2^2\ therefore x &= 4 - 4\ therefore x &= 0\ $\
$therefore$ The domain of $(f compose g)(x)$ is $[-4,0) union (0, infinity)$.

=== Decomposition Of Functions
We can decompose (break down) a complex function into two or more simpler functions and represent it as a composition of those simpler functions.\
Example: We can decompose $f(x)=4/(3-sqrt(4+x^2))$ into $g(x)=sqrt(4+x^2)$ and $h(x)=4/(3-x)$ and represent it as the composite function $(h compose g)(x)=4/(3-(sqrt(4 + x^2)))$.

== Transformation Of Functions
Transformations change a function in some way.

=== Shifting
Shifting is a type of function transformation where we move the function up, down, right or left.

*Vertical Shift:* Adding a constant to, or subtracting it from, the function. Example: to vertically shift $f(x)$ by $k$ units we add $k$ to $f(x)$; the transformed function is $f(x)+k$.\
*Horizontal Shift:* Adding a constant to, or subtracting it from, the input. For example, the horizontal shift of $f(x)$ by $k$ units is $g(x)=f(x+k)$. If $k$ is negative then $f(x)$ will shift $abs(k)$ units right, else $f(x)$ will shift $k$ units left.

=== Reflections
*Vertical Reflection:* Given a function $f(x)$, the new function $h(x)=-f(x)$ is a vertical reflection of the function, sometimes called a reflection about (or over, or through) the x-axis.\
*Horizontal Reflection:* Given a function $f(x)$, the new function $h(x)=f(-x)$ is a horizontal reflection of the function, sometimes called a reflection about the y-axis.

=== Even And Odd Functions
*Even Function:* If the horizontal reflection of a function is the same as the original function then it is an Even Function.
$ f(x)=f(-x) $\
The graph of an even function is symmetric about the y-axis.
*A function is symmetric about the y-axis if, for every point $(x,y)$ on it, the point $(-x, y)$ is also on it.*\
*Odd Function:* If we horizontally and vertically reflect a function and get the original function back, then that function is an Odd Function.
$ f(x)=-f(-x) $\
The graph of an odd function is symmetric about the origin. *A function is symmetric about the origin if, for every point $(x,y)$ on it, the point $(-x, -y)$ is also on it.*

=== Stretches And Compressions
- *Vertical:*\ Given a function $f$, $g(x)=a f(x)$ where $ g(x)=cases( "vertical stretch if" a>1, "vertical compression if" 0<a<1, "vertical compression with vertical reflection if" -1<a<0, "vertical stretch with vertical reflection if" a< -1, ) $
- *Horizontal:*\ Given a function $f$, $g(x)= f(a x)$ where $ g(x)=cases( "horizontal compression by" 1/a "if" a>1, "horizontal stretch by" 1/a "if" 0<a<1, "horizontal stretch by" 1/a "with horizontal reflection if" -1<a<0, "horizontal compression by" 1/a "with horizontal reflection if" a< -1, ) $

=== Multiple Transformations
For multiple transformations, follow the same order as *PEMDAS*.

== Absolute Value Functions
=== Absolute Value Equations
Equations of the form $ |A| = B $ where $A "and" B in RR$ and $B>=0$; $B=A$ if $A>=0$, else $B=-A$. In an *absolute value equation* the unknown value is inside the absolute value bars.
Example: $|x|=9, |x-32|=102$
*Problem 1:* For the function $f(x)=|2x-1|-3$, find the values of $x$ such that $f(x)=0$.\
*Solution:*
$ "when" f(x)&=0\ therefore 0&=|2x-1|-3\ therefore 3&=|2x-1|\ "if" (2x-1)&>=0 "then"\ 3&=2x-1\ 3+1&=2x\ 4&=2x\ therefore x&=2\ "if" (2x-1)&<0 "then"\ 3&=-(2x-1)\ -3&=2x-1\ -3+1&=2x\ -2&=2x\ therefore x&=-1\ therefore x=-1 "or" x=2 quad "when" f(x)=0 $
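The two-branch case analysis used in Problem 1 can be written out directly. A minimal Python sketch for $|a x + b| = c$ with $c > 0$; the function names here are ours, not from the text:

```python
def solve_abs(a, b, c):
    """Solutions of |a*x + b| = c for c > 0: the two branches a*x + b = +/-c."""
    assert a != 0 and c > 0
    return sorted([(c - b) / a, (-c - b) / a])

def f(x):
    # The function from Problem 1.
    return abs(2 * x - 1) - 3
```

For Problem 1 this gives `solve_abs(2, -1, 3)`, i.e. the same two solutions $x=-1$ and $x=2$, and substituting either back into `f` returns 0.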
https://github.com/HenkKalkwater/aoc-2023
https://raw.githubusercontent.com/HenkKalkwater/aoc-2023/master/parts/day-1-2.typ
typst
// Spelled-out digit words; a word's position in this array is its value.
#let numbers = (
  "zero", "one", "two", "three", "four", "five",
  "six", "seven", "eight", "nine", "ten"
)
// Matches either a spelled-out digit word or a literal digit.
#let num_reg = "(" + numbers.join("|") + "|\d)"
// Convert a matched token to its numeric value.
#let to_num = (str) => {
  let val = numbers.position(x => x == str)
  if val == none { val = int(str) }
  val
}
#let solve = (input) => {
  // A lazy prefix finds the first number on a line; a greedy prefix finds
  // the last one, which handles overlapping words like "eightwothree".
  let first_reg = regex("[a-z]*?" + num_reg)
  let last_reg = regex(".*" + num_reg + "[a-z]*")
  let answ = input
    .split("\n")
    .filter(line => line.len() > 0)
    .map(line => {
      let first = to_num(line.match(first_reg).captures.at(0))
      let last = to_num(line.match(last_reg).captures.at(0))
      first * 10 + last
    })
    .sum()
  return answ
}
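The same first/last-number extraction can be cross-checked outside Typst. A hypothetical Python port of the logic (not part of the original solution), using the same lazy-versus-greedy regex idea and covering the digit words zero through nine:

```python
import re

WORDS = ["zero", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]
# Matches a spelled-out digit word or a literal digit.
NUM = "(" + "|".join(WORDS) + r"|\d)"

def to_num(tok):
    # A word's index in WORDS is its value; otherwise parse the digit.
    return WORDS.index(tok) if tok in WORDS else int(tok)

def solve(text):
    total = 0
    for line in text.splitlines():
        if not line:
            continue
        # Lazy prefix -> earliest match; greedy prefix -> latest match.
        first = to_num(re.match(".*?" + NUM, line).group(1))
        last = to_num(re.match(".*" + NUM, line).group(1))
        total += first * 10 + last
    return total
```

On the Advent of Code 2023 day 1 part 2 sample input this returns 281, matching the published example total.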
https://github.com/JWangL5/CAU-ThesisTemplate-Typst
https://raw.githubusercontent.com/JWangL5/CAU-ThesisTemplate-Typst/master/ref/template.typ
typst
MIT License
#import "./booktab.typ": * #import "@preview/codly:0.2.0": * #import "@preview/codelst:2.0.1": sourcecode #import "./acronyms.typ": acro, usedAcronyms, acronyms #let project( kind: "硕士", title: "中国农业大学论文模板", abstract-en: [], abstract-zh: [], title-en:[], title-zh:[], authors: [], teacher: [], // co-teacher:[], degree: [], major: [], field: [], college: [], signature: "", classification:[], security:[], acknowledgement: [], author-introduction: [], student-id:[], year: [], month: [], day: [], outline-depth: 3, draft:true, blind-review: false, logo:"./CAU_Logo.png", ref-path: "", ref-style:"emboj", acro-path: "", body ) = { if(blind-review){ authors = hide[#authors] teacher = hide[#teacher] major = hide[major] field = hide[#field] student-id = hide[#student-id] acknowledgement = hide[#acknowledgement] author-introduction = hide[#author-introduction] draft = false } // Set the document's basic properties. set document(title: title) set page( paper: "a4", margin: (left: 25mm, right: 25mm, top: 30mm, bottom: 25mm), background: if draft {rotate(-12deg, text(80pt, font:"Sigmar One", fill: silver)[DRAFT])} else {}, ) set text(font: ("Times New Roman", "SimSun"), size: 12pt, hyphenate: false) // show strong: set text(font: ("Times New Roman", "SimHei"), weight: "semibold", size: 12pt) show strong: set text(font: ("Times New Roman", "FZXiaoBiaoSong-B05S"), size: 11pt, baseline: -0.5pt) set par(leading: 12pt, first-line-indent: 2em) set list(indent: 1em) set enum(indent: 1em) set highlight(fill: yellow) set heading(numbering: "1.1") set heading(numbering: (..n) => { if n.pos().len() > 1 { numbering("1.1", ..n) } }) show heading.where(level: 1): it => [ #pagebreak(weak: true) #block(width: 100%)[ #set align(center) #v(6pt,weak: false) #text(font: ("Times New Roman","Microsoft YaHei"), weight: "bold", 16pt)[#it.body] #v(6pt,weak: false) ] ] let titlepage = { let justify(s) = { if type(s) == "content" and s.has("text") { s = s.text } assert(type(s) == "string") 
s.clusters().join(h(1fr)) } set text(12pt) table( columns: (38pt, 1em, 1fr, 50pt, 1em, auto), rows:(15.6pt, 15.6pt), stroke:0pt+white, align: left+horizon, inset:0pt, justify[分类号], [:], [#classification], justify[单位代码], [:], [10019], justify[密级], [:], [#security], justify[学号], [:], [#student-id] ) v(28pt) align(center, box(image(logo, fit:"stretch", width: 60%))) // align(center, image(logo, width:48%)) align(center)[#text(18pt, weight: 700, kind+"学位论文")] v(15.6pt) align(center)[ #set par(leading: 14pt) #text(22pt, font:("Times New Roman", "SimHei"), weight: 700, title-zh) ] align(center)[ #set text(16pt, font:"Time New Roman", weight: 700, baseline:-8pt) #title-en ] v(40pt) let table_underline(s) = [ #set text(14pt, baseline:5pt) #s #v(-0.5em) #line(length: 100%, stroke: 1pt) ] align(center)[ #set text(14pt) #table( columns: (150pt, 2pt, 40%), rows:27.3pt, align:center+horizon, stroke: none, justify[研究生], [:], table_underline[#authors], justify[指导教师],[:], table_underline[#teacher], // justify[合作指导教师],[:],table_underline[#co-teacher], justify[申请学位门类级别], [:], table_underline[#degree], justify[专业名称], [:], table_underline[#major], justify[研究方向], [:], table_underline[#field], justify[所在学院], [:], table_underline[#college] ) ] v(75pt) align(center, year+"年"+month+"月") pagebreak() } let statementpage = { set text(font:"SimSun", 12pt) text(font:"SimHei", 22pt)[#align(center)[独创性声明]] [本人声明所呈交的学位论文是我个人在导师指导下进行的研究工作及取得的研究成果。尽我所知,除了文中已经注明引用和致谢的内容外,论文中不包含其他人已经发表或撰写过的研究成果,也不包含本人为获得中国农业大学或其他教育机构的学位或证书而使用过的材料。与我一同工作的同志对本研究所做的任何贡献均已在论文中作了明确的说明并表达了谢意。] v(4em) grid( columns: (2em, auto, 1fr, auto), [], [学位论文作者签名:], [], text("时间: "+year+"年"+month+"月"+day+"日"), ) v(4em) text(font:"SimHei", 22pt)[#align(center)[关于学位论文使用授权的说明]] text(font:"SimSun", 12pt)[本人完全了解中国农业大学有关保留、使用学位论文的规定。本人同意中国农业大学有权保存及向国家有关部门和机构送交论文的纸质版和电子版,允许论文被查阅和借阅;本人同意中国农业大学将本学位论文的全部或部分内容授权汇编录入《中国博士学位论文全文数据库》或《中国优秀硕士学位论文全文数据库》进行出版,并享受相关权益。\ #h(2em)*(保密的学位论文在解密后应遵守此协议)*] v(4em) grid( columns: (2em, auto, 1fr, auto), [], 
[学位论文作者签名:], [], text("时间: "+year+"年"+month+"月"+day+"日"), ) v(2em) grid( columns: (2em, auto, 1fr, auto), [], [导师签名:], [], text("时间: "+year+"年"+month+"月"+day+"日"), ) if draft{ }else{ place(top+left, dx: 47%, dy: 72%, rotate(-24deg, image("./CAU_Stamp.png", width: 100pt))) place(top+left, dx: 47%, dy: 25%, rotate(-24deg, image("./CAU_Stamp.png", width: 100pt))) } if(signature != ""){ place(top+left, dx: 29%, dy: 25%, image("../"+signature, width: 100pt)) place(top+left, dx: 29%, dy: 68%, image("../"+signature, width: 100pt)) } pagebreak() } let abstractpage={ set page(numbering: "I") counter(page).update(1) align(center)[ #heading(outlined: true, level: 1, numbering:none, [摘要])] v(16pt,weak: false) set par(justify: true) [#h(2em) #abstract-zh] align(center)[ #heading(outlined: false, level: 1, numbering: none, [Abstract])] v(16pt,weak: false) set par(justify: true) [#abstract-en] } let contentspage={ set page(numbering: "I") show outline: set heading(level: 1, outlined: true) heading(level: 1, numbering: none)[目录] v(16pt,weak: false) outline(depth: outline-depth, indent: n => [#h(2em)] * n, title: none) } let illustrationspage={ // set text(font: sunfont, size: 12pt) set page(numbering: "I") // set par(leading: 12pt) heading(level: 1, numbering: none)[插图和附表清单] v(16pt,weak: false) outline(title:none, target: figure.where(kind:image)) set par(first-line-indent: 0em) outline(title:none, target: figure.where(kind:table)) } let acronymspage={ // set text(font: sunfont, size: 12pt) set page(numbering: "I") // set par(leading: 12pt) heading(level: 1, numbering: none)[缩略词表] v(16pt,weak: false) set text(font: ("Times New Roman", "SimHei"), size: 10.5pt) line(length: 100%); v(-0.5em) grid(columns: (20%, 1fr, 30%), align(center)[缩略词], [英文全称], align(center)[中文全称]) v(-0.5em); line(length: 100%) set text(font: ("Times New Roman", "SimSun"), size: 10.5pt) locate(loc => usedAcronyms.final(loc) .pairs() .filter(x => x.last()) .map(pair => pair.first()) .sorted() .map(key => grid( 
columns: (20%, 1fr, 30%), align(center)[#eval(acronyms.at(key).at(0), mode: "markup")], eval(acronyms.at(key).at(1), mode: "markup"), align(center)[#eval(acronyms.at(key).at(2), mode: "markup")], ) ) .join() ) line(length: 100%) } let acknowledgementpage = [ = 致谢 #acknowledgement ] let authorpage = [ = 个人简介 #author-introduction ] let reference = { show bibliography: set par(leading: 1em, first-line-indent: 0em) show bibliography: set text(size: 10.5pt) heading(level: 1)[参考文献] if ref-style == "emboj" { bibliography(ref-path, title: none, style: "the-embo-journal.csl") }else{ bibliography(ref-path, title: none, style: ref-style) } heading(level: 6, numbering: none, outlined: false)[] } let bodyconf() = { set par(justify: true) set page( numbering: "1", number-align: center, header:[ #set text(9pt, font:("Times New Roman", "SimSun")) #text("中国农业大学"+kind+"学位论文") #h(1fr) #locate(loc => { let eloc = query(selector(heading).after(loc), loc).at(0).location() query(selector(heading.where(level:1)).before(eloc), eloc).last().body.text }) #v(-3.8pt) #line(length: 100%, stroke: 3pt) #v(-8pt) #line(length: 100%, stroke: 0.5pt) ], header-ascent: 10%, ) show heading: it => { let levels = counter(heading).at(it.location()) if it.level == 1 { if levels.at(0) != 1 { colbreak(weak:false) } block(width:100%, breakable: false, spacing: 0em)[ #set align(center) #v(16pt,weak: false) #text(font: ("Times New Roman","Microsoft YaHei"), weight: "bold", 16pt)[#it.body] #v(16pt,weak: false) ] } else if it.level == 2 { block(breakable: false, spacing: 0em)[ #v(14pt, weak: false) #text(font: ("Times New Roman","SimHei"), 14pt, weight: "regular")[#it] #v(14pt, weak: false) ] } else if it.level == 3 { block(breakable: false, spacing: 0em)[ #v(12pt, weak: false) #text(font: ("Times New Roman","SimHei"), 12pt, weight: "regular")[#it] #v(12pt, weak: false) ] } par()[#text(size:0.0em)[#h(0em)]] } set figure.caption(separator: [. 
]) show figure.where(supplement: [表]): set figure.caption(position: top) show figure.caption: set text(font:("Times New Roman","SimHei"), 10.5pt) show figure.where(kind: image): set figure( numbering: i=> numbering("1-1", ..counter(heading.where(level: 1)).get(), i) ) show heading.where(level: 1): it =>{ counter(figure.where(kind: table)).update(0) counter(figure.where(kind: image)).update(0) it } show figure.where(kind: image): it => { set text(font:("Times New Roman","SimSun"), 10.5pt) it v(-4pt) par()[#text(size:0.0em)[#h(0em)]] } show figure.where(kind: table): it => { set text(font:("Times New Roman","SimSun"), 10.5pt) it v(-1em) par()[#text(size:0.0em)[#h(0em)]] } show list:it =>{ it v(-1em) par()[#text(size:0.0em)[#h(0em)]] } show enum:it =>{ it v(-1em) par()[#text(size:0.0em)[#h(0em)]] } show math.equation.where(block:true):it =>{ it v(-1em) par()[#text(size:0.0em)[#h(0em)]] } show: codly-init.with() show raw.where(block: true): set par(justify: false) show raw.where(block:true):it =>{ it v(-4pt) par()[#text(size:0.0em)[#h(0em)]] } codly( zebra-color: rgb("#FAFAFA"), stroke-width: 2pt, fill: rgb("#FAFAFA"), display-icon: false, padding: 0.5em, display-name: false, ) [ #body #reference #acknowledgementpage #authorpage ] disable-codly() } [ #titlepage #statementpage #abstractpage #contentspage #illustrationspage #acronymspage #show: body => bodyconf() ] } #let l(it) = align(left)[#it] #let u(it) = underline(offset: 5pt)[#it] #let legend(it) = block(breakable: false, above: 8pt)[ #set text(size: 9pt) #set par(first-line-indent: 2em, leading:1em) #align(left)[#h(2em) #it] ]
https://github.com/clysto/polylux-ustc-theme
https://raw.githubusercontent.com/clysto/polylux-ustc-theme/main/examples/example.typ
typst
#import "../ustc.typ": * #show: ustc-theme.with( aspect-ratio: "16-9", footer: "University of Science and Technology of China", ) #set par(leading: 1em) #title-slide( title: "中国科学技术大学Polylux主题 (USTC Polylux Theme)", subtitle: "毛亚琛" ) #slide(title: "引言(Introduction)", alignment: horizon, title-size: 30pt)[ - This is a theme for the USTC Polylux Template. - #lorem(12) - It is written in Typst. #footnote[#lorem(20)] - It is awesome. #footnote[#lorem(10)] hello 你好 ] #slide(title: "引言(Introduction)", alignment: horizon, title-size: 30pt)[ - This is a theme for the USTC Polylux Template. - #lorem(12) - It is written in Typst. - It is awesome. hello ] #focus-slide[ = Hello World == 你好世界 ] #slide( title: "What is the challenges?", alignment: horizon, )[ 总结有如下挑战: - #lorem(4) ] #slide( title: "What is the challenges?", alignment: horizon, )[ #lorem(80) ]
https://github.com/faria-s/CV
https://raw.githubusercontent.com/faria-s/CV/main/example.typ
typst
#import "twentysecondcv.typ": * #set text(font: "PT Sans") #main( [ #profile( name: "<NAME>", jobtitle: "Software Engineering Student", ) #show_contacts( ( ( icon: "linkedin", fa-set: "Brands", text: "<NAME>", ), ( icon: "github", fa-set: "Brands", text: "faria-s", ), ( icon: "phone", fa-set: "Free Solid", text: "939 772 052", ), ( icon: "email", fa-set: "Free Solid", text: "<EMAIL>", ), ) ) #profile_section("Skills") #show_interests(( ( interest: "Haskell", score:0.5, ), ( interest: "C", score: 0.3, ), ( interest: "Html/CSS", score: 0.4, ), )) #profile_section("Languages") #show_interests(( ( interest: "Portuguese (native)", score: 1, ), ( interest: "English (Fluent)", score: 0.8, ), )) ], [ #body_section("Education") #twentyitem( period: [ Sep. 2020 - \ Jun. 2023 ], title: "Secondary Education", note: [EBS Dr. Jaime Magalhães Lima], addtional_note: "Finished secondary education in Socioeconomic Sciences" ) #twentyitem( period: [ Sep. 2023 - \ Ongoing ], title: "Bachelor's in Software Engineering", note: [Universidade do Minho], addtional_note: "Currently enrolled in the first year of the major." ) #body_section("Projects") #twentyitem( period: [ Oct. 2017 - \ Now ], title: "<NAME>", addtional_note: "Tool: Haskell", note: "2023", body: "A project done for a class (Laboratórios de Informática 1 - Software Labs 1) with the objective of making a Donkey Kong remake." ) #body_section("Experience") #twentyitem( period: [2023 - Ongoing], title: "Cesium", body: "Collaborator at CAOS" ) #twentyitem( period: [2023 - Ongoing], title: "Cesium", body: "Collaborator at DMC" ) #twentyitem( period: [2023 - Ongoing], title: "<NAME>", body: "Audit committee Secretary" ) ] )
https://github.com/AU-Master-Thesis/thesis
https://raw.githubusercontent.com/AU-Master-Thesis/thesis/main/sections/0-predoc/acronym-index.typ
typst
MIT License
#import "../../lib/mod.typ": * // == Acronym Index // #v(1em) // #todo[Make acronym table break, so it starts on this page] // #print-index(title: "", delimiter: "") #let print-acronyms(level: 1, outlined: false, sorted:"", title:"Acronyms Index", delimiter:":") = { //Print an index of all the acronyms and their definitions. // Args: // level: level of the heading. Default to 1. // outlined: make the index section outlined. Default to false // sorted: define if and how to sort the acronyms: "up" for alphabetical order, "down" for reverse alphabetical order, "" for no sort (print in the order they are defined). If anything else, sort as "up". Default to "" // title: set the title of the heading. Default to "Acronyms Index". Passing an empty string will result in removing the heading. // delimiter: String to place after the acronym in the list. Defaults to ":" // assert on input values to avoid cryptic error messages assert(sorted in ("","up","down"), message:"Sorted must be a string either \"\", \"up\" or \"down\"") if title != ""{ heading(level: level, outlined: outlined)[#title] } let cells = () state("acronyms",none).display(acronyms=>{ // Build acronym list let acr-list = acronyms.keys() // order list depending on the sorted argument if sorted!="down"{ acr-list = acr-list.sorted() }else{ acr-list = acr-list.sorted().rev() } // print the acronyms let cells = () for acr in acr-list { let acr-long = acronyms.at(acr) let acr-long = if type(acr-long) == array { acr-long.at(0) } else { acr-long } cells.push(acr) cells.push(acr-long) } assert(calc.rem(cells.len(), 2) == 0) let split = 56 // let split = int(cells.len() / 2) let first-half = cells.slice(0, split) let last-half = cells.slice(split) show table.cell.where(x: 0): strong let t(cells) = tablec( columns: 2, alignment: (right, left), header: table.header([Acronym], [Definition]), ..cells ) grid( columns: 2, t(first-half), t(last-half), ) }) } #print-acronyms(level: 1, outlined: true)
https://github.com/lphoogenboom/typstThesisDCSC
https://raw.githubusercontent.com/lphoogenboom/typstThesisDCSC/master/chapters/tableOfContents.typ
typst
#import "../typFiles/specialChapter.typ": * // You should not have to edit this file #show: specialChapter.with( content: outline(title: none,), chapterTitle: "Table of Contents", showInOutline: false )
https://github.com/paugarcia32/CV
https://raw.githubusercontent.com/paugarcia32/CV/main/modules_es/personalSummary.typ
typst
Apache License 2.0
#import "../brilliant-CV/template.typ": *

#cvSection("Personal Summary")

Telematics Engineering graduate from the UPC at Castelldefels, with a solid background in IoT and cybersecurity. Passionate about technology and always eager to learn, I look to contribute my knowledge and enthusiasm to innovative and challenging projects.
https://github.com/xrarch/books
https://raw.githubusercontent.com/xrarch/books/main/xrcomputerbook/chapamtsu.typ
typst
#import "@preview/tablex:0.0.6": tablex, cellx, colspanx, rowspanx

= Amtsu Peripheral Bus

== Introduction

The XR/computer platform supports up to 4 low-speed peripheral devices connected to the system via the Amtsu peripheral bus. These devices include things such as mice and keyboards.

The Amtsu bus interface is presented as a set of Citron ports. There are four data ports, *SELECT* (0x30), *MID* (0x31), *A* (0x33), and *B* (0x34). There is one command port (0x32).

The *SELECT* port contains an ID number of the currently selected Amtsu device, in the range [0, 5]. Writing into this port selects a different device. ID 0 is reserved for the command set of the Amtsu controller.

The *MID* port is read-only and contains the Model ID of the currently selected device. This is a unique identifier for the types of peripheral devices.

The *A*, *B*, and command ports are mapped to virtual *A*, *B*, and command ports of the selected peripheral device. Note that these are actually transmitted via a simple protocol over a relatively slow serial connection, and therefore take many more cycles to access than most Citron ports.

Note that when interrupts are enabled for an Amtsu peripheral, the IRQ number is 0x30 + N where N is the device ID.

The following is a table of the currently defined Amtsu model identifiers:

#tablex(
  columns: (1fr, 3fr),
  align: center,
  cellx([
    #set text(fill: white)
    #set align(center)
    *Name*
  ], fill: rgb(0,0,0,255)),
  cellx([
    #set text(fill: white)
    #set align(center)
    *MID*
  ], fill: rgb(0,0,0,255)),
  [AISA Mouse], [0x4D4F5553],
  [AISA Keyboard], [0x8FC48FC4],
)

When ID 0 is selected, the Amtsu controller itself accepts commands through the Citron ports.
It accepts the following commands:

#box([
#tablex(
  columns: (1fr, 14fr),
  cellx([
    #set text(fill: white)
    #set align(center)
    *\#*
  ], fill: rgb(0,0,0,255)),
  cellx([
    #set text(fill: white)
    #set align(center)
    *Function*
  ], fill: rgb(0,0,0,255)),
  [0x1], [Enable interrupts from the device number specified in data port *B*.],
  [0x2], [Reset the devices on the Amtsu peripheral bus.],
  [0x3], [Disable interrupts from the device number specified in data port *B*.],
)
])

#box([
== Keyboard

There is a standard keyboard device for the Amtsu bus. The keyboard is a simple input device designed to operate at the speed of a human hand (that is, very slowly relative to the microprocessor). When the IRQ for a keyboard device is enabled in the Amtsu controller, an interrupt will be signaled whenever a key is pressed or released.

When selected in the Amtsu interface, this device presents several commands:

#tablex(
  columns: (1fr, 14fr),
  cellx([
    #set text(fill: white)
    #set align(center)
    *\#*
  ], fill: rgb(0,0,0,255)),
  cellx([
    #set text(fill: white)
    #set align(center)
    *Function*
  ], fill: rgb(0,0,0,255)),
  [0x1], [Pop a scancode from the keyboard into data port *A*. If the 15th bit of the scancode is set, that is, it has been OR'ed with 0x8000, then the key was released and the true scancode is the low 14 bits. Otherwise, it was pressed.],
  [0x2], [Reset the keyboard.],
  [0x3], [If the scancode in data port *A* is currently pressed, then set data port *A* to 1. Otherwise, set it to 0.],
)
])

#box([
The layout of the keyboard is shown below. Scancodes for each key are labeled in the center of the key:

#image("layout.png")
])

#box([
== Mouse

There is a standard mouse device for the Amtsu bus. The mouse is a simple pointing input device. There are three buttons. When the IRQ for a mouse device is enabled in the Amtsu controller, an interrupt will be signaled whenever the mouse moves, and whenever one of the buttons is pressed or released.
])

#box([
When selected in the Amtsu interface, this device presents several commands:

#tablex(
  columns: (1fr, 14fr),
  cellx([
    #set text(fill: white)
    #set align(center)
    *\#*
  ], fill: rgb(0,0,0,255)),
  cellx([
    #set text(fill: white)
    #set align(center)
    *Function*
  ], fill: rgb(0,0,0,255)),
  [0x1], [Read information from the last event into the data ports. Data port *A* is set to a value that indicates the type of the event. Data port *B* is set to an argument for the event.],
  [0x2], [Reset the mouse.],
)
])

=== Mouse Events

#box([
When command 0x1 is written to the command port, information from the last mouse event is latched into data ports *A* and *B*. The event types reported in data port *A* have the following meaning:

#tablex(
  columns: (1fr, 14fr),
  cellx([
    #set text(fill: white)
    #set align(center)
    *0x1*
  ], fill: rgb(0,0,0,255)), [Button pressed.],
  cellx([
    #set text(fill: white)
    #set align(center)
    *0x2*
  ], fill: rgb(0,0,0,255)), [Button released.],
  cellx([
    #set text(fill: white)
    #set align(center)
    *0x3*
  ], fill: rgb(0,0,0,255)), [Mouse moved.],
)
])

#box([
When the event type indicates a button press or release, data port *B* reports a number representing the mouse button:

#tablex(
  columns: (1fr, 14fr),
  cellx([
    #set text(fill: white)
    #set align(center)
    *0x1*
  ], fill: rgb(0,0,0,255)), [Left button.],
  cellx([
    #set text(fill: white)
    #set align(center)
    *0x2*
  ], fill: rgb(0,0,0,255)), [Right button.],
  cellx([
    #set text(fill: white)
    #set align(center)
    *0x3*
  ], fill: rgb(0,0,0,255)), [Middle button.],
)
])

#image("mousedelta.png")

When the event type indicates mouse movement, the change in mouse position is indicated in a 32-bit value called the "mouse delta" which is latched into data port *B*. The upper 16 bits of this value contain the change in X position, and the lower 16 bits contain the change in Y position. These are both 16-bit signed (two's complement) integers. X represents "left-right" and Y represents "up-down".
A negative change indicates a movement to the "left" or "up", and a positive change represents a movement to the "right" or "down".
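The bit layouts above can be decoded mechanically. The following is an illustrative sketch (Python; the function names are made up and not part of this manual):

```python
def to_signed16(value):
    # Interpret a 16-bit field as a two's-complement signed integer.
    return value - 0x10000 if value & 0x8000 else value

def decode_mouse_delta(delta):
    # Upper 16 bits: change in X; lower 16 bits: change in Y.
    dx = to_signed16((delta >> 16) & 0xFFFF)
    dy = to_signed16(delta & 0xFFFF)
    return dx, dy

def decode_scancode(code):
    # Bit 15 set means the key was released; the true scancode
    # is in the low 14 bits.
    released = bool(code & 0x8000)
    return code & 0x3FFF, released
```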
https://github.com/lyzynec/orr-go-brr
https://raw.githubusercontent.com/lyzynec/orr-go-brr/main/01/main.typ
typst
#import "../lib.typ": *

#knowledge[
#question(name: [Give a rigorous definition of a _local minimum_ (and _maximum_) and explain how it differs from a _global minimum_ (_maximum_).])[
A local minimum (maximum) $x^*$ of a function is a point in the domain of the function for which there exists some $epsilon > 0$ such that the values at all points closer than $epsilon$ are larger (smaller) than the value at $x^*$. In other words, it is a point which is the smallest (largest) in some neighbourhood. A global minimum (maximum), by contrast, is the smallest (largest) over the entire domain, not just over some neighbourhood.
]

#question(name: [Define _convex function_, _convex set_ and _convex optimization problem_ and explain the impact the convexity has on the optimization.])[
A _convex function_ $f$ is a function such that for any two points on its graph, the line segment connecting them lies on or above the graph. A _convex set_ is a set in which any two points can be connected by a line segment that is entirely contained in the set. A _convex optimization problem_ is an optimization problem in which a convex function is minimized over a convex set. The impact of convexity on the optimization is that any local optimum found is also a global optimum.
]

#question(name: [State the _Weierstrass_ (extreme value) theorem on existence of minimum (maximum).])[
The theorem states that if a function is continuous on a closed and bounded interval, it attains its minimum and maximum at least once.
]

#question(name: [Explain the concepts of _big_ $O()$ and _little_ $o()$ within the framework of truncated Taylor series.])[
If we expand $f$ into a Taylor series around $x_0$ and truncate the series at a certain order, $O()$ and $o()$ bound the truncation error. For
$ f(x) approx a_0 + a_1(x-x_0) + a_2(x-x_0)^2 + ... + a_n(x-x_0)^n $
the error is $O((x-x_0)^(n+1))$ as $x -> x_0$. Little $o()$ is stronger and implies $O()$: the error is $o((x-x_0)^n)$, i.e. it vanishes faster than the last retained term.
]

#question(name: [Give _first--order necessary conditions of optimality_ for a scalar function of a scalar argument and define _critical_ (or _stationary_) _point_.
Extend these to the vector argument case. Formulate them both using the _Fréchet_ and _Gateaux_ derivatives. Specialize the result to quadratic functions.])[
#part(name: [For scalar functions])[
$ upright(d)/(upright(d) x) f(x) = 0 $
is the first-order necessary condition of optimality. A _critical point_ is a point satisfying this necessary condition.
]
#part(name: [For vector functions])[
The _Gateaux_ formulation states that $bold(x)^*$ is a critical point if the directional derivative $upright(d)/(upright(d) alpha) f(bold(x)^* + alpha bold(d))$ vanishes at $alpha = 0$ for all directions $bold(d)$. The _Fréchet_ formulation states that $bold(x)^*$ is a critical point if $gradient f(bold(x)^*) = bold(0)$; it works directly with the perturbation, and the error is scaled by $norm(bold(d))$ (the norm of the perturbation).
]
#part(name: [For quadratic functions])[
For
$ f(bold(x)) = 1/2 bold(x)^T Q bold(x) + bold(b)^T bold(x) + c $
if looking for $min f(bold(x))$ the first--order necessary condition is
$ gradient f(bold(x)) = (Q^T + Q)/2 bold(x) + bold(b) = bold(0) $
which for a symmetric matrix $Q$ simplifies to
$ gradient f(bold(x)) = Q bold(x) + bold(b) = bold(0) $
]
]

#question(name: [Give second--order sufficient conditions of optimality for a scalar function of a scalar argument. How can we distinguish between a minimum, maximum, and an inflection point? Extend these to the vector case. Define the Hessian and show how it can be used to classify the critical points into minimum, maximum, saddle point, and singularity point. Specialize the results to quadratic functions.])[
#part(name: [For scalar functions])[
At a critical point,
$ upright(d)^2 / (upright(d) x^2) f(x) > 0 $
is the second-order sufficient condition for a minimum. A positive second derivative indicates a minimum and a negative one a maximum; a zero second derivative is inconclusive (the point may be an inflection point).
]
#part(name: [For vector functions])[
$ gradient^2 f(bold(x)) = H > 0 $
meaning the Hessian is _positive definite_.
A minimum is indicated by a positive definite Hessian, a maximum by a negative definite one, and an indefinite Hessian implies a saddle point. A singularity point is indicated by a singular Hessian.
]
#part(name: [For quadratic functions])[
For
$ f(bold(x)) = 1/2 bold(x)^T Q bold(x) + bold(b)^T bold(x) + c $
if looking for $min f(bold(x))$ the second--order sufficient condition is
$ gradient^2 f(bold(x)) = (Q^T + Q)/2 = H > 0 $
which for a symmetric matrix $Q$ simplifies to
$ gradient^2 f(bold(x)) = Q = H > 0 $
]
]

#question(name: [Give the first-order necessary condition of optimality for an equality--constrained optimization problem using Lagrange multipliers. Specialize the results to quadratic cost functions and linear constraints.])[
For the optimization problem
$ min_(bold(x) in RR^n) f(bold(x))\ "subject to" bold(h)(bold(x)) = bold(0) $
the Lagrange function is
$ cal(L)(bold(x), bold(lambda)) = f(bold(x)) + bold(lambda)^T bold(h)(bold(x)) $
and the first--order necessary condition is
$ gradient cal(L)(bold(x), bold(lambda)) = bold(0) $
which implies
$ gradient f(bold(x)) + sum_(i = 1)^m lambda_i gradient h_i(bold(x)) & = bold(0)\ bold(h)(bold(x)) & = bold(0) $
#part(name: [For quadratic cost functions and linear constraints])[
For
$ min_(bold(x) in RR^n) 1/2 bold(x)^T bold(Q) bold(x) + bold(b)^T bold(x) "subject to" bold(A) bold(x) + bold(c) = bold(0) $
the first--order necessary condition is the linear system
$ mat( bold(Q), bold(A)^T; bold(A), bold(0) ) vec(bold(x), bold(lambda)) = vec(-bold(b), -bold(c)) $
]
]

#question(name: [Characterize the regular point (for a given set of equality constraints). Give an example of equality constraints lacking regularity.])[
A regular point is a point $bold(x)$ at which the Jacobian $gradient bold(h)(bold(x))^T$ has full row rank, i.e., the constraint gradients are linearly independent. Lack of regularity implies a defect in the formulation. A rank-deficient Jacobian can be obtained, for example, by introducing two identical constraints.
]

#question(name: [Give second-order sufficient conditions of optimality for an equality-constrained optimization problem using the concept of a projected Hessian.])[
The projected Hessian is defined as
$ bold(H)_P = bold(Z)^T gradient_(bold(x)bold(x))^2 cal(L)(bold(x), bold(lambda)) bold(Z) $
where $bold(Z)$ is an orthonormal basis of the nullspace of the Jacobian $gradient bold(h)(bold(x))^T$. The sufficient condition is $bold(H)_P > 0$. Requiring the unconstrained Hessian $gradient_(bold(x)bold(x))^2 cal(L)(bold(x), bold(lambda))$ to be positive definite would be unnecessarily strong.
]

#question(name: [State and explain the Karush-Kuhn-Tucker (KKT) conditions for inequality-constrained optimization problems.])[
For the optimization problem
$ min_(bold(x) in RR^n) f(bold(x))\ "subject to" bold(g)(bold(x)) <= bold(0) $
the KKT conditions are
$ gradient f(bold(x)) + sum_(i=1)^m mu_i gradient g_i(bold(x)) &= bold(0)\ bold(g)(bold(x)) &<= bold(0)\ mu_i g_i(bold(x)) &= 0 "for" i = 1, 2, ..., m\ mu_i &>= 0 "for" i = 1, 2, ..., m $
The first line states (in the scalar case) that
$ op("sign")(f'(x)) = -op("sign")(g'(x)) $
meaning that either we are inside the feasible domain and $f$ is stationary, or we are at the boundary. The third line (complementary slackness) states that for $g_i (bold(x)) < 0$ we have $mu_i = 0$.
]
]

#skills[
#question(name: [Formulate a provided problem as an instance of mathematical optimization: identify the cost function, the constraints, decide if the problem fits into one of the (numerous) families of optimization problems such as linear program, quadratic program (with linear constraints, with quadratic constraints), (general) nonlinear program, ...])[]
#question(name: [Solve a provided linear and/or quadratic programming problem using a solver of your choice.])[]
]
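The equality-constrained quadratic program above can also be checked numerically. A small sketch (Python with NumPy; the problem data is made up for illustration) that forms and solves the KKT system:

```python
import numpy as np

# min 1/2 x^T Q x + b^T x  subject to  A x + c = 0.
# Stationarity of the Lagrangian gives the linear KKT system
#   [ Q  A^T ] [   x    ]   [ -b ]
#   [ A   0  ] [ lambda ] = [ -c ]
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
c = np.array([-1.0])  # constraint: x1 + x2 - 1 = 0

n, m = Q.shape[0], A.shape[0]
kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-b, -c])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
```

For this data the solve returns the unique stationary point of the Lagrangian; since the cost matrix is positive definite, it is the constrained minimum.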
https://github.com/Turkeymanc/notebook
https://raw.githubusercontent.com/Turkeymanc/notebook/main/appendix.typ
typst
#import "./packages.typ": *

#glossary.add-term(
  "Example word",
)[
  This is an example word which will appear in the glossary.
]

#create-appendix-entry(title: "Glossary")[
  #components.glossary()
]
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/chic-hdr/0.1.0/internal.typ
typst
Apache License 2.0
/* * Chic-header - A package for Typst * <NAME> (c) 2023 * * internal.typ -- The package's file containing * the internal functions * * This file is under the MIT license. For more * information see LICENSE on the package's main folder. */ /* * chic-valid-type * * Checks if a given argument is a valid element of * the chic-hdr package * * Parameters: * - arg: Argument to check */ #let chic-valid-type(arg) = { return type(arg) == "dictionary" and "chic-type" in arg } /* * chic-grid * * Creates a grid element with the corresponding * format needed to be used as a header or a footer * * Parameters: * - left-side: Content that goes at the left side * - center-side: Content that goes at the center * - right-side: Content that goes at the right side */ #let chic-grid(left-side, center-side, right-side) = block( spacing: 0pt, grid( columns: (left-side, center-side, right-side).map(side => { if side == none { 0fr } else { 1fr } }), column-gutter: 11pt, align(left, left-side), align(center, center-side), align(right, right-side) ) ) /* * chic-generate-props * * Obtains the correct properties to apply for a * particular type of pages (or all pages). 
 *
 * Parameters:
 * - width: Width of the header and the footer
 * - options: Options given to apply into the header and footer
 */
#let chic-generate-props(width, options) = {
  let page-options = ( // Set default options
    header: none,
    footer: none,
    margin: (:),
    header-ascent: 30%,
    footer-descent: 30%
  )
  let additions = (
    header-sep: none,
    footer-sep: none
  )
  // Process each option and modify the properties according to them
  for option in options {
    if chic-valid-type(option) {
      // Footer and Header
      if option.chic-type in ("header", "footer") {
        page-options.at(option.chic-type) = option.value
      // Separator
      } else if option.chic-type == "separator" {
        if option.on == "both" {
          additions.header-sep = option.value
          additions.footer-sep = option.value
        } else if option.on == "header" {
          additions.header-sep = option.value
        } else if option.on == "footer" {
          additions.footer-sep = option.value
        }
      // Height of footer and header
      } else if option.chic-type == "margin" {
        if option.on == "both" {
          page-options.margin.insert("top", option.value)
          page-options.margin.insert("bottom", option.value)
        } else if option.on == "header" {
          page-options.margin.insert("top", option.value)
        } else if option.on == "footer" {
          page-options.margin.insert("bottom", option.value)
        }
      // Offset of footer and header
      } else if option.chic-type == "offset" {
        if option.on == "both" {
          page-options.header-ascent = option.value
          page-options.footer-descent = option.value
        } else if option.on == "header" {
          page-options.header-ascent = option.value
        } else if option.on == "footer" {
          page-options.footer-descent = option.value
        }
      }
    }
  }
  // Add separator and set width of the header and footer
  if page-options.header != none {
    page-options.header = align(
      center,
      block(
        width: width
      )[
        #page-options.header
        #if additions.header-sep != none {
          additions.header-sep
        }
      ]
    )
  }
  if page-options.footer != none {
    page-options.footer = align(
      center,
      block(
        width: width
      )[
        #if additions.footer-sep != none {
          additions.footer-sep
        }
        #page-options.footer
      ]
    )
  }
  return page-options
}
https://github.com/eduardz1/UniTO-typst-template
https://raw.githubusercontent.com/eduardz1/UniTO-typst-template/main/README.md
markdown
MIT License
# UniTO Thesis Typst Template

This is a thesis template for the University of Turin (UniTO) based on [my thesis](https://github.com/eduardz1/Bachelor-Thesis). Since there are no strict templates (a notable mention goes to [Eugenio's LaTeX template](https://github.com/esenes/Unito-thesis-template)), take my choices with a grain of salt; different supervisors may ask you to customize the template differently. My choices are loosely based on this document: [Indicazioni per il Format della Tesi](https://elearning.unito.it/sme/pluginfile.php/29485/mod_folder/content/0/format_TESI_2011-2012.pdf).

If you find errors or ways to improve the template, please open an issue or contribute directly with a PR.

## Usage

In the Typst web app simply click "Start from template" on the dashboard and search for `unito-thesis`. From the CLI you can initialize the project with the command

```bash
typst init @preview/modern-unito-thesis:0.1.0
```

A new directory with all the files needed to get started will be created.

## Configuration

This template exports the `template` function with the following named arguments:

- `title`: the title of the thesis
- `academic-year`: the academic year (e.g. 2023/2024)
- `subtitle`: e.g. "Bachelor's Thesis"
- `paper-size` (default `a4`): the paper format
- `candidate`: your name, surname and matricola (student id)
- `supervisor` (relatore): your supervisor's name and surname
- `co-supervisor` (correlatore): an array of your co-supervisors' names and surnames
- `affiliation`: a dictionary that specifies `university`, `school` and `degree` keywords
- `lang`: configurable between `en` for English and `it` for Italian
- `bibliography-path`: the path to your bibliography file (e.g.
`works.bib`)
- `logo` (already set to UniTO's logo by default): the path to your university's logo
- `abstract` (true/false, set to true by default): whether to include an abstract
- `acknowledgements` (true/false, set to true by default): whether to include an acknowledgements section
- `keywords`: a list of keywords for the thesis

The template will initialize an example project with sensible defaults; if you want to include an abstract, edit the [abstract](template/abstract.typ) file with the content, and the same applies to the [acknowledgements](template/acknowledgements.typ) file. The template divides the level 1 headings into chapters under the `chapters` directory; I suggest using this structure to keep the project organized.

If you want to change an existing project to use this template, you can add a show rule like this at the top of your file:

```typ
#import "@preview/modern-unito-thesis:0.1.0": template

#show: template.with(
  title: "My Beautiful Thesis",
  academic-year: [2023/2024],
  subtitle: "Bachelor's Thesis",
  candidate: (
    name: "<NAME>",
    matricola: 947847
  ),
  supervisor: (
    "Prof. <NAME>"
  ),
  co-supervisor: (
    "Dott. <NAME>",
    "Dott. <NAME>"
  ),
  affiliation: (
    university: "Università degli Studi di Torino",
    school: "Scuola di Scienze della Natura",
    degree: "Corso di Laurea Triennale in Informatica",
  ),
  bibliography-path: "works.yml",
  keywords: [keyword1, keyword2, keyword3]
)

// Your content goes here
```

## Compile

To compile the project from the CLI you just need to run

```bash
typst compile main.typ
```

or if you want to watch for changes (recommended)

```bash
typst watch main.typ
```

## Bibliography

I integrated the bibliography as a [Hayagriva](https://github.com/typst/hayagriva) `yaml` file under [works.yml](template/works.yml); nonetheless, using the more common `bib` format for your bibliography management is as simple as passing a BibTeX file to the template's `bibliography-path` parameter.
Given that our university is not strict in this regard, I suggest using Hayagriva though :).
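For reference, a minimal Hayagriva entry might look like the following sketch (the entry key and field values here are placeholders; see the Hayagriva documentation for the full field list):

```yaml
# works.yml -- cite this entry in the text as @example-article
example-article:
  type: article
  title: An Example Article
  author: Doe, Jane
  date: 2023
```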
https://github.com/Rhinemann/mage-hack
https://raw.githubusercontent.com/Rhinemann/mage-hack/main/src/chapters/Distinctions.typ
typst
#import "../templates/interior_template.typ": *

#show: chapter.with(chapter_name: "Distinctions")

#set table(align: horizon + center)
#show table: set text(hyphenate: true)
#show table.cell.where(y: 0): strong
#show table.cell.where(x: 0): strong
#show table.cell: set block(breakable: false)
// #show table.cell.where(x: 0): set align(horizon)

#let n = counter("row")
#n.step()
#let nrow = context [ #n.step() #n.display() ]

= Distinctions

#[
#show: columns.with(2, gutter: 1em)

Who are you and where do you come from? What do others remember about you? How are you described to others in the tales told of your adventures? You are the grand sum of your distinctions.

Every PC starts with three distinctions rated at #spec_c.d8. You always include one of your distinctions in your dice pool. Which one you choose may have a part to play in the outcome of your test, contest, or challenge.

Each distinction belongs to one of three groups:

- Your personality & life before Awakening
- The Awakened group you are affiliated with and your position in it
- Your focus, through which you perform magick

Each of your distinctions benefits from the Hinder SFX by default, and may have a number of SFX locked as well.

#block(breakable: false)[
== Personality & Sleeper Life

Who were you before you Awakened, and who are you now, having realised the inner workings of reality?

Your personality & sleeper life distinction describes your life before Awakening and your main character trait or quirk that defines you as a person. Most PCs will have their vocation and some adjective or a catchphrase as a part of this distinction.
]

#block(breakable: false)[
== Awakened Affiliation

Who are your people in the Awakened world?

After you Awakened you changed: a truth of the world opened up to you, and you now belong to a group of other Awakened or are fending for yourself.
The Awakened affiliation distinction represents your position in the Awakened world: your Tradition, Convention, or Craft, as well as other partnerships in the supernatural world, are described here, along with your standing in those contexts.
]

#block(breakable: false)[
== Focus

How do you do magick?

There's more to magick than just Awakening and having the right Spheres. Every mage has a unique set of beliefs (their paradigm), ways to employ their magick (their practice), and tools through which they do so (their instruments); together these form your focus.

The focus distinction explains how your character does magick.
]

== Creating Distinctions

Some players may have trouble creating distinctions on their own. For help in building distinctions, this section presents a set of tables and guidelines for using them, starting with the Descriptor Table, the Noun Table, and the Catchphrase Table. You can use the tables however you want, whether as lists to pick from or as random tables you pull from by rolling dice, but there are suggested guidelines for using them below.

*To Make Your Personality & Sleeper Life Distinction,* choose or roll a descriptor from the Descriptor Table, and then add it to a noun. The noun can be one summarizing your character's ancestry, culture, career, heritage, or background (such as Swordsmith, Parisian, Ranger, or Clone), or one you choose or roll from the Noun Table. Together, the descriptor and noun form your personality & sleeper life distinction. Alternatively, roll a catchphrase from the Catchphrase Table.

*To Make Your Awakened Affiliation Distinction,* choose or roll a group from the Tradition, Convention or Craft Table, and then add a descriptor to it. The descriptor can be one defining a pursuit, quirk, reputation, or identity (such as Relentless, Treasure-Hunting, Agoraphobic, Monstrous, Bionic, or Egyptian), or one you choose or roll from the Descriptor Table. Together, the descriptor and group form your Awakened affiliation distinction.
*To Make Your Focus Distinction,* choose or roll a paradigm from the Paradigm Table and then add one or two practices and a number of instruments. The practices and instruments may be created by the player or rolled from the respective tables. The practice forms the core of your focus distinction. You are encouraged to add a more detailed description, including the instruments and the paradigm used, for this distinction.

*To Use the Descriptors Table,* choose any descriptor on the table, or you can roll randomly. If you roll, first roll a #spec_c.d12 and find the row for the resulting number. Then roll a #spec_c.d4, and find the column for that result. Your random descriptor is where the row meets the column.

*To Use the Nouns Table,* choose any noun on the table, or you can roll randomly. If you roll, first roll a #spec_c.d12 and find the row for the resulting number. Then roll a #spec_c.d4, and find the column for that result. Your random noun is where the row meets the column.

*To Use the Catchphrases Table,* choose any catchphrase on the table, or you can roll randomly. If you roll, first roll a #spec_c.d8 and find the row for the resulting number. Then roll a #spec_c.d4, and find the column for that result. Your random catchphrase is where the row meets the column.

*To Use the Tradition Table,* choose any Tradition on the table, or you can roll randomly. If you roll, roll a #spec_c.d10 and find the row for the resulting number. Your random Tradition is on that row.

*To Use the Convention Table,* choose any Convention on the table, or you can roll randomly. If you roll, roll a #spec_c.d6 and find the row for the resulting number. Your random Convention is on that row.

*To Use the Craft Table,* choose any Craft on the table, or you can roll randomly. If you roll, roll a #spec_c.d12 and find the row for the resulting number. Your random Craft is on that row.

*To Use the Paradigm Table,* choose any paradigm on the table, or you can roll randomly.
If you roll, first roll a #spec_c.d6 and find the row for the resulting number. Then roll a #spec_c.d4, and find the column for that result. Your random paradigm is where the row meets the column.

*To Use the Practice Table,* choose any practice on the table, or you can roll randomly. If you roll, first roll a #spec_c.d6 and find the row for the resulting number. Then roll a #spec_c.d4, and find the column for that result. Your random practice is where the row meets the column.

*To Use the Instrument Table,* choose any seven instruments on the table, or you can roll randomly. If you roll, first roll a #spec_c.d10 and find the row for the resulting number. Then roll a #spec_c.d8, and find the column for that result. Your random instrument is where the row meets the column; repeat this seven times to get your full instrument list.

#colbreak()

#[
#show table.cell.where(y: 1): strong
#table(
  columns: 5,
  table.header(table.cell(colspan: 5, [=== Descriptor]), [], [1], [2], [3], [4]),
  [#nrow], [Affable], [Arrogant], [Blunt], [Bookish],
  [#nrow], [Brooding], [Charming], [Conflicted], [Creative],
  [#nrow], [Dashing], [Defiant], [Dutiful], [Earnest],
  [#nrow], [Eccentric], [Faithful], [Fearless], [Genius],
  [#nrow], [Gentle], [Grim], [Icy], [Insecure],
  [#nrow], [Logical], [Lonely], [Loyal], [Maverick],
  [#nrow], [Misfit], [Naïve], [Nurturing], [Optimist],
  [#nrow], [Pacifist], [Passionate], [Pessimist], [Quiet],
  [#nrow], [Quirky], [Reckless], [Rude], [Ruthless],
  [#nrow], [Sarcastic], [Shady], [Stubborn], [Tenacious],
  [#nrow], [Thoughtful], [Timid], [Vengeful], [Veteran],
  [#nrow], [Weird], [Wise], [Young], [Zealous],
)
]

#[
#show table.cell.where(y: 1): strong
#n.update(1)
#table(
  columns: 5,
  table.header(table.cell(colspan: 5, [=== Noun]), [], [1], [2], [3], [4]),
  [#nrow], [Artist], [Assassin], [Athlete], [Believer],
  [#nrow], [Bodyguard], [Comrade], [Crafter], [Criminal],
  [#nrow], [Deceiver], [Detective], [Diplomat], [Expatriate],
  [#nrow], [Expert], [Extrovert], [Freak], [Fugitive],
  [#nrow],
[Gambler], [Guard], [Heir], [Historian],
  [#nrow], [Hunter], [Imposter], [Introvert], [Inventor],
  [#nrow], [Kid], [Leader], [Loner], [Mediator],
  [#nrow], [Mercenary], [Nurse], [Occultist], [Outsider],
  [#nrow], [Parent], [Performer], [Physician], [Rebel],
  [#nrow], [Refugee], [Romantic], [Smuggler], [Spy],
  [#nrow], [Student], [Teacher], [Thief], [Traveler],
  [#nrow], [Vagabond], [Vigilante], [Visionary], [Warrior],
)
]
]

#[
#show table.cell.where(y: 1): strong
#n.update(1)
#table(
  columns: 5,
  table.header(table.cell(colspan: 5, [=== Catchphrase]), [], [1], [2], [3], [4]),
  [#nrow], [Act first, ask questions later.], [Actually, that's a funny story...], [Come at me!], [Couldn't stop now if I tried.],
  [#nrow], [I have a cunning plan.], [I play to win.], [I saw this coming.], [I'm the best there ever was.],
  [#nrow], [It's almost too easy...], [I've seen this before.], [Lead from the front.], [Never give up.],
  [#nrow], [Never tell me the odds.], [No one's getting paid enough for this.], [Nobody asked for that.], [Sacrifices must be made.],
  [#nrow], [Someone had to do it.], [Something doesn't feel right.], [Stop, or be stopped.], [Perhaps there's a simpler way...],
  [#nrow], [There's always a way.], [Things can always get worse.], [This is our destiny.], [Time to rage!],
  [#nrow], [Victory comes at a price.], [We didn't get dressed up for nothing.], [We don't have time for this.], [We have unfinished business.],
  [#nrow], [Well, isn't this ironic?], [We're better than this.], [We've got this.], [You haven't thought this through.],
)
]

#n.update(1)
#table(
  columns: 2,
  table.header(table.cell(colspan: 2, align: center, [=== Tradition])),
  align: (x, _) => if x == 0 { horizon + center } else { start },
  [#nrow], [Akashayana/Akashic Brotherhood -- Masters of mind, body, and spirit through the Arts of personal discipline. #parbreak() Affinity Spheres: Mind or Life],
  [#nrow], [Celestial Chorus -- Sacred singers who give a human Voice to the Divine Song.
#parbreak() Affinity Spheres: Prime, Forces, or Spirit],
  [#nrow], [Cult of Ecstasy/Sahajiya -- Visionary seers who transcend limitations through sacred experience. #parbreak() Affinity Spheres: Time, Life, or Mind],
  [#nrow], [Dreamspeakers/Kha'vadi -- Preservers and protectors of both the Spirit Ways and the Earthly cultures that have been looted, abandoned, and oppressed. #parbreak() Affinity Spheres: Spirit, Forces, Life, or Matter],
  [#nrow], [Euthanatoi/Chakravanti -- Disciples of mortality who purge corruption and bring merciful release from suffering. #parbreak() Affinity Spheres: Entropy, Life, or Spirit],
  [#nrow], [Order of Hermes -- Rigorous masters of High Magick and the Elemental Arts. #parbreak() Affinity Spheres: Forces],
  [#nrow], [Society of Ether/Sons of Ether -- Graceful saviors of scientific potential. #parbreak() Affinity Spheres: Matter, Forces, or Prime],
  [#nrow], [Verbena -- Primal devotees of rough Nature and mystic blood. #parbreak() Affinity Spheres: Life or Forces],
  [#nrow], [Virtual Adepts -- Reality-hackers devoted to rebooting their world. #parbreak() Affinity Spheres: Correspondence/Data or Forces],
  [#nrow], [Reroll],
)

#n.update(1)
#table(
  columns: 2,
  align: start,
  table.header(table.cell(colspan: 2, align: center, [=== Convention])),
  [#nrow], [Iteration X -- Perfectors of the human machine. #parbreak() Affinity Spheres: Forces, Matter, or Time],
  [#nrow], [New World Order -- Custodians of social order and global stability. #parbreak() Affinity Spheres: Mind or Correspondence/Data],
  [#nrow], [Progenitors -- Innovators dedicated to the potential of organic life. #parbreak() Affinity Spheres: Life or Prime],
  [#nrow], [Syndicate -- Masters of finance, status, and the power of wealth. #parbreak() Affinity Spheres: Entropy, Mind, or Primal Utility],
  [#nrow], [Void Engineers -- Explorers and protectors of extradimensional space.
#parbreak() Affinity Spheres: Dimensional Science, Correspondence, or Forces], [#nrow], [Reroll], ) #n.update(1) #table( columns: 2, align: start, table.header(table.cell(colspan: 2, align: center, [=== Craft])), [#nrow], [Ahl-i-Batin -- Seers of Unity through Divine connection and subtle influence. #parbreak() Affinity Spheres: Correspondence or Mind (never Entropy)], [#nrow], [Bata'a -- Inheritors of voodoo, dedicated to restoring a broken world. #parbreak() Affinity Spheres: Life or Spirit], [#nrow], [Children of Knowledge -- Crowned Ones devoted to alchemical perfection. #parbreak() Affinity Sphere: Forces, Matter, Prime, or Entropy], [#nrow], [Hollow Ones -- Dark romantics laughing in the face of ruin. #parbreak() Affinity Sphere: Any], [#nrow], [Kopa Loei -- Defenders of Nature, the Old Gods, and their culture. #parbreak() Affinity Sphere: Any], [#nrow], [Ngoma -- African High Magi, sworn to restore what's been taken from their home and people. #parbreak() Affinity Spheres: Life, Mind, Prime, or Spirit], [#nrow], [Orphans -- Self-Awakened mages surviving in the shadows of other sects. #parbreak() Affinity Sphere: Any], [#nrow], [Sisters of Hippolyta -- Guardians of the Sacred Feminine. #parbreak() Affinity Spheres: Life or Mind], [#nrow], [Taftâni -- Middle Eastern mystics shaping the gifts of Allah and the Arts of man. #parbreak() Affinity Spheres: Forces, Matter, Prime, or Spirit], [#nrow], [Templar Knights -- Bastions of chivalry in a corrupt age. #parbreak() Affinity Spheres: Forces, Life, Mind, or Prime], [#nrow], [Wu Lung -- Preservers of heavenly wisdom, order, and nobility. 
#parbreak() Affinity Spheres: Spirit, Forces, Matter, or Life], [#nrow], [Reroll], ) #[ #show table.cell.where(y: 1): strong #n.update(1) #table( columns: 5, table.header(table.cell(colspan: 5, [=== Paradigm]), [], [1], [2], [3], [4]), [#nrow], [A Mechanistic Cosmos], [A World of Gods and Monsters], [Aliens Make Us What We Are], [All Power Comes from God(s)], [#nrow], [All the World's a Stage], [Ancient Wisdom is the Key], [Bring Back the Golden Age!], [Consciousness is the Only True Reality], [#nrow], [Creation's Divine and Alive], [Divine Order and Earthly Chaos], [Everything is Chaos], [Everything is Data], [#nrow], [Everything's an Illusion, Prison, or Mistake], [It's All Good -- Have Faith!], [Might is Right], [One-Way Trip to Oblivion], [#nrow], [Tech Holds All Answers], [Embrace the Threshold], [Holographic Reality], [Transcend Your Limits], [#nrow], [Turning the Keys to Reality], [We are Meant to be Wild], [We are Not Men!], [We're All God(s) in Disguise], ) ] #[ #show table.cell.where(y: 1): strong #n.update(1) #table( columns: 5, table.header(table.cell(colspan: 5, [=== Practice]), [], [1], [2], [3], [4]), [#nrow], [Alchemy], [Animalism], [Art of Desire/Hypereconomics], [Bardism], [#nrow], [Chaos Magick], [Craftwork], [Crazy Wisdom], [Cybernetics], [#nrow], [Dominion], [Elementalism], [Faith], [God-Bonding], [#nrow], [Gutter Magick], [High Ritual Magick], [Hypertech], [Maleficia], [#nrow], [Martial Arts], [Medicine-Work], [Mediumship], [Psionics], [#nrow], [Reality Hacking], [Shamanism], [Voudoun], [Weird Science], [#nrow], [Witchcraft], [Yoga], [Reroll], [Reroll], ) ] #[ #show table.cell.where(y: 1): strong #n.update(1) #table( columns: 9, table.header(table.cell(colspan: 9, [=== Instrument]), [], [1], [2], [3], [4], [5], [6], [7], [8]), [#nrow], [Artwork], [Blessings and curses], [Blood and fluids], [Body Modification], [Bodywork], [Bones and remains], [Books and periodicals], [Brain-computer interface], [#nrow], [Brews and concoctions], [Cannibalism], 
[Cards and instruments of chance], [Celestial alignments], [Circles and designs], [Computer gear], [Crossroads and crossing-days], [Cups and vessels], [#nrow], [Cybernetic Implants], [Dances and movement], [Devices and machines], [Drugs and poisons], [Elements], [Energy], [Eye contact], [Fashion], [#nrow], [Food and drink], [Formulae and math], [Gadgets and inventions], [Gems and stones], [Genetic Manipulation], [Group rites], [Herbs and plants], [Household tools], [#nrow], [Internet Activity], [Knots and ropes], [Labs and gear], [Languages], [Management and HR], [Mass media], [Medical Procedures], [Meditation], [#nrow], [Money and wealth], [Music], [Nanotech], [Numbers and numerology], [Offerings and sacrifices], [Ordeals and exertions], [Prayers and invocations], [Sacred iconography], [#nrow], [Sex and sensuality], [Social domination], [Symbols], [Thought-forms], [Toys], [Transgression], [Tricks and illusions], [True Names], [#nrow], [Vehicles], [Voice and vocalizations], [Wands and staves], [Weapons], [Writings inscriptions and runes], [Reroll], [Reroll], [Reroll], ) ]
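These tables are meant to be rolled on: the `#nrow` counter numbers the rows, and the bold header row gives the die result for each column. A minimal sketch of a programmatic roller follows; the helper name `roll_on_table` and the flat row-major cell layout are assumptions for illustration, not part of this generator.

```cpp
#include <random>
#include <string>
#include <vector>

// Hypothetical helper: treat a printed table as a flat row-major list of
// cells and emulate "roll a die for the row, then a die for the column".
std::string roll_on_table(const std::vector<std::string>& cells,
                          int columns, std::mt19937& rng) {
    int rows = static_cast<int>(cells.size()) / columns;
    std::uniform_int_distribution<int> row_die(0, rows - 1);    // the #nrow roll
    std::uniform_int_distribution<int> col_die(0, columns - 1); // e.g. a d4 for 4 columns
    return cells[row_die(rng) * columns + col_die(rng)];
}
```

For the Catchphrase or Paradigm tables this would be called with `columns = 4`, and with `columns = 8` for Instrument; the uniform dice here stand in for whatever physical dice the generator intends.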
https://github.com/crd2333/crd2333.github.io
https://raw.githubusercontent.com/crd2333/crd2333.github.io/main/src/docs/AI/Reinforce%20Learning/其它技巧.typ
typst
---
order: 5
---
#import "/src/components/TypstTemplate/lib.typ": *
#show: project.with(
  title: "AI Notes: Reinforcement Learning",
  lang: "zh",
)
#let ba = $bold(a)$
#let bw = $bold(w)$

= Other Techniques

== Fully Decentralized Learning (FDMARL)
- The multi-agent methods covered so far rest on the CTDE assumption: agents may access global information during training, while execution is distributed. In some settings, however, agents are distributed during both training and execution; this is fully decentralized learning
#fig("/public/assets/AI/AI_RL/img-2024-07-12-09-45-22.png")
- Question: can the CTDE setting be taken as the default assumption for MARL problems?
  - In practice, most current methods only work under the CTDE assumption, so it is effectively the default; put another way, decentralized learning does not yet perform well
  - A MARL problem under the CTDE assumption can be viewed as a single-agent RL problem whose action space grows exponentially. Many single-agent techniques for continuous action spaces still bring large gains in the multi-agent setting
    - CTDE thus serves as a bridge for carrying single-agent methods over to multi-agent problems
  - But there are problems (scenarios) in which no centralized-training phase exists at all. For example, judging another driver's intent to overtake: the driver (agent) cannot observe the global situation during training
- Fully Decentralized Learning
  + Higher scalability
  + Closer to realistic settings
  + One of the core distinctions between MARL and single-agent RL in a multi-agent setting
- Recall the MARL paradigms discussed earlier: joint learning, independent learning, and factored learning (CTDE); FDMARL is closest to the second
- Difficulties
  - Each agent trains its policy using only local observations, its own actions, and the rewards it receives, with no inter-agent communication or parameter sharing.
  - The environment becomes non-stationary:
  $ P(s'|s,a) &-> P(s'|s,a_i) = sum_(a_(-i)) pi_(-i) (a_(-i)|s) P(s'|s,a_i, a_(-i))\ r(s,a) &-> r(s,a_i) = sum_(a_(-i)) pi_(-i) (a_(-i)|s) r(s,a_i, a_(-i)) $
  - Because the other agents' policies $pi_(-i)$ keep updating during learning, the environment is non-stationary from each agent's viewpoint, so simple methods such as IQL and IPPO come with no convergence guarantee.
- Idea: agents do not update simultaneously; only one agent updates its policy at a time
  - IQL has no convergence guarantee; MA2QL does
#fig("/public/assets/AI/AI_RL/img-2024-07-12-09-59-53.png")
- MA2QL: multi-agent alternating policy iteration
  - Convergence is guaranteed (to a Nash equilibrium), i.e., it converges but not necessarily to a global optimum
  - This alternating iteration converges to a Nash equilibrium whether it is policy-based or value-based
#fig("/public/assets/AI/AI_RL/img-2024-07-12-10-09-46.png")
  - Alternating updates with single-step policy improvement take too long to converge
- The FDMARL community is still fairly small; a group at Peking University proposed the I2Q algorithm together with some theoretical derivation, which is not expanded on here

== Distributed RL
- First, distributed machine learning in general:
  + Computation too heavy: shared-memory multithreading or multi-machine parallel computation
  + Training data too large: split across multiple nodes (data parallelism)
  + Model too large: split the model into modules trained on different nodes (model parallelism); the sub-models usually depend strongly on each other, so communication requirements are high.
- For RL, "too much training data" rarely happens (a sample is just $s,a,r,s'$, or at most $s$ fed in as an image, which seldom blows up); the model is not large either (usually a simple CNN, RNN, or MLP). The main problem is computation
#grid(
  columns: 2,
  [
    - Usually split into the following modules:
      - Data and model partitioning
      - Single-machine optimization
      - Communication
      - Data and model aggregation
    #fig("/public/assets/AI/AI_RL/img-2024-07-12-10-25-09.png")
    #fig("/public/assets/AI/AI_RL/img-2024-07-12-10-26-01.png")
  ],
  [
    #fig("/public/assets/AI/AI_RL/img-2024-07-12-10-22-08.png")
    - Basic model-aggregation schemes (left figure):
      + Parameter averaging
        - Every worker holds a copy of the current model parameters
        - Every worker trains on its own data shard
        - 
the global parameters are set to the average of the workers' model parameters
      + Distributed stochastic gradient descent
        - Gradients are sent back to a parameter server, which performs the update
      + Decentralized asynchronous SGD:
        - Aggregation through a central parameter server suffers from a single point of failure
        - No central parameter server is used
        - Model updates are propagated through peer-to-peer communication
  ]
)
- Returning from distributed machine learning to RL
  - Basic single-machine RL loop:
    - Usually a single agent interacts with a single environment to collect experience samples, so a large share of the time goes into data collection; this is one of RL's defining traits.
    - To speed up training, a key lever is therefore sampling efficiency.
  - Distributed RL methods introduce multiple agents that interact with the environment and sample in parallel, raising the number of samples obtained per unit time.
  - In concrete implementations, the agent is usually decoupled into two parts: an Actor and a Learner. The Actor interacts with the environment to gather data, while the Learner performs batched updates on the collected data.
- A3C and A2C (the synchronous variant of A3C, not the Advantage version mentioned earlier)
- The IMPALA reinforcement learning framework
- Parallel PPO (DPPO)
  - Vanilla PPO (single-threaded): in each optimization round, the policy collects trajectories serially and then performs a policy update
  #fig("/public/assets/AI/AI_RL/img-2024-07-12-10-46-48.png")
  - Which steps can be parallelized:
    + Collecting different trajectories: during collection, every trajectory is produced by the same current policy interacting with the environment, and different trajectories are independent of each other, so collection can run in parallel.
    + Trajectory collection vs. policy update: the update need not wait until all trajectories have been collected; once some condition is satisfied, the global policy can be updated directly
  - Distributed Proximal Policy Optimization (DPPO): multiple workers collect trajectories and compute gradients in parallel; the main thread (chief) averages the gradients computed by the workers and sends the result back to the workers to update their policies. Flow chart:
  #fig("/public/assets/AI/AI_RL/img-2024-07-12-10-48-58.png")
  - Problems with DPPO:
    - Trajectory samples in the shared experience pool are produced by several different policies interacting with the environment, which further widens the mismatch between the policy being optimized and the policies that collected the samples.
    - When $pi_th >> pi_(th"old")$, the ratio $r_t (th)$ becomes enormous. If in addition $A_t < 0$, the clipped surrogate evaluates to $L^"CLIP"=min(r_t (th) A_t, "clip"(r_t (th),1-ep,1+ep)A_t)=r_t (th) A_t$, a very large negative number, introducing huge and unbounded variance and preventing the algorithm from converging
  - The remedy used by Kaiwu's (开悟) DPPO: a second clip at $c A_t$
  #mitex(`\mathbb{E}\left[\text{max}(\text{min}(r_{t}(\theta)A_{t},c l i p(r_{t}(\theta),1-\epsilon,1+\epsilon)A_{t}),c A_{t})\right]`)
  - Kaiwu DPPO components
    + AI Servers: interact with the environment to collect trajectories.
    + RL Learner: the learner; multiple RL Learners draw samples from the experience pool through shared memory in parallel for policy learning
    + Dispatch: an intermediary module that speeds up moving collected samples into shared memory
    + Memory Pool: stores trajectory experience for the RL Learners to train on.
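The dual-clip objective above can be sketched as a scalar function per sample. This is a minimal illustration (the function and parameter names are mine); the extra lower bound $c A_t$ only matters when the advantage is negative, which is exactly the case where the vanilla clipped loss is unbounded:

```cpp
#include <algorithm>

// Dual-clip PPO surrogate for a single sample (illustrative sketch).
// ratio = pi_theta(a|s) / pi_theta_old(a|s); adv = advantage estimate A_t.
double dual_clip_objective(double ratio, double adv,
                           double eps = 0.2, double c = 3.0) {
    double clipped  = std::clamp(ratio, 1.0 - eps, 1.0 + eps) * adv;
    double standard = std::min(ratio * adv, clipped); // vanilla PPO clip
    if (adv >= 0) return standard;        // positive advantage: unchanged
    return std::max(standard, c * adv);   // negative advantage: bounded below by c*A_t
}
```

With `adv = -1` and an off-policy ratio of 50, the vanilla clip yields -50 while the dual clip returns `c * adv = -3`, so the per-sample loss (and its gradient) stays bounded even for samples collected by a very different policy.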
https://github.com/widsnoy/algorithms
https://raw.githubusercontent.com/widsnoy/algorithms/typst/widsnoy_template.typ
typst
#set page( paper: "a4", header: align(left)[ _hdu-t05: widsnoy, WQhuanm, xu826281112_ ] ) #set heading( numbering: "1." ) #set text( size: 12pt, font: ("Linux libertine", "Noto Sans CJK SC"), lang: "zh", region: "cn") #set page(numbering: "(i)") #let style-number(number) = text(gray)[#number] #show raw.where(block: true): it => block( fill: luma(240), inset: 10pt, radius: 4pt, width: 100%, )[#grid(columns: (1em, 1fr), align: (right, left), column-gutter: 0.7em, row-gutter: 0.6em, ..it.lines .enumerate() .map(((i, line)) => (style-number(i + 1), line)) .flatten())] #outline( title: [_widsnoy's *template*_], indent: auto ) #pagebreak() #set page(numbering: "1") #counter(page).update(1)

= Number Theory
== Primitive Roots
- Order: $"ord"_m (a)$ is the smallest positive integer $n$ with $a^n equiv 1 (mod m)$
- Primitive root: if $g$ satisfies $(g,m)=1$ and $"ord"_m (g) eq phi(m)$, then $g$ is a primitive root of $m$. If $m$ is prime, the values $g^i mod m, 0<i<m$ are pairwise distinct.

Application of primitive roots: when $m$ is prime, a sum of the form $a_k=sum_(i*j mod m eq k)f_i*g_j$ can be turned into a convolution via a primitive root (provided there is no value at index $0$). Concretely, $[1,m-1]$ maps to $g^([1,m-1])$, so the sum becomes $a_(g^k)=sum_(g^(i+j mod (m-1)) eq g^k)f_(g^i)*g_(g^j)$; writing $f_i eq f_(g^i)$ gives $a_k=sum_((i+j) mod (m-1) eq k)f_i*g_j$
```cpp
int q[10005];
int getG(int n) {
    int i, j, t = 0;
    for (i = 2; (ll)(i * i) < n - 1; i++) {
        if ((n - 1) % i == 0) q[t++] = i, q[t++] = (n - 1) / i;
    }
    for (i = 2; ;i++) {
        for (j = 0; j < t; j++) if (fpow(i, q[j], n) == 1) break;
        if (j == t) return i;
    }
    return -1;
}
vector<int> fpow(int kth) {
    if (kth == 0) return e;
    auto r = fpow(kth - 1);
    r = multiply(r, r);
    for (int i = p - 1; i < r.size(); i++) r[i % (p - 1)] = (r[i % (p - 1)] + r[i]) % mod;
    r.resize(p - 1);
    if (kk[kth] == '1') {
        r = multiply(r, e);
        for (int i = p - 1; i < r.size(); i++) r[i % (p - 1)] = (r[i % (p - 1)] + r[i]) % mod;
        r.resize(p - 1);
    }
    return r;
}
void MAIN() {
    g = getG(p);
    int tmp = 1;
    for (int i = 1; i < p; i++) {
        tmp = tmp * 1ll * g % p;
        mp[tmp] = i % (p - 1);
    }
    e.resize(p - 1);
    for (int i = 0; i < p - 1; i++) e[i] = 0;
    for (int i = 0; i < p; i++) {
        for (int j = 0; j <= i; j++) {
            if (binom[i][j] == 0)
continue;
            e[mp[binom[i][j]]]++;
        }
    }
}
```
== Linear Diophantine Equations
Given a, b, c, x1, x2, y1, y2, count the integer pairs (x, y) satisfying ax+by+c=0 with x∈[x1,x2] and y∈[y1,y2].

Input format: one line with the 7 space-separated integers a, b, c, x1, x2, y1, y2, each with absolute value at most $10^8$.
```cpp
#define y1 miku
ll a, b, c, x1, x2, y1, y2;
ll exgcd(ll a, ll b, ll &x, ll &y) {
    if (b) {
        ll d = exgcd(b, a % b, y, x);
        return y -= a / b * x, d;
    }
    return x = 1, y = 0, a;
}
pll get_up(ll a, ll b, ll x1, ll x2) { //x2>=ax+b>=x1
    if (a == 0) return (b >= x1 && b <= x2) ? (pll){-1e18, 1e18} : (pll){1, 0};
    ll L, R;
    ll l = (x1 - b) / a - 3;
    for (L = l; L * a + b < x1; L++);
    ll r = (x2 - b) / a + 3;
    for (R = r; R * a + b > x2; R--);
    return {L, R};
}
pll get_dn(ll a, ll b, ll x1, ll x2) { //x2>=ax+b>=x1
    if (a == 0) return (b >= x1 && b <= x2) ? (pll){-1e18, 1e18} : (pll){1, 0};
    ll L, R;
    ll l = (x2 - b) / a - 3;
    for (L = l; L * a + b > x2; L++);
    ll r = (x1 - b) / a + 3;
    for (R = r; R * a + b < x1; R--);
    return {L, R};
}
void MAIN() {
    cin >> a >> b >> c >> x1 >> x2 >> y1 >> y2;
    if (a == 0 && b == 0) return cout << (c == 0) * (y2 - y1 + 1) * (x2 - x1 + 1) << '\n', void();
    ll x, y, d = exgcd(a, b, x, y);
    c = -c;
    if (c % d != 0) return cout << "0\n", void();
    x *= c / d, y *= c / d;
    ll sx = b / d, sy = -a / d;
    //x + k * sx   y + k * sy
    // 0<= 3 - k <= 4 [-1,3] [0,4]
    auto A = (sx > 0 ? get_up(sx, x, x1, x2) : get_dn(sx, x, x1, x2));
    auto B = (sy > 0 ? 
get_up(sy, y, y1, y2) : get_dn(sy, y, y1, y2));
    A.fi = max(A.fi, B.fi), A.se = min(A.se, B.se);
    cout << max(0ll, A.se - A.fi + 1) << '\n';
}
```
== Chinese Remainder Theorem
Consider merging two congruences
$ cases( x equiv a_1 (mod m_1), x equiv a_2 (mod m_2) ) $
Rewrite them as linear Diophantine equations
$ cases( x+m_1y=a_1, x+m_2y=a_2 ) $
Take the common part of the solution sets, $x=a_1-m_1 y_1=a_2- m_2 y_2$. A solution exists when $gcd(m_1, m_2)| (a_1-a_2)$, in which case $x=k"lcm"(m_1,m_2)+a_2- m_2 y_2$, i.e., back in congruence form: $x equiv a_2- m_2y_2 (mod "lcm"(m_1,m_2))$
```cpp
ll n, m, a;
ll exgcd(ll a, ll b, ll &x, ll &y) {
    if (b != 0) {
        ll g = exgcd(b, a % b, y, x);
        return y -= a / b * x, g;
    }
    return x = 1, y = 0, a;
}
ll getinv(ll a, ll mod) {
    ll x, y;
    exgcd(a, mod, x, y);
    x = (x % mod + mod) % mod;
    return x;
}
int get(ll x) { return x < 0 ? -1 : 1; }
ll mul(ll a, ll b, ll mod) {
    ll res = 0;
    if (a == 0 || b == 0) return 0;
    ll f = get(a) * get(b);
    a = abs(a), b = abs(b);
    for (; b; b >>= 1, a = (a + a) % mod) if (b & 1) res = (res + a) % mod;
    res *= f;
    if (res < 0) res += mod;
    return res;
}
// coprime moduli
// int main() {
//     cin >> n;
//     ll phi = 1;
//     for (int i = 1; i <= n; i++) {
//         cin >> m[i] >> a[i];
//         phi *= m[i];
//     }
//     ll ans = 0;
//     for (int i = 1; i <= n; i++) {
//         ll p = phi / m[i], q = getinv(p, m[i]);
//         ans += mul(p, mul(q, a[i], phi), phi);
//         ans %= phi;
//     }
//     cout << ans << '\n';
// }
int main() {
    cin >> n;
    cin >> m >> a;
    for (int i = 2; i <= n; i++) {
        ll nm, na;
        cin >> nm >> na;
        ll x, y;
        ll g = exgcd(m, -nm, x, y), d = (na - a) / g, md = abs(nm / g);
        if ((na - a) % g) return -1;
        x = mul(x, d, md);
        ll lc = abs(m / g);
        lc *= nm;
        a = (a + mul(m, x, lc)) % lc;
        m = lc;
    }
    cout << a << '\n';
}
```
== Lucas' Theorem
- If p is prime:
$ binom(n, m) mod p=binom(floor(n/p), floor(m/p))binom(n mod p, m mod p) mod p $
- If p is not prime: here calc(n, x, p) computes $n!/x^y mod p$, where $y$ is the multiplicity of $x$ in $n!$. If $p$ is prime, Wilson's theorem $(p-1)! 
equiv -1 (mod p)$ lets calc be computed in $O(log P)$. In other cases, the same effect is achieved by precomputing $n!$ with the product of all multiples of $p$ up to $n$ divided out.
```cpp
ll exgcd(ll a, ll b, ll &x, ll &y) {
    if (b) {
        ll d = exgcd(b, a % b, y, x);
        return y -= a / b * x, d;
    } else return x = 1, y = 0, a;
}
int getinv(ll v, ll mod) {
    ll x, y;
    exgcd(v, mod, x, y);
    return (x % mod + mod) % mod;
}
ll fpow(ll a, ll b, ll p) {
    ll res = 1;
    for (; b; b >>= 1, a = a * 1ll * a % p) if (b & 1) res = res * 1ll * a % p;
    return res;
}
ll calc(ll n, ll x, ll p) {
    if (n == 0) return 1;
    ll s = 1;
    for (ll i = 1; i <= p; i++) if (i % x) s = s * i % p;
    s = fpow(s, n / p, p);
    for (ll i = n / p * p + 1; i <= n; i++) if (i % x) s = i % p * s % p;
    return calc(n / x, x, p) * 1ll * s % p;
}
int get(ll x) { return x < 0 ? -1 : 1; }
ll mul(ll a, ll b, ll mod) {
    ll res = 0;
    if (a == 0 || b == 0) return 0;
    ll f = get(a) * get(b);
    a = abs(a), b = abs(b);
    for (; b; b >>= 1, a = (a + a) % mod) if (b & 1) res = (res + a) % mod;
    res *= f;
    if (res < 0) res += mod;
    return res;
}
ll sublucas(ll n, ll m, ll x, ll p) {
    ll cnt = 0;
    for (ll i = n; i; ) cnt += (i = i / x);
    for (ll i = m; i; ) cnt -= (i = i / x);
    for (ll i = n - m; i; ) cnt -= (i = i / x);
    return fpow(x, cnt, p) * calc(n, x, p) % p * getinv(calc(m, x, p), p) % p * getinv(calc(n - m, x, p), p) % p;
}
ll lucas(ll n, ll m, ll p) {
    int cnt = 0;
    ll a[21], mo[21];
    for (ll i = 2; i * i <= p; i++) if (p % i == 0) {
        mo[++cnt] = 1;
        while (p % i == 0) mo[cnt] *= i, p /= i;
        a[cnt] = sublucas(n, m, i, mo[cnt]);
    }
    if (p != 1) mo[++cnt] = p, a[cnt] = sublucas(n, m, p, mo[cnt]);
    ll phi = 1;
    for (int i = 1; i <= cnt; i++) phi *= mo[i];
    ll ans = 0;
    for (int i = 1; i <= cnt; i++) {
        ll p = phi / mo[i], q = getinv(p, mo[i]);
        ans += mul(p, mul(q, a[i], phi), phi);
        ans %= phi;
    }
    return ans;
}
```
== BSGS
Solve $a^x equiv n (mod p)$, where $a$ and $p$ are not necessarily coprime
```cpp
int fpow(int a, int b, int p) {
    int res = 1;
    for (; b; b >>= 1, a = a * 1ll * a % p) if (b & 1) res = res * 1ll * a % p;
    return res;
}
ll exgcd(ll a, ll b, ll &x, ll &y) {
    if (b == 0) return x = 1, 
y = 0, a;
    ll d = exgcd(b, a % b, y, x);
    y -= a / b * x;
    return d;
}
int inv(int a, int p) {
    ll x, y;
    ll g = exgcd(a, p, x, y);
    if (g != 1) return -1;
    return (x % p + p) % p;
}
int BSGS(int a, int b, int p) {
    if (p == 1) return 1;
    unordered_map<int, int> x;
    int m = sqrt(p + 0.5) + 1;
    int v = inv(fpow(a, m, p), p);
    int e = 1;
    for(int i = 1; i <= m; i++) {
        e = e * 1ll * a % p;
        if(!x.count(e)) x[e] = i;
    }
    for(int i = 0; i <= m; i++) {
        if(x.count(b)) return i * m + x[b];
        b = b * 1ll * v % p;
    }
    return -1;
}
pii exBSGS(int a, int n, int p) {
    int d, q = 0, sum = 1;
    if (n == 1) return {0, gcd(a, p) == 1 ? BSGS(a, 1, p) : 0};
    a %= p, n %= p;
    while((d = gcd(a, p)) != 1) {
        if(n % d) return {-1, -1};
        q++;
        n /= d;
        p /= d;
        sum = (sum * 1ll * a / d) % p;
        if(sum == n) return {q, gcd(a, p) == 1 ? BSGS(a, 1, p) : 0};
    }
    int v = inv(sum, p);
    n = n * 1ll * v % p;
    int ans = BSGS(a, n, p);
    if(ans == -1) return {-1, -1};
    return {ans + q, BSGS(a, 1, p)};
}
```
== Arithmetic Functions
+ $phi(n)=n product (1-1/p)$
+ $mu(n)=cases( 1\,n=1, (-1)^"number of prime factors"\, n "squarefree", 0\, n "not squarefree" )$
+ $mu * id eq phi $, $mu * 1 = epsilon$, $phi * 1 = id$
- Problem: given a table with $a_(i,j)=gcd(i,j)$, support multiplying a single row or column by a number, and querying the sum of the whole table.

Since $gcd(n,m)=sum_(i divides n and i divides m)phi(i)$, maintain for each $phi(i)$ a table of size $floor(n/i)$ whose entries all start at $phi(i)$, with cell $(x,y)$ corresponding to $(x*i,y*i)$. An update to the big table then becomes updates to the small tables; a lazy multiplier per row and per column of each small table suffices.

== Möbius Inversion
1. If $f(n) eq sum_(d divides n) g(d)$, then $g(n) eq sum_(d divides n)mu(n/d)f(d)$
$ sum_(d divides n)mu(n/d)f(d)&=sum_(d divides n)mu(n/d)sum_(k divides d)g(k)\ &=sum_(k divides n)g(k)sum_(d divides n/k)mu(d)\ &=sum_(k divides n)g(k)\[n/k eq 1\] = g(n) $
2. If $f(n) eq sum_(n divides d) g(d)$, then $g(n) eq sum_(n divides d)mu(d/n)f(d)$
3. 
$d(n m)=sum_(i divides n)sum_(j divides m)[gcd(i,j)=1]$

Common manipulation tricks:
+ To check whether a function is multiplicative, it suffices to verify $f(p^i)f(q^j)=f(p^i q^j)$; sieving a multiplicative function with a linear sieve relies on the same fact.
+ For sums of the form $sum_(d divides n) mu(d)sum_(k divides n/d)phi(k)floor(n/(d k))$, substitute $T=d k$ and enumerate $T$ to turn the $d,k$ part into a convolution. If the function involves bases and exponents it cannot be sieved linearly, but its values can still be computed by brute force in harmonic-series time.

== Divisor Blocks
+ Floor division
```cpp
for (int i = 1, j; i <= min(n, m); i = j + 1) {
    j = min(n / (n / i), m / (m / i));
    // n / {i,...,j} = n / i
}
```
+ Ceiling division: $ceil(n/i)=floor((n+i-1)/i)=floor((n-1)/i)+1$

== Segment Sieve
- Finding the primes in an interval: every composite in the interval has a divisor no larger than $sqrt(x)$, so sieving with the primes in that range (Eratosthenes-style) suffices.

== Min25 Sieve
Computes $F(n)=sum_(i=1)^(n)f(i)$ in $O(n^(3/4)/log(n))$ time, provided the multiplicative function's values $f(p^k)$ can be evaluated quickly.
- Let $R(i)$ denote the smallest prime factor of $i$
$ G(n,j)=sum_(i=1)^n f(i)[i in "prime" or R(i) > P_j] $
Recurrence:
$ G(n,j)=cases( G(n,j-1) "IF" p_j times p_j > n, G(n,j-1)-f(p_j)(G(n/p_j,j-1)-sum_(i=1)^(j-1)f(p_i)) "IF" p_j times p_j <= n ) $
By divisor blocks, the first argument of G takes only $sqrt(n)$ distinct values; store them in $w[]$, and use $"id1"[]$ and $"id2"[]$ to map each value to its index. Since only $G(x,"pcnt")$ is needed at the end, the second dimension can be rolled away.
- Let $S(n,j)=sum_(i=1)^n f(i)[R(i)>=p_j]$
The prime part of the answer is clearly $G(n,"pcnt")-sum_(i=1)^(j-1)f(p_i)$; for the composite part, factor out the smallest prime power $p^k$ to obtain the recurrence for $S(n,j)$
$ S(n,j)=G(n,"pcnt")-sum_(i=1)^(j-1)f(p_i)+sum_(i=j)^"pcnt"sum_(k=1)^(p_i^(k+1)<=n)f(p^k)S(n/p^k,j+1)+f(p^(k+1)) $
The recursion bottoms out at $n=1 or p_j > n$, where $S(n,j)=0$

$sum_(i=1)^(n)f(i)=S(n,1)+f(1)$
```cpp
#include <cstdio>
#include <cmath>
typedef long long ll;
const int N = 4e6 + 5, MOD = 1e9 + 7;
const ll i6 = 166666668, i2 = 500000004;
ll n, id1[N], id2[N], su1[N], su2[N], p[N], sqr, w[N], g[N], h[N];
int cnt, m;
bool vis[N];
ll add(ll a, ll b) {a %= MOD, b %= MOD; return (a + b >= MOD) ? 
a + b - MOD : a + b;} ll mul(ll a, ll b) {a %= MOD, b %= MOD; return a * b % MOD;} ll dec(ll a, ll b) {a %= MOD, b %= MOD; return ((a - b) % MOD + MOD) % MOD;} void init(int m) { for (ll i = 2; i <= m; i++) { if (!vis[i]) p[++cnt] = i, su1[cnt] = add(su1[cnt - 1], i), su2[cnt] = add(su2[cnt - 1], mul(i, i)); for (int j = 1; j <= cnt && i * p[j] <= m; j++) { vis[p[j] * i] = 1; if (i % p[j] == 0) break; } } } ll S(ll x, int y) { if (p[y] > x || x <= 1) return 0; int k = (x <= sqr) ? id1[x] : id2[n / x]; ll res = dec(dec(g[k], h[k]), dec(su2[y - 1], su1[y - 1])); for (int i = y; i <= cnt && p[i] * p[i] <= x; i++) { ll pow1 = p[i], pow2 = p[i] * p[i]; for (int e = 1; pow2 <= x; pow1 = pow2, pow2 *= p[i], e++) { ll tmp = mul(mul(pow1, dec(pow1, 1)), S(x / pow1, i + 1)); tmp = add(tmp, mul(pow2, dec(pow2, 1))); res = add(res, tmp); } } return res; } int main() { scanf("%lld", &n); sqr = sqrt(n + 0.5) + 1; init(sqr); for (ll l = 1, r; l <= n; l = r + 1) { r = n / (n / l); w[++m] = n / l; g[m] = mul(w[m] % MOD, (w[m] + 1) % MOD); g[m] = mul(g[m], (2 * w[m] + 1) % MOD); g[m] = mul(g[m], i6); g[m] = dec(g[m], 1); h[m] = mul(w[m] % MOD, (w[m] + 1) % MOD);; h[m] = mul(h[m], i2); h[m] = dec(h[m], 1); (w[m] <= sqr) ? id1[w[m]] = m : id2[r] = m; } for (int j = 1; j <= cnt; j++) for (int i = 1; i <= m && p[j] * p[j] <= w[i]; i++) { int k = (w[i] / p[j] <= sqr) ? 
id1[w[i] / p[j]] : id2[n / (w[i] / p[j])]; g[i] = dec(g[i], mul(mul(p[j], p[j]), dec(g[k], su2[j - 1]))); h[i] = dec(h[i], mul(p[j], dec(h[k], su1[j - 1]))); } //printf("%lld\n", g[1] - h[1]); printf("%lld\n", add(S(n, 1), 1)); return 0; } ``` = 图论 == 找环 ```cpp const int N = 5e5 + 5; int n, m, col[N], pre[N], pre_edg[N]; vector<pii> G[N]; vector<vector<int>> resp, rese; //point void get_cyc(int u, int v) { if (!resp.empty()) return; vector<int> cyc; cyc.push_back(v); while (true) { v = pre[v]; if (v == 0) break; cyc.push_back(v); if (v == u) break; } reverse(cyc.begin(), cyc.end()); resp.push_back(cyc); } // edge void get_cyc(int u, int v, int id) { if (!rese.empty()) return; vector<int> cyc; cyc.push_back(id); while (true) { if (pre[v] == 0) break; cyc.push_back(pre_edg[v]); v = pre[v]; if (v == u) break; } reverse(cyc.begin(), cyc.end()); rese.push_back(cyc); } void dfs(int u, int edg) { col[u] = 1; for (auto [v, id] : G[u]) if (id != edg) { if (col[v] == 1) { get_cyc(v, u); get_cyc(v, u, id); } else if (col[v] == 0) { pre[v] = u; pre_edg[v] = id; dfs(v, id); } } col[u] = 2; } void MAIN() { cin >> n >> m; for (int i = 1; i <= m; i++) { int u, v; cin >> u >> v; // G[u].push_back({v, i}); // G[v].push_back({u, i}); } for (int i = 1; i <= n; i++) if (!col[i]) dfs(i, -1); } ``` == SPFA ```cpp mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count()); const int mod = 998244353; const int N = 5e5 + 5; const ll inf = 1e17; int n, m, s, t, q[N], ql, qr; int vis[N], fr[N]; ll dis[N]; vector<pii> G[N]; void MAIN() { cin >> n >> m >> s >> t; for (int i = 1; i <= m; i++) { int u, v, w; cin >> u >> v >> w; G[u].push_back({v, w}); } for (int i = 0; i <= n; i++) dis[i] = inf; dis[s] = 0; q[qr] = s; vis[s] = 1; while (ql <= qr) { if (rng() % (qr - ql + 1) == 0) sort(q + ql, q + qr + 1, [](int x, int y) { return dis[x] < dis[y]; }); int u = q[ql++]; vis[u] = 0; for (auto [v, w] : G[u]) { if (dis[u] + w < dis[v]) { dis[v] = dis[u] + w; fr[v] = u; if (!vis[v]) { if (ql 
> 0) q[--ql] = v; else q[++qr] = v; vis[v] = 1; } } } } if (dis[t] == inf) { cout << "-1\n"; return; } cout << dis[t] << ' '; vector<pii> stk; while (t != s) { stk.push_back({fr[t], t}); t = fr[t]; } reverse(stk.begin(), stk.end()); cout << stk.size() << '\n'; for (auto [u, v] : stk) cout << u << ' ' << v << '\n'; } ``` == 连通分量 === 有向图强连通分量 ```cpp const int N = 5e5 + 5; int n, m, dfc, dfn[N], low[N], stk[N], top, idx[N], in_stk[N], scc_cnt; vector<int> G[N]; void tarjan(int u) { low[u] = dfn[u] = ++dfc; stk[++top] = u; in_stk[u] = 1; for (int v : G[u]) { if (!dfn[v]) { tarjan(v); low[u] = min(low[u], low[v]); } else if (in_stk[v]) low[u] = min(dfn[v], low[u]); } if (low[u] == dfn[u]) { int x; scc_cnt++; do { x = stk[top--]; idx[x] = scc_cnt; in_stk[x] = 0; } while (x != u); } } void MAIN() { for (int i = 1; i <= n; i++) low[i] = dfn[i] = idx[i] = in_stk[i] = 0; dfc = scc_cnt = top = 0; cin >> n >> m; for (int i = 1; i <= n; i++) if (!dfn[i]) tarjan(i); } ``` === 强连通分量(incremental) $"edge"[3]$ 保存了每条边的两个点在同一个强连通分量的时间。调用的时候右端点时间要大一位,因为可能有些边到最后也不能在一个强连通分量中。 ```cpp int n, m, Q, s[N]; vector<array<int, 4>> edge; vector<int> G[N]; struct DSU { int fa[N], dep[N], top; pii stk[N]; void init(int n) { top = 0; iota(fa, fa + n + 1, 0); fill(dep, dep + n + 1, 1); } int find(int u) { return u == fa[u] ? u : find(fa[u]); } void merge(int u, int v) { u = find(u), v = find(v); if (u == v) return; if (dep[u] > dep[v]) swap(u, v); stk[++top] = {u, (dep[u] == dep[v] ? 
v : -1)}; fa[u] = v; dep[v] += (dep[u] == dep[v]); } void rev(int tim) { while (tim < top) { auto [u, v] = stk[top--]; fa[u] = u; if (v != -1) dep[v]--; } } } D; int stk[N], top, dfc, dfn[N], low[N], in_stk[N]; void tarjan(int u) { low[u] = dfn[u] = ++dfc; stk[++top] = u; in_stk[u] = 1; for (int v : G[u]) { if (!dfn[v]) { tarjan(v); low[u] = min(low[u], low[v]); } else if (in_stk[v]) low[u] = min(dfn[v], low[u]); } if (low[u] == dfn[u]) { int x; do { x = stk[top--]; D.merge(x, u); in_stk[x] = 0; } while (x != u); } } void solve(int l, int r, int a, int b) { if (l == r) { for (int i = a; i <= b; i++) edge[i][3] = l; return; } int mid = (l + r) >> 1; vector<int> node; for (int i = a; i <= b; i++) if (edge[i][0] <= mid) { int u = D.find(edge[i][1]), v = D.find(edge[i][2]); if (u != v) node.push_back(u), node.push_back(v), G[u].push_back(v); } int otp = D.top; for (int x : node) if (!dfn[x]) tarjan(x); vector<array<int, 4>> e1, e2; for (int i = a; i <= b; i++) { int u = D.find(edge[i][1]), v = D.find(edge[i][2]); if (edge[i][0] > mid || u != v) e2.push_back(edge[i]); else e1.push_back(edge[i]); } int s1 = e1.size(), s2 = e2.size(); for (int i = a; i < a + s1; i++) edge[i] = e1[i - a]; for (int i = a + s1; i <= b; i++) edge[i] = e2[i - a - s1]; dfc = 0; for (int x : node) dfn[x] = low[x] = 0, vector<int>().swap(G[x]); vector<int>().swap(node); vector<array<int, 4>>().swap(e1); vector<array<int, 4>>().swap(e2); solve(mid + 1, r, a + s1, b); D.rev(otp); solve(l, mid, a, a + s1 - 1); } ``` === 割点和桥 ```cpp int dfn[N], low[N], dfs_clock; bool iscut[N], vis[N]; void dfs(int u, int fa) { dfn[u] = low[u] = ++dfs_clock; vis[u] = 1; int child = 0; for (int v : e[u]) { if (v == fa) continue; if (!dfn[v]) { dfs(v, u); low[u] = min(low[u], low[v]); child++; if (low[v] >= dfn[u]) iscut[u] = 1; } else if (dfn[u] > dfn[v] && v != fa) low[u] = min(low[u], dfn[v]); if (fa == 0 && child == 1) iscut[u] = 0; } } ``` === 点双 ```cpp #include <cstdio> #include <vector> using namespace std; 
const int N = 5e5 + 5, M = 2e6 + 5; int n, m; struct edge { int to, nt; } e[M << 1]; int hd[N], tot = 1; void add(int u, int v) { e[++tot] = (edge){v, hd[u]}, hd[u] = tot; } void uadd(int u, int v) { add(u, v), add(v, u); } int ans; int dfn[N], low[N], bcc_cnt; int sta[N], top, cnt; bool cut[N]; vector<int> dcc[N]; int root; void tarjan(int u) { dfn[u] = low[u] = ++bcc_cnt, sta[++top] = u; if (u == root && hd[u] == 0) { dcc[++cnt].push_back(u); return; } int f = 0; for (int i = hd[u]; i; i = e[i].nt) { int v = e[i].to; if (!dfn[v]) { tarjan(v); low[u] = min(low[u], low[v]); if (low[v] >= dfn[u]) { if (++f > 1 || u != root) cut[u] = true; cnt++; do dcc[cnt].push_back(sta[top--]); while (sta[top + 1] != v); dcc[cnt].push_back(u); } } else low[u] = min(low[u], dfn[v]); } } int main() { scanf("%d%d", &n, &m); int u, v; for (int i = 1; i <= m; i++) { scanf("%d%d", &u, &v); if (u != v) uadd(u, v); } for (int i = 1; i <= n; i++) if (!dfn[i]) root = i, tarjan(i); printf("%d\n", cnt); for (int i = 1; i <= cnt; i++) { printf("%llu ", dcc[i].size()); for (int j = 0; j < dcc[i].size(); j++) printf("%d ", dcc[i][j]); printf("\n"); } return 0; } ``` === 边双 ```cpp #include <algorithm> #include <cstdio> #include <vector> using namespace std; const int N = 5e5 + 5, M = 2e6 + 5; int n, m, ans; int tot = 1, hd[N]; struct edge { int to, nt; } e[M << 1]; void add(int u, int v) { e[++tot].to = v, e[tot].nt = hd[u], hd[u] = tot; } void uadd(int u, int v) { add(u, v), add(v, u); } bool bz[M << 1]; int bcc_cnt, dfn[N], low[N], vis_bcc[N]; vector<vector<int>> bcc; void tarjan(int x, int in) { dfn[x] = low[x] = ++bcc_cnt; for (int i = hd[x]; i; i = e[i].nt) { int v = e[i].to; if (dfn[v] == 0) { tarjan(v, i); if (dfn[x] < low[v]) bz[i] = bz[i ^ 1] = 1; low[x] = min(low[x], low[v]); } else if (i != (in ^ 1)) low[x] = min(low[x], dfn[v]); } } void dfs(int x, int id) { vis_bcc[x] = id, bcc[id - 1].push_back(x); for (int i = hd[x]; i; i = e[i].nt) { int v = e[i].to; if (vis_bcc[v] || bz[i]) 
continue; dfs(v, id); } } int main() { scanf("%d%d", &n, &m); int u, v; for (int i = 1; i <= m; i++) { scanf("%d%d", &u, &v); if (u == v) continue; uadd(u, v); } for (int i = 1; i <= n; i++) if (dfn[i] == 0) tarjan(i, 0); for (int i = 1; i <= n; i++) if (vis_bcc[i] == 0) { bcc.push_back(vector<int>()); dfs(i, ++ans); } printf("%d\n", ans); for (int i = 0; i < ans; i++) { printf("%llu", bcc[i].size()); for (int j = 0; j < bcc[i].size(); j++) printf(" %d", bcc[i][j]); printf("\n"); } return 0; } ``` == 二分图匹配 === 匈牙利算法 mch 记录的是右部点匹配的左部点 ```cpp int mch[maxn], vis[maxn]; std::vector<int> e[maxn]; bool dfs(const int u, const int tag) { for (auto v : e[u]) { if (vis[v] == tag) continue; vis[v] = tag; if (!mch[v] || dfs(mch[v], tag)) return mch[v] = u, 1; } return 0; } int main() { int ans = 0; for (int i = 1; i <= n; ++i) if (dfs(i, i)) ++ans; } ``` === KM == 网络流 === 网络最大流 ```cpp int head[N], cur[N], ecnt, d[N]; struct Edge { int nxt, v, flow, cap; }e[]; void add_edge(int u, int v, int flow, int cap) { e[ecnt] = {head[u], v, flow, cap}; head[u] = ecnt++; e[ecnt] = {head[v], u, flow, 0}; head[v] = ecnt++; } bool bfs() { memset(vis, 0, sizeof vis); std::queue<int> q; q.push(s); vis[s] = 1; d[s] = 0; while (!q.empty()) { int u = q.front(); q.pop(); for (int i = head[u]; i != -1; i = e[i].nxt) { int v = e[i].v; if (vis[v] || e[i].flow >= e[i].cap) continue; d[v] = d[u] + 1; vis[v] = 1; q.push(v); } } return vis[t]; } int dfs(int u, int a) { if (u == t || !a) return a; int flow = 0, f; for (int& i = cur[u]; i != -1; i = e[i].nxt) { int v = e[i].v; if (d[u] + 1 == d[v] && (f = dfs(v, std::min(a, e[i].cap - e[i].flow))) > 0) { e[i].flow += f; e[i ^ 1].flow -= f; flow += f; a -= f; if (!a) break; } } return flow; } ``` === 最小费用最大流 ```cpp const int inf = 1e9; int head[N], cur[N], ecnt, dis[N], s, t, n, m, mincost; bool vis[N]; struct Edge { int nxt, v, flow, cap, w; }e[100002]; void add_edge(int u, int v, int flow, int cap, int w) { e[ecnt] = {head[u], v, flow, cap, w}; head[u] = 
ecnt++; e[ecnt] = {head[v], u, flow, 0, -w}; head[v] = ecnt++; } bool spfa(int s, int t) { std::fill(vis + s, vis + t + 1, 0); std::fill(dis + s, dis + t + 1, inf); std::queue<int> q; q.push(s); dis[s] = 0; vis[s] = 1; while (!q.empty()) { int u = q.front(); q.pop(); vis[u] = 0; for (int i = head[u]; i != -1; i = e[i].nxt) { int v = e[i].v; if (e[i].flow < e[i].cap && dis[u] + e[i].w < dis[v]) { dis[v] = dis[u] + e[i].w; if (!vis[v]) vis[v] = 1, q.push(v); } } } return dis[t] != inf; } int dfs(int u, int a) { if (vis[u]) return 0; if (u == t || !a) return a; vis[u] = 1; int flow = 0, f; for (int& i = cur[u]; i != -1; i = e[i].nxt) { int v = e[i].v; if (dis[u] + e[i].w == dis[v] && (f = dfs(v, std::min(a, e[i].cap - e[i].flow))) > 0) { e[i].flow += f; e[i ^ 1].flow -= f; flow += f; mincost += e[i].w * f; a -= f; if (!a) break; } } vis[u] = 0; return flow; } ``` == 2-SAT $2 * u$ 代表不选择,$2*u+1$ 代表选择。 === 搜索 ```cpp vector<int> G[N * 2]; bool mark[N * 2]; int stk[N], top; void build_G() { for (int i = 1; i <= n; i++) { int u, v; G[2 * u + 1].push_back(2 * v); G[2 * v + 1].push_back(2 * u); } } bool dfs(int u) { if (mark[u ^ 1]) return false; if (mark[u]) return true; mark[u] = 1; stk[++top] = u; for (int v : G[u]) { if (!dfs(v)) return false; } return true; } bool 2_sat() { for (int i = 1; i <= n; i++) { if (!mark[i * 2] && !mark[i * 2 + 1]) { top = 0; if (!dfs(2 * i)) { while (top) mark[stk[top--]] = 0; if (!dfs(2 * i + 1)) return 0; } } } return 1; } ``` === tarjan 如果对于一个*x* `sccno`比它的反状态 *x*∧1 的 `sccno` 要小,那么我们用 *x* 这个状态当做答案,否则用它的反状态当做答案。 == 生成树 === Prime ```cpp int n, m; vector<pii> G[N]; ll dis[N]; int vis[N]; void MAIN() { cin >> n >> m; for (int i = 1; i <= m; i++) { int u, v, w; cin >> u >> v >> w; G[u].push_back({v, w}); G[v].push_back({u, w}); } for (int i = 1; i <= n; i++) dis[i] = 1e18, vis[i] = 0; priority_queue<pair<ll, int>> q; dis[1] = 0; q.push({-dis[1], 1}); ll ans = 0; while (!q.empty()) { auto [val, u] = q.top(); q.pop(); if (vis[u]) continue; vis[u] 
= 1; ans -= val; for (auto [v, w] : G[u]) if (dis[v] > w) { dis[v] = w; q.push({-w, v}); } } cout << ans << '\n'; } ``` == 圆方树 记得开两倍空间。 ```cpp void tarjan(int u) { stk[++top] = u; low[u] = dfn[u] = ++dfc; for (int v : G[u]) { if (!dfn[v]) { tarjan(v); low[u] = min(low[u], low[v]); if (low[v] == dfn[u]) { cnt++; for (int x = 0; x != v; --top) { x = stk[top]; T[cnt].push_back(x); T[x].push_back(cnt); val[cnt]++; } T[cnt].push_back(u); T[u].push_back(cnt); val[cnt]++; } } else low[u] = min(low[u], dfn[v]); } } // 调用 cnt = n; for (int i = 1; i <= n; i++) if (!dfn[i]) { tarjan(i); --top; } ``` - 静态仙人掌最短路。边权设置为到点双顶点的最短距离。 ```cpp void tarjan(int u) { stk[++top] = u; dfn[u] = low[u] = ++dfc; for (auto [v, w] : G[u]) if (!dfn[v]) { dis[v] = dis[u] + w; tarjan(v); low[u] = min(low[u], low[v]); if (low[v] == dfn[u]) { ++cnt; val[cnt] = cyc[stk[top]] + dis[stk[top]] - dis[u]; for (int x = 0; x != v; --top) { x = stk[top]; //assert(val[cnt] >= (dis[x] - dis[u])); int w = min(dis[x] - dis[u], val[cnt] - (dis[x] - dis[u])); T[cnt].push_back({x, w}); T[x].push_back({cnt, w}); } T[cnt].push_back({u, 0}); T[u].push_back({cnt, 0}); } } else if (dfn[v] < dfn[u]) { cyc[u] = w; low[u] = min(low[u], dfn[v]); } } void dfs(int u, int fa) { faz[0][u] = fa; for (int k = 1; k < M; k++) faz[k][u] = faz[k - 1][faz[k - 1][u]]; for (auto [v, w] : T[u]) if (v != fa) { dep[v] = dep[u] + 1; ff[v] = ff[u] + w; dfs(v, u); } } int dist(int u, int v) { int tu = u, tv = v; if (dep[u] < dep[v]) swap(u, v); int det = dep[u] - dep[v]; for (int k = 0; k < M; k++) if ((det >> k) & 1) u = faz[k][u]; int lca; if (u == v) lca = u; else { for (int k = M - 1; k >= 0; k--) if (faz[k][u] != faz[k][v]) { u = faz[k][u]; v = faz[k][v]; } lca = faz[0][u]; } if (lca <= n) return ff[tu] + ff[tv] - ff[lca] * 2; int tm = min(abs(dis[u] - dis[v]), val[lca] - abs(dis[u] - dis[v])); return ff[tu] - ff[u] + ff[tv] - ff[v] + tm; } ``` - 圆方树上 dp 以单源最短路为例,原点记录该点出发是否返回的最长路,方点记录顶点出发经过环上所能走到的最长路。 ```cpp void dfs(int u, int fa) { for 
(int v : T[u]) if (v != fa) dfs(v, u); if (u <= n) { int mx = 0; /* 这里必须设为 0 而不是 -inf, 或者在平凡方点转移的时候要 max(dp[0], dp[1]) hack: 4 4 1 2 2 3 3 4 4 2 */ for (int v : T[u]) if (v != fa) { dp[u][1] += dp[v][1]; mx = max(mx, dp[v][0] - dp[v][1]); dp[u][0] += dp[v][1]; } dp[u][0] += mx; } else { int sum = 1; dp[u][1] = 1; for (int v : T[u]) if (v != fa) { dp[u][1] += dp[v][1] + 1; dp[u][0] = max(dp[u][0], sum + dp[v][0]); sum += dp[v][1] + 1; } sum = 1; reverse(T[u].begin(), T[u].end()); for (int v : T[u]) if (v != fa) { dp[u][0] = max(dp[u][0], sum + dp[v][0]); sum += dp[v][1] + 1; } if (val[u] == 2) dp[u][1] = 0; } } ``` == 欧拉回路 - 有向图 ```cpp void dfs(int u) { for (int &i = hd[u]; i < G[u].size(); ) dfs(G[u][i++]); stk.push_back(u); } int check() { int mo = 0, le = 0, st = 1; for (int i = 1; i <= n; i++) { if (abs(in[i] - out[i]) > 1) return -1; if (in[i] > out[i]) le++; if (in[i] < out[i]) mo++, st = i; } if (mo > 1 || le > 1 || mo + le == 1) return -1; return st; } void MAIN() { cin >> n >> m; for (int i = 1; i <= m; i++) { int u, v; cin >> u >> v; in[v]++; out[u]++; G[u].push_back(v); } for (int i = 1; i <= n; i++) sort(G[i].begin(), G[i].end()); int tmp = check(); if (tmp == -1) cout << "No\n"; else { dfs(tmp); copy(stk.rbegin(), stk.rend(), ostream_iterator<int>(cout, " ")); cout << '\n'; } } ``` - 无向图 ```cpp void dfs(int u) { for (int &i = hd[u]; i < G[u].size(); ) { while (i < G[u].size() && cnt[u][G[u][i]] == 0) ++i; if (i == G[u].size()) break; cnt[u][G[u][i]]--; cnt[G[u][i]][u]--; dfs(G[u][i++]); } stk.push_back(u); } int check() { int odd = 0, st = -1; for (int i = 1; i <= n; i++) { if (deg[i] == 0) continue; if (st == -1) st = i; if (deg[i] & 1) { ++odd; if (odd == 1) st = i; } } if (odd > 2) return -1; return st; } void MAIN() { n = 500; cin >> m; for (int i = 1; i <= m; i++) { int u, v; cin >> u >> v; ++deg[u]; ++deg[v]; G[u].push_back(v); G[v].push_back(u); ++cnt[u][v]; ++cnt[v][u]; } for (int i = 1; i <= n; i++) sort(G[i].begin(), G[i].end()); int tmp = 
check(); if (tmp == -1) cout << "No\n"; else { dfs(tmp); copy(stk.rbegin(), stk.rend(), ostream_iterator<int>(cout, "\n")); } } ``` == 无向图三/四元环计数 - 三元环 ```cpp int vis[N]; vector<int> G[N]; ll main() { ll cnt = 0; for (int i = 0; i < m; i++) { if (deg[ed[i].fi] == deg[ed[i].se] && ed[i].fi > ed[i].se) swap(ed[i].fi, ed[i].se); if (deg[ed[i].fi] > deg[ed[i].se]) swap(ed[i].fi, ed[i].se); G[ed[i].fi].push_back(ed[i].se); } for (int u = 1; u <= n; u++) { for (int v : G[u]) vis[v] = 1; for (int v : G[u]) for (int w : G[v]) if (vis[w]) ++cnt; for (int v : G[u]) vis[v] = 0; } return cnt; } ``` - 四元环 统计 $c?b->a<-d?c$ 的数目,因为最大度数点 $a$ 不同,所以不会算重。 ```cpp int n, m, deg[N], cnt[N]; bool bigger(int a, int b) { return deg[a] > deg[b] || (deg[a] == deg[b] && a > b); } void MAIN() { cin >> n >> m; for (int i = 1; i <= m; i++) { int u, v; cin >> u >> v; ed.push_back({u, v}); G[u].push_back(v); G[v].push_back(u); ++deg[u]; ++deg[v]; } for (auto [u, v] : ed) { if (bigger(v, u)) swap(u, v); T[u].push_back(v); } ll ans = 0; for (int a = 1; a <= n; a++) { for (int b : T[a]) { for (int c : G[b]) { if (c == a || bigger(c, a)) continue; ans += cnt[c]; ++cnt[c]; } } for (int b : T[a]) for (int c : G[b]) cnt[c] = 0; } cout << ans << '\n'; } ``` == 虚树 需要保证 $"LCA"(0, u) = 0$ ```cpp int solve(vector<int>po) { sort(po.begin(), po.end(), [](int x, int y) { return dfn[x] < dfn[y]; }); int ans = 0; top = 0; stk[++top] = 0; for (int u : po) { int lca = LCA(u, stk[top]); if (lca == stk[top]) stk[++top] = u; else { for (int i = top; i >= 2 && dep[stk[i - 1]] >= dep[lca]; i--) { // ans += ff[stk[i]] - ff[stk[i - 1]] - (vis[stk[i]] ? val[stk[i]]: 0); // cout << stk[i] << ' ' << stk[i - 1] << ' ' << ff[stk[i]] - ff[stk[i - 1]] - (vis[stk[i]] ? val[stk[i]]: 0) << '\n'; add_edge(stk[i], stk[i - 1]); --top; } if (stk[top] != lca) { // cout << lca << ' ' << stk[top] << ' ' << ff[stk[top]] - ff[lca] - (vis[stk[top]] ? val[stk[top]] : 0) << '\n'; // ans += ff[stk[top]] - ff[lca] - (vis[stk[top]] ? 
val[stk[top]] : 0); add_edge(stk[top], lca); stk[top] = lca; } stk[++top] = u; } } for (int i = 2; i < top; i++) { // cout << stk[i + 1] << ' ' << stk[i] << ' ' << ff[stk[i + 1]] - ff[stk[i]] - (vis[stk[i + 1]] ? val[stk[i + 1]] : 0) << '\n'; // ans += ff[stk[i + 1]] - ff[stk[i]] - (vis[stk[i + 1]] ? val[stk[i + 1]] : 0); add_edge(stk[i + 1], stk[i]); } //ans += (vis[stk[2]] ? 0 : val[stk[2]]); return ans; } ``` == 最近公共祖先 ```cpp // 倍增 int faz[N][20], dep[N]; void dfs(int u, int fa) { faz[u][0] = fa; dep[u] = dep[fa] + 1; for (int i = 1; i < 20; i++) faz[u][i] = faz[faz[u][i - 1]][i - 1]; for (int v : G[u]) if (v != fa) { dfs(v, u); } } int LCA(int u, int v) { if (dep[u] < dep[v]) swap(u, v); int d = dep[u] - dep[v]; for (int i = 0; i < 20; i++) if ((d >> i) & 1) u = faz[u][i]; if (v == u) return u; for (int i = 19; i >= 0; i--) if (faz[u][i] != faz[v][i]) u = faz[u][i], v = faz[v][i]; return faz[u][0]; } //树剖 int dfc, dfn[N], rnk[N], siz[N], top[N], dep[N], son[N], faz[N]; void dfs1(int u, int fa) { dep[u] = dep[fa] + 1; siz[u] = 1; son[u] = -1; faz[u] = fa; for (int v : G[u]) { if (v == fa) continue; dfs1(v, u); siz[u] += siz[v]; if (son[u] == -1 || siz[son[u]] < siz[v]) son[u] = v; } } void dfs2(int u, int fa, int tp) { dfn[u] = ++dfc; rnk[dfc] = u; top[u] = tp; if (son[u] != -1) dfs2(son[u], u, tp); for (int v : G[u]) { if (v == fa || v == son[u]) continue; dfs2(v, u, v); } } int LCA(int u, int v) { while (top[u] != top[v]) { if (dep[top[u]] > dep[top[v]]) u = faz[top[u]]; else v = faz[top[v]]; } return dep[u] > dep[v] ? v : u; } // O(1) query int dfn[N], faz[N], dep[N], rnk[N], dfc, st[N][20]; void dfs(int u, int fa) { dfn[u] = ++dfc; faz[u] = fa; dep[u] = dep[fa] + 1; rnk[dfc] = u; for (auto [v, w] : G[u]) if (v != fa) dfs(v, u); } int LCA(int u, int v) { if (u == v) return u; if (dfn[u] > dfn[v]) swap(u, v); int l = dfn[u] + 1, r = dfn[v]; int k = __lg(r - l + 1); return dep[st[l][k]] < dep[st[r - (1 << k) + 1][k]] ? 
faz[st[l][k]] : faz[st[r - (1 << k) + 1][k]]; } int main() { dfs(1, 0); dep[0] = n + 1; for (int i = 1; i <= n; i++) st[i][0] = rnk[i]; for (int j = 1; j < 20; j++) { for (int i = 1; i <= n; i++) { st[i][j] = dep[st[i][j - 1]] <= dep[st[min(n, i + (1 << (j - 1)))][j - 1]] ? st[i][j - 1] : st[min(n, i + (1 << (j - 1)))][j - 1]; } } }
```

= Mathematics

== Subset convolution

High-dimensional prefix sum (sum over subsets):

```cpp
for (int k = 0; k < 20; k++) { for (int i = 0; i < (1 << 20); i++) if ((i >> k) & 1) { f[i] = f[i] + f[i ^ (1 << k)]; } }
```

High-dimensional suffix sum (sum over supersets; note the bit must be *unset* here):

```cpp
for (int k = 0; k < 20; k++) { for (int i = 0; i < (1 << 20); i++) if (!((i >> k) & 1)) { f[i] = f[i] + f[i | (1 << k)]; } }
```

High-dimensional difference (inverse of the prefix sum):

```cpp
for (int k = 0; k < 20; k++) { for (int i = 0; i < (1 << 20); i++) if ((i >> k) & 1) { f[i] = f[i] - f[i ^ (1 << k)]; } }
```

== Linear basis

```cpp
struct LinearBasis { int a[20], pos[20]; void add(int v, int p) { for (int i = 19; i >= 0; i--) if ((v >> i) & 1) { if (a[i]) { if (p > pos[i]) { swap(p, pos[i]); swap(a[i], v); } v ^= a[i]; } else { a[i] = v; pos[i] = p; return; } } } } b[N];
LinearBasis operator + (LinearBasis a, LinearBasis b) { for (int i = 19; i >= 0; i--) { if (b.a[i]) a.add(b.a[i], b.pos[i]); } return a; }
```

== Gaussian elimination

```cpp
namespace Gauss { bitset<258> a[256 + 256 + 5]; int n; void push(const bitset<258>& x) { a[++n] = x; } bool solve(int m) { int k = 1; for (int i = 1; i <= m; i++) { if (k > n) break; for (int j = k + 1; j <= n; j++) if (a[j][i] > 0) { swap(a[k], a[j]); break; } if (a[k][i] == 0) break; for (int j = 1; j <= n; j++) if (j != k && a[j][i]) { a[j] ^= a[k]; } ++k; } for (int i = k; i <= n; i++) if (a[i][m + 1]) return false; return true; } }
```

= Polynomials

== NTT

This template is rather slow.

```cpp
#include <bits/stdc++.h>
using namespace std; typedef vector<int> poly; const int mod = 998244353; const int N = 4000000 + 5; int rf[32][N];
int fpow(int a, int b) { int res = 1; for (; b; b >>= 1, a = a * 1ll * a % mod) if (b & 1) res = res * 1ll * a % mod; return res; }
void init(int n) { assert(n < N); int lg = __lg(n); static vector<bool>
bt(32, 0); if (bt[lg] == 1) return; bt[lg] = 1; for (int i = 0; i < n; i++) rf[lg][i] = (rf[lg][i >> 1] >> 1) + ((i & 1) ? (n >> 1) : 0); } void ntt(poly &x, int lim, int op) { int lg = __lg(lim), gn, g, tmp;; for (int i = 0; i < lim; i++) if (i < rf[lg][i]) swap(x[i], x[rf[lg][i]]); for (int len = 2; len <= lim; len <<= 1) { int k = (len >> 1); gn = fpow(3, (mod - 1) / len); for (int i = 0; i < lim; i += len) { g = 1; for (int j = 0; j < k; j++, g = gn * 1ll * g % mod) { tmp = x[i + j + k] * 1ll * g % mod; x[i + j + k] = (x[i + j] - tmp + mod) % mod; x[i + j] = (x[i + j] + tmp) % mod; } } } if (op == -1) { reverse(x.begin() + 1, x.begin() + lim); int inv = fpow(lim, mod - 2); for (int i = 0; i < lim; i++) x[i] = x[i] * 1ll * inv % mod; } } poly multiply(const poly &a, const poly &b) { assert(!a.empty() && !b.empty()); int lim = 1; while (lim + 1 < int(a.size() + b.size())) lim <<= 1; init(lim); poly pa = a, pb = b; while (pa.size() < lim) pa.push_back(0); while (pb.size() < lim) pb.push_back(0); ntt(pa, lim, 1); ntt(pb, lim, 1); for (int i = 0; i < lim; i++) pa[i] = pa[i] * 1ll * pb[i] % mod; ntt(pa, lim, -1); while (int(pa.size()) + 1 > int(a.size() + b.size())) pa.pop_back(); return pa; } poly prod_poly(const vector<poly>& vec) { // init vector, too slow int n = vec.size(); auto calc = [&](const auto &self, int l, int r) -> poly { if (l == r) return vec[l]; int mid = (l + r) >> 1; return multiply(self(self, l, mid), self(self, mid + 1, r)); }; return calc(calc, 0, n - 1); } // Semi-Online-Convolution poly semi_online_convolution(const poly& g, int n, int op = 0) { assert(n == g.size()); poly f(n, 0); f[0] = 1; auto CDQ = [&](const auto &self, int l, int r) -> void { if (l == r) { // exp if (op == 1 && l > 0) f[l] = f[l] * 1ll * fpow(l, mod - 2) % mod; return; } int mid = (l + r) >> 1; self(self, l, mid); poly a, b; for (int i = l; i <= mid; i++) a.push_back(f[i]); for (int i = 0; i <= r - l - 1; i++) b.push_back(g[i + 1]); a = multiply(a, b); for (int i = mid + 
1; i <= r; i++) f[i] = (f[i] + a[i - l - 1]) % mod; self(self, mid + 1, r); }; CDQ(CDQ, 0, n - 1); return f; } poly getinv(const poly &a) { assert(!a.empty()); poly res = {fpow(a[0], mod - 2)}, na = {a[0]}; int lim = 1; while (lim < int(a.size())) lim <<= 1; for (int len = 2; len <= lim; len <<= 1) { while (na.size() < len) { int tmp = na.size(); if (tmp < a.size()) na.push_back(a[tmp]); else na.push_back(0); } auto tmp = multiply(na, res); for (auto &x : tmp) x = (x > 0 ? mod - x : x); tmp[0] = ((tmp[0] + 2) >= mod) && (tmp[0] -= mod); tmp = multiply(res, tmp); while (tmp.size() > len) tmp.pop_back(); res = tmp; } while (res.size() > a.size()) res.pop_back(); return res; } poly exp(const poly &g) { int n = g.size(); poly b(n, 0); for (int i = 1; i < n; i++) b[i] = i * 1ll * g[i] % mod; return semi_online_convolution(b, n, 1); } poly ln(const poly &A) { int n = A.size(); auto C = getinv(A); poly A1(n, 0); for (int i = 0; i < n - 1; i++) A1[i] = (i + 1) * 1ll * A[i + 1] % mod; C = multiply(C, A1); for (int i = n - 1; i > 0; i--) C[i] = C[i - 1] * 1ll * fpow(i, mod - 2) % mod; C[0] = 0; while (C.size() > n) C.pop_back(); return C; } poly quick_pow(poly &a, int k, int k_mod_phi, bool is_k_bigger_than_mod = false) { assert(!a.empty()); int n = a.size(), t = -1, b; for (int i = 0; i < n; i++) if (a[i]) { t = i, b = a[i]; break; } if (t == -1 || t && is_k_bigger_than_mod || k * 1ll * t >= n) return poly(n, 0); poly f; for (int i = 0; i < n; i++) { if (i + t < n) f.push_back(a[i + t] * 1ll * fpow(b, mod - 2) % mod); else f.push_back(0); } f = ln(f); for (auto &x : f) x = x * 1ll * k % mod; f = exp(f); poly res; for (int i = 0; i < k * t; i++) res.push_back(0); int fb = fpow(b, k_mod_phi); for (int i = k * t; i < n; i++) res.push_back(f[i - k * t] * 1ll * fb % mod); return res; } int main() { ios::sync_with_stdio(0); cin.tie(0); int n, k = 0, k_mod_phi = 0, isb = 0; string s; cin >> n >> s; for (auto ch : s) { if ((ch - '0') + k * 10ll >= mod) isb = 1; k = ((ch - '0') + k 
* 10ll) % mod; k_mod_phi = ((ch - '0') + k_mod_phi * 10ll) % 998244352; } poly a(n); for (auto &x : a) cin >> x; a = quick_pow(a, k, k_mod_phi, isb); while (a.size() > n) a.pop_back(); for (auto x : a) cout << x << ' '; return 0; } ``` == 任意模数NTT 模数小于 $10^9$ ```cpp #include <bits/stdc++.h> using namespace std; typedef complex<double> cp; typedef vector<cp> poly; typedef long long ll; const int N = 4000000 + 5; const double pi = acos(-1); int rf[26][N]; void init(int n) { assert(n < N); int lg = __lg(n); static vector<bool> bt(26, 0); if (bt[lg] == 1) return; bt[lg] = 1; for (int i = 0; i < n; i++) rf[lg][i] = (rf[lg][i >> 1] >> 1) + ((i & 1) ? (n >> 1) : 0); } void fft(poly &x, int lim, int op) { int lg = __lg(lim); for (int i = 0; i < lim; i++) if (i < rf[lg][i]) swap(x[i], x[rf[lg][i]]); for (int len = 2; len <= lim; len <<= 1) { int k = (len >> 1); for (int i = 0; i < lim; i += len) { for (int j = 0; j < k; j++) { cp w(cos(pi * j / k), op * sin(pi * j / k)); cp tmp = w * x[i + j + k]; x[i + j + k] = x[i + j] - tmp; x[i + j] = x[i + j] + tmp; } } } if (op == -1) for (int i = 0; i < lim; i++) x[i] /= lim; } poly multiply(const poly &a, const poly &b) { assert(!a.empty() && !b.empty()); int lim = 1; while (lim + 1 < int(a.size() + b.size())) lim <<= 1; init(lim); poly pa = a, pb = b; pa.resize(lim); pb.resize(lim); for (int i = 0; i < lim; i++) pa[i] = (cp){pa[i].real(), pb[i].real()}; fft(pa, lim, 1); pb[0] = conj(pa[0]); for (int i = 1; i < lim; i++) pb[lim - i] = conj(pa[i]); for (int i = 0; i < lim; i++) { pa[i] = (pa[i] + pb[i]) * (pa[i] - pb[i]) / cp({0, 4}); } fft(pa, lim, -1); pa.resize(int(a.size() + b.size()) - 1); return pa; } vector<int> MTT(const vector<int> &a, const vector<int> &b, const int mod) { const int B = (1 << 15) - 1, M = (1 << 15); int lim = 1; while (lim + 1 < int(a.size() + b.size())) lim <<= 1; init(lim); poly pa(lim), pb(lim); auto get = [](const vector<int>& v, int pos) -> int { if (pos >= v.size()) return 0; else return v[pos]; }; for 
(int i = 0; i < lim; i++) pa[i] = (cp){get(a, i) >> 15, get(a, i) & B}; fft(pa, lim, 1); pb[0] = conj(pa[0]); for (int i = 1; i < lim; i++) pb[lim - i] = conj(pa[i]); poly A0(lim), A1(lim); for (int i = 0; i < lim; i++) { A0[i] = (pa[i] + pb[i]) / (cp){2, 0}; A1[i] = (pa[i] - pb[i]) / (cp){0, 2}; } for (int i = 0; i < lim; i++) pa[i] = (cp){get(b, i) >> 15, get(b, i) & B}; fft(pa, lim, 1); pb[0] = conj(pa[0]); for (int i = 1; i < lim; i++) pb[lim - i] = conj(pa[i]); poly B0(lim), B1(lim); for (int i = 0; i < lim; i++) { B0[i] = (pa[i] + pb[i]) / (cp){2, 0}; B1[i] = (pa[i] - pb[i]) / (cp){0, 2}; } for (int i = 0; i < lim; i++) { pa[i] = A0[i] * B0[i]; pb[i] = A0[i] * B1[i]; A0[i] = pa[i]; pa[i] = A1[i] * B1[i]; B1[i] = pb[i]; B0[i] = A1[i] * B0[i]; A1[i] = pa[i]; pa[i] = A0[i] + (cp){0, 1} * A1[i]; pb[i] = B0[i] + (cp){0, 1} * B1[i]; } fft(pa, lim, -1); fft(pb, lim, -1); vector<int> res(int(a.size() + b.size()) - 1); const int M2 = M * 1ll * M % mod; for (int i = 0; i < res.size(); i++) { ll a0 = round(pa[i].real()), a1 = round(pa[i].imag()), b0 = round(pb[i].real()), b1 = round(pb[i].imag()); a0 %= mod; a1 %= mod; b0 %= mod; b1 %= mod; res[i] = (a0 * 1ll * M2 % mod + a1 + (b0 + b1) % mod * 1ll * M % mod) % mod; } return res; }
int main() {
#ifdef LOCAL
freopen("miku.in", "r", stdin); freopen("miku.out", "w", stdout);
#endif
ios::sync_with_stdio(0); cin.tie(0); int n, m, p; cin >> n >> m >> p; vector<int> a(n + 1), b(m + 1); for (auto &x : a) cin >> x; for (auto &x : b) cin >> x; auto res = MTT(a, b, p); for (auto x : res) cout << x << ' '; }
```

= Data structures

== Li Chao tree

```cpp
struct Line { ll k, b; } lin[N]; int lcnt;
int add_line(ll k, ll b) { lin[++lcnt] = {k, b}; return lcnt; }
struct node { int ls, rs, u; } tr[N << 2]; int tot;
ll calc(int u, ll x) { return lin[u].k * x + lin[u].b; }
bool cmp(int u, int v, ll x) {
    return calc(u, x) <= calc(v, x); // for a maximum query, change <= to >=
}
void pushdown(int &p, int l, int r, int v) { if (!p) p = ++tot; if (l == r) return;
int mid = (l + r) >> 1; int &u = tr[p].u, b = cmp(v, u, mid); if (b) swap(u, v); int bl = cmp(v, u, l), br = cmp(v, u, r); if (bl) pushdown(tr[p].ls, l, mid, v); if (br) pushdown(tr[p].rs, mid + 1, r, v); }
void update(int &p, int l, int r, int L, int R, int v) { if (l > R || r < L) return; if (!p) p = ++tot; int mid = (l + r) >> 1; if (l >= L && r <= R) return pushdown(p, l, r, v), void(); update(tr[p].ls, l, mid, L, R, v); update(tr[p].rs, mid + 1, r, L, R, v); }
ll query(int p, int l, int r, ll pos) { if (!p) return 1e16; ll res = calc(tr[p].u, pos); int mid = (l + r) >> 1; if (l == r) return res; if (pos <= mid) { res = min(res, query(tr[p].ls, l, mid, pos)); } else res = min(res, query(tr[p].rs, mid + 1, r, pos)); return res; }
int main() { lin[0].b = 1e16; return 0; }
```

== Segment tree for prefix maxima (兔队线段树)

Counts the number of strict prefix maxima.

Each segment-tree node stores res, the answer contributed by the right half when the node's interval is solved as an independent subproblem (so the merged information does not need to be subtractable), and the interval maximum mx.

calc computes, given that a value as large as x appears before the interval, how many prefix maxima the interval then contributes:

1. $x >= "val"["lson"],"ans" = "calc"("rson")$
2. $x < "val"["lson"],"ans" = "calc"("lson") + "res"[p]$

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const int N = 1e5 + 5;
#define lson (p << 1)
#define rson ((p << 1) | 1)
#define mid ((l + r) >> 1)
int n, m;
struct node { int s, a, b; } tr[N << 2];
bool cmp(int a, int b, int c, int d) { if (d == 0 && b == 0) return 0; if (d == 0 && a == 0) return 0; if (d == 0) return 1; return a * 1ll * d > c * 1ll * b; }
int calc(int p, int l, int r, int c, int d) { if (l == r) return cmp(tr[p].a, tr[p].b, c, d); if (cmp(tr[lson].a, tr[lson].b, c, d)) { return calc(lson, l, mid, c, d) + tr[p].s; } return calc(rson, mid + 1, r, c, d); }
void modify(int p, int l, int r, int pos, int v) { if (l == r) { tr[p] = {0, v, pos}; return; } if (pos <= mid) modify(lson, l, mid, pos, v); else modify(rson, mid + 1, r, pos, v); if (cmp(tr[lson].a, tr[lson].b, tr[rson].a, tr[rson].b)) { tr[p] = tr[lson]; } else tr[p] = tr[rson]; tr[p].s = calc(rson, mid + 1, r, tr[lson].a, tr[lson].b); }
int main() { scanf("%d %d", &n, &m); while (m--) {
int x, y; scanf("%d %d", &x, &y); modify(1, 1, n, x, y); printf("%d\n", calc(1, 1, n, 0, 0)); } return 0; }
```

== Balanced tree (Splay)

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
#define rank abcdefg
const int mod = 998244353;
const int N = 1e5 + 5;
int tot, fa[N], tr[N][2], sz[N], cnt[N], val[N], rt;
void maintain(int x) { sz[x] = sz[tr[x][0]] + sz[tr[x][1]] + cnt[x]; }
int getdir(int x) { return tr[fa[x]][1] == x; }
void clear(int x) { fa[x] = sz[x] = cnt[x] = tr[x][0] = tr[x][1] = val[x] = 0; }
int create(int v) { ++tot; val[tot] = v; sz[tot] = cnt[tot] = 1; return tot; }
void rotate(int x) { if (x == rt) return; int y = fa[x], z = fa[y], d = getdir(x); tr[y][d] = tr[x][d ^ 1]; if (tr[x][d ^ 1]) fa[tr[x][d ^ 1]] = y; fa[y] = x; tr[x][d ^ 1] = y; fa[x] = z; if (z) tr[z][y == tr[z][1]] = x; maintain(y); maintain(x); }
void splay(int x) { for (int f = fa[x]; f = fa[x], f; rotate(x)) { if (fa[f]) rotate(getdir(f) == getdir(x) ? f : x); } rt = x; }
void insert(int v) { if (!rt) { rt = create(v); return; } int u = rt, f = 0; while (true) { if (val[u] == v) { cnt[u]++; maintain(u); maintain(f); splay(u); return; } f = u, u = tr[u][v > val[u]]; if (u == 0) { int id; fa[id = create(v)] = f; tr[f][v > val[f]] = id; maintain(f); splay(id); return; } } }
int rank(int v) { int rk = 0; int u = rt; while (u) { if (val[u] == v) { rk += sz[tr[u][0]]; splay(u); return rk + 1; } if (v < val[u]) { u = tr[u][0]; } else { rk += sz[tr[u][0]] + cnt[u]; u = tr[u][1]; } } return -1; }
int kth(int x) { int u = rt; while (u) { if (sz[tr[u][0]] + cnt[u] >= x && sz[tr[u][0]] < x) return val[u]; if (x <= sz[tr[u][0]]) { u = tr[u][0]; } else { x -= sz[tr[u][0]] + cnt[u]; u = tr[u][1]; } } return u ?
val[u] : -1; } int pre() { int u = tr[rt][0]; if (!u) return val[rt]; while (true) { if (tr[u][1] == 0) return splay(u), val[u]; u = tr[u][1]; } return 233; } int suf() { int u = tr[rt][1]; if (!u) return val[rt]; while (true) { if (tr[u][0] == 0) return splay(u), val[u]; u = tr[u][0]; } return 233; } void del(int v) { if (rank(v) == -1) return; if (cnt[rt] > 1) { cnt[rt]--; return; } if (!tr[rt][1] && !tr[rt][0]) { clear(rt), rt = 0; } else if (!tr[rt][0]) { int x = rt; rt = tr[x][1]; fa[rt] = 0; clear(x); } else if (!tr[rt][1]) { int x = rt; rt = tr[x][0]; fa[rt] = 0; clear(x); } else { int cur = rt, y = tr[cur][1]; pre(); tr[rt][1] = y; fa[y] = rt; clear(cur); maintain(rt); } } int main() { int n, opt, x; for (scanf("%d", &n); n; --n) { scanf("%d%d", &opt, &x); if (opt == 1) insert(x); else if (opt == 2) del(x); else if (opt == 3) printf("%d\n", rank(x)); else if (opt == 4) printf("%d\n", kth(x)); else if (opt == 5) insert(x), printf("%d\n", pre()), del(x); else insert(x), printf("%d\n", suf()), del(x); } return 0; } ``` == 文艺平衡树 ```cpp # include<iostream> # include<cstdio> # include<cstring> # include<cstdlib> using namespace std; const int MAX=1e5+1; int n,m,tot,rt; struct Treap{ int pos[MAX],siz[MAX],w[MAX]; int son[MAX][2]; bool fl[MAX]; void pus(int x) { siz[x]=siz[son[x][0]]+siz[son[x][1]]+1; } int build(int x) { w[++tot]=x,siz[tot]=1,pos[tot]=rand(); return tot; } void down(int x) { swap(son[x][0],son[x][1]); if(son[x][0]) fl[son[x][0]]^=1; if(son[x][1]) fl[son[x][1]]^=1; fl[x]=0; } int merge(int x,int y) { if(!x||!y) return x+y; if(pos[x]<pos[y]) { if(fl[x]) down(x); son[x][1]=merge(son[x][1],y); pus(x); return x; } if(fl[y]) down(y); son[y][0]=merge(x,son[y][0]); pus(y); return y; } void split(int i,int k,int &x,int &y) { if(!i) { x=y=0; return; } if(fl[i]) down(i); if(siz[son[i][0]]<k) x=i,split(son[i][1],k-siz[son[i][0]]-1,son[i][1],y); else y=i,split(son[i][0],k,x,son[i][0]); pus(i); } void coutt(int i) { if(!i) return; if(fl[i]) down(i); 
coutt(son[i][0]); printf("%d ",w[i]); coutt(son[i][1]); } }Tree; int main() { scanf("%d%d",&n,&m); for(int i=1;i<=n;i++) rt=Tree.merge(rt,Tree.build(i)); for(int i=1;i<=m;i++) { int l,r,a,b,c; scanf("%d%d",&l,&r); Tree.split(rt,l-1,a,b); Tree.split(b,r-l+1,b,c); Tree.fl[b]^=1; rt=Tree.merge(a,Tree.merge(b,c)); } Tree.coutt(rt); return 0; } ``` = 字符串 == KMP ```cpp int n = strlen(s + 1); for (int i = 2; i <= n; i++) { int j = k[i - 1]; while (j != 0 && s[i] != s[j + 1]) j = k[j]; if (s[i] == s[j + 1]) k[i] = j + 1; else k[i] = 0; } ``` == Z function ```cpp for (int i = 2, l = 0, r = 0; i <= n; i++) { if (r >= i && r - i + 1 > z[i - l + 1]) { z[i] = z[i - l + 1]; } else { z[i] = max(0, r - i + 1); while (z[i] < n - i + 1 && s[z[i] + 1] == s[i + z[i]]) ++z[i]; } if (i + z[i] - 1 > r) l = i, r = i + z[i] - 1; } ``` == SA ```cpp int sa[N], ork[N], rk[N], cnt[N], id[N], h[N], M, n; char s[N]; int mn[22][N]; int lcp(int a, int b) { if (a == b) return n - a + 1; if (rk[a] > rk[b]) swap(a, b); int l = rk[a] + 1, r = rk[b]; int len = r - l + 1, k = __lg(len); return min(mn[k][l], mn[k][r - (1 << k) + 1]); } void MAIN() { scanf("%s", s + 1); n = strlen(s + 1); for (int i = 1; i <= n; i++) M = max(M, (int)s[i]); for (int i = 1; i <= n; i++) if ((int)(s[i]) > M) M = (int)(s[i]); for (int i = 1; i <= n; i++) cnt[rk[i] = s[i]]++; for (int i = 0; i <= M; i++) cnt[i] += cnt[i - 1]; for (int i = n; i; i--) sa[cnt[rk[i]]--] = i; for (int w = 1, p; w < n; w <<= 1, M = p) { p = 0; for (int i = n; i > n - w; i--) id[++p] = i; for (int i = 1; i <= n; i++) if (sa[i] > w) id[++p] = sa[i] - w; for (int i = 0; i <= M; i++) cnt[i] = 0; for (int i = 1; i <= n; i++) cnt[rk[i]]++; for (int i = 1; i <= M; i++) cnt[i] += cnt[i - 1]; for (int i = n; i; i--) sa[cnt[rk[id[i]]]--] = id[i]; p = 0; for (int i = 0; i <= n; i++) ork[i] = rk[i]; for (int i = 1; i <= n; i++) { if (ork[sa[i]] == ork[sa[i - 1]] && ork[sa[i] + w] == ork[sa[i - 1] + w]) rk[sa[i]] = p; else rk[sa[i]] = ++p; } if (p == n) break; } 
for (int i = 1, k = 0; i <= n; i++) { if (rk[i] == 1) continue; if (k) k--; while (s[i + k] == s[sa[rk[i] - 1] + k]) k++; h[rk[i]] = k; }
for (int i = 1; i <= n; i++) mn[0][i] = h[i];
for (int j = 1; j < 22; j++) { for (int i = 1; i <= n; i++) { mn[j][i] = min(mn[j - 1][i], mn[j - 1][min(n, i + (1 << (j - 1)))]); } } }
```

== Aho-Corasick automaton

```cpp
int ch[N][26], tot, fail[N], e[N];
void insert(const char *s) { int u = 0, n = strlen(s + 1); for (int i = 1; i <= n; i++) { if (!ch[u][s[i] - 'a']) ch[u][s[i] - 'a'] = ++tot; u = ch[u][s[i] - 'a']; } e[u] += 1; }
void build() { queue<int> q; for (int i = 0; i <= 25; i++) if (ch[0][i]) q.push(ch[0][i]); while (!q.empty()) { int now = q.front(); q.pop(); for (int i = 0; i < 26; i++) { if (ch[now][i]) fail[ch[now][i]] = ch[fail[now]][i], q.push(ch[now][i]); else ch[now][i] = ch[fail[now]][i]; } } }
int query(const char *s) { int u = 0, n = strlen(s + 1), res = 0; for (int i = 1; i <= n; i++){ u = ch[u][s[i] - 'a']; for (int j = u; j && e[j] != -1; j = fail[j]) { res += e[j]; e[j] = -1; } } return res; }
```

== Manacher

With the $i$-th character as the axis of symmetry:

+ if the palindrome length is odd, $d[2 * i]/2$ is the radius plus the center character itself;
+ if the length is even, $d[2 * i -1]/2$ is the radius, extending to the right.

```cpp
int n, d[N * 2]; char s[N];
for (int i = 1; i <= n; i++) t[i * 2] = s[i], t[i * 2 - 1] = '#';
t[n * 2 + 1] = '#'; m = n * 2 + 1;
for (int i = 1, l = 0, r = 0; i <= m; i++) { int k = i <= r ? min(d[r - i + l], r - i + 1) : 1; while (i + k <= m && i - k >= 1 && t[i + k] == t[i - k]) k++; d[i] = k--; if (i + k > r) r = i + k, l = i - k; }
```

= Miscellaneous

== fastio

From OI Wiki.

```cpp
// #define DEBUG 1 // debug switch
struct IO {
#define MAXSIZE (1 << 20)
#define isdigit(x) (x >= '0' && x <= '9')
char buf[MAXSIZE], *p1, *p2; char pbuf[MAXSIZE], *pp;
#if DEBUG
#else
IO() : p1(buf), p2(buf), pp(pbuf) {}
~IO() { fwrite(pbuf, 1, pp - pbuf, stdout); }
#endif
char gc() {
#if DEBUG // debug mode: characters stay visible
return getchar();
#endif
if (p1 == p2) p2 = (p1 = buf) + fread(buf, 1, MAXSIZE, stdin); return p1 == p2 ?
' ' : *p1++; } bool blank(char ch) { return ch == ' ' || ch == '\n' || ch == '\r' || ch == '\t'; } template <class T> void read(T &x) { double tmp = 1; bool sign = false; x = 0; char ch = gc(); for (; !isdigit(ch); ch = gc()) if (ch == '-') sign = 1; for (; isdigit(ch); ch = gc()) x = x * 10 + (ch - '0'); if (ch == '.') for (ch = gc(); isdigit(ch); ch = gc()) tmp /= 10.0, x += tmp * (ch - '0'); if (sign) x = -x; } void read(char *s) { char ch = gc(); for (; blank(ch); ch = gc()); for (; !blank(ch); ch = gc()) *s++ = ch; *s = 0; } void read(char &c) { for (c = gc(); blank(c); c = gc()); } void push(const char &c) { #if DEBUG // 调试,可显示字符 putchar(c); #else if (pp - pbuf == MAXSIZE) fwrite(pbuf, 1, MAXSIZE, stdout), pp = pbuf; *pp++ = c; #endif } template <class T> void write(T x) { if (x < 0) x = -x, push('-'); // 负数输出 static T sta[35]; T top = 0; do { sta[top++] = x % 10, x /= 10; } while (x); while (top) push(sta[--top] + '0'); } template <class T> void write(T x, char lastChar) { write(x), push(lastChar); } } io; ``` == 高精度 来自 oiwiki ```cpp constexpr int MAXN = 9999; // MAXN 是一位中最大的数字 constexpr int MAXSIZE = 10024; // MAXSIZE 是位数 constexpr int DLEN = 4; // DLEN 记录压几位 struct Big { int a[MAXSIZE], len; bool flag; // 标记符号'-' Big() { len = 1; memset(a, 0, sizeof a); flag = false; } Big(const int); Big(const char*); Big(const Big&); Big& operator=(const Big&); Big operator+(const Big&) const; Big operator-(const Big&) const; Big operator*(const Big&) const; Big operator/(const int&) const; // TODO: Big / Big; Big operator^(const int&) const; // TODO: Big ^ Big; // TODO: Big 位运算; int operator%(const int&) const; // TODO: Big ^ Big; bool operator<(const Big&) const; bool operator<(const int& t) const; void print() const; }; Big::Big(const int b) { int c, d = b; len = 0; // memset(a,0,sizeof a); CLR(a); while (d > MAXN) { c = d - (d / (MAXN + 1) * (MAXN + 1)); d = d / (MAXN + 1); a[len++] = c; } a[len++] = d; } Big::Big(const char* s) { int t, k, index, l; CLR(a); l = 
strlen(s); len = l / DLEN; if (l % DLEN) ++len; index = 0; for (int i = l - 1; i >= 0; i -= DLEN) { t = 0; k = i - DLEN + 1; if (k < 0) k = 0; g(j, k, i) t = t * 10 + s[j] - '0'; a[index++] = t; } } Big::Big(const Big& T) : len(T.len) { CLR(a); f(i, 0, len) a[i] = T.a[i]; // TODO:重载此处? } Big& Big::operator=(const Big& T) { CLR(a); len = T.len; f(i, 0, len) a[i] = T.a[i]; return *this; } Big Big::operator+(const Big& T) const { Big t(*this); int big = len; if (T.len > len) big = T.len; f(i, 0, big) { t.a[i] += T.a[i]; if (t.a[i] > MAXN) { ++t.a[i + 1]; t.a[i] -= MAXN + 1; } } if (t.a[big]) t.len = big + 1; else t.len = big; return t; } Big Big::operator-(const Big& T) const { int big; bool ctf; Big t1, t2; if (*this < T) { t1 = T; t2 = *this; ctf = true; } else { t1 = *this; t2 = T; ctf = false; } big = t1.len; int j = 0; f(i, 0, big) { if (t1.a[i] < t2.a[i]) { j = i + 1; while (t1.a[j] == 0) ++j; --t1.a[j--]; // WTF? while (j > i) t1.a[j--] += MAXN; t1.a[i] += MAXN + 1 - t2.a[i]; } else t1.a[i] -= t2.a[i]; } t1.len = big; while (t1.len > 1 && t1.a[t1.len - 1] == 0) { --t1.len; --big; } if (ctf) t1.a[big - 1] = -t1.a[big - 1]; return t1; } Big Big::operator*(const Big& T) const { Big res; int up; int te, tee; f(i, 0, len) { up = 0; f(j, 0, T.len) { te = a[i] * T.a[j] + res.a[i + j] + up; if (te > MAXN) { tee = te - te / (MAXN + 1) * (MAXN + 1); up = te / (MAXN + 1); res.a[i + j] = tee; } else { up = 0; res.a[i + j] = te; } } if (up) res.a[i + T.len] = up; } res.len = len + T.len; while (res.len > 1 && res.a[res.len - 1] == 0) --res.len; return res; } Big Big::operator/(const int& b) const { Big res; int down = 0; gd(i, len - 1, 0) { res.a[i] = (a[i] + down * (MAXN + 1)) / b; down = a[i] + down * (MAXN + 1) - res.a[i] * b; } res.len = len; while (res.len > 1 && res.a[res.len - 1] == 0) --res.len; return res; } int Big::operator%(const int& b) const { int d = 0; gd(i, len - 1, 0) d = (d * (MAXN + 1) % b + a[i]) % b; return d; } Big Big::operator^(const int& n) const { 
Big t(n), res(1); int y = n; while (y) { if (y & 1) res = res * t; t = t * t; y >>= 1; } return res; } bool Big::operator<(const Big& T) const { int ln; if (len < T.len) return true; if (len == T.len) { ln = len - 1; while (ln >= 0 && a[ln] == T.a[ln]) --ln; if (ln >= 0 && a[ln] < T.a[ln]) return true; return false; } return false; } bool Big::operator<(const int& t) const { Big tee(t); return *this < tee; } void Big::print() const { printf("%d", a[len - 1]); gd(i, len - 2, 0) { printf("%04d", a[i]); } } void print(const Big& s) { int len = s.len; printf("%d", s.a[len - 1]); gd(i, len - 2, 0) { printf("%04d", s.a[i]); } } ``` == 手写 bitset ```cpp struct Bitset { #define For(i,a,b) for(int i=a,i##end=b; i<=i##end; i++) #define foR(i,a,b) for(int i=a,i##end=b; i>=i##end; i--) using uint = unsigned int; using ull = unsigned long long; vector < ull > bit; int len; Bitset(int x = n) {x = (x >> 6) + 1; bit.resize(x); len = x;} void resize(int x) {bit.resize((x >> 6) + 1); len = (x >> 6) + 1;For(i, 0, len-1) bit[i] = 0;} void set1(int x) {bit[x>>6] |= (1ull<<(x&63));} void set0(int x) {bit[x>>6] &= (~(1ull<<(x&63)));} void flip(int x) {bit[x>>6] ^= (1ull<<(x&63));} bool operator [] (int x) {return (bit[x>>6] >> (x&63)) & 1;} bool any() {For(i, 0, len-1) if(bit[i]) return 1;return 0;} Bitset operator ~ () const {Bitset res(len);For(i, 0, len-1) res.bit[i] = ~bit[i];return res;} Bitset operator | (const Bitset &b) const {Bitset res(len); For(i, 0, len-1) res.bit[i] = bit[i] | b.bit[i];return res;} Bitset operator & (const Bitset &b) const {Bitset res(len); For(i, 0, len-1) res.bit[i] = bit[i] & b.bit[i];return res;} Bitset operator ^ (const Bitset &b) const {Bitset res(len); For(i, 0, len-1) res.bit[i] = bit[i] ^ b.bit[i];return res;} void operator &= (const Bitset &b) {For(i, 0, len-1) bit[i] &= b.bit[i];} void operator |= (const Bitset &b) {For(i, 0, len-1) bit[i] |= b.bit[i];} void operator ^= (const Bitset &b) {For(i, 0, len-1) bit[i] ^= b.bit[i];} Bitset operator << 
(const int t) const { Bitset res(len); int high = t >> 6, low = t & 63; ull lst = 0; for(int i = 0; i + high < len; i++) { res.bit[i + high] = (lst | (bit[i] << low)); if(low) lst = (bit[i] >> (64 - low)); } return res; }
Bitset operator >> (const int t) const { Bitset res(len); int high = t >> 6, low = t & 63; ull lst = 0; for(int i = len - 1; i >= high; i--) { res.bit[i - high] = (lst | (bit[i] >> low)); if(low) lst = (bit[i] << (64 - low)); } return res; }
void operator <<= (const int t) { int high = t >> 6, low = t & 63; for(int i = len - high - 1; ~i; i--) { bit[i + high] = (bit[i] << low); if(low && i) bit[i + high] |= (bit[i - 1] >> (64 - low)); } for(int i = 0; i < min(high, len - 1); i++) bit[i] = 0; }
void operator >>= (const int t) { int high = t >> 6, low = t & 63; for(int i = high; i < len; i++) { bit[i - high] = (bit[i] >> low); if(low && i + 1 < len) bit[i - high] |= (bit[i + 1] << (64 - low)); } for(int i = max(len - high, 0); i < len; i++) bit[i] = 0; }
ull get(int x) { int t = x >> 6, q = x & 63; if (q == 63) return bit[t]; return bit[t] & ((1ull << (q + 1)) - 1); }
ull get(int l, int r) { int lt = (l >> 6), rt = (r >> 6); if (lt == rt) { if ((l & 63) == 0) return get(r); return (get(r) - get(l - 1)) >> ((l & 63)); } ull a = (l & 63) == 0 ? (bit[lt]) : ((bit[lt] - get(l - 1)) >> ((l & 63))); return a + (get(r) << (64 - (l & 63))); }
};
```

== Stress testing (对拍)

```bash
#!/usr/bin/bash
g++ ./my.cpp -o my -std=c++17 -fsanitize=undefined
g++ ./std.cpp -o std -std=c++17 -fsanitize=undefined
g++ ./data.cpp -o data -std=c++17 -fsanitize=undefined
cnt=0;
while true; do
    ./data > data.in
    ./my < data.in > my.out
    ./std < data.in > std.out
    if diff my.out std.out; then
        let cnt++; echo "# $cnt AC";
    else
        echo "WA"; break;
    fi
done
```
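The script above builds a generator from data.cpp; for simple input formats a pure-shell generator works just as well. A minimal sketch (the "two integers in [1, 100]" output format and the bounds are assumptions, to be adapted per problem):

```bash
#!/usr/bin/bash
# Hypothetical generator: prints one test case of two random integers in [1, 100].
# $RANDOM is bash's built-in 15-bit pseudo-random source.
x=$((RANDOM % 100 + 1))
y=$((RANDOM % 100 + 1))
echo "$x $y"
```

Save it as data.sh and replace `./data > data.in` with `bash data.sh > data.in` (the data.cpp compile line can then be dropped).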
https://github.com/jgm/typst-hs
https://raw.githubusercontent.com/jgm/typst-hs/main/test/typ/compute/calc-21.typ
typst
Other
// Error: 10-19 the result is not a real number
#calc.pow(-1, 0.5)
https://github.com/typst/packages
https://raw.githubusercontent.com/typst/packages/main/packages/preview/frackable/0.1.0/src/lib.typ
typst
Apache License 2.0
#import "impl.typ": frackable
https://github.com/shiki-01/typst
https://raw.githubusercontent.com/shiki-01/typst/main/lib/component/description.typ
typst
#let description(name, body) = [
  #underline[#name]\
  #pad(
    top: -5pt,
    bottom: 5pt,
    [
      #grid(
        columns: 2,
        […],
        [#body]
      )
    ]
  )
]
https://github.com/ayoubelmhamdi/typst-phd-AI-Medical
https://raw.githubusercontent.com/ayoubelmhamdi/typst-phd-AI-Medical/master/chapters/ch09-wil-fr.typ
typst
MIT License
#import "../functions.typ": heading_center, images, italic, linkb, dots
#let finchapiter = text(size: 24pt, fill: rgb("#1E045B"), [■])
#let S1 = "S1"
#let S2 = "S2"
#let S3 = "S3"

= DETECTION OF CANCEROUS PULMONARY NODULES WITH MGI-CNN.

== Introduction.

Worldwide, lung cancer is the leading cause of cancer-related death @Siegel2017Cancer2017. Timely discovery through screening chest CT scans can considerably increase the chances of survival @Nationallungscreening. Pulmonary nodules (round or oval masses detectable in chest scans) can potentially indicate lung cancer @Gould2007EvaluationEdition. Healthcare efficiency could benefit significantly from a computerized system capable of automatically identifying these nodules, saving time and resources for healthcare providers and patients alike.

This chapter is based on an article entitled "Multi-scale Gradual Integration Convolutional Neural Network for False Positive Reduction in Pulmonary Nodule Detection".

Nodule detection algorithms generally consist of two parts @Setio2016PulmonaryNetworks:

- The first stage searches for a wide variety of candidate nodules with high sensitivity; however, it generates many false positives.
- The next stage reduces these false positives using improved features and classifiers, a difficult task given the variability of nodule shapes, sizes, and types, and their potential resemblance to other thoracic structures such as blood vessels or lymph nodes @Gould2007EvaluationEdition@Roth2016ImprovingAggregation.

In our research, we explore a multi-scale gradual integration convolutional neural network (MGI-CNN) as a method for reducing false positives. This method has three notable characteristics:

1.
It uses patches of different sizes taken from the chest CT input; each size contributes different information about the nodule and its surrounding area.
2. The method combines the patches progressively across network layers instead of integrating them simultaneously, leading to deeper learning of features at different scales.
3. Two strategies govern how the patches are combined: one going from small to large (zoom-in), the other from large to small (zoom-out), providing varied perspective information @Karpathy2014Large-ScaleNetworks@Shen2015@Shen2017Multi-cropClassification@Dou2016Multi-levelDetection.

We examined this method on the public LUNA16 dataset, made up of chest scans from 888 patients evaluated by four medical experts @Setio2016PulmonaryNetworks. Ultimately, the method showed superior performance in limiting false positives, mainly at lower rates, which suggests it can accurately identify more cancerous nodules while flagging fewer non-cancerous ones @Lin2016FeatureDetection@Kamnitsas2017EfficientSegmentation.

== Related work.

=== Contextual volumetric information

Early attempts to automate lung cancer screening relied on algorithms that extracted the distinctive features of pulmonary nodules. Researchers emphasized the volumetric data of nodules and nearby areas, but these methods often struggled to handle the full range of nodule variations correctly, requiring customization for each distinct nodule type @Jacobs2014AutomaticImages@Okumura1998AutomaticFilter@Li2003SelectiveScans. Over time, with the advance of deep neural networks, these techniques were progressively improved.
Recent innovations, particularly methods based on convolutional neural networks (CNNs), have shown that they can improve nodule classification @Roth2016ImprovingAggregation@Setio2016PulmonaryNetworks@Ding2017AccurateNetworks.

=== Multi-scale contextual information

A significant paradigm shift in pulmonary nodule detection has been the integration of multi-scale contextual information, in particular with the LUNA16 dataset. This approach leverages deep learning methodologies to evaluate a wide range of morphological and structural features across various scales @Shen2015@Dou2016Multi-levelDetection. Several techniques have proven effective, including:

- The Multi-scale CNN (MCNN) approach, which extracts features from images at different scales to train the classifier that differentiates nodules @Shen2015.
- The Multi-Crop CNN technique, which uses a combination of cropping and pooling operations to extract salient information from different regions of the convolutional feature map, thereby refining detection accuracy @Shen2017Multi-cropClassification.

Researchers have also suggested several promising strategies for detecting pulmonary anomalies, such as using 3D patches for greater accuracy with volumetric data and for reducing false positives @Setio2016PulmonaryNetworks@Roth2016ImprovingAggregation. Gradual feature extraction, a sequential method that merges contextual information at different scales, offers an alternative to the conventional practice of integrating everything at once @Shen2015@Shen2017Multi-cropClassification. The holistic combination of these approaches has produced more reliable and robust models for pulmonary nodule detection.
The regions surrounding potential pulmonary nodules have been carefully examined and compared with other organs or tissues to improve nodule differentiation @Shen2017Multi-cropClassification. Future improvements could include integrating contextual data from the areas adjacent to the nodules, potentially strengthening model performance and accuracy @Dou2016Multi-levelDetection@Shen2017Multi-cropClassification.

== Method.

The Multi-scale Gradual Integration Convolutional Neural Network, or MGI-CNN, is applied to the identification of pulmonary nodules. It integrates two main components: Gradual Feature Extraction (GFE) and Multi-Stream Feature Integration (MSFI) @Dou2016Multi-levelDetection@Nair2010RectifiedMachines.

=== The Gradual Feature Extraction process

With GFE, the network merges the contextual details of patches at different scales step by step. It operates in two scenarios: zoom-in and zoom-out @Zhang2014ScaleAnalysis@Shen2015@Shen2017Multi-cropClassification.

In the zoom-in scenario, patches at increasing scales are filtered using local convolutional kernels. The network concatenates the feature maps obtained at each scale with the following patch and feeds them into the convolutional layers. This process continues until the network has integrated the contextual details from all scales. The zoom-out scenario follows the same procedure but reverses the order of the patches.

This approach allows the network to progressively combine contextual features, capturing both local and global information. The network can thus target specific nodular areas or surrounding regions, effectively distinguishing nodules from other structures in the lung.
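The integration order just described can be sketched in a few lines of Python. This is only an illustration of the ordering, not the paper's implementation: the `conv` function is a string-building placeholder standing in for a real convolutional block, and the scale names `S1`–`S3` are the symbolic labels used in this chapter.

```python
# Minimal sketch of Gradual Feature Extraction (GFE): patches at
# scales S1, S2, S3 are integrated one at a time. A string-building
# "conv" stands in for a real convolutional block.

def conv(features):
    # Placeholder convolution: wraps each feature to record one pass.
    return ["conv(" + f + ")" for f in features]

def gradual_integration(patches):
    feats = conv([patches[0]])
    for p in patches[1:]:
        # Concatenate the next scale with the current feature maps,
        # then apply the next convolutional block.
        feats = conv(feats + [p])
    return feats

zoom_in = gradual_integration(["S1", "S2", "S3"])   # small -> large
zoom_out = gradual_integration(["S3", "S2", "S1"])  # large -> small
print(zoom_in)  # ['conv(conv(conv(S1)))', 'conv(conv(S2))', 'conv(S3)']
```

Note how the first patch passes through every convolutional block while the last passes through only one: that asymmetry is exactly what distinguishes the zoom-in ordering from the zoom-out ordering.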
=== The Role of Multi-Stream Feature Integration

MSFI, the network's other component, combines the 'zoom-in' and 'zoom-out' information streams, which reflect different scales of nodule shape and context @Nair2010RectifiedMachines. These varied and complementary features improve nodule detection. As part of the MGI-CNN, MSFI combines features from different scales and perspectives; applied to the LUNA16 dataset, this aims to improve false positive reduction in pulmonary nodule detection.

To illustrate, take a 3D patch of a lung image containing a nodule. The objective is to detect the nodule while minimizing false positives. To achieve this, MSFI fuses features from different scales and perspectives. The network derives two streams of input patches from the original patch: a 'zoom-in' stream, focused on the nodule region, and a 'zoom-out' stream, covering a wider surrounding context. Separate scales, $S1$, $S2$ and $S3$, can be used for each stream.

The CNN then extracts features from these patches and generates a feature map that encapsulates the characteristics of the patch. The final output of MSFI is a combined feature map containing information from both streams. Finally, the network classifies the patch as nodule or non-nodule using this combined feature map. It uses a classifier such as a softmax layer, which assigns a probability to each class; the higher the probability, the more confident the prediction.

In this way, MSFI improves pulmonary nodule detection performance by exploiting features from different scales and perspectives. These features capture both the morphological and the contextual properties of the nodule, thereby reducing false positives.
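The final classification step mentioned above, a softmax over two classes, can be sketched as follows. The logit values are made up for illustration; a real network would produce them from the fused feature map.

```python
# Toy sketch of the nodule / non-nodule softmax classifier: two
# logits are turned into class probabilities that sum to 1.
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits from the fused feature map.
probs = softmax([2.0, 0.5])  # [p(nodule), p(non-nodule)]
print(probs[0] > probs[1])  # True: the patch is classified as a nodule
```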
== Experimental procedure and results

=== The LUNA16 dataset

In exploring deep learning for pulmonary nodule detection, the LUNA16 challenge datasets have become a crucial factor. These data, comprising 888 patients with pulmonary nodules examined by four expert radiologists, formed the basis of the evaluation @Setio2017validation. Patients with a slice thickness greater than 2.5 mm were excluded. When three radiologists agreed on a nodule, it was classified as ground truth (GT). This process yielded a total of 1186 GT-confirmed nodules.

In the LUNA16 dataset, for the false positive (FP) reduction task, we were given the candidate nodule coordinates, patient identifiers, and the corresponding labels. To process the information from the CT scans, 3D patch extraction was used @Dou2016Multi-levelDetection. This involved three scales, 40x40x26, 30x30x10, and 20x20x6, ensuring full coverage of the nodule. The subsequent resizing to 20x20x6 was done by nearest-neighbor interpolation, with intensities normalized over the range [-1000, 400] HU @Hounsfield1980ComputedImaging.

Network training used Xavier initialization @glorot2010understanding and a learning rate of 0.003 over 40 epochs, with ReLU activations and stochastic gradient descent. Performance on the test data was measured with the Competition Performance Metric (CPM) @Niemeijer2011OnSystems.

==== Computing the CPM.

#dots
#dots
#dots

==== Nodule to non-nodule ratio.

The data suggest a nodule to non-nodule ratio of about 1:6. In practice, this means that for every nodule in the dataset there are about six non-nodule samples.
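The CPM computation referred to above amounts to averaging the sensitivity at seven false-positive-per-scan operating points read off the FROC curve. A minimal sketch, with made-up sensitivity values:

```python
# Sketch of the CPM (Competition Performance Metric): the average
# sensitivity at seven FP/scan operating points.

FP_PER_SCAN_LEVELS = [0.125, 0.25, 0.5, 1, 2, 4, 8]

def cpm(sensitivities):
    """Mean sensitivity over the seven FP/scan levels."""
    assert len(sensitivities) == len(FP_PER_SCAN_LEVELS)
    return sum(sensitivities) / len(sensitivities)

# Hypothetical sensitivities read off a FROC curve, one per level.
sens = [0.85, 0.89, 0.92, 0.94, 0.96, 0.97, 0.98]
print(round(cpm(sens), 3))  # -> 0.93
```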
It is important to note that balancing the representation of nodules against non-nodules helps the machine learning model learn from a diverse dataset. We reached this 1:6 ratio through augmentation: starting from a ratio of 1:5, we created new nodule samples by shifting them and rotating them around various axes. This provided a balanced dataset, improving the model's performance in detecting nodules.

=== Evaluation of the MGI-CNN model.

Model performance was evaluated using the CPM score, the average sensitivity over seven FP/scan levels (namely 0.125, 0.25, 0.5, 1, 2, 4, and 8) @Niemeijer2011OnSystems. Using 7-fold cross-validation, we obtained a balanced setup thanks to the augmentation of the nodule samples.

#dots
#dots
#dots

To determine the model's effectiveness, we compared it with existing state-of-the-art methods @Setio2016PulmonaryNetworks@Dou2016Multi-levelDetection, showing higher CPM scores at the seven different FP/scan values.

#dots
#dots
#dots

To examine the advantage of this method, we experimented with various multi-scale CNNs. The method clearly outperformed the others, achieving the best CPM and the largest average FP reduction. This underlines the effectiveness of the GFE and MSFI strategies incorporated into this method.

#dots
#dots
#dots

== Discussion

The use of deep learning techniques, in particular the LUNA16 dataset, for pulmonary nodule detection contributes greatly to the early diagnosis of lung cancer. The MGI-CNN architecture, designed specifically for this purpose, has two main strengths:

- It allows morphological and contextual features to be extracted at different scales from the input patches.
Morphological and contextual information is progressively incorporated by the zoom-in network, while the reverse process takes place in the zoom-out network. Exploiting multi-scale information in this way provides complementary features, which boosts performance.
- The architecture integrates more abstract features from the two streams in the MSFI, maximizing false positive (FP) reduction by merging features at a more abstract level where the morphological and contextual information remains intact.

Three different methods, concatenation, element-wise sum, and 1x1 convolution, were tested for merging the feature maps of the two streams in the MSFI @Lin2013network. The element-wise sum proved the most effective at reducing FPs, even though no significant variance in average CPM was observed among the three techniques.

The original 3D patches were resized to 20x20x6 to match the size of the network's receptive field, risking some loss or distortion of information. However, the essential nodule information was preserved because the nodule occupied most of the patch, so the resizing operation had no major impact on performance.

A thorough analysis was carried out on the 232 FPs that the MGI-CNN failed to handle, classifying them into three distinct groups based on their nodule probabilities: Low Confidence ($0.5 < p < 0.7$); Moderate Confidence ($0.7 < p < 0.9$); and High Confidence ($p > 0.9$). Most FPs were essentially parts of large tissues or organs that the network failed to differentiate from nodules. The FPs in the Moderate Confidence group showed low contrast, potentially rooted in the normalization step of the preprocessing.
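The three fusion strategies compared for MSFI can be contrasted on toy data. The sketch below works on flattened feature vectors rather than real 3D feature maps, and the weights in the 1x1-convolution stand-in are illustrative, not learned values.

```python
# Toy contrast of the three MSFI fusion strategies on flat vectors.

def fuse_concat(a, b):
    # Channel concatenation: doubles the feature count.
    return a + b

def fuse_sum(a, b):
    # Element-wise sum: output keeps the same size as each input.
    return [x + y for x, y in zip(a, b)]

def fuse_conv1x1(a, b, wa=0.5, wb=0.5):
    # A 1x1 convolution across two streams is a learned weighted sum
    # per position; wa and wb stand in for learned weights.
    return [wa * x + wb * y for x, y in zip(a, b)]

zoom_in = [1.0, 2.0, 3.0]
zoom_out = [0.5, 0.5, 0.5]
print(fuse_sum(zoom_in, zoom_out))         # [1.5, 2.5, 3.5]
print(len(fuse_concat(zoom_in, zoom_out))) # 6
```

The sum and the 1x1 convolution preserve the feature-map size, while concatenation doubles it, which is why the choice among them mostly affects the parameter count of the layers that follow rather than the information available to them.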
This observation suggests that performance could improve if more patches at different scales were used along with different normalization methodologies. The approach aims to improve the FP reduction stage of a typical pulmonary nodule detection system, which consists of two main components: a candidate detection stage and an FP reduction stage.

= GENERAL CONCLUSION.

The MGI-CNN architecture is specifically designed to minimize FPs in the detection of pulmonary nodules from CT scans. This is achieved through three major strategies: multi-scale inputs with different levels of contextual information, gradual integration of the data from the different input scales, and multi-stream feature integration through end-to-end learning. Using these techniques, we can extract morphological features and gradually integrate multi-scale contextual information, which reduces the number of FPs while extracting morphological and contextual features from the nodule region.

Performance analysis of the MGI-CNN on the LUNA16 challenge datasets yielded a very impressive average CPM of 0.942, significantly surpassing state-of-the-art techniques. The methodology proved especially effective at low FP/scan rates. With slight modifications, the network could detect nodules directly from CT scans, pushing the boundaries of cancer detection.

Going forward, the research aims to master the classification of nodule subtypes, such as solid, non-solid, part-solid, perifissural, calcified, and spiculated. Different nodule types require different treatments, which makes their accurate detection all the more relevant to successful treatment.
https://github.com/SWATEngineering/Docs
https://raw.githubusercontent.com/SWATEngineering/Docs/main/src/2_RTB/PianoDiProgetto/sections/ConsuntivoSprint/SettimoSprint.typ
typst
MIT License
#import "../../const.typ": Re_cost, Am_cost, An_cost, Ve_cost, Pr_cost, Pt_cost
#import "../../functions.typ": rendicontazioneOreAPosteriori, rendicontazioneCostiAPosteriori, glossary

== Seventh #glossary[sprint]

*Start*: Friday 05/01/2024

*End*: Thursday 11/01/2024

#rendicontazioneOreAPosteriori(sprintNumber: "07")

#rendicontazioneCostiAPosteriori(sprintNumber: "07")

=== Retrospective analysis

The final balance shows that the hours actually used during this #glossary[sprint] rose again compared to the previous #glossary[sprint], a sign that the team's working pace has almost returned to the level it had before the Christmas holidays. Unsurprisingly, most of the hours were invested in the Verifier role (as per the new estimate drawn up during the #glossary[sprint]), since the team is now engaged in a #glossary[walkthrough]-style review of most of the #glossary[documentazione].

As for the _Piano di Progetto_, the team was pleased to find that short-term planning, in this case the plan and estimate previously drawn up for #glossary[sprint] 8 at the end of #glossary[sprint] 7, proved significantly more sensible, and therefore more useful, than long-term planning: in particular, the Project Manager did not have to make significant updates to what had already been decided for #glossary[sprint] 8, since both the activities detailed in the plan and the per-role allocation of hours specified in the estimate turned out to be particularly accurate. It was thus observed that a small extra effort up front, aimed at establishing the working arrangements for the 2 immediately following #glossary[sprint]s and made as a team rather than by a single member, helps lighten the load on the Project Manager.
In this way, the Project Manager can devote more time to his other duties, including writing the final balances. We therefore expect the next balances (in particular those following the first #glossary[RTB] review), and the retrospective analysis they contain, to be even more detailed and to reflect as closely as possible the progress of the #glossary[sprint] just ended and the discussions that take place internally during the #glossary[sprint retrospective] meeting, as would ideally be the case.

Another noteworthy point of this #glossary[sprint] was the update of the estimate originally drawn up by the team for the tender application, which still reports the same number of available hours (570 in total) but increases in cost, since the per-role allocation of hours was changed, adding hours to the Administrator and Analyst roles and subtracting them from the Programmer and Verifier roles (as will be illustrated in the _Lettera di Presentazione_). This change must be reported and highlighted in the _Piano di Qualifica_, so the team decided to keep the corresponding data and charts unchanged up to #glossary[sprint] 6 (everything there thus refers to the old BAC), and to show how the new BAC modifies the relevant charts, so that up-to-date observations on the progress of the project can be drawn from the metrics dashboard.

As for risks, RP3, "Variations in project time and cost", occurred again, this time with reference to the cost modified in the estimate; during #glossary[sprint] 4 the risk had proved difficult to manage because of inflexible planning that had not considered the possibility of undergoing the first #glossary[RTB] review later than expected.
During this last #glossary[sprint], on the other hand, implementing the first two preventive measures allowed the team to rethink the initial estimate critically and consciously: in particular, Administrator hours that were no longer available to some members could be put to use, and consequently the estimates for #glossary[sprint] 8 and later within the short-term plan could be updated far more easily than if the team had kept the long-term one. Moreover, recording the productive hours actually used in the "Time & Resource Manager" #glossary[spreadsheet] proved crucial for producing accurate final balances and, consequently, for drawing conclusions that informed the team's decision to change the allocation of hours and the cost of the project. While the team initially lacked the data and tools needed to make informed choices about project times and costs, it can now rely on flexible planning, accurate final balances, and the charts in the _Piano di Qualifica_ to understand the progress of the project and continue to monitor it closely in the future.