Column                 Type            Stats
id                     int64           2 to 42.1M
by                     large_string    lengths 2 to 15
time                   timestamp[us]
title                  large_string    lengths 0 to 198
text                   large_string    lengths 0 to 27.4k
url                    large_string    lengths 0 to 6.6k
score                  int64           -1 to 6.02k
descendants            int64           -1 to 7.29k
kids                   large list
deleted                large list
dead                   bool            1 class
scraping_error         large_string    25 values
scraped_title          large_string    lengths 1 to 59.3k
scraped_published_at   large_string    lengths 4 to 66
scraped_byline         large_string    lengths 1 to 757
scraped_body           large_string    lengths 1 to 50k
scraped_at             timestamp[us]
scraped_language       large_string    58 values
split                  large_string    1 value
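The preview rows below follow this schema, one value per column. As a reading aid, here is a minimal sketch of how a dump with this schema could be loaded and filtered; it assumes a local Parquet copy of the data (the file name hn_items.parquet is hypothetical, not part of this preview) and uses pyarrow, which matches the Arrow types (large_string, timestamp[us], large list) listed above.

    import pyarrow.parquet as pq
    import pyarrow.compute as pc

    # Hypothetical local copy of the dump; adjust the path to wherever
    # the files actually live.
    table = pq.read_table("hn_items.parquet")
    print(table.schema)  # should mirror the column listing above

    # "dead" is a nullable bool whose only observed value is true, so
    # treat null as "not dead". Deleted items carry ["true"] in the
    # "deleted" list column.
    dead = pc.fill_null(table["dead"], False)
    live = table.filter(pc.invert(dead))
    print(f"{table.num_rows} rows total, {live.num_rows} not flagged dead")

Filtering on dead and deleted matters for this dump: as the rows below show, many items are flagged and carry null in every content field.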
id: 42,046,616 | by: janandonly | time: 2024-11-04T22:23:01 | score: 1 | dead: true | split: train (other fields null)

id: 42,046,633 | time: 2024-11-04T22:25:59 | deleted: ["true"] | split: train (other fields null)

id: 42,046,639 | by: el_hacker | time: 2024-11-04T22:26:38 | score: 1 | dead: true | split: train (other fields null)

id: 42,046,648 | time: 2024-11-04T22:27:50 | deleted: ["true"] | split: train (other fields null)

id: 42,046,654 | by: stanislavb | time: 2024-11-04T22:28:35 | title: At all costs (Isaac Asimov's three rules of robotics) | url: https://seths.blog/2024/11/at-all-costs/ | score: 1 | descendants: 0 | split: train (other fields null)
id: 42,046,660 | by: rustoo | time: 2024-11-04T22:29:57 | title: Employment Barometer in Germany Falls Further | url: https://www.ifo.de/en/facts/2024-11-04/employment-barometer-germany-falls-further-october-2024 | score: 17 | descendants: 22 | kids: [42047260, 42046917, 42046894] | split: train (other fields null)

id: 42,046,664 | by: SixFeetUp | time: 2024-11-04T22:30:07 | score: 1 | kids: [42046665] | dead: true | split: train (other fields null)

id: 42,046,669 | by: fzliu | time: 2024-11-04T22:30:15 | title: Rerank-2 and rerank-2-lite: the next generation of Voyage multilingual rerankers | url: https://blog.voyageai.com/2024/09/30/rerank-2/ | score: 1 | descendants: 0 | split: train (other fields null)

id: 42,046,695 | by: marcodiego | time: 2024-11-04T22:32:57 | title: Release release candidate GIMP 3.0.0 RC1 | url: https://gitlab.gnome.org/GNOME/gimp/-/commit/76036f4833f1131fe6a74c1a7837e246bb0ee680 | score: 4 | descendants: 0 | kids: [42046783] | split: train (other fields null)
id: 42,046,716 | by: heavyset_go | time: 2024-11-04T22:34:39 | score: 1 | dead: true | split: train (other fields null)

id: 42,046,737 | by: gabthinking2017 | time: 2024-11-04T22:37:00 | score: 1 | dead: true | split: train (other fields null)

id: 42,046,749 | by: nathancspencer | time: 2024-11-04T22:38:55 | title: 5 Months to Run Code Locally | url: https://nathancspencer.com/5months/ | score: 13 | descendants: 2 | kids: [42046947, 42047312, 42046792] | split: train (other fields null)

id: 42,046,762 | by: amichail | time: 2024-11-04T22:39:53 | title: Is convenience making our lives more difficult? | url: https://www.theguardian.com/books/2024/nov/04/the-big-idea-is-convenience-making-our-lives-more-difficult | score: 1 | descendants: 0 | split: train (other fields null)
id: 42,046,794 | by: sosodev | time: 2024-11-04T22:42:36 | title: The Fediverse Desperately Needs Sustainable File Hosting | url: https://evergreenfiles.com/blogs/the-fediverse-desperately-needs-sustainable-file-hosting | score: 4 | descendants: 0 | kids: [42046904] | split: train (other fields null)

id: 42,046,813 | by: iamsahaj | time: 2024-11-04T22:44:13 | score: 1 | dead: true | split: train (other fields null)

id: 42,046,845 | by: EA-3167 | time: 2024-11-04T22:46:40 | title: The Great American Nuclear Weapons Upgrade | url: https://undark.org/2024/11/04/the-great-american-nuclear-weapons-upgrade/ | score: 7 | descendants: 0 | split: train (other fields null)
id: 42,046,861 | by: zainhussaini | time: 2024-11-04T22:48:41 | title: Show HN: Colorful Images with Uniform Grayscale Values | url: https://github.com/zainhussaini/uniform-grayscale-image | score: 1 | descendants: 0 | split: train (other fields null)
text:
Normally, when you convert a color image to grayscale, you can still recognize the image. But what if you could keep the image’s color while making it convert to a uniform gray?
This project explores an algorithm that takes an RGB image and adjusts it so that it appears visually similar in color but turns into a single shade when converted to grayscale. The method uses color-space math to preserve hue while adjusting brightness and saturation, resulting in unique images with hidden uniformity. It touches on color theory, the RGB and HSV color spaces, and some linear algebra. Check out the code and math behind this experiment!
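The submission above describes an algorithm in prose: adjust an RGB image so it keeps its apparent color but collapses to one flat shade when converted to grayscale. Below is a minimal sketch of that idea, which rescales each pixel so its luminance lands on a single target value using the common BT.601 grayscale weights. It is an illustration of the described approach, not code from the linked repository, and the naive clipping at the end is a deliberate simplification.

    import numpy as np

    # BT.601 luma weights, a common choice for RGB -> grayscale conversion.
    WEIGHTS = np.array([0.299, 0.587, 0.114])

    def uniform_grayscale(rgb: np.ndarray, target: float = 0.5) -> np.ndarray:
        """rgb: float array in [0, 1] with shape (H, W, 3)."""
        luma = rgb @ WEIGHTS                        # per-pixel grayscale value
        scale = target / np.clip(luma, 1e-6, None)  # gain that moves luma to target
        out = rgb * scale[..., None]                # uniform scaling preserves hue
        # Pixels pushed past 1.0 get clipped, which nudges their luminance
        # off target; a careful implementation would handle gamut limits.
        return np.clip(out, 0.0, 1.0)

For any unclipped pixel, out @ WEIGHTS equals target, so a grayscale conversion of the result is one flat shade, which is the "hidden uniformity" the submission describes.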
id: 42,046,865 | by: Guardianmag | time: 2024-11-04T22:49:01 | score: 1 | dead: true | split: train (other fields null)

id: 42,046,867 | by: 1vuio0pswjnm7 | time: 2024-11-04T22:49:19 | title: Delta Airlines vs. CrowdStrike | url: https://www.courtlistener.com/docket/69315665/14/1/crowdstrike-inc-v-delta-air-lines-inc/ | score: 1 | descendants: 0 | kids: [42046891] | split: train (other fields null)

id: 42,046,869 | by: mixeden | time: 2024-11-04T22:49:28 | title: Advanced AI Alignment Technique: Nova | url: https://synthical.com/article/Nova%3A-A-Practical-and-Advanced-Alignment-215b0ad8-cd19-40f8-ac5b-9eec020bca8c | score: 2 | descendants: 0 | split: train (other fields null)

id: 42,046,888 | by: melmahdi | time: 2024-11-04T22:51:52 | score: 1 | kids: [42046892] | dead: true | split: train (other fields null)
id: 42,046,902 | by: runamuck | time: 2024-11-04T22:52:55 | title: Escape the Surveillance Web with Gemini | url: https://john.soban.ski/gemini.html | score: 19 | descendants: 24 | kids: [42047055, 42048533, 42047091, 42050169, 42047117, 42047136, 42047406, 42047144] | split: train (other fields null)
id: 42,046,914 | by: loeber | time: 2024-11-04T22:54:45 | title: Insurance for AI: Easier Said Than Done | url: https://loeber.substack.com/p/24-insurance-for-ai-easier-said-than | score: 3 | descendants: 0 | scraping_error: no_error | scraped_title: #24: Insurance for AI: Easier Said than Done | scraped_published_at: 2024-11-04T22:44:32+00:00 | scraped_byline: John Loeber | scraped_at: 2024-11-08T14:08:15 | scraped_language: en | split: train (other fields null)
scraped_body:
In recent months, many friends have pitched or asked me about insuring AI risk. The idea is usually something like this: businesses want to adopt AI for efficiency, but they’re nervous about the AI hallucinating and making costly mistakes. Even if they buy all the best software to mitigate such mistakes, the scope of LLM outputs is so large that unpredictable, hugely expensive edge cases always remain. Insurance offers a clean way to transfer that risk. You could read that as a bullish thesis for such an AI insurance product: imagine a world of widespread AI adoption, where every AI deployment is underpinned by an insurance policy. Or imagine a world where insurance products act as the critical enabler for widespread AI adoption in the first place.

But the thesis is not that easy! While I won’t present a slam-dunk view either way, I want to discuss some of the nuance and complexities that make this market tricky, and probably smaller than it appears at first glance.

In the history of business, AI isn’t the first thing to make mistakes. Humans have been making mistakes for a long time. For that reason, accountants, lawyers, real estate agents, etc. all carry insurance — specifically, an Errors & Omissions or Professional Liability policy that covers them if they make a costly mistake on the job and get sued by a client. In recent decades, a significant amount of rote human labor has transitioned to being completed by software instead. This software transition was subject to the same concerns as the current AI transition: can you really trust accounting software not to make mistakes? Won’t there be edge cases in mortgage underwriting that software might miss, but an experienced underwriter would catch? The proof is in the pudding: the world runs on software now. And similar to Professional Liability, many software companies carry Technology Errors & Omissions insurance, in case their software messes something up and their customer goes after them.

You would think that the market for such insurance is massive. Software handles every button-press in your car, it manages industrial control systems in factories, it monitors the life-or-death status of patients in hospitals. The stakes are high. And we know most software is broken in the margins: every day I visit websites of big, respected companies, and they’re full of bugs. But most software companies haven’t even heard of Tech E&O insurance. It’s considered a specialty product, often included as an add-on to cybersecurity insurance. Because it’s so niche, it’s hard to estimate the market size, but that’s an indicator of just how small it is: accounting for under $5B in global annual premiums seems like a very safe bet to me. [1] For comparison, in the US, Workers’ Compensation runs around $55-60B a year in premiums, and Personal Auto insurance over $300B.

This should give you pause. The handing-over of professional duties to software feels riddled with liability, even today. The thesis for Tech E&O would be very similar to the thesis for the AI insurance product we started out with. (Let’s call it AI E&O.) And yet the market for Tech E&O is small, even in the face of software carrying weighty responsibilities in every nook and cranny of our world. Taking this one step further: you could consider AI E&O as a new form of Tech E&O, or — depending on the details of the contract — as included by Tech E&O policies. After all, AI software is still software. It may not be quite as deterministic as software before LLMs, but you’re still trying to insure the same type of risk: software mistakes.

Then, in what sense does AI E&O expand the Tech E&O market? Before LLMs, software could make devastatingly expensive mistakes. After LLMs, software can still make devastatingly expensive mistakes. The LLM aspect may increase the potential frequency and severity of those mistakes, but you have a needle to thread: if frequency of severe mistakes increases too much, then insurance becomes moot. People are not going to use a software product that breaks all the time, regardless of whether any damages are covered or not. It’d just be a nuisance.

This puts insurance entrepreneurs in a structurally tricky position. The Tech E&O market is so small that for a venture-scale thesis, you’d need to forecast AI E&O increasing the size of the Tech E&O market several-fold, probably 10-20x+. To get there, you’d have to:
- Overcome any structural market issues [2] that may inhibit growth;
- Bet on severity of claims shooting up, much more so than frequency. AI-enabled software would have to become tremendously more dangerous to deploy, with multi-million-dollar-loss glitches lurking. The risk scenarios you’d be insuring would be cases like “I’m Chevrolet, and my marketing AI promised new trucks to 163 customers” [3] or “I fired all my accountants, replaced them with ChatGPT, and when I woke up this morning I owed a customer a million dollars.”

Maybe I’m being unimaginative, but the maneuvering room to get to widespread AI E&O adoption seems tight. I think the likelier path is that businesses will adopt AI while maintaining some risk-reward equilibrium: steering clear of the use cases with the most severe downside risks, and leaving humans in the loop where appropriate. You may well be right to argue that there is still more risk in the system than before, but I don’t know if there’s so much risk that it gives rise to a major new class of insurance product and satisfies a venture-scale thesis.

An important detail of insurance markets is that insurance carriers must be better at evaluating the risk than the purchasers. Otherwise you get adverse selection problems: consumers who know they are more likely to incur claims purchase insurance, the insurance carriers take losses, and the market eventually collapses. [4] This takes you to a practical concern: how would AI E&O products be underwritten? There would be two parts to it:
1. The insurer would evaluate the characteristics of the AI company — industry, size, safety and testing practices, etc., and look at their service agreements with customers to figure out what kind of risk they’re on the hook for.
2. The insurer would run a large battery of tests against the AI offering of the company, seeing how it holds up under a variety of adversarial scenarios, and what the variability of outputs is.

The first part is a classic point of strength for insurers: given a large portfolio of businesses underwritten, they can figure out how these factors affect pricing. But I expect that for an AI E&O insurance product, it’s really the second part that determines the risk. Here’s the problem: why would an insurer be better at testing a company’s AI outputs than the company itself? Revisiting our earlier example, the folks at Chevrolet would have a much better understanding of their own business, all the ways in which they could deploy AI, and the most dangerous, error-prone areas, than any insurer looking in from the outside. Specifically, there are two related problems:
- As an outsider, it’s extremely hard to get a full understanding of all the ways in which AI will be deployed, and what risks that implies downstream. Hard to price!
- There is a massive information asymmetry between companies utilizing/selling AI software, and insurers seeking to insure the consequent risks. Trying to insure AI applications looks like a hotbed of adverse selection.

Another classic detail of insurance markets is that insurers need to diversify the risks that they underwrite: for example, if you provide flood insurance, then you wouldn’t want to write all your policies in a single town by the river: when one house gets flooded by a storm, chances are that all the houses get flooded, and you go out of business. That’s concentration of risk, and insurers strive to avoid it.

The trouble is that the ecosystem of AI software products currently has enormous concentration of risk. There’s a single-digit number of major LLM providers. AI infrastructure, whether for RAG or data labeling, etc., has a similar concentration of activity, with many small providers and a few major ones. Practically speaking, if you’re insuring mostly GPT wrappers, and the newest GPT model has some kind of safety regression, then your entire portfolio of policies is in trouble. For any insurer, it will be tricky to maintain adequate diversification of the underlying risks. In practice, this means your portfolio might simply be constrained to a small size, as you can never grow such that you’d be over-exposed to any particular underlying provider.

The final challenge is that insurance policies are usually written for the full year ahead, and AI software is evolving with great speed. In our own AI deployments at Limit, we found surprising differences in behavior and quality from different models. It’s hard to trust software updates from outside vendors to be strict improvements. Further, the speed at which businesses are iterating on their AI software, or deploying it in new contexts, makes the underwriting problem even harder. It’s tough enough to test the AI software at any one point in time. There’s no good way to make assumptions about how else it will get used in the next few months, or how well-tested the next software release will be. The remedy for an insurance underwriter will be to prescribe what kinds of updates are in scope for the policy, what level of testing must be done, etc. This helps limit the risk, but it also greatly increases the complexity of the insurance contract, and makes it more cumbersome to purchase.

My skepticism above doesn’t mean there’s no case for AI E&O. There certainly is. But it’s tricky. You’d have to bring the following conditions together:
- There must be rare, hard-to-mitigate, severe risks from AI deployment;
- The purchasers of such insurance are the actors in the market (such as software providers and consumers) that are stuck with the risk, i.e. not able to contractually transfer it to other parties;
- The insurers would need to be better than the policyholders at figuring out the riskiness of the AI deployment. (Could AI E&O insurers partner with AI testing/safety/QA service providers, similar to how cyber insurers partner with cybersecurity providers? Yes, but those services are already readily accessible to potential insurance customers on the open market! [5] The information asymmetry remains.) An insurer wouldn’t need to know how to underwrite every such company, but could constrain their appetite to certain types of businesses where they feel they can exhaustively understand the AI risks;
- Diversification of underlying risks (technology vendors) would have to be maintained, which practically implies limiting the portfolio size of the insurer;
- The insurance policies would need to prescribe guardrails around software updates.

It is certainly possible to bring all these conditions together — it’s just not easy, and even when you do, it implies a very selective, small portfolio of underwritten risks. I suspect that at least for the next few years, the set of such opportunities will be pretty thin, making it a way but not the best way to attack the AI liability problem. Furthermore, you would need this risk environment to scale up dramatically to give rise to a venture-scale insurance thesis. For now, if you’re really good at evaluating AI model safety, that’s probably better sold as a standalone service than used to underpin an insurance product.

This piece was inspired by conversations over the past weeks with Rune, Bala, Zack, Alex, and others. Thanks for your thoughts!
id: 42,046,915 | by: mwanago | time: 2024-11-04T22:54:54 | score: 1 | dead: true | split: train (other fields null)

id: 42,046,927 | by: doener | time: 2024-11-04T22:56:17 | title: Apple Finds Its Gaming Console with the New Mac Mini | url: https://www.bloomberg.com/news/newsletters/2024-11-03/apple-finally-finds-its-game-console-rival-with-the-new-m4-and-m4-pro-mac-minis-m31na57p | score: 3 | descendants: 0 | split: train (other fields null)

id: 42,046,929 | by: aralroca | time: 2024-11-04T22:56:40 | score: 1 | dead: true | split: train (other fields null)
id: 42,046,953 | by: pseudolus | time: 2024-11-04T22:58:50 | title: World’s oldest tree? Genetic analysis traces evolution of iconic Pando forest | url: https://www.nature.com/articles/d41586-024-03570-4 | score: 116 | descendants: 51 | kids: [42051004, 42051086, 42050940, 42054067, 42055180, 42055752, 42055699] | scraping_error: no_error | scraped_title: The world’s oldest tree? Genetic analysis traces evolution of iconic Pando forest | scraped_byline: Kudiabor, Helena | scraped_at: 2024-11-08T13:30:23 | scraped_language: en | split: train (other fields null)
scraped_body:
DNA samples from one of the world’s largest and oldest plants — a quaking aspen tree (Populus tremuloides) in Utah called Pando — have helped researchers to determine its age and revealed clues about its evolutionary history.

By sequencing hundreds of samples from the tree, researchers confirmed that Pando is between 16,000 and 80,000 years old, verifying previous suggestions that it is among the oldest organisms on Earth. They were also able to track patterns of genetic variation spread throughout the tree that offer clues about how it has adapted and evolved over the course of its lifetime. The findings were posted on the bioRxiv preprint server on 24 October [1]. The work has not yet been peer reviewed.

“It’s just pretty cool to study such an iconic organism,” says co-author Rozenn Pineau, a plant evolutionary geneticist at the University of Chicago in Illinois. “I think it’s important to draw people’s attention on natural wonders of the world.”

One very big tree

Pando — whose name means ‘I spread’ in Latin — consists of some 47,000 stems that cover an area of 42.6 hectares in Utah’s Fishlake National Forest. Because of the way the plant reproduces, this collection of aspens is technically all one tree, supported by a single, vast root system. Pando is triploid, meaning that its cells contain three copies of each chromosome, rather than two. As a result, Pando cannot reproduce sexually and mix its DNA with that of other trees, and instead creates clones of itself.

Although this process generates offspring that are genetically identical, they can still accumulate genetic mutations as their cells divide. Biologists are interested in these variations because they provide information on how the plant has changed since the first seedling sprouted. Some studies have explored the spread of new mutations in plants and fungi that reproduce clonally, but few have investigated centuries-old plants like Pando.

“It’s kind of shocking to me that there hasn’t been a lot of genetic interest in Pando already, given how cool it is,” says study co-author William Ratcliff, an evolutionary biologist at the Georgia Institute of Technology in Atlanta.

The researchers collected samples of roots, bark, leaves and branches from across the Pando clone, as well as from other, unrelated quaking aspen trees for comparison. They extracted DNA from the samples, then sequenced and analysed a subsection of the genome. After removing variants that were found in both Pando and neighbouring trees, as well as mutations found in just one sample, the researchers were able to review nearly 4,000 genetic variants that had arisen as Pando cloned itself repeatedly over millennia.

Analysing the patterns of these mutations revealed surprising results. “You would expect that the trees that are spatially close are also closer genetically,” says Pineau. “But this is not exactly what we find. We found a spatial signal, but that is much weaker than what we expected.” Physically close trees did share more similar mutations than those that were far apart — but only slightly more. However, over a smaller scale of 1–15 metres the trend was stronger, with stems that were closer together having significantly more shared mutations. Pando covers an area of more than 40 hectares, “but it almost looks like it’s a well-mixed pot of genetic information”, says Ratcliff.

Protective mechanism

By inputting Pando’s genetic data into a theoretical model that plots an organism’s evolutionary lineage, the researchers also estimated Pando’s age. They put this at between 16,000 and 80,000 years. “It makes the Roman Empire seem like just a young, recent thing,” says Ratcliff.

The team also considered reasons for the tree’s remarkable endurance. Pineau says that Pando being triploid might lead to “bigger cells, bigger organisms, better fitness”, and that existing clones might be more durable than new mixed offspring.

Philippe Reymond, who researches interactions between plants and herbivores at the University of Lausanne in Switzerland, says that the findings hint that “plants and trees have a mechanism to protect the genome” from the accumulation of harmful genetic mutations, a suggestion that is “quite interesting for many scientists”. He adds that future studies could search for this exact mechanism at the cellular level. Ratcliff is also keen for more studies to be done on Pando’s genetic history. “I would love to make a call for people to work on these kinds of organisms,” he says.
42,046,956
SerCe
2024-11-04T22:59:22
ReiserFS and the Art and Artist Problem
null
https://corecursive.com/reiserfs/
18
17
[ 42047928, 42048382, 42047521, 42047552 ]
null
null
no_error
ReiserFS - CoRecursive Podcast
null
Adam Gordon Bell
Note: This podcast is designed to be heard. If you are able, we strongly encourage you to listen to the audio, which includes emphasis that’s not on the page Intro Welcome to CoRecursive, I’m Adam Gordon Bell. Today’s episode is the story of a piece of software being built. My wife and I often debate people’s character. Sometimes it’s big, like separating Michael Jackson’s music from his actions. Is “Billie Jean” still great, or is it tainted? But usually, it’s more specific. Sometimes people give her bad vibes. And I’m not always a good judge of character, I give people the benefit of the doubt, where she often knows right away what she thinks about someone. And she’ll know that the person is off. She’s not always right, but I mean, she’s not going to listen to this so like, yeah, like actually, shes always right, it just takes me time to realize it usually. The Apology But anyways, this is going somewhere I swear. So, it’s late January 2024. I’m at my desk, clicking through links, and I land on the Linux Kernel Mailing List, the LKML. It’s mostly patches. One title reads “Merge tag timers-v6,” followed by a message: “Happy New Year 2024! Please consider pulling these changes.” Then comes the literal diff patch: + code on line X, - this code. You can apply these patches to your source with git apply, just like pulling a pull request. But that’s not the link I get. The one I follow starts like this: Hans: I was asked by a kind Fredrick Brennan for my comments that I might offer on the discussion of removing ReiserFS V3 from the kernel. I don’t post directly because I am in prison for killing my wife Nina in 2006. Hans: I am very sorry for my crime–a proper apology would be off topic for this forum, but available to any who ask. That’s Hans Reiser. His voice, like everyone else’s here except mine, is generated by OpenAI. The letter responds to a prompt from Fredrick Brennan. ReiserFS is being deprecated from the kernel, meaning it’s becoming obsolete and will be removed entirely. And Hans Reiser, the creator—well, he’s in prison. For murder. And actually the file system and the man and the murder are all really bound up together. They are all linked and so the letter goes on. He points people to Reiser4. A more maintainable basis for the future of file system, he calls it. And then he goes on. And on. Thousands of words. About the technical challenges, the interpersonal conflicts, the mistakes he made. The dreams he had. The life he lost. It’s this unexpected thing. A letter from a convicted murderer on a technical mailing list. A glimpse into the human story behind the code. A man trying to explain himself, to grapple with his past. A man wondering if redemption is even possible. And it all starts with just trying to make a better linux filesystem. There are many ways to tell the story of ReiserFS, Hans, and Nina, his victim. Entire books cover it. But my way of telling the story… well I want to tell you about how you can’t separate the person from the code. You can’t separate the technical from the social. You can’t be a monster in one domain and not have it be part of the others. It’s all mixed up together. That’s today’s story The Rise of ReiserFS So, picture this. It’s the late 1990s. The internet is taking off. Linux, this free, open-source operating system, it’s gaining traction. Programmers, they’re building all sorts of new things. Websites, applications, tools. And all these things, especially on linux, they’re made of files. Lots and lots of files. 
And how does Linux keep track of all these files? With a filesystem of course. And back then, the popular Linux filesystem, it was ext2. It was…okay. It worked. But it had its problems. Under the covers, a file system, the code, is like a librarian for your disk. I want to add a new book to the collection, Librarian has to find an empty shelf and put it there. But then it has to update the card catalog with details on where it put the book, or I’ll never find it. This card catalog is your directory listing, the index of the things on your drive. Imagine you’re doing this, and suddenly the lights go out. Power failure. When they come back on, what’s the state of your library? You might have found the shelf space but never got the book there. Or the book’s on the shelf but not in the catalog. Maybe you’re halfway through writing the catalog card. Everything’s in this weird, half-finished state. Back then, computers crashed all the time, leaving you with a mess. Files marked as stored but not actually there. Files there but not properly recorded. It was chaos. So, if your computer crashed, you had to run this thing called fsck. Filesystem check. And it could take hours. Literally hours. On a big disk, it could take all day. And then there were the performance issues. Big directories, lots of little files? Ext2 would slow to a crawl. It used linked lists to organize directories, and if you’ve done your leetcoding exercises, you know that going node by node through a linked list can take time. But yeah, this is 1993, and the dot com boom, which started with the netscape IPO, is still a couple years away. And Hans is in Oakland, California, across the bay. In a cluttered home office, filled with computer monitors, stacks of books, and the hum of cooling fans. And He wants to build a better filesystem. Faster, more efficient, more elegant than anything out there. But building a filesystem, it’s not a one-person job. It takes a team. And Hans, he didn’t have a lot of money. He was bootstrapping this thing, working a day job, pouring every spare minute into his dream. And then he had this idea. From Russia, With Code He’d read an article about how Russian programmers, incredibly talented programmers, were working for next to nothing after the collapse of the Soviet Union. And Hans, he saw an opportunity. He booked a flight to Moscow. Now Moscow in 1993. It’s just a couple years after The Soviet Union collapse. Everything’s changing. And here’s this American programmer, this guy with a cowboy hat, walking into a world he doesn’t really understand. He literally wore a cowboy hat in Moscow, to play up his Americaness. He’s trying to build a team, to communicate his vision, to navigate a culture that’s completely different from his own. He’s this American in Moscow, and he’s sticking out. He’s not blending in. He’s making a statement. And he’s doing it all on a shoestring budget. He’s paying these programmers a fraction of what they’d make in the U.S., but for them, it’s still a significant raise. And Hans, he’s working his butt off to keep the money coming. He’s coding for Synopsys, then Sun Microsystems, taking on any contract gig he can find. He’s even moonlighting at some army research center in New Jersey, flying back and forth across the country, across continents, just to keep this dream alive. He’s pouring all his energy into this project, into this dream of building a better filesystem. And for a while, it seems like it’s working. The team is making progress. The code is coming together. 
ReiserFS is starting to take shape. He’s traveling back and forth between the US and Russia, checking in on his team, making sure the code is clean, that the algorithms are efficient. He’s pushing them hard, demanding excellence, because he knows that in the cutthroat world of file systems, there’s no room for second best. But there are cracks in the foundation. Cultural differences, communication barriers, the challenges of managing a remote team. These things, they start to wear on Hans. He’s used to getting his way, to being in control. And in Russia, things aren’t so simple. Fast forward to March 1998. Saint Petersburg. A cafe next to a canal. Hans is meeting a woman. Nina Sharanova. A mail-order bride. And Hans, he’s smitten. Her voice, her smile, her intelligence. She’s a doctor, an OB-GYN. She seems to be everything he’s looking for. And so, they get married. A quick courtship, a hastily arranged wedding. And soon, Nina is pregnant. Their first child, Rory, is born in September 1999. It seems like a happy ending, a new chapter in Hans’s life. He’s now got a wife and kid and his team in Russia, and him have made great strides with their filesystem. It had journaling - which was an old idea, actually. Before you shelf the book and update the card catalog, you write to a journal that you are going to do so, then you can recover without having to check every entry in the catalog. Then it used B+ trees to organize directories, so no slow listing of files. But the biggest trick actually sort of created more space on the disk. And that was a big deal. But also the price of Han’s ambition was starting to become tragically clear. Seeds of Trouble Because if you rewind a bit, if you go back to the late 80s, before ReiserFS, before Namesys, there were these…warning signs. Little and sometimes big social glitches. They weren’t about the technology, not exactly. They were about Hans himself. When Han’s at UC Berkeley, and he’s part of this student-run group called the Open Computing Facility, the OCF. It’s down in the basement of Evans Hall. Rows of humming computers, with fluorescent lights buzzing overhead. And for Hans and many others it’s a haven. A place to code, to build, to create. And a place dedicated to Open Source and open access. The OCF is volunteer run, and Hans gets very involved. He even manages to secure a huge donation of Apollo workstations. But the OCF, it’s not just about the technology. It’s about the community. It’s about people working together, sharing ideas, building something bigger than themselves. Open source, it thrives on collaboration. And Hans, he doesn’t really get that. He’s brilliant, yes, but he’s also got this…intense personality. He’s arrogant. He wants control. He doesn’t play well with others. There are stories. Like the time he booted an undergrad off the system for posting a message he disagreed with on usenet. Or the time he physically assaulted a colleague after a disagreement. Or the meeting minutes with headings like, “Hans Complains, the Earth Shakes, etc.” These weren’t just isolated incidents. They were a pattern. One former OCF user put it this way: He acted as if he owned the Open Computer Facility, and that everyone should kowtow to him. Another said, He went out of his way to be mean, petty, arrogant, and small-minded. These are signs. Signs of a person who’s not well integrated. Signs that are often rationalized away when someone is talented. 
The Dream Begins But yeah, Namesys, by the time they got to version 3 of ReiserFS they were really on to something. Linux kernel version 2.4.1 included ReiserFS as an option and all the sudden this code had distribution. And since it was the first linux filesystem with journalling, it was a solid choice. But the thing that really made it popular, was a NameSys / Han’s innovation called tail packing. ( I feel like I’m going to get tired of the library metaphor ) Imagine that our librarian’s shelves are all divided into blocks that are the size of a medium sized hardcover book. We call that a block, and most file systems had 4kb blocks. That is how a hard drive works, and each little area is a block. The librarian, in the card catalog is actually writing down the address of the blocks where the book is stored. And if the book is larger than the block size, the librarian just splits up the book, puts it in as many blocks as it needs. 4kb is actually pretty small, so many books are split across many many blocks. Fragmentation, if you remember running defrag on your home windows machine like I do, fragmentation is when the books that are big and need to be split are split all over the library, so when retrieving them, the librarian has to go all over the place, instead of all being next to each other, block after block. Defrag is putting them back, sequentially next to each other. But tail packing is a different thing then defragging. It’s a technique for dealing with small files. You see when you have all these books that are a bit larger than the block, you get these little tails, instead of storing them next to the rest of the book, you store them all together. You pack all these tails, ends of book together into one block. This effectively gave you more space, and especially if you had lots of small files. Because imagine without tail packing, if you were storing pamplets instead of books, storing one per block, instead of packing a whole bunch into one block is going to waste a lot of space. It was brilliant! Suddenly, you had all this extra space on your hard drive. No file system checks, more space, and it was significantly faster than ext2. And the Linux community loved it! Companies issues praise: Philipp: ReiserFS is the main engine behind our LivingXML database system…With the great help of ReiserFS, we now have one of the best database systems. SUSE Linux, a popular distribution, even adopted it as their default filesystem. This meant enterprise usage and professional support. SuSe was putting their reputation behind and on the line for ReiserFS. Hans, he was on top of the world. His dream, it was becoming a reality. His filesystem was changing the Linux landscape. He was a star in the open-source community. He was getting the recognition he craved. But even then, even as ReiserFS was taking off, there were these whispers. Rumors of data corruption. Concerns about scalability. And then there was Hans himself. His personality, maybe had some bugs as well. Post honeymoon phase the marriage was getting harder. The Beginning of the End Because while Hans is in Moscow, chasing his technological ambitions, something else is happening back in Oakland. Because Nina came to the US for love, for a better life. Nina: We were madly in love until our first child was born. But things changed. Hans, he was consumed by his work, spending most of his time in Russia. Nina, she was left to navigate a new culture, a new language, a new life, all while raising two young children on her own. 
And it wasn’t just the distance. There were conflicting expectations. Hans, he wanted a traditional wife, someone who would put her career aside and focus on the family. Nina, she had her own dreams, her own ambitions. She wanted to be a doctor in the US, to build a life for herself and her children. Nina: Hans did not want me to be a doctor in the U.S. He wanted me to have six children and then I could deal with my career… He believed that Russian women would stay at home and devote themselves to their children… He didn’t want me to study for my exams. I knew that when I married him. We thought we could change each other. But they couldn’t. The tension, the resentment, it grew with each passing day. Nina, she felt isolated, trapped in a marriage that was slowly suffocating her. Meanwhile Hans full of excitement for more ideas he has about improving filesystems. Or maybe he’s just unsatisfied with where ReiserFS 3 stood. The Dream Continues Hans: Hierarchy doesn’t scale well for human beings, and hierarchical namespaces scale extremely poorly. He’s already thinking bigger. He’s got this grand vision, this almost utopian idea of how computers should work. He sees the limitations of existing systems, the walls between applications, the data silos. And he wants to tear them down. He wants to build something better, something faster, something more… connected. He gives a talk at Google. He’s passionate, intense. He’s pacing back and forth, explaining his vision. Hans: The file system is the most central namespace of the OS. Hans: I would like to suggest that namespaces in general are like roads and waterways. He’s building on an idea, an analogy to Adam Smith, the economist. Smith saw how roads and waterways connected cities, how they facilitated trade and communication, how they fueled the growth of civilizations. Hans, he sees files the same way. As the infrastructure that connects data, that allows applications to communicate, that powers the digital world. And he’s convinced that the current infrastructure, it’s not good enough. It’s fragmented, it’s inefficient. Namespaces are a barrier holding back data and holding back progress. He wants to build a unified namespace, a single, interconnected system where all data is accessible, where information flows freely. Hans: To unify the namespaces within the operating system is a bit of a quest for a holy grail… even though we will never succeed in unifying all the namespaces within the operating system, the closer we can get ourselves to it, we’re in a better place than if we don’t get any closer to it. He secures funding from DARPA, the Pentagon’s R&D agency. They want a filesystem for the future, a filesystem that can handle anything. Hans: Reiser4 is not only a file system. It is a software framework for creation, assembly, and customizing file systems. Hans: How well your file system performs is very much determined by how easy it is to make little changes to it, and the more little experiments you make the higher your performance is going to be. It’s a bit hard for me to understand his vision. But I think its a bit like replaces a file based system with a database, where files can easily be searched and indexed and have metadata and plugins can add whole new ways of seeing the file system layer. It’s a lot. Hans: The thing that’s true of everything that’s highly empirical is that you’re going to get it wrong a lot of the time because nature is just so much more complex than our puny little brains. 
The actual storage layer is also different. Hans: Reiser4 uses dancing trees, which obsolete the balanced tree algorithms used in databases…This makes Reiser4 more space efficient than other filesystems because we squish small files together rather than wasting space due to block alignment like they do Hans: For some interfaces, Reiser4 performs such switching in intelligent manner without user intervention. Thus, the file system is in permanent evolution, adapting to current conditions. He’s working with his team in Russia. Long nights, endless emails, debates about algorithms and data structures. He’s pushing them hard, demanding perfection. He wants Reiser4 to be the best, the fastest, the most revolutionary filesystem ever created. But he’s also becoming increasingly isolated. His communication style, his relentless pursuit of his vision, it’s creating friction. He’s alienating colleagues, pushing away potential allies. The warning signs are there, flashing brighter than ever. A Bittersweet Affair And meanwhile Nina’s isolation increases. She doesn’t know many people in Oakland. Hans is off in Russia, chasing his dreams of file system domination, leaving Nina to deal with the realities of daily life: two young kids, a new country, a failing marriage. Nina starts meeting people off craigs list. She’s trying to balance her responsibilities as a mother, her desire for a career, her own personal needs. She’s spending her days at Grand Lake Montessori, this private school where her kids go, this place that’s all about nurturing and child-centered learning. She’s volunteering, helping out in the classrooms, connecting with other parents. She’s trying to create some semblance of normalcy, of stability, in a life that’s spinning out of control. And into this void steps Sean Sturgeon. Hans’s best friend. A complicated guy. A former truck driver, a self-proclaimed “ex-gay prostitute,” a one-time fixture in the Bay Area S&M scene. Later, when the police are involved, when the Hans and Nina story hits the news, and it does, the salacious details of Nina and Sean affair becomes the center stage. But actually their marital struggles and tension that led to a trial separation was less about affairs and more about parenting. Rory is five years old. He’s a bright kid, but he’s also struggling. Nightmares, anxiety, behavioral problems. Nina, she’s worried. She takes him to therapists, gets him evaluated. She’s trying to figure out what’s going on, how to help him. But Hans, he’s dismissive. He thinks Nina is overreacting, that Rory’s problems are just a normal part of growing up. He sees Nina’s concern as a tactic in their custody battle, a way to paint him as a bad father. Nina: Our children hardly know their father because he has been home for only months at a time, three times a year. Hans, he’s got his own ideas about parenting. He believes in toughening up kids, exposing them to the real world. He sees video games, even violent ones as educational, as a way to teach Rory about history, about strategy, about the culture of manhood. Hans: Little boys take to violent computer games like monkeys take to trees. They do not have instincts that favor combat rehearsal activities for no reason, they have them because they affect whether they live or die a significant amount of the time. Nina, she sees it differently. She sees the nightmares, the anxiety, the drawings of monsters and soldiers. She sees a child who’s struggling, a child who needs a safe and nurturing environment, not a virtual battlefield. 
The conflict escalates. Hans accuses Nina of manipulating Rory, of turning him against him. And anyways he’s busy, back and forth from Russia, building his perfect file system, his perfectly ordered world of data. And the big struggle he’s having is maybe the size of his ambition because he doesn’t want to talk about his existing widely used ReiserFS V3 file system anymore. The Cracks in the Code And yeah, that existing version, V3, it’s getting this reputation for being…fragile. Especially when the hardware isn’t perfect. And in the early 2000s, hardware, it wasn’t always perfect. Hard drives crashed, power flickered, things happened. And when things happened, ReiserFS, sometimes it just…fell apart. Data got corrupted, files vanished. And the Linux community, they’re starting to notice. They’re starting to talk. On mailing lists, in forums, the whispers are growing louder. Jeff: ReiserFS has serious scalability problems…the scalability problems are real. Jeff Mahoney, a SUSE developer, he’s seeing the writing on the wall. ReiserFS, it’s great for small files, but it doesn’t scale. It can’t handle the massive datasets, the high-volume workloads that are becoming more and more common. And Hans, he’s dismissive. He’s got this almost messianic belief in his own vision. He’s not interested in patching up ReiserFS. He’s got Reiser4, this next-generation file system, this masterpiece he’s convinced will solve everything. Hans-Reiser: The code was unmaintainable terrible code that needed to be rewritten from scratch… He’s telling SUSE, telling the Linux community, that ReiserFS is obsolete, that they need to move on to 4. He’s not interested in compromise, in collaboration. He’s got his own way, and he’s sticking to it. And the kernel developers, they’re not having it. They’re seeing Hans’s brilliance, but they’re also seeing his arrogance, his inability to work within a community. Alan Cox, a core Linux developer, he’s worried about the long-term viability of Reiser4. Alan-Cox: “It doesn’t matter if reiser4 causes crashes. It matters that people can fix them…and the code is maintainable.” yeah, What happens if Hans disappears. Who’s going to maintain this complex, unconventional file system? Who’s going to fix the bugs, the inevitable crashes? There is a huge element of trust here, and Hans isn’t interested in building trust. If SuSe is selling paid enterprise support for his existing file system, they need to trust it works. If they don’t, if he’s not helping maintain it and fix issues, why would they ever trust his next idea? But Hans has his vision for a new filesystem world, where all the previous ideas are tossed aside. He doesn’t care about the social elements, or the people he is rubbing the wrong way. The Seeds of Reiser4 Hans : It had to be written from scratch to be written right… That’s Han’s talking to the kernel mailing list. He’s thinking about a world where a simple search can unearth anything and everything, regardless of the application, regardless of the file format. But He’s been butting heads with the Linux kernel developers. He sees them as resistant to change, as unable to grasp the brilliance of his vision. Hans : What makes you think kernel developers have a deep understanding of the value of connectivity in the OS? They don’t. The average kernel developer is not particularly bright. And the kernel developers, they’re pushing back. Linus Torvalds, the father of Linux, he’s not impressed with Reiser4’s plugin architecture. 
Linus: As long as you call them ‘plugins’…I (and I suspect a lot of other people) are totally uninterested… Uninterested, because they’re worried about the complexity, the stability, the long-term viability of Reiser4. They’re worried about his inability to collaborate, to compromise. They’re seeing a man who’s so focused on his own vision that he’s blind to the practical realities of working within a community, of releasing and maintain code at the scale of heavily used operating system kernel. And at home, while all this is going on, he’s accusing Nina of Munchausen by proxy, of weaponizing their son’s health in their custody battle. Hans: “You don’t want the kids except as a bargaining chip.” He’s projecting, blaming Nina for his own failings, for his own inability to connect, to empathize. He’s building this perfect file system, this world of interconnected data, but his own world is fracturing. But he does it, he gets Reiser4 completed. Control and Loss Hans presents benchmarks for Reiser4. For small files, still a specialty, its much faster than ext3. For metadata operations, like file creation and deletion, it was again, much faster than other linux filesystems. On other benchmarks, large files, concurrency, Hans had benchmarks showing it could be 2 to 3 times faster. Others disputed this, and found hans benchmarks to focus in on the specific scenarios where his filesystem clearly had an advantage, but either way, it existed. His vision had been created. And then he needs to get it into the linux kernel. Hans : “All objections have now been addressed…I request that reiser4 be included.” He’s fighting for recognition, for his vision to be accepted. It’s a fight that mirrors another battle he’s waging, a brutal custody battle with his now estranged wife, Nina. Their emails are a war zone. He accuses her of Munchausen by proxy, of fabricating illnesses for their son, Rory. Hans : You don’t want the kids except as a bargaining chip. They interfere with your career. He sees her as manipulative, an obstacle to his control over their children. But Nina, she’s just trying to protect her kids, to shield them from his increasingly erratic behavior. She takes Rory to therapy, gets him evaluated. Nina : Rory needs a very safe environment. He needs to thrive. She expresses her fears to friends, worries about the impact of the constant conflict on her children. She wants them to feel safe, loved, protected. She wants to build a stable, nurturing environment for them, far from the toxic battlefield of her marriage. Meanwhile, Hans is consumed by the fight. The custody battle, the Reiser4 debates, they’re fueling his anger, his paranoia. Hans : Male geeks…are one of America’s most hated cultural minorities. … I am tired of being the punching bag. He feels misunderstood, unfairly targeted. He lashes out, sends threatening emails. Hans : Those who anger slowly, cool slowly Nina. The stress is mounting, the pressure building. He’s losing his grip, his world spiraling out of control. And then, Nina disappears. September 3, 2006. She drops the kids off at his house, a normal Sunday afternoon exchange. A hug, a kiss goodbye. And then, nothing. She vanishes without a trace. It’s like a file system corrupted, a system crashing. All that data, all those connections, suddenly fragmented, lost. His wife, the mother of his children, gone. His dream of a unified namespace, of total control over data, a stark contrast to the chaos of his own life. 
He’s the architect of his own destruction, a brilliant mind consumed by his own demons. He’s about to pay the price for his ambition, his isolation, his inability to connect, to empathize, to see the human cost of his pursuit of control. The Wire Nina’s friend, Ellen became concerned that Nina didn’t pick the kids up from school the next day. That evening she phoned the police to report a missing person. She must have had suspicions because the police had her phone Hans, with them present. She asked him if he might know where Nina was and he immediately said he wanted to talk to his lawyer. Not a great sign. So the police, they start watching Hans, following him, but they’re also listening. They’ve got his phones tapped, a wire room set up at headquarters. Officers are working in shifts, headphones on, listening to every conversation, every whispered word. They’re hoping to hear something, anything that will give them a break in the case, a clue to where Nina might be. And what they hear, it’s not what they expect. It’s not the frantic calls of a worried husband, desperately searching for his missing wife. It’s not the hushed conversations with accomplices, plotting a cover-up. It’s something else entirely. It’s Hans, talking to his mother, Beverly. He’s complaining, ranting, not about Nina’s disappearance, but about Nina herself. About the custody battle, about the divorce, about how she “lied” about their son’s illnesses. Hans : She really was nuts, mom. She really was…and you know, she came up with these illnesses because she hated me. He’s angry, bitter, resentful. He’s talking about Nina in the past tense, like she’s already gone. He’s not showing any remorse, any concern for her well-being. Mother: Still, Nina didn’t deserve whatever it is that happened to her. Don’t you think? Hans: I think my children shouldn’t be endangered by her. His mother, she tries to steer him back, to remind him that Nina, no matter their differences, didn’t deserve whatever happened to her. But Hans, he’s not having it. He’s caught up in his own narrative, his own justifications. Hans : Yeah, well being decent is a mistake, a mistake I paid for heavily. The officers listening, they’re taking notes, marking down the times, the dates, the words. They’re analyzing his tone, his inflections, the pauses, the hesitations. They’re building a profile, a psychological portrait of a man consumed by anger, a man who seems more concerned with winning a custody battle than finding his missing wife. Meanwhile, the investigation is intensifying. They’ve searched Hans’s home, not once, but twice. They’ve found traces of Nina’s blood, mixed with Hans’s, on a pillar in the living room. They’ve found more of her blood in his car, on a sleeping bag stuff sack. The passenger seat’s missing, the floor looks scrubbed clean. There are books on homicide in his car. It’s all starting to add up, a pattern emerging from the noise. And then, there’s the car chase. Hans, spotted driving his mother’s Honda, leads police on a wild goose chase through the Oakland hills, dodging and weaving, trying to shake his tail. He abandons the car, sprints through the neighborhood, disappears into the night. He’s acting like a guilty man, a man with something to hide. The police, they’re convinced. They announce Hans Reiser as a suspect in Nina’s disappearance. The media, they’re all over it, cameras flashing, microphones thrust in faces. The pressure’s mounting, the public scrutiny intensifying. 
Hans gets a lawyer who tries to downplay the evidence, calling it “flimsy,” “circumstantial.” Saying Hans is just a computer guy, a bit eccentric, not a killer. But the police, they’ve got their man. They’re just waiting for the final piece of the puzzle to fall into place. They’re waiting for Hans to crack, to confess, to lead them to Nina. But Hans, he’s not talking. For him truth is a variable, waiting to be assigned, waiting altering, updated and incremented. And as the investigation continues, the question hangs heavy in the air: Where is Nina? Will they find her? Or will Hans Reiser, the architect of a revolutionary file system, become the architect of his own escape? The Trial Fast forward to 2007. and in Oakland it’s a media circus. TV trucks, reporters, bloggers, all jostling for position. Inside, a courtroom drama is unfolding, a real-life tragedy playing out in real time. The prosecution methodically lays out their case. They show the jury the last known images of Nina, shopping with her kids, just hours before she vanished. They present the blood evidence, Nina’s blood in Hans’s house, in his car. They highlight his erratic behavior. They call witness after witness, each one painting a picture of a man consumed by anger, a man capable of violence. The defense tries to counter this narrative. They attempt to portray Nina as manipulative, unstable, a woman who might still be alive, hiding somewhere to punish him. They bring up her affair, trying to shift the blame, to create reasonable doubt. They talk about Hans’s personality, his quirks, his social awkwardness. They say he’s just a programmer, a bit different, not a killer. And then, Hans takes the stand. He’s wearing a suit, trying to project an image of composure, of innocence. He tells his story, his version of the truth. He denies killing Nina. He says he doesn’t know where she is. But under the pressure of cross-examination, the facade crumbles. He’s evasive, condescending, arrogant. He contradicts himself, gets caught in lies. He admits to perjury, to hiding evidence. He stumbled over questions about the missing passenger seat from his car, offering a series of shifting explanations. Yeah, the seat of his car is just gone, with no real explanation, besides it made it better for sleeping in the car. The car is also soaking wet on the inside when they take it, as if hosed off. Plus Nina’s blood in the car and at his house. He also has his passport and a lot of cash in fanny pack. Its all circumstantial but yeah … come on. He admitted to perjury, to intentionally misleading the jury. And then there were the missing hard drives, given to his lawyer months before but only revealed during the trial. And then there were the murder books. “Homicide: A Year on the Killing Streets”, the behind the scenes look at Baltimore homicide investigators that would eventually lead to the the show “The Wire” and “Masterpieces of Murder” as true crime book. Both bought together with cash from a local Barnes and Noble. He tried to explain away the books, saying he bought them out of an “arrogance of innocence”. And this part maybe seems true, well the arrogance part. He was a smart person, and cocky and thought if he had a plan and did some research, he could get away with everything. Not even thinking of the optics of heading to the local book store and buying all the books on ‘murder’ to help craft a plan. And then, that moment of tension. The jury delivers the verdict: guilty, first-degree murder. 
As Hans is led away, he utters those chilling words: “I’ve been the best father that I know how.” A desperate attempt to justify the unjustifiable. The Fallout The Linux community, they’re watching all of this unfold. And their reactions, they’re all over the map. Jonathan Corbet, the editor of LWN.net, a respected Linux news site, writes an article analyzing the impact of Reiser’s conviction. He talks about Reiser’s technical brilliance, his innovative ideas, but also about his flaws, his “disregard for the rest of the community,” his “certainty of always being right.” He acknowledges the loss to the community, the loss of a “voice which, for all its faults, had some unique and innovative things to say.” But in the comment sections, in the online forums, a different story unfolds. A raw, unfiltered, and often unsettling reaction. There’s shock, disbelief, of course. But there’s also something else, something darker. Some comments focus on the technical implications. What will happen to ReiserFS? Will Reiser4 ever see the light of day? Will someone else take over the projects? Or will they be abandoned, tainted by their creator’s crimes? There’s talk of renaming the filesystem, of erasing Reiser’s name from the code, of distancing themselves from the scandal. And then there are the jokes, the dark humor, the casual cruelty. “At least they’ll let him code in prison,” one commenter quips. Another suggests, “Maybe he’ll create something even better now that he has plenty of time.” A disturbing lack of empathy, a strange disconnect from the human tragedy at the heart of it all. Others express genuine concern, for Nina, for her children, for the impact on the open-source community. They worry about the negative stereotypes, the headlines screaming “Linux :: murder!” They lament the loss of a brilliant mind, a wasted talent. But the silence from the leaders of the Linux community, it’s deafening. His legacy, once a source of pride, is now shrouded in shame. His brilliance, once celebrated, is now overshadowed by the darkness of his crimes. And as the Linux community grapples with the fallout, a question lingers: What happens when the code we create, the technology we build, becomes entangled with the dark parts of human nature? The Confession But there’s still a piece missing, a gaping hole in the story. Nina’s body has never been found. If you believe Hans, if his filesystem has been serving you well all these years, if you’ve seen his Google talks, if you think he’s a genius and someone to look up to, you might just rationalize things away. He said it on the stand: computer people might be quirky, but we shouldn’t be assumed to be evil because of it. I think for a brief time, people like me, who maybe weren’t popular in high school, who spend a lot of time indoors with a computer, they identify with that message, and think Hans is maybe a stand-in for our own past persecutions. He’s just a nerd being picked on by the world. But that’s where the plea bargain comes in. A deal is struck, and on a hot July afternoon in 2008, a convoy of police cars snakes its way through the Oakland hills, up into Redwood Regional Park. A SWAT team, armed with rifles, scans the dense undergrowth. Inside a caged van, Hans Reiser sits handcuffed to his lawyer. They arrive at a remote parking lot, the end of the road. Hans leads them down a narrow deer trail, the air thick with the smell of pine and eucalyptus. Hans stops. He points. Hans: If you dig down two feet, you’re going to hit Nina’s toes. 
The officers exchange glances. They start digging, and they find her, or what remains. And for his cooperation, his sentence is reduced to second-degree murder. The Lost Vision So, Hans Reiser is in prison, his legacy forever tainted by his crime. But what about his code, his creation, ReiserFS? What about the dream of Reiser4, the filesystem he believed would revolutionize the Linux world? It’s a story of what might have been, a story of unrealized potential. Reiser4, despite its technical innovations, never quite makes it. It’s a complex filesystem, with features like “dancing trees” and a plugin architecture that promised flexibility and performance. But it’s also a filesystem burdened by its creator’s past. The Linux community, already wary of Hans Reiser’s abrasive personality and unconventional coding style, now grapples with the implications of his crime. Trust is broken. The enthusiasm for Reiser4 wanes. And as the community debates the merits of Reiser4, other filesystems step into the spotlight. Ext4, building on the familiar foundation of ext3, emerges as a stable and reliable option, quickly becoming the default choice for many Linux distributions. And ext4 is a group effort, various experienced Linux developers working together to get it working and into the kernel. Btrfs, with its advanced features and focus on data integrity, gains a following among those seeking a more modern and robust filesystem. XFS, known for its high performance with large files, continues to be a strong contender in the enterprise space. Reiser4, meanwhile, languishes. It lacks the corporate backing needed to drive its development and integration into the mainline kernel. And unlike ext4, there is only a single person, Hans, pushing for it. And from his prison cell his voice is now a whisper, lost in the noise of the rapidly evolving Linux community. Edward Shishkin, a former Namesys employee, picks up the torch, continuing to develop Reiser4, even releasing a new version, Reiser5. But without Hans’s drive and vision, without the support of the community, the project struggles to gain momentum. And as the years pass, as the Linux kernel evolves, ReiserFS is marked as obsolete, slated for removal. The code, once so innovative, becomes a footnote in the history of Linux filesystems. A reminder that technical brilliance alone is not enough. That true progress requires not just code, but collaboration, community, and a shared vision for the future. Hans is now known as inmate G31008. His legacy, once a testament to innovation in the Linux world, is now overshadowed by a single, horrific act. But he gets a letter asking about his thoughts on Reiser3 being slated for removal from the Linux kernel. And from his prison cell, Hans Reiser writes. A 6,500-word letter to the Linux Kernel Mailing List, a community he once clashed with. It’s a letter filled with regrets, reflections, and a plea for understanding. Hans: The man I am now would do things very differently from how I did things then. He reflects on the early days of ReiserFS, recalling the struggles to make it perform competitively. He admits to a crucial social misstep, a failure to acknowledge the work of others. He expresses regret for not appreciating his team more. He acknowledges the technical challenges of Reiser4 and the social missteps that hindered its acceptance. Hans: The problem was that it didn’t use the code that had been written by others in the kernel community, and people don’t really like their code not being used. 
People want to feel included. I responded to their social need by, well, screwing the pooch in response. He talks about the prison workshops, the lessons he’s learning about conflict resolution. He thanks Edward Shishkin for his work on Reiser5, though he admits he doesn’t know what’s in it, due to his lack of internet access. He encourages the community to support the project, disentangling it from his own tarnished reputation. It’s a complex letter, a glimpse into the mind of a man grappling with his past. He closes with a poignant reflection: Hans: It has been an honor to be of even passing value to the users of Linux. Ending Rightfully, this story should be about Nina. She’s the one who lost her life. But there’s something important here. A thing I thought should be said. Your technical and social skills—they work together. They multiply when working together. Maybe here, with Hans, they divide. I’m not saying your difficult colleague is a murderer. But these things, they are not not connected. Hans struggled with empathy. Frankly, I struggle with empathy sometimes. And Hans is going to be out, probably before too long, and I hope he gets better at that. At thinking of others. I hope we all get better at that. So, yeah, back to what I’ve learned from my wife. How she can spot problems in people I’ve given a pass to. Well, I think how you interact with people matters. You can’t separate the art from the artist, because they are all tied up together. The coworker you have that some women in the office refuse to work with - a real story I’ve heard. Or that mean-spirited person that just gets a lot of good work done. And they shouldn’t get a pass. We do ourselves no favors when we rationalize, defend, or rally behind people whose lack of empathy makes them dangerous. And yeah, my wife has this knack for seeing through the facade, for sensing when something’s off. But really we all have that. And maybe that’s what we need more of in tech—a little less focus on the code and a little more on the character. Understand the people, and their motivations. Because, in the end, it’s all connected. Even Hans touched on this lesson in his letter to the Linux mailing list: Hans: The man I was then presented papers with benchmarks showing that ReiserFS was faster than ext2. The man I am now would start his papers crediting them for being faster than the filesystems of other operating systems, and thanking them for the years we used their filesystem to write ours. Not doing that was my first serious social mistake in the Linux community, and it was completely unnecessary. ReiserFS was named after Hans, who often spoke of his grand vision. But what about his team in Russia? Who were they? What ideas were theirs? What crucial work should be credited to Hans, and what to his unnamed team? How did they endure his challenging personality? This is the true purpose of the letter. Hans: In prison I have been working quite hard on developing my social skills, especially my conflict resolution and conflict avoidance skills…It has changed me. Hans: Assuming that the decision is to remove V3 from the kernel, I have just one request: that for one last release the README be edited to add Mikhail Gilula, Konstantin Shvachko, and Anatoly Pinchuk to the credits, and to delete anything in there I might have said about why they were not credited. It is time to let go. Outro That was the show! Hans’ request was granted. A patch email was sent to the Linux mailing list, updating the readme. 
ReiserFS will be dropped from Linux in 2025, but for now, the readme better reflects the teamwork it took to build it. I’m deeply indebted for this episode to the myriad of coverage of Hans’s trial. Especially to Henry K. Lee for his thorough reporting in Presumed Dead, to Fredrick Brennan for sharing Hans’s letter, and to the Internet Archive’s Wayback Machine, because this didn’t happen that long ago, but a lot of the webpages seem lost in the sands of time. All quoted dialogue here consists of exact quotes: from Henry’s book, from emails, from reporting on the case, from trial transcripts, or somewhere else. But I’m sure I got some things wrong, because I’m just a guy, clicking around, reading web pages and writing down my thoughts. So forgive me any errors. Thank you Nina Reiser’s family, I’m sorry you have to go through this. I hope for a brighter future for her children, Rory and Nio. And thank you to all the people who sent me interesting links, like this Linux mailing list link, and thanks to the supporters who keep me at this, even though I’m new to a job and seem to struggle to find the type of investment of time an episode really needs. If you want to join the supporters and show your appreciation for the show, go to corecursive.com/supporters. We also have a pretty awesome Slack channel you can find on the website. And until next time, thank you so much for listening. Hello, I make CoRecursive because I love it when someone shares the details behind some project, some bug, or some incident with me. No other podcast was telling stories quite like I wanted to hear. Right now this is all done by just me and I love doing it, but it's also exhausting. Recommending the show to others and contributing to this Patreon are the biggest things you can do to help out. Whatever you can do to help, I truly appreciate it! Thanks! Adam Gordon Bell
2024-11-08T10:31:34
en
train
42,046,962
pseudolus
2024-11-04T23:00:28
Boston Dynamics' Latest Vids Show Atlas Going Hands On
null
https://spectrum.ieee.org/boston-dynamics-new-atlas
1
1
[ 42047026 ]
null
null
null
null
null
null
null
null
null
train
42,046,964
bj-rn
2024-11-04T23:00:47
"For you" feed shows activity from users I don't follow
null
https://github.com/orgs/community/discussions/53331
1
0
[ 42047478 ]
null
null
null
null
null
null
null
null
null
train
42,046,967
webbytuts
2024-11-04T23:01:09
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,046,975
leiferik
2024-11-04T23:02:10
null
null
null
3
null
[ 42047135, 42047008 ]
null
true
null
null
null
null
null
null
null
train
42,046,981
sandwichsphinx
2024-11-04T23:03:29
Why $11T in Assets Isn't Enough for BlackRock's Larry Fink
null
https://www.wsj.com/finance/investing/why-11-trillion-in-assets-isnt-enough-for-blackrocks-larry-fink-8644ac17
6
0
null
null
null
null
null
null
null
null
null
null
train
42,046,983
fagnerbrack
2024-11-04T23:03:43
Heic-To: Convert HEIC/HEIF Images to JPEG, PNG in Browser
null
https://github.com/hoppergee/heic-to
2
0
null
null
null
null
null
null
null
null
null
null
train
42,047,002
zdw
2024-11-04T23:07:59
Hollywood Shot Actors with Arrows Before CGI [video]
null
https://www.youtube.com/watch?v=D3BxILDpT6k
1
0
null
null
null
null
null
null
null
null
null
null
train
42,047,015
ericra
2024-11-04T23:10:55
mods: this post can be removed
null
https://www.usps.com/
1
5
[ 42047016, 42047327, 42047138 ]
null
null
null
null
null
null
null
null
null
train
42,047,018
zdw
2024-11-04T23:11:20
Understanding USB Type C: Cable Types, Pitfalls and More (2019)
null
https://learn.adafruit.com/understanding-usb-type-c-cable-types-pitfalls-and-more/overview
2
0
[ 42053072 ]
null
null
null
null
null
null
null
null
null
train
42,047,020
pseudolus
2024-11-04T23:11:34
NYPD using drones to combat subway surfing
null
https://boingboing.net/2024/11/04/nypd-using-drones-to-combat-subway-surfing.html
2
1
[ 42047060 ]
null
null
null
null
null
null
null
null
null
train
42,047,027
hn_acker
2024-11-04T23:12:28
Ticketmaster’s Attempt to Game Arbitration Services Fails–Heckman v. Live Nation
null
https://blog.ericgoldman.org/archives/2024/10/ticketmasters-gaming-of-arbitration-services-fails-heckman-v-live-nation.htm
66
8
[ 42054905 ]
null
null
null
null
null
null
null
null
null
train
42,047,033
PaulHoule
2024-11-04T23:13:29
'A big lever for change': the contract protecting Hamburg's green space
null
https://www.theguardian.com/environment/2024/oct/24/hamburg-green-space-contract-agreement-wildlife-biodiversity
2
0
null
null
null
null
null
null
null
null
null
null
train
42,047,037
teleforce
2024-11-04T23:13:57
Machine Learning Algorithms in Depth
null
https://www.manning.com/books/machine-learning-algorithms-in-depth
1
0
null
null
null
null
null
null
null
null
null
null
train
42,047,050
billwear
2024-11-04T23:16:01
After You Vote, Unplug
null
https://calnewport.com/blog/
5
3
[ 42048271, 42048126, 42048123 ]
null
null
null
null
null
null
null
null
null
train
42,047,062
lifeisstillgood
2024-11-04T23:18:23
Will a kettle full of alcohol boil forever?
null
https://www.youtube.com/watch?v=VzqN4Cn8r3U
2
0
null
null
null
null
null
null
null
null
null
null
train
42,047,064
JumpCrisscross
2024-11-04T23:18:36
Crypto firms including Robinhood, Kraken launch global stablecoin network
null
https://www.reuters.com/technology/crypto-firms-including-robinhood-kraken-launch-global-stablecoin-network-2024-11-04/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,047,100
karanveer
2024-11-04T23:21:52
Show HN: Pocket Calculator for Chrome Browsers
Chrome Extension Calculator. I thought it'd be nice to have a non-"AI" project, so here you go.
https://chromewebstore.google.com/detail/calculator/dlpbkbmnbkkliidfobhapmdajdokapnm
1
0
null
null
null
no_error
Calculator - Chrome Web Store
null
null
Overview
A simple calculator for those quick calculations, without leaving the browser.
How many times do you leave your browser to open the calculator app on your PC/Mac? In the middle of a movie and feel like calculating those bills? Using a sheet and want to do a quick calculation? Well, this extension saves you those extra steps and gives you access to a calculator within a click of a button or a custom assigned shortcut. NOW CALCULATE WITHOUT EVER LEAVING THE BROWSER. "Calculator" by theindiecompny helps you quickly calculate on the web, without leaving your train of thought or the tab.
Best Way to Use this Calculator:
1. Install it
2. Use "Ctrl + Q" on Windows, or "Cmd + Q" to launch quickly. You can also customize this shortcut key; for me it is "Ctrl+1" [go to this link and assign your keys to "Activate the Extension": chrome://extensions/shortcuts]
3. Enjoy!
Details
Version: 1.0.2
Updated: November 4, 2024
Size: 21.27KiB
Languages
Developer Email: [email protected]
This developer has not identified itself as a trader. For consumers in the European Union, please note that consumer rights do not apply to contracts between you and this developer.
Privacy
The developer has disclosed that it will not collect or use your data. To learn more, see the developer’s privacy policy. This developer declares that your data is:
- Not being sold to third parties, outside of the approved use cases
- Not being used or transferred for purposes that are unrelated to the item's core functionality
- Not being used or transferred to determine creditworthiness or for lending purposes
Support
For help with questions, suggestions, or problems, visit the developer's support site
2024-11-08T04:44:34
en
train
42,047,105
gokiti646
2024-11-04T23:22:12
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,110
pseudolus
2024-11-04T23:22:45
Confidential Computing or Cryptographic Computing?
null
https://cacm.acm.org/practice/confidential-computing-or-cryptographic-computing/
1
0
null
null
null
no_error
Confidential Computing or Cryptographic Computing? – Communications of the ACM
null
By Raluca Ada Popa
Increasingly stringent privacy regulations—for example, the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) in the U.S.—and sophisticated attacks leading to massive breaches have increased the demand for protecting data in use, or encryption in use. The encryption-in-use paradigm is important for security because it protects data during processing; in contrast, encryption at rest protects data only when it is in storage and encryption in transit protects data only when it is being communicated over the network. In both cases, however, the data is exposed during computation—namely, while it is being used/processed at the servers. It is during that processing window when many data breaches happen, either at the hands of hackers or insider attackers.

Another advantage of encryption in use is that it allows different parties to collaborate by putting their data together for the purpose of learning insights from their aggregate data—without actually sharing their data with each other. This is because the parties share encrypted data with each other, so no party can see the data of any other party in decrypted form. The parties can still run useful functions on the data and release only the computation results. For example, medical organizations can train a disease-treatment model on their aggregate patient data without seeing each other’s data. Another example is within a financial institution, such as a bank, where data analysts can build models across different branches or teams that would otherwise not be allowed to share data with each other.

Today there are two prominent approaches to secure computation:
1. A purely cryptographic approach (using homomorphic encryption and/or secure multi-party computation).
2. A hardware security approach (using hardware enclaves, sometimes combined with cryptographic mechanisms), also known as confidential computing.

There is a complex trade-off between these two approaches in terms of security guarantees, performance, and deployment. Comparisons between the two for ease of use, security, and performance are shown in Tables 1, 2, and 3. For simple computations, both approaches tend to be efficient, so the choice between these two would likely be based on security and deployment considerations. However, for complex workloads, such as advanced machine-learning (ML) training (for example, transformers) and rich SQL analytics, the purely cryptographic approach is too inefficient for many real-world deployments. In these cases, the hardware security approach is the practical choice.

Cryptographic Computation

There are two main ways to compute on encrypted data using cryptographic mechanisms: homomorphic encryption and secure multi-party computation.

Homomorphic encryption permits evaluating a function on encrypted input. For example, with fully homomorphic encryption [9], a user can send to a cloud Encrypt(x) for some input x, and a cloud can compute Encrypt(f(x)) using a public evaluation key for any function f.

Secure multi-party computation [23] is often more efficient than homomorphic encryption and can protect against a malicious attacker, but it has a different setup, shown in Figure 1. In secure multi-party computation (MPC), n parties having private inputs x_1, ..., x_n compute a function f(x_1, ..., x_n) without sharing their inputs with each other.
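To make the MPC picture concrete, here is a toy additive secret-sharing computation in Python. It is a minimal sketch of the core trick (each input is split into random shares that sum to it), under semi-honest assumptions and with made-up numbers; it is not the maliciously secure protocols discussed next, nor any particular deployed system:

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(value, n=3):
    """Split value into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three parties with private inputs (illustrative numbers).
inputs = [42, 1000, 7]

# Each party splits its input and sends one share to each party;
# any single share is uniformly random and reveals nothing alone.
all_shares = [share(x) for x in inputs]

# Party j locally sums the shares it received (column j).
partials = [sum(col) % P for col in zip(*all_shares)]

# Publishing only the partial sums reveals the total, not the inputs.
total = sum(partials) % P
assert total == sum(inputs)  # 1049
```

Real MPC frameworks add authentication and correctness checks on top of this so that, as discussed next, even malicious parties cannot cheat.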
MPC is a cryptographic protocol at the end of which the parties learn the function result, but in the process no party learns the input of the other party beyond what can be inferred from the function result.

Figure 1. Illustration of secure multi-party computation for 3 parties. Each party essentially has a cryptographic key that only that party can access. Parties exchange encrypted data (often over multiple iterations), and compute locally on cryptographic data from other parties and their local data.

There are many different threat models for computation in MPC, resulting in different performance overheads. A natural threat model is to assume that all but one of the participating parties are malicious, so each party need only trust itself. This natural threat model, however, comes with implementations that have high overheads because the attacker is quite powerful. To improve performance, people often compromise in the threat model by assuming that a majority of the parties act honestly (and only a minority are malicious).

Also, since the performance overheads often increase with the number of parties, another compromise in some MPC models is to outsource the computation to m < n servers in different trust domains. For example, some works propose outsourcing the secure computation to two mutually distrustful servers. This latter model tends to be weaker than threat models where a party needs to trust only itself. Therefore, in the rest of this article, we only consider maliciously secure n-party MPC. This also makes the comparison to secure computation via hardware enclaves more consistent, because this second approach, which is discussed next, aims to protect against all parties being malicious.

Hardware Enclaves

Figure 2 shows a simplified view of one processor. The light blue area denotes the inside of the enclave. The Memory Encryption Engine (MEE) is a special hardware unit that contains cryptographic keys and, by using them, it encrypts the data leaving the processor, so the memory bus and memory receive encrypted data. Inside the processor, the MEE decrypts the data so the core can compute on data at regular processor speeds.

Figure 2. Simplified illustration of memory encryption in a hardware enclave.

Trusted execution environments such as hardware enclaves aim to protect an application’s code and data from all other software in the system. The MEE ensures that even an administrator of a machine with full privileges examining the data in memory sees encrypted data (Figure 2). When encrypted data returns from main memory into the processor, the MEE decrypts the data and the CPU computes on decrypted data. This is what enables the high performance of enclaves compared with the purely cryptographic computation: The CPU performs computation on raw data as in regular processing. At the same time, from the perspective of any software or user accessing the machine, the data looks encrypted at any point in time: The data going into the processor and coming out is always encrypted, giving the illusion that the processor is computing on the encrypted data.
Hardware enclaves also provide a useful feature called remote attestation [5], with which remote clients can verify code and data loaded in an enclave and establish a secure connection with the enclave, which they can use to exchange keys. A number of enclave services are available today on public clouds such as Intel Software Guard Extensions (SGX) in Azure, Amazon Nitro Enclaves in Amazon Web Services (although this enclave is mostly software-based and does not provide memory encryption), Secure Encrypted Virtualization (SEV) from AMD [12] in Google Cloud, and others. Nvidia recently added enclave support in its H100 GPU [6].

Ease-of-use comparison.  With a purely cryptographic approach, there is no need for specialized hardware and special hardware assumptions. At the same time, in a setting like MPC, the parties must be deployed in different trust domains for the security guarantees of MPC to hold. In the threat models discussed earlier, participating organizations have to run the cryptographic protocol on site or in their private clouds, which is often a setup, management, and/or cost burden compared with running the whole computation on a cloud. This can be a deal-breaker for some organizations.

Table 1. Ease-of-use comparison.
Cryptographic computing (such as FHE and MPC):
  x Requires cryptographic expertise to design a tailored protocol for increased performance
  √ Does not require specialized hardware
  x Requires a deployment across multiple trust domains
  x Cannot support proprietary systems
Enclave/confidential computing:
  √ Can run proprietary systems in confidential computing without modification
  x Requires specialized hardware to run on
  √ Can be deployed in a single trust domain (for example, the Cloud)

With homomorphic encryption, in principle, the whole computation can be run in the cloud, but homomorphic encryption does not protect against malicious attackers as MPC and hardware enclaves do. For such protection, you would also have to use heavy cryptographic tools, such as zero-knowledge proofs.

In contrast, hardware enclaves are now available on major cloud providers such as Azure, AWS, and Google Cloud. Running an enclave collaborative computation is as easy as using one of these cloud services. This also means that to use enclaves, you do not need to purchase specialized hardware: The major clouds already provide services based on these machines. Of course, if the participating organizations want, they could each deploy enclaves on their premises or in private clouds and perform the collaborative computation across the organizations in a distributed manner similar to MPC. The rest of this article assumes a cloud-based deployment for hardware enclaves, unless otherwise specified.

With cryptographic computing, cryptographic expertise is often required to run a certain task. Since the cryptographic overhead is high, tailoring the MPC design for a certain task can bring significant savings. At the same time, this requires expertise and time that many users do not have. Hiring cryptography experts for this task is burdensome and expensive. For example, a user cannot simply run a data-analytics or machine-learning pipeline in MPC.
Instead, the user has to identify some key algorithms in those libraries to support, employ tailored cryptographic protocols for those, and implement the resulting cryptographic protocols in a system that likely requires significant code changes as compared with an existing analytics/ML pipeline.

In contrast, modern enclaves provide a VM interface, resulting in a Confidential Virtual Machine [10]. This means that the user can install proprietary software in these enclaves without modifying this software. Complex codebases are supported in this manner. For example, Confidential Google Kubernetes Engine nodes [11] enable Kubernetes to run in confidential VMs. The first iteration of the enclave, Intel SGX, did not have this flexibility and required modifying and porting a program to run it in the enclave. Since then, it has been recognized that to use this technology for confidential data pipelines, users must remove the friction of porting to the enclave interface. This is how the confidential VM model was born.

Security comparison.  The homomorphic encryption referred to here can compute more complex functions, meaning either fully or leveled homomorphic encryption. Some homomorphic encryption schemes can perform simple functions efficiently (such as addition or low-degree polynomials). As soon as the function becomes more complex, performance degrades significantly.

Table 2. Security comparison.
Cryptographic computing (such as FHE and MPC):
  x Homomorphic encryption is typically slower than MPC for non-trivial functionalities and does not protect against malicious attackers
  √ MPC does not suffer from side-channel attacks within the permitted number of compromised parties
Enclave/confidential computing:
  √ Enclaves offer a notion of integrity of computation and data, unlike FHE
  x Enclaves suffer from side-channel attacks (leveraging oblivious computation prevents many of these attacks)
  x Enclaves have a large trusted compute base (TCB)

Homomorphic encryption is a special form of secure computation, where a cloud can compute a function over encrypted data without interacting with the owner of the encrypted data. It is a cryptographic tool that can be used as part of an MPC protocol. MPC is more generic and encompasses more cryptographic tools; parties running an MPC protocol often interact with each other over multiple rounds, which affords better performance than being restricted to a noninteractive setting.

For general functions, homomorphic encryption is slower than MPC. Also, as discussed, it does not provide malicious security without employing an additional cryptographic tool such as zero-knowledge proofs, which can be computationally expensive.

When an MPC protocol protects against some malicious parties, it also protects against any side-channel attacks at the servers of those parties. In this sense, the threat model for the malicious parties is cleaner than for hardware enclaves’ threat model because it does not matter what attack adversaries mount at their servers; MPC considers any sort of compromise for these parties. For the honest parties, MPC does not protect against side-channel attacks.

In the case of enclaves, attackers can attempt to perform side-channel attacks. A common class of side-channel attack (which encompasses many different types) involves an attacker who observes which memory locations are accessed as well as the order and frequency of these accesses.
Even though the data at those memory locations is encrypted, seeing the access pattern can provide confidential information to the attacker. These attacks are called memory-based access-pattern attacks, or simply access-pattern attacks.

There has been significant research on protecting against these access-pattern side-channel attacks using a cryptographic technique called data-oblivious computation. Oblivious computation ensures that the accesses to memory do not reveal any information about the sensitive data being accessed. Intuitively, it transforms the code into a side-channel-free version of the code, similar to how the OpenSSL cryptographic libraries have been hardened. Oblivious computation protects against a large class of side-channel attacks based on cache-timing-exploiting memory accesses, page faults, branch predictor, memory bus leakage, dirty bit, and others.

Hardware enclaves such as Intel SGX are also prone to other side-channel attacks besides access patterns (for example, speculative-execution-based attacks, attacks to remote attestation), which are not prevented by oblivious computation. Fortunately, when such attacks are discovered, they are typically patched in a short amount of time by cloud providers, such as Azure confidential computing and others. Even if the hardware enclaves were vulnerable for the time period before the patch, the traditional cloud security layer is designed to prevent attackers from breaking in to mount such a side-channel attack. This additional level of security would not exist in a client-side usage of enclaves.

Subverting this layer as well as being able to set up a side-channel attack in a real system with such protection is typically much harder for an attacker because it requires the attacker to succeed at mounting two different and difficult types of attacks. It is not sufficient for the attacker to succeed in attacking only one. At the time of writing this article, there is no evidence of any such dual attack having occurred on state-of-the-art public clouds such as Azure confidential computing. This is why, when using hardware enclaves, you can assume that the cloud provider is a well-intended organization and its security practices are state of the art, as would be expected from major cloud providers today.

Another aspect pertaining to security is the size of the trusted computing base (TCB). The larger the TCB, the larger the attack surface and the more difficult it is to harden the code against exploits. Considering the typical use of enclaves these days—namely, the confidential VM abstraction—the enclave contains an entire virtual machine. This means that the TCB for enclaves is large—many times larger than the one for cryptographic computation. For cryptographic computation, the TCB is typically the client software that encrypts the data, but there might be some extra assumptions on the server system, depending on the threat model.

Performance comparison.  Cryptographic computation is efficient enough for running simple computations, such as summations, counts, or low-degree polynomials. As of the time this article was published, cryptographic computation was still too slow to run complex functions, such as ML training or rich data analytics. Take, for example, training a neural-network model. Recent state-of-the-art work on Microsoft Falcon (2021) estimates that training a moderate-size neural network such as VGG-16 on datasets such as CIFAR-10 could take years.
This work also assumes a threat model with three parties that have an honest majority, a weaker threat model than the one with n organizations where n-1 can be malicious.

Table 3. Performance comparison.
Cryptographic computing (such as FHE or MPC):
  x MPC (and homomorphic encryption) are still very inefficient for complex computation
Enclave/confidential computing:
  √ Enclave computation is much more efficient, sometimes close to vanilla processor speeds

Now let us take an example with the stronger threat model: our state-of-the-art work on Senate [18], which enables rich SQL data analytics with maliciously secure MPC. Senate improved the performance of existing MPC protocols by up to 145 times. Even with this improvement, Senate can perform analytics only on small databases of tens of thousands of rows and cannot scale to hundreds of thousands or millions of rows because the MPC computation runs out of memory and becomes very slow. We have been making a lot of progress on reducing the memory overheads in our recent work on MAGE [13] and in another work, Piranha, on employing GPUs for secure computation learning [22], but the overheads of MPC remain too high for training advanced ML models and for rich SQL data analytics. It could still take years until MPC becomes efficient for these workloads.

Some companies claim to run MPC efficiently for rich SQL queries and ML training. How is that possible? An investigation of a few of them showed that they decrypt a part of the data or keep a part of the query processing in unencrypted form, which exposes that data and the computation to an attacker. This compromise reduces the privacy guarantee.

Hardware enclaves are far more efficient than cryptographic computation because, as explained earlier, deep down in the processor the CPU computes on unencrypted data. At the same time, data coming in and out of the processor is in encrypted form, and any software or entity outside of the enclave that examines the data sees it in encrypted form. This has the effect of computing on encrypted data without the large overheads of MPC or homomorphic encryption. The overheads of such computation depend a lot on the workload, but they have ranged from, for example, 20% to 2x for many workloads.

Adding side-channel protection, such as oblivious computation, can increase the overhead, but overall the performance of secure computation using enclaves still is much better than MPC/homomorphic encryption for many realistic SQL analytics and ML workloads. The amount of overhead from side-channel protection via oblivious computation varies based on the workload—from adding almost no overhead for workloads that are close to being oblivious to 10 times the overhead for some workloads.

The Nvidia GPU enclaves [16] in the H100 architecture offer significant speed-ups for ML workloads, especially for generative AI. Indeed, there are significant industry efforts around using GPU enclaves to protect prompts during generative AI inference, data during generative AI fine-tuning, and even model weights during training of the foundational model. At the time of writing this article, Azure offers a preview of its GPU Confidential Computing service, and other major clouds have similar efforts under way.
Confidential computing promises to bring the benefits of generative AI to confidential data, such as the proprietary data of businesses (to increase their productivity) and the private data of users (to assist them in various tasks).

Real-world Use Cases

Because of the need for data protection in use, there has been an increase in secure-computation use cases, whether cryptographic or hardware-enclave based. This section looks at use cases for both types.

Cryptographic computation.  One of the main resources tracking major use cases for secure multi-party computation is the MPC Deployments dashboard [15] hosted by the University of California, Berkeley. The community can contribute use cases to this tracker if they have users. A variety of deployed use cases are available for applications such as privacy-preserving advertising, cryptocurrency wallets (Coinbase, Fireblocks, Dfns), private inventory matching (J.P. Morgan), privacy-preserving Covid exposure notifications (Google, Apple), and others.

Most of these use cases are centered around a specific, typically simple computation and use specialized cryptography to achieve efficiency. This is in contrast to supporting a more generic system, on top of which you can build many applications such as a database, data-analytics framework, or ML pipeline—these use cases are more efficiently served by confidential computing.

One prominent use case was collecting Covid exposure notification information from users’ devices in a private way. The organizations involved were the Internet Security Research Group (ISRG) and the National Institutes of Health (NIH). Apple and Google served as injection servers to obtain encrypted user data, and the ISRG and NIH ran servers that computed aggregates with help from MITRE. The results were shared with public health authorities. The computation in this case checked that the data uploaded from users satisfied some expected format and bounds, and then performed simple aggregates such as summation.

Heading toward a more general system based on MPC, Jana [8] is an MPC-secured database developed by Galois Inc. using funding from DARPA over 4½ years and providing privacy-preserving data as a service (PDaaS). Jana’s goal is to protect the privacy of data subjects while allowing parties to query this data. The database is encrypted, and parties perform queries using MPC. Jana also combines differential privacy and searchable encryption with MPC.

The Jana developers detail the challenges [7] they encountered, such as “Performance of queries evaluated in our linear secret-sharing protocols remained disappointing, with JOIN-intensive and nested queries on realistic data running up to 10,000 times slower than the same queries without privacy protection.” Nevertheless, Jana was used in real-world prototype applications, such as inter-agency data sharing for public policy development, and in a secure computation class at Columbia University.

Confidential Computing Use Cases

Because of its efficiency, confidential computing has been more widely adopted than cryptographic computation. The major clouds—Azure, AWS, and Google Cloud—offer confidential computing solutions. They provide CPU-based confidential computing, and some are in the process of offering GPU-based confidential computing (for example, Azure has a preview offering for the H100 enclave). A significant number of companies have emerged to enable various types of workloads in confidential computing in these clouds.
Among them are Opaque, Fortanix, Anjuna, Husmesh, Antimatter, Edgeless, and Enclaive. For example, Opaque [17] enables data analytics and ML to run in confidential computing. Using the hardware enclave in a cloud requires significant security expertise. Consider, for example, that a user wants to run a certain data-analytics pipeline—say, from Databricks—in confidential VMs in the cloud. Simply running in confidential VMs is not sufficient for security: The user has to be concerned with correctly setting up the enclaves’ remote attestation process, key distribution and management, a cluster of enclaves that offer scaling out, as well as defining and enforcing end-to-end policies on who can see what part of the data or the computation.

To avoid this work for the user, Opaque provides a software stack running on top of the enclave hardware infrastructure that allows the user to run the workflow frictionlessly without security expertise. Opaque’s software stack takes care of all these technical aspects. This is the result of years of research at UC Berkeley, followed by product development. Specifically, the technology behind Opaque was initially developed in the Berkeley RISELab (Realtime Intelligent Secure Explainable Systems) [19], and it has evolved to support ML workloads and a variety of data-analytics pipelines.

Opaque can scale to an arbitrary cluster size and big data, essentially creating one “large cluster enclave” out of individual enclaves. It enables collaboration between organizations or teams in the same organization that cannot share data with each other: These organizations can share encrypted data with each other in Opaque’s workspace and perform data analytics or ML without seeing each other’s dataset. Use cases include financial services (such as cross-team collaboration for identity resolution or cross-organization collaboration for crime detection); high-tech (such as fine-tuning ML from encrypted data sets); a privacy-preserving large language model (LLM) gateway that offers logging, control, and accountability; and generating a verifiable audit report for compliance.

A number of companies have created the Confidential Computing Consortium [2], an organization meant to catalyze the adoption of confidential computing through a community-led consortium and open collaboration. The consortium lists more than 30 companies that offer confidential computing technology.

Following are a few examples of end use cases. Signal, a popular end-to-end encrypted messaging application, uses hardware enclaves to secure its private contact discovery service [21]. Signal built this service using techniques from the research projects Oblix [14] and Snoopy [4]. In this use case, each user has a private list of contacts on their device, and Signal wants to discover which of these contacts are Signal users as well. At the same time, Signal does not want to reveal the list of its users to any user, nor does it want to learn the private contact list of each user. Essentially, this computation is a private set intersection. Signal investigated various cryptographic computation options and concluded that these would not perform fast enough and cheaply enough for its large-scale use case. As a result, it chose to use hardware enclaves in combination with oblivious computation to reduce a large number of side channels, as discussed earlier.
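To give a flavor of what "oblivious computation" means in code, here is a toy Python sketch of an access-pattern-oblivious table lookup: it reads every entry regardless of the secret index, so an observer of memory accesses learns nothing about which entry was wanted. This is only an illustration of the idea; real oblivious code, like the enclave systems above, is written in lower-level languages with genuine constant-time guarantees, which Python cannot provide:

```python
def select(flag, a, b):
    # Branch-free select: returns a when flag == 1, else b,
    # computing with both operands either way.
    return flag * a + (1 - flag) * b

def oblivious_lookup(table, secret_idx):
    # Touch every entry of the table in a fixed order, so the
    # sequence of memory accesses is independent of secret_idx.
    result = 0
    for i, value in enumerate(table):
        match = int(i == secret_idx)
        result = select(match, value, result)
    return result

assert oblivious_lookup([10, 20, 30, 40], 2) == 30
```

A naive `table[secret_idx]` would touch only one location, leaking the index through the access pattern; the linear scan trades performance for hiding it, which is exactly the kind of overhead discussed in the performance comparison above.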
Our work on Oblix and Snoopy developed efficient oblivious algorithms for use inside enclaves. Other adopters include the cryptocurrency MobileCoin, the Israeli Ministry of Defense [1], Meta, ByteDance (to increase user privacy in TikTok), and many others.

Combining the Two Approaches

Given the trade-off between confidential computing via enclaves and secure computation via cryptography, a natural question is whether a solution can be designed that benefits from the best of both worlds. A few solutions have been proposed, but they still inherit the slowdown from MPC.

For example, my students and I have collaborated with Signal on its SecureValueRecovery [20] system to develop a mechanism that helps Signal users recover secret keys based on a combination of different hardware enclaves on three clouds and secure multi-party computation. The purpose of this combination is to provide a strong security guarantee, stacking the power of the two technologies as defense in depth.

A similar approach is taken by Meta and Fireblocks, a popular cryptocurrency wallet: They both combine hardware enclaves with cryptographic computation for increased security. The resulting system will be at least as slow as the underlying MPC, but these examples are for specialized tasks for which there are efficient MPC techniques.

Conclusions and How to Learn More

Secure computation via MPC/homomorphic encryption versus hardware enclaves presents trade-offs involving deployment, security, and performance. Regarding performance, it depends significantly on the target workload. For simple workloads, such as simple summations, low-degree polynomials, or simple ML tasks, both approaches can be ready to use in practice. However, for rich computations, such as complex SQL analytics or training large ML models, only the hardware enclave approach is practical enough at the moment for many real-world deployment scenarios.

Confidential computing is a relatively young sub-field in computer science—but one that is evolving rapidly. To learn more about confidential computing, attend or watch the content from the Confidential Computing Summit [3], which was held in June of 2024 in San Francisco. This conference is the premier in-person event for confidential computing and has attracted the top technology players in the space, from hardware manufacturers (Intel, ARM, Nvidia) to hyperscalers (Azure, AWS, Google), solution providers (Opaque, Fortanix), and use-case providers (Signal, Anthropic). The conference is hosted by Opaque and co-organized by the Confidential Computing Consortium.
2024-11-08T15:27:22
en
train
42,047,157
syzmony
2024-11-04T23:28:38
Show HN: Simtown.ai – US Election Simulation with AI Characters
Show HN: simtown.ai – US Election Simulation with AI Characters

Inspired by the Generative Agents paper [0], we've created a browser-based simulation of the upcoming U.S. election with GPT-driven versions of Kamala and Trump, along with nine diverse NPCs representing voters in Pennsylvania.

These AI characters freely roam, interact with each other, and engage with real players. Currently, we have a single global instance where everyone can interact together. A single game day is 24 minutes in real life.

What can you do?

- Listen to speeches by the candidates
- Chat with NPCs and other players
- Persuade others to vote for your candidate
- Eavesdrop on conversations between other players by moving closer to them
- Explore the environment
- Observe the final vote tally once everyone makes up their mind

We appreciate any feedback or suggestions!

[0] https://github.com/joonspk-research/generative_agents
https://app.simtown.ai/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,047,217
eugeniasergio3
2024-11-04T23:36:11
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,220
amichail
2024-11-04T23:36:36
On Literate Programming
null
https://mgubi.github.io/docs/zettels/literate-programming.html
1
0
null
null
null
no_error
Massimiliano Gubinelli
null
null
On Literate Programming

Some reflections on literate programming, TeXmacs, programming languages and writing. [November 4th 2024. Version 1]

Literate programming was invented by Knuth to provide a literary work which would explain (unroll / open wide in front of us) a computer program by telling a story about it. It's about following a thread where a plot, made of chunks of the actual code of the program, is gradually unveiled. Collecting chunk after chunk, a special utility program can reconstitute the entirety of the code, to be then given back to the computer for execution or compilation. A monologue that occasionally becomes a dialogue.

I've encountered examples of literate programs many times. The most famous is the TeX program itself, which takes life as a narration in the homonymous book “TeX the program” (here for the full collection of PDF files and here for the literate Cweb sources for TeX). Some of them, especially the short ones, are quite unimpressive. Look here for a whole Wiki of literate programs to get an idea. Apart from TeX (and Metafont), other large programs are maintained as literate works: Axiom, the book “Physically based rendering”, the lcc retargetable ANSI C compiler, “Lisp In Small Pieces” and “Clojure in Small Pieces” (see also here for context on this last book).

Literate programs are human-specific super-languages

However, my brain “clicked” when I discovered peg.md by K. J. Sitaker. It is a small Markdown document that gives a literate implementation of parsing expression grammars (PEGs) together with a meta-compiler implemented in Javascript. What I found remarkable is the expressivity of the literate style to convey complex interactions among different computer idioms and dense representations of ideas, even when reduced to the essence of some tricks in a Markdown text file.

The text file peg.md contains the code of various versions of the Javascript PEG compiler, a PEG meta-compiler written in itself, and the Makefile which extracts from the literate document a bootstrapping Javascript PEG compiler and uses it to compile the PEG metacircular description of the PEG compiler progressively into two versions with larger sets of features. Another Markdown file, handaxeweb.md, is the literate implementation of the Lua script which extracts the program files from the Markdown documents.

This made me realize that the expressivity of the literate style is directed both towards humans and towards the machine. A single literate document contains the code of various programs, maybe even written in different languages, which are gathered together and made to speak to each other in ways that are not possible in standard programming environments, where each program lives in a separate file while, usually, certain parts of a program here have their raison d'être in other parts of another program there. Think of HTML/Javascript/CSS plus maybe some code in C++ compiled to WebAssembly which has to interact with Javascript.

In this sense literate programming moves away from usual file-based programming in a direction completely orthogonal to what Smalltalk, Squeak/Pharo, NewSpeak, Unison or some image-based Lisp system does (e.g. Interlisp/Medley). In these systems everything is programmed in a single language: the living program is its own documentation and the narrative thread is provided by the user's interactive exploration of the hyperlinked web of “materialized code”.
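A concrete aside: the heart of the PEG machinery that peg.md bootstraps can be sketched in a few combinators. The toy Python below is my own illustration of PEG semantics (ordered choice and greedy, non-backtracking repetition), not code taken from that document:

```python
# Each parser maps (text, pos) to a new pos, or None on failure.

def lit(s):
    return lambda t, i: i + len(s) if t.startswith(s, i) else None

def seq(*ps):                    # PEG sequence: e1 e2 ...
    def p(t, i):
        for q in ps:
            i = q(t, i)
            if i is None:
                return None
        return i
    return p

def choice(*ps):                 # PEG ordered choice: e1 / e2, first match wins
    def p(t, i):
        for q in ps:
            r = q(t, i)
            if r is not None:
                return r
        return None
    return p

def star(q):                     # PEG zero-or-more: greedy, never backtracks
    def p(t, i):
        while True:
            r = q(t, i)
            if r is None:
                return i
            i = r
    return p

# The grammar "a" "b"* "a" matches "aa", "aba", "abba", ...
g = seq(lit("a"), star(lit("b")), lit("a"))
assert g("abba", 0) == 4
assert g("ab", 0) is None        # the greedy star leaves nothing for the final "a"
```

A real PEG meta-compiler, like the one in peg.md, generates such parsers from a grammar written as text, including from the grammar of PEGs itself.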
Returning to the comparison: a literate document is, instead, a heterodox medium where policies can be put in place to make different languages coexist, including natural language, and where a thread is provided by the writer (or maybe better the “editor” or “director”), who supervises and coordinates the various voices in order to provide a coherent and synesthetic intellectual experience to the reader, while still being able to automatically distill the various “voices” into different files for use in a computer system.

It seems that this characteristic of LP is not usually valued or even considered; see the discussion in the c2 wiki or in the Wikipedia page or in the literateprogramming.com and softpanorama.org websites. All these expositions collapse the idea of a literate document towards a single programming language. LP implementations are usually also targeted to a single language, with few exceptions (e.g. Noweb), in which case the stress is on universality more than on the openness of the system. However, as the peg.md example, in its minimality, masterfully shows, the liberating power of LP is in its complete malleability in providing metacircular descriptions of complex computer systems. So while certain languages implement domain-specific sub-languages in order to better model domain-specific (human) knowledge (see e.g. the language-oriented programming of Racket), literate programs are then (human) domain-specific super-languages.

While TeX was born as a literate program, Axiom is becoming one (or better, wants to become one; see this message of Daly in the axiom-developer mailing list), because Daly felt the need to document the valuable millions of lines of code constituting the program and implementing sophisticated algorithms for computer algebra. So LP also has a role as an explanation and exploration device, which allows open-source developer communities to consolidate and gather the knowledge acquired by studying and traversing exogenous codebases. Daly advocated this pedagogical use of LP with “Clojure in Small Pieces” and says:

Since I'm studying the details of the inner workings of Clojure this seemed to be the right time to experiment with a literate form. I have put together the beginnings of a Clojure in Small Pieces book.

As Daly again points out in the talk “LP in the large”, big open source systems need literate programming to thrive and survive through time. Writing is the technology which allows us to transmit knowledge across time and space. Literate programs also contain the “why”: why some code exists or is modeled in some way instead of another. And they point out logical links among different parts of the code which may not be materialized via the specific programming language in use.

[Talk about org.mode]

Literate programming in TeXmacs

TeXmacs as an LP tool has many advantages over more classical solutions. First of all, there is no need for “tangling”: a TeXmacs document is “alive” and does not need to be compiled. Also, a TeXmacs document is active, because it can invoke Scheme scripts or other plugins which then provide interactive features; in particular, it can contain its very own “tangling” code, possibly modified and customized for the specific kind of document at hand. TeXmacs literate coding is self-contained and self-expandable, and targeted to human fruition even more thanks to the user-centered design of the TeXmacs document system and user interface.

A Literate TeXmacs?

TeXmacs is also a large computer system.
Not a single program, but a web of interconnected programs written in multiple languages: C++, Scheme, the TeXmacs macro language for the styles and the packages, and small DSLs for specific tasks (e.g. description of virtual glyphs, UI, etc.). All these pieces have to work together quite tightly for the system to work properly. Especially so because the TeXmacs user interface (written mostly in Scheme) is tightly integrated both with the typesetter (written mostly in C++) and with the various typesetting macros and style packages (written in the TeXmacs macro language). Moreover, it depends on various external libraries which have to be integrated via the usual voodoo of Makefiles and system-specific configurations. And this without considering that export to HTML and LaTeX also means that we need some support files written in CSS, Javascript or LaTeX, which again have to be carefully coordinated with the exporting routines. Another complexity layer then comes from the fact that the plugins which allow TeXmacs to communicate with the external world are written in a variety of languages, and although the wire protocol allows a shallow coordination, they also have to be maintained alongside the main sources.

As I've tried to explain above, the crucial features I see in LP are two:

1. a metacircular description of complex computer systems via a (human) domain-specific super-language
2. an explanation and exploration device

They fit perfectly with the above description of TeXmacs. We need a tool which allows the community to obtain a shared understanding of the various sources in their interrelation and of the principles, design decisions and specific tradeoffs present in the codebase. TeXmacs itself is a document preparation system, so a literate development of TeXmacs can be metacircular in a very tight way: one can envisage that the TeXmacs system can be described by a TeXmacs literate document comprising all the various source codes in a unique “book” which tells the story (or a story) of the program.

In this vision there are no other source files than a web of hyperlinked TeXmacs documents. A Scheme script can extract the usual arborescence of OS files to be fed to the various compilers or assembled into runtime resources. The build process can be described alongside the system-specific changes, which can even be implemented in more flexible ways because the parametrization of the code is not in full control of the LP scripts. This will reduce the need for complex build systems, at the price of requiring a bootstrapping avenue in the form of some cross-platform support or a small Scheme interpreter which can “weave” the relevant sources to compile a first runnable version of TeXmacs.
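To make the "tangling"/extraction step discussed above concrete, here is a minimal chunk extractor in the spirit of handaxeweb.md, written here in Python rather than Lua. The marker convention (a `### file: NAME` line before each fenced block) is invented purely for illustration; real literate tools use their own richer chunk syntax:

```python
import re
import sys
from collections import defaultdict

def tangle(md_text):
    """Collect fenced code blocks, each preceded by a marker line
    like '### file: hello.c', into per-file chunks in order."""
    files = defaultdict(list)
    target, in_code = None, False
    for line in md_text.splitlines():
        m = re.match(r"### file: (\S+)", line)
        if m:
            target = m.group(1)       # chunks that follow go to this file
        elif line.startswith("```"):
            in_code = not in_code     # toggle on fence lines
        elif in_code and target:
            files[target].append(line)
    return {name: "\n".join(body) + "\n" for name, body in files.items()}

if __name__ == "__main__":
    for name, body in tangle(sys.stdin.read()).items():
        with open(name, "w") as out:
            out.write(body)
        print("wrote", name, file=sys.stderr)
```

A TeXmacs analogue would walk the document tree via its Scheme API instead of matching text lines, but the shape of the job is the same: gather named chunks, concatenate them, and write out the ordinary source files.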
2024-11-08T09:06:21
en
train
42,047,228
amadeuspagel
2024-11-04T23:37:08
What Jeff Bezos Got Wrong About Newspaper Endorsements
null
https://www.cjr.org/political_press/what-jeff-bezos-got-wrong-about-newspaper-endorsements.php
2
2
[ 42048107, 42047516 ]
null
null
null
null
null
null
null
null
null
train
42,047,243
LorenDB
2024-11-04T23:39:15
Aldebaran 1959 Spacecraft Concept (2010)
null
https://armaghplanet.com/the-amazing-aldebaran-spacecraft.html
62
27
[ 42051359, 42065657, 42049312, 42065682, 42047898, 42047852, 42055011, 42047872, 42050653, 42047809 ]
null
null
null
null
null
null
null
null
null
train
42,047,248
ebalit
2024-11-04T23:39:40
Predicted outputs: GPT-4o inference speed-up for editing tasks
null
https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs
3
0
null
null
null
no_article
null
null
null
null
2024-11-08T13:19:51
null
train
42,047,274
CrankyBear
2024-11-04T23:42:51
Netscape lives on: 30 years of shaping the web, open source, and business
null
https://www.zdnet.com/home-and-office/networking/how-netscape-lives-on-30-years-of-shaping-the-web-open-source-and-business/
6
1
[ 42047545 ]
null
null
null
null
null
null
null
null
null
train
42,047,286
Anon84
2024-11-04T23:44:11
Using Mathematics to Make Money by James Simons
null
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4668072
2
0
null
null
null
null
null
null
null
null
null
null
train
42,047,294
edwinarbus
2024-11-04T23:45:19
OpenAI Predicted Outputs
null
https://twitter.com/OpenAIDevs/status/1853564730872607229
2
0
[ 42049086 ]
null
null
null
null
null
null
null
null
null
train
42,047,295
dangle1
2024-11-04T23:45:25
Code libraries posted to NPM try to install malware on dev machines
null
https://arstechnica.com/security/2024/11/javascript-developers-targeted-by-hundreds-of-malicious-code-libraries/
14
1
[ 42049792, 42049225 ]
null
null
null
null
null
null
null
null
null
train
42,047,299
tortilla
2024-11-04T23:46:21
FDIC accidentally reveals the Silicon Valley Bank depositors it bailed out (2023)
null
https://fortune.com/2023/06/23/fdic-accidentally-released-list-of-companies-it-bailed-out-silicon-valley-bank-collapse/
39
16
[ 42047423, 42047485, 42047946, 42047631, 42047411, 42047442, 42047432, 42047451 ]
null
null
no_error
The FDIC has accidentally released a list of companies it bailed out for billions in the Silicon Valley Bank collapse
null
Lizette Chapman,Jason Leopold,Bloomberg
When federal regulators stepped in to backstop all of Silicon Valley Bank’s deposits, they saved thousands of small tech startups and prevented what could have been a catastrophic blow to a sector that relied heavily on the lender. But the decision to guarantee all accounts above the $250,000 federal deposit insurance limit also helped bigger companies that were in no real danger. Sequoia Capital, the world’s most prominent venture-capital firm, had the $1 billion it kept with the lender covered. Kanzhun Ltd., a Beijing-based tech company that runs mobile recruiting app Boss Zhipin, received a backstop for more than $900 million.  A document from the Federal Deposit Insurance Corp., which the agency said it mistakenly released unredacted in response to a Bloomberg News Freedom of Information Act request, provides one of the most detailed glimpses yet into the bank’s big customers. The FDIC, which has been selling off pieces of the bank since its failure, asked that Bloomberg destroy and not share the depositor list, saying the agency intended to “partially” withhold some details from the document “because it included confidential commercial or financial information,” according to a letter from an attorney for the regulator. The agency subsequently declined to comment on the substance of the information in the document. US regulators’ decision to declare a “systemic risk exception” and make all depositors at Silicon Valley Bank whole came after a white-knuckled weekend as tech founders digested SVB’s collapse on Friday, March 10. President Joe Biden described the solution as one that “protects American workers and small businesses, and keeps our financial system safe.”  Treasury Secretary Janet Yellen cast the government’s response — including backstopping all depositors — as necessary. “American households depend on banks to finance their homes, invest in an education, and otherwise improve their standards of living. Businesses borrow from these institutions to start new companies and expand existing ones,” she said at an industry conference the following week before discussing the intervention.  But the decisions that government agencies, including the FDIC, made in a frantic few days after SVB failed were immediately controversial. Some critics said that making all depositors whole at the lender and Signature Bank, which failed March 12, created a moral hazard. A fierce debate is also raging over whether the insurance limit needs to be raised for businesses. Former Vice President Mike Pence argued that backstopping all depositors amounted to a bailout, a depiction the Biden administration has pushed back against strenuously. Pence blasted the government’s decision to insure all deposits, in part, because the move would cover Chinese companies that did business with the bank.  In May, the FDIC proposed tagging the largest banks with billions of dollars in extra fees to replenish the US government’s bedrock deposit insurance fund after it was tapped to backstop deposits above the $250,000 threshold. At the time, the regulator estimated the decision to cover all depositors at SVB and Signature cost the fund about $15.8 billion.  FDIC Chairman Martin Gruenberg has previously said that at SVB the guarantee to uninsured depositors covered small and midsize businesses, as well as those with very large balances, and that the bank’s top 10 depositor accounts held $13.3 billion total.  
The new document underscores that in addition to serving a legion of startups and fledgling businesses, SVB was a go-to bank for tech industry giants, including some that have kept their relationships with the bank confidential. The $1 billion that Sequoia, the firm famous for backing iconic companies including Apple, Google and WhatsApp, had at SVB made up a fraction of its $85 billion assets under management. In addition to maintaining its own accounts at the lender, the firm also recommended every startup it backed do the same, Michael Moritz, a partner at the firm, wrote in the Financial Times. A representative for Sequoia declined to comment on the depositor list. Kanzhun, which had $902.9 million in deposits with SVB according to the document, didn’t respond to multiple emailed requests for comment. The company, which was heavily backed by Chinese giant Tencent before it went public on the Nasdaq in 2021, was among the largest Chinese companies to IPO in the US that year. Altos Labs Inc., a life sciences startup that works on cell regeneration, had $680.3 million in deposits with the bank. The privately held company has raised $3.27 billion from billionaires including Jeff Bezos and Yuri Milner, as well as Mubadala Investment Company and other investors. An Altos representative declined to comment. Payments startup Marqeta Inc. had $634.5 million at the bank, according to the document. In a statement, the firm acknowledged that it had “significant deposits” at SVB, but was already in the process of moving money to other banks. “While Marqeta supported the decision to guarantee all deposits at the bank, our ability to execute as a business and meet our financial obligations would not have been impacted, even if it was a longer resolution process” the firm said. IntraFi Network, which provides deposit services to financial institutions, had $410.9 million worth of deposits at the bank, according to the document. However, in a statement, the firm said that it didn’t actually have any of its own money with the lender, nor was it a client. The amount, rather, represents the funds of almost 2,000 different depositors whose balances were fully insured when SVB collapsed, according to IntraFi. Crypto stablecoin company Circle Internet Financial Ltd. previously disclosed its SVB deposits, which at the time represented 8.2% of the reserves backing its USD Coin. A spokesman said the company had no additional comment. The USD Coin, which is intended to maintain a 1-to-1 peg to the dollar, briefly drifted from that $1 level on the news of Circle’s exposure. The document listed it as SVB’s biggest depositor with a balance of $3.3 billion. Streaming set-top box maker Roku Inc. also previously disclosed having roughly 26% of its cash and cash equivalents parked at the bank. The document listed its balance at $420 million. A Roku spokesman declined further comment. Fintech company Bill.com previously disclosed it had roughly $670 million at the bank. The firm said the amount included about $300 million of its money and $370 million that belonged to customers. A company spokesman declined further comment. The FDIC document listed Bill.com’s total balance at $761.1 million. Silicon Valley Bank and parent SVB Financial Group Inc. were also listed as having a combined $4.6 billion in deposits. SVB Financial has argued in its bankruptcy case that at least $2 billion in deposits the parent had with the bank should be returned. 
Federal regulators have said SVB Financial, which declined to comment on the document, must apply to the bank’s receiver for that money. –With assistance from Steven Church and stacy-marie ishmael.
2024-11-07T15:01:39
en
train
42,047,302
begoon
2024-11-04T23:46:53
Mypy vs. Pyright
null
https://github.com/microsoft/pyright/blob/main/docs/mypy-comparison.md
1
1
[ 42047317 ]
null
null
no_error
pyright/docs/mypy-comparison.md at main · microsoft/pyright
null
null
Differences Between Pyright and Mypy What is Mypy? Mypy is the “OG” in the world of Python type checkers. It was started by Jukka Lehtosalo in 2012 with contributions from Guido van Rossum, Ivan Levkivskyi, and many others over the years. For a detailed history, refer to this documentation. The code for mypy can be found in this github project. Why Does Pyright’s Behavior Differ from Mypy’s? Mypy served as a reference implementation of PEP 484, which defines standard behaviors for Python static typing. Although PEP 484 spells out many type checking behaviors, it intentionally leaves many other behaviors undefined. This approach has allowed different type checkers to innovate and differentiate. Pyright generally adheres to the official Python typing specification, which incorporates and builds upon PEP 484 and other typing-related PEPs. The typing spec is accompanied by an ever-expanding suite of conformance tests. For the latest conformance test results for pyright, mypy and other type checkers, refer to this page. For behaviors that are not explicitly spelled out in the typing spec, pyright generally tries to adhere to mypy’s behavior unless there is a compelling justification for deviating. This document discusses these differences and provides the reasoning behind each design choice. Design Goals Pyright was designed with performance in mind. It is not unusual for pyright to be 3x to 5x faster than mypy when type checking large code bases. Some of its design decisions were motivated by this goal. Pyright was also designed to be used as the foundation for a Python language server. Language servers provide interactive programming features such as completion suggestions, function signature help, type information on hover, semantic-aware search, semantic-aware renaming, semantic token coloring, refactoring tools, etc. For a good user experience, these features require highly responsive type evaluation performance during interactive code modification. They also require type evaluation to work on code that is incomplete and contains syntax errors. To achieve these design goals, pyright is implemented as a “lazy” or “just-in-time” type evaluator. Rather than analyzing all code in a module from top to bottom, it is able to evaluate the type of an arbitrary identifier anywhere within a module. If the type of that identifier depends on the types of other expressions or symbols, pyright recursively evaluates those in turn until it has enough information to determine the type of the target identifier. By comparison, mypy uses a more traditional multi-pass architecture where semantic analysis is performed multiple times on a module from the top to the bottom until all types converge. Pyright implements its own parser, which recovers gracefully from syntax errors and continues parsing the remainder of the source file. By comparison, mypy uses the parser built in to the Python interpreter, and it does not support recovery after a syntax error. This also means that when you run mypy on an older version of Python, it cannot support newer language features that require grammar changes. Type Checking Unannotated Code By default, pyright performs type checking for all code regardless of whether it contains type annotations. This is important for language server features. It is also important for catching bugs in code that is unannotated. By default, mypy skips all functions or methods that do not have type annotations. 
This is a common source of confusion for mypy users who are surprised when type violations in unannotated functions go unreported. If the option --check-untyped-defs is enabled, mypy performs type checking for all functions and methods. Inferred Return Types If a function or method lacks a return type annotation, pyright infers the return type from return and yield statements within the function’s body (including the implied return None at the end of the function body). This is important for supporting completion suggestions. It also improves type checking coverage and eliminates the need for developers to needlessly supply return type annotations for trivial return types. By comparison, mypy never infers return types and assumes that functions without a return type annotation have a return type of Any. This was an intentional design decision by mypy developers and is explained in this thread. Unions vs Joins When merging two types during code flow analysis or widening types during constraint solving, pyright always uses a union operation. Mypy typically (but not always) uses a “join” operation, which merges types by finding a common supertype. The use of joins discards valuable type information and leads to many false positive errors that are well documented within the mypy issue tracker. def func1(val: object): if isinstance(val, str): pass elif isinstance(val, int): pass else: return reveal_type(val) # mypy: object, pyright: str | int def func2(condition: bool, val1: str, val2: int): x = val1 if condition else val2 reveal_type(x) # mypy: object, pyright: str | int y = val1 or val2 # In this case, mypy uses a union instead of a join reveal_type(y) # mypy: str | int, pyright: str | int Variable Type Declarations Pyright treats variable type annotations as type declarations. If a variable is not annotated, pyright allows any value to be assigned to that variable, and its type is inferred to be the union of all assigned types. Mypy’s behavior for variables depends on whether the --allow-redefinition is specified. If redefinitions are not allowed, then mypy typically treats the first assignment (the one with the smallest line number) as though it is an implicit type declaration. def func1(condition: bool): if condition: x = 3 # Mypy treats this as an implicit type declaration else: x = "" # Mypy treats this as an error because `x` is implicitly declared as `int` def func2(condition: bool): x = None # Mypy provides some exceptions; this is not considered an implicit type declaration if condition: x = "" # This is not considered an error def func3(condition: bool): x = [] # Mypy doesn't treat this as a declaration if condition: x = [1, 2, 3] # The type of `x` is declared as `list[int]` Pyright’s behavior is more consistent, is conceptually simpler and more natural for Python developers, leads to fewer false positives, and eliminates the need for many otherwise-necessary variable type annotations. Class and Instance Variable Inference Pyright handles instance and class variables consistently with local variables. If a type annotation is provided for an instance or class variable (either within the class or one of its base classes), pyright treats this as a type declaration and enforces it accordingly. If a class implementation does not provide a type annotation for an instance or class variable and its base classes likewise do not provide a type annotation, the variable’s type is inferred from all assignments within the class implementation. 
class A: def method1(self) -> None: self.x = 1 def method2(self) -> None: self.x = "" # Mypy treats this as an error because `x` is implicitly declared as `int` a = A() reveal_type(a.x) # pyright: int | str a.x = "" # Pyright allows this because the type of `x` is `int | str` a.x = 3.0 # Pyright treats this as an error because the type of `x` is `int | str` Class and Instance Variable Enforcement Pyright distinguishes between “pure class variables”, “regular class variables”, and “pure instance variable”. For a detailed explanation, refer to this documentation. Mypy does not distinguish between class variables and instance variables in all cases. This is a known issue. class A: x: int = 0 # Regular class variable y: ClassVar[int] = 0 # Pure class variable def __init__(self): self.z = 0 # Pure instance variable print(A.x) print(A.y) print(A.z) # pyright: error, mypy: no error Assignment-based Type Narrowing Pyright applies type narrowing for variable assignments. This is done regardless of whether the assignment statement includes a variable type annotation. Mypy skips assignment-based type narrowing when the target variable includes a type annotation. The consensus of the typing community is that mypy’s behavior here is inconsistent, and there are plans to eliminate this inconsistency. v1: Sequence[int] v1 = [1, 2, 3] reveal_type(v1) # mypy and pyright both reveal `list[int]` v2: Sequence[int] = [1, 2, 3] reveal_type(v2) # mypy reveals `Sequence[int]` rather than `list[int]` Type Guards Pyright supports several built-in type guards that mypy does not currently support. For a full list of type guard expression forms supported by pyright, refer to this documentation. The following expression forms are not currently supported by mypy as type guards: x == L and x != L (where L is an expression with a literal type) x in y or x not in y (where y is instance of list, set, frozenset, deque, tuple, dict, defaultdict, or OrderedDict) bool(x) (where x is any expression that is statically verifiable to be truthy or falsey in all cases) Aliased Conditional Expressions Pyright supports the aliasing of conditional expressions used for type guards. Mypy does not currently support this, but it is a frequently-requested feature. Narrowing Any Pyright never narrows Any when performing type narrowing for assignments. Mypy is inconsistent about when it applies type narrowing to Any type arguments. b: list[Any] b = [1, 2, 3] reveal_type(b) # pyright: list[Any], mypy: list[Any] c = [1, 2, 3] b = c reveal_type(b) # pyright: list[Any], mypy: list[int] Inference of List, Set, and Dict Expressions Pyright’s inference rules for list, set and dict expressions differ from mypy’s when values with heterogeneous types are used. Mypy uses a join operator to combine the types. Pyright uses either an Unknown or a union depending on configuration settings. A join operator often produces a type that is not what was intended, and this leads to false positive errors. x = [1, 3.4, ""] reveal_type(x) # mypy: list[object], pyright: list[Unknown] or list[int | float | str] For these mutable container types, pyright does not retain literal types when inferring the container type. Mypy is inconsistent, sometimes retaining literal types and sometimes not. 
def func(one: Literal[1]): reveal_type(one) # Literal[1] reveal_type([one]) # pyright: list[int], mypy: list[Literal[1]] reveal_type(1) # Literal[1] reveal_type([1]) # pyright: list[int], mypy: list[int] Inference of Tuple Expressions Pyright’s inference rules for tuple expressions differ from mypy’s when tuple entries contain literals. Pyright retains these literal types, but mypy widens the types to their non-literal type. Pyright retains the literal types in this case because tuples are immutable, and more precise (narrower) types are almost always beneficial in this situation. x = (1, "stop") reveal_type(x[1]) # pyright: Literal["stop"], mypy: str y: Literal["stop", "go"] = x[1] # mypy: type error Assignment-Based Narrowing for Literals When assigning a literal value to a variable, pyright narrows the type to reflect the literal. Mypy does not. Pyright retains the literal types in this case because more precise (narrower) types are typically beneficial and have little or no downside. x: str | None x = 'a' reveal_type(x) # pyright: Literal['a'], mypy: str Pyright also supports “literal math” for simple operations involving literals. def func1(a: Literal[1, 2], b: Literal[2, 3]): c = a + b reveal_type(c) # Literal[3, 4, 5] def func2(): c = "hi" + " there" reveal_type(c) # Literal['hi there'] Type Narrowing for Asymmetric Descriptors When pyright evaluates a write to a class variable that contains a descriptor object (including properties), it normally applies assignment-based type narrowing. However, when the descriptor is asymmetric — that is, its “getter” type is different from its “setter” type, pyright refrains from applying assignment-based type narrowing. For a full discussion of this, refer to this issue. Mypy has not yet implemented the agreed-upon behavior, so its type narrowing behavior may differ from pyright’s in this case. Parameter Type Inference Mypy infers the type of self and cls parameters in methods but otherwise does not infer any parameter types. Pyright implements several parameter type inference techniques that improve type checking and language service features in the absence of explicit parameter type annotations. For details, refer to this documentation. Constructor Calls When pyright evaluates a call to a constructor, it attempts to follow the runtime behavior as closely as possible. At runtime, when a constructor is called, it invokes the __call__ method of the metaclass. Most classes use type as their metaclass. (Even when a different metaclasses is used, it typically does not override type.__call__.) The type.__call__ method calls the __new__ method for the class and passes all of the arguments (both positional and keyword) that were passed to the constructor call. If the __new__ method returns an instance of the class (or a child class), type.__call__ then calls the __init__ method on the class. Pyright follows this same flow for evaluating the type of a constructor call. If a custom metaclass is present, pyright evaluates its __call__ method to determine whether it returns an instance of the class. If not, it assumes that the metaclass has custom behavior that overrides type.__call__. Likewise, if a class provides a __new__ method that returns a type other than the class being constructed (or a child class thereof), it assumes that __init__ will not be called. By comparison, mypy first evaluates the __init__ method if present, and it ignores the annotated return type of the __new__ method. 
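To illustrate the constructor evaluation difference described above, here is a minimal sketch (the Factory class is invented for illustration, and the reveal_type comments reflect the behaviors described in this section rather than verified tool output):

```python
class Factory:
    # __new__ is annotated as returning something other than the class
    # being constructed; per the description above, pyright then uses
    # this return type and assumes __init__ will not be called.
    def __new__(cls) -> int:
        return 0

    def __init__(self) -> None:
        # Per the description above, mypy evaluates __init__ if present
        # and ignores the annotated return type of __new__.
        pass

reveal_type(Factory())  # pyright: int; mypy (per the above): Factory
```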
None Return Type If the return type of a function is declared as None, an attempt to call that function and consume the returned value is flagged as an error by mypy. The justification is that this is a common source of bugs. Pyright does not special-case None in this manner because there are legitimate use cases, and in our experience, this class of bug is rare. Constraint Solver Behaviors When evaluating a call expression that invokes a generic class constructor or a generic function, a type checker performs a process called “constraint solving” to solve the type variables found within the target function signature. The solved type variables are then applied to the return type of that function to determine the final type of the call expression. This process is called “constraint solving” because it takes into account various constraints that are specified for each type variable. These constraints include variance rules and type variable bounds. Many aspects of constraint solving are unspecified in PEP 484. This includes behaviors around literals, whether to use unions or joins to widen types, and how to handle cases where multiple types could satisfy all type constraints. Constraint Solver: Literals Pyright’s constraint solver retains literal types only when they are required to satisfy constraints. In other cases, it widens the type to a non-literal type. Mypy is inconsistent in its handling of literal types. T = TypeVar("T") def identity(x: T) -> T: return x def func(one: Literal[1]): reveal_type(one) # Literal[1] v1 = identity(one) reveal_type(v1) # pyright: int, mypy: Literal[1] reveal_type(1) # Literal[1] v2 = identity(1) reveal_type(v2) # pyright: int, mypy: int Constraint Solver: Type Widening As mentioned previously, pyright always uses unions rather than joins. Mypy typically uses joins. This applies to type widening during the constraint solving process. T = TypeVar("T") def func(val1: T, val2: T) -> T: ... reveal_type(func("", 1)) # mypy: object, pyright: str | int Constraint Solver: Ambiguous Solution Scoring In cases where more than one solution is possible for a type variable, both pyright and mypy employ various heuristics to pick the “best” solution. These heuristics are complex and difficult to document in their fullness. Pyright’s general strategy is to return the “simplest” type that meets the constraints. Consider the expression make_list(x) in the example below. The type constraints for T could be satisfied with either int or list[int], but it’s much more likely that the developer intended the former (simpler) solution. Pyright calculates all possible solutions and “scores” them according to complexity, then picks the type with the best score. In rare cases, there can be two results with the same score, in which case pyright arbitrarily picks one as the winner. Mypy produces errors with this sample. T = TypeVar("T") def make_list(x: T | Iterable[T]) -> list[T]: return list(x) if isinstance(x, Iterable) else [x] def func2(x: list[int], y: list[str] | int): v1 = make_list(x) reveal_type(v1) # pyright: "list[int]" ("list[list[T]]" is also a valid answer) v2 = make_list(y) reveal_type(v2) # pyright: "list[int | str]" ("list[list[str] | int]" is also a valid answer) Value-Constrained Type Variables When mypy analyzes a class or function that has in-scope value-constrained TypeVars, it analyzes the class or function multiple times, once for each constraint. This can produce multiple errors. 
T = TypeVar("T", list[Any], set[Any]) def func(a: AnyStr, b: T): reveal_type(a) # Mypy reveals 2 different types ("str" and "bytes"), pyright reveals "AnyStr" return a + b # Mypy reports 4 errors Pyright cannot use the same multi-pass technique as mypy in this case. It needs to produce a single type for any given identifier to support language server features. Pyright instead uses a mechanism called conditional types. This approach allows pyright to handle some value-constrained TypeVar use cases that mypy cannot, but there are conversely other use cases that mypy can handle and pyright cannot. “Unknown” Type and Strict Mode Pyright differentiates between explicit and implicit forms of Any. The implicit form is referred to as Unknown. For example, if a parameter is annotated as list[Any], that is a use of an explicit Any, but if a parameter is annotated as list, that is an implicit Any, so pyright refers to this type as list[Unknown]. Pyright implements several checks that are enabled in “strict” type-checking modes that report the use of an Unknown type. Such uses can mask type errors. Mypy does not track the difference between explicit and implicit Any types, but it supports various checks that report the use of values whose type is Any: --warn-return-any and --disallow-any-*. For details, refer to this documentation. Pyright’s approach gives developers more control. It provides a way to be explicit about Any where that is the intent. When an Any is implicitly produced due to a missing type argument or some other condition that produces an Any within the type checker logic, the developer is alerted to that condition. Overload Resolution Overload resolution rules are under-specified in PEP 484. Pyright and mypy apply similar rules, but there are inevitably cases where different results will be produced. For full documentation of pyright’s overload behaviors, refer to this documentation. One known difference is in the handling of ambiguous overloads due to Any argument types where one return type is the supertype of all other return types. In this case, pyright evaluates the resulting return type as the supertype, but mypy evaluates the return type as Any. Pyright’s behavior here tries to preserve as much type information as possible, which is important for completion suggestions. @overload def func1(x: int) -> int: ... @overload def func1(x: str) -> float: ... def func2(val: Any): reveal_type(func1(val)) # mypy: Any, pyright: float Import Statements Pyright intentionally does not model implicit side effects of the Python import loading mechanism. In general, such side effects cannot be modeled statically because they depend on execution order. Dependency on such side effects leads to fragile code, so pyright treats these as errors. For more details, refer to this documentation. Mypy models side effects of the import loader that are potentially unsafe. import http def func(): import http.cookies # The next line raises an exception at runtime x = http.cookies # mypy allows, pyright flags as error Ellipsis in Function Body If Pyright encounters a function body whose implementation is ..., it does not enforce the return type annotation. The ... semantically means “this is a code placeholder” — a convention established in type stubs, protocol definitions, and elsewhere. Mypy treats ... function bodies as though they are executable and enforces the return type annotation. This was a recent change in mypy — made long after Pyright established a different behavior. 
Prior to mypy’s recent change, it did not enforce return types for function bodies consisting of either ... or pass. Now it enforces both. Circular References Because mypy is a multi-pass analyzer, it is able to deal with certain forms of circular references that pyright cannot handle. Here are several examples of circularities that mypy resolves without errors but pyright does not. A class declaration that references a metaclass whose declaration depends on the class. T = TypeVar("T") class MetaA(type, Generic[T]): ... class A(metaclass=MetaA["A"]): ... A class declaration that uses a TypeVar whose bound or constraint depends on the class. T = TypeVar("T", bound="A") class A(Generic[T]): ... A class that is decorated with a class decorator that uses the class in the decorator’s own signature. def my_decorator(x: Callable[..., "A"]) -> Callable[..., "A"]: return x @my_decorator class A: ... Class Decorator Evaluation Pyright honors class decorators. Mypy largely ignores them. See this issue for details. Support for Type Comments Versions of Python prior to 3.0 did not have a dedicated syntax for supplying type annotations. Annotations therefore needed to be supplied using “type comments” of the form # type: <annotation>. Python 3.6 added the ability to supply type annotations for variables. Mypy has full support for type comments. Pyright supports type comments only in locations where there is a way to provide an annotation using modern syntax. Pyright was written to assume Python 3.5 and newer, so support for older versions was not a priority. # The following type comment is supported by # mypy but is rejected by pyright. x, y = (3, 4) # type: (float, float) # Using Python syntax from Python 3.6, this # would be annotated as follows: x: float y: float x, y = (3, 4) Plugins Mypy supports a plug-in mechanism, whereas pyright does not. Mypy plugins allow developers to extend mypy’s capabilities to accommodate libraries that rely on behaviors that cannot be described using the standard type checking mechanisms. Pyright maintainers have made the decision not to support plug-ins because of their many downsides: discoverability, maintainability, cost of development for the plug-in author, cost of maintenance for the plug-in object model and API, security, performance (especially latency — which is critical for language servers), and robustness. Instead, we have taken the approach of working with the typing community and library authors to extend the type system so it can accommodate more use cases. An example of this is PEP 681, which introduced dataclass_transform.
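To illustrate that last point, here is a minimal sketch of the dataclass_transform mechanism from PEP 681 (the ModelBase and model_field names are invented for illustration; dataclass_transform lives in typing from Python 3.11, or in typing_extensions for earlier versions):

```python
from typing import dataclass_transform

def model_field(*, default: object = None) -> object:
    # Hypothetical field specifier a library might expose.
    return default

@dataclass_transform(field_specifiers=(model_field,))
class ModelBase:
    # Naive runtime stand-in: per PEP 681, type checkers synthesize a
    # typed __init__ for subclasses from their class-level annotations.
    def __init__(self, **kwargs: object) -> None:
        for name, value in kwargs.items():
            setattr(self, name, value)

class User(ModelBase):
    id: int
    name: str

u = User(id=1, name="ada")  # checked like a dataclass constructor
```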
2024-11-08T12:11:06
en
train
42,047,318
gnabgib
2024-11-04T23:50:39
Typosquat Campaign Targeting NPM Developers
null
https://blog.phylum.io/supply-chain-security-typosquat-campaign-targeting-puppeteer-users/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,047,320
thomascountz
2024-11-04T23:50:49
Low-poly image generation using evolutionary algorithms in Ruby (2023)
null
https://thomascountz.com/2023/07/30/low-poly-image-generation
113
41
[ 42049705, 42049204, 42050951, 42050424, 42050815, 42054469, 42049374, 42051527, 42049132, 42050254, 42050823, 42052128 ]
null
null
null
null
null
null
null
null
null
train
42,047,374
caleb_thompson
2024-11-04T23:59:26
Build Colors from Colors with CSS Relative Color Syntax
null
https://calebhearth.com/css-relative-colors
2
2
[ 42047763, 42047558 ]
null
null
no_error
Build Colors from Colors with CSS Relative Color Syntax
null
null
This is a post I’m mostly writing for my future self, because I can never remember the actual term for the CSS feature that lets you define a color based on another color (it’s “CSS Relative Color”), while “color mix”, which is what I keep wanting the feature to be called, never turns up any results but is an actual CSS function. The feature here is that you can take a color you already have and manipulate its components. Which things you can change vary by the color space you choose, so for an RGB color you can change the red, green, blue, and alpha channels; for an HSL color you can change hue, saturation, lightness, and alpha; and for my beloved OKLCH you can change lightness, chroma, hue, and yes, opacity. The syntax, if you wanted to use this and not change anything about the color, is: oklch(from var(--color) l c h / 1) But of course you can change each component, either swapping them entirely, as with this, which sets the lightness to 20%: oklch(from var(--color) 20% c h / 1) Or, more usefully, with a nested calc to change a component with some 𝐦𝐚𝐭𝐡, used here to darken a color to a fifth of its lightness while keeping its hue and saturation: hsl(from var(--color) h s calc(l * 0.2)) The fun thing in these examples is that var(--color) 1. doesn’t need to be a variable, it can be any color definition and 2. can be any color definition, not necessarily one that matches the colorspace you want to do your manipulation in. This is a really useful feature, but I have to look up the syntax almost every time I use it. Hopefully now I’ll have an easier time finding it.
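Since this post is mostly a note to future-me anyway, here’s one more sketch of the pattern I reach for most often: deriving state variants from a single base color. The custom property names are made up for illustration.

```css
:root {
  --brand: oklch(65% 0.15 250);

  /* hover state: nudge lightness down, keep chroma and hue */
  --brand-hover: oklch(from var(--brand) calc(l - 0.1) c h);

  /* disabled state: halve the chroma and add some transparency */
  --brand-disabled: oklch(from var(--brand) l calc(c / 2) h / 0.5);
}
```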
2024-11-08T07:40:04
en
train
42,047,375
sandwichsphinx
2024-11-04T23:59:40
What Is Window Dressing in Finance?
null
https://www.investopedia.com/terms/w/windowdressing.asp
3
0
null
null
null
null
null
null
null
null
null
null
train
42,047,386
eminetto
2024-11-05T00:01:20
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,414
thecosas
2024-11-05T00:07:01
How to Become a Billionaire in Skilled Nursing by Scamming Taxpayers
null
https://hindenburgresearch.com/pacs/
12
2
[ 42055891, 42047670 ]
null
null
null
null
null
null
null
null
null
train
42,047,447
null
2024-11-05T00:13:32
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,047,452
zhengiszen
2024-11-05T00:13:52
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,464
AnhTho_FR
2024-11-05T00:15:04
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,475
gods
2024-11-05T00:17:23
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,477
JoeOfTexas
2024-11-05T00:17:34
Ask HN: Is there a datastructure for leaderboards with filtering and timeslices?
I built a variation of a B+Tree to track leaderboard rankings and it works similarly to Redis Z functions (zadd/zrange).

Now I'm trying to think of a solution to filter by country code or some other player attribute. In addition, I also want to dynamically create timeslices so I can show leaderboard rankings that occurred between time A and time B.

If I were to implement the filter or timeslices into my B+Tree, I'd have to basically scan from the first record down to the last, checking against player attributes and/or the timestamp of when the ranking was inserted; a toy sketch of that scan is below.

The goal is to have 1 leaderboard index per stat, rather than X leaderboards per stat.

Is it feasible, or should I just create a new leaderboard index for every combination of attribute / timeslice?
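For reference, here is a minimal, illustrative sketch of that scan-based approach (not my actual code; the names and structures are invented), which shows why every filter degenerates into a walk of the single index in the worst case:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class Leaderboard:
    # one index per stat: entries kept sorted by (-score, player)
    entries: list[tuple[int, str]] = field(default_factory=list)
    # per-player attributes, consulted only at filter time
    attrs: dict[str, dict] = field(default_factory=dict)

    def add(self, player: str, score: int, country: str, ts: float) -> None:
        bisect.insort(self.entries, (-score, player))
        self.attrs[player] = {"country": country, "ts": ts}

    def top(self, n: int, country: str | None = None,
            t0: float = float("-inf"), t1: float = float("inf")) -> list[str]:
        # Filtering forces a scan: in the worst case this visits every
        # entry in the index, which is what I'd like to avoid.
        out: list[str] = []
        for _, player in self.entries:
            a = self.attrs[player]
            if (country is None or a["country"] == country) and t0 <= a["ts"] <= t1:
                out.append(player)
                if len(out) == n:
                    break
        return out
```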
null
2
3
[ 42048744 ]
null
null
null
null
null
null
null
null
null
train
42,047,507
preciousoo
2024-11-05T00:24:00
Ask HN: How does HN preserve state so well?
I'm a bad tab hoarder and I have >500 tabs in my backlog, across many devices. One thing I've noticed is that if I open an article from HN, and the <Back> button leads back to the HN homepage, it restores the page exactly where I left it.

Maybe I haven't been on the static web in too long, but my memory is of pages refreshing whenever you hit back after some amount of time. Do they have a longer cache option? I don't see anything in their JS that suggests saving state.
null
6
5
[ 42048695, 42047549, 42047543 ]
null
null
null
null
null
null
null
null
null
train
42,047,510
todsacerdoti
2024-11-05T00:24:11
Principles of Dependent Type Theory [pdf]
null
https://www.danielgratzer.com/papers/type-theory-book.pdf
3
0
null
null
null
null
null
null
null
null
null
null
train
42,047,518
denisshilov
2024-11-05T00:26:11
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,519
brianzelip
2024-11-05T00:26:15
Where Web Components Shine
null
https://daverupert.com/2024/10/super-web-components-sunshine/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,047,530
egorpv
2024-11-05T00:27:11
How to Salt Water Test Your Dice
null
https://thecriticaldice.com/blogs/news/how-to-salt-water-test-your-dice
3
0
null
null
null
null
null
null
null
null
null
null
train
42,047,536
LorenDB
2024-11-05T00:28:31
Big Buck Bunny in stereoscopic 3D
null
http://bbb3d.renderfarming.net/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,047,540
PaulHoule
2024-11-05T00:29:56
Turning Recycled Polystyrene Waste to Triboelectric Nanogenerators
null
https://onlinelibrary.wiley.com/doi/10.1002/aesr.202300259
1
0
null
null
null
null
null
null
null
null
null
null
train
42,047,547
ThomasCloarec
2024-11-05T00:31:15
CreateSite.ai: The first AI-native website builder
null
https://createsite.ai
3
0
null
null
null
null
null
null
null
null
null
null
train
42,047,567
willie-zhou
2024-11-05T00:34:38
Show HN: A tool for redacting any information from documents
Author here - one of the most interesting facts about the service is that it's built entirely using VLMs! On my own benchmarks, it beats Amazon Comprehend on redaction accuracy, 91% versus 72%.

Feel free to ask me anything about training VLMs to reliably detect and draw bounding boxes.
https://www.getredacto.com/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,047,569
jonifico
2024-11-05T00:34:57
Facebook, Nvidia ask US Supreme Court to spare them from securities fraud suits
null
https://www.reuters.com/legal/facebook-nvidia-ask-us-supreme-court-spare-them-securities-fraud-suits-2024-11-04/
5
0
[ 42047794 ]
null
null
null
null
null
null
null
null
null
train
42,047,574
mfiguiere
2024-11-05T00:35:30
Apple Explores Push into Smart Glasses with 'Atlas' User Study
null
https://www.bloomberg.com/news/articles/2024-11-04/apple-explores-push-into-smart-glasses-with-atlas-user-study
4
1
[ 42048968 ]
null
null
null
null
null
null
null
null
null
train
42,047,607
chany2
2024-11-05T00:41:35
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,611
Liamjames1684
2024-11-05T00:42:14
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,612
pabs3
2024-11-05T00:42:17
Automattic is 'short-staffed' amid WordPress vs. WP Engine drama
null
https://techcrunch.com/2024/10/30/matt-mullenweg-says-automattic-is-very-short-staffed-amid-wordpress-vs-wp-engine-drama/
5
0
null
null
null
null
null
null
null
null
null
null
train
42,047,639
behnamoh
2024-11-05T00:48:36
Bring Your Own Model (BYOM): Using Brave Leo with Your Own LLMs
null
https://brave.com/blog/byom-nightly/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,047,640
aard
2024-11-05T00:49:03
Build an 8-bit computer from scratch
null
https://eater.net/8bit
10
1
[ 42047761 ]
null
null
null
null
null
null
null
null
null
train
42,047,648
aard
2024-11-05T00:50:20
Linux Performance Analysis in 60k Milliseconds (2015)
null
https://netflixtechblog.com/linux-performance-analysis-in-60-000-milliseconds-accc10403c55
3
1
[ 42047791, 42047904 ]
null
null
null
null
null
null
null
null
null
train
42,047,653
aard
2024-11-05T00:52:16
Code Ownership (2006)
null
https://martinfowler.com/bliki/CodeOwnership.html
2
1
[ 42047766, 42047953 ]
null
null
null
null
null
null
null
null
null
train
42,047,672
throwaway1194
2024-11-05T00:56:43
null
null
null
5
null
[ 42047887, 42047915, 42047797 ]
null
true
null
null
null
null
null
null
null
train
42,047,677
OuterVale
2024-11-05T00:58:11
Pagination widows, or, why I'm embarrassed about my eBook (2023)
null
https://clagnut.com/blog/2426
217
132
[ 42050026, 42049739, 42049088, 42049426, 42049789, 42048211, 42048360, 42049196, 42050959, 42049405, 42053739, 42053664, 42048273, 42050016, 42048465, 42048225, 42049378, 42051786, 42048586 ]
null
null
no_error
Pagination widows, or, Why I’m embarrassed about my ebook
null
Richard Rutter
The physical copies of my book on Web Typography sold out quickly. I self-published, and print runs are expensive when you’re funding them yourself, so numbers were limited. However it was always my plan to publish an ebook at the same time, and that has out-sold the hard copy by an order of magnitude. I set myself some pretty stiff criteria for the ebook – it needed to replicate the design of the print edition as far as possible, adapting to the medium when required. To this day I’m proud of the result. I completely hand-coded the ePub (meaning it’s mostly HTML and CSS under the hood), and I believe the effort paid off. If you’ll forgive the rather un-British boasting, I still think it’s one of the more advanced ebooks out there: with embedded fonts, SVG images, alt text, bold typographic hierarchy, Javascript-driven syntax highlighting and what I hope is a nuanced, highly readable overall design. Not bad for an ebook anyway, although I’ll grant you the bar is not set high (notable exceptions include A Book Apart publications). All hubris aside, I am still frequently embarrassed by how the ebook renders, particularly in Apple Books. Like a well structured webpage, my book uses a lot of headings and subheadings – I wrote it to be referenced as much as to be read, so this helps the scannability of the text. However Apple Books, and other WebKit-, Gecko-, or old Blink-powered ebook readers will happily do this to headings: Notice the orphaned heading “Lean on six centuries of typesetting experience” with its following paragraph out of sight on the next page. This is a typographic no-no, and has been for – um – six centuries. Far better for the reader to have the heading attached to its paragraph on the next page, even if that means leaving some redundant whitespace in its place. Since 1997(!) and the early drafts of CSS2, there has been an easy way to tell browsers not to insert a page break directly after, or in the middle of, a heading: h2 { page-break-after: avoid; page-break-inside: avoid; } h2 { break-after: avoid; break-inside: avoid; } However 26 years later, break-after:avoid is still not supported by either Safari or Firefox, and was only introduced to Chrome 108 in December 2022. I’ve put together a test for support of break-after and break-inside in multi-column layout. Have a play with it in Chrome – try removing break-inside:avoid and then break-after:avoid from the h2 rule in the CSS and you should see how the subheadings end up at the bottom of a column, or worse still, split over two columns. Browser support for CSS properties tends to follow demand from web developers. Unlike in 1997 – or indeed 2017 – there is now an annual Interop arrangement between browser rendering engine makers in which they agree a common list of priorities for CSS and other web technologies. Interop 2024 has just closed for new proposals. Unfortunately I didn’t manage to submit a request in time for breaking controls to be universally implemented. Thankfully Scott Kellum of Typetura did put in a proposal for advanced multi-column layouts to be improved, and this included support for break- properties. Sadly there’s little to no clamour for it from other developers – the blog post you’re reading probably doubles the published demand, and that’s just for within columns. Update: Annoyingly the proposal was not selected for Interop 2024. I'll just have to keep prodding the bug reports and keep my fingers crossed they are fixed soon – these bugs are older than some of my colleagues! 
Paged media is very much a forgotten aspect, and it’s probably true that web pages are rarely printed in the grand scheme of things, however ebooks are definitely a popular form of paged media and deserve attention. I’d certainly like to read ebooks without failed typographic fundamentals.
2024-11-07T15:05:54
en
train
42,047,688
ternaus
2024-11-05T01:00:48
Ask HN: Need your help with Albumentations feedback
I've spent the last 10 months heads-down in Albumentations code - fixing bugs, improving performance, and adding features that people have requested for years.

Now I need your help.

I'd love to chat with you if you:

- Use Albumentations in production
- Use it in your research
- Apply it in ML competitions at Kaggle or other platforms
- Play with it in your pet projects

Or maybe you're using torchvision, DALI, Kornia, or imgaug instead? I'd love to hear what stops you from moving to Albumentations.

I would like to understand what's working for you and what isn't - whether it's missing functionality, unclear docs, lack of tutorials, or anything else that makes you frustrated.

Your feedback will help me prioritize what to work on next.

Would you be willing to join me for a short video call?

My email is in my profile.

Your input would mean a lot.
null
1
0
null
null
null
null
null
null
null
null
null
null
train
42,047,697
mepian
2024-11-05T01:02:28
CEO of Russia's Channel One state news network compares Steve Jobs to Hitler
null
https://meduza.io/en/news/2024/11/04/ceo-of-russia-s-channel-one-state-news-network-compares-steve-jobs-to-hitler
6
1
[ 42047827, 42047792, 42047880 ]
null
null
null
null
null
null
null
null
null
train
42,047,700
Jerrywither
2024-11-05T01:02:46
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,717
js2
2024-11-05T01:06:28
Voting from Space
null
https://www.nasa.gov/podcasts/houston-we-have-a-podcast/voting-from-space/
17
1
[ 42054477 ]
null
null
null
null
null
null
null
null
null
train
42,047,724
luu
2024-11-05T01:07:41
Old Books and the Passage of Time
null
https://jaydaigle.net/blog/old-books-and-the-passage-of-time/
2
0
null
null
null
no_error
Old Books and the Passage of Time
null
Jay Daigle
I recently got a couple books that I’ve had on my “to read” list for several years: Frank Abagnale’s The Art of the Steal and Joseph Heath’s Enlightenment 2.0. And I’ve been enjoying both books, but they’re both surprisingly dated. I keep being aware of how old they are, sometimes in ways that really catch me off guard. The Art of the Steal The anachronisms in the Abagnale book are more dramatic, in part because the book is older (from 2001), and in part because Abagnale is more of a sensationalist to begin with. (It seems a lot of the claims he made about his most impressive capers are less than accurate—which makes sense coming from a successful con man!) He takes pains to explain cutting-edge technology, like color scanners and laser printers. Then he warns about people using them to forge store gift certificates, when I’m not sure I’ve seen an actual printed gift certificate (as opposed to a gift card) in years. He talks about scanning and printing near-perfect replicas of US currency, which is no longer possible. He describes the exciting new security features in the redesign of the twenty-dollar bill, which I just barely remember being introduced. But the most jarring bits are in his first real chapter, about check forgery. Partly, again, the technology has gotten better. He complains that many companies print checks on “that familiar blue or green basketweave check paper” you can buy at any office supply store. But it’s not really familiar to me! Instead I just take for granted that all checks have the fancy new security features he’s advocating. But moreover, he’s amazed that stores accept checks without checking the signature—whereas I’m amazed that stores accept checks at all! He raises the possibility of paper checks dying out, only to dismiss it: I’ll be long dead, even if I live to a ripe old age, before checks will ever disappear. The amount of checks we write is growing at a rate of more than a billion checks a year. So they’re not even declining in use. They’re growing. I remember fifteen years ago, when we were writing 40 billion checks a year, people said it would never reach 50 billion, and now we’re at almost 70 billion. People happen to like checks. They’re familiar. Many consumers will say, “I like this check. It has some float to it. I like that much better than when the bank immediately goes into my checking account and takes the money out. I also like the idea that I can get the check back and see who I wrote it to and have a record of it.” And we have a very large generation that is not comfortable with smart cards and electronics. They’re leery of new ways of payments, and they don’t fully grasp them. Electronic banking is still much more of an unknown frontier. And there’s no forgetting the billions of dollars that banks have invested in electronic readers, sorters, and other check processing equipment. We’re not going to just scrap it and plow money into home banking. There are banks out there pushing electronics, but there are a lot of other banks that would just as soon stay with checks. And that’s all very convincing, except for one thing: [Chart: US check volume over time. Source: “The Evolution of the Check as a Means of Payment: A Historical Survey” by Stephen Quinn and William Roberds; data collected from https://www.federalreserve.gov/paymentsystems/frps_previous.htm] I can’t explain the discrepancy between Abagnale’s numbers and the Fed’s. Abagnale doesn’t cite a source, and while the Fed is pretty clear that its numbers aren’t totally solid, I know I trust them more. 
But it sure looks like The Art of the Steal was written at nearly the exact peak of US check-writing. The book is confidently asserting that checks would never fade—to a present-day audience which knows they’re well on their way out. Enlightenment 2.0 Joseph Heath wrote a much more serious book, and a much better researched one. It’s also much more recent, from just 2014. But that makes the anachronisms more disconcerting. The first thing that really surprised me is his discussion of computer chess. Heath argues (correctly!) that people don’t think in a purely linear, logical-deductive manner, but instead rely on a lot of shortcuts and heuristics. He illustrates the difference by contrasting the human approach to chess-playing with the approach of chess computers like Deep Blue. Computers, he explains, are analyzing millions of branches of the chess decision tree; in contrast, human grandmasters rely on “a heuristic pruning of the decision tree, guided by an intuitive sense of what seem to be the most promising moves or of what sort of position they want on the board.” He goes on to observe that [N]o one is able to articulate how this initial pruning is done. It is all based on “feel.” … To this day, no one has ever succeeded in reproducing the intuitive style of thinking in a computer, simply because we don’t know how it is done (despite the fact that we ourselves do it)…. The fact that this much computing power can be deployed without yet achieving the “final, generally accepted, victory over the human” is a monument to the power and sophistication of nonrational thought processes in the human mind. Three years later, Google unveiled the AlphaZero engine, which uses modern machine learning techniques to do heuristic pruning very similar to what humans do, and avoids the need to crunch through the entire decision tree. To the best of my knowledge, every top chess program now uses these neural network-based heuristics. I don’t bring this up to criticize Heath. He was correct when he was writing; and his main point is still correct, since he was mostly trying to explain how human thought works, not how to write a chess program. But it’s definitely a moment where I paused and was thrown out of the argument, because my first reaction was “but that isn’t true!” With a belated followup of “…any more”. Anachronistic Politics But there’s another bit that seems far more jarring and anachronistic today, even though it also seems prescient. Heath writes as an unapologetic liberal, and his project is to build a modern, renewed liberal politics. So he sets the stage for his argument by discussing some of the problems he sees in the modern Republican party. The big tent of the American right has always sheltered its share of crazies… There came a point, however, when the sideshow began to take over center stage. Americans woke up to find that their political system was increasingly divided, not between right and left, but between crazy and non-crazy. And what’s more, the crazies seemed to be gaining the upper hand. He later observes that the American right “always seem to be very angry”, and that there has also been a significant rise in the amount of bullshit. Lying for political advantage, of course, is as old as the hills. What has changed is that politicians used to worry about getting caught. He is, of course, describing the 2012 campaign that pitted Mitt Romney against Rick Santorum in the primary and Barack Obama in the general election. 
Ten years later, I’m not sure whether to read Heath’s writing as prescient or naive. He forecast the shape of Trumpian politics nearly perfectly, so in that sense he was clearly on to something. But it’s disconcerting to remember a time when we might have viewed Romney and Santorum as shockingly out-of-bounds artists of bullshit. So those are two different books I’m reading, which both aged surprisingly quickly. I don’t have any grand takeaways from this, or anything. But it’s interesting to see just how unpredictable trends can be. Sometimes they keep going much further than you think they can. And other times, when they seem like they’ll last forever, they stop almost without warning. What else has aged surprisingly quickly—or surprisingly well?
2024-11-08T11:22:24
en
train
42,047,729
raybb
2024-11-05T01:08:57
New memory chip controlled by light and magnets could make AI less power-hungry
null
https://www.livescience.com/technology/artificial-intelligence/new-memory-chip-controlled-by-light-and-magnets-could-one-day-make-ai-computing-less-power-hungry
3
0
null
null
null
no_error
New memory chip controlled by light and magnets could one day make AI computing less power-hungry
2024-11-02T16:00:00+00:00
Skyler Ware
[Image: The magneto-optic memory cell design could one day reduce the energy required to power AI computing farms, researchers said. Credit: Brian Long, UCSB]

Researchers have developed a new type of memory cell that can both store information and do high-speed, high-efficiency calculations.

The memory cell enables users to run high-speed computations inside the memory array, researchers reported Oct. 23 in the journal Nature Photonics. The faster processing speeds and low energy consumption could help scale up data centers for artificial intelligence (AI) systems.

"There's a lot of power and a lot of energy being put into scaling up data centers or computing farms that have thousands of GPUs [graphics processing units] that are running simultaneously," study co-author Nathan Youngblood, an electrical and computer engineer at the University of Pittsburgh, told Live Science. "And the solution hasn't necessarily been to make things more efficient. It's just been to buy more and more GPUs and spend more and more power. So if optics can address some of the same problems and do it more efficiently and faster, that would hopefully result in reduced power consumption and higher throughput machine learning systems."

The new cell uses magnetic fields to direct an incoming light signal either clockwise or counterclockwise through a ring-shaped resonator, a component that intensifies light of certain wavelengths, and into one of two output ports. Depending on the intensity of light at each of the output ports, the memory cell can encode a number between zero and one, or between zero and minus one. Unlike traditional memory cells, which encode only a zero or a one in a single bit of information, the new cell can encode several non-integer values, allowing it to store up to 3.5 bits per cell.

Those counterclockwise and clockwise light signals are akin to "two runners on a track that are running in opposite directions around the track, and the wind is always in the face of one and to the back of the other. One can go faster than the other," Youngblood said. "You're comparing the speed at which those two runners are running around the track, and that allows you to basically code both positive and negative numbers."

The numbers that result from this race around the ring resonator could be used to either strengthen or weaken connections between nodes in artificial neural networks, which are machine learning algorithms that process data in ways similar to the human brain. That could help the neural network identify objects in an image, for example, Youngblood said.

Unlike traditional computers, which make calculations in a central processing unit and then send the results to memory, the new memory cells perform high-speed computations inside the memory array itself. In-memory computing is particularly useful for applications like artificial intelligence that need to process a lot of data very quickly, Youngblood said.

The researchers also demonstrated the endurance of the magneto-optic cells. They ran more than 2 billion write and erase cycles on the cells without observing any degradation in performance, a 1,000-fold improvement over past photonic memory technologies, the researchers wrote. Typical flash drives are limited to between 10,000 and 100,000 write and erase cycles, Youngblood said.

In the future, Youngblood and his colleagues hope to put multiple cells onto a computer chip and try more advanced computations. Eventually, this technology could help mitigate the amount of power needed to run artificial intelligence systems, Youngblood said.
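To make the differential readout concrete, here is a minimal numerical sketch of the idea as described above. It assumes the stored signed value is recovered as the normalized difference of the two output-port intensities, and it snaps weights to 11 discrete levels so that log2(11) ≈ 3.46 bits roughly mirrors the reported 3.5 bits per cell. Both choices are illustrative assumptions, not the encoding published in the Nature Photonics paper, and every name in the sketch is hypothetical.

```python
# Back-of-the-envelope numerical model of a differential optical readout.
# ASSUMPTIONS (not the paper's published scheme): the stored signed value
# is the normalized difference of the two output-port intensities, and
# weights are snapped to 11 levels, since log2(11) ~ 3.46 bits roughly
# mirrors the reported ~3.5 bits per cell.
import numpy as np

LEVELS = 11  # odd, so zero is an exact level

def read_cell(i_ccw, i_cw):
    """Decode a signed value in [-1, 1] from the counterclockwise and
    clockwise output intensities (the 'two runners' comparison)."""
    return (i_ccw - i_cw) / (i_ccw + i_cw)

def quantize(x):
    """Snap a value in [-1, 1] to one of LEVELS discrete weight levels."""
    step = 2.0 / (LEVELS - 1)
    return round(x / step) * step

def in_memory_dot(cells, inputs):
    """Weighted sum computed directly from the stored cells, the way an
    in-memory multiply-accumulate for one neural-network node would be."""
    weights = np.array([quantize(read_cell(ccw, cw)) for ccw, cw in cells])
    return float(weights @ inputs)

if __name__ == "__main__":
    # Three cells: mostly clockwise, balanced, mostly counterclockwise,
    # giving quantized weights of about -0.8, 0.0, and +0.8.
    cells = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
    x = np.array([1.0, 2.0, 3.0])
    print(in_memory_dot(cells, x))  # -> approximately 1.6
```

The point of the toy is only the shape of the computation: the signed weight comes straight out of the readout comparison, so the multiply-accumulate can happen where the data already lives instead of shuttling every weight to a CPU or GPU first.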
2024-11-07T22:25:00
en
train
42,047,736
fsndz
2024-11-05T01:10:48
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,047,740
sandwichsphinx
2024-11-05T01:10:56
Larry Ellison talks curing cancer, genetically altering food as AI possibilities
null
https://www.tennessean.com/story/money/2024/10/31/oracle-founder-larry-ellison-nashville-ai/75915989007/
2
1
[ 42048110, 42047944 ]
null
null
null
null
null
null
null
null
null
train
42,047,745
astra-ai
2024-11-05T01:12:54
null
null
null
1
null
[ 42047746 ]
null
true
null
null
null
null
null
null
null
train
42,047,755
mayallo
2024-11-05T01:16:15
I Improved Video Streaming with FFmpeg and Node.js
null
https://mayallo.com/video-processing-using-ffmpeg-nodejs/
2
0
null
null
null
null
null
null
null
null
null
null
train