Dataset schema (⌀ marks columns containing nulls):

- `id`: int64, values 2 to 42.1M
- `by`: large_string, lengths 2 to 15, ⌀
- `time`: timestamp[us]
- `title`: large_string, lengths 0 to 198, ⌀
- `text`: large_string, lengths 0 to 27.4k, ⌀
- `url`: large_string, lengths 0 to 6.6k, ⌀
- `score`: int64, values -1 to 6.02k, ⌀
- `descendants`: int64, values -1 to 7.29k, ⌀
- `kids`: large list
- `deleted`: large list
- `dead`: bool, 1 class
- `scraping_error`: large_string, 25 classes
- `scraped_title`: large_string, lengths 1 to 59.3k, ⌀
- `scraped_published_at`: large_string, lengths 4 to 66, ⌀
- `scraped_byline`: large_string, lengths 1 to 757, ⌀
- `scraped_body`: large_string, lengths 1 to 50k, ⌀
- `scraped_at`: timestamp[us]
- `scraped_language`: large_string, 58 classes
- `split`: large_string, 1 class

| id | by | time | title | text | url | score | descendants | kids | deleted | dead | scraping_error | scraped_title | scraped_published_at | scraped_byline | scraped_body | scraped_at | scraped_language | split |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,031,459 | olayhabercomtr | 2024-11-03T06:27:28 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,460 | peutetre | 2024-11-03T06:27:29 | Hyundai Inster: The Best Small Car Ever Made? [video] | null | https://www.youtube.com/watch?v=W7IRi0GbRog | 4 | 0 | [
42032253
] | null | null | no_article | null | null | null | null | 2024-11-08T11:08:32 | null | train |
42,031,469 | drmustafash | 2024-11-03T06:32:01 | Stripe Closed My Account | After three months of using Stripe, they conducted a routine credit check and requested three months of bank statements. Since I use a Payoneer receiving account, I provided a Payoneer statement covering this period. However, shortly after I uploaded the documents, my account was closed, with Stripe citing it as high-risk. I have never had a chargeback, and I run a legitimate web hosting business.

Note: Stripe asked for the statement from the bank used for payouts, which for me is Payoneer, so I sent that. If they’d asked for a personal bank statement, I’d have happily provided it, but their request specifically focused on my payout account. | null | 1 | 8 | [
42031506,
42032751
] | null | null | null | null | null | null | null | null | null | train |
42,031,476 | hilux | 2024-11-03T06:33:16 | null | null | null | 9 | null | [
42031615
] | null | true | null | null | null | null | null | null | null | train |
42,031,477 | bilater | 2024-11-03T06:34:05 | Show HN: Easily Fact Check any YouTube video as you watch it | null | https://twitter.com/deepwhitman/status/1852935781993865608 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,479 | gndp | 2024-11-03T06:34:20 | Show HN: Screenux: GNU Screen for Humans | A user friendly alternative to nohup, screen, tmux for running long-running scripts. It takes care of logging, and have some configuration options. I made it to make it easier to run ML training scripts. But the functionality is generic and is suitable for any other background scripts/commands. This is my first share on HN, please let me know what you think of it. I have wrote thorough readme to uninstall, so that you don't have to figure that out if it's not for you. I wanna apologise to zsh and other shell users because it only works in bash afaik, but chatgpt said it should work in zsh too. | https://github.com/gndps/screenux | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,486 | ksec | 2024-11-03T06:36:27 | Apple CEO Tim Cook on How Steve Jobs Recruited Him and More – The Job Interview [video] | null | https://www.youtube.com/watch?v=m4RVTK7iU1c | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,495 | lomolo | 2024-11-03T06:39:18 | Show HN: I built an Uber clone for deliveries | null | https://github.com/drago-plc/uzi | 1 | 0 | [
42031551
] | null | null | no_error | GitHub - drago-plc/uzi | null | drago-plc |
| 2024-11-08T05:39:02 | en | train |
42,031,513 | MagicSet | 2024-11-03T06:45:34 | null | null | null | 1 | null | [
42031514
] | null | true | null | null | null | null | null | null | null | train |
42,031,515 | super_linear | 2024-11-03T06:45:38 | Duolingo: We reduced our cloud spending by 20% | null | https://blog.duolingo.com/reducing-cloud-spending/ | 7 | 1 | [
42037649,
42032151
] | null | null | null | null | null | null | null | null | null | train |
42,031,523 | karanveer | 2024-11-03T06:48:02 | Show HN: Calculator in Browser | This Halloween I wasn't invited to any party, so I got bored and created this quick calculator for my calculations, without leaving my Chrome browser. | https://chromewebstore.google.com/detail/calculator/dlpbkbmnbkkliidfobhapmdajdokapnm | 1 | 0 | null | null | null | no_error | Calculator - Chrome Web Store | null | null | Overview: A simple calculator for those quick calculations, without leaving the browser. How many times do you leave your browser to open the calculator app on your PC/Mac?
In the middle of a movie and felt like calculating those bills? Using a sheet and want to do a quick calculation?
Well, this extension saves you those extra steps and gives you access to a calculator with the click of a button or a custom-assigned shortcut.
NOW CALCULATE WITHOUT EVER LEAVING THE BROWSER.
"Calculator" by theindiecompny helps you quickly calculate on the web, without leaving your train of thought or the tab.
Best Way to Use this Calculator:
1. Install it
2. Use "Ctrl + Q" on Windows, or "Cmd + Q" to launch quickly.
You can also customize this shortcut key, for me it is "Ctrl+1"
[go to this link and assign your keys to "Activate the Extension": chrome://extensions/shortcuts]
3. Enjoy!

Details: Version 1.0.2, updated November 4, 2024, size 21.27KiB. Developer email: [email protected]. Privacy: the developer has disclosed that it will not collect or use your data; see the developer's privacy policy. For help with questions, suggestions, or problems, visit the developer's support site | 2024-11-08T04:44:34 | en | train |
42,031,537 | kumark1 | 2024-11-03T06:52:27 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,538 | denkar112 | 2024-11-03T06:53:19 | null | null | null | 1 | null | [
42031539
] | null | true | null | null | null | null | null | null | null | train |
42,031,556 | gitroom | 2024-11-03T07:00:23 | Show HN: Pinterest Font Generator | null | https://pinterestfontgenerator.com | 2 | 0 | [
42032104
] | null | null | null | null | null | null | null | null | null | train |
42,031,566 | taubek | 2024-11-03T07:03:40 | Fight over Privacy Firefox and Brave Take Potshots at Each Other | null | https://news.itsfoss.com/brave-slams-firefox/ | 4 | 0 | [
42032152
] | null | null | no_error | Fight Over Privacy! Firefox and Brave Take Potshots at Each Other | 2024-11-01T11:55:50.000Z | Sourav Rudra |
Web browsers are synonymous with the internet, as they serve as a user-friendly means to interact with the online world. Users who care about their privacy usually switch to options like LibreWolf, Brave, Firefox, and Mullvad Browser. At the same time, the debate over Brave vs. Firefox is a longstanding one, and now it appears that the conversation has taken a rather spicy turn following Firefox's post on Brave. Let's see what's happening. 😳

Brave Hits Back: Slams Firefox Over Claims

— LukΞ Mulks 🦁⟁◎⟁ (@lukemulks) October 30, 2024

Debunking Firefox's claims that Brave was an ill-equipped browser, Luke Mulks, VP of Business Operations at Brave Software, took to X (formerly Twitter) to share that it was not the case. He started by showing off the most recent PrivacyTests stats, where Firefox performed worse than Brave on the privacy front. This was in response to the claim that “Firefox's privacy settings are strong and easy to use.”

He then moved on to Firefox's claim that Brave's default ad-blocker may break websites and that you have to “keep fiddling with it”. Luke added that their ad blocking is being continuously improved, and can be toggled off per site.

The post in question.

Similarly, the statement about Brave defaulting to their search engine and users needing to go into the browser settings also caught flak. Luke pointed out that Mozilla itself takes money from Google for keeping it as the default search engine on Firefox, and that this behavior with Brave Search was a feature, not a bug. He also further clarified that there is a dedicated “Find elsewhere” button on Brave, which allows users to use Google, Bing, and Mojeek to search for things.

Closing out his arguments against Firefox, Luke noted that even though Brave is a Chromium-based web browser, their team ensures to “harden the hell out of it”. He said that open source software like Chromium is beautiful in a way that it allows building applications on user-first principles, allowing developers to “correct user-hostile business ethics corruption at scale”.

💬 What are your views on this? Should Firefox up their game instead of doing just PR?

Suggested Read 📖: Comparing Brave vs. Firefox: Which one Should You Use? (It's FOSS, Ankush Das)
| 2024-11-07T22:58:36 | en | train |
42,031,580 | varun_chopra | 2024-11-03T07:08:53 | Can you build a startup without sacrificing your mental health? | null | https://techcrunch.com/2024/11/02/can-you-build-a-startup-without-sacrificing-your-mental-health-bonobos-founder-andy-dunn-thinks-so/ | 2 | 1 | [
42032532,
42032190
] | null | null | null | null | null | null | null | null | null | train |
42,031,598 | rmanolis | 2024-11-03T07:15:11 | Error handling challenge (can your language pass the challenge?) | null | https://rm4n0s.github.io/posts/3-error-handling-challenge/ | 2 | 0 | null | null | null | no_error | Error Handling Challenge! | (a)RManos Blog | null | null |
(DISCLAIMER: the article may topple a benevolent dictator)
Introduction
In the previous article,
I compared Golang’s and Odin’s error handling, and I said why Go’s error types suck.
However, some Golang developers told me that it is possible to do the same things with errors.Is(), errors.As(), fmt.Errorf() or other library.
At that point, I realized that these developers didn’t understand the problem and for that reason, I made it as an exercise for them to solve.
Nevertheless, this exercise it is not only for Golang developers, but for all the developers. Everybody has the chance to prove that their favorite programming language is better than Odin’s superior error handling.
I challenge you ALL to the “Error Handling Challenge!”
The challenge
The challenge is simple to understand. Print a message to the user based on the path of function calls that produced the error, and not based on the error itself.
The exercise is in Golang, and the requirements to complete the task are given in its comments.
package main
import (
"errors"
"fmt"
"math/rand/v2"
)
var ErrBankAccountEmpty = errors.New("account-is-empty")
var ErrInvestmentLost = errors.New("investment-lost")
func f1() error {
n := rand.IntN(9) + 1
if n%2 == 0 {
return ErrBankAccountEmpty
}
return ErrInvestmentLost
}
func f2() error {
return f1()
}
func f3() error {
return f1()
}
func f4() error {
n := rand.IntN(9) + 1
if n%2 == 0 {
return f2()
}
return f3()
}
func main() {
err := f4()
// print three different messages based on
// the execution path of the functions and error:
// - for f4()->
// f2()->
// f1()->
// ErrBankAccountEmpty
// print "Aand it's gone"
// - for f4()->
// f3()->
// f1()->
// ErrInvestmentLost
// print "The money in your account didn't do well"
//
// - for the rest of the cases
// print "This line is for bank members only"
// also print any type of stack trace for err
fmt.Println("Print stack trace", err)
}
Please don’t change the logic, the number of functions and the messages.
However, you can change the types of errors, add more return values, global values and use any library you want.
Furthermore, you can rewrite this exercise in any programming language you want, as long as you keep the logic the same. If your programming language is OOP, then use static methods in the same object.
Yet, before you start solving it, I want you to know that I believe no language can solve this challenge except Odin.
Because Odin is not just a programming language; it is a holy teacher.
Odin’s holy teachings
To solve this task in Odin, you have to follow the unwritten verses from Odin’s source code:
Use Enum, Union and Struct for errors, no strings or other types of pointers
Define a type as an error by adding “_Error” at the end of the name.
Give one or more error types for each procedure no matter the race, sex, religion and political beliefs
If a procedure does not produce its own errors, then unionize the errors from the functions it calls.
To create messages for users, parse the errors with switch/case, if conditions or “core:reflect” library.
Replace stack traces with type traces.
Odin’s teachings in action
Here I solve the above exercise in Odin, using the holy verses.
package main
import "core:fmt"
import "core:math/rand"
// my library for type traces
// https://github.com/rm4n0s/trace
import "trace"
F1_Error :: enum {
None,
Account_Is_Empty,
Investment_Lost,
}
F2_Error :: union #shared_nil {
F1_Error,
}
F3_Error :: union #shared_nil {
F1_Error,
}
F4_Error :: union #shared_nil {
F2_Error,
F3_Error,
}
f1 :: proc() -> F1_Error {
n := rand.int_max(9) + 1
if n % 2 == 0 {
return .Account_Is_Empty
}
return .Investment_Lost
}
f2 :: proc() -> F2_Error {
return f1()
}
f3 :: proc() -> F3_Error {
return f1()
}
f4 :: proc() -> F4_Error {
n := rand.int_max(9) + 1
if n % 2 == 0 {
return f2()
}
return f3()
}
main :: proc() {
err := f4()
switch err4 in err {
case F2_Error:
switch err2 in err4 {
case F1_Error:
#partial switch err2 {
case .Account_Is_Empty:
fmt.println("Aand it's gone")
case:
fmt.println("This line is for bank members only")
}
}
case F3_Error:
switch err3 in err4 {
case F1_Error:
#partial switch err3 {
case .Investment_Lost:
fmt.println("The money in your account didn't do well")
case:
fmt.println("This line is for bank members only")
}
}
}
tr := trace.trace(err)
fmt.println("Trace:", tr)
/* Prints randomly:
Aand it's gone
Trace: F4_Error -> F2_Error -> F1_Error.Account_Is_Empty
The money in your account didn't do well
Trace: F4_Error -> F3_Error -> F1_Error.Investment_Lost
This line is for bank members only
Trace: F4_Error -> F3_Error -> F1_Error.Account_Is_Empty
*/
}
As you can see, Odin’s holy teachings not only solve difficult challenges, but also it supports developers’ rights.
But let me guess, you don’t even know your rights and for that reason, you let your favorite programming language treat you horrible.
I will educate you, my brothers and sisters, about your rights.
Developers’ rights
To read the errors of a function without reading the function’s code.
Spend your time thinking, not searching.
To print a stack trace for each error.
The way is more important than the destination.
To create error messages for users from parsing the trace of functions that produced the error.
DON’T use the same language for users, administrators, and developers.
No programming language can give us all the rights, except Odin.
Odin is the evolution
Start the revolution
If you feel woke now and ready to protest for your rights, here is a list of chants:
In Union there is strength, for errors to transcend
We want non-binary errors, no more exception terrors
Errors unite! in a union type!
Two, Four, Six, Eight! How Do You Know Your Errors Are Traced?
Who’s got the power? We’ve got the power!
What kind of power? Union power!
Get up, get down, Odin is a union town!
We don’t want exceptions, we want union inceptions!
Developers rights are human rights!
Languages that passed the challenge
Here is a list of implementations per programming language that solved the exercise.
I will not judge how each implementation is solved. That is for you to decide.
But I will put them in order based on personal preference.
Zig
@driggy implemented
https://godbolt.org/z/Y9G4T1K4K source
@johan__A implemented
https://godbolt.org/z/WWsh66a6q source
Go
@nemith implemented
https://go.dev/play/p/-whhI-Zd94Q source
https://go.dev/play/p/rtb2jOQ4QrK source
| 2024-11-07T20:03:10 | en | train |
42,031,606 | hunglee2 | 2024-11-03T07:16:43 | The Making of China and India in the 21st Century | null | https://www.dropbox.com/scl/fo/3lskcwur97a67jhjgkn9z/ABCZ60fQDoHEG1zXL52PjPs?dl=0&e=1&noscript=1&preview=The_Making_of_China_and_India_2024.pdf&rlkey=v60l9q4uusuk50s3rqu1og5gj | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,614 | zhengiszen | 2024-11-03T07:18:49 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,616 | null | 2024-11-03T07:20:03 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,031,624 | the-mitr | 2024-11-03T07:22:54 | Protecting Artists from Theft by AI | null | https://nautil.us/protecting-artists-from-theft-by-ai-660557/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,631 | willemlaurentz | 2024-11-03T07:24:56 | Offline Music | null | https://willem.com/blog/2022-07-17_offline-music/ | 1 | 0 | [
42032103
] | null | null | null | null | null | null | null | null | null | train |
42,031,642 | robinhouston | 2024-11-03T07:28:00 | Can humans say the largest prime number before we find the next one? | null | https://saytheprime.com/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,658 | vincent_build | 2024-11-03T07:33:32 | Show HN: An app for biomechanically-optimized exercise selection | I built an app that analyzes individual anatomical variations (limb lengths, joint alignments, mobility patterns) and matches them with biomechanically suitable exercises.

The matching algorithm considers:
- Valgus/varus alignment
- Limb-to-torso ratios
- Joint mobility ranges
- Anatomical leverages

It then cross-references these data points with a curated database of exercises to determine optimal movement patterns for each body type.

Early access signup: https://morpho.fit/ | https://morpho.fit/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,663 | goodereader | 2024-11-03T07:35:16 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,664 | uxhacker | 2024-11-03T07:35:19 | An 'Interview' with a Dead Luminary Exposes the Pitfalls of A.I | null | https://www.nytimes.com/2024/11/03/world/europe/poland-radio-station-ai.html | 3 | 1 | [
42033352
] | null | null | null | null | null | null | null | null | null | train |
42,031,678 | mixeden | 2024-11-03T07:39:33 | ToMChallenges: Exploring Theory of Mind with AI in Diverse Tasks | null | https://synthical.com/article/ToMChallenges%3A-A-Principle-Guided-Dataset-and-Diverse-Evaluation-Tasks-for-Exploring-Theory-of-Mind-a68bbd89-7c6c-4da1-ab92-5c96d96db2db | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,737 | hbrk | 2024-11-03T07:55:10 | null | null | null | 1 | null | [
42031738
] | null | true | null | null | null | null | null | null | null | train |
42,031,750 | agluszak | 2024-11-03T07:57:08 | Herb Sutter's Cppfront 0.8.0 | null | https://github.com/hsutter/cppfront/releases/tag/v0.8.0 | 2 | 0 | [
42032168
] | null | null | null | null | null | null | null | null | null | train |
42,031,753 | mahin | 2024-11-03T07:58:03 | Hacker News Explorer | null | https://chromewebstore.google.com/detail/hn-explorer/amiaaonefodebppoklclafmglnkleobk | 21 | 3 | [
42033551,
42036090
] | null | null | null | null | null | null | null | null | null | train |
42,031,756 | warsamw | 2024-11-03T07:58:46 | How to send a photo around the world in 1926 (2013) | null | https://gizmodo.com/how-to-send-a-photo-around-the-world-in-1926-533206646 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,759 | JumpCrisscross | 2024-11-03T07:59:12 | Musk Loses Bid to Dismiss Ex-Twitter CEO's Severance Claim | null | https://www.bloomberg.com/news/articles/2024-11-02/musk-loses-bid-to-dismiss-ex-twitter-ceo-s-severance-lawsuit | 11 | 1 | [
42031875,
42032169
] | null | null | cut_off | Musk Loses Bid to Dismiss Ex-Twitter CEO’s Severance Claim | 2024-11-02T04:00:36.927Z | Malathi Nayak |
Parag Agrawal alleges his firing was timed to avoid paying him. The billionaire is fighting pay claims by thousands of ex-workers.

November 2, 2024 at 12:00 AM EDT; updated November 2, 2024 at 11:00 AM EDT.

Elon Musk was dealt a significant setback in a court fight over compensation sought by the top Twitter Inc. executives he fired when he took over the company in 2022. A judge ruled late Friday that former chief executive officer Parag Agrawal and other high-ranking officers can proceed with claims that Musk terminated them right as he was closing the deal to cheat them out of severance pay before they could submit resignation letters. | 2024-11-08T01:41:41 | en | train |
42,031,767 | HiPHInch | 2024-11-03T08:02:24 | Evolve: An incremental game about evolving a civilization | null | https://pmotschmann.github.io/Evolve/ | 4 | 0 | [
42032182
] | null | null | null | null | null | null | null | null | null | train |
42,031,769 | dtquad | 2024-11-03T08:03:39 | Datacenter Anatomy Part 1: Electrical Systems | null | https://www.semianalysis.com/p/datacenter-anatomy-part-1-electrical | 2 | 0 | null | null | null | paywall_blocked | Datacenter Anatomy Part 1: Electrical Systems | 2024-10-14T14:10:06+00:00 | null |
The Datacenter industry has been a critical industry for many years, but now it is experiencing unprecedented acceleration due to its important role in national security and future economic growth, all being driven by massive demand for AI training and inference. The surge in power demand triggered by AI has huge macro and micro implications and supply is tight.
Over the past few years, the industry has been inching, in a haphazard fashion, towards adopting the key systems that enable the higher power density essential for AI applications. This push was led mainly by smaller players offering their own new designs to accommodate AI, without sign-on from Nvidia, which was slow to adopt liquid cooling compared to Google. No industry standards emerged during these years.
This all changes with the upcoming Blackwell ramp. Nvidia’s GB200 family requires direct to chip liquid cooling and has up to 130kW rack power density delivering ~9x inference and ~3x training performance improvements over the H100. Any datacenter that is unwilling or unable to deliver on higher density liquid cooling will miss out on the tremendous performance TCO improvements for its customers and will be left behind in the Generative AI arm’s race.
Blackwell has standardized the requirements and now suppliers and system designers have a clear roadmap for AI datacenter development leading to a massive shift in datacenter systems and components winners / losers.
These design shifts have already caused considerable impact. For example, Meta demolished an entire building under construction because it was their old datacenter design with low power density which they have used for many years. Instead they replaced it with their brand new AI-Ready design! Most datacenter designs are not ready for GB200, and Meta in particular has by far the lowest power density in their DCs compared to the other hyperscalers.
Source: SemiAnalysis Datacenter Model
In this report, Datacenter Anatomy Part 1 – Electrical Systems, we’ll dig into the electrical system of AI Datacenters and explore how Gigawatt clusters will impact traditional supply chains. We’ll discuss key equipment suppliers such as Vertiv, Schneider Electric, and Eaton and the impact of AI on their business. We show our Datacenter Bill-of-Materials estimate and derive a CapEx by component forecast for the industry.
Future reports will explore facility cooling systems, upcoming server cooling technologies such as Immersion, and dig into hyperscaler designs. This report is based on our work tracking 200+ datacenter-related suppliers and modelling market shares per product and a bottom-up TAM for new key technologies, such as liquid cooling. In addition, we have the most detailed top-down estimate of the market via a semiconductor-based SKU-by-SKU demand forecast and a building-by-building Datacenter capacity forecast up to 2030.
Datacenter Basics
A Datacenter is a purpose-built facility designed to deliver power to IT Equipment in an efficient and safe manner and deliver the lowest possible total cost of ownership over the life of the IT hardware. The IT equipment is generally laid out in racks filled with servers, networking switches and storage devices. Running these devices can require vast amounts of power and generate a lot of heat.
Thirty years ago, these facilities resembled office buildings with beefed-up air conditioning, but the scale has massively increased since with consumers today watching billions of hours of YouTube videos, Netflix shows, and Instagram stories. This triggered deep changes in the way datacenters are built, as modern facilities can require >50x the amount of electricity per square foot compared to a typical office building, and facilitating heat dissipation for these servers calls for fundamentally different cooling infrastructure.
Source: Data Center Dynamics
With such scale, any outage caused by a datacenter issue can incur significant revenue losses and harm the operator's reputation, whether it be a Cloud Service Provider (CSP) such as Azure or AWS, or a Colocation operator (Datacenter Real Estate). Higher uptime means more revenue, and achieving it largely comes down to having a reliable electrical and cooling system inside. Electrical faults, while more common, tend to have a smaller "blast radius" and are typically less disruptive than cooling failures.
A useful framework to evaluate Datacenters based on expected downtime and redundancy is the “Tier” Classification from the Uptime Institute, or the ANSI/TIA-942 standard (based on the Uptime’s Tiers), with the following four rating levels in the diagram below.
Source: PRASA
“Rated 3” datacenters (and equivalent standards) are the most common globally for large facilities, and always require backup power redundancy for IT equipment. When talking about redundancy, we use terms such as “N”, “N+1” or “2N”. For example, if a datacenter needs ten transformers, N+1 means buying eleven units in total, of which ten are operational and one redundant, while 2N would require purchasing 20 transformers.
Rated 3 facilities must be “concurrently maintainable” and typically call for N+1 redundancy in components like transformers and generators, and 2N on power distribution components – uninterruptible power supply (UPS) and power distribution unit (PDU). Rated 4 datacenters are less common and must be “fault tolerant” – commonly for mission critical or government datacenter facilities.
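To make the arithmetic concrete, here is a minimal sketch of how those redundancy schemes translate into purchased unit counts. The helper function and scheme strings are our own illustration, not an industry tool:

```python
def units_to_purchase(n_required: int, scheme: str) -> int:
    """Units to buy for a component that needs `n_required` operational units.

    Schemes follow the shorthand used above:
      "N"   - no redundancy
      "N+1" - one spare on top of the required count
      "2N"  - a fully duplicated set
    """
    if scheme == "N":
        return n_required
    if scheme == "N+1":
        return n_required + 1
    if scheme == "2N":
        return 2 * n_required
    raise ValueError(f"unknown redundancy scheme: {scheme}")

# The transformer example from the text: ten required units.
print(units_to_purchase(10, "N+1"))  # 11
print(units_to_purchase(10, "2N"))   # 20
```

A Rated 3 design, per the paragraph above, would apply "N+1" to transformers and generators and "2N" to UPS and PDU counts.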
As a side note, CSPs often talk about "three nines" (99.9%) or "five nines" (99.999%) expected availability – this is part of their Service Level Agreement with clients and covers the service's uptime, which is broader than just a single datacenter's expected availability. Multiple Datacenters are covered ("Availability Zones") and the uptime of components such as servers and networking is included.
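The "nines" shorthand maps directly onto allowed downtime per year. A quick back-of-the-envelope conversion (the helper below is our own, not part of any SLA tooling):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime per year for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

# "three nines" vs "five nines"
print(downtime_minutes_per_year(0.999))    # ~525.6 minutes (~8.8 hours)
print(downtime_minutes_per_year(0.99999))  # ~5.3 minutes
```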
From Retail Datacenters to Hyperscale Campuses
There are many shapes and sizes of Datacenters, and we usually categorize them based on their Critical IT power capacity (i.e. max power of IT equipment), in Kilowatts. This is because in the colocation business, operators lease empty racks and price it on a “IT kW-per-month” basis. Space in a datacenter is far less costly than the electrical and cooling equipment involved to power a client’s server.
Whereas Critical IT Power refers to the maximum power of the IT equipment, the actual draw from the power grid will include non-IT load such as cooling and lighting, as well as a Utilization rate factor. On average, Power Utilization rate is typically 50-60% for Cloud Computing workloads, and north of 80% for AI training. Enterprises are often even lower than 50%.
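As a rough sketch of that relationship: the helper below folds all non-IT load into a single assumed overhead multiplier (a PUE-style factor; the 1.3 value and the function itself are our own illustration, not figures from the article) and applies the utilization ranges quoted above.

```python
def estimated_grid_draw_mw(critical_it_mw: float,
                           utilization: float,
                           overhead_multiplier: float = 1.3) -> float:
    """Rough grid draw: utilized IT load scaled by a non-IT overhead factor.

    `overhead_multiplier` is an assumed PUE-like factor covering cooling,
    lighting, and other non-IT load; 1.3 is an illustrative placeholder.
    """
    return critical_it_mw * utilization * overhead_multiplier

# A 100 MW Critical IT facility at cloud-typical vs AI-training utilization.
print(estimated_grid_draw_mw(100, 0.55))  # ~71.5 MW
print(estimated_grid_draw_mw(100, 0.80))  # ~104 MW
```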
We categorize facilities into three major brackets:
Retail Datacenters: small facilities with a lower power capacity – at most a few Megawatts – but typically located within cities. They generally have many small tenants who only lease a few racks (i.e. a few kW). The value proposition lies in offering a strong network ecosystem by bringing together many different customers within the same facility. By offering easy interconnection to other customers and networks with low latency, operators of retail datacenters can lower customers’ networking costs. Thus, the business model of a retail datacenter operator is more akin to a traditional real estate play with its “location, location, location” value proposition.
Source: Google Earth (Here)
Wholesale Datacenters: larger facilities in the range of 10-30MW. Customers in these facilities tend to lease larger areas, i.e. a whole row or multiple rows, with the option to further expand. In contrast to retail datacenters, the value proposition is about deploying larger capacities and having scalability over time. Many wholesale datacenters are built out in phases to attain their ultimate capacity, which means they can expand as customers’ demand grows. Below is an example owned by Digital Realty.
Source: Google Earth (Here)
Hyperscale Datacenters: These facilities are commonly self-built by hyperscalers for their own exclusive use, typically with individual buildings of 40-100MW each, and part of larger campuses with multiple interconnected buildings. Such campuses are rated in the 100s of MW, such as the below Google site with close to 300MW of power. Big Tech firms can also engage a colocation provider to construct a “build-to-suit” datacenter that will be built to the specifications of the hyperscaler, then leased out to the hyperscaler. Build-to-suit lease sizes north of 100MW are increasingly common.
Source: SemiAnalysis Datacenter Model
We can also segment datacenters based on their operator: Colocation or Self-Build.
Colocation is simply renting datacenter capacity in units of power ($/kW/mth) from a third-party datacenter operator. A typical small size tenancy is 100-500kW of Critical IT Power, while wholesale sized tenancies would typically range from 1-5 MW. Hyperscale clients usually lease in sizes greater than 5 MW, and sometimes in the 100s of MW when leasing a full campus!
This can help hyperscalers be more efficient with capital by not having to pay the Capex up front and deal with many of the logistics. Hyperscalers also use arrangements in between leasing build-to-suit and self-build – for example they can lease a “warm shell” which has secured a ready connection to utility power, but with the hyperscaler building out their own Mechanical and Electrical infrastructure within the leased shell.
On the other hand, Self-Build datacenters are built privately by companies for their exclusive use. This was historically carried out by large companies in sectors with sensitive data such as finance, payments, healthcare, government and energy – for example, JPMorgan or Verizon. The design of these datacenters is very heterogeneous, but power capacity per facility generally falls between that of Retail and Wholesale datacenters.
But by far the most impactful trend in the datacenter market over the last 10 years has been the rise of Self-Build Hyperscale datacenters, largely driven by the rise of cloud computing and of social media platforms running increasingly powerful recommendation models and content delivery, among other workloads. As mentioned above, Hyperscalers can also take up colocation capacity. Common reasons to do so are markets where they lack the scale, local market knowledge or local teams needed to execute complex self-build projects, or as a means to grow their total capacity faster. Hyperscalers also have requirements for smaller-scale deployments closer to end customers, for purposes such as network edge and content delivery networks (CDN), where colocation is more appropriate.
To give a bit of perspective on the power requirements of Hyperscale campuses, an average home in the US can draw up to 10kW of power at a given time, but the actual average load is about 8x lower, at ~1.2kW. Therefore, the annual electricity consumption of a 300MW Datacenter Campus is easily equivalent to that of ~200,000 households, given the datacenter’s higher power utilization rate.
The table from our previous Datacenter deep dive should also help you relate those capacity numbers to AI deployments: a 20,840 Nvidia H100 cluster requires a datacenter with ~25.9MW of Critical IT Power capacity. This is still set to rise tremendously, as people are now building 100,000 H100 clusters and Gigawatt clusters.
Source: SemiAnalysis Datacenter Model
Now that we’ve covered the basic categories, let’s look at how we get power into these facilities.
The Electrical System of a Datacenter
We’ll start with a very simplified layout to understand how these facilities are designed. The goal is to deliver high amounts of power to racks filled with IT Equipment, placed in rooms called Data Halls. Doing that efficiently and safely while ensuring hardware lifetime requires a lot of equipment.
To minimize power distribution losses, we want to keep voltage as high as possible until we are physically close to the end device – higher voltage means lower current, and power loss is proportional to the square of current (P_loss = I²R).
But high voltage can be dangerous and requires more insulation, which isn’t suitable near a building – therefore Medium Voltage (e.g. 11kV or 25kV or 33kV) is the preferred solution for power delivery into the building. When getting inside the data hall we need to step down that voltage again, to Low Voltage (415V three-phase in the US).
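To make the tradeoff concrete, here is a minimal sketch (the numbers are illustrative, not from this article) comparing resistive losses when pushing the same power through the same cable resistance at medium vs low voltage:

```python
def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Resistive loss P_loss = I^2 * R, with I = P / V (single-phase simplification)."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

# Delivering 1 MW through 0.05 ohm of cable resistance:
loss_mv = line_loss_watts(1_000_000, 11_000, 0.05)  # ~413 W at 11 kV
loss_lv = line_loss_watts(1_000_000, 415, 0.05)     # ~290 kW at 415 V
```

Roughly a 700x difference, which is why voltage is stepped down only at the last possible moment.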
From the outside-in, power follows the following path:
Source: DEAC
The utility delivers either High Voltage (>100kV) or Medium Voltage power. In the former case, an on-site substation with power transformers is required to step it down to Medium Voltage (MV).
MV power will then be safely distributed using MV switchgear into another Transformer, physically near the data hall, stepping down voltage to Low Voltage (415V).
Paired with a transformer is a Diesel Generator that also outputs at 415V AC. If there is an outage from the electrical utility, an Auto Transfer Switch (ATS) will automatically switch power to the generator.
From here – there are then two power paths: one towards the IT Equipment, and the other cooling equipment:
The IT Equipment path first flows through a UPS system, which is connected to a bank of batteries: it is common to have 5-10min of battery storage, enough time for the generators to turn on (within a minute) and avoid a temporary outage.
The “UPS Power” is then supplied directly to IT Equipment, generally via Power Distribution Units (PDUs).
The last step is delivering electricity to the chip, via Power Supply Units (PSU) and Voltage Regulator Modules (VRM), which we covered here.
This diagram of course can vary dramatically based on the capacity of the Datacenter, but the general idea and power flows remains the same.
High Voltage Transformers
Modern Hyperscale datacenters are of course more complex than the diagram shown above. Such campuses typically have an on-site high-voltage electrical substation, such as the Microsoft site shown below, or the above Google complex.
Source: Google Earth, SemiAnalysis
Given the need for >100MW in a dense location, these facilities will typically be placed near High Voltage (HV) transmission lines (138kV or 230kV or 345kV). These lines can carry much more power than distribution lines at Medium Voltage (MV) – in some areas, regulators impose a maximum power draw based on the voltage level of power lines. Therefore, hyperscalers will require an on-site substation to step down voltage from HV to MV. If there is no pre-existing substation with HV Transformers, the datacenter operator either self-builds or funds the utility to build one.
These transformers are rated in MVA: an MVA is roughly equivalent to a MW, but MVA is “apparent power” (simply voltage * current), while a MW is the “real” power; MW is lower because of the power factor, which reflects inefficiencies in an AC power distribution system. A 5% difference is typical, but to have a cushion, datacenter operators often provision for a 10% power factor margin.
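As a quick illustration of the MVA/MW distinction, using the power factors mentioned above:

```python
def real_power_mw(apparent_mva: float, power_factor: float) -> float:
    """Real power (MW) = apparent power (MVA) * power factor."""
    return apparent_mva * power_factor

# An 80 MVA transformer with a typical 0.95 power factor:
typical = real_power_mw(80, 0.95)       # 76.0 MW of real power
# Provisioning conservatively with a 10% factor leaves more cushion:
conservative = real_power_mw(80, 0.90)  # 72.0 MW
```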
Typical High Voltage Transformers are rated between 50 MVA and 100 MVA: for example, a Datacenter campus requiring 150MW of peak power could use two 80 MVA transformers, or three for N+1 redundancy to cover a potential failure – each stepping down voltage from 230kV to 33kV and increasing current from ~350 amps to ~2500 amps. In such an N+1 configuration, all three transformers would share the load, but run at 2/3 of their rated capacity to detect any infant mortality (i.e. failure upon initial activation) and avoid deterioration that can occur on completely unused transformers. It is important to note that HV transformers are generally custom-made as each transmission line has its own characteristics, and therefore tend to have long lead times (>12mo). To alleviate the bottleneck, datacenter operators can preorder them as part of their planning process.
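The ~350 A and ~2,500 A figures can be sanity-checked with the standard three-phase relation I = S / (√3 · V); a sketch for the 150 MW campus example (treating the load as ~150 MVA for simplicity):

```python
import math

def three_phase_current_a(apparent_va: float, line_voltage_v: float) -> float:
    """Line current in a three-phase system: I = S / (sqrt(3) * V_line)."""
    return apparent_va / (math.sqrt(3) * line_voltage_v)

# ~150 MVA of campus load stepped down from 230 kV to 33 kV:
i_hv = three_phase_current_a(150e6, 230_000)  # ~377 A on the HV side
i_mv = three_phase_current_a(150e6, 33_000)   # ~2,624 A on the MV side
```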
Despite being a core piece of our electrical transmission system, transformers are very simple devices: they change the voltage and current of Alternating Current (AC) power from one level to another. This century-old technology works because electrical current produces a magnetic field – and AC produces a continuously changing magnetic field.
Source: The Engineering Mindset
Two copper coils are placed next to each other – when a portion of a wire is wound closely together, it creates a strong magnetic field. Placing two such coils nearby and driving one with AC will transfer power from one to the other via magnetic induction. In this process, while total power remains the same, we can change voltage and current by changing the characteristics of the wire.
If the secondary coil has fewer “turns” than the primary coil, power will be transferred at a lower voltage and higher current – this is an example of a step-down transformer.
Source: GeeksforGeeks
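The ideal-transformer relations above can be sketched directly from the turns ratio (the 10:1 ratio here is illustrative):

```python
def secondary_voltage_v(v_primary: float, n_primary: int, n_secondary: int) -> float:
    """Ideal transformer: V_s = V_p * (N_s / N_p)."""
    return v_primary * n_secondary / n_primary

def secondary_current_a(i_primary: float, n_primary: int, n_secondary: int) -> float:
    """Power is conserved, so current scales inversely: I_s = I_p * (N_p / N_s)."""
    return i_primary * n_primary / n_secondary

# A 10:1 step-down: voltage drops 10x, current rises 10x, total power is unchanged.
v_s = secondary_voltage_v(11_000, 1000, 100)  # 1,100 V
i_s = secondary_current_a(100, 1000, 100)     # 1,000 A
```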
The two major components of a transformer are copper for the coils, and steel for the “transformer core” whose role is to facilitate the transfer of energy. When dissecting the shortage of transformers, the issue is generally the latter: a specific type of steel is required, called GOES (Grain Oriented Electrical Steel), for which the number of manufacturers is limited.
Data Halls and Pods
Back to Datacenters and our power flow: we now have medium voltage power at one of 11kV, 25kV or 33kV (depending on the cluster configuration and location) and want to send that power to IT racks. Modern datacenters are built in a modular fashion, and the following Microsoft Datacenter is a perfect example.
Source: Google Earth, SemiAnalysis
A building is generally broken down into multiple Data Halls (blue rectangle) – a Data Hall is simply a room in which we place servers. In the above example, we believe that each building (~250k square feet) has a Critical IT capacity of 48MW, divided into five data halls, meaning 9.6MW per data hall.
Inside a Data Hall are located multiple “Pods”, and each Pod runs off its own dedicated set of Electrical Equipment: generators (orange rectangle), transformers (green rectangle), UPS and switchgear. In the above picture, we can see four generators and transformers per Data Hall. There are also four Pods per hall, which also means four low voltage Switchboards, and eight UPS systems assuming 2N distribution redundancy.
Source: Legrand
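The capacity breakdown described above is simple division; a sketch using the estimated Microsoft figures:

```python
building_mw = 48.0              # estimated Critical IT capacity per building
halls_per_building = 5
pods_per_hall = 4

hall_mw = building_mw / halls_per_building  # 9.6 MW per data hall
pod_mw = hall_mw / pods_per_hall            # 2.4 MW per pod
```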
Data Halls are typically broken into Pods for two reasons.
Modularity: a facility can progressively and quickly scale up to accommodate a higher load.
Standardization: a Pod’s size is designed to match the most standardized (i.e. cheap and readily available) electrical equipment. In the Microsoft example, we see multiple 3MW generators and 3MVA Transformers – these sizes are widely used across many industries and make procurement a lot easier than for larger, lower-volume, more custom units. The most common pod sizes are 1600kW, 2MW and 2.5MW, though theoretically any pod size is possible.
Generators, Medium Voltage Transformers and Power Distribution
After stepping down from High Voltage (i.e. 115kV or 230kV etc) to Medium Voltage (MV) (i.e. 33kV, 22kV or 11kV etc) with the help of a HV Transformer, we use Medium Voltage switchgear to distribute this medium voltage power near individual pods. Typical IT equipment like servers and networking switches can’t run on 11kV, so before getting power inside a data hall, we need another set of Medium Voltage (MV) transformers, generally 2.5 MVA or 3 MVA, to step down from MV (11kV/25kV/33kV) to LV (415V, a common voltage in the US).
The diagram below helps to illustrate a typical HV and MV distribution: how power is stepped down from HV to MV then distributed by MV Switchgear generally placed either outside or inside the facility and configured such that each data hall can be supplied by two different power sources, leaving no single point of failure.
Source: Schneider Electric
Medium Voltage switchgear is a factory-assembled metal enclosure filled with equipment to distribute, protect and meter power.
Source: Eaton
Inside these enclosures you will find the following devices:
Circuit breakers: an electrical safety device designed to interrupt current when it is running too high and prevent fire.
Metering components and relays.
A current transformer and a voltage transformer: they work in tandem with the breakers and the metering equipment.
A switch to turn the power on or off.
Medium voltage cables.
Source: Schneider Electric
As discussed above, Medium Voltage (MV) switchgear will route the MV power at 33kV or 22kV or 11kV to MV transformers. At this point, we are now physically very close to the actual IT racks: while current is much higher (4000-5000A), the LV cable runs are short, so power losses won’t be very high. Power is then distributed by LV (Low Voltage) Switchgear – this is a factory-assembled enclosure very similar to the above MV unit, and again filled with equipment to protect (breakers), meter and distribute electrical power.
Alongside every LV transformer is a generator that matches the power rating of the transformer and will step in in the event of a failure of the transformer or power supply upstream from the transformer. An Automatic Transfer Switch (ATS), usually a component of the LV switchgear, is used to automatically switch to the generators (2-3MW per unit in hyperscale campuses) as the main power source should this happen.
For context, a 3 MW generator has a horsepower north of 4,000, similar to a locomotive engine, and it is common to find 20 or more such units in a hyperscale datacenter! These units generally run on diesel, with natural gas being the main alternative. Datacenters commonly hold 24 to 48 hours of fuel at full load, and diesel’s superior ease of transportation and storage makes it often the preferred option. Diesel is also more energy efficient but pollutes more: due to regulatory constraints, diesel generators tend to be more expensive, as specific equipment is required to reduce environmental pollution.
Source: SemiAnalysis
Source: Data Center Frontier
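As a rough sizing sketch for the 24 to 48 hours of on-site fuel mentioned above (the ~0.27 L/kWh consumption rate is an assumed figure typical for large diesel gensets, not a number from this article):

```python
def fuel_litres(gen_kw: float, hours: float, litres_per_kwh: float = 0.27) -> float:
    """Diesel volume needed to run a generator at full load for a given duration."""
    return gen_kw * hours * litres_per_kwh

tank = fuel_litres(3000, 48)  # ~38,900 L (roughly 10,000 US gallons) per 3 MW unit
```

Multiply by 20+ units and on-site fuel storage becomes a non-trivial part of site design.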
Directly downstream from the Automatic Transfer Switch (ATS) is an uninterruptible power supply (UPS) system to ensure that the power supply never gets interrupted. These units incorporate power electronics and are connected to a bank of batteries to ensure a constant flow of power: generators generally take ~60 seconds to turn on and reach full capacity, but like your car engine, they occasionally don’t start on the first try. The role of a UPS system is to fill that gap, as their response time is typically lower than 10ms, using the following components:
An inverter: generally based on IGBT power semiconductors and converts DC power from the battery to AC, which is used in datacenters.
A rectifier: this converts AC power to DC and allows the UPS to charge the battery bank – it must be fully charged to ensure the power flow.
A battery bank, either Lead-Acid or Lithium. Lead-acid batteries are being replaced by Lithium, though the latter does have strict fire codes to comply with.
A Static Bypass switch: if the UPS has a fault, the load will automatically switch to the main power source. The load can also be manually switched if the UPS needs to be taken out of service for maintenance.
Source: Vertiv
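The 5-10 minutes of battery ride-through mentioned above translates into stored energy as follows (pod size is illustrative):

```python
def ride_through_kwh(load_kw: float, minutes: float) -> float:
    """Usable battery energy needed to carry the load until generators are online."""
    return load_kw * minutes / 60.0

energy = ride_through_kwh(2500, 5)  # ~208 kWh to bridge a 2.5 MW pod for 5 minutes
```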
A UPS can be a large source of inefficiency, with 3-5% losses typical, and further exacerbated when the load is low. Modern units can improve efficiency to >99% by operating in standby mode (“VFD” below) and bypassing the AC-DC-AC conversion, but this increases the transfer time by a few milliseconds (ms) and poses a risk of a short power interruption.
Source: Vertiv
Modern systems are modular: instead of having one fixed-size large unit, they are broken down into smaller “cores” that can be stacked together and work as one. In Vertiv’s latest product, cores are either 200kVA or 400kVA – for comparison, a Tesla Model 3 inverter can output 200kW of AC power. In a modular UPS, up to ten cores can be stacked in one unit – and up to eight units can work in parallel to further increase capacity, to a maximum of 27MW.
Source: Vertiv
A 2N redundancy on UPS systems (i.e. “2N Distribution”) for Rated 3 datacenters is typical. Downstream components such as PDUs will be 2N as well, allowing for a “Concurrently maintainable” facility.
Source: Schneider Electric
But hyperscalers typically use schemes such as 4N3R (four sets of equipment available vs three needed in normal operation) or N+2C also known as “Catcher” to increase UPS load utilization rate (better efficiency) and reduce CapEx per MW. In Catcher, instead of having two UPS systems each capable of handling the full load (2*3MW in the below example), we have an N+1 design with multiple smaller UPS (3*1MW) and a redundant unit. We use Static Transfer Switches (STS) to instantly switch the load from one UPS to another in case of a failure – STS are much faster than ATS as they rely on power electronics instead of mechanical components. In 4N3R, we use four independent power systems from distribution to the backplane (i.e. from the power whips all the way to the generator and transformer), of which only three are needed for operation.
2N distribution, however, is the simplest to understand and it is commonly used by retail and wholesale colocation operators that operate Rated 3 datacenters. In 2N distribution, the two independent power distribution systems (from the UPS down to the whips) are known as A side and B side, with the IT racks able to use one side if power supply is interrupted on the other side due to any component failures.
Source: SOCOMEC
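The efficiency argument for Catcher over plain 2N reduces to UPS load utilization; a sketch using the 3 MW example above:

```python
def ups_utilization(load_mw: float, units: int, unit_mw: float) -> float:
    """Average load as a fraction of total installed UPS capacity."""
    return load_mw / (units * unit_mw)

u_2n = ups_utilization(3, 2, 3.0)       # 0.50 -- two full-size systems, each half loaded
u_catcher = ups_utilization(3, 4, 1.0)  # 0.75 -- N+1 "Catcher" with 1 MW units
```

Higher utilization means the UPS runs closer to its efficiency sweet spot, with less installed capacity (CapEx) per MW of load.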
We now have UPS power entering inside the Data Hall, and there are still a few other pieces of equipment before delivering electricity to our CPUs, GPUs and other IT components. We’ll now explore the typical layout of IT racks to better understand how it all works.
Racks are generally placed next to each other and form a row. In the picture below, each room has six rows with 26 racks each, but this can of course vary widely.
Source: Schneider Electric
Power is distributed by either an overhead busway – a solid bar of conducting metal, usually copper – or using flexible power cables. In the example above, a main “room” busway distributes power to smaller “row” busways, of which there are three per row.
When using busway, a power distribution unit (PDU) in addition to a remote power panel (RPP) is used to manage, monitor and distribute power to individual rows and racks using the busway. Tap-off units attached to the busway above each rack provide power to the rack using whips, flexible cables that run from the tap-off box to an in-rack power supply or to power shelves in the rack.
Source: Vertiv
When using flexible power cables, a power distribution unit (PDU) outside of the rack is used, which also manages distribution and contains circuit breakers for the individual racks. These flexible power cables are then routed directly into each of the racks.
Source: Vertiv
Those are two different solutions to accomplish the same goal: safely distribute low voltage power to the servers. Both PDUs and Busways integrate another set of circuit breakers and metering devices.
Legacy datacenters tend to use flexible cables and PDUs, but when dealing with large amounts of power and high density, busway is often the preferred solution and has been widely adopted by hyperscalers for numerous years. To achieve redundancy, busways are used in pairs, powered by independent UPS systems and there are typically two busbar tap-off units for each rack – one for A side and one for B side, representing the two independent power distribution sides in a 2N distribution redundancy scheme.
Source: Datacenterknowledge
Inside the rack, we often use vertical PDUs, shown in the below diagram. We have them on the two sides of the rack, one for A side and one for B side, to achieve 2N distribution redundancy, and thus no single point of failure.
Source: Vertiv
OCP Racks and BBUs
The above describes a typical power flow in a datacenter, but in their quest for efficiency, hyperscalers often deviate from typical deployments. A great example is the Open Compute Project (OCP) rack introduced by Meta a decade ago.
In the OCP architecture, instead of vertical in-rack PDUs delivering AC power to each server (with each server having its own rectifier to convert AC to DC), central Power Shelves take care of that step, converting AC to DC for the entire rack and supplying servers with DC power via a busbar. Power shelves are typically modular – in the below example, we can see six 3kW modules per unit. The OCP design requires custom server design that includes a bar clip for connecting to the DC busbar and that does not have a rectifier inside.
Source: StorageReview
Power Shelves can also incorporate a Battery Backup Unit (BBU), with Li-Ion batteries supporting a few minutes of load, acting as an “in-rack UPS” thus obviating the need for any central UPS. By bypassing the central UPS, efficiency is improved as the battery’s DC power can directly supply the IT Equipment.
This also has the benefit of cutting in half the total battery capacity needed for the datacenter as there is no longer a need for both an A-side and a B-side UPS with only a single in-rack battery used for backup. The downside of this approach is that placing Lithium batteries inside the rack requires advanced fire suppression solutions to meet fire codes, whereas in a central UPS system, all the batteries can be isolated in a fire-resistant room.
Source: Schneider Electric
To further improve efficiency, Google introduced the 48V busbar, as explained in detail in our report on VRMs for AI Accelerators.
Source: Google
Let’s now explore how the rise of AI Datacenters will impact the equipment supplier landscape and datacenter design. We’ll also explain why Meta was in such a rush and decided to demolish facilities under construction.
Pushing The Limits Of Traditional Datacenters
Generative AI brings new computing requirements on a very large scale which significantly changes Datacenter design and planning. The first big change is power: as explained in our Datacenter energy, 100k H100 and Gigawatt scale multi-datacenter training reports, the electricity requirements of AI are rising extremely fast, and 50MW+ per facility won’t be enough next year.
The second major change is computing density. As we’ve discussed in depth for many years, networking is a key piece to increase cluster sizes. This is a significant cost item in terms of CapEx, but even more importantly, a poorly designed network can significantly reduce the utilization rate of your expensive GPUs.
Generally, in a large cluster, we want to use as much copper as possible, both for the scale-up network and the scale-out network. Using copper electrical cables for communications avoids the use of fiber optic transceivers, which are costly, consume power and introduce latency. But copper’s reach is typically limited to just a couple meters at most when transmitting at very high speed – therefore, GPUs must be kept as close together as possible in order to run communication over copper.
The prime example of AI’s impact on computing density is the latest rack-scale GPU server from Nvidia shown below: the GB200 family. We published a comprehensive analysis of its architecture here. The NVL72 version is a rack composed of 72 GPUs totaling 130kW+ of power. All GPUs are interconnected by the ultra-fast scale-up network NVLink, allowing a 9x increase in inference performance throughput for the largest language models vs an H100.
Back to Datacenter Power: the key item here is 130 kW+ per rack. How different is that from the past? Let’s just look at the below chart: average rack densities used to be below 10kW, and Omdia projects a rise to 14.8kW by 2030. Omdia’s numbers are wrong even on a historical basis, and this number will be much higher going forward.
Source: Vertiv & OMDIA
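To see why 130 kW racks upend facility design, compare how many racks a 9.6 MW data hall supports at different densities (simple illustrative arithmetic):

```python
def racks_per_hall(hall_kw: float, rack_kw: float) -> int:
    """Number of racks of a given density a hall's Critical IT power can support."""
    return int(hall_kw // rack_kw)

legacy = racks_per_hall(9600, 10)   # 960 racks at a legacy ~10 kW density
gb200 = racks_per_hall(9600, 130)   # 73 racks of GB200 NVL72
```

The same electrical capacity now feeds an order of magnitude fewer racks, concentrating power (and heat) into a far smaller footprint.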
Hyperscalers tend to have widely varying rack densities that also differ across building type. Generally, Meta has the lowest density, in the 10s of kW, while Google has the highest density racks, usually above 40 kW.
Referring back to the intro, Meta demolished an entire building under construction because it used its old low-power-density datacenter design, which had been in service for many years. Instead, they replaced it with a brand-new AI-ready design.
Power density, alongside cooling (which we’ll explore in Part Two), are the key reasons that led Meta to such a sharp turn. Meta’s reference design, i.e. the “H” building, has a much lower power density compared to competitors. While hyperscalers don’t publish the precise MW capacity of their buildings, we can estimate it using permitting data, utility filings and other data sources.
Source: SemiAnalysis Datacenter Model
Our Datacenter Model subscribers have the full detail, but as an oversimplified rule of thumb, we can simply count generators. A Meta “H” has up to 36 generator units, compared to Google’s 34. But Google uses larger generators, and each of its buildings is less than half the size of an “H”. When comparing power density, Google is >3x denser than Meta in terms of kW per square foot. In addition, given its size and complex structure, an “H” building takes a long time to build – around two years from start to completion, compared to 6-7 months for one of Google’s buildings.
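The generator-counting rule of thumb can be sketched as follows; note that the redundancy allowance and non-IT overhead factor below are assumptions for illustration, not the actual model's parameters:

```python
def estimated_critical_it_mw(gen_units: int, gen_mw: float,
                             redundant_units: int = 2,
                             overhead_factor: float = 1.3) -> float:
    """Rough estimate: firm generator capacity (excluding assumed redundant units),
    divided by a PUE-style factor for cooling and auxiliary load."""
    firm_mw = (gen_units - redundant_units) * gen_mw
    return firm_mw / overhead_factor

meta_h = estimated_critical_it_mw(36, 3.0)  # ~78 MW under these assumptions
```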
While the “H” has merits, of which the most notable is energy efficiency (more on that in Part Two on cooling), it was a key competitive disadvantage compared to other hyperscalers in the GenAI arms race.
Next let’s discuss the winners and losers on the datacenter scale and show our capex per component breakdown for a Datacenter.
How to build a next-generation Blackwell Datacenter – winners and losers
rqbit - bittorrent client in Rust
rqbit is a bittorrent client written in Rust. Has HTTP API and Web UI, and can be used as a library.
Also has a desktop app built with Tauri.
Usage quick start
Optional - start the server
Assuming you are downloading to ~/Downloads.
rqbit server start ~/Downloads
Download torrents
Assuming you are downloading to ~/Downloads. If the server is already started, -o ~/Downloads can be omitted.
rqbit download -o ~/Downloads 'magnet:?....' [https?://url/to/.torrent] [/path/to/local/file.torrent]
Web UI
Access with http://localhost:3030/web/. It looks similar to Desktop app, see screenshot below.
Desktop app
The desktop app is a thin wrapper on top of the Web UI frontend.
Download it in Releases for OSX and Windows. For Linux, build manually with
Streaming support
rqbit can stream torrent files and smartly block the stream until the pieces are available. The pieces getting streamed are prioritized. All of this allows you to seek and live stream videos for example.
You can also stream to e.g. VLC or other players with HTTP URLs. Supports seeking too (through various range headers).
The streaming URLs look like http://IP:3030/torrents/<torrent_id>/stream/<file_id>
Integrated UPnP Media Server
rqbit can advertise managed torrents to LAN, e.g. your TVs and stream torrents there (without transcoding). Seeking to arbitrary points in the videos is supported too.
Usage from CLI
rqbit --enable-upnp-server server start ...
Performance
Anecdotally from a few reports, rqbit is faster than other clients they've tried, at least with their default settings.
Memory usage for the server is usually within a few tens of megabytes, which makes it great for e.g. RaspberryPI.
CPU is spent mostly on SHA1 checksumming.
Installation
There are pre-built binaries in Releases.
If someone wants to put rqbit into e.g. homebrew, PRs welcome :)
If you have rust toolchain installed, this should work:
Docker
Docker images are published at ikatson/rqbit
Build
Just a regular Rust binary build process.
The "webui" feature requires npm installed.
Useful options
-v
Increase verbosity. Possible values: trace, debug, info, warn, error.
--list
Will print the contents of the torrent file or the magnet link.
--overwrite
If you want to resume downloading a file that already exists, you'll need to add this option.
--peer-connect-timeout=10s
This will increase the default peer connect timeout. The default one is 2 seconds, and it's sometimes not enough.
-r / --filename-re
Use a regex here to select files by their names.
Features and missing features
Some supported features
Sequential downloading (the default and only option)
Resume downloading file(s) if they already exist on disk
Selective downloading using a regular expression for filename
DHT support. Allows magnet links to work, and makes more peers available.
HTTP API
Pausing / unpausing / deleting (with files or not) APIs
Stateful server
Web UI
Streaming, with seeking
UPNP port forwarding to your router
UPNP Media Server
Fastresume (no rehashing)
HTTP API
By default it listens on http://127.0.0.1:3030.
curl -s 'http://127.0.0.1:3030/'
{
"apis": {
"GET /": "list all available APIs",
"GET /dht/stats": "DHT stats",
"GET /dht/table": "DHT routing table",
"GET /torrents": "List torrents (default torrent is 0)",
"GET /torrents/{id_or_infohash}": "Torrent details",
"GET /torrents/{id_or_infohash}/haves": "The bitfield of have pieces",
"GET /torrents/{id_or_infohash}/peer_stats": "Per peer stats",
"GET /torrents/{id_or_infohash}/stats/v1": "Torrent stats",
"GET /web/": "Web UI",
"POST /rust_log": "Set RUST_LOG to this post launch (for debugging)",
"POST /torrents": "Add a torrent here. magnet: or http:// or a local file.",
"POST /torrents/{id_or_infohash}/delete": "Forget about the torrent, remove the files",
"POST /torrents/{id_or_infohash}/forget": "Forget about the torrent, keep the files",
"POST /torrents/{id_or_infohash}/pause": "Pause torrent",
"POST /torrents/{id_or_infohash}/start": "Resume torrent",
"POST /torrents/{id_or_infohash}/update_only_files": "Change the selection of files to download. You need to POST json of the following form {"only_files": [0, 1, 2]}"
},
"server": "rqbit"
}
Add torrent through HTTP API
curl -d 'magnet:?...' http://127.0.0.1:3030/torrents
OR
curl -d 'http://.../file.torrent' http://127.0.0.1:3030/torrents
OR
curl --data-binary @/tmp/xubuntu-23.04-minimal-amd64.iso.torrent http://127.0.0.1:3030/torrents
Supported query parameters, all optional:
overwrite=true|false
only_files_regex - the regular expression string to match filenames
output_folder - the folder to download to. If not specified, defaults to the one that rqbit server started with
list_only=true|false - if you want to just list the files in the torrent instead of downloading
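Putting the optional parameters together, a client might build the add-torrent request URL like this (a sketch; only the query-string construction runs here — the magnet link or .torrent body is still POSTed separately, e.g. with curl's `-d`):

```python
from urllib.parse import urlencode

def add_torrent_url(base: str = "http://127.0.0.1:3030", **params: str) -> str:
    """Build the POST /torrents URL with the optional query parameters listed above."""
    qs = urlencode(params)
    return f"{base}/torrents" + (f"?{qs}" if qs else "")
```

For example, `add_torrent_url(overwrite="true", list_only="true")` yields a URL that lists the torrent's files without downloading, overwriting checks relaxed.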
Code organization
crates/rqbit - main binary
crates/librqbit - main library
crates/librqbit-core - torrent utils
crates/bencode - bencode serializing/deserializing
crates/buffers - wrappers around binary buffers
crates/clone_to_owned - a trait to make something owned
crates/sha1w - wrappers around sha1 libraries
crates/peer_binary_protocol - the protocol to talk to peers
crates/dht - Distributed Hash Table implementation
crates/upnp - upnp port forwarding
crates/upnp_serve - upnp MediaServer
desktop - desktop app built with Tauri
Motivation
First of all, I love Rust. This project began purely out of my enjoyment of writing code in Rust. I wasn’t satisfied with my regular BitTorrent client and wanted to see how much effort it would take to build one from scratch. Starting with the bencode protocol, then the peer protocol, it gradually evolved into what it is today.
What really drives me to keep improving rqbit is seeing genuine interest from users. For example, my dad found the CLI too difficult, so I built a desktop app just for him. Later, he got into Docker Compose and now runs rqbit on his NAS, showing how user feedback and needs can inspire new features. You can find other examples of new features born from user requests in the issues/PRs.
Hearing from people who use and appreciate rqbit keeps me motivated to continue adding new features and making it even better.
Donations and sponsorship
If you love rqbit, please consider donating through one of these methods. With enough support, I might be able to make this my full-time job one day — which would be amazing!
Github Sponsors
Crypto
ETH (Ethereum) 0x68c54b26b5372d5f091b6c08cc62883686c63527
XMR (Monero) 49LcgFreJuedrP8FgnUVB8GkAyoPX7A9PjWfKZA1hNYz5vPCEcYQ9HzKr3pccGR6Lc3V3hn52bukwZShLDhZsk57V41c2ea
XNO (Nano) nano_1ghid3z6x41x8cuoffb6bbrt4e14wsqdbyqwp5d8rk166meo3h77q7mkjusr
| 2024-11-07T22:43:29 | en | train |
42,031,871 | JNRowe | 2024-11-03T08:41:23 | Profiling Without Frame Pointers | null | https://blogs.gnome.org/chergert/2024/11/03/profiling-w-o-frame-pointers/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,883 | ameliagonzalex- | 2024-11-03T08:44:10 | The Comprehensive Approach of Washington Recovery Pro to Bitcoin Recovery | The Bitcoin Recovery Chronicles tell an epic tale of perseverance, technological prowess, and the triumph of the human spirit against seemingly insurmountable odds. At the heart of this saga is the story of Washington Recovery Pro, a digital forensics expert whose unparalleled skills were put to the ultimate test. When a client's digital wallet was compromised, resulting in the loss of a staggering 71,000 bitcoins, the stakes could not have been higher. But Washington Recovery Pro , undaunted by the scale of the challenge, dove headfirst into the complex web of blockchain transactions, utilizing his encyclopedic knowledge of cryptocurrency protocols and his razor-sharp analytical mind. Through painstaking investigation, meticulous data analysis, and innovative techniques, he was able to meticulously trace the flow of the stolen assets, navigating the labyrinthine pathways of the digital underworld. With unwavering determination, Washington Recovery Pro pursued the elusive trail, outmaneuvering the shadowy figures responsible and ultimately recovering the entirety of the lost fortune - a feat that left the cryptocurrency community in awe. This remarkable achievement not only restored the client's financial security but also solidified Washington Recovery Pro's reputation as a true master of his craft, a digital wizard whose skills transcend the boundaries of the virtual realm. The Bitcoin Recovery Chronicles stand as a testament to the power of human ingenuity, the resilience of the blockchain, and the triumph of good over those who would seek to exploit the digital frontier for their own nefarious ends.<p>WhatsApp: +1 (903) 249‑8633
Email: ([email protected])
Telegram:https://t.me/Washingtonrecoverypro | null | 1 | 2 | [
42047511,
42050144,
42049837,
42046422,
42036030
] | null | null | null | null | null | null | null | null | null | train |
42,031,889 | _tk_ | 2024-11-03T08:45:43 | Stepping out of the shadows: ASIS asks publicly, 'Do you want in on the secret?' | null | https://www.aspistrategist.org.au/stepping-out-of-the-shadows-asis-asks-publicly-do-you-want-in-on-the-secret/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,894 | phibr0 | 2024-11-03T08:47:24 | Use Git Add -p | null | https://gist.github.com/mattlewissf/9958704 | 3 | 1 | [
42032909,
42031975
] | null | null | null | null | null | null | null | null | null | train |
42,031,918 | writeboywrite | 2024-11-03T08:55:34 | null | null | null | 1 | null | [
42031919
] | null | true | null | null | null | null | null | null | null | train |
42,031,921 | lispybanana | 2024-11-03T08:56:12 | Killing or Wounding to Protect a Property Interest (1971) | null | https://www.journals.uchicago.edu/doi/abs/10.1086/466708 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,924 | epilys | 2024-11-03T08:57:11 | What has case distinction but is neither uppercase nor lowercase? | null | https://devblogs.microsoft.com/oldnewthing/20241031-00/?p=110443 | 13 | 2 | [
42032317,
42032252,
42032113
] | null | null | no_error | What has case distinction but is neither uppercase nor lowercase? - The Old New Thing | 2024-10-31T14:00:00+00:00 | Raymond Chen |
If you go exploring the Unicode Standard, you may be surprised to find that there are some characters that have case distinction yet are themselves neither uppercase nor lowercase.
Oooooh, spooky.
In other words, it is a character c with the properties that
toUpper(c) ≠ toLower(c), yet
c ≠ toUpper(c) and c ≠ toLower(c).
Congratulations, you found the mysterious third case: Title case.
There are some Unicode characters that occupy a single code point but represent two graphical symbols packed together. For example, the Unicode character dz (U+01F3 LATIN SMALL LETTER DZ) looks like two Unicode characters placed next to each other: dz (U+0064 LATIN SMALL LETTER D followed by U+007A LATIN SMALL LETTER Z).
These digraphs are characters in the alphabets of some languages, most notably Hungarian. In those languages, the digraph is considered a separate letter of the alphabet. For example, the first ten letters of the Hungarian alphabet are¹
a
á
b
c
cs
d
dz
dzs
e
é
These digraphs (and one trigraph) have three forms.
Form        Result
Uppercase   DZ
Title case  Dz
Lowercase   dz
Unicode includes four digraphs in its encoding.

Uppercase   Title case   Lowercase
DŽ          Dž           dž
LJ          Lj           lj
NJ          Nj           nj
DZ          Dz           dz
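Python's built-in case mappings expose all three forms, which makes the "neither uppercase nor lowercase" property easy to check. A small sketch using the non-caron family (U+01F1/U+01F2/U+01F3):

```python
# U+01F1 DZ (uppercase), U+01F2 Dz (title case), U+01F3 dz (lowercase)
DZ, Dz, dz = "\u01F1", "\u01F2", "\u01F3"

# The title-case letter has case distinction: it maps to distinct upper/lower forms...
assert Dz.upper() == DZ and Dz.lower() == dz
# ...yet it is itself neither its own uppercase nor its own lowercase form.
assert Dz != Dz.upper() and Dz != Dz.lower()
# And title-casing the lowercase digraph yields the single title-case code point.
assert dz.title() == Dz
```

This is exactly the "mysterious third case" from the definition at the top of the article.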
But wait, we have a Unicode code point for the dz digraph, but we don’t have one for the cs digraph or the dzs trigraph. What’s so special about dz?
These digraphs owe their existence in Unicode not to Hungarian but to Serbo-Croatian. Serbo-Croatian is written in both Latin script (Croatian) and Cyrillic script (Serbian), and these digraphs permit one-to-one transliteration between them.¹
Just another situation where the world is more complicated than you think. You thought you understood uppercase and lowercase, but there’s another case in between that you didn’t know about.
Bonus chatter: The fact that dz is treated as a single letter in Hungarian means that if you search for “mad”, it should not match “madzag” (which means “string”) because the “dz” in “madzag” is a single letter and not a “d” followed by a “z”, no more than “lav” should match “law” just because the first part of the letter “w” looks like a “v”. Another surprising result if you mistakenly use a literal substring search rather than a locale-sensitive one. We’ll look at locale-sensitive substrings searches next time.
¹ I got this information from the Unicode Standard, Version 15.0, Chapter 7: “Europe I”, Section 7.1: “Latin”, subsection “Latin Extended-B: U+0180-U+024F”, sub-subsection “Croatian Digraphs Matching Serbian Cyrillic Letters.”
Author
Raymond has been involved in the evolution of Windows for more than 30 years. In 2003, he began a Web site known as The Old New Thing which has grown in popularity far beyond his wildest imagination, a development which still gives him the heebie-jeebies. The Web site spawned a book, coincidentally also titled The Old New Thing (Addison Wesley 2007). He occasionally appears on the Windows Dev Docs Twitter account to tell stories which convey no useful information. | 2024-11-08T13:36:43 | en | train |
42,031,930 | donutloop | 2024-11-03T08:59:50 | US sanctions China and India suppliers of Russia's war machine in Ukraine | null | https://www.scmp.com/news/world/russia-central-asia/article/3284529/us-sanctions-china-and-india-suppliers-russias-war-machine-ukraine | 3 | 1 | [
42040944,
42031962
] | null | null | null | null | null | null | null | null | null | train |
42,031,933 | bariscan | 2024-11-03T09:01:34 | Show HN: We Build Custom Slack Emoji Creator with AI | Hey there! We're just a bunch of Slack enthusiasts who love making our workspaces more fun and expressive. We built this tool because we believe that custom emojis make team communication more enjoyable and personal.<p>We created this application with fully AI and Claude. | https://www.slackemoji.net/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,941 | mixeden | 2024-11-03T09:04:41 | 4D-Based Robot Navigation Using Relativistic Image Processing | null | https://synthical.com/article/4D-based-Robot-Navigation-Using-Relativistic-Image-Processing-ac76007b-a721-49e1-97c4-718d21f6b5a5 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,948 | vulnerabiliT | 2024-11-03T09:06:00 | Exploiting 0-Day Opera Vulnerability with a Cross-Browser Extension Store Attack | null | https://labs.guard.io/crossbarking-exploiting-a-0-day-opera-vulnerability-with-a-cross-browser-extension-store-attack-db3e6d6e6aa8 | 3 | 0 | [
42032013
] | null | null | null | null | null | null | null | null | null | train |
42,031,972 | razodactyl | 2024-11-03T09:13:11 | null | null | null | 1 | null | [
42031981
] | null | true | null | null | null | null | null | null | null | train |
42,031,985 | 082349872349872 | 2024-11-03T09:15:25 | Koch Method to Learn Morse | null | https://stendec.io/morse/koch.html | 3 | 0 | [
42031996
] | null | null | cut_off | Koch method to learn Morse | null | Elvis Pfützenreuter PU5EPX |
The Koch method is based on exposing the student to full-speed Morse from day one.
The first lesson starts with just two characters, played at full speed. The student
must "copy" them (i.e. write them down or type them, as on this page). Once 90% of the
characters are correctly copied, the student can move on to the next lesson, where just one
more character is added.
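The advancement rule sketches into a few lines (the 90% threshold and one-character increments are the method as described; the cap of 40 characters is a placeholder for the full lesson set):

```python
def next_char_count(current: int, accuracy: float, total: int = 40) -> int:
    """Advance to the next lesson (one more character) once copy accuracy reaches 90%."""
    if accuracy >= 0.90 and current < total:
        return current + 1
    return current
```

So a session scoring 95% on a two-character lesson moves to three characters; a session scoring 85% repeats the same lesson.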
The Morse parameters above are suggestions that happened to work well for me. You can play
with the parameters on this page or in the Web Morse player.
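For reference, the words-per-minute setting maps to element timing via the standard 50-dit-unit PARIS word, so one dit lasts 1.2/WPM seconds:

```python
def dit_seconds(wpm: float) -> float:
    """Dit duration for a given character speed, per the 50-unit PARIS standard word."""
    return 1.2 / wpm
```

At 15 WPM a dit is 0.08 s; at 20 WPM, 0.06 s — which is why full-speed characters sound fast even in the first lesson.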
After lesson 39, there are two "post-training" or maintenance lessons that you can use
to try different speeds, or just to keep from forgetting the Morse code. The option "Post-training: all chars"
creates a lesson in which every possible character is guaranteed to be included, just like lesson 39.
The option "Post-training: mostly letters" reduces the frequency of punctuation and numeric symbols to
better represent the real-world frequency of characters.
| 2024-11-08T08:43:01 | en | train |
42,031,989 | circuitai | 2024-11-03T09:15:55 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,005 | asplake | 2024-11-03T09:21:46 | Please Publish and Share More | null | https://micro.webology.dev/2024/11/02/please-publish-and.html | 3 | 4 | [
42032037,
42032068,
42032011
] | null | null | null | null | null | null | null | null | null | train |
42,032,018 | ZeljkoS | 2024-11-03T09:24:10 | DNA-testing site 23andMe fights for survival | null | https://www.bbc.com/news/articles/c4gm08nlxr3o | 3 | 0 | [
42032140
] | null | null | no_error | DNA-testing site 23andMe fights for survival | 2024-11-03T01:19:58.996Z | Zoe Kleinman | Getty ImagesThree years ago, the DNA-testing firm 23andMe was a massive success, with a share price higher than Apple's.But, from those heady days of millions of people rushing to send it saliva samples in return for detailed reports about their ancestry, family connections and genetic make-up, it now finds itself fighting for its survival.Its share price has plummeted and this week it narrowly avoided being delisted from the stock market.And of course this is a company that holds the most sensitive data imaginable about its customers, raising troubling questions about what might happen to its huge – and extremely valuable – database of individual human DNA.When contacted by the BBC, 23andMe was bullish about its prospects - and insistent it remained "committed to protecting customer data and consistently focused on maintaining the privacy of our customers."But how did what was once one of the most talked-about tech firms get to the position where it has to answer questions about its very survival?DNA gold rushNot so long ago, 23andMe was in the public eye for all the right reasons.Its famous customers included Snoop Dogg, Oprah Winfrey, Eva Longoria and Warren Buffet - and millions of users were getting unexpected and life-changing results.Some people discovered that their parents were not who they thought they were, or that they had a genetic pre-disposition to serious health conditions. Its share price rocketed to $321.Fast forward three years and that price has slumped to just under $5 - and the company is worth 2% of what it once was.What went wrong?Getty ImagesCo-founder Anne Wojcicki with then husband Sergei Brin at a 23andMe "Spit party" in New YorkAccording to Professor Dimitris Andriosopoulos, founder of the Responsible Business Unit at Strathclyde University, the problem for 23andMe was twofold. 
Firstly, it didn't really have a continuing business model – once you'd paid for your DNA report, there was very little for you to return for. Secondly, plans to use an anonymised version of the gathered DNA database for drug research took too long to become profitable, because the drug development process takes so many years. That leads him to a blunt conclusion: "If I had a crystal ball, I'd say they will maybe last for a bit longer," he told the BBC. "But as it currently is, in my view, 23andMe is highly unlikely to survive."

The problems at 23andMe are reflected in the turmoil in its leadership. The board resigned in the summer and only the CEO and co-founder Anne Wojcicki – sister of the late YouTube boss Susan Wojcicki and ex-wife of Google co-founder Sergei Brin – remains from the original line-up. Rumours have swirled that the firm will shortly either fold or be sold - claims that it rejects. "23andMe's co-founder and CEO Anne Wojcicki has publicly shared she intends to take the company private, and is not open to considering third party takeover proposals," the company said in a statement. But that hasn't stopped the speculation, with rival firm Ancestry calling for US competition regulators to get involved if 23andMe does end up for sale.

What happens to the DNA?

Companies rising and falling is nothing new - especially in tech. But 23andMe is different. "It's worrying because of the sensitivity of the data," says Carissa Veliz, author of Privacy is Power. And that is not just for the individuals who have used the firm. "If you gave your data to 23andMe, you also gave the genetic data of your parents, your siblings, your children, and even distant kin who did not consent to that," she told the BBC. David Stillwell, professor of computational social science at Cambridge Judge Business School, agrees the stakes are high. "DNA data is different. If your bank account details are hacked, it will be disruptive but you can get a new bank account," he explained. "If your (non-identical) sibling has used it, they share 50% of your DNA, so their data can still be used to make health predictions about you."

The company is adamant these kinds of concerns are without foundation. "Any company that handles consumer information, including the type of data we collect, there are applicable data protections set out in law required to be followed as part of any future ownership change," it said in its statement. "The 23andMe terms of service and privacy statement would remain in place unless and until customers are presented with, and agree to, new terms and statements." There are also legal protections which apply in the UK under its version of the data protection law, GDPR, whether the firm goes bust or changes hands. Even so, all companies can be hacked - as 23andMe was 12 months ago. And Carissa Veliz remains uneasy - and says ultimately a much more robust approach is needed if we want to keep our most personal information safe. "The terms and conditions of these companies are typically incredibly inclusive; when you give out your personal data to them, you allow them to do pretty much anything they want with it," she said. "Until we ban the trade in personal data, we are not well protected enough."

Additional reporting by Tom Gerken
42,032,035 | keepamovin | 2024-11-03T09:27:22 | Advances in Zero-Knowledge Proofs: Bridging the Gap Between Theory and Practice [pdf] | null | https://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-35.pdf | 25 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,048 | hestefisk | 2024-11-03T09:29:13 | Java Server Faces | null | https://en.wikipedia.org/wiki/Jakarta_Faces | 2 | 0 | [
42032062
] | null | null | null | null | null | null | null | null | null | train |
42,032,055 | edanm | 2024-11-03T09:30:44 | Steven Rudich (1961-2024) | null | https://scottaaronson.blog/?p=8449 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,056 | thunderbong | 2024-11-03T09:30:50 | QuickJS JavaScript Engine | null | https://bellard.org/quickjs/ | 3 | 1 | [
42032289,
42032077
] | null | null | no_error | QuickJS Javascript Engine | null | null |
News
2024-01-13:
New release (Changelog)
2023-12-09:
New release (Changelog)
Introduction
QuickJS is a small and embeddable Javascript engine. It supports the
ES2023 specification
including modules, asynchronous generators, proxies and BigInt.
It optionally supports mathematical extensions such as big decimal
floating point numbers (BigDecimal), big binary floating point numbers
(BigFloat) and operator overloading.
Main Features:
Small and easily embeddable: just a few C files, no external
dependency, 210 KiB of x86 code for a simple hello world
program.
Fast interpreter with very low startup time: runs the 76000 tests
of the ECMAScript Test
Suite in less than 2 minutes on a single core of a desktop PC. The
complete life cycle of a runtime instance completes in less than 300
microseconds.
Almost
complete ES2023
support including modules, asynchronous generators and full Annex B
support (legacy web compatibility).
Passes nearly 100% of the ECMAScript Test Suite tests when selecting the ES2023 features. A summary is available at Test262 Report.
Can compile Javascript sources to executables with no external dependency.
Garbage collection using reference counting (to reduce memory usage
and have deterministic behavior) with cycle removal.
Mathematical extensions: BigDecimal, BigFloat, operator overloading, bigint mode, math mode.
Command line interpreter with contextual colorization implemented in Javascript.
Small built-in standard library with C library wrappers.
Benchmark
Online Demo
An online demonstration of the QuickJS engine with its mathematical
extensions is available
at numcalc.com. It was compiled from
C to WASM/asm.js with Emscripten.
qjs and qjscalc can be run in JSLinux.
Documentation
QuickJS documentation: HTML version,
PDF version.
Specification of the JS Bignum Extensions: HTML
version, PDF version.
Download
QuickJS source code: quickjs-2024-01-13.tar.xz
QuickJS extras (contain the unicode files needed to rebuild the unicode tables and the bench-v8 benchmark): quickjs-extras-2024-01-13.tar.xz
Official GitHub mirror.
Binary releases are available in jsvu, esvu and here.
Cosmopolitan binaries running on Linux, Mac, Windows, FreeBSD, OpenBSD, NetBSD for both the ARM64 and x86_64
architectures: quickjs-cosmo-2024-01-13.zip.
Typescript compiler compiled with QuickJS: quickjs-typescript-4.0.0-linux-x86.tar.xz
Babel compiler compiled with QuickJS: quickjs-babel-linux-x86.tar.xz
Sub-projects
QuickJS embeds the following C libraries which can be used in other
projects:
libregexp: small and fast regexp library fully compliant with the Javascript ES2023 specification.
libunicode: small unicode library supporting case
conversion, unicode normalization, unicode script queries, unicode
general category queries and all unicode binary properties.
libbf: small library implementing arbitrary precision
IEEE 754 floating point operations and transcendental functions with
exact rounding. It is maintained as a separate project.
Links
QuickJS Development mailing list
Small Javascript programs to compute
one billion digits of pi.
Licensing
QuickJS is released under
the MIT license.
Unless otherwise specified, the QuickJS sources are copyright Fabrice
Bellard and Charlie Gordon.
Fabrice Bellard - https://bellard.org/
| 2024-11-08T08:54:52 | en | train |
42,032,085 | null | 2024-11-03T09:36:37 | null | null | null | null | null | null | [
"true"
] | null | null | null | null | null | null | null | null | train |
42,032,086 | plamartin | 2024-11-03T09:36:48 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,091 | marban | 2024-11-03T09:38:05 | Why Elon Musk's Robotaxi Dreams Are Premature | null | https://www.wsj.com/business/autos/elon-musk-robotaxi-end-to-end-ai-plan-1827e2bd | 5 | 0 | [
42032121
] | null | null | null | null | null | null | null | null | null | train |
42,032,097 | tosh | 2024-11-03T09:39:44 | Apple iPhone 16 Pro review: a small camera update makes a big difference | null | https://www.theverge.com/24247538/apple-iphone-16-pro-review | 2 | 0 | [
42032120
] | null | null | null | null | null | null | null | null | null | train |
42,032,101 | kennethologist | 2024-11-03T09:40:17 | null | null | null | 1 | null | [
42032117,
42032102
] | null | true | null | null | null | null | null | null | null | train |
42,032,107 | fanf2 | 2024-11-03T09:42:02 | The curious design of skateboard trucks (2022) | null | https://www.bedelstein.com/post/the-curious-design-of-skateboard-trucks | 3 | 0 | [
42032129
] | null | null | null | null | null | null | null | null | null | train |
42,032,125 | DeathArrow | 2024-11-03T09:45:15 | North Korea's Smartphone Market Expands as Border Restrictions End | null | https://www.38north.org/2024/09/north-koreas-smartphone-market-expands-as-border-restrictions-end/ | 1 | 1 | [
42032155
] | null | null | null | null | null | null | null | null | null | train |
42,032,126 | pyeri | 2024-11-03T09:46:36 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,163 | JNRowe | 2024-11-03T09:58:48 | Competitive Programming in Haskell: Union-Find | null | https://byorgey.github.io/blog/posts/2024/11/02/UnionFind.html | 5 | 1 | [
42032439
] | null | null | null | null | null | null | null | null | null | train |
42,032,184 | gritzko | 2024-11-03T10:06:26 | ABC buffers: as simple as possible, but not simpler | null | https://github.com/gritzko/librdx/blob/master/B.md | 2 | 0 | [
42032251
] | null | null | null | null | null | null | null | null | null | train |
42,032,197 | Tomte | 2024-11-03T10:09:18 | A Columbine Site – The Columbine High School Tragedy | null | http://www.acolumbinesite.com/ | 1 | 0 | [
42032238
] | null | null | null | null | null | null | null | null | null | train |
42,032,200 | Tomte | 2024-11-03T10:10:25 | The Hacker's Diet (2005) | null | https://www.fourmilab.ch/hackdiet/www/hackdiet.html | 3 | 0 | [
42032241
] | null | null | null | null | null | null | null | null | null | train |
42,032,222 | lenimuller93 | 2024-11-03T10:18:03 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,230 | plurby | 2024-11-03T10:19:32 | Transformer Circuits Thread | null | https://transformer-circuits.pub/ | 2 | 0 | [
42032244
] | null | null | no_error | Transformer Circuits Thread | null | Templeton et al., 2024 |
Can we reverse engineer transformer language models into human-understandable computer programs?
Inspired by the Distill Circuits Thread, we're going to try.
We think interpretability research benefits a lot from interactive articles (see Activation Atlases for a striking example).
Previously we would have submitted to Distill, but with Distill on hiatus,
we're taking a page from David Ha's approach of simply creating websites (e.g. World Models) for research projects.
As part of our effort to reverse engineer transformers, we've created several other resources besides our paper which we hope will be useful.
We've collected them on this website, and may add future content here, or even collaborations with other institutions.
| 2024-11-08T01:30:23 | en | train |
42,032,249 | lenimuller93 | 2024-11-03T10:23:27 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,250 | tomohawk | 2024-11-03T10:23:35 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,254 | sprunky | 2024-11-03T10:25:32 | null | null | null | 1 | null | [
42032255
] | null | true | null | null | null | null | null | null | null | train |
42,032,266 | apk-version | 2024-11-03T10:27:38 | null | null | null | 1 | null | [
42032267
] | null | true | null | null | null | null | null | null | null | train |
42,032,293 | horusegy | 2024-11-03T10:35:41 | null | null | null | 1 | null | [
42032294
] | null | true | null | null | null | null | null | null | null | train |
42,032,300 | bugartisan | 2024-11-03T10:37:33 | Bonjour Kuala Lumpur – Read about KL and Malaysia | null | https://bonjourkualalumpur.com/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,314 | jeswin | 2024-11-03T10:43:26 | Show HN: A browser extension for Claude/ChatGPT to edit your projects locally | null | https://github.com/codespin-ai/codespin-chrome-extension | 4 | 2 | [
42032333
] | null | null | null | null | null | null | null | null | null | train |
42,032,320 | stocknear | 2024-11-03T10:45:12 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,326 | twapi | 2024-11-03T10:46:47 | Hacker News Clones | null | https://blog.jim-nielsen.com/2024/hacker-news-clones/ | 3 | 0 | [
42032336
] | null | null | null | null | null | null | null | null | null | train |
42,032,329 | healthypunk | 2024-11-03T10:47:51 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,334 | naveen_ | 2024-11-03T10:48:43 | What features are missing in Django? | null | https://old.reddit.com/r/django/comments/1gijjyw/what_features_are_missing_in_django/ | 2 | 0 | [
42032341
] | null | null | no_error | What features are missing in Django? | null | The_Naveen | When working with Django, my biggest challenge is the frontend. While tools like HTMX and Alpine provide some interactivity, they fall short of the full-featured capabilities I’m looking for. I want a complete frontend framework like React or Svelte, which leads me to Django REST Framework (DRF) as the only viable option for such a setup.
My ideal setup in Django would allow me to render components like render("component.tsx") instead of traditional HTML templates (e.g., page.html), enabling server-side rendering (SSR) with client-side hydration essentially combining the best of SSR and SPA benefits, similar to what frameworks like Next.js or SvelteKit offer. While I understand this approach would involve separate backend and frontend languages, it would be an abstraction over REST or other APIs, creating a seamless experience between the two layers.
Laravel offers some similar capabilities in its ecosystem, but Django lacks comparable native options or, if they do exist, they’re rarely maintained or widely discussed.
At the very least, I would love to see first-party, native support for Tailwind CSS in Django without requiring npm dependencies, ideally with an option to include Tailwind when initializing a new project.
| 2024-11-08T08:37:39 | en | train |
42,032,344 | mixeden | 2024-11-03T10:50:45 | Enhancing Robot Dexterity with Object-Oriented Rewards | null | https://synthical.com/article/Bridging-the-Human-to-Robot-Dexterity-Gap-through-Object-Oriented-Rewards-89ac8905-0bf7-4e9c-909e-a523896d1f05 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,362 | willemlaurentz | 2024-11-03T10:58:02 | Weekly and monthly backup rotation with bash and rsync | null | https://willem.com/blog/2023-12-15_backup-rotation-scheme/ | 2 | 0 | null | null | null | Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'. | Backup Rotation Scheme - Rotate your backups with 'rsync-backup-rotator' | null | Willem L. Middelkoop |
Rotate your backups with 'rsync-backup-rotator'
In today's digital age, safeguarding your data is paramount. Simply creating a copy of your files may not be enough as they can get corrupted, overwritten or blocked by ransomware. Having multiple, time-rotated (and ideally, offsite) backups is a stronger defense. I created a new tool, rsync-backup-rotator, to help you with this.
Backup Rotation Scheme

Backup rotation is a strategy used in data management where multiple backup copies are created and stored at different intervals, rather than relying on a single backup copy. This method is particularly useful because it mitigates various risks associated with data loss. For instance, if a single backup copy gets corrupted, overwritten, or compromised (e.g., by ransomware), all data since the last backup could be lost. By rotating backups, you create multiple recovery points, allowing you to restore data from different moments in time. This approach provides a more comprehensive safety net, as it protects against both recent data loss and more long-term issues.

Rotating backup to enable data recovery from multiple points in time - enabling time traveling for data!

The `rsync-backup-rotator` tool embraces this concept of backup rotation. It automates the process of creating and managing these rotated backups. Specifically, the tool uses a central 'current' folder that contains the latest backup data. Based on user-defined settings, it then rotates this data into different folders. For daily backups, it creates and stores copies in subfolders named after each weekday, within a 'week' folder. This means there’s a separate backup for each day, like 'Monday', 'Tuesday', and so on. For monthly backups, on the first day of each month, it creates a snapshot in a corresponding monthly subfolder within a 'month' folder, like 'January', 'February', etc. This system ensures that users have a series of time-stamped backups, providing flexibility and security in their data restoration options.

Why rsync?

Rsync is a tool that makes backups both efficient and effective. It works by only updating the parts of files that have changed since the last backup, rather than copying everything over again. This approach significantly reduces the amount of data being transferred, saving on wear and tear for hard drives and SSDs, and reducing costs for cloud storage since fewer data changes mean less I/O operations. Essentially, rsync ensures that backups are quick and light on resources, making it a smart choice for regular data backup and rotation. Rsync is a standard, non-commercial tool built into many operating systems like GNU/Linux, BSD variants, and macOS, making it easily accessible without extra installations or purchases. Trusted by millions worldwide, its robust and efficient data backup capabilities have established it as a reliable and widely-used industry standard.

Creating Backups with Rsync

To create a single copy using rsync, you can use the command `rsync -arvz SOURCE TARGET`. In an earlier post, I explained how to use rsync for making backups, which you can find here: How to Use Rsync to Make Backups. It's important to carefully decide what you want to backup, whether it's a simple documents folder or an entire installation. The `rsync-backup-rotator` tool, as discussed in this blog post, operates under the assumption that you have a functional copy mechanism in place. This mechanism should maintain an up-to-date copy of your files in a folder named "current", located in a path accessible by the tool, thereby enabling efficient rotation.

The tool: rsync-backup-rotator

Like my watches, here and here, I prefer my tools to be timeless, self explanatory, simple and dedicated to their job. The rsync-backup-rotator was created with this same design ethos. The tool is a bash script that runs without any weird dependencies or bloat on pretty much all GNU/Linux, BSD and macOS systems. Heck, you can probably get it to work on Windows, too, using the WSL. It is designed to do its job now and in the future, without any maintenance or mandatory updates. You can download the tool here: https://source.willem.com/rsync-backup-rotator/ There you'll find instructions and a changelog, too. The tool is distributed as free software under the GPLv3 license, intended to guarantee your freedom to use, share and change all versions of this tool.

Installation

Download the script from https://source.willem.com/rsync-backup-rotator/
Make the script executable: `chmod +x rsync-backup-rotator.sh`

Usage

To use rsync-backup-rotator, run the script with the required arguments: `./rsync-backup-rotator.sh [options]`

Options:
-w: Enable backups for each weekday.
-m: Enable backups for the first day of each month.
--delete: Include the --delete option in rsync to remove files in the destination that are no longer in the source.
--help: Display help information.

Examples:
Perform a weekday backup: `./rsync-backup-rotator.sh /path/to/parent -w`
Perform both weekday and monthly backups with deletion of removed files: `./rsync-backup-rotator.sh /path/to/parent -w -m --delete`

Conclusion

In the future I intend to release more tools and applications under the GPLv3 licence as free software. Although this is a small beginning, it is my humble attempt to give something back to the world. Stay safe, stay backed up!

The 'rsync-backup-rotator' tool v1.0 available as free software under the GPLv3 license
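The weekday/month rotation described in the post can be sketched in a few lines of bash. This is a hypothetical re-implementation for illustration only, not the actual rsync-backup-rotator script; the `week`/`month` folder names follow the scheme the post describes, while the rsync flags and function name are assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the rotation scheme: copy a 'current' folder into
# week/<Weekday>, and on the 1st of the month also into month/<Month>.
set -euo pipefail

rotate_backup() {
  local parent="$1"            # directory containing the up-to-date 'current' folder
  local src="$parent/current/"

  # Daily rotation: one subfolder per weekday, e.g. week/Monday
  local day
  day="$(date +%A)"
  mkdir -p "$parent/week/$day"
  rsync -a --delete "$src" "$parent/week/$day/"

  # Monthly rotation: snapshot on the first day of the month, e.g. month/January
  if [ "$(date +%d)" = "01" ]; then
    local month
    month="$(date +%B)"
    mkdir -p "$parent/month/$month"
    rsync -a --delete "$src" "$parent/month/$month/"
  fi
}
```

A scheme like this is typically run once a day from cron, so each weekday folder is overwritten at most once per week, giving seven rolling daily restore points plus up to twelve monthly ones.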
| 2024-11-08T09:21:21 | null | train |
42,032,363 | doener | 2024-11-03T10:58:56 | Rocket launch and re-entry air pollutant and CO2 emissions | null | https://www.nature.com/articles/s41597-024-03910-z | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,371 | lsuresh | 2024-11-03T11:02:55 | How to Analyze Unbounded Time-Series Data Using Bounded State | null | https://www.feldera.com/blog/time-series | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,374 | vednig | 2024-11-03T11:04:06 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,032,378 | sccarlos | 2024-11-03T11:05:46 | Show HN: React app to check your Bitcoin investments | null | https://sccarlos.com/post/20241103-bitcoin-wallet-explorer/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,387 | ptman | 2024-11-03T11:09:08 | Matrix 2.0 Is Here | null | https://matrix.org/blog/2024/10/29/matrix-2.0-is-here/?resubmit | 335 | 162 | [
42038925,
42034151,
42034100,
42040450,
42035571,
42034136,
42039670,
42034163,
42032399,
42033208,
42035808,
42034745,
42035947,
42033782,
42034043,
42034189,
42033795,
42034041,
42037689,
42034443,
42034859,
42035284,
42043085,
42034105,
42051378,
42037374,
42036053,
42038237,
42041289,
42035235,
42041859,
42036954,
42037538,
42036070,
42033773,
42035654,
42043834,
42045031,
42033686,
42034853,
42034883
] | null | null | null | null | null | null | null | null | null | train |
42,032,398 | bryanrasmussen | 2024-11-03T11:14:03 | Obligations of the Author | null | https://medium.com/luminasticity/obligations-of-the-author-0c2f396f111c | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,402 | muc-martin | 2024-11-03T11:15:19 | Show HN: Blog Posts from Git Commits – A Tiny Tool for the Lazy Dev | I put together a small prototype that might help folks like me who struggle with keeping their website’s blog active just for SEO. It’s a basic tool: you enter your Git repo details (owner, name, branch), pick a date range, and it generates a short blog post from the commits in that period. The idea is to give small businesses and solo devs a way to keep their blog up-to-date with actual product updates without needing to write new content.<p>Right now, it’s still pretty minimal but I’d really appreciate any feedback, especially if you think this could be useful, or if you have ideas on how it could be improved. Should I keep developing this? | https://git2blog.streamlit.app/ | 5 | 0 | null | null | null | fetch failed | null | null | null | null | 2024-11-08T17:28:03 | null | train |
42,032,405 | mabsademola | 2024-11-03T11:16:18 | Lessons from Launching Sellio: Navigating Sign-Up Barriers | Hey, Guys,<p>I recently launched a social commerce platform, Sellio. The goal is to support small businesses by combining social networking with a marketplace. It's been exciting, but I've encountered some unexpected challenges I wanted to share, hoping they might help others in similar positions.<p>One major observation: sign-up friction. We initially required users to sign up with just Display Name, Email and Pass or with Google to explore the platform, aiming for a quick 7-second process. However, I noticed that a significant portion of visitors (around 30-40%) drop off at this stage, which got me wondering—why are users hesitant to create accounts on new platforms?<p>In seeking answers, we listened closely to user feedback and monitored site interactions. This insight motivated us to rethink our approach. Soon, we’ll be rolling out a feature allowing users to browse anonymously on Sellio—offering a taste of what we have to offer before needing to commit to registration. I hope this will help users feel more comfortable exploring our platform at their own pace.<p>I’m not promoting Sellio here, I genuinely want to discuss the psychology behind user sign-ups and engagement on new platforms. For anyone curious to check out the platform and give feedback, feel free to explore it https://selliohub.com). You can also join our Discord community if you're interested let build it together https://discord.gg/3VBAMDmk.<p>Any advice, feedback, or shared experiences would be super valuable. Thanks, and looking forward to learning from this community! | null | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,411 | onuradsay | 2024-11-03T11:17:56 | Show HN: Kis.tools – A directory of tools that work | Hey HN! Tired of hitting "Sign up for free" buttons only to discover the real limitations after creating an account? Or finding a "free" tool that adds watermarks to everything? Yeah, me too.<p>I'm building kis.tools, a curated directory of tools that:
Work instantly - registration only when technically necessary, have genuine free functionality, keep interfaces clean and focused, process data locally when possible, keep promotional messages or ads minimal and unobtrusive<p>Think of it as a home for tools like Eric Meyer's Color Blender (running since 2003!) - tools that do one thing, do it well, and respect users enough to let them try before asking for anything in return.<p>Every tool is personally tested and described honestly, including limitations. No marketing fluff, just straight talk about what works and what doesn't.<p>Would love your feedback and tool suggestions, especially mobile apps - seems like every 'free' app nowadays sells its core functionality through in-app purchases. | https://kis.tools | 50 | 19 | [
42040548,
42032833,
42032680,
42032676,
42032769,
42033259,
42032692,
42034578,
42032772
] | null | null | null | null | null | null | null | null | null | train |
42,032,414 | minamo | 2024-11-03T11:18:49 | The virtual element method in 50 lines of Matlab (2016) | null | https://link.springer.com/article/10.1007/s11075-016-0235-3 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,032,416 | crowdhailer | 2024-11-03T11:20:02 | 7 Minutes of algebraic effects and a structural editor | null | https://www.youtube.com/watch?v=4GOeYylCMJI | 3 | 1 | [
42032486
] | null | null | no_article | null | null | null | null | 2024-11-07T07:21:23 | null | train |