Dataset features (column, dtype, and viewer statistics):

column                 dtype          statistics
id                     int64          2 to 42.1M
by                     large_string   lengths 2 to 15
time                   timestamp[us]
title                  large_string   lengths 0 to 198
text                   large_string   lengths 0 to 27.4k
url                    large_string   lengths 0 to 6.6k
score                  int64          -1 to 6.02k
descendants            int64          -1 to 7.29k
kids                   large list     (child item ids)
deleted                large list
dead                   bool           1 class (true)
scraping_error         large_string   25 distinct values
scraped_title          large_string   lengths 1 to 59.3k
scraped_published_at   large_string   lengths 4 to 66
scraped_byline         large_string   lengths 1 to 757
scraped_body           large_string   lengths 1 to 50k
scraped_at             timestamp[us]
scraped_language       large_string   58 distinct values
split                  large_string   1 value ("train")
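A schema like this can be consumed directly with the Hugging Face `datasets` library. Below is a minimal loading sketch; the dataset id `user/hn-scrape` is a hypothetical placeholder, since the real repository id is not given here.

```python
# Minimal loading sketch.
# ASSUMPTION: "user/hn-scrape" is a placeholder dataset id,
# not the real repository; substitute the actual id.
from datasets import load_dataset

# Streaming avoids downloading the full dump up front.
ds = load_dataset("user/hn-scrape", split="train", streaming=True)

for row in ds.take(3):
    # Core Hacker News fields
    print(row["id"], row["by"], row["score"], row["title"])
    # Scrape status: "no_error" marks a successful article fetch
    print(row["scraping_error"])
```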
Sample rows (null fields omitted):

id 42050849 · by impish9208 · time 2024-11-05T12:02:19 · score 4 · descendants 0 · split train
title: Judgment Day
url: https://www.astralcodexten.com/p/mantic-monday-judgment-day
kids: [42051205]

id 42050862 · by darwindarak · time 2024-11-05T12:04:14 · score 136 · descendants 25 · split train
title: Show HN: rallyup – Lightweight Wake-on-LAN Scheduler
url: https://github.com/darwindarak/rallyup
kids: [42051791, 42059474, 42051263, 42051475, 42051196, 42052018, 42051204]
text: Hi HN, I’ve wanted a simple solution to handle Wake-on-LAN sequences for my home and work labs to boot up servers in the right order. I was already dabbling in Rust and thought this would be an interesting project to dive deeper and see if it could work well for this kind of network tool. The result is rallyup.
  rallyup lets you set up server dependencies in a YAML file, so each service (e.g., firewalls, storage, VM hosts) comes online in the right order. It verifies each server’s status before moving to the next. Features:
  - Dependency-based WOL with VLAN support
  - Built-in health checks (HTTP, open ports, shell commands)
  - Lightweight enough to run on a Raspberry Pi or similar device
  Would love any feedback. Thanks for taking a look!

id 42050864 · by Tomte · time 2024-11-05T12:05:30 · score 1 · descendants 0 · split train
title: Breadbox Ensemble changes name to PC/GEOS Ensemble and becomes open source
url: http://blog.bluewaysw.de/breadbox-ensemble-changes-name-to-pc-geos-ensemble-and-becomes-open-source
scraping_error: missing_parsing
scraped_title: Breadbox Ensemble changes name to PC/GEOS Ensemble and becomes open source – blueway.Softworks
scraped_byline: Hans Lindgren
scraped_body: Breadbox Ensemble changes name to PC/GEOS Ensemble and becomes open source under the Apache-2.0 license. The project is called FreeGEOS and published at Github. Check out the link to the Github page here: https://github.com/bluewaysw
scraped_at: 2024-11-08T07:06:30
id 42050870 · by AIwonderful · time 2024-11-05T12:06:53 · score 1 · dead true · split train
kids: [42050871]

id 42050873 · by Tomte · time 2024-11-05T12:07:07 · score 1 · descendants 0 · split train
title: Book About the Aftermath of the Civil War
url: https://www.politico.com/news/magazine/2024/11/05/congress-post-civil-war-book-00187281
kids: [42050880]

id 42050899 · by mooreds · time 2024-11-05T12:13:26 · score 1 · descendants 0 · split train
title: Riding Amtrak from PDX to Lax
url: https://alextheward.com/posts/riding-amtrak-from-pdx-to-lax/

id 42050900 · by Amnascamcaught · time 2024-11-05T12:13:27 · score 1 · dead true · split train

id 42050909 · by Ewukong · time 2024-11-05T12:14:59 · score 1 · dead true · split train

id 42050921 · by Amnascamcaught · time 2024-11-05T12:18:09 · score 1 · dead true · split train

id 42050928 · by mooreds · time 2024-11-05T12:19:17 · score 1 · descendants 0 · split train
title: An example implementation of an AT Protocol (Bluesky) OAuth client
url: https://github.com/pilcrowonpaper/atproto-oauth-example
id 42050929 · by mikkom · time 2024-11-05T12:19:26 · score 3 · descendants 1 · split train
title: Russia fines Google $20 000 000 000 000 000 000 000 000 000 000 000
url: https://www.cnn.com/2024/10/31/tech/google-fines-russia/index.html
kids: [42051049, 42051201]

id 42050932 · by AiswaryaMadhu · time 2024-11-05T12:20:18 · score 1 · dead true · split train
kids: [42050933]

id 42050947 · by belter · time 2024-11-05T12:22:24 · score 3 · descendants 0 · split train
title: Nvidia CPUs (not GPUs) Coming in 2025
url: https://www.techpowerup.com/328422/nvidia-cpus-not-gpus-coming-in-2025

id 42050954 · by todsacerdoti · time 2024-11-05T12:23:08 · score 4 · descendants 0 · split train
title: Zig's (.{}){} Syntax
url: https://www.openmymind.net/Zigs-weird-syntax/

id 42050963 · by quraniduaa · time 2024-11-05T12:25:00 · score 1 · dead true · split train
kids: [42050964]

id 42050967 · by unripe_syntax · time 2024-11-05T12:25:42 · score 1 · descendants 0 · split train
title: JRuby 9.4.9.0 Released
url: https://www.jruby.org/2024/11/04/jruby-9-4-9-0.html
kids: [42051202]
id 42050970 · by hunglee2 · time 2024-11-05T12:26:05 · score 1 · descendants 0 · split train
title: China Releases Third National Time Use Survey Results
url: https://www.fredgao.com/p/china-releases-third-national-time

id 42050977 · by 0x3d-site · time 2024-11-05T12:27:19 · score 1 · dead true · split train

id 42050982 · by Gizopedia · time 2024-11-05T12:28:29 · score 2 · dead true · split train
kids: [42051510, 42050983, 42052710, 42051208]

id 42050993 · by epompeii · time 2024-11-05T12:29:20 · score 1 · dead true · split train

id 42050994 · by bailvgu · time 2024-11-05T12:29:27 · score 1 · dead true · split train
kids: [42050995]

id 42051005 · by chmaynard · time 2024-11-05T12:30:57 · score 16 · descendants 0 · split train
title: Git Config
url: https://blog.izissise.net/posts/gitconfig/
kids: [42053475]

id 42051012 · by nextcaller · time 2024-11-05T12:32:11 · score 5 · descendants 0 · split train
title: Show HN: Control Linux with a Logitech USB controller
url: https://github.com/madprops/logitech
kids: [42051305, 42051195]

id 42051030 · by reynoldss · time 2024-11-05T12:36:27 · score 8 · descendants 0 · split train
title: Netflix European headquarters in Amsterdam raided in tax fraud investigation
url: https://nltimes.nl/2024/11/05/netflix-european-headquarters-amsterdam-raided-tax-fraud-investigation
kids: [42051183]
id 42051038 · by rntn · time 2024-11-05T12:38:43 · score 2 · descendants 0 · split train
title: Jensen Huang asked SK hynix to give Nvidia 12-layer HBM4 chips earlier
url: https://www.theregister.com/2024/11/05/sk_hynix_ai_summit/
kids: [42051155]
scraping_error: no_error
scraped_title: Jensen Huang asked SK hynix to give Nvidia 12-layer HBM4 chips earlier
scraped_published_at: 2024-11-05T12:33:14Z
scraped_byline: Laura Dobberstein
scraped_body:
Nvidia CEO Jensen Huang asked Korean chipmaker SK hynix to pull forward delivery of 12-layer HBM4 chips by half a year, according to the company's group chairman Chey Tae-won. In a keynote speech at the SK AI Summit 2024 on Monday, Chey said he responded by deferring to CEO Kwak Noh-jung, who in turn promised to try.
The chips were originally set for delivery in the first half of 2026, but bringing the schedule forward by six months would see them released before the end of 2025. That's quite a tall order. SK hynix's 12-layer HBM3E products were scheduled to be placed into the supply chain just this quarter – Q4 2024. Mass production of the most advanced chip to date only began in late September.
16-layer HBM3E samples are expected to be available in the first half of 2025, Kwak announced during the summit. The chips are made using the Mass Reflow Molded Underfill (MR-MUF) process, a packaging technique that improves thermal management and was used on the 12-layer chips. The CEO described the 16-layer HBM3E chips as having an 18 percent improvement in learning performance and 32 percent improvement in inference performance over the 12-layer chips of the same generation. Kwak also confirmed his company was developing LPCAMM2 module for PCs and datacenters, as well as 1cm based LPDDR5 and LPDDR6.
Huang made a video appearance at the summit, as did Microsoft CEO Satya Nadella and TSMC CEO CC Wei. As is customary at such events, all three praised their companies' respective partnerships with SK hynix, while Huang also reportedly said SK hynix's development plan was both "super aggressive" and "super necessary." Nvidia accounts for more than 80 percent of the world's AI chip consumption. SK hynix execs brushed off the notion of any AI chip oversupply in its recent Q3 2024 earnings call. HBM chip sales were reported up 330 percent year-on-year. SK Group chairman Choi Tae-won predicted in a speech this week that the AI market will likely balloon further in or around 2027 due to the emergence of the next-generation ChatGPT.
scraped_at: 2024-11-07T23:11:00 · scraped_language: en

id 42051040 · by handfuloflight · time 2024-11-05T12:39:19 · score 1 · descendants 0 · split train
title: Wearing Faith: The Story of a Karkari Disciple
url: https://www.karkari.org/library/wearing-faith-the-story-of-a-karkari-disciple
scraping_error: no_article
scraped_at: 2024-11-08T03:13:17

id 42051041 · by jamile · time 2024-11-05T12:39:20 · score 1 · dead true · split train
kids: [42051043]

id 42051051 · by ricberw · time 2024-11-05T12:41:03 · score 3 · descendants 4 · split train
title: Ask HN: How will today's election affect the products you will build next year?
text: I’m very curious whether the outcome will significantly change your plans to build product — and how the differences in policy will change the startup ecosystem.
kids: [42051306, 42051524, 42051396, 42052069, 42051180]
id 42051056 · by cmpit · time 2024-11-05T12:41:49 · score 2 · descendants 0 · split train
title: The power of technical writing for developers
url: https://catalins.tech/the-power-of-technical-writing/
kids: [42051131]
scraping_error: no_error
scraped_title: How I Landed 4 Jobs and Earned $25K+ with Technical Writing
scraped_published_at: 2024-10-10T15:09:37.000Z
scraped_byline: Catalin Pit
scraped_body:
Table of Contents My Technical Writing Journey The Results of Technical Writing You Can Learn It Too My Technical Writing Journey As a final-year university student preparing to enter the industry, I knew I had to differentiate myself from other candidates in a competitive job market. That's when the idea struck me: what if I combined two activities I enjoy, programming and writing? Combining programming and writing would bring three significant benefits. First of all, publishing my code online would help me consolidate the information in my brain. They say that "to teach is to learn twice". Secondly, others can find my code and hopefully learn something from it (or give me feedback to improve my skills). Lastly, it would showcase my thinking process and soft skills, such as communication skills, to prospective employers. With this goal and benefits in mind, I began my journey into technical writing by launching this website catalins.tech. The image from the Internet Archive shows the very first posts that I published. As an aside, the excerpt from the bottom of my blog validates the things I've been saying about my passion for coding and writing: Before commenting This blog serves as my online notebook. Its purpose is not to teach other people because my solutions might not be the best ones, as I am a Computer Science student. I use this blog to explain my solutions to programming challenges. Occasionally, I am writing how I have developed/added different features in my projects. The blog allows me to come at a later time and see why I did things the way I did. It also helps me to track my programming progress. Thus, I combine two things I like: to write and to code. Initially, my goal was to share my solutions to interview questions and coding challenges. But as I continued publishing content, I increasingly enjoyed the dopamine rush of pressing the "publish" button. This made me go beyond coding challenges and interview questions. I began writing about: My software development career, including topics like interview preparation and dealing with impostor syndrome, for example The side projects I built in my free time Career guidance for fellow developers Useful resources for developers I kept writing about everything related to technology and a career in this industry. As I published more articles, my work started gaining traction and getting noticed. Tech publications like FreeCodeCamp and SitePoint, to name a few, invited me to publish on their platforms. These opportunities marked an important milestone in my technical writing journey. When I started writing, it was to stand out in job applications and reinforce the concepts I was learning. I never thought I'd publish on sites with hundreds of thousands, maybe even millions, of monthly readers, especially considering that English is my second language. It felt good and validated my writing skills. Collaborating with these publications changed the trajectory of my journey. It helped me expand my reach, build credibility within the tech community, and level up my writing skills. Having to adapt to various style guides and working alongside other talented writers helped me improve my writing tenfold. The Results of Technical Writing But it gets even better. Consistently publishing on my site and contributing to popular tech publications eventually led to my first paid technical writing gigs. I went from a random guy publishing on the internet to being paid to write technical articles. 
According to my memory, these were my first earnings from technical writing. As far as I can recall, the $1,170 represents the total amount for four articles. When the money hit my bank account, I was over the moon! After landing my first paid technical articles, finding these opportunities became easier and easier. Companies and publications started reaching out to me after discovering my work online, and people began recommending me to others. At one point, I earned more from these side gigs than my regular salary. Fast forward to today, and technical writing has helped me earn more than $25,000. Don't get me wrong. I'm not trying to brag or show off here. I want to show that technical writing can be a lucrative and beneficial skill. I had no idea until I got into it myself. But that's not at all. Sure, earning a generous income on the side is nice, but technical writing has also played a huge role in landing jobs. No exaggeration. It played a big part in landing 2 Developer Relations (DevRel) jobs and 2 engineering roles. I'm not going to talk about all those roles, but here's an example with the interview assignment for my DevRel role at Hasura. Similarly, here's the announcement from when I joined Hashnode, where my writing played a bigger role since it is a blogging platform, and I am a big advocate for writing as a developer. You may say that technical writing is relevant for Developer Relations (DevRel) roles since they involve creating and publishing content online. However, my experience has shown that technical writing also played an important role in helping me land engineering roles. Many developers tend to focus only on their coding skills while neglecting soft skills such as communication. However, coding is just one aspect of software development. In reality, you likely spend a good percentage of your working hours communicating with various stakeholders, including clients, team members, managers, and others. You also document your code and processes, write changelogs, and create documentation, among other tasks. Communication is a very important skill, yet many engineers often overlook it. You know this is true if you've worked in a team of more than a few people. Having understood that, I took a different route than most engineers. Instead of focusing solely on my technical skills, I split my time and priorities in a way that allowed me to work on my soft skills. I started this site and published articles about the projects I was working on, the technologies and concepts I was learning, and the challenges I faced as an engineer. I also shared my experiences, insights, and lessons learned. In fewer words, I wrote about anything and everything related to tech and my tech career. When it came to finding new jobs and interviewing with companies, my technical writing skills and portfolio helped me stand out from the other candidates. They not only showcased my technical expertise but also my ability to communicate. They served as concrete examples of my thought process, work ethic, ability to articulate ideas and expertise. Instead of having to "sell" myself to potential employers, they did most of the work for me. I'm not trying to show off or brag, and I'll stop here so I don't become annoying. The bottom line is that my technical writing skills, and this site subsequently, opened doors to many opportunities, propelled my career forward, and helped me grow as an engineer. These experiences made me realize the potential writing can have in one's career. 
You Can Learn It Too One of the best things about technical writing is that it has a low entry barrier and is also a low-friction activity. You can start a website like this and begin publishing articles right away. You can write about the projects you're working on, the technologies and concepts you're learning, or the challenges you're facing as an engineer. The idea is to start writing and keep writing consistently. As you write more, you'll naturally improve your writing skills and find your voice. Here are a few tips to help you get started: Pick a domain: The first step is buying a domain. By having a personal domain, you maintain complete ownership and control over your content, reap the full SEO benefits of your work, and build and promote your brand. Choose a platform: There are many platforms available for starting a website or blog, such as WordPress, Medium, GitHub Pages, and Ghost, to name just a few. Or you can even build a custom site. Pick one that suits your needs and allows you to use your personal domain. Personally, I have used Ghost for a while, and it's one of the best solutions for a personal site. I wrote about how to get started with Ghost by self-hosting it on DigitalOcean. Identify your topics: List topics you're passionate about or have experience with. These could be specific technologies, programming languages, tools, or even soft skills related to your career in tech. You can write about: Your learnings. Did you learn something new? Write about it. Your expertise. Are you an expert in a specific area? Write about it. The issues you solve. Did you solve an issue? Write about it. Your experiences. Have you gone through an interview, got a job, or got promoted? Write about it. Write, edit, and publish: Set aside dedicated time for writing. Start writing, and don't worry about anything else. Your main priority is to dump the ideas from your brain. Then, revise and edit your article until you're satisfied with the content. Finally, hit the publish button. It's important to remember that technical writing is a skill that improves with practice. The more you write, the better you'll become at writing. As you consistently publish articles, you'll build a valuable portfolio that showcases your technical writing skills. Regularly writing about technical topics will deepen your understanding of those topics. The process of researching, organizing, and articulating your ideas will reinforce your knowledge and help you identify and fill knowledge gaps. In addition to that, it will also improve your communication abilities. As you practice conveying and breaking down complex concepts, you'll become more effective at communication, which is an underrated and valuable skill in the tech industry. Lastly, a portfolio of technical articles can open doors to new opportunities. Your writing serves as concrete examples of your abilities and expertise. Potential employers or clients can discover your work and understand the value you can bring. Your articles can also establish you as a thought leader in your field, attracting invitations to speak at conferences or contribute to publications. The benefits you reap from technical writing will compound over time, propelling your career forward and opening up new opportunities. If you want to explore technical writing more in-depth, I recently released a technical writing course for developers. 
This course teaches you the technical writing basics, including how to structure your writing effectively, use appropriate language and tone, and incorporate visuals to enhance your message. You'll also learn various techniques for generating unlimited content ideas and establishing a solid reputation by publishing your content on high-quality, high-authority platforms and gaining exposure and credibility. But that's not all. The course explores many more topics, such as making money with technical writing and translating your articles into videos, to name a few. You may say, "Did I read until now just to have you sell me a course?". The answer is no. I've been writing about technical writing and blogging long before having a course.
scraped_at: 2024-11-08T03:27:06 · scraped_language: en

id 42051059 · by fanf2 · time 2024-11-05T12:42:02 · score 1 · descendants 0 · split train
title: Practical third-party library sandboxing with RLBox
url: https://rlbox.dev/
kids: [42051146]
scraping_error: Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
scraped_title: Practical third-party library sandboxing with RLBox
scraped_body:
Practical third-party library sandboxing with RLBox Overview RLBox is a toolkit for sandboxing third party C libraries, that are being used by C++ code (support for other languages is in the works). RLBox was originally developed for Firefox1, which has been shipping with it since 2020. The RLBox toolkit consists of: A C++ framework (RLBox) that makes it easy to retrofit existing application code to safely interface with sandboxed libraries. An RLBox plugin that allows the use of wasm2c compiler for isolating (sandboxing) C libraries with Wasm. In this section, we provide an overview of the RLBox framework, its reason for being, and a high level sketch of how it works. In the next section, we will provide a tutorial that provides an end-to-end example of applying RLBox to a simple application. Why RLBox Work on RLBox began several years ago while attempting to add fine grain isolation to third party libraries in the Firefox renderer. Initially we attempted this process without any support from a framework like RLBox, instead attempting to manually deal with all the details of sandboxing such as sanitizing untrusted inputs, and reconciling ABI differences between the sandbox and host application. This went poorly; it was tedious, error prone, and did nothing to abstract the details of the underlying sandbox from the developer. We had basically no hope that this would result in code that was maintainable, or that normal Mozilla developers who were unfamiliar with the gory details of our system would be able to sandbox a new library, let alone maintain existing ones. So we scrapped this manual approach and built RLBox1. RLBox automates many of the low level details of sandboxing and allows you, as a security engineer or application developer, to instead focus just on what you need to do to sandbox your particular application. To sandbox a library — and thus to move to a world where the library is no longer trusted — we need to modify this application-library boundary. For example, we need to add security checks in Firefox to ensure that any value from the sandboxed library is properly validated before it is used. Otherwise, the library (when compromised) may be able to abuse Firefox code to hijack its control flow 1. The RLBox API is explicitly designed to make retrofitting of existing application code simpler and less error-prone.2 What does RLBox provide? RLBox ensures that a sandboxed library is memory isolated from the rest of the application — the library cannot directly access memory outside its designated region — and that all boundary crossings are explicit. This ensures that the library cannot, for example, corrupt Firefox's address space. It also ensures that Firefox cannot inadvertently expose sensitive data to the library. The figure below illustrates this idea. Memory isolation is enforced by the underlying sandboxing mechanism (e.g., using Wasm3) from the start, when you create the sandbox with create_sandbox(). Explicit boundary crossings are enforced by RLBox (either at compile- or and run-time). For example, with RLBox you can't call library functions directly; instead, you must use the invoke_sandbox_function() method. Similarly, the library cannot call arbitrary Firefox functions; instead, it can only call functions that you expose with the register_callback() method. (To simplify the sandboxing task, though, RLBox does expose a standard library as described in the Standard Library.) 
When calling a library function, RLBox copies simple values into the sandbox memory before calling the function. For larger data types, such as structs and arrays, you can't simply pass a pointer to the object. This would leak ASLR and, more importantly, would not work: sandboxed code cannot access application memory. So, you must explicitly allocate memory in the sandbox via malloc_in_sandbox() and copy application data to this region of memory (e.g., via strlcpy). RLBox similarly copies simple return values and callback arguments. Larger data structures, however, must (again) be passed by sandbox-reference, i.e., via a reference/pointer to sandbox memory. To ensure that application code doesn't unsafely use values that originate in the sandbox - and may thus be under the control of an attacker - RLBox considers all such values as untrusted and taints them. Tainted values are essentially opaque values (though RLBox does provide some basic operators on tainted values). To use a tainted value, you must unwrap it by (typically) copying the value into application memory - and thus out of the reach of the attacker - and verifying it. Indeed, RLBox forces application code to perform the copy and verification in sync using verification functions (see this). References
scraped_at: 2024-11-08T04:40:51

id 42051066 · by thunderbong · time 2024-11-05T12:43:30 · score 8 · descendants 0 · split train
title: Hundreds of Code libraries posted to NPM try to install malware on dev machines
url: https://arstechnica.com/security/2024/11/javascript-developers-targeted-by-hundreds-of-malicious-code-libraries/
kids: [42051142]
id 42051074 · by AndreyKarpov · time 2024-11-05T12:45:05 · score 1 · dead true · split train

id 42051078 · by qwikhost · time 2024-11-05T12:45:26 · score 1 · descendants 0 · split train
title: Show HN: Share Notebooks from Kindle Scribe to Google Drive
url: https://docgenie.co.uk
text: Doc Genie is only way to share notebooks from your amazon Kindle Scribe to Google Drive, OneDrive & DropBox.

id 42051087 · by spookily4136 · time 2024-11-05T12:47:43 · score 1 · dead true · split train
id 42051098 · by ambigious7777 · time 2024-11-05T12:50:16 · score 116 · descendants 125 · split train
title: DeepMind debuts watermarks for AI-generated text
url: https://spectrum.ieee.org/watermark
kids: [42056152, 42051387, 42057650, 42051308, 42051230, 42051566, 42065290, 42056760, 42051274, 42062643, 42051337, 42061307, 42051252, 42056915, 42060692, 42051332, 42051314, 42056174, 42051282, 42051445, 42051276, 42057253, 42056173, 42051448, 42057333, 42056708]
scraping_error: no_error
scraped_title: Google Is Now Watermarking Its AI-Generated Text
scraped_published_at: 2024-10-23T15:00:03Z
scraped_byline: Eliza Strickland
scraped_body:
The chatbot revolution has left our world awash in AI-generated text: It has infiltrated our news feeds, term papers, and inboxes. It’s so absurdly abundant that industries have sprung up to provide moves and countermoves. Some companies offer services to identify AI-generated text by analyzing the material, while others say their tools will “humanize“ your AI-generated text and make it undetectable. Both types of tools have questionable performance, and as chatbots get better and better, it will only get more difficult to tell whether words were strung together by a human or an algorithm.Here’s another approach: Adding some sort of watermark or content credential to text from the start, which lets people easily check whether the text was AI-generated. New research from Google DeepMind, described today in the journal Nature, offers a way to do just that. The system, called SynthID-Text, doesn’t compromise “the quality, accuracy, creativity, or speed of the text generation,” says Pushmeet Kohli, vice president of research at Google DeepMind and a coauthor of the paper. But the researchers acknowledge that their system is far from foolproof, and isn’t yet available to everyone—it’s more of a demonstration than a scalable solution. Google has already integrated this new watermarking system into its Gemini chatbot, the company announced today. It has also open-sourced the tool and made it available to developers and businesses, allowing them to use the tool to determine whether text outputs have come from their own large language models (LLMs), the AI systems that power chatbots. However, only Google and those developers currently have access to the detector that checks for the watermark. As Kohli says: “While SynthID isn’t a silver bullet for identifying AI-generated content, it is an important building block for developing more reliable AI identification tools.”The Rise of Content Credentials Content credentials have been a hot topic for images and video, and have been viewed as one way to combat the rise of deepfakes. Tech companies and major media outlets have joined together in an initiative called C2PA, which has worked out a system for attaching encrypted metadata to image and video files indicating if they’re real or AI-generated. But text is a much harder problem, since text can so easily be altered to obscure or eliminate a watermark. While SynthID-Text isn’t the first attempt at creating a watermarking system for text, it is the first one to be tested on 20 million prompts.Outside experts working on content credentials see the DeepMind research as a good step. It “holds promise for improving the use of durable content credentials from C2PA for documents and raw text,” says Andrew Jenks, Microsoft’s director of media provenance and executive chair of the C2PA. “This is a tough problem to solve, and it is nice to see some progress being made,” says Bruce MacCormack, a member of the C2PA steering committee. How Google’s Text Watermarks WorkSynthID-Text works by discreetly interfering in the generation process: It alters some of the words that a chatbot outputs to the user in a way that’s invisible to humans but clear to a SynthID detector. “Such modifications introduce a statistical signature into the generated text,” the researchers write in the paper. 
“During the watermark detection phase, the signature can be measured to determine whether the text was indeed generated by the watermarked LLM.”The LLMs that power chatbots work by generating sentences word by word, looking at the context of what has come before to choose a likely next word. Essentially, SynthID-Text interferes by randomly assigning number scores to candidate words and having the LLM output words with higher scores. Later, a detector can take in a piece of text and calculate its overall score; watermarked text will have a higher score than non-watermarked text. The DeepMind team checked their system’s performance against other text watermarking tools that alter the generation process, and found that it did a better job of detecting watermarked text.However, the researchers acknowledge in their paper that it’s still easy to alter a Gemini-generated text and fool the detector. Even though users wouldn’t know which words to change, if they edit the text significantly or even ask another chatbot to summarize the text, the watermark would likely be obscured. Testing Text Watermarks at ScaleTo be sure that SynthID-Text truly didn’t make chatbots produce worse responses, the team tested it on 20 million prompts given to Gemini. Half of those prompts were routed to the SynthID-Text system and got a watermarked response, while the other half got the standard Gemini response. Judging by the “thumbs up” and “thumbs down” feedback from users, the watermarked responses were just as satisfactory to users as the standard ones. Which is great for Google and the developers building on Gemini. But tackling the full problem of identifying AI-generated text (which some call AI slop) will require many more AI companies to implement watermarking technologies—ideally, in an interoperable manner so that one detector could identify text from many different LLMs. And even in the unlikely event that all the major AI companies signed on to some agreement, there would still be the problem of open-source LLMs, which can easily be altered to remove any watermarking functionality. MacCormack of C2PA notes that detection is a particular problem when you start to think practically about implementation. “There are challenges with the review of text in the wild,” he says, “where you would have to know which watermarking model has been applied to know how and where to look for the signal.” Overall, he says, the researchers still have their work cut out for them. This effort “is not a dead end,” says MacCormack, “but it’s the first step on a long road.”
scraped_at: 2024-11-07T23:34:03 · scraped_language: en

id 42051105 · by Myrmornis · time 2024-11-05T12:50:51 · score 2 · descendants 0 · split train
title: OpenTelemetry Is expanding into CI/CD observability
url: https://www.cncf.io/blog/2024/11/04/opentelemetry-is-expanding-into-ci-cd-observability/

id 42051111 · by ms7892 · time 2024-11-05T12:52:01 · score 2 · descendants 0 · split train
title: Coffeespace.com – Tinder like app to find co-founder
url: https://www.coffeespace.com/
kids: [42051118]

id 42051116 · by 8organicbits · time 2024-11-05T12:53:10 · score 3 · descendants 3 · split train
title: Is Email Confidential in Transit Yet?
url: https://alexsci.com/blog/is-email-confidential-in-transit-yet/
kids: [42051215, 42051184, 42051136]
id 42051120 · time 2024-11-05T12:54:09 · deleted true · dead true · split train

id 42051124 · by matek075 · time 2024-11-05T12:55:23 · score 1 · dead true · split train
kids: [42051125]

id 42051126 · by geox · time 2024-11-05T12:55:27 · score 3 · descendants 0 · split train
title: The '27 Club' isn't true, but it is real
url: https://phys.org/news/2024-11-club-isnt-true-real-sociologist.html

id 42051129 · by albexl · time 2024-11-05T12:56:16 · score 1 · dead true · split train

id 42051133 · time 2024-11-05T12:56:36 · deleted true · dead true · split train

id 42051143 · by PaulHoule · time 2024-11-05T12:58:50 · score 1 · dead true · split train
kids: [42051158]

id 42051152 · by LorenDB · time 2024-11-05T12:59:35 · score 1 · descendants 0 · split train
title: Meta Quest HDMI Link
url: https://www.meta.com/blog/quest/meta-quest-hdmi-link-launch/
kids: [42051163]

id 42051156 · by usesubtle · time 2024-11-05T13:00:00 · score 1 · dead true · split train
kids: [42051157]
id 42051164 · by diggan · time 2024-11-05T13:01:35 · score 2 · descendants 0 · split train
title: A conceptual model of ATProto and ActivityPub
url: https://fediversereport.com/a-conceptual-model-of-atproto-and-activitypub/
kids: [42051175]
scraping_error: no_error
scraped_title: A conceptual model of ATProto and ActivityPub
scraped_published_at: 2024-11-04T19:51:27+00:00
scraped_byline: Laurens Hof
scraped_body:
If you were to design an open social networking protocol, what would that look like? Which metaphors and comparisons would you use to get a general idea of how the network functions? What would you answer if people ask if your network is decentralised and federated? This article is not a deep technical explanation about how either ActivityPub or ATProto work. Instead I want to explain to you have these two protocols have different conceptual models of what an open social network looks like. These conceptual models differ much more from each other than people expect, leading people to apply concepts that come from the ActivityPub world to the ATProto world, in a way that does not fit with the conceptual model that ATProto has. One of the main subjects of discussion recently has been whether Bluesky is decentralised and if it is federated. I think answering these questions requires a clarity on how ATProto differs conceptually from ActivityPub. Decentralisation and federation are valued for how they impact power structures, but there are multiple ways to build other power structures in open social networks. A bit of the summary at the top, since that might help during reading: The conceptual model of ActivityPub resembles that of email: independent servers sending messages to each other. The conceptual model of ATProto resembles that of the web: independent sites publish data, and indexers aggregate this data into different views and apps. A conceptual model of ActivityPub and the fediverse The fediverse1 is a network of independent social networking sites that use the ActivityPub protocol with each other. The conceptual model of the fediverse is that each social networking site, often called a server or instance, is it’s own network that can exist independently. You can set up your own Mastodon server, not connect to any other server, invite some of your friends, and have a fully functional social networking site. Because each server is its own independent and complete social networking site, it means that each fediverse server is a monolith, that puts all components together in a single place. A fediverse server: Owns your identity. Stores your data. Transforms protocol data into a site with a timeline that you can look at. Most people run a fediverse server because they want their independent social networking site to join a super-network of interconnected social networking sites; the fediverse. This ‘anyone can run a fediverse server’ is the ‘decentralised’ part of the fediverse. In order for the server to communicate with the rest of the fediverse it does a fourth thing: Communicates with the rest of the network. In ActivityPub terms: the server gives you an inbox and outbox, and messages flow between these inboxes and outboxes on other servers. This communication between servers is the ‘federation’ part of the fediverse. Decentralisation and Federation in the fediverse The reason to create a super-network of independent social networking sites is one of governance. The fediverse is in many ways a response to the centralised governance under a single billionaire of the current Big Tech platforms, and creates a governance structure where each social networking site is it’s own authority; it has authoritative control over the users on their site, but no authority over any of the other ~30k independent servers. Decentralisation and federation are crucial for the functioning of the architecture of the fediverse. 
Decentralisation means that anyone can set up their own social networking site, and federation means that all these independent sites can connect with each other without a single central authority. While these terms often get used as being valuable in itself, I think they should be seen as technical solutions to solve a governance problem: how can we build a social network without a single central authority? Human nature is a funny thing however, and technical solutions to limit authoritative control usually means that chokepoints simply pop up in other places; whether that’s the software that’s used 75% of users being governed by a self-styled ‘benevolent dictator for life’, or server admins having full centralised control over the users on their server. A conceptual model of ATProto and the ATmosphere Bluesky PBC2 is also building an open social network with the explicit goal that the network should not be owned by a single company. The protocol they use is called AT Protocol, often called ATProto. The network that is build on top of ATProto is called the ATmosphere. The approach they take to get there is quite different than the one the fediverse takes, however. The conceptual model of ATProto is that every account is it’s own independent place in the ATmosphere3, and every app is an aggregator that takes data from all the places in the ATmosphere and uses them to create their own service. Every account has its own place to store their data, a Personal Data Server4. This PDS is a simply a database that contains all your own data on ATProto: the posts you made on Bluesky, as well as your RSVP to an event on Smoke Signal. In turn, every application is an aggregator, similar to how Google is an aggregator of the web. An ATProto app (like Bluesky) takes the data from all the PDSes in the network5. The ‘app’ processes that data, and provides the end-user with a ‘view’ on that data. As such, these applications on ATProto are called AppViews. In the case of Bluesky, they take all the microblogging posts stored on all the PDSes in the ATmosphere, aggregate them together. This aggregation allows Bluesky to give users the Discover feed, count the number of likes on a post, among other things. It is then presented (the ‘view’) to the user in a format that resembles Twitter6. But other AppViews are also possible: WhiteWind is a blogging platform build on ATProto: it allows you to write blogs, and if you use WhiteWind to write a blog posts, these posts are also stored in the same PDS that stores your Bluesky data. The WhiteWind application (AppView) aggregates data from the entire ATmosphere, and takes both WhiteWind-style blog posts, as well as Bluesky’s microblogs. The View WhiteWind presents on their site is blog posts, and with Bluesky’s microblogs functioning as a sort of comment section7. In short, the conceptual model of ATProto is has some resembles to how the web works. The web has many independent websites, and indexers such as Google aggregate the websites and present it back to users in their own ‘view’. Similarly, the ATmosphere has contains of many PDSes, and AppViews aggregate this data into a product for users. Independence, openness and power on ATProto As the question ‘Is Bluesky decentralised and federated’ is the greatest thread in the history of forums, and the discussion is still not locked by moderators after 12,239 pages of debate, it’s worth taking a step back at what these concepts are meant to accomplish. 
Decentralisation and federation in the fediverse mean an open network that anyone can join without any without centralised control. Anyone can run their own PDS and be a full part of the network, without needing any centralised permission. Anyone can run their own AppView8, and build their own product in the ATmosphere, without needing permission by any centralised authority. They can even reuse Bluesky’s data for their own purposes, like WhiteWind does. On first glance, this seems pretty decentralised. The question of federation becomes more complicated: who can communicate with what exactly? Any AppView can communicate with the PDSes, is that federation? The PDSes cannot communicate with each other directly (so no federation?) but do so via AppViews (so maybe federation?) What about AppViews communicating with each other? Picosky and IRCsky are two different AppViews that both allow you to chat over ATProto, and see the same chat messages. Are these two AppViews federated? And how many individual parts of the system need to federate before you can describe the entire ATmosphere as federated? I don’t know the answer to all of this, but I’m personally trending towards: are we even asking the right questions here? To make matters even more complicated; many people are not asking the question ‘is the ATmosphere decentralised’, but are wondering ‘is Bluesky decentralised’? Here the ATmosphere take a different direction than the fediverse. The answer for the ATmosphere is not: ‘there should be many versions of the Bluesky app so users can switch to another app’. Having many instances of the Bluesky app provides no real additional benefit to making the network more open and less controlled by a single point of authority.9 Instead, the conceptual model of how the ATmosphere is defending itself against an AppView turning ‘bad’, is to have competing different AppViews where people can switch to instead. Bluesky PBC hopes that there will be a hypothetical GreenCloud AppView, which does microblogging slightly differently than Bluesky. This way, people have a different app they can use, in case the Bluesky app does not suffice. The hypothetical GreenCloud microblogging app build on ATProto does not actually exist. But the code for the Bluesky app is available as open source10, anyone can run their own Bluesky if they want to. The interesting problem here is that nobody has done so: running a competing service to Bluesky is totally possible, but why would you? It costs money, time and expertise to do so, and there is little gain to doing so. How applicable the concepts of decentralisation and federation are to the ATmosphere is debatable, but they are used as an approximation for the core question: how is power distributed in the network? And Bluesky and the ATmosphere make it clear that technological architecture can only help so much here: Sure, you can be completely independent of Bluesky PBC on the ATmosphere, as everything is open. But in the end, 99% of users are exclusively on infrastructure owned by Bluesky PBC. No technological architecture can compensate for that kind of the power distribution. This is not lost on Bluesky PBC and their investors either. Blockchain Capitol, lead investor in the series A, has the investment thesis that the value is in growing the ATmosphere, writing that they are “investing in more than a product but rather a vision of what social infrastructure could be”. The challenge here is clear: get more people to build products on ATProto. 
The sales pitch is attractive: there is a social graph with 13 million accounts that are free for the taking for any developer to build on top upon. The sales pitch is also unusual though: Bluesky PBC is asking for other organisations and companies to be their competitors so that both can contribute to a growing ecosystem. How this will work out remains an open question. On Identity Technological solutions to prevent control of chokepoints usually mean that these chokepoints simply pop up in different places, and the ATmosphere is no different than the fediverse in this regard. This explanation of how the ATmosphere works, is missing a crucial part: how Decentralised Identity works on ATProto. Explaining how the system in detail is the subject of my next article. And that system might just be both more centralised than people expect, more decentralised than people think, and it’s most centralising aspect might just be… a clock. Notes The fediverse is defined here by the Mastodon-dominant supernetwork that mostly uses ActivityPub and is mostly is used for microblogging, with some link-aggregators on Lemmy, some video on PeerTube on the side. I’m aware that this definition does not cover the entirety of the network, as well as that you can contest every word in that definition. But its a close enough approximation for how the word is used in casual day-to-day life. ↩︎For clarity, ‘Bluesky PBC’ refers to the Bluesky Public Benefit Company, while ‘Bluesky’ refers to the microblogging app made by Bluesky PBC. ↩︎See also Paul Frazee’s thread on how every user is basically a website. I hesitate to use the term website here, as that comes with certain preconceived notions of what a website is, and a PDS repository is different in some manners. Maybe over time it will turn out that the equivalence of ‘PDS repo’ with ‘website’ will makes sense to people however, I’m unsure. ↩︎Technically, a repository on a PDS, a PDS can contain the repositories for many different accounts. ↩︎This is mostly done via a Relay. Relays are an optional part of the network, and can best be seen as an extension of an AppView. This extension is in itself also flexible, multiple AppViews can use the same Relay. ↩︎The extra step here is that the AppView sends data to your client, such as the bsky.app website, the official mobile clients or a third-party client like deck.blue ↩︎Example of a WhiteWind blog that combines Bluesky’s microblogs here. ↩︎And/or run their own Relays, which multiple people are in fact doing. ↩︎This is the problem that the fediverse has; there are 10k Mastodon servers, accounting for 80% of active users, but the software is controlled by a single person. Many concurrent deployments of the same software does not reduce the amount of control that this software has, it arguably increases the control instead. ↩︎The core functionalities all are, some parts are not. ↩︎
scraped_at: 2024-11-08T13:55:50 · scraped_language: en

id 42051168 · by PaulHoule · time 2024-11-05T13:03:03 · score 9 · descendants 4 · split train
title: Farm pesticides found floating in California air samples; officials say it's OK
url: https://www.latimes.com/environment/story/2024-10-25/farm-pesticides-found-floating-in-california-air-samples
kids: [42051675, 42051170]

id 42051173 · by CurtneyBarton · time 2024-11-05T13:04:26 · score 1 · dead true · split train
kids: [42051174]

id 42051182 · by doener · time 2024-11-05T13:05:07 · score 4 · descendants 0 · split train
title: 'Maybe 20 people left at BioWare' who know how their Dragon Age engine works
url: https://www.pcgamer.com/games/rpg/dragon-age-boss-says-a-legendary-edition-style-remaster-of-the-old-games-in-the-series-is-unlikely-because-theres-maybe-20-people-left-at-bioware-who-know-how-their-engine-works/
kids: [42051186]
id 42051193 · by LinuxBender · time 2024-11-05T13:06:31 · score 3 · descendants 0 · split train
title: Ask HN: For the people running authoritative DNS servers
text: For those that log DNS traffic specifically with tcpdump, are you seeing an unusually high number of spoofed answers, vs queries for all the DNS registrar domains, DNS providers and long nonsensical 36 character apex+tld domains? This will not show up in native query logs as it's not hitting port 53, but rather acting as if I am making the request.
  Obviously this does not hurt my servers as they just silently drop it but it looks like someone is getting ready to do something to all the DNS registrars and big DNS providers including but not limited to Cloudflare, Name dot com, Afraid dot org, AWS DNS, Porkbun, nic dot uk, Nether dot net, ofpenguins dot net.
  Adding to the oddity, all the traffic makes it look like I am making the request and each of those registrars and DNS providers are answering it like as if they are trying to poison cache but my server is authoritative, not recursive. The spoofed "answers" will never reach my DNS daemon. There is no cache and they should be able to easily see it is not a recursive server. There are a good deal of bogus RRSIG/NSEC3 "answers". My server just ignores it obviously and it is harmless. I am only asking to see if others are suddenly getting this traffic. It just "feels" like someone is getting ready to do "something" on a big scale. A gut feeling so to speak. I have monitored DNS traffic daily for over 26 years and have not seen this particular pattern.
  To look for this:
    tcpdump -p --dont-verify-checksums -i any -NNnnvvv -s0 -B16384 -c65536 not host 127.0.0.1 and port 53
  [Edit:] Whatever is going on, they stopped hitting my server but I suspect others may start seeing this. I have no idea how many DNS providers and registrars passively log DNS traffic.
id 42051197 · by art049 · time 2024-11-05T13:06:57 · score 177 · descendants 165 · split train
title: State of Python 3.13 performance: Free-threading
url: https://codspeed.io/blog/state-of-python-3-13-performance-free-threading
kids: [42051523, 42053094, 42051733, 42052796, 42056592, 42056285, 42058467, 42057509, 42056778, 42051214, 42052157]

id 42051210 · by arabco · time 2024-11-05T13:09:55 · score 1 · dead true · split train
kids: [42051211]

id 42051218 · by lyton · time 2024-11-05T13:10:52 · score 3 · descendants 0 · split train
title: A random number generator determined the likely election winner
url: https://www.natesilver.net/p/a-random-number-generator-determined
kids: [42051365]

id 42051220 · by sgasser88 · time 2024-11-05T13:11:23 · score 1 · dead true · split train

id 42051222 · by celicoo · time 2024-11-05T13:11:39 · score 1 · dead true · split train
id 42051225 · by marklemay · time 2024-11-05T13:11:54 · score 2 · dead true · split train
kids: [42051356, 42051226]

id 42051228 · by hacxx · time 2024-11-05T13:12:00 · score 1 · dead true · split train

id 42051229 · by Crowgirl · time 2024-11-05T13:12:27 · score 1 · dead true · split train

id 42051237 · by orbesargentina · time 2024-11-05T13:14:48 · score 1 · dead true · split train

id 42051238 · by rapnie · time 2024-11-05T13:14:51 · score 1 · dead true · split train
kids: [42051283, 42051290]

id 42051273 · by tldrthelaw · time 2024-11-05T13:22:42 · score 2 · descendants 0 · split train
title: R&D Tax Expensing Is Broken, but Changing Some Rules Can Fix It
url: https://news.bloombergtax.com/tax-insights-and-commentary/r-d-tax-expensing-is-broken-but-changing-some-rules-can-fix-it

id 42051297 · by dndndnd · time 2024-11-05T13:26:19 · score 1 · dead true · split train
id 42051301 · by todsacerdoti · time 2024-11-05T13:26:58 · score 2 · descendants 0 · split train
title: "German string" optimizations in Spellbook
url: https://the-mikedavis.github.io/posts/german-string-optimizations-in-spellbook/
kids: [42051321]
scraping_error: no_error
scraped_title: "German string" optimizations in Spellbook
scraped_body:
Spellbook is a Rust spell-checking library I've written the style of Hunspell to bring spell checking to the Helix editor. It's more-or-less a Rust rewrite of Nuspell, which itself is more-or-less a rewrite of Hunspell. Spellbook has a pretty slim interface: you can instantiate a dictionary from Hunspell dictionary files and use it to check words. For a small example of how you might use Spellbook: fn main() { // Dictionary files can be sourced from // <https://github.com/LibreOffice/dictionaries> let aff = std::fs::read_to_string("en_US.aff").unwrap(); let dic = std::fs::read_to_string("en_US.dic").unwrap(); let dict = spellbook::Dictionary::new(&aff, &dic).unwrap(); let word = std::env::args().nth(1).expect("expected a word to check"); if dict.check(&word) { println!("{word:?} is in the dictionary."); } else { println!("{word:?} is NOT in the dictionary."); std::process::exit(1); } } In this post we'll be looking at the string representation used in Spellbook and aiming to optimize it to save memory. Strings in Spellbook How Spellbook works exactly is beyond the scope of this post, so this section gives a simplified overview and deals with simplified types. If you're interested in more details, check out the Spellbook README or @zverok's Rebuilding the Spellchecker blog post and the Spellbook internals document. A central part of the procedure to check a word is to look up word(s) in a hash table. This lookup table contains an entry for each "stem" in the dictionary. You might imagine that the Dictionary type is a wrapper around a HashSet<String>. This is correct in essence but Hunspell-like checkers don't store every possible word in memory. Instead there is some "compression." For an example from the en_US (American English) dictionary, the lookup table in Spellbook associates a stem "adventure" with a set of flags like 'D', 'R' and 'S'. The flags correspond to rules defined for the dictionary allowing transformations like prefixes and suffixes. 'D' for example allows adding the "d" (or "ed" or "ied", depending on the stem) suffix, producing "adventured." 'R' allows "adventurer" and 'S' allows "adventures." So we can imagine that the lookup table has a type similar to HashMap<String, HashSet<Flag>>. Despite the "compression" that prefixes and suffixes enable, the lookup table contains many entries. The exact number varies with which dictionary files you use as input but American English contains around 50,000 stems, and it's a relatively slim dictionary. Others contain hundreds of thousands or even millions of stems, so it's worth trying to optimize the space we take for each stem. Good optimizations come from good observations so let's list out some properties of these strings: Once inserted into the lookup table these strings are never modified. These strings have a small maximum size. Spellbook refuses to check words over 360 bytes long (in UTF-8 representation) so there's no point in storing words over 360 bytes in the lookup table. Stems correspond to words so they're typically shorter rather than longer. Strings in Rust Let's take a bit of a detour to talk about how strings are represented in Rust. For starters there's the String type. Strings are quite flexible: they can be modified, resized and have a large maximum size. As for how they are represented, the Rust docs say: A String is made up of three components: a pointer to some bytes, a length, and a capacity. 
Simplifying a bit here, we can imagine a String looks like this:

struct String {
    pointer: NonNull<u8>,
    length: usize,
    capacity: usize,
}

Box<str> and fat pointers

The first thing that comes to mind is that storing length and capacity is redundant for our use-case. In our lookup table the strings are never modified so there's no need to store any extra information that would allow us to resize the string. A non-resizable string can be written with the Box<str> type. Box<str> is the owned version of a &str.

&str and slices (&[T]) have an interesting representation and learning about them is a good way to dig into "fat pointers" in Rust. A &str (or equivalently, &[u8]) is a fat pointer - a pointer to some bytes plus some metadata. For &[T] the metadata is the length of the slice. Using a fat pointer makes string (&str) and other slices nice to work with - you can subslice and read the length of a string slice cheaply and ergonomically. Box<str> and Box<[T]> are laid out the same way. You can imagine that these fat pointers are basically a tuple (*const T, usize). This takes 2 usizes worth of memory to represent: one usize for the actual pointer ("thin pointer") and one for the metadata.

What exactly is a usize though? Quoting the Rust docs again:

The size of [usize] is how many bytes it takes to reference any location in memory. For example, on a 32 bit target, this is 4 bytes and on a 64 bit target, this is 8 bytes.

So usize is an unsigned integer type of the same size as a "thin pointer": a pointer with no metadata, like *const T/*mut T or equivalently NonNull<T>. For simplicity we'll talk only about 64 bit targets for the rest of the post and assume that size_of::<usize>() == 8.

By switching the stem type to Box<str> we save 8 bytes per stem from not tracking capacity, taking advantage of our observation that strings are not modified. Nice! But there's still room for improvement from our other observations.

The road to "German strings"

The other observations are about the length of each string. They're short. If the length field is a usize that means your strings can be at most 2^64 bytes long, and wow that is long! Our strings will never be longer than 360 bytes so of the 64 bits we use to represent the length we'll only ever use 9 (2^9 = 512). That's quite a few bits wasted. If we used a u16 to store the length instead we'd save 6 bytes. What should we do with those 6 bytes we've saved?

This is where "German strings" come in. "German strings" or "German-style strings" or "Umbra strings" (all the same thing) are described very well in a post from CedarDB: Why German Strings are Everywhere. The idea is to use an integer type smaller than usize for the length (u32 in their case) and repurpose the remaining bytes to store a prefix of the data. We can store a few more bytes in the "prefix" section since we're using a u16 for length, so our type would look like this in memory:

#[repr(C)]
struct UmbraString {
    len: u16,             // takes 2 bytes
    prefix: [u8; 6],      // takes 6 bytes
    pointer: NonNull<u8>, // this takes `usize` (8 bytes)
}

+-------+-----------------------+-------------------------------+
+  len  +        prefix         +            pointer            +
+-------+-----------------------+-------------------------------+
   u16          6x u8                        8x u8

Umbra and CedarDB like this prefix because it can be used to cheaply compute whether two of these UmbraStrings are (not) equal - the Eq trait in Rust. Consider a very short string like "hi!".
In memory that would look like so:

+-------+-----------------------+-------------------------------+
+ 3u16  + h  i  !  .  .  .      +          pointer (?)          +
+-------+-----------------------+-------------------------------+

And what's the pointer pointing to? Nothing I guess. We already stored the full string right in the struct "inline." So there's no need to allocate memory and point to it. In fact for medium-long strings that can fit in the prefix bytes plus the pointer bytes, we can eliminate the pointer part altogether. This is a Short String Optimization (SSO): when the string is short enough, we can store it directly in our UmbraString struct and avoid allocating a buffer. We can store 6 bytes in the prefix and another 8 in the suffix area for a total of 14 bytes inline. For an ASCII string, that's up to 14 characters we can represent without allocating. Very nice!

+-------+-----------------------+-------------------------------+
+ 12u16 + h  e  l  l  o  _      + w  o  r  l  d  !  .  .        +
+-------+-----------------------+-------------------------------+
   len          prefix                       suffix

This either-or type would look like so, using a union:

#[repr(C)]
struct UmbraString {
    len: u16,
    prefix: [u8; 6],
    trailing: Trailing,
}

#[repr(C)]
union Trailing {
    suffix: [u8; 8],
    // ManuallyDrop is necessary since we only want
    // to deallocate the buffer if we're using the
    // "long" variant of this union.
    ptr: ManuallyDrop<NonNull<u8>>,
}

How do we know which member of the union our UmbraString is? Just look at the len field: if it's 14 or less then we're using the "short" variant - everything inline. If it's 15 or greater then the string is allocated and pointed to.

Memory savings

Why is this layout so attractive? This representation is no more expensive than a Box<str> in terms of memory consumption. size_of::<Box<str>>() is 16 bytes. (Note that size_of is counting the size of the type, not the size of the allocation the pointer is pointing to.) size_of::<UmbraString>() is also 16. The difference is that any non-empty Box<str> will allocate. A short string like "hi!" allocates 3 bytes somewhere on the heap for a total of 19 bytes. UmbraString does not: it's still 16 bytes. For a medium string like "hello_world!" Box<str> will allocate those 12 bytes on the heap for a total cost of 28 bytes. The equivalent UmbraString is still a total of 16 bytes. For long strings like "a".repeat(50), Box<str> will allocate the 50 bytes for a total cost of 66 bytes. In the worst case (long strings) UmbraString is no worse: it also takes exactly 66 bytes.

Umbra strings are attractive here because they don't have a memory cost: we would be paying the 16 bytes of a Box<str> anyways and wasting the 6 bytes from the length usize. Any time we use the inline variant of UmbraString we save memory. You might also think UmbraString is faster to work with if you commonly have short strings because you don't need to follow a pointer to compare data. We'll see in the benchmarks that UmbraString is not much different in terms of speed. We need an extra comparison operation to figure out if we're using a short or long variant after all.

Theory into practice: let's build UmbraString

This is basically the same snippet as above. We'll define some constants for the lengths of each segment and some basic helpers.

use core::mem::{size_of, ManuallyDrop};
use core::ptr::NonNull;

// 6 on 64 bit machines
const PREFIX_LEN: usize = size_of::<usize>() - size_of::<u16>();
// 8 on 64 bit machines
const SUFFIX_LEN: usize = size_of::<usize>();
// We can fit 14 bytes inline, nice!
const INLINE_LEN: u16 = (PREFIX_LEN + SUFFIX_LEN) as u16;

#[repr(C)]
pub struct UmbraString {
    len: u16,
    prefix: [u8; PREFIX_LEN],
    trailing: Trailing,
}

#[repr(C)]
union Trailing {
    suffix: [u8; SUFFIX_LEN],
    ptr: ManuallyDrop<NonNull<u8>>,
}

impl UmbraString {
    pub fn len(&self) -> usize {
        self.len as usize
    }

    pub fn is_empty(&self) -> bool {
        self.len == 0
    }
}

Default

The empty string is easy to represent: the length is 0 so it belongs as the inline variant. We'll set everything to zero - we won't access those bytes so it doesn't really matter what they're set to, but this seems like a reasonable default.

impl Default for UmbraString {
    fn default() -> Self {
        Self {
            len: 0,
            prefix: [0; PREFIX_LEN],
            trailing: Trailing {
                suffix: [0; SUFFIX_LEN],
            },
        }
    }
}

Allocating

Let's define some helper functions for actually allocating the data. The allocation helpers are only used when working with the long variant. A &str is a &[u8] that is valid UTF-8 so we'll be working in terms of *mut u8/*const u8 thin pointers.

use alloc::alloc;
use core::ptr::{self, NonNull};

fn copy_slice(src: &[u8]) -> NonNull<u8> {
    let layout = layout(src.len());
    let nullable = unsafe { alloc::alloc(layout) };
    let ptr = match NonNull::new(nullable) {
        Some(ptr) => ptr.cast(),
        None => alloc::handle_alloc_error(layout),
    };
    unsafe {
        ptr::copy_nonoverlapping(src.as_ptr(), ptr.as_ptr(), src.len());
    }
    ptr
}

fn layout(len: usize) -> alloc::Layout {
    alloc::Layout::array::<u8>(len)
        .expect("a valid layout for an array")
        .pad_to_align()
}

copy_slice allocates an array of bytes on the heap and then copies the source byte slice into our new array, and returns the pointer.

Instantiation

To create an UmbraString we'll take an existing &str as input. This operation could possibly fail if the input string is too long. Let's ignore that for now and just assert! that the string is not too long:

impl From<&str> for UmbraString {
    fn from(src: &str) -> Self {
        assert!(src.len() <= u16::MAX as usize);
        let bytes = src.as_bytes();
        let len = bytes.len();
        let mut prefix = [0; PREFIX_LEN];
        let trailing = if len as u16 <= INLINE_LEN {
            let mut suffix = [0; SUFFIX_LEN];
            if len <= PREFIX_LEN {
                prefix[..len].copy_from_slice(bytes);
            } else {
                prefix.copy_from_slice(&bytes[..PREFIX_LEN]);
                suffix[..len - PREFIX_LEN].copy_from_slice(&bytes[PREFIX_LEN..]);
            }
            Trailing { suffix }
        } else {
            let ptr = copy_slice(bytes);
            Trailing {
                ptr: ManuallyDrop::new(ptr),
            }
        };
        Self {
            len: len as u16,
            prefix,
            trailing,
        }
    }
}

For the short variant (src.len() as u16 <= INLINE_LEN) we copy from the source byte slice into however much of the prefix and suffix slices we can fill and leave the rest as 0s. (Note that 0 is a valid representation in UTF-8. See the section below on FlagSets for more discussion on why this is important.) For the long variant we'll use our copy_slice helper from above to allocate a new byte array pointer.

Reconstructing a byte slice

Did you notice in our copy_slice helper function above that we copy the entire slice into a newly allocated array buffer instead of the part after the prefix? We copied the whole of src instead of &src[PREFIX_LEN..]. You might think that we could save some space by only storing the remaining bytes after the prefix - and we could - but that would prevent us from recreating a &[u8] or &str from an UmbraString. Slices are contiguous memory chunks - array layouts in memory. We can't create a slice that starts in the prefix field and then continues by following a pointer. All of the data needs to be in one place.
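As a quick aside, the size claims from earlier are cheap to lock in with a test. This is my own sketch, assuming the definitions above sit in the same module; it isn't from the Spellbook source:

#[cfg(test)]
mod layout_tests {
    use super::UmbraString;
    use core::mem::size_of;

    #[test]
    fn two_words_and_no_alloc_for_short_strings() {
        // Same footprint as Box<str>: two machine words on a 64 bit target.
        assert_eq!(size_of::<UmbraString>(), 2 * size_of::<usize>());
        assert_eq!(size_of::<UmbraString>(), size_of::<Box<str>>());
        // "hi!" fits inline (3 <= 14), so constructing it never touches the heap.
        let s = UmbraString::from("hi!");
        assert_eq!(s.len(), 3);
    }
}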
With that contiguity requirement in mind, let's add a function to get our bytes back:

use core::{ptr, slice};

impl UmbraString {
    pub fn as_slice(&self) -> &[u8] {
        let ptr = if self.len <= INLINE_LEN {
            let ptr = ptr::from_ref(self);
            unsafe { ptr::addr_of!((*ptr).prefix) }.cast()
        } else {
            unsafe { self.trailing.ptr }.as_ptr()
        };
        unsafe { slice::from_raw_parts(ptr, self.len()) }
    }

    pub fn as_bytes(&self) -> &[u8] {
        self.as_slice()
    }

    pub fn as_str(&self) -> &str {
        unsafe { core::str::from_utf8_unchecked(self.as_slice()) }
    }
}

For inline Umbra strings our slice starts at the prefix field and ends either in the prefix field's array or in the suffix field's array depending on the length. The #[repr(C)] annotation on UmbraString and Trailing enforces that when represented in memory at runtime, the fields are in the same order as we define them, so we can safely assume that prefix comes before suffix and there's no space between. We can safely treat them as contiguous memory. For allocated strings we reconstruct the slice directly from our allocated buffer's pointer. Remember earlier when we said that slices were basically (*const T, usize)? That's what we give to slice::from_raw_parts - a pointer to an array layout in memory and a length - and we get a fat pointer.

Clone

Cloning the string is similar to how we initially created one from a &str.

impl Clone for UmbraString {
    fn clone(&self) -> Self {
        let trailing = if self.len <= INLINE_LEN {
            let suffix = unsafe { self.trailing.suffix };
            Trailing { suffix }
        } else {
            let ptr = copy_slice(self.as_slice());
            Trailing {
                ptr: ManuallyDrop::new(ptr),
            }
        };
        Self {
            len: self.len,
            prefix: self.prefix,
            trailing,
        }
    }
}

The len and prefix fields are copied. For the inline version we copy the suffix array too, and for the allocated version we create a new allocation and copy self's buffer. Another nice property of this type you might notice here: for strings short enough to be inlined, Clone is actually a Copy - no allocation required.

Drop

Now on to Drop. We need to deallocate our allocated buffer for the long variant. For the short variant we do nothing: Copy types are cleaned up without any mention in Drop.

impl Drop for UmbraString {
    fn drop(&mut self) {
        if self.len > INLINE_LEN {
            let layout = layout(self.len());
            let ptr = unsafe { self.trailing.ptr }.as_ptr();
            unsafe {
                alloc::dealloc(ptr.cast(), layout);
            }
        }
    }
}

Eq

As the CedarDB article notes, we can optimize the comparison of Umbra strings. To do that we cast the len and prefix chunks together as a usize and compare those, and then compare the remaining parts of the string if that first word of memory is equal. We don't use the Eq optimization in Spellbook since Umbra strings are only used for the lookup table representation (we use PartialEq<str> for UmbraString instead), but it's interesting from an academic perspective.

impl PartialEq<Self> for UmbraString {
    fn eq(&self, other: &Self) -> bool {
        let self_len_and_prefix = ptr::from_ref(self).cast::<usize>();
        let other_len_and_prefix = ptr::from_ref(other).cast::<usize>();
        if unsafe { *self_len_and_prefix != *other_len_and_prefix } {
            return false;
        }
        // The lengths and prefixes are equal. Now compare the rest.
        if self.len <= INLINE_LEN {
            // We can use the same trick as above: compare the suffixes as one big chunk.
            let self_ptr = ptr::from_ref(self);
            let self_suffix = unsafe { ptr::addr_of!((*self_ptr).trailing.suffix) }.cast::<usize>();
            let other_ptr = ptr::from_ref(other);
            let other_suffix = unsafe { ptr::addr_of!((*other_ptr).trailing.suffix) }.cast::<usize>();
            unsafe { *self_suffix == *other_suffix }
        } else {
            let suffix_len = self.len() - PREFIX_LEN;
            let self_rest = unsafe {
                slice::from_raw_parts(self.trailing.ptr.as_ptr().add(PREFIX_LEN), suffix_len)
            };
            let other_rest = unsafe {
                slice::from_raw_parts(other.trailing.ptr.as_ptr().add(PREFIX_LEN), suffix_len)
            };
            self_rest == other_rest
        }
    }
}

impl Eq for UmbraString {}

We start by comparing the length and prefix parts together with one usize comparison. If that is equal then we need to check the rest. For the short variant we can use another usize comparison to check the rest. For the long variant we can reconstruct the byte slices for the remaining bytes and compare those.

We can actually make this a little better. We know in that else block that the lengths of self and other are equal but comparing the byte slices (PartialEq<Self> for &[T]) will repeat that check. We can skip that check and do the comparison directly. Since u8s are byte-wise equal to each other, we can use memcmp like the standard library does.

impl PartialEq<Self> for UmbraString {
    fn eq(&self, other: &Self) -> bool {
        // ... unchanged ...
        if self.len <= INLINE_LEN {
            // ... unchanged ...
        } else {
            let suffix_n_bytes = self.len() - PREFIX_LEN;
            unsafe {
                memcmp(
                    self.trailing.ptr.as_ptr().add(PREFIX_LEN),
                    other.trailing.ptr.as_ptr().add(PREFIX_LEN),
                    suffix_n_bytes,
                ) == 0
            }
        }
    }
}

// Snipped from `library/core/src/slice/cmp.rs`:
extern "C" {
    /// Calls implementation provided memcmp.
    ///
    /// Interprets the data as u8.
    ///
    /// Returns 0 for equal, < 0 for less than and > 0 for greater
    /// than.
    fn memcmp(s1: *const u8, s2: *const u8, n: usize) -> core::ffi::c_int;
}

Benchmarking and memory analysis

Speed benchmarks are unfortunately not very interesting. Spellbook doesn't take advantage of the Eq comparison so we only end up paying for the conversion in UmbraString::as_slice. This is nearly imperceptibly slower than Box<str>::as_bytes. Using cargo bench and simple benchmarks like so:

// NOTE: this needs nightly.
#![feature(test)]
extern crate test;

use test::{black_box, Bencher};

use spellbook::umbra_slice::UmbraString;

#[bench]
fn umbra_str_as_bytes(b: &mut Bencher) {
    let s: UmbraString = "a".repeat(50).as_str().into();
    b.iter(|| black_box(&s).as_bytes());
}

#[bench]
fn boxed_str_as_bytes(b: &mut Bencher) {
    let s: Box<str> = "a".repeat(50).into();
    b.iter(|| black_box(&s).as_bytes());
}

umbra_str_as_bytes measures at around 0.69 ns/iter on my machine while boxed_str_as_bytes measures around 0.46 ns/iter. We would need to be converting to bytes very very often to notice the difference, and Spellbook doesn't ultimately convert that often. The benchmarks for Spellbook's check function don't change perceptibly.

Where we see the difference is in memory usage and heap interaction. Measuring heap allocations is not as straightforward in Rust as you might imagine if you're coming from garbage collected languages: garbage collectors need to track the heap to know when to clean up garbage so there's typically an interface to query heap information. Not so with Rust. Measuring Memory Usage in Rust from the rust-analyzer blog points out a few options. Of them I'm partial to valgrind's DHAT tool since it's straightforward to use.
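If you just want a rough in-process number without external tooling, a counting global allocator also works. This is a minimal sketch of that approach (my illustration - not something Spellbook ships):

use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Wraps the system allocator and counts every allocation it hands out.
struct CountingAlloc;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);
static BLOCKS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        BLOCKS.fetch_add(1, Ordering::Relaxed);
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    // ... build a dictionary and check a word here ...
    println!(
        "total: {} bytes in {} blocks",
        ALLOCATED.load(Ordering::Relaxed),
        BLOCKS.load(Ordering::Relaxed),
    );
}

DHAT, though, also reports peak usage and heap reads and writes, which is where the numbers below come from.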
We'll run a small example program that creates the en_US dictionary and checks a single word:

cargo run --release --example check hello
valgrind --tool=dhat ./target/release/examples/check hello

Before (Box<str> stems), DHAT reports:

Total: 3,086,190 bytes in 130,988 blocks
At t-gmax: 2,717,005 bytes in 90,410 blocks
At t-end: 0 bytes in 0 blocks
Reads: 3,923,475 bytes
Writes: 2,610,900 bytes

After (UmbraString stems):

Total: 2,714,546 bytes in 82,475 blocks
At t-gmax: 2,343,567 bytes in 41,487 blocks
At t-end: 0 bytes in 0 blocks
Reads: 2,332,587 bytes
Writes: 2,239,256 bytes

We've saved around 300KB of total runtime memory (12%) with the change, plus we're using fewer blocks of memory and reading from and writing to the heap less. Success! We can go further though if we apply this "German string" optimization to another oft-instantiated type in the lookup table: the FlagSet.

Bonus points: the FlagSet can also be German!

Remember way back at the beginning of the post when we were discussing the lookup table and how it's like a HashMap<String, HashSet<Flag>>? The HashSet<Flag> part is defined in the Spellbook source as a FlagSet newtype wrapper. It doesn't wrap a HashSet<Flag> though - hash sets can be wasteful in terms of memory usage. Before the Umbra string optimization they were represented as Box<[Flag]>. For short slices, slice::contains or slice::binary_search are very fast at determining set membership.

Like stems, flagsets are usually short. If we measure a histogram of the number of flags used per stem in all dictionaries in LibreOffice/dictionaries, we see the distribution skew very short:

Number of flags | Percentile (rounded)
0               | 32
1               | 69
2               | 80
3               | 86
4               | 90
...             | ...
7               | 96
...             | ...

One crazy dictionary used 271 flags on a single stem. So if we can store some number of flags inline like we did with bytes in an Umbra string, we can avoid allocations in the vast majority of cases.

Rather than an "Umbra string" we'll be constructing a more generic "Umbra slice" type. In fact we can imagine that the UmbraString is just a special case of an UmbraSlice around bytes:

// These bytes are valid UTF-8.
struct UmbraString(UmbraSlice<u8>);

The new type comes with new challenges though. For... reasons... Flag is defined as:

type Flag = core::num::NonZeroU16;

So rather than dealing with bytes we need to deal with 16-bit integers. Ok, that changes the arithmetic a little:

// We can fit 3 u16s in the prefix.
const fn prefix_len<T>() -> usize {
    // Remove 16 bits for the `len`.
    (size_of::<usize>() - size_of::<u16>()) / size_of::<T>()
}

// And 4 in the suffix.
const fn suffix_len<T>() -> usize {
    size_of::<usize>() / size_of::<T>()
}

We can fit up to 7 flags inline. That's really awesome: it'll cover up to 96% of real-world flagsets and should save us many many really tiny allocations.

Pitfalls and MaybeUninit<T>

We're talking in terms of u16 above but our type is actually a NonZeroU16. They have the same size and layout but NonZeroU16 can't be 0u16. The challenge is the NonZero nature: the zeroed bit pattern is not a valid representation, and Default for NonZeroU16 is not a thing. Places where we wrote [0u8; N] above have to be rewritten anyways since we're changing the type, but we can't just say:

// 💣 UNDEFINED BEHAVIOR!!
let mut prefix: [T; PREFIX_LEN] = unsafe { core::mem::zeroed() };
let mut suffix: [T; SUFFIX_LEN] = unsafe { core::mem::zeroed() };

You can't say that a value is a NonZeroU16 and at the same time represent it with zeroes, even if you never formally access those elements of the array.
The proper way to encode what we're trying to do is to use MaybeUninit.

use core::mem::MaybeUninit;

use crate::Flag;

// Unfortunately we cannot call `prefix_len`/`suffix_len` within
// the definition of `UmbraSlice` so we need to use const generics.
// The result is that this type is not pretty :/
pub type FlagSlice = UmbraSlice<
    Flag,
    { prefix_len::<Flag>() },
    { suffix_len::<Flag>() },
>;

#[repr(C)]
pub struct UmbraSlice<T: Copy, const PREFIX_LEN: usize, const SUFFIX_LEN: usize> {
    len: u16,
    prefix: [MaybeUninit<T>; PREFIX_LEN],
    trailing: Trailing<T, SUFFIX_LEN>,
}

#[repr(C)]
union Trailing<T: Copy, const SUFFIX_LEN: usize> {
    suffix: [MaybeUninit<T>; SUFFIX_LEN],
    ptr: ManuallyDrop<NonNull<T>>,
}

impl<T: Copy, const PREFIX_LEN: usize, const SUFFIX_LEN: usize>
    UmbraSlice<T, PREFIX_LEN, SUFFIX_LEN>
{
    const INLINE_LEN: u16 = (PREFIX_LEN + SUFFIX_LEN) as u16;
}

This makes the type slightly harder to work with: when accessing the prefix and suffix arrays we need to be sure to ptr::cast() from a pointer of MaybeUninit<T> to a pointer of T. When initializing the slice in our From implementation we need to transmute the source slice from &[T] to &[MaybeUninit<T>] before we can copy the data:

fn copy_to_slice<T: Copy>(dst: &mut [MaybeUninit<T>], src: &[T]) {
    // SAFETY: &[T] and &[MaybeUninit<T>] have the same layout.
    let uninit_src: &[MaybeUninit<T>] = unsafe { core::mem::transmute(src) };
    dst.copy_from_slice(uninit_src);
}

Zeroed bit patterns

We also need to be very careful to initialize prefix and suffix with MaybeUninit<T>::zeroed() rather than MaybeUninit<T>::uninit(). Why? Remember that our PartialEq<Self> implementation compares the prefix array and maybe also the suffix array for the short variant. Those arrays might contain uninitialized data if the length of the slice is shorter than the INLINE_LEN or PREFIX_LEN. MaybeUninit<T>::zeroed() works around this because comparing zeroed bits is defined behavior. The important distinction is that we are not treating the zeroed memory as NonZeroU16. That is undefined behavior. If we treat it as a usize though, the zeroed bit pattern is valid and the behavior is defined. It's also accurate as long as T is Copy.

// Note that `T` is not bound by `Eq`.
// We only ever compare bits, not `T`s.
impl<T: Copy, const PREFIX_LEN: usize, const SUFFIX_LEN: usize> PartialEq<Self>
    for UmbraSlice<T, PREFIX_LEN, SUFFIX_LEN>
{
    fn eq(&self, other: &Self) -> bool {
        // SAFETY: the `prefix` field is created with `MaybeUninit::zeroed` memory, so even
        // if the slice has fewer than `PREFIX_LEN` elements, comparing the uninitialized
        // memory is defined behavior, and it is accurate since `T` is `Copy`.
        let self_len_and_prefix = ptr::from_ref(self).cast::<usize>();
        let other_len_and_prefix = ptr::from_ref(other).cast::<usize>();
        if unsafe { *self_len_and_prefix != *other_len_and_prefix } {
            return false;
        }
        // ... compare suffixes ...
    }
}

Null pointer optimization and strange behavior

What exactly can go wrong if you don't use MaybeUninit<T>? The compiler can see that NonZeroU16 cannot ever be a zeroed bit pattern and it can design the layouts for other types using FlagSlice around that. If we designed our type like this:

#[repr(C)]
struct FlagSlice {
    len: u16,
    prefix: [Flag; PREFIX_LEN],
    trailing: Trailing,
}

#[repr(C)]
union Trailing {
    suffix: [Flag; SUFFIX_LEN],
    ptr: ManuallyDrop<NonNull<Flag>>,
}

Then FlagSlice is eligible for the null pointer memory layout optimization.
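That optimization is easy to observe directly on a simpler type (a standalone snippet, not from the post's codebase):

use core::mem::size_of;
use core::num::NonZeroU16;

fn main() {
    // The forbidden all-zeroes bit pattern becomes the `None` case, so
    // wrapping in `Option` costs no extra space at all.
    assert_eq!(size_of::<Option<NonZeroU16>>(), size_of::<NonZeroU16>());
    assert_eq!(size_of::<Option<NonZeroU16>>(), 2);
}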
The compiler can tell that the zero bit pattern is not a valid representation for the struct and so it can try to fit other information in that representation, like whether an Option<T> is Some or None. It's a really handy optimization that makes size_of::<Option<T>>() == size_of::<T>() - you don't pay for the option. But how would you represent the empty flag slice?

// 💣 UNDEFINED BEHAVIOR!!
impl Default for FlagSlice {
    fn default() -> Self {
        Self {
            len: 0,
            prefix: unsafe { core::mem::zeroed() },
            trailing: Trailing {
                suffix: unsafe { core::mem::zeroed() },
            },
        }
    }
}

The length is zero, the prefix is zeroes, the suffix is zeroes. The whole struct is zeroes! With this representation, Option::<FlagSlice>::None is exactly the same as FlagSlice::default(), causing your code to behave weirdly. Suddenly Some(FlagSlice::default()).is_some() is false! 🥴

While this pitfall seems scary and hard to debug, Miri has got your back. Write types without the MaybeUninit<T> wrapper and cargo miri test will helpfully point out that you're opening yourself up to undefined behavior.

FlagSlice Memory Savings

Rerunning the same example from above, DHAT reports:

Total: 2,584,850 bytes in 44,741 blocks
At t-gmax: 2,190,833 bytes in 947 blocks
At t-end: 0 bytes in 0 blocks
Reads: 1,733,361 bytes
Writes: 2,109,560 bytes

So to compare:

Stem + FlagSet            | Total                             | At t-gmax                        | Reads (B) | Writes (B)
Box<str> + Box<[Flag]>    | 3,086,190 bytes in 130,988 blocks | 2,717,005 bytes in 90,410 blocks | 3,923,475 | 2,610,900
UmbraString + Box<[Flag]> | 2,714,546 bytes in 82,475 blocks  | 2,343,567 bytes in 41,487 blocks | 2,332,587 | 2,239,256
UmbraString + FlagSlice   | 2,584,850 bytes in 44,741 blocks  | 2,190,833 bytes in 947 blocks    | 1,733,361 | 2,109,560

These are some respectable savings! We've cut out about a half of a megabyte of total memory, used far fewer allocations (blocks) and write to the heap a fair amount less. Plus we read from the heap less than half as much as we did before the changes. Not every dictionary will see the same savings, though: some dictionaries use more flags and have longer stems. But as discussed above, every time we use a short variant of an Umbra slice we save memory over a Box<str> or Box<[Flag]>.

Wrapping up & Kudos

We've designed and implemented a German string inspired UmbraSlice<T> type that can carry a small number of Ts inline - a small slice optimization - and used it to save a respectable amount of total memory for the Dictionary type, and also cut way down on heap interaction. We've also stumbled upon lots of interesting detours into Rust topics: fat pointers, runtime memory measurement, MaybeUninit<T> and the null-pointer optimization.

The full code for UmbraSlice<T> lives in Spellbook's repository in src/umbra_slice.rs. As mentioned above, CedarDB has an excellent intro post for German strings and also a nice deeper dive. The former has a snide remark about an optimization which is supposedly impossible in Rust, provoking interesting response posts by those who had been successfully nerd-sniped. One of these - An Optimization That's Impossible in Rust! - I found very informative on the Rust aspects of implementing German strings, and may be interesting if your use-case benefits from Clone for UmbraString being cheap like Clone for Arc. (Not so for Spellbook.) Thank you to these authors!
2024-11-08T09:34:50
en
train
42,051,309
sgasser
2024-11-05T13:27:52
Show HN: Chrome extension to share webpages with Claude by pasting URLs
When working with Claude AI, sharing web content usually requires copying and pasting text or taking screenshots. This extension removes that friction - just paste a URL in your Claude conversation and it automatically extracts and formats the webpage content.<p>The extension uses Jina.ai Reader for content extraction, preserves document structure, and only activates on claude.ai. No configuration needed, no data storage, fully open source.<p>Demo video: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=XmpoaLmCjKM" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=XmpoaLmCjKM</a>
https://chromewebstore.google.com/detail/website-reader-for-claude/jolimpecpmladpnpipohidkbcodngkpn
2
0
null
null
null
null
null
null
null
null
null
null
train
42,051,325
giuliomagnifico
2024-11-05T13:30:11
AMD overtook Intel for the first time in data center revenue in 3Q24
null
https://twitter.com/SKundojjala/status/1853041284157682063
6
0
[ 42051351 ]
null
null
no_article
null
null
null
null
2024-11-07T20:02:10
null
train
42,051,342
fuglede_
2024-11-05T13:33:51
Exponential Separation Between Quantum and Quantum-Inspired Algorithms for ML
null
https://arxiv.org/abs/2411.02087
1
0
null
null
null
null
null
null
null
null
null
null
train
42,051,343
Nayak_S1991
2024-11-05T13:34:02
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,051,349
mhb
2024-11-05T13:34:41
Jaw-Dropping Report Reveals Causes of Arecibo Telescope Collapse
null
https://gizmodo.com/jaw-dropping-report-reveals-causes-arecibo-telescope-collapse-2000517284
6
1
[ 42051457, 42051605, 42051352 ]
null
null
null
null
null
null
null
null
null
train
42,051,353
westurner
2024-11-05T13:35:17
Intel Releases x86-SIMD-sort 6.0 for 10x faster AVX2/AVX-512 Sorting
null
https://www.phoronix.com/news/x86-simd-sort-6.0
2
1
[ 42069878, 42051369 ]
null
null
null
null
null
null
null
null
null
train
42,051,367
westurner
2024-11-05T13:37:31
Revealing the superconducting limit of twisted bilayer graphene
null
https://phys.org/news/2024-11-revealing-superconducting-limit-bilayer-graphene.html
3
2
[ 42051372 ]
null
null
null
null
null
null
null
null
null
train
42,051,368
mhb
2024-11-05T13:37:42
Failure analysis of the Arecibo 305 meter telescope collapse
null
https://nap.nationalacademies.org/read/26982/chapter/1
227
107
[ 42051602, 42052148, 42055165, 42057913, 42056097, 42051740, 42053745, 42053136, 42058315, 42051629, 42056529, 42052272, 42051586, 42051637, 42054286, 42052720, 42052162, 42052937 ]
null
null
null
null
null
null
null
null
null
train
42,051,373
lopkeny12ko
2024-11-05T13:38:42
null
null
null
1
null
[ 42051377 ]
null
true
null
null
null
null
null
null
null
train
42,051,380
timetoogo
2024-11-05T13:39:41
How WebSockets cost us $1M in AWS spend
null
https://www.recall.ai/post/how-websockets-cost-us-1m-on-our-aws-bill
8
0
null
null
null
no_error
How WebSockets cost us $1M on our AWS bill
null
null
IPC is something that is rarely top-of-mind when it comes to optimising cloud costs. But it turns out that if you IPC 1TB of video per second on AWS it can result in enormous bills when done inefficiently. Join us in this deep dive where we unexpectedly discover how using WebSockets over loopback was ultimately costing us $1M/year in AWS spend, and the quest for an efficient high-bandwidth, low-latency IPC.

Recall.ai powers meeting bots for hundreds of companies. We capture millions of meetings per month, and operate enormous infrastructure to do so. We run all this infrastructure on AWS. Cloud computing is enormously convenient, but also notoriously expensive, which means performance and efficiency are very important to us. In order to deliver a cost-efficient service to our customers, we're determined to squeeze every ounce of performance we can from our hardware.

We do our video processing on the CPU instead of on GPU, as GPU availability on the cloud providers has been patchy in the last few years. Before we started our optimization efforts, our bots generally required 4 CPU cores to run smoothly in all circumstances. These 4 CPU cores powered all parts of the bot, from the headless Chromium used to join meetings to the real-time video processing pipelines to ingest the media. We set a goal for ourselves to cut this CPU requirement in half, and thereby cut our cloud compute bill in half. A lofty target, and the first step to accomplish it would be to profile our bots.

Our CPU is being spent doing what??

Everyone knows that video processing is very computationally expensive. Given that we process a ton of video, we initially expected the majority of our CPU usage to be video encoding and decoding. We profiled a sample of running bots, and came to a shocking realization. The majority of our CPU time was actually being spent in two functions: __memmove_avx_unaligned_erms and __memcpy_avx_unaligned_erms.

Let's take a brief detour to explain what these functions do. memmove and memcpy are both functions in the C standard library (glibc) that copy blocks of memory. memmove handles a few edge-cases around copying memory into overlapping ranges, but we can broadly categorize both these functions as "copying memory". The avx_unaligned_erms suffix means this function is specifically optimized for systems with Advanced Vector Extensions (AVX) support and is also optimized for unaligned memory access. The erms part stands for Enhanced REP MOVSB/STOSB, which are optimizations in recent Intel processors for fast memory movement. We can broadly categorize the suffix to mean "a faster implementation, for this specific processor".

In our profiling, we discovered that by far, the biggest callers of these functions were in our Python WebSocket client that was receiving the data, followed by Chromium's WebSocket implementation that was sending the data.

An expensive set of sockets...

After pondering this, the result started making more sense. For bots that join calls using a headless Chromium, we needed a way to transport the raw decoded video out of Chromium's Javascript environment and into our encoder. We originally settled on running a local WebSocket server, connecting to it in the Javascript environment, and sending data over that channel. WebSocket seemed like a decent fit for our needs.
It was "fast" as far as web APIs go, convenient to access from within the JS runtime, supported binary data, and most importantly was already built-in to Chromium. One complicating factor here is that raw video is surprisingly high bandwidth. A single 1080p 30fps video stream, in uncompressed I420 format, is 1080 * 1920 * 1.5 (bytes per pixel) * 30 (frames per second) = 93.312 MB/s Our monitoring showed us that at scale, the p99 bot receives 150MB/s of video data. That's a lot of data to move around! The next step was to figure out what specifically was causing the WebSocket transport to be so computationally expensive. We had to find the root cause, in order to make sure that our solution would sidestep WebSocket's pitfalls, and not introduce new issues of it's own. We read through the WebSocket RFC, and Chromium's WebSocket implementation, dug through our profile data, and discovered two primary causes of slowness: fragmentation, and masking. Fragmentation The WebSocket specification supports fragmenting messages. This is the process of splitting a large message across several WebSocket frames. According to Section 5.4 of the WebSocket RFC): The primary purpose of fragmentation is to allow sending a message that is of unknown size when the message is started without having to buffer that message. If messages couldn't be fragmented, then an endpoint would have to buffer the entire message so its length could be counted before the first byte is sent. With fragmentation, a server or intermediary may choose a reasonable size buffer and, when the buffer is full, write a fragment to the network. A secondary use-case for fragmentation is for multiplexing, where it is not desirable for a large message on one logical channel to monopolize the output channel, so the multiplexing needs to be free to split the message into smaller fragments to better share the output channel. (Note that the multiplexing extension is not described in this document.) Different WebSocket implementations have different standards Looking into the Chromium WebSocket source code, messages larger than 131KB will be fragmented into multiple WebSocket frames. A single 1080p raw video frame would be 1080 * 1920 * 1.5 = 3110.4 KB in size, and therefore Chromium's WebSocket implementation would fragment it into 24 separate WebSocket frames. That's a lot of copying and duplicate work! Masking The WebSocket specification also mandates that data from client to server is masked. To avoid confusing network intermediaries (such as intercepting proxies) and for security reasons that are further discussed in Section 10.3, a client MUST mask all frames that it sends to the server Masking the data involves obtaining a random 32-bit masking key, and XOR-ing the bytes of the original data with the masking key in 32-bit chunks. This has security benefits, because it prevents a client from controlling the bytes that appear on the wire. If you're interested in the precise reason why this is important, read more here! While this is great for security, the downside is masking the data means making an additional once-over pass over every byte sent over WebSocket -- insignificant for most web usages, but a meaningful amount of work when you're dealing with 100+ MB/s Quest for a cheaper transport! We knew we need to move away from WebSockets, so we began our quest to find a new mechanism to get data out of Chromium. We realized pretty quickly that browser APIs are severely limited if we wanted something significantly more performant that WebSocket. 
Going beyond the browser's APIs meant we'd need to fork Chromium and implement something custom. But this also meant that the sky was the limit for how efficient we could get. We considered 3 options: raw TCP/IP, Unix Domain Sockets, and Shared Memory.

TCP/IP

Chromium's WebSocket implementation, and the WebSocket spec in general, create some especially bad performance pitfalls. How about we go one level deeper and add an extension to Chromium to allow us to send raw TCP/IP packets over the loopback device? This would bypass the issues around WebSocket fragmentation and masking, and this would be pretty straightforward to implement. The loopback device would also introduce minimal latency.

There were a few drawbacks however. Firstly, the maximum size for TCP/IP packets is much smaller than the size of our raw video frames, which means we still run into fragmentation. In a typical TCP/IP network connected via ethernet, the standard MTU (Maximum Transmission Unit) is 1500 bytes, resulting in a TCP MSS (Maximum Segment Size) of 1448 bytes. This is much smaller than our 3MB+ raw video frames. Even the theoretical maximum size of a TCP/IP packet, 64k, is much smaller than the data we need to send, so there's no way for us to use TCP/IP without suffering from fragmentation.

There was another issue as well. Because the Linux networking stack runs in kernel-space, any packets we send over TCP/IP need to be copied from user-space into kernel-space. This adds significant overhead as we're transporting a high volume of data.

Unix Domain Sockets

We also explored exiting the networking stack entirely, and using good old Unix domain sockets. A classic choice for IPC, and it turns out Unix domain sockets can actually be pretty fast. Most importantly however, Unix domain sockets are a native part of the Linux operating system we run our bots in, and there are pre-existing functions and libraries to push data through Unix sockets.

There is one con however. To send data through a Unix domain socket, it needs to be copied from user-space to kernel-space, and back again. With the volume of data we're working with, this is a decent amount of overhead.

Shared Memory

We realized we could go one step further. Both TCP/IP and Unix Domain Sockets would at minimum require copying the data between user-space and kernel-space. With a bit of DIY, we could get even more efficient using Shared Memory.

Shared memory is memory that can be simultaneously accessed by multiple processes at a time. This means that our Chromium could write to a block of memory, which would then be read directly by our video encoder with no copying at all required in between. However there's no standard interface for transporting data over shared memory. It's not a standard like TCP/IP or Unix Domain sockets. If we went the shared memory route, we'd need to build the transport ourselves from the ground up, and there's a lot that could go wrong. Glancing at our AWS bill gave us the resolve we needed to push forward. Shared memory, for maximum efficiency, was the way to go.

Sharing is caring (about performance)

As we need to continuously read and write data serially into our shared memory, we settled on a ring buffer as our high level transport design. There are quite a few ringbuffer implementations in the Rust community, but we had a few specific requirements for our implementation:

- Lock-free: We need consistent latency and no jitter, otherwise our real-time video processing would be disrupted.
- Multiple producer, single consumer: We have multiple Chromium threads writing audio and video data into the buffer, and a single thread in the media pipeline consuming this data.
- Dynamic Frame Sizes: Our ringbuffer needed to support audio packets, as well as video frames of different resolutions, meaning the size of each datum could vary drastically.
- Zero-Copy Reads: We want to avoid copies as much as possible, and therefore want our media pipeline to be able to read data out of the buffer without copying it.
- Sandbox Friendliness: Chromium threads are sandboxed, and we need them to be able to access the ringbuffer easily.
- Low Latency Signalling: We need our Chromium threads to be able to signal to the media pipeline when new data is available, or when buffer space is available.

We evaluated the off-the-shelf ringbuffer implementations, but didn't find one that fit our needs... so we decided to write our own!

The most non-standard part of our ring-buffer implementation is our support for zero-copy reads. Instead of the typical two pointers, we have three pointers in our ring buffer:

- write pointer: the next address to write to
- peek pointer: the address of the next frame to read
- read pointer: the address where data can be overwritten

To support zero-copy reads we feed frames from the peek pointer into our media pipeline, and only advance the read pointer when the frame has been fully processed (a simplified sketch of this discipline follows below). This means that it's safe for the media pipeline to hold a reference to the data inside the ringbuffer, since that reference is guaranteed to be valid until the data is fully processed and the read pointer is advanced. We use atomic operations to update the pointers in a thread-safe manner, and to signal that new data is available or buffer space is free we use a named semaphore.

After implementing this ringbuffer, and deploying this into production with a few other optimizations, we were able to reduce the CPU usage of our bots by up to 50%. This exercise in optimizing IPC for CPU efficiency reduced our AWS bill by over a million dollars per year, a huge impact and a really great use of time!
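As promised, a deliberately simplified, single-threaded sketch of the three-pointer discipline. This is an illustration only - the production buffer is multi-producer, lives in shared memory, updates its pointers atomically, and signals through a named semaphore:

struct FrameRing {
    frames: Vec<Vec<u8>>, // stand-in for the shared-memory region
    write: usize,         // index of the next slot to write
    peek: usize,          // index of the next frame to hand out
    read: usize,          // index below which slots may be reused
}

impl FrameRing {
    fn new(capacity: usize) -> Self {
        Self { frames: vec![Vec::new(); capacity], write: 0, peek: 0, read: 0 }
    }

    // Producer: store a frame only if a free slot exists between `write` and `read`.
    fn push(&mut self, frame: Vec<u8>) -> bool {
        if self.write - self.read == self.frames.len() {
            return false; // full: writing would clobber an unreleased frame
        }
        let cap = self.frames.len();
        self.frames[self.write % cap] = frame;
        self.write += 1;
        true
    }

    // Consumer: borrow the next frame *without* freeing its slot, so the
    // reference stays valid for as long as the frame is being processed.
    fn peek_frame(&mut self) -> Option<&[u8]> {
        if self.peek == self.write {
            return None;
        }
        let cap = self.frames.len();
        let frame = &self.frames[self.peek % cap];
        self.peek += 1;
        Some(frame.as_slice())
    }

    // Consumer: advance `read` only after processing finishes, which is what
    // makes the slot reusable by the producer.
    fn release_frame(&mut self) {
        debug_assert!(self.read < self.peek);
        self.read += 1;
    }
}

fn main() {
    let mut ring = FrameRing::new(4);
    assert!(ring.push(vec![0u8; 16]));
    let len = ring.peek_frame().map(|frame| frame.len()).unwrap();
    // ... hand the frame to the encoder here, then free its slot ...
    ring.release_frame();
    println!("processed a {len}-byte frame");
}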
2024-11-08T05:53:41
en
train
42,051,385
impish9208
2024-11-05T13:40:36
Never been kissed – Japan's teen boys losing out on love
null
https://www.bbc.com/news/articles/cp9z2pp80nyo
8
1
[ 42051626, 42051483 ]
null
null
missing_parsing
Japan: Survey finds record low number of teen boys had first kiss
2024-11-05T09:58:51.030Z
Joel Guinto
In many countries it's a teenage rite of passage: a first kiss. But a new survey of Japanese high school students has revealed that four out of five 15-18-year-old boys have yet to reach the milestone. And things aren't looking much different for the girls, with just over one in four female high schoolers having had their first kiss.

These are the lowest figures recorded since Japan first began asking teenagers about their sexual habits back in 1974 - and are likely to be a worry in a country with one of the world's lowest birth rates.

The study by the Japan Association for Sex Education (Jase) quizzed 12,562 students across junior high schools, high schools and university - asking them about everything from kisses to sexual intercourse. The survey takes place every six years, and has been recording a fall in first kisses since 2005 - when the figure was closer to one in two.

But this year's report found kissing was not the only area which had seen a fall in numbers. Perhaps unsurprisingly, it also revealed a drop in the numbers of Japanese youth having sexual intercourse. According to the study, the ratio of high school boys who say they have had sexual intercourse fell 3.5 points from 2017 to 12%. For high school girls, it declined 5.3 points to 14.8%.

Experts have pointed to the impact of the Covid pandemic as one possible reason for the drop. School closures and restrictions on physical contact during the pandemic had likely impacted many of these students, as it happened "at a sensitive time when [they were] beginning to become interested in sexuality", according to Yusuke Hayashi, a sociology professor at Musashi University quoted in the Mainichi newspaper.

However, the survey did find one area of increase: the number of teenagers admitting to masturbation across all demographics was at record high levels.

The results come after a separate survey earlier this year found that nearly half of marriages in Japan are sexless. They also come as Japan struggles to arrest its falling birth rate, and provide further cause for concern. In 2023, the then-prime minister warned that the country's low birth rate was pushing it to the brink of being unable to function. Some researchers have suggested the population - currently at 125 million people - could fall to less than 53 million by the end of the century.

A range of other factors have been marked out as possible contributors - including rising living costs, more women in education and work, as well as greater access to contraception, leading to women choosing to have fewer children.

Japan already has the world's oldest population, measured by the UN as the proportion of people aged 65 or older. In late 2023, Japan said that for the first time one in 10 people in the country are aged 80 or older. In March, diaper-maker Oji Holdings announced it would stop making baby nappies to focus on making adult diapers.
2024-11-08T20:25:07
null
train
42,051,402
mantegna
2024-11-05T13:43:54
Show HN: Free Harvard-Style Resume Generator
null
https://plump.ai/resume-project
1
1
[ 42051570 ]
null
null
null
null
null
null
null
null
null
train
42,051,406
cloudygandalf
2024-11-05T13:44:08
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,051,409
leontrolski
2024-11-05T13:44:24
Stop Using Pytest Fixtures
null
https://leontrolski.github.io/fixtures.html
2
0
[ 42051485 ]
null
null
null
null
null
null
null
null
null
train
42,051,420
Attummm
2024-11-05T13:46:53
Show HN: RedisDict
RedisDict is a Python dictionary with a Redis backend, designed for handling large datasets. It simplifies Redis operations especially for large-scale and distributed systems, and has been running in production since 2017, originally built in Python 2.<p>The library focuses on get and set operations in Redis, ensuring that each key-value pair operates independently, so changes to one entry do not affect others.<p>Optimized for performance with large datasets, RedisDict maintains high speed even at scale.<p>Data types are managed without using Pickle to avoid security risks associated with untrusted, serialized data.<p>Key features include namespacing, pipelining, expiration, and support for multiple data types. RedisDict provides a full dictionary interface and has extensive test coverage.<p>GitHub: <a href="https:&#x2F;&#x2F;github.com&#x2F;Attumm&#x2F;redis-dict">https:&#x2F;&#x2F;github.com&#x2F;Attumm&#x2F;redis-dict</a><p>Documentation: <a href="https:&#x2F;&#x2F;attumm.github.io&#x2F;redis-dict&#x2F;" rel="nofollow">https:&#x2F;&#x2F;attumm.github.io&#x2F;redis-dict&#x2F;</a>
null
3
2
[ 42051575 ]
null
null
null
null
null
null
null
null
null
train
42,051,423
codetoli
2024-11-05T13:47:31
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,051,425
gdbuildsgd
2024-11-05T13:48:12
Show HN: I built a tool that protects your browsing privacy
Hey HN Family,<p>Long time no see. After the epic failure of my first product (spent 5 months developing it and made $0), I believe this time I&#x27;ve built something actually cool.<p>You know how a remote meeting with your team or a client goes, especially when you have to share your screen. Do I have any tabs with sensitive information? Maybe I didn&#x27;t close THAT page. What if the client sees this or that information?<p>If you are a streamer&#x2F;content creator, you face similar challenges as well. Let&#x27;s say you are shooting a nice YouTube video; but you will have to blur or filter out some sensitive information. Usually you&#x27;d deal with that during post-production, and this is such a waste of time.<p>Well, nevermore.<p>Blurs is a browser extension that protects your browser privacy while screen sharing, streaming, or browsing, with different filtering options and modes.<p>You can select any HTML element on a page, and apply one of three different filtering options (Blur, Solid box, Pixels); or just draw a fixed position filter on your page as you wish. The world is your canvas after all ^^<p>Not sure how it might help you? Using Blurs brings you these benefits:<p>1. Enhanced privacy: Protect yourself from sharing private or sensitive data during screen sharing and streaming sessions. 2. Save time on post-production: Reduce the need for post-production editing for screen recording and taking screenshots. 3. Complete control over your browser: Gain fine-tuned control over what parts of your browser to blur or filter. 4. Better screen sharing experience: Remove the risk of sharing personal, business, or sensitive information during meetings.<p>It works on any Chromium-based browser and Firefox, though Microsoft Edge approval is still pending.<p>But, who is it for? 1. Professionals in virtual meetings: Blurs can help you prevent accidental information leaks. 2. Content creators and streamers: By filtering out unwanted elements, Blurs can minimize the time spent on editing or obscuring sensitive information during post-production. 3. Remote workers handling sensitive information: For remote employees or freelancers dealing with sensitive information, Blurs can help keep work data private during video calls or presentations. 4. Educators and trainers: Educators sharing learning materials, or educational resources on screen can use Blurs to filter out non-relevant information, protect personal information; providing a cleaner, distraction-free presentation for students. 5. Privacy-concerned individuals: Anyone concerned with protecting their browsing privacy can use Blurs to keep their personal details to themselves.<p>I hope you find this tool of mine useful. I am open to all constructive criticism, feedback, and looking forward to hearing about your opinions.<p>Have a wonderful day &lt;3 - Goksu
https://www.blurs.app
1
0
null
null
null
null
null
null
null
null
null
null
train
42,051,435
rrampage
2024-11-05T13:50:25
The Tail at Scale – Jeff Dean et al. (2013)
null
https://cacm.acm.org/research/the-tail-at-scale/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,051,452
andrewl
2024-11-05T13:52:31
We Shall Fight in the Buttery – Oxford's War 1939–1945
null
https://literaryreview.co.uk/we-shall-fight-in-the-buttery
55
11
[ 42056020, 42052160, 42054094, 42056652, 42052534 ]
null
null
null
null
null
null
null
null
null
train
42,051,455
visoar
2024-11-05T13:52:59
Pro OG:Image for Your Website by AI in 10s
null
https://ogimage.site/
1
0
[ 42051456 ]
null
null
no_error
AI OG:Image Generator in 10 Seconds | OGImage.site
null
UllrAI
Quick & Easy Process: Create Professional Website OG Images in 3 Simple Steps. Experience OGimage.site's cutting-edge AI technology to generate perfectly optimized OG images that maximize your website's social media impact.

1. Enter Website Info: Input your website URL or provide metadata and content for your OG image
2. Customize Design: Select from premium styles and personalize elements to match your website branding
3. Generate & Download: Our AI generates optimized social sharing OG images in seconds

Enhance Your Website's Social Presence: Transform your website's social media appearance with OGimage.site. Create professional, eye-catching OG images that drive engagement and clicks.

- Effortless Generation: Generate perfect OG images automatically from your website content with just one click.
- AI-Powered Customization: Use advanced AI to create custom OG images that match your website's branding perfectly.
- Maximize Social Impact: Boost your social media presence with professional OG images that increase engagement and clicks.

Features: Create stunning OG images in seconds with our AI-powered tools at OGimage.site.

Testimonials: See how OGimage.site is helping websites increase their social media engagement.

- Sarah Green, Portrait Photographer: "OGimage.site has transformed how we present our content on social media. The AI-generated OG images are professional and eye-catching."
- Laura Bennett, Digital Marketing Specialist: "Creating consistent, branded OG images for our website content has never been easier with OGimage.site."
- Michael Carter, Brand Strategist: "The OG images generated by OGimage.site perfectly match our brand identity and significantly improve our social sharing results."
- Olivia Turner, Startup Founder: "As a startup, we need to stand out on social media. OGimage.site helps us create professional OG images that grab attention."
- David Harris, Creative Director: "The flexibility of OGimage.site is amazing. We can create custom OG images that perfectly match our website's style."
- Chris Wilson, Full-Stack Developer: "The speed and quality of OGimage.site are remarkable. We can generate perfect OG images for all our pages in seconds."
- Emma Collins, Marketing Coordinator: "OGimage.site has become essential for our content strategy. The AI-generated images consistently drive more engagement. The automated OG image generation saves us hours of design work while maintaining professional quality across all our pages."

Create Your OG Image! Tired of boring social previews? Our AI OG image generator is here to help! Just enter your website URL and let our AI create stunning preview images that will make your links stand out!
2024-11-07T22:58:31
en
train
42,051,464
iamsahaj
2024-11-05T13:55:16
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,051,468
hn_acker
2024-11-05T13:56:00
AT&T (Again) Caught Cheating Federal Subsidy Program for Poor People
null
https://www.techdirt.com/2024/11/05/att-again-caught-cheating-federal-subsidy-program-for-poor-people/
12
0
[ 42051476 ]
null
null
null
null
null
null
null
null
null
train
42,051,469
Babakagolo
2024-11-05T13:56:17
null
null
null
1
null
[ 42051470 ]
null
true
null
null
null
null
null
null
null
train
42,051,503
robenkleene
2024-11-05T14:01:07
The Developer of Acorn on Apple's Pixelmator Acquisition
null
https://shapeof.com/archives/2024/11/apple_buys_pixelmator.html
1
0
[ 42051517 ]
null
null
missing_parsing
Apple Buys Pixelmator
null
null
November 4, 2024 So I didn't see that one coming. John Gruber has a good take over at Daring Fireball: Pixelmator Acquired by Apple; Future of Their Apps Uncertain. Acorn and Pixelmator came out 15 days apart from each other in 2007, and the target market between the two has always overlapped. But even with that I've always been on good terms with the Pixelmator folks. Any time we were both attending WWDC, we would meet up and complain about image APIs or just chat over lunch. The other major player in this category is Affinity, which was purchased by Canva in March of this year. So it feels strange that Acorn is now effectively the only independent Mac image editor from that era. I have no inside information on what Apple is going to do with Pixelmator. Will it be discontinued? Will it be part of an Apple One subscription tier? Will it be part of the OS now or folded into Photos? Was this purely a talent grab? Time will tell. But today I woke up, and got to work on Acorn. And I'll do the same tomorrow and the day after that. I enjoy what I work on and I plan on doing it for many years to come. And I truly value my independence. I love being able to work on what I want, and when I want to. Good things are happening to Acorn these days. I'm wrapping up some great new features in a new release, and if you'd like to test them out, let me know. © August Mueller.
2024-11-08T20:08:25
null
train
42,051,509
philk10
2024-11-05T14:01:33
Test a Server with Docker Compose on GitHub Actions
null
https://spin.atomicobject.com/docker-compose-github-actions/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,051,516
gaius_baltar
2024-11-05T14:03:12
The Freedom Covenant
null
https://github.com/c3d/freedom-covenant
2
0
[ 42051526 ]
null
null
null
null
null
null
null
null
null
train
42,051,518
PaulHoule
2024-11-05T14:03:26
Enhancing Long Context Performance in LLMs Through Inner Loop Query Mechanism
null
https://arxiv.org/abs/2410.12859
2
0
null
null
null
null
null
null
null
null
null
null
train
42,051,555
todsacerdoti
2024-11-05T14:07:52
Developer Survey 2024
null
https://gleam.run//news/developer-survey-2024/
3
0
[ 42052576 ]
null
null
null
null
null
null
null
null
null
train
42,051,556
sgasser
2024-11-05T14:07:55
Everything I Learned About Writing Professional Emails After 11 Years in Software
Hi! After years of leading software teams and seeing countless emails fail to achieve their goals, I've written down everything I learned about effective business communication. Most business emails fail before they're even opened because we write novels instead of emails, use vague subject lines, bury the important stuff, and miss clear calls to action. But these are all fixable problems. I've broken down the key components of effective emails, including: a practical framework for professional emails, templates for common business scenarios, time-saving techniques that actually work, cross-cultural communication tips, and mobile-first considerations. I've written up my complete findings and frameworks here: https://mailwizard.ai/blog/how-to-write-professional-emails What's your approach to handling business communication? What practices have you found most effective?
null
2
0
null
null
null
null
null
null
null
null
null
null
train
42,051,564
ayoisaiah
2024-11-05T14:08:31
Redacting Sensitive Data with the OpenTelemetry Collector
null
https://betterstack.com/community/guides/observability/redacting-sensitive-data-opentelemetry/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,051,571
JoshTriplett
2024-11-05T14:09:46
Bell Canada to Acquire Ziply Fiber
null
https://ziplyfiber.com/news/press-release/BCE-to-acquire-Ziply-Fiber
2
0
null
null
null
null
null
null
null
null
null
null
train
42,051,579
domferr
2024-11-05T14:11:01
Show HN: Tiling Shell – Bringing Modern Window Management to GNOME
Hey HN, I'm excited to share Tiling Shell, an open-source GNOME extension I created to bring a more modern, user-friendly window management experience to Linux. Many existing solutions feel outdated and are lacking in user experience, so my goal with Tiling Shell is to offer a fresh, integrated approach that works well for both newcomers and advanced users. Demo video (by OMG! Ubuntu): https://youtu.be/RBoO5lgR1kA?t=20 Key features: easily manage, edit, create, and delete custom layouts with a built-in editor; work seamlessly across multiple monitors, even with different scaling factors; and get Windows 11-style snap assistant functionality, keybinding support, and much more. I'm excited to keep improving Tiling Shell, adding more customization options and compatibility features. Check it out here: https://extensions.gnome.org/extension/7065/tiling-shell/ The GitHub repo (https://github.com/domferr/tilingshell) contains more details and demos. I'd love to gather feedback from the HN community on features, improvements, and ideas for future versions <3
https://github.com/domferr/tilingshell
5
0
null
null
null
null
null
null
null
null
null
null
train
42,051,595
imanestoslatko
2024-11-05T14:13:16
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,051,596
MattieTK
2024-11-05T14:13:30
Show HN: A free multiviewer to watch major news events like elections
Hey HN! I'm a Product Manager at the FT in the UK, and VidGrid is something I've been working on for years in my own time. It's just had a major update, just in time for the election. The idea is to let you watch lots of different news streams at the same time, as if you're in a control room or news gallery, for the very latest info. I tried to prioritise usability and speed. All streams are third-party sourced (from broadcasters' own links). You can now sign up for an account to save/favourite your own streams, and it has better keyboard and drag-and-drop support. I've also fixed a ton of bugs from the last version. Happy to answer questions about it, or about news media at this time of year in general! Enjoy, and have a good evening.
https://vidgrid.tk.gg
9
1
[ 42065019 ]
null
null
null
null
null
null
null
null
null
train
42,051,603
rntn
2024-11-05T14:14:57
Finding Beauty in Rusting Iron: Hongo Shinya / Artist Blacksmith [video]
null
https://www3.nhk.or.jp/nhkworld/en/shows/2105156/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,051,607
ingve
2024-11-05T14:15:10
Notes on Binary Soup
null
https://www.marginalia.nu/log/a_112_slop_ideas/
42
4
[ 42053678 ]
null
null
null
null
null
null
null
null
null
train