Dataset columns:

id: int64 (2 to 42.1M)
by: large_string (lengths 2 to 15)
time: timestamp[us]
title: large_string (lengths 0 to 198)
text: large_string (lengths 0 to 27.4k)
url: large_string (lengths 0 to 6.6k)
score: int64 (-1 to 6.02k)
descendants: int64 (-1 to 7.29k)
kids: large list
deleted: large list
dead: bool (1 class)
scraping_error: large_string (25 distinct values)
scraped_title: large_string (lengths 1 to 59.3k)
scraped_published_at: large_string (lengths 4 to 66)
scraped_byline: large_string (lengths 1 to 757)
scraped_body: large_string (lengths 1 to 50k)
scraped_at: timestamp[us]
scraped_language: large_string (58 distinct values)
split: large_string (1 value)
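For orientation, here is a minimal sketch of how one might stream and inspect records with this schema using the Hugging Face `datasets` library. The repository id below is a placeholder, not the dataset's real name.

```python
# Hedged sketch: "user/hn-scraped-stories" is a placeholder repo id, not the real one.
from itertools import islice

from datasets import load_dataset

ds = load_dataset("user/hn-scraped-stories", split="train", streaming=True)

for row in islice(ds, 3):
    # Each record carries the original HN fields plus the scraped_* columns.
    print(row["id"], row["by"], row["score"], row["title"])
    if row["scraping_error"] is None and row["scraped_body"]:
        # Show the first few hundred characters of the scraped article text.
        print(row["scraped_body"][:300], "...")
```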
42,037,799
mooreds
2024-11-04T02:14:05
Think first about what problem this is solving and for whom (2021)
null
https://letterstoanewdeveloper.com/2021/01/18/think-first-about-what-problem-this-is-solving-and-for-whom/
9
0
null
null
null
null
null
null
null
null
null
null
train
42,037,814
impish9208
2024-11-04T02:18:03
Some Final Personal-Finance Advice from Jonathan Clements
null
https://www.wsj.com/personal-finance/jonathan-clements-personal-finance-cancer-e30d1396
1
2
[ 42037816 ]
null
null
null
null
null
null
null
null
null
train
42,037,830
productmode
2024-11-04T02:20:43
Show HN: Leftwrite the AI Co-Writer
Disclaimer: this AI doesn't write for you.

I've been blogging for about a year now and think writing is a great skill to have. I had 2 newsletters and tried Substack, Ghost, Posthaven and Beehiv. These are all great, but the hardest part about writing for me was a) developing a unique style and b) the brainstorming and editing. The brainstorming is fun, but the drafting can be hell. Anyway, I've used a few tools supposedly meant to help with writing, but none has been what I'm looking for beyond syntax and tone of voice.

I'm building LeftWrite, an AI co-writing assistant that adapts to your writing style and helps you write words that matter and will resonate with your audience. If this sounds interesting, or you've got any questions, hit the waitlist: leftwrite.xyz

Eager to get some feedback!
https://leftwrite.xyz/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,037,841
bookofjoe
2024-11-04T02:23:18
Portrait of Alan Turing Made by an A.I.-Powered Robot Could Sell for Up to $180k
null
https://www.smithsonianmag.com/smart-news/a-portrait-of-alan-turing-made-by-an-ai-powered-robot-could-sell-for-up-to-180000-180985356/
1
1
[ 42037846 ]
null
null
null
null
null
null
null
null
null
train
42,037,847
leightonlove
2024-11-04T02:24:38
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,037,877
pseudolus
2024-11-04T02:31:12
Life evolves. So do minerals. How about everything else?
null
https://www.science.org/content/article/life-evolves-so-do-minerals-how-about-everything-else
3
4
[ 42038060, 42039202 ]
null
null
no_article
null
null
null
null
2024-11-08T02:29:38
null
train
42,037,904
throwoutway
2024-11-04T02:37:01
Inkle blog – ink version 1.0 release
null
https://www.inklestudios.com/2021/02/22/ink-version-1.html
1
0
null
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
inkle blog - ink version 1.0 release!
null
null
February, 22, 2021 We're proud to announce that ink, our open-source scripting language for interactive narrative, has now officially reached version 1.0! What's new in Version 1.0? Version 1.0 is a stable release of "the story so far". The core features are well-tested and well-used, and the current integration has powered two full inkle releases: 2019's 3D adventure game, Heaven's Vault, and 2020's procedurally narrated tactics game, Pendragon. But there's one big new feature: we've introduced the concept of parallel, shared-state story-flows - allowing the game to, say, switch between different simultaneous NPC conversations, while still allowing one conversation to affect the other. We've also improved error handling and improved the way ink calls game-side functions. Inky now has a dark mode (see above!), zoom, a word count and stats menu, and better syntax highlighting. The default web player has new features for links and audio. And the Unity integration now allows live recompilation mid-game. Full details can be found on the release notes page for ink, Inky and the ink-Unity-integration plugin. What is ink? ink is designed from the ground up to be "Word for interactive fiction". Open it up, and start writing. Branch when you need to, rejoin the flow seamlessly, track state and vary what's written based on what came before - without any need to plan, layout, or structure in advance. Organise your content when you know what shape it wants to take, not before. It was recently awarded an Epic Megagrant, and we're otherwise supported by a Patreon. A Different Approach To Interactive Writing ink takes a different approach from other interactive fiction writing software in several ways. It's entirely script-based, with no diagrams or flow-charts. Instead of being optimised for loops, it allows writers to quickly and robustly create heavily branching flow that runs naturally from beginning to end - as most interactive stories do. All code and technical information is added as mark-up on top of text, making it easy to scan, proof-read, redraft and edit. It's also easy to see what's been changed in a file when using source-control. Another key concept is global, always-on state tracking: every line the player sees in the course of the game is remembered, automatically, by the engine, without the need to define variables. This allows for fast iteration on game-logic and the easy implementation of cause-and-effect, without the need for "boilerplate" code. Flexible and Powerful But that doesn't mean ink is limited: it has variables, functions, maths and logic should you need them, and can also hand off complex decision-making to the game-code itself. ink is also deliberately layout-agnostic. By handing the UI over to the game, it can be used to make hyperlink games, visual novels, RPGs, chatbots, FMV games, or simply to deliver highly-responsive barks in an first-person action game. Over on the engine side, ink comes with a run-time debugger that allows reading and poking of variable state, and a profiling tool to help developers in frame-rate dependent environments to find and fix story-side slowdowns. Uptake ink has been adopted by game studios and other developers all around the world. It's been used on big indie games such Haven, NeoCab, Over the Alps, Falcon Age, Signs of the Sojourner and others. 
For people looking to learn more about using ink, we've got several talks on our approaches, including this on from GDC 2017 on how Heaven's Vault drives its 3D world from a text-based script: Development History Here at inkle, ink has been our bedrock. We've used ink on every single title we've released over the last ten years, expanding and developing the feature set of the language over that time from quick mark-up for authoring branching choice-based narratives (Sorcery!) to authoring open-world, responsive, go-anywhere-and-do-anything narratives (er, Sorcery! 3. Also, Heaven's Vault.) And we're continuing to find new ways to use the engine, like last year's experiment in procedurally narrating a chess-like game. Originally released as an open source beta in 2016, ink quickly accrued an editor, inky, for easily writing and testing content, and a dedicated Unity plug-in to assist with integrating and testing stories at run-time. Community Development Since its release, a wide community of developers and enthusiasts have contributed to the project. There is a full javascript port, built into inky, which allows the editor to produce stand-alone web-playable games. There is a port for the popular Godot engine, and work is progressing on a C port that will ultimately enable Unreal integration. We've had contributions in the form of bug fixes and features requests to the main ink code base, and too many contributions to ink to list - from Dark Mode, through auto-complete, to an integrated version of the "Writing With Ink" documentation and, most recently, an "open recent project" menu listing. Looking back, we think these developments have justified our decision to make ink fully free and open source: the development around the system that's taken place would never have happened without the efforts of other developers, and we'd like to take this opportunity to thank everyone who's offered contributions, both large and small, over the last five years. The core meeting point for ink developers has been the inkle Discord, which is now the go-to place on the internet for assistance with implementing ink features, and contains a wealth of tips and ideas. Looking forward! As inkle continues to develop games, we're continuing to develop and extend both ink and the Unity integration to allow us to tackle new problems. Though we're naturally more cautious with new languages features now the codebase is mature, we have an internal roadmap of issues and features we'd like to address. The more support we get - both financially, and in terms of bug and community support - the more we can push forwards. Meanwhile, ink will continue to be free to use and available to all for as long as we are able to support it! Happy writing!
2024-11-07T23:37:12
null
train
42,037,929
Petiver
2024-11-04T02:41:19
The Saga of a Celebrated Scientist – and His Rodent Dystopia
null
https://www.chronicle.com/article/the-saga-of-a-celebrated-scientist-and-his-rodent-dystopia
30
16
[ 42040608, 42040898, 42040894, 42042543, 42042054, 42042031 ]
null
null
no_article
null
null
null
null
2024-11-08T17:36:07
null
train
42,037,930
aard
2024-11-04T02:41:29
A growing body of data is debunking myths about remote work
null
https://thehill.com/opinion/technology/4747313-remote-work-benefits-meta-analysis/
5
2
[ 42038212, 42038523, 42040585 ]
null
null
null
null
null
null
null
null
null
train
42,037,945
harambae
2024-11-04T02:44:26
Camdom: German company launches first 'digital condom'. How's it work?
null
https://www.firstpost.com/tech/german-company-launches-worlds-first-digital-condom-what-is-it-and-how-does-it-work-13830267.html
1
1
[ 42038280 ]
null
null
null
null
null
null
null
null
null
train
42,037,950
mobiletech
2024-11-04T02:45:08
null
null
null
1
null
[ 42037951 ]
null
true
null
null
null
null
null
null
null
train
42,037,954
robinseazon
2024-11-04T02:45:30
null
null
null
1
null
[ 42037955 ]
null
true
null
null
null
null
null
null
null
train
42,037,964
mayfer
2024-11-04T02:48:02
Sidecoder: Better multi-file coding assistant UX
null
https://twitter.com/mayfer/status/1853235800978112922
1
1
[ 42037965 ]
null
null
null
null
null
null
null
null
null
train
42,037,975
vinnyglennon
2024-11-04T02:50:13
How To – Choose the Right Instance Size for AWS RDS
null
https://reliabilityengineering.substack.com/p/how-to-choose-the-right-instance
1
0
null
null
null
no_error
How to - Choose the Right Instance Size for AWS RDS
2024-10-20T23:31:49+00:00
Prabesh
Amazon RDS (Relational Database Service) offers a variety of instance types, each optimized for different workloads. These instances come with varying levels of CPU, memory, storage, and networking capacities. Choosing the right instance type is crucial because it directly affects three key aspects:Performance: An undersized instance can lead to slow database queries, while an oversized one may result in unnecessary costs.Cost: Larger instances are more expensive, so selecting the right size helps balance performance and cost.Scalability: Instance size impacts your ability to scale effectively. Starting with an appropriate instance size sets you up for future growth.AWS offers a broad variety of instance types for both EC2 and RDS, each tailored to specific use cases:Burstable instances (T2, T3, T4) are cost-effective but operate using a credit system. When the CPU utilization is below the baseline, credits are accumulated. These credits can be consumed when the CPU usage spikes. Once the credits are exhausted:Standard Mode: The instance is throttled to a lower performance, and no additional charges are incurred.Unlimited Mode (default for T3 and T4): The CPU continues operating without throttling, but AWS bills extra for the burst usage, which can lead to higher costs.When deciding on an RDS instance size, consider the following factors:CPU Utilization: High CPU performance is necessary for fast query execution. Monitor CPU usage regularly and adjust accordingly.Memory Requirements: Adequate memory is critical for performance. Memory-optimized instances are ideal for workloads that require large amounts of RAM.Storage Needs: Evaluate whether you need General Purpose, Provisioned IOPS, or Magnetic storage based on your workload’s requirements.Network Bandwidth: Ensure the instance can handle your network traffic.AWS CloudWatch provides key performance metrics to help monitor your RDS instance:CPU Utilization: Percentage of CPU capacity used.Freeable Memory: How much RAM is available.Read/Write Throughput: Data read from or written to disk per second.Network Throughput: Rate of incoming and outgoing network traffic.DB Connections: Number of client sessions connected to the database.Regularly review these metrics in the CloudWatch Monitoring tab to ensure optimal performance.Techniques for matching instance size to workload demands.We'll consider downsizing if over a four-week period:If: vCPU utilization averages < 40% and memory utilization averages < 50%Then: Evaluate stepping down to the next lower in the same instance familyWe will consider up-sizing if over a four-week period:If: vCPU utilization averages > 80% and memory utilization averages < 50%Then: Evaluate a compute-optimized instance upgradeIf: vCPU utilization averages < 50% AND memory utilization averages > 80%Then: Evaluated a memory-optimized instance upgradeIf: vCPU utilization averages > 80% and memory utilization averages < 80%Then: Upgrade within the same instance familyWhile there's no direct 'memory utilization percentage' metric in AWS RDS, we can estimate it using FreeableMemory UsedMemory = TotalMemory - FreeableMemory MemoryUtilizationPercentage = (UsedMemory / TotalMemory) * 100 Note: The FreeableMemory metric doesn't encapsulate all the ways a database engine like MySQL or PostgreSQL uses memory. You may need to consult OS-level metrics or database-specific toolsAdditionally, consider percentile-based utilization (e.g., p90, p95) for a more accurate understanding of peak usage periods. 
Plan for future growth by adding a buffer (e.g., 20%) to your utilization thresholds.Here’s an example of how different metrics (average, p95, p99.5) and headroom might affect your decision:Amazon offers two primary pricing models for RDS instances:Database Tuning: Before upgrading your instance, ensure your database is tuned for performance. Analyze PostgreSQL logs to find slow-running queries. Use tools like pgBadger to pinpoint resource-intensive queries.Optimize SQL queries. Use EXPLAIN ANALYZE to understand query execution plan, and configure memory settings. Adjust the shared_buffers parameter for the buffer pool. Configure maintenance_work_mem and work_mem for local memory usage. Use parallel restoration during data import/export. Set max_parallel_workers appropriately.Continuous Monitoring: Regularly analyze performance metrics and iteratively adjust parameters for optimal performance.Right-Sizing: Implement a right-sizing schedule and enforce instance tagging to streamline the process.Right sizing is the most effective way to control cloud costs. It involves continually analyzing instance performance and usage needs and patterns—and then turning off idle instances and right sizing instances that are either over provisioned or poorly matched to the workload. Because your resource needs are always changing, right sizing must become an ongoing process to continually achieve cost optimization. You can make right sizing a smooth process by establishing a right-sizing schedule for each team, enforcing tagging for all instances, and taking full advantage of the powerful tools that AWS and others provide to simplify resource monitoring and analysis.https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#Concepts.DBInstanceClass.Summaryhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.htmlhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Tuning.concepts.htmlhttps://aws.amazon.com/blogs/database/optimizing-and-tuning-queries-in-amazon-rds-postgresql-based-on-native-and-external-tools/
2024-11-08T02:49:51
en
train
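The RDS sizing write-up in the row above estimates memory utilization from CloudWatch's FreeableMemory and then applies four-week-average thresholds to decide on resizing. The sketch below restates that arithmetic; the instance size, memory figures, and example inputs are hypothetical, not measured values.

```python
# Illustrative sketch of the sizing heuristics described in the article above.
# All numbers here are hypothetical; real decisions should use CloudWatch data.

def memory_utilization_pct(total_memory_bytes: float, freeable_memory_bytes: float) -> float:
    """UsedMemory = TotalMemory - FreeableMemory, expressed as a percentage."""
    used = total_memory_bytes - freeable_memory_bytes
    return used / total_memory_bytes * 100.0

def sizing_hint(avg_cpu_pct: float, avg_mem_pct: float) -> str:
    """Thresholds from the article, checked in order (earlier rules take precedence)."""
    if avg_cpu_pct < 40 and avg_mem_pct < 50:
        return "consider stepping down within the same instance family"
    if avg_cpu_pct > 80 and avg_mem_pct < 50:
        return "consider a compute-optimized upgrade"
    if avg_cpu_pct < 50 and avg_mem_pct > 80:
        return "consider a memory-optimized upgrade"
    if avg_cpu_pct > 80 and avg_mem_pct < 80:
        return "upgrade within the same instance family"
    return "current size looks adequate"

# Example: a hypothetical 16 GiB instance averaging 3 GiB of freeable memory.
total = 16 * 1024**3
freeable = 3 * 1024**3
mem_pct = memory_utilization_pct(total, freeable)
print(f"memory utilization: {mem_pct:.1f}%")              # ~81.2%
print(sizing_hint(avg_cpu_pct=45, avg_mem_pct=mem_pct))    # memory-optimized upgrade
```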
42,037,976
aard
2024-11-04T02:50:19
There is mounting evidence that starting a business reduces stress
null
https://fortune.com/2024/03/25/mounting-evidence-starting-business-reduces-stress-persistent-myths-stopping-employees-careers-success/
26
20
[ 42038779, 42038724, 42038560, 42038751, 42038445, 42038681, 42040000, 42037999, 42038460, 42038914, 42038676, 42038899, 42038763, 42040571 ]
null
null
null
null
null
null
null
null
null
train
42,038,002
AiswaryaMadhu
2024-11-04T02:56:04
null
null
null
1
null
[ 42038003 ]
null
true
null
null
null
null
null
null
null
train
42,038,004
GeneThomas
2024-11-04T02:56:09
New better alterative to XML, JSON and YAML
null
https://xenondata.org
24
95
[ 42038675, 42038049, 42038508, 42038563, 42038707, 42038395, 42038464, 42038850, 42042536, 42039136, 42038702, 42038416, 42038446, 42038631, 42038965, 42038622, 42038916, 42038518, 42038655, 42038715, 42041941, 42040931, 42038628, 42038392, 42038694, 42038005, 42039857, 42038232 ]
null
null
null
null
null
null
null
null
null
train
42,038,011
Shreesha_Bhat
2024-11-04T02:58:00
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,029
arcanus
2024-11-04T03:01:24
Reevaluating Google's Reinforcement Learning for IC Macro Placement
null
https://cacm.acm.org/research/reevaluating-googles-reinforcement-learning-for-ic-macro-placement/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,053
luu
2024-11-04T03:06:18
BC7 optimal solid-color blocks
null
https://fgiesen.wordpress.com/2024/11/03/bc7-optimal-solid-color-blocks/
5
0
[ 42040440 ]
null
null
no_error
BC7 optimal solid-color blocks
2024-11-04T01:59:28+00:00
null
That’s right, it’s another texture compression blog post! I’ll keep it short. By “solid-color block”, I mean a 4×4 block of pixels that all have the same color. ASTC has a dedicated encoding for these (“void-extent blocks”), BC7 does not. Therefore we have an 8-bit RGBA input color and want to figure out how to best encode that color with the encoding options we have. (The reason I’m writing this up now is because it came up in a private conversation.) In BC1/3, hitting a single color exactly is not always possible, and we have the added complication of the decoder being under-specified. BC7 has neither problem. If we look at our options in table 109 of the Khronos data format spec, we see that mode 5 stores color endpoints with 7 bits of precision per channel and alpha endpoints with a full 8 bits, so it’s a very solid candidate for getting our 8-bit colors through unharmed. The alpha portion is trivial: we can send our alpha value as the first endpoint and just use index 0 for the alpha portion (mode 5 is one of the BC7 modes that code RGB and A separately, similar to BC3), which leaves the colors. Can we use the color interpolation to turn our 7 endpoint bits per channel into an effective 8? We have 2 color index bits per pixel. Indices 0 and 3 are not useful for us, they return the dequantized endpoint value, and those are 7 bits, so that only gives us 128 out of 256 possible options for each color channel. Index 1 is interpolated between the two at a 21/64 fraction; index 2 is symmetric (exactly symmetric in BC7, unlike the situation in BC1), i.e. it’s the same as using index 1 with the two endpoints swapped, and therefore doesn’t give us any new options over just using index 1. That means we only need to consider the case where all index values are 1: if the value we need in a color channel happens to be one of the 128 values we can represent directly, we just set both endpoints to that value, otherwise we want select a pair of endpoints so that the value we actually want is between them, using that 21/64 interpolation factor (subject to the BC7 rounding rules). For BC1, at this point, we would usually build a table where we search for ideal endpoint pairs for every possible target color. For BC7, we can do the same thing, but it turns out we don’t even need a table. Specifically, if we build that table (breaking ties to favor pairs that lie closer together) and look at it for a bit, it becomes quickly clear that we can not only hit each value in [0,255] exactly, but there’s a very simple endpoint pair that works: // Determine e0, e1 such that (BC7 interpolation formula) // target == (43*expand7(e0) + 21*expand7(e1) + 32) >> 6 // where expand7(x) = (x << 1) | (x >> 6) e0 = target >> 1 e1 = ((target < 128) ? (target + 1) : (target - 1)) >> 1 And that’s it. Do this for each of the R, G, B channels of the input color, and set all the color indices to 1. As noted earlier, the A channel is trivial since we just get to send a 8-bit value there to begin with, so we just send it as one of the endpoints and leave all alpha index bits at 0. This is exact and the encoding we’ve used in all our shipping BC7 encoders, regular and with rate-distortion optimization both. Often there are many other possible encodings using the other modes, especially for particularly easy cases like all-white or all-black. In our experiments it didn’t particularly matter which encoding is used, so we always use the above. 
The one thing that does matter is that whatever choice you make should be consistent, since solid-color blocks frequently occur in runs.
2024-11-08T10:56:40
en
train
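The BC7 post in the row above gives a closed-form endpoint pair for hitting any 8-bit channel value exactly in mode 5 with color index 1. The following sketch simply restates that derivation in Python and checks it exhaustively against the interpolation formula quoted in the post; it is a verification of the article's claim, not new encoder code.

```python
# Check the BC7 mode 5 solid-color trick from the article above:
# with both color indices set to 1, decode(e0, e1) should equal the target
# for the endpoint pair e0 = target >> 1, e1 = (target +/- 1) >> 1.

def expand7(x: int) -> int:
    """Dequantize a 7-bit endpoint to 8 bits, as in the article's formula."""
    return (x << 1) | (x >> 6)

def decode_index1(e0: int, e1: int) -> int:
    """BC7 2-bit index value 1: weight 21/64 toward e1, with rounding."""
    return (43 * expand7(e0) + 21 * expand7(e1) + 32) >> 6

def solid_color_endpoints(target: int) -> tuple[int, int]:
    """Endpoint pair from the article for an exact 8-bit target value."""
    e0 = target >> 1
    e1 = ((target + 1) if target < 128 else (target - 1)) >> 1
    return e0, e1

for target in range(256):
    e0, e1 = solid_color_endpoints(target)
    assert decode_index1(e0, e1) == target, (target, e0, e1)

print("all 256 channel values reproduced exactly")
```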
42,038,057
Cplum90
2024-11-04T03:06:44
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,058
luu
2024-11-04T03:06:54
GPL Clarification
null
https://ma.tt/2024/11/gpl-clarification/
1
1
[ 42038068, 42040461 ]
null
null
no_error
GPL Clarification
2024-11-01T17:00:10Z
Matt
A quick followup on my prior conversation with Theo. During that chat, I talked briefly about a trademark infringer that was also distributing nulled plugins. I said “Not illegal. Legal under the GPL. But they weren’t changing the names. They were selling their customers Pro Plugins with the licensing stuff nulled out.”

I want to be clear that my reference to legality and GPL was solely focused on the copying and modifying of the code. That is one of the key freedoms of open source and GPL: the right to copy and modify GPL code. I was not speaking about their right to charge money for nulled plugins. GPLv2 prohibits that because they aren’t providing physical copies or support. This is very different from reputable web hosts, who provide hosting and support for websites and e-commerce stores.
2024-11-07T22:35:54
en
train
42,038,062
null
2024-11-04T03:07:36
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,038,096
HideInNews
2024-11-04T03:13:49
Win That Communication – Presenting an Effective Status Update
null
https://blog.gouravkhanijoe.com/p/reflection-shorts-win-that-communication-fe3
2
0
null
null
null
null
null
null
null
null
null
null
train
42,038,112
retsl
2024-11-04T03:17:32
Exporting iCloud passwords on Windows
null
https://lets.re/blog/icloud-passwords-export/
2
0
[ 42040459 ]
null
null
null
null
null
null
null
null
null
train
42,038,121
nitishagar
2024-11-04T03:19:07
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,129
thunderbong
2024-11-04T03:21:55
Rails is having a moment (again)
null
https://changelog.com/podcast/615
3
0
[ 42040457 ]
null
null
null
null
null
null
null
null
null
train
42,038,137
thunderbong
2024-11-04T03:25:20
Polish radio station abandons use of AI 'presenters' following outcry
null
https://apnews.com/article/poland-media-radio-ai-bba6beb01d523c6727d650c69da14960
2
0
[ 42040455 ]
null
null
null
null
null
null
null
null
null
train
42,038,174
cratermoon
2024-11-04T03:30:52
Tech titans' obsession with turbocharged computer power could be our downfall
null
https://www.theguardian.com/commentisfree/2024/nov/02/better-faster-stronger-tech-titans-obsession-with-turbocharged-computer-power-could-be-our-downfall
3
1
[ 42038265, 42038314 ]
null
null
null
null
null
null
null
null
null
train
42,038,208
teleforce
2024-11-04T03:35:49
Manuevering in tight spaces with BYD Denza Z9 sedan [video]
null
https://www.youtube.com/watch?v=3JSFjlkkNto
1
1
[ 42038275, 42040454 ]
null
null
null
null
null
null
null
null
null
train
42,038,215
Jimmc414
2024-11-04T03:37:59
Harvard Study Links Popular Plastic Ingredient to DNA Damage
null
https://scitechdaily.com/harvard-study-links-popular-plastic-ingredient-to-dna-damage/
3
0
[ 42040452 ]
null
null
null
null
null
null
null
null
null
train
42,038,219
akkartik
2024-11-04T03:38:39
Byoyomi Explained (1997)
null
https://www.britgo.org/bgj/10643.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,252
octopus2023inc
2024-11-04T03:45:33
A declarative language to build LLM applications
null
https://github.com/octopus2023-inc/gensphere
1
1
[ 42038253 ]
null
null
null
null
null
null
null
null
null
train
42,038,260
zdw
2024-11-04T03:48:55
Demystifying Secure NFS
null
https://blogsystem5.substack.com/p/demystifying-secure-nfs
3
0
null
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
Demystifying secure NFS
2024-11-04T00:00:13+00:00
Julio Merino
I recently got a Synology DS923+ for evaluation purposes which led me to setting up NFSv4 with Kerberos. I had done this about a year ago with FreeBSD as the host, and going through this process once again reminded me of how painful it is to secure an NFS connection.You see, Samba is much easier to set up, but because NFS is the native file sharing protocol of Unix systems, I felt compelled to use it instead. However, if you opt for NFSv3 (the “easy default”), you are left with a system that has zero security: traffic travels unencrypted and unsigned, and the server trusts the client when the client asserts who is who. Madness for today’s standards. Yet, when you look around, people say “oh, but NFSv3 is fine if you trust the network!” But seriously, who trusts the network in this day and age?You have to turn to NFSv4 and combine it with Kerberos for a secure file sharing option. And let me tell you: the experience of setting these up and getting things to work is horrible, and the documentation out there is terrible. Most documents are operating-system specific so they only tell you what works when a specific server and a specific client talk to each other. Other documents just assume, and thus omit, various important details of the configuration.So. This article is my recollection of “lab notes” on how to set this whole thing up along with the necessary background to understand NFSv4 and Kerberos. My specific setup involes the Synology DS923+ as the NFSv4 server; Fedora, Debian, and FreeBSD clients; and the supporting KDC on a pfSense (or FreeBSD) box.NFSv3, or usually just NFS, is a protocol from the 1980s—and it shows. In broad terms, NFSv3 exposes the inodes of the underlying file system to the network. This became clear to me when I implemented tmpfs for NetBSD in 2005 and realized that a subset of the APIs I had to support were in service of NFS. This was… a mind-blowing realization. “Why would tmpfs, a memory file system, need NFS-specific glue?”, I thought, and then I learned the bad stuff.Anyhow. Now that you know that NFSv3 exposes inodes, you may understand why sharing a directory over NFSv3 is an all-or-nothing option for the whole file system. Even if you can configure mountd to export a single directory via /etc/exports, malicious clients can craft NFS RPCs that reference inodes outside of the shared directory. Which means… they get free access to the whole file system and explains why system administrators used to put NFS-exported shares in separate partitions, mounted under /export/, in an attempt to isolate access to subsets of data.To make things worse, NFSv3 has no concept of security. A client can simply assert that a request comes from UID 1000 and the server will trust that the client is really operating on behalf of the server’s UID 1000. Which means: a malicious client can pretend to be any user that exists on the server and gain access to any file in the exported file system. Which then explains why the maproot option exists as an attempt to avoid impersonating root… but only root. Crazy talk.All in all, NFSv3 may still be OK if you really trust the network, if you compartmentalize the exported file system, and if you are sharing inconsequential stuff. But can you trust the network? Maybe you can if you are using a P2P link, but otherwise… it is really, really risky and I do not want to do that.NFSv4, despite having the same NFS name as NFSv3, is a completely different protocol. Here are two main differences:NFSv4 operates on the basis of usernames, not UIDs. 
Each request to the server contains a username and the server is responsible for translating that username to a local UID while verifying access permissions.NFSv4 operates at the path level, not the inode level. Each request to the server contains the path of the file to operate on and thus the server can apply access control based on those.Take these two differences together and NFSv4 can implement secure access to file systems. Because the server sees usernames and paths, the server can first verify that a user is who they claim to be. And because the server can authenticate users, it can then authorize accesses at the path level.That said, if all you have is NFSv4, you only get the AUTH_SYS security level, which is… the same as having no security at all. In this mode, the server trusts the client and assumes that user X on the client maps exactly to user X on the server, which is almost the same as NFSv3 did.The real security features of NFSv4 come into play when it’s paired with Kerberos. When Kerberos is in the picture, you get to choose from the following security levels for each network share:krb5: Requires requests to be authenticated by Kerberos, which is good to ensure only trusted users access what they should but offers zero “on-the-wire” security. Traffic flows unsigned and unencrypted, so an attacker could tamper with the data and slurp it before it reaches the client.krb5i: Builds on krb5 to offer integrity checks on all data. Basically, all packets on the network are signed but not encrypted. This prevents spoofing packets but does not secure the data against prying eyes.krb5p: Builds on krb5 to offer encrypted data on the wire. This prevents tampering with the network traffic and also avoids anyone from seeing what’s being transferred.Sounds good? Yeah, but unfortunately, Kerberos and its ecosystem are… complicated.Kerberos is an authentication broker. Its goal is to detach authentication decisions between a client machine and a service running on a second machine, and move that responsibility to a third machine—the Kerberos Domain Controller (KDC). Consequently, the KDC is a trusted entity between the two machines that try to communicate with each other.All the machines that interact with the KDC form a realm (AKA a domain, but not a DNS domain). Each machine needs an /etc/krb5.conf file that describes which realms the machine belongs to and who the KDC for each realm is.The actors that exist within the realm are the principals. The KDC maintains the authoritative list of principals and their authentication keys (passwords). These principals represent:Users, which have names of the form <username>@REALM. There has to be one of these principals for every person (or role) that interacts with the system.Machines, which have names of the form host/<machine>.<domain>@REALM. There has to be one of these principals for every server, and, depending on the service, the clients may need one too as is the case for NFSv4.Services, which have names of the form <service>/<machine>.<domain>@REALM. Some services like NFSv4 require one of these, in which case <service> is nfs, but others like SSH do not.Let’s say Alice wants to log into the Kerberos-protected SSH service running on SshServer from a client called LinuxLaptop, all within the EXAMPLE.ORG Kerberos realm.(Beware that the description below is not 100% accurate. 
My goal is for you to understand the main concepts so that you can operate a Kerberos realm.)First, Alice needs to obtain a Ticket-Granting-Ticket (TGT) if she doesn’t have a valid one yet. This ticket is issued by the KDC after authenticating Alice with her password, and allows Alice to later obtain service-specific tickets without having to provide her password again. For this flow:Steps involved in obtaining a TGT from the KDC for a user on a client machine.Alice issues a login request to the KDC from the client LinuxLaptop by typing kinit (or using other tools such as a PAM module). This request carries Alice’s password.The KDC validates Alice’s authenticity by checking her password against the KDC’s database and issues a TGT. The TGT is encrypted with the KDC’s key and includes an assertion of who Alice is and how long the ticket is valid for.The client LinuxLaptop stores the TGT on disk. Alice can issue klist to see the ticket:linux-laptop$ klist Credentials cache: FILE:/tmp/krb5cc_1001 Principal: [email protected] Issued Expires Principal Oct 26 09:04:57 2024 Oct 26 19:05:01 2024 krbtgt/[email protected] linux-laptop$ █The TGT, however, is not sufficient to access a service. When Alice wants to access the Kerberos-protected SSH service running on the SshServer machine, Alice needs a ticket that’s specific to that service. For this flow:Steps involved in obtaining a service-specific ticket from the KDC for a user on a client machine.Alice sends a request to the Ticket-Granting-Service (KDS) and asks for a ticket to SshServer. This request carries the TGT.The TGS (which lives in the KDC) verifies who the TGT belongs to and verifies that it’s still valid. If so, the TGS generates a ticket for the service. This ticket is encrypted with the service’s secret key and includes details on who Alice is and how long the ticket is valid for.The client LinuxLaptop stores the service ticket on disk. As before, Alice can issue klist to see the ticket:linux-laptop$ klist Credentials cache: FILE:/tmp/krb5cc_1001 Principal: [email protected] Issued Expires Principal Oct 26 09:04:57 2024 Oct 26 19:05:01 2024 krbtgt/[email protected] Oct 26 09:05:11 2024 Oct 26 19:05:01 2024 host/[email protected] linux-laptop$ █At this point, all prerequisite Kerberos flows have taken place. Alice can now initiate the connection to the SSH service:Accessing a remote SSH server using a Kerberos ticket without password authentication.Alice sends the login request from the LinuxLaptop client to the SshServer server and presents the service/host-specific ticket that was granted to her earlier on.The SshServer server decrypts the ticket with its own key, extracts details of who the request is from, and verifies that they are correct. This happens without talking to the KDC and is only possible because SshServer trusts the KDC via a pre-shared key.The SSH service on SshServer decides if Alice has SSH access as requested and, if so, grants such access.Note these very important details:The KDC is only involved in the ticket issuance process. Once the client has a service ticket, all interactions between the client and the server happen without talking to the KDC. This is essential to not make the KDC a bottleneck in the communication.Each host/service and the KDC have unique shared keys that are known by both the host/service and the KDC. These shared keys are created when registering the host or service principals and are copied to the corresponding machines as part of their initial setup. 
These keys live in machine-specific /etc/krb5.keytab files.Kerberos does authentication only, not authorization. The decision to grant Alice access to the SSH service in Think is made by the service itself, not Kerberos, after asserting that Alice is truly Alice.As you can imagine, the KDC must be protected with the utmost security measures. If an attacker can compromise the KDC’s locally-stored database, they will get access to all shared keys so they can impersonate any user against any Kerberos-protected service in the network. That’s why attackers try to breach into an Active Directory (AD) service as soon as they infiltrate a Microsoft network because… AD is a KDC.Enough theory. Let’s get our hands dirty and follow the necessary steps to set up a KDC.The KDC’s needs are really modest. Per the discussion above, the KDC isn’t in the hot data path of any service so the number of requests it receives are limited. Those requests are not particularly complex to serve either: at most, there is some CPU time to process cryptographic material but no I/O involved, so for a small network, any machine will do.In my particular case, I set up the KDC in my little pfSense box as it is guaranteed to be almost-always online. This is probably not the best of ideas security-wise, but… it’s sufficient for my paranoia levels. Note that most of the steps below will work similarly on a FreeBSD box, but if you are attempting that, please go read FreeBSD’s official docs on the topic instead. Those docs are one of the few decent guides on Kerberos out there.The pfSense little box that I run the KDC on.Here are the actors that will appear throughout the rest of this article. I’m using the real names of my setup here because, once again, these are my lab notes:MEROH.NET: The name of the Kerberos realm.jmmv: The user on the client machine wanting access to the NFSv4 share. The UID is irrelevant.router.meroh.net: The pfSense box running the KDC.nas.meroh.net: The Synology DS923+ NAS acting as the NFSv4 server.think.meroh.net: A FreeBSD machine that will act as a Kerberized SSH server for testing purposes and an NFSv4 client. (It’s a ThinkStation, hence its name.)x1nano.meroh.net: A Linux machine that will act as an NFSv4 client. While in reality this is running Fedora, I’ll use this hostname interchangeably for Fedora and Debian.Knowing all actors, we can set up the KDC. The first step is to create the krb5.conf for the KDC which tells the system which realm the machine belongs to. You’ll have to open up SSH access to the machine via the web interface to perform these steps.Here is the minimum content you need:[libdefaults] default_realm = MEROH.NET [realms] MEROH.NET = { kdc = router.meroh.net admin_server = router.meroh.net } [domain_realm] .meroh.net = MEROH.NETWith that, you should be able to start the kdc service, which is responsible for the KDC. 
All documentation you find out there will tell you to also start kadmind, but if you don’t plan to do administer the KDC from another machine (why would you?), then you don’t need this service.pfSense’s configuration is weird because of the read-only nature of its root partition, so to do this, you have to edit the /cf/conf/config.xml file stored in NVRAM and add this line right before the closing </system> tag:<shellcmd>service kdc start</shellcmd>If you were to set this up on a FreeBSD host instead of pfSense, you would modify /etc/rc.conf instead and add:kdc_enable=YESThen, from the root shell on either case:kdc# service kdc start kdc# █It is now a good time to ensure that every machine involved in the realm has a DNS record and that reverse DNS lookups work. Failure to do this will cause problems later on when attempting to mount the NFSv4 shares, and clearing those errors won’t be trivial because of caching at various levels.Once the KDC is running, we must create principals for the hosts, the NFSv4 service, and the users that will be part of the realm. The client host and service principals aren’t always necessary though: SSH doesn’t require them, but NFSv4 does.To create the principals, we need access the KDC’s administrative console. Given that the KDC isn’t configured yet, we can only gain such access by running kadmin -l on the KDC machine directly (the pfSense shell), which bypasses the networked kadmind service that we did not start.Start kadmin -l and initialize the realm:kdc# kadmin -l kadmin> init MEROH.NET ... answer questions with defaults ... kadmin> █Next, create principals for the users that will be part of the realm:kadmin> add jmmv ... answer questions with defaults ... ... but enter the desired user password ... kadmin> █Then, create principals for the hosts (server and clients, but not the KDC) and the NFSv4 service:kadmin> add --random-key host/think.meroh.net ... answer questions with defaults ... kadmin> add --random-key host/x1nano.meroh.net ... answer questions with defaults ... kadmin> add --random-key host/nas.meroh.net ... answer questions with defaults ... kadmin> add --random-key nfs/nas.meroh.net ... answer questions with defaults ... kadmin> █And finally, extract the host and service credentials into the machine-specific keytab files. Note that, for the servers, we extract both the host and any service principals they need, but for the client, we just extract the host principal. We do not export any user principals:kadmin> ext_keytab --keytab=think.keytab host/think.meroh.net kadmin> ext_keytab --keytab=x1nano.keytab host/x1nano.meroh.net kadmin> ext_keytab --keytab=nas.keytab host/nas.meroh.net nfs/nas.meroh.net kadmin> █You now need to copy each extracted keytab file to the corresponding machine and name it /etc/krb5.keytab. (We’ll do this later on the Synology NAS via its web interface.) This file is what contains the shared key between the KDC and the host and is what allows the host to verify the authenticity of KDC tickets without having to contact the KDC. 
Make sure to protect it with chmod 400 /etc/krb5.keytab so that nobody other than root can read it.If scp is unsuitable or hard to use from the KDC to the client machines (as is my case because I restrict SSH access to the KDC to one specific machine), you can use the base64 command to print out a textual representation of the keytab and use the local clipboard to carry it to a shell session on the destination machine.At this point, the realm should be functional but we need to make the clients become part of the realm. We also need to install all necessary tools, like kinit, which aren’t present by default on some systems:On Debian:Run apt install krb5-user nfs-common.Follow the prompts that the krb5-user installer shows to configure the realm and the address of the KDC. This will auto-create /etc/krb5.conf with the right contents so you don’t have to do anything else.On Fedora:Run dnf install krb5-workstation.Edit the system-provided /etc/krb5.conf file to register the realm and its settings. Use the file content shown above for the KDC as the template, or simply replace all placeholders for example.org and EXAMPLE.ORG with the name of your DNS domain and realm.On FreeBSD:Create the /etc/krb5.conf file from scratch in the same way we did for the KDC.All set! But… do you trust that you did the right thing everywhere? We could go straight into NFSv4, but due to the many pitfalls in its setup, I’d suggest you verify your configuration using a simpler service like SSH.To do this, modify the SSH server’s (aka think’s configuration) /etc/ssh/sshd_config file and add GSSAPIAuthentication yes so that it can leverage Kerberos for authentication. Restart the SSH service and give it a go: run kinit on the client (x1nano) and then see how ssh think works without typing a password anymore.But… GSSAPIAuthentication? What’s up with the cryptic name?GSS-API stands for Generic Security Services API and is the interface that programs use to communicate with the Kerberos implementation on the machine. GSS-API is not always enabled by default for a service, and the way you enable it is service-dependent. As you saw above, all we had to do for SSH was modify the sshd_config file… but for other services, you may need to take extra steps on the server and/or the client.And, guess what, NFSv4 is weird on this topic. Not only we need service-specific principals for NFS, but we also need the gssd daemon to be running on the server and the client machines. This is because NFSv4 is typically implemented inside the kernel, but not Kerberos, so the kernel needs a mechanism to “call into” Kerberos. And the kernel needs to do this to map kernel-level UIDs (a Unix kernel doesn’t know anything about usernames) to Kerberos principals and vice-versa—and that’s precisely what gssd offers. So:On the Synology NAS:Do nothing. The system handles gssd by itself.On Linux:You shouldn’t have to do anything if you correctly created the prerequisite /etc/krb5.keytab early enough, but make sure the service is running with systemctl status rpc-gssd.service (and know that this command only shows useful diagnostic logs when run as root).Run systemctl start rpc-gssd.service if the service isn’t running.On FreeBSD:Add gssd_enable=YES to /etc/rc.conf.Run service gssd start.It’s time to deal with NFSv4, so let’s start by configuring the server on the NAS.The Synology Disk Station Manager (DSM) interface—the web UI for the NAS—is… interesting. 
As you might expect, it is a web-based interface but… it pretends to be a desktop environment in the browser, which I find overkill and unnecessary. But it’s rather cool in its own way.Navigating the Synology DSM menus to configure the NFS file service with Kerberos.The first step is to enable the NFS service. Roughly follow these steps, which are illustrated in the picture just above:Open the File Services tab of the Control Panel.In the NFS tab, set NFSv4 as the Minimum NFS protocol.Click on Advanced Settings and, in the panel that opens, enter the Kerberos realm under the NFSv4 domain option.Click on Kerberos Settings and, in the panel that opens, select Import and then upload the keytab file that we generated earlier on for the NAS. This should populate the host and nfs principals in the list.Finish and save all settings.That should be all to enable NFSv4 file serving.Navigating the Synology DSM menus to configure the properties of a single shared folder over NFS.Then, we need to expose shared folders over NFSv4, and we have to do this for every folder we want to share. Assuming you want to share the homes folder as shown in the picture just above:Open the Shared Folder tab of the Control Panel.Select the folder you want to share (in our case, homes), and click Edit.In the NFS Permissions tab, click either Create or Edit to enter the permissions for every host client that should have access to the share.Fill the NFS rule details. In particular, enter the hostname of the client machine, enable the Allow connections from non-privileged ports option, and select the Security level you desire.In my case, I want krb5p and krb5p only so that’s the only option I enable. But your risk profile and performance needs may be different, so experiment and see what works best for you.Now that the server is ready and we have dealt with the GSS-API prerequisites, we can start mounting NFSv4 on the clients.On Linux, things are pretty simple. We can mount the file system with:x1nano# sudo mount nas:/volume1/homes /shared x1nano# █Or persist the entry in /etc/fstab if we want to:nas:/volume1/homes /shared nfs sec=krb5p 0 0And then we should be able to list its contents assuming we’ve got a valid TGT for the current user (run kinit if it doesn’t work):x1nano$ ls -l /shared total 0K drwxrwxrwx 1 nobody users 0 Sep 27 21:11 admin drwxrwxrwx 1 jmmv users 0 Nov 2 20:41 jmmv drwxrwxrwx 1 nobody users 0 Oct 8 16:59 manager x1nano$ █Easy peasy, right? But wait… why do all directories have 777 permissions?This is rather unfortunate and I’m not sure why the Synology does this. Logging onto the DS923+ via SSH, I inspected the shared directory and realized that it has various ACLs in place to control access to the directories, but somehow, the traditional Unix permissions are all 777 indeed. Not great.I used chmod to fix the permissions for all directories to 755 and things seem to be OK, but that doesn’t give me a lot of comfort because I do not know if the DSM will ever undo my changes or if I might have broken something.There might be one more problem though, which I did not encounter on Debian clients but that showed up later in Fedora and FreeBSD clients:x1nano$ ls -l /shared total 0K drwxr-xr-x 1 nobody nogroup 0 Sep 27 21:11 admin drwxr-xr-x 1 nobody nogroup 0 Nov 2 20:41 jmmv drwxr-xr-x 1 nobody nogroup 0 Oct 8 16:59 manager x1nano$ █Note how all entries are owned by nobody:nogroup which is… not correct. 
Yet the right permissions are in effect: accessing the jmmv directory is only possible by the jmmv user as expected. Which means that the user mapping between Kerberos principals and local users is working correctly on the server… but not on the client, where stat isn’t returning the right information.I do not yet know why this issue happens, especially because I see no material differences between my Fedora and Debian configurations.We now have the Linux clients running just fine so it is time to pivot to FreeBSD. If we try a similar “trivial” mount command, we get an error:think# mount -t nfs nas:/volume1/homes /shared mount_nfs: nmount: /shared: Permission denied think# █The error is pretty… unspecific. It took me quite a bit of trial and error to realize that I had to specify -t nfsv4 for it to attempt a NFSv4 connection and not NFSv3 (unlike Linux, whose mount command attempts the highest possible version first and then falls back to older versions):think# mount -t nfs -o nfsv4 nas:/volume1/homes /shared mount_nfs: nmount: /shared, wrong security flavor think# █OK, progress. Now this complains that the security flavor we request is wrong. Maybe we just need to be explicit and also pass sec=krb5p as an argument:think# mount -t nfs -o nfsv4,sec=krb5p nas:/volume1/homes /shared mount_nfs: nmount: /shared, wrong security flavor think# █Wait, what? The mount operation still fails? This was more puzzling and also took a fair bit of research to figure out because logs on the client and on the server were just insufficient to see the problem.The reason for the failure is that we are trying to mount the share as root but… we don’t have a principal for this user so root cannot obtain an NFSv4 service ticket to contact the NAS. So… do we need to create a principal for root? No! We do not need to provide user credentials when mounting an NFSv4 share (unlike what you might be used to with Windows shares).What Kerberized NFSv4 needs during the mount operation is a host ticket: the NFSv4 server checks if the client machine is allowed to access the server and, if so, exposes the file system to it. This is done using the client’s host principal. Once the file system is mounted, however, all operations against the share carry the ticket of the user requesting the operation.Knowing this, we need to “help” FreeBSD and tell it that it must use the host’s principal when mounting the share. Why this isn’t the default, I don’t know, particularly because non-root users are not allowed to mount file systems in the default configuration. Anyhow. The gssname=host option rescues us:think# mount -t nfs -o nfsv4,sec=krb5p,gssname=host nas:/volume1/homes /shared think# █Which finally allows the mount operation to succeed. We should persist all this knowledge into an /etc/fstab entry like this one:nas:/volume1/homes /shared nfs rw,nfsv4,gssname=host,sec=krb5p 0 0Color me skeptical, but everything I described above seems convoluted and fragile, so I did not trust that my setup was sound. Consequently, I wanted to verify that the traffic on the network was actually encrypted.To verify this, I installed Wireshark and ran a traffic capture against the NAS with host nas as the filter. Then, from the client, I created a text file on the shared folder and then read it. Inspecting the captured packets confirmed that the traffic is indeed flowing in encrypted form. 
I could not find the raw file content anywhere in the whole trace (but I could when using anything other than krb5p).Content of an NFS reply packet with Kerberos-based encryption. The packet contents are not plain text.And, as a final test, I tried to mount the network share without krb5p and confirmed that this was not possible:# mount -t nfs -o nfsv4,gssname=host,sec=krb5i nas:/volume1/homes /shared mount_nfs: nmount: /shared, wrong security flavor # █All good! I think…That’s about it. But I still have a bunch of unanswered questions from this setup:Kerberos claims to be an authentication system only, not an authorization system. However, the protocol I described above separates the TGT from the TGS, and this separation makes it sound like Kerberos could also implement authorization policies. Why doesn’t it do these?The fact that Fedora and FreeBSD show nobody for file ownership even when they seems to do the right thing when talking to the NFSv4 server sound like a bug either in the code or in my configuration. Which is it?Having to type kinit after logging into the machine is annoying. I remember that, back at Google when we used Kerberos and NFS—those are long gone days—the right tickets would be granted after logging in or unlocking a workstation. This must have been done with the Kerberos PAM modules… but I haven’t gotten them to do this yet and I’m not sure why.The fact that the shared directories created by the Synology NAS have 777 permissions seems wrong. Why is it doing that? And does anything break if you manually tighten these permissions?And the most important question of all: is this all worth it? I’m tempted to just use password-protected Samba shares and call it a day. I still don't trust that the setup is correct, and I still encounter occasional problems here and there.If you happen to have answers to any of the above or have further thoughts, please drop a note in the comments section. And…Credit and disclaimers: the DS923+ and the 3 drives it contains that I used for throughout this article were provided to me for free by Synology for evaluation purposes in exchange for blogging about the NAS. The content in this article is not endorsed has not been reviewed by them.
2024-11-08T13:43:38
null
train
42,038,279
Gym_Rat_Tips
2024-11-04T03:52:49
Progressive Overload Workout
null
https://gymrattips.com/progressive-overload-workout/
1
0
[ 42040448 ]
null
null
null
null
null
null
null
null
null
train
42,038,282
bookofjoe
2024-11-04T03:53:50
Keeping hikers alive in Death Valley
null
https://www.washingtonpost.com/sports/2024/11/02/hiking-death-valley/
2
1
[ 42038285 ]
null
null
null
null
null
null
null
null
null
train
42,038,287
gmays
2024-11-04T03:54:47
'Not today': The twin miracles of Palm Beach
null
https://sundaylongread.com/2024/10/17/palm-beach-airport-heart-surgeon/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,305
safar_so_far
2024-11-04T03:59:27
Show HN: Web Interactive Particle Sphere
Just a simple visually pleasing thing)

Made with JavaScript + the Three.js WebGL library.

The source code is available here: https://github.com/SafarSoFar/sphere-particle-wrap
https://safarsofar.github.io/sphere-particle-wrap/
3
0
[ 42039333 ]
null
null
null
null
null
null
null
null
null
train
42,038,308
creer
2024-11-04T03:59:47
Soviet Russia's Merciless War for Grain [video]
null
https://www.youtube.com/watch?v=VEowCqd4zug
3
0
null
null
null
null
null
null
null
null
null
null
train
42,038,325
chaosmachine
2024-11-04T04:02:54
Two mountains are better than one
null
https://heredragonsabound.blogspot.com/2016/12/two-mountains-are-better-than-one.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,336
teleforce
2024-11-04T04:05:25
Revisiting the Netflix Prize the Decade After (2019) [pdf]
null
https://cs229.stanford.edu/proj2017/final-reports/5237277.pdf
1
1
[ 42038355 ]
null
null
null
null
null
null
null
null
null
train
42,038,362
vips7L
2024-11-04T04:11:57
The Infamous Gnome Shell Memory Leak (2018)
null
https://feaneron.com/2018/04/20/the-infamous-gnome-shell-memory-leak/
1
0
null
null
null
no_error
The Infamous GNOME Shell Memory Leak – feaneron
null
Georges Stavracas
Greetings GNOMErs, at this point, I think it’s safe to assume that many of you already heard of a memory leak that was plaguing GNOME Shell. Well, as of yesterday, the two GitLab’s MRs that help fixing that issue were merged, and will be available in the next GNOME version. The fixes are being considered for backporting to GNOME 3.28 – after making sure they work as expected and don’t break your computer. First, I’d like to thank the GJS maintainer, Philip C., for all the hand-holding, the reviews, and the incredibly insightful discussions we had. Secondly, to my employer, Endless, for the support they gave me to fix this issue. And last but not least, to the Ubuntu folks, which made a public call for testing with the changes – this will give us confidence that the fix is working, and that backporting it will be a relatively safe and smooth process. As always, great new features and fixes are a courtesy of Endless I’m writing this blog post with three goals in mind: Explain in greater details what is the issue (or at least, what we think it is), the journey to find it, and how it was fixed. Give more exposure to important extra work from other contributors that absolutely deserve more credits. Expose a social issue that showed up during this time, and open a discussion about it. Memory Leak To me, it all started when I saw GitLab’s ticket #64 passing by in the IRC channels. It was challenging enough, I was curious to dig into GNOME Shell/Mutter/GJS internals, perfect match. Of course, when you’re not familiar with a given codebase, the first step to fixing a bug is being able to reproduce it, so I started to play around with GNOME Shell to see if I could find a reliable way to reproduce it. Well, I found a way and wrote a very simple observation: running animations (showing and hiding the Overview, switching applications using Alt+Tab, etc) was reliably increasing memory usage. Then a few people came in, and dropped bits of useful information here and there. But at this point, it was still pointing to a wide range of directions, and definitely there was not actionable task there. This is when OMG! Ubuntu first wrote about it. Carlos Garnacho then came in and wrote a pretty relevant comment with important information. It was specially insightful because he put numbers on the guts of GNOME Shell. His comment was the first real solid step to uncover what was going on. A week passed, and I experimented different toys tools in order to have a better understanding of memory management inside GNOME Shell. This is the kind of tedious work that nobody talks about, but I learned tons of new stuff, so in the end it was worth the hassle. I even wrote about my crazy experiments, and the results of this long week are documented in a long comment in GNOME/gnome-shell#64. I kept experimenting until I reached heapgraph, an interesting tool that allowed generating the following picture: Notice the sudden drops of memory at x=42 and x=71 Well, as stated in the comment, GJS’ garbage collect was indeed collecting memory when triggered. Problem is, it wasn’t being triggered at all. That was the leading clue to one of the problems that was going on. One idea came to my mind, then, and I decided to investigate it further. A Simple Example Consider that we have a few objects in memory, and they have parent/child relationships: The root object is “1” Lets suppose that we decided that we don’t need the root object anymore, so we drop a reference to it, and it is marked for garbage collection. 
The root object is now marked for garbage collection If we destroy the root object, we would need to destroy the other objects related to it, and go destroying everyone that depended, directly or indirectly, on the root object. Traditionally, JavaScript objects track who they own, so the garbage collector can clean up every other dependent object. Here’s the problem: C objects don’t track who owns them; instead, they only track how many owners they have. This is the traditional reference counting mechanism, and it works fine in C land because C is not garbage collected. To the garbage collector, however, the C objects would look like this: The garbage collector has no means to know the relationships between C objects. The garbage collector, then, will go there and destroy the root one. This object will be finalized, and the directly dependent objects will be marked for garbage collection. Only the directly dependent objects are marked for the next garbage collection. But… when will the next GC happen? Who knows! Can be now, can be in 10 minutes, or tomorrow morning! And that was the biggest offender to the memory leak – objects were piling up to be garbage collected, and these objects had child objects that would only be collected after, and so it goes. In other words, this is not really a memory leak – the memory is not being lost. I’d label it as a “misbehavior” instead. The Solution While people might think this was somehow solved, the patches that were merged does not fix that in the way it should be fixed. The “solution” is basically throwing a grenade to kill ants. We now queue a garbage collection every time an object is marked for destruction. So every single time an object becomes red, as in the example, we queue a GC. This is, of course, a very aggressive solution. But it is not all bad. Some early tests shows that this has a small impact on performance – at least, it’s much smaller than what we were expecting. A very convincing explanation is that the higher frequency of GCs is reducing the number of things that are being destroyed each GC. So now we have smaller and more frequent garbage collections. EDIT: Looks like people need more clarification here, since the comments about it are just plain wrong. I’ll be technical, and precise – if you don’t understand, please do some research. The garbage collector is scheduled every time a GObject wrapped in GJS has its toggle reference gone from >1 to 1. And scheduled here means that a GC is injected into the mainloop as an idle callback, that will be executed when there’s nothing else to be executed in the mainloop. The absolute majority of the time, it means that only one GC will happen, even if hundreds of GObjects are disposed. I’ve spotted in the wild it happening twice. This fix is strictly specific to GObjects wrapped by GJS; all other kinds of memory management, such as strings and whatever else, aren’t affected by this fix. Together with this patch, an accompanying solution landed that reduces the number of objects with a toggle reference. This obviously needs more testing on a wider ranger of hardwares, specially on lower ends. But, quite honestly, I’m personally sure that this apparently small performance penalty is compensated by the memory management gains. Other Improvements While the previous section covered my side of this history, there are a few other contributors that did a great job, and I think it would be unfair with them if their work was not properly highlighted. 
Red Hat’s Carlos Garnacho published two merge requests for GJS that, in my testing, substantially improved the smoothness of GNOME Shell. The first one changes the underlying data structure of JS objects, which allows us to stop using an O(n) algorithm and starting an O(1) one. The second one is particularly interesting, and it yields the most noticeable improvements in my computer. Gross, it vastly reduces the number of temporary memory allocations. He also has a number of patches on Mutter and GNOME Shell. Another prominent contributor regarding performance is Canonical’s Daniel van Vugt, which helped early testing the GJS patches, and is doing some deep surgeries in Mutter to make the rendering smoother. And for every great contributor, there is a great reviewer too. It would be extremely unfair if those relevant people haven’t had their work valued by the community, so please, take a moment to appreciate their work. They deserve it. Final Thoughts At this point, hopefully the cautious reader will have at least a superficial knowledge on the problem, the solution, and other relevant work around the performance topic. Which is good – if I managed to communicate that well enough, by the time you finish reading this blog post, you’ll have more knowledge. And more knowledge is good. You can stop here if you want nothing more than technical knowldedge. Still around? Well, I’d like to raise an interesting discussion about how people reacted to the memory leak news, and reflect upon that. By reading the repercussions of the news, I found it quite intriguing to read comments like these: As a regular contributor for the last few years, this kind of comment sound alien to me. These comments sound completely disconnected to the reality of the development process of GNOME. It completely misses the individuality of the people involved. Maybe because we all know each other, but it is just plain impossible to me to paint this whole community as “they”; “GNOME developers”; etc. To a deeper degree, it misses the nuances and the beauty of community-driven development, and each and every individual that make it happen. To some degree, I think this is a symptom of users being completely disconnected to GNOME development itself. It almost feels like there’s a wall between the community and the users of what this community produces. Which is weird. We are an open community, with open development, no barriers for new contributors – and yet, there is such a distance between the community of users and the community of developers/designers/outreachers/etc. Is that a communication problem from our side? How can we bridge this gap? Well, do we want to bridge this gap? Is it healthy to reduce the communication bandwidth in order to increase focus, or would it be better to increase that and deal with the accompanying noise? I would love to hear your opinions, comments and thoughts on this topic.
2024-11-08T21:20:33
en
train
42,038,367
JakeMake
2024-11-04T04:13:05
AI Will Not Solve Alignment for Us
null
https://www.thecompendium.ai/ai-safety
2
0
[ 42038368 ]
null
null
no_error
The Compendium
null
null
We are not on track to solve the hard problems of safetySome time before we build AI that surpasses humanity’s intelligence, we need to figure out how to make AI systems safe. Once AI exceeds humanity’s intelligence, it will be in control, and our safety will depend on aligning AI’s goals with humanity’s best interests.The alignment problem is not a mere technical challenge — it demands that we collectively solve one of the most difficult problems that humanity has ever tackled, requiring progress in fields that resist formalization, Nobel-prize-level breakthroughs, and billions or trillions of dollars of investment.In Defining alignment, we explain what alignment really means and why it’s not just a technical problem but an all-encompassing civilizational one.In Current technical efforts are not on track to solve alignment, we take a critical look at the current level of funding, organizations, and research dedicated to alignment. We argue that these efforts are insufficient, and that many of them do not even acknowledge the cost or complexity of the challenge. In AI will not solve alignment for us, we turn to the question of whether AI can help us solve alignment. We show that any potential benefits are mostly illusions, and argue that trying to use more advanced AI to solve alignment is a dangerous strategy.Because both current and future safety efforts are not on track to solve alignment, we conclude that we are not on track to avert catastrophe from godlike AI. In the field of AI, alignment refers to the ability to “steer AI systems toward a person's or group's intended goals, preferences, and ethical principles.”With simpler systems that are less intelligent than humans, the alignment challenge addresses simpler safety issues, such as making current chatbots refuse to create propaganda or provide instructions for building weapons. For systems that exceed human intelligence, the alignment problem is more complex and depends on guaranteeing that AI systems as powerful as godlike do what is best for humanity. This has a vastly larger scope than just censoring chatbots. We already need to solve alignment today, which demands getting individuals, companies, and governments to act reliably according to some set of values. Alignment challenges vary depending on the scope and entity:To align individuals, we educate them to behave according to a certain set of cultural values. We also enforce compliance with the law through threat of state punishment. Most people are aligned and share values, with rare aberrations such as sociopathic geniuses or domestic terrorists. To align companies with societal values, we rely on regulations, corporate governance, and market incentives. However, companies often find loopholes or engage in unethical practices, such as how Boeing’s profit motive undermined safety, leading to crashes and hundreds of fatalities.To align governments with the will of the people, we rely on constitutions, checks and balances, and democratic elections. Some countries operate under dictatorships or authoritarian regimes. But both of these models can go wrong, leading governments to commit atrocities against their own people or experience democratic backsliding.To align humanitytoward common goals like peace and environmental sustainability, we establish international organizations and agreements, like the United Nations and the Paris Climate Accords. 
On a global scale, enforcement is challenging — there are ongoing wars on multiple continents, and we have met only 17% of the Sustainable Development Goals (SDGs) that all United Nations member states have agreed to.The examples show that alignment relies on processes that reliably incentivize entities to pursue good outcomes, based on some set of values. In each of the instances above, we need to design processes to determine values (e.g. constitutional conventions), reconcile them (e.g. voting), enshrine them (e.g. constitutions, amendments, laws), oversee and enforce them (e.g. institutions and police), and coordinate the constituent parts (e.g. administrations). A system is aligned if there is a mechanistic connection between the original values and reliable outcomes. For example, while UN member states all share the value of protecting the environment and strive toward the Sustainable Development Goals, they lack reliable processes to ensure traction. Regardless of intention, without concrete processes we cannot consider the UN successfully aligned with protecting the environment. While they often fail us, we currently entrust the fate of the world to governments, corporations, and international institutions. AI alignment demands solving all of the same problems our current institutions try to solve, but instead use software to do it. As AI becomes more intelligent, its causal impact will increase, and misalignment will be more consequential. We must find a way to install our deepest values in AI, addressing questions ranging from how to raise children, to what kinds of governance to apply to which problems.Solving the alignment problem is philosophy on a deadline, and requires defining and reconciling our values, enshrining them in robust processes, and entrusting those processes to AIs that may soon be more powerful than we are.Although alignment is not an impossible problem, it is extremely difficult and requires answering novel social and technical questions humanity has not yet solved. By considering some of these questions, we can understand how much it would cost to solve this problem. What do we value and how do we reconcile contradictions in values? We must align godlike AI with “what humanity wants,” but what does this even mean?It is clear that even as individuals, we often don’t know what we want. For example, if we say and think that we want to spend more time with our family, but then end up playing games on our phones, which one do we really want? Individuals often have multiple conflicting desires or unconscious preferences that make it difficult to know what someone really wants. When we zoom out from the individual to groups, up to the whole of humanity, the complexity of “finding what we want” explodes: when different cultures, different religions, different countries disagree about what they want on key questions like state interventionism, immigration, or what is moral, how can we resolve these into a fixed set of values? If there is a scientific answer to this problem, we have made little progress on it.If we cannot find, build, and reconcile values that fit with what we want, we will lose control of the future to AI systems that ardently defend a shadow of what we actually care about.Making progress on understanding and reconciling values requires ground-breaking advances in the fields of psychology, neuroscience, anthropology, political science, and moral philosophy. 
The former fields are necessary for diving into the human psyche resolving uncertainties related to human rationality, emotion, and biases, and the latter two are necessary for finding ways to resolve conflicts between these.How can we predict the consequences of our actions? A positive understanding of “what we want” is insufficient to keep AI safe: we also need to understand the consequences of getting what we want, to avoid unwanted side effects. Yet history demonstrates how often we fail to see consequences of our actions until after they are implemented. The Indian vulture crisis was a massive environmental disaster in which a new medicine given to cows turned out to be toxic for vultures, which died by millions upon eating the carcasses. The collapse in vulture population meant that carcasses were not cleaned, contaminating water sources, providing breeding grounds for feral dogs with rabies, and ultimately leading to a humanitarian disaster costing billions due to a single unknown externality. The same can happen for designing institutions. The Articles of Confederation was the first attempt to create a US government, but they left Congress completely impotent to govern the individual states, so this had to be corrected in the US constitution. Progress on our ability to predict the consequences of our actions requires better science in every technical field, and learning what to do with these predictions requires progress in fields like non-idealized decision making. The last 100 years have seen some progress in scientific thinking and decision theory, and some efforts in rationalism have even attempted to inspire better decision-making in light of the AI problem. But while better decision-making has had clear consequences in fields like investment–quantitative strategies are increasingly outperforming discretionary ones–most people make decisions the same way we did 100 years ago. To confidently move forward on these questions, we need faster science, simulation, and modeling; breakthroughs in fields related to decision-making; and better institutions that demonstrate these approaches work.Process design for alignment: If we can answer the philosophical questions of values alignment, and get better at predicting and avoiding consequential errors, we still need to build processes to ensure that our values are represented in systems and actually enacted in the real world. Often, even the most powerful entities fail to build processes that connect the dots between values and end outcomes. Nearly every country struggles with taxation, particularly of large entities and high net worth individuals. And there are process failures abound in history, such as the largest famine ever caused by inefficient distribution of food within China’s planned economy during the Great Leap Forward. The mechanism design of these entities and their implementation are two separate things. When we zoom in on the theory, our best approaches aren’t great. The field of political philosophy attempts to make progress on statecraft, but ideas like the separation of powers in many modern constitutions are based on 250-year-old theories from Montesquieu. New ideas in voting theory have been proposed, and efforts like blockchain governance try to implement some of these, but these have done little so far to displace our current systems. Slow theoretical progress and little implementation of new systems suggest massive room to improve our current statecraft and decision-making processes. 
On the corporate level, we have the theory of the firm and management theory to tell us about how to run companies, but the start of the art in designing a winning company today looks a lot closer to Y-combinator’s oral theory of knowledge and knowing the right people than science, and even then the failure rate is very high. Neither statecraft nor making a company is a scientific process in which there are formal guarantees, and things often go very wrong. In the context of the alignment problem, this demonstrates large gaps in humanity’s knowledge that expose huge risks if we were to imagine trusting advanced AI systems to run the future. Without better theories and implementation, these systems could make the same mistakes, with larger consequences given their greater intelligence and power. Guaranteeing alignment: Last but not least, even if we can design processes to align AIs and we know what to align them to, we still need to be able to guarantee and check that they will actually do what we want. That is, as in any critical technology, we want guarantees that it won’t create a catastrophe before turning it on.This is already difficult with normal software: making (almost) bug-free systems require the use of formal methods which are both expensive (in time, skill, effort) and in their infancy, especially with regard to the kind of complex properties that we would care about for AIs acting in the world.And this is even harder with AIs built with the current paradigm, due to the fact that they are not built by hand (like normal software), but instead grown through mathematical optimization. This means that even the makers of AI systems have next to no understanding of what they can and cannot do, and no predictive capabilities whatsoever to anticipate what they will do before training them, or even just before using them.But the situation is actually worse than that: whereas most current AI systems are still less smart than humans, alignment actually requires getting guarantees on systems that are significantly smarter than humans. That is, in addition to managing the complexity of software, and the obscurity of neural networks, we need to figure out how to check an entity which can outsmart us at every turn.Even if all of these questions need not be answered at once, we nonetheless need to invent a process by which they are answered. Currently, our human science and morality is inadequate to address the risks posed by advanced AI, and it must improve for us to have a chance.How much would this cost, in terms of funding and human effort?When we look at major research efforts that humans have pursued in the past which led to breakthroughs and Nobel prizes, we can begin to envision what such a “significant research project” constitutes. The Manhattan Project cost $27B to produce the first nuclear weapons, and at its height employed 130,000 people. Over four years, researchers and engineers cracked problem after problem to develop the bomb, with over 31 Nobel-prize winners tied to the project. Another massive research effort, the Human Genome Project (HGP), cost $5B over 13 years and required contributions from thousands of researchers from various countries.If alignment was of the same difficulty level as these problems, we would assume at least a tens-of-billions of dollars effort, featuring thousands of people, with dedicated coordination, and multiple breakthroughs of Nobel-prize magnitude. 
Given the magnitude of the danger ahead, the complexity and uncertainty of these estimates should make us even more careful, and cause us to assume that the costs may be higher still than what is presented here.We conclude that solving alignment is extremely hard, and the cost is clearly very high: at least billions, maybe trillions, and with a time frame of decades of a constant string of nobel-prize winning research.The field of AI Safety is not making meaningful progress on or investment in alignment; current funding and focus are insufficient, and the research approaches being pursued do not attend to the hard problems of aligning AI. And these are optimistic estimates; in reality, only a tiny fraction of this total goes to genuine AI alignment efforts. With few exceptions, the majority of funding is directed at problems associated with AI safety, rather than paying the exorbitant cost of alignment. Nearly all current technical safety approaches are limited in their efficacy and trail their own stated goals:Black-box evaluations and red-teaming aim to test a model's capabilities to evaluate how powerful or dangerous it is. With this strategy, the theory of change is that identifying dangerous behavior could forceAI companies to pause development or governments to coordinate on regulation. Teams working on evaluations include AI Safety Institutes, METR and Evaluations at Anthropic.Black-Box Evaluations can only catch all relevant safety issues insofar as we have either an exhaustive list of all possible failure modes, or a mechanistic model of how concrete capabilities lead to safety risks. We currently have neither, so evaluations boil down to ad-hoc lists of tests that capture some possible risks. These are insufficient even for today’s models, as demonstrated by the fact that current LLMs can notice they are being tested, which the evaluators and researchers did not even anticipate.Interpretability aims to reverse-engineer the concepts and thought processes a model uses to understand what it can do and how it works. This approach presumes that a more complete understanding of the systems could prevent misbehavior, or unlock new ways to control models, such as training them to be fully honest. Teams working on interpretability include Interpretability at Anthropic and Apollo Research.Interpretability’s value depends on its ability to fully understand and reverse engineer AI systems to check if they have capabilities and thoughts that might lead to unsafe actions. Yet current interpretability research is unable to do that even for LLMs a few generations back (GPT2), let alone for the massive and complex models used in practice today (Claude and GPT4/o1). And even with full understanding and reverse engineering of state of the art LLMs, interpretability is blind to any form of extended cognition, such as what the system can do when connected to the environment (notably the internet), given tools, interacting with other systems or instances of itself. A huge part of recent progress in AI comes from moving to agents and scaffolding that leverage exactly this form of extended cognition. Just as solving neuroscience would be insufficient to explain how a company works, even full interpretability of an LLM would be insufficient to explain most research efforts on the AI frontier.Whack-A-Mole Fixes, which use techniques like RLHF, fine-tuning, and prompting to remove undesirable model behavior or a specific failure mode. 
The theory of change is that current safety problems can be solved in a patchwork manner, addressed as they arise, and that we can perhaps learn from this process to correct the behavior of more advanced systems. Teams working on this include Alignment Capabilities at Anthropic and OpenAI’s Safety Team.Whack-A-Mole fixes, from RLHF to finetuning, are about teaching the system to not demonstrate problematic behavior, not about fundamentally fixing that behavior. For example, a model that produces violent text output may be finetuned to be more innocuous, but the underlying base model is just as capable of producing violent content as ever. The problem as to how this behavior arose in the first place is left unaddressed by even the best finetuning. By pushing for models to hide unsafe actions rather than resolving underlying issues, whack-a-mole fixes lead to models that are more and more competent at hiding their issues and failures, rather than models that are genuinely safer.At best, these strategies can identify and incrementally correct problems, address model misbehavior, and use misbehavior as a red flag to motivate policy solutions and regulations. However, even according to their proponents, these strategies do not attempt to align superhuman AI, but merely align the next generation of systems, trusting that aligning the Nth systems will help align the N+1 system. This approach can be thought of as Iterative Alignment, a strategy that rests on the hope that we can build slightly smarter systems, align those, and use them to help align successor systems, repeating the process until we reach superintelligent AI. OpenAI’s Superalignment plan explicitly states this:"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence. To align the first automated alignment researcher, we will need to 1) develop a scalable training method, 2) validate the resulting model, and 3) stress test our entire alignment pipeline."The plans of nearly every other AI company are similarly limited and failure-prone attempts at iterative alignment. Deepmind’s 2024 update on AGI safety approaches discusses the evaluative techniques listed above and names “amplified oversight” as its focus. Anthropic’s Core Views on AI Safety describes how the evaluative techniques can be combined in a “portfolio approach” to keep advanced AI safe, and offers a similar justification for iterative alignment:"Turning language models into aligned AI systems will require significant amounts of high-quality feedback to steer their behaviors. A major concern is that humans won't be able to provide the necessary feedback. It may be that humans won't be able to provide accurate/informed enough feedback to adequately train models to avoid harmful behavior across a wide range of circumstances. It may be that humans can be fooled by the AI system, and won't be able to provide feedback that reflects what they actually want (e.g. 
accidentally providing positive feedback for misleading advice). It may be that the issue is a combination, and humans could provide correct feedback with enough effort, but can't do so at scale. This is the problem of scalable oversight, and it seems likely to be a central issue in training safe, aligned AI systems.Ultimately, we believe the only way to provide the necessary supervision will be to have AI systems partially supervise themselves or assist humans in their own supervision. Somehow, we need to magnify a small amount of high-quality human supervision into a large amount of high-quality AI supervision. This idea is already showing promise through techniques such as RLHF and Constitutional AI, though we see room for much more to make these techniques reliable with human-level systems. We think approaches like these are promising because language models already learn a lot about human values during pretraining. Learning about human values is not unlike learning about other subjects, and we should expect larger models to have a more accurate picture of human values and to find them easier to learn relative to smaller models. The main goal of scalable oversight is to get models to better understand and behave in accordance with human values."Regardless of whether or not one believes that this strategy will work, it is clear that this approach does not adequately address the true complexity of alignment. A meaningful attempt at alignment must integrate moral philosophy to understand values reconciliation, implement formal verification to make guarantees about system properties, consider humanitarian questions of what we value and why, and propose institution design, at minimum. All current efforts fail to do so.Today’s AI safety research is vastly underfunded compared to investments in capabilities work, and the majority of technical approaches intentionally do not address the conceptual complexity of alignment, instead operating in a reactive  empiricist framework that simply identifies misbehavior once it already exists. Humanity’s current AI safety plan is to race toward building superintelligent AI, and delegate the most difficult questions of alignment to AI itself. This is a naive and dangerous approach.But on reflection, this is an incredibly risky approach. Situational Awareness, a document written by ex-OpenAI superalignment researcher Leopold Aschenbrenner which has gotten significant traction even from popular news outlets, puts the argument bluntly. Aschenbrennerargues for a vision of the future in which AI becomes powerful extremely quickly due to scaling up the orders of magnitude (“OOMs”) of AI models. When discussing future safety approaches, he makes a vivid argument for iterative alignment:"Ultimately, we’re going to need to automate alignment research. There’s no way we’ll manage to solve alignment for true superintelligence directly; covering that vast of an intelligence gap seems extremely challenging. Moreover, by the end of the intelligence explosion—after 100 million automated AI researchers have furiously powered through a decade of ML progress—I expect much more alien systems in terms of architecture and algorithms compared to current system (with potentially less benign properties, e.g. on legibility of CoT, generalization properties, or the severity of misalignment induced by training). But we also don’t have to solve this problem just on our own. 
If we manage to align somewhat-superhuman systems enough to trust them, we’ll be in an incredible position: we’ll have millions of automated AI researchers, smarter than the best AI researchers, at our disposal. Leveraging these army of automated researchers properly to solve alignment for even-more superhuman systems will be decisive. Getting automated alignment right during the intelligence explosion will be extraordinarily high-stakes: we’ll be going through many years of AI advances in mere months, with little human-time to make the right decisions, and we’ll start entering territory where alignment failures could be catastrophic."The dangers here are explicit: alien systems, huge advances in mere months, and a tightrope walk through an “intelligence explosion” in which wrong choices could lead to catastrophe. But even before we get to a dramatic vision of the AI future, the iterative alignment strategy has an ordering error – we first need to achieve alignment to safely and effectively leverage AIs. Consider a situation where AI systems go off and “do research on alignment” for a while, simulating tens of years of human research work. The problem then becomes: how do we check that the research is indeed correct, and not wrong, misguided, or even deceptive? We can’t just assume this is the case, because the only way to fully trust an AI system is if we’d already solved alignment, and knew that it was acting in our best interest at the deepest level.Thus we need to have humans validate the research. That is, even automated research runs into a bottleneck of human comprehension and supervision.Proponents of iterated alignment argue that this is not a real issue, because “evaluation is easier than generation.” For example, Aschenbrenner further argues in Situational Awareness that:"We get some of the way [to superalignment] “for free,” because it’s easier for us to evaluate outputs (especially for egregious misbehaviors) than it is to generate them ourselves. For example, it takes me months or years of hard work to write a paper, but only a couple hours to tell if a paper someone has written is any good (though perhaps longer to catch fraud). We’ll have teams of expert humans spend a lot of time evaluating every RLHF example, and they’ll be able to “thumbs down” a lot of misbehavior even if the AI system is somewhat smarter than them. That said, this will only take us so far (GPT-2 or even GPT-3 couldn’t detect nefarious GPT-4 reliably, even though evaluation is easier than generation!)"The argument holds for standard peer-review, where the authors and reviewers are generally on the same intellectual level, with sensibly similar cognitive architecture, education, and knowledge. But this doesn’t not apply to automated alignment research, where to be useful the research needs to be done by AIs that are both smarter and faster than humans.The appropriate analogy is not one researcher reviewing another, but rather a group of preschoolers reviewing the work of a million Einsteins. It might be easier and faster than doing the research itself, but it will still take years and years of effort and verification to check any single breakthrough.Fundamentally, the problem with iterative alignment is that it never pays the cost of alignment. Somewhere along the story, alignment gets implicitly solved – yet no one ever proposes an actual plan for doing so beyond “the (unaligned) AIs will help us”.There are other risks with this approach as well. The more powerful AI we have, the faster things will go. 
As AI systems improve and automate their own learning, AGI will be able to improve faster than our current research, and ASI will be able to improve faster than humanity can do science. The dynamics of intelligence growth means that it is possible for an ASI “about as smart as humanity” to move to “beyond all human scientific frontiers” on the order of weeks or months. While the change is most dramatic with more advanced systems, as soon as we have AGI we enter a world where things begin to move much quicker, forcing us to solve alignment much faster than in a pre-AGI world.'Tensions between world powers will also heat up as AI becomes more powerful, something we are already witnessing in AI weapons used in warfare, global disinformation campaigns, the US-China chip war, and how Europe is struggling with regulation around Big Tech. As we move towards AGI, ASI, and eventually godlike AI, pressure on existing international treaties and diplomacy methods will be pushed beyond their limits. Unlike with nuclear war, there is not necessarily the same promise of mutually assured destruction with AI that could create a (semi)stable equilibrium. Ensuring geopolitical stability is necessary to create supportive conditions to solve the hard problems of alignment, something that gets more challenging if AI is becoming rapidly more powerful. AGI and its successor AIs will also cause massive political, economical, and societal destabilization through automating disinformation and online manipulation, job automation, and other shifts that look like “issues seen today but magnified as systems grow stronger”. This in turn makes coordination around massive research projects like the ones necessary to solve alignment extremely difficult. Thus, iterative alignment fails on multiple accounts. In addition to not addressing the hard parts of alignment, it also encourages entering a time-pressured and precarious world.We have seen that alignment is an incredibly complex technical and social problem, one of the most complex any civilization needs to handle. And while the costs are enormous, no one is even starting to pay them, instead hoping that they will disappear by themselves as AIs become more powerful.In the light of this failure to address the risks of godlike AI from a research angle, it’s necessary to strongly slow down and regulate AI progress, in order to avoid the catastrophe ahead. This comes from strong AI regulations, policies, and institutions.Unfortunately, as we explore next, the landscape is as barren here as it is in the research side.
2024-11-08T10:14:32
en
train
42,038,370
aaron695
2024-11-04T04:14:03
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,371
yla92
2024-11-04T04:14:04
RFC 9669 BPF Instruction Set Architecture (ISA)
null
https://www.rfc-editor.org/rfc/rfc9669.html
4
1
[ 42046876 ]
null
null
null
null
null
null
null
null
null
train
42,038,382
sandwichsphinx
2024-11-04T04:16:10
Connect with One Old Colleague or Boss
null
https://www.wsj.com/lifestyle/careers/reaching-out-old-boss-coworker-1f8c75d3
1
0
[ 42040434 ]
null
null
null
null
null
null
null
null
null
train
42,038,413
webninja
2024-11-04T04:23:09
Oct. JOBS Report Worst in Years: 12k added vs. 120k expected by Dow Jones
null
https://www.cnbc.com/2024/11/01/us-jobs-report-october-2024.html
4
2
[ 42038608, 42039314, 42040432 ]
null
null
null
null
null
null
null
null
null
train
42,038,465
Maks2204
2024-11-04T04:33:38
Whisper WhatsApp AI Bot
null
https://api.whatsapp.com/send/?phone=15092946789&text=start&type=phone_number&app_absent=0&_fb_noscript=1
1
3
[ 42038467, 42041318 ]
null
null
null
null
null
null
null
null
null
train
42,038,469
westurner
2024-11-04T04:34:02
Ask HN: What happened to the No Paid Prioritization net neutrality rule?
null
null
2
10
[ 42038485, 42038506 ]
null
null
null
null
null
null
null
null
null
train
42,038,486
sandwichsphinx
2024-11-04T04:37:13
Six Charged in Scheme to Defraud the Federal Government
null
https://www.justice.gov/opa/pr/six-charged-scheme-defraud-federal-government
5
0
[ 42040419, 42038494 ]
null
null
null
null
null
null
null
null
null
train
42,038,492
opengears
2024-11-04T04:38:15
How is freenginx doing so far?
null
https://old.reddit.com/r/nginx/comments/1gj68v2/how_is_freenginx_doing_so_far/
1
3
[ 42038629 ]
null
null
null
null
null
null
null
null
null
train
42,038,517
PaulHoule
2024-11-04T04:42:56
'Do-it-yourself' data storage on DNA paves way to simple archiving system
null
https://www.nature.com/articles/d41586-024-03312-6
1
1
[ 42038540 ]
null
null
null
null
null
null
null
null
null
train
42,038,532
saikatsg
2024-11-04T04:45:21
Thundering Herd Problem
null
https://en.wikipedia.org/wiki/Thundering_herd_problem
1
0
[ 42040426 ]
null
null
no_error
Thundering herd problem
2004-09-30T17:51:13Z
Contributors to Wikimedia projects
In computer science, the thundering herd problem occurs when a large number of processes or threads waiting for an event are awoken when that event occurs, but only one process is able to handle the event. When the processes wake up, they will each try to handle the event, but only one will win. All processes will compete for resources, possibly freezing the computer, until the herd is calmed down again.[1]

The Linux kernel serializes responses for requests to a single file descriptor, so only one thread or process is woken up.[2] For epoll() in version 4.5 of the Linux kernel, the EPOLLEXCLUSIVE flag was added. Thus several epoll sets (different threads or different processes) may wait on the same resource and only one set will be woken up. For certain workloads this flag can give significant processing time reduction.[3] Similarly in Microsoft Windows, I/O completion ports can mitigate the thundering herd problem, as they can be configured such that only one of the threads waiting on the completion port is woken up when an event occurs.[4]

In systems that rely on a backoff mechanism (e.g. exponential backoff), the clients will retry failed calls by waiting a specific amount of time between consecutive retries. In order to avoid the thundering herd problem, jitter can be purposefully introduced in order to break the synchronization across the clients, thereby avoiding collisions. In this approach, randomness is added to the wait intervals between retries, so that clients are no longer synchronized.

See also: Process management (computing), Lock convoy, Sleeping barber problem, TCP global synchronization, Cache stampede

References:
[1] "Thundering Herd Problem". The Jargon File (version 4.4.7). Retrieved 9 July 2019.
[2] "Does the Thundering Herd Problem exist on Linux anymore". stackoverflow.com. Retrieved 2019-07-09.
[3] Madars, Vitolins (2015-12-05). "EPOLLEXCLUSIVE Linux Kernel patch testing". mvitolin. Retrieved 2020-08-11.
[4] "IO Completion Ports — Matt Godbolt's blog". xania.org. Retrieved 2019-01-23.

External links: A discussion of this observation on Linux; Better Retries with Exponential Backoff and Jitter
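To illustrate the backoff-with-jitter idea described above, here is a small bash sketch; it is not from the article, and the retried call and delay values are placeholders.

# Bash sketch: exponential backoff with full jitter, so clients that fail at the
# same moment spread their retries out instead of stampeding the server again.
some_request() { curl -fsS https://example.com/api >/dev/null; }  # placeholder call
max_attempts=5
base_delay=1
attempt=1
until some_request; do
    [ "$attempt" -ge "$max_attempts" ] && exit 1
    backoff=$(( base_delay * (1 << (attempt - 1)) ))   # 1, 2, 4, 8... seconds
    sleep $(( RANDOM % (backoff + 1) ))                # random wait within the window
    attempt=$(( attempt + 1 ))
done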
2024-11-08T05:13:11
en
train
42,038,543
saikatsg
2024-11-04T04:47:26
The Missing Readme: A Guide for the New Software Engineer Book Review (2021)
null
https://nurkiewicz.com/2021/10/the-missing-readme-book-review.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,574
webbytuts
2024-11-04T04:53:05
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,581
lucianchauvin
2024-11-04T04:53:48
Can humans say the largest prime number before we find the next one?
null
http://saytheprime.com/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,038,582
kelt
2024-11-04T04:53:52
Sega's emojam – pagers with an emoji-twist
null
https://japantoday.com/category/features/lifestyle/overwhelmed-by-modern-social-media-japanese-company-is-bringing-back-pagers-with-an-emoji-twist
2
0
[ 42040424 ]
null
null
no_error
Overwhelmed by modern social media? Japanese company is bringing back pagers with an emoji-twist
null
TOKYO
It’s not an uncommon opinion that modern communication technology, and social media in particular, is a double-edged sword. The ease and speed with which messages can be sent to anyone in the world allows people to form and maintain connections they would have been unable to otherwise, but it can also lower the social barriers that insulate us from people we’d rather not be connected to, subjecting us to harassing messages that are unpleasant or even traumatic. So while you probably won’t find too many people saying it’s time to go all the way back to handwritten letters and landline phone calls as our only non-face-to-face options, a lot of people are likely longing for some sort of happy medium in terms of technology for interpersonal communication, and Sega thinks it has the answer: bring back pagers, but with some fun new twists. Pictured above is the emojam, a new creation from Sega’s Sega Fave division. Like the pagers of yore, emojam doesn’t allow for text entry, and there’s a pretty tight cap on how long messages can be. Instead of sending a series of numbers, though, emojam lets you send a string of emoji. The device comes with over 1,100 pre-loaded emoji, and you can send up to 10 per message. The intent, Sega says, is to encourage users to put extra thought and care into crafting and deciphering messages, helping to strengthen bonds between friends as a result of considering how each other’s perspective and emotions influences their interpretation of the pictures. ▼ In this example image, the top message is “I’ve got a crush on that boy,” apparently someone who’s on the soccer team, and the friend’s excited reaction is “Wow! Really?” The text has been added for demonstration purposes – the actual devices would display only the emoji. Though emojam sends messages through Wi-Fi networks, it’s not a conventionally Internet connectable device. Group chats are limited to five users, and the friend list, required to send and receive messages, tops out at 100 people, big enough for just about anyone’s primary social circle, but small enough to leave out less vetted individuals who exchanging messages with might do more harm than good to your mental health. Along the same lines, registering friends requires physically touching your emojams to each other, eliminating the anonymity of conventional social media that often enable online harassment. Before someone can exchange emojam messages with you, they have to be someone you’ve met in real life, which would hopefully mean more civil and accountable communication than with a total stranger. ▼ There are also emojam accessories like cases and straps, and the emoji library can be expanded with additional sets featuring characters like the Sanrio crew. As you can probably tell from the promotional images, Sega is marketing the emojam towards kids, with many of the limits on what kind of messages can be sent and who they can be sent to put in place to put parents’ minds at ease. For any adults who grew up in an era with a less intensely connected communications culture than we have now, though, there’s likely a nostalgic appeal to the concept too, though. emojam goes on sale December 10 with a suggested retail price of 7,150 yen, and an Amazon Japan preorder page is already up here. Source: PR Times via IT Media, emojam official website, Amazon Japan Insert images: Amazon Japan, emogam official website Read more stories from SoraNews24. 
-- Pager service officially ends in Japan, funeral service for outdated tech held in Akihabara【Pics】 -- Japan’s 10 favorite emoji for Twitter, and how they compare to the rest of the world -- You can send email from payphones in Japan?!? We try the technology trick that shocked the nation © SoraNews24
2024-11-08T01:09:21
en
train
42,038,610
9600modem
2024-11-04T04:59:11
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,619
fortran77
2024-11-04T05:00:37
Security Bulletin: Nvidia GPU Display Driver – October 2024
null
https://nvidia.custhelp.com/app/answers/detail/a_id/5586/~/security-bulletin%3A-nvidia-gpu-display-driver---october-2024
1
1
[ 42038642, 42040429 ]
null
null
null
null
null
null
null
null
null
train
42,038,626
asilia
2024-11-04T05:02:22
Should I ask for a raise and/or bonus from my startup?
Wondering what the norm is. Is it reasonable to expect a raise or bonus from an early-stage startup?

If yes, of what magnitude?

Startup is reasonably well funded, double-digit million series A, 10-15 employees.
null
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,637
antognini
2024-11-04T05:04:01
The Song of Urania: A podcast about the history of astronomy
null
https://songofurania.com/about/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,653
zdw
2024-11-04T05:06:42
A change of heart regarding employee metrics
null
http://rachelbythebay.com/w/2024/11/03/metrics/
626
382
[ 42038912, 42039666, 42039765, 42039142, 42039061, 42040318, 42041279, 42039476, 42040130, 42038955, 42039501, 42038972, 42038842, 42038995, 42039112, 42039179, 42039382, 42039420, 42040503, 42040808, 42039237, 42039223, 42039648, 42042234, 42040540, 42039990, 42046174, 42041494, 42039405, 42042096, 42047239, 42047392, 42039403, 42039680, 42039042, 42039129, 42039232, 42039549, 42039688, 42039085, 42040039, 42039118, 42039332, 42039185, 42039861, 42039194, 42039491, 42043503, 42038860, 42051759, 42043627, 42039747, 42039397, 42041009, 42040597, 42043762, 42040927, 42039533, 42039161, 42041968, 42039590, 42039652, 42042653, 42044274, 42055394, 42039336, 42040038, 42042337, 42039936, 42039109, 42039591, 42041054, 42038959, 42040445 ]
null
null
null
null
null
null
null
null
null
train
42,038,674
lia_kim
2024-11-04T05:11:33
Readme-Decorate
https://github.com/27Lia/readme-decorate

SVG Generator

SVG Generator is a web application that generates custom SVG images based on various parameters input by the user. It allows web designers, developers, marketers, and others to easily create and share the SVG images they need.

Key Features:
- SVG Image Generation: Customize text, font color, background color, font size, and more to generate tailored SVG images.
- Support for Various Styles: Apply different styles such as Rectangle, Stroke, and Gradient.
- Real-Time Preview: View the SVG image in real-time as you input the parameters.
- URL Generation and Sharing: Generate a URL for the created SVG image for easy sharing.

How to Use:
1. Set Height: Enter the height of the SVG image.
2. Enter Text: Input the text to be displayed on the image.
3. Choose Font Color: Select the color for the text.
4. Choose Background Color: Select the background color for the image.
5. Set Font Size: Enter the size of the text.
6. Choose Style: Select the style of the image (Rectangle, Stroke, Gradient).
7. Set Gradient Colors: If the Gradient style is selected, set the two gradient colors.
8. Generate SVG: Click the 'Generate SVG' button to create the SVG image.
9. Generate URL: Click the 'Generate URL' button to create a URL for the generated SVG image.
null
1
0
[ 42040346 ]
null
null
null
null
null
null
null
null
null
train
42,038,680
Quasimarion
2024-11-04T05:13:20
Maxun: Open-Source No-Code Web Data Extraction Platform
null
https://github.com/getmaxun/maxun
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,689
croes
2024-11-04T05:14:56
VibMilk: Nonintrusive Milk Spoilage Detection via Smartphone Vibration
null
https://ieeexplore.ieee.org/document/10422771
2
0
null
null
null
null
null
null
null
null
null
null
train
42,038,697
hitechhub
2024-11-04T05:16:51
null
null
null
1
null
[ 42038698 ]
null
true
null
null
null
null
null
null
null
train
42,038,703
papernotes
2024-11-04T05:17:10
Ask HN: How do you get traction for AI startups not funded by YC (or equivalent)
AI startups are notoriously hard for numerous reasons:

1. Training large models requires a lot of capital and GPUs.
2. Large models (OpenAI, Anthropic, etc.) absorb functionality offered by startups with each new release. For example, ChatGPT can perform numerous operations on CSV and PDF files, leaving hundreds of startups moot.
3. A negative climate has built up whenever people say "AI". This may be due to overmarketing and overpromising on what was possible.
4. No marketing budget for early-stage startups.
5. Biased treatment by large players. For example, Google promotes Reddit; Reddit promotes ChatGPT and Claude whenever someone signs up, and immediately removes other startups' posts whenever they read like promotion.
6. It is hard to get people to use a new AI product when they distrust it or already have hundreds of other solutions offering the same thing.

How do you differentiate in this case, and how do you market? I know BoltAI got decent traction. Are there any other examples of how people are getting traction when building an AI startup?

Suggest any marketing methods that you know worked, either for your startup or for others.
null
3
2
[ 42046286, 42039550 ]
null
null
null
null
null
null
null
null
null
train
42,038,720
lijunhao
2024-11-04T05:19:21
Show HN: Ping visualization in terminal with heatmap and barchart
null
https://www.x-cmd.com/mod/ping/
2
1
[ 42038721 ]
null
null
no_error
x ping | x-cmd mod | Enhanced modules for ping
null
X-CMD
Enhanced modules for ping

Examples:
- Use the default mode to output data for 'ping x-cmd.com'.
- Show the ping results as a heatmap (process the ping results and display them as a heatmap):
    ping x-cmd.com | x ping vis -m
- Using the raw option of the ping command, display the results in a bar graph format:
    x ping -- [option] x-cmd.com | x ping vis -b

Usage:
    x ping [OPTIONS] [FLAGS] [SUB_COMMAND]

Options:
    -w    Time to wait for a response, in seconds
    -c    The number of requests sent

Flags:
    --verbose       Verbose mode output (default mode)
    --heatmap, -m   Output in heatmap mode
    --bar, -b       Output in bar chart mode
    --raw, -r       Output in raw data mode
    --csv           Output in CSV format
    --tsv           Output in TSV format

Sub Commands:
    x ping --run
    x ping vis
    x ping exec     Run the ping command directly

x ping --run
    Usage:

x ping vis
    Usage:
        x ping vis [OPTIONS] [FLAGS]
    Options:
        --input    The type of data input
    Flags:
        --verbose       Verbose mode output (default mode)
        --heatmap, -m   Output in heatmap mode
        --bar, -b       Output in bar chart mode
        --raw, -r       Output in raw data mode
        --csv           Output in CSV format
        --tsv           Output in TSV format

x ping exec
    Run the ping command directly
    Usage:

TIP: In an interactive terminal (zsh, bash, ...), you can use Tab to get completion information. Run CMD SUBCOMMAND --help for more information on a command.
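A brief combined usage sketch based only on the flags documented above; the host name is arbitrary and a real run assumes the x-cmd ping module is installed.

# Feed a live ping stream into the vis subcommand and pick an output mode.
ping x-cmd.com | x ping vis -m       # render the stream as a heatmap
ping x-cmd.com | x ping vis --csv    # same stream, emitted as CSV instead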
2024-11-08T12:59:04
en
train
42,038,739
thunderbong
2024-11-04T05:22:58
Avoid capital letters in Postgres names
null
https://weiyen.net/articles/avoid-capital-letters-in-postgres-names
2
0
[ 42040342 ]
null
null
null
null
null
null
null
null
null
train
42,038,764
udev4096
2024-11-04T05:26:14
32 Vulnerabilities in IBM Security Verify Access
null
https://pierrekim.github.io/blog/2024-11-01-ibm-security-verify-access-32-vulnerabilities.html#auth-bypass-runtime
1
0
[ 42040341 ]
null
null
null
null
null
null
null
null
null
train
42,038,775
Tomte
2024-11-04T05:30:11
Fast and Cool Cars
null
http://www.fastcoolcars.com/
1
0
[ 42040340 ]
null
null
no_error
Fast and Cool Cars, Low Riders, Custom, Classic, Sports Cars
null
null
  Millions of People See How Cool Your Car Is Here Welcome to Fast Cool Cars. This web site has grown to become the most popular car web site on the Internet. This site has been online for 11 years now, people from all over the world visit to see the 62,000+ pictures, and learn interesting new facts. So visit often and learn more about the automotive world, we have many different sections to fit most everyone's specific automotive desires. Many of the sections listed above in the navigation area will help take you to the category that you're looking for. Here you will see some of the fastest coolest cars in the world. I've personally been a car enthusiast for most of my life, and that is why over a decade ago I started this website and continue to add to it daily. So browse through the many sections and share my life-long interest with automobiles. Cool Eco Cars - Pictures, information, check it out. Great Tips on how to get cheap car insurance rates. We Have Free FastCoolCars.com Decals You can have a free "FastCoolCars.com" professional vinyl decal in bright white, or red. Just send me your mailing information and I'll send you one. Contact me to request your free decal in white or red. So Many Cars, Over 62,000+ pictures. Most of the cars posted here are high performance vehicles, some unique, some are cool classic cars. All of them definitely have many hours of work and love put into them, not to mention the money. All the owners are proud car enthusiasts that are serious about their rides,, and that is why they have them here on the #1 automotive enthusiast website on the web. 2014 Chevrolet Corvette Stingray ~ C7 Texting and Driving - Understanding the dangers and laws. Eco Friendly Vehicles - Great information to make the world a better place. Lemon Law - See if your vehicle qualifies as a "Lemon" by checking out your states . Car Shows & Events - Thousands of high quality pictures of fast cool cars at many car shows. Top Ten Fastest Cars in the World - A page with nice pictures, descriptions, and videos of each car.   Only real car enthusiasts drive cars like these. Have There Been any Updates to Fast Cool Cars? This site is constantly being updated, daily in fact, with new cars added to different sections all the time. When I add any new sections  I'll tell you about them below.  Check out the link below for  "What's New in the Automotive world & FastCoolCars.com"   2010 Ford Mustang Lots of pictures and information   C5 Z06 Corvettes Thousands of pictures   08-09 Dodge Challenger SRT8 Hemi, tons of pictures   SSC Ultimate Aero - Worlds Fastest Production Vehicle   2007 - 2008 Dodge Avenger Tuner Stormtrooper   2008-2009 Nissan GT-R Twin-Turbo 473HP - 434TQ   Saleen S7 Twin Turbo Power   C6 Corvette ZHZ Special Edition Hertz Rental   2 SSIC - All Electric 0-60 2.1 Seconds 150 MPH   Tesla's All Electric Vehicle Lots of pictures and info   Exotic Supercars Many pictures   Fast and the Furious 4   What's New With the Automotive World & FastCoolCars.com? Check out the pages in this section to see what is new in the automotive world, and new here on FastCoolCars.com too.  Check it out, 2011 World of Wheels. No matter what type of fast or cool car you may have, windshield cracks and chips are a common issue that plague these low and exotic cars. This is typically due to driving on freeways or simply because supercars and sports cars sit so low to the ground and are more vulnerable to flying debris. 
Nowadays though, you can easily get even the most expensive sports car's windshield replaced at shops like SunTec Auto Glass of Phoenix. Fast Cool Cars is looking for a few female models... We're looking for a few models to have pictures taken with some fast cool cars. We are also looking for possible models to take with us to national car shows and events. For more information, check out our Model Search section. What are some of the popular sections on your site? NEW... Low-Riders, several hundred pictures. Pimp my Ride, lots of cool pimped rides with big chrome rims, and lots of Bling. Fast and the Furious 4 with cool movie trailer, new information and pictures, wild cars. Lots of Used Cars for Sale listings here as well. Check out the all-new high-resolution large pictures and description of the 2014 Chevrolet Corvette, sweet. The Cool Car Parts section offers info on many OEM and aftermarket parts and products available for you and your fast cool car, SUV or truck, like Chrome Rims, Custom Wheels, Wire Rims, and Spinning Wheels. How Can I Add My Car to Fast Cool Cars? Find the section that your vehicle belongs in, then send me some pictures and a description, and I will add it to the site. You can then tell your friends and family "My car is on FastCoolCars.com." I update this site on a daily basis by adding new vehicles and content. (Throughout most of the site, you can hold your cursor over any image for a moment and a description of the image will appear next to the cursor.) This site now has more than 16,500 pages in it, and more than 62,000 pictures. 2/28/2017 Can I Send You My Suggestions? If you have an idea of something that you think would be cool to have here on the site, send me an email. I will definitely consider finding out more about it, and will more than likely place it here for all to enjoy. The number of car-related web sites available to us lately is overwhelming. This site is different, and it's also been here and online for over a decade, not like some of those fly-by-night here-today-gone-tomorrow sites. It is also not one of the many car sites thrown up instantly with some massive automated program. This is the #1 car enthusiast web site on the Internet, and it stands out from the rest. So sit back, surf through the site, and enjoy all the pictures, information and content.
2024-11-08T00:10:53
en
train
42,038,799
boulos
2024-11-04T05:35:13
End of an era for iconic moving walkways at SFO
null
https://www.youtube.com/watch?v=dOAgah-a2U0
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,806
dumbthinker
2024-11-04T05:35:38
Ask HN: Why Postgres chose meson over CMake?
null
null
2
2
[ 42039131 ]
null
null
null
null
null
null
null
null
null
train
42,038,816
MrSmooth97
2024-11-04T05:37:26
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,821
motownphilly
2024-11-04T05:38:52
Internet Archive "Save Page Now" has been re-enabled
null
https://web.archive.org/save/
19
0
[ 42040001 ]
null
null
missing_parsing
Wayback Machine
null
null
The Wayback Machine is an initiative of the Internet Archive, a 501(c)(3) non-profit, building a digital library of Internet sites and other cultural artifacts in digital form. Other projects include Open Library & archive-it.org. Your use of the Wayback Machine is subject to the Internet Archive's Terms of Use.
2024-11-07T13:25:47
null
train
42,038,834
The_News_Crypto
2024-11-04T05:40:12
null
null
null
1
null
[ 42038835 ]
null
true
null
null
null
null
null
null
null
train
42,038,845
asicsp
2024-11-04T05:41:29
Disaggregated Storage – A Brief Introduction
null
https://avi.im/blag/2024/disaggregated-storage/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,870
0x54MUR41
2024-11-04T05:46:40
What Is Kafka? (2022)
null
https://www.ponelat.com/blog/what-is-kafka
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,884
xbmcuser
2024-11-04T05:49:56
China Can't Cut Electric Vehicle Subsidies It Isn't Paying
null
https://www.bloomberg.com/opinion/articles/2024-11-03/china-can-t-cut-electric-vehicle-subsidies-it-isn-t-paying
7
1
[ 42038889, 42040335 ]
null
null
missing_parsing
China Can’t Cut Electric Vehicle Subsidies It Isn’t Paying
2024-11-03T19:00:25.238Z
David Fickling
There’s an old joke about running into an old man on a bus. The man’s tearing pages out of a magazine, scrunching them up, and throwing them out the window.“Why are you doing that?” he is asked. “To keep the elephants away,” the man replies. “There’s no elephants around here, though.” “Exactly,” the man says. “That’s how you can tell it’s working.”
2024-11-08T02:06:10
null
train
42,038,902
SyncfusionBlogs
2024-11-04T05:54:01
null
null
null
1
null
[ 42038903 ]
null
true
null
null
null
null
null
null
null
train
42,038,911
ofrzeta
2024-11-04T05:58:27
USB Insight Hub
null
https://www.crowdsupply.com/aerio-solutions/usb-insight-hub
17
3
[ 42039465, 42040156, 42040132 ]
null
null
null
null
null
null
null
null
null
train
42,038,936
thunderbong
2024-11-04T06:04:02
Hitchens's Razor
null
https://en.wikipedia.org/wiki/Hitchens%27s_razor
4
0
[ 42040326 ]
null
null
null
null
null
null
null
null
null
train
42,038,937
Byaidu
2024-11-04T06:04:13
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,947
domofutu
2024-11-04T06:07:47
Creating a False Memory in the Hippocampus (2013)
null
https://www.science.org/doi/10.1126/science.1239073
1
0
null
null
null
null
null
null
null
null
null
null
train
42,038,953
cryptohuin
2024-11-04T06:08:52
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,961
pabs3
2024-11-04T06:09:52
Reproducible builds made easy: introducing StageX
null
https://quorum.tkhq.xyz/posts/reproducible-builds-made-easy-introducing-stagex/
1
0
null
null
null
no_error
Reproducible builds made easy: introducing StageX
2024-10-31T08:03:21+05:00
null
This post is about Turnkey’s journey with reproducible builds. As mentioned in this other post, we don’t have a choice: our builds must be reproducible to secure TEE deployments and use remote attestations meaningfully. By reproducible, we mean that each time the build runs, given the same source code, it generates the same binary artifact, byte-for-byte, regardless of where or when it runs. Unfortunately reproducible builds aren’t easy out-of-the-box. After a brief reminder on why reproducible builds matter, we’ll survey the landscape of existing options available to us, show our first attempt at reproducible builds which leveraged Debian containers, and explain why and how we’ve arrived at StageX: a new container-based, full-source-bootstrapped, reproducible, multi-party signed Linux distro. It simplifies reproducible builds considerably and supports all Turnkey builds today. Reproducible builds transfer trust from code to binaries When you pull a container image from DockerHub, who built it? How do we know this particular artifact matches the published source code and isn’t some malware pushed by someone who phished the credentials of a legit maintainer? It turns out we have no idea, and that is a problem. A similar problem was highlighted by Ken Thompson in his 1984 paper “Reflections on Trusting Trust”, which describes how a malicious software compilation tool could tamper with any software compiled with that compiler, including other compilers. The problem of having to trust compiled artifacts is everywhere: your operating system downloads packages, your code imports packages, your production servers pull docker images, and so on. When attackers succeed in sneaking bad third-party binaries or code into our systems, we talk about “Supply Chain Attacks”. It would be easy to fill pages of examples of real world supply chain attacks because they happen all the time. Here are a few famous examples: one, two, and most recently: three. To avoid these attacks we need to verify binaries before using them. Unfortunately humans can’t directly verify them: they’re opaque! This is where reproducible builds help: they transfer trust from source code to binaries. Given an opaque binary and human-readable source code, anyone can: Read the code and convince themselves it doesn’t do anything malicious. Now they trust the code! ✅ Obtain a binary from the code with a reproducible build1 Compare this binary with the published binary (usually through a digest comparison) Now they trust the binary! ✅ This wouldn’t be possible without a reproducible build. If the binary was different every time the build ran, only the person or machine who first published the binary would trust it. Without a reproducible build, trust can’t transfer from code to published binary. With a reproducible build, anyone can reproduce the binary and trust the published binary they’re about to download. The first version of reproducible builds at Turnkey used Debian containers as a base and Toolchain to build them in a reproducible way. The main idea behind Toolchain was to abstract away differences between build environments (such as user and group IDs, number of CPUs, timestamp and many others) with custom environment variables and system configuration baked into build processes via Makefile macros. This came with major downsides that slowed down developer productivity: Repositories needed to keep costly snapshots of all dependencies in Git LFS or similar to be able to reproduce the exact build container. 
Otherwise the “latest” packages would shift over time, breaking reproducibility. This created a lot of friction for our team having to regularly archive, hash-lock, and sign hundreds of .deb files for every project. Debian has very old versions of Rust, which we rely on heavily. This very frequently caused frustration when trying to upgrade external crates. The builds themselves relied heavily on Makefile and macros. Most engineers are not familiar with this syntax; as a result debugging builds was really, really hard. After a few months with this setup, we concluded that something had to change. In the rest of this post we introduce StageX, a community effort which builds on classical Stage 0-3 compiler bootstrapping to produce a container-native, minimal, and reproducible toolchain. Creating StageX: why not use X instead? To achieve reliable reproducible builds we took a hard look at the available options around us to avoid building anything from scratch ourselves if we did not have to. This is a list of what we evaluated and why we ultimately rejected those options: Alpine is the most popular distro in container-land and has made great strides in proving a minimal musl-based distro with reasonable security defaults. It is suitable for most use cases, however in the interest of developer productivity and low friction for contributors, packages are only signed by centralized CI builder keys. This single point of failure makes it a non-starter for our own threat model. Chainguard sounds great on paper (container-native!), but on closer inspection they built their framework on top of Alpine which is neither signed nor reproducible and Chainguard image authors do not sign commits or packages with their own keys. They double down on centralized signing with cosign and the SLSA framework to prove their centrally built images were built by a known trusted CI system. This is however only as good as those central signing keys and the people who manage them which we have no way to trust independently. Debian (and derivatives like Ubuntu) is one of most popular options for servers, and also sign most packages. However, these distros are glibc-based with a focus on compatibility and desktop use-cases. As a result they have a huge number of dependencies, partial code freezes for long periods of time between releases, and stale packages as various compatibility goals block updates. Fedora (and RedHat-based distros) sign packages with a global signing key, similar to Chainguard, which is not great. They otherwise suffer from similar one-size-fits-all bloat problems as Debian with a different coat of paint. Their reliance on centralized builds has been used as justification for them to not pursue reproducibility, which makes them a non-starter for security-focused use cases. Arch Linux has very fast updates as a rolling release distro. Package definitions are signed, and often reproducible, but they change from one minute to the next. Reproducible builds require pinning and archiving sets of dependencies that work well together for your own projects. Nix is almost entirely reproducible by design and allows for lean and minimal output artifacts. It is also a big leap forward in having good separation of concerns between privileged immutable and unprivileged mutable spaces, however they do not mandate contributor-level signing, in order to ensure any hobbyist can contribute with low friction. Guix is reproducible by design, borrowing a lot from Nix2. It also does maintainer-level signing like Debian. 
It comes the closest to what we need overall (and this is what Bitcoin settled on!), but lacks the enforcement of multiple signatures for each package contribution. The dependency tree is large because of glibc, which makes retrofitting signature requirements or reproducibility an uphill battle. Summarizing the above in a table:
Distro | OCI support | Signatures | Libc | Reproducible | Bootstrapped
Alpine | Published | 1 Bot | musl | No | No
Chainguard | Native | 1 Bot | musl | No | No
Debian | Published | 1 Human | glibc | Partial (96%) | No
Fedora | Published | 1 Bot | glibc | No | No
Arch | Published | 1+ Human | glibc | Partial (90%) | No
Nix | Exported | 1 Bot | glibc | Partial (95%) | Partially
Guix | Exported | 1+ Human | glibc | Partial (90%) | Yes
StageX | Native | 2+ Humans | musl | Yes (100%) | Yes
This should speak for itself: the current candidates didn't quite meet our bar. We wanted the musl-based container-ideal minimalism of Alpine, the obsessive reproducibility and full-source supply chain goals of Guix, and a step beyond the single-sig signed packages of Debian or Arch.
How StageX works
StageX distributes packages as OCI containers. This allows hosting them just like any other images, on DockerHub, and allows for hash-locked pulls out of the gate. OCI is the only well-documented packaging standard with multiple competing toolchain implementations and multiple-signature support. Because StageX packages are OCI images, using StageX's reproducible Rust is a simple FROM away:
FROM stagex/rust@sha256:b7c834268a81bfcc473246995c55b47fe18414cc553e3293b6294fde4e579163
This forces a download of an exact image, pinned to a specific digest (b7c83426…). You can see existing signatures for this image at stagex:signatures/stagex/rust@sha256=b7c83426…, or reproduce it yourself from source with make rust. As a result you can trust that the Rust image you're pulling comes from this Containerfile and contains nothing malicious, even if you pull it from an untrusted source. If the downloaded image is corrupted, its sha256 digest won't match the pinned digest, and the build will error out.
StageX packages are all produced by a single Containerfile with multiple layers:
base: sets environment variables, defines source code locations, and pins digests.
fetch: downloads source code in a hash-locked way over the network.
build: builds sources into artifacts, potentially bringing in dependencies (other StageX packages!) to do so. This is done with no network access.
install: places the binaries in the right location within the /rootfs directory.
package: copies /rootfs to a final container. This is what StageX users import.
A good example to look at is the bash Containerfile: file locations and hashes are hardcoded in base, source code is downloaded in fetch (with --checksum), build untars the source code, calls ./configure and make, install calls install, and package exports the contents of /rootfs. If you've ever installed something from source on a Unix-based OS before, this should feel very familiar!
Creating Containerfiles for applications using StageX packages is no different than packaging applications with standard Docker images. The StageX README contains an example Containerfile to compile and run a basic Rust "hello, world!", pasted here for convenience:
FROM scratch AS build
COPY --from=stagex/rust@sha256:b7c834268a81bfcc473246995c55b47fe18414cc553e3293b6294fde4e579163 . /
COPY --from=stagex/gcc:13.1.0@sha256:439bf36289ef036a934129d69dd6b4c196427e4f8e28bc1a3de5b9aab6e062f0 . /
COPY --from=stagex/binutils:2.43.1@sha256:30a1bd110273894fe91c3a4a2103894f53eaac43cf12a035008a6982cb0e6908 . /
COPY --from=stagex/libunwind:1.7.2@sha256:97ee6068a8e8c9f1c74409f80681069c8051abb31f9559dedf0d0d562d3bfc82 . /
COPY --from=stagex/musl:1.2.4@sha256:ad351b875f26294562d21740a3ee51c23609f15e6f9f0310e0994179c4231e1d . /
COPY --from=stagex/llvm:18.1.8@sha256:30517a41af648305afe6398af5b8c527d25545037df9d977018c657ba1b1708f . /
COPY --from=stagex/zlib:1.3.1@sha256:96b4100550760026065dac57148d99e20a03d17e5ee20d6b32cbacd61125dbb6 . /
COPY <<-EOF ./hello.rs
fn main(){ println!("Hello World!"); }
EOF
RUN ["rustc","-C","target-feature=+crt-static","-o","hello","hello.rs"]
FROM scratch
COPY --from=build /hello .
ENTRYPOINT ["/hello"]
The structure of this file follows the "multi-layer" philosophy: the build layer is responsible for compiling our source code (inlined with <<-EOF) into a "hello" binary; we then use a fresh FROM scratch layer to copy and expose this new binary as our default entry point. Note the difference between build and the final layer: build requires llvm, binutils, zlib and many other dependencies to build our program, whereas our final container only contains the "hello" binary. This ensures the final image is as slim as possible. To build it yourself, save this snippet as "Containerfile" somewhere, and run docker build . -t rust-hello -f Containerfile. This builds an image with the tag rust-hello. Once this is done, execute your hello world program by running the new image:
$ docker run -t rust-hello
Hello World!
Voilà! Turnkey applications are all built this way. As a result anyone can reproduce builds independently, and use remote attestations meaningfully when we deploy critical software into secure enclaves.
The invisible hard problems StageX resolved
The fact that StageX works is a miracle that could not have been possible without relying on other people's work. Here we highlight a few of the big challenges.
Bootstrapping GCC
Do you know how to make yogurt? The first step is to add yogurt to milk! — Bootstrappable Builds Project
This was by far the thorniest issue to resolve. Many individuals and projects have contributed to solving it over the years. Carl Dong gave a talk about bootstrapping which rallied people to the effort started by the Bitcoin community, Guix recently proved it could bootstrap a modern Linux distribution for which the Stage0 team and the Gnu Mes provided key ingredients, and the bootstrappable builds and live-bootstrap projects glued it all together. StageX follows in the footsteps of Guix and uses the same full-source bootstrap process, starting from hex0, a 190-byte seed of well-understood assembly code. This seed is used to compile kaem, "the world's worst build tool", in Stage 0. Stages 1, 2, and 3 build on this just enough to build gcc, which is used to build many other compilers and tools.
GCC to Golang
It is worth acknowledging the excellent work done by Google. They have documented this path well and provide all the tooling to do it. You only need 3 versions of golang to get all the way back to GCC. See stagex:packages/go.
Bootstrapping Rust
A given version of Rust can only ever be built with the immediately previous version. If you go down this chicken-and-egg problem far enough, you realize that in most distros the chicken comes first: most include a non-reproducible "seed" Rust binary presumably compiled by some member of the Rust team, use that to build the next version, and carry on from there.
Even some of the distros that say their Rust builds are reproducible have a pretty major asterisk. Thankfully John Hodges created mrustc, which implements a minimal semi-modern Rust 1.54 compiler in C++. It is missing a lot of critical features but it does support enough features to compile the official Rust 1.54 sources, which can compile Rust 1.55 and so on. This is the path Guix and Nix both went down, and StageX is following their lead, except using musl. A quick patch did the trick to make mrustc work with musl. See this in action for yourself at stagex:packages/rust. Reproducible NodeJS (!!) NodeJS was never designed with reproducible builds in mind. Through extensive discussion with the maintainers and a lot of effort, NodeJS is now packaged in StageX: packages/nodejs. This is (to our knowledge) an industry first. StageX is only possible because a few dozen people around the world have collectively decided to address the massive supply chain risks that threaten everything we do on the internet. While Turnkey, Mysten and Distrust provided the funding that brought StageX to life, it has only been possible to hit this level of quality by being open-source and receiving feedback from external entities and individuals that share similar requirements to ours. For this reason all contributing parties agreed StageX should be a standalone project hosted by the open-source community. Anyone is free to add any packages useful to them that meet or exceed the current security standards in place today. You can find the StageX repo at https://codeberg.org/stagex/stagex. The repository is hosted by Codeberg, a non-profit deployment of Forgejo, which is itself open source and has correct code signing enforcement, which Github currently lacks. Repo ownership is currently shared by contributing Turnkey engineers and trusted members of the open-source community. Our Matrix room is #stagex:matrix.org and the team is actively looking for constructive feedback, improvements in various areas, and package maintainers. If you have access to beefy desktops or servers, consider building and co-signing all new packages to prove no one is tampering with them! Acknowledgements This blog post started as a document authored by Lance Vick, who founded StageX, got to the initial MVP and helped build the community that now runs it day-to-day. Plain and simple: StageX is Lance’s baby. Couldn’t have happened without him. StageX stands on the shoulders of many others, among which: Carl Dong and Bitcoin, the Stage0 team, the Gnu Mes team, the bootstrappable builds and live-bootstrap projects, the Guix team, the Docker and OCI teams, and all the many maintainers and contributors that are constantly maintaining, reproducing, and improving StageX so it can build everything and anything reproducibly. A big THANK YOU to Lance Vick, Michael Avrukin, and Andrew Min for reviewing drafts of this blog post and providing great comments and suggestions along the way!
2024-11-08T06:17:17
en
train
42,038,966
domofutu
2024-11-04T06:11:42
Patterns of Brain Maturation in Autism and Their Molecular Associations
null
https://jamanetwork.com/journals/jamapsychiatry/fullarticle/2825153
1
0
null
null
null
body_too_long
null
null
null
null
2024-11-08T01:34:02
null
train
42,038,969
sunkcostisalie
2024-11-04T06:12:05
Billionaires emit more CO₂ in 90 minutes than most people do in a lifetime
null
https://www.oxfam.org/en/press-releases/billionaires-emit-more-carbon-pollution-90-minutes-average-person-does-lifetime
48
26
[ 42039305, 42039364, 42042039, 42039294, 42039616, 42039475, 42039662 ]
null
null
null
null
null
null
null
null
null
train
42,038,970
lapnect
2024-11-04T06:12:22
Deep Threads Diving into the Core of Concurrent Programming with C
null
https://mohitmishra786.github.io/chessman/2024/09/24/Deep-Threads-Diving-into-the-Core-of-Concurrent-Programming-with-C.html
4
1
[ 42042128, 42040325 ]
null
null
null
null
null
null
null
null
null
train
42,038,977
typkunbo
2024-11-04T06:14:21
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,038,982
AbhilashK26
2024-11-04T06:15:33
Host a FastAPI Application Without a Server
null
https://pinggy.io/blog/host_a_fastapi_app_without_a_server/
10
1
[ 42039492, 42038983, 42040319 ]
null
null
null
null
null
null
null
null
null
train
42,038,985
logicalxor
2024-11-04T06:15:57
Watch Japan launch military communications satellite
null
https://www.space.com/space-exploration/launches-spacecraft/japan-launching-military-communications-satellite-early-nov-4-on-4th-flight-of-h3-rocket
1
0
null
null
null
null
null
null
null
null
null
null
train
42,039,000
monsoonw
2024-11-04T06:19:52
null
null
null
2
null
null
null
true
null
null
null
null
null
null
null
train
42,039,007
ckrapu
2024-11-04T06:20:56
Markov Eclipse
null
https://civilization.fandom.com/wiki/Markov_Eclipse_(CivBE)
1
0
[ 42040317 ]
null
null
null
null
null
null
null
null
null
train
42,039,012
null
2024-11-04T06:21:44
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,039,018
gnabgib
2024-11-04T06:22:47
Curbing the excessive emissions of an elite few can create a sustainable planet
null
https://policy-practice.oxfam.org/resources/carbon-inequality-kills-why-curbing-the-excessive-emissions-of-an-elite-few-can-621656/
3
1
[ 42039134 ]
null
null
null
null
null
null
null
null
null
train
42,039,044
lapnect
2024-11-04T06:27:58
Understanding Multimodal LLMs
null
https://magazine.sebastianraschka.com/p/understanding-multimodal-llms
2
0
null
null
null
null
null
null
null
null
null
null
train
42,039,047
williswee
2024-11-04T06:28:28
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,039,082
null
2024-11-04T06:35:39
null
null
null
null
null
[ 42039083 ]
[ "true" ]
null
null
null
null
null
null
null
null
train
42,039,099
dsfdsg57
2024-11-04T06:38:57
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train