Dataset schema (column, type, and observed value/length ranges):

- id: int64, values 2 to 42.1M
- by: large_string, lengths 2 to 15
- time: timestamp[us]
- title: large_string, lengths 0 to 198
- text: large_string, lengths 0 to 27.4k
- url: large_string, lengths 0 to 6.6k
- score: int64, values -1 to 6.02k
- descendants: int64, values -1 to 7.29k
- kids: large list
- deleted: large list
- dead: bool, 1 class
- scraping_error: large_string, 25 distinct values
- scraped_title: large_string, lengths 1 to 59.3k
- scraped_published_at: large_string, lengths 4 to 66
- scraped_byline: large_string, lengths 1 to 757
- scraped_body: large_string, lengths 1 to 50k
- scraped_at: timestamp[us]
- scraped_language: large_string, 58 distinct values
- split: large_string, 1 value
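To make the schema concrete, here is a hedged sketch of how a consumer of this dump might filter rows. The column names follow the schema above, but everything else (the inline sample values, the idea of keeping only cleanly scraped, non-dead rows) is illustrative, not taken from the dataset itself:

```python
# Hypothetical sketch: filter rows on `scraping_error` and `dead`.
# Column names come from the schema above; the sample values below are
# illustrative stand-ins, not actual rows from the dump.
import pandas as pd

df = pd.DataFrame(
    {
        "id": [42022938, 42022997],
        "dead": [None, True],
        "scraping_error": ["no_error", None],
        "scraped_language": ["en", None],
    }
)

# Keep rows that scraped cleanly and are not flagged dead.
ok = df[(df["scraping_error"] == "no_error") & (df["dead"].isna())]
print(ok["id"].tolist())  # [42022938]
```

The same predicate would apply unchanged to the full table, since `scraping_error` and `dead` are ordinary nullable columns in the schema.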
id: 42,022,937
by: null
time: 2024-11-02T00:16:47
title: null
text: null
url: null
score: null
descendants: null
kids: null
deleted: [ "true" ]
dead: true
scraping_error: null
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: null
scraped_language: null
split: train
id: 42,022,938
by: wglb
time: 2024-11-02T00:16:55
title: Icy moon of Uranus may have once hid watery secret
text: null
url: https://www.space.com/uranus-moon-miranda-subsurface-ocean-voyager-2
score: 2
descendants: 0
kids: null
deleted: null
dead: null
scraping_error: no_error
scraped_title: Icy moon of Uranus may have once hid watery secret, Voyager 2 archives reveal
scraped_published_at: 2024-11-01T10:00:34+00:00
scraped_byline: Conor Feehly
scraped_body:
[Image: Uranus’ icy moon Miranda, captured by NASA’s Voyager 2 spacecraft on Jan. 24, 1986. (Image credit: NASA/JPL-Caltech)] Over the last few decades, planetary scientists have been steadily adding to the list of moons in our solar system that may harbor interior oceans, either currently or at some point in their past. For the most part, these moons (such as Europa or Enceladus) have been gravitationally bound to the gas giants Jupiter or Saturn. Recently, though, planetary scientists have been turning their attention further afield, towards the ice giant Uranus, the coldest planet in the solar system. And now, new research based on images taken by the Voyager 2 spacecraft has suggested that Miranda, a small Uranian icy moon, may have once possessed a deep liquid water ocean beneath its surface. What's more, remnants of that ocean may still exist on Miranda today. When the Voyager 2 spacecraft cruised past Miranda in 1986, it captured images of its southern hemisphere. The resulting pictures revealed a smattering of different geological features on its surface, including grooved terrain, rough scarps, and cratered areas. Researchers such as Tom Nordheim, a planetary scientist at Johns Hopkins Applied Physics Laboratory (APL), wanted to explain Miranda's bizarre geology by reverse engineering the surface features, working out what type of internal structure could best explain how the moon came to look like it does today. The team mapped the moon's various surface features, such as the cracks and ridges seen by Voyager 2, before developing a computer model to test an array of possible compositions of the moon's interior that could best explain the stress patterns seen on the moon's surface. The computer model found that the internal composition producing the closest match between predicted stress patterns and the moon's actual surface geology was a deep ocean beneath Miranda's surface that existed between 100 and 500 million years ago.
According to their models, the ocean may have once measured 62 miles (100 kilometers) deep, buried beneath 19 miles (30 kilometers) of surface ice. [Image: Miranda reveals a complex geologic history in this view, acquired by Voyager 2 on Jan. 24, 1986, around its close approach to the Uranian moon. (Image credit: JPL)] Miranda has a radius of just 146 miles (235 kilometers), which means the ocean would have taken up almost half the moon's entire body. It also makes finding such an ocean all the more surprising. "To find evidence of an ocean inside a small object like Miranda is incredibly surprising," Nordheim said in a statement about the new research. "It helps build on the story that some of these moons at Uranus may be really interesting — that there may be several ocean worlds around one of the most distant planets in our solar system, which is both exciting and bizarre," he continued. Researchers speculate that tidal forces between Miranda and other nearby moons were crucial to keeping Miranda's interior warm enough to sustain a liquid ocean. The gravitational stretching and compressing of Miranda, amplified by orbital resonances with other moons in its past, could have generated enough frictional energy to keep it from freezing over. Similarly, Jupiter's moons Io and Europa have a 2:1 resonance (for every two orbits Io makes around Jupiter, Europa makes one), which generates enough tidal force to sustain an ocean beneath Europa's surface. Miranda eventually fell out of sync with one of the other Uranian moons, nullifying the mechanism keeping its interior warm. Researchers don't think Miranda has fully frozen over yet, though, as a fully frozen interior would have expanded, causing telltale cracks on its surface. "We won't know for sure that it even has an ocean until we go back and collect more data," Nordheim says. "We're squeezing the last bit of science we can from Voyager 2's images.
For now, we're excited by the possibilities and eager to return to study Uranus and its potential ocean moons in depth." This new research was published in The Planetary Science Journal on Oct. 15. Conor Feehly is a New Zealand-based science writer. He has earned a master's in science communication from the University of Otago, Dunedin. His writing has appeared in Cosmos Magazine, Discover Magazine and ScienceAlert. His writing largely covers topics relating to neuroscience and psychology, although he also enjoys writing about a number of scientific subjects ranging from astrophysics to archaeology.
scraped_at: 2024-11-08T17:25:18
scraped_language: en
split: train
id: 42,022,945
by: tkgally
time: 2024-11-02T00:18:14
title: I Tried Real Augmented Reality Glasses [video]
text: null
url: https://www.youtube.com/watch?v=G0eKzU_fV00
score: 1
descendants: 0
kids: null
deleted: null
dead: null
scraping_error: no_article
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: 2024-11-08T16:10:34
scraped_language: null
split: train
id: 42,022,955
by: crazydoggers
time: 2024-11-02T00:19:55
title: Hacked TP-Link routers used in years-long account takeover attacks
text: null
url: https://arstechnica.com/information-technology/2024/11/microsoft-warns-of-8000-strong-botnet-used-in-password-spraying-attacks/
score: 14
descendants: 0
kids: null
deleted: null
dead: null
scraping_error: no_error
scraped_title: Thousands of hacked TP-Link routers used in yearslong account takeover attacks
scraped_published_at: 2024-11-02T00:13:20+00:00
scraped_byline: Dan Goodin
scraped_body:
Hackers working on behalf of the Chinese government are using a botnet of thousands of routers, cameras, and other Internet-connected devices to perform highly evasive password spray attacks against users of Microsoft’s Azure cloud service, the company warned Thursday. The malicious network, made up almost entirely of TP-Link routers, was first documented in October 2023 by a researcher who named it Botnet-7777. The geographically dispersed collection of more than 16,000 compromised devices at its peak got its name because it exposes its malware on port 7777.

Account compromise at scale

In July and again in August of this year, security researchers from Sekoia.io and Team Cymru reported the botnet was still operational. All three reports said that Botnet-7777 was being used to skillfully perform password spraying, a form of attack that sends large numbers of login attempts from many different IP addresses. Because each individual device makes only a limited number of login attempts, the carefully coordinated account-takeover campaign is hard for the targeted service to detect. On Thursday, Microsoft reported that CovertNetwork-1658—the name Microsoft uses to track the botnet—is being used by multiple Chinese threat actors in an attempt to compromise targeted Azure accounts. The company said the attacks are “highly evasive” because the botnet—now estimated at about 8,000 strong on average—takes pains to conceal the malicious activity. “Any threat actor using the CovertNetwork-1658 infrastructure could conduct password spraying campaigns at a larger scale and greatly increase the likelihood of successful credential compromise and initial access to multiple organizations in a short amount of time,” Microsoft officials wrote. “This scale, combined with quick operational turnover of compromised credentials between CovertNetwork-1658 and Chinese threat actors, allows for the potential of account compromises across multiple sectors and geographic regions.”
Some of the characteristics that make detection difficult are:
- The use of compromised SOHO IP addresses.
- The use of a rotating set of IP addresses at any given time; the threat actors had thousands of available IP addresses at their disposal. The average uptime for a CovertNetwork-1658 node is approximately 90 days.
- The low-volume password spray process; for example, monitoring for multiple failed sign-in attempts from one IP address or to one account will not detect this activity.
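The evasion described above, many source IPs with only a handful of attempts each, can be illustrated with a small sketch. This is purely hypothetical log data and hypothetical thresholds, not Microsoft's actual detection logic:

```python
# Illustrative only: hypothetical login events and thresholds. Each
# source IP fails only once or twice, so a naive per-IP threshold never
# fires; grouping failures by *account* exposes the spray instead.
from collections import defaultdict

# (source_ip, account, succeeded)
attempts = [
    ("203.0.113.1", "alice", False),
    ("203.0.113.2", "alice", False),
    ("203.0.113.3", "alice", False),
    ("203.0.113.1", "bob", False),
]

failures_per_ip = defaultdict(int)
ips_per_account = defaultdict(set)
for ip, account, ok in attempts:
    if not ok:
        failures_per_ip[ip] += 1
        ips_per_account[account].add(ip)

# Naive per-IP rule: flag an IP after 3+ failures. Never triggers here.
flagged_ips = [ip for ip, n in failures_per_ip.items() if n >= 3]

# Account-centric rule: failures for one account from 3+ distinct IPs.
flagged_accounts = [a for a, ips in ips_per_account.items() if len(ips) >= 3]

print(flagged_ips, flagged_accounts)  # [] ['alice']
```

With rotating nodes and roughly 90-day uptimes, the per-IP counters reset long before any single address crosses a threshold, which is why the account-centric view matters.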
scraped_at: 2024-11-08T09:08:22
scraped_language: en
split: train
id: 42,022,966
by: aloukissas
time: 2024-11-02T00:21:45
title: Shopify: Why We Built Observe [video]
text: null
url: https://www.youtube.com/watch?v=ApOV8ELhIG4&list=PLvQF73bM4-5X9mt0lweCXL_v8xdvrLEvB&index=1
score: 1
descendants: 0
kids: null
deleted: null
dead: null
scraping_error: no_article
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: 2024-11-08T18:22:37
scraped_language: null
split: train
id: 42,022,993
by: tomrod
time: 2024-11-02T00:27:34
title: Z-Library
text: null
url: https://en.wikipedia.org/wiki/Z-Library
score: 10
descendants: 0
kids: [ 42025964 ]
deleted: null
dead: null
scraping_error: null
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: null
scraped_language: null
split: train
id: 42,022,997
by: Mobil1
time: 2024-11-02T00:28:24
title: null
text: null
url: null
score: 1
descendants: null
kids: null
deleted: null
dead: true
scraping_error: null
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: null
scraped_language: null
split: train
id: 42,023,007
by: null
time: 2024-11-02T00:30:22
title: null
text: null
url: null
score: null
descendants: null
kids: null
deleted: [ "true" ]
dead: true
scraping_error: null
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: null
scraped_language: null
split: train
id: 42,023,021
by: sebaschapela
time: 2024-11-02T00:32:59
title: null
text: null
url: null
score: 1
descendants: null
kids: [ 42023022 ]
deleted: null
dead: true
scraping_error: null
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: null
scraped_language: null
split: train
id: 42,023,032
by: tagawa
time: 2024-11-02T00:34:04
title: State of CSS 2024 Results
text: null
url: https://css-tricks.com/state-of-css-2024-results/
score: 3
descendants: 0
kids: null
deleted: null
dead: null
scraping_error: no_error
scraped_title: State Of CSS 2024 Results | CSS-Tricks
scraped_published_at: 2024-10-30T10:43:55-06:00
scraped_byline: null
scraped_body:
They’re out! Like many of you, I look forward to these coming out each year. I don’t put much stock in surveys but they can be insightful and give a snapshot of the CSS zeitgeist. There are a few little nuggets in this year’s results that I find interesting. But before I get there, you’ll want to also check out what others have already written about it. Josh Comeau digested his takeaways in a recent newsletter. Oh, I guess that’s it — at least it’s the most formal write-up I’ve seen. There’s a little summary by Ahmad Shadeed at the end of the survey that generally rounds things up. I’ll drop in more links as I find ’em. In no particular order…

Demographics

Josh has way more poignant thoughts on this than I do. He rightfully calls out discrepancies in gender pay and regional pay, where men are way more compensated than women (a nonsensical and frustratingly never-ending trend) and the United States boasts more $100,000 salaries than anywhere else. The countries with the highest salaries were also the most represented in survey responses, so perhaps the results are no surprise. We’re essentially looking at a snapshot of what it’s like to be a rich, white male developer in the West. Besides pay, my eye caught the Age Group demographics. As an aging front-ender, I often wonder what we all do when we finally get to retirement age. I officially dropped from the most represented age group (30-39, 42%) a few years ago into the third most represented tier (40-49, 21%). Long gone are my days being with the cool kids (20-29, 27%). And if the distribution is true to life, I’m riding fast into my sunset years and will be only slightly more represented than those getting into the profession. I don’t know if anyone else feels similarly anxious about aging in this industry — but if you’re one of the 484 folks who identify with the 50+ age group, I’d love to talk with you.
Before we plow ahead, I think it’s worth calling out how relatively “new” most people are to front-end development. Wow! Forty-freaking-four percent of respondents have less than 10 years of experience. Yes, 10 years is a high threshold, but we’re still talking about a profession that popped up in recent memory. For perspective, someone developing for 10 years came to the field around 2014. That’s just when we were getting Flexbox, and several years after the big bang of CSS 3 and HTML 5. That’s just under half of developers who never had to deal with the headaches of table layouts, clearfix hacks, image sprites, spacer images, and rasterized rounded corners. Ethan Marcotte’s seminal article on “Responsive Web Design” predates these folks by a whopping four years! That’s just wild. And exciting. I’m a firm believer in the next generation of front-enders but always hope that they learn from our past mistakes and become masters at the basics.

Features

I’m not entirely sure what to make of this section. When there are so many CSS features, how do you determine which are most widely used? How do you pare it down to just 50 features? Like, are filter effects really the most widely used CSS feature? So many questions, but the results are always interesting nonetheless. What I find most interesting are the underused features. For example, hanging-punctuation comes in dead last in usage (1.57%) but is the feature that most developers (52%) have on their reading list. (If you need some reading material on it, Chris initially published the Almanac entry for hanging-punctuation back in 2013.) I also see Anchor Positioning at the end of the long tail with reported usage at 4.8%. That’ll go up for sure now that we have at least one supporting browser engine (Chromium) but also given all of the tutorials that have sprung up in the past few months. Yes, we’ve contributed to that noise… but it’s good noise!
I think Juan published what might be the most thorough and thoughtful guide on the topic yet. I’m excited to see Cascade Layers falling smack dab in the middle of the pack at a fairly robust 18.7%. Cascade Layers are super approachable and so elegantly designed that I have trouble believing anybody these days when they say that the CSS Cascade is difficult to manage. And even though @scope is currently low on the list (4.8%, same as Anchor Positioning), I’d bet the crumpled gum wrapper in my pocket that the overall sentiment of working with the Cascade will improve dramatically. We’ll still see “CSS is Awesome” memes galore, but they’ll be more like old familiar dad jokes in good time. (Aside: Did you see the proposed designs for a new CSS logo? You can vote on them as of yesterday, but earlier versions played off the “CSS is Awesome” meme quite beautifully.) Interestingly enough, viewport units come in at Number 11 with 44.2% usage… which lands them at Number 2 for most experience that developers have with CSS layout. Does that suggest that layout features are less widely used than CSS filters? Again, so many questions.

Frameworks

How many of you were surprised that Tailwind blew past Bootstrap as Top Dog framework in CSS Land? Nobody, right? More interesting to me is that “No CSS framework” clocks in at Number 13 out of 21 listed frameworks. Sure, its 46 votes are dwarfed by the 138 for Material UI at Number 10… but the fact that we’re seeing “no framework” as a ranking option at all would have been unimaginable just three years ago. The same goes for CSS pre/post-processing. Sass (67%) and PostCSS (38%) are the power players, but “None” comes in third at 19%, ahead of Less, Stylus, and Lightning CSS. It’s a real testament to the great work the CSSWG is doing to make CSS better every day. We don’t thank the CSSWG enough — thank you, team! Y’all are heroes around these parts.
CSS Usage

Josh already has a good take on the fact that only 67% of folks say they test their work on mobile phones. It should be at least tied with the 99% who test on desktops, right? Right?! Who knows, maybe some responses consider things like “Responsive Design Mode” desktop features to be the equivalent of testing on real mobile devices. I find it hard to believe that only 67% of us test mobile. Oh, and The Great Divide is still alive and well if the results are true and 53% write more JavaScript than CSS in their day-to-day.

Missing CSS Features

This is always a fun topic to ponder. Some of the most-wanted CSS features have been lurking around 10+ years. But let’s look at the top three from this year’s survey:

1. Mixins
2. Conditional Logic
3. Masonry

We’re in luck team! There’s movement on all three of those fronts:

- A new CSS Functions and Mixins Module draft was published in late June after the CSSWG resolved to adopt the proposal back in February. (Read our notes.)
- The CSS Working Group (CSSWG) resolved to add an if() conditional to the CSS Values Module Level 5 specification. (Read our notes.)
- There are competing proposals for how to forge ahead with a CSS-y approach to masonry layouts. One is based on the CSS Grid Layout Module Level 3 draft specification and the other is a fresh new module dedicated to masonry. Apple has planted its flag. So has Chrome. Let the cage-match continue!

Resources

This is where I get to toot our own horn a bit because CSS-Tricks continues to place first among y’all when it comes to the blogs you follow for CSS happenings. I’m also stoked to see Smashing Magazine right there as well. It was fifth in 2023 and I’d like to think that rise is due to me joining the team last year. Correlation implies causation, amirite? But look at Kevin Powell and Josh in the Top 10. That’s just awesome. It speaks volumes about their teaching talents and the hard work they put into “helping people fall in love with CSS” as Kevin might say it.
I was able to help Kevin with a couple of his videos last year (here’s one) and can tell you the guy cares a heckuva lot about making CSS approachable and fun. Honestly, the rankings are not what we live for. Now that I’ve been given a second wind to work on CSS-Tricks, all I want is to publish things that are valuable to your everyday work as front-enders. That’s traditionally happened as a stream of daily articles but is shifting to more tutorials and resources, whether it’s guides (we’ve published four new ones this year), taking notes on interesting developments, spotlighting good work with links, or expanding the ol’ Almanac to account for things like functions, at-rules, and pseudos (we have lots of work to do).

My 2024 Pick

No one asked my opinion but I’ll say it anyway: Personal blogging. I’m seeing more of us in the front-end community getting back behind the keyboards of their personal websites and I’ve never been subscribed to more RSS feeds than I am today. Some started blogging as a “worry stone” during the 2020 lockdown. Some abandoned socials when Twitter X imploded. Some got way into the IndieWeb. Webrings and guestbooks are even gaining new life. Sure, it can be tough keeping up, but what a good problem to have! Let’s make RSS king once and for all. That’s a wrap! Seriously, a huge thanks to Sacha Greif and the entire Devographics team for the commitment to putting this survey together every year. It’s always fun. And the visualizations are always to die for.
scraped_at: 2024-11-08T00:10:28
scraped_language: en
split: train
id: 42,023,053
by: benbreen
time: 2024-11-02T00:38:31
title: Diving to Drink a 19th-Century Shipwreck's Treasure
text: null
url: https://www.nytimes.com/2024/11/01/science/shipwreck-lake-huron-rye-seeds-whiskey.html
score: 3
descendants: 0
kids: null
deleted: null
dead: null
scraping_error: null
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: null
scraped_language: null
split: train
id: 42,023,062
by: ajdude
time: 2024-11-02T00:40:03
title: EA Drops Linux (and Steam Deck) Support for Apex Legends to Curb Cheating
text: null
url: https://news.itsfoss.com/apex-legends-drops-steam-deck/
score: 15
descendants: 1
kids: [ 42023085 ]
deleted: null
dead: null
scraping_error: null
scraped_title: null
scraped_published_at: null
scraped_byline: null
scraped_body: null
scraped_at: null
scraped_language: null
split: train
id: 42,023,088
by: goodburb
time: 2024-11-02T00:44:30
title: Magneto-Optical Drive
text: null
url: https://en.wikipedia.org/wiki/Magneto-optical_drive
score: 3
descendants: 0
kids: [ 42025953 ]
deleted: null
dead: null
scraping_error: no_error
scraped_title: Magneto-optical drive
scraped_published_at: 2003-12-03T00:07:19Z
scraped_byline: Contributors to Wikimedia projects
scraped_body:
[Image: a magneto-optical disc surface showing sector partition rectangles.] A magneto-optical drive is a kind of optical disc drive capable of writing and rewriting data upon a magneto-optical disc. 130 mm (5.25 in) and 90 mm (3.5 in) discs are the most common sizes. In 1983, just a year after the introduction of the compact disc, Kees Schouhamer Immink and Joseph Braat presented the first experiments with erasable magneto-optical compact discs during the 73rd AES Convention in Eindhoven.[1] The technology was introduced commercially in 1985.[2] Although optical, they normally appear as hard disk drives to an operating system and can be formatted with any file system. Magneto-optical drives were common in some countries, such as Japan,[3] but have fallen into disuse. [Images: visible sector partition lines on a 130 mm 652 MB magneto-optical disk (1,024 user bytes, 17 sectors per track);[4] a 130 mm 2.6 GB magneto-optical disc; a 230 MB Fujitsu 90 mm magneto-optical disc.] Early drives are 130 mm and have the size of full-height 130 mm hard-drives (like in the IBM PC XT). 130 mm media looks similar to a CD-ROM enclosed in an old-style caddy, while 90 mm media is about the size of a regular 3½-inch floppy disk, but twice the thickness. The cases provide dust resistance, and the drives themselves have slots constructed in such a way that they always appear to be closed. Original MO systems were WORM (write once, read many), and later systems were read/write.[5] The disc consists of a ferromagnetic material sealed beneath a plastic coating. The only physical contact is during recording when a magnetic head is brought into contact with the side of the disc opposite to the laser, similar to Floptical drives, but not the same.
During reading, a laser projects a beam on the disk and, according to the magnetic state of the surface, the reflected light varies due to the magneto-optic Kerr effect. During recording, laser power is increased to heat the material to the Curie point in a single spot. This enables an electromagnet positioned on the opposite side of the disc to change the local magnetic polarization. The polarization is retained after the temperature drops. Each write cycle requires both a pass to erase a region and another pass to write information. Both passes use the laser to heat the recording layer; the magnetic field is used to change the magnetic orientation of the recording layer. The electromagnet reverses polarity for writing, and the laser is pulsed to record spots of "1" over the erased region of "0". As a result of this two-pass process, it takes twice as long to write data as it does to read it. In 1990, a 300 mm disc with 7 GB capacity was made available.[6] In 1996, Direct Overwrite technology was introduced for 90 mm discs eliminating the initial erase pass when writing. This requires special media. By default, magneto-optical drives verify information after writing it to the disc, and are able to immediately report any problems to the operating system. This means writing can actually take three times longer than reading, but it makes the media extremely reliable, unlike the CD-R or DVD-R media upon which data is written without any concurrent data integrity checking. Using a magneto-optical disc is much more like using a diskette drive than a CD-RW drive. During a read cycle, the laser is operated at a lower power setting, emitting polarized light. The reflected light has a change in Kerr rotation and Kerr ellipticity which is measured by an analyzer and corresponds to either a logical 0 or 1. The 130 mm drives have been available in capacities from 650 MB to 9.2 GB. However, this is split in half over both sides of the disk. 
The 2.6 GB disks, for example, have a formatted capacity of 1.2 GB per side. The 130 mm drives were always SCSI. The 90 mm discs had their entire capacity on one side, with no capability to flip them over. The 90 mm drives were produced in SCSI, IDE, and USB formats. Capacities range from 128 MB to 2.3 GB. While they were never particularly popular with consumers (the main consumer market was the 90 mm drives), the 130 mm drives had some lasting service in corporate storage and retrieval. Optical libraries, such as the Hewlett Packard 40XT, were created to automate loading and storing of the disks. A self-contained unit holding 16 or more disks and connected by SCSI to a host computer, the library required specialized archival software to store indices of data, and select disks. Popular uses were for legal document storage and medical imaging, where high reliability, long life, and (at the time) high storage capacity were required. The optical libraries could also manually be used on a Windows 2000/XP machine by selecting and ejecting discs under the Computer Management icon's Removable Storage Service, but this is cumbersome in practice. Light Intensity Modulated Direct OverWrite (LIMDOW) technology used a different write technology, which improved on the performance levels of earlier magneto-optical devices.[7][8] LIMDOW disks and drives worked on the same basic principle as a standard magneto-optical drive: the write surface is heated up and took on a magnetic force applied from outside. But instead of using a magnetic head in the drive to make the changes, the magnets were built into the disk itself.[9] The LIMDOW disk has two magnetic layers just behind the reflective writing surface. This write surface can take magnetism from one of those magnetic layers when it is heated up to one temperature; but if it is heated up further, it will take its polarity from the other magnetic layer. 
To write the data onto the disk, the magneto-optical drive's laser pulses between two powers. At high power, the surface heats up more and takes its magnetic charge from the north pole magnetic layer. At the lower power, it heats up less and takes its magnetic charge from the south pole layer. Thus, with LIMDOW the magneto-optical write process has a single stage, improving write times. Because the magnetic surface is adjacent to the writing surface, rather than somewhere outside the disk itself, the magnetic writing can be done at a higher resolution, up to the resolution of the laser spot doing the heating. In the spring of 1997 Plasmon launched its DW260 drive, which used LIMDOW technology for a higher level of performance than previous magneto-optical drives. LIMDOW drives that shipped in the second half of 1997 had search speeds of less than 15 ms and data transfer rates in excess of 4 Mbit/s, which were fast enough for storing audio and streaming MPEG-2 video. MiniDiscs are magneto-optical discs used to store music. Magneto-optical drives were first offered in NeXT computers. They were later also offered in Canon products. Sony MiniDiscs are magneto-optical, and Sony produces many other formats of magneto-optical media. As of August 2021, Sony continues to manufacture one type of blank MiniDisc available only in Japan; the rest of the world only has access to dwindling new stock from vendors on sites such as eBay or Amazon. TEAC & TASCAM continued to manufacture MiniDisc decks up until 2020 while Sony ceased production of hardware in 2013.[10][11] Pinnacle Micro was a major manufacturer of magneto-optical drives. 3.5" drives were 128 MB and 230 MB. 5.25" drives produced were 650 MB and 1.3 GB (Sierra), 2.6 GB (Vertex) and 4.6 GB (Apex). The Vertex and Apex were non-ISO standard drives and used proprietary media. Pinnacle Micro has ceased production of these products. LMSI produced 5.25" magneto-optical drives as well.
Maxoptix, a spin-off of Maxtor Corp., was a major manufacturer of 130 mm or 5.25" magneto-optical drives. A current model is the T7-9100 drive, which has a maximum capacity of 9.1 GB and is downward read and write compatible with 5.2 GB, 4.8 GB, 4.1 GB, 2.6 GB, and 2.3 GB magneto-optical disks, and read compatible with 1.3 GB, 1.2 GB, 650 MB, and 600 MB magneto-optical disks. Popular older models of 5.25" Maxoptix MO drives are the T6 Star, T6-5200 and T5-2600 MO drives. Maxoptix was acquired by Techware Distribution in 2008. Fujitsu was a major manufacturer of 90 mm magneto-optical drives, exceeding 2 GB in capacity, but they have discontinued production and sale of this product category. PDO Konica Minolta was the last manufacturer of 90 mm 3.5" magneto-optical drives. They had a 3.5" 1.3 GB USB external pocket drive available for sale in the United States and Europe. Magneto-optical drives are not Floptical drives, which likewise combine ferromagnetic and optical technologies, albeit in a different manner. Flopticals are 21 megabyte 3.5" magnetic diskettes using optical tracks to increase the tracking precision of the magnetic head, from the usual 135 tracks per inch to 1,250 tracks per inch. No laser or heating is involved; a simple infrared LED is used to follow the optical tracks, while a magnetic head touches the recording surface. The drives can also read and write traditional 3.5" diskettes, although not the 2.88 megabyte variety. Flopticals were manufactured by Insite Peripherals, a company founded by Jim Burke. At the Consumer Electronics Show in January 2004, Sony revealed a 1 gigabyte capacity MiniDisc called Hi-MD. Its recorders can also double the capacity of regular MiniDiscs with special formatting that renders the disc incompatible with other recorders. As with all removable storage media, the advent of cheap CD and DVD drives and flash memory has made them largely obsolete. 
Magneto-optical disks in particular were expensive when new, with high reliability but slow writing. Magnetic tape formats like LTO have far surpassed MO media for high-capacity enterprise-grade backup storage. In 2016 a new phenomenon, magnetization melting by photoinduced photoconductors, was discovered in magnetic photoconductors.[12] It was demonstrated that extremely low light intensities in the range of 1 μW cm⁻² can be used to read/write magnetic information on femtosecond (10⁻¹⁵ s) timescales, allowing high-speed, high-density data storage in principle.

See also: Domain Wall Displacement Detection (DWDD), a magneto-optical reproducing technology developed by Canon Inc. and Sony; Floptical; Ultra Density Optical.

References:
[1] K. Schouhamer Immink and J. Braat (1984). "Experiments Toward an Erasable Compact Disc". J. Audio Eng. Soc. 32: 531–538.
[2] Mueller, Scott (2010). Upgrading and Repairing PCs (19th ed.). p. 584. ISBN 978-0-7897-3954-4.
[3] "Sony announces the end of the MiniDisc". Ars Technica. 2 February 2013.
[4] "ECMA-153: Information interchange on 130 mm optical disk cartridges of the Write Once, Read Multiple (WORM) type, using the magneto-optical effect". Ecma International. June 1994.
[5] Mueller, Scott (August 2003). Upgrading and Repairing PCs. Que. ISBN 0-7897-2974-1.
[6] "WORM disk stores 7 Gbytes of data". Computer. IEEE. 1 December 1990.
[7] Mueller, Scott. "Magneto-Optical Drives". Upgrading and Repairing PCs (15th ed.). p. 670.
[8] "LIMDOW recording on magneto-optical disk". Computer Weekly (white paper). Hewlett-Packard. 1999.
[9] "LIMDOW" (definition). PC Magazine.
[10] "On the end of MD deck production" (Mdデッキ生産の終息について, in Japanese). 株式会社 松本無線音響設備. 27 January 2020.
[11] "Sony Ends Production on all MiniDisc Players". Stereophile. February 2013.
[12] Náfrádi, Bálint (24 November 2016). "Optically switched magnetism in photovoltaic perovskite CH3NH3(Mn:Pb)I3". Nature Communications. 7: 13406. arXiv:1611.08205. Bibcode:2016NatCo...713406N. doi:10.1038/ncomms13406. PMC 5123013. PMID 27882917.
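The timing consequence of the two-pass write described in the article (erase pass plus write pass, with an optional verify pass) can be sketched numerically. The unit of "one disc pass" here is an arbitrary illustrative constant, not a measured figure:

```python
# Hedged sketch of the article's timing claim: a standard MO write is
# two laser passes (erase, then write), so writing takes about twice as
# long as reading; write-with-verify adds a third pass, so about 3x.
read_pass = 1.0  # arbitrary time units for one pass over a region

erase = write = verify = read_pass  # each is one full laser pass
plain_write = erase + write             # two-pass write
verified_write = erase + write + verify # write plus read-back verify

print(plain_write / read_pass, verified_write / read_pass)  # 2.0 3.0
```

This is also why LIMDOW's single-stage write (the laser alone pulsing between two powers) improves write times: it removes the separate erase pass.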
scraped_at: 2024-11-08T20:55:17
scraped_language: en
split: train
id: 42,023,089
by: jondlm
time: 2024-11-02T00:44:42
title: My Time Working at Stripe
text: null
url: https://jondlm.github.io/website/blog/leaving_stripe/
score: 395
descendants: 377
kids:
[ 42024884, 42024457, 42024975, 42024561, 42024793, 42026042, 42024476, 42025611, 42025112, 42024453, 42024373, 42026257, 42028530, 42024477, 42024913, 42025223, 42024456, 42025402, 42024450, 42027204, 42027850, 42027199, 42025099, 42029203, 42026887, 42025121, 42026558, 42028117, 42025326, 42027599, 42024590, 42025262, 42026907, 42024980, 42027191, 42025706, 42029041, 42026600, 42024562, 42024378, 42026293, 42024584, 42028067, 42026930, 42027569, 42029137, 42024736, 42027019, 42026849, 42023090, 42043390, 42029881, 42027816, 42028398, 42026415, 42028176, 42026761, 42026933, 42024526, 42028434, 42024965, 42026015, 42024567 ]
null
null
null
null
null
null
null
null
null
train
42,023,134
mot92
2024-11-02T00:53:40
Building Tiny AI Tools: How I Integrated Slack, Jira, and AI to Create JiraGPT
null
https://motaha.io/p/building-tiny-ai-tools-how-i-integrated
2
0
null
null
null
null
null
null
null
null
null
null
train
42,023,144
aturati
2024-11-02T00:55:50
Machine Learning Engineering at Sketchpro.ai
null
https://sketchpro.ai/home
3
1
[ 42023145 ]
null
null
null
null
null
null
null
null
null
train
42,023,201
mfiguiere
2024-11-02T01:05:33
Having Fun with Modern C++
null
https://lemire.me/blog/2024/11/02/having-fun-with-modern-c/
4
0
null
null
null
null
null
null
null
null
null
null
train
42,023,209
bookofjoe
2024-11-02T01:06:51
'Ike's Road Trip' Review: The Eisenhower Highway
null
https://www.wsj.com/arts-culture/books/ikes-road-trip-review-the-eisenhower-highway-f20e92bb
1
1
[ 42023211 ]
null
null
null
null
null
null
null
null
null
train
42,023,212
tikhonj
2024-11-02T01:07:26
Debugging Haskell Type Errors
null
https://jelv.is/blog/Debugging-Haskell-Type-Errors/
2
0
null
null
null
no_error
Debugging Haskell Type Errors | jelv.is
null
null
Fixing Haskell type errors can be hard. Learning how to understand and fix type errors was the first real obstacle I faced when I first picked up the language. I’ve seen the same tendency with every Haskell beginner I’ve taught. With a bit of experience, I got so used to the quirks of GHC’s typechecker and Haskell’s standard library that I could resolve most type errors intuitively. Most but not all. Worse yet, the intuition that helped me in easier cases did not scale to harder errors; instead, fixing hard errors required frustrating trial-and-error. I did not have a mental toolkit for debugging confusing type errors.

[Image: An intimidating block of error messages for a single mistake!]

At the same time, I was going through the same story with debugging in general. When I started programming all bugs were hard; gradually, I developed an intuition for fixing most bugs; but I did not have the mental tools to deal with hard bugs, leaving me thrashing around when my initial assumptions about a bug were wrong. I was missing one key insight: you can debug systematically. Debugging is a skill you can learn—not just glorified guess-and-check.

Realizing this, my approach to debugging improved significantly. I slowed down, stopped jumping in based on my initial assumptions and instead approached problems step-by-step, following some simple principles. This insight translated to Haskell. We can fix Haskell type errors systematically. It’s a skill you can learn. Let’s look at a simple framework for fixing type errors by following three principles:

1. Read the error
2. Think in constraints
3. Divide and conquer

Systematic debugging is something I only learned almost a decade after I first started programming. In hindsight, it’s a bit surprising—none of the tutorials, books, online discussions or college courses I took ever treated debugging as a concrete skill and never covered any specific debugging techniques or principles.
At the same time, debugging is easily one of the most important skills for any sort of programmer; everyone from the hobbyist to the academic to the professional spends at least as much time and effort debugging as they do writing code.

The first time I heard anyone talk about debugging as a systematic skill was in a lecture during an internship.[1] Shortly afterwards, somebody recommended a book[2] which had nine “rules”—really rules-of-thumb or general principles—for debugging. By following these principles, I could solve problems step-by-step rather than thrashing around until I stumbled onto the solution.

We can approach Haskell type errors with the same mindset, with some variations on general debugging principles tailored to Haskell’s style of type system specifically. (The principles apply just as well to other Hindley-Milner languages like OCaml.) Here are three principles I use to deal with harder type errors:

- Read the error: when you load your code and see a bunch of red text, don’t panic. Stop and read the errors—the error messages are the compiler’s best attempt to tell you what’s going on, and they’re where we will start our debugging process.
- Think in constraints: Haskell’s type system works like a set of constraints (and type inference works like constraint solving). When you see an error, read it as “here is an inconsistency in your code’s types” and not “here is exactly where your code is wrong”.
- Divide and conquer: if the fix is not immediately clear from the error message, you need to understand other parts of the code to figure out a fix. Use the compiler, the types and the structure of the code to find which other parts are relevant.

Let’s dive into each principle and see how to put them into action.

Read the Error

First step: when you see an error, read the error. If there are multiple errors, read all of them. The first error you get is not necessarily the best starting point. This might sound obvious but, in practice, it isn’t.
Everyone I’ve mentored started out with a tendency to jump straight to their code as soon as they saw an error. I’ve caught myself doing the same thing! Error messages are a bit intimidating; it feels like you’ve done something wrong. Wanting to fix the error immediately is a natural impulse.

As you get a bit more experience, you’ll learn to quickly recognize the most common types of errors you’ll encounter. Some errors are clear right away; others are confusing, but understandable once you learn the pattern.[3] And then there’s the minority of errors that point you in the wrong direction or are plain weird; it’s these final errors where slowing down and proceeding systematically is the most important.

Cutting Through the Noise

Haskell error messages get verbose fast. Each error produces a lot of noise. Haskell errors are verbose because they try to present all the information you’d need in a vacuum. Most error messages will have several parts giving distinct information, like:

- The error itself.
- Additional context about the error.
- Steps to identify which part of the code caused the error.
- Suggestions for how to fix the error.

Here’s a simple type error message from one of my projects with its three distinct parts:

```
src/Theta/Target/Python.hs:130:50: warning: [-Wdeferred-type-errors] …
    • Couldn't match expected type ‘Theta.Type’ with actual type ‘()’
    • In the third argument of ‘toReference’, namely ‘()’
      In the expression: toReference prefix currentModule ()
      In an equation for ‘type_’: type_ = toReference prefix currentModule ()
```

This message has three parts:

1. The first line tells us the location[4] as well as the type of error (a deferred type error warning).[5]
2. The second line is the actual error message.
3. The third, fourth, fifth and sixth lines all tell us where the error is in our code.

Even the simplest type error leads to six lines of error text. It only takes a handful of errors like this to fill an entire screen!

If you’re using an editor that highlights errors in place, none of the location information—4½ out of 6 lines!—matters. The only information we need is:

- It’s a type error.
- We expected a Theta.Type value but got ().

So: the first trick to reading Haskell type errors is to mentally filter out the bits that don’t matter—often most of the message![6]

More involved code produces even more noise. A slightly different type error in the same Python.hs file, for example, produced 42 lines of localization information[7]—none of which was useful because my editor highlighted the exact part of the code I needed to look at!

Once you cut through the noise, most Haskell type errors are reasonably clear. For example the message for this (somewhat contrived) error is clear: I need to replace the () with a value of the type Theta.Type. Even the error with 42 lines of noise had the correct suggestion that I was missing an argument to a function. However, some errors will not be nearly as clear. Perhaps the message itself is confusing or there are several errors and it is not clear which one to start from. Other times, the error attribution is wrong: either the error is pointing to the wrong part of the code, or the type of error itself is misleading. (We’ll talk more about attribution and localization in later sections.)

Even in those cases, the error messages are still worth reading. A message might not point us to a solution directly but it still gives us information. One of my personal debugging principles is to start debugging by getting all the information I can out of a system before doing anything else; for Haskell type errors, the error messages are the information we start with. Error messages will be our starting points for understanding what’s going on and starting our divide and conquer process to find the real cause of the error.
Multiple Error Messages

What should you do when you write some new code—or just make a single innocuous change—and see two screens of error messages? Don’t panic. Remember that Haskell error messages are verbose; once you cut through the noise, those two screens of errors reduce to a handful of distinct errors. Instead of jumping into the first error in the list, take a step back and read all of the errors. The first error you see may not be the best starting point. Moreover, patterns in the errors can be a useful indicator for diagnosing the underlying problem.

Multiple errors often group into a single “logical” error. For example, if we change the type of a function parameter, we’ll get an error for every call site. A slightly contrived example:

```haskell
render :: Int -> String
render x = show x

add :: Int -> Int -> String
add a b = render $ a + b

sub :: Int -> Int -> String
sub a b = render $ a - b
```

If we change the type signature of render to render :: Integer -> String we will get two errors for that one change:

```
src/Example.hs:7:20: warning: [-Wdeferred-type-errors] …
    • Couldn't match expected type ‘Integer’ with actual type ‘Int’
    • In the second argument of ‘($)’, namely ‘a + b’
      In the expression: render $ a + b
      In an equation for ‘add’: add a b = render $ a + b

src/Example.hs:10:20: warning: [-Wdeferred-type-errors] …
    • Couldn't match expected type ‘Integer’ with actual type ‘Int’
    • In the second argument of ‘($)’, namely ‘a - b’
      In the expression: render $ a - b
      In an equation for ‘sub’: sub a b = render $ a - b
```

These two errors group into a single logical error: the type for the argument we need to pass to render has changed. Real-world code is not going to be quite this clean; in one of my projects, changing a function from taking a Maybe AST value to an AST value resulted in 16 type errors with a couple of variations on the actual error message—all stemming from a single change to a type signature!

Was the mistake in the change to the function’s type signature, or was the change intentional and now all the call sites need fixing? The compiler fundamentally has no way to know without reading your mind. In lieu of mind-reading, the compiler treats type signatures as sources of truth and gives you a ton of errors. When you’re making the change intentionally this is actively useful: you get a checklist of every location in your program that you need to update. But if the change to the function was a typo, it’s a bit confusing—you get a ton of errors and none point to the actual mistake—so you have to read all the errors and notice the pattern in order to diagnose and fix the actual problem.

A similar pattern to watch out for is when a single change leads to several different errors pointing to the same place. I ran into this with some of my own code recently, which had the following call to mapM—don’t worry about the details:

```haskell
toModule Theta.Module {..} prefix = do
  definitions <- mapM (toDefinition prefix moduleName) types
  ...
```

What would happen if I left out the moduleName argument in toDefinition?

```haskell
toModule Theta.Module {..} prefix = do
  definitions <- mapM (toDefinition prefix) types
  ...
```

Because Haskell functions are curried by default, toDefinition prefix would still be a function, but it would not match the type mapM expected. However, instead of getting an error that pointed out the missing argument directly, I got several type errors instead (noisy output snipped for readability):

```
src/Theta/Target/Python.hs:64:24: warning: [-Wdeferred-type-errors] …
    • Couldn't match type ‘m’ with ‘(->) (Theta.Definition Theta.Type)’
      Expected: Name.ModuleName -> m (m0 Python)
        Actual: Name.ModuleName -> Theta.Definition Theta.Type -> m0 Python
    ...

src/Theta/Target/Python.hs:64:45: warning: [-Wdeferred-type-errors] …
    • Couldn't match type ‘Theta.Definition Theta.Type’ with ‘Name.ModuleName’
      Expected: Data.Map.Internal.Map Name.Name Name.ModuleName
        Actual: Data.Map.Internal.Map Name.Name (Theta.Definition Theta.Type)
    ...

src/Theta/Target/Python.hs:65:41: warning: [-Wdeferred-type-errors] …
    • Couldn't match type ‘m0 Python’ with ‘Python’
      Expected: [Python]
        Actual: [m0 Python]
    ...
```

The three error messages—with all their text—were a bit intimidating, but I gave them a quick scan and noticed that they were all pointing to roughly the same part of my code, a hint that they share the same underlying cause. In this case, it turned out that the first error message was the best one to start with. But if the code had been written in a slightly different order (say the call to mapM was in a where clause) we could have gotten exactly the same errors in a different order, and the best starting point could have been the second or third error instead.

Reading the first error message carefully, we can see that Couldn't match type ‘m’ with ‘(->) (Theta.Definition Theta.Type)’ is telling us that we are missing an argument—which becomes much clearer if you read the next two lines in the error:

```
Expected: Name.ModuleName -> m (m0 Python)
  Actual: Name.ModuleName -> Theta.Definition Theta.Type -> m0 Python
```

Expected and Actual types are often more useful than the top-level error message.

Think in Constraints

While Haskell’s error messages can be confusing, I’ve found that error attribution is a larger problem. It doesn’t matter how well-written and well-formatted your error messages are if the error is pointing in the wrong place! Some level of error misattribution is inevitable. The core problem is that types can’t tell us that some code is right or wrong; types can only point out inconsistencies.
You always have multiple parts of your code that you can change to fix a type error: if you pass an invalid argument to a function, you can change the argument, change the argument’s type, change the function’s definition or use a different function altogether. Or maybe it’s a sign you need an even larger refactoring! Which change is “correct” depends on your intentions. The compiler cannot read your mind and does not know anything about the world outside your code, so it cannot know what your code is supposed to do. This is fundamentally true for all languages but it’s exacerbated in Haskell because Haskell’s type system is so flexible and expressive, and because Haskell has global type inference à la Hindley Milner.

To understand what Haskell’s type errors indicate about our code and how to compensate for confusing error localization, we need to understand how Haskell’s types act like constraints and how Haskell’s type inference and type checking act as constraint resolution.

Haskell Types as Constraints

How does Haskell determine what type an expression should have? A good mental model is that Haskell starts by treating an expression or variable as able to have any type (x :: a) then looks through the code for anything that would force (constrain) the expression to have a more specific type. A constraint could be:

- an explicit type signature: if Haskell sees x :: Int in the code, it will proceed with the assumption that x has type Int
- using x in a context that restricts its type: if Haskell sees x && False in the code, it will proceed with the requirement that x has type Bool
- an implicit constraint that lets x be polymorphic: if Haskell sees x + 1, it will assume x has the type Num a => a

Ideally, all the constraints are consistent. If x has the type Int everywhere in your code, everything is good. Alternatively, if x is constrained to Int at one point and Num a => a at another, things are still good. Int is an instance of Num so the two signatures are compatible and x has the more specific of the two types (Int).

A type error is what we get when these constraints are not consistent. For example:

- We see x + 1 on line 10, constraining x to Num a => a
- We see x && y on line 20, constraining x to Bool
- Bool is not an instance of Num, so these two types are incompatible

So now we need to generate a type error. Should the error point to line 10 or line 20? There’s no real way to know. Perhaps you meant to write x' + 1 at line 10. Perhaps you meant to write even x && y on line 20, or maybe x + y'. Or maybe you meant to define a Num instance for Bool![8]

All that the compiler knows is that you have to change some part of the code in order to make the types consistent. There are multiple places you could change, but an error message can only point to one, so the compiler has to choose somehow. The way real-world compilers choose where to point an error is more-or-less arbitrary, an implementation detail of the typechecking algorithm maybe coupled with some rough heuristics. This ad hoc approach works surprisingly well in practice but it isn’t—fundamentally can’t be—perfect.[9]

So when you encounter a type error pointing to a line of code that seems totally correct, don’t panic! There’s a good chance that the problem is in some other line of code and the typechecker chose the “wrong” line. Understanding Haskell’s types as constraints will help us to track down the actual source of the error. As we divide and conquer the codebase, the candidates in our code will be the lines that introduce the constraints that led to the type error we are fixing.

Type Signatures as Assertions

An important aspect of Haskell’s type checking and type inference is that type signatures act like assertions. That is, when Haskell sees x :: Int, it will take this as given for the rest of the code. This is true even if x is defined to be something that can’t be an Int.
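The line-10/line-20 inconsistency described above can be reproduced in a single, self-contained definition. Here is a minimal sketch (the name f is invented for this illustration, not taken from the post):

```haskell
-- x is constrained to Num a => a by (+) and to Bool by (&&).
-- Bool has no Num instance, so GHC must report the inconsistency,
-- and which use site it blames is up to the typechecker.
f x y = (x + 1, x && y)
```

Loading this in ghci produces a "No instance for (Num Bool)" error pointing at one of the two uses of x; neither use is more "wrong" than the other, which is exactly the attribution problem described above.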
If we load the following code, we’ll get two type errors:

```haskell
x = False

y = x + 1

z = 2 * x + y
```

```
src/Theta/Misc.hs:3:7: warning: [-Wdeferred-type-errors] …
    • No instance for (Num Bool) arising from a use of ‘+’
    • In the expression: x + 1
      In an equation for ‘y’: y = x + 1

src/Theta/Misc.hs:5:11: warning: [-Wdeferred-type-errors] …
    • No instance for (Num Bool) arising from a use of ‘+’
    • In the expression: 2 * x + y
      In an equation for ‘z’: z = 2 * x + y
```

(Side note: that second error is a great example of arbitrary attribution: why does it point to + and not * as the reason we need a Num instance? Either choice would have been totally valid!)

Now let’s add a type signature to x:

```haskell
x :: Int
x = False

y = x + 1

z = 2 * x + y
```

With this type signature, we’ve asserted that x has the type Int. Now Haskell will treat x as an Int everywhere in the code even though x is defined as False. We will only get a single error for the definition itself, but no errors for y or z:

```
src/Theta/Misc.hs:2:5: warning: [-Wdeferred-type-errors] …
    • Couldn't match expected type ‘Int’ with actual type ‘Bool’
    • In the expression: False
      In an equation for ‘x’: x = False
```

Type signatures are Haskell’s way of letting us explicitly specify our intentions. By telling the compiler that x :: Int, it knows that y and z are fine, but that the definition x = False is inconsistent. The code is still semantically the same, but we get a more pointed error message.

Type signatures can also constrain the type of an expression more than it would be otherwise. A definition x = [] will have the type x :: [a], but if we add an explicit signature like x :: [Int], the code will compile with the more specific type. Just like the previous example this can give you more specific type error messages, as well as avoiding weird edge cases like the monomorphism restriction.

Type signatures in Haskell are—mostly—optional.
You can write entire Haskell programs without annotating any types yourself, relying entirely on type inference. In practice, however, including top-level type signatures is a really good idea because it communicates your intent to both the compiler and to anybody else reading your code. You will consistently get clearer, better-attributed type errors if you write explicit type signatures. The more types you specify as type signatures, the more specific your type errors will be—but don’t forget that the type signature itself can be wrong! Some of the trickier type errors I’ve had to solve boiled down to a mistake in a type signature rather than a mistake in an expression.

More Signatures, More Better

Type signatures let us isolate parts of our code for the typechecker, giving us better error messages and localization. This gives us a technique for debugging confusing type errors: add more type signatures. Apart from top-level definitions, you can also add explicit signatures for:

- let and where definitions
- variables bound in do-notation
- pattern-match variables
- arbitrary sub-expressions (x + (y * 10 :: Double))
- typeclass implementations

Adding a type signature is a way to assert something you believe about your code’s types. Maybe you’re right about the type, maybe you’re wrong, but the type signature will help in either case:

- if your type signature is wrong, you’ll get a new type error from it and you will have learned something new about your code
- if your type signature is right, you’ll give the typechecker more information to provide clearer, better-localized errors

I’ve cleared up numerous confusing type errors in real-world code by adding a type signature or two to helper functions defined in where clauses. Sometimes I even pull out sub-expressions into a where or let clause just to add a type signature—while you can add type signatures directly inside expressions, code often reads much better with those subexpressions pulled out into their own definitions.
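As a concrete illustration of annotating where-bound helpers, here is a small sketch (the names summarize, total, and label are invented for this example, not taken from the post):

```haskell
summarize :: [Int] -> String
summarize xs = label <> show total
  where
    -- Explicit signatures on local helpers fence them off for the
    -- typechecker: a mistake inside one helper is reported against
    -- its own signature, with a clearer message, instead of leaking
    -- out as a confusing error somewhere in the caller.
    total :: Int
    total = sum xs

    label :: String
    label = "total = "
```

If, say, total were accidentally defined as sum (map show xs), the error would point directly at total's definition, because its signature asserts Int.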
There’s nothing wrong with leaving type signatures you added to debug a type error after you’re done fixing your code. If the type signature helped once, it will likely help again; and, regardless, the explicit type signature will help anybody reading the code in the future.

Divide and Conquer

So: you’ve read your error messages, you’ve added some type signatures, but you still can’t find what’s causing the type error. The code highlighted by the error looks fine and it’s not clear what’s actually wrong. What do we do?

We need to find which other part of the code is incorrectly causing our types to be inconsistent. We could try jumping around the code based purely on intuition, but it’s easy to go in the completely wrong direction if your initial guesses aren’t right. Alternatively, we could try reading our code from start to end—does code even have a start and an end?—but that would take a lot of work! Instead of jumping around in an ad hoc way or doing a linear scan of our code, we can borrow an idea from the world of algorithms and find the problem through divide and conquer.

Remember that a type error corresponds to an inconsistency between type constraints in your codebase. An inconsistency is not a single point that is wrong; rather, it is composed of multiple components that are incompatible, and we can search through them separately.

Violated Expectations

A type error highlights a specific expression and gives us the two incompatible sides:

- The actual type the expression has.
- The type that the context of the expression expected.

We don’t know which one is “wrong”, we just know that they do not match. Some type errors explicitly list the “expected” and “actual” sides, like we saw in an earlier example:

```
src/Theta/Target/Python.hs:65:41: warning: [-Wdeferred-type-errors] …
    • Couldn't match type ‘m0 Python’ with ‘Python’
      Expected: [Python]
        Actual: [m0 Python]
    ...
```

Other errors leave us to reason out the two sides from the error message, as we saw in a different example:

```
src/Theta/Misc.hs:5:11: warning: [-Wdeferred-type-errors] …
    • No instance for (Num Bool) arising from a use of ‘+’
    ...
```

The literal text of this error tells us that Bool does not have a Num instance—but that’s fine, Bool should really not have a Num instance! Booleans aren’t numbers. Instead, we should read this message as:

- Expected: an instance of Num
- Actual: Bool

As you see different kinds of type error messages, it’s worth learning how to translate all of them into this format. Writing out the two sides explicitly when you first see an error can help.

Searching through the Code

The two sides of a type error give us the perfect starting point for dividing our problem into two halves:

- Why does the compiler believe our expression has its actual type?
- Why does the compiler believe the surrounding context expected the type it did?

Often, we will have a good idea of which side to look at: either the actual or the expected type are “obviously” correct. (That said, always be wary of anything that seems “obvious”—believing the wrong thing to be obvious is the easiest way to go off on a wild goose chase!) Even if neither side is clearly right, we’ve still made progress by splitting our big problem (“why are we getting this type error?”) into two smaller problems.

The next step is to take one of these sides and figure out what constraints led to that particular type. One way to do this is by reading the code and reasoning through the types in your head—a pain at first but manageable with a bit of experience. You only have to reason about types, not about what the code actually does: static types are a syntactic property of the program, so they can only depend on the code and not on runtime behavior or state.
We also have a few tools that can help us figure out what’s going on with our types:

- An IDE or haskell-language-server can tell you the type inferred for a specific identifier or expression in your code.
- You can replace parts of your code with typed holes to see what types are inferred for those parts.
- ghci has several commands for inspecting types:
  - the :t command gives you the type of an expression
  - the :i command gives you information about an identifier, including all the typeclass instances for a type or all the implementing types for a typeclass
  - the :k command will tell you the kind of a type and can simplify type expressions (with :k!)

If we have a good idea of what parts of the code constrain the expression that led to our type error, but that is not enough to resolve the type error, we can continue the search in the same way: figure out which parts of the code constrain the parts we’re currently looking at. We’re searching through the type dependencies of our code like a graph.

Of course, this graph of type dependencies can get big. Searching through it effectively will always require some intuition about what could reasonably cause the errors we’re seeing. Writing additional type signatures is a powerful tool for managing this large search space. By asserting types with type signatures, we can fence off the parts of the code we’ve looked at from the parts we’re still investigating, directing where the type checker looks. (More realistically, I often add type signatures simply because more signatures is more better rather than based on any sort of sophisticated tactical reasoning!)

My advice here is to try to search more-or-less systematically and to think of types in terms of constraints, but not to overthink beyond that. At first this will sometimes take a lot of effort, but this gets much easier with experience: experience with Haskell in general, with GHC in particular and even with the libraries and abstractions you’re using.
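An illustrative ghci session using the inspection commands listed above (output abbreviated; the exact wording varies between GHC versions):

```
λ> :t not
not :: Bool -> Bool
λ> :i Num
class Num a where
  (+) :: a -> a -> a
  ...
λ> :k Maybe
Maybe :: * -> *
```

:t is usually the first reach when a sub-expression's inferred type is in doubt; :k plays the same role one level up, for type-level expressions.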
Haskell type errors can be hard. Haskell has the reputation for bad type error messages but, while the messages do have issues, a more common problem is bad error attribution: type errors do not always give the “right” reason for the problem or point to the “right” part of the code.

While getting comfortable fixing Haskell type errors will only come with experience and practice, we can start by approaching the problem systematically. This both gives you a foundation for learning how to solve type errors as well as helping you deal with trickier errors even once you have more experience. When you see an error, you can follow three principles to deal with it:

- read the error—the error is your best starting point (and sometimes reading the error explains the error!)
- think in constraints—it’s not about “right” and “wrong”, it’s about two sides being incompatible
- divide and conquer—why do we have the type we have? why do we need the type we need?

At first, all of these principles will take conscious effort to apply. But with a bit of experience, it becomes a habit—a habit that will save you a lot of time and frustration, and a habit I wish I had developed earlier myself!

[1] The first time I saw anybody talk about debugging as a skill was in a talk as part of Jane Street’s internship program—unfortunately, more than a decade later, I don’t remember exactly who gave the talk. At that point I had taken three years of CS courses at Berkeley and none of them ever touched on debugging like this; in hindsight, I would say this was the biggest missing piece in my CS education.

[2] Debugging: The 9 Indispensable Rules is, despite the click-baity title, an amazing book for learning how to debug systematically. My approach for debugging in general and for fixing Haskell type errors in particular is heavily influenced by this book.

[3] I recently asked the community for examples of confusing error messages and got a ton of great examples.
There were too many good examples to include in this post—which is already a bit too long—so I’m planning to write a follow-up post focused just on common patterns of confusing type errors.

[4] The location is given as a path to the file, a line number and a column number. This error is in Python.hs at line 130 starting on character 50. Some editors recognize this format and let you jump to the specified location.

[5] This type error is actually a warning because I have the -Wdeferred-type-errors flag turned on. This flag is great for development because it lets the compiler surface more type errors and lets you experiment with the working parts of your code even if other parts don’t typecheck.

[6] On GHC 9.8 and later, the noisy context information can be disabled with the -fno-show-error-context flag. In ghci you can enable this flag with :set:

```
λ> :set -fno-show-error-context
```

[7] Seriously! At least there is a great suggestion for a fix on line 4 (highlighted in blue).

```
src/Theta/Target/Python.hs:137:28: warning: [-Wdeferred-type-errors] …
    • Couldn't match expected type ‘Python’
                  with actual type ‘Name.Name -> Python’
    • Probable cause: ‘toIdentifier’ is applied to too few arguments
      In the expression: toIdentifier prefix currentModule
      In a case alternative:
        Theta.Newtype' name _ -> toIdentifier prefix currentModule
      In the expression:
        case baseType of
          Theta.Primitive' t -> primitive t
          Theta.Fixed' _ -> "bytes"
          Theta.Array' a ->
            let items = ...
            in ((Theta.Target.LanguageQuoter.fromText @Python
                   $ Text.pack
                       ((Theta.Target.LanguageQuoter.indentBy 0)
                          ("List["
                             <> (Text.unpack (Theta.Target.LanguageQuoter.toText items)
                                   <> ("]" <> ""))))))
          Theta.Map' a ->
            let values = ...
            in ((Theta.Target.LanguageQuoter.fromText @Python
                   $ Text.pack
                       ((Theta.Target.LanguageQuoter.indentBy 0)
                          ("Mapping[str, "
                             <> (Text.unpack (Theta.Target.LanguageQuoter.toText values)
                                   <> ("]" <> ""))))))
          Theta.Optional' a ->
            let type_ = ...
            in ((Theta.Target.LanguageQuoter.fromText @Python
                   $ Text.pack
                       ((Theta.Target.LanguageQuoter.indentBy 0)
                          ("Optional["
                             <> (Text.unpack (Theta.Target.LanguageQuoter.toText type_)
                                   <> ("]" <> ""))))))
          Theta.Enum' name _ -> toIdentifier prefix currentModule name
          Theta.Record' name _ -> toIdentifier prefix currentModule name
          Theta.Variant' name _ -> toIdentifier prefix currentModule name
          Theta.Newtype' name _ -> toIdentifier prefix currentModule
          Theta.Reference' name -> toIdentifier prefix currentModule name
```

[8] A Num instance for Bool would require an orphan instance and would be an awful idea in practice, but it would be valid Haskell, and, hey, it even makes sense conceptually: if we have 8/16/etc-bit integers as Num instances, why not make Bool a 1-bit integer? That would be bad from a UX point of view—treating a Bool value as a number is almost definitely a programming mistake, and if it’s intentional you can use the fromEnum function to make it explicit—but it would be conceptually coherent.

[9] Type error localization is an active area of research, as is the quality of compiler error messages more broadly. David Binder pointed this research out to me on Discourse, including additional links and context. Some of the research approaches are promising and seem to work well in practice, but have heavyweight dependencies: for example, one promising approach requires solving a MaxSMT problem to find the “best” error location. That works well, but do we really want our compiler to depend on an SMT solver with cutting-edge capabilities just for better error messages?
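As a practical aside, the deferred-type-errors and reduced-context behavior mentioned in the footnotes can be enabled for every ghci session via a ~/.ghci file. A sketch (the second flag requires GHC 9.8 or later):

```
-- Turn type errors into -Wdeferred-type-errors warnings so the rest
-- of the module still loads:
:set -fdefer-type-errors
-- Hide the verbose "In the expression ..." context lines:
:set -fno-show-error-context
```

The same flags can be passed to ghc directly or set per-project in a cabal or stack configuration.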
2024-11-07T22:30:43
en
train
42,023,214
Anon84
2024-11-02T01:07:58
Language Models Learn to Mislead Humans via RLHF
null
https://arxiv.org/abs/2409.12822
3
1
[ 42023817 ]
null
null
no_error
Language Models Learn to Mislead Humans via RLHF
null
[Submitted on 19 Sep 2024 (v1), last revised 25 Sep 2024 (this version, v2)]
Abstract: Language models (LMs) can produce errors that are hard to detect for humans, especially when the task is complex. RLHF, the most popular post-training method, may exacerbate this problem: to achieve higher rewards, LMs might get better at convincing humans that they are right even when they are wrong. We study this phenomenon under a standard RLHF pipeline, calling it "U-SOPHISTRY" since it is Unintended by model developers. Specifically, we ask time-constrained (e.g., 3-10 minutes) human subjects to evaluate the correctness of model outputs and calculate humans' accuracy against gold labels. On a question-answering task (QuALITY) and programming task (APPS), RLHF makes LMs better at convincing our subjects but not at completing the task correctly. RLHF also makes the model harder to evaluate: our subjects' false positive rate increases by 24.1% on QuALITY and 18.3% on APPS. Finally, we show that probing, a state-of-the-art approach for detecting Intended Sophistry (e.g. backdoored LMs), does not generalize to U-SOPHISTRY. Our results highlight an important failure mode of RLHF and call for more research in assisting humans to align them.

Submission history
From: Jiaxin Wen
[v1] Thu, 19 Sep 2024 14:50:34 UTC (5,314 KB)
[v2] Wed, 25 Sep 2024 00:32:31 UTC (5,314 KB)
2024-11-07T14:53:56
en
train
42,023,220
chmaynard
2024-11-02T01:09:10
Floating point: Everything old is new again
null
https://www.johndcook.com/blog/2024/11/01/floating-point/
4
0
null
null
null
null
null
null
null
null
null
null
train
42,023,237
dmd
2024-11-02T01:11:29
Show HN: Midnight Reminders via Morse Code
null
https://github.com/dmd/morse
130
53
[ 42030442, 42027629, 42026005, 42028004, 42031284, 42028546, 42032873, 42031193, 42030882, 42027304, 42026602, 42030869, 42023495 ]
null
null
null
null
null
null
null
null
null
train
42,023,239
turtlegrids
2024-11-02T01:11:59
Meta is using more than 100k Nvidia H100 AI GPUs to train Llama-4
null
https://www.tomshardware.com/tech-industry/artificial-intelligence/meta-is-using-more-than-100-000-nvidia-h100-ai-gpus-to-train-llama-4-mark-zuckerberg-says-that-llama-4-is-being-trained-on-a-cluster-bigger-than-anything-that-ive-seen
8
1
[ 42023861, 42025941 ]
null
null
no_error
Meta is using more than 100,000 Nvidia H100 AI GPUs to train Llama-4 — Mark Zuckerberg says that Llama 4 is being trained on a cluster “bigger than anything that I’ve seen”
2024-10-31T16:44:58+00:00
Jowi Morales
(Image credit: CNET/YouTube)

Mark Zuckerberg said on a Meta earnings call earlier this week that the company is training Llama 4 models “on a cluster that is bigger than 100,000 H100 AI GPUs, or bigger than anything that I’ve seen reported for what others are doing.” While the Facebook founder didn’t give any details on what Llama 4 could do, Wired quoted Zuckerberg referring to Llama 4 as having “new modalities,” “stronger reasoning,” and “much faster.” This is a crucial development as Meta competes against other tech giants like Microsoft, Google, and Musk’s xAI to develop the next generation of AI LLMs.

Meta isn’t the first company to have an AI training cluster with 100,000 Nvidia H100 GPUs. Elon Musk fired up a similarly sized cluster in late July, calling it a ‘Gigafactory of Compute’ with plans to double its size to 200,000 AI GPUs. However, Meta stated earlier this year that it expects to have over half a million H100-equivalent AI GPUs by the end of 2024, so it likely already has a significant number of AI GPUs running for training Llama 4.

Meta’s Llama 4 is taking a unique approach to developing AI, as it releases its Llama models entirely for free, allowing other researchers, companies, and organizations to build upon it. This differs from other models like OpenAI’s GPT-4o and Google’s Gemini, which are only accessible via an API. However, the company still places limitations on Llama’s license, like restricting its commercial use and not offering any information on how it was trained. Nevertheless, its “open source” nature could help it dominate the future of AI — we’ve seen this with Chinese AI models built off open-source code that could match GPT-4o and Llama-3 in benchmark tests.

Power consumption concerns

All this computing power results in a massive power demand, especially as a single modern AI GPU could use up to 3.7MWh of power annually.
That means a 100,000 AI GPU cluster would use at least 370GWh annually — enough to power over 34,000 average American households. This raises concerns about how these companies could find such massive supplies, especially as bringing new power sources online takes time. After all, even Zuckerberg himself said that power constraints will limit AI growth.

For example, Elon Musk used several large mobile power generators to power his 100,000-strong compute in Memphis. Google has been slipping behind its carbon targets, increasing its greenhouse gas emissions by 48% since 2019. Even the former Google CEO suggested we should drop our climate goals, let AI companies go full tilt, and then use the AI technologies we’ve developed to solve the climate crisis.

However, Meta executives dodged the question when an analyst asked them how the company was able to power such a massive computing cluster. On the other hand, Meta’s AI competitors, like Microsoft, Google, Oracle, and Amazon, are jumping on the nuclear bandwagon. They’re either investing in small modular reactors or restarting old nuclear plants to ensure they will have enough electricity to power their future developments.

While these will take time to develop and deploy, giving AI data centers their small nuclear plants would help reduce the burden of these power-hungry clusters on the national power grid.
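The energy figures above are easy to sanity-check. A quick sketch, taking the article's 3.7 MWh/year per GPU at face value and assuming roughly 10.8 MWh average annual consumption for a U.S. household (the household figure is my assumption, a common EIA ballpark, not from the article):

```python
# Annual energy for a 100,000-GPU cluster at 3.7 MWh per GPU per year.
gpus = 100_000
mwh_per_gpu_year = 3.7

cluster_mwh = gpus * mwh_per_gpu_year  # ~370,000 MWh
cluster_gwh = cluster_mwh / 1_000      # ~370 GWh, matching the article

# Assumed average U.S. household consumption: ~10.8 MWh/year.
household_mwh_year = 10.8
households = cluster_mwh / household_mwh_year

print(f"{cluster_gwh:.0f} GWh/year ≈ {households:,.0f} households")
```

Under these assumptions the cluster's annual draw corresponds to tens of thousands of average households, so the comparison is sensitive to which per-household figure one uses.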
2024-11-07T13:54:09
en
train
42,023,241
car
2024-11-02T01:12:00
Ghost Nonprofits and the Manufacturing of Virtue
null
https://anandsanwal.me/ghost-nonprofits-manufacturing-virtue/
3
2
[ 42023687, 42025249 ]
null
null
null
null
null
null
null
null
null
train
42,023,290
matt_heyqq
2024-11-02T01:20:07
Show HN: Brand and Marketing Audit with AI Agents
null
https://www.branding5.com/brand-audit
1
0
null
null
null
null
null
null
null
null
null
null
train
42,023,312
colinprince
2024-11-02T01:25:30
Training and Diet Are Simple Because Your Body Is Complex
null
https://www.strongerbyscience.com/training-diet-simple-body-complex/
4
0
null
null
null
null
null
null
null
null
null
null
train
42,023,319
dirien
2024-11-02T01:26:29
How Secrets Sprawl Is Slowing You Down – and What to Do About It
null
https://www.pulumi.com/blog/how-secrets-sprawl-is-slowing-you-down/
3
1
[ 42023320 ]
null
null
null
null
null
null
null
null
null
train
42,023,334
yeknoda
2024-11-02T01:29:28
Relativity Space Faces Cash Drain, Exploring Options
null
https://www.bloomberg.com/news/articles/2024-11-01/relativity-space-is-said-to-face-cash-drain-exploring-options
2
1
[ 42023700 ]
null
null
null
null
null
null
null
null
null
train
42,023,342
PaulHoule
2024-11-02T01:30:39
Closing the nutrient cycle: Municipal solid waste in (peri-)urban agriculture
null
https://www.sciencedirect.com/science/article/pii/S0956053X24002952
3
0
null
null
null
null
null
null
null
null
null
null
train
42,023,354
FinnKuhn
2024-11-02T01:33:21
Downloading Images from US Military Satellites [video]
null
https://www.youtube.com/watch?v=ReHYn7llzy4
4
0
null
null
null
null
null
null
null
null
null
null
train
42,023,369
hn_acker
2024-11-02T01:37:11
David Clements: The Evangelist of Election Refusal
null
https://www.lawfaremedia.org/article/david-clements--the-evangelist-of-election-refusal
2
0
[ 42025936 ]
null
null
missing_parsing
David Clements: The Evangelist of Election Refusal
null
Benjamin Wittes
In the final moments of the training exercise, dozens of people surrounded a woman who was shouting about defective voting machines. They were shielding her from a sheriff’s deputy—or, rather, a man assigned to play the role of a sheriff’s deputy—who was trying to evict her from the faux event.

The trainees had been instructed that American elections are rigged and that they are battling a “spiritual war” against election fraud. They had been told that they could resist “tyranny” by showing up en masse to pressure local officials to withhold certification of voting machines or election results.

And at this climactic moment, at the direction of a former business law professor named David Clements, they were role-playing large-scale civil disobedience at a local elections meeting, crowding around a fellow comrade-in-arms to physically block law enforcement from removing her from a public meeting at which she was filibustering. Later, Clements assured trainees that this is the only way to fight back: “You have to create a righteous, sober-minded, well-spoken, articulate mob, if you will, because that’s the only thing that will work short of where we’re headed, which is a kinetic civil war—if we don’t get this resolved peacefully.”

This scene unfolded during a day-long “election integrity” training event held in September inside the worship center at Grace Covenant church in Hogansville, Georgia, a small town in the westernmost part of the state. But thousands of people across the country have attended similar training sessions hosted by the former professor. The traveling event series has been billed as the “Gideon 300” tour—a reference to the biblical story of 300 men who faced an army of 135,000 and won.
Clements has described the “Gideon 300” project as an effort to mobilize 300 or more “warriors” in each county in the United States, meaning people who are willing to show up in large numbers at local elections meetings to speak against certification and who aren’t afraid “to die” or “to be arrested.” These “warriors,” Clements has said, must demand that local officials withhold certification of voting machines or election results. “Gideon 300” trainings typically involve a simulation of a county election board meeting, in which Clements demonstrates what the crowd should do if local officials won’t listen: hijack the public meeting by physically occupying the space, getting control of the microphone, and not giving it up based on what he believes to be “arbitrary” time limits put on speakers. The “Gideon 300” tour is not the first time Clements has criss-crossed the country to evangelize about purported election fraud. He set out to persuade local officials to refuse certification long before the upcoming election, as the Washington Post and Reuters reported in 2022. Yet his recent activities have largely escaped notice in the lead-up to the Nov. 5 presidential election, even as commentators have sounded the alarm about the prospect that county officials in Georgia or elsewhere might refuse to certify the election. Withholding certification of election results at the local level is not lawful, and it is unlikely to work as a means of preventing the winner from taking office. 
That said, it could have destructive effects, sparking post-election chaos, misinformation, and possibly violence.

Lawfare reviewed dozens of photos, videos, and audio recordings of Clements as he has traveled from town to town across the country, simulating election certification meetings at which scores of people confront local officials and pressure them to withhold certification.

Last month, Lawfare also attended one of Clements’s trainings in Hogansville, where the charismatic lawyer used a combination of religion and conspiracy theories to promote lawlessness ahead of the upcoming election—lawlessness both on the part of the election officials whom Clements wants to refuse to certify results, and on the part of the “mob” he is training to pressure them.

In a lengthy interview on Oct. 29, Clements denied that he is encouraging lawlessness, much less violence. He insisted that local officials do, in fact, have the authority to withhold certification in the presence of suspected fraud; that all of the tactics he is teaching at his trainings are First Amendment-protected speech; that natural law theory countenances violations of mere state law to the extent that the latter infringes on fundamental human rights; that the risk of violence by police against him and other protesters far exceeds the risk of violence by the people he is training; and that police violence justifies uses of force by protesters.

“Here we are six days from the most consequential election of my lifetime,” Clements said during the interview. “And everyone’s preparing themselves for what could be a very, very kinetic situation in the months leading up to certification. And all of my efforts have not been to escalate rhetoric.
It’s actually to diffuse, talk, use our words.”

But while insisting on their legality, Clements also acknowledged that his tactics might push the line in some instances and are, among other things, designed to set up challenges to restrictions on speech.

“In order to actually challenge these things in a court of law, you have to test the limits of where does the First Amendment start and stop,” he argued, “in order to communicate, I think, these concepts that have to be communicated.”

Clements agreed to speak on the condition that the interview be recorded and released in its entirety, noting by email in response to Lawfare’s request that, “Thus far, USA Today, Washington Post, NBC, and many others have been instructed not to go on record with me for a fully transparent interview.”

Full audio of the nearly-two hour interview is available here:

“Civil disobedience right up to the edge”

Clements’s September event in Hogansville was not a one-shot deal. It was part of a string of such events across major swing states and elsewhere, all focused on the same thing: getting local election officials to refuse to certify voting machines or election results. Since Jan. 1, 2024, Clements has brought his “Gideon 300” events to at least 40 counties across more than a dozen states, including counties in swing states such as Michigan, Pennsylvania, North Carolina, and Georgia, according to a Lawfare review of events advertised on public social media posts. But he began evangelizing against local certification processes long before 2024. As early as July 2022, an NPR analysis reported that he had appeared at 62 events across 25 states since Jan. 6, 2021. Asked about how many such events he has done, Clements said he has lost track because he has done so many of them.

Caption: Clements posts a photo of a “Gideon 300” training in the swing state of North Carolina. Source: Facebook/David Clements.
Caption: On Telegram, Clements shares a photo of a Gideon 300 training he conducted in Crawford County, Michigan. Source: Telegram/David Clements.

Clements secured his first major success as an election fraud agitator in 2022, when he persuaded commissioners in rural Otero County, New Mexico, to withhold certification of the state’s June primary results, citing alleged defects with the voting machines there. The victory was short-lived, with two commissioners who had voted not to certify later changing course after the state’s Supreme Court intervened and ordered the commissioners to fulfill their statutory duty to do so. But it solidified Clements’s reputation as a leading “election integrity” activist.

His rising profile within the election denial movement has coincided with unprecedented efforts to delay or obstruct certification at the local level. A recent report by Citizens for Responsibility and Ethics in Washington (CREW) identified more than 30 examples of county officials who, since 2020, have voted to deny or delay certifying elections based on false claims of voter fraud or irregularities. According to the report, some of those officials include “avowed 2020 election deniers” and “individuals who acted as fake presidential electors for Donald Trump.”

Nowhere has the politicization of certification processes been more acutely felt than in Georgia. Back in August, a Trump-aligned majority of the State Election Board sparked controversy when it approved rule changes related to the certification process at the county level. The most controversial of those redefined what it means to “certify” an election, adding a new requirement that county boards conduct an undefined “reasonable inquiry” before certifying results. Critics of the rule worried that it could be used as a pretext to delay or outright obstruct certification of Georgia’s election results.
Doing so would be unlawful, at least as the law has been articulated in decades of case law, and the rule changes were recently enjoined by order of Judge Thomas Cox, meaning that they will not go into effect before the 2024 election. Still, the controversy raised concerns in Georgia that local officials who do not like the results of the election might delay or outright refuse certification as a kind of protest against purported election fraud.

Even before the State Election Board rule changes, county-level certification disputes had become an increasingly contentious issue in Georgia. According to an Atlanta-Journal Constitution survey, at least 19 election board members across nine Georgia counties have objected to certifying elections during the past four years. In most instances, officials who refused to certify did not cite any specific instances of wrongdoing or outcome-determinative fraud. Instead, they pointed to lack of confidence in voting machines, the need to review additional election-related documentation, or generalized concerns about ballot drop-boxes.

Clements has given speeches or conducted trainings in at least six counties in Georgia over the course of the past three years: Cherokee, Fulton, Chatham, Forsyth, Fayette, and Troup. His appearances in the Peach State have drawn sizable crowds. In March, he packed a theater in Roswell, Georgia, for a film screening and “Gideon 300” training. Before that, in September 2022, more than 100 people showed up to hear him address the county commission in Cherokee County, where he elicited applause after he demanded that officials refuse to certify future elections.

Caption: Clements packs a theater in Roswell, Georgia, for a “Gideon 300” training. Source: Facebook/David Clements.

The day after his appearance in Cherokee County, Clements was in Forsyth County, where he urged the audience to pressure local election officials.
“There needs to be a point where we have that ‘emperor has no clothes’ moment, where when we show up in mass—this whole group shows up—and that they are undone, that they are naked before you. And the gig that they thought was sweet is a curse. That’s the attitude,” he said. “And I’m talking about we take civil disobedience right up to the edge, because we have to.”

Asked in his interview with Lawfare about his impact on the policymaking process in Georgia, Clements said that he has not knowingly had direct contact with two specific election officials in the state who have spearheaded certification refusals, and he said that he doesn’t know whether any policymakers have attended his Gideon 300 trainings. But he did not exclude the possibility that election officials in Georgia—either locally or on the State Election Board—have been influenced by his advocacy. And he’s clearly delighted by the direction the state has gone.

“What I love about what’s going on in Georgia ... is that you’ll notice that none of it even requires a show of force because there’s a partnership between the election officials working in concert with their constituents,” he said. “And I think that’s much healthier.”

But Clements is preparing for the less healthy options too.

“Ain’t no devil gonna tread on me”

The parking lot was nearly full on Sept. 21 at Grace Covenant, a modest church nestled atop a swath of green pasture just off Highway 85 in Hogansville. The tiny town of Hogansville—population: 3,227—is in Troup County, a region in the westernmost part of the state that shares a border with Alabama. Despite the small size of both the church and the town, and despite its being a warm Saturday afternoon during football season in the South, Clements had attracted a sizable crowd.
Inside the sparsely decorated sanctuary hall, more than 80 people were gathered to hear him espouse the gospel of election fraud.

Two of those people were notables: Garland and Tamara Favorito, a married couple who run an election integrity nonprofit called VoterGA. VoterGA has become increasingly influential in Georgia since 2020, when it got involved in efforts to prove that the presidential election had been stolen. As CNN recently reported, the group pushed for several of the recent rule changes by the State Election Board and has a history of pushing debunked election misinformation. The couple also have a history with Clements, having hosted an event with him in Fayetteville last year. So it was no surprise to see them at his “Gideon 300” training in Hogansville.

Caption: Garland Favorito and Tamara Favorito at an event with David Clements in Fayetteville, Georgia, last year. Source: Facebook/Holly Michelle Kesler.

The event kicked off with worship music. “We’re all here because we believe in Jesus,” said the six-person band’s frontman, a young man in flip flops and cargo shorts. “We’re especially here at this event because we’re trying to fight back. So, this is a little bit of a fight song.” Audience members rose to their feet, clapping along as the band began to play. “He’s choking on the blood that ran down the tree,” they sang in unison. “Ain’t no devil gonna tread on me.”

The event’s organizer, a woman named April Loftin, climbed onto the dais to deliver an opening prayer. Loftin, a hair stylist, made an unsuccessful bid for a seat on the Troup County commission last spring. Her campaign advocated for “complete removal of Dominion voting systems in Troup County and the state of GA.” Now, praying aloud in the worship space at Grace Covenant, she asked God to save the nation. “We come to you as a new Gideon’s Army,” she said, her voice rising.
“Defeat the enemy that is destroying our country,” she pleaded.

As people in the crowd shouted “amen,” Clements sprang to his feet and joined Loftin on the dais, eliciting a round of raucous applause. Clements, who has called himself an “unemployed professor,” certainly looked the part: skinny jeans, rumpled green blazer, mop of graying hair. And he is, in fact, an attorney and former law professor. He described his professional history in the Lawfare interview as that of a former prosecutor who has presented scientific evidence and expert witnesses before courts. He said he also developed both “expertise” in election law and a good generalist knowledge of voting systems, even though he had no training or experience related to election systems prior to 2021.

In the church that evening, however, Clements’s rhetorical style was more pastoral than professorial or lawyerly. In his podcast appearances and public speeches, he tends to emulate some of the most influential preachers in far-right American evangelicalism, portraying the world of election systems as a matter of good versus evil. Much of his rhetoric mixes religious faith with conspiratorial thinking, political resentment, grievance-based nationalism, and apocalyptic eschatology.

That rhetoric was certainly on display in Hogansville, where Clements compared the use of voting machines to slavery. “Whether you want to admit it or not, legal violence is going to be committed against you through using these enslavement devices,” he said. He told the audience that it is time to decide whether they will succumb to tyranny. “We are in a spiritual war,” he said. “My hope for today is that you’re gonna feel more solid, more grounded, more connected to your community.
And you’ll have a prescription to finally fight back.”

Clements’s account of how we got here—and how he got here—is told through a documentary called “Let My People Go,” which he co-produced with Mike Lindell, the MyPillow.com CEO, who is a prominent election conspiracy theorist. Introducing the film at Grace Covenant, Clements explained that it premiered last December. On that same day, he said, a jury in Washington, D.C., returned a $148 million damages verdict against Rudy Giuliani “for telling the truth about two particular election workers in Georgia by the name of Ruby Freeman and Shaye Moss.”

He was alluding to the defamation suit Freeman and Moss brought against Giuliani, who falsely accused the women of committing election fraud during the 2020 election. Giuliani’s lies about the election workers resulted in a cascade of death threats and harassment against them. And in the defamation litigation, Giuliani did not contest that the statements were false and defamatory and caused damage. But in Hogansville, Clements’s insistence that Giuliani was telling the truth found a receptive audience. When he told the crowd that Freeman and Moss “were part of a much larger scheme to defraud you of your voices,” a woman in the audience piped up: “Amen!”

It was a prelude of what was to come in “Let My People Go.” In the film, Clements and a parade of what he terms “experts” assert a potpourri of false or misleading claims about widespread electoral fraud, from ballot stuffing to vote flipping to malicious algorithms manipulated by voting machine companies. The film sets out each of these claims in support of a broader narrative thread: The 2020 election was stolen from Trump; Joe Biden is an illegitimate “usurper”; individuals who participated in the Jan. 6 attack were set up by the government and have been wrongly imprisoned as a result; and Americans in every county must fight back to abolish the machines.
In one scene, Clements addresses a group of people assembled at a ranch in Missouri for the “Second Annual J6 Family Retreat.” He tells them that “the J6ers” should be pardoned. The only “J6ers” who don’t deserve pardons, he says, are the “unindicted fed co-conspirators,” who “should be tried for treason.” In the audience at Grace Covenant, several people nodded their heads in agreement at this. A woman seated on the front row raised a wad of tissues to her face, wiping away tears.

“Let My People Go” is a film about alleged election fraud, but it’s also a film about Clements, who narrates the story of his conversion from award-winning professor to traveling election fraud evangelist. Describing himself as a “trailer park kid who came from nothing,” Clements recalls his time teaching business law at New Mexico State University, where he taught until 2021. That fall, Clements says he was fired from his job over his refusal to comply with the school’s COVID-19 safety requirements, which mandated that faculty get vaccinations or regular tests. Out of work and facing multiple professional bar complaints before the New Mexico Supreme Court, Clements began podcasting out of his garage, interviewing supposed “experts” and “fact witnesses” about America’s “clearly rigged” elections. Soon enough, Clements and his wife, Erin, were traveling around the country to “shout from the rooftops” about the “stolen election.” By Clements’s estimation in the film, the two have delivered more than 200 “evidentiary presentations” about election fraud across 47 states.

Throughout the film, Clements portrays himself as a kind of modern-day Job, a man who has lost everything and suffered much yet who remains steadfast in his faith—whether that faith be in God or widespread election fraud in the United States.
He makes a point of giving the cameras a tour of the modest home he shares with his wife and three children in New Mexico, where he shows viewers the broken window on his old Buick and the rickety ceiling fan in his bedroom and the holes in the blazers hanging in his closet.

But for all that Clements reveals about himself in the film, he omits important details too. He speaks of rigged elections, but he leaves out his own bitter personal experience running for political office, having once made a failed bid in the 2014 Republican primary for U.S. Senate in New Mexico—a loss he blamed at the time on his political rival’s campaign manager, alleging that the manager hacked his email and sent supporters messages to drive them away. The campaign manager sued for defamation, and the two later settled the suit. In his interview with Lawfare, Clements described the experience as, in retrospect, a formative one with respect to voting equipment. Sudden shifts in vote tallies with new Dominion machines, he said, seemed weird to him at the time, but he brushed the concerns aside because he believed in the system. Only after 2020 did he look back and wonder if his own election was rigged.

Dominion has repeatedly denied that its machines were manipulated during the 2020 election, and there is no credible evidence of widespread irregularities or “flipped” votes for Joe Biden. Last year, the company brought a defamation suit against Fox News, alleging that the conservative TV network aired falsehoods about the voting machine company, including claims similar to those promoted by Clements. Fox settled the suit for a staggering figure of $787.5 million after a judge granted partial summary judgment to Dominion on the issue of falsity, writing that “the evidence developed in this civil proceeding demonstrates that [it] is CRYSTAL clear that none of the statements relating to Dominion about the 2020 election are true.”

Clements sports other contradictions too.
He presents himself in the film as a peaceful, God-fearing Christian, but he doesn’t mention that he has argued that Dominion Voting Systems executives should be tried for treason and executed by firing squad or hangings. Asked about these comments during his Lawfare interview, Clements insisted that he has not called for their killing, merely “pointed out the penalty” for the crime of treason. He said he does not believe that all “garden variety election workers” have engaged in treasonous conduct warranting execution. But he maintained that some voting system executives, such as Dominion Voting Systems CEO John Poulos, should be put to death after being afforded due process and following the return of a guilty verdict for treason. “In my legal opinion, if I were to bring this before a military tribunal or otherwise, I would make a closing argument that this person is deserving of whatever penalty is attendant to treason,” he said. “And I make no apologies for that.”

In response to Lawfare’s request for comment about Clements’s remarks, Dominion said allegations that its employees have tried to interfere with any election are completely false. “This is yet another example of how lies about Dominion have damaged our company, subjected officials and Dominion employees to harassment, and baselessly diminished the public's faith in elections,” a spokesperson for the company said in a statement.

“Dominion's certified systems remain secure, and we are confident in the security of future elections. We strongly encourage people to rely upon verified, credible sources of election information—sources that can explain the many layers of physical, operational, and technical safeguards that exist to protect the integrity of our elections, including use of paper ballots for auditing and recounts. We remain fully prepared to defend our company and our customers against lies and to seek accountability from those who spread them."
Caption: On Twitter, now X, Clements says that Dominion Voting Systems employees have committed treason and should face the “legally mandated penalty of a rope.” The post has garnered more than 2,000 “likes.”

Though Clements suggests in the film that he has fallen on hard times since he lost his job as a law professor, he is not without forms of financial support. A crowdfunding campaign to support him after he lost his professorship has raised more than $300,000 since it was created in August 2021—a sum he described in his interview with Lawfare as “more money in the bank than I’ve ever had in my entire life.” He said he uses this money only to “advocate for the J6ers” and to “get to the bottom of what happened with 2020.”

Nor does “Let My People Go” acknowledge that Clements has achieved something most academics never do: proximity to fame and power. Within months of his introduction to the election denial scene, he had appeared on Tucker Carlson, interviewed Sidney Powell on his Rumble channel, and posted a photo of himself with Trump after they dined together at Bedminster, where Clements says the former president asked him for his opinion on the “legal environment” surrounding 2020 election litigation. These days, Clements is a MAGA celebrity in his own right, making regular appearances on the popular “Conservative Daily Podcast” and having amassed more than 65,000 subscribers on his Telegram channel.

Caption: David Clements posts a photo with Trump in Bedminster, New Jersey. Clements told Lawfare that Trump asked for his opinion on the “legal environment” surrounding litigation claiming that the 2020 election had been affected by fraud. The meeting between Clements and Trump in August 2021 occurred more than five months after Joe Biden’s inauguration as the 46th president of the United States.
“Let My People Go” concludes with what is supposed to be a rousing call to action by Clements, who urges viewers to show up at local meetings and “surround” election workers. “Election workers, canvassing boards, clerks that have broken their trust with you, you will surround them. Can you find 300 of God’s warriors surrounding the 10 feckless usurpers?” he asks. “Do not die on the altar of civility,” he commands. “Become an abolitionist.” The lights came on inside the worship hall as the credits began to roll upward on the screen in Hogansville. The first name to appear was a familiar one: Ashli Babbitt, the woman who was fatally shot by law enforcement during the Jan. 6 attack on the Capitol. “Let My People Go,” as it turned out, had swapped out the usual closing credits for a list of people who were indicted, incarcerated, or killed for participating in the Jan. 6 attack.

While the credits rolled, Clements invited the audience to rise for a standing ovation. And they did so enthusiastically. They applauded for Stewart Rhodes, the leader of the far-right Oath Keepers, who has been convicted of seditious conspiracy; for Julian Khater, who pleaded guilty after assaulting three police officers with pepper spray, including an officer who died the next day after suffering two strokes; and for Guy Reffitt, who stormed the Capitol on Jan. 6 with a .40-caliber pistol on his belt—all of whose names scrolled up the screen.

“Gideon 300”

The standing ovation for people who have been accused or convicted of crimes committed during the Jan. 6 attack on the nation’s Capitol subsided after about 10 minutes.

By this point, nearly three hours had passed. The crowd was noticeably thinner, but more than 50 people remained.
Over the past few hours, they had watched “Let My People Go” with rapt attention, gasping with outrage or shaking their heads in disgust at each allegation of rigged voting machines or widespread fraud—allegations that have elsewhere been shown to be false, misleading, or baseless. It was now time for Clements to tell the crowd what they can do about it all—in November and beyond. Pacing in front of an Appeal to Heaven flag strung up on the altar behind him, Clements encouraged his audience to confront local election officials about their “maladministration” and the use of “defective” machines. “You have grounds to say, ‘Board, you better not certify this process, this vote, or those machines,’” he said. The key to achieving victories, he explained further, is to show up “in mass, with numbers” at county elections meetings.

To demonstrate how it will work in practice, Clements simulated a local board elections meeting in Troup County, the region where Hogansville is located. He selected two women—one clad in an American flag button-down, the other in a “Trump Girl!” T-shirt—to play the role of elections officials. A man near the door was assigned to act as a stand-in for the sheriff’s deputy. Then Clements scanned the crowd for another volunteer. “I need one great soul who really hates these election devices to come up,” he said. A woman waved her hand in the air, eager to be cast as a concerned citizen who despises voting machines. She got the part.

During the role-play exercise, the volunteer defiantly told the pretend election officials to get rid of voting machines. “We demand paper ballots. Your machines are garbage,” she shouted. Clements, assuming the role of the elections board chairman, announced that her allotted time for public comment was up. “Sheriff’s deputy, please come up and remove her,” he instructed.
The man assigned to the role of sheriff’s deputy grabbed the volunteer by the arm, simulating her removal from the meeting.

“Let’s try something different,” Clements said, breaking the fourth wall. “I want everyone here, except the sheriff’s deputy, to stand.” As dozens of people in the crowd rose to their feet, he encouraged them to surround the volunteer, forming a wall of people between her and the sheriff’s deputy. “Come around, come around, fill up this whole way,” he said. Clements looked approvingly at the mass of people standing shoulder-to-shoulder before him. “Now, I hope that you guys are getting the lightbulb on right now that the power dynamic in this room changed instantly,” he said. “Look at where [the volunteer] is and look at that poor deputy, with a giant wall of people between them. You think he’s going to be in any rush?”

Attendees shouted in response: “No, no!”

Caption: David Clements conducts a “Gideon 300” training in Hogansville, Georgia, on Sept. 21, 2024.

Clements made a point of telling his audience that the “Gideon 300” strategy they had just role-played is consistent with lawful, peaceful protest under the First Amendment. But he did not mention a critical fact: that efforts to physically block a police officer from removing a speaker at a public meeting may well amount to the crime of obstructing or hindering a law enforcement officer under Georgia law, potentially resulting in arrest or prosecution.

Asked later in his interview with Lawfare about whether his trainings encourage people to physically impede the police during local elections meetings, Clements said, “there’s no blocking of law enforcement.” Instead, he described the simulation exercise as an effort to demonstrate to trainees how they can surround a public speaker en masse to disincentivize the police from attempting to make a removal or arrest in the first place. “There’s a positioning where they actually stand, where they stand before someone approaches,” he said.
“And there’s not an easy access point. What you’re going to find in most of those cases is the sheriff's deputy doesn’t even attempt to engage with the person.” At Grace Covenant, however, not every trainee seemed to understand this fine distinction. At one point, Tamara Favorito rose to her feet. “In Georgia, we are practicing Gideon 300,” she announced. But she said the role-playing exercise helped her understand that there was something missing from their training. Tamara pointed to a 2023 incident in Chatham County, where a woman was forcibly removed by police after she ignored orders to stop speaking during a board of elections meeting. The sheriff’s deputies “pulled her out like an animal,” according to Tamara. “Just one woman moved toward her, and one of the deputies pushed her out of the way,” she continued. Tamara said she realizes now that one person moving wasn’t enough. “I think if we had trained people to do that and move quickly, if everybody that was there had moved quickly, they would not have been able to haul her out of there,” she said.

Neither Tamara nor Garland Favorito responded to a request for comment. In the past, Clements has acknowledged that his advocacy strategy involves potentially unlawful tactics—or, at least, tactics that will be treated as such. “You have to be willing to be arrested,” he said during an appearance on the “All Politics Is Local” podcast earlier this year. “You have to be willing to be jailed over this. This is like an abolitionist movement.”

In his interview with Lawfare, Clements used a number of different approaches to reconcile such comments—which seem to recognize that arrest is a possibility—with his claim that his tactics are all lawful. He argued that the tactics are all just peaceful speech seeking redress of grievance. He also argued that to the extent they may violate the law, such arrests might be necessary to posture police actions for challenge. He argued sometimes that there’s a higher law at issue.
And he argued that what he is urging is no different from the civil rights movement. “I could say the same thing looking back at the civil rights movement in the 1960s, where you had African Americans fighting for equality of the vote. ... [T]hey were locking arms, getting sprayed by water cannons to posture and bring awareness to people to treat them equally.”

During the training exercise, Clements also assured attendees that county election officials have lawful grounds to withhold certification of election results. “It’s based in law, because they actually have a trust and responsibility to make sure that your vote is true and accurate,” he said.

This is a plainly inaccurate representation of the law, at least as the courts have authoritatively interpreted it over many decades. In Georgia and elsewhere, local election officials cannot lawfully refuse to certify election results. Courts have overwhelmingly held that local election boards have a “ministerial,” or mandatory, duty to certify. If an elections board withholds certification, courts can force it to certify by issuing a writ of mandamus. And in some jurisdictions, local officials could be subject to criminal sanctions if they refuse to certify election results. In 2023 in Cochise County, Arizona, for example, two election officials were indicted for election interference after they voted to delay certification past the statutory deadline. Earlier this month, one of the officials, Peggy Judd, pleaded guilty to a misdemeanor charge as a part of a plea deal.

Clements is aware of all this. He has griped on social media and podcast episodes about instances in which courts have forced officials to certify, and he has shared articles about the Cochise County indictments.
Yet in Hogansville, he stood before a church filled with non-lawyers, telling them that their local officials could lawfully withhold certification of the election and that they can defy and impede law enforcement in an effort to pressure those officials to do so. In his interview with Lawfare, Clements stressed that in very few states is the boards’ ministerial duty to certify actually statutory. In most states, it’s purely a matter of judge-made law, he argued. By contrast, he contended, the boards are violating statutes in any number of ways by certifying fraudulent election returns.

Indeed, when Clements describes an act as “lawful,” it’s not always clear which type of law he’s referring to—the laws of God, or the laws of man. As he explained to the audience in Hogansville, some of his thinking in this respect is rooted in a natural law theory called the “doctrine of lesser magistrates,” which claims that local elected officials—dubbed “lesser magistrates”—have a divine duty to oppose unjust or immoral laws imposed by higher government authorities. The doctrine is set out at length in a book by Matthew Trewhella, a pastor and anti-abortion activist who has become increasingly influential among some prominent conservatives. Michael Flynn, Trump’s one-time national security adviser, has said that Trewhella’s book on the doctrine is a “blueprint showing Americans how to successfully resist tyranny.”

The training in Hogansville was not the first time Clements has cited Trewhella’s work as a kind of theological justification for his fight against election fraud. He has invoked the natural law doctrine at other events, often when comparing his quest to get rid of voting machines to that of Civil War-era abolitionists who stood up against laws permitting slavery. On other occasions, he has compared his crusade to that of the anti-abortion movement.
“We need to have the same righteous indignation that your pro-lifers used to have when they used to bar the doors of an abortion clinic because they understood that right is right and wrong is wrong,” he said at an event in Cumming, Georgia, in 2022. “And you know something? I might get arrested. But innocent life and innocent blood is being shed. They see things in a righteous way. And you need to see these machines in a righteous way,” he said. The crowd replied in unison: “Amen!”

In Hogansville, Clements had another analogy in mind—one that seems to acknowledge that under earthly law, at least, he is urging people to put themselves in jeopardy. He likened contemporary efforts to investigate election fraud to rebellion against the British crown during the American Revolution. “If it’s just about legal arguments, the corrupt, satanic attorneys win, and we lose,” he told the audience. “The founders were in the exact same position. But no one wants to say out loud that the tree of liberty has to be replenished with the blood of the tyrant and the patriot,” he said.

“Why don’t I lead with that?” Clements asked aloud. “Because I don’t want articles saying ‘David Clements is asking for the blood of, you know, whoever. Then I’ll get arrested.’” The crowd roared with laughter.

Clements, in all likelihood, will not get arrested. The question is how many of the people he is training and assuring of the legality of the conduct he is urging will end up getting arrested—and how many election workers they will intimidate or coax into defying the law along the way.

In his interview with Lawfare, Clements seemed genuinely unconcerned about this point, and even about the possibility of stoking violence. In his answer, he returned to the government’s prosecution of the Jan. 6 defendants. “The amount of restraint that I’ve seen from these J6 families in light of [their persecution], they're not the ones that are calling for violence.
And in fact, if you look at J6 itself, you’ve got so many agitators that I firmly believe were unindicted fed co-conspirators,” he said. “So I’m much more heartbroken over what the government has done to American citizens and what they continue to do than [I am concerned about my] having the audacity to train people to show up to a public meeting and point out a defective process and a defective system.”

In Georgia, at least, a recent judicial decision could help blunt efforts to persuade county officials to withhold certification of the presidential election. Three weeks after Clements’s visit to Hogansville, Fulton County Superior Court Judge Robert McBurney issued a declaratory judgment affirming that certification of election results is mandatory under Georgia law. But Clements, sharing news about the decision on social media, remained undeterred: “There is always a choice to do the right thing,” he wrote to more than 25,000 followers on Twitter, now X. “Clerks and canvassing boards must not certify a fraudulent election even under threat.”
2024-11-08T17:48:46
null
train
42,023,372
hn_acker
2024-11-02T01:37:49
What Does It Mean to Ensure Election Integrity in 2024?
null
https://www.lawfaremedia.org/article/what-does-it-mean-to-ensure-election-integrity-in-2024
4
0
[ 42025928 ]
null
null
null
null
null
null
null
null
null
train
42,023,385
luu
2024-11-02T01:40:22
Storybits: Error Resistant Mnemonics
null
https://rya.nc/storybits.html
24
0
null
null
null
no_error
Storybits: Error Resistant Mnemonics
2017-12-13T18:32:00-08:00
Ryan Castellucci
Note: This post was written quite some time ago, but originally not published because I never could convince myself that the order insensitivity was actually useful. Publishing it now due to yet another person asking about it.

At DEFCON 22, Dan Kaminsky and I talked a little bit about something I built which he dubbed “Storybits[1]”. Storybits can reversibly transform short strings of binary data into a series of words designed to produce a mental image. Order of the words does not matter, and many typos can be corrected automatically. I already had working code at the time of that talk, but since then it’s just been sitting around on my computer. People have been asking about it, so I put it up on GitHub, though it’s still a hacky prototype. I’ve thrown together a demo and written a bit about how it works.

Human brains have enormous storage capacity, but they work very differently from computers. Let’s try a little experiment. I’m going to pick a random number between 0 and 2^13 and give you both the binary representation of that number and the word at that position in a list of common English words. I got “1101000110000” and “radius”. Which is easier to remember? They both represent the same information, but memorizing the binary would take some real effort, whereas a single word is no problem. Our memory works on “chunks”, not bits.

While we’ve got lots of long term memory, deliberately storing anything specific in it can be a challenge. Often people use mnemonics, such as “Please Excuse My Dear Aunt Sally”, to transform complicated information into a form that is easier to memorize and recall. Most often this is used to memorize lists of things, but it is possible to generalize the technique to other kinds of data.

When information is encrypted, a large number known as a key is used. The security of a key is based on the number of operations (work) that an attacker would need to do to find the correct key.
For anything even remotely secure, this value is so large that it’s meaningless to most people. For convenience, a log2 scale (bits) is used to talk about key security. Doubling the strength of a key adds one bit, increasing it a thousandfold adds about ten bits. For most threat models, 96 bits should be good enough[2]. We can use a trick called key stretching to make guessing up to about a million times slower, bringing that down to 76 bits. In human terms, this would be a random sequence of 23 decimal digits, 17 letters or 7 words (from a list of about 2000).

Many mnemonic systems have been proposed[3] and generating syntactically valid sentences is a really good idea, but implementation is far from trivial. Storybits is my iteration of that. Rather than actual sentences, it generates a series of adjective, noun, verb tuples. Building good wordlists for such a system is fairly tricky. I gathered up lists of common English words split up by type of speech, then filtered out words that were too long, too short or too similar[4]. Finally, Dan and I manually removed words that were semantically similar to other words, had high potential to combine offensively with other words or were hard to spell. We stopped somewhat arbitrarily at 256 words per list. Combined with some clever algorithms, we get a mnemonic passphrase system that handles typos and allows the words to be entered in any order.

How it works

Encoding

Start with an integer x in the range [0, m)[5] — cracking the encoded output will take the same amount of work as guessing x.

Example: Select m = 2^20 and randomly choose x = 901713.

Run a parameter search to find the smallest number of tuples that can represent m. The result will depend on the number of wordlists in use and how many words each contains.
The algorithms work with any number of wordlists of arbitrary size, but any given word cannot appear in multiple lists.

Example: Use three wordlists containing 11, 13 and 7 words respectively. The notation nCk refers to the number of ways k items can be chosen from a list of n items[6].

11C1 × 13C1 × 7C1 = 1001 < m
11C2 × 13C2 × 7C2 = 90090 < m
11C3 × 13C3 × 7C3 = 1651650 ≥ m

So three tuples are needed, nine words total.

Break down x into smaller numbers, each corresponding to a combination of words from one of the wordlists.

Example:
t = x = 901713
11C3 = 165; c1 = t mod 165 = 153; t = t div 165 = 5464
13C3 = 286; c2 = t mod 286 = 30; t = t div 286 = 19
7C3 = 35; c3 = t mod 35 = 19; t = t div 35 = 0
So c = (153, 30, 19).

Convert the combination number for each wordlist into a set of positions using a combinatorial number system.

Example:
t = c1 = 153
11C3 = 165 > t
10C3 = 120 ≤ t; p1 = 10; t = t − 120 = 33
9C2 = 36 > t
8C2 = 28 ≤ t; p2 = 8; t = t − 28 = 5
6C1 = 6 > t
5C1 = 5 ≤ t; p3 = 5; t = t − 5 = 0
So p = (10, 8, 5) for this wordlist.

Words taken based on those computed positions are grouped into tuples.

A canonical string representation of those word tuples is returned as the output passphrase.

Decoding

Start with a typed passphrase.

The typed passphrase is converted to lowercase and any character that’s not a letter or space is removed. Wherever possible typos are corrected and missing spaces are restored.

The words are converted into sets of positions for each wordlist using lookup tables.
These tables also have alternate spellings and verb tenses for many words.

The sets of positions are turned back into a number for each wordlist by reversing the combinatorial encoding.

Those numbers are combined back into the original integer.

Efficiency

The number of bits represented by w words from a set of n wordlists W1…Wn respectively containing |W1|…|Wn| words, where w is an integer multiple of n, can be computed as follows:

bits(w) = floor( log2( C(|W1|, w/n) × ⋯ × C(|Wn|, w/n) ) )

In other words, take the sum of the binary logarithms of the number of possible ways to choose w ÷ n words from each wordlist, rounding down. With the wordlists used in this demo, we get the following values:

Words:  3   6   9  12  15   21   30   36   51   60   90  141  288  369
Bits:  24  44  64  82  99  130  173  200  260  293  389  516  720  754

Notice that as the number of words increases we get diminishing returns on the number of bits represented — this is the price of order insensitivity.

Demo

Try generating a random passphrase (this demo encodes 80 bits), then re-type it in the passphrase box with typos and/or words out of order. The error correction algorithms need to precompute some data structures to work effectively, which is done in the background since it takes a few seconds. There will be a message in the box at the bottom of the form when it’s done. It can be used with limited error correction almost immediately. It’s pretty tolerant of sloppy typing.

Full error correction can turn “amswe3rforbadcablenachoxacidrusticvfetchdjacketstuffopenhackyriot” back into “macho acid answering   rustic cable fetching   stuffy jacket forbidding   wacky riot opening”.

What’s it good for?

The use cases Storybits was designed around are passphrases and public key fingerprints. It can be frustrating to type a long passphrase, especially in an application that uses key stretching, because if you make any errors, you have to type the whole thing over again. I think that error correction would go a long way towards mitigating that problem.
Public key fingerprints (which also include things like Bitcoin addresses and hidden service addresses) tend to be difficult to memorize and cumbersome to type and Storybits may be helpful for that as well. I’m not ready to recommend actually using it for anything yet — the wordlists still have room for improvement and usability studies should be done — but I’m interested to hear what people think of it.
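The encoding arithmetic described above, splitting x into per-wordlist combination numbers and expanding each into positions via the combinatorial number system, along with the word-count-to-bits formula, can be sketched in Python. This is an illustrative reimplementation, not the actual Storybits code; the wordlist sizes follow the worked example and the three 256-word lists mentioned earlier.

```python
from math import comb, log2, floor

def to_positions(c, n, k):
    """Expand combination number c into k distinct word positions < n
    (the combinatorial number system step)."""
    positions = []
    for kk in range(k, 0, -1):
        p = kk - 1
        while comb(p + 1, kk) <= c:   # largest p with C(p, kk) <= c
            p += 1
        positions.append(p)
        c -= comb(p, kk)
    return positions

def encode(x, list_sizes, k):
    """Split x into one combination number per wordlist, then expand
    each combination number into word positions."""
    out = []
    for n in list_sizes:
        x, c = divmod(x, comb(n, k))
        out.append(to_positions(c, n, k))
    return out

def bits(w, list_sizes):
    """Bits represented by w words drawn evenly from the wordlists."""
    k = w // len(list_sizes)
    return floor(sum(log2(comb(n, k)) for n in list_sizes))

# The worked example: the first combination number of x = 901713 with
# 11-, 13- and 7-word lists is 153, giving positions (10, 8, 5).
print(encode(901713, [11, 13, 7], 3)[0])  # [10, 8, 5]
# Three 256-word lists, nine words -> 64 bits, matching the table.
print(bits(9, [256, 256, 256]))           # 64
```

Note that `to_positions` is the generic form of the hand derivation shown for c1 = 153; running `bits` for other word counts reproduces the other table entries as well.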
2024-11-08T12:49:44
en
train
42,023,423
ValentineC
2024-11-02T01:47:11
Secure Custom Fields can't even steal the changelog entry properly
null
https://twitter.com/arunaswp/status/1851602348998639864
10
0
[ 42025896 ]
null
null
null
null
null
null
null
null
null
train
42,023,440
AnhTho_FR
2024-11-02T01:50:36
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,023,454
CalChris
2024-11-02T01:53:07
Tailscale Machine Explorer in VS Code
null
https://tailscale.com/blog/machine-explorer-vscode-extension
2
0
null
null
null
null
null
null
null
null
null
null
train
42,023,532
AnhTho_FR
2024-11-02T02:07:27
Oasis – Real-time AI world model
null
https://oasis.decart.ai/welcome
2
1
[ 42024496, 42023648 ]
null
null
null
null
null
null
null
null
null
train
42,023,554
shenli3514
2024-11-02T02:11:26
Perplexity and You.com: The AI Search Before ChatGPT Search
null
https://www.yourdomain.com/
3
1
[ 42024203 ]
null
null
missing_parsing
Free classifieds - yourdomain.com
null
null
Place free classified ads with photos. My Account Contact Privacy Terms 2024 © yourdomain.com
2024-11-08T14:48:02
null
train
42,023,595
Lord09
2024-11-02T02:20:40
null
null
null
1
null
[ 42023596 ]
null
true
null
null
null
null
null
null
null
train
42,023,634
dgfitz
2024-11-02T02:31:31
Ask HN: How to Buy Squatted Domains?
How does one go about buying a squatted domain? I realize domain names in this day and age aren’t super important, I am curious if knowledge persons have strategies or ideas as to how acquire a domain name that a.) you know has been squatted on for years and b.) cannot find any contact information associated with said domain.
null
13
3
[ 42025836, 42023666, 42024683 ]
null
null
null
null
null
null
null
null
null
train
42,023,641
obbutterfly
2024-11-02T02:34:51
The complete guide to integrating an OIDC server into your project
null
https://blog.logto.io/complete-guide-to-integrating-oidc-server
2
0
[ 42025937 ]
null
null
null
null
null
null
null
null
null
train
42,023,707
pbrowne011
2024-11-02T02:50:33
It's time for a modern synthesis kernel (2019)
null
https://blog.regehr.org/archives/1676
22
1
[ 42023713 ]
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
It’s Time for a Modern Synthesis Kernel – Embedded in Academia
null
regehr
Alexia Massalin’s 1992 PhD thesis has long been one of my favorites. It promotes the view that operating systems can be much more efficient than then-current operating systems via runtime code generation, lock-free synchronization, and fine-grained scheduling. In this piece we’ll only look at runtime code generation, which can be cleanly separated from the other aspects of this work.

Runtime Code Generation in Ring 0

The promise of kernel-mode runtime code generation is that we can have very fast, feature-rich operating systems by, for example, not including code implementing generic read() and write() system calls, but rather synthesizing code for these operations each time a file is opened. The idea is that at file open time, the OS has a lot of information that it can use to generate highly specialized code, eliding code paths that are provably not going to execute. Runtime code generation was a well-known idea in 1992, but it wasn’t used nearly as widely as it is today. In 2019, of course, just-in-time compilers are ubiquitous. However, operating system kernels still do not use runtime code generation very much, with a few exceptions such as:

- several OS kernels, including Linux, have a simple JIT compiler in their BPF implementation
- VMware used to use dynamic code generation to performantly virtualize OS kernels on x86 chips that lacked hardware virtualization extensions; I doubt that this is commonly used any longer
- pre-NT Windows kernels would dynamically generate bitblit code. I learned this in a talk by a VMware employee; this code generation was apparently a debugging issue for VMware since it would fight with their own runtime code generator. Some details can be found in this post. The old paper about the origins of this technique in the Xerox Alto is a classic.
- TempleOS, as explained in this nice writeup, made heavy use of dynamic code generation

Anyway, back to Synthesis.
The OS and code generators were all written, from scratch, in 68020 assembly language. How do we translate Massalin’s ideas to 2019? Most likely by reusing an existing code generator and OS. For most of this piece I’ll assume that that’s what we want to do, but we’ll also briefly touch on customized alternatives.

Code Generator Requirements

The particular technology that we use for runtime code generation isn’t that important, but for now let’s imagine using LLVM. This means that the pieces of the kernel that we wish to specialize will need to be shipped as bitcode, and then we’ll ask LLVM to turn it into object code as needed. LLVM has lots of great optimization passes, from which we could pick a useful subset, and it is not hard to use in JIT mode. On the other hand, LLVM isn’t as fast as we’d like and also it has a large footprint. In production we’d need to think carefully whether we wanted to include a big chunk of non-hardened code in the kernel.

What optimizations are we expecting the code generator to perform? Mostly just the basic ones: function inlining, constant propagation, and dead code elimination, followed by high-quality instruction selection and register allocation. The hard part, as we’re going to see, is convincing LLVM that it is OK to perform these optimizations as aggressively as we want. This is an issue that Massalin did not need to confront: her kernel was designed in such a way that she knew exactly what could be specialized and when. Linux, on the other hand, was obviously not created with staged compilation in mind, and we’re going to have to improvise somewhat if we want this to work well. My guess is that while LLVM would be great for prototyping purposes, for deployment we’d probably end up either reusing a lighter-weight code generator or else creating a new one that is smaller, faster, and more suitable for inclusion in the OS.
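As a toy illustration of what these optimizations buy, and emphatically not kernel code, the following Python sketch stages a generic routine by baking a value that is constant at generation time into freshly emitted source. The branch on that value disappears from the specialized version, which is the flavor of constant propagation plus dead code elimination described above; all names here are invented.

```python
# Illustrative user-space sketch of runtime specialization, not kernel
# code: a generic routine has a branch on 'mode' that a specializer can
# fold away once 'mode' is known at code-generation time.

def generic_op(mode, data):
    # The generic path re-tests 'mode' on every call.
    if mode == "upper":
        return data.upper()
    else:
        return data.lower()

TEMPLATES = {
    # Specialized sources: the mode test has been "constant-folded" away.
    "upper": "def specialized(data):\n    return data.upper()\n",
    "lower": "def specialized(data):\n    return data.lower()\n",
}

def specialize(mode):
    # Emit and compile source with the stage-time constant baked in,
    # standing in for inlining + constant propagation + DCE in a JIT.
    ns = {}
    exec(compile(TEMPLATES[mode], f"<specialized-{mode}>", "exec"), ns)
    return ns["specialized"]

fast_upper = specialize("upper")
print(fast_upper("pipe"))           # PIPE
print(generic_op("upper", "pipe"))  # PIPE (same result, branch intact)
```

The specialized function computes the same result as the generic one, but the runtime test on the stage-time constant is gone, which is exactly the shape of win being asked of the kernel's code generator.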
Performance of runtime code generation isn’t just a throughput issue, there’ll also be latency problems if we’re not careful. We need to think about the impact on security, too.

Example: Specializing write() in Linux

Let’s assume that we’ve created a version of Linux that is capable of generating a specialized version of the write() system call for a pipe. This OS needs — but we won’t discuss — a system call dispatch mechanism to rapidly call the specialized code when it is available. In Synthesis this was done by giving each process its own trap vector. Before we dive into the code, let’s be clear about what we’re doing here: we are pretending to be the code generator that is invoked to create a specialized write() method. Probably this is done lazily at the time the system call is first invoked using the new file descriptor. The specialized code can be viewed as a cached computation, and as a bonus this cache is self-invalidating: it should be valid as long as the file descriptor itself is valid. (But later we’ll see that we can do a better job specializing the kernel if we support explicit invalidation of runtime-generated code.)

If you want to follow along at home, I’m running Linux 5.1.14 under QEMU, using these instructions to single-step through kernel code, and driving the pipe logic using this silly program.

Skipping over the trap handler and such, ksys_write() is where things start to happen for real:

ssize_t ksys_write(unsigned int fd, const char __user *buf, size_t count)
{
	struct fd f = fdget_pos(fd);
	ssize_t ret = -EBADF;

	if (f.file) {
		loff_t pos = file_pos_read(f.file);
		ret = vfs_write(f.file, buf, count, &pos);
		if (ret >= 0)
			file_pos_write(f.file, pos);
		fdput_pos(f);
	}

	return ret;
}

At this point the “fd” parameter can be treated as a compile-time constant, but of course “buf” and “count” cannot. If we turn “fd” into a constant, will LLVM be able to propagate it through the remaining code? It will, as long as:

- We inline all function calls.
- Nobody takes the address of "fd".

It's not that calls and pointers will always block the optimizer, but they complicate things by bringing interprocedural analysis and pointer analysis into the picture. Our goal is going to be to see whether the code generator can infer the contents of the struct returned from fdget_pos(). (You might wonder why performance-sensitive code is returning a "struct fd" by value. Turns out this struct only has two members: a pointer and an integer.)

The call to fdget_pos() goes to this code:

```c
static inline struct fd fdget_pos(int fd)
{
	return __to_fd(__fdget_pos(fd));
}
```

and then here:

```c
unsigned long __fdget_pos(unsigned int fd)
{
	unsigned long v = __fdget(fd);
	struct file *file = (struct file *)(v & ~3);

	if (file && (file->f_mode & FMODE_ATOMIC_POS)) {
		if (file_count(file) > 1) {
			v |= FDPUT_POS_UNLOCK;
			mutex_lock(&file->f_pos_lock);
		}
	}
	return v;
}
```

and then (via a trivial helper that I'm not showing) here:

```c
static unsigned long __fget_light(unsigned int fd, fmode_t mask)
{
	struct files_struct *files = current->files;
	struct file *file;

	if (atomic_read(&files->count) == 1) {
		file = __fcheck_files(files, fd);
		if (!file || unlikely(file->f_mode & mask))
			return 0;
		return (unsigned long)file;
	} else {
		file = __fget(fd, mask, 1);
		if (!file)
			return 0;
		return FDPUT_FPUT | (unsigned long)file;
	}
}
```

Keep in mind that up to here, we haven't seen any optimization blockers. In __fget_light(), we run into our first interesting challenge: "current" is a macro that returns a pointer to the running process's PCB (in Linux the PCB, or process control block, is a "task_struct" but I'll continue using the generic term). The current macro ends up being a tiny bit magical, but its end result can be treated as a constant within the context of a given process.
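Here is a tiny user-space illustration of that second condition (invented code, not from the kernel): as soon as a variable's address escapes, folding its value stops being a purely local decision.

```c
#include <assert.h>

/* If "fd" is only ever read, a specializer that knows fd == 5 can fold
 * this comparison at code-generation time. */
static int in_range(int fd, int max_fds)
{
    return fd < max_fds;
}

/* Once &fd escapes into another function, the value may be rewritten
 * through the pointer, so the same fold now requires pointer analysis
 * to prove it is still sound. */
static void clamp(int *p, int max)
{
    if (*p >= max)
        *p = max - 1;
}

static int in_range_clamped(int fd, int max_fds)
{
    clamp(&fd, max_fds);    /* address of "fd" taken here */
    return fd < max_fds;
}
```

In the first function, substituting a constant for "fd" and folding the branch is trivially correct; in the second, the compiler must first reason about what clamp() does through the pointer.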
There is no way a code generator like LLVM will be able to reach this conclusion, so we'll need to give it some help, perhaps by annotating certain functions, macros, and struct fields as returning values that are constant over a given scope. This is displeasing but it isn't clear there's any easier or better way to achieve our goal here. The best we can hope for is that the annotation burden is close to proportional to the number of data types in the kernel; if it ends up being proportional to the total amount of code then our engineering effort goes way up.

Now, assuming that we can treat "current" as a compile-time constant, we're immediately faced with a similar question: is the "files" field of the PCB constant? It is (once the process is initialized) but again there's not going to be any easy way for our code generator to figure this out; we'll need to rely on another annotation.

Continuing, the "count" field of files is definitely not a constant: this is a reference count on the process's file descriptor table. A single-threaded Linux process will never see count > 1, but a multi-threaded process will. (Here we need to make the distinction between open file instances, which are shared following a fork, and the file descriptor table, which is not.) The fast path here is exploiting the insight that if our process is single-threaded we don't need to worry about locking the file descriptor table, and moreover the process is not going to stop being single-threaded during the period where we rely on that invariant, because we trust the currently running code to not do the wrong thing.

Here our specializing compiler has a fun policy choice to make: should it specialize for the single-threaded case? This will streamline the code a bit, but it requires the generated code to be invalidated later on if the process does end up becoming multithreaded — we'd need some collection of invalidation hooks to make that happen.
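In the simplest form, such an invalidation hook could be a per-process pointer to the specialized code that gets cleared when the underlying invariant breaks. A user-space sketch (all names invented for illustration; a real kernel would need atomics and careful synchronization with in-flight calls):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-process state. */
struct proc {
    int nthreads;
    int (*write_fast)(const char *, int);   /* specialized code, or NULL */
};

/* Code specialized under the assumption nthreads == 1 (no fd-table lock). */
static int write_fast_single(const char *buf, int n) { (void)buf; return n; }

/* Fully general slow path. */
static int write_slow(struct proc *p, const char *buf, int n)
{
    (void)p; (void)buf;
    return n;
}

/* Invalidation hook: run whenever the process gains a thread. */
static void on_thread_create(struct proc *p)
{
    p->nthreads++;
    p->write_fast = NULL;   /* discard code whose assumption just broke */
}

/* Dispatch: use the cached specialized code while it is still valid. */
static int do_write(struct proc *p, const char *buf, int n)
{
    if (p->write_fast)
        return p->write_fast(buf, n);
    return write_slow(p, buf, n);   /* could re-specialize here */
}
```

In the real system the hook set would be larger: anything that can break a specialization assumption (thread creation, dup(), close(), changing file flags) would need to invalidate the affected generated code.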
Anyhow, let’s continue into __fcheck_files(): static inline struct file *__fcheck_files(struct files_struct *files, unsigned int fd) { struct fdtable *fdt = rcu_dereference_raw(files->fdt); if (fd < fdt->max_fds) { fd = array_index_nospec(fd, fdt->max_fds); return rcu_dereference_raw(fdt->fd[fd]); } return NULL; } At this point we’re in deep “I know what I’m doing” RCU territory and I’m going to just assume we can figure out a way for the code generator to do what we want, which is to infer that this function returns a compile-time-constant value. I think this’ll work out in practice, since even if the open file instance is shared across processes, the file cannot be truly closed until its reference count goes to zero. Anyway, let’s move forward. Next, we’re back in __fget_light() and then __fdget_pos(): our code generator should be able to easily fold away the remaining branches in these functions. Finally, we return to line 4 of ksys_write() and we know what the struct fd contains, making it possible to continue specializing aggressively. I don’t think making this example any longer will be helpful; hopefully the character of the problems we’re trying to solve are now apparent. In summary, we saw four kinds of variables in this exercise: Those such as the “fd” parameter to write() that the code generator can see are constant at code generation time. Those such as the “current” pointer that are constant, but where the code generator cannot see this fact for one reason or another. To specialize these, we’ll have to give the compiler extra information, for example using annotations. Those such as the “count” field of the “files_struct” that are not actually constant, but that seem likely enough to remain constant that we may want to create a specialized version treating them as constants, and then be ready to invalidate this code if the situation changes. Those that are almost certainly not worth trying to specialize. 
For example, the "count" parameter to write() is not likely to remain constant over a number of calls.

Writing one byte to a pipe from a single-threaded process executes about 3900 instructions on Linux 5.1.14 (this is just in ksys_write(), I didn't measure the trapping and untrapping code). The Synthesis thesis promises an order of magnitude performance improvement. Can specialization reduce the fast path on this system call to 390 instructions? It would be fun to find out.

I'll finish up this example by observing that even though I chose to present code from the filesystem, I think it's network stack code that will benefit from specialization the most.

Discussion

I have some experience with OS kernels other than Linux, and my belief is that attempting to dynamically specialize any mainstream, production-grade OS other than Linux would run into the same issues we just saw above. At the level the code generator cares about, there just isn't much effective difference between these OSes: they're all big giant blobs of C with plentiful indirection and domain-specific hacks.

If our goal is only to create a research-grade prototype, it would be better to start with something smaller than Linux/Windows/Darwin so that we can refactor specialization-unfriendly parts of the OS in a reasonable amount of time. xv6 is at the other extreme: it is super easy to hack on, but it is so incredibly over-simplified that it could not be used to test the hypothesis "a realistic OS can be made much faster using specialization." Hilariously, an xv6+LLVM system would be about 0.15% OS code and 99.85% compiler code. Perhaps there's a middle ground that would be a better choice, Minix or OpenBSD or whatever.

Given two developers, one who knows LLVM's JIT interfaces and one who's a good Linux kernel hacker, how long would it take to bring up a minimally ambitious dynamically specializing version of Linux?
I would guess this could be done in a week or two; there's not really anything too difficult about it (it's easy to say this while blogging, of course). The problem is that this would not give good results: only the very easiest specialization opportunities will get spotted by the runtime code generator. But perhaps this would generate enough interest that people would keep building on it.

Do we want to do specialization work on C code? No, not really, it's just that every one of our production-grade kernels is already written in it. A fun but engineering-intensive alternative would be to create a new, specialization-friendly kernel in whatever programming language looks most suitable. Functional languages should offer real advantages here, but of course there are issues in using these languages to create a performant OS kernel. Perhaps Mirage is a good starting point here; it is already all about specialization — but at system build time, not at runtime.

An ideal programming environment for a modern Synthesis kernel would provide tool and/or language support for engineering specialization-friendly kernel code. For example, we would identify a potential specialization point and then the tools would use all of our old friends — static analysis, dynamic analysis, symbolic execution, etc. — to show us what data items fall into each of the four categories listed in the last section, and provide us with help in refactoring the system so that specialization can work better. A tricky thing here is taking into account the different kinds of concurrency and synchronization that happen in a sophisticated OS.

Some useful questions to ask (and of course we're always asking these same things when doing OS and compiler research) are:

- How are we supposed to think about a dynamically specializing OS kernel?
- What are the new abstractions, if any?
Specialization could really benefit from some sort of first-class "code region over which these values are effectively constant" and then also "but the constant-ness is invalidated by this set of events."

Why Now?

The literature on dynamic specialization of OS code is interesting: it looks like there was a flurry of interest inspired by Synthesis in the mid/late 90s. Many of these papers had Calton Pu, Massalin's thesis supervisor, on the author list. Not a whole lot has happened in this area since then, as far as I know. The only paper I can think of about optimistic OS specialization is this one; it's a nice paper, I recommend it. Static OS specialization, on the other hand, is what unikernels are all about, so there's been quite a bit of work done on this.

It seems like time to revive interest in dynamic OS specialization because:

- Most of the processor speed wins lately are application specific; the cores that execute OS code are not getting noticeably faster each year, nor do they seem likely to. In fact, way back in 1989 John Ousterhout argued that increases in processor speed weren't benefiting OS code as much as other kinds of code.
- OSes have slowed down recently to mitigate side channel attacks. Maybe we can get some of that speed back using dynamic specialization.
- OSes are way bloatier than they were in the 90s, increasing the potential benefits due to specialization.
- Compiler technology is far ahead of where it was in the 90s, with off-the-shelf toolkits like LLVM providing high-quality solutions to many of the problems we'd run into while prototyping this work.

I'd like to thank Perry Metzger, who suggested this piece and also provided feedback on a draft of it. Perry worked with Alexia back in the day and hopefully he'll also write about this topic. Finally, I don't want to give the impression that I'm summarizing a research proposal or an in-progress project. This is the kind of thing I love to think about, is all.
2024-11-08T04:25:14
null
train
42,023,717
fleuraly
2024-11-02T02:53:38
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,023,721
bookofjoe
2024-11-02T02:54:26
David Salle's Ghost in the A.I. Machine
null
https://www.nytimes.com/2024/10/30/arts/design/david-salle-ai-gladstone-painting-art.html
1
1
[ 42023723 ]
null
null
null
null
null
null
null
null
null
train
42,023,727
tomcam
2024-11-02T02:55:29
Scientists Caught Sperm Defying One of the Laws of Physics
null
https://www.sciencealert.com/scientists-caught-sperm-defying-one-of-the-laws-of-physics
1
0
null
null
null
null
null
null
null
null
null
null
train
42,023,775
null
2024-11-02T03:07:47
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,023,784
ivewonyoung
2024-11-02T03:11:09
Many People with Long Covid Have Signs of Persistent SARS-CoV-2 Proteins
null
https://directorsblog.nih.gov/2024/10/31/many-people-with-long-covid-have-signs-of-persistent-sars-cov-2-proteins-new-findings-show/
9
2
[ 42024037, 42024002, 42023826 ]
null
null
null
null
null
null
null
null
null
train
42,023,828
ollysb
2024-11-02T03:22:18
The New Jira
null
https://www.atlassian.com/blog/announcements/the-new-jira
3
0
null
null
null
null
null
null
null
null
null
null
train
42,023,829
RunOutOfMemory
2024-11-02T03:22:46
Worlds highest-altitude wind farm online
null
https://www.globaltimes.cn/page/202308/1295641.shtml
3
0
null
null
null
null
null
null
null
null
null
null
train
42,023,837
kristianp
2024-11-02T03:25:51
Mesa (Programming Language)
null
https://en.wikipedia.org/wiki/Mesa_(programming_language)
3
0
[ 42025891 ]
null
null
null
null
null
null
null
null
null
train
42,023,877
gadgetonhand
2024-11-02T03:34:50
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,023,882
null
2024-11-02T03:36:28
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,023,907
nmstoker
2024-11-02T03:41:40
Mea Culpa
null
https://tim-one.github.io/psf/meaculpa.html
3
0
[ 42025886 ]
null
null
null
null
null
null
null
null
null
train
42,023,909
MilnerRoute
2024-11-02T03:42:03
For some elite athletes, neurodivergence can be a super strength
null
https://www.washingtonpost.com/wellness/2024/11/01/adhd-autism-hyperfocus-elite-atheletes/
5
0
null
null
null
null
null
null
null
null
null
null
train
42,023,928
xanderlewis
2024-11-02T03:46:25
Reed Research Reactor
null
https://reactor.reed.edu/about.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,023,930
BobbyhaZ
2024-11-02T03:46:39
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,023,935
Rendello
2024-11-02T03:48:19
List of partitions of traditional Japanese architecture
null
https://en.wikipedia.org/wiki/List_of_partitions_of_traditional_Japanese_architecture
1
0
[ 42025887 ]
null
null
null
null
null
null
null
null
null
train
42,023,939
teleforce
2024-11-02T03:48:54
US Skydio Drone Need to Ration Batteries for Customers After Sanctions by China
null
https://www.forbes.com/sites/siladityaray/2024/10/31/largest-us-drone-manufacturer-says-it-will-need-to-ration-batteries-for-customers-after-sanctions-by-china/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,023,941
quick_brown_fox
2024-11-02T03:50:21
Sick man of Europe? Germany's bosses sound alarm on staff illness
null
https://www.ft.com/content/8e7bc450-7dc7-45c2-82ed-99ab2a8c4952
3
2
[ 42024420, 42024202 ]
null
null
null
null
null
null
null
null
null
train
42,023,950
sandwichsphinx
2024-11-02T03:52:58
Autobiography of Benjamin Franklin (1791)
null
https://www.gutenberg.org/ebooks/20203
4
1
[ 42024183 ]
null
null
no_error
Autobiography of Benjamin Franklin by Benjamin Franklin
null
null
About this eBook

Author: Franklin, Benjamin, 1706-1790
Editor: Pine, Frank Woodworth, 1869-
Illustrator: Smith, E. Boyd (Elmer Boyd), 1860-1943
Title: Autobiography of Benjamin Franklin
Note: Reading ease score: 59.9 (10th to 12th grade). Somewhat difficult to read.
Note: See also PG#148 ed. by Charles W. Eliot
Credits: Produced by Turgut Dincer, Brian Sogard and the Online Distributed Proofreading Team at http://www.pgdp.net
Summary: "Autobiography of Benjamin Franklin" by Benjamin Franklin is a historical account written in the late 18th century. This work delves into Franklin's life, offering insights into his humble beginnings, his rise to prominence, and the philosophies that guided him throughout his achievements. It not only reflects on his personal journey but also serves as an inspiring narrative of self-improvement and perseverance. At the start of the autobiography, Franklin introduces himself to his son, outlining his motivations for writing, which include sharing life lessons and family anecdotes. He recalls his early years in Boston, his family's influences, and his father's aspirations for him. Through these recollections, Franklin discusses his childhood experiences, early education, and the start of his career as a printer. The narrative hints at his keen desire for knowledge and self-betterment, setting the stage for the remarkable life he would go on to lead. (This is an automatically generated summary.)
Language: English
LoC Class: E300: History: America: Revolution to the Civil War (1783-1861)
Subject: Franklin, Benjamin, 1706-1790
Subject: Statesmen -- United States -- Biography
Category: Text
EBook-No.: 20203
Release Date: Dec 28, 2006
Most Recently Updated: Oct 19, 2022
Copyright Status: Public domain in the USA.
Downloads: 8805 downloads in the last 30 days.
2024-11-08T08:50:11
en
train
42,023,957
e2e4
2024-11-02T03:55:15
Python has overtaken JavaScript on GitHub
null
https://www.infoworld.com/article/3594587/python-has-overtaken-javascript-on-github.html
6
1
[ 42027224 ]
null
null
null
null
null
null
null
null
null
train
42,023,996
appstorelottery
2024-11-02T04:06:04
Gary Newman (GMod/Rust) pressured by Unity to spend 500k on services
null
https://twitter.com/garrynewman/status/1852383376583307613
4
5
[ 42024010, 42025875 ]
null
null
null
null
null
null
null
null
null
train
42,024,024
gslin
2024-11-02T04:13:41
The Human Toll of ALPR Errors
null
https://www.eff.org/deeplinks/2024/11/human-toll-alpr-errors
3
0
null
null
null
null
null
null
null
null
null
null
train
42,024,055
pseudometa
2024-11-02T04:22:24
Show HN: World's Largest Minigolf Directory
Over the last two months I built a directory of minigolf businesses and it is now live. It has nearly 3000 businesses and in the United States and Canada, and I&#x27;ll be expanding it to other countries soon. The site includes a detailed view of each minigolf business, courses, and digital online scorecard.<p>The goal is to compile comprehensive course details including course designer, themes, difficulty scores, photos, videos, and more. To do this, I plan to work with businesses directly as well as crowd source details through the free minigolf scorecard capability. There are many advantages for players including statistics and note taking, as well as for business such as analytics, maintenance notifications, and more.<p>If you know of someone with a minigolf business, please have them check it out and verify if any information is incorrect. It is continuing to grow and expand daily.<p>A couple interesting notes is that the site is entirely developed using AI for all of its code. I also use it to select the best images to display for each business, and write the business overview that appears on each page. It is interesting that while AI isn&#x27;t core to this site, it plays a huge role in it&#x27;s development, and making it user-friendly as well as rapidly scaleable. It will be wild to see how much further it goes in a couple more months.
https://www.minigolfr.com
1
0
[ 42025860 ]
null
null
null
null
null
null
null
null
null
train
42,024,071
zemahran
2024-11-02T04:29:16
null
null
null
1
null
[ 42024072 ]
null
true
null
null
null
null
null
null
null
train
42,024,083
ctoth
2024-11-02T04:30:31
October 30 – Reflections on the Day the Earth Moved for H5N1
null
https://hogvet51.substack.com/p/october-30-reflections-on-the-day
52
34
[ 42024325, 42024317, 42031149, 42025489, 42025068, 42024322, 42024376 ]
null
null
null
null
null
null
null
null
null
train
42,024,097
thunderbong
2024-11-02T04:35:46
How to Build Smaller Container Images: Docker Multi-Stage Builds
null
https://labs.iximiuz.com/tutorials/docker-multi-stage-builds
4
0
[ 42025878 ]
null
null
no_error
How to Build Smaller Container Images: Docker Multi-Stage Builds | iximiuz Labs
null
Ivan Velichko
If you're building container images with Docker and your Dockerfiles aren't multi-stage, you're likely shipping unnecessary bloat to production. This not only increases the size of your images but also broadens their potential attack surface. What exactly causes this bloat, and how can you avoid it?

In this article, we'll explore the most common sources of unnecessary packages in production container images. Once the problem is clear, we'll see how using Multi-Stage Builds can help produce slimmer and more secure images. Finally, we'll practice restructuring Dockerfiles for some popular software stacks - both to better internalize the new knowledge and to show that often, just a little extra effort can yield a significantly better image. Let's get started!

Why is my image so huge?

Almost any application, regardless of its type (web service, database, CLI, etc.) or language stack (Python, Node.js, Go, etc.), has two types of dependencies: build-time and run-time. Typically, the build-time dependencies are much more numerous and noisy (read - have more CVEs in them) than the run-time ones. Therefore, in most cases, you'll only want the production dependencies in your final images.

However, build-time dependencies end up in production containers more often than not, and one of the main reasons for that is:

⛔ Using exactly the same image to build and run the application.

Building code in containers is a common (and good) practice - it guarantees the build process uses the same set of tools when performed on a developer's machine, a CI server, or any other environment. Running applications in containers is the de facto standard practice today. Even if you aren't using Docker, your code is likely still running in a container or a container-like VM.

However, building and running apps are two completely separate problems with different sets of requirements and constraints. So, the build and runtime images should also be completely separate!
Nevertheless, the need for such a separation is often overlooked, and production images end up having linters, compilers, and other dev tools in them. Here are a couple of examples that demonstrate how it usually happens.

How NOT to organize a Go application's Dockerfile

Starting with a more obvious one:

```dockerfile
# DO NOT DO THIS IN YOUR DOCKERFILE
FROM golang:1.23

WORKDIR /app
COPY . .

RUN go build -o binary

CMD ["/app/binary"]
```

The issue with the above Dockerfile is that golang was never intended as a base image for production applications. However, this image is the default choice if you want to build your Go code in a container. But once you've written a piece of Dockerfile that compiles the source code into an executable, it can be tempting to simply add a CMD instruction to invoke this binary and call it done.

(Figure: How NOT to structure a Dockerfile for a Go application.)

The gotcha is that such an image would include not only the application itself (the part you want in production) but also the entire Go compiler toolchain and all its dependencies (the part you most certainly don't want in production):

```
trivy image -q golang:1.23

golang:1.23 (debian 12.7)
Total: 799 (UNKNOWN: 0, LOW: 240, MEDIUM: 459, HIGH: 98, CRITICAL: 2)
```

The golang:1.23 brings more than 800MB of packages and about the same number of CVEs 🤯

How NOT to organize a Node.js application's Dockerfile

A similar but slightly more subtle example:

```dockerfile
# DO NOT DO THIS IN YOUR DOCKERFILE
FROM node:lts-slim

WORKDIR /app
COPY . .

RUN npm ci
RUN npm run build

ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "/app/.output/index.mjs"]
```

Unlike the golang image, the node:lts-slim is a valid choice for a production workload base image. However, there is still a potential problem with this Dockerfile. If you build an image using it, you may end up with the following composition:

(Figure: How NOT to structure a Dockerfile for a Node.js application.)

The diagram shows the actual numbers for the iximiuz Labs frontend app, which is written in Nuxt 3.
If it used a single-stage Dockerfile like the above, the resulting image would have almost 500MB of node_modules, while only about 50MB of the "bundled" JavaScript (and static assets) in the .output folder would constitute the (self-sufficient) production app. This time, the "bloat" is caused by the npm ci step, which installs both production and development dependencies. But the problem cannot be fixed by simply using npm ci --omit=dev because it'd break the consequent npm run build command that needs both the production and the development dependencies to produce the final application bundle. So, a more subtle solution is required.

How lean images were produced before Multi-Stage Builds

In both the Go and Node.js examples from the previous section, the solution could involve splitting the original Dockerfile into two files. The first Dockerfile would start with a FROM <sdk-image> and contain the application building instructions:

```dockerfile
# Dockerfile.build
FROM node:lts-slim

WORKDIR /app
COPY . .

RUN npm ci
RUN npm run build
```

Running the docker build command using Dockerfile.build would produce an auxiliary image:

```
docker build -t build:v1 -f Dockerfile.build .
```

...which then could be used to extract the built app (our artifact) to the builder host:

```
docker cp $(docker create build:v1):/app/.output .
```

The second Dockerfile would start with a FROM <runtime-image> and simply COPY the built application from the host into its future runtime environment:

```dockerfile
# Dockerfile.run
FROM node:lts-slim

WORKDIR /app
COPY .output .

ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "/app/.output/index.mjs"]
```

Running the docker build command for the second time with Dockerfile.run would produce the final slim production image:

```
docker build -t app:v1 -f Dockerfile.run .
```

This technique, known as the Builder Pattern, was widely used before Docker added Multi-Stage Build support. However, while fully functional, the Builder Pattern had a relatively rough UX.
It required:

- Writing multiple interdependent Dockerfiles.
- Copying build artifacts to and from the builder host.
- Devising extra scripts to execute docker build commands.

Additionally, one would need to remember to always run the docker build -f Dockerfile.build command before the docker build -f Dockerfile.run command (otherwise, the final image could be baked with a stale artifact from the previous build), and the experience of sending the build artifacts through the host was also far from perfect. At the same time, a "native" Builder Pattern implementation could:

- Optimize the artifact copying.
- Simplify the build order organization.
- Standardize the technique across different teams.

And luckily, one followed!

An easy way to understand Multi-Stage Builds

In essence, Multi-Stage Builds are the Builder Pattern on steroids implemented right inside Docker. To understand how Multi-Stage Builds work, it's important to be familiar with two simpler and seemingly independent Dockerfile features.

You can COPY files --from=<another-image>

One of the most frequently used Dockerfile instructions is COPY. Most of the time, we COPY files from the host to the container image:

```dockerfile
COPY host/path/to/file image/path/to/file
```

However, you can also COPY files straight from other images 🤯 Here is an example that copies the nginx.conf file from the Docker Hub's nginx:latest image to the image that is being currently built:

```dockerfile
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
```

The feature can also come in handy while implementing the Builder Pattern. Now, we can COPY the built artifacts directly --from the auxiliary build image:

```dockerfile
# Dockerfile.run
FROM node:lts-slim

WORKDIR /app
COPY --from=build:v1 /app/.output .

ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "/app/.output/index.mjs"]
```

Thus, the COPY --from=<image> trick enables bypassing the builder host when copying artifacts from the build to runtime images. However, the need to write multiple Dockerfiles and the build order dependency problems remain...

You can define several images in one Dockerfile

Historically, a Dockerfile would start with a FROM <base-image> instruction:

```dockerfile
# Dockerfile.simple
FROM node:lts-slim

COPY ...

CMD ["node", "/path/to/app"]
```

...and then the docker build command would use it to produce just one image:

```
docker build -f Dockerfile.simple -t app:latest .
```

However, since ~2018, Docker supports complex "multi-tenant" Dockerfiles. You can put as many named FROM instructions into a Dockerfile as you like:

```dockerfile
# Dockerfile.complex
FROM busybox:stable AS from1
CMD ["echo", "busybox"]

FROM alpine:3 AS from2
CMD ["echo", "alpine"]

FROM debian:stable-slim AS from3
CMD ["echo", "debian"]
```

...and every FROM will become a separate target for the docker build command:

```
docker build -f Dockerfile.complex --target from1 -t my-busybox
docker run my-busybox
```

Same Dockerfile, but a totally different image:

```
docker build -f Dockerfile.complex --target from2 -t my-alpine
docker run my-alpine
```

...and one more image from exactly the same Dockerfile:

```
docker build -f Dockerfile.complex --target from3 -t my-debian
docker run my-debian
```

Returning to our Builder Pattern problem, it means that we can put back together the build and runtime Dockerfiles using two different FROM instructions in one compound Dockerfile!

The power of Multi-Stage Dockerfiles

Here is what a "compound" Node.js application Dockerfile could look like:

```dockerfile
# The "build" stage
FROM node:lts-slim AS build

WORKDIR /app
COPY . .

RUN npm ci
RUN npm run build

# The "runtime" stage
FROM node:lts-slim AS runtime

WORKDIR /app
COPY --from=build /app/.output .

ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "/app/.output/index.mjs"]
```

Using the official terminology, every FROM instruction defines not an image but a stage, and technically the COPY happens --from a stage. However, as we saw above, thinking of stages as independent images is helpful for connecting the dots. Last but not least, when all stages and COPY --from=<stage> instructions are defined in one Dockerfile, the Docker build engine (BuildKit) can compute the right build order, skip unused, and execute independent stages concurrently 🧙

A few important facts to remember before writing your first multi-stage Dockerfile:

- The order of stages in the Dockerfile matters - it's impossible to COPY --from a stage defined below the current stage.
- The AS aliases are optional - if you don't name your stages, they still can be referred to by their sequence number.
- When the --target flag is not used, the docker build command will build the last stage (and all stages it copies from).

Multi-Stage Builds in practice

Below are examples of how to use Multi-Stage Builds to produce smaller and more secure container images for different languages and frameworks.

Node.js

There are different shapes and forms of Node.js applications - some of them require Node.js only during the development and build phases, while others need Node.js in the runtime container, too. Here are some examples of how to structure multi-stage Dockerfiles for Node.js applications:

Multi-Stage Build example: React application

React applications are fully static when built, so they can be served by any static file server. However, the build process requires Node.js, npm, and all dependencies from package.json to be installed. Thus, it's important to carefully "cherry-pick" the static build artifacts from the potentially massive build image.

```dockerfile
# Build stage
FROM node:lts-slim AS build

WORKDIR /app
COPY package*.json .
RUN npm ci

COPY . .
RUN npm run build

# Runtime stage
FROM nginx:alpine

WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY --from=build /app/build .

ENTRYPOINT ["nginx", "-g", "daemon off;"]
```

Multi-Stage Build example: Next.js application

Next.js applications can be:

- Fully static: the build process and the multi-stage Dockerfile then are almost identical to the React example above.
- With server-side features: the build process is similar to React, but the runtime image requires Node.js, too.

Below is an example of a multi-stage Dockerfile for a Next.js application that uses server-side features:

```dockerfile
# Lifehack: Define the Node.js image only once
FROM node:lts-slim AS base

# Build stage
FROM base AS build

WORKDIR /app
COPY package*.json .
RUN npm ci

COPY . .
RUN npm run build

# Runtime stage
FROM base AS runtime

RUN addgroup --system --gid 1001 nextjs
RUN adduser --system --uid 1001 nextjs
USER nextjs

WORKDIR /app
COPY --from=build /app/public ./public
RUN mkdir .next
COPY --from=build --chown=nextjs /app/.next/standalone .
COPY --from=build --chown=nextjs /app/.next/static ./.next/static

ENV NODE_ENV=production
CMD ["node", "server.js"]
```

Multi-Stage Build example: Vue application

From the build process perspective, Vue applications are pretty similar to React applications. The build process requires Node.js, npm, and all dependencies from package.json to be installed, but produced build artifacts are static files that can be served by any static file server.

```dockerfile
# Build stage
FROM node:lts-slim AS build

WORKDIR /app
COPY package*.json .
RUN npm ci

COPY . .
RUN npm run build

# Runtime stage
FROM nginx:alpine

WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY --from=build /app/dist .
```

Multi-Stage Build example: Nuxt application

Similarly to Next.js, Nuxt applications can be either fully static or with server-side support. Below is an example of a multi-stage Dockerfile for a Nuxt application that runs on a Node.js server:

```dockerfile
# Build stage
FROM node:lts-slim AS build

WORKDIR /app
COPY package*.json .
RUN npm ci

COPY . .
RUN npm run build

# Runtime stage
FROM node:lts-slim

WORKDIR /app
COPY --from=build --chown=node:node /app/.output .

ENV NODE_ENV=production
ENV NUXT_ENVIRONMENT=production
ENV NITRO_HOST=0.0.0.0
ENV NITRO_PORT=8080

EXPOSE 8080
USER node:node
ENTRYPOINT ["node"]
CMD ["/app/server/index.mjs"]
```

Go

Go applications are always compiled during the build phase. However, the resulting binary can be either statically (CGO_ENABLED=0) or dynamically linked (CGO_ENABLED=1). The choice of the base image for the runtime stage will depend on the type of the produced binary:

- For statically linked binaries, you may pick the minimalistic gcr.io/distroless/static or even a scratch base (the latter with extreme caution).
- For dynamically linked binaries, a base image with standard shared C libraries is required (e.g., gcr.io/distroless/cc, alpine, or even debian).

In most cases, the choice of the runtime base image will not impact the structure of the multi-stage Dockerfile.

Multi-Stage Build example: Go application

```dockerfile
# Build stage
FROM golang:1.23 AS build

WORKDIR /app
COPY go.* .
RUN go mod download

COPY . .
RUN go build -o binary .

# Runtime stage
FROM gcr.io/distroless/static-debian12:nonroot

COPY --from=build /app/binary /app/binary
ENTRYPOINT ["/app/binary"]
```

Rust

Rust applications are typically compiled from source code using cargo. The Docker Official rust image includes cargo, rustc, and many other development and build tools, that make the total size of the image nearly 2GB. The multi-stage build is a must-have for Rust applications to keep the runtime image small. Note that the final choice of the runtime base image will depend on the Rust application's library requirements.

Multi-Stage Build example: Rust application

```dockerfile
# Build stage
FROM rust:1.67 AS build

WORKDIR /usr/src/app
COPY . .
RUN cargo install --path .

# Runtime stage
FROM debian:bullseye-slim

RUN apt-get update && \
    apt-get install -y extra-runtime-dependencies && \
    rm -rf /var/lib/apt/lists/*

COPY --from=build /usr/local/cargo/bin/app /usr/local/bin/app
CMD ["app"]
```

Java

Java applications are compiled from source code using build tools such as Maven or Gradle and require a Java Runtime Environment (JRE) to execute. For containerized Java applications, it's typical to use different base images for the build and runtime stages. The build stage requires a Java Development Kit (JDK), which includes tools for compiling and packaging the code, whereas the runtime stage generally only needs the smaller, more lightweight Java Runtime Environment (JRE) for execution.

Multi-Stage Build example: Java application

This example is adapted from the official Docker documentation. The Dockerfile is more complex than previous examples because it includes an additional test stage, and the Java build process involves more steps compared to the simpler processes for Node.js and Go applications.

```dockerfile
# Base stage (reused by test and dev stages)
FROM eclipse-temurin:21-jdk-jammy AS base
WORKDIR /build
COPY --chmod=0755 mvnw mvnw
COPY .mvn/ .mvn/

# Test stage
FROM base as test
WORKDIR /build
COPY ./src src/
RUN --mount=type=bind,source=pom.xml,target=pom.xml \
    --mount=type=cache,target=/root/.m2 \
    ./mvnw test

# Intermediate stage
FROM base AS deps
WORKDIR /build
RUN --mount=type=bind,source=pom.xml,target=pom.xml \
    --mount=type=cache,target=/root/.m2 \
    ./mvnw dependency:go-offline -DskipTests

# Intermediate stage
FROM deps AS package
WORKDIR /build
COPY ./src src/
RUN --mount=type=bind,source=pom.xml,target=pom.xml \
    --mount=type=cache,target=/root/.m2 \
    ./mvnw package -DskipTests && \
    mv target/$(./mvnw help:evaluate -Dexpression=project.artifactId -q -DforceStdout)-$(./mvnw help:evaluate -Dexpression=project.version -q -DforceStdout).jar target/app.jar

# Build stage
FROM package AS extract
WORKDIR /build
RUN java -Djarmode=layertools -jar
```
target/app.jar extract --destination target/extracted # Development stage FROM extract AS development WORKDIR /build RUN cp -r /build/target/extracted/dependencies/. ./ RUN cp -r /build/target/extracted/spring-boot-loader/. ./ RUN cp -r /build/target/extracted/snapshot-dependencies/. ./ RUN cp -r /build/target/extracted/application/. ./ ENV JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000" CMD [ "java", "-Dspring.profiles.active=postgres", "org.springframework.boot.loader.launch.JarLauncher" ] # Runtime stage FROM eclipse-temurin:21-jre-jammy AS runtime ARG UID=10001 RUN adduser \ --disabled-password \ --gecos "" \ --home "/nonexistent" \ --shell "/sbin/nologin" \ --no-create-home \ --uid "${UID}" \ appuser USER appuser COPY --from=extract build/target/extracted/dependencies/ ./ COPY --from=extract build/target/extracted/spring-boot-loader/ ./ COPY --from=extract build/target/extracted/snapshot-dependencies/ ./ COPY --from=extract build/target/extracted/application/ ./ EXPOSE 8080 ENTRYPOINT [ "java", "-Dspring.profiles.active=postgres", "org.springframework.boot.loader.launch.JarLauncher" ] PHPPHP applications are interpreted from source code, so they don't require compilation. 
However, the dependencies needed for development and production are often different, so it's often a good idea to use a multi-stage build to install only production dependencies, and copy them to the runtime image.Multi-Stage Build example: PHP application# Install dependencies stage FROM composer:lts AS deps WORKDIR /app COPY composer.json composer.lock ./ RUN --mount=type=cache,target=/tmp/cache \ composer install --no-dev --no-interaction # Runtime stage FROM php:8-apache AS runtime RUN docker-php-ext-install pdo pdo_mysql RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini" COPY ./src /var/www/html COPY --from=deps /app/vendor/ /var/www/html/vendor USER www-data ConclusionProduction images often suffer from "forgotten" development packages, adding unnecessary bloat and security risks. Multi-Stage Builds solve this by letting us separate build and runtime environments while keeping them described in a single Dockerfile, allowing more efficient builds. As we've seen, a few straightforward adjustments can reduce image size, improve security, and make build scripts cleaner and easier to maintain.Multi-Stage Builds also enable a number of advanced use cases, such as conditional RUN instructions (branching), unit testing during the docker build step, and more. Start using Multi-Stage Builds to keep your containers lean and production-ready 🚀
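The "unit testing during the `docker build` step" use case mentioned in the conclusion can be sketched with one extra stage. This is a hypothetical fragment (the stage names, npm scripts, and paths are assumptions, not taken from the article): a `test` stage branches off the `build` stage, so it runs only when explicitly targeted.

```dockerfile
# Build stage (same shape as the React example above)
FROM node:lts-slim AS build
WORKDIR /app
COPY package*.json .
RUN npm ci
COPY . .
RUN npm run build

# Test stage: branches off 'build'; executed only by `docker build --target test .`
FROM build AS test
RUN npm test

# Runtime stage: the default (last) target; it copies only from 'build',
# so BuildKit skips the 'test' stage during a plain `docker build .`
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
```

In CI, `docker build --target test .` exercises the tests, while the production image built by a plain `docker build .` is unaffected either way, because the final stage never copies from `test`.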
2024-11-07T18:03:46
en
train
42,024,124
soubhagyaX
2024-11-02T04:43:00
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,024,128
geox
2024-11-02T04:44:56
Chinese military adapts Meta's AI model despite licensing restrictions
null
https://the-decoder.com/chinese-military-adapts-metas-ai-model-despite-licensing-restrictions/
7
0
[ 42025858 ]
null
null
null
null
null
null
null
null
null
train
42,024,138
freediver
2024-11-02T04:48:53
NucliaDB, the AI Search Database for RAG
null
https://github.com/nuclia/nucliadb
2
0
null
null
null
no_error
GitHub - nuclia/nucliadb: NucliaDB, The AI Search database for RAG
null
nuclia
The AI Search Database.

Quickstart | Nuclia Docs | Community

NucliaDB is a robust database that allows storing and searching on unstructured data. It is an out-of-the-box hybrid search database, utilizing vector, full-text and graph indexes.

NucliaDB is written in Rust and Python. We designed it to index large datasets and provide multi-tenant support. When utilizing NucliaDB with Nuclia cloud, you are able to use the power of an NLP database without the hassle of data extraction, enrichment and inference. We do all the hard work for you.

Features

- Store text, files, vectors, labels and annotations
- Perform text searches: given a word or set of words, return resources in our database that contain them.
- Perform semantic searches with vectors. For example, given a set of vectors, return the closest matches in our database. With NLP, this allows us to look for similar sentences without being constrained by exact keywords.
- Export your data in a format compatible with most NLP pipelines (HuggingFace datasets, pytorch, etc)
- Store original data, extracted data and data pulled from the Understanding API
- Index fields, paragraphs, and semantic sentences on index storage
- Cloud data and insight extraction with the Nuclia Understanding API™
- Cloud connection to train ML models with Nuclia Learning API™
- Role-based security system with upstream proxy authentication validation
- Resources with multiple fields and metadata
- Text/HTML/Markdown plain fields support
- Field types: text, file, link, conversation
- Storage layer (PostgreSQL)
- Blob support with S3-compatible API, GCS and Azure Blob Storage
- Replication of index storage
- Distributed search
- Cloud-native

Architecture

Quickstart

Trying NucliaDB is super easy! You can extend your knowledge with the following readings:

- Quick start!
- Read about what Knowledge Boxes are in our basic concepts section
- Upload your data

💬 Community

- Chat with us in Slack
- 📝 Blog Posts
- Follow us on X
- Do you want to work with us?
🙋 FAQ

How is NucliaDB different from traditional search engines like Elasticsearch or Solr?

The core difference and advantage of NucliaDB is its architecture, built from the ground up for unstructured data. Its vector index, keyword, graph and fuzzy search provide an API to use all information extracted by the Understanding API, bringing powerful NLP abilities to any application with low code and peace of mind.

What license does NucliaDB use?

NucliaDB is open-source under the GNU Affero General Public License Version 3 - AGPLv3. Fundamentally, this means that you are free to use NucliaDB for your project, as long as you don't modify NucliaDB. If you do, you have to make the modifications public.

What is Nuclia's business model?

Our business model relies on our normalization API, which is based on the Nuclia Learning API and Nuclia Understanding API. These two APIs offer transformation of unstructured data into NucliaDB-compatible data with AI. We also offer NucliaDB as a service on our multi-cloud provider infrastructure: https://nuclia.cloud.

🤝 Contribute and spread the word

We are always happy to have contributions: code, documentation, issues, feedback, or even saying hello on Slack! Here is how you can get started:

- Read our Contributor Covenant Code of Conduct
- Create a fork of NucliaDB and submit your pull request!

✨ And to thank you for your contributions, claim your swag by emailing us at info at nuclia.com.

Reference

- Nuclia Documentation
- API Reference

Meta

- Rust Code Style
- Python Code Style
- Code of conduct
- Contributing
2024-11-08T18:02:50
en
train
42,024,166
sandwichsphinx
2024-11-02T05:00:23
Tokyo Electron
null
https://en.wikipedia.org/wiki/Tokyo_Electron
2
0
[ 42025855 ]
null
null
null
null
null
null
null
null
null
train
42,024,195
NavinF
2024-11-02T05:10:31
German man deliberately got 217 Covid shots from 8 formulations
null
https://www.cnn.com/2024/03/06/health/covid-217-shots-hypervaccination-lancet/index.html
10
17
[ 42024307, 42024294, 42024600, 42024201, 42024495, 42025853, 42024441, 42024517 ]
null
null
null
null
null
null
null
null
null
train
42,024,215
breck
2024-11-02T05:17:43
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,024,246
WuxiFingerHold
2024-11-02T05:25:40
Rewrite It in Rails
null
https://dirkjonker.bearblog.dev/rewrite-it-in-rails/
267
238
[ 42024828, 42024542, 42024257, 42024779, 42024685, 42024663, 42024579, 42025268, 42024655, 42024615, 42024551, 42024752, 42045821, 42024904, 42038420, 42026504, 42026586, 42024651, 42024867, 42024945, 42024820, 42024790, 42024865, 42024885, 42026599, 42024599, 42024834, 42037604, 42035329, 42024857, 42025530, 42025126, 42025159, 42025010, 42024691 ]
null
null
null
null
null
null
null
null
null
train
42,024,324
mostech
2024-11-02T05:57:47
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,024,342
thunderbong
2024-11-02T06:04:57
Cramming Solitaire onto a Nintendo E-Reader card
null
https://mattgreer.dev/blog/cramming-solitaire-onto-a-nintendo-ereader-card/
122
2
[ 42026056 ]
null
null
null
null
null
null
null
null
null
train
42,024,353
croes
2024-11-02T06:06:52
Constraining Ocean and Ice Shell Thickness on Miranda
null
https://iopscience.iop.org/article/10.3847/PSJ/ad77d7
2
0
null
null
null
null
null
null
null
null
null
null
train
42,024,363
pramaanik
2024-11-02T06:09:17
Show HN: Frameworks to Understand Public Policy
This is our long list of public policy frameworks. Over four years of writing a newsletter (publicpolicy.substack.com), we have compiled many frameworks that can help us analyse public policy issues.

We believe in the power of frameworks to zoom in on the most important aspects of complicated public policy issues. They are helpful in sense-making and should be thought of as a starting point for reflecting on public policy issues rather than as definitive solutions.

Some of these frameworks are our own creations, but most are credited to leading thinkers of public policy, politics, and philosophy.

For ease of reading, we've put these 80+ frameworks into a few categories. Click through on any of them and hopefully, you'll discover a new frame to observe the world.
https://publicpolicy.substack.com/p/special-edition-frameworks-to-understand
2
2
[ 42025827, 42025828 ]
null
null
null
null
null
null
null
null
null
train
42,024,377
lapnect
2024-11-02T06:12:44
RFC 9669 – BPF Instruction Set Architecture (ISA)
null
https://www.rfc-editor.org/info/rfc9669
4
1
[ 42024624, 42025824 ]
null
null
null
null
null
null
null
null
null
train
42,024,379
mitchbob
2024-11-02T06:12:59
U.S. Spy Agencies Issue New Warning on Russia's Election Misinformation Campaign
null
https://www.nytimes.com/2024/11/01/us/politics/russia-election-misinformation.html
9
1
[ 42024382, 42024556, 42025819 ]
null
null
null
null
null
null
null
null
null
train
42,024,393
mitchbob
2024-11-02T06:17:05
Does the Enlightenment's Great Female Intellect Need Rescuing?
null
https://www.newyorker.com/magazine/2024/11/04/the-enlightenments-most-dangerous-woman-andrew-janiak-book-review
3
1
[ 42024394 ]
null
null
null
null
null
null
null
null
null
train
42,024,422
monarchwadia
2024-11-02T06:22:23
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,024,425
croes
2024-11-02T06:22:53
Failure Analysis of the Arecibo Observatory 305-Meter Telescope Collapse
null
https://nap.nationalacademies.org/read/26982/chapter/1
2
1
[ 42024875 ]
null
null
null
null
null
null
null
null
null
train
42,024,426
lapnect
2024-11-02T06:23:03
Stochastic Rounding 2.0, with a View Towards Complexity Analysis
null
https://www.siam.org/publications/siam-news/articles/stochastic-rounding-20-with-a-view-towards-complexity-analysis
2
0
null
null
null
null
null
null
null
null
null
null
train
42,024,439
CrouchEndTiger
2024-11-02T06:29:18
Breaking the image: a 12th-century Ai Weiwei?
null
https://keithamcgowan.blogspot.com/2024/11/breaking-image-12th-century-ai-weiwei.html
31
17
[ 42027931, 42027384, 42027924, 42025897, 42026185 ]
null
null
null
null
null
null
null
null
null
train
42,024,460
NewarkDays
2024-11-02T06:34:22
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,024,470
aragilar
2024-11-02T06:37:00
Apple Forces the Signing of Applications in macOS Sequoia 15.1
null
https://hackaday.com/2024/11/01/apple-forces-the-signing-of-applications-in-macos-sequoia-15-1/
17
3
[ 42024850, 42025279, 42024535, 42025811 ]
null
null
no_error
Apple Forces The Signing Of Applications In MacOS Sequoia 15.1
2024-11-02T02:00:35+00:00
null
(Image caption: The dialogue that greets you when you try to open an unsigned application in MacOS Sequoia 15.1.)

Many MacOS users are probably used by now to the annoyance that comes with unsigned applications, as they require a few extra steps to launch them. This feature is called Gatekeeper and checks for an Apple Developer ID certificate. Starting with MacOS Sequoia 15, the easy bypassing of this feature with e.g. holding Control when clicking the application icon is no longer an option, with version 15.1 disabling ways to bypass this completely.

Not surprisingly, this change has caught especially users of open source software like OpenSCAD by surprise, as evidenced by a range of forum posts and GitHub tickets. The issue of having to sign applications you run on MacOS has been a longstanding point of contention, with HomeBrew applications affected and the looming threat for applications sourced from elsewhere, with OpenSCAD issue ticket #880 from 2014 covering the saga for one OSS project.

Now it would seem that to distribute MacOS software you need to have an Apple Developer Program membership, costing $99/year. So far it appears that this forcing is deliberate on Apple's side, with the FOSS community still sorting through possible workarounds and the full impact.

Thanks to [Robert Piston] for the tip.
2024-11-08T16:24:20
en
train
42,024,471
HideInNews
2024-11-02T06:37:07
Advertising Week 2024: Top Takeaways
null
https://www.snowflake.com/en/blog/top-three-advertising-week-ai-data-takeaways/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,024,497
domofutu
2024-11-02T06:43:11
Scientific reasoning driven by influential data: resuscitate dfstat
null
https://www.biorxiv.org/content/10.1101/2024.10.30.621016v1
3
0
null
null
null
null
null
null
null
null
null
null
train
42,024,501
domofutu
2024-11-02T06:44:41
Highly cited engineer offers guaranteed publication in return for coauthorship
null
https://retractionwatch.com/2024/10/30/highly-cited-engineer-offers-guaranteed-publication-citations-in-return-for-coauthorship/
23
2
[ 42025165, 42025393, 42025545, 42025541 ]
null
null
null
null
null
null
null
null
null
train
42,024,509
lihaoyi
2024-11-02T06:48:31
Mill Build Tool Issue Bounties, ~26kUSD up for grabs, ~14kUSD paid out
null
https://github.com/orgs/com-lihaoyi/discussions/6
4
2
[ 42024801, 42025787 ]
null
null
no_error
Open `com-lihaoyi` issue bounties, last updated 7 Nov 2024 · com-lihaoyi · Discussion #6
null
lihaoyi
2024-11-07T23:23:37
en
train
42,024,511
domofutu
2024-11-02T06:48:38
What Are Mechanisms?
null
https://www.thetransmitter.org/the-big-picture/what-are-mechanisms-unpacking-the-term-is-key-to-progress-in-neuroscience/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,024,518
thunderbong
2024-11-02T06:50:58
Mount Fuji snowless at end of October for first time in 130 years
null
https://www.jpost.com/science/science-around-the-world/article-826834
12
1
[ 42024921, 42025758, 42024993 ]
null
null
null
null
null
null
null
null
null
train
42,024,539
mgh2
2024-11-02T06:55:55
Apple researchers ran an AI test that exposed a fundamental 'intelligence' flaw
null
https://9to5mac.com/2024/11/01/apple-researchers-ran-an-ai-test-that-exposed-a-fundamental-intelligence-flaw/
8
7
[ 42024609, 42024868, 42026703, 42024570 ]
null
null
null
null
null
null
null
null
null
train
42,024,548
atdt
2024-11-02T06:57:43
Brazil's Farmers Are Plowing over an Ancient Amazon Civilization
null
https://www.bloomberg.com/graphics/2024-brazil-amazon-deforestation-ancient-civilization/
28
5
[ 42024881, 42027276 ]
null
null
null
null
null
null
null
null
null
train
42,024,575
thunderbong
2024-11-02T07:06:41
Zhong Zhong and Hua Hua
null
https://en.wikipedia.org/wiki/Zhong_Zhong_and_Hua_Hua
4
0
null
null
null
null
null
null
null
null
null
null
train
42,024,597
diginova
2024-11-02T07:13:32
The power of low agency – NeuralCalculus
null
https://priyavkaneria.com/posts/The-power-of-low-agency/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,024,598
AbenezerDaniel
2024-11-02T07:13:48
null
null
null
4
null
[ 42024878 ]
null
true
null
null
null
null
null
null
null
train
42,024,629
alex_x
2024-11-02T07:22:59
Next AI company should be about UI
null
https://x-x.codes/posts/next-ai-company-should-be-about-ui/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,024,646
ipnon
2024-11-02T07:28:28
Kill Your Heroes, Stop Doing It Harder (2012)
null
https://lethain.com/doing-it-harder-and-hero-programming/
2
0
null
null
null
null
null
null
null
null
null
null
train