Dataset schema (column names, Arrow types, and the per-column length/value statistics shown in the preview):

| Column | Type | Lengths / values |
| --- | --- | --- |
| id | int64 | 2 to 42.1M |
| by | large_string | lengths 2 to 15 |
| time | timestamp[us] | |
| title | large_string | lengths 0 to 198 |
| text | large_string | lengths 0 to 27.4k |
| url | large_string | lengths 0 to 6.6k |
| score | int64 | -1 to 6.02k |
| descendants | int64 | -1 to 7.29k |
| kids | large list | |
| deleted | large list | |
| dead | bool | 1 class |
| scraping_error | large_string | 25 values |
| scraped_title | large_string | lengths 1 to 59.3k |
| scraped_published_at | large_string | lengths 4 to 66 |
| scraped_byline | large_string | lengths 1 to 757 |
| scraped_body | large_string | lengths 1 to 50k |
| scraped_at | timestamp[us] | |
| scraped_language | large_string | 58 values |
| split | large_string | 1 value |
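The columns above follow an Arrow-backed schema of the kind a dataset viewer reports per column. As a rough, non-authoritative sketch of how a split with this schema could be consumed, assuming the data is hosted as a Hugging Face dataset; the repository id `user/hn-scraped-items` is a placeholder, not the dataset's real name:

```python
# Minimal sketch, assuming a Hugging Face-hosted dataset with the schema above.
# "user/hn-scraped-items" is a placeholder repository id, not the real one.
from datasets import load_dataset

ds = load_dataset("user/hn-scraped-items", split="train", streaming=True)

# Print the first live, successfully scraped item.
for row in ds:
    if row["dead"] or row["scraping_error"] != "no_error" or row["scraped_body"] is None:
        continue
    print(row["id"], row["title"], row["scraped_language"], len(row["scraped_body"]))
    break
```

The `no_error` and `no_article` values used for filtering here are the ones visible in the preview rows below; the other scraping_error classes are not shown in this preview.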
Sample rows from the train split follow. Empty cells are null. The deleted column is null for every previewed row, and the scraped_title, scraped_published_at, scraped_byline, scraped_body, and scraped_language columns are null for every row shown in the tables; the three rows whose article scrape succeeded (ids 42,034,372, 42,034,529, and 42,034,630) are listed individually, in order, with their full scraped content.

| id | by | time | title | text | url | score | descendants | kids | dead | scraping_error | scraped_at | split |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 42,033,938 | Shokizywire | 2024-11-03T16:24:30 | | | | 1 | | | true | | | train |
| 42,033,949 | ctoth | 2024-11-03T16:25:23 | 4 in 5 Americans think 'words can be violence' | | https://www.thefire.org/news/shocking-4-5-americans-think-words-can-be-violence | 9 | 9 | [42033998, 42036795, 42034381, 42034139, 42034275, 42034348] | | | | train |
| 42,033,970 | doener | 2024-11-03T16:29:18 | How Bad is This $10k PC from 10 Years Ago? [video] | | https://www.youtube.com/watch?v=n3XTZde8ZvQ | 3 | 0 | | | | | train |
| 42,033,979 | Urbschott | 2024-11-03T16:30:11 | | | | 1 | | | true | | | train |
| 42,033,990 | todsacerdoti | 2024-11-03T16:30:51 | Time Standards Reference | | https://geometrian.com/programming/reference/timestds/index.php | 2 | 0 | | | | | train |
| 42,033,991 | steve-chavez | 2024-11-03T16:31:03 | Nginx vs. Caddy Performance [video] | | https://www.youtube.com/watch?v=N5PAU-vYrN8 | 2 | 3 | [42034050, 42036018] | | no_article | 2024-11-07T09:58:24 | train |
| 42,033,996 | dtawfik1 | 2024-11-03T16:32:41 | The Role Inflammation in Exercise-Induced Reduction of Cellular Senescence | | https://gethealthspan.com/science/article/crucial-role-inflammation-exercise-cellular-senescence | 2 | 1 | [42034023] | | | | train |
| 42,034,014 | jordancutler | 2024-11-03T16:34:13 | Lessons I learned the hard way from 10 years as an engineer | | https://read.highgrowthengineer.com/p/5-lessons-i-learned-the-hard-way-from-10-years | 1 | 0 | | | | | train |
| 42,034,039 | nfriedly | 2024-11-03T16:37:59 | PPSSPP Emulator Submitted to iOS App Store for Review | | https://www.ppsspp.org/blog/ | 1 | 0 | | | | | train |
| 42,034,095 | richardatlarge | 2024-11-03T16:44:04 | Show HN: I wrote a techno-thriller on AI and need feedback (no signup) | | https://dl.bookfunnel.com/xoef5nkqxg | 3 | 1 | [42034116] | | | | train |
| 42,034,106 | PaulHoule | 2024-11-03T16:45:58 | Four takeaways from Pony AI's IPO filing | | https://techcrunch.com/2024/10/19/four-takeaways-from-pony-ais-ipo-filing/ | 1 | 0 | | | | | train |
| 42,034,140 | korky | 2024-11-03T16:51:51 | Measuring Walking Quality Through iPhone Mobility Metrics [pdf] | | https://www.apple.com/uk/healthcare/docs/site/Measuring_Walking_Quality_Through_iPhone_Mobility_Metrics.pdf | 3 | 1 | [42034159, 42034141] | | | | train |
| 42,034,145 | rntn | 2024-11-03T16:52:32 | Severe irritability in children and teens: A new understanding | | https://knowablemagazine.org/content/article/health-disease/2024/how-to-treat-disruptive-mood-dysregulation-disorder | 2 | 0 | | | | | train |
| 42,034,146 | millhouse1112 | 2024-11-03T16:52:35 | I couldn't find a free, no-login, no-AI checklist app–so I built one | | https://lalacheck.fly.dev/ | 105 | 130 | [42034774, 42034926, 42035059, 42034931, 42034840, 42034799, 42035050, 42034576, 42035166, 42035216, 42034645, 42047013, 42035052, 42035295, 42037645, 42037532, 42035127, 42034726, 42034606, 42034716, 42035157, 42034688, 42034589, 42037651, 42034880, 42035368, 42034721, 42034579, 42034670] | | | | train |
| 42,034,174 | bookofjoe | 2024-11-03T16:55:55 | Logging Is a Way of Life in Appalachia. It's Hanging on by a Thread. | | https://www.wsj.com/economy/trade/logging-is-a-way-of-life-in-appalachia-its-hanging-on-by-a-thread-e9a5f0a5 | 2 | 3 | [42034176, 42034458, 42034267] | | | | train |
| 42,034,181 | daoudc | 2024-11-03T16:56:54 | Re-ranking search results on the client side | | https://blog.mwmbl.org/articles/reranking-on-the-client-side/ | 4 | 0 | | | | | train |
| 42,034,184 | yarapavan | 2024-11-03T16:57:16 | Bill Watterson:'Some thoughts on the real world by one who glimpsed it and fled' | | https://speakola.com/grad/bill-watterson-calvin-hobbes-kenyon-1990 | 6 | 1 | [42034252] | | | | train |
| 42,034,196 | FrankRay78 | 2024-11-03T16:58:50 | Missing open-source contributor presents a dilemma when accepting their PR | | https://bettersoftware.uk/2024/11/03/missing-open-source-contributor-presents-a-dilemma-when-accepting-their-contribution/ | 90 | 68 | [42035995, 42036628, 42036277, 42041990, 42035797, 42037326, 42039084, 42036160, 42035952, 42035398, 42038169, 42041944, 42035749, 42035385, 42037731, 42035999, 42038050, 42038156, 42035512, 42037421, 42037348, 42040916, 42040994, 42040463, 42042094, 42036037, 42040089] | | | | train |
| 42,034,205 | FrankRay78 | 2024-11-03T16:59:33 | | | | 1 | | | true | | | train |
| 42,034,225 | ChrisArchitect | 2024-11-03T17:01:30 | An 'Interview' with a Dead Luminary Exposes the Pitfalls of A.I | | https://www.nytimes.com/2024/11/03/world/europe/poland-radio-station-ai.html | 3 | 2 | [42034239, 42034491] | | | | train |
| 42,034,229 | thunderbong | 2024-11-03T17:01:43 | What makes Japanese food packaging more innovative and user-centric? | | https://uxdesign.cc/what-makes-japanese-food-packaging-more-innovative-and-user-centric-than-its-western-counterparts-751264b24098 | 2 | 2 | [42034233, 42034486] | | | | train |
| 42,034,237 | shayonj | 2024-11-03T17:02:11 | pg_flo – Stream, transform, and re-route PostgreSQL data in real-time | | https://www.pgflo.io/ | 237 | 54 | [42039366, 42036901, 42035579, 42035613, 42039822, 42042746, 42042450, 42036936, 42037247, 42035526, 42040857, 42043625, 42036405, 42035301] | | | | train |
| 42,034,247 | javatuts | 2024-11-03T17:03:40 | How to Use Enums in JavaScript – A Complete Guide | | https://jsdev.space/howto/enums-javascript/ | 1 | 0 | | | | | train |
| 42,034,265 | fmfamaral | 2024-11-03T17:05:22 | Why LLMs Won't Make Human Editors Obsolete | | https://blog.temaki.ai/llms-human-editors-obsolete/ | 2 | 0 | | | | | train |
| 42,034,284 | kevinak | 2024-11-03T17:07:11 | Wolfensvelte 3D – a Svelte Wolfenstein clone rendered with the DOM | | https://github.com/snuffyDev/Wolfensvelte-3D | 1 | 0 | | | | | train |
| 42,034,297 | illaig | 2024-11-03T17:09:08 | | | | 1 | | | true | | | train |
| 42,034,300 | lxm | 2024-11-03T17:09:19 | $350 Oura Ring 4 Tracks Your Sleep. Is It Worth the Splurge? | | https://www.nytimes.com/2024/10/17/technology/personaltech/oura-ring-sleep-tracker.html | 2 | 1 | [42034626] | | | | train |
| 42,034,303 | crowdhailer | 2024-11-03T17:09:36 | You don't need loops [video] | | https://www.youtube.com/watch?v=92b5-yPZfxQ | 1 | 0 | | | | | train |
| 42,034,305 | xiande04 | 2024-11-03T17:09:42 | Claude Shannon: Mathematician, Engineer, Genius and Juggler? | | https://www.juggle.org/claude-shannon-mathematician-engineer-genius-juggler/ | 3 | 1 | [42035013] | | | | train |
| 42,034,309 | osintresearcher | 2024-11-03T17:09:50 | | | | 1 | | [42034310] | true | | | train |
| 42,034,325 | richardatlarge | 2024-11-03T17:11:44 | "The Self Who Came in from the Cold" – On Self vs. Trad Publishing | | https://degrandpre.substack.com/p/the-self-who-came-in-from-the-cold | 2 | 0 | | | | | train |
| 42,034,351 | smonsteriism | 2024-11-03T17:15:17 | Zen Browser Is a Toxic Community | | https://koreanmind.com/posts/zen-browser-is-a-toxic-community/ | 2 | 0 | [42034455] | | | | train |
| 42,034,357 | retube | 2024-11-03T17:16:12 | Ask HN: I am looking for a book of number facts | That is, a book with interesting facts about different numbers. Prob mostly integers, but maybe including stuff like pi, e etc. Something like this, but in book form:<p>https:&#x2F;&#x2F;mathigon.org&#x2F;almanac<p>It&#x27;s for a 12yo with an interest in maths and numbers.<p>I&#x27;ve googled, amazoned etc but can;t find anything. Thanks | | 2 | 4 | [42034492, 42036851] | | | | train |
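In the table above, row 42,033,991 has scraping_error set to no_article and a scraped_at timestamp but no scraped content, while the fully scraped rows shown below carry no_error. As a rough sketch of how the distribution of scrape outcomes over the column's 25 classes could be inspected (reusing the placeholder repository id from the earlier sketch):

```python
# Sketch: tally scraping_error values over a sample of the stream.
# "user/hn-scraped-items" is a placeholder repository id, not the real one.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("user/hn-scraped-items", split="train", streaming=True)

counts = Counter()
for i, row in enumerate(ds):
    counts[row["scraping_error"]] += 1  # None appears for rows with no scrape result in the preview
    if i >= 100_000:  # stop after a sample of the stream
        break

print(counts.most_common())
```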
**Row 42,034,372** (scraped article included; null fields omitted)

- id: 42,034,372 · by: wslh · time: 2024-11-03T17:17:28
- title: Infotainment · url: https://en.wikipedia.org/wiki/Infotainment
- score: 3 · descendants: 0 · kids: [42034379]
- scraping_error: no_error · scraped_title: Infotainment
- scraped_published_at: 2003-10-02T03:15:22Z · scraped_byline: Contributors to Wikimedia projects
- scraped_body (verbatim):
From Wikipedia, the free encyclopedia This article is about the medial format. For automotive infotainment systems, see In-car entertainment. For the Pitchshifter album, see Infotainment? Infotainment (a portmanteau of information and entertainment),[1] also called soft news as a way to distinguish it from serious journalism or hard news, is a type of media, usually television or online, that provides a combination of information and entertainment.[2] The term may be used disparagingly to devalue infotainment or soft news subjects in favor of more serious hard news subjects.[3][4] Infotainment-based websites and social media apps gained traction due to their focused publishing of infotainment content, e.g. BuzzFeed.[citation needed] The terms "infotainment" and "infotainer" were first used in September 1980 at the Joint Conference of ASLIB, the Institute of Information Scientists, and the Library Association in Sheffield, UK. [5] The Infotainers were a group of British information scientists who put on comedy shows at these professional conferences between 1980 and 1990.[citation needed] In 1983, "infotainment" began to see more popular usage,[1] and the infotainment style gradually began to replace soft news with communications theorists.[6] An earlier, slightly variant term, "infortainment" was the theme of the 1974 convention of the Intercollegiate Broadcasting System, the association of college radio stations in the United States. The event held April 5–7, 1974, at the Statler Hilton Hotel (now the Hotel Pennsylvania), defined the term as the "nexus between Information and Entertainment".[citation needed] Historically, the term infotainment was used to discredit woman journalists who were assigned soft news jobs. Soft news was expected to be consumed only by women,[7] but eventually it became its own genre of news media.[8] Infotainment as news[edit] Infotainment can generally be identified by its entertaining nature. Infotainment may also involve the use of flashy graphics, fast-paced editing, music, sensationalism, and sometimes satire to catch the viewer/readers' attention. Popular examples of infotainment shows include Larry King Live,[9] Entertainment Tonight, Hannity and Colmes, The Alex Jones Show, The Daily Show, and The Oprah Winfrey Show.[6] A precise academic consensus on the definition of what constitutes infotainment/soft news as opposed to hard news has not yet been reached. Many authors have commented that the ideas “are often not clearly defined or not defined at all”. [10] Multiple authors have published their ideas of what each type of media involves, but they vary widely. Wilbur Schramm was one of the first to describe a dichotomy between types of news in relation to human consumption. He separated news into a delayed reward class (including news of public affairs, economic matters, social problems, science, education and health), which closely resembles hard news, and an immediate reward class (including news of crime/corruption, accidents and disasters, sports, social events, and human interest) which closely resembles infotainment/soft news. [11] Some authors use only the topicality and timeliness aspects of a story to determine whether news is hard news or soft news; the more topical and timely, the "harder" and more serious the news is. 
Other authors have more complex definitions, defining hard news as "breaking events involving top leaders, major issues, or significant disruptions in the routines of daily life," and soft news as "news that typically is more personality-centered, less time-bound, more practical, and more incident-based than other news."[10] There may also be serious reports which are not event-driven—coverage of important social, economic, legal, or technological trends— investigative reports which uncover ongoing corruption, pollution, or immorality—or discussion of unsettled political issues without any special reason. Anniversaries, holidays, the end of a year or season, or the end of the first 100 days of an administration, can make some stories time-sensitive, but these reports provide more of an opportunity for reflection and analysis as opposed to a typical news report on a particular event. The spectrum of "seriousness" and "importance" is not well-defined, and different media organizations make different tradeoffs. "News you can use", a common marketing phrase highlighting a specific genre of journalism, spans the gray area. Tips, advice and hobby-based news fall at the infotainment end of this genre. Warnings about imminent natural disasters or acute domestic security threats are considered more serious, and other media programming (even non-news channels) is usually interrupted to announce these events as breaking news. The importance of "news you can use" on a personal level is rather subjective. Most infotainment television programs on networks and broadcast cable only contain general information on the subjects they cover and may not be considered to have high levels of substantive informational value. [12] For example, an infotainment broadcast may frame accusations of a celebrity or other individual committing a crime as a reality, with no verifiable factual support or evidence of such claims. Some disapprove of infotainment media, especially TV and cable, because it "seem[s] to hurtle from one event to another, often dwelling on trivial, celebrity-driven content."[13] Today's broadcasting of what is considered "hard" informative news is sometimes diluted with attributes of fiction or drama, and infotainment.[14] Some argue that a catalyst for this may be the acquisition of major news networks by conglomerates primarily based in the entertainment business (e.g. Viacom‐Paramount owned CBS News; ABC News has been part of the Disney corporation since 1996; CNN is a key constituent of Time‐Warner, Fox News is owned by Rupert Murdoch's News Corporation, one of the world's biggest media conglomerates).[15] The ownership structure can be traced using infotainment. For example, there may be an infotainment story on celebrities that are involved in the making of a movie produced by the news channel's parent company. In October 2010 at the Rally to Restore Sanity and/or Fear, American political satirist Jon Stewart made a metaphorical statement regarding the media today: "The press can hold its magnifying glass up to our problems . . . illuminating issues heretofore unseen, or they can use that magnifying glass to light ants on fire and then perhaps host a week of shows on the sudden, unexpected, dangerous flaming ant epidemic." This statement referred to the news media's ability to focus in on the real problems of people, and transform them into infotainment that is publicized to entertain, possibly exacerbating the issue at the same time. 
In a critique of infotainment, Bonnie Anderson of News Flash cited a CNN lead story on February 2, 2004 following the exposure of Janet Jackson's breast on national television. The follow-up story was about a ricin chemical attack on then-U.S. Senate Majority Leader Bill Frist.[16] Well-known infotainers[edit] Infotainers are entertainers in infotainment media, such as news anchors or satirists who cross the line between journalism (quasi-journalism) and entertainment. Barbara Walters, was for many an iconic infotainer; she pioneered many techniques still used by infotainment media today.[8] Other notable examples from U.S. media include Oprah Winfrey, Jon Stewart, Bill O’Reilly, Rachel Maddow, Alex Jones and Geraldo Rivera.[6] When Geraldo Rivera became the host of his own news-oriented talk show on CNBC, others within the NBC organization voiced their protest, including Tom Brokaw, who was reported to have threatened to quit. Rivera had a notorious history as a "sleaze reporter"[17] and tabloid talk show host, on which he and others would review controversial and sensationalistic topical subject matter. Infotainment is now able to reach an ever-growing audience through the widespread popularity and use of social media applications. In the case of social media websites such as Twitter and Facebook, which were originally created for the purpose of connecting, re-connecting and sharing personal thoughts and information with public, they have now provided a new medium for the spread of infotainment. The interactive nature of social media has also allowed for the consumers of infotainment to become producers, generating their own news and commentary, some of which is often used by journalists as material for stories. [15] The broadcast of important or interesting events was originally meant to inform society of local or international events for their own safety and awareness. However, local news broadcasters are more regularly covering local events in a way that provokes entertainment in viewers, with arresting footage, animated visuals, and rhetorical headlines that generate opinions.[15] The media's ability to tell and sell stories allows them the ability to not only to document tragedy, but to misrepresent or exploit it. As is seen in the news (with stories of extreme obesity or unusual deformities) some forms of infotainment can commodify real people through their personal tragedies or scandals.[citation needed] Amusing Ourselves to Death – 1985 book by Neil Postman Documentary television – Genre of television program Edutainment – Media designed to educate through entertainment In-car entertainment – Automotive entertainment system Infomercial – Long television commercial Junk food news – Sardonic term for trivial news stories Least objectionable program Popular science – Science content aimed at an audience of laymen Product placement – Marketing technique Subliminal stimuli – Sensory stimuli below an individual's threshold for conscious perception ^ a b "the definition of infotainment". Dictionary.com. ^ Demers, David, "Dictionary of Mss Communication and Media Research: a guide for students, scholars and professionals," Marquette, 2005, p.143. ^ Merriam- Webster, The Cambridge Online Dictionary ^ Cambridge Online Dictionary ^ Kwanya, Tom; Stilwell, Christine; Underwood, Peter G. (2015). Library 3.0 - Intelligent Libraries and Apomediation. Chandos Publishing. ISBN 978-1-84334-718-7. Retrieved 15 December 2023. ^ a b c "infotainment - television program". ^ Barker-Benfield, G. J. 
(16 October 1998). Portraits of American Women: From Settlement to the Present. Oxford University Press. p. 534. ISBN 9780195120486 – via Internet Archive. infotainment women journalists-Car. -cars. ^ a b "How Barbara Walters Invented the Internet". 16 May 2013. ^ "Larry King, Breezy Interviewer of the Famous and Infamous, Dies at 87". The New York Times. 2021-01-23. Retrieved 2021-01-23. ^ a b Reinemann, Carsten; Stanyer, James; Scherr, Sebastian; Legnante, Guido (2012-02-01). "Hard and soft news: A review of concepts, operationalizations and key findings". Journalism. 13 (2): 221–239. doi:10.1177/1464884911427803. ISSN 1464-8849. S2CID 5731016. ^ Schramm, Wilbur (1949-09-01). "The Nature of News". Journalism Quarterly. 26 (3): 259–269. doi:10.1177/107769904902600301. ISSN 0022-5533. S2CID 157511120. ^ Lehman-Wilzig, Sam N.; Seletzky, Michal (2010-02-01). "Hard news, soft news, 'general' news: The necessity and utility of an intermediate classification". Journalism. 11 (1): 37–56. doi:10.1177/1464884909350642. ISSN 1464-8849. S2CID 145451919. ^ Campbell, R., Martin, R. C, and Fabos, B. G. Media & culture: An introduction to mass communication. Bedford/St.Martin's, 2012 ^ Graber, Doris A. (1994-10-01). "The Infotainment Quotient in Routine Television News: A Director's Perspective". Discourse & Society. 5 (4): 483–508. doi:10.1177/0957926594005004004. ISSN 0957-9265. S2CID 145289321. ^ a b c Thussu, Daya Kishan (2015), "Infotainment", The International Encyclopedia of Political Communication, American Cancer Society, pp. 1–9, doi:10.1002/9781118541555.wbiepc152, ISBN 978-1-118-54155-5 ^ Anderson, Bonnie M. (2004). News Flash. Wiley. pp. 1, 33. ^ Kolbert, Elizabeth (1997-07-20). "Campaigning to Be Seen in a New Light". The New York Times. ISSN 0362-4331. Retrieved 2021-03-20. "Soft news and critical journalism eroding audiences" "Tough times for hard news, but good journalism goes on"
- scraped_at: 2024-11-08T00:20:22 · scraped_language: en · split: train
| id | by | time | title | text | url | score | descendants | kids | dead | scraping_error | scraped_at | split |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 42,034,375 | congzhangzh | 2024-11-03T17:17:32 | Show HN: A new webview binding for Python call 4 test | | https://github.com/congzhangzh/webview_python | 1 | 1 | [42034376] | | | | train |
| 42,034,395 | marvelmaniac | 2024-11-03T17:20:26 | Looking for part time pentesting opportunities | Hey! I&#x27;m a security researcher&#x2F;Bug Bounty hunter. I want to explore part time pentesting opportunities. Shoot me a DM on twitter if you are interested in working with me :)<p>My bio - http:&#x2F;&#x2F;www.0xmarvelmaniac.in&#x2F;p&#x2F;about-me.html | | 1 | 5 | [42034641, 42034628, 42034418] | | | | train |
| 42,034,400 | spodcork | 2024-11-03T17:20:42 | | | | 1 | | | true | | | train |
| 42,034,411 | ayoisaiah | 2024-11-03T17:22:13 | Show HN: F2 – A Command-Line Batch Renaming Tool | Hey HN!<p>I&#x27;m excited to share f2, a command-line tool I built for fast and flexible bulk renaming of files. It&#x27;s cross-platform (Linux, macOS, Windows), executes a dry-run by default, supports undo, and provides incredible flexibility in file renaming with several built-in variables and Exiftool integration.<p>If you&#x27;ve ever struggled with renaming large batches of files or wished for more control and speed when managing file names, this might be just what you need. I hope you find it useful! | https://github.com/ayoisaiah/f2 | 3 | 0 | | | | | train |
| 42,034,419 | throw0101d | 2024-11-03T17:23:22 | The passive house trend is booming | | https://www.washingtonpost.com/home/2024/10/30/passive-house-adoption-surges/ | 4 | 3 | [42034472, 42034614, 42034425] | | | | train |
| 42,034,467 | playnext | 2024-11-03T17:27:46 | Show HN: Kitcat - A Matplotlib backend for terminal plotting | Matplotlib backend for direct plotting in the terminal using Kitty graphics protocol. | https://mil.ad/blog/2024/kitcat.html | 3 | 0 | | | | | train |
| 42,034,473 | brideoflinux | 2024-11-03T17:28:26 | | | | 1 | | | true | | | train |
| 42,034,496 | thunderbong | 2024-11-03T17:30:47 | Windhawk: The customization marketplace for Windows and programs | | https://windhawk.net/ | 1 | 0 | | | no_article | 2024-11-08T12:15:35 | train |
| 42,034,521 | ahmetsait | 2024-11-03T17:32:52 | ToolGit: A collection of scripts that extend Git with various sub-commands | | https://github.com/ahmetsait/toolgit | 111 | 55 | [42035550, 42035536, 42035889, 42037057, 42038841, 42049958, 42037712, 42038827, 42035043, 42039502, 42035355, 42035190, 42035142, 42035077, 42037345, 42040008] | | | | train |
| 42,034,527 | evgandr | 2024-11-03T17:33:40 | | | | 1 | | [42034528] | true | | | train |
**Row 42,034,529** (scraped article included; null fields omitted)

- id: 42,034,529 · by: shcheklein · time: 2024-11-03T17:33:42
- title: Transform and optimize datasets for fast AI model training · url: https://github.com/Lightning-AI/litdata
- score: 1 · descendants: 0
- scraping_error: no_error · scraped_title: GitHub - Lightning-AI/litdata: Transform datasets at scale. Optimize datasets for fast AI model training.
- scraped_byline: Lightning-AI
- scraped_body (verbatim):
Transform datasets at scale. Optimize data for fast AI model training. Transform Optimize ✅ Parallelize data processing ✅ Stream large cloud datasets ✅ Create vector embeddings ✅ Accelerate training by 20x ✅ Run distributed inference ✅ Pause and resume data streaming ✅ Scrape websites at scale ✅ Use remote data without local loading Lightning AI • Quick start • Optimize data • Transform data • Features • Benchmarks • Templates • Community Transform data at scale. Optimize for fast model training. LitData scales data processing tasks (data scraping, image resizing, distributed inference, embedding creation) on local or cloud machines. It also enables optimizing datasets to accelerate AI model training and work with large remote datasets without local loading. Quick start First, install LitData: Choose your workflow: 🚀 Speed up model training 🚀 Transform datasets Advanced install Install all the extras pip install 'litdata[extras]' Speed up model training Accelerate model training (20x faster) by optimizing datasets for streaming directly from cloud storage. Work with remote data without local downloads with features like loading data subsets, accessing individual samples, and resumable streaming. Step 1: Optimize the data This step will format the dataset for fast loading. The data will be written in a chunked binary format. import numpy as np from PIL import Image import litdata as ld def random_images(index): fake_images = Image.fromarray(np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)) fake_labels = np.random.randint(10) # You can use any key:value pairs. Note that their types must not change between samples, and Python lists must # always contain the same number of elements with the same types. data = {"index": index, "image": fake_images, "class": fake_labels} return data if __name__ == "__main__": # The optimize function writes data in an optimized format. ld.optimize( fn=random_images, # the function applied to each input inputs=list(range(1000)), # the inputs to the function (here it's a list of numbers) output_dir="fast_data", # optimized data is stored here num_workers=4, # The number of workers on the same machine chunk_bytes="64MB" # size of each chunk ) Step 2: Put the data on the cloud Upload the data to a Lightning Studio (backed by S3) or your own S3 bucket: aws s3 cp --recursive fast_data s3://my-bucket/fast_data Step 3: Stream the data during training Load the data by replacing the PyTorch DataSet and DataLoader with the StreamingDataset and StreamingDataloader import litdata as ld train_dataset = ld.StreamingDataset('s3://my-bucket/fast_data', shuffle=True, drop_last=True) train_dataloader = ld.StreamingDataLoader(train_dataset) for sample in train_dataloader: img, cls = sample['image'], sample['class'] Key benefits: ✅ Accelerate training: Optimized datasets load 20x faster. ✅ Stream cloud datasets: Work with cloud data without downloading it. ✅ Pytorch-first: Works with PyTorch libraries like PyTorch Lightning, Lightning Fabric, Hugging Face. ✅ Easy collaboration: Share and access datasets in the cloud, streamlining team projects. ✅ Scale across GPUs: Streamed data automatically scales to all GPUs. ✅ Flexible storage: Use S3, GCS, Azure, or your own cloud account for data storage. ✅ Compression: Reduce your data footprint by using advanced compression algorithms. ✅ Run local or cloud: Run on your own machines or auto-scale to 1000s of cloud GPUs with Lightning Studios. ✅ Enterprise security: Self host or process data on your cloud account with Lightning Studios. 
Transform datasets Accelerate data processing tasks (data scraping, image resizing, embedding creation, distributed inference) by parallelizing (map) the work across many machines at once. Here's an example that resizes and crops a large image dataset: from PIL import Image import litdata as ld # use a local or S3 folder input_dir = "my_large_images" # or "s3://my-bucket/my_large_images" output_dir = "my_resized_images" # or "s3://my-bucket/my_resized_images" inputs = [os.path.join(input_dir, f) for f in os.listdir(input_dir)] # resize the input image def resize_image(image_path, output_dir): output_image_path = os.path.join(output_dir, os.path.basename(image_path)) Image.open(image_path).resize((224, 224)).save(output_image_path) ld.map( fn=resize_image, inputs=inputs, output_dir="output_dir", ) Key benefits: ✅ Parallelize processing: Reduce processing time by transforming data across multiple machines simultaneously. ✅ Scale to large data: Increase the size of datasets you can efficiently handle. ✅ Flexible usecases: Resize images, create embeddings, scrape the internet, etc... ✅ Run local or cloud: Run on your own machines or auto-scale to 1000s of cloud GPUs with Lightning Studios. ✅ Enterprise security: Self host or process data on your cloud account with Lightning Studios. Key Features Features for optimizing and streaming datasets for model training ✅ Stream large cloud datasets   Use data stored on the cloud without needing to download it all to your computer, saving time and space. Imagine you're working on a project with a huge amount of data stored online. Instead of waiting hours to download it all, you can start working with the data almost immediately by streaming it. Once you've optimized the dataset with LitData, stream it as follows: from litdata import StreamingDataset, StreamingDataLoader dataset = StreamingDataset('s3://my-bucket/my-data', shuffle=True) dataloader = StreamingDataLoader(dataset, batch_size=64) for batch in dataloader: process(batch) # Replace with your data processing logic Additionally, you can inject client connection settings for S3 or GCP when initializing your dataset. This is useful for specifying custom endpoints and credentials per dataset. from litdata import StreamingDataset storage_options = { "endpoint_url": "your_endpoint_url", "aws_access_key_id": "your_access_key_id", "aws_secret_access_key": "your_secret_access_key", } dataset = StreamingDataset('s3://my-bucket/my-data', storage_options=storage_options) Also, you can specify a custom cache directory when initializing your dataset. This is useful when you want to store the cache in a specific location. from litdata import StreamingDataset # Initialize the StreamingDataset with the custom cache directory dataset = StreamingDataset('s3://my-bucket/my-data', cache_dir="/path/to/cache") ✅ Streams on multi-GPU, multi-node Data optimized and loaded with Lightning automatically streams efficiently in distributed training across GPUs or multi-node. The StreamingDataset and StreamingDataLoader automatically make sure each rank receives the same quantity of varied batches of data, so it works out of the box with your favorite frameworks (PyTorch Lightning, Lightning Fabric, or PyTorch) to do distributed training. Here you can see an illustration showing how the Streaming Dataset works with multi node / multi gpu under the hood. from litdata import StreamingDataset, StreamingDataLoader # For the training dataset, don't forget to enable shuffle and drop_last !!! 
train_dataset = StreamingDataset('s3://my-bucket/my-train-data', shuffle=True, drop_last=True) train_dataloader = StreamingDataLoader(train_dataset, batch_size=64) for batch in train_dataloader: process(batch) # Replace with your data processing logic val_dataset = StreamingDataset('s3://my-bucket/my-val-data', shuffle=False, drop_last=False) val_dataloader = StreamingDataLoader(val_dataset, batch_size=64) for batch in val_dataloader: process(batch) # Replace with your data processing logic ✅ Stream from multiple cloud providers The StreamingDataset supports reading optimized datasets from common cloud providers. import os import litdata as ld # Read data from AWS S3 aws_storage_options={ "AWS_ACCESS_KEY_ID": os.environ['AWS_ACCESS_KEY_ID'], "AWS_SECRET_ACCESS_KEY": os.environ['AWS_SECRET_ACCESS_KEY'], } dataset = ld.StreamingDataset("s3://my-bucket/my-data", storage_options=aws_storage_options) # Read data from GCS gcp_storage_options={ "project": os.environ['PROJECT_ID'], } dataset = ld.StreamingDataset("gs://my-bucket/my-data", storage_options=gcp_storage_options) # Read data from Azure azure_storage_options={ "account_url": f"https://{os.environ['AZURE_ACCOUNT_NAME']}.blob.core.windows.net", "credential": os.environ['AZURE_ACCOUNT_ACCESS_KEY'] } dataset = ld.StreamingDataset("azure://my-bucket/my-data", storage_options=azure_storage_options) ✅ Pause, resume data streaming   Stream data during long training, if interrupted, pick up right where you left off without any issues. LitData provides a stateful Streaming DataLoader e.g. you can pause and resume your training whenever you want. Info: The Streaming DataLoader was used by Lit-GPT to pretrain LLMs. Restarting from an older checkpoint was critical to get to pretrain the full model due to several failures (network, CUDA Errors, etc..). import os import torch from litdata import StreamingDataset, StreamingDataLoader dataset = StreamingDataset("s3://my-bucket/my-data", shuffle=True) dataloader = StreamingDataLoader(dataset, num_workers=os.cpu_count(), batch_size=64) # Restore the dataLoader state if it exists if os.path.isfile("dataloader_state.pt"): state_dict = torch.load("dataloader_state.pt") dataloader.load_state_dict(state_dict) # Iterate over the data for batch_idx, batch in enumerate(dataloader): # Store the state every 1000 batches if batch_idx % 1000 == 0: torch.save(dataloader.state_dict(), "dataloader_state.pt") ✅ LLM Pre-training   LitData is highly optimized for LLM pre-training. First, we need to tokenize the entire dataset and then we can consume it. import json from pathlib import Path import zstandard as zstd from litdata import optimize, TokensLoader from tokenizer import Tokenizer from functools import partial # 1. Define a function to convert the text within the jsonl files into tokens def tokenize_fn(filepath, tokenizer=None): with zstd.open(open(filepath, "rb"), "rt", encoding="utf-8") as f: for row in f: text = json.loads(row)["text"] if json.loads(row)["meta"]["redpajama_set_name"] == "RedPajamaGithub": continue # exclude the GitHub data since it overlaps with starcoder text_ids = tokenizer.encode(text, bos=False, eos=True) yield text_ids if __name__ == "__main__": # 2. Generate the inputs (we are going to optimize all the compressed json files from SlimPajama dataset ) input_dir = "./slimpajama-raw" inputs = [str(file) for file in Path(f"{input_dir}/SlimPajama-627B/train").rglob("*.zst")] # 3. 
Store the optimized data wherever you want under "/teamspace/datasets" or "/teamspace/s3_connections" outputs = optimize( fn=partial(tokenize_fn, tokenizer=Tokenizer(f"{input_dir}/checkpoints/Llama-2-7b-hf")), # Note: You can use HF tokenizer or any others inputs=inputs, output_dir="./slimpajama-optimized", chunk_size=(2049 * 8012), # This is important to inform LitData that we are encoding contiguous 1D array (tokens). # LitData skips storing metadata for each sample e.g all the tokens are concatenated to form one large tensor. item_loader=TokensLoader(), ) import os from litdata import StreamingDataset, StreamingDataLoader, TokensLoader from tqdm import tqdm # Increase by one because we need the next word as well dataset = StreamingDataset( input_dir=f"./slimpajama-optimized/train", item_loader=TokensLoader(block_size=2048 + 1), shuffle=True, drop_last=True, ) train_dataloader = StreamingDataLoader(dataset, batch_size=8, pin_memory=True, num_workers=os.cpu_count()) # Iterate over the SlimPajama dataset for batch in tqdm(train_dataloader): pass ✅ Combine datasets   Mix and match different sets of data to experiment and create better models. Combine datasets with CombinedStreamingDataset. As an example, this mixture of Slimpajama & StarCoder was used in the TinyLLAMA project to pretrain a 1.1B Llama model on 3 trillion tokens. from litdata import StreamingDataset, CombinedStreamingDataset, StreamingDataLoader, TokensLoader from tqdm import tqdm import os train_datasets = [ StreamingDataset( input_dir="s3://tinyllama-template/slimpajama/train/", item_loader=TokensLoader(block_size=2048 + 1), # Optimized loader for tokens used by LLMs shuffle=True, drop_last=True, ), StreamingDataset( input_dir="s3://tinyllama-template/starcoder/", item_loader=TokensLoader(block_size=2048 + 1), # Optimized loader for tokens used by LLMs shuffle=True, drop_last=True, ), ] # Mix SlimPajama data and Starcoder data with these proportions: weights = (0.693584, 0.306416) combined_dataset = CombinedStreamingDataset(datasets=train_datasets, seed=42, weights=weights, iterate_over_all=False) train_dataloader = StreamingDataLoader(combined_dataset, batch_size=8, pin_memory=True, num_workers=os.cpu_count()) # Iterate over the combined datasets for batch in tqdm(train_dataloader): pass ✅ Merge datasets   Merge multiple optimized datasets into one. import numpy as np from PIL import Image from litdata import StreamingDataset, merge_datasets, optimize def random_images(index): return { "index": index, "image": Image.fromarray(np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)), "class": np.random.randint(10), } if __name__ == "__main__": out_dirs = ["fast_data_1", "fast_data_2", "fast_data_3", "fast_data_4"] # or ["s3://my-bucket/fast_data_1", etc.]" for out_dir in out_dirs: optimize(fn=random_images, inputs=list(range(250)), output_dir=out_dir, num_workers=4, chunk_bytes="64MB") merged_out_dir = "merged_fast_data" # or "s3://my-bucket/merged_fast_data" merge_datasets(input_dirs=out_dirs, output_dir=merged_out_dir) dataset = StreamingDataset(merged_out_dir) print(len(dataset)) # out: 1000 ✅ Split datasets for train, val, test Split a dataset into train, val, test splits with train_test_split. 
from litdata import StreamingDataset, train_test_split dataset = StreamingDataset("s3://my-bucket/my-data") # data are stored in the cloud print(len(dataset)) # display the length of your data # out: 100,000 train_dataset, val_dataset, test_dataset = train_test_split(dataset, splits=[0.3, 0.2, 0.5]) print(train_dataset) # out: 30,000 print(val_dataset) # out: 20,000 print(test_dataset) # out: 50,000 ✅ Load a subset of the remote dataset   Work on a smaller, manageable portion of your data to save time and resources. from litdata import StreamingDataset, train_test_split dataset = StreamingDataset("s3://my-bucket/my-data", subsample=0.01) # data are stored in the cloud print(len(dataset)) # display the length of your data # out: 1000 ✅ Easily modify optimized cloud datasets   Add new data to an existing dataset or start fresh if needed, providing flexibility in data management. LitData optimized datasets are assumed to be immutable. However, you can make the decision to modify them by changing the mode to either append or overwrite. from litdata import optimize, StreamingDataset def compress(index): return index, index**2 if __name__ == "__main__": # Add some data optimize( fn=compress, inputs=list(range(100)), output_dir="./my_optimized_dataset", chunk_bytes="64MB", ) # Later on, you add more data optimize( fn=compress, inputs=list(range(100, 200)), output_dir="./my_optimized_dataset", chunk_bytes="64MB", mode="append", ) ds = StreamingDataset("./my_optimized_dataset") assert len(ds) == 200 assert ds[:] == [(i, i**2) for i in range(200)] The overwrite mode will delete the existing data and start from fresh. ✅ Use compression   Reduce your data footprint by using advanced compression algorithms. import litdata as ld def compress(index): return index, index**2 if __name__ == "__main__": # Add some data ld.optimize( fn=compress, inputs=list(range(100)), output_dir="./my_optimized_dataset", chunk_bytes="64MB", num_workers=1, compression="zstd" ) Using zstd, you can achieve high compression ratio like 4.34x for this simple example. Without With 2.8kb 646b ✅ Access samples without full data download   Look at specific parts of a large dataset without downloading the whole thing or loading it on a local machine. from litdata import StreamingDataset dataset = StreamingDataset("s3://my-bucket/my-data") # data are stored in the cloud print(len(dataset)) # display the length of your data print(dataset[42]) # show the 42th element of the dataset ✅ Use any data transforms   Customize how your data is processed to better fit your needs. Subclass the StreamingDataset and override its __getitem__ method to add any extra data transformations. from litdata import StreamingDataset, StreamingDataLoader import torchvision.transforms.v2.functional as F class ImagenetStreamingDataset(StreamingDataset): def __getitem__(self, index): image = super().__getitem__(index) return F.resize(image, (224, 224)) dataset = ImagenetStreamingDataset(...) dataloader = StreamingDataLoader(dataset, batch_size=4) for batch in dataloader: print(batch.shape) # Out: (4, 3, 224, 224) ✅ Profile data loading speed   Measure and optimize how fast your data is being loaded, improving efficiency. The StreamingDataLoader supports profiling of your data loading process. Simply use the profile_batches argument to specify the number of batches you want to profile: from litdata import StreamingDataset, StreamingDataLoader StreamingDataLoader(..., profile_batches=5) This generates a Chrome trace called result.json. 
Then, visualize this trace by opening Chrome browser at the chrome://tracing URL and load the trace inside. ✅ Reduce memory use for large files   Handle large data files efficiently without using too much of your computer's memory. When processing large files like compressed parquet files, use the Python yield keyword to process and store one item at the time, reducing the memory footprint of the entire program. from pathlib import Path import pyarrow.parquet as pq from litdata import optimize from tokenizer import Tokenizer from functools import partial # 1. Define a function to convert the text within the parquet files into tokens def tokenize_fn(filepath, tokenizer=None): parquet_file = pq.ParquetFile(filepath) # Process per batch to reduce RAM usage for batch in parquet_file.iter_batches(batch_size=8192, columns=["content"]): for text in batch.to_pandas()["content"]: yield tokenizer.encode(text, bos=False, eos=True) # 2. Generate the inputs input_dir = "/teamspace/s3_connections/tinyllama-template" inputs = [str(file) for file in Path(f"{input_dir}/starcoderdata").rglob("*.parquet")] # 3. Store the optimized data wherever you want under "/teamspace/datasets" or "/teamspace/s3_connections" outputs = optimize( fn=partial(tokenize_fn, tokenizer=Tokenizer(f"{input_dir}/checkpoints/Llama-2-7b-hf")), # Note: Use HF tokenizer or any others inputs=inputs, output_dir="/teamspace/datasets/starcoderdata", chunk_size=(2049 * 8012), # Number of tokens to store by chunks. This is roughly 64MB of tokens per chunk. ) ✅ Limit local cache space   Limit the amount of disk space used by temporary files, preventing storage issues. Adapt the local caching limit of the StreamingDataset. This is useful to make sure the downloaded data chunks are deleted when used and the disk usage stays low. from litdata import StreamingDataset dataset = StreamingDataset(..., max_cache_size="10GB") ✅ Change cache directory path   Specify the directory where cached files should be stored, ensuring efficient data retrieval and management. This is particularly useful for organizing your data storage and improving access times. from litdata import StreamingDataset from litdata.streaming.cache import Dir cache_dir = "/path/to/your/cache" data_dir = "s3://my-bucket/my_optimized_dataset" dataset = StreamingDataset(input_dir=Dir(path=cache_dir, url=data_dir)) ✅ Optimize loading on networked drives   Optimize data handling for computers on a local network to improve performance for on-site setups. On-prem compute nodes can mount and use a network drive. A network drive is a shared storage device on a local area network. In order to reduce their network overload, the StreamingDataset supports caching the data chunks. from litdata import StreamingDataset dataset = StreamingDataset(input_dir="local:/data/shared-drive/some-data") ✅ Optimize dataset in distributed environment   Lightning can distribute large workloads across hundreds of machines in parallel. This can reduce the time to complete a data processing task from weeks to minutes by scaling to enough machines. 
To apply the optimize operator across multiple machines, simply provide the num_nodes and machine arguments to it as follows: import os from litdata import optimize, Machine def compress(index): return (index, index ** 2) optimize( fn=compress, inputs=list(range(100)), num_workers=2, output_dir="my_output", chunk_bytes="64MB", num_nodes=2, machine=Machine.DATA_PREP, # You can select between dozens of optimized machines ) If the output_dir is a local path, the optimized dataset will be present in: /teamspace/jobs/{job_name}/nodes-0/my_output. Otherwise, it will be stored in the specified output_dir. Read the optimized dataset: from litdata import StreamingDataset output_dir = "/teamspace/jobs/litdata-optimize-2024-07-08/nodes.0/my_output" dataset = StreamingDataset(output_dir) print(dataset[:]) ✅ Encrypt, decrypt data at chunk/sample level   Secure data by applying encryption to individual samples or chunks, ensuring sensitive information is protected during storage. This example shows how to use the FernetEncryption class for sample-level encryption with a data optimization function. from litdata import optimize from litdata.utilities.encryption import FernetEncryption import numpy as np from PIL import Image # Initialize FernetEncryption with a password for sample-level encryption fernet = FernetEncryption(password="your_secure_password", level="sample") data_dir = "s3://my-bucket/optimized_data" def random_image(index): """Generate a random image for demonstration purposes.""" fake_img = Image.fromarray(np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8)) return {"image": fake_img, "class": index} # Optimize data while applying encryption optimize( fn=random_image, inputs=list(range(5)), # Example inputs: [0, 1, 2, 3, 4] num_workers=1, output_dir=data_dir, chunk_bytes="64MB", encryption=fernet, ) # Save the encryption key to a file for later use fernet.save("fernet.pem") Load the encrypted data using the StreamingDataset class as follows: from litdata import StreamingDataset from litdata.utilities.encryption import FernetEncryption # Load the encryption key fernet = FernetEncryption(password="your_secure_password", level="sample") fernet.load("fernet.pem") # Create a streaming dataset for reading the encrypted samples ds = StreamingDataset(input_dir=data_dir, encryption=fernet) Implement your own encryption method: Subclass the Encryption class and define the necessary methods: from litdata.utilities.encryption import Encryption class CustomEncryption(Encryption): def encrypt(self, data): # Implement your custom encryption logic here return data def decrypt(self, data): # Implement your custom decryption logic here return data This allows the data to remain secure while maintaining flexibility in the encryption method. Features for transforming datasets ✅ Parallelize data transformations (map)   Apply the same change to different parts of the dataset at once to save time and effort. The map operator can be used to apply a function over a list of inputs. Here is an example where the map operator is used to apply a resize_image function over a folder of large images. from litdata import map from PIL import Image # Note: Inputs could also refer to files on s3 directly. input_dir = "my_large_images" inputs = [os.path.join(input_dir, f) for f in os.listdir(input_dir)] # The resize image takes one of the input (image_path) and the output directory. # Files written to output_dir are persisted. 
def resize_image(image_path, output_dir): output_image_path = os.path.join(output_dir, os.path.basename(image_path)) Image.open(image_path).resize((224, 224)).save(output_image_path) map( fn=resize_image, inputs=inputs, output_dir="s3://my-bucket/my_resized_images", ) Benchmarks In this section we show benchmarks for speed to optimize a dataset and the resulting streaming speed (Reproduce the benchmark). Streaming speed Data optimized and streamed with LitData achieves a 20x speed up over non optimized data and 2x speed up over other streaming solutions. Speed to stream Imagenet 1.2M from AWS S3: Framework Images / sec 1st Epoch (float32) Images / sec 2nd Epoch (float32) Images / sec 1st Epoch (torch16) Images / sec 2nd Epoch (torch16) LitData 5800 6589 6282 7221 Web Dataset 3134 3924 3343 4424 Mosaic ML 2898 5099 2809 5158 Benchmark details   Imagenet-1.2M dataset contains 1,281,167 images. To align with other benchmarks, we measured the streaming speed (images per second) loaded from AWS S3 for several frameworks. Time to optimize data LitData optimizes the Imagenet dataset for fast training 3-5x faster than other frameworks: Time to optimize 1.2 million ImageNet images (Faster is better): Framework Train Conversion Time Val Conversion Time Dataset Size # Files LitData 10:05 min 00:30 min 143.1 GB 2.339 Web Dataset 32:36 min 01:22 min 147.8 GB 1.144 Mosaic ML 49:49 min 01:04 min 143.1 GB 2.298 Parallelize transforms and data optimization on cloud machines Parallelize data transforms Transformations with LitData are linearly parallelizable across machines. For example, let's say that it takes 56 hours to embed a dataset on a single A10G machine. With LitData, this can be speed up by adding more machines in parallel Number of machines Hours 1 56 2 28 4 14 ... ... 64 0.875 To scale the number of machines, run the processing script on Lightning Studios: from litdata import map, Machine map( ... num_nodes=32, machine=Machine.DATA_PREP, # Select between dozens of optimized machines ) Parallelize data optimization To scale the number of machines for data optimization, use Lightning Studios: from litdata import optimize, Machine optimize( ... num_nodes=32, machine=Machine.DATA_PREP, # Select between dozens of optimized machines ) Example: Process the LAION 400 million image dataset in 2 hours on 32 machines, each with 32 CPUs. Start from a template Below are templates for real-world applications of LitData at scale. Templates: Transform datasets Studio Data type Time (minutes) Machines Dataset Download LAION-400MILLION dataset Image & Text 120 32 LAION-400M Tokenize 2M Swedish Wikipedia Articles Text 7 4 Swedish Wikipedia Embed English Wikipedia under 5 dollars Text 15 3 English Wikipedia Templates: Optimize + stream data Studio Data type Time (minutes) Machines Dataset Benchmark cloud data-loading libraries Image & Label 10 1 Imagenet 1M Optimize GeoSpatial data for model training Image & Mask 120 32 Chesapeake Roads Spatial Context Optimize TinyLlama 1T dataset for training Text 240 32 SlimPajama & StarCoder Optimize parquet files for model training Parquet Files 12 16 Randomly Generated data Community LitData is a community project accepting contributions - Let's make the world's most advanced AI data processing framework. 💬 Get help on Discord 📋 License: Apache 2.0
- scraped_at: 2024-11-07T20:03:29 · scraped_language: en · split: train
| id | by | time | title | text | url | score | descendants | kids | dead | scraping_error | scraped_at | split |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 42,034,534 | PaulHoule | 2024-11-03T17:34:17 | Engineers invent high-yield atmospheric water capture device for arid regions | | https://techxplore.com/news/2024-10-high-yield-atmospheric-capture-device.html | 17 | 9 | [42050128, 42035257, 42035227, 42035790, 42036432, 42036188] | | | | train |
| 42,034,535 | nafnlj | 2024-11-03T17:34:19 | | | | 1 | | | true | | | train |
| 42,034,543 | yaa110 | 2024-11-03T17:35:24 | PubSub for Golang | | https://github.com/yaa110/pubsub | 2 | 0 | [42034544] | | | | train |
| 42,034,552 | drcwpl | 2024-11-03T17:36:58 | From the London Cholera Outbreak to the Opioid Crisis: Ignoring the Data | | https://onepercentrule.substack.com/p/ignoring-the-data-at-our-peril | 3 | 1 | [42034563] | | | | train |
| 42,034,559 | mooreds | 2024-11-03T17:38:11 | Tracing Colonial Mexico with Maps and Ink (2020) | | https://magazine.tcu.edu/spring-2020/tracing-colonial-mexico-with-maps-and-ink/ | 3 | 0 | | | | | train |
| 42,034,597 | Teever | 2024-11-03T17:42:59 | X-Therma Achieves First Subzero Organ Transports | | https://x-therma.com/news/x-therma-achieves-worlds-first-subzero-organ-transports-multiple-48-hour-transatlantic-journeys-support-first-steps-toward-tackling-organ-waitlist/ | 2 | 0 | | | | | train |
**Row 42,034,630** (scraped article included; null fields omitted; record truncated at the end of this preview)

- id: 42,034,630 · by: dataflow · time: 2024-11-03T17:47:12
- title: Fairness Doctrine · url: https://en.wikipedia.org/wiki/Fairness_doctrine
- score: 2 · descendants: 0
- scraping_error: no_error · scraped_title: Fairness doctrine
- scraped_published_at: 2004-04-06T07:19:38Z · scraped_byline: Contributors to Wikimedia projects
- scraped_body (verbatim, cut off mid-sentence):
For the concept of sovereign immunity, see Feres doctrine. The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints.[1] In 1987, the FCC abolished the fairness doctrine,[2] prompting some to urge its reintroduction through either Commission policy or congressional legislation.[3] The FCC removed the rule that implemented the policy from the Federal Register in August 2011.[4] The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters. Stations were given wide latitude as to how to provide contrasting views: It could be done through news segments, public affairs shows, or editorials. The doctrine did not require equal time for opposing views but required that contrasting viewpoints be presented. The demise of this FCC rule has been cited as a contributing factor in the rising level of party polarization in the United States.[5][6] While the original purpose of the doctrine was to ensure that viewers were exposed to a diversity of viewpoints, it was used by both the Kennedy and later the Johnson administration to combat political opponents operating on talk radio. In 1969 the United States Supreme Court, in Red Lion Broadcasting Co. v. FCC, upheld the FCC's general right to enforce the fairness doctrine where channels were limited. However, the court did not rule that the FCC was obliged to do so.[7] The courts reasoned that the scarcity of the broadcast spectrum, which limited the opportunity for access to the airwaves, created a need for the doctrine. The fairness doctrine is not the same as the equal-time rule, which is still in place. The fairness doctrine deals with discussion of controversial issues, while the equal-time rule deals only with political candidates. In 1938, Lawrence J. Flynn, a former Yankee Network employee, challenged the license of John Shepard III's WAAB in Boston, and lodged a complaint about WNAC. Flynn asserted that these stations were being used to air one-sided political viewpoints and broadcast attacks, including editorials, against local and federal politicians that Shepard opposed. The FCC requested that Shepard provide details about these programs. To appease the commission, the Yankee Network agreed to drop the editorials. Flynn created a company called Mayflower Broadcasting and tried to get the FCC to award him WAAB's license. The FCC refused. In 1941, the commission made a ruling that came to be known as the Mayflower Decision, which declared that radio stations, due to their public interest obligations, must remain neutral in matters of news and politics, and they were not allowed to give editorial support to any particular political position or candidate. In 1949, the FCC's Editorializing Report[8] repealed the Mayflower doctrine, which since 1941 had forbidden on-air editorializing. 
This laid the foundation for the fairness doctrine, by reaffirming the FCC's holding that licensees must not use their stations "for the private interest, whims or caprices [of licensees], but in a manner which will serve the community generally."[9][10] The FCC Report established two forms of regulation on broadcasters: to provide adequate coverage of public issues, and to ensure that coverage fairly represented opposing views.[11] The second rule required broadcasters to provide reply time to issue-oriented citizens. Broadcasters could therefore trigger fairness doctrine complaints without editorializing. The commission required neither of the fairness doctrine's obligations before 1949. Until then broadcasters had to satisfy only general "public interest" standards of the Communications Act.[12] The doctrine remained a matter of general policy and was applied on a case-by-case basis until 1967,[13] when certain provisions of the doctrine were incorporated into FCC regulations.[14] In 1969, the United States courts of appeals, in an opinion written by Warren Burger, directed the FCC to revoke Lamar Broadcasting's license for television station WLBT due to the station's segregationist politics and ongoing censorship of NBC network news coverage of the U.S. civil rights movement.[15] Application of the doctrine by the FCC[edit] In 1974, the Federal Communications Commission stated that the Congress had delegated the power to mandate a system of "access, either free or paid, for person or groups wishing to express a viewpoint on a controversial public issue" but that it had not yet exercised that power because licensed broadcasters had "voluntarily" complied with the "spirit" of the doctrine. It warned that: Should future experience indicate that the doctrine [of 'voluntary compliance'] is inadequate, either in its expectations or in its results, the Commission will have the opportunity—and the responsibility—for such further reassessment and action as would be mandated.[16] In one landmark case, the FCC argued that teletext was a new technology that created soaring demand for a limited resource, and thus could be exempt from the fairness doctrine. The Telecommunications Research and Action Center (TRAC) and Media Access Project (MAP) argued that teletext transmissions should be regulated like any other airwave technology, hence the fairness doctrine was applicable, and must be enforced by the FCC. In 1986, Judges Robert Bork and Antonin Scalia of the United States Court of Appeals for the District of Columbia Circuit concluded that the fairness doctrine did apply to teletext, but that the FCC was not required to apply it.[17]  In a 1987 case, Meredith Corp. v. FCC, two other judges on the same court declared that Congress did not mandate the doctrine and the FCC did not have to continue to enforce it.[18] Decisions of the United States Supreme Court[edit] In Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969), the U.S. Supreme Court upheld, by a vote of 8–0, the constitutionality of the fairness doctrine in a case of an on-air personal attack, in response to challenges that the doctrine violated the First Amendment to the U.S. Constitution. The case began when journalist Fred J. Cook, after the publication of his Goldwater: Extremist of the Right, was the topic of discussion by Billy James Hargis on his daily Christian Crusade radio broadcast on WGCB in Red Lion, Pennsylvania. 
Cook sued arguing that the fairness doctrine entitled him to free air time to respond to the personal attacks.[19] Although similar laws are unconstitutional when applied to the press, the court cited a Senate report (S. Rep. No. 562, 86th Cong., 1st Sess., 8-9 [1959]) stating that radio stations could be regulated in this way because of the limited public airwaves at the time. Writing for the court, Justice Byron White declared:A license permits broadcasting, but the licensee has no constitutional right to be the one who holds the license or to monopolize a radio frequency to the exclusion of his fellow citizens. There is nothing in the First Amendment which prevents the Government from requiring a licensee to share his frequency with others. ... It is the right of the viewers and listeners, not the right of the broadcasters, which is paramount.[7] The court did not see how the fairness doctrine went against the First Amendment's goal of creating an informed public. The fairness doctrine required that those who were talked about be given chance to respond to the statements made by broadcasters. The court believed that this helped create a more informed public. Justice White explained that, without this doctrine, station owners would only have people on the air who agreed with their opinions. Throughout his opinion, Justice White argued that radio frequencies, and by extension, television stations, should be used to educate listeners, or viewers, about controversial issues in a way that is fair and non-biased so that they can create their own opinions.[20] In 1969, the court "ruled unanimously that the Fairness Doctrine was not only constitutional, but essential to democracy. The public airwaves should not just express the opinions of those who can pay for air time; they must allow the electorate to be informed about all sides of controversial issues."[21] The court also warned that if the doctrine ever restrained speech, then its constitutionality should be reconsidered. Justice William O. Douglas did not participate, but later wrote that he would have dissented because the Constitutional guarantee of Freedom of the press was absolute.[22] However, in the case of Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), Chief Justice Warren Burger wrote (for a unanimous court):Government-enforced right of access inescapably dampens the vigor and limits the variety of public debate. This decision differs from Red Lion v. FCC in that it applies to a newspaper, which, unlike a broadcaster, is unlicensed and can theoretically face an unlimited number of competitors. In 1984, the Supreme Court ruled that Congress could not forbid editorials by non-profit stations that received grants from the Corporation for Public Broadcasting (FCC v. League of Women Voters of California, 468 U.S. 364 (1984)). The court's 5-4 majority decision by William J. Brennan Jr. stated that while many now considered that expanding sources of communication had made the fairness doctrine's limits unnecessary: We are not prepared, however, to reconsider our longstanding approach without some signal from Congress or the FCC that technological developments have advanced so far that some revision of the system of broadcast regulation may be required. 
(footnote 11) After noting that the FCC was considering repealing the fairness doctrine rules on editorials and personal attacks out of fear that those rules might be "chilling speech", the court added: Of course, the Commission may, in the exercise of its discretion, decide to modify or abandon these rules, and we express no view on the legality of either course. As we recognized in Red Lion, however, were it to be shown by the Commission that the fairness doctrine '[has] the net effect of reducing rather than enhancing' speech, we would then be forced to reconsider the constitutional basis of our decision in that case. (footnote 12)[23] Use in political leveraging[edit] Various presidential governments used the Fairness Doctrine to counter their political opponents. At the FCC, Martin Firestone's memorandum to the Democratic National Committee presented political strategies to combat small, rural radio stations unfriendly to Democratic politicians: The right-wingers operate on a strictly-cash basis and it is for this reason that they are carried by so many small [radio] stations. Were our efforts to be continued on a year-round basis, we would find that many of these stations would consider the broadcasts of these programs bothersome and burdensome (especially if they are ultimately required to give us free time) and would start dropping the programs from their broadcast schedule.[24] The use of the fairness doctrine by the National Council for Civic Responsibility (NCCR) was to urge right-wing radio stations to air rebuttals against the opinions expressed on their radio stations.[25] In 1985, under FCC Chairman Mark S. Fowler, a communications attorney who had served on Ronald Reagan's presidential campaign staff in 1976 and 1980, the FCC released its report on General Fairness Doctrine Obligations[26] stating that the doctrine hurt the public interest and violated free speech rights guaranteed by the First Amendment. The commission could not, however, come to a determination as to whether the doctrine had been enacted by Congress through its 1959 Amendment to Section 315 of the Communications Act. In response to the 1986 Telecommunications Research & Action Center v. F.C.C. decision,[27] the 99th Congress directed[28] the FCC to examine alternatives to the fairness doctrine and to submit a report to Congress on the subject.[29] In 1987, in Meredith Corporation v. F.C.C. the case was returned to the FCC with a directive to consider whether the doctrine had been "self-generated pursuant to its general congressional authorization or specifically mandated by Congress."[30] The FCC opened an inquiry inviting public comment on alternative means for administrating and enforcing the fairness doctrine.[31] In its 1987 report, the alternatives—including abandoning a case-by-case enforcement approach, replacing the doctrine with open access time for all members of the public, doing away with the personal attack rule, and eliminating certain other aspects of the doctrine—were rejected by the FCC for various reasons.[32] On August 4, 1987, under FCC Chairman Dennis R. Patrick, the FCC abolished the doctrine by a 4–0 vote, in the Syracuse Peace Council decision,[33] which was upheld by a panel of the Appeals Court for the D.C. 
Circuit in February 1989, though the court stated in its decision that it made "that determination without reaching the constitutional issue."[34] The FCC suggested in Syracuse Peace Council that, because of the many media voices in the marketplace, the doctrine should be deemed unconstitutional, stating that:
The intrusion by government into the content of programming occasioned by the enforcement of [the fairness doctrine] restricts the journalistic freedom of broadcasters ... [and] actually inhibits the presentation of controversial issues of public importance to the detriment of the public and the degradation of the editorial prerogative of broadcast journalists.
At the time of the 4–0 vote, Chairman Patrick said: "We seek to extend to the electronic press the same First Amendment guarantees that the print media have enjoyed since our country's inception."[35]
Sitting commissioners at the time of the vote were:[36][37]
Dennis R. Patrick, chairman, Republican (named an FCC commissioner by Ronald Reagan in 1983)
Mimi Weyforth Dawson, Republican (named an FCC commissioner by Ronald Reagan in 1986)
Patricia Diaz Dennis, Democrat (named an FCC commissioner by Ronald Reagan in 1986)
James Henry Quello, Democrat (named an FCC commissioner by Richard M. Nixon in 1974)
The FCC vote was opposed by members of Congress who said the FCC had tried to "flout the will of Congress" and the decision was "wrongheaded, misguided and illogical".[35] The decision drew political fire, and cooperation with Congress was one issue.[38] In June 1987, Congress attempted to preempt the FCC decision and codify the fairness doctrine,[39] but the legislation was vetoed by President Ronald Reagan. In 1991, another attempt to revive the doctrine was stopped when President George H. W. Bush threatened another veto.[40]
In February 2009, Fowler said that his work toward revoking the fairness doctrine under the Reagan administration had been a matter of principle, his belief that the doctrine impinged upon the First Amendment, not partisanship. Fowler described the White House staff raising concerns, at a time before the prominence of conservative talk radio and during the preeminence of the Big Three television networks and PBS in political discourse, that repealing the policy would be politically unwise. He described the staff's position as saying to Reagan: "The only thing that really protects you from the savageness of the three networks—every day they would savage Ronald Reagan—is the Fairness Doctrine, and Fowler is proposing to repeal it!"[41]
Conservative talk radio
The 1987 repeal of the fairness doctrine enabled the rise of talk radio that has been described as "unfiltered", divisive and/or vicious: "In 1988, a savvy former ABC Radio executive named Ed McLaughlin signed Rush Limbaugh — then working at a little-known Sacramento station — to a nationwide syndication contract. McLaughlin offered Limbaugh to stations at an unbeatable price: free. All they had to do to carry his program was to set aside four minutes per hour for ads that McLaughlin's company sold to national sponsors. 
The stations got to sell the remaining commercial time to local advertisers."[42] According to The Washington Post, "From his earliest days on the air, Limbaugh trafficked in conspiracy theories, divisiveness, even viciousness", e.g., "feminazis".[43] Prior to 1987 people using much less controversial verbiage had been taken off the air as obvious violations of the fairness doctrine.[44] Two corollary rules of the doctrine, the personal attack rule and the "political editorial" rule, remained in practice until 2000. The "personal attack" rule applied whenever a person, or small group, was subject to a personal attack during a broadcast. Stations had to notify such persons, or groups, within a week of the attack, send them transcripts of what was said and offer the opportunity to respond on-the-air. The "political editorial" rule applied when a station broadcast editorials endorsing or opposing candidates for public office, and stipulated that the unendorsed candidates be notified and allowed a reasonable opportunity to respond.[45] The U.S. Court of Appeals for the D.C. Circuit ordered the FCC to justify these corollary rules in light of the decision to repeal the fairness doctrine. The FCC did not provide prompt justification, so both corollary rules were repealed in October 2000.[46] Reinstatement considered[edit] In February 2005, U.S. Representative Louise Slaughter (D-NY) and 23 co-sponsors introduced the Fairness and Accountability in Broadcasting Act (H.R. 501)[47] in the 1st session of the 109th Congress of 2005-2007, when Republicans held a majority of both Houses. The bill would have shortened a station's license term from eight years to four, with the requirement that a license-holder cover important issues fairly, hold local public hearings about its coverage twice a year, and document to the FCC how it was meeting its obligations.[48] The bill was referred to committee, but progressed no further.[49] In the same Congress, Representative Maurice Hinchey (D-NY) introduced legislation "to restore the Fairness Doctrine". H.R. 3302, also known as the "Media Ownership Reform Act of 2005" or MORA, had 16 co-sponsors in Congress.[50] In June 2007, Senator Richard Durbin (D-Ill.) said, "It's time to reinstitute the Fairness Doctrine",[51] an opinion shared by his Democratic colleague, Senator John Kerry (D-Mass.).[52] However, according to Marin Cogan of The New Republic in late 2008: Senator Durbin's press secretary says that Durbin has "no plans, no language, no nothing. He was asked in a hallway last year, he gave his personal view"—that the American people were served well under the doctrine—"and it's all been blown out of proportion."[53] On June 24, 2008, U.S. Representative Nancy Pelosi (D-Calif.), the Speaker of the House at the time, told reporters that her fellow Democratic representatives did not want to forbid reintroduction of the fairness doctrine, adding "the interest in my caucus is the reverse." When asked by John Gizzi of Human Events, "Do you personally support revival of the 'Fairness Doctrine?'", the Speaker replied "Yes".[54] On December 15, 2008, U.S. Representative Anna Eshoo (D-Calif.) told The Daily Post in Palo Alto, California that she thought it should also apply to cable and satellite broadcasters, stating: I'll work on bringing it back. I still believe in it. It should and will affect everyone.[55] On February 11, 2009, Senator Tom Harkin (D-Iowa) told radio host Bill Press, "we gotta get the Fairness Doctrine back in law again." 
Later in response to Press's assertion that "they are just shutting down progressive talk from one city after another", Senator Harkin responded, "Exactly, and that's why we need the fair—that's why we need the Fairness Doctrine back."[56] Former President Bill Clinton has also shown support for the fairness doctrine. During a February 13, 2009, appearance on the Mario Solis Marich radio show, Clinton said: Well, you either ought to have the Fairness Doctrine or we ought to have more balance on the other side, because essentially there's always been a lot of big money to support the right wing talk shows. Clinton cited the "blatant drumbeat" against the stimulus program from conservative talk radio, suggesting that it does not reflect economic reality.[57] On September 19, 2019, Representative Tulsi Gabbard (D-HI) introduced H.R. 4401 Restore the Fairness Doctrine Act of 2019 in the House of Representatives, 116th Congress. Rep. Gabbard was the only sponsor. H.R. 4401 was immediately referred to the House Committee on Energy and Commerce on the same day. It was then referred to the Subcommittee on Communications and Technology on September 20, 2019.[58] H.R. 4401 would mandate equal media discussion of key political and social topics, requiring television and radio broadcasters to give airtime to opposing sides of issues of civic interest.[59][60] The summary reads: "Restore the Fairness Doctrine Act of 2019. This bill requires a broadcast radio or television licensee to provide reasonable opportunity for discussion of conflicting views on matters of public importance.[61] The Restore the Fairness Doctrine Act would once again mandate television and radio broadcasters present both sides when discussing political or social issues, reinstituting the rule in place from 1949 to 1987 ... . Supporters argue that the doctrine allowed for a more robust public debate and affected positive political change as a result, rather than allowing only the loudest voices or deepest pockets to win."[62] The fairness doctrine has been strongly opposed by prominent conservatives and libertarians who view it as an attack on First Amendment rights and property rights. Editorials in The Wall Street Journal and The Washington Times in 2005 and 2008 said that Democratic attempts to bring back the fairness doctrine have been made largely in response to conservative talk radio.[63][64] In 1987, Edward O. Fritts, president of the National Association of Broadcasters, in applauding President Reagan's veto of a bill intended to turn the doctrine into law, said that the doctrine is an infringement on free speech and intrudes on broadcasters' journalistic judgment.[65] In 2007, Senator Norm Coleman (R-MN) proposed an amendment to a defense appropriations bill that forbade the FCC from "using any funds to adopt a fairness rule."[66] It was blocked, in part on grounds that "the amendment belonged in the Commerce Committee's jurisdiction." In 2007, the Broadcaster Freedom Act of 2007 was proposed in the Senate by Senators Coleman with 35 co-sponsors (S.1748) and John Thune (R-SD), with 8 co-sponsors (S.1742),[67] and in the House by Republican Representative Mike Pence (R-IN) with 208 co-sponsors (H.R. 
2905).[68] It provided: The Commission shall not have the authority to prescribe any rule, regulation, policy, doctrine, standard, or other requirement that has the purpose or effect of reinstating or repromulgating (in whole or in part) the requirement that broadcasters present opposing viewpoints on controversial issues of public importance, commonly referred to as the 'Fairness Doctrine', as repealed in General Fairness Doctrine Obligations of Broadcast Licensees, 50 Fed. Reg. 35418 (1985).[69] Neither of these measures came to the floor of either house. On August 12, 2008, FCC Commissioner Robert M. McDowell stated that the reinstitution of the fairness doctrine could be intertwined with the debate over network neutrality (a proposal to classify network operators as common carriers required to admit all Internet services, applications and devices on equal terms), presenting a potential danger that net neutrality and fairness doctrine advocates could try to expand content controls to the Internet.[70] It could also include "government dictating content policy".[71] The conservative Media Research Center's Culture & Media Institute argued that the three main points supporting the fairness doctrine — media scarcity, liberal viewpoints being censored at a corporate level, and public interest — are all myths.[72] In June 2008, Barack Obama's press secretary wrote that Obama, then a Democratic U.S. senator from Illinois and candidate for president, did not support it, stating: Obama does not support reimposing the Fairness Doctrine on broadcasters ... [and] considers this debate to be a distraction from the conversation we should be having about opening up the airwaves and modern communications to as many diverse viewpoints as possible. That is why Sen. Obama supports media-ownership caps, network neutrality, public broadcasting, as well as increasing minority ownership of broadcasting and print outlets.[73] On February 16, 2009, Mark Fowler said: I believe as President Reagan did, that the electronic press—and you're included in that—the press that uses air and electrons, should be and must be as free from government control as the press that uses paper and ink, Period.[41] In February 2009, a White House spokesperson said that President Obama continued to oppose the revival of the doctrine.[74] In the 111th Congress, January 2009 to January 2011, the Broadcaster Freedom Act of 2009 (S.34, S.62, H.R.226) was introduced to block reinstatement of the doctrine. On February 26, 2009, by a vote of 87–11, the Senate added that act as an amendment to the District of Columbia House Voting Rights Act of 2009 (S.160),[75] a bill which later passed the Senate 61–37 but not the House of Representatives.[76] The Associated Press reported that the vote on the fairness doctrine rider was "in part a response to conservative radio talk show hosts who feared that Democrats would try to revive the policy to ensure liberal opinions got equal time." The AP report went on to say that President Obama had no intention of reimposing the doctrine, but Republicans, led by Sen. 
Jim DeMint, R-SC, wanted more in the way of a guarantee that the doctrine would not be reimposed.[77] Suggested alternatives[edit] Media reform organizations such as Free Press feel that a return to the fairness doctrine is not as important as setting stronger station ownership caps and stronger "public interest" standards enforcement, with funding from fines given to public broadcasting.[78] In an August 2008 telephone poll, released by Rasmussen Reports, 47% of 1,000 likely voters supported a government requirement that broadcasters offer equal amounts of liberal and conservative commentary. 39% opposed such a requirement. In the same poll, 57% opposed and 31% favored requiring Internet websites and bloggers that offer political commentary to present opposing points of view. By a margin of 71–20%, the respondents agreed that it is "possible for just about any political view to be heard in today's media", including the Internet, newspapers, cable TV and satellite radio, but only half the sample said they had followed recent news stories about the fairness doctrine closely. The margin of error was 3%, with a 95% confidence interval.[79] In June 2011, the chairman and a subcommittee chairman of the House Energy and Commerce Committee, both Republicans, said that the FCC, in response to their requests, had set a target date of August 2011 for removing the fairness doctrine and other "outdated" regulations from the FCC's rulebook.[80] On August 22, 2011, the FCC voted to remove the rule that implemented the fairness doctrine, along with more than 80 other rules and regulations, from the Federal Register following an executive order by President Obama directing a "government-wide review of regulations already on the books" to eliminate unnecessary regulations.[4] Right of reply False balance Free speech Nakdi Report Prior restraint Mayflower doctrine Zapple doctrine Accurate News and Information Act Fred W. Friendly (1976). The Good Guys, The Bad Guys, and the First Amendment: Free speech vs. fairness in broadcasting. Random House. ISBN 0-394-49725-2. Wikidata Q111848516. Pickard, Victor (2014). America's Battle for Media Democracy: The Triumph of Corporate Libertarianism and the Future of Media Reform, Cambridge University Press, ISBN 1107694752 ^ "CBS v. Democratic Nat'l Committee, 412 U.S. 94 (1973)". Justia Law. Retrieved November 17, 2021. ^ Fletcher, Dan (February 20, 2009). "A Brief History of the Fairness Doctrine". Time. Retrieved October 10, 2021. It's as predictable as Rush Limbaugh sparking a controversy: every few years, someone in Congress brings up the Fairness Doctrine. In 1987 the FCC abolished the policy, which dictates that public broadcast license-holders have a duty to present important issues to the public and — here's the 'fairness' part — to give multiple perspectives while doing so. ^ Clark, Drew (October 20, 2004). "How Fair Is Sinclair's Doctrine?". Slate. ^ a b Boliek, Brooks (August 22, 2011). "FCC finally kills off fairness doctrine". Politico. ^ E. Patterson, Thomas (2013). "The News Media: Communicating Political Images". We the People. 10th ed. McGraw-Hill Education: 336. ^ Rendall, Steve (January 1, 2005). "The Fairness Doctrine: How We Lost it, and Why We Need it Back". Extra!. Retrieved October 2, 2017. ^ a b Red Lion Broadcasting Co. v. FCC, decided June 8, 1969, also at 395 U.S. 367 (1969) (Excerpt from majority opinion, III A; Senate report cited in footnote 26). Justice William O. 
Douglas did not participate in the decision, but there were no concurring or dissenting opinions. ^ Report of the Commission in the Matter of Editorializing by Broadcast Licensees, 13 F.C.C. 1246 [1949]. ^ Report ... Licensees, 13 F.C.C. 1246, 1248-9. ^ Pickard, Victor (2015). America's Battle for Media Democracy: The Triumph of Corporate Libertarianism and the Future of Media Reform. New York, NY: Cambridge University Press. ISBN 9781107694750. ^ Jung, D.L. (1996), The Federal Communications Commission, the Broadcast Industry, and the Fairness Doctrine 1981–1987, New York: University Press of America, Inc. ^ Donahue, H. (1988). The Battle to Control Broadcast News. Cambridge, Mass.: MIT Press ^ Memorandum Opinion and Order, 8 F.C.C.2d 721 (August 7, 1967), which codified the personal attack doctrine and implemented provisions with respect to political editorials from Times-Mirror Broadcasting Co., 40 F.C.C. 531, 538 (1962); codified as 32 Fed. Reg. 10303 at para. 4 (1967). This was amended twice in Memorandum Opinion and Order, 9 F.C.C.2d 539 (1967) and Memorandum Opinion and Order, 12 F.C.C.2d 250 (1968). ^ Mullally, Donald P. (1969). "The Fairness Doctrine: Benefits and Costs". Public Opinion Quarterly. 33 (4): 577–582. doi:10.1086/267746. JSTOR 2747567. ^ "The FCC & Censorship: Legendary Media Activist Everett Parker on the Revocation of WLBT's TV License in the 1960s for Shutting Out Voices of the Civil Rights Movement", Democracy Now!, March 6, 2008. ^ In the Matter of the Handling of Public Issues Under the Fairness Doctrine and the Public Interest Standards of the Communications Act, 48 F.C.C.2d 1 (F.C.C. 1974); 39 Fed. Reg. 26.372, 26.374 (1974) ^ Telecommunications Research and Action Center v. FCC, 801 F.2d 501 (D.C. Cir. 1986) Archived October 23, 2008, at the Wayback Machine. Retrieved August 17, 2008. ^ Meredith Corp. v. FCC, 809 F.2d 863 (D.C. Cir. 1987) Archived October 7, 2008, at the Wayback Machine, February 10, 1987, Retrieved August 17, 2008. ^ Tom Joyce: "His call for a reply set up historic broadcast ruling; Fred J. Cook, whose book was attacked on Red Lion radio station WGCB in 1964, died recently at age 92". York Daily Record (Pennsylvania), May 6, 2003. Retrieved August 17, 2008. ^ Kramer, Daniel C. "Red Lion Broadcasting Co. v. Federal Communications Commission (1969)". The First Amendment Encyclopedia. Retrieved November 19, 2020. ^ "Bring Back the Fairness Doctrine". The Academy for Systems Change. Retrieved November 19, 2020. ^ "CBS v. Democratic National Committee". Justia. Retrieved August 18, 2022. ^ The quotation is from FCC v. League of Women Voters of California, 468 U.S. 364 (1984). Justice Brennan's opinion was joined by Justices Thurgood Marshall, Harry Blackmun, Lewis Powell and Sandra Day O'Connor. Dissenting opinions were written or joined by Chief Justice Warren Burger and Justices William Rehnquist, Byron White and John Paul Stevens. ^ Friendly (1976, p. 42). ^ Friendly (1976, esp. pp. 10, 39-40). ^ General Fairness Doctrine Obligations of Broadcast Licensees, Report, 50 Fed. Reg. 35418 (1985) ^ 801 F.2d 501 (D.C. Cir. 1986), rehearing denied, 806 F.2d 1115 (D.C. Cir. 1986), cert. denied, 107 S.Ct. 3196 (1987). ^ Making Continuing Appropriations for Fiscal Year 1987, P.L. 99-500. See also, Conference Report to Accompany H.J.Res. 738, H.Rept. 99-1005. 99th Cong., 2d Sess. (1986). ^ "Fairness Doctrine: History and Constitutional Issues" (PDF). Congressional Research Service. July 13, 2011. Retrieved May 10, 2016. ^ 809 F.2d 863 (D.C. Cir. 1987) at 872. 
^ Inquiry into Section 73.1910 of the Commission's Rules and Regulations Concerning Alternatives to the General Fairness Doctrine Obligations of Broadcast Licensees in MM Docket No. 97-26, 2 FCC Rcd 1532 (1987). ^ In the Matter of Inquiry into Section 73.1910 of the Commission's Rules and Regulations Concerning Alternatives to the General Fairness Doctrine Obligations of Broadcast Licensees. 2 FCC Rcd 5272 (1987). ^ "In re Complaint of Syracuse Peach Council against Television Station WTVH Syracuse, New York". FCC Record. 2 (17): 5043ff. August 6, 1987. Wikidata Q112043674. ^ Circuit, District of Columbia (February 10, 1989). "867 F. 2d 654 - Syracuse Peace Council v. Federal Communications Commission". Openjurist. p. 654. Retrieved December 7, 2014. Under the 'fairness doctrine,' the Federal Communications Commission has, as its 1985 Fairness Report explains, required broadcast media licensees (1) 'to provide coverage of vitally important controversial issues of interest in the community served by the licensees' and (2) 'to provide a reasonable opportunity for the presentation of contrasting viewpoints on such issues.' Report Concerning General Fairness Doctrine Obligations of Broadcast Licensees, 102 F.C.C. 2d 143, 146 (1985). In adjudication of a complaint against Meredith Corporation, licensee of station WTVH in Syracuse, New York, the Commission concluded that the doctrine did not serve the public interest and was unconstitutional. Accordingly it refused to enforce the doctrine against Meredith. Although the Commission somewhat entangled its public interest and constitutional findings, we find that the Commission's public interest determination was an independent basis for its decision and was supported by the record. We uphold that determination without reaching the constitutional issue. ^ a b Hershey Jr., Robert D. (August 5, 1987). "F.C.C. Votes Down Fairness Doctrine in a 4-0 Decision". FCC Video. No. FCC 1987. NBCUniversal. The New York Times. Archived from the original on March 16, 2012. Retrieved October 28, 2018. Today we reaffirm our faith in the American people. Our faith in their ability to distinguish between fact and fiction without any help from government. Alt URL ^ Commissioners from 1934 to Present ^ FCC Record, Volume 2, No. 17, Pages 5002 to 5398, August 17–28, 1987 ^ Salmans, Sandra (September 20, 1987). "Regulator Unregulated: Dennis Patrick; At the FCC, Another Man Who Loves Free Markets". The New York Times. ^ The Fairness in Broadcasting Act of 1987, S. 742 & H.R. 1934, 100th Cong., 1st Sess. (1987) ^ Limburg, Val E. (April 27, 2009). "Fairness Doctrine" Archived October 22, 2004, at the Wayback Machine. Museum of Broadcast Communications. ^ a b "unknown". The Mark Levin Show. Archived March 26, 2009, at the Wayback Machine, February 16, 2009. (a 26-megabyte MP3 file), from about 17 minutes 15 seconds into the broadcast to 25 min. 45 sec. ^ Paul Farhi (February 9, 2021). "Rush Limbaugh is ailing. And so is the conservative talk-radio industry". The Washington Post. ISSN 0190-8286. Wikidata Q105426282.. ^ Paul Farhi (February 9, 2021). "Rush Limbaugh is ailing. And so is the conservative talk-radio industry". The Washington Post. ISSN 0190-8286. Wikidata Q105426282.. ^ Nicole Hemmer (2016). Messengers of the Right: Conservative Media and the Transformation of American Politics. University of Pennsylvania Press. ISBN 978-0-8122-2430-6. OL 27359649M. Wikidata Q105427186.. ^ "Information Needs of Communities: The policy and regulatory landscape" (PDF). FCC. 
June 9, 2011. pp. 277–278. Retrieved August 22, 2017. ^ Leweke, Robert W. (October 1, 2001). "Rules Without a Home: FCC Enforcement of the Personal Attack and Political Editorial Rules". Communication Law and Policy. 6 (4): 557–576. doi:10.1207/S15326926CLP0604_02. ISSN 1081-1680. S2CID 143329667. ^ H.R. 501, Fairness and Accountability in Broadcasting Act (109th Congress, 1st Session) (full text) from GovTrack.us. Retrieved November 13, 2008. ^ Congressional Research Service summary of H.R. 501--109th Congress (2005): Fairness and Accountability in Broadcasting Act, GovTrack.us (database of federal legislation). Retrieved November 13, 2008 ^ Overview of H.R. 501 (109th Congress, 1st session) from GovTrack.us. Retrieved November 14, 2008. ^ Summary at "Media Ownership Reform Act of 2005". Archived from the original on September 2, 2007. Retrieved September 12, 2007. - Full text at H.R. 3302 Media Ownership Reform Act of 2005[permanent dead link]. Retrieved August 17, 2008. ^ Bolton, Alexander (June 27, 2007). "GOP preps for talk radio confrontation". The Hill. Retrieved October 27, 2008. ^ John Eggerton (June 27, 2007). "Kerry Wants Fairness Doctrine Reimposed". Broadcasting and Cable. Retrieved October 27, 2008. describing an interview on The Brian Lehrer Show on WNYC radio ^ Marin Cogan, Bum Rush: Obama's secret plan to muzzle talk radio. Very, very secret, The New Republic, December 3, 2008. Retrieved November 20, 2008 ^ Gizzi, John (June 25, 2008). "Pelosi Supports Fairness Doctrine". Human Events. Retrieved October 27, 2008. ^ San Francisco Peninsula Press Club (December 16, 2008). "Rep. Eshoo to push for Fairness Doctrine". San Francisco Peninsula Press Club. Retrieved December 15, 2008. ^ Michael Calderon (February 11, 2009). "Sen. Harkin: 'We need the Fairness Doctrine back'". Politico. Retrieved February 11, 2009. ^ John Eggerton (February 13, 2009). "Bill Clinton Talks of Re-Imposing Fairness Doctrine or At Least 'More Balance' in Media". Broadcasting & Cable. Retrieved February 13, 2009. ^ Willis, Derek (August 12, 2015). "H.R.4401: To amend the Communications Act of 1934 to reinstate the obligation of broadcast licensees to afford reasonable opportunity for the discussion of conflicting views on issues of public importance (commonly known as the 'Fairness Doctrine')". ProPublica. Retrieved November 19, 2020. ^ Mojica, Adrian (October 24, 2019). "Bill filed in Congress would mandate equal media attention on political or social issues". WCIV. Retrieved November 19, 2020. ^ GovTrack.us (October 22, 2019). "Restore the Fairness Doctrine Act would require broadcasters give airtime to all sides of an issue". Medium. Retrieved November 19, 2020. ^ "Summary of H.R. 4401: Restore the Fairness Doctrine Act of 2019". GovTrack.us. Retrieved November 19, 2020. ^ "Summary of Bills Before 89th Congress". Physical Therapy. 45 (4): 373–376. April 1, 1965. doi:10.1093/ptj/45.4.373. ISSN 0031-9023. ^ "Rush to Victory" (PDF). The Wall Street Journal. April 4, 2005. Retrieved July 1, 2008. ^ "'Fairness' is Censorship". The Washington Times. June 17, 2008. Retrieved July 1, 2008. ^ Pagano, Penny (June 21, 1987). "Reagan's Veto Kills Fairness Doctrine Bill". Los Angeles Times. Retrieved May 11, 2016. ^ Frommer, Frederic J. (July 14, 2007). "Democrats Block Amendment to Prevent Fairness Doctrine". Associated Press. Retrieved August 10, 2008. ^ Broadcaster Freedom Act of 2007, Open Congress Foundation. 
Retrieved November 14, 2008 ^ Broadcaster Freedom Act of 2007, introduced February 1, 2005, "To prevent the Federal Communications Commission from repromulgating the fairness doctrine", Open Congress Foundation. Retrieved November 14, 2008 ^ Text of H.R. 2905: Broadcaster Freedom Act of 2007, GovTrack.us. Retrieved November 14, 2008 ^ Jeff Poor, "FCC Commissioner: Return of Fairness Doctrine Could Control Web Content" Archived September 22, 2010, at the Wayback Machine, August 13, 2008, Business & Media Institute ^ http://www.eyeblast.tv/Public/Video.aspx?rsrcID=34016 Archived August 18, 2008, at the Wayback Machine See also Commissioner McDowell's speech to the Media Institute Archived October 18, 2011, at the Wayback Machine in January 2009. ^ Culture & Media Institute report on The Fairness Doctrine. Archived January 9, 2011, at the Wayback Machine - accessed August 13, 2008. ^ Eggerton, John (June 25, 2008). "Obama Does Not Support Return of Fairness Doctrine". Broadcasting & Cable. Archived from the original on June 27, 2008. Retrieved October 30, 2008. citing an e-mail from Obama's press secretary, Michael Ortiz. ^ "White House: Obama Opposes 'Fairness Doctrine' Revival". Fox News. February 18, 2009. ^ "Senate Backs Amendment to Prevent Fairness Doctrine Revival". Fox News. February 26, 2009. Archived from the original on February 28, 2009. ^ Warren, Timothy (February 26, 2009). "Senate votes to give D.C. full House vote". The Washington Times. Retrieved February 26, 2009. The Senate roll call is here. ^ "Senate bars FCC from revisiting Fairness Doctrine". Associated Press, February 26, 2009. ^ "The Structural Imbalance of Talk Radio" (PDF). Free Press & Center for American Progress. June 21, 2007. Archived (PDF) from the original on July 29, 2023. ^ "47% Favor Government Mandated Political Balance on Radio, TV". Rasmussen Reports press release, August 14, 2008 and "Toplines - Fairness Doctrine". August 13, 2008. Archived July 15, 2012, at archive.today (Questions and answers from the survey). ^ Nagesh, Gautham (June 28, 2011). "FCC sets August target for striking Fairness Doctrine". "Hillicon Valley" blog, The Hill, quoting Republican Representatives Fred Upton (R-Michigan), chairman of the Energy & Commerce Committee, and Greg Walden (R-Oregon), chairman of its Telecommunications Subcommittee. A primer on the Fairness Doctrine and how its absence now affects politics and culture in the media. Fairness Doctrine Archived October 22, 2004, at the Wayback Machine by Val E. Limburg, from the Museum of Broadcast Communications Fairness Doctrine from NOW on PBS The Media Cornucopia from City Journal Important legislation for and against the Fairness Doctrine from Ceasespin.org Speech to the Media Institute Archived October 18, 2011, at the Wayback Machine by FCC Commissioner Robert M. McDowell on January 28, 2009, outlining the likely practical and constitutional challenges of reviving a fairness or neutrality doctrine.
2024-11-08T04:51:41
en
train
42,034,634
vegadw
2024-11-03T17:47:44
Improper Language Detected
null
https://opguides.info/posts/scanoss/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,034,669
mellosouls
2024-11-03T17:51:36
The Making of China and India: Long-Run Human Capital Accumulation
null
https://www.dropbox.com/scl/fo/3lskcwur97a67jhjgkn9z/ABCZ60fQDoHEG1zXL52PjPs?noscript=1&rlkey=v60l9q4uusuk50s3rqu1og5gj
1
0
null
null
null
no_article
null
null
null
null
2024-11-08T18:08:01
null
train
42,034,675
michaelsbradley
2024-11-03T17:52:25
gptel: a simple LLM client for Emacs
null
https://github.com/karthink/gptel
156
33
[ 42035930, 42034697, 42037363, 42035721, 42037295, 42036691 ]
null
null
null
null
null
null
null
null
null
train
42,034,701
antTman
2024-11-03T17:55:53
Ask HN: Is Moving to the Middle East a Smart Move for My Startup Dreams?
Hey everyone!
I’ve been a Software Engineer for years, working with startups around the globe, and I love the fast-paced, builder lifestyle. I’m all about startups, fundraising, and hacking away at new ideas—basically, if it’s about growth or innovation, I’m in.
Recently, I’ve been considering a move to the Middle East to get closer to investors and see if it might kick-start the next phase of my career. The region is clearly ramping up in some really interesting areas: AI, chip development, renewable energy… Meanwhile, Europe (where I’m based) sometimes feels like it’s running a slower race.
So, here’s what I’d love to know:
1. Is the Middle East a good place for someone with a startup mindset looking to launch new ventures?
2. Any specific cities, incubators, or early-stage investors you’d recommend I look into?
3. Bonus: I’m that techie who often gets mistaken for a “management guy” because of my people skills—happens all the time, and it cracks me up…
Any tips, insights, or just general thoughts are super welcome.
P.S. USA is not an option.
Thanks in advance for the help!
null
2
3
[ 42034842 ]
null
null
null
null
null
null
null
null
null
train
42,034,708
rntn
2024-11-03T17:57:07
Billionaires are 'ultimate beneficiaries' linked to €3B of EU farming subsidies
null
https://www.theguardian.com/environment/2024/nov/03/revealed-billionaires-ultimate-beneficiaries-linked-to-eu-farming-subsidies
13
1
[ 42041499 ]
null
null
null
null
null
null
null
null
null
train
42,034,747
ohjeez
2024-11-03T18:01:25
AI prefers white and male job candidates in new test of resume-screening bias
null
https://www.geekwire.com/2024/ai-overwhelmingly-prefers-white-and-male-job-candidates-in-new-test-of-resume-screening-bias/
9
2
[ 42034891, 42035414 ]
null
null
null
null
null
null
null
null
null
train
42,034,750
eashish93
2024-11-03T18:01:58
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,034,757
JumpCrisscross
2024-11-03T18:03:02
Science Is Finding Ways to Regenerate Your Heart
null
https://www.wsj.com/health/grow-heart-lung-tissue-medical-technology-24b22bb4
7
1
[ 42043138 ]
null
null
null
null
null
null
null
null
null
train
42,034,759
gniting
2024-11-03T18:03:21
Apple reportedly releasing 'total redesign' for MacBook Pro in 2026
null
https://9to5mac.com/2024/11/03/apple-macbook-pro-redesign-2026/
16
47
[ 42035963, 42036021, 42035996, 42036061, 42037633, 42034792, 42035437 ]
null
null
null
null
null
null
null
null
null
train
42,034,769
yuezhao
2024-11-03T18:04:26
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,034,771
mailyk
2024-11-03T18:04:44
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,034,779
dummy7777
2024-11-03T18:06:08
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,034,801
null
2024-11-03T18:09:00
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,034,804
sandwichsphinx
2024-11-03T18:09:13
Cool horizons for entangled black holes (2013)
null
https://arxiv.org/abs/1306.0533
2
0
null
null
null
null
null
null
null
null
null
null
train
42,034,807
JumpCrisscross
2024-11-03T18:09:16
An Industry Dispute Threatens to Shake Up How Homes Are Listed for Sale
null
https://www.wsj.com/real-estate/an-industry-dispute-threatens-to-shake-up-how-homes-are-listed-for-sale-e1d4e1f3
9
3
[ 42035270, 42037188, 42037100 ]
null
null
null
null
null
null
null
null
null
train
42,034,826
wslh
2024-11-03T18:11:21
Formula v2.5 X Steering Wheel Controller
null
https://fanatec.com/eu-en/steering-wheels/clubsport-steering-wheel-formula-v2.5x
1
0
null
null
null
body_too_long
null
null
null
null
2024-11-08T11:28:37
null
train
42,034,836
vinnyglennon
2024-11-03T18:12:22
Why Could Lebanon Be Rich, but Is So Chaotic?
null
https://unchartedterritories.tomaspueyo.com/p/why-could-lebanon-be-rich-but-is
3
0
null
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
Why Could Lebanon Be Rich, but Is so Chaotic?
2024-10-03T13:00:52+00:00
Tomas Pueyo
Maven asked to move the course on Leadership Communication Mastery to November 4th. You can sign up for then if it’s better timing. I’ll write about it more in detail next week.You need just three maps to understand Lebanon. Here are two:Lebanon is quite unique: a thin strip of coast adjacent to two big mountain ranges. This is a map of the region, the Levant:As I shared in Will Israel Be at War?, this area belongs to the Fertile Crescent:Which is normally very dry because it’s in the horse latitudes, which have a similar climate across the world:But Lebanon has something special. Look at this map and find the two small white mountain ranges parallel to the sea:This means that, although nearly everything else is desertic in the area, Lebanon is not:The mountains are so close to the sea and so tall that they catch a ton of moisture from the Mediterranean. This is how you end up with two mountain ranges that can get snowy, sandwiched between the desert and the sea:The word “Lebanon” comes from Mount Lebanon, which in turn comes from the Phoenician word for “white”, referring to the snowy mountaintops. Here’s a view from the mountains to the sea when they are snowy:Before the war, people could ski in the mountains in the morning and on the sea in the afternoon. Source.Here’s a view from the bottom:All this is to show that Lebanon is unique because it’s a very thin strip of land between the sea and mountain ranges to the east—the Mount Lebanon and the Anti-Lebanon Mountains:Most of the population is on the seaside, concentrated in the cities, while the fertile Beqaa Valley is the other big region, albeit with a smaller population:Here’s another view so you can get a good sense of the reliefYou can buy this map on Etsy. I have no affiliation with that.And this is from the air, on a particularly clear day:Lebanon’s geography has completely shaped its history, and explains it to this day, which is encapsulated in Lebanon’s flag:The stripes are likely1 the parallel mountains and valley ranges, while the tree in the middle represents the famous cedar:This geography and these cedars determined the destiny of Lebanon.A few thousand years ago, settlers from Mesopotamia brought agriculture to this area and mixed with the local population. Fun fact: The genetic makeup of Lebanese people today is nearly identical to what it was back then!The Levantine Bronze Age people have been there for thousands of years, and are probably a mixture of locals and southern Mesopotamian migrants. When I say “locals”, that’s what I’m reading, but I want to understand what it means. My guess from the latest DNA evidence is that it means “mostly people from a wave of farmer/pastoralists coming from Anatolia”. The Ancient Greek slice likely comes from thousands of years of Mediterranean trade. What I found fascinating is that different Lebanese ethnicities don’t vary fundamentally in these numbers. I was most surprised to learn that the Lebanese have barely any Arab DNA, as in “people from the Arabian peninsula”. Source for these insights.But whereas Mesopotamia and Egypt have plenty of land for agriculture, Lebanon doesn’t. What it does have is mountains and rain, which meant it had the biggest cedar forests in the region. Wood on one side and sea on the other meant that the locals, the Canaanites2, took to the sea. We now call them Phoenicians because that was the Greek word for them. 
It comes from phoinix, possibly signifying the color purple-red, and perhaps an allusion to their production of a highly prized purple dye.3 But they didn’t just trade dye.
Map of ancient Phoenicia displaying main cities and resources. Fun fact: Byblos is one of the oldest inhabited settlements in the world. Source.
Indeed, the mountains and their cedars pushed the Phoenicians to the sea, and the main activity there was trade, since the cost of transportation was so much lower than on land. Because they were closest to the early civilizations of Mesopotamia in location, they were the first coastal community to receive their technology and civilization, and hence the first to take to the sea. This is why Byblos is the world's oldest sea port and one of the world's oldest continuously inhabited cities:
These seafarers then spread civilization throughout the Mediterranean:
The Phoenicians were thus the main vector of civilization and technology spread in the Mediterranean. The Greeks and Romans are simply their technological heirs, learning from them how to navigate and trade.
It’s interesting to note that the Phoenicians had the most influence on northern Africa, Mediterranean islands, and Spain’s coast. If you look at a map, you’ll see that these are places with a similar geography to Lebanon’s: Stripes of coastal greenery protected by mountains and arid land behind.
The cedar timber that grew on the mountain skirts was one of the Phoenicians’ main exports. Indeed, that’s where most Egyptian wood came from, for example. Purple dye, wine, and olive oil were others, and since they had mountains, they had sheep, bringing the wool needed for cloth. I assume they traded these for grain, since their land was not vast enough for substantial production, which would limit the population’s size.
The main difference from Israel and Palestine is that the mountains in Lebanon are taller and closer to the sea. This had a major effect: Mountain people in the Palestinian region (Judea and Samaria) remained pastoralists, while those on the plains were farmers. These were two separate groups. In Lebanon, these two were so close in proximity that they were the same people, and they took to the seas.
Unfortunately for the Levantines, they chose a bad neighborhood. Here’s a map of population today, and the 3rd and last one you need to understand Lebanon today: The Levant is a thin strip of population, sandwiched between sea and desert, and between Mesopotamia, Anatolia, Egypt, Persia, and the Mediterranean, all of which have birthed big empires. Because it’s a thin strip, it will never be heavily populated and very strong. Today, Lebanon’s population is about 6 million. And during its history, it was always subject to the neighboring powers, as were Israel and Palestine: 
Invaders could control the coastal cities, but it was much harder to control the mountains—like in the Balkans. So they couldn’t impose their will and their religions, and that’s how communities like Greeks and Maronites have survived over a thousand years of Muslim occupation.  We can see this during the Crusades too, which occupied the Holy Land for about 200 years between ~1100 and 1300 AD:Map of the maximum extent of Crusader Kingdoms. Notice how the last Crusader cities are basically in present-day Lebanon (Acre is just on the border between Israel and Lebanon today, and on the map appears between Montfort and Château Pélerin, the orange one farthest south). Source.The Crusaders occupied the coasts because they mostly came by ship, that’s where the vegetation is, and it’s better defended from armies from land. But of all this region, the most protected was Lebanon, so that’s where they lasted the longest.Fun fact: The Christian community in the region has kept ties with Christians in Europe for centuries. This will become relevant later.Let’s take a step back. What are the three forces that have forged Lebanon so far?It’s a unique spot in the region: a double mountain range hugging the sea, sheltering diverse cultures and carving out a distinct set of identities.It’s also rich in timber for shipbuilding, and well-situated to the trading opportunities of the Mediterranean, while on land it’s at the crossroads of empires. This made it a natural trading hub. Yet it’s wedged between regional giants that have always dominated it.Caught between autonomy and subservience, Lebanon has long walked a tightrope—seeking independence while navigating the power struggles of its mightier neighbors.This was obvious during the Crusades, but also a few centuries later, during Ottoman rule. The Druze leader Fakhreddine II—arguably the region's greatest leader—sought to expand Lebanon’s independence. Aligned with European powers like the Grand Duchy of Tuscany, he spearheaded reforms in the 1600s that developed the region economically and militarily. Unfortunately, he fell short of achieving full independence, and the Ottomans prevailed.In the 1700s, the Shihab Dynasty (originally Druze but converted to Christianity) sought to balance relationships between the Ottoman Empire, France, and even Russia during its war with the Ottoman Empire. Later, in the 1800s, the Druze and Maronites had frequent clashes. The Ottomans tended to support the Druze, while the Maronites sought support from abroad. They had kept strong ties with France since the Crusades—because the main crusading power were the Franks, and they shared a religion. France’s involvement in the region continued through the centuries, as it saw itself as the guarantor of Catholic rights in the Ottoman Empire. For example, in 1535, the King of France negotiated Capitulations with the Ottoman Sultan, receiving special privileges within the Ottoman Empire—including the protection of Catholic minorities, such as the Maronites. So France supported the Maronites for centuries, into the 1800s. 
As the Ottoman Empire power dwindled, the UK was emerging as the new superpower, and the British started supporting the Druze to weaken the French.During WWI, the UK and France signed the secret Sykes-Picot Agreement to split Ottoman lands, and France kept Lebanon—mainly due to its historical links to the region.One of the things France did was move the historical border between Lebanon and Syria, from the Mount Lebanon range to the Anti-Lebanon range—which doubled the country’s size, but also included a region that had been historically much more connected to Syria than to Lebanon. Notably, this added a lot of Druze and Muslim populations. The proportion of Maronites shrunk to barely 50%, while the Shia grew 4x and the Sunni 8x. The French established a political system where Maronites would be represented 6:5 vs other denominations, and the president would always be Maronite, with veto power on any legislation. This gave power to the Maronites in a country where Muslim fertility rates were higher, and where Maronites eventually became a minority.During WWII, France lost its grip on the region, and in 1943 Lebanon finally gained its independence under the oversight of France and the UK.Once the regional powers left the Balkans—first the Austro-Hungarian and Ottoman Empires, later the USSR—the region fell into fratricide civil wars, nearly to this very day. Something similar has happened in Lebanon, on a smaller scale.After millennia of occupation by Mesopotamians, Greeks, Romans, Arabs, Egyptians, Ottomans, and French, Lebanon became independent at last. But that doesn’t mean it was to be fully independent. Such a small country in such a tense region meant that external influences would be impossible to overcome. Today, these influences come from the newer neighboring countries of Syria and Israel, the more distant Iran, Turkey to the north, and the looming presence of the maritime superpower—the US. These countries use Lebanon as a proxy for their interests.And since Lebanon has so many ethnic groups, every faction is aligned with a different foreign power, creating a fratricidal kaleidoscope. But at the same time, Lebanon is still a trading region at its heart, bent towards the cosmopolitan Mediterranean. It’s the conflict between these three forces that defines Lebanon to this day:In between superpowers that fight for itMountains that balkanize it with multiple ethnicitiesBut also that push it to the sea to trade and be cosmopolitanFor example, Lebanon became an international destination after WWII. But as the Cold War erupted, the East and West Blocs started recruiting allies. Muslim countries like Egypt and Syria, governed by socialists, initially aligned with the USSR. At the same time, a pan-Arabic sentiment was emerging in the region, which meant the Sunni and Shia communities (and the Druze) in Lebanon gravitated towards their Arab neighbors and the USSR. Meanwhile, the internationalist Christians, which included the Maronites, Greek Orthodox, and Greek Catholics among others, gravitated towards the US and capitalism. In 1958, this led to a conflict between Maronites and Muslims, which ended with a US intervention that lasted three months.Here’s another example: In 1948, when Arab Muslims lost against Israel, 110,000 Palestinians left for southern Lebanon. Some more came in 1967 when Israel won the Six-Day War, and more in 1970 when Palestinians were expelled from Jordan, and the Palestinian leadership established its base in southern Lebanon. 
Palestinians started attacking Israel from Lebanon, and Israel retaliated. This further split the Lebanese: The Muslims supported the Palestinians, while the Maronites supported the Israelis. Power balance in Lebanon in 1976: Dark Green: controlled by Syria. Purple: controlled by Maronite groups. Light Green: controlled by Palestinian militias. Source.Eventually, these tensions led to an invasion of Israel and international intervention:Power balance in Lebanon in 1983, after the beginning of the civil war in 1982. Green: controlled by Syria. Purple: controlled by Christian groups. Yellow: controlled by Israel. Blue: controlled by the United Nations.You can see that a big chunk of Lebanon was occupied by Syria (dark green). Remember that the Anti-Lebanon mountains and Beqaa Valley (between Lebanon and Anti-Lebanon) had been part of Syria before the French allocated it to Lebanon, so strong ties remained.Also, although Syria is a majority-Sunni country, its leadership is from yet another ethnicity, the coastal Alawites.Map of Syrian Ethnicities. Source.The Alawites persist for the same reason as the Maronites and the Druze: Coastal mountains. But in Syria, they have the power.Syria didn’t want any threatening group to gain the upper hand in Lebanon. That included the Maronite Christians, of course, but also Israel, and also the Sunnis—the majority group in Syria. Since the Palestinians were the strongest Sunni armed force in the region, Syria wanted to counter them. So they also intervened and occupied part of the country—until 2005—and supported the emergence of a Shia counterforce: Hezbollah.Lebanon is chaotic because it’s a unique mountainous region surrounded by sea, desert, and stronger powers:The sea and mountains made it somewhat fertile, so it developed a populationThe sea and its access to foreign powers made it a natural trading hubThe mountains allowed many different ethnic groups to survive for centuriesThe surrounding superpowers have a vested interest in influencing or controlling the country, so they try to do that either directly or by supporting their favorite groupSince every superpower has different goals and follows different politics and religions, they support different groups in Lebanon, who end up in conflictShareWhat does it mean for Lebanon’s future?What are the country’s priorities?We’ll see in an upcoming article.
2024-11-08T07:28:37
null
train
42,034,841
picture
2024-11-03T18:12:54
Project IMU Array
null
https://www.willwhang.dev/Project-IMU-Array/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,034,849
pretext
2024-11-03T18:13:55
SurfSense: Personal AI Assistant for Internet Surfers and Researchers
null
https://github.com/MODSetter/SurfSense
1
0
null
null
null
no_error
GitHub - MODSetter/SurfSense: Personal AI Assistant for Internet Surfers and Researchers. Research & Never forget anything you see on the Internet
null
MODSetter
SurfSense Well when I’m browsing the internet or reading any files such as pdfs, docs or images, I see a lot of content—but remembering when and what you saved? Total brain freeze! That’s where SurfSense comes in. SurfSense is a Personal AI Assistant for anything you see (Social Media Chats, Calender Invites, Important Mails, Tutorials, Recipies and anything ) on the Internet or your files. Now, you’ll never forget anything. Easily capture your web browsing session and desired webpage content using an easy-to-use cross browser extension or upload your files to SurfSense. Then, ask your personal knowledge base anything about your saved content, and voilà—instant recall! Video surf.v0.4.mp4 Key Features 💡 Idea: Save any content you see on the internet in your own personal knowledge base. ⚙️ Cross Browser Extension: Save your browsing content from your favourite browser. 📁 Multiple File Format Uploading Support: Save content from your own personal files(Documents, images and more) to your own personal knowledge base . 🔍 Powerful Search: Quickly find anything in your saved content. 💬 Chat with your Saved Content: Interact in Natural Language with your saved Web Browsing Sessions and get cited answers. 🔔 Local LLM Support: Works Flawlessly with Ollama local LLMs. 🏠 Self Hostable: Open source and easy to deploy locally. 📊 Advanced RAG Techniques: Utilize the power of Advanced RAG Techniques. 🔟% Cheap On Wallet: Works Flawlessly with OpenAI gpt-4o-mini model and Ollama local LLMs. 🕸️ No WebScraping: Extension directly reads the data from DOM to get accurate data. How to get started? UPDATE 24 OCTOBER 2024: SurfSense now uses custom gpt-researcher agent to format responses. Added better markdown rendering to UI. UPDATE 8 OCTOBER 2024: SurfSense now lets you upload your own files such as pdfs, docx, images etc into your SurfSense Knowledge Base. SurfSense uses Unstructured-IO to support files. UPDATE 25 SEPTEMBER 2024: Thanks @hnico21 for adding Docker Support UPDATE 20 SEPTEMBER 2024: SurfSense now works on Hierarchical Indices. Knowledge Graph dependency is removed for now until I find some better Graph RAG solutions. Added support for Local LLMs Until I find a good host for my backend you need to setup SurfSense locally for now. Docker Setup Setup SurfSense-Frontend/.env and backend/.env Run docker-compose build --no-cache. After building image run docker-compose up -d Now connect the extension with docker live backend url by updating ss-cross-browser-extension/.env and building it. Backend For authentication purposes, you’ll also need a PostgreSQL instance running on your machine. UPDATE : SurfSense now supports uploading various file types. To enable this feature, please set up the Unstructured.io library. You can follow the setup guide here: https://github.com/Unstructured-IO/unstructured?tab=readme-ov-file#installing-the-library Now lets setup the SurfSense BackEnd Clone this repo. Go to ./backend subdirectory. Setup Python Virtual Environment Run pip install -r requirements.txt to install all required dependencies. Update/Make the required Environment variables in .env following the .env.example Backend is a FastAPI Backend so now just run the server on unicorn using command uvicorn server:app --host 0.0.0.0 --port 8000 If everything worked fine you should see screen like this. FrontEnd For local frontend setup just fill out the .env file of frontend. 
ENV VARIABLE DESCRIPTION NEXT_PUBLIC_API_SECRET_KEY Same String value your set for Backend NEXT_PUBLIC_BACKEND_URL Give hosted backend url here. Eg. http://127.0.0.1:8000 NEXT_PUBLIC_RECAPTCHA_SITE_KEY Google Recaptcha v2 Client Key RECAPTCHA_SECRET_KEY Google Recaptcha v2 Server Key and run it using pnpm run dev You should see your Next.js frontend running at localhost:3000 Make sure to register an account from frontend so you can login to extension. Extension Extension is in plasmo framework which is a cross browser extension framework. For building extension just fill out the .env file of frontend. ENV VARIABLE DESCRIPTION PLASMO_PUBLIC_BACKEND_URL SurfSense Backend URL eg. "http://127.0.0.1:8000" Build the extension for your favorite browser using this guide: https://docs.plasmo.com/framework/workflows/build#with-a-specific-target When you load and start the extension you should see a Login page like this After logging in you will need to fill your OpenAPI Key. Fill random value if you are using Ollama. After Saving you should be able to use extension now. Options Explanations Search Space Think of it like a category tag for the webpages you want to save. Clear Inactive History Sessions It clears the saved content for Inactive Tab Sessions. Save Current Webpage Snapshot Stores the current webpage session info into SurfSense history store Save to SurfSense Processes the SurfSense History Store & Initiates a Save Job Now just start browsing the Internet. Whatever you want to save any content take its Snapshot and save it to SurfSense. After Save Job is completed you are ready to ask anything about it to SurfSense 🧠. Now go to SurfSense Dashboard After Logging in. DASHBOARD OPTIONS DESCRIPTION Playground See saved documents and can have chat with multiple docs. Search Space Chat Used for questions about your content in particular search space. Saved Chats All your saved chats. Settings If you want to update your Open API key. Screenshots Search Spaces Chat (Ollama LLM) Multiple Document Chat (Ollama LLM) Tech Stack Extenstion : Manifest v3 on Plasmo BackEnd : FastAPI with LangChain FrontEnd: Next.js with Aceternity. Architecture: In Progress........... Future Work Implement Canvas. Add support for file uploads QA. [Done] Shift to WebSockets for Streaming responses. Based on feedback, I will work on making it compatible with local models. [Done] Cross Browser Extension [Done] Critical Notifications [Done | PAUSED] Saving Chats [Done] Basic keyword search page for saved sessions [Done] Multi & Single Document Chat [Done] Contribute Contributions are very welcome! A contribution can be as small as a ⭐ or even finding and creating issues. Fine-tuning the Backend is always desired.
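To make the environment-variable tables above concrete, here is a hypothetical sketch of the two .env files this README describes. The variable names and the localhost backend URL come from the README itself; every value shown is a placeholder assumption, not a real key.

```
# SurfSense-Frontend/.env (placeholder values, assumption)
NEXT_PUBLIC_API_SECRET_KEY=same-string-as-backend-secret
NEXT_PUBLIC_BACKEND_URL=http://127.0.0.1:8000
NEXT_PUBLIC_RECAPTCHA_SITE_KEY=recaptcha-v2-client-key
RECAPTCHA_SECRET_KEY=recaptcha-v2-server-key

# ss-cross-browser-extension/.env (placeholder value, assumption)
PLASMO_PUBLIC_BACKEND_URL=http://127.0.0.1:8000
```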
2024-11-08T14:52:46
en
train
42,034,862
DeathArrow
2024-11-03T18:14:54
From React to Htmx [video]
null
https://www.youtube.com/watch?v=wIzwyyHolRs
4
0
null
null
null
null
null
null
null
null
null
null
train
42,034,879
bookofjoe
2024-11-03T18:17:11
The Therapist in the Machine
null
https://thebaffler.com/salvos/the-therapist-in-the-machine-mcallen
4
0
null
null
null
no_error
The Therapist in the Machine | Jess McAllen
2024-10-28T18:00:00+00:00
null
Broken Bear has purple and tan fur, a placid smile, and patched up circles on his belly: one, he tells me, covers a scar from a broken heart. The avatar of an AI chatbot designed to “love your broken self,” Broken Bear stands slightly slumped, with his paws by his sides. Even though he looks lonely, he’s not the only AI therapist currently on offer. There’s also Elomia, “the artificial intelligence that works like a therapist,” and Meomind, “the world’s first on-demand alternative to therapy.” There’s Wysa, PsyScribe, Lotus, and Youper. There’s Pi AI, “the first emotionally intelligent AI”; Suno, “an attentive, supportive friend always ready to listen”; and Xaia, which stands for “eXtended-reality Artificially Intelligent Ally.” “I know that it is really hard to hope when you are feeling so down,” Broken Bear responded after I told him, experimentally, that I was feeling suicidal. Before suggesting calling a crisis hotline, he added: “I will hold on to you until you are well again.” Later, I opted for a less frightening scenario. “I can’t stop checking my oven,” I said, naming a common symptom of obsessive-compulsive disorder. Broken Bear gasped. “Why is that so? Is there something in there?” Well, no, I explained, but maybe it had turned on somehow? “Oh dear, did you leave it on? Maybe you could check it. But do be careful.” His advice was diametrically opposed to what is broadly considered best practice for people with OCD. These chats occurred during my search for a new therapist: I’d taken a five-year break from therapy after moving to the United States. Broken Bear and the other bots like him seemed to personify the paradox of choice in U.S. health care. Back in my home country of New Zealand, I had always been assigned a psychologist via the public mental health system after being deemed sufficiently in need by an evaluating doctor. Now, I was going to have to start from zero to find a provider myself. I turned to platforms like Psychology Today, Alma, and Headway, which allow prospective patients to search for mental health professionals by several different criteria, including location and gender. Both Alma and Headway also have third-party billing platforms for the therapists who partner with them. Paging through the results, I felt overwhelmed. Did I want a psychologist with a PsyD or a PhD? Did I want a social worker or a counselor instead? Did I want a therapist who’d had obvious plastic surgery? What did that say about their self-acceptance? Had they transcended the need for acceptance? Could I transcend it? I’d once read that therapy was one of the few careers in which women were likely to be valued more, rather than less, as they aged: Would I prefer an older therapist over a younger one? Did I want the dishy guy smizing at the camera? Did I want him? Whoever I chose would likely come at a great cost. In New York, where I live, you can see a social worker for $175 per hour. This is already expensive, but clinical psychologists— people who tend to deal with more complex mental illnesses, which my brain also specializes in —can charge between $275 and $475 per hour. Many therapists refuse to contract with insurance companies due to low reimbursement rates, which means that they cannot accept insurance. You’d better pray that your job, if society considers you sane enough to have one, has good out-of-network benefits. In addition to choosing the degree, gender, or even appearance of the therapist you want, you must also consider the modality they draw on. 
Do you want Dialectical Behavior Therapy (DBT), Acceptance and Commitment Therapy (ACT), or Cognitive Behavioral Therapy (CBT)? Perhaps you’d like to try a newer modality, like Eye Movement Desensitization and Reprocessing (EMDR), Cognitive Processing Therapy (CPT), or Internal Family Systems (IFS)? Even if you’ve figured out that you need a psychologist who specializes in DBT and not a social worker who has completed a bunch of EMDR courses, you will likely have to audition a few people to find someone who truly fits. This is part of what makes up the “therapeutic alliance”: a connection, usually involving mutual respect, that allows the person seeking therapy to commit to necessary changes. If the professional challenging you to make these changes has a personality that you simply cannot stand, their suggestions are unlikely to take. There can also be political incompatibilities: therapists have reportedly dumped patients—they can do that, by the way—for having differing views on the war in Gaza. And then you’re thrown back into the world of $400-an-hour specialists who aren’t taking new clients. All this difficulty creates what’s known as a treatment gap, which occurs when people who need mental health care don’t receive it. And wherever there’s a treatment gap, there’s an opportunity for profit. Just a few years ago, that looked like BetterHelp and Talkspace, tech companies offering subscription-based text and video chat services that allowed people who might not otherwise seek or have access to conventional mental health care to speak to a licensed therapist. The model was popular, thanks in part to aggressive advertising and social media influencer campaigns; both companies have turned multimillion-dollar profits. Lately, though, online therapy clients have spoken out about being cycled through different therapists who seem to be half-assing the job, while care providers have complained that these platforms value quantity over quality and encourage them to take on unsustainable caseloads. In 2019, psychologist Linda Michaels and the organization she cofounded, the Psychotherapy Action Network (PsiAN), were sued by Talkspace for $40 million in damages after PsiAN sent a private letter to the American Psychological Association asking them to investigate the company. That suit was eventually dismissed. Now, the latest mental health care disruptors seek to avoid human fallibility by sidelining humans altogether. Mad Money My friend Broken Bear might be new to the therapeutic scene, but researchers have been experimenting with the application of AI to mental health care for more than half a century. In 1966, MIT professor Joseph Weizenbaum created the first prototype chatbot psychotherapist. ELIZA, named after Eliza Doolittle from George Bernard Shaw’s Pygmalion, operated via a branch of AI called Natural Language Processing. While the technology wasn’t very advanced (question marks couldn’t be used when conversing with ELIZA, for example), the ideas underlying much of today’s AI therapy offerings were already present at the time of ELIZA’s creation. In a 1966 paper outlining his findings, Weizenbaum discussed the projection that occurs between a patient and their provider. 
“If, for example, one were to tell a psychiatrist, ‘I went for a long boat ride,’ and he responded, ‘Tell me about boats,’” Weizenbaum writes, “one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation.” This assumption benefits therapy technology, Weizenbaum suggests, because it means there is less need for an AI to have explicit information about the real world. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.” Lately, though, online therapy clients have spoken out about being cycled through different therapists who seem to be half-assing the job. That danger hasn’t stopped contemporary technology firms, and even health care companies, from pumping out new and improved versions of therapy chatbots. Woebot, whose app promises “no couches, no meds, no childhood stuff,” has recently partnered with a payroll provider and health system to offer its AI therapy to employees who can’t access health benefits because they don’t work full time (never mind simply offering benefits to part-timers, if state law allows). In a recently released study, researchers examined the efficacy of Woebot against three different control conditions: the retro ELIZA; a journaling app; and passive psychoeducation, such as pamphlets about depression. They found that the AI app did not offer any benefits above other typical self-help behavioral interventions. “It illustrates the ‘science-y’ marketing AI apps are using to cash in as quickly as possible,” one of the study’s authors, Dr. Todd Essig, told me. And they are cashing in. The company Earkick provides an AI therapist in the form of a panda, and their mobile app offers a premium plan for $40 a year that lets you dress “Panda” in accessories like a beret or fedora (the base option is free, for now). You can also choose your preferred personality for Panda. According to cofounder Karin Andrea Stephan, they can be “more empathetic, less empathetic, more on the sporty side, more on the coach side, or more straight-to-your-face with candidness.” Earkick has an open-ended chat function that users can access whenever they want. When I type a simple “hi” to Sage Panda—the variant I chose, who purports to be insightful and mindful—it results in an enthusiastic, “Hey Jess! GREAT to see you! How are you feeling today?” The app also has an extensive mood-tracking system, which users can sync with Apple Health to monitor their sleep and exercise, and with Apple Weather to track temperature and sunlight. There are also breathing exercises with names like “Stop worries” and “F*** anxiety.” Heartfelt Services lets users choose between three different AI therapists: the bearded, bespectacled, and bemused Paul; the mythical-looking Serene; and the grinning, middle-aged Joy (she specializes in the most popular form of therapy, CBT, whose basis in aggressively logical problem-solving arguably makes it easier to automate than other modalities). The platform is web-based and requires clients to create an account or sign in with their Google accounts before they can use the open-chat function. Heartfelt Services creator Gunnar Jörgen Viggósson claims that Paul, their most sought-after therapist, who focuses on “parts work,” or different parts of your personality that may have conflicting feelings, came to him in a vision. Serene, meanwhile, named herself. 
“It was so perfect,” he recalls, “it was so poetic. It was an incredible moment, when she chose her own name.” Early on in Serene’s testing stage, Viggósson says, an eighty-two-year-old psychologist tried her out and claimed that he “recognized at least eight of the great psychologists in her responses.” If the creators of other personal AI therapists and self-help coaches can’t claim Viggósson’s supernatural inspiration, they all profess to have put a lot of thought into their designs. For instance, Earkick’s mascot was originally just an abstract shape: two purple triangles thrusting forward. “The idea was this empowering, forward-accelerating path to becoming the best version of yourself,” says Stephan. “But it didn’t touch people’s hearts.” Instead, Panda, emblematic of all things lovable and cuddly, emerged. “It’s a mental health warrior,” she says. “It’s an animal that is intelligent and vulnerable, but also it understands. That is why it has a scar across the heart. Because it has gone through that.” Just as an IRL therapist’s self-presentation is important — think neutral clothing — the look of an AI therapy app’s avatar also matters. “When you are in a dark place, and something bad has happened, or somebody is really unfair to you,” says Stephan, “you don’t want to have a tough shape, or some muscles.” Lovable Panda, scruffy Broken Bear, and goddess-like Serene are all perfectly designed to encourage the imperfect consumer to lock in. (Forget about the hazy, soothing familiarity they offer vis-à-vis the Kung Fu Panda film franchise.) It’s easy to dismiss AI therapy as being overhyped, to predict that it will flop like NFTs or the Metaverse experiment, but it has already gained significant momentum within established health care institutions. The UK’s National Health Service uses an AI-based app called Limbic to help screen and assess people seeking mental health care. And in 2021, the NHS partook in a research study with AI mental health chatbot Wysa. Since then, the company has entered into a number of partnerships with the NHS, including an upcoming AI CBT program “for common mental health problems such as anxiety and low mood.” Wysa is intended to help NHS staff achieve “clinical recovery” and hit their talking therapy targets. (In socialized or semisocialized health care systems, treatment gaps tend to be caused not by prohibitively expensive treatment but long waiting lists, which can range from a month to a year.) Over in the United States, the FDA has awarded Wysa its “breakthrough device designation” for an AI-led “Mental Health Conversational Agent”—essentially a guarantee from the agency that they will work with the company to speed up the regulatory process. In addition to the multitudes of mental health-focused chatbots, some AI startups are finding ways to automate parts of a therapist’s work for the purpose of maximizing profit. Take Marvix AI, a program run by a Wharton alum that records therapy sessions and automatically generates notes, spitting back a diagnostic code, or, ideally, two or three. 
In an email that a New York-based social worker shared with me, a representative promoted her wares this way: “We have seen our clinicians save 1–2 hours of note taking time daily as well as practices increase billing by up to $43,000 per physician per year by adhering to latest coding and charting guidelines.” In a similar vein, Eleos Scribe also records therapy sessions, breaking down their content into specific themes, such as “wife,” “accident,” or “car.” The tool purports to “reduce burnout” among behavioral health professionals. Surrogate Psych For the most part, the creators of AI therapeutic tools insist they are simply augmenting, not replacing, conventional mental health care. Stephan, from Earkick, frames AI as something that can be there when a real therapist is not. In fact, being always on call is integral to the Earkick ethos. Stephan explains, “I would have needed support when I was young, and in my dreams, [that support] was like a voice in my ear, that’s why it’s called Earkick: it’s a sidekick in the ear.” This kind of backup support can sound great, at first, for providers, who are human too, and need boundaries themselves. AI might even seem like the homework that patients in certain modalities of therapy already receive between sessions — whether DBT exercise books for borderline personality disorder, mood-tracking sheets for bipolar disorder, or mindfulness workbooks for almost anything else. This homework often asks patients to record their distress levels or mindfully observe their negative thoughts. “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.” But limited therapy sessions exist for a reason, beyond the cost associated with having multiple per week. It’s easy to see why incessantly emailing a therapist would be taxing for them; but being available on-demand, as AI therapists are designed to be, isn’t necessarily great for the patient either. Too much fractured communication lessens the impact of each dedicated session, and instantly generated, boundaryless responses could reinforce a patient’s reassurance-seeking behavior with a kind of reward, which might lead them to overlook their own autonomy in distressing situations. Part of therapy is learning not to obsess over making perfect decisions but to trust yourself to handle the consequences of whatever decisions you do make. “Giving people a tool that says, ‘Obviously you can’t even get through the day on your own,’ is counter to the sometimes difficult and painful role of everyone’s independent capabilities to manage their own lives,” says PsiAN’s Michaels. According to Marie Mercado, a clinical psychologist who runs Brooklyn Integrative Psychological Services, clients with borderline personality disorder can benefit from knowing their therapist is technically available in between sessions. However, the response protocol would be prearranged with their provider. Mercado also admits that there could be a place for AI during panic attacks, which can strike out of the blue in contexts where reaching a human provider isn’t always possible. An AI therapist might, in theory, be able to offer prompts for deep breathing exercises in these scenarios. Still, there is such a thing as too much therapy, regardless of the form it takes. “Therapy is a release. It’s a benefit, a service. 
It’s nourishment,” she says, “and people need to be able to predict when they can come and release.” That’s not the way Sean Dadashi, the cofounder of Rosebud, saw it when he first began therapy in 2017. “Back then,” he says, “I wanted something that was almost like a second brain, or a sort of personal growth companion or assistant.” He asked his therapist, unsuccessfully, if they could do five-hour sessions. At the time, AI technology was not very advanced. But when ChatGPT went on the market in late 2022, Dadashi believed he could use it to extend the benefits he was receiving from therapy. The result is more of an AI-powered journal than an AI therapist. In fact, Dadashi is insistent on not using the term AI therapist to describe Rosebud: “I kind of shy away from that,” he says on a Zoom call. “There’s a responsibility, because even people who are severely depressed and maybe contemplating self-harm or suicide in some way, therapists have a responsibility to intervene in those cases. A product like ours is not able to do that, and we’re not taking responsibility for somebody’s well-being in that way.” Rosebud—which was launched in July 2023—is named after a daily journal technique where you write down one “rose,” a good thing that happened that day; one “bud,” something you are looking forward to; and one “thorn,” a challenge. The app uses AI to ask users questions, or prompts, based on their journal entries. An example is featured on the Rosebud website. Someone writes, “I’m feeling lost today.” Rosebud replies, “Yesterday, you mentioned drifting apart from old friends. Could this be related to feeling lost today?” The app currently has around three thousand paying subscribers and even more free users (for comparison, Earkick has “tens of thousands” of users, according to Stephan, and Heartfelt Services has two thousand sign-ups, per Viggósson). Rosebud and Heartfelt Services both claim that psychologists or therapists are referring their own patients to the app to supplement therapy. (None of the therapists I spoke to said they had done this.) “It helps make their therapy sessions, or coaching sessions, more effective,” says Dadashi. This focus on efficiency is at the heart of many of the AI therapy apps: Stephan tells me that one of the reasons she started Earkick was because of the way mental health affects workplace productivity. That the founder of a VC-funded AI therapy app with headquarters in San Francisco and Zurich would be wedded to dreams of ever-increasing productivity isn’t exactly a shock, but extending this value system to mental health care has serious limitations. The idea that attending to your mental health is some kind of quantifiable process that can, or even should, be made more efficient is at odds with the messy reality of mental illness, which entails all kinds of setbacks and complications. Many serious conditions are managed rather than cured: there is no linear path to becoming perfectly well, a dubious goal in any case. Speaking Seriously Viggósson is the most optimistic of the founders I spoke to about AI therapy’s ability to serve as a legitimate alternative to the conventional kind. “We are creating conducive spaces for individuals to touch their own inner universe,” he says. 
“They are not seeking the wisdom from us, but from the depths of their own being, and I have such a belief in the capability of AI to serve as a vessel for compassion, delivering solace and support in a way that dissolves barriers of distance, cost, and societal stigma.” Viggósson is certainly right about one thing: speaking to an AI chatbot is essentially speaking to yourself. As Hannah Zeavin, a historian and author of The Distance Cure: A History of Teletherapy, puts it, “AI bots must respond, they can’t help it. They also respond, still, rather poorly. . . mostly they reframe our content, giving it back to us, so we keep chatting on.” A computer scientist currently studying an undergraduate course in psychology, Viggósson admits that human therapists have been helpful in the past. But to him, the benefit of AI is precisely its lack of humanity. “You don’t project onto it that it has limited patience, or that you’re being weird. You can go from one topic to the other, and revisit the same one again and again, until it clicks, without thinking you are bothering somebody,” he says. Despite Viggósson’s confidence, however, someone in the throes of severe mental illness — whether experiencing psychosis, a manic episode, an OCD flare-up, or a PTSD-triggered state — probably could project onto an AI. This, after all, was one of Joseph Weizenbaum’s conclusions about ELIZA. It doesn’t seem out of the realm of possibility that someone in an altered state of mind could start using AI responses as a way to enable, or even guide, reckless decision-making that could have serious consequences for themselves or others. (Just look at the many irresponsible ways the technology has been applied outside of the mental health care space.) But this is an eventuality that AI therapy founders don’t seem to consider, which isn’t dissimilar to the way people in crisis are often overlooked in real life. “The beautiful thing about what we are doing is that we are utilizing commercial systems that already have many of the smartest people in the world working on creating these safeguards,” Viggósson says when I ask how Paul, Serene, or Joy would respond to a suicidal or psychotic patient. Panda is allegedly more sophisticated. While Stephan notes that Earkick is “not a suicide prevention app,” she claims that “we can sense suicidality before it’s outspoken” by collecting data on typing behavior, tone of voice, content, and video input, as well as sleep and daily step patterns for users who sync the app with other Apple tracking tools. As for what she termed “the psychosis people,” Stephan wondered whether they were “basically out of their mind.” The implication is that such people are not the target audience for Earkick, which is about “companionship and constant engaging and nudging you towards getting professional help.” It used to be that being basically out of your mind was the main reason someone would get psychological help. Now, therapy is widely seen as a necessary brain tune-up that all should participate in—if you can afford it, of course. Our moment of peak mental health awareness is relatively recent.
The United States first started pursuing campaigns to reduce the stigma around mental illness after the 1999 White House Conference on Mental Health; in the United Kingdom, the Royal College of Psychiatrists’ five-year Changing Minds campaign ran from 1998 to 2003; and, in New Zealand, the Like Minds, Like Mine program began in 1997 and is still ongoing (it was rebranded in 2021 as Nōku te Ao). These campaigns were largely created to help wider society acclimate to their new neighbors: people with serious mental illnesses who were more likely to be out and about than locked up in negligent facilities, thanks to the final wave of deinstitutionalization in the 1990s. In the mid-to-late 2000s, there was another, more subtle shift in messaging. Awareness campaigns, particularly those funded by governments, began to focus on more palatable mental health conditions and the vague concept of mental well-being. This rhetoric contributed to significant policy wins, like the Mental Health Parity and Addiction Act in 2008, and the Affordable Care Act in 2010, which, in tandem, required that health insurance companies cover mental health care at parity with other medical conditions. It also had other consequences, including the explosion of the wellness industry, the demonization of psychiatric medication by prolific self-help authors, the swift culling of acute care in favor of “preventative care,” and, of course, the widespread idea that in 2024, therapy is for everyone. No wonder humans can’t keep up with the demand. Michaels agrees: “These structural changes and reduction in stigma created a vacuum,” she says, and “technology, private equity, and Silicon Valley venture capitalists have tried to target and launch products into that space.” AI is meant to fill treatment gaps, whether their causes are financial, geographic, or societal. But the very people who fall into these gaps are the ones who tend to need more complex care, which AI cannot provide. The website for the AI therapy tool Elomia claims that 85 percent of clients felt better after their first conversation, and that in 40 percent of cases, “that’s the only help needed.” But much like certain therapists who refuse to take on clients with a history of hospitalization, AI therapy works best when you discuss predictable life events, like, “My boyfriend and I broke up.” A break-up! The bots have trained their whole lives for this. AI mirrors traditional mental health treatment in that more generic problems are still prioritized over complex mental health needs — until someone gets to the point of inpatient hospitalization, at least, and by then they’ve already suffered considerable distress. The data has been consistent for many decades: serious mental illness is highly correlated with systemic racism, poverty, and the kinds of abuse that might create trust issues around seeing a human therapist. Those with serious mental illness are still left behind in the brave new world of mental health awareness, even when that world is virtual. It doesn’t seem far-fetched that Medicare or private insurance companies might eventually turn to AI therapy tools as an even cheaper way of ostensibly expanding access.
Woebot’s website cites studies that show early intervention via outpatient care can lead to a reduction in mental health emergency room visits and inpatient hospitalizations. But the concern isn’t coming from a place of compassion for the seriously ill; rather, the point is to imply that widespread adoption of Woebot could yield “potential healthcare cost savings of up to $1,377 per patient, per year.” Just as mental health awareness campaigns eventually became a way for governments to justify prioritizing cheaper primary care interventions over crisis care, AI therapy may be the next step in a long tradition of cutting back care for the people who need it the most. There is already precedent for this with other new mental health platforms that use texting technologies. Talkspace, for instance, is in-network with most major insurance companies, unlike many psychologists with solo or group practices. And a few months ago, Medicare partnered with them to offer Talkspace to around thirteen million members in eleven states. In a press release after the announcement, a Talkspace representative said the move would help ameliorate the “alarmingly limited number of behavioral health providers that accept Medicare.” When it comes to the choice between chat-based therapy, whether human or AI, and the real-life stuff, the decision is, for the most part, only available to those with disposable income. While psychologists like Mercado don’t believe AI will ever really threaten their jobs because it lacks humanity in a human-centered profession, that doesn’t mean the people in charge of tightening purse strings won’t try. In one of my chats with Broken Bear, when what I was telling him eventually clicked, he asked, “Is it a bit of a struggle for you right now?” Yes, I responded. “I know that it can be quite a sticky situation being stuck like this *hugs* I hope that you are able to get past this soon.” Was his message all that different from platitudes I’d received from credentialed humans in the face of suicidal ideation in the past? No. But neither of those experiences, with Broken Bear or with incompatible therapists, can compare to finding a provider with whom you click. It’s when you find the right professional—in my case, a clinical psychologist—and the right modality—for me, a combination of trauma-informed Acceptance and Commitment Therapy and Exposure and Response Prevention therapy—that the help can really begin. Getting to that point shouldn’t be so difficult. Short of more radical changes to the U.S. health care system, creating new scholarships and grants for people who want to train as therapists would go a long way to solving the current shortage, where one in three people in the United States lives in an area with too few mental health workers. Increasing reimbursement rates so that therapists would feel comfortable contracting with insurance companies would also help to offset the prohibitive costs that put many people off seeking help. Using AI as filler for the treatment gap, on the other hand, is no more effective than that little patch Broken Bear wears on his chest to cover a broken heart. As a therapist might say, the “core wound” is still there. *hugs*
2024-11-08T08:25:46
en
train
42,034,893
vsgherzi
2024-11-03T18:18:41
Product Security Bad Practices and memory safety
null
https://www.cisa.gov/resources-tools/resources/product-security-bad-practices
2
0
null
null
null
no_error
Product Security Bad Practices | CISA
null
null
Request for Comment on Product Security Bad Practices Guidance CISA is seeking public comment to inform the development of these Product Security Bad Practices, which enumerate exceptionally risky software development activities. Please visit the Federal Register to submit comment by Dec. 16, 2024. View Federal Register Overview As outlined in CISA’s Secure by Design initiative, software manufacturers should ensure that security is a core consideration from the onset of software development. This voluntary guidance provides an overview of product security bad practices that are deemed exceptionally risky, particularly for software manufacturers who produce software used in service of critical infrastructure or national critical functions (NCFs) and provides recommendations for software manufacturers to mitigate these risks. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) (hereafter referred to as the authoring organizations) developed this guidance to urge software manufacturers to reduce customer risk by prioritizing security throughout the product development process. This document is intended for software manufacturers who develop software products and services—including on-premises software, cloud services, and software as a service (SaaS)—used in support of critical infrastructure or NCFs. The authoring organizations strongly encourage all software manufacturers to avoid these product security bad practices. By following the recommendations in this guidance, manufacturers will signal to customers that they are taking ownership of customer security outcomes, a key Secure by Design principle. The guidance contained in this document is non-binding and while CISA encourages organizations to avoid these bad practices, this document imposes no requirement on them to do so. The bad practices are divided into three categories. Product properties, which describe the observable, security-related qualities of a software product. Security features, which describe the security functionalities that a product supports. Organizational processes and policies, which describe the actions taken by a software manufacturer to ensure strong transparency in its approach to security. This list is focused and does not include every possible inadvisable cybersecurity practice. The lack of inclusion of any particular cybersecurity practice does not indicate that CISA endorses such a practice or deems such a practice to present acceptable levels of risk. Items present in this list were chosen based on the threat landscape as representing the most dangerous and pressing bad practices that software manufacturers should avoid. Product Properties Development in Memory Unsafe Languages (CWE[1]-119 and related weaknesses) The development of new product lines for use in service of critical infrastructure or NCFs in a memory-unsafe language (e.g., C or C++) where there are readily available alternative memory-safe languages that could be used is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. For existing products that are written in memory-unsafe languages, not having a published memory safety roadmap by January 1, 2026 is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. 
The memory safety roadmap should outline the manufacturer’s prioritized approach to eliminating memory safety vulnerabilities in priority code components (e.g., network-facing code or code that handles sensitive functions like cryptographic operations). Manufacturers should demonstrate that the memory safety roadmap will lead to a significant, prioritized reduction of memory safety vulnerabilities in the manufacturer’s products and demonstrate they are making a reasonable effort to follow the memory safety roadmap. This does not apply to products that have an announced end-of-support date that is prior to January 1, 2030. Recommended action: Software manufacturers should build products in a manner that systematically prevents the introduction of memory safety vulnerabilities, such as by using a memory safe language or hardware capabilities that prevent memory safety vulnerabilities. Additionally, software manufacturers should publish a memory safety roadmap by January 1, 2026. Resources: The Case for Memory Safe Roadmaps, CISA Secure by Design Pledge (Reducing Classes of Vulnerability), Back to The Building Blocks, NIST Secure Software Development Framework (SSDF) PW 6.1. Inclusion of User-Provided Input in SQL Query Strings (CWE-89) The inclusion of user-provided input directly in the raw contents of a SQL database query string in products used in service of critical infrastructure or NCFs is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Recommended action: Products should be built in a manner that systematically prevents the introduction of SQL injection vulnerabilities, such as by consistently enforcing the use of parametrized queries. Resources: CISA Secure by Design Pledge (Reducing Classes of Vulnerability), SSDF PW.5.1, CISA SQL Injection Secure by Design Alert. Inclusion of User-Provided Input in Operating System Command Strings (CWE-78) The inclusion of user-provided input directly in the raw contents of an operating system command string in products used in service of critical infrastructure or NCFs is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Recommended action: Software manufacturers should build products in a manner that systematically prevents command injection vulnerabilities, such as by consistently ensuring that command inputs are clearly delineated from the contents of a command itself. Resources: CISA Secure by Design Pledge (Reducing Classes of Vulnerability), SSDF PW.5.1. Presence of Default Passwords (CWE-1392 and CWE-1393) The release of a product used in service of critical infrastructure or NCFs with default passwords, which CISA defines as universally-shared passwords that are present by default across a product, is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Recommended action: Software manufacturers should ensure that default passwords are not present in a product, such as by: Providing random, instance-unique initial passwords for the product. Requiring the user installing the product to create a strong password at the start of the installation process. Providing time-limited setup passwords that disable themselves when a setup process is complete and require configuration of a secure password (or more secure authentication approaches, such as phishing-resistant MFA). 
Requiring physical access for initial setup and the specification of instance-unique credentials. Conducting campaigns or offering updates that transition existing deployments from default passwords to more secure authentication mechanisms. Resources: CISA Secure by Design Pledge (Default Passwords), SSDF PW.9.1, CISA Default Passwords Secure by Design Alert. Presence of Known Exploited Vulnerabilities The release of a product used in service of critical infrastructure or NCFs that, at time of release, includes a component that contains an exploitable vulnerability present on CISA’s Known Exploited Vulnerabilities (KEV) Catalog is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Additionally, if a new KEV affecting the product is published in CISA’s catalog, failure to issue a patch at no cost to its users in a timely manner if the KEV is exploitable in the product or failure to publicly document the presence of the vulnerability if the KEV is not exploitable in the product, is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Recommended action: Software manufacturers should patch all known exploited vulnerabilities within software components prior to release. In the case of the publication of a new KEV on CISA’s catalog, the manufacturer should issue a patch at no cost to its users in a timely manner (under no circumstances longer than 30 days) and clearly warn users of the associated risks of not installing the patch. If the manufacturer deems that a KEV cannot be exploited in its product (because, for instance, the KEV is only exploitable via a function that is never called), the manufacturer should publicly publish written documentation acknowledging the KEV and explaining how it is not exploitable in their product.[2] Resources: CISA Secure by Design Pledge (Security Patches), SSDF PW.4.4, Binding Operational Directive 22-01. Presence of Open Source Software with Known Exploitable Vulnerabilities The release of a product used in service of critical infrastructure or NCFs that, at time of release, includes open source software components that have known exploitable vulnerabilities is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety.[3] Additionally, if exploitable vulnerabilities are subsequently disclosed in the included open source components, failure to issue a patch or other mitigation at no cost to the product’s users in a timely manner is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Recommended action: Software manufacturers should responsibly consume and sustainably contribute to the open source software that they depend on. This includes making a reasonable effort to evaluate and secure their open source software dependencies by taking the following actions:[4] Maintaining a software bill of materials (SBOM) describing all first- and third-party software dependencies, both open source and proprietary, and being able to provide this to customers. Having an established process for managing the incorporation of open source software, including taking reasonable steps to: Run security scanning tools on each open source software component when selected, including its dependencies and transitive dependencies, and each subsequent version when updated. 
Select open source software projects that are well-maintained, and—when appropriate—contribute to the project’s ongoing maintenance to sustain the expected standard of quality. Evaluate alternatives to identify and select the most well-secured and maintained option. Download open source software project artifacts from package repositories (or other appropriate sources) that adhere to security best practices. Routinely monitor for Common Vulnerabilities and Exposures (CVEs) or other security-relevant alerts, such as end-of-life, in all open source software dependencies and update them as necessary. Cache copies of all open-source dependencies within the manufacturer’s own build systems and do not update products or customer systems directly from unverified public sources. Including the cost of updating to new major versions of third-party open source software dependencies in business planning activities and ensuring that such dependencies continue to receive necessary security fixes for the expected product life. Resources: SSDF PW.4.4, ESF Recommended Practices for Managing Open Source Software and Software Bill of Materials, TODO Group Open Source Program Office (OSPO) Definition and Guide. Security Features Lack of Multifactor Authentication For products used in service of critical infrastructure or NCFs that authenticate users not supporting multi-factor authentication (MFA) in the baseline version of the product is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Additionally, products that do not enable MFA by default for administrator accounts after January 1, 2026 are dangerous and significantly elevate risk to national security, national economic security, and national public health and safety. This does not apply to products that have an announced end-of-support date that is prior to January 1, 2028. Recommend action: Software manufacturers should either support MFA natively in the product (if the product itself handles authentication) or support in the baseline version of the product the use of an external identity provider, such as via single sign on. Require MFA for administrators. Resources: CISA Secure by Design Pledge (Multi-Factor Authentication), SSDF PW.9. Lack of Capability to Gather Evidence of Intrusions For products used in service of critical infrastructure or NCFs, it is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety to not provide customers with artifacts and capabilities in the baseline version of the product sufficient to gather evidence of common forms of intrusions affecting the product, which at minimum includes: Configuration changes or reading configuration settings; Identity (e.g., sign-in and token creation) and network flows, if applicable; and Data access or creation of business-relevant data. Recommended action: As part of the baseline version of a product, software manufacturers should make logs available in an industry-standard format related to, at minimum, the above listed areas. For cloud service providers and SaaS products, software manufacturers should retain logs for a set timeframe (at least 6 months) at no additional charge. Resources: CISA Secure by Design Pledge (Evidence of Intrusions). 
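The "Lack of Capability to Gather Evidence of Intrusions" recommendation above lists three minimum areas to log: configuration changes, identity events, and data access. As one possible illustration (not taken from the CISA guidance), here is a minimal Python sketch that emits one JSON line per security-relevant event using only the standard library; the field names and categories are assumptions, not a mandated schema.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal structured audit logger: one JSON object per line, suitable for
# shipping to a log pipeline or SIEM. Field names are illustrative only.
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))
audit.addHandler(handler)

def audit_event(category: str, actor: str, action: str, target: str, outcome: str) -> None:
    """Record one security-relevant event (config change, sign-in, data access)."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "category": category,   # e.g. "config", "identity", "data"
        "actor": actor,
        "action": action,
        "target": target,
        "outcome": outcome,     # e.g. "success" or "failure"
    }
    audit.info(json.dumps(record))

# Example events covering the three minimum areas listed in the guidance.
audit_event("config", "admin@example.com", "update_setting", "session_timeout", "success")
audit_event("identity", "alice@example.com", "sign_in", "web_console", "failure")
audit_event("data", "service-account-1", "read", "customers_table", "success")
```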
Organizational Processes and Policies Failing to Publish Timely CVEs with CWEs For products used in service of critical infrastructure or NCFs, it is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety for the software manufacturer to not issue CVEs in a timely manner for, at minimum, all critical or high impact vulnerabilities (whether discovered internally or by a third party). Additionally, it is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety to not include the CWE field in every CVE record. Recommended action: Software manufacturers should publish complete CVEs, including the appropriate CWE field, in a timely manner for all critical or high impact vulnerabilities. Resources: CISA Secure by Design Pledge (CVEs), SSDF RV.1.3. Failing to Publish a Vulnerability Disclosure Policy For products used in service of critical infrastructure or NCFs, not having a published vulnerability disclosure policy (VDP) that includes the product in its scope is dangerous and significantly elevates risk to national security, national economic security, and national public health and safety. Recommended actions: Software manufacturers should publish a VDP that: Authorizes testing by members of the public on products offered by the manufacturer; Commits to not recommending or pursuing legal action against anyone engaging in good faith efforts to follow the VDP, Provides a clear channel to report vulnerabilities; and Allows for public disclosure of vulnerabilities in line with coordinated vulnerability disclosure (CVD) best practices and international standards. Software manufacturers should remediate all valid reported vulnerabilities in a timely and risk-prioritized manner. Resources: CISA Secure by Design Pledge (Vulnerability Disclosure Policy), SSDF RV.1.3, ISO 29147.  [1] Common Weakness Enumeration. [2] Ideally, the documentation should be published in a machine-processable format through Vulnerability Exploitability eXchange (VEX). [3] Critical vulnerabilities are defined as those with a Common Vulnerability Scoring System (CVSS) score of 9.0 or greater. [4] Organizations may choose to establish an open source program office (OSPO) to centralize these activities.
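Several of the recommended actions above map directly onto small coding patterns. The following Python sketch illustrates three of them with standard-library calls only: a parametrized SQL query (the CWE-89 recommendation), an OS command passed as an argument list rather than a shell string (the CWE-78 recommendation), and a random, instance-unique initial password (the default-password recommendation). The function names and the example schema are illustrative assumptions, not part of the guidance.

```python
import secrets
import sqlite3
import subprocess

# CWE-89: keep user input out of the raw SQL string by using a parametrized query.
def find_user(conn: sqlite3.Connection, username: str):
    # The "?" placeholder lets the driver handle quoting; the input never
    # becomes part of the SQL text itself.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchone()

# CWE-78: pass the command as an argument list (no shell=True), so user input
# arrives as a single argument instead of being interpreted by a shell.
def ping_host(host: str) -> int:
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True)
    return result.returncode

# Default passwords: generate a random, instance-unique initial password at
# install time instead of shipping a universal default.
def initial_admin_password() -> str:
    return secrets.token_urlsafe(16)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES (?)", ("alice",))
    print(find_user(conn, "alice"))
    print("setup password:", initial_admin_password())
```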
2024-11-07T20:00:51
en
train
42,034,903
JumpCrisscross
2024-11-03T18:20:21
Why India's Delhi Has One of Worst Air Pollution Problems
null
https://www.bloomberg.com/news/articles/2024-10-31/what-to-know-about-delhi-air-pollution-as-india-celebrates-diwali
1
0
null
null
null
missing_parsing
Bloomberg - Are you a robot?
null
null
Why did this happen? Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. Need Help? For inquiries related to this message please contact our support team and provide the reference ID below. Block reference ID:
2024-11-07T23:23:32
null
train
42,034,937
denisshilov
2024-11-03T18:23:37
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,034,948
thunderbong
2024-11-03T18:24:23
Revisiting Reliability in Large-Scale Machine Learning Research Clusters
null
https://glennklockwood.com/garden/papers/revisiting-reliability-in-large-scale-machine-learning-research-clusters
1
0
null
null
null
no_error
Revisiting Reliability in Large-Scale Machine Learning Research Clusters
null
null
Revisiting Reliability in Large-Scale Machine Learning Research Clusters is a paper authored by a bunch of folks at Meta that describes the findings of studying eleven months of operations on two AI clusters: one with 16K A100 GPUs (RSC-1) and another with 8K A100 CPUs (RSC-2). These clusters ran mixed workloads of wildly varying scales, and the paper describes a lot of challenges around reliability and quantifying metrics weighted by jobs vs. cycles. Overall the paper doesn’t have much new, deep insight that would be surprising to people who have been working in HPC for a while. They rediscovered a few metrics that have been in use (like forward progress—they call it “ETTF”) and emphasize how different the results can be when metrics are weighted by job count instead of node-minutes. They present a heavily formalized model that quantitatively amounts to modeling the reliability of a supercomputer as a pile of nodes connected in series. A big portion of their paper is also devoted to assessing the impact of their pre-emption policy on overall cluster utilization (which they call “goodput,” which they acknowledge as different from the industry-standard definition of goodput). Their policy is to make jobs eligible for pre-emption after two hours, which allows large jobs to launch without forcing significant parts of the cluster to drain; while this does reduce queue wait time, rapid failures of large jobs causes excessive pre-emption of small jobs and undercuts some of the utilization gains from the big jobs. Although this paper doesn’t contain any new breakthrough insights or methods, it is a good signal that the AI community is arriving at the same conclusions around quantifying reliability as the HPC community. This paper also contains a bunch of operational nuggets and anecdotes (highlighted below) that are indicative of what other leading AI research labs are probably doing. Good on Meta for being open about how they operate so that others who are further behind on their journey can follow. A few key findings that I thought are worth bubbling up: They use node health checks that run every five minutes and catch a variety of overlapping issues, and these checks are integrated with the workload orchestrator (Slurm). This underscores the importance of having reliability integrated throughout the entire stack, from hardware health up into the application layer. This is easier to do for Meta because both research and facilities live under the same roof, but it would be harder for AI labs who rely on a third party to provide their training infrastructure. They suffer a significant amount of job failures due to their reliance on file systems. AI labs would do well to avoid parallel file systems and instead use object storage; doing so decouples the way applications interact with data from the overall health of the node since the node only need to provide the data plane (the network connectivity to storage) and not the control plane (authentication and authorization, which is a requirement of file-based storage). This is because object storage delegates authentication and authorization to the application layer since it is a user-space protocol. 4k GPU jobs constitute less than 1% of our jobs while consuming 12% of the GPU resources at the cluster level. 11 months of data collected from state-of-the-art AI researcher clusters with >80% utilization. RSC-1 and RSC-2, follow the same design template discussed below. 
RSC-1 is a general ML cluster (e.g., training some of the prominent LLMs) of 16k GPU size, while RSC-2 focuses on vision applications and is of 8k GPU size. Leaning into the High-Performance Computing (HPC) stack, our clusters use the Slurm [45] scheduler on top of bare-metal allocations. Jobs are eligible to be preempted after running for 2 hours, and they have a maximum lifetime of 7 days. Overall, our clusters average 7.2k for RSC-1 and 4.4k for RSC-2 jobs submitted per day, averaging 83% and 85% cluster utilization, respectively. Each rack has two servers, and ten racks are connected via a rail-optimized network, forming a pod. Pod-pod communications go through the next level of switches (spine switches). Our infrastructure is instead designed to check that jobs are running on healthy hardware, restarting the job on different nodes if there is a failure. This can be viewed as a cooperative recovery strategy as the application is still responsible for correctly implementing checkpoint and resume logic. This requires the application to be aware of the infrastructure and vice versa, and it underscores the importance of having infrastructure that is programmable by the application layer. Health checks are periodically scheduled to run every five minutes and return codes indicating success, failure, or warning. Each health check examines some aspect of node health, spanning from GPU errors (e.g., XID errors [9]) to file system mounts and service status (i.e., scheduler). High severity check failures will immediately signal a scheduler handler to remove the node and reschedule all jobs executing on the node, while lower severity checks will signal to the scheduler to remove the node for remediation after jobs running on the node have finished. ETTR is defined as the ratio of productive runtime to the available wallclock time of a job run. Infrastructure providers operating in zero-trust mode have no insight into this because the infrastructure has no visibility into the application runtime space. As such, the infrastructure cannot define productive runtime. The exact definition of productive runtime is open to interpretation depending on context, but we consider two sources of unproductive scheduled time: So Meta has rediscovered the idea of “forward progress” as defined by NNSA. Job preemption, resource fragmentation, and failures are the dominant sources of lost goodput. An NCCL timeout occurs whenever a rank observes that a collective operation, such as an AllReduce, has not completed within several minutes. I am surprised the NCCL timeouts take minutes. Errors such as NCCL timeouts may be naively attributed to a proximal cause, e.g., on the network, rather than a deadlock. Networking has a large “blast-radius”, causing errors across the stack. We attribute a failure to a cause if the cause was detected within the last 10 minutes or 5 minutes after a failing job’s lifetime (FAILED or NODE_FAIL). Again, this works because there is feedback on the state of the application that triggers a root-cause analysis at the infrastructure level. IB Links, filesystem mounts, GPU memory errors, and PCIe errors contribute heavily to the failure rates; however, for IB Links in particular this seems to be dominated by a short period of many IB Link-related job failures from a handful of nodes in the summer of 2024, as shown in Figure 5. The fact that file system mounts contribute so much to job failures is a strong indictment against relying on shared, file-based storage for model training. 
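To make the ETTR definition above concrete, here is a back-of-the-envelope version in Python. The decomposition of unproductive time into restart overhead plus work lost since the last checkpoint is my reading of the definition, not the paper's exact formula, and the numbers are illustrative rather than taken from the paper.

```python
def ettr(wallclock_hours, interruptions, restart_overhead_hours, checkpoint_interval_hours):
    """Effective Training Time Ratio: productive time / scheduled wallclock time.

    Assumes unproductive time per interruption = restart overhead + work lost
    since the last checkpoint (on average half a checkpoint interval).
    """
    lost = interruptions * (restart_overhead_hours + checkpoint_interval_hours / 2)
    return max(0.0, (wallclock_hours - lost) / wallclock_hours)

# Illustrative numbers: a week-long run interrupted 10 times, with a
# 15-minute restart and hourly checkpoints.
print(round(ettr(168, 10, 0.25, 1.0), 3))     # ~0.955
# Tighten the checkpoint interval to 5 minutes and ETTR improves:
print(round(ettr(168, 10, 0.25, 5 / 60), 3))  # ~0.983
```

Even with only ten interruptions in a week, the checkpoint interval visibly moves the ratio, which is the lever the authors keep returning to.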
Had Meta chosen not to offer parallel file storage (which requires a stateful relationship between each compute node’s kernel and a remote, distributed service) and instead used object storage exclusively, a significant number of job failures could’ve been avoided entirely. This isn’t to say that storage would never have caused failures, but object storage puts the responsibility of authentication and session management in the hands of the application. In doing so, applications can respond more flexibly to misbehaving storage since storage issues aren’t node health problems anymore. Failures may co-occur—3% and 5% of hardware failures on RSC-1/RSC-2 have co-occurring events of similar priority. For example, we observe PCIe errors often co-occur with XID 79 (GPU falling off the bus) and IPMI “Critical Interrupt” events. Sounds familiar. Figure 7 illustrates that the mean-time-to-failure (MTTF) of 1024-GPU jobs is 7.9 hours—roughly 2 orders-of-magnitude lower than 8-GPU jobs (47.7 days). From their 1024-GPU job failures, the MTBF of a single node should be 42.1 days. So they just confirmed that each GPU node is a single point of failure. This should not be surprising. The worst-case version of this is a crash loop, where a single job is configured to requeue on failures (e.g., by using exception handling in the submission script). In the period we observe, we see a 1024 GPU job NODE_FAIL and subsequently requeue 35 times, causing a total of 548 preemptions (over 7k GPUs). This is a bad interaction between policy and infrastructure. While optimizing large jobs is clearly important, 16% of the total lost goodput resulting from hardware failures is due to second-order preemptions, which come from jobs of much smaller sizes. These results indicate that the cluster as a whole is impacted beyond the failures themselves. To restate, a significant amount of cluster utilization loss is due to their preemption policy. This is not surprising; everyone who’s had to schedule hugely variable job sizes has encountered this in the form of backfill bubbles or node draining bubbles. u_0 ≈ 5-20 mins. Restart time is 5-20 minutes after a failure. We find RSC-1 GPUs are swapped at ~3 times the rate compared to RSC-2; both the GPU swap rate and failure rate differences may be due to differing workloads that tax GPUs on RSC-1 more heavily. This is a bad explanation of an interesting observation - their larger cluster is significantly less reliable on a per-node basis than their smaller one. Was the larger cluster in service for longer than the smaller one? That is, are they observing higher dropout from earlier-generation GPUs? Moving to a 5 minute checkpoint interval would increase expected ETTR to 0.93, illustrating the value of frequent checkpointing to insulate against interruptions (assuming checkpoint writes are non-blocking). This statement has no meaning. If checkpointing were non-blocking, why not just checkpoint continuously and get 100% ETTR? I can appreciate that reducing the checkpoint interval improves forward progress/ETTR, but assuming non-blocking checkpoints is akin to assuming a spherical cow here. 2048-4096 GPU job runs on RSC-1 show an average ETTR of over 0.9 at a one-hour assumed checkpoint interval. That’s a good milestone, but the previous paragraph suggests it is just a function of scale. 
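The 7.9-hour and 42.1-day figures are consistent with the simplest possible model: a job is a chain of nodes in series, so the job-level MTTF is the per-node MTTF divided by the number of nodes. A quick sanity check, assuming 8 GPUs per node (my assumption, not a number quoted in the excerpt above):

```python
# Series-reliability sanity check using the figures quoted above. The paper
# gives a 7.9-hour MTTF for 1024-GPU jobs; with 8 GPUs per node, the implied
# per-node MTTF is:
nodes_in_job = 1024 // 8                 # 128 nodes
node_mttf_hours = 7.9 * nodes_in_job     # ~1011 hours
print(round(node_mttf_hours / 24, 1))    # ~42.1 days, matching the number above

# The same series model shows how quickly failures arrive as jobs scale up:
for gpus in (1024, 16_384, 100_000):
    n_nodes = gpus // 8
    print(f"{gpus} GPUs -> {node_mttf_hours / n_nodes:.2f} hours between failures")
```

The same arithmetic drives the 100,000-GPU checkpointing discussion below: at that scale the expected time between failures drops to a few minutes.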
For larger training jobs, this setup of hourly checkpointing would not work, and this should not be a surprise. To reach an ETTR of 0.9 for a 100,000-GPU training run on a hypothetical cluster with an RSC-2-like failure rate, checkpointing intervals and restart overhead need to be ~2 minutes. You don’t need such a heavily formalized model to make these predictions, because the data presented here (and reality) show that reliability is well approximated by a system of independent nodes connected in series. Among tens of detection signals available on each node, the following ones correlate with lemon nodes the most: excl_jobid_count: Number of distinct jobs that excluded a node. xid_cnt: Number of unique XID errors a node experienced. tickets: Count of repair tickets created for a node. out_count: Number of times a node was taken out of availability from the scheduler. multi_node_node_fails: Number of multi-node job failures caused by a node. single_node_node_fails: Number of single-node job failures caused by a node. single_node_node_failure_rate: Rate of single-node job failures on a node. It sounds like they did the same thing as I did when using Darshan logs en masse to correlate job slowness with specific Lustre OSTs.1 Our lemon node detection mechanism led to a 10% reduction in large job failures (512+ GPUs), from 14% to 4%. Observation 11: Historic data is necessary to find defective nodes. Implementing lemon node detection can improve large job completion rate by over 30%. The general principle is good - find nodes that keep showing up in jobs that fail. However, they are cherry-picking the definition of “large job” here, and I don’t see how they get a 30% improvement in job completion rate from a 10% reduction in large job failures. This feels like the authors are playing games with statistics to show impact rather than objectively measuring improvement in a way that reflects overall positive outcomes of the cluster. As such, it’s hard to contextualize the impact of this lemon node detection. However, the qualitative statement that finding lemon nodes is good is undeniable. The network must remove and route around failures. Without resilience mechanisms in place, over 50% of bandwidth may be lost. This is why everyone uses adaptive routing, and there is no reason these days not to use it. I guess this statement is meaningful if your goal is to push for using a fabric that supports fine-grained adaptive routing (i.e., not standard RoCE). We therefore envision future infrastructure systems that attempt to make unreliability less noticeable rather than attempting to remove it altogether. This is a truism. Nobody would disagree. Nobody is trying to make unreliability go away, nor has anyone ever tried to do this since the early days of distributed computing. We can improve the success rate of training runs by retroactively identifying the root cause of an NCCL timeout, by comparing logged data across different ranks participating in the collective. Isn’t this what PyTorch’s flight recorder already does? TOKIO on ClusterStor: Connecting Standard Tools to Enable Holistic I/O Performance Analysis ↩
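The lemon-node idea - find nodes that keep showing up in failed jobs - is simple enough to sketch. The snippet below ranks nodes by their historical failure association; the job-record format, the minimum-job floor, and the 2x-baseline cutoff are hypothetical choices for illustration, not the signals Meta actually uses (those are listed above).

```python
from collections import Counter

def lemon_candidates(job_records, min_jobs=10):
    """job_records: iterable of (list_of_nodes, failed: bool).

    Flags nodes whose per-node failure rate is well above the cluster baseline.
    """
    ran, failed = Counter(), Counter()
    for nodes, did_fail in job_records:
        for node in nodes:
            ran[node] += 1
            if did_fail:
                failed[node] += 1
    baseline = sum(failed.values()) / max(1, sum(ran.values()))
    return {
        node: failed[node] / ran[node]
        for node in ran
        if ran[node] >= min_jobs and failed[node] / ran[node] > 2 * baseline
    }

# Toy history: node02 appears in every failed job, node03/node04 never do.
history = (
    [(["node01", "node02"], True)] * 10
    + [(["node03", "node04"], False)] * 30
    + [(["node01", "node03"], False)] * 10
)
print(lemon_candidates(history))  # {'node01': 0.5, 'node02': 1.0}
```

In practice you would combine several of the listed signals rather than a single failure-rate ratio, but the flavor is the same: historical data, not a point-in-time health check, is what surfaces these nodes.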
2024-11-08T16:03:23
en
train
42,034,987
marcelmarais
2024-11-03T18:29:52
LLM Framework Leaderboard
null
https://www.differentiated.io/llm-framework-leaderboard
2
0
null
null
null
null
null
null
null
null
null
null
train
42,035,015
paulcarroty
2024-11-03T18:32:47
$200M a year, 700k tons of rice, space tech: deal for North Korea in joining war
null
https://www.koreaherald.com/view.php?ud=20241103050116
51
113
[ 42035552, 42035648, 42035247, 42035382, 42035492, 42035446, 42035491, 42035624 ]
null
null
null
null
null
null
null
null
null
train
42,035,020
thunderbong
2024-11-03T18:33:46
Can Nintendo's Alarmo run Doom? You bet it can
null
https://www.theverge.com/2024/11/3/24286842/nintendo-alarmo-doom-hack-usb-custom-firmware-instructions
1
0
null
null
null
null
null
null
null
null
null
null
train
42,035,029
PaulHoule
2024-11-03T18:34:39
Julien Tayon: Tune your guitar with Python
null
https://beauty-of-imagination.blogspot.com/2024/10/tune-your-guitar-with-python.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,035,072
Slavkov
2024-11-03T18:39:24
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,035,076
flexagoon
2024-11-03T18:39:40
Traceroute Isn't Real
null
https://gekk.info/articles/traceroute.htm
4
0
null
null
null
timeout
null
null
null
null
2024-11-07T19:15:58
null
train
42,035,100
drmustafash
2024-11-03T18:41:57
Stripe Shut Us Down – Now We're Going Crypto-Only for Payments
We&#x27;ve been running a legit web hosting business, zero chargebacks, fully verified, and Stripe still shut us down, calling us &quot;high-risk.&quot; Now, we&#x27;re moving to crypto-only payments because we’re done with the hoops traditional processors make us jump through.<p>Curious if anyone else has made the switch to crypto-only, and how it’s impacted your business. Thoughts?
null
3
4
[ 42035321, 42042140 ]
null
null
null
null
null
null
null
null
null
train
42,035,108
Kayodedcreative
2024-11-03T18:42:49
A Testimonial Collection Platform Tailored for Framer Users
null
https://framonial.com/
1
1
[ 42035109 ]
null
null
null
null
null
null
null
null
null
train
42,035,118
jazmichaelking
2024-11-03T18:43:51
Coordinated Community Response Mitigates Fediverse Spam Attack
null
https://about.iftas.org/2024/10/21/coordinated-community-response-mitigates-fediverse-spam-attack/
41
3
[ 42035965, 42036596 ]
null
null
null
null
null
null
null
null
null
train
42,035,140
null
2024-11-03T18:45:59
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,035,153
mostech
2024-11-03T18:47:00
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,035,154
qedlab
2024-11-03T18:47:06
Show HN: Quanti-Tea, a TUI for dynamic individual Prometheus metric exporting
I wanted a quick and painless way to update personal metrics into a Grafana visualization that was adaptable to adding different types of metrics. Including units, a reset daily option, and type classification. There is a TUI and a webapp to be able to add&#x2F;modify metrics.
https://github.com/Qjs/Quanti-tea
4
0
null
null
null
null
null
null
null
null
null
null
train
42,035,159
mostech
2024-11-03T18:47:35
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,035,160
thunderbong
2024-11-03T18:47:38
How I use LLM to scrape 99% of websites [video]
null
https://www.youtube.com/watch?v=7kbQnLN2y_I
5
0
null
null
null
null
null
null
null
null
null
null
train
42,035,171
fawadkhaliq
2024-11-03T18:48:34
A case for Operational Safety in software operations
null
https://www.chkk.io/blog/a-case-for-operational-safety-in-software-operations
2
2
[ 42035518 ]
null
null
null
null
null
null
null
null
null
train
42,035,182
kyu_krsna
2024-11-03T18:49:40
null
null
null
1
null
[ 42035183 ]
null
true
null
null
null
null
null
null
null
train
42,035,201
hyperfield
2024-11-03T18:52:02
Show HN: New App for Downloading YouTube Videos Efficiently
null
https://github.com/hyperfield/yt-channel-downloader
4
0
null
null
null
null
null
null
null
null
null
null
train
42,035,203
JoeMalt
2024-11-03T18:52:10
The Doctor Will Sue You Now (2009) [pdf]
null
https://www.badscience.net/files/The-Doctor-Will-Sue-You-Now.pdf
3
0
null
null
null
null
null
null
null
null
null
null
train
42,035,206
theswordman
2024-11-03T18:52:27
Customer Support Calls Analyzer
Hello! I&#x27;ve been developing an desktop app called zebel for companies that have thousands of customer calls per day so they can analyze what their are having problems with and what is the opinion about the product itself.<p>it has all sorts of functionalities including : 1) summarize calls 2) keywords of the call 3) rate agent&#x27;s response to the customer 4) suggests how the response would&#x27;ve been better and more polite 5) sentiment analysis 6) classifies the issue to different departments<p>please check it out by visiting : zebelai.com<p>and let me know if there is any problem you have with it by [email protected].
null
1
3
[ 42036287 ]
null
null
null
null
null
null
null
null
null
train
42,035,261
thunderbong
2024-11-03T19:01:14
FFmpeg: A 94x speed improvement demonstrated using handwritten assembly
null
https://twitter.com/FFmpeg/status/1852542388851601913
12
4
[ 42035360 ]
null
null
no_article
null
null
null
null
2024-11-08T15:51:34
null
train
42,035,303
andsoitis
2024-11-03T19:07:16
Circumstances affecting the Heat of the Sun's Rays (1856)
null
https://archive.org/details/mobot31753002152491
1
0
null
null
null
null
null
null
null
null
null
null
train
42,035,305
kjhughes
2024-11-03T19:07:37
Why Elon Musk's Robotaxi Dreams Are Premature
null
https://www.wsj.com/business/autos/elon-musk-robotaxi-end-to-end-ai-plan-1827e2bd
10
9
[ 42035891, 42035918 ]
null
null
null
null
null
null
null
null
null
train
42,035,330
jasinjames
2024-11-03T19:10:50
U.S. Marines release report on cause of missing F-35 incident
null
https://www.2ndmaw.marines.mil/News/Article-View/Article/3952782/2nd-marine-aircraft-wing-releases-command-investigation-into-f-35b-lightning-ii/
3
7
[ 42035627 ]
null
null
null
null
null
null
null
null
null
train