| Column | Type | Stats |
| --- | --- | --- |
| id | int64 | 2 to 42.1M |
| by | large_string | lengths 2 to 15 |
| time | timestamp[us] | |
| title | large_string | lengths 0 to 198 |
| text | large_string | lengths 0 to 27.4k |
| url | large_string | lengths 0 to 6.6k |
| score | int64 | -1 to 6.02k |
| descendants | int64 | -1 to 7.29k |
| kids | large list | |
| deleted | large list | |
| dead | bool | 1 class |
| scraping_error | large_string | 25 distinct values |
| scraped_title | large_string | lengths 1 to 59.3k |
| scraped_published_at | large_string | lengths 4 to 66 |
| scraped_byline | large_string | lengths 1 to 757 |
| scraped_body | large_string | lengths 1 to 50k |
| scraped_at | timestamp[us] | |
| scraped_language | large_string | 58 distinct values |
| split | large_string | 1 distinct value |
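The sample records that follow list one field per line, in the same column order as the table above (id, by, time, title, text, url, score, descendants, kids, deleted, dead, scraping_error, scraped_title, scraped_published_at, scraped_byline, scraped_body, scraped_at, scraped_language, split), so each record spans 19 lines. A minimal sketch of loading such a table and reproducing the schema summary is shown below; the file name `hn_scraped.parquet` is a hypothetical local path, not a confirmed download location, and the column names used for filtering are taken directly from the schema above.

```python
# Minimal sketch (assumed local file): inspect a Parquet file that has the
# columns listed in the schema above.  "hn_scraped.parquet" is hypothetical.
import pyarrow.parquet as pq

table = pq.read_table("hn_scraped.parquet")

# Mirror the schema summary: one line per column with its Arrow type.
for field in table.schema:
    print(f"{field.name}: {field.type}")

# Keep items that are neither dead nor deleted and whose article scrape
# succeeded (scraping_error == "no_error", as in some of the rows below).
df = table.to_pandas()
live = df[df["dead"].isna() & df["deleted"].isna()]
scraped_ok = live[live["scraping_error"] == "no_error"]
print(len(df), "rows,", len(scraped_ok), "with a successfully scraped article")
```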
42,057,268
okasaki
2024-11-06T04:51:28
US lawmaker urges investigation in China's SMIC over alleged China's Huawei ties
null
https://www.scmp.com/tech/tech-war/article/3285360/us-lawmaker-urges-investigation-chinas-chipmaker-smic-over-alleged-huawei-ties
1
2
[ 42057311 ]
null
null
null
null
null
null
null
null
null
train
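The record above (id 42,057,268) carries a kids list, [42057311], pointing at the ids of its direct replies. As a rough illustration of how those links can be resolved within the same table, here is a small sketch; it reuses the hypothetical `hn_scraped.parquet` file from the earlier example and only returns results if the reply rows are present in the same split.

```python
# Sketch: resolve a row's `kids` list to the matching child rows.
# Assumes the same hypothetical Parquet file as in the earlier example.
import pyarrow.parquet as pq

df = pq.read_table("hn_scraped.parquet").to_pandas()

def children_of(df, item_id):
    """Rows whose ids appear in the `kids` list of `item_id` (may be empty)."""
    match = df.loc[df["id"] == item_id]
    if match.empty:
        return df.iloc[0:0]
    kids = match.iloc[0]["kids"]
    if kids is None or not hasattr(kids, "__len__") or len(kids) == 0:
        return df.iloc[0:0]
    return df[df["id"].isin(list(kids))]

# Direct replies to the story above, if its comment rows are in the data.
print(children_of(df, 42057268)[["id", "by", "text"]])
```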
42,057,270
behnamoh
2024-11-06T04:52:27
callama: Call llama.cpp directly, either locally or remotely
null
https://github.com/ibehnam/callama
2
0
null
null
null
null
null
null
null
null
null
null
train
42,057,272
keepamovin
2024-11-06T04:53:50
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,275
OptionOfT
2024-11-06T04:54:56
Mozilla Foundation lays off about third of staff
null
https://www.theregister.com/2024/11/06/mozilla_foundation_layoffs/
14
4
[ 42057279, 42057553 ]
null
null
null
null
null
null
null
null
null
train
42,057,276
ChanderG
2024-11-06T04:55:03
Asemic Writing
null
https://inconvergent.net/2020/asemic-writing/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,057,281
arunc
2024-11-06T04:56:41
A solution to the anti-Bredt olefin synthesis problem
null
https://www.science.org/doi/10.1126/science.adq3519
1
0
null
null
null
no_article
null
null
null
null
2024-11-08T10:45:14
null
train
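The record above is the first in this sample with a non-null scraping_error ("no_article"), and the schema says that column takes 25 distinct values. A quick sketch of tallying those outcomes over the whole table follows; it again uses the hypothetical file name from the first example.

```python
# Sketch: how often did article scraping fail, and with which error labels?
# ("no_article" appears in the record above; "no_error" and "bot_blocked"
# show up in later records.)  Same hypothetical Parquet file as before.
import pyarrow.parquet as pq

df = pq.read_table("hn_scraped.parquet").to_pandas()
outcomes = df["scraping_error"].fillna("not_attempted")
print(outcomes.value_counts())
```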
42,057,282
imotai
2024-11-06T04:57:06
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,290
paulpauper
2024-11-06T04:58:43
Vintage computer forum: Bosch FGS 4000
null
https://forum.vcfed.org/index.php?threads/bosch-fgs-4000.45327/
12
0
null
null
null
null
null
null
null
null
null
null
train
42,057,292
cammsaul
2024-11-06T04:58:57
Getting 50k Companies on Board with Clojure
null
https://www.youtube.com/watch?v=vUe3slLHk20
1
2
[ 42057724, 42057294 ]
null
null
null
null
null
null
null
null
null
train
42,057,297
paulpauper
2024-11-06T05:01:35
Survival Without Dignity
null
https://www.lesswrong.com/posts/BarHSeciXJqzRuLzw/survival-without-dignity
2
2
[ 42057302, 42059592 ]
null
null
null
null
null
null
null
null
null
train
42,057,298
paulpauper
2024-11-06T05:02:03
Why cities like Paris are good for the body
null
https://juliabelluz.substack.com/p/why-cities-like-paris-are-good-for
2
0
null
null
null
null
null
null
null
null
null
null
train
42,057,313
dndndnd
2024-11-06T05:06:07
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,318
keepamovin
2024-11-06T05:07:18
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,321
zdw
2024-11-06T05:10:11
Nintendo Switch successor will have backwards compatibility
null
https://twitter.com/NintendoCoLtd/status/1853972163033968794
11
7
[ 42057753, 42059158 ]
null
null
null
null
null
null
null
null
null
train
42,057,332
kylebenzle
2024-11-06T05:16:18
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,348
nxobject
2024-11-06T05:24:22
RCTab Ranked Choice Voting Tabulator
null
https://www.rcvresources.org/rctab
2
0
null
null
null
no_error
RCTAB - Ranked Choice Voting Resource Center
null
null
RCTab, formerly the RCV Universal Tabulator (RCVUT), is a federally tested open-source software that can tally ranked choice voting (RCV) election results using cast vote records (CVRs) from most voting system vendors. In addition, RCTab can be used to verify RCV results generated by other voting systems. RCTab is the most comprehensive RCV tabulation module to be tested under the Voluntary Voting System Guidelines (VVSG) and the first open-source software to meet VVSG standards. Download the software along with instructions on how to install and use the tabulator by clicking the button at the bottom of this page. An 8 minute video is also available at the bottom of this page that further explains the reason for the existence of RCTab. RCTab is available under the Mozilla Public License Version 2.0. A visualizer, RCVis, which uses JSON files produced by RCTab, is also available. If you are interested in using the tabulator for an RCV election, email us at [email protected], and we can provide the support you need to use it in your jurisdiction. Where is RCTab Used?  RCTab is currently used to produce official RCV results, as a testing tool, or as an auditing tool. The State of New York certified RCTab for use in single-winner RCV elections in the State, the State of Utah certified RCTab for use in local RCV elections, and the State of Michigan certified RCTab for use in Eastpointe, Michigan’s RCV elections. The Democratic Parties of Alaska, Kansas, and Wyoming used RCTab to produce district- and county-level results in their 2020 Presidential Preference Primaries using RCV. RCTab has been accepted by a federal voting system standards working group as a well-documented, precise implementation of the RCV rules used in U.S. RCV elections. Benton County, Oregon, also used RCTab as a verification tool in the testing and certification process for the RCV tabulation software provided by its voting system vendor, ES&S. Successful replications of past RCV election results have been run on results in San Francisco, California, Minneapolis, Minnesota, the State of Maine, and several other RCV jurisdictions. RCTab successfully tallied RCV elections for the Student Government Association at the University of Houston in January 2019.
2024-11-08T11:30:20
en
train
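The RCTab record above is the first here whose article body was fetched successfully (scraping_error = no_error, scraped_language = en). A short sketch of pulling the scraped fields of one such row into a plain dict for downstream use follows, again against the hypothetical file from the first example; the field names come straight from the schema table.

```python
# Sketch: extract the scraped-article fields of one successfully scraped row,
# e.g. the RCTab story above (id 42057348).  Hypothetical file as before.
import pyarrow.parquet as pq

df = pq.read_table("hn_scraped.parquet").to_pandas()
row = df.loc[df["id"] == 42057348].iloc[0]
article = {
    "hn_title": row["title"],
    "scraped_title": row["scraped_title"],
    "published_at": row["scraped_published_at"],
    "language": row["scraped_language"],
    "body_preview": (row["scraped_body"] or "")[:200],
}
print(article)
```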
42,057,360
ankit428
2024-11-06T05:28:39
Ask HN: How are you guys using AI to improve productivity on a day-to-day basis?
Have been trying to think about things I can do in my job as a software professional, personal growth, chores, kids learning etc. There are so many facets where we can potentially use AI to improve the throughout.<p>WOuld love to hear what folks have done.
null
2
2
[ 42057450, 42063298, 42068597 ]
null
null
null
null
null
null
null
null
null
train
42,057,378
jaypatelani
2024-11-06T05:35:31
GoToSocial WASM-based SQLite driver and BSD
null
https://www.tumfatig.net/2024/gotosocial-wasm-based-sqlite-driver-and-bsd/
55
1
[ 42058127 ]
null
null
no_error
GoToSocial WASM-based SQLite driver and BSD
null
null
  2024-11-05      1227 words, 6 minutesIntroductionUsing WASM-based SQLite driver on NetBSD or OmniOS(Fail) Using nowasm on OmniOSUsing nowasm on NetBSDMigration from WASM to nowasmConclusionI started using GoToSocial (the fast, fun and small ActivityPub server) in 2022 on OpenBSD. Because it was nearly the only OpenBSD-native ActivityPub options at that time, because it was light and because it could use the SQLite database engine .I stopped using it when it was marked BROKEN because of incompatibilities between modernc.org/sqlite and OpenBSD kernel. This is when I switched to Mastodon and stop using it. Until recently, when I discovered there was a pkgsrc option available.IntroductionI decided to use GoToSocial again this summer to communicate on the Fediverse about what happens to the noGoo.me service; a SearXNG instance I run on OpenBSD and that can be used by anybody. Thanks to pkgsrc, GoToSocial would either run on a NetBSD bhyve virtual machine or on am OmniOS/illumos zone; depending on what works and what doesn’t, and depending on my daily mood ;-)With version 0.16.0 “Snappy Sloth”, the project introduced an alternative wasm sqlite3 implementation and it started having impacts on the program memory usage. With version 0.17.0 “Selective Sloth”, it became even worse and the devs made an unsupported “nowasm” option available which basically targets FreeBSD and Linux 32-bits. On my NetBSD VM or the OmniOS zone, I need to allocate 4GB of RAM for GtS to start even though this is a single user instance that posts like 5 messages a week. Given that my OmniOS hypervisor has 128GB of RAM, this is not a problem for me but GoToSocial is supposed to be light so there must be ways to have it light on non-Linux 64-bit platforms…I could simply switch to some Linux implementation but…“Strength lies in differences, not in similarities.” – Stephen CoveyDisclaimer: I never tried running GtS on FreeBSD or OpenBSD these days. What follows may apply to those OSes or be totally useless.Using WASM-based SQLite driver on NetBSD or OmniOSWhether you’re using pkgsrc binary packages or compile GoToSocial using the pkgsrc source tree, you’ll get a WASM (GO_BUILDTAGS=“wasmsqlite3”) enabled version of the software. Depending on the time you run the installation (or the compilation), your version of GtS may differ. There are patches for version 0.17.1 on their way that may not be committed yet.No matter if I used an OmniOS pkgsrc branded zone or a NetBSD bhyve virtual machine, resource usage was about the same running GtS 0.17.0 in WASM mode: CPU usage was rather low but the gotosocial process used about 1GB of RSS memory and 6GB of virtual memory. There seem to be a 4GB of memory that is required for accessing the SQLite database when the program starts.The working settings is used were:db-type: “sqlite”db-address: “/var/db/gotosocial/sqlite.db”db-max-open-conns-multiplier: 8db-sqlite-journal-mode: “WAL”db-sqlite-synchronous: “NORMAL”(Fail) Using nowasm on OmniOSTrying to compile GoToSocial 0.17.1 with the GO_BUILDTAGS="nowasm" option failed whatever I tried. 
The error looked like :# modernc.org/libc vendor/modernc.org/libc/libc_unix.go:1308:34: (*ctime.Tm)(unsafe.Pointer(tm)).Ftm_gmtoff undefined (type *struct{Ftm_sec int32; Ftm_min int32; Ftm_hour int32; Ftm_mday int32; Ftm_mon int32; Ftm_year int32; Ftm_wday int32; Ftm_yday int32; Ftm_isdst int32} has no field or method Ftm_gmtoff) vendor/modernc.org/libc/libc_unix.go:1309:34: (*ctime.Tm)(unsafe.Pointer(tm)).Ftm_zone undefined (type *struct{Ftm_sec int32; Ftm_min int32; Ftm_hour int32; Ftm_mday int32; Ftm_mon int32; Ftm_year int32; Ftm_wday int32; Ftm_yday int32; Ftm_isdst int32} has no field or method Ftm_zone) *** Error code 1 There is a closed pull request that indicates this should work someday. I couldn’t compile it, even adding the sqlite3_flock flags. Maybe it will work on next GtS release…Using nowasm on NetBSDAfter patching my local pkgsrc tree, I could compile GoToSocial 0.17.1 “Very Selective Sloth” on NetBSD, without WASM, using env PKG_OPTIONS.gotosocial=nowasm make build. This modification runs the build script with GO_BUILDTAGS="nowasm" configured.Unfortunately, replacing the wasm binary with the nowasm binary, GoToSocial would not start anymore. Even starting with an empty (or non existent) sqlite.db, errors would raise and make the program die:func=cache.(*Caches).Start level=INFO msg="start: 0xc001548008" unexpected fault address 0x7f7fb11c556b fatal error: fault [signal SIGBUS: bus error code=0x2 addr=0x7f7fb11c556b pc=0x1f6be2f] goroutine 1 gp=0xc0000061c0 m=0 mp=0x44f2ce0 [running]: runtime.throw({0x28b01f9?, 0x100000000?}) runtime/panic.go:1067 +0x48 fp=0xc0009e9a20 sp=0xc0009e99f0 pc=0x470088 runtime.sigpanic() runtime/signal_unix.go:897 +0x18a fp=0xc0009e9a80 sp=0xc0009e9a20 pc=0x4722aa modernc.org/libc.Xmemcpy(...) modernc.org/[email protected]/libc.go:1745 (...) I could finally have GtS start with an empty database when I changed the db-sqlite-journal-mode setting to “TRUNCATE”.The working nowasm settings I used were:db-type: “sqlite”db-address: “/var/db/gotosocial/sqlite.db”db-max-open-conns-multiplier: 8db-sqlite-journal-mode: “TRUNCATE”db-sqlite-synchronous: “NORMAL”Still, there were no settings that allowed me to recover my existing data by transferring the sqlite.* files; as I normally do. So I went for another SQLite backup / restore method. See next section.With the empty database, resources consumption of the gotosocial process was finally quite low again: very low CPU, 84M of RSS memory and 1.2GB of Virtual Memory. I could then switch my NetBSD VM back to using 1 vCPU and 1GB of vRAM. The 4GB of RAM to access the SQLite database aren’t required anymore.Migration from WASM to nowasmThe migration from one OS to the other is straightforward when the GtS version and the WASM feature are the same. I also could upgrade from 0.17.0 WASM on NetBSD to 0.17.1 WASM on OmniOS.But when it comes to switching from WASM to nowasm, the gotosocial binary quickly dies exposing line of logs that I am not smart enough to understand. To my eyes, it was like “blah blah memory allocation failed blah blah trying to open sqlite.db blah blah modernc.org/sqlite blah blah dies”.Hopefully, after a helpful discussion with the kind GtS devs who pointed me towards configuration options to try and what is expected (or not) regarding SQLite data migration, I finally got a stable way to migrate my existing data from a WASM instance to a nowasm instance. 
Here’s the receipt so that you don’t enjoy the same adventure I encountered: 🌬️🌊🚣⛈️On the “old” WASM GoToSocial instance, stop GtS, make a copy of the configuration file and the storage directory. Then use the sqlite3 command to make a proper flat export of the SQLite database. On my initial OmniOS zone, it looked like this:# svcadm disable svc:/pkgsrc/gotosocial:default # tar cpf gts-backup.tar /opt/local/etc/gotosocial /var/db/gotosocial # sqlite3 /var/db/gotosocial/sqlite.db .dump > gts-sqlite.dump.sql # gzip -f gts-backup.tar gts-sqlite.dump.sql Transfer the backup archives to the “new” nowasm GoToSocial instance. In my case, this is a NetBSD 10.0_STABLE byhve virtual machine. I adapted the configuration file with the correct db-sqlite-journal-mode parameter, extracted the storage directory content and recreated a sane sqlite.db file using the sqlite3 command.# tar xzpf gts-backup.tar.gz -C / /var/db/gotosocial/storage # rm /var/db/gotosocial/sqlite.db* # sudo -u gotosocial sh -c \ "zcat gts-sqlite.dump.sql.gz | sqlite3 /var/db/gotosocial/sqlite.db" GoToSocial can now be started. It will check the database and apply migration commands if required.Note that in this configuration, ffmpeg needs to be installed on the system. GtS expects the command to have this exact name and to be located in the PATH environment variable.# pkgin in ffmpeg7 # ln -s /usr/pkg/bin/ffmpeg7 /usr/pkg/bin/ffmpeg # ln -s /usr/pkg/bin/ffprobe7 /usr/pkg/bin/ffprobe # grep -B2 PATH /etc/rc.conf # Add local overrides below. # export PATH=$PATH:/usr/pkg/sbin:/usr/pkg/bin:/usr/local/sbin:/usr/local/bin ConclusionThe following dashboard illustrate the memory gain of switching to nowasm; the switch occurred on the 31/10/2024.The program is stable and has not crashed during the last week. There wasn’t any error while using WASM on OmniOS either. So you can choose whether you want to compile GtS with a special flag or give it a bit more memory than what a light daemon should get :)
2024-11-08T12:19:28
en
train
42,057,387
Chiheng2023
2024-11-06T05:39:08
null
null
null
1
null
[ 42057388 ]
null
true
null
null
null
null
null
null
null
train
42,057,403
AiswaryaMadhu
2024-11-06T05:43:59
null
null
null
1
null
[ 42057404 ]
null
true
null
null
null
null
null
null
null
train
42,057,410
be_erik
2024-11-06T05:45:51
DeCENC is another way to beat web video DRM
null
https://www.theregister.com/2024/09/12/cenc_encryption_stream_attack/
17
1
[ 42058284 ]
null
null
null
null
null
null
null
null
null
train
42,057,426
reddyanna37
2024-11-06T05:50:43
null
null
null
1
null
[ 42057427 ]
null
true
null
null
null
null
null
null
null
train
42,057,428
Alex_Zhang
2024-11-06T05:51:24
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,431
yen223
2024-11-06T05:51:40
Useful built-in macOS command-line utilities
null
https://weiyen.net/articles/useful-macos-cmd-line-utilities
680
192
[ 42064988, 42062191, 42058069, 42061843, 42067083, 42063067, 42068886, 42058168, 42066896, 42065168, 42071089, 42058191, 42062644, 42064045, 42059072, 42070364, 42059182, 42060348, 42061840, 42062969, 42065658, 42060817, 42058651, 42061389, 42061451, 42059314, 42058084, 42063422, 42064497, 42068804, 42063188, 42062389, 42063969, 42058108, 42058339, 42058160, 42063976, 42068625, 42061670, 42063206, 42063937, 42069224, 42068524, 42058693, 42067579, 42064504, 42058087, 42058107, 42061551, 42058246, 42065867, 42062906, 42061795, 42063443 ]
null
null
null
null
null
null
null
null
null
train
42,057,433
thunderbong
2024-11-06T05:52:07
Prejudice and China
null
https://research.gavekal.com/article/prejudice-and-china/
12
8
[ 42057996, 42057727, 42058094, 42059126 ]
null
null
no_error
Prejudice And China - Gavekal Research
null
null
Prejudice And China Gavekal Research | 22 Oct 2024 At an investment conference in Kuala Lumpur recently, I caught up with an old friend and Gavekal client. Over coffee between sessions, we talked about one of the most visible changes of the last few years in Asia: the Chinese cars that have so quickly appeared on roads across the continent. This led us to the comments made in September by Ford chief executive officer Jim Farley. Freshly returned from a visit to China, Farley told The Wall Street Journal that the growth of the Chinese auto sector poses an existential threat to his company, and that “executing to a Chinese standard is now going to be the most important priority.” By any measure, this is an earth-shattering statement. Making cars is complicated. Not as complicated as making airliners or nuclear power plants. But making cars is still the hallmark of an advanced industrial economy. So, the idea that China is suddenly setting the standards that others must now strive to meet is a sea-change compared with the world we lived in just five years ago. This led my friend to question how Farley and other auto industry CEOs could have fallen quite so deeply asleep at the wheel. How could China so rapidly leapfrog established industries around the world without all those very well paid Western CEOs realizing what was happening until two minutes ago? There are many possible answers to this question. They range from the obvious through the historical and cultural to the tin-foil hat variety. And they are well worth reviewing in an attempt both to understand where China is today, and to highlight the blind spots some investors still suffer from when looking at the world’s second largest economy and their implications for markets. The obvious explanation: Covid, Ukraine, DEI and ESG Gavekal’s head office is in Hong Kong. But we also have an office in Beijing, with a great team of analysts who publish excellent work (at least, I like to think so). I do not want to sound as if I am bragging (even though I am), but for years our Beijing office would host at least one visitor from abroad every day. I wouldn’t claim that Gavekal was a mandatory stop for every portfolio manager and CEO visiting Beijing. That would make me sound like a conceited jerk. But for many of Gavekal’s clients and their friends, it really was true (that we were a mandatory stop, not that I am a conceited jerk). Then Covid hit. For three years, no visitor crossed our threshold. By the time the Chinese government finally lifted its Covid restrictions, Russia had launched its “special military operation” in Ukraine. This meant that for most Westerners, China had become uninvestible. The visitors stayed away. The end of Covid restrictions barely made a mark on our Beijing conference room’s planning schedule. This brings me to the simplest, most obvious, and likeliest explanation why most CEOs and investors missed how China leapfrogged the West in industry after industry over the last five years: during that time, no one from the West bothered to visit China. Consequently—and perhaps more by accident than design—China followed Deng Xiaoping’s advice to “secure our position; cope with affairs calmly; hide our capacities and bide our time; keep a low profile and never claim leadership.” To be fair, it wasn’t just that visiting China was difficult—even impossible—for much of the last five years; foreign CEOs had a lot on their plates. Covid restrictions forced company managements to come up with new ways to work on the fly. 
There were also massive supply chain disruptions to contend with. And some of these were greatly compounded by the Russia-Ukraine conflict. Consider a car company CEO: after spending a few quarters figuring out how to rearrange factory work to comply with social distancing, he or she suddenly had to worry about the supply of platinum coming out of Russia, or of neon coming out of Ukraine. This might help to explain how car company CEOs missed how rapidly Chinese autos were gaining in their rearview mirrors. And of course at the same time, many CEOs were trying to keep up with ever-multiplying diversity, equity and inclusion standards and environmental, social and governance requirements. Diversity is a strength. But unfortunately, it could be that all the focus on diversity has not strengthened Western industries quite enough to cope with the oncoming Chinese onslaught. Hence Western policymakers’ enthusiasm for executing a 180˚ U-turn, and instead of promoting free trade and the beauty of Western liberalism, suddenly imposing tariffs and building walls. Or to put it less kindly, while Western CEOs focused on virtue-signaling, Chinese companies forged ahead, producing better products for less money— which is what capitalism should be about. Today, we are seeing the results. The cultural and political prejudice explanation A second possible reason the West failed to spot how it was being leapfrogged by Chinese industry could simply be good old-fashioned ingrained cultural prejudice. It may be unkind to highlight it, but history has shown that Western leaders repeatedly underestimate their Asian competitors. Russian Tsar Nicholas II infamously thought his army and navy would quickly defeat the Japanese, only for his army to suffer successive defeats and for his navy to be destroyed at Tsushima in 1905. Winston Churchill and the British military’s chiefs of staff never thought the Japanese army capable of advancing so swiftly down the Malay peninsula and positioned Singapore’s big guns facing the wrong way. Douglas MacArthur and the US general staff underestimated their opponents’ resolve in the Korean war. The French establishment did the same in Indochina. Lyndon Baines Johnson and Robert McNamara did the same in Vietnam. US automakers initially laughed off Japanese competitors. The “West” underestimating the “East” is a fairly strong constant of history (for more on this, I cannot recommend highly enough the 1963 book East And West by Cyril Northcote Parkinson). This time around, the underestimation may have been compounded by China’s official name—the People’s Republic of China—and the country’s political structure as a communist one-party state. To any self-respecting Western capitalist, the word “communist” implies inefficiencies, poor products, and technological backwardness. This belief was amply demonstrated by the fall of the Berlin Wall and the collapse of the Soviet Union. By now, the PRC has survived longer than the USSR’s 74 years. Nevertheless, most Westerners still believe that at some point in the not-so-distant future, the Chinese Communist Party will lose its grip on power, just like the Communist Party of the Soviet Union. How could it be otherwise? It’s all in the name. Communism is bound to fail. This assumes, of course, that China really is communist; a notion that could be debated. 
It also ignores the old adage that “the tragedy of Asia is that Japan is a profoundly socialist country on which capitalism was imposed, while China is a profoundly capitalist country on which socialism was imposed. But each will naturally drift back to its natural state.” Recent anchoring and the Japan explanation Another explanation for the Western blind spot on China’s industrial progress might well be the last three “lost decades” of Japanese growth. This shows up in investor’s responses to the China’s stimulus. Conversations about China’s growth predicament typically start with the assumption that without massive fiscal stimulus, China will be unable to get out of its current economic rut. This is because China resembles Japan 20 or 30 years ago, with (i) terrible demographics and (ii) widespread large losses across the real estate sector. However, this is probably where the similarities end. Unlike Japan in the 1990s, China has not seen its banking system go bust and lose its ability to fund new projects. On the contrary, the surge in loans to industry over the past few years lies at the heart of China’s booming industrial productivity. Interactive chart This is another key difference between China today and Japan in the 1990s. China today is not only more efficient and more productive than a decade ago, it is probably more efficient and more productive than most other major industrial economies. And it boasts a very attractive cost structure. Until a few years ago, you would need to check your bank balance before going out for dinner in Tokyo. Today, you can stay in the Four Seasons in Beijing or Shanghai for less than US$250 a night. Perhaps the best illustration of how Japan’s past is a very poor guide to China’s present is the difference in their trade balances; a reflection of how different their competitiveness has been. Interactive chart This is not to understate the magnitude of the Chinese property bust. The rollover in real estate has been a massive drag on growth and on animal spirits over the past five years. But on this front, there is another key difference between China and Japan: in China, the contraction of real estate was the policy. It was not the unfortunate consequence of policies gone-wrong. Reallocating capital away from real estate and towards industry was a stated goal of the government. This is clear from the chart on bank lending. The pain of the property bust is also clear in the consumer confidence data. As discussed in past reports, the rollover in real estate has hit millennials living in first and second-tier cities disproportionately hard (see Stimulus And Confidence In China or Chinese Stocks Are For Living In). This hit to confidence might help partially explain the Western blind spot on China’s recent industrial progress. The ‘it depends who you talk to’ explanation The table below illustrates how two groups in China feel particularly unhappy. Older folks living in the countryside—the “left behind” in China’s mad rush towards modernity. Millennials living in first and second tier cities—the “bag-holders” in China’s real estate consolidation. Importantly, millennials in first tier cities also happen to be the group that most Westerners who have contacts in China typically talk to. This is the group that speaks English (older folks were seldom taught English at school) and that grew up using social media. 
It is the group that was spared the hardships of the cultural revolution, and did not experience the trauma of 1989, and which therefore tends to be more vocal. This group has had little positive to report over the past five years. Their time has been tough. First, their balance sheets were hammered by falling real estate prices. Second their income prospects have been capped by the rapidly rising numbers of Gen-Z graduates churned out by China’s universities. In short, being a millennial in a first tier city has not been a fun experience in recent years. Meanwhile, people living in third and fourth tier cities talk about the better-paying jobs in the growing factories, the improved municipal and regional infrastructure and the high-speed trains that link their towns to China’s mega-cities. To put it more succinctly, there have been two main stories in China over the past five years. The first was a real estate bust, which was felt disproportionately in the rich cities of China’s coast. The second was an impressive industrial boom, which had a greater impact on the cities of the interior with cheaper labor which were suddenly linked to the coast by new highways, railways and airports. Over the past five years, consumers of Western media have heard a lot about the first trend; very little about the second. The ‘maybe the media covered the wrong trend’ explanation Over the past few years, I have argued at length that the relentlessly negative coverage of China by the Western media was doing its readers a disservice. This is not to say that China does not have serious problems to confront and major challenges to overcome. But by disproportionately focusing on these, Western media helped their readers to develop a massive blind spot when it came to China’s global economic and geopolitical impact. Instead of collapsing into economic irrelevance, currency devaluation and a “shadow banking” meltdown (remember that one?), China has continued to make progress along the path it set for itself over a decade ago: tying ever more emerging markets into China’s economic orbit, settling more of its trade in its own domestic currency, bypassing Swift, fostering energy independence, and moving up the export value chain. All these trends were both predictable and predicted. So how did the Western media manage almost entirely to ignore them? Why were there so few stories about how China now installs almost twice the number of industrial robots as the rest of the world combined? Or on China’s new status as the global leader in the nuclear industry? Or on how China graduates more engineers each year than the entire OECD? The simplest explanation is that the media is in the “bad news” game. The old adage “if it bleeds, it leads” still holds good in most editorial conferences. So, in a click-obsessed world, stories about ghost cities and impending economic doom are bound to get more traction than features about educational progress, revolutionary drones or factory automation. A second possible explanation is linked to our own equity-market-obsessed culture. It is hard to go anywhere in the US—airport lounge, hotel lobby, sports bar—without a screen in the background playing either CNBC or Bloomberg TV with the day’s stock quotes filing by. In Europe, stock prices aren’t quite so “in your face,” although you can still feel their presence. And in an equity-market-obsessed culture, the performance of the stock market index is quickly equated with the performance of the economy at large. 
Of course, in most emerging markets, the relationship between economic progress and equity prices is tenuous at best. China is a great example. China’s economic progress over the past five, 10 and 20 years is undeniable, with collapsing infant mortality, increasing life expectancy, soaring educational attainment, the build-out of new infrastructure and enormous productivity gains across a broad swath of industries. But broad equity market returns as measured by key indexes have been pedestrian at best. Interactive chart For an equity-obsessed culture, it is tempting to look at China’s disappointing stocks market performance and conclude that if stocks are not doing well, then something must be wrong with the underlying economy. But just because it’s tempting doesn’t mean it is right. The tin-foil hat explanation: the user is the product It is one of my deeply held beliefs that media organizations continue to charge viewers and readers for access, whether through streaming service subscription fees or just the few dollars needed to buy a newspaper or magazine, in order to give the impression to the end user that he or she is still the client. However, the true clients are the health care industry (one of the largest advertisers in the US), the luxury goods industry (another giant advertiser), the automobile industry (same again) and—perhaps most worrying—governments everywhere. In some countries, such as France, governments have always doled out generous subsidies to the press. In other countries less so—at least in the past. But in many countries, Covid changed the relationship between governments and media. Governments took out full-page advertisements to remind people to wash their hands, keep their distance from each other, and to participate in an enormous health experiment. And, call it a miracle, but for their part the media almost entirely failed to question the unprecedented way governments trampled all over age-old civil rights and personal liberties. Unfortunately, history shows that once latched on, it is difficult for anyone to ween themself off the government’s generous breast. This is where the happy—for the media—news of HR 1157 comes in. On September 9, the US House of Representatives approved a bill entitled “Countering the PRC Malign Influence Fund Authorization Act” by 351 votes to 36. If passed by the Senate, this bill will authorize the US government to spend US$325mn a year every year for the next five years to “support... independent media to raise awareness of and increase transparency regarding the negative impact of activities related to the Belt and Road Initiative, associated initiatives, other economic initiatives with strategic or political purposes, and coercive economic practices.” So yes, at a time of record debt and swelling budget deficits, the US government proposes to spend US$325mn a year paying “independent” media (the irony!) to push stories about the negative impact that China may be having around the world. As Charlie Munger liked to say “show me the incentives, and I will tell you the outcome.” If the US government is openly declaring that it will pay for negative stories on China in “independent” media, and allocating millions of US dollars to this purpose, should we be surprised if negative stories about China are precisely what the media delivers? So, now more than ever before, when assessing stories in the media it is helpful to ask the question: just who here is the client, and who is the product? 
Three Chinas Putting all this together, there seem to be at least three separate visions of China. The first is the China you read about in much of the Western media: a place of despond and despair. It is permanently on the cusp of social disorder and revolution, or it would be, were it not an Orwellian nightmare of state surveillance, supervision and repression that strangles creativity and stifles progress. This is the place that Westerners who have never visited China typically imagine, because it is the place portrayed by the media. And not just by the media. This is also the China portrayed by large parts of the financial industry. Every 10 days or so, I get forwarded another report forecasting the imminent collapse of the Chinese economy. More often than not these are written by Western portfolio managers who typically don’t speak Chinese, know very few people who live in China, and in some cases have never even visited what is very clearly the most productive economy in the world today. This has happened so often, I have made a meme about it. This is the vision of China that allowed CEOs of Western industrial companies to spend their time worrying about DEI initiatives while Chinese companies were racing ahead of them. The second is the vision of China you get from talking to Chinese millennials in tier-one cities. This version of China recalls the “lost decades” of Japanese deflationary depression. Clearly, for investors there are important differences between China today and Japan of the 1990s and 2000s. First, in 1990, Japan was 45% of the MSCI World index even though Japan accounted for only around 17% of global GDP. Today, Chinese equities make up less than 3% of the MSCI World, even as China is around 18% of world GDP. So, it seems unlikely that foreign investors will spend the coming years running down their exposure to China; few have much exposure to China in their portfolios to begin with. Second, China’s dominance in a number of important industrial segments is growing by leaps and bounds. This is a reflection of the rapidly changing geopolitical landscape. In 2018, Donald Trump’s decision to ban the sale of high-end semiconductors to China acted as a galvanic shock on the Chinese leadership. If semiconductors could be banned today, tomorrow it might be chemical products or special steels. Protecting China’s supply chains from possible Western sanctions became a priority to which almost everything else (aside from the currency and the bond markets) was a distant second. This brings me to the third vision of China: that it is only just beginning to leapfrog the West in a whole range of industries. This vision is starting to show up itself in the perception of Western brands in China, and their sales. For example, Apple’s iPhones no longer figure in the five best-selling smartphone models in China. And Audi’s new electric cars made and sold in China will no longer carry the company’s iconic four-circle logo; the branding is now perceived to be more of a hindrance than a benefit. To put it another way, following years of investment in transport infrastructure, education, industrial robots, the electricity grid and other areas, the Chinese economy today is a coiled spring. So far, the productivity gains engendered by these investments have manifested themselves in record trade surpluses and capital flight—into Sydney and Vancouver real estate, and Singapore and Hong Kong private banking. This has mostly been because money earners’ confidence in their government has been low. 
From bursting the real estate bubble, through cracking down on big tech and private education, to the long Covid lockdowns, in recent years the Chinese government has done little to foster trust among China’s wealthy. It’s small surprise, then, that many rich Chinese have lost faith in their government’s ability to deliver a stable and predictable business environment. This brings me to the recent stimulus announcements and the all-important question whether the measures rolled out will prove sufficient to revitalize domestic confidence in a meaningful way. Will it even be possible to lift confidence as long as the Damocles’ sword of a wider trade conflict with the US and yet more sanctions looms over the head of Chinese businesses? From this perspective, perhaps the most bullish development for China would be for the new US administration (regardless who sits behind the Resolute desk) to come in and look to repair the damage done to relations by the 2018 semiconductor sanctions and the 2021 Anchorage meeting (see Punitive Tariffs Or Towards A New Plaza Accord?). At the risk of mixing metaphors, this could be the match that lights the fuse that ignites a real fireworks show. In the meantime, the dynamics in China can perhaps best be summarized by the following decision tree. Investment conclusions The narrative around China is shifting—regardless of the US$325mn that the US Congress is looking to spend each year to fund negative stories about China in the “independent” media. Just a few weeks ago, China was still said to be uninvestible. This view had led many people, including prominent Western CEOs, to conclude that China no longer mattered. This was a logical leap encouraged by Western media organizations, whose coverage of China has been relentlessly negative. It was a leap that turned out to be a massive mistake. When it comes to China’s relevance to investors, there are four ways of looking at things. China can be uninvestible and unimportant. This is the pool that most investors have been swimming in for the last few years. But this is akin to saying that China is like Africa. It simply doesn’t pass the smell test. Instead of sliding into irrelevance, China’s impact on the global economy only continues to grow. China can be uninvestible but important. This is essentially what Jim Farley, fresh back from his China trip, told The Wall Street Journal. China can be investible but unimportant. This is the space Japan inhabited for a couple of decades, and into which Europe seems to be gently sliding. However, the idea that China today is where Japan has been for the last three decades is grossly misplaced on many fronts, including the competitiveness of its economy, its overall cost structure, and its weight in global indexes. China can be investible and important. This is what David Tepper of Appaloosa Management argued on CNBC following the announcement of China’s stimulus (see Changing Narratives Around The World). For now, this is still a minority view, at least among Western investors. Not that Western investors matter all that much. What truly matters is whether Chinese investors themselves start rallying to this view. If they do, the unfolding bull markets in Chinese equities and the renminbi could really have legs. Only registered users can post comments. Please, log in.
2024-11-08T08:48:44
en
train
42,057,434
arscynic
2024-11-06T05:52:07
Crypto Cult Science (2021)
null
https://www.arscyni.cc/file/crypto_cult_science.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,436
loixou
2024-11-06T05:52:38
YouTube Summary with ChatGPT
null
https://glasp.co/extension-update/youtube-summary?version
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,443
canaryex
2024-11-06T05:55:04
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,449
Jerry2
2024-11-06T05:56:54
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,451
nielsole
2024-11-06T05:57:15
Windows 2022 Servers Unexpectedly Upgrading to 2025
null
https://old.reddit.com/r/sysadmin/comments/1gk2qdu/windows_2022_servers_unexpectedly_upgrading_to/
75
26
[ 42057889, 42064660, 42059457, 42057979, 42062989, 42057803, 42058409, 42057526 ]
null
null
null
null
null
null
null
null
null
train
42,057,457
raskelll
2024-11-06T06:00:17
LameDuck: analyzing Anonymous Sudan's threat operations – Cloudflare
null
https://www.cloudflare.com/threat-intelligence/research/report/inside-lameduck-analyzing-anonymous-sudans-threat-operations/
2
0
null
null
null
no_error
Inside LameDuck: analyzing anonymous Sudan’s threat operations
null
null
The United States Department of Justice (DOJ) recently unsealed an indictment outlining efforts to dismantle Anonymous Sudan, a prominent group tracked by Cloudflare as LameDuck, notorious for its apparent politically motivated hacktivism and significant involvement in distributed denial-of-service (DDoS) attacks. This broad initiative to bring to justice the group's key members is an impressive step in improving internet security, and was made possible through coordinated efforts among international law enforcement agencies and private sector entities, including Cloudflare. It underscores the importance of partnership across all stakeholders in combating today’s most advanced cyber threats, while also demonstrating the value transparency brings to improving threat intelligence. As such, Cloudflare is eager to share insights from our experience in tracking and disrupting LameDuck operations to help bolster your defenses against similar threats.Executive summaryDOJ recently unsealed an indictment revealing charges against two Sudanese brothers for orchestrating LameDuck’s large-scale DDoS operations from January 2023 through March 2024. The indictment was made possible through coordinated efforts across law enforcement and private industry, including Cloudflare LameDuck developed and managed “Skynet Botnet”, a Distributed Cloud Attack Tool, allowing them to conduct more than 35,000 confirmed DDoS attacks in the span of a year, while profiting financially from selling their DDoS services to possibly more than 100 customersThe threat actor’s operations revealed an unusual combination of motives along with a wide spectrum of targeted industries and governments across the globeCloudflare observed a timely correlation between geopolitical events and LameDuck strikes against high-profile targets, aligning with an anti-Western ideologyWho is LameDuck?LameDuck is a threat group that emerged in January 2023, presenting itself as an anti-Western, pro-Islamic politically motivated collective. The group is known for launching thousands of DDoS attacks against a wide array of global targets across critical infrastructure (airports, hospitals, telecommunications providers, banks), cloud providers, healthcare, academia, media, and government agencies. LameDuck gained notoriety by amplifying their successful attacks against widely recognized organizations via social media, while also offering DDoS-for-hire services. Their operations have included not only successful large-scale DDoS attacks, but also DDoS extortion or ransom DDoS. The group’s focus on monetary gain has called into question their emphasis on a political or religious narrative, with many of its operations more closely resembling financially driven cybercrime. Mixed motivesTo further add complexity to this actor’s motives, activity conducted by LameDuck revealed a disparate blend of operations, where high-profile strikes were launched against a vastly diverse set of targets in self-proclaimed support of an odd mix of anti-Israeli, pro-Russian and Sudanese nationalist sentiments. It is possible, however, these attacks were simply driven by a need to bolster their reputation and gain notoriety. In fact, LameDuck heavily leveraged their own social media presence to issue public warnings and spread their narrative in order to attract widespread attention. 
AttributionLameDuck’s unusual combination of motives, along with their religious rhetoric and apparent alliances with other hacktivist groups (e.g., collaboration with Killnet, Türk Hack Team, SiegedSec, and participation in #OpIsreal and #OPAustralia hacktivist campaigns), intensified speculation regarding their true origins and objectives. Previous theories on attribution suggested LameDuck was a Russian state-sponsored group masquerading as Sudanese nationalists. However, the unsealing of DOJ’s indictment revealed that the individuals orchestrating LameDuck’s prolific and highly disruptive DDoS operations were, in fact, not Russian and instead two Sudanese brothers. Criminal charges against the Sudan-based leaders of LameDuck do not necessarily discount possible Russian involvement in the group’s operations. It’s hard to ignore their shared ideologies, use of the Russian language and inclusion of pro-Russian rhetoric in LameDuck messaging, targeting that aligns with Russian interests, and the group’s coordination with pro-Russian "hacktivist" collectives such as Killnet.LameDuck targeting and victimologyLameDuck often conducted operations against prominent, high-profile targets to attract greater attention and amplify the impact of their attacks. Their targeting covered a wide geographic range, including the United States, Australia, and countries across Europe, the Middle East, South Asia, and Africa. LameDuck targets also spanned numerous sectors and industry verticals, with some of the more notable targets belonging to the following:Government and foreign policy Critical infrastructureLaw enforcementNews and mediaTech industryThis list represents only a portion of the industries targeted, emphasizing the wide scope of sectors affected by LameDuck's operations. Potential reasons for LameDuck targeting include:The targeted organization or entity was in opposition to LameDuck’s ideological beliefsLameDuck may have selected specific infrastructure for targeting due to its potential to impact a larger user base, amplifying the disruption caused and enhancing the group’s notorietyThe ease of successfully executing DDoS attacks on specific infrastructure, due to vulnerabilities and/or poor security practicesPolitically motivated targetingCloudflare observed that a substantial volume of LameDuck targeting aligns with its self-proclaimed identity as a pro-Muslim Sudanese “hacktivist” group. In particular, the conflict in Sudan and its political repercussions seem to inform a subset of its targets. For example, attacks against Kenyan organizations could be explained by the increasingly tense relations between the Sudanese government and Kenya, culminating in Sudan recalling its ambassador to Kenya in January. Politically motivated attacks were aimed at private sector companies like Microsoft and OpenAI, as LameDuck announced plans to indiscriminately target U.S. companies as long as the U.S. government continued “intervening in Sudanese internal affairs.” Apart from the conflict, LameDuck conducted operations revealing support of Sudanese nationalist sentiments, such as their targeting of Egyptian ISPs, which they claimed were meant “to send a message to the Egyptian government that they should hold accountable anyone who insults Sudanese people on social media, just as we do in Sudan to those who insult Egyptians.”LameDuck’s pro-Muslim stance also led to targeting organizations perceived as Islamophobic. 
For example, the high level of targeting against Swedish organizations was claimed to be punishment for the burning of Qurans. Also, after perceived insults against Muslims in Canada and Germany, LameDuck announced the addition of these countries to their target list. LameDuck also placed additional focus on pro-Israeli targets following the attack by Hamas on October 7, 2023 and the subsequent Israeli military action. Cloudflare observed widespread operations against Israeli organizations across various sectors, with attacks in October 2023 focusing, for example, on major U.S. and international news outlets accusing them of “false propaganda.” Cloudflare not only observed and mitigated attacks against various organizations but also became a target itself. Last November, LameDuck “officially declared war” on Cloudflare, stating the attack was due to our status as an American company and the use of our services to protect Israeli websites. In other instances, Cloudflare observed LameDuck heavily targeting Ukraine, in particular state organizations, or critical transportation infrastructure in the Baltics. These activities have fueled speculations about Russian involvement in LameDuck’s operations, as Sudanese actors are not active in Ukraine. However, the geopolitical developments in Sudan are not detached from the Russian war of aggression in Ukraine, as both Russian and Ukrainian forces have been active in Sudan. Not to mention, this past summer, Russia shifted its support to favor the Sudanese Armed Forces and has been sanctioned for providing weapons to Sudan in exchange for access to a port. While previous misconceptions about the group's origin have been dispelled and an understanding of their mixed motivations has somewhat emerged, their disparate targeting and operations that seem to align with pro-Russian sentiments still raise questions about possible affiliations. Cybercriminal targetingIn addition to LameDuck’s politically motivated targeting, the group engaged in financially driven cybercrime, including DDoS-for-hire services. While it is easier to associate ideologically driven targeting with LameDuck actors, attributing operations motivated by financial gain has often proven less straightforward. The group’s DDoS-for-hire services makes it difficult to differentiate their attacks from those conducted by their customers. Through the unsealing of DOJ’s indictment, we learned that LameDuck had more than 100 users of their DDoS capabilities, which were leveraged in attacks targeting numerous victims worldwide. LameDuck was also known for engaging in DDoS extortion, demanding payment from their victims in exchange for stopping the attacks. Like other LameDuck operations, these extortion attempts were directed at a wide range of targets. In July 2023, the group attacked the fanfiction site Archive of our own and demanded $30,000 in Bitcoin to withdraw the attack. Setting their sights on a much larger target, LameDuck claimed credit in May of this year for an attack on the Bahrain ISP Zain, publicly stating, “if you want us to stop contact us at InfraShutdown_bot and we can make a deal.” This, of course, wasn’t their only prominent target. The group initiated a wave of DDoS attacks against Microsoft, and shortly after demanded $1 million to halt their operation and prevent further attacks. Another high profile target included Scandinavian Airlines, which suffered a series of attacks, causing disruption to various online services. 
LameDuck’s attempts to extort the airline began with demands of $3,500 and later escalated to a staggering $3 million. Whether successful or not, these extortion demands are unusual for a self-proclaimed hacktivist group and further highlight LameDuck’s use of mixed tactics and apparent need for attention. LameDuck tactics and techniquesIn its first year of operation, LameDuck conducted more than 35,000 confirmed DDoS attacks by developing and employing a powerful DDoS tool known by several names, including “Godzilla Botnet,” the “Skynet Botnet,” and “InfraShutdown”. Despite its many names suggesting it is a botnet, the DDoS tool leveraged by LameDuck is actually a Distributed Cloud Attack Tool (DCAT), which is comprised of three main components: A command and control (C2) serverCloud-based servers that receive commands from the C2 server and forward them to open proxy resolvers Open proxy resolvers run by unaffiliated third parties, which then transmit the DDoS attack traffic to LameDuck targetsLameDuck used this attack infrastructure to overwhelm a victim organization’s website and/or web infrastructure with a flood of malicious traffic. Without proper protections in place, this traffic can severely impact, if not completely impede, a website’s ability to respond to legitimate requests, leaving actual users unable to access it. Since its emergence in early 2023, LameDuck employed a variety of tactics and techniques using its DCAT capabilities. Several identified patterns include:Launching layer 7 attacks via HTTP flooding. The type of flood attack we detected and mitigated was an HTTP GET attack, which involves the attacker sending thousands of HTTP GET requests to the targeted server from thousands of unique IP addresses. The victim server is inundated with incoming requests and responses, resulting in denial of service for legitimate traffic. LameDuck was also known to leverage multi-vector attacks (e.g., a combination of TCP-based direct-path and various UDP reflection or amplification vectors).Using paid infrastructure. Unlike many other attack groups, research indicates that LameDuck did not use a botnet of compromised personal and IoT devices to conduct attacks. Rather, the group used a cluster of rented servers — which can output more traffic than personal devices — to launch attacks. The fact that LameDuck had the financial resources to rent these servers was another reason some researchers believed the group were not the grassroots hacktivists they claimed to be.Traffic generation and anonymity. LameDuck used public cloud server infrastructure to generate traffic, and also leveraged free and open proxy infrastructure to randomize and conceal the attack source. Evidence indicates the group in some cases also used paid proxies to obscure their identity.High cost endpoints. In some instances LameDuck operations were aimed at high-cost endpoints of the targeted infrastructure (i.e. endpoints responsible for resource intensive processing). Attacking these endpoints are far more disruptive than taking out several dozen less computationally intensive, low-cost endpoints.High demand periods. For some targets, LameDuck was careful to choose attack times that corresponded to high-demand periods for the target. For example, attacks during peak consumer periods to aim for maximum disruption. Blitz approach. LameDuck was known to initiate a series of concentrated attacks on multiple interfaces of their target infrastructure simultaneously.Subdomain overwhelming. 
A similar concept to the attack technique above, where LameDuck would simultaneously target numerous subdomains of the victim domain.Low RPS. The attack’s requests per second (RPS) was relatively low in an attempt to blend in with legitimate traffic and avoid detection.Threats via public announcements and propaganda. LameDuck often threatened targets in advance of actual attacks, and sometimes made threats that were never borne out. Likely reasons for doing so included gaining attention for their ideological motives and sowing uncertainty amongst potential targets.RecommendationsCloudflare has successfully defended numerous customers against attacks facilitated by LameDuck, whether it be those conducted directly by the group or those initiated by individuals utilizing their DDoS-for-hire services. It’s important to note that LameDuck’s advanced DDoS capabilities enabled them to severely impact networks and services that did not have proper protections in place. With that said, this group is unfortunately only one of many to employ successful large-scale DDoS attacks, which are only growing in size and sophistication. Organizations can protect themselves from attacks like those launched by LameDuck and similar advanced adversaries by following a standard set of DDoS mitigation best practices.Use dedicated, always-on DDoS mitigation. A DDoS mitigation service uses a large bandwidth capacity, continuous analysis of network traffic, and customizable policy changes to absorb DDoS traffic and prevent it from reaching a targeted infrastructure. Organizations should ensure they have DDoS protection for Layer 7 traffic, Layer 3 traffic, and DNSUse a web application firewall (WAF). A WAF uses customizable policies to filter, inspect, and block malicious HTTP traffic between web applications and the InternetConfigure rate limiting. Rate limiting restricts volumes of network traffic over a specific time period, essentially preventing web servers from getting overwhelmed by requests from specific IP addressesCache content on a CDN. A cache stores copies of requested content and serves them in place of an origin server. Caching resources on a content delivery network (CDN) can reduce the strain on an organization’s servers during a DDoS attackEstablish internal processes for responding to attacks. This includes understanding existing security protection and capabilities, identifying unnecessary attack surfaces, analyzing logs to look for attack patterns, and having processes in place for where to look and what to do when an attack beginsLearn about DDoS mitigation strategies in more detail.About Cloudforce OneCloudflare’s mission is to help build a better Internet. And a better Internet can only exist with forces of good that detect, disrupt and degrade threat actors who seek to erode trust and bend the Internet for personal or political gain. Enter Cloudforce One – Cloudflare’s dedicated team of world-renowned threat researchers, tasked with publishing threat intelligence to arm security teams with the necessary context to make fast, confident decisions. We identify and defend against attacks with unique insight that no one else has. The foundation of our visibility is Cloudflare’s global network – one of the largest in the world – which encompasses about 20% of the Internet. Our services are adopted by millions of users across every corner of the Internet, giving us unparalleled visibility into global events – including the most interesting attacks on the Internet. 
This vantage point allows Cloudforce One to execute real-time reconnaissance, disrupt attacks from the point of launch, and turn intelligence into tactical success.
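The rate-limiting recommendation above can be made concrete with a small, hedged sketch. This is not Cloudflare's implementation — just a minimal per-client token bucket in C. The capacity, refill rate, and the idea of keying one bucket per client IP are illustrative assumptions.

#include <stdbool.h>
#include <time.h>

/* Minimal per-client token bucket: each client may burst up to CAPACITY
 * requests and is refilled at RATE tokens per second. Illustrative only;
 * real deployments tune these per endpoint. */
#define CAPACITY 20.0   /* max burst size (assumption) */
#define RATE      5.0   /* sustained requests per second (assumption) */

struct bucket {
    double tokens;      /* tokens currently available */
    double last_refill; /* timestamp of the last refill, in seconds */
};

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
}

/* Returns true if the request should be served, false if it should be
 * dropped or answered with HTTP 429. One bucket per client IP is assumed;
 * looking the bucket up in a hash table keyed by IP is left out. */
bool allow_request(struct bucket *b)
{
    double now = now_seconds();
    double elapsed = now - b->last_refill;

    b->tokens += elapsed * RATE;          /* refill proportionally to time */
    if (b->tokens > CAPACITY)
        b->tokens = CAPACITY;             /* never exceed the burst size  */
    b->last_refill = now;

    if (b->tokens >= 1.0) {
        b->tokens -= 1.0;                 /* spend one token per request  */
        return true;
    }
    return false;                          /* bucket empty: rate limited   */
}

In practice a reverse proxy or WAF applies this kind of logic before requests ever reach the origin, which is why the post recommends pairing rate limits with dedicated DDoS mitigation and caching rather than relying on any one layer.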
2024-11-08T08:50:31
en
train
42,057,465
elliottlovell88
2024-11-06T06:03:21
null
null
null
5
null
[ 42057875, 42057876 ]
null
true
null
null
null
null
null
null
null
train
42,057,470
LuD1161
2024-11-06T06:03:58
null
null
null
1
null
[ 42057471 ]
null
true
null
null
null
null
null
null
null
train
42,057,478
toomuchtodo
2024-11-06T06:06:51
BYD added a Tesla-worth of production capacity over the past 3 months
null
https://cleantechnica.com/2024/11/04/byd-added-a-tesla-worth-of-production-capacity-over-the-past-3-months-with-more-to-come/
82
101
[ 42057610, 42057683, 42057589, 42064220, 42057884, 42057757, 42057861, 42057536, 42057560 ]
null
null
bot_blocked
403 Forbidden
null
null
nginx
2024-11-08T15:30:09
null
train
42,057,482
rkta
2024-11-06T06:08:33
The Big Array Size Survey for C
null
https://thephd.dev/the-big-array-size-survey-for-c
3
0
null
null
null
no_error
The Big Array Size Survey for C
2024-11-06T00:00:00+00:00
null
New in C2y is an operator that does something people have been asking us for, for decades: something that computes the size in elements (NOT bytes) of an array-like thing. This is a great addition and came from the efforts of Alejandro Colomar in N3369, and was voted into C2y during the recently-finished Minneapolis, MN, USA 2024 standardization meeting. But, there’s been some questions about whether we chose the right name or not, and rather than spend an endless amount of Committee time bikeshedding and arguing about this, I wanted to put this question to you, the user, with a survey! (Link to the survey at the bottom of the article.) The Operator Before we get to the survey (link at the bottom), the point of this article is to explain the available choices so you, the user, can make a more informed decision. The core of this survey is to provide a built-in, language name to the behavior of the following macro named SIZE_KEYWORD: #define SIZE_KEYWORD(...) (sizeof(__VA_ARGS__) / sizeof(*(__VA_ARGS__))) int main () { int arfarf[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9}; return SIZE_KEYWORD(arfarf); // same as: `return 10;` } This is called nitems() in BSD-style C, ARRAY_SIZE() by others in C with macros, _countof() in MSVC-style C, std::size() (a library feature) and std::extent_v<...> in C++, len() in Python, ztdc_size() in my personal C library, extent in Fortran and other language terminology, and carries many other names both in different languages but also in C itself. The survey here is not for the naming of a library-based macro (though certain ways of accessing this functionality could be through a macro): there is consensus in the C Standard Committee to make this a normal in-language operator so we can build type safety directly into the language operator rather than come up with increasingly hideous uses of _Generic to achieve the same goal. This keeps compile-times low and also has the language accept responsibility for things that it, honestly, should’ve been responsible for since 1985. This is the basic level of knowledge you need to access the survey and answer. Further below is an explanation of each important choice in the survey related to the technical features. We encourage you to read this whole blog article before accessing the survey to understand the rationale. The link is at the bottom of this article. The Choices The survey has a few preliminary questions about experience level and current/past usage of C; this does not necessarily change how impactful your choice selection will be! It just might reveal certain trends or ideas amongst certain subsets of individuals. It is also not meant to be extremely specific or even all that deeply accurate. Even if you’re not comfortable with C, but you are forced to use it at your Day Job because Nobody Else Will Do This Damn Work, well. You may not like it, but that’s still “Professional / Industrial” C development! The core part of the survey, however, revolve around 2 choices: the usage pattern required to get to said operator/keyword; and, the spelling of the operator/keyword itself. There’s several spellings, and three usage patterns. We’ll elucidate the usage patterns first, and then discuss the spellings. Given this paper and feature were already accepted to C2y, but that C2y has only JUST started and is still in active development, the goal of this survey is to determine if the community has any sort of preference for the spelling of this operator. 
Ideally, it would have been nice if people saw the papers in the WG14 document log and made their opinions known ahead-of-time, but this time I am doing my best to reach out to every VIA this article and the survey that is linked at the bottom of the article. Usage Pattern Using SIZE_KEYWORD like in the first code sample, this section will explain the three usage patterns and their pros/cons. The program is always meant to return 42. const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(SIZE_KEYWORD(barkbark) == 6, "must have a size of 6"); int main () { return (int)barkbark[SIZE_KEYWORD(barkbark) - 1]; } Underscore and capital letter _Keyword; Macro in a New Header This technique is a common, age-old way of providing a feature in C. It avoids clobbering the global user namespace with a new keyword that could be affected by user-defined or standards-defined macros (from e.g. POSIX or that already exist in your headers). A keyword still exists, but it’s spelled with an underscore and a capital letter to prevent any failures. The user-friendly, lowercase name is only added through a new macro in a new header, so as to prevent breaking old code. Some notable features that USED to be like this: _Static_assert/static_assert with <assert.h> _Alignof/alignof with <stdalignof.h> _Thread_local/thread_local with <threads.h> _Bool/bool with <stdbool.h> As an example, it would look like this: #include <stdkeyword.h> const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; _Static_assert(keyword_macro(barkbark) == 6, "must have a size of 6"); int main () { return (int)barkbark[_Keyword(barkbark) - 1]; } Underscore and capital letter _Keyword; No Macro in Header This is a newer way of providing functionality where no effort is made to provide a nice spelling. It’s not used very often, except in cases where people expect that the spelling won’t be used often or the lowercase name might conflict with an important concept that others deem too important to take for a given spelling. This does not happen often in C, and as such there’s really only one prominent example that exists in the standard outside of extensions: _Generic; no macro ever provided in a header As an example, it would look like this: // no header const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(_Keyword(barkbark) == 6, "must have a size of 6"); int main () { return (int)barkbark[_Keyword(barkbark) - 1]; } This is the more bolder way of providing functionality in the C programming language. Oftentimes, this does not happen in C without a sister language like C++ bulldozing code away from using specific lowercase identifiers. It can also happen if a popular extension dominates the industry and makes it attractive to keep a certain spelling. Technically, everyone acknowledges that the lowercase spelling is what we want in most cases, but we settle for the other two solutions because adding keywords of popular words tends to break somebody’s code. That leads to a lot of grumbling and pissed off developers who view code being “broken” in this way as an annoying busywork task added onto their workloads. 
For C23, specifically, a bunch of things were changed from the _Keyword + macro approach to using the lowercase name since C++ has already effectively turned them into reserved names: true, false, and bool thread_local static_assert alignof typeof (already an existing extension in many places) As an example, it would look like this: // no header const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(keyword(barkbark) == 6, "must have a size of 6"); int main () { return (int)barkbark[keyword(barkbark) - 1]; } Keyword Spellings By far the biggest war over this is not with the usage pattern of the feature, but the actual spelling of the keyword. This prompted a survey from engineer Chris Bazley at ARM, who published his results in N3350 Feedback for C2y - Survey results for naming of new nelementsof() operator. The survey here is not going to query the same set of names, but only the names that seemed to have the most discussion and support in the various e-mails, Committee Meeting discussion, and other drive-by social media / Hallway talking people have done. Most notably, these options are presented as containing both the lowercase keyword name and the uppercase capital letter _Keyword name. Specific combinations of spelling and usage pattern can be given later during an optional question in the survey, along with any remarks you’d like to leave at the end in a text box that can handle a fair bit of text. There are only 6 names, modeled after the most likely spellings similar to the sizeof operator. If you have another name you think is REALLY important, please add it at the end of the comments section. Some typical names not included with the reasoning: size/SIZE is too close to sizeof and this is not a library function; it would also bulldoze over pretty much every codebase in existence and jeopardize other languages built on top of / around C. nitems/NITEMS is a BSD-style way of spelling this and we do not want to clobber that existing definition. ARRAY_SIZE/stdc_size and similar renditions are not provided because this is an operator exposed through a keyword and not a macro, but even then array_size/_Array_size were deemed too awkward to spell. dimsof/dimensionsof was, similarly, not all that popular and dimensions as a word did not convey the meaning very appropriately to begin with. Other brave but unfortunately unmentioned spellings that did not make the cut. The options in the survey are as below: lenof / _Lenof A very short spelling that utilizes the word “length”, but shortened in the typical C fashion. Very short and easy to type, and it also fits in with most individual’s idea of how this works. It is generally favored amongst C practitioners, and is immediately familiar to Pythonistas. A small point of contention: doing _Lenof(L"barkbark") produces the answer “9”, not “8” (the null terminator is counted, just as in sizeof("barkbark")). This has led some to believe this would result in “confusion” when doing string processing. It’s unclear whether this worry is well-founded in any data and not just a nomenclature issue. As “len” and lenof are popular in C code, this one would likely need a underscore-capital letter keyword and a macro to manage its introduction, but it is short. 
const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(_Lenof(barkbark) == 6, "must have an length of 6"); int main () { return (int)barkbark[lenof(barkbark) - 1]; } lengthof / _Lengthof This spelling won in Chris Bazley’s ARM survey of the 40 highly-qualified C/C++ engineers and is popular in many places. Being spelled out fully seems to be of benefit and heartens many users who are sort of sick of a wide variety of C’s crunchy, forcefully shortened spellings like creat (or len, for that matter, though len is much more understood and accepted). It is the form that was voted into C2y as _Lengthof, though it’s noted that the author of the paper that put _Lengthof into C is strongly against its existence and thinks this choice will encourage off-by-one errors (similarly to lenof discussed above). Still, it seems like both the least hated and most popular among the C Committee and the adherents who had responded to Alejandro Colomar’s GCC patch for this operator. Whether it will continue to be popular with the wider community has yet to be seen. As “length” and lengthof are popular in C code, this one would likely need a underscore-capital letter keyword and a macro to introduce it carefully into existing C code. const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(_Lengthof(barkbark) == 6, "must have an length of 6"); int main () { return (int)barkbark[lengthof(barkbark) - 1]; } countof / _Countof This spelling is a favorite of many people who want a word shorter than length but still fully spelled out that matches its counterpart size/sizeof. It has strong existing usage in codebases around the world, including a definition of this macro in Microsoft’s C library. It’s favored by a few on the C Committee, and I also received an e-mail about COUNT being provided by the C library as a macro. It was, unfortunately, not polled in the ARM survey. It also conflicts with C++’s idea of count as an algorithm rather than an operation (C++ just uses size for counting the number of elements). It is dictionary-definition accurate to what this feature is attempting to do, and does not come with off-by-one concerns associated with strings and “length”, typically. As “count” and countof are popular in C code, this too would need some management in its usage pattern to make it available everywhere without getting breakage in some existing code. const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(_Countof(barkbark) == 6, "must have an length of 6"); int main () { return (int)barkbark[countof(barkbark) - 1]; } nelemsof / _Nelemsof This spelling is an alternative spelling to nitems() from BSD (to avoid taking nitems from BSD). nelemsof is also seem as the short, cromulent spelling of another suggestion in this list, nelementsof. It is a short spelling but lacks spaces between n and elems, but emphasizes this is the number of elements being counted and not anything else. The n is seen as a universal letter for the count of things, and most people who encounter it understand it readily enough. It lacks problems about off-by-one counts by not being associated with strings in any manner, though n being a common substitution for “length” might bring this up in a few people’s minds. As “nelems” and nelems are popular in C code, this too would need some management in its usage pattern to make it available everywhere without getting breakage in some existing code. 
const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(_Nelemsof(barkbark) == 6, "must have an length of 6"); int main () { return (int)barkbark[nelemsof(barkbark) - 1]; } nelementsof / _Nelementsof This is the long spelling of the nelemsof option just prior. It is the preferred name of the author of N3369, Alejandro Colomar, before WG14 worked to get consensus to change the name to _Lengthof for C2y. It’s a longer name that very clearly states what it is doing, and all of the rationale for nelems applies. This is one of the only options that has a name so long and unusual that it shows up absolutely nowhere that matters. It can be standardized without fear as nelements with no macro version whatsoever, straight up becoming a keyword in the Core C language without any macro/header song-and-dance. const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(nelementsof(barkbark) == 6, "must have an length of 6"); int main () { return (int)barkbark[nelementsof(barkbark) - 1]; } extentof / _Extentof During the discussion of the paper in the Minneapolis 2024 meeting, there was a surprising amount of in-person vouching for the name extentof. They also envisioned it coming with a form that allowed to pass in which dimension of a multidimensional array you wanted to get the extent of, similar to C++’s std::extent_v and std::rank_v, as seen here and here. Choosing this name comes with the implicit understanding that additional work would be done to furnish a rankof/_Rankof (or similar spelling) operator for C as well in some fashion to allow for better programmability over multidimensional arrays. This option tends to appeal to Fortran and Mathematically-minded individuals in general conversation, and has a certain appeal among older folks for some reason I have not been able to appropriately pin down in my observations and discussions; whether or not this will hold broadly in the C community is anyone’s guess. As “extent” is a popular word and extentof similarly, this one would likely need a macro version with an underscore capital-letter keyword, but the usage pattern can be introduced gradually and gracefully. const double barkbark[] = { 0.0, 0.5, 7.0, 14.7, 23.3, 42.0 }; static_assert(_Extentof(barkbark) == 6, "must have an extent of 6"); int main () { return (int)barkbark[extentof(barkbark) - 1]; } The Survey Here’s the survey: https://www.allcounted.com/s?did=qld5u66hixbtj&lang=en_US. There is an optional question at the end of the survey, before the open-ended comments, that allows for you to also rank and choose very specific combinations of spelling and feature usage mechanism. This allows for greater precision beyond just answering the two core questions, if you want to explain it. Employ your democratic right to have a voice and inform the future of C, today! Good Luck! 💚 Banner and Title Photo by Luka, from Pexels Tags C C standard 📜
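Two points made above — that the portable sizeof-division macro is not type-safe on its own, and that a "length"-flavoured name invites off-by-one confusion with strings — can be illustrated with a short sketch. The guard below is the well-known GCC/Clang-extension pattern (Linux-kernel style), named COUNTOF here purely for illustration; it is not the C2y operator, and compilers without __builtin_types_compatible_p would need a different trick (for example the _Generic contortions mentioned earlier).

#include <assert.h>
#include <string.h>

/* Portable but unsafe: silently "works" on pointers and returns nonsense. */
#define SIZE_KEYWORD(...) (sizeof(__VA_ARGS__) / sizeof(*(__VA_ARGS__)))

/* Common GCC/Clang-extension guard: forces a compile error (negative
 * bit-field width) if the argument is a pointer rather than a real array. */
#define MUST_BE_ARRAY(a) \
    ((int)sizeof(struct { int : -!!__builtin_types_compatible_p(__typeof__(a), __typeof__(&(a)[0])); }))
#define COUNTOF(a) (sizeof(a) / sizeof((a)[0]) + MUST_BE_ARRAY(a))

int main(void)
{
    int arr[10] = {0};
    int *p = arr;

    static_assert(SIZE_KEYWORD(arr) == 10, "array: correct");
    /* SIZE_KEYWORD(p) compiles but yields sizeof(int*)/sizeof(int): garbage. */
    (void)p;

    assert(COUNTOF(arr) == 10);
    /* COUNTOF(p) would be a compile error, which is the point of the guard. */

    /* The off-by-one concern for a "length"-flavoured name: the element
     * count of a string literal includes the terminating null, strlen()
     * does not. */
    static_assert(SIZE_KEYWORD("barkbark") == 9, "9 elements including the null");
    assert(strlen("barkbark") == 8);

    return 0;
}

The C2y operator makes guards like this unnecessary: because it is part of the language, it can reject non-array operands at compile time without any macro machinery.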
2024-11-07T22:50:07
en
train
42,057,491
anotherhue
2024-11-06T06:11:19
Vopono – Run apps through VPN tunnels with temporary network namespaces
null
https://github.com/jamesmcm/vopono
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,506
leizhan
2024-11-06T06:16:44
null
null
null
1
null
[ 42057507 ]
null
true
null
null
null
null
null
null
null
train
42,057,515
coinpress
2024-11-06T06:18:47
null
null
null
1
null
[ 42057516 ]
null
true
null
null
null
null
null
null
null
train
42,057,525
alraj
2024-11-06T06:20:26
The Art of Manually Editing Hunks
null
https://kennyballou.com/blog/2015/10/art-manually-edit-hunks/index.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,534
lkjalglkjl3t
2024-11-06T06:23:29
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,542
busymom0
2024-11-06T06:27:07
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,547
fgdggf
2024-11-06T06:27:32
Oracle Database – Sysoper Privilege
null
https://datacadamia.com/db/oracle/sysoper
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,556
Physkal
2024-11-06T06:29:47
'Fat Leonard' gets 15 years for plotting one of US Military's biggest scandals
null
https://www.theguardian.com/us-news/2024/nov/05/fat-leonard-sentenced-military-fraud
3
0
null
null
null
null
null
null
null
null
null
null
train
42,057,574
SeaTunnel
2024-11-06T06:35:26
null
null
null
1
null
[ 42057575 ]
null
true
null
null
null
null
null
null
null
train
42,057,579
punnerud
2024-11-06T06:36:52
Building a version of Python that can be embedded into macOS, iOS, tvOS, watchOS
null
https://github.com/beeware/Python-Apple-support
2
0
null
null
null
null
null
null
null
null
null
null
train
42,057,597
dbacar
2024-11-06T06:40:56
A GitHub repo that curates all the awesome repos
null
https://github.com/bayandin/awesome-awesomeness
26
8
[ 42058021, 42057947, 42058018, 42057598 ]
null
null
no_error
GitHub - bayandin/awesome-awesomeness: A curated list of awesome awesomeness
null
bayandin
2024-11-08T10:55:40
en
train
42,057,600
dndndnd
2024-11-06T06:41:11
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,611
eashish93
2024-11-06T06:42:44
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,647
koolba
2024-11-06T06:49:49
Trump wins presidency for second time
null
https://thehill.com/homenews/campaign/4969061-trump-wins-presidential-election/
1,565
7,290
[ 42067489, 42063854, 42065473, 42065765, 42064630, 42067742, 42066202, 42064757, 42060354, 42061453, 42063722, 42069459, 42070311, 42058797, 42059351, 42068914, 42060629, 42069403, 42068858, 42061364, 42060294, 42069575, 42060879, 42061407, 42062445, 42071386, 42063832, 42066716, 42062244, 42065051, 42057655, 42060183, 42061003, 42060657, 42065653, 42066321, 42058686, 42069509, 42066281, 42065926, 42060029, 42063066, 42065073, 42066899, 42058484, 42062220, 42067589, 42059839, 42064327, 42065705, 42067312, 42060406, 42067215, 42058641, 42064951, 42069469, 42062939, 42058742, 42062110, 42068189, 42065176, 42064132, 42059296, 42062228, 42061913, 42067880, 42064737, 42061837, 42069513, 42062088, 42060285, 42058580, 42071020, 42063383, 42066496, 42058758, 42070583, 42069116, 42066922, 42070880, 42058408, 42058221, 42069285, 42061778, 42058360, 42060969, 42058416, 42067664, 42069291, 42067482, 42063740, 42060429, 42060820, 42065715, 42066097, 42060717, 42070125, 42066860, 42059187, 42064726, 42065323, 42070130, 42070805, 42065001, 42064711, 42071160, 42064765, 42059794, 42058381, 42063759, 42068982, 42066008, 42065404, 42065440, 42058582, 42068446, 42063278, 42058413, 42060250, 42059941, 42066035, 42061674, 42060189, 42065497, 42060015, 42059247, 42059193, 42061210, 42067065, 42060965, 42070596, 42059052, 42066024, 42060195, 42058271, 42059041, 42058175, 42066149, 42061182, 42059456, 42059502, 42062465, 42062868, 42060242, 42058880, 42065093, 42065769, 42060773, 42059145, 42062884, 42059648, 42060159, 42058757, 42059826, 42065983, 42059494, 42059441, 42070765, 42058242, 42058508, 42058982, 42058455, 42058801, 42069592, 42060404, 42061140, 42061189, 42070111, 42066593, 42064586, 42059542, 42058423, 42061653, 42059199, 42061059, 42066612, 42069364, 42060152, 42061145, 42059122, 42062114, 42057912, 42066440, 42066573, 42066272, 42058461, 42061574, 42058564, 42062997, 42058833, 42062754, 42058249, 42065947, 42058257, 42065799, 42061338, 42058524, 42059446, 42065994, 42058499, 42059480, 42061028, 42059960, 42068519, 42060915, 42063111, 42068073, 42061702, 42068549, 42061101, 42060660, 42059025, 42060573, 42070924, 42060357, 42060061, 42062101, 42059365, 42065624, 42058941, 42071351, 42060056, 42070715, 42060536, 42062002, 42065478, 42062001, 42064058, 42059092, 42058976, 42057847, 42060557, 42058940, 42061245, 42062011, 42060182, 42059764, 42061165, 42060150, 42058231, 42059733, 42058370, 42059806, 42059901, 42060896, 42059596, 42061844, 42058547, 42060941, 42058647, 42060301, 42058699, 42066978, 42062142, 42061581, 42059129, 42060839, 42059832, 42059956, 42058196, 42060095, 42060308, 42061620, 42066765, 42058632, 42060934, 42065925, 42057661, 42063295, 42061192, 42063538, 42061479, 42058283, 42065915, 42061457, 42068360, 42058575, 42061745, 42062692, 42058213, 42059019, 42068679, 42070633, 42067030, 42071308 ]
null
null
null
null
null
null
null
null
null
train
42,057,673
namuol
2024-11-06T06:55:42
Real Life "Sort by Controversial"
null
https://www.lesswrong.com/posts/qbbaF79uJqvmWZELv/real-life-sort-by-controversial
2
2
[ 42058238 ]
null
null
null
null
null
null
null
null
null
train
42,057,674
sandwichsphinx
2024-11-06T06:55:52
Photovoltaic materials: Present efficiencies and future challenges (2016)
null
https://www.science.org/doi/abs/10.1126/science.aad4424
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,675
COINTURK
2024-11-06T06:56:52
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,679
COINTURK
2024-11-06T06:57:16
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,684
ArtTimeInvestor
2024-11-06T06:58:29
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,690
keepamovin
2024-11-06T07:00:07
null
null
null
5
null
[ 42057735 ]
null
true
null
null
null
null
null
null
null
train
42,057,691
matthewddy
2024-11-06T07:00:31
null
null
null
1
null
[ 42057692 ]
null
true
null
null
null
null
null
null
null
train
42,057,703
gemanor
2024-11-06T07:01:47
Python dethrones JavaScript as the most-used language on GitHub
null
https://www.theregister.com/2024/11/05/python_dethrones_javascript_github/
38
10
[ 42062902, 42058010, 42058165, 42058444, 42063088, 42057704, 42064392 ]
null
null
null
null
null
null
null
null
null
train
42,057,707
tagawa
2024-11-06T07:02:24
South Korea fines Meta about $15M over collection of user data
null
https://www.reuters.com/technology/south-korea-fines-meta-about-15-mln-over-collection-user-data-2024-11-05/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,057,745
doener
2024-11-06T07:09:57
Mystery of the "most mysterious song on the internet" solved (German)
null
https://www.golem.de/sonstiges/zustimmung/auswahl.html?from=https%3A%2F%2Fwww.golem.de%2Fnews%2Fnew-wave-song-raetsel-um-mysterioesestes-lied-des-internet-geloest-2411-190500.html
5
2
[ 42059272, 42057907 ]
null
null
null
null
null
null
null
null
null
train
42,057,758
doener
2024-11-06T07:12:43
2x 16GBx256GB Mac minis cost $1 cheaper than a single 32GBx512GB Mac mini
null
https://twitter.com/seatedro/status/1853262737557590479
88
44
[ 42060960, 42063243, 42058200, 42057771, 42058182, 42058009, 42060306, 42062379, 42058143, 42057997, 42061041 ]
null
null
null
null
null
null
null
null
null
train
42,057,759
dtquad
2024-11-06T07:12:57
Russia blamed for bomb threats at polling sites in Georgia and other states
null
https://www.washingtonpost.com/technology/2024/11/05/russia-blamed-bomb-threats-that-briefly-shut-ga-polling-stations/
11
1
[ 42057773 ]
null
null
missing_parsing
Russia blamed for bomb threats at polling sites in Georgia and other states
2024-11-05T20:49:24.144Z
Joseph Menn, Amy Gardner, Perry Stein
Russia was behind false bomb threats in Georgia and other states that briefly closed polling stations in some Democratic-leaning areas Tuesday, an escalation in tactics aimed at sowing fear and suppressing votes, federal and local officials said.

“The FBI is aware of bomb threats to polling locations in several states, many of which appear to originate from Russian email domains,” the agency said in a statement. “None of the threats have been determined to be credible thus far.”
2024-11-08T07:07:58
null
train
42,057,763
halildeniz
2024-11-06T07:14:15
null
null
null
1
null
[ 42057764 ]
null
true
null
null
null
null
null
null
null
train
42,057,769
dotMartin
2024-11-06T07:14:55
null
null
null
1
null
[ 42057770 ]
null
true
null
null
null
null
null
null
null
train
42,057,774
gnabgib
2024-11-06T07:15:42
Gut microbiota regulates stress responsivity via the circadian system
null
https://www.sciencedirect.com/science/article/pii/S1550413124003991
3
0
null
null
null
null
null
null
null
null
null
null
train
42,057,778
teleforce
2024-11-06T07:16:30
HAPI FHIR: Open-Source Framework for Implementing HL7 FHIR API and Interop
null
https://hapifhir.io/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,057,779
Fortuna001
2024-11-06T07:16:41
null
null
null
1
null
[ 42057780 ]
null
true
null
null
null
null
null
null
null
train
42,057,785
umairjutt
2024-11-06T07:18:23
Would you pay for Hacker News?
I want to know: since Hacker News is a really valuable source, if it went paid, how much would you be willing to pay per month?
null
5
10
[ 42062736, 42057885, 42058054, 42058157, 42057960, 42058096, 42058118, 42057961, 42057844, 42058359 ]
null
null
null
null
null
null
null
null
null
train
42,057,793
M0HD197
2024-11-06T07:20:03
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,826
todsacerdoti
2024-11-06T07:26:02
Vegetation in Cod:BO4
null
https://c0de517e.com/015_vegetation_system.htm
4
1
[ 42058189 ]
null
null
missing_parsing
From the archive: Vegetation in COD:BO4.
null
Angelo Pesce
Welcome to another entry in the "from the archive" series, where I finally get around writing about stuff that there was no time to present back when it was made, hoping nobody gets upset and that my memory of things has not entirely evaporated. And I can call it now a series, having written this, which is the second post :) I guess it's also "timely" considering that Black Ops 6 just came out (and it seems it's going to be a smash hit). For context, BO4 was... let me count... six call of duties ago! Setting the stage. The common wisdom is that COD never changes, it's simply a thin new skin over the same game and codebase. Of course, the common wisdom is... entirely wrong. Of course, a franchise is a franchise, and I don't imagine there will be an RPG or platformer with the COD name anytime soon, but, in no small part due to having different studios alternating at the helm of development, each game in my experience brought significant innovations. And after a bit, you might even start noticing the uniqueness in each studio culture, and way of working. For me, from my limited vantage point, Treyarch was "the crazy one". I have to emphasize the limits of my POV: I never believed it is possible to understand a large, multifaceted production as a whole, even from within, and I certainly did not see most of it, sitting in Vancouver and working only on the technological side. That said... Why do I say "crazy"? Well... The first Treyarch COD I helped on was COD:BO3, the studio's first entry for a new generation of consoles, the Ps4/XB1 era. In BO3, not only Treyarch changed pretty much everything in how the game was rendered (going from forward to deferred, from lightmaps to volumes, adding OIT and software particles, removing lots of arcane magic from the level baking process, going to GPU instancing etc etc), how it was created (rewriting large parts of the editor and workflows) and how it was played (BO3 shipped with a ton of game modes, including a coop campaign for the first time)... but what really surprised me at the time was the willingness to take risks and accept changes up to the very last minute. I specifically remember making some improvements in the lighting system that I thought were never going to ship, as what I had already committed was good and the new code would alter the look of all levels in the last few weeks we had to complete the game, and the leads just eagerly taking the changes. It was no surprise then that for BO4 when it was decided not to have at all a single-player campaign in favor of adding a new battle-royale option, the team jumped on the opportunity. This was huge. Call of Duty is, famously, a "corridor shooter" - made for relatively small, cramped urban maps, not an open-world engine! The team managed to "pivot" the tech in record time, and BO4 shipped at launch with battle royale, a feat that the same year will not be matched by Battlefield V (which will ship the mode months later, as an update), even if EA's title has always been about much larger maps than COD!COD:BO4 "Blackout" battle royale. Treyarch needed to figure out what the mode would look like (and I can assure you, many ideas were prototyped) and how to make it work in the engine. After the dust settled enough on the gameplay for the mode, we knew that we needed to focus on a new, large terrain system and its biomes. Design prototype. IIRC, when Central Tech was called in to help, we had already a prototype to show how the system was supposed to work. 
Treyarch made a new terrain system and had something working in the editor to place vegetation on top of it, using artist-painted density maps and procedural rules - and they were happy with the overall workflows. There were three main problems to solve: 1) The prototype went out of memory on console... simply trying to load the data files with the locations of all the procedurally created objects! 2) Editing was less than interactive... The performance of the procedural generation was poor. 3) Quality/variety of the patterns used for distributing objects could improve. You might have spotted that runtime performance is not on the list! Treyarch's engine moved to an instancing-based solution that year, and already for BO3 there had been a lot of work on the streaming system, and COD tends to be quite efficient to boot. Another researcher, Josiah, worked on improving the geometry LOD / impostor system he created to account for vegetation and bigger maps, but that's another story. The key part of the prototype was about how placement was supposed to work with painted density maps on the terrain. Treyarch went for a solution that allowed multiple layers, in a stack. Each layer could then influence the ones below, at the very least "occluding" them, but ideally also in other ways. For example, you might have placed big trees in the top layer, driven by the density map, and that got evaluated first, deciding where to place the tree instances. The objects would be placed checking that they could not intersect each other. Then a second layer might have been, say, big boulders - and you don't want a rock to intersect a tree, so the tree layer could be set to "inhibit" the layer below in a radius around each placed tree. A third layer might have been smaller rocks, and you wanted them not to intersect either the trees or the boulders, but favor clumping around the latter, so the boulder layer would act as negative density bias up to a given radius around a placed boulder, and then a positive bias in a small ring after that. And so on... So, we had already lots of the pieces in place, which helped a lot, especially as we were helping from outside the studio. In total I think this vegetation effort ended up taking four engineers from Central Tech: Josh and Ryan on the actual implementation for the engine and editor, Wade made the engine system for grass on the terrain, and myself to R&D the algorithms for scattering and the visuals of the grass (i.e. shaders). *The elephant in the room & the road not taken.* Imagine being tasked with working on vegetation in 2018. There is no way your first thoughts don't go to the lush forests of Guerilla's 2017 Horizon Zero Dawn in their real-time procedurally generated glory.Love! And so of course I went to look! If procedural placement is slow, we'll make it real-time, right? Well, to be honest, I did not get it... It might be because I did not see their GDC presentation live, unfortunately - it might be that I'm not that smart, but I didn't fully grok it. I appreciated the idea of using dithering for placement, good dithering is in practice an exercise in generating blue-noise distributions of variable density (even the old Floyd-Steinberg is fundamentally that), but overall I didn't fully grok it. 
Moreover, it seemed to me it did not work under the same assumptions of having the placement of given objects directly affect the placement of others, I think they effectively "emulate" the same idea by virtue of being able to manipulate the density maps, but as far as I can tell, it does not work at the object level. I had an idea in my head to make a sort of rasterizer, writing to a bitmask, to mark areas that are already occupied in a given vegetation tile. This idea of using bitmasks on the GPU is somewhat popular today, Martin Mittrig did an excellent summary in this EA SEED presentation, but at the time I was mostly inspired by the vague recollection of this blog post (which I always struggle to find when I need it) where the author computer AO for a voxel world making a tiny, bitmask-based, CPU cubemap rasterizer.Jotting down the idea of using bitmasks.Thinking about how it could work...Realizing that the "pixels" can be entirely virtual, we could associate bits to irregular shapes. I spent a bit of time on pen and paper trying to make this idea work, but ultimately ended up shelving it. Albeit I was persuaded it could work, I simply did not see the use. It is tricky to "linearize" research into a timeline - exploration and exploitation alternate in the search of viable techniques, it looks much more like a (real-world) tree... But relatively soon it became apparent that it was unlikely we would need real-time generation - and to this day, I don't fully appreciate the appeal. On one hand, I was working on the placement algorithms, and it seemed to me that we could efficiently store the results, directly, without needing to evaluate the logic in runtime. Remember that I mentioned someone was working on a grass system? That was a parallel decision that helped a lot - in the original prototype, there was no specialized grass rendering, but we knew pretty early on that we wanted to integrate that directly with the terrain system. Once the grass was out of the way - the number of instances we had to deal with decreased by orders of magnitude...COD's "radiant" editor. Placement logic. If you are asked to place objects in a way that they don't self-intersect, you'd likely think of some minimum-distance guarantees, which leads to blue-noise point set generation ideas. The tricky part here is though that we wanted to have objects in layers, where each layer entailed a different size/minimum distance, and remember as well that we needed to check for the logic that makes layers "talk" to the ones below.I doubt anyone can read my notes... My first instinct was to create a point set that is "hierarchically" blue-noise, meaning that as you add more and more points, these look still blue-noise, but they become denser, and thus, the minimum-distance they guarantee shrinks. This property is guaranteed implicitly by Mitchell's best-candidate algorithm (where we generate each iteration a number of random candidate point locations and then we add to the point set one that is furthest from any other point in the set) - but we can do better than that!Mitchell's best candidate point generation. Let's think about it. First of all, for any point set - in any random order, we can compute how as we add points from it the minimum distance between the points decreases. So, having a decreasing minimum distance is not the objective, that is a given! What we can optimize with sorting and/or proper point location is how gradually the distance decreases. 
Or, in other words, how many points we can use for objects of a given size.Is this art? https://www.instagram.com/p/Bjf_PWWFxCJ/ No......it's the visualization of a small program that tries to generate point sets with as-gradual-as-possible minimum distance decrease This ends up being quite tricky to write down in a way that we can perform numerical optimization on the point locations themselves, but no worries, it won't matter. Quite simply we can take any point set and sort it to make it incremental in the sense of minimum distance. And a good heuristic on how to do the sort is provided by the best-candidate algorithm itself! Simply use the same idea, but instead of generating random candidate points, take them from the fixed, existing set. If you want the optimal sorting, you have to try N times, once per every possible point in the set as the initial one to pick. It takes time, but we have to do it only once! Solved? Done? Not quite! In fact, this whole thing went to the trashcan, but no worries - we didn't waste time! All this idea of building a fixed point set that we could tile, and use to guarantee non-self-intersection of objects of different sizes was still under the assumption that we needed to generate placements very fast, on the GPU! But it turns out, as the other engineers started to implement tiled, parallel updates on the prototype code, we figured out that we probably did not need to be that fast! In fact, after a few optimizations to the collision detection/cross-layer influence system (circle-circle intersection and circles-to-point distance queries), even on CPU we already had something interactive enough for artists to use! In the end, we would have CPU and GPU compute implementations for the editor, but by the time we had the final placement algorithms, the latter ended up being somewhat overkill. Once we became unshackled by the requirement of doing everything in runtime, it was easy to decide that to accommodate objects of different sizes - each in its own layer - we could simply use different-sized tiles per layer, effectively, scaling up or down the point set in world space, so that we can always use the whole set, and we need only to "remove" points to accommodate for the variable density as painted from the artists. And we made layers able to have multiple 3d models associated with them - as long as they were of similar size and needed to respect the same placement logic - to provide more variation with an easier workflow.Same as the paragraph above, "encrypted". Making it not boring. So now we had something simple, working, and fast. The point ordering ended up being re-used to determine if a given location passed or not the density-map threshold test: for each point, we sampled the density map and computed the number of points that we theoretically would have enabled in the tile if that density value was constant across it. If the current point index is < that number, we enable it, otherwise, we don't. This is the same as saving a threshold activation value per point, and making sure that each point has a unique one, but it does not require saving said threshold. This idea is also very similar to an ordered dither pattern. And exactly like a dither pattern, the order of the points creates different "styles". Now we have all the components in place. First, we start with a fixed point set. 
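That fixed point set can be sketched in code — a minimal, single-tile version of the Mitchell's best-candidate generation described earlier, where each new point is the random candidate farthest from the already-accepted points. This is not the shipped Treyarch tool; the point count, candidate count, unit-square tile, and plain rand() source are all illustrative assumptions.

#include <stdlib.h>

#define NUM_POINTS     256   /* points per tile (assumption) */
#define NUM_CANDIDATES 32    /* candidates tried per accepted point (assumption) */

struct pt { float x, y; };

static float rand01(void) { return (float)rand() / (float)RAND_MAX; }

static float dist2(struct pt a, struct pt b)
{
    float dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

/* Fills pts[0..n): each new point is the candidate whose nearest accepted
 * neighbour is farthest away, which is what gives the gradually-decreasing
 * minimum-distance property discussed in the post. */
void best_candidate_set(struct pt *pts, int n)
{
    pts[0].x = rand01();
    pts[0].y = rand01();
    for (int i = 1; i < n; i++) {
        struct pt best = pts[0];
        float best_d = -1.0f;
        for (int c = 0; c < NUM_CANDIDATES; c++) {
            struct pt cand = { rand01(), rand01() };
            float d = dist2(cand, pts[0]);   /* nearest accepted point */
            for (int j = 1; j < i; j++) {
                float dj = dist2(cand, pts[j]);
                if (dj < d)
                    d = dj;
            }
            if (d > best_d) {                /* keep the farthest candidate */
                best_d = d;
                best = cand;
            }
        }
        pts[i] = best;
    }
}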
In practice, to give more control, artists could alter the positions per layer a bit - I ended up generating a noisy set with Mitchell's best candidate, and then moving the point locations with a relaxation algorithm to distribute them more uniformly (note that if you relax too much you end up just with a hexagonal grid, as hexagonal packing is optimal for circles in a plane) - and layers had a value to interpolate between the two.Point locations from best-candidate, and relaxed. Then, we compute different orderings - permutations - of the point set, which will create different styles that can be selected, again, per layer. I think we ended up with a handful of options there, but I created an app to explore sorting variants and their parameters, with which technical artists could then pick the styles they wanted. The app itself was cute, C++ using something like a graphical "printf" system - you could pass to a function a lambda with the drawing code, the lambda would receive the window inputs and execute in a separate thread, but that's also a story for another day.This is how the "max-spacing" style looked at 20, 40, and 60% constant density in a tile. Random permutation. Alternating maximum-distance point choice with minimum-to-last added: forces clumping. Same, but choosing a random number of points to clump in the minimum-to-last distance loop. Clumping around a randomly selected anchor, not the last added point. String-like clumps. Storage. Now that we have a good way to distribute instances, we only need to solve storage. Because we did all this work to be able to use only a handful of preset point locations the storage is trivial. Once we did all the logic to remove self-intersections and account for layer-to-layer influences, we ended up needing only the layer parameters and a bitmask per layer per tile. Note that we don't even need, in runtime, the various permutations used for "style" - these are needed only in the editor when the density maps are evaluated! The bitmask is already reasonably small, in fact, smaller than storing density maps unless we use very small objects (remember, we scale tile size based on the object size, so the smaller the object the more tiles we'll have in the world). But we can do better with simple run-length encoding on top. We sort the point set one last time - for storage. In this case, we want them in an order that preserves spatial locality, because we can imagine that even with the various dithering styles and density maps and complex logic, we often end up with patches of dense point areas and barren areas, over the map. We can use a space-filling curve to sort the points, or we can run a "traveling salesman" optimizer. I don't remember which one I ended up using, probably the latter, but I'll put here some more hand-drawn diagrams of when I was trying to figure out beta-omega curves, which are the state-of-the-art:I was going slightly mad... Storing this way also ends up being a slight win rendering-wise, as the instances you'll create are spatially compact and that's a good property in a variety of settings.This incomprehensible mess was a debug visualization of the spatial "runs" and "breaks" in the encoding. I think.Same. Maybe? Who remembers :) Putting it all together.Tiles over a density map and resulting instances.More testing...This I think was from actual game terrain data. Blackout map.
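A matching sketch of the two runtime-facing pieces described above: the ordered-dither style density test (a point whose index in the style ordering is below density × point count is enabled) and the per-tile bitmask that is, per the post, essentially all the placement data that needs to be stored. Again, this is an illustrative reconstruction under stated assumptions, not the actual engine code; sampling the artist-painted density map is stubbed out.

#include <stdint.h>

#define NUM_POINTS 256                       /* points per tile (assumption) */

struct pt { float x, y; };

/* Stub: in the real tool this samples the painted density map at the
 * point's world position; a constant stands in here for illustration. */
static float sample_density(struct pt p) { (void)p; return 0.4f; }

/* Ordered-dither activation: with the point set in a fixed "style" order,
 * point i is kept iff i < density * NUM_POINTS. Each layer then needs only
 * one bit per point, so a tile's result fits in a small bitmask. */
void bake_tile_bitmask(const struct pt *style_ordered_pts,
                       uint64_t mask[NUM_POINTS / 64])
{
    for (int w = 0; w < NUM_POINTS / 64; w++)
        mask[w] = 0;

    for (int i = 0; i < NUM_POINTS; i++) {
        float density  = sample_density(style_ordered_pts[i]);
        int threshold  = (int)(density * NUM_POINTS);
        if (i < threshold)
            mask[i / 64] |= (uint64_t)1 << (i % 64);  /* instance enabled */
    }
}

/* Runtime side: spawn an instance for every set bit. The run-length
 * encoding described in the post would be layered on top of this. */
int instance_enabled(const uint64_t mask[NUM_POINTS / 64], int i)
{
    return (int)((mask[i / 64] >> (i % 64)) & 1);
}

Because each layer's result is one bit per (spatially sorted) point, the per-tile data stays tiny even before the run-length encoding pass, which is what made the storage problem from the original prototype go away.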
2024-11-08T17:46:35
null
train
42,057,851
0xFACEFEED
2024-11-06T07:30:37
New images of Jupiter
null
https://www.missionjuno.swri.edu/junocam/processing?source=all&ob_from=2024-10-01&ob_to=2024-11-01&phases%5B%5D=PERIJOVE+66&perpage=16
401
65
[ 42061164, 42060292, 42061777, 42058138, 42058439, 42061299, 42058441, 42069665, 42058623, 42059051, 42061825, 42063478, 42058427, 42058314, 42069912, 42060832, 42060964, 42057852, 42060666, 42058849 ]
null
null
null
null
null
null
null
null
null
train
42,057,854
Geant4
2024-11-06T07:32:22
When will MacBook Air M4 come into the market?
null
null
7
3
[ 42069436, 42058101 ]
null
null
null
null
null
null
null
null
null
train
42,057,864
ngninja
2024-11-06T07:35:49
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,870
philonoist
2024-11-06T07:37:32
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,871
willemlaurentz
2024-11-06T07:37:52
Escape from Apple iCloud Photos using free software (2020)
null
https://willem.com/blog/2020-08-31_free-from-the-icloud-escaping-apple-photos/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,057,879
ElonChrist
2024-11-06T07:38:49
Donald Trump wins 2024 presidential election
null
https://www.msn.com/en-us/news/politics/donald-trump-wins-2024-presidential-election-defying-the-odds-again/ar-AA1tAQ94
28
1
[ 42064652 ]
null
null
no_article
null
null
null
null
2024-11-08T15:28:46
null
train
42,057,886
kailashahirwar
2024-11-06T07:40:08
FLUX.1-Dev LoRA Outfit Generator by TryOn Labs
null
https://huggingface.co/tryonlabs/FLUX.1-dev-LoRA-Outfit-Generator
2
0
[ 42057887 ]
null
null
null
null
null
null
null
null
null
train
42,057,903
noleary
2024-11-06T07:43:55
All the data can be yours: reverse engineering APIs
null
https://jero.zone/posts/reverse-engineering-apis
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,905
ilonamosh
2024-11-06T07:44:14
null
null
null
1
null
[ 42057906 ]
null
true
null
null
null
null
null
null
null
train
42,057,918
zaikunzhang
2024-11-06T07:47:42
null
null
null
5
null
null
null
true
null
null
null
null
null
null
null
train
42,057,924
haltingproblem
2024-11-06T07:48:55
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,935
jonasdn
2024-11-06T07:51:27
GStreamer udpsrc: Surprising effects of SO_REUSEADDR on Linux
null
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6948
4
0
null
null
null
no_error
udpsrc: Disable allocated port reuse for unicast (!6948) · Merge requests · GStreamer / gstreamer · GitLab
null
null
udpsrc: Disable allocated port reuse for unicast

The reuse property ends up setting the SO_REUSEADDR socket option for the UDP socket. This setting has surprising effects. On Linux systems the man page (socket(7)) states:

SO_REUSEADDR — Indicates that the rules used in validating addresses supplied in a bind(2) call should allow reuse of local addresses. For AF_INET sockets this means that a socket may bind, except when there is an active listening socket bound to the address.

But since UDP does not listen, this ends up meaning that when an ephemeral port is allocated (setting the port to 0) the kernel is free to reuse any other UDP port that has SO_REUSEADDR set. Tests checking the likelihood of port conflicts when using multiple udpsrc elements show conflicts starting to occur after ~100-300 udpsrc instances with port allocation enabled. See issue #3411 (closed) for more details.

Changing the default value of a property is not a small thing: we risk breaking applications that rely on the current default value. But since the effects of having reuse default to TRUE can also have damaging and hard-to-debug consequences, it might be worth considering. Having SO_REUSEADDR enabled for multicast might have some use cases, but for unicast with dynamic port allocation it does not make sense. When not using a multicast address we will disable port reuse if the port property is set to 0 (=allocate) and warn the user that we did so.

Closes #3411 (closed)
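The pattern the merge request describes can be illustrated with a minimal, hedged C sketch — not GStreamer's code, just plain POSIX sockets showing a bind to port 0 with SO_REUSEADDR optionally set, which is the combination that lets the kernel hand out an already-taken port.

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a UDP socket to a kernel-allocated (ephemeral) port. With reuse != 0,
 * SO_REUSEADDR is set first; on Linux the kernel may then allocate a port
 * that another SO_REUSEADDR UDP socket already bound — the conflict that
 * udpsrc now avoids for unicast. */
int bind_udp_ephemeral(int reuse)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    if (reuse)
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(0);         /* 0 = let the kernel pick */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    struct sockaddr_in bound;
    socklen_t len = sizeof(bound);
    getsockname(fd, (struct sockaddr *)&bound, &len);
    printf("bound to UDP port %d (reuse=%d)\n", ntohs(bound.sin_port), reuse);
    return fd;
}

Creating a few hundred such sockets with reuse enabled is essentially the test described above, where duplicate port allocations start to appear; leaving reuse off when the port is 0 avoids the collision, which is what this merge request does for unicast.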
2024-11-08T08:31:59
en
train
42,057,948
Elfener
2024-11-06T07:55:20
First-class Prompt Engineering with LLM-lang (This is a bad idea.) [video]
null
https://www.youtube.com/watch?v=ueGC3xVcDlc
7
0
null
null
null
null
null
null
null
null
null
null
train
42,057,964
tndata
2024-11-06T07:59:01
Pushed Authorization Requests (Par) in Asp.net Core 9
null
https://nestenius.se/net/pushed-authorization-requests-par-in-asp-net-core-9/
3
1
[ 42057965 ]
null
null
null
null
null
null
null
null
null
train
42,057,974
rudycables
2024-11-06T08:02:08
null
null
null
1
null
[ 42057975 ]
null
true
null
null
null
null
null
null
null
train
42,057,983
JumpCrisscross
2024-11-06T08:04:08
Baruch Plan
null
https://en.wikipedia.org/wiki/Baruch_Plan
6
0
null
null
null
no_error
Baruch Plan
2005-04-19T02:56:43Z
Contributors to Wikimedia projects
From Wikipedia, the free encyclopedia The Baruch Plan was a proposal put forward by the United States government on 14 June 1946 to the United Nations Atomic Energy Commission (UNAEC) during its first meeting. Bernard Baruch wrote the bulk of the proposal, based on the March 1946 Acheson–Lilienthal Report. (The United States, Great Britain and Canada had called for an international organization to regulate the use of atomic energy, and President Truman responded by asking Undersecretary of State Dean Acheson and David E. Lilienthal to draw up a plan.) The Soviet Union, fearing the plan would preserve the American nuclear monopoly, declined in December 1946 in the United Nations Security Council to endorse Baruch's version of the proposal,[1] and the Cold War phase of the nuclear arms race followed. In the Plan, the US agreed to decommission all of its atomic weapons and transfer nuclear technology on the condition that all other countries pledged not to produce atomic weapons and agreed to an adequate system of inspection, including monitoring, policing, and sanctions. The Plan also proposed to internationalize fission energy via an International Atomic Development Authority, which would exercise a monopoly of mining uranium and thorium, refining the ores, owning materials, and constructing and operating nuclear plants. This Authority would fall under the United Nations Atomic Energy Commission.[2] In short, the plan proposed to:[3] extend between all countries the exchange of basic scientific information for peaceful conclusions; implement control of nuclear power to the extent necessary to ensure its use only for peaceful purposes; eliminate from national armaments atomic weapons and all other major weapons adaptable to mass destruction; and establish effective safeguards by way of inspection and other means to protect complying States against the hazards of violations and evasions. In presenting his plan to the United Nations, Baruch stated:[4]We are here to make a choice between the quick and the dead. That is our business. Behind the black portent of the new atomic age lies a hope which, seized upon with faith, can work our salvation. If we fail, then we have damned every man to be the slave of fear. Let us not deceive ourselves; we must elect world peace or (elect) world destruction. The Soviets rejected the Baruch Plan and suggested a counter-proposal on the grounds that the United Nations was dominated by the United States and its allies in Western Europe, and could therefore not be trusted to exercise authority over atomic weaponry in an evenhanded manner. Nationalist China, a UN Security Council member with veto privileges, was anti-communist and aligned with the US at this time. The USSR counter-proposal insisted that America eliminate its own nuclear weapons first before considering any proposals for a system of controls and inspections.[5][6][2] Although the Soviets showed further interest in the cause of arms control after they became a nuclear power in 1949, and particularly after the death of Stalin in 1953, the issue of the Soviet Union submitting to international inspection was always a thorny one, upon which many attempts at nuclear arms control stalled. Crucially, the Baruch Plan suggested that none of the permanent members of the United Nations Security Council would be able to veto a decision to punish culprits. 
Because of the difficulties of monitoring and policing, as well as Stalin's ambition to develop atomic weapons, the Plan was not seriously advanced after the end of 1947, although negotiations over it and the Soviet counter-proposal continued in the UNAEC until 1948. Throughout the negotiations, the USSR was fast-tracking its own atomic bomb project, and the United States was continuing its own weapons development and production. With the failure of the Plan, both nations embarked on accelerated programs of weapons development, innovation, production, and testing as part of the overall nuclear arms race of the Cold War.[2]

Bertrand Russell urged control of nuclear weapons in the 1940s and early 1950s to avoid the likelihood of a general nuclear war, and initially felt hopeful when the Baruch Proposal was made. In late 1948 he suggested that "the remedy might be the threat of immediate war by the United States on Russia for the purpose of forcing nuclear disarmament on her." He later thought less well of the Baruch Proposal because "Congress insisted upon the insertion of clauses which it was known that the Russians would not accept."[7] In his 1961 book Has Man a Future?, Russell described the Baruch Plan as follows: "The United States Government... did attempt... to give effect to some of the ideas which the atomic scientists had suggested. In 1946, it presented to the world what is now called 'The Baruch Plan', which had very great merits and showed considerable generosity, when it is remembered that America still had an unbroken nuclear monopoly... Unfortunately, there were features of the Baruch Proposal which Russia found unacceptable, as, indeed, was to be expected. It was Stalin's Russia, flushed with pride in the victory over the Germans, suspicious (not without reason) of the Western Powers, and aware that in the United Nations it could almost always be outvoted."[8]

Historical significance: Scholars such as David S. Painter, Melvyn Leffler, and James Carroll have questioned whether the Baruch Plan was a legitimate effort to achieve global cooperation on nuclear control.[2][9][10] The Baruch Plan is often cited as a pivotal moment in works promoting the internationalization of nuclear power[11] or revisiting nuclear arms control.[12][6] In his 2014 book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom cited the Baruch Plan as part of an argument that a future power possessing superintelligence, once it had obtained a sufficient strategic advantage, would employ it to establish a benign "singleton", a form of global unity.[13]:89

See also: Acheson–Lilienthal Report; Atoms for Peace; Cold War; International Thermonuclear Experimental Reactor (ITER); Nuclear arms race; Russell–Einstein Manifesto; Science diplomacy; United Nations Atomic Energy Commission (UNAEC).

References:
[1] Painter, David S. (25 March 2010). "Oil, resources, and the Cold War". In Leffler, Melvyn P.; Westad, Odd Arne (eds.). The Cambridge History of the Cold War. Vol. 1: Origins (reprint ed.). Cambridge: Cambridge University Press. p. 487. ISBN 9780521837194. Retrieved 9 May 2023. "Aware of the Baruch Plan's implications, the Soviets blocked its adoption by the United Nations Security Council in December 1946."
[2] Gerber, Larry G. (1982). "The Baruch Plan and the Origins of the Cold War". Diplomatic History. 6 (4): 69–96. doi:10.1111/j.1467-7709.1982.tb00792.x. ISSN 1467-7709.
[3] Rumble, Greville (1985). The Politics of Nuclear Defence – A Comprehensive Introduction (1st ed.). Cambridge: Polity Press. pp. 285 (8–9, 219). ISBN 0-7456-0195-2.
[4] Williams, Joshua. "The Quick and the Dead". Carnegie International Non-Proliferation Conference, June 16, 2005.
[5] "Milestones: 1945–1952". Office of the Historian, history.state.gov. Retrieved 2020-09-05.
[6] Kearn, David W. (2010). "The Baruch Plan and the Quest for Atomic Disarmament". Diplomacy & Statecraft. 21 (1): 41–67. doi:10.1080/09592290903577742. ISSN 0959-2296. S2CID 154515687.
[7] Russell, Bertrand (1969). The Autobiography of Bertrand Russell: 1944–1967, Volume III. London: George Allen and Unwin. pp. 17, 18, 181. ISBN 978-0-04-921010-3.
[8] Russell, Bertrand (1961). Has Man a Future? London: Allen and Unwin. pp. 28–29.
[9] Painter, David S. (September 2007). "From Truman to Roosevelt Roundtable" (PDF). H-Diplo.
[10] Carroll, James (2007). House of War: The Pentagon and the Disastrous Rise of American Power. Houghton Mifflin Harcourt. pp. 120–121. ISBN 9780547526454.
[11] Nilsson, L. J. (1991). Safeguarding and Internationalizing Nuclear Power. OCLC 1068978033.
[12] Mackby, Jenifer (2016). "Still seeking, still fighting". The Nonproliferation Review. 23 (3–4): 261–286. doi:10.1080/10736700.2016.1290026. ISSN 1073-6700. S2CID 151383304.
[13] Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. ISBN 978-1-5012-2774-5. OCLC 1061147095.

Further reading: Chace, James. "Sharing the Atom Bomb." Foreign Affairs 75#1 (1996): 129–144. Hewlett, Richard G. and Oscar E. Anderson. A History of the United States Atomic Energy Commission: The New World, 1939–1946, Volume I. Pennsylvania State University Press, 1962. Mayers, David. "Destruction Repaired and Destruction Anticipated: United Nations Relief and Rehabilitation Administration (UNRRA), the Atomic Bomb, and US Policy 1944–6." International History Review 38#5 (2016): 961–983. Atomic Archive: The Baruch Plan. Holloway, David J. (2020). "The Soviet Union and the Baruch Plan."
2024-11-08T05:43:43
en
train
42,057,989
wmstack
2024-11-06T08:05:31
My local AI Voice Assistant (I replaced Alexa!!)
null
https://www.youtube.com/watch?v=XvbVePuP7NY
3
1
[ 42058861 ]
null
null
null
null
null
null
null
null
null
train
42,058,011
ngninja
2024-11-06T08:11:55
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,058,050
goodereader
2024-11-06T08:20:16
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,058,056
silangel
2024-11-06T08:22:04
null
null
null
1
null
[ 42058057 ]
null
true
null
null
null
null
null
null
null
train
42,058,077
impish9208
2024-11-06T08:27:04
The changes in vibes – why did they happen?
null
https://marginalrevolution.com/marginalrevolution/2024/07/the-changes-in-vibes-why-did-they-happen.html
3
1
[ 42059136 ]
null
null
null
null
null
null
null
null
null
train
42,058,091
broken_broken_
2024-11-06T08:30:26
Perhaps Rust Needs "Defer"
null
https://gaultier.github.io/blog/perhaps_rust_needs_defer.html
46
44
[ 42059142, 42059068, 42059407, 42060397, 42059111, 42060433, 42060641, 42059869, 42061340, 42062725, 42061436, 42061554, 42059241, 42059765, 42060781, 42060132 ]
null
null
null
null
null
null
null
null
null
train
42,058,092
nodeshiftcloud
2024-11-06T08:30:53
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,058,097
todsacerdoti
2024-11-06T08:31:42
Turing kicked us out of Heaven (2023)
null
https://buttondown.com/hillelwayne/archive/turing-kicked-us-out-of-heaven/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,058,115
dndndnd
2024-11-06T08:34:10
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,058,144
throwaway71271
2024-11-06T08:41:38
Google Claims World First as AI Finds 0-Day Security Vulnerability
null
https://www.forbes.com/sites/daveywinder/2024/11/05/google-claims-world-first-as-ai-finds-0-day-security-vulnerability/
3
0
null
null
null
no_error
Google Claims World First As AI Finds 0-Day Security Vulnerability
2024-11-05T06:55:15-05:00
Davey Winder
Image caption: Google's Big Sleep team uncovers a zero-day vulnerability using AI for the first time.

Update, Nov. 05, 2024: This story, originally published Nov. 04, now includes the results of research into the use of AI deepfakes.

An AI agent has discovered a previously unknown, zero-day, exploitable memory-safety vulnerability in widely used real-world software. It's the first example, at least to be made public, of such a find, according to Google's Project Zero and DeepMind, the forces behind Big Sleep, the large language model-assisted vulnerability agent that spotted the vulnerability. If you don't know what Project Zero is and have not been in awe of what it has achieved in the security space, then you simply have not been paying attention these last few years. These elite hackers and security researchers work relentlessly to uncover zero-day vulnerabilities in Google's products and beyond. The same accusation of lack of attention applies if you are unaware of DeepMind, Google's AI research labs. So when these two technological behemoths joined forces to create Big Sleep, they were bound to make waves.

Google Uses Large Language Model To Catch Zero-Day Vulnerability In Real-World Code

In a Nov. 1 announcement, Google's Project Zero blog confirmed that the Project Naptime large language model-assisted security vulnerability research framework has evolved into Big Sleep. This collaborative effort involving some of the very best ethical hackers, as part of Project Zero, and the very best AI researchers, as part of Google DeepMind, has developed a large language model-powered agent that can go out and uncover very real security vulnerabilities in widely used code. In the case of this world first, the Big Sleep team says it found "an exploitable stack buffer underflow in SQLite, a widely used open source database engine." The zero-day vulnerability was reported to the SQLite development team in October, which fixed it the same day. "We found this issue before it appeared in an official release," the Big Sleep team from Google said, "so SQLite users were not impacted."

AI Could Be The Future Of Fuzzing, The Google Big Sleep Team Says

Although you may not have heard the term fuzzing before, it's been part of the security research staple diet for decades now. Fuzzing relates to the use of random data to trigger errors in code. Although the use of fuzzing is widely accepted as an essential tool for those who look for vulnerabilities in code, hackers will readily admit it cannot find everything. "We need an approach that can help defenders to find the bugs that are difficult (or impossible) to find by fuzzing," the Big Sleep team said, adding that it hoped AI can fill the gap and find "vulnerabilities in software before it's even released," leaving little scope for attackers to strike. "Finding a vulnerability in a widely-used and well-fuzzed open-source project is an exciting result," the Google Big Sleep team said, but admitted the results are currently "highly experimental." At present, the Big Sleep agent is seen as being only as effective as a target-specific fuzzer. However, it's the near future that is looking bright. "This effort will lead to a significant advantage to defenders," Google's Big Sleep team said, "with the potential not only to find crashing test cases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future."

The Flip Side Of AI Is Seen In Deepfake Security Threats

While the Big Sleep news from Google is refreshing and important, as is that from a new RSA report looking at how AI can help with the push to get rid of passwords in 2025, the flip side of the AI security coin should always be considered as well. One such flip side is the use of deepfakes. I've already covered how Google support deepfakes have been used in an attack against a Gmail user, a report that went viral for all the right reasons. Now, a Forbes.com reader has got in touch to let me know about some research undertaken to gauge how the AI technology can be used to influence public opinion. Again, I covered this recently as the FBI issued a warning about a 2024 election voting video that was actually a fake backed by Russian distributors. The latest VPNRanks research is well worth reading in full, but here are a few handpicked statistics that certainly get the grey cells working. 50% of respondents have encountered deepfake videos online multiple times. 37.1% consider deepfakes an extremely serious threat to reputations, especially for creating fake videos of public figures or ordinary people. Concerns about deepfakes manipulating public opinion are high, with 74.3% extremely worried about potential misuse in political or social contexts. 65.7% believe a deepfake released during an election campaign would likely influence voters' opinions. 41.4% feel it's extremely important for social media platforms to immediately remove non-consensual deepfake content once reported. When it comes to predictions for 2025, global deepfake-related identity fraud attempts are forecast to reach 50,000, and in excess of 80% of global elections could be impacted by deepfake interference, threatening the integrity of democracy.
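The article above describes fuzzing only in one line, as "the use of random data to trigger errors in code." As a rough, hypothetical illustration of that idea (not Big Sleep's method, and unrelated to SQLite), the following minimal Python sketch throws random printable strings at an invented toy parser, parse_record, and reports any crash that is not the parser's expected, graceful rejection. All names and parameters here are made up for illustration.

import random
import string

def parse_record(data):
    # Hypothetical target: a toy parser standing in for the "widely used code"
    # a real fuzzer would be pointed at. It expects exactly three
    # comma-separated fields and rejects anything else.
    fields = data.split(",")
    if len(fields) != 3:
        raise ValueError("expected exactly three fields")
    return fields

def random_input(max_len=32):
    # "Random data to trigger errors in code": a printable string of random
    # length and content.
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(iterations=10000):
    # Hammer the target with random inputs; anything other than the expected
    # ValueError is a finding worth triaging.
    for i in range(iterations):
        data = random_input()
        try:
            parse_record(data)
        except ValueError:
            pass  # graceful rejection, not a bug
        except Exception as exc:
            print("iteration %d: unexpected crash on %r: %s" % (i, data, exc))

if __name__ == "__main__":
    fuzz()

Real-world fuzzers (coverage-guided tools, for example) are considerably smarter than this, typically mutating known-good inputs and tracking which code paths they reach, but the core generate-run-observe loop is the same.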
2024-11-08T21:06:23
en
train
42,058,148
Rowsana
2024-11-06T08:43:21
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,058,153
shivamrishi
2024-11-06T08:43:57
Just Launched: PostPocket – Effortlessly Save and Organize Your Online Content
null
https://apps.apple.com/au/app/postpocket/id6670723615
16
10
[ 42058154, 42060410, 42059538, 42059210 ]
null
null
no_error
PostPocket
null
null
Are you tired of losing track of important articles, social media posts, or favorite links? PostPocket is your ultimate content keeper, helping you save, organize, and access your favorite content with ease. Use PostPocket's powerful Share Extension to capture and categorize posts, clips, and collections, no need to open the app!

Key Features:

Social Bookmarking Made Easy: Save and organize posts, articles, links, and clips from across the web. Whether it's a must-read article, a recipe, or a favorite tweet, PostPocket acts as your go-to content organizer, keeping it all in one place.

Organize with Categories & Tags: Create custom categories and tags to neatly organize your content. With PostPocket, you can filter, sort, and find the bookmarks you need, whenever you need them.

Save for Later with Share Extension: Effortlessly save content on the go! PostPocket's share extension lets you capture and save any URL or link with a simple tap, making it easy to build your digital collections without leaving the app you're browsing.

Read Later, Anywhere: Never miss out on crucial content. Save and bookmark posts to read later and access them anytime, even offline. Perfect for busy readers who need to catch up on their content at their own pace.

Secure & Private: Your saved content and collections are stored safely and securely. PostPocket respects your privacy and keeps your data safe with the highest security standards.

Boost Your Productivity: PostPocket keeps you organized and on top of your favorite content. From articles, blogs, and clips to social media posts, manage it all in one place and never lose track of important information again.

Perfect for: article keepers who want to save must-read posts for later; social bookmarking fans who need an organized way to manage their links and collections; content organizers looking to build a personal archive of saved links and clips; and read-later enthusiasts who like to save social media posts and return to them later.

PostPocket makes it easy to manage all your bookmarks, links, and favorite posts. With intuitive organization features, tags, and the ability to save directly from your favorite apps, staying organized has never been simpler. Start saving, tagging, and organizing your favorite content today: download PostPocket, the ultimate content keeper for all your digital collections!

Privacy Policy: https://www.freeprivacypolicy.com/live/f74231f5-87c9-464d-a372-4d38190fa685
Terms of Use: https://www.apple.com/legal/internet-services/itunes/dev/stdeula/
2024-11-08T09:02:18
en
train
42,058,166
joebaf
2024-11-06T08:45:09
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,058,169
marketechy
2024-11-06T08:45:35
null
null
null
1
null
[ 42058170 ]
null
true
null
null
null
null
null
null
null
train
42,058,187
rowsanal
2024-11-06T08:48:16
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train