| id | by | time | title | text | url | score | descendants | kids | deleted | dead | scraping_error | scraped_title | scraped_published_at | scraped_byline | scraped_body | scraped_at | scraped_language | split |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,030,090 | Avalaxy | 2024-11-02T23:49:21 | Ask HN: Why are social media spam bots still a thing? | Specifically Instagram. After many years, I still get regular messages from spam bots, usually pretending to be women that are romantically interested. It seems trivial to ban (or even prevent) those accounts, from a technical perspective. Reporting them also doesn't seem to do much. Why is this still a thing? This should have been a solved problem years ago no? Are there, besides technical reasons, other reasons to keep spam bots alive on their platforms? | null | 6 | 5 | [42043041, 42030107, 42030481, 42030111, 42040176] | null | null | null | null | null | null | null | null | null | train |
42,030,092 | kjhughes | 2024-11-02T23:49:36 | What If AI Is Good for Hollywood? | null | https://www.nytimes.com/2024/11/01/magazine/ai-hollywood-movies-cgi.html | 2 | 1 | [42030116] | null | null | bot_blocked | nytimes.com | null | null | Please enable JS and disable any ad blocker | 2024-11-08T13:41:55 | null | train |
42,030,201 | dndndnd | 2024-11-03T00:12:14 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,206 | brandPlug | 2024-11-03T00:13:00 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,208 | kristianpaul | 2024-11-03T00:13:20 | Build Your Own Darknet | null | https://clan.lol/ | 3 | 1 | [42030993] | null | null | null | null | null | null | null | null | null | train |
42,030,233 | jasondavies | 2024-11-03T00:17:26 | Data movement bottlenecks to large-scale model training: Scaling past 1e28 FLOP | null | https://epochai.org/blog/data-movement-bottlenecks-scaling-past-1e28-flop | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,249 | edbltn | 2024-11-03T00:19:15 | Show HN: Show Me The Ballot – Compare election complexity across US counties | null | http://show-me-the-ballot.glitch.me/ | 2 | 1 | [42030277, 42030315] | null | null | missing_parsing | Show me the Ballot! | null | null |
About Show Me The Ballot
Show Me The Ballot is a project that won second prize at the Media Party Hackathon held on September 29, 2024 at the Brown Institute for Media Innovation at Columbia University. Our goal is to make voting information more accessible and transparent to citizens.
Export Data
Read Our Methodology
Created by:
Marie-France Han
Eric Bolton
Jeff Nickerson
Share Your Feedback
| 2024-11-07T07:19:13 | null | train |
42,030,250 | Gaishan | 2024-11-03T00:19:21 | A-12 Avenger II would've been America's first real 'stealth fighter' | null | https://www.sandboxx.us/news/the-a-12-avenger-ii-wouldve-been-americas-first-real-stealth-fighter/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,303 | thunderbong | 2024-11-03T00:29:34 | Natural Nuclear Fission Reactor | null | https://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor | 3 | 0 | [42030429] | null | null | no_error | Natural nuclear fission reactor | 2004-02-17T02:49:20Z | Contributors to Wikimedia projects |
From Wikipedia, the free encyclopedia
A natural nuclear fission reactor is a uranium deposit where self-sustaining nuclear chain reactions occur. The idea of a nuclear reactor existing in situ within an ore body moderated by groundwater was briefly explored by Paul Kuroda in 1956.[1] The existence of an extinct or fossil nuclear fission reactor, where self-sustaining nuclear reactions have occurred in the past, is established by analysis of isotope ratios of uranium and of the fission products (and the stable daughter nuclides of those fission products). The first such fossil reactor was discovered in 1972 in Oklo, Gabon, by researchers from the French Alternative Energies and Atomic Energy Commission (CEA) when chemists performing quality control for the French nuclear industry noticed sharp depletions of fissionable 235U in gaseous uranium made from Gabonese ore.
Oklo is the only location where this phenomenon is known to have occurred, and consists of 16 sites with patches of centimeter-sized ore layers. There, self-sustaining nuclear fission reactions are thought to have taken place approximately 1.7 billion years ago, during the Statherian period of the Paleoproterozoic. Fission in the ore at Oklo continued off and on for a few hundred thousand years and probably never exceeded 100 kW of thermal power.[2][3][4] Life on Earth at this time consisted largely of sea-bound algae and the first eukaryotes, living under a 2% oxygen atmosphere. However, even this meager oxygen was likely essential to the concentration of uranium into fissionable ore bodies, as uranium dissolves in water only in the presence of oxygen. Before the planetary-scale production of oxygen by the early photosynthesizers, groundwater-moderated natural nuclear reactors are not thought to have been possible.[4]
Discovery of the Oklo fossil reactors
In May 1972, at the Tricastin uranium enrichment site at Pierrelatte, France, routine mass spectrometry comparing UF6 samples from the Oklo mine showed a discrepancy in the amount of the 235U isotope. Where the usual concentrations of 235U were 0.72%, the Oklo samples showed only 0.60%. This was a significant difference—the samples bore 17% less 235U than expected.[5] This discrepancy required explanation, as all civilian uranium handling facilities must meticulously account for all fissionable isotopes to ensure that none are diverted into the construction of unsanctioned nuclear weapons. Further, as fissile material is the reason for mining uranium in the first place, the missing 17% was also of direct economic concern.
Geological situation in Gabon leading to natural nuclear fission reactors: nuclear reactor zones, sandstone, uranium ore layer, granite.
Thus the French Alternative Energies and Atomic Energy Commission (CEA) began an investigation. A series of measurements of the relative abundances of the two most significant isotopes of uranium mined at Oklo showed anomalous results compared to those obtained for uranium from other mines. Further investigations into this uranium deposit discovered uranium ore with a 235U concentration as low as 0.44% (almost 40% below the normal value). Subsequent examination of isotopes of fission products such as neodymium and ruthenium also showed anomalies, as described in more detail below. However, the trace radioisotope 234U did not deviate significantly in its concentration from other natural samples. Both depleted uranium and reprocessed uranium will usually have 234U concentrations significantly different from the secular equilibrium of 55 ppm 234U relative to 238U. This is due to 234U being enriched together with 235U and due to it being both consumed by neutron capture and produced from 235U by fast neutron induced (n,2n) reactions in nuclear reactors. In Oklo any possible deviation of 234U concentration present at the time the reactor was active would have long since decayed away. 236U must have also been present in higher than usual ratios during the time the reactor was operating, but due to its half-life of 2.348×10^7 years being almost two orders of magnitude shorter than the time elapsed since the reactor operated, it has decayed to roughly 1.4×10^−22 of its original value, essentially nothing and below the detection abilities of current equipment.
This loss in 235U is exactly what happens in a nuclear reactor. A possible explanation was that the uranium ore had operated as a natural fission reactor in the distant geological past. Other observations led to the same conclusion, and on 25 September 1972 the CEA announced their finding that self-sustaining nuclear chain reactions had occurred on Earth about 2 billion years ago. Later, other natural nuclear fission reactors were discovered in the region.[4]
| Nd | 143 | 144 | 145 | 146 | 148 | 150 |
|---|---|---|---|---|---|---|
| C/M | 0.99 | 1.00 | 1.00 | 1.01 | 0.98 | 1.06 |
Fission product isotope signatures
Isotope signatures of natural neodymium and fission product neodymium from 235U which had been subjected to thermal neutrons.
The neodymium found at Oklo has a different isotopic composition to that of natural neodymium: the latter contains 27% 142Nd, while that of Oklo contains less than 6%. The 142Nd is not produced by fission; the ore contains both fission-produced and natural neodymium. From this 142Nd content, we can subtract the natural neodymium and gain access to the isotopic composition of neodymium produced by the fission of 235U. The two isotopes 143Nd and 145Nd lead to the formation of 144Nd and 146Nd by neutron capture. This excess must be corrected (see above) to obtain agreement between this corrected isotopic composition and that deduced from fission yields.
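The subtraction described above can be sketched numerically. This is a sanity check (not from the original article), using only the two figures quoted in the text: fission of 235U produces essentially no 142Nd, so all measured 142Nd must come from the natural component.

```python
# Estimate the fraction of natural (non-fission) neodymium in the Oklo ore.
# Fission of U-235 yields essentially no Nd-142, so the measured Nd-142
# content directly sets the proportion of natural neodymium in the sample.
natural_nd142 = 0.27   # Nd-142 fraction in natural neodymium (from the text)
measured_nd142 = 0.06  # upper bound on Nd-142 fraction measured at Oklo

# If f is the fraction of natural Nd in the sample, then
#   measured_nd142 = f * natural_nd142  =>  f = measured_nd142 / natural_nd142
natural_fraction = measured_nd142 / natural_nd142
print(f"At most ~{natural_fraction:.0%} of the neodymium is natural;")
print(f"the remaining ~{1 - natural_fraction:.0%} is fission-produced.")
```

So roughly four-fifths of the neodymium in these samples is fission-produced, which is why the natural contribution can be subtracted with good precision.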
Isotope signatures of natural ruthenium and fission product ruthenium from 235U which had been subjected to thermal neutrons. The 100Mo (an extremely long-lived double beta emitter) has not had time to decay to 100Ru in more than trace quantities over the time since the reactors stopped working.
Similar investigations into the isotopic ratios of ruthenium at Oklo found a much higher 99Ru concentration than otherwise naturally occurring (27–30% vs. 12.7%). This anomaly could be explained by the decay of 99Tc to 99Ru. In the bar chart, the normal natural isotope signature of ruthenium is compared with that for fission product ruthenium which is the result of the fission of 235U with thermal neutrons. The fission ruthenium has a different isotope signature. The level of 100Ru in the fission product mixture is low because fission produces neutron rich isotopes which subsequently beta decay and 100Ru would only be produced in appreciable quantities by double beta decay of the very long-lived (half-life 7.1×10^18 years) molybdenum isotope 100Mo. On the timescale of when the reactors were in operation, very little (about 0.17 ppb) decay to 100Ru will have occurred. Other pathways of 100Ru production like neutron capture in 99Ru or 99Tc (quickly followed by beta decay) can only have occurred during high neutron flux and thus ceased when the fission chain reaction stopped.
The natural nuclear reactor at Oklo formed when a uranium-rich mineral deposit became inundated with groundwater, which could act as a moderator for the neutrons produced by nuclear fission. A chain reaction took place, producing heat that caused the groundwater to boil away; without a moderator that could slow the neutrons, however, the reaction slowed or stopped. The reactor thus had a negative void coefficient of reactivity, something employed as a safety mechanism in human-made light water reactors. After cooling of the mineral deposit, the water returned, and the reaction restarted, completing a full cycle every 3 hours. The fission reaction cycles continued for hundreds of thousands of years and ended when the ever-decreasing fissile materials, coupled with the build-up of neutron poisons, no longer could sustain a chain reaction.
Fission of uranium normally produces five known isotopes of the fission-product gas xenon; all five have been found trapped in the remnants of the natural reactor, in varying concentrations. The concentrations of xenon isotopes, found trapped in mineral formations 2 billion years later, make it possible to calculate the specific time intervals of reactor operation: approximately 30 minutes of criticality followed by 2 hours and 30 minutes of cooling down (exponentially decreasing residual decay heat) to complete a 3-hour cycle.[6] Xenon-135 is the strongest known neutron poison. However, it is not produced directly in appreciable amounts but rather as a decay product of iodine-135 (or one of its parent nuclides). Xenon-135 itself is unstable and decays to caesium-135 if not allowed to absorb neutrons. While caesium-135 is relatively long lived, all caesium-135 produced by the Oklo reactor has since decayed further to stable barium-135. Meanwhile, xenon-136, the product of neutron capture in xenon-135 decays extremely slowly via double beta decay and thus scientists were able to determine the neutronics of this reactor by calculations based on those isotope ratios almost two billion years after it stopped fissioning uranium.
Change of content of Uranium-235 in natural uranium; the content was 3.65% 2 billion years ago.
A key factor that made the reaction possible was that, at the time the reactor went critical 1.7 billion years ago, the fissile isotope 235U made up about 3.1% of the natural uranium, which is comparable to the amount used in some of today's reactors. (The remaining 96.9% was 238U and roughly 55 ppm 234U, neither of which is fissile by slow or moderated neutrons.) Because 235U has a shorter half-life than 238U, and thus decays more rapidly, the current abundance of 235U in natural uranium is only 0.72%. A natural nuclear reactor is therefore no longer possible on Earth without heavy water or graphite.[7]
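The historical 235U abundances quoted here can be checked from today's abundance and the two uranium half-lives. A minimal sketch (the half-lives of 703.8 Myr for 235U and 4.468 Gyr for 238U are standard reference values, not taken from this article):

```python
# Back-calculate the U-235 abundance at a time t in the past from today's value.
HALF_LIFE_U235 = 0.7038e9      # years (standard value)
HALF_LIFE_U238 = 4.468e9       # years (standard value)
RATIO_TODAY = 0.0072 / 0.9928  # U-235 / U-238 atom ratio today (~0.72% abundance)

def u235_abundance(years_ago: float) -> float:
    """U-235 fraction of (U-235 + U-238) at `years_ago`, ignoring trace U-234."""
    ratio = (RATIO_TODAY
             * 2 ** (years_ago / HALF_LIFE_U235)   # U-235 was more plentiful then
             / 2 ** (years_ago / HALF_LIFE_U238))  # U-238 decays too, but slower
    return ratio / (1.0 + ratio)

print(f"2.0 Gyr ago: {u235_abundance(2.0e9):.2%}")  # ~3.7%, matching the figure caption
print(f"1.7 Gyr ago: {u235_abundance(1.7e9):.2%}")  # ~2.9%, close to the ~3.1% in the text
print(f"today:       {u235_abundance(0.0):.2%}")
```

The 2-billion-year figure reproduces the 3.65% quoted in the figure caption, and the 1.7-billion-year figure comes out close to the roughly 3% enrichment the text describes at reactor startup.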
The Oklo uranium ore deposits are the only known sites in which natural nuclear reactors existed. Other rich uranium ore bodies would also have had sufficient uranium to support nuclear reactions at that time, but the combination of uranium, water, and physical conditions needed to support the chain reaction was unique, as far as is currently known, to the Oklo ore bodies. It is also possible that other natural nuclear fission reactors were once operating but have since been geologically disturbed so much as to be unrecognizable, possibly even "diluting" the uranium so far that the isotope ratio would no longer serve as a "fingerprint". Only a small part of the continental crust and no part of the oceanic crust reaches the age of the deposits at Oklo or an age during which isotope ratios of natural uranium would have allowed a self sustaining chain reaction with water as a moderator.
Another factor which probably contributed to the start of the Oklo natural nuclear reactor at 2 billion years, rather than earlier, was the increasing oxygen content in the Earth's atmosphere.[4] Uranium is naturally present in the rocks of the earth, and the abundance of fissile 235U was at least 3% at all times prior to reactor startup. Uranium is soluble in water only in the presence of oxygen.[citation needed] Therefore, increasing oxygen levels during the aging of the Earth may have allowed uranium to be dissolved and transported with groundwater to places where a high enough concentration could accumulate to form rich uranium ore bodies. Without the new aerobic environment available on Earth at the time, these concentrations probably could not have taken place.
It is estimated that nuclear reactions in the uranium in centimeter- to meter-sized veins consumed about five tons of 235U and elevated temperatures to a few hundred degrees Celsius.[4][8] Most of the non-volatile fission products and actinides have only moved centimeters in the veins during the last 2 billion years.[4] Studies have suggested this as a useful natural analogue for nuclear waste disposal.[9] The overall mass defect from the fission of five tons of 235U is about 4.6 kilograms (10 lb). Over its lifetime the reactor produced roughly 100 megatonnes of TNT (420 PJ) in thermal energy, including neutrinos. If one ignores fission of plutonium (which makes up roughly a third of fission events over the course of normal burnup in modern human-made light water reactors), then fission product yields amount to roughly 129 kilograms (284 lb) of technetium-99 (since decayed to ruthenium-99), 108 kilograms (238 lb) of zirconium-93 (since decayed to niobium-93), 198 kilograms (437 lb) of caesium-135 (since decayed to barium-135, but the real value is probably lower as its parent nuclide, xenon-135, is a strong neutron poison and will have absorbed neutrons before decaying to 135Cs in some cases), 28 kilograms (62 lb) of palladium-107 (since decayed to silver), 86 kilograms (190 lb) of strontium-90 (long since decayed to zirconium), and 185 kilograms (408 lb) of caesium-137 (long since decayed to barium).
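The energy figures in this paragraph are mutually consistent, which can be checked with E = mc². This is a quick check (not from the original article); the conversion of 1 megatonne TNT to 4.184 PJ is a standard value, not from the text.

```python
# Check that the quoted 4.6 kg mass defect matches ~100 Mt TNT (~420 PJ).
C = 2.998e8               # speed of light, m/s
MT_TNT_JOULES = 4.184e15  # joules per megatonne of TNT (standard conversion)

mass_defect_kg = 4.6      # from the text
energy_joules = mass_defect_kg * C**2  # E = m * c^2

print(f"E = {energy_joules:.2e} J "
      f"≈ {energy_joules / 1e15:.0f} PJ "
      f"≈ {energy_joules / MT_TNT_JOULES:.0f} Mt TNT")
```

This lands at roughly 4.1×10^17 J, i.e. close to the article's 420 PJ and about 100 megatonnes of TNT equivalent.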
Relation to the atomic fine-structure constant
The natural reactor of Oklo has been used to check if the atomic fine-structure constant α might have changed over the past 2 billion years. That is because α influences the rate of various nuclear reactions. For example, 149Sm captures a neutron to become 150Sm, and since the rate of neutron capture depends on the value of α, the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of α from 2 billion years ago.
Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies that α was the same too.[10][11][12]
Deep geological repository
Geology of Gabon
Mounana
^ Kuroda, P. K. (1956). "On the Nuclear Physical Stability of the Uranium Minerals" (PDF). Journal of Chemical Physics. 25 (4): 781–782, 1295–1296. Bibcode:1956JChPh..25..781K. doi:10.1063/1.1743058.
^ Meshik, A. P. (November 2005). "The Workings of an Ancient Nuclear Reactor". Scientific American. 293 (5): 82–86, 88, 90–91. Bibcode:2005SciAm.293e..82M. doi:10.1038/scientificamerican1105-82. PMID 16318030.
^ Mervin, Evelyn (July 13, 2011). "Nature's Nuclear Reactors: The 2-Billion-Year-Old Natural Fission Reactors in Gabon, Western Africa". blogs.scientificamerican.com. Retrieved July 7, 2017.
^ a b c d e f Gauthier-Lafaye, F.; Holliger, P.; Blanc, P.-L. (1996). "Natural fission reactors in the Franceville Basin, Gabon: a review of the conditions and results of a "critical event" in a geologic system". Geochimica et Cosmochimica Acta. 60 (23): 4831–4852. Bibcode:1996GeCoA..60.4831G. doi:10.1016/S0016-7037(96)00245-1.
^ Davis, E. D.; Gould, C. R.; Sharapov, E. I. (2014). "Oklo reactors and implications for nuclear science". International Journal of Modern Physics E. 23 (4): 1430007–236. arXiv:1404.4948. Bibcode:2014IJMPE..2330007D. doi:10.1142/S0218301314300070. ISSN 0218-3013. S2CID 118394767.
^ Meshik, A. P.; et al. (2004). "Record of Cycling Operation of the Natural Nuclear Reactor in the Oklo/Okelobondo Area in Gabon". Physical Review Letters. 93 (18): 182302. Bibcode:2004PhRvL..93r2302M. doi:10.1103/PhysRevLett.93.182302. PMID 15525157.
^ Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 1257. ISBN 978-0-08-037941-8.
^ De Laeter, J. R.; Rosman, K. J. R.; Smith, C. L. (1980). "The Oklo Natural Reactor: Cumulative Fission Yields and Retentivity of the Symmetric Mass Region Fission Products". Earth and Planetary Science Letters. 50 (1): 238–246. Bibcode:1980E&PSL..50..238D. doi:10.1016/0012-821X(80)90135-1.
^ Gauthier-Lafaye, F. (2002). "2 billion year old natural analogs for nuclear waste disposal: the natural nuclear fission reactors in Gabon (Africa)". Comptes Rendus Physique. 3 (7–8): 839–849. Bibcode:2002CRPhy...3..839G. doi:10.1016/S1631-0705(02)01351-8.
^ New Scientist: Oklo Reactor and fine-structure value. June 30, 2004.
^ Petrov, Yu. V.; Nazarov, A. I.; Onegin, M. S.; Sakhnovsky, E. G. (2006). "Natural nuclear reactor at Oklo and variation of fundamental constants: Computation of neutronics of a fresh core". Physical Review C. 74 (6): 064610. arXiv:hep-ph/0506186. Bibcode:2006PhRvC..74f4610P. doi:10.1103/PHYSREVC.74.064610. S2CID 118272311.
^ Davis, Edward D.; Hamdan, Leila (2015). "Reappraisal of the limit on the variation in α implied by the Oklo natural fission reactors". Physical Review C. 92 (1): 014319. arXiv:1503.06011. Bibcode:2015PhRvC..92a4319D. doi:10.1103/physrevc.92.014319. S2CID 119227720.
Bentridi, S.E.; Gall, B.; Gauthier-Lafaye, F.; Seghour, A.; Medjadi, D. (2011). "Génèse et évolution des réacteurs naturels d'Oklo" [Inception and evolution of Oklo natural nuclear reactors]. Comptes Rendus Geoscience (in French). 343 (11–12): 738–748. Bibcode:2011CRGeo.343..738B. doi:10.1016/j.crte.2011.09.008.
The natural nuclear reactor at Oklo: A comparison with modern nuclear reactors, Radiation Information Network, April 2005
Oklo Fossil Reactors
NASA Astronomy Picture of the Day: NASA, Oklo, Fossile Reactor, Zone 15 (16 October 2002)
Nature's Nuclear Reactor (in Hebrew)
| 2024-11-07T08:31:02 | en | train |
42,030,316 | thunderbong | 2024-11-03T00:33:13 | The Salema Porgy Is the Fish That Can Give You LSD-Like Trips | null | https://allthatsinteresting.com/salema-porgy | 4 | 0 | [42033575] | null | null | null | null | null | null | null | null | null | train |
42,030,325 | Maxamillion96 | 2024-11-03T00:35:42 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,328 | paulpauper | 2024-11-03T00:36:00 | The Banality of Online Recommendation Culture | null | https://www.newyorker.com/culture/infinite-scroll/the-banality-of-online-recommendation-culture | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,341 | paulpauper | 2024-11-03T00:37:34 | Costco hearing aids as loss leader | null | https://www.wsj.com/tech/personal-tech/people-are-hooked-on-costco-hearing-aids-and-this-is-why-85bb5bab | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,363 | vetinari | 2024-11-03T00:43:48 | null | null | null | 2 | null | [42030396, 42030387, 42030430] | null | true | null | null | null | null | null | null | null | train |
42,030,364 | decryptlol | 2024-11-03T00:44:09 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,388 | Lord09 | 2024-11-03T00:50:15 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,389 | dtw45 | 2024-11-03T00:50:18 | null | null | null | 1 | null | [42030390] | null | true | null | null | null | null | null | null | null | train |
42,030,435 | LorenDB | 2024-11-03T00:59:41 | Eefing | null | https://en.wikipedia.org/wiki/Eefing | 4 | 0 | [42030440] | null | null | no_error | Eefing | 2006-03-20T05:50:23Z | Contributors to Wikimedia projects |
From Wikipedia, the free encyclopedia
Eefing (also written eeephing, eephing, eeefing, eefin,[1] or eefn'[2]) is an Appalachian (United States) vocal technique similar to beatboxing, but nearly a century older. NPR's Jennifer Sharpe describes it as "a kind of hiccupping, rhythmic wheeze that started in rural Tennessee more than 100 years ago."[3]
An eefing piece called "Swamp Root" was one of the first singles recorded and released by Sam Phillips. Singer Joe Perkins had a minor 1963 hit, "Little Eeefin' Annie" (#76 on the Billboard chart), featuring eefer Jimmy Riddle, whom Sharpe calls "the acknowledged master of the genre". Riddle later brought eefing to national visibility on the television series Hee Haw.[3]
In fall 1963, the same time as Perkins' "Little Eefin' Annie" was released, a group called the Ardells issued a single on Epic called "Eefenanny", a sort of bluegrass/hillbilly spoof on the folk hootenanny movement. It was not as big a hit. Also in 1963, Alvin and the Chipmunks released an original song entitled "Eefin' Alvin" where the boys attempt eefing.
Another early eefing record was released in 1963 on the Philadelphia label Guyden Records #2096 by the Goodlettsville Five: "Eef" b/w "Bailey's Gone Eefin" - a version of "Won't You Come Home Bill Bailey". The group appears to have been session musicians assembled by the credited composer/producer Jerry Kennedy.
The song "Hillbilly Beatbox" by The Evolution Control Committee prominently features eefing recordings.[4]
Vocal hiccup
^ eefin Archived 2006-12-05 at the Wayback Machine, What the heck is an eef?. Accessed January 7, 2007.
^ eefn', Third Level Digression (blog). Accessed March 20, 2006.
^ a b Sharpe 2005
^ "Hillbilly beatboxing" by ECC (page on SoundCloud)
Jennifer Sharpe, Jimmie Riddle and the Lost Art of Eephing, National Public Radio, March 13, 2006. Accessed March 20, 2006. Includes audio in RealAudio and Windows Media Player formats.
Joe Perkins' Little Eeefin' Annie page
| 2024-11-08T08:18:07 | en | train |
42,030,463 | wumeow | 2024-11-03T01:06:37 | Security flaws found in Nvidia GeForce GPUs | null | https://www.pcworld.com/article/2504035/security-flaws-found-in-all-nvidia-geforce-gpus-update-drivers-asap.html | 216 | 147 | [42031345, 42030994, 42030658, 42030922, 42031840, 42030743, 42031319, 42032740, 42031959, 42030950, 42031363, 42031423, 42032944, 42031346, 42033658, 42034024, 42030722, 42032687, 42034951, 42031438] | null | null | null | null | null | null | null | null | null | train |
42,030,474 | null | 2024-11-03T01:09:02 | null | null | null | null | null | null | ["true"] | null | null | null | null | null | null | null | null | train |
42,030,499 | fhcxvbdb | 2024-11-03T01:15:41 | Darkmode Overleaf's PDF viewer using userscript | null | https://www.physicslog.com/thought/2024/10/darkmode-overleaf/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,534 | mooreds | 2024-11-03T01:24:06 | Enterprise Roshambo (2021) | null | https://monkeynoodle.org/2021/12/19/enterprise-roshambo/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,535 | herronkai1 | 2024-11-03T01:24:17 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,576 | dndndnd | 2024-11-03T01:37:23 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,611 | mooreds | 2024-11-03T01:44:05 | That Which Is Seen, and That Which Is Not Seen (1850) | null | http://bastiat.org/en/twisatwins.html | 3 | 0 | null | null | null | body_too_long | null | null | null | null | 2024-11-07T23:29:09 | null | train |
42,030,615 | softwaredoug | 2024-11-03T01:44:41 | The evolutionary mystery of the German cockroach | null | https://johnhawks.net/weblog/the-mystery-of-the-german-cockroach/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,625 | todsacerdoti | 2024-11-03T01:47:23 | Ractor – a Rust Actor Framework | null | https://slawlor.github.io/ractor/quickstart/ | 149 | 75 | [42033711, 42033604, 42034539, 42032928, 42031403, 42035893, 42032459, 42033105, 42031752, 42034610, 42031706, 42031466, 42033946, 42032078, 42034047, 42031546] | null | null | no_error | Quickstart | null | null |
Some notations to keep in mind
While working through this quickstart, there are a few notations we want to clarify for readers.
Messaging actors
Since we’re trying to model as best we can around Erlang’s practices, message sends in
Ractor can occur in 2 ways, first-and-forget and waiting on a reply. Their notations however follow the Erlang naming schemes of “cast” and “call”
respectively.
Installation
Install ractor by adding the following to your Cargo.toml dependencies
```toml
[dependencies]
ractor = "0.9"
```
Your first actor
We have to, of course, start with the iconic “Hello world” sample. We want to build an actor
that’s going to print “Hello world” for every message sent to it. Let’s begin by defining our
actor and filling in the necessary bits. We'll start with our message definition:
```rust
pub enum MyFirstActorMessage {
    /// Prints hello world
    PrintHelloWorld,
}
```
Then we follow up with the most basic required actor definition
```rust
use ractor::{Actor, ActorRef, ActorProcessingErr};

pub struct MyFirstActor;

#[async_trait::async_trait]
impl Actor for MyFirstActor {
    type State = ();
    type Msg = MyFirstActorMessage;
    type Arguments = ();

    async fn pre_start(
        &self,
        _myself: ActorRef<Self::Msg>,
        _arguments: Self::Arguments,
    ) -> Result<Self::State, ActorProcessingErr> {
        Ok(())
    }
}
```
Let’s break down what we’re doing here, firstly we need our actor’s struct-type which we’re calling MyFirstActor.
We are then defining our Actor behavior, which minimally needs to define three types
State - The “state” of the actor, for stateless actors this can be simply () denoting that the actor has no mutable state
Msg - The actor’s message type.
Arguments - Startup arguments which are consumed by pre_start in order to construct initial state. This is helpful for say a
TCP actor which is spawned from a TCP listener actor. The listener needs to pass the owned stream to the new actor, and Arguments is
there to facilitate that so the other actor can properly build it’s state without clone()ing structs with potential side effects.
Lastly we are defining the actor’s startup routine in pre_start which emits the initial state of the actor upon success. Once this
is run, your actor is alive and healthy just waiting for messages to be received!
Well that’s all fine and dandy, but how is this going to print hello world?! Well we haven’t defined that bit yet, we need to
wire up a message handler. Let’s do that!
```rust
#[async_trait::async_trait]
impl Actor for MyFirstActor {
    type State = ();
    type Msg = MyFirstActorMessage;
    type Arguments = ();

    async fn pre_start(
        &self,
        _myself: ActorRef<Self::Msg>,
        _arguments: Self::Arguments,
    ) -> Result<Self::State, ActorProcessingErr> {
        Ok(())
    }

    async fn handle(
        &self,
        _myself: ActorRef<Self::Msg>,
        message: Self::Msg,
        _state: &mut Self::State,
    ) -> Result<(), ActorProcessingErr> {
        match message {
            MyFirstActorMessage::PrintHelloWorld => {
                println!("Hello world!");
            }
        }
        Ok(())
    }
}
```
Ok, now that looks better! Here we've added the `handle()` message handler method, which will be executed for every message received in
the queue.
All together now
Let’s wire it all up into a proper program now.
```rust
#[tokio::main]
async fn main() {
    // Build an ActorRef along with a JoinHandle which lives for the life of the
    // actor. Most of the time we drop this handle, but it's handy in the
    // main function to wait for clean actor shut-downs (all stop handlers will
    // have completed)
    let (actor, actor_handle) = Actor::spawn(None, MyFirstActor, ())
        .await
        .expect("Actor failed to start");

    for _i in 0..10 {
        // Sends a message, with no reply
        actor
            .cast(MyFirstActorMessage::PrintHelloWorld)
            .expect("Failed to send message to actor");
    }

    // give a little time to print out all the messages
    tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;

    // Cleanup
    actor.stop(None);
    actor_handle.await.unwrap();
}
```
Adding State
Now what if we wanted to ask the actor for some information, like the number of hello-worlds
it has printed thus far in its lifecycle? Let's see what that might look like.
```rust
use ractor::{Actor, ActorRef, ActorProcessingErr, RpcReplyPort};

pub enum MyFirstActorMessage {
    /// Prints hello world
    PrintHelloWorld,
    /// Replies with how many hello worlds have occurred
    HowManyHelloWorlds(RpcReplyPort<u16>),
}

pub struct MyFirstActor;

#[async_trait::async_trait]
impl Actor for MyFirstActor {
    type State = u16;
    type Msg = MyFirstActorMessage;
    type Arguments = ();

    async fn pre_start(
        &self,
        _myself: ActorRef<Self::Msg>,
        _arguments: Self::Arguments,
    ) -> Result<Self::State, ActorProcessingErr> {
        Ok(0)
    }

    async fn handle(
        &self,
        _myself: ActorRef<Self::Msg>,
        message: Self::Msg,
        state: &mut Self::State,
    ) -> Result<(), ActorProcessingErr> {
        match message {
            MyFirstActorMessage::PrintHelloWorld => {
                println!("Hello world!");
                *state += 1;
            }
            MyFirstActorMessage::HowManyHelloWorlds(reply) => {
                if reply.send(*state).is_err() {
                    println!("Listener dropped their port before we could reply");
                }
            }
        }
        Ok(())
    }
}
```
There’s a bit to unpack here, so let’s start with the basics.
- We changed the type of Actor::State to u16 so that the actor can maintain some internal state: the count of the number of times it has printed "Hello world".
- We changed the hello-world message handling to increment the state every time it prints.
- We added a new message type, MyFirstActorMessage::HowManyHelloWorlds, which has an argument of type RpcReplyPort. This is one of the primary ways actors inter-communicate: via remote procedure calls. The call is a message which provides the response channel (the "port") as an argument, so the receiver doesn't need to know who asked. We'll look at how we construct this in a bit.
- We added a handler match arm for this message type, which sends the reply back when requested.
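To make the reply-port idea concrete, here is a rough, ractor-free sketch using only the standard library: the "actor" is a thread draining an mpsc channel, and the RPC message carries its own reply sender (the "port"), so the handler never needs to know who asked. All names here are illustrative and not part of ractor's API — ractor's real machinery is async and richer than this.

```rust
use std::sync::mpsc;
use std::thread;

// Message enum mirroring the tutorial: one fire-and-forget variant,
// one RPC variant that carries its own reply channel ("port").
enum Msg {
    PrintHelloWorld,
    HowManyHelloWorlds(mpsc::Sender<u16>),
}

fn spawn_actor() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel::<Msg>();
    thread::spawn(move || {
        let mut count: u16 = 0; // the actor's private state
        for msg in rx {
            match msg {
                Msg::PrintHelloWorld => {
                    println!("Hello world!");
                    count += 1;
                }
                Msg::HowManyHelloWorlds(reply) => {
                    // If the caller dropped their port, just ignore the error.
                    let _ = reply.send(count);
                }
            }
        }
    });
    tx
}

fn main() {
    let actor = spawn_actor();
    for _ in 0..10 {
        actor.send(Msg::PrintHelloWorld).expect("actor is gone");
    }
    // Build the "reply port" ourselves -- this is what ractor's RPC helpers automate.
    let (reply_tx, reply_rx) = mpsc::channel();
    actor
        .send(Msg::HowManyHelloWorlds(reply_tx))
        .expect("actor is gone");
    let count = reply_rx.recv().expect("actor dropped the reply");
    println!("Actor replied with {} hello worlds!", count);
    assert_eq!(count, 10);
}
```

Because the channel delivers messages in order, the query is only handled after all ten prints, so the reply is deterministic.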
Running a stateful sample
Very similar to the non-stateful example, we’ll wire it up as such!
#[tokio::main]
async fn main() {
// Build an ActorRef along with a JoinHandle which lives for the life of the
// actor. Most of the time we drop this handle, but it's handy in the
// main function to wait for clean actor shut-downs (all stop handlers will
// have completed)
let (actor, actor_handle) =
Actor::spawn(None, MyFirstActor, ())
.await
.expect("Actor failed to start");
for _i in 0..10 {
// Sends a message, with no reply
actor.cast(MyFirstActorMessage::PrintHelloWorld)
.expect("Failed to send message to actor");
}
let hello_world_count =
ractor::call_t!(actor, MyFirstActorMessage::HowManyHelloWorlds, 100)
.expect("RPC failed");
println!("Actor replied with {} hello worlds!", hello_world_count);
// Cleanup
actor.stop(None);
actor_handle.await.unwrap();
}
WHOA, what is call_t!?! That's a handy macro which constructs our RPC call for us! There are three macro variants that ease actor messaging:
- cast! - alias of actor.cast(MESG); simply sends a message to the actor, non-blocking.
- call! - alias of actor.call(|reply| MESG(reply)), which builds the message for us so we don't have to write the lambda that takes the reply port as an argument and constructs the message type. We also don't need to build and await the port ourselves; the RPC functionality does that for us.
- call_t! - same as call!, but with a timeout argument.
Check out the docs.rs documentation on RPCs for more detailed information on these macros.
In this brief example, we're sending our actor 10 messages, then sending a final query message to read
the current count and print it. We additionally give the query 100ms to execute (hence the use of call_t!) before it
returns a timeout result.
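The timeout in call_t! behaves like a bounded wait on the reply port. Here's a rough stdlib-only analogue of that semantics — none of this is ractor's real (async) implementation, and all the names are made up for illustration:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Simulates an actor that takes `delay_ms` to produce its reply.
fn slow_responder(reply: mpsc::Sender<u16>, delay_ms: u64) {
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(delay_ms));
        let _ = reply.send(42);
    });
}

// A toy "RPC with timeout": wait at most `timeout_ms` on the reply channel,
// mirroring what call_t!(actor, Msg, timeout) does for a ractor actor.
fn call_with_timeout(delay_ms: u64, timeout_ms: u64) -> Result<u16, mpsc::RecvTimeoutError> {
    let (tx, rx) = mpsc::channel();
    slow_responder(tx, delay_ms);
    rx.recv_timeout(Duration::from_millis(timeout_ms))
}

fn main() {
    // Fast reply: well inside the budget, so we get the value back.
    assert_eq!(call_with_timeout(5, 200), Ok(42));
    // Slow reply: the caller gets a timeout error instead of blocking forever.
    assert!(call_with_timeout(500, 50).is_err());
    println!("timeout semantics demonstrated");
}
```

The point of the timeout is the second case: a hung or overloaded actor turns into an error result at the call site, rather than a caller blocked indefinitely.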
Tossed Salads And Scrumbled Eggs — Ludicity
https://ludic.mataroa.blog/blog/tossed-salads-and-scrumbled-eggs/
Published on September 19, 2024
With the decision to focus on our consultancy full-time in 2025, my time as an employee draws to a close. My attitude has become that of an entomologist observing a strange insect which can miraculously diagnose issues, but only if the diagnosis is "you weren't Agile enough". My rage has quickly morphed into relief because this is, broadly speaking, the competition.
'A beetle clock?' she said. She had turned away from the glass dome.
'Oh, er, yes... The Hershebian lawyer beetle has a very consistent daily routine,' said Jeremy. 'I, er, only keep it for, um, interest.'
While our team is now blessedly free of both the madness of corporate dysfunction and the grotesque world of VC-funded enterprise, we must still interface with such organizations, ideally in as pleasant a manner as is possible for both parties. But there is still one question from my soon-to-be old life that bears pondering, which I must understand if I am to overcome its terrible implications.
What the fuck is going on with all those dweebs talking about Scrum all day?
I.
I would rather cover myself in paper cuts and jump into a pool of lemon juice than attend one more standup where Amateur Alice and Blundering Bob pat each other on the back for doing absolutely fucking nothing... except using the learning stipend to get "Scrum Master Certified" on LinkedIn or whatever.
— A reader
You may be surprised to hear, given my previous writing, that I am more-or-less ambivalent about the specifics of Scrum. The inevitable protestations that "Scrum works well for my team" whenever it comes up are both tedious and very much beside the point.
The reason that these protestations are tedious is that, in the environment we find ourselves navigating, these are meaningless statements without context. Most people in management roles are executing the Agile vision in totally incompetent fashion, usually conflating Scrum with Agile. They also think that it's going just swimmingly, when a more accurate characterization would be drowningly. Given that you do not know who you are speaking to over the internet, whether they are competent engineers or self-proclaimed thought leaders, any statement about Agile "working for my team" does not convey much information, in the same way that someone proclaiming that they are totally not guilty is generally an insufficient defense in court.
The reason that they are beside the point is that the specifics of Scrum are much, much less interesting than what we can infer from the malformed version of the practice that we see spreading throughout the industry. I believe there are issues with Scrum, but those issues simply do not explain the breathtaking dysfunction in the industry writ large. Instead I believe that Scrum and the assorted mutations that it has acquired simply reflect a broader lack of understanding of the systems that drive knowledge work, and the industry has simply adopted the methodology that slots most neatly into our most widely-held misconceptions.
II. Oh Baby, I Hear The Blues A-Callin'
As you can see, I am not a data engineer like yourself, but we share the deep belief that Scrum is complete and utter bullcrap.
— A reader
When I first entered the industry, I was at a large institution that had recently decided to become more Agile.
It is worth taking the time to explain what this means for the non-technicians in the audience, both so that they can follow what is going on, and so that the technicians here can develop an appreciation for how fucking nuts this all sounds when you explain it to someone with some distance. Most of us are so deeply immersed in this lunacy that we no longer have full context on how bizarre this all is.
To begin with, "Agile" is a term for a massive industry around productivity related to software engineering. Astute adults, with no further context, will see the phrase "industry around productivity" and become appropriately alarmed. The industry is replete with Agile consultants, Agile coaches, Agile gurus, and Agile thought leaders. Note that these are all the same thing at different points on the narcissism axis. Agile is actually a philosophy with no concrete implementation details, so there are management methodologies that claim to be inspired by that broader philosophy. The most popular one is called Scrum.
As with any project management methodology, the dream goal with Scrum is for teams to work more quickly, to respond to changes in the business more rapidly, and to provide reliable estimates so that projects do not end up in dependency hell. This is typically accomplished through Jira. All-powerful Jira! All-knowing Jira! What is this miraculous Jira? It's a website that simulates a board of sticky notes!
That's it, that's the whole thing. When something needs to be done, you put it in there and communicate on the card. Well, all right, that doesn't sound so bad yet.
In any case, this is paired with a meeting that runs every morning, called a Stand-Up. It is supposed to run for approximately ten minutes, as one would expect for a meeting that's going to happen every day. Instead, every team I've seen running Scrum has this meeting go on for an hour. Yikes, yes, daily one hour meeting. And since orthodoxy in the modern business world is that a "silo" is bad[1], many people work on more than one team, so they attend two one hour meetings per day. That is a full 25% of an organization's total attention dedicated to the same meeting every day.
What on earth are you doing in daily one hour meetings?
Well, we discuss the cards.
Wait, I thought the whole point of Jira was so that all your notes are on the electronic cards?
You're asking too many questions, heretic. Guards, seize them!
Of course, while this is usually enough to provoke complete confusion when explained to people with enough distance from the field to retain their grasp on common sense, it gets worse. Prepare yourself for a brain-frying.
You typically don't just do work as it needs doing. In an effort to keep track of the team's commitments as time goes on, the team commits to Sprints, which basically means that you commit about two weeks worth of cards to the board, then only work on those cards on pain of haranguing. Sprints are usually arranged back-to-back with no breaks, and "sprinting" nonstop throughout the year is obviously a totally healthy choice of words which has definitely never driven anyone to burnout.
But to keep track of how much work is in each card, there is usually another meeting called Backlog Grooming, where the team sits around and estimates how much time each card is going to take. This is done by assigning Story Points to cards. What is a Story Point? Why, it's a number that is meant to represent complexity rather than time, because we know in the software world that time estimates are notoriously unreliable.
To make things even simpler, most teams actually still use them to mean time, enough so that there are all sorts of articles out there where people desperately try to explain to professionals in the industry that they shouldn't use the phrase "Story Points" incorrectly, even though knowing what one of the core phrases in the methodology means should be a given.
Okay, you're with me so far, right? Scrum is a project management methodology based on Agile, where you run daily Stand-Ups to reflect on how your Sprint is going, and the progress of your Sprint is measured by the number of Story Points you've completed in your cards, which may or may not be hours.
Fuck, wait, did I say cards? There are no cards, there are Stories and Tasks, and a long sequence of Stories and Tasks contributes to an Epic... wait, did I not explain what an Epic is? An Epic usually translates to some broader commitment to the business. Sorry, sorry, we'll try again.
So you do the Stand-Ups to evaluate how many Story Points you've completed in your Sprint — ah shit, wait, wait, I forgot. Okay, so the number of Story Points you've done in a Sprint is Velocity.
Yeah, right, so you want your Velocity to stay high.
So you run Backlog Grooming to produce Story Points for each of our Stories and Tasks, which are not time estimates except when they are, and then we try to estimate how many Story Points we can fit into a Sprint, which is explicitly a timespan of two weeks, again keeping in mind that Story Points are not time estimates, okay? If we do a good job, we'll have a high Velocity. And then we put that all into Jira, and you write down everything you're doing but then I also ask you about it every morning while I simultaneously try not to turn this into a "justify your last eight hours of work" ceremony.
Damn it, wait, wait, I forgot to tell you, these aren't meetings, okay? They're called Ceremonies, and the fact that I am demanding large swathes of people attend ceremonies against their will does not make this a cult.
Phew. Okay. Now, with all of that in mind, how many Story Points do you think it'll take you to update that API, Fred?
Four?
Fred, you dumb motherfucker, Story Points have to adhere to the Fibonacci sequence[2], you stupid idiot. Four? You mud-soaked yokel. You've disrespected me, this team, and most of all, yourself. Christ, Fred, you're better than this. Fucking four? I wouldn't let that garbage-ass number near my business. I don't understand why you're struggling with th—
III. None Of These Words Are In The Bible
As someone whose part of their job was to write end-user documentation at REDACTED about these exact things, you have my wholehearted, eye-twitching encouragement from this section alone.
— A reader reluctantly working in Scrum advocacy
I'm going to stop there for a second.
You may understand now why, when first confronted by Jira and the Agile Cult, I elected not to read anything about it for about a year. It was, after all, an organization-wide transformation being pushed by many people with big ol' beards and very serious names like Deloitte in their work histories. Repeated references were made to the Agile "manifesto" by non-engineers, which caused me to avoid reading anything about it. It was a whole manifesto, for which the only frames of reference I have are voluminous works by Marx and Kaczynski which I haven't read either. Surely these people had been employed because they had some formidable skillset that I was missing.
Imagine my befuddlement when I realized that this is the entirety of the manifesto:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right,
we value the items on the left more.
Congratulations! You're all Agile certified! I'm so proud of all of us. This certificate is going right on the fridge, alongside the prettiest macaroni art.
A few things here are striking.
The first is that I don't see any Proper Nouns at all. So all that Scrum-Velocity-Story-Point-Epic-Fibonacci stuff is very much some sort of philosophy emerging from a weird nerd-aligned religious schism.
The second thing is that the text is actually reasonable, and able to provoke meaningful discourse. For example, should individuals be contrasted with processes? Is taking the individual into account necessarily at odds with process? In some ways yes, in some ways no, but the authors are merely stating a rough preference for when the two come into conflict. And the final sentence walks back the preceding four, so this is hardly the incendiary foundation for the monolith of bullshit I just described.
So where on earth is all this Scrum stuff coming from? There's nothing in the original manifesto that would suggest that this is sensible, nor is there anything that would even begin to send people down this strange pathway.
For all the flaws with Scrum, you will find no support for this stark madness anywhere within an authoritative source on the topic. Yes, it comes with a thousand confusing names and questionable value, but it doesn't actually suggest people dedicate up to half their time to meetings. In fact, Scrum is mostly embraced in a manner which implies some of its most fervent advocates have failed to spend even a few minutes reading about their primary job functions. How else can we explain the prevalence of Story Points as time estimates, and the one hour meetings every morning?
And this is exactly why I don't view Scrum itself as particularly problematic. The fundamental issue, the one that is only moved by small degrees by project management methodologies, is that many, many people simply have totally unsophisticated ideas around how knowledge work functions.
IV. Anxiety[3] and Scrum Masters
Last week, someone was trying to bully me into estimating something. It took two hours of me saying "I can't estimate that, it has no end point, you don't understand what you're asking" for it to finally devolve into "Ok, you can't estimate it, I understand, but if you had to estimate it what would it be?"
I said fuck it, 8 hours for me to investigate and come back with an answer on how long the work would take... and they were happy with that. Nobody paused to consider it took 25% of my estimated time to have a meeting about why it was a dumb question.
— A reader
Work at large companies has a tendency to blow up, run far behind schedule, then ultimately limp past the finish line in a maimed state. One of my friends talks about how, when faced by his first failed project on a team, a management consultant responded to all critical self-reflection with "But you'd say that, overall, this was a success?" in a desperate bid to generate a misleading quote to put into a presentation to the board.
The core of this issue lies in the simple fact that time estimates will, with varying frequency based on domain and team skill, explode in spectacular fashion. We are not even talking about a task taking twice as long as initially estimated. I'm talking about missing deadlines by years. The software I mention in this blog post is now over ten years overdue. I am fairly certain that the majority of software projects collapse in this fashion, for reasons that would only fit into a post about Sturgeon's Law.
It is in the shadow of this threat that the Scrum Master lives. Yes, that's right, there are still exciting Important Words that we haven't introduced. The Scrum Master, who I usually call Scrum Lords because it's funnier, is some sort of weird role that's specialized entirely in managing the Jira board, providing Agile coaching, and generally doing ad hoc work for an engineering team. Keeping with the theme of Scrum being bad but people being even worse at implementing it as prescribed, they usually end up being the project manager as well. Atlassian's definition of a Scrum Master notes that one of their core roles is "Remove blockers: superhero time", which makes me want to passionately make out with a double-barreled shotgun. I can only assume that Scrum Masters feel deeply infantilized by this, and I am offended on their behalf.
They are generally very sad and stressed out, while simultaneously pissing off everyone around them. I can just punch any software YouTuber's name into the search bar along with "Scrum Master" and be assured that I can find someone sneering. Putting that brief meanness aside, I am actually very sympathetic. They are, after all, people, and I take the bold stance that I'd prefer people be happy and self-actualized.
All of this, with the boards and the Stories and the Epics, they're all mechanisms for trying to construct some terrible fractal of estimation that will mystically transmute the act of software engineering into the act of bricklaying. And I'm guessing that bricklaying is also way more complicated than it looks, so this still wouldn't improve matters much even if it worked. This is further complicated by the fact that most Scrum Masters have either no understanding of the work under consideration, or have learned enough merely to be dangerous[4]. This puts them into an impossible position.
If companies are going to pay outsized compensation to perform a job that simply requires a degree and a willingness to endure tedium, I can hardly fault someone for taking that deal. Even the Atlassian definition of a Scrum Master notes that technical knowledge is "not mandatory", so who can blame them for not having technical knowledge? And once you're in that position, you have now become the shrieking avatar of the latent anxiety in the business. All projects are default dead barring exceptional talent, but this level of realism would fail to extract funding from the business, even if cool analysis reveals that the failure chance is still worth the risk.
The Scrum Master is thus reduced to a tragic figure. They worry about losing their overpaid role, are not developing skills that are easily packaged when pitching themselves to other businesses, and feel responsible for far too much inside a project. Yet they do not have the knowledge or the power to debug the machine that is the team, even if they are well-intentioned and otherwise talented.
Bad actors can more-or-less get away with saying anything to avoid doing work, because the truth is that only an engineer can tell when another engineer is making things up, which is precisely why we all live in fear of sketchy mechanics overcharging us for vehicle repairs. Even if someone is suspected as malingering, the Scrum Master is unable to initiate termination procedures, and will probably have to trust their gut to a degree that is unpalatable for most people if they want to escalate issues.
If the project is running late, they have no recourse other than to ask the engineers to re-prioritize work, then perform what I think of as "slow failure", which is normally the demesne of the project manager. When a project is failing, the typical step is not to pull the plug or take drastic action, it is to gradually raise a series of delays while everyone pretends not to notice the broader trend. By slowly failing, and at no point presenting anyone else in the business with a clear point where they should pull the plug, you can ultimately deliver nothing while tricking other people into implicitly accepting responsibility. The Scrum Master is generally not malicious, they are just failing to see the broader trend, and simply hoping for the sake of personal anxiety regulation that this task will indeed be accomplished by the next sprint.
When I run into someone in this position, I have very little trouble with my disdain when they're enjoying harassing everyone, but I mostly run into people who are actually struggling to be happy with 40 hours of their week. I know of Scrum Masters who have broken down crying when they hear that people are leaving teams — not due to a deep emotional connection with the person leaving, but because that anxiety is lying right below the surface, and almost any disruption can set it off. It is not unusual to hear people in this role flip between intensely rededicating themselves to "fixing the issues" and then despairing about their value to society, something that I personally went through on my first corporate team. It sucks.
I suspect that the impact of the organization manifesting their anxiety in one person in this way, then giving that person control of meetings and the ability to deliver that anxiety to their teams, is perhaps one of the most counter-productive configurations possible if you assume that the median Scrum Master is not a bastion of self-regulation. These people exist, but I wouldn't bet on being able to hire a half dozen of them at an affordable rate. For most of us, including me, attaining this level of equanimity is very much a lifelong work-in-progress. But even this is not a problem with Scrum, it's a much more serious problem — that organizations run default-dead projects and have cultures where people have to hide this while executives loot the treasury — that is simply made slightly worse by Scrum configurations.
V. The PowerPoint Is Not The Territory
Most engineers just go through the Scrum/Agile motions, finding clever ways to make that burndown chart progress at the right slant without questioning what they’re doing, and it’s nice to read someone articulating the negative thoughts that some of us have had for such a long time. Believe me when I say it’s been this way pretty much since the inception of this fad in the mid 90s.
— A reader
I have previously joked (I was actually dead serious) that the symbolic representation of the work, the card on the Jira board, is taken to be such a literal manifestation of the work that you can just move the pointless tasks to "Done" without actually doing anything and the business will largely not notice. If the card is in "Done", then the work is Done. The actual impact of the work is so low that no one notices, in a way that my electrician would never be able to get away with. Readers have written in to say that they have done exactly this, and nothing untoward has happened.
This conflation of management artefacts with the actual reality of the organization is widespread, and also not Scrum specific, but it is my contention that producing these artefacts is core to these methodologies' appeal. A phrase I love is that "the map is not the territory", which more or less translates to the idea that maps merely contain abstract symbols of the territory they represent, and that while we may never have access to a perfect view of the whole territory, it is important to understand that we aren't looking at the real territory. That little doodle of a mountain is not what the mountain actually looks like. Despite the scribble of a sleeping dragon, Smaug may be awake when you get there.
The harsh truth is that, as with anything complicated enough in life, you cannot realistically de-risk it. We go through our days with complex five-year plans, have them utterly blown apart every year by Covid, assassinations, coups, and if you're super lucky the best you can hope for is the dreadful experience of watching other people getting cancer rather than yourself. Then, because this is terrifying, we immediately go back to pretending that the most important event in the next five years will also be predictable.
And the other thing that we do because risk is usually terrifying (it's actually quite fun when you learn to expose yourself to good rare events — say, writing in public), is we immediately cling to things that smooth this away. Software engineers do not like engaging with the business partially because they trend towards being nerds, but mostly because interfacing with true economic reality is confronting. And non-programmers seem like they're interfacing with the reality of the business, but frequently they are interfacing with Reality As PowerPoint, which is closer to the territory but still not the territory.
True reality is never accessible because no one has perfect information. We do not know whether our competitor's latest product is going to be far behind schedule or utterly obliterate us. We do not know if a pandemic is going to shut down the state for a year.
To make matters worse, reality that is accessible is usually not accessible from a high vantage point. From a bird's eye view, you have no way of knowing that 80% of a specific team's output is from Sarah, and Sarah's son just broke his arm playing soccer so that project is about to collapse as she scrambles to cope. This is totally visible to some people at the business, but is not going to be shared with the person making promises to the board. We could build a complex systems-thinking approach about our work, but that is very hard and will have obvious fuzziness.
Many of the mediocre executives I meet, particularly those I meet in the data governance space[5], love their PowerPoints and Jira boards because while they are nonsense, they are nonsense that looks non-fuzzy and you will only have to deal with their inaccuracy once every few years, at which point so many people signed off on the clear-but-wrong vision of reality that it's hard to tell who is ultimately accountable for the failure. A more effective management methodology, one which accurately portrays the degree to which no one knows what is going on because life is chaotic, only makes sense for an entirely privately owned business where the owner needs to turn a profit rather than impress his employers or the markets.
This mode of non-fuzzy being is only available to those who are salaried to "run the business", which means that they are not accountable to the territory, much like a hedge fund manager who receives bonuses for good years and a simple firing (keeping their ill-gotten gains) during a bad year, allowing them to engage in strategies that have massively negative expected returns but only during rare events. This is in stark contrast to the reality of the bootstrapped business founder, such as the barber down the road, who will simply be on the hook for such losses. If you're looking for results, rather than being precise, we want the symbols for our work to look as fuzzy as we are uncertain, rather than pseudoclarity. I want the rope bridges on my map to exist in a superposition of being intact and destroyed in the last big storm.
Of course, this is exactly what expensive PowerPoint reports and Jira provide. Pseudoclarity, for as long as you're willing to fork over enterprise license money. The version of reality where you can simply calculate how many Story Points you're completing per month, compare that to the number of Story Points in the project, then calculate that the project will be finished on time is very, very tempting, but the ability to do this is dictated by factors that are almost totally unrelated to Scrum itself. A team that can work this smoothly has probably already won, whatever you decide to do.
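To be concrete about how seductive this arithmetic is, here is the entire "projection model" that an enterprise dashboard dresses up in burndown charts. Every number below is invented, which is rather the point:

```python
# The velocity projection that makes a Jira dashboard feel like science.
# All inputs are fiction; the arithmetic is the only part that's correct.

points_remaining = 240     # Story Points left in the backlog
velocity_per_sprint = 30   # points "completed" per two-week sprint

sprints_left = points_remaining / velocity_per_sprint
weeks_left = sprints_left * 2

print(f"Projected completion: {sprints_left:.0f} sprints ({weeks_left:.0f} weeks)")
```

The division is flawless. Whether velocity is stable, whether the backlog is complete, and whether Sarah's son breaks his arm are all conveniently outside the model.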
Some people have just lived with these symbols for so long that they think drawing a box on a PowerPoint slide that says "Secure Personally Identifiable Data" is the same thing as actually making that happen, as if one could conjure a forest into existence by drawing some trees.
VI. Dear God, The Meetings
Funny thing is that, every time my company tried to introduce OKRs and make it work, it was clear to me that nobody read the book nor understood the final goal. Like Agile or Scrum. People try to implement them as dogma, and they always backfire because of that. I guess it is always easier to be a cook and follow recipes than to be a chef and adjust based on current circumstances/ingredients.
— A reader
If there is a specific problem with Scrum, something that I genuinely think makes it stand out as uniquely bad rather than just reflecting baseline organizational pathology, meetings are it. People are not good at running meetings, and mandating that they hold more meetings does not merely reflect our weaknesses as a society, it greatly amplifies the consequences of that weakness.
So many meetings. So, so many meetings. Suffice it to say that anyone who runs a one-hour Stand-Up with any consistency should be immediately terminated if they are primarily Agile practitioners. There is very little to say here, save that people are so terrible at running meetings that, on average, the sanest thing to do for most businesses is pick a framework that minimizes the default number of them. I will appeal to authority here on the normalcy of meeting-running skill being low and simply quote Luke Kanies who has given meetings more thought than I have:
So a manager’s day is built around meetings, and there is a new crop of tools to help with them. What’s not to love?
Well. The tools are built by and for people who hate meetings, and often who aren’t very good at them. Instead, I want tools for people whose job is built around meetings, and who know they must be excellent at them.
In the absence of people who treat running meetings as seriously as we treat system design, try not to run many meetings. If this sounds unpalatable then get good, nerds. I'm turning the tattered remnants of my humility module off to say that the team at our consultancy runs a meeting at 8PM every Thursday, after most of the team has just worked their day job and struggled to send kids off to bed, and we actually look forward to it. This is attainable, though even then we constantly reflect on whether the meeting needs to keep existing before it wears out its welcome. I ask people very frequently, possibly too frequently, whether they're still having fun.
I currently believe that meeting-heavy methodologies are preferred because they feel like productivity if you aren't mindful enough to notice the difference between Talking About Things and Doing Things. Even some of the worst Agile consultants I know have periodically produced real output, but I suspect they can no longer differentiate between the things they do that have value and the things that do not.
VII. We Need To Talk About Jeff
A while ago, I wrote a short story about Scrum being a Lovecraftian plot designed to steal human souls. It ended with this quote:
In the future, historians may look back on human progress and draw a sharp line designating "before Scrum" and "after Scrum." Scrum is that ground-breaking. [...]
If you've ever been startled by how fast the world is changing, Scrum is one of the reasons why. Productivity gains of as much as 1200% have been recorded.
In this book you'll journey to Scrum's front lines where Jeff's system of deep accountability, team interaction, and constant iterative improvement is, among other feats, bringing the FBI into the 21st century, perfecting the design of an affordable 140 mile per hour/100 mile per gallon car, helping NPR report fast-moving action in the Middle East, changing the way pharmacists interact with patients, reducing poverty in the Third World, and even helping people plan their weddings and accomplish weekend chores.
This is so unhinged that readers thought this was something I made up. 1200% productivity improvements? You can use Scrum to report on wars and accomplish your weekend chores? This looks like I asked ChatGPT to produce erotica for terminally online LinkedIn power users.
I wish it was.
That's the blurb from one of Jeff Sutherland's books, one of the Main Agile Guys. The subtitle of the book is "doing twice the work in half the time", so this absolute weirdo is proposing that Scrum makes you four times faster than not doing Scrum. Jeff has also gone on record with amazing pearls of wisdom like this:
If Scrum team is ten times faster than a dysfunctional team and AI makes teams go four times faster, then when both use AI the Scrum team with still be be ten times faster than the dysfunctional AI team. Actually, what I am teaching today is not only for developers to use AI to generate 80% of the code and be five times faster. This will make each individual team member 5 times as productive and the whole team five times faster. But it you make the AI a developer on the team you will get a synergistic effect, potentially making the team 25 times faster.
Jeff, what the fuck are you saying? This is incomprehensible nonsense. You are throwing random numbers out faster than Ben Shapiro when he's flinging anti-trans rhetoric, a formidable accomplishment in and of itself, and they're all totally unsubstantiated. This is insane. This is demented. Scrum is ten times faster than a dysfunctional team? Are all non-Scrum teams dysfunctional? AI makes teams go four times faster? But you're teaching people to use AI to be five times faster? Then if you put AI on the team as a developer there's a synergistic effect and they're 25 times faster? Fucking what? What mystical AI do you have access to that can replace a developer on a team today and why aren't you worth a trillion dollars?
And if we throw Scrum into the mix for that sweet, sweet 10x speedup, can I get my team to be 250 times faster? Can our team of six people perform the work of, let me do some maths, 1500 normal developers?
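For the rigour-minded, the maths above really is just this (the multipliers are Jeff's, not mine, and none of them are substantiated):

```python
# Jeff-math, faithfully reproduced.
scrum_multiplier = 10        # Scrum team vs "dysfunctional" team, per Jeff
ai_synergy_multiplier = 25   # AI-as-team-member "synergistic effect", per Jeff
team_size = 6

effective_speed = scrum_multiplier * ai_synergy_multiplier  # the sweet 250x
equivalent_developers = team_size * effective_speed

print(equivalent_developers)  # -> 1500
```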
How have we defaulted to a methodology that has this raving fanatic at the helm?
VIII. Cognitive Biases and Doing Better
Contracting has exposed me to a variety of technical challenges and domains, ranging from work I remember with fondness and pride, to the kinds of unbearable, interminable corporate Scrum nightmares you describe so eloquently in your blog which seemed to be cooked up in a lab intent on undermining and punishing any sign of genuine ambition towards the improvement of human life.
— A reader
The reality is that teams are messy, filled with emotion, and that this is further compounded by the fact that our work requires a great deal of emotional well-being to deliver consistently. I once worked on the management team at a Southeast Asian startup, and while I was terribly depressed at that job, I was able to get my work done with some degree of consistency. Now that I am in IT, I basically cannot program when I am in a negative headspace because I cannot think clearly, and this dominates most of the productivity gains I see in a typical engineering team. Poor sleep, low psychological safety, and a thousand other little levers in the brain can disrupt functioning. There is no real shortcut for this.
With that said, I do have thoughts on how to do better, and you're going to get them no matter how annoying that is! Behold, the unbridled power of not relying on advertising revenue!
Names Matter And Simple Is Good
Let's start very, very simply.
Names matter.
Agile is popular because the word Agile has connotations of speed, and that is genuinely as sophisticated as many people are when designing their entire company's culture.
Sprints are popular because the word Sprint has connotations of speed. The fact they are called Sprints has probably genuinely killed a few people when you aggregate the harm of being told you are Sprinting every week across a few million anxious people. Don't give things idiotic names.
All methodologies should compete against a baseline of a bunch of sticky notes on a whiteboard, and you should question your soundness of mind every time you feel the need to introduce a Proper Noun, okay? Just have a big list of things to do, order in terms of what you want done first, and then do that. Just think about how much you'll save on onboarding and consultants for the exact same outcome in almost all cases. There are plenty of superior methods to this, but there are way more worse ones. If you have a meeting, just call it a meeting and prepare an agenda.
I swear to God, if you invent a Proper Noun and someone asks me to learn it for no reason, sweet merciful Jesus, I will find you and —
Cognitive Biases
Ahem.
A spectacular amount of the design that goes into these methodologies is based around avoiding cognitive biases around estimation, though they frequently fall short because there is no easy fix for a mind that craves only easy fixes. That one sentence describes 90% of dysfunction in all fields.
Consider the Fibonacci sequence restrictions, meaning that four Story Points can't exist (which is hilarious when adopted by teams using Story Points as time, because now four days can't exist). The generous reasoning behind this is that the variance in a highly complex or time-intensive task is higher than that of a simple task, so it makes sense to force people into increasingly large numbers rather than stressing about a single point here or there. In reality, this is fucking silly and if someone suggested this ex nihilo, without the background of Scrum, we'd be absolutely baffled. But hey, an attempt was made.
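For anyone who hasn't encountered the practice: the whole mechanism amounts to snapping a raw guess onto the allowed values. This is my own toy sketch, not code from any Scrum tool:

```python
ALLOWED_POINTS = [1, 2, 3, 5, 8, 13, 21]  # a typical Story Point "deck"

def snap_to_allowed(raw_estimate: float) -> int:
    """Round a raw estimate up to the next allowed value, on the theory
    that uncertainty grows with size. Four, famously, cannot exist."""
    for value in ALLOWED_POINTS:
        if raw_estimate <= value:
            return value
    return ALLOWED_POINTS[-1]  # anything huge is just "21"

print(snap_to_allowed(4))  # -> 5
```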
Most of the suggestions will be around different models for handling these biases. I'll indicate which of these I have actually tried. For the most part, I have found that the typical organization is completely unwilling to actually try these, and will only consider them when talking to me when I am presenting in my Consultant Mode, not my Employee Mode. Even though I am more deeply embedded in the workplace culture as an employee, most people can't stop seeing ICs as too low-status to take seriously. In these contexts, I've just gone rogue and experimented without management buy-in.
Bets and Sunk Costs
Basecamp has a free book out online that talks about a methodology they call ShapeUp. It's quite good despite my general disdain for business books. In it, they explicitly deal with the failure mode of tasks stretching well beyond their value to the business, existing in the perpetual zone of "almost done".
We combine this uninterrupted time with a tough but extremely powerful policy. Teams have to ship the work within the amount of time that we bet. If they don’t finish, by default the project doesn’t get an extension. We intentionally create a risk that the project—as pitched—won’t happen. This sounds severe but it’s extremely helpful for everyone involved.
First, it eliminates the risk of runaway projects. We defined our appetite at the start when the project was shaped and pitched. If the project was only worth six weeks, it would be foolish to spend two, three or ten times that.
[...]
Second, if a project doesn’t finish in the six weeks, it means we did something wrong in the shaping. Instead of investing more time in a bad approach, the circuit breaker pushes us to reframe the problem. We can use the shaping track on the next six weeks to come up with a new or better solution that avoids whatever rabbit hole we fell into on the first try. Then we’ll review the new pitch at the betting table to see if it really changes our odds of success before dedicating another six weeks to it.
All this does is turn off the sunk cost fallacy, forcibly. It's very smart.
I ran this for a while with the other engineer mentioned in this blog post. Despite the broad horror of that story, it was the most productive work period of my life. It's also worth noting that the other engineer went on to become one of my co-founders, and that we both studied psychology together before getting into IT. A lot of our effectiveness came down to ruthless self-analysis and paying attention to failure modes.
Small Slippages Don't Exist
I like the advice given by P. Fagg, an experienced hardware engineer, "Take no small slips." That is, allow enough time in the new schedule to ensure that the work can be carefully and thoroughly done, and that rescheduling will not have to be done again.
— Fred Brooks, The Mythical Man Month
This is the smartest and most practicable thing that I'm ever going to write on this blog. Unsubscribe after this because it's all downhill from here.
There is a rather strange phenomenon that arises around project lateness. When we estimate that something is going to be completed in a month, the natural temptation is to think, when you are one day overdue, that you are almost done. I.e., it will be finished in one month and five days.
In reality, each day past the deadline increases the estimated deadline. Prepare yourself for one of my patented doodles, a thing which I am bullied for relentlessly at work, and enjoy the dreadful simulated experience of being one of my colleagues enduring an interminable lecture about some abstract concept that no one cares about.
This is the distribution that people think they are sampling from. As you move past the one-month mark, you are approaching the sad probability that it'll take two months, but the likelihood of that is astonishingly low. At each point, the odds are that it's the next sprint where you'll deliver.
The truth is that if you keep missing deadlines (or even miss one deadline), reality is gently, and eventually not-so-gently, informing you that you are not drawing from the distribution you thought you were. Instead, with each passing day, it is increasingly likely that you are drawing from some super cursed distribution that will ruin your project forever.
Each delay represents the accumulation of evidence that you are more likely to be drawing from the blue instead of the red, or something even worse than the blue. These days, when something important is late by one day, I immediately escalate to the highest alert level possible. This is unpalatable for political reasons, but it is the only appropriate response. It works.
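If you want the statistical skeleton of that argument, here is a toy Bayesian version with entirely made-up distributions: a "healthy" project that finishes in about a month on average, and a "cursed" one with a long tail. Every day that passes without delivery shifts the posterior toward cursed:

```python
import math

def survival(day: float, mean: float) -> float:
    """P(not finished by `day`) under a simple exponential model."""
    return math.exp(-day / mean)

def p_cursed_given_late(day: float, prior_cursed: float = 0.10) -> float:
    """Posterior probability of the long-tail hypothesis, given no delivery by `day`."""
    p_healthy = (1 - prior_cursed) * survival(day, mean=30)   # healthy: ~a month
    p_cursed = prior_cursed * survival(day, mean=120)         # cursed: long tail
    return p_cursed / (p_healthy + p_cursed)

for day in (30, 35, 45, 60, 90):
    print(f"Day {day}: P(cursed | still not done) = {p_cursed_given_late(day):.2f}")
```

The exact numbers don't matter; the shape does. The posterior only ever climbs, which is why "one day late" is evidence, not noise.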
I've tried this out, and I have never regretted it. I also once warned an executive about a server that was two weeks late in being provisioned. I failed to adequately explain the idea because it was complicated, they were impatient, and I hadn't practiced the explanation enough times... and I think that it's too counter-intuitive for non-statisticians to actually act on. Doing unusual things is a genuinely hard skill, even if you absolutely believe that the unusual thing is better. That server was provisioned a year behind schedule. There are no small delays in my world, only early delivery and absolute catastrophes.
No Deadlines
Our consultancy doesn't do deadlines. This was a strange idea when I first came across it because it is so different from the corporate norm, but it's a much better model when you have trust with the parties involved. If you don't have trust, guess what, nothing else matters. We pair this with fixed price billing, but the core is that we try to only work projects where there's no real risk of a few weeks here or there affecting our client adversely. The fixed price billing means that we aren't rewarded for running late, and have a higher effective hourly rate if we deliver something the client is happy with in less time. It also means that clients don't feel bad when we do things like document comprehensively or improve test suites.
This is tricky, because it runs totally counter to how a large business operates, but there would also be very little to enjoy in starting a business only to do what everyone else is doing. There's common wisdom in the savvier parts of the IT world that you should limit the number of weird things you do with regards to technology stack. Even that is actually an argument against bias (people drastically underestimate the difficulties of using weird technology and overestimate the value) — but that's also terrible advice if you actually know what you're doing. You want to maximize the amount of weird stuff you're doing across the business to generate asymmetry with your competitors, with the admittedly serious caveat that the pathway to this particular ancient ruin is littered with skulls. Pay attention to the skulls.
This isn't very useful for a larger business, as usually they have been architected to rely on conventional mechanisms, but it's worth thinking about the fact that this is possible. Authors like Jonathan Stark have it as part of their normal practice. You can choose to build a system that is not predicated on this idea of work flowing linearly through a system like a factory floor. In fact, one of the most famous books on IT operations is The Phoenix Project, but The Phoenix Project is self-admittedly just a reskinned version of The Goal, a totally different book that is explicitly about factory floors.
This is also totally beside the point, but the audiobook version of The Goal has a romantic subplot in a book about factory operations and the editors included smooth saxophone when the protagonist finally frees up enough time at work to attend to his ailing marriage, which caused me to exhale tea through my nose.
Finally, here is a boring disclaimer that some industries simply can't get away with experimenting along these dimensions. Microchip manufacturers need to deliver the product in time for the next iPhone to ship or Apple cancels the contract. C’est la vie.
Estimation Is Expensive
This is a point taken from a private conversation with Jesse Alford, but obviously tasks can be estimated accurately, it's just expensive. I've done it before to a higher accuracy than I've seen on any Scrum team, with minimal practice, just by taking the time to have deep conversations with another engineer about the work. Unfortunately, it takes a non-trivial amount of engineering effort to do, and frequently has to be paired with actual work. Once again going to Basecamp, whose kool-aid I swear that I only drink sparingly even though it's delicious and refreshing, they have a specific chart on their platform called a hill chart. I love hill charts. They look like this:
Simply put, they reflect the reality that there is a phase of a project where scope increases as you run into new cases during implementation, and then a phase where you actually have a good idea of how long something is going to take. For example, if I was going to integrate an application with a third-party application, the period of horror where I learn about the Informatica API (a wretched abomination whose developers should be crucified) is the left side of the chart as I learn things like "The security logs don't tell you if a login attempt was successful, just that someone clicked the login button". The right side of the chart, once I have painfully hiked up the hillside which is littered in caltrops, is where I say "This is still absolute torment, but I am now confident that there are only three days of pain left".
People can have their gigantic Jira board, I guess, if they're willing to put that much time into something that isn't the work itself. And of course, that wouldn't be that great anyway, as it's possible to miss board deadlines, have to perform re-work, be let down by teams you depend on, lose people or watch them become demotivated, etc. For the most part, businesses are best served by doing really, really simple things that have outsized value.
Even within the consultancy, the only things we've bothered setting internal deadlines for are those that we've been procrastinating on. Even our deadlines are deployed towards psychological ends. This will change over time, but it has been totally fine until now.
Autonomy Builds Morale
Nothing crushes software engineering productivity faster than low morale. Generally speaking, software engineers in the first world are well-paid enough, and their work is hard enough to measure, that you cannot intimidate them into working faster by standing behind them whilst demanding they flip burgers faster. Nor would I want to, on account of at least trying not to be a bastard.
Scrum teams, and any team that does not pay extremely close attention to the symbolic framework that the team is operating in, will collapse over the course of approximately a year. A core issue with Scrum specifically is that many of the ceremonies symbolize extant organizational issues much too clearly. One of the other important meetings, the Retrospective (I know, there's another word!) is where you sit down and evaluate how a sprint went, and what the team can change. I love the idea of the Retrospective. It is a fantastic idea that any team aspiring to greatness should adopt.
But if everyone in the Retrospective requires organizational permission to change things, or is the generic team where changes are not addressed with violent purpose, the Retrospective becomes emblematic of everything that makes people feel disrespected. On my current conventional-employment team, only about 20% of the team continues to attend Retrospectives, which is the outcome I've gradually observed everywhere I've seen Scrum. Again, this isn't a problem with Scrum — it's that Retrospectives interact positively with good cultures and negatively with bad cultures, and since most cultures are bad, it follows that Scrum is actively harmful to deploy into a random environment. It's only appropriate to roll it out when the organization is ready to actually make changes, and the continuance of the process should be immediately interrogated, and possibly terminated, the moment an engineer reports that they feel it's a waste of time. You can bring it back after figuring out how it all went so wrong the first time.
IX. Scrumbled Eggs All Over My Face
There's probably some broad lesson in here about thoughtfulness, constantly evolving how you approach your craft and the structures that surround it, being suspicious of people selling 1200% improvements in productivity, and accepting that there's no substitute for reading and thinking deeply about your problems. There is no management methodology that will make up for having team members that proselytize full-time for a philosophy that is five lines long without having read those five lines[6].
But that sounds really tiring! Someone has announced Agile 2.0, baby! Nothing can go wrong! Finally, all our problems are solved! Goodnight, everybody!
| 2024-11-08T02:19:13 | en | train |
42,030,823 | js2 | 2024-11-03T02:37:42 | Swapped at birth: two women discovered they weren't who they thought they were | null | https://www.bbc.com/news/articles/cp3njqd9nl9o | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,828 | PaulHoule | 2024-11-03T02:38:47 | Replayed reef sounds induce settlement of Favia fragum coral larvae | null | https://pubs.aip.org/asa/jel/article/4/10/107701/3317630/Replayed-reef-sounds-induce-settlement-of-Favia | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,832 | gooob | 2024-11-03T02:39:51 | Ask HN: What would you preserve if the internet were to go down tomorrow? | thought experiment: if the internet were to go down tomorrow for an indefinite period, what content would you most want to download and preserve? | null | 243 | 298 | [
42034690,
42031680,
42033931,
42034012,
42039707,
42031049,
42034778,
42031266,
42032450,
42033902,
42030893,
42035765,
42040984,
42035138,
42034426,
42032689,
42067653,
42034135,
42034736,
42030973,
42031450,
42040539,
42034647,
42053410,
42031595,
42035095,
42035185,
42041538,
42032050,
42035146,
42041549,
42045418,
42040554,
42031431,
42031010,
42052366,
42037268,
42034847,
42034369,
42040386,
42041116,
42032136,
42037393,
42031324,
42034444,
42038180,
42043925,
42035456,
42041771,
42038008,
42040446,
42034102,
42031090,
42031766,
42040782,
42034060,
42042231,
42033781,
42040826,
42036988,
42044402,
42034461,
42039603,
42040062,
42043816,
42040002,
42040616,
42031028,
42035945,
42060043,
42048888,
42036642,
42032725,
42055764,
42035371,
42035222,
42040035,
42031915,
42036066,
42032582,
42040124,
42035868,
42038499,
42040441,
42043246,
42049647,
42033840,
42036107,
42035067,
42042358,
42038444,
42034435,
42035267,
42033211,
42035281,
42035486,
42035557,
42034655,
42032215,
42031446,
42035887,
42031719,
42040516,
42034633,
42031424,
42034725
] | null | null | null | null | null | null | null | null | null | train |
42,030,834 | bookofjoe | 2024-11-03T02:40:00 | Large meltwater accumulation revealed inside Greenland Ice Sheet | null | https://phys.org/news/2024-10-large-meltwater-accumulation-revealed-greenland.html | 22 | 1 | [
42031220
] | null | null | http_other_error | Just a moment... | null | null | Please complete security verificationThis request seems a bit unusual, so we need to confirm that you're human. Please press and hold the button until it turns completely green. Thank you for your cooperation!Press and hold the buttonIf you believe this is an error, please contact our support team.88.216.233.107 : 8979d5cd-dd76-4cc1-92dc-91a9aaad | 2024-11-08T02:19:27 | null | train |
42,030,835 | nateb2022 | 2024-11-03T02:40:02 | VSCode high CPU usage issue affecting macOS | null | https://github.com/microsoft/vscode/issues/232699 | 6 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,852 | rezaprima | 2024-11-03T02:46:20 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,868 | Melissabergamot | 2024-11-03T02:51:21 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,884 | thunderbong | 2024-11-03T02:55:58 | The 15-year-old blind quarterback hoping to reach the NFL (2021) | null | https://www.cnn.com/2021/10/23/us/blind-football-quarterback-modesto-raiders/index.html | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,905 | Jimmc414 | 2024-11-03T03:03:13 | null | null | null | 21 | null | [
42030911,
42030934
] | null | true | null | null | null | null | null | null | null | train |
42,030,907 | Liddry | 2024-11-03T03:03:49 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,030,915 | ohjeez | 2024-11-03T03:07:26 | Listening in on the Mysterious Marbled Murrelet | null | https://hakaimagazine.com/news/listening-in-on-the-mysterious-marbled-murrelet/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,942 | paulpauper | 2024-11-03T03:21:22 | Political Fetishism | null | https://grognoscente.substack.com/p/on-political-fetishism | 6 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,946 | paulpauper | 2024-11-03T03:21:57 | Understanding pain, mental illness, and grief | null | https://dhruvmethi.substack.com/p/understanding-pain-mental-illness | 17 | 1 | [
42031462
] | null | null | null | null | null | null | null | null | null | train |
42,030,949 | kermerlerper | 2024-11-03T03:22:16 | A Golang pipeline abomination | null | https://poxate.com/blog/golang-pipeline-abomination | 21 | 8 | [
42031237,
42031362,
42031270
] | null | null | null | null | null | null | null | null | null | train |
42,030,953 | neustradamus | 2024-11-03T03:23:17 | Openfire 4.9.1 Released – Open-Source – Java XMPP/Jabber Server | null | https://discourse.igniterealtime.org/t/openfire-4-9-1-release/94857 | 8 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,030,966 | null | 2024-11-03T03:27:58 | null | null | null | null | null | [
42030967
] | [
"true"
] | null | null | null | null | null | null | null | null | train |
42,031,003 | ioblomov | 2024-11-03T03:36:33 | The college wrestlers who took on a grizzly bear (2023) | null | https://www.espn.com/college-sports/story/_/id/35820049/college-wrestlers-grizzly-bear-attack | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,018 | Saint_Vandora | 2024-11-03T03:44:56 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,020 | hedayet | 2024-11-03T03:45:02 | Show HN: Appents Social Media Content Checker | Use Appents Content Analyzer to ensure your content adheres to policies and maintains a positive tone before posting on social medias. | https://appents.com/apps/content-editor/ | 1 | 0 | null | null | null | missing_parsing | Appents | null | null | © 2024 Appents. A Jhotika Inc.product.Privacy PolicyTerms & ConditionsDeletion Policy | 2024-11-08T10:53:55 | null | train |
42,031,030 | Mikajis | 2024-11-03T03:49:07 | Is GitHub Copilot Helping Me Code, or Just Filling in the Blanks? | null | https://bassi.li/articles/copilot-convenience-vs-skills | 3 | 4 | [
42031054,
42031299,
42031162,
42031156,
42031209,
42031170
] | null | null | null | null | null | null | null | null | null | train |
42,031,045 | sandwichsphinx | 2024-11-03T03:53:56 | China Faces a Dilemma with North Korean Troops Pouring into Russia | null | https://www.wsj.com/world/china-faces-a-dilemma-with-north-korean-troops-pouring-into-russia-34f0532f | 7 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,084 | downboots | 2024-11-03T04:06:42 | Skinner Box | null | https://en.wikipedia.org/wiki/Operant_conditioning_chamber | 2 | 0 | null | null | null | no_error | Operant conditioning chamber | 2003-02-25T21:25:09Z | Contributors to Wikimedia projects |
From Wikipedia, the free encyclopedia
"Skinner box" redirects here. For the ska band, see Skinnerbox.
Skinner box
An operant conditioning chamber (also known as a Skinner box) is a laboratory apparatus used to study animal behavior. The operant conditioning chamber was created by B. F. Skinner while he was a graduate student at Harvard University. The chamber can be used to study both operant conditioning and classical conditioning.[1][2]
Skinner created the operant conditioning chamber as a variation of the puzzle box originally created by Edward Thorndike.[3] While Skinner's early studies were done using rats, he later moved on to study pigeons.[4][5] The operant conditioning chamber may be used to observe or manipulate behaviour. An animal is placed in the box where it must learn to activate levers or respond to light or sound stimuli for reward. The reward may be food or the removal of noxious stimuli such as a loud alarm. The chamber is used to test specific hypotheses in a controlled setting.
Students using a Skinner boxSkinner was noted to have expressed his distaste for becoming an eponym.[6] It is believed that Clark Hull, a psychologist and Hull's Yale students coined the expression "Skinner box". Skinner said that he did not use the term himself; he went so far as to ask Howard Hunt to use "lever box" instead of "Skinner box" in a published document.[7]
Original puzzle box designed by Edward Thorndike
In 1898, American psychologist, Edward Thorndike proposed the 'law of effect', which formed the basis of operant conditioning.[8] Thorndike conducted experiments to discover how cats learn new behaviors. His work involved monitoring cats as they attempted to escape from puzzle boxes. The puzzle box trapped the animals until they moved a lever or performed an action which triggered their release.[9] Thorndike ran several trials and recorded the time it took for them to perform the actions necessary to escape. He discovered that the cats seemed to learn from a trial-and-error process rather than insightful inspections of their environment. The animals learned that their actions led to an effect, and the type of effect influenced whether the behavior would be repeated. Thorndike's 'law of effect' contained the core elements of what would become known as operant conditioning. B. F. Skinner expanded upon Thorndike's existing work.[9] Skinner theorized that if a behavior is followed by a reward, that behavior is more likely to be repeated, but added that if it is followed by some sort of punishment, it is less likely to be repeated. He introduced the word reinforcement into Thorndike's law of effect.[10] Through his experiments, Skinner discovered the law of operant learning which included extinction, punishment and generalization.[10]
Skinner designed the operant conditioning chamber to allow for specific hypothesis testing and behavioural observation. He wanted to create a way to observe animals in a more controlled setting as observation of behaviour in nature can be unpredictable.[2]
A rat presses a button in an operant conditioning chamber.
An operant conditioning chamber allows researchers to study animal behaviour and response to conditioning. They do this by teaching an animal to perform certain actions (like pressing a lever) in response to specific stimuli. When the correct action is performed the animal receives positive reinforcement in the form of food or other reward. In some cases, the chamber may deliver positive punishment to discourage incorrect responses. For example, researchers have tested certain invertebrates' reaction to operant conditioning using a "heat box".[11] The box has two walls used for manipulation; one wall can undergo temperature change while the other cannot. As soon as the invertebrate crosses over to the side which can undergo temperature change, the researcher will increase the temperature. Eventually, the invertebrate will be conditioned to stay on the side that does not undergo a temperature change. After conditioning, even when the temperature is turned to its lowest setting, the invertebrate will avoid that side of the box.[11]
Skinner's pigeon studies involved a series of levers. When the lever was pressed, the pigeon would receive a food reward.[5] This was made more complex as researchers studied animal learning behaviours. A pigeon would be placed in the conditioning chamber and another one would be placed in an adjacent box separated by a plexiglass wall. The pigeon in the chamber would learn to press the lever to receive food as the other pigeon watched. The pigeons would then be switched, and researchers would observe them for signs of cultural learning.
On the left are two mechanisms including two levers and light signals. There is a light source and speaker above the box and an electrified floor at the bottom.
The outside shell of an operant conditioning chamber is a large box big enough to easily accommodate the animal being used as a subject. Commonly used animals include rodents (usually lab rats), pigeons, and primates. The chamber is often sound-proof and light-proof to avoid distracting stimuli.
Operant conditioning chambers have at least one response mechanism that can automatically detect the occurrence of a behavioral response or action (i.e., pecking, pressing, pushing, etc.). This may be a lever or series of lights which the animal will respond to in the presence of stimulus. Typical mechanisms for primates and rats are response levers; if the subject presses the lever, the opposite end closes a switch that is monitored by a computer or other programmed device.[12] Typical mechanisms for pigeons and other birds are response keys with a switch that closes if the bird pecks at the key with sufficient force.[5] The other minimal requirement of an operant conditioning chamber is that it has a means of delivering a primary reinforcer such as a food reward.
A pigeon offering the correct response to stimuli is rewarded with food pellets.
A simple configuration, such as one response mechanism and one feeder, may be used to investigate a variety of psychological phenomena. Modern operant conditioning chambers may have multiple mechanisms, such as several response levers, two or more feeders, and a variety of devices capable of generating different stimuli including lights, sounds, music, figures, and drawings. Some configurations use an LCD panel for the computer generation of a variety of visual stimuli or a set of LED lights to create patterns to be replicated.[13]
Some operant conditioning chambers can also have electrified nets or floors so that shocks can be given to the animals as a positive punishment or lights of different colors that give information about when the food is available as a positive reinforcement.[14]
Operant conditioning chambers have become common in a variety of research disciplines especially in animal learning. The chambers design allows for easy monitoring of the animal and provides a space to manipulate certain behaviours. This controlled environment may allow for research and experimentation which cannot be performed in the field.
There are a variety of applications for operant conditioning. For instance, shaping the behavior of a child is influenced by the compliments, comments, approval, and disapproval of one's behavior.[15] An important factor of operant conditioning is its ability to explain learning in real-life situations. From an early age, parents nurture their children's behavior by using reward and praise following an achievement (crawling or taking a first step) which reinforces such behavior. When a child misbehaves, punishment in the form of verbal discouragement or the removal of privileges are used to discourage them from repeating their actions.
Skinner's studies on animals and their behavior laid the framework needed for similar studies on human subjects. Based on his work, developmental psychologists were able to study the effect of positive and negative reinforcement. Skinner found that the environment influenced behavior and when that environment is manipulated, behaviour will change. From this, developmental psychologists proposed theories on operant learning in children. That research was applied to education and the treatment of illness in young children.[10] Skinner's theory of operant conditioning played a key role in helping psychologists understand how behavior is learned. It explains why reinforcement can be used so effectively in the learning process, and how schedules of reinforcement can affect the outcome of conditioning.
Commercial applications[edit]
Slot machines, online games, and dating apps are examples where sophisticated operant schedules of reinforcement are used to reinforce certain behaviors.[16][17][18][19]
Gamification, the technique of using game design elements in non-game contexts, has also been described as using operant conditioning and other behaviorist techniques to encourage desired user behaviors.[20]
Behaviorism
Radical behaviorism
Operant conditioning
Punishment (psychology)
Reinforcement
Synchronicity
^ Carlson NR (2009). Psychology-the science of behavior. U.S: Pearson Education Canada; 4th edition. p. 207. ISBN 978-0-205-64524-4.
^ a b Krebs JR (1983). "Animal behaviour. From Skinner box to the field". Nature. 304 (5922): 117. Bibcode:1983Natur.304..117K. doi:10.1038/304117a0. PMID 6866102. S2CID 5360836.
^ Schacter DL, Gilbert DT, Wegner DM, Nock MK (January 2, 2014). "B. F. Skinner: The Role of Reinforcement and Punishment". Psychology (3rd ed.). Macmillan. pp. 278–80. ISBN 978-1-4641-5528-4.
^ Kazdin A (2000). Encyclopedia of Psychology, Vol. 5. American Psychological Association.
^ a b c Sakagami T, Lattal KA (May 2016). "The Other Shoe: An Early Operant Conditioning Chamber for Pigeons". The Behavior Analyst. 39 (1): 25–39. doi:10.1007/s40614-016-0055-8. PMC 4883506. PMID 27606188.
^ Skinner BF (1959). Cumulative record (1999 ed.). Cambridge, MA: B.F. Skinner Foundation. p. 620.
^ Skinner BF (1983). A Matter of Consequences. New York, NY: Alfred A. Knopf, Inc. pp. 116, 164.
^ Gray P (2007). Psychology. New York: Worth Publishers. pp. 108–109.
^ a b "Edward Thorndike – Law of Effect | Simply Psychology". www.simplypsychology.org. Retrieved November 14, 2021.
^ a b c Schlinger H (January 17, 2021). "The Impact of B. F. Skinner's Science of Operant Learning on Early Childhood Research, Theory, Treatment, and Care". Early Child Development and Care. 191 (7–8): 1089–1106. doi:10.1080/03004430.2020.1855155. S2CID 234206521 – via Routledge.
^ a b Brembs B (December 2003). "Operant conditioning in invertebrates" (PDF). Current Opinion in Neurobiology. 13 (6): 710–717. doi:10.1016/j.conb.2003.10.002. PMID 14662373. S2CID 2385291.
^ Fernández-Lamo I, Delgado-García JM, Gruart A (March 2018). "When and Where Learning is Taking Place: Multisynaptic Changes in Strength During Different Behaviors Related to the Acquisition of an Operant Conditioning Task by Behaving Rats". Cerebral Cortex. 28 (3): 1011–1023. doi:10.1093/cercor/bhx011. PMID 28199479.
^ Jackson K, Hackenberg TD (July 1996). "Token reinforcement, choice, and self-control in pigeons". Journal of the Experimental Analysis of Behavior. 66 (1): 29–49. doi:10.1901/jeab.1996.66-29. PMC 1284552. PMID 8755699.
^ Craighead, W. Edward; Nemeroff, Charles B., eds. (2004). The Concise Corsini Encyclopedia of Psychology and Behavioral Science 3rd ed. Hoboken, New Jersey: John Wiley & Sons, Inc. p. 803. ISBN 0-471-22036-1.
^ Shrestha P (November 17, 2017). "Operant Conditioning". Psychestudy. Retrieved November 14, 2021.
^ Hopson, J. (April 2001). "Behavioral game design". Gamasutra. Retrieved April 27, 2019.
^ Coon D (2005). Psychology: A modular approach to mind and behavior. Thomson Wadsworth. pp. 278–279. ISBN 0-534-60593-1.
^ "The science behind those apps you can't stop using". Australian Financial Review. October 7, 2016. Retrieved January 23, 2024.
^ "The scientists who make apps addictive". The Economist. ISSN 0013-0613. Retrieved January 23, 2024.
^ Thompson A (May 6, 2015). "Slot machines perfected addictive gaming. Now, tech wants their tricks". The Verge.
B.F. Skinner Foundation
| 2024-11-08T17:11:01 | en | train |
42,031,096 | tejonutella | 2024-11-03T04:10:14 | Ask HN: How do I consolidate all my messaging services into one big dashboard? | I’m talking everything from Facebook Messenger, iMessage, Instagram, LinkedIn, and more into one feed or dashboard | null | 2 | 2 | [
42031572,
42044937,
42044940
] | null | null | null | null | null | null | null | null | null | train |
42,031,101 | Jimmc414 | 2024-11-03T04:11:23 | null | null | null | 4 | null | [
42031124,
42031104,
42031123
] | null | true | null | null | null | null | null | null | null | train |
42,031,117 | peutetre | 2024-11-03T04:14:36 | Toyota to buy clean power from a $1.1B solar farm in Texas | null | https://electrek.co/2024/11/01/toyota-solar-farm-texas/ | 62 | 44 | [
42031926,
42032882,
42031368,
42031526,
42031171
] | null | null | null | null | null | null | null | null | null | train |
42,031,131 | happer64bit | 2024-11-03T04:18:39 | No More Volume Key Pain | null | https://github.com/happer64bit/NoMoreVolumeKeyPain | 2 | 1 | [
42031132
] | null | null | missing_parsing | GitHub - happer64bit/NoMoreVolumeKeyPain: Put your Fn Key away! Just Volume Control 🚀 | null | happer64bit | No More Volume Key Pain
No More Volume Key Pain is a simple Windows app that makes adjusting your volume easier. If you’re tired of needing to press the Fn key just to change the volume, this app’s for you! With this little tool, you can use F6 and F7 to adjust volume directly—no Fn key needed.
Why This App?
Honestly, this app was created out of frustration. Many keyboards make you press Fn along with the volume keys, which can get annoying fast if you’re frequently adjusting volume.
F5 - Mute/Unmute
F6 - Volume Down
F7 - Volume Up
Stopping the App
You can easily stop the app from the tray menu.
Features
Volume Control: Use F6 to decrease and F7 to increase the volume in easy 10% steps.
Overlay Display: See your current volume level in a small overlay when you adjust it.
System Tray Icon: Minimizes to the system tray with a quick right-click Quit option when you’re done.
Requirements
Windows OS
C++ Compiler (Visual Studio, MinGW, etc.)
Windows SDK (for Windows-specific APIs)
CMake (for building)
Getting Started
Clone and Build
Clone the repo and go to the project folder:
git clone https://github.com/yourusername/nomore_volume_key_pain.git
cd nomore_volume_key_pain
Create a build folder and set up CMake:
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
Build the project:
cmake --build . --config Release
Run the app from the build directory:
./Release/NoMoreVolumeKeyPain.exe
How to Use
Adjust Volume: Hit F6 to lower volume and F7 to raise it.
See Volume Level: An overlay shows you the current volume when it changes.
Quit: Right-click the icon in the system tray and choose Quit to close the app.
What’s Happening Under the Hood
Volume Control: Uses the Windows Core Audio API to read and set the system volume.
Keyboard Hook: Captures F6 and F7 to adjust volume, so you don’t need Fn.
Overlay Display: Pops up briefly to show you the new volume percentage.
System Tray: Gives you an easy way to exit from the tray when you’re done.
License
This app is licensed under the MIT License, so feel free to tweak it, share it, or improve it!
| 2024-11-08T17:46:32 | null | train |
42,031,133 | austinallegro | 2024-11-03T04:18:41 | Oasis and Ticketmaster Effigy Burned at Edenbridge Bonfire Night | null | https://news.sky.com/story/oasis-and-ticketmaster-effigy-burned-at-edenbridge-bonfire-night-13247303 | 2 | 0 | null | null | null | missing_parsing | Oasis and Ticketmaster effigy burned at Edenbridge Bonfire Night | 2024-11-03T04:00:00Z | null |
Oasis fans who have been looking back in anger after being caught out by dynamic pricing for tickets to their upcoming reunion tour may delight in the sight of an effigy of Ticketmaster being burned for Bonfire Night.
Liam and Noel Gallagher were depicted as puppets on the 11m-tall effigy of the ticket-selling platform at Edenbridge Bonfire Society in Kent.
The town's Bonfire Night celebrations have previously poked fun at politicians.
Image: The Ticketmaster and Oasis brothers effigy before it was set on fire
An effigy of London mayor Sadiq Khan was burned last year over his decision to extend the capital's charge for polluting vehicles, and models of Liz Truss (and a lettuce), Boris Johnson and Donald Trump have also faced the flames.
"We wanted to remind people that it doesn't always have to be politicians who we create for our annual event," said Andrea Deans, one of the creators of the giant effigy.
"The Ticketmaster ticket fiasco has affected a lot of different age groups, such is the appeal of Oasis, and I know many fans were very unhappy... when they discovered the price of the tickets."She said "no one likes being taken advantage of".
Fans were outraged after spending hours queueing for tickets only to find some had more than doubled in price from around £148 to £355 in August.An explanation about the "in-demand standing ticket" price on the Ticketmaster website said: "The event organiser has priced these tickets according to their market value.
"Tickets do not include VIP packages. Availability and pricing are subject to change."Read more:Special guest announced for Oasis reunion showsLiam Gallagher brands sketch about brothers 'excruciating'
Gallagher brothers 'not our target' - 'it is the corporate giant of Ticketmaster'Reece Hook, another creator of the effigy, said: "Although our effigy includes Liam and Noel Gallagher, they are not our target, it is the corporate giant of Ticketmaster we have gone with this year."We are all big Oasis fans and wish them a very successful tour."It comes after promoters warned thousands of Oasis tickets listed on unauthorised sites would start to be cancelled "in the coming weeks".The fallout from the ticket-buying debacle has led to a proposed new law to improve pricing transparency and prevent fans from being ripped off.The UK competition watchdog has said it is looking at the use of the dynamic pricing system.
| 2024-11-08T17:31:39 | null | train |
42,031,138 | cen4 | 2024-11-03T04:20:58 | The Prozac Era. What Next? | null | https://davidhealy.org/the-prozac-era-what-next/ | 11 | 4 | [
42031394,
42035237,
42032731,
42032217
] | null | null | null | null | null | null | null | null | null | train |
42,031,142 | whothatcodeguy | 2024-11-03T04:22:17 | Show HN: Chimney Man – A charming, wintry tale about home invasion and homicide [video] | This is a little off brand for hacker news, but thought folks maybe will enjoy it as alternate Santa lore going into the holidays. | https://www.youtube.com/watch?v=f2bcvmf0nzw | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,143 | k33g | 2024-11-03T04:22:24 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,161 | golly_ned | 2024-11-03T04:25:58 | Ask HN: My director got fired. His rival is taking his place. What to expect? | As title says, my director, my initial hiring manager, was fired, presumably for underperformance. He was apparently pretty broadly disliked, since he came through a startup acquisition and ruffled feathers early on and had continual territory conflicts with the guy who eventually took his place.<p>So my director was very suddenly fired one morning. Just gone. New director shows up.<p>In the shuffle a few things happen. My manager, who manages two teams, will only manage one team, the team I am not on. The team of 3 I am on will be merged with a team of 4 from his side. These teams happened to be working on the same problem space. (They should’ve been one team from the beginning, but due to territory issues and bad blood, they weren’t.)<p>The new director met with me, presumably to figure out what team I will be landing on. The conversation turned out very poorly. I had excellent performance reviews and feel very respected by my peers and manager. I thought he would be reaching out to convince me to stay on that team, which he may have been at first. By the end he seemed intent on making my team out to be underperforming — while my director got fired, this team was performing fine, was my understanding, but my manager’s other team was on fire.<p>So what can I expect might happen? As of the day before my director got fired, I felt very secure in my career. As of now, I feel like I’m on this new director’s shit list.<p>I would probably change companies if not for the fact that the companies stock price tripled in the last year. And I would probably move organizations if not for the fact that I work in a specialty that I can’t work in elsewhere in the company. | null | 40 | 51 | [
42032584,
42031374,
42031250,
42033141,
42031245,
42069281,
42035117,
42034178,
42062985,
42034710,
42033338,
42038554,
42057532,
42037775,
42033164,
42035580,
42035547,
42033755,
42033248,
42057528
] | null | null | null | null | null | null | null | null | null | train |
42,031,167 | codetoli | 2024-11-03T04:28:27 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,169 | sandwichsphinx | 2024-11-03T04:29:14 | Speed, scale and reliability: 25 years of Google datacenter networking evolution | null | https://cloud.google.com/blog/products/networking/speed-scale-reliability-25-years-of-data-center-networking | 288 | 75 | [
42032298,
42031636,
42031796,
42032481,
42031705,
42033171,
42033000,
42032506,
42036239,
42031721,
42032218,
42032950,
42032075,
42034964,
42032535,
42032556
] | null | null | null | null | null | null | null | null | null | train |
42,031,188 | ksec | 2024-11-03T04:32:44 | Unreal Fest Seattle 2024 | null | https://www.unrealengine.com/en-US/blog/catch-up-on-the-big-news-from-unreal-fest-seattle-2024 | 2 | 0 | null | null | null | no_title | null | null | null | null | 2024-11-08T08:09:19 | null | train |
42,031,190 | ksec | 2024-11-03T04:32:59 | WebKit Features in Safari 18.1 | null | https://webkit.org/blog/16188/webkit-features-in-safari-18-1/ | 1 | 0 | null | null | null | no_error | WebKit Features in Safari 18.1 | 2024-10-28T09:30:39-07:00 | Oct 28, 2024
by Jen Simmons |
Today, Safari 18.1 is available for iOS 18.1, iPadOS 18.1, macOS Sequoia 15.1 and visionOS 2.1, as well as macOS Sonoma and macOS Ventura. Two features are newly available with Apple Intelligence, on devices and in languages where available.
Summaries in Reader
Since 2010, Safari Reader has provided an easy way to view articles on the web without navigation or other distractions — formatted for easy reading and presented all on one page. You can adjust the background color, font, and font size. Safari 18.0 brought a refreshed design to Reader, making it even easier to use.
Now in Safari Reader in Safari 18.1, you can tap Summarize to use Apple Intelligence to summarize the article. Longer pages include table of contents. Safari also offers summary highlights for some articles in the Page Menu on macOS, iOS and iPadOS.
Writing Tools
These days, we do a lot of writing on the web. With Apple Intelligence, Safari 18.1 can help you find just the right words. Writing Tools can proofread your text, or rewrite different versions until the tone and wording are just right. And it can summarize selected text with a tap.
WebKit also adds support for the Writing Tools API in WKWebView for enabling and customizing the behavior of Writing Tools in apps built with web technology. Learn more watching Get started with Writing Tools.
For more information about the availability of Apple Intelligence, see apple.com.
Bug Fixes and more
In addition to all the new features, WebKit for Safari 18.1 includes work to polish existing features, including some that help Safari pass even more tests for Interop 2024.
Accessibility
Fixed display: contents on tbody elements preventing table rows from being properly exposed in the accessibility tree.
Fixed the handling of ElementInternals‘s ariaValueNow null values so the right value is exposed to assistive technologies.
Fixed tables with hidden rows reporting wrong counts and blocking access to some rows in VoiceOver.
Fixed role="menu" elements to allow child groups with menuitem children.
Fixed updating the accessibility tree when text underneath an aria-describedby element changes.
Fixed text exposed to assistive technologies when display: contents directly wraps a display: block text container.
Fixed VoiceOver not finding any content in a table when display: table is applied to tbody elements.
Authentication
Fixed an issue using large credential lists with security keys.
CSS
Fixed style container queries querying the root element.
Editing
Fixed deleting content immediately before a <picture> element unexpectedly removing <source> elements.
Fixed inserting text before a <picture> element inserting the text after the element instead.
JavaScript
Fixed incorrect optimization and random non-updated values.
Media
Fixed a bug in WebCodecs where audio and video codecs with pending work could be prematurely garbage collected.
Networking
Fixed a bug where Cross-Origin-Opener-Policy header fields in the response of iframe elements were not ignored, resulting in window.opener being null after multiple cross-origin navigations of the embedder document.
Rendering
Fixed content-visibility to not apply to elements with display: contents or display: none.
Fixed float clearing in the WordPress Classic Editor sidebar layout.
Security
Fixed the ping attribute for <a> elements to be controlled by the connect-src CSP directive.
Web Extensions
Fixed blob: URL downloads failing to trigger from an extension.
WebRTC
Fixed blurry screen sharing for some sites.
WKWebView
Fixed AVIF in WKWebView on macOS. (FB14678252)
Updating to Safari 18.1
Safari 18.1 is available on iOS 18.1, iPadOS 18.1, macOS Sequoia, macOS Sonoma, macOS Ventura, and in visionOS 2.1.
If you are running macOS Sonoma or macOS Ventura, you can update Safari by itself, without updating macOS. Go to > System Settings > General > Software Update and click “More info…” under Updates Available.
To get the latest version of Safari on iPhone, iPad or Apple Vision Pro, go to Settings > General > Software Update, and tap to update.
Feedback
We love hearing from you. To share your thoughts, find us on Mastodon at @[email protected] and @[email protected]. Or send a reply on X to @webkit. You can also follow WebKit on LinkedIn. If you run into any issues, we welcome your feedback on Safari UI (learn more about filing Feedback), or your WebKit bug report about web technologies or Web Inspector. If you run into a website that isn’t working as expected, please file a report at webcompat.com. Filing issues really does make a difference.
Download the latest Safari Technology Preview on macOS to stay at the forefront of the web platform and to use the latest Web Inspector features.
You can also find this information in the Safari 18.1 release notes.
| 2024-11-08T06:47:08 | en | train |
42,031,191 | ksec | 2024-11-03T04:33:13 | Everything we launched at Make with Notion | null | https://www.notion.so/blog/conference-product-releases | 15 | 2 | [
42035223,
42033786
] | null | null | null | null | null | null | null | null | null | train |
42,031,195 | dsubburam | 2024-11-03T04:35:41 | Lawsuit Argues Warrantless Use of Flock Surveillance Cameras Is Unconstitutional | null | https://www.404media.co/lawsuit-argues-warrantless-use-of-flock-surveillance-cameras-is-unconstitutional/ | 12 | 1 | [
42036582
] | null | null | null | null | null | null | null | null | null | train |
42,031,198 | haydenbannz | 2024-11-03T04:36:12 | null | null | null | 1 | null | [
42031199
] | null | true | null | null | null | null | null | null | null | train |
42,031,203 | kp1197 | 2024-11-03T04:37:47 | null | null | null | 1 | null | [
42031204
] | null | true | null | null | null | null | null | null | null | train |
42,031,226 | mixeden | 2024-11-03T04:49:21 | Summarization-Based Document IDs for Language Model Retrieval | null | https://synthical.com/article/Summarization-Based-Document-IDs-for-Generative-Retrieval-with-Language-Models-f26ace53-440c-4a5d-8fe1-5b59887bfe3b | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,235 | AbenezerDaniel | 2024-11-03T04:52:09 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,249 | lsllc | 2024-11-03T04:58:10 | 'Shocking' microplastic research prompts review | null | https://www.bbc.com/news/articles/c4gpe30j0x3o | 7 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,264 | AiswaryaMadhu | 2024-11-03T05:03:47 | null | null | null | 1 | null | [
42031265
] | null | true | null | null | null | null | null | null | null | train |
42,031,271 | komsenapati | 2024-11-03T05:06:55 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,272 | komsenapati | 2024-11-03T05:07:29 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,031,292 | thunderbong | 2024-11-03T05:15:19 | More than 30% of PRs in public GitHub repos are from bots | null | https://twitter.com/valyala/status/1852675989588873244 | 4 | 1 | [
42033873
] | null | null | null | null | null | null | null | null | null | train |
42,031,300 | aard | 2024-11-03T05:19:29 | The Secret Document That Transformed China | null | https://www.npr.org/sections/money/2012/01/20/145360447/the-secret-document-that-transformed-china | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,301 | thriftman | 2024-11-03T05:19:43 | Industry Level Music Production Tool – MIDI AI | null | https://midigen.app/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,031,303 | aard | 2024-11-03T05:20:36 | The State of Agile Software in 2018 | null | https://martinfowler.com/articles/agile-aus-2018.html | 1 | 0 | null | null | null | no_error | The State of Agile Software in 2018 | null | Martin Fowler | On the surface, the world of agile software development is bright,
since it is now mainstream. But the reality is troubling, because much of
what is done is faux-agile, disregarding agile's values and principles. The three
main challenges we should focus on are: fighting the Agile Industrial
Complex and its habit of imposing process upon teams, raising the importance
of technical excellence, and organizing our teams around products (rather than
projects). Despite the problems, the community's great strength is its ability
to learn and adapt, tackling problems that we original manifesto authors
didn't imagine.
This is a transcript of my talk at Agile Australia, Melbourne 2018. I did the talk
off-the-cuff, with just the outline to guide me. I have edited the
transcript to make the text read less incoherently, while following the
track of the talk. You can find a video of the talk at
InfoQ.
How many people have seen me talk here before at Agile Australia? And you
came back? Wow, I'm impressed. If you've seen me talk before at Agile
Australia, or indeed at any conference, you know that pretty much every time I give
a talk, I call it “Software Design in the 21st Century”, or something like that:
because it's a vague title and I can talk about whatever I like. And I was
going to do that again here, but I decided that I'm going to do
something specific: to talk about where we are: the agile community
in 2018. I decided I'd do not something that's a particularly deeply
planned talk with lots of slides and clever diagrams and beautiful transitions
- but just me yakking. Which I've done before,
but it's been a while, so we'll see how we go.
As we look at the state of agile: at a surface level,
things are looking very good.
As we look at the state of agile: in many ways, on a kind of surface level,
things are looking very good. I mean, look at the size of this crowd for
instance. We're huge, we're fitting in this big conference place - actually
not fitting in very well because it was pretty crowded out there. You go to
all sorts of places and you see agile scattered around. Somebody sent me a
Harvard Business Review cover with “agile” on
it. I mean, it's all over the place. That's a big shift from 10 years
ago here, or even longer when we were in that ski place at Snowbird talking
about what the hell we should call ourselves. And it sounds like success, but
you talk to a lot of old-time agilists, people who were doing it before it was
called “agile” back in the late 90s, and there's actually a lot of disquiet, a
lot of disappointment, a lot of unhappiness in the air.
photo: Agile Australia
That's actually not unusual because it's been like that pretty much the
whole time as far as I can tell [1]. And that's
actually a good thing, because that dissatisfaction is a sign of wanting to
improve. But it does lead to that sense of: “why are we struggling”?
1:
I first remember hearing people complain about agile losing its way in 2005-ish.
What's the current
challenge that we're having to face? Back 10 years ago, the challenge was
people taking agile at all seriously. I remember going into one of
the big Thoughtworks clients in Australia. They wanted me to
give them the usual “Martin Fowler come and give us a talk” kind of routine.
Somebody from the client said, “Yeah, we want you to talk about whatever you like,
but don't say anything about the agile stuff.” Which was a bit of an alarming
thing in the mid-2000s when I was talking a lot about agile, but that was the
sense then. A sense that this was something kind-of bad and you don't want to
talk about it.
Now agile is everywhere, it's popular, but there's been an important shift. It
was summed up quite nicely by a colleague of mine who said, “In the old days
when we talked about doing agile, there was always this pushback right from
the beginning from a client, and that would bring out some important
conversations that we would have. Now, they say, 'Oh, yeah, we're doing agile
already', but you go in there and you suddenly find there's some very big
differences to what we expect to be doing.” As Thoughtworks, we like to think
we're very deeply steeped in agile notions, and yet we're going to
a company that says, “Yeah, we're doing agile, it's no problem,” and we find a
very different world to what we expect.
Our challenge now is dealing with faux-agile
Our challenge at the moment isn't making agile a thing that people want to
do, it's dealing with what I call faux-agile: agile that's just the name, but
none of the practices and values in place. Ron Jeffries often refers to it as
“Dark Agile”, or specifically “Dark Scrum”.
This is actually even worse than just pretending to do agile, it's actively using
the name “agile” against the basic principles of what we were trying to do, when we talked
about doing this kind of work in the late 90s and at Snowbird.
So
that's our current battle. It's not about getting agile respectable enough to
have a crowd like this come to a conference like this, it's realizing that a
lot of what people are doing and calling agile, just isn't. We have to
recognize that and fight against it because some people have said, “Oh, we're
going to 'post-agile', we've got to come up with some new word,” - but that
doesn't help the fundamental problem. It's the values and principles that
count and we have to address and keep pushing those forwards and we might as
well use the same label, but we've got to let people know what it really
stands for.[2]
[2]:
The term I use for terms losing their meaning is Semantic Diffusion - and in
that post I argue why we should fight against that process rather than
come up with new terms.
If that's the broad level of the problem, how do we focus on
particular things? I want to focus on three main problem areas that I think
are the ones that I would like to highlight. The ones that I think are most
worthy of our attention.
Our first problem is dealing with the Agile Industrial Complex
The first one of these is what I would call the Agile Industrial
Complex. To be fair, I'm part of it, right? I'm
standing on the stage here talking about agile, indeed we're
all part of it to some degree, many of us are parts of some kind of agile
consulting firms, probably with agile in the title. But a lot of what is being
pushed is being pushed in a way that, as I said, goes really against a lot of
our precepts.
In particular, one of the really central things that
impressed me very much about the early agile advocates was this realization
that people are operating at the best when they choose how they want to work.
When we talked about the Agile Manifesto
and laid out the four value statements, with most of those value statements,
we didn't care very much about what order they came in. But we did have an
opinion about the first one: which is “Individuals and Interactions over
Processes and Tools”. To me that crystallized a very important part of what
agile thinking is about. If you want to succeed in doing software
development, you need to find good people. You need to find good people that
work together at a human level, so they can collaborate effectively. The
choice of what tools they use or what process they should follow is second
order. I thought that's a very important statement to come from what was
basically a gathering of process weenies. I mean we were all process guys to
some extent or other, and yet we were acknowledging that what we were
talking about was actually of secondary importance. What matters is that the
team chooses its own path.
A team should not only choose its process, but continue to evolve
it
It goes further than that, for the team should not just choose
the process that they follow, but they should be actively encouraged to
continue to evolve it and change it as they go. One of the things about any
kind of agile processes or agile method is that it is inherently very
slippery. It changes week to week, month to month. One of the quotes that I
used to flash around people was “if you're doing Extreme Programming the same
way as you were doing it a year ago, you're no longer doing Extreme
Programming”. Because if you don't take charge and you don't alter things to
fit your circumstance, then you are missing the key part of it. There are
various rituals and things that we can set up to make this work, and
retrospectives are
clearly a technique that lots of people find to be really very
central. In fact, I think Ron Jeffries joked that Alistair Cockburn's approach
to agile was “come together in peace and harmony, deliver software every
week, and do a retro every time to figure out how you can improve”. The
retrospective is really such a central part of the practice.
Now, it actually doesn't matter whether you actually have a formal
retrospective. It doesn't matter whether you have four or five labels of
things on your retro board, or exactly how you do the retro. What does matter
is the notion of thinking about what we're doing and how we can do better, and
it is the team that's doing the work that does this, that is the central
thing.
This is a reaction against the whole Frederick Taylor, separate process
people. How many people here know the story of Frederick Taylor and his
approach? [a few hands go up] How many people have come across the name Frederick Taylor or even
heard of him? A few more. A lot more of you should raise your hands. He's
probably one of the most important figures in the history of the 20th century
in terms of how he's actually affected people's day to day lives. He was
from the late 19th century, in America, and he was very interested in
trying to make people more efficient in the industrial workplaces that were
developing at that time. His view of the average worker was that they were
lazy, venal, and stupid. Therefore you didn't want them to decide how
they should make a particular piece of machinery, but somebody else,
somebody who was more intelligent and educated; they should figure out
exactly the best way to do it. Even going down to: do I do this and
then that or do I do that and then this. This is a very scripted sense of motion
and movement. The whole time and motion industry came out of that.
At the heart of this notion was that the people who are doing the work should not
decide how to do it. It should be a separate group of planners who does this,
and that strongly affected manufacturing and factory work through much of the
early 20th century.
And it affected the software industry as well - people said,
“we need software process experts to figure out how to do things, and then
programmers just fit into the slots”. But
interestingly, just as software people were talking about how we need to
kind of follow this very Taylorist notion as the future of software
development, (I heard people saying that back in the 80s and 90s), the
manufacturing world was moving away from it. The whole notion of what was
going on in a lot of manufacturing places was that the people doing
the work need to have much more of a say in this because they actually
see what's happening.
The agile movement was part of trying to push that, to try to say,
“The teams involved in doing the work should decide how it gets done,” because
let's face it, we're talking about software developers here. People who are
well paid, well educated, hopefully, well-motivated people, and so they should
figure out what in their particular case is necessary.
photo: Agile Australia
“Particular case” becomes important because different kinds of software
are different. I've lived most of my life in enterprise applications:
database-backed, GUI/web frontend kind of world. That's what most of the
people I know do but that's very different to designing telephone switches or
designing embedded software. Even within the world that I'm
relatively familiar with, different teams have different kinds of situations,
we have different legacy environments to coordinate with, and we have
different dynamics between individuals on a team. With so many differences,
how can we say there's one way that's going to work for everybody? We can't.
And yet what I'm hearing so much is the Agile Industrial Complex imposing
methods upon people, and that to me is an absolute travesty.
The Agile Industrial Complex imposing methods on people is an
absolute travesty
I was gonna say “tragedy”, but I think “travesty” is the better word because in
the end there is no one-size-fits-all in software development. Even the agile
advocates wouldn't say that agile is necessarily the best thing to use
everywhere. The point is, the team doing work decides how to do it. That is a
fundamental agile principle. That even means if the team doesn't want to work in
an agile way, then agile probably isn't appropriate in that context, and
[not using agile]
is the most agile way they can do things in some kind of strangely
twisted world of logic.
So that's the first problem: the Agile Industrial Complex
and this imposition of one-best-way of doing things. That's something we must
fight against.
The second problem is the lack of recognition of the
importance of technical excellence.
The second problem is the lack of recognition of the
importance of technical excellence to what we do. A lot of agile conferences
I go to don't tend to talk very much about the actual techniques of
writing software. How many people here are software developers by the way?
[a few people raise hands]
Smattering, but very much a minority. The same problem was true at
the Agile Alliance's main conference for quite a while, but they realized that
they were getting a lot of people who were involved in the project management
side and things of that kind, but not very many people who are the technical
people who actually did the work. And that's actually quite tragic. It led to
an even more tragic thing of some software developers saying, “Oh, we need to
create a whole new world for ourselves: the software craftsmanship movement,
where we can go away, get away from all of these business experts and project
managers and business analysts, and just talk about our technical stuff.” But
that's a terrible thing because the whole point of agile is to combine
across these different areas. The lowliest, juniorest programmer tapping away
on JavaScript should be connected to people who are out there thinking about
the business issues and business strategies of the group that they're
working with.
And I'll say a bit more about that in my third point, but that means we've
got to pay attention to these technical skills, and we have to think about how
do we nurture those, how do they grow, how do we make them important? I've
spent the last couple of years as my primary writing exercise coming up with a
new edition of the book about refactoring. How many people here have heard
about refactoring? [a lot of hands] Good. How many people could accurately describe it to
people? [fewer hands] Rather less. It's a very core technique to the whole
agile way of thinking because it fits in with the whole way in which we can
build software in a way that it can change easily. When I summarize agile
to people, I usually say there's two main pieces to it. One, I've already
talked about, the primacy of the
team, and the team's choices of how they do things, but the other is our
ability to change rapidly, to be able to deal with change easily.
I always loved Mary Poppendieck's phrase, “A late change in requirements is
a competitive advantage.” But to do that you need software that's designed in
a way that's able to react to that kind of change. Refactoring is central to
this because refactoring is a disciplined way of making changes. Now it's
a horrible thing when somebody tweets something along the lines of:
“I'm refactoring our software at the moment, so it's going to be broken for
the next two weeks.” Bzzz - that's not refactoring. Refactoring is small changes,
each of which keeps everything working. It doesn't change the observable
behavior of the software, that's its definition. And I should know because I was
the one who got to define it.
Refactoring is lots of small changes, none of which change the
observable part of the software
Refactoring is lots of small changes, none of which change the
observable part of the software, but all of which change its internal
structure.
Usually (you refactor) because you want to make some new feature, and the current internal
structure doesn't fit very well for that new feature. So you change the
structure to make the new feature easy to fit in, all the time refactoring and
not breaking anything, and then you can put it in. Kent Beck summarizes
this: “When you want to make a change, first, make the change easy.
(Warning, this may be hard.) Then make the easy change”. That's a very
crucial way to use refactoring but it goes beyond that because
refactoring is a constant thing. You're looking at some code and you're
trying to say, “What does this do? I don't really understand what this does.
Let me think about this. Let's burrow in. Ah, now I understand what this code
does.” Then, before you move on, in the words of Ward Cunningham:
“You take the understanding out of your head and you put it into the code”: by
restructuring it, by renaming it. Naming is a really important part of all of this.
So that the next time you or somebody else comes through that same piece of
code, you don't have to go through that puzzle exercise.
for each desired change, make the change easy (warning: this may be
hard), then make the easy change
-- Kent Beck
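To make that definition concrete, here is a hypothetical sketch (the order-total example and every name in it are invented for illustration, not taken from the talk) of refactoring as a sequence of small steps, each of which leaves the observable behavior untouched:

```python
# Step 0: the original code. It works, but the intent is buried.
def calc(o):
    t = 0
    for i in o:
        t += i["p"] * i["q"]
    if t > 100:
        t = t * 0.9
    return t

# Step 1: rename things to say what they mean. Behavior is unchanged.
def order_total_v1(items):
    total = 0
    for item in items:
        total += item["p"] * item["q"]
    if total > 100:
        total = total * 0.9
    return total

# Step 2: extract the hidden concept (a bulk discount) into its own
# function. Again, behavior is unchanged -- only the structure moved.
def bulk_discount(total):
    return total * 0.9 if total > 100 else total

def order_total(items):
    subtotal = sum(item["p"] * item["q"] for item in items)
    return bulk_discount(subtotal)

# Each step keeps everything working, so the versions can be checked
# against each other at any point.
order = [{"p": 30, "q": 2}, {"p": 25, "q": 3}]
assert calc(order) == order_total_v1(order) == order_total(order)
print(order_total(order))  # 10% off a 135 subtotal
```

Because each step preserves behavior, the intermediate versions can be checked against each other at any point, which is exactly why refactoring leans so heavily on automated tests.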
Why is this important? Because if you want to make changes, you want to add
things quickly, you've got to quickly understand which parts of the
program matter, what they do, and how they work, so that I can quickly
make that change. This also burrows up into modularity. If I've got a
well-modularized piece of software, instead of having to understand the whole
thing, I can just understand part of it. Technical excellence is about being
able to build that kind of adaptive software.
Then a self-reinforcing principle comes in. Once you realize I can change
software quickly to add new things, I don't try to make the software do
everything I want it to do right at the beginning. Because I don't need to.
Because I'll be able to change things later on. This is the principle of
Yagni - “You aren't gonna need it”. Don't add features to
the software until you need them because if you do, it bloats the software and
makes it harder to understand. But that strategy is totally hopeless without
good refactoring techniques. And then refactoring relies on testing, and
refactoring also relies on continuous integration, and together with
continuous integration, you have the practice of continuous delivery and the
notion that we're gonna be able to release the software very, very frequently.
And in some organizations, we actually do release the software very, very
frequently.
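As a hedged, invented illustration of yagni (none of this comes from the talk), compare a function burdened with speculative options against the version that does only what today's requirement demands:

```python
# Speculative version: options added "just in case". Nobody uses them yet,
# but every reader now has to understand them -- and notice how the options
# already interact badly: a custom template is silently ignored for French.
def greet_speculative(name, language="en", uppercase=False, template=None):
    text = (template or "Hello, {}!").format(name)
    if language == "fr":
        text = "Bonjour, {}!".format(name)
    return text.upper() if uppercase else text

# Yagni version: exactly what is needed today. When a real requirement for
# French greetings arrives, a small refactoring (backed by tests) can
# introduce it cheaply, because the code stayed simple.
def greet(name):
    return f"Hello, {name}!"

assert greet("Ada") == "Hello, Ada!"
```

The speculative options cost comprehension now and may never be used; with tests and refactoring in place, the simple version can grow a real option on the day one is actually needed.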
Now, there's a notion out there that
says that rapid release of software can only be done if you're prepared to put
up with lots and lots of errors. And if you want software to be reliable, you
do things in a much slower, more deliberate way. Wrong. And we're seeing more and
more evidence pile up, that this is very, very wrong. My favorite book of the
year so far (I think it's going to be the best book of the year) is
“Accelerate” by Nicole Forsgren, Jez Humble and Gene Kim. (And I say that with
great sadness because my book's going to come out this year, but I'm hoping
for number two.) In this book they, through looking at lots of
surveys of organizations, show how different practices affect
things. One of the things that they demonstrate is that releasing many times a
day and low defect rates go together. Because you can't release many times a
day unless you figure out how to get your defect rates down. That requires
the kinds of practices we talk about: automated testing, refactoring, yagni -
all of this stuff flows together. The most important thing about Extreme
Programming is that the individual practices of Extreme Programming reinforce
each other.
We should get rid of software projects and use a product-oriented
view
The third thing that I want to stress is the importance of getting rid of
software projects as a notion. Instead we want to switch to a product-oriented view of the world where,
instead of projects that you spin up, run for a while, and then stop, you say,
“Let's focus on things that are much more long-lasting and organize a product
team around that.” Another way of thinking about it is: what are the business capabilities that your
organization has, and then organize the teams around those. These business
capabilities will be long-lasting and will necessarily mean combining together
technical people and the people who are on the business side of things into
the same team.
A popular notion at the moment is the Amazon Two Pizza Team - people talk
about this all the time: “Organize yourself into two-pizza teams.” Maybe with
a little side joke about how American pizzas are really, really big, so they
can be bigger teams than you might think. But when they give that
description, they often miss off something that's clear whenever I
hear Amazon people talk about this, which is that each of those teams needs to
connect - right the way through to the customer. Each team is focused on some
aspect of the customer's experience, some aspect of what Kathy Sierra calls
making the customer kick ass at what they do. This
alters our notion of what that small team does because if that small team is
focused on some piece of customer experience, some way of making the customer
do what they do better, then that tells us how we should draw lines between
our small teams. Now, it's not always easy to do this, but it should, I think
be the driving notion.
photo: Agile Australia
But again, we often see it violated. I went to one supposedly agile
organization. I was chatting with a development team. There were about half a
dozen programmers. There were only four users of the software that they were writing,
senior planners in retail planning. Six developers, four users. They had
never spoken to each other. They weren't allowed to. Instead, there was a
Scrum product owner who was managing the conversation between them. Well,
actually, there had been four product owners over the course of a year. I
mean, what nightmare world of agile is this that they can dare use the word
“agile” to describe it? When [at Snowbird] we were talking about names for agile
software development, Kent Beck suggested “conversational”, because there
should be an ongoing conversation between software developers, business people,
and anybody involved in making that customer experience better.
If I was on that team as a developer, I'd want to be on first-name terms with
all of the users. I would want to be talking. I would want to watch what they
do, I'd want to understand how they think about making their customers happier
and see that whole flow. So we need to think about that. That's my
third challenge.
So my three things, that we should face as challenges:
1. Get rid of the Agile Industrial Complex and the idea of imposing stuff on teams. Let teams work out the way they should work themselves.
2. Raise the importance of technical excellence, and never forget that when writing software, the technology side is really vital.
3. Organize around products.
So that all sounds a little bit negative because I've talked about the
problems of this current state of agile. I have two minutes and 45 seconds
left to offer one reason why I'm not too depressed about the situation. It
comes from probably my favorite thing about the whole Agile Manifesto. This
didn't occur in Snowbird when we wrote it, but occurred about six months later
in Tampa, Florida at OOPSLA where most of
the people who wrote the manifesto were together with a bunch of other
interested people. At this point, the Agile Manifesto had taken off in a way
that we could never have imagined. Suddenly it was clear there was this huge
opportunity for doing interesting things, and many people said, “We
need to set up a real organization here.”
The manifesto authors said “no” to a special role in the
movement's future
One of the questions was: “Should the original 17 people that wrote the
manifesto take a special place in this ongoing effort?” The thing I'm
proud of, that we the 17 authors did, is that we said “no”. We wrote the manifesto. We did
a nice piece of work, we'll be part of what goes on in the future, but we have
no special role in that future. That role has to grow forwards. We said “new people will
come in and do great things”, which is indeed what happened.
One of the ways where that was important was about a big problem with the
people who wrote the manifesto, particularly looking at it now with 2018
glasses. 17 people: 17 white, middle-aged guys.[3] Not a shred of diversity
there, but because we let go, the agile world could incorporate many people
from all sorts of backgrounds who took part. Mary Poppendieck was one
of the big leaders of the early Agile Alliance efforts. Rebecca Parsons, my
boss at Thoughtworks, was the chair of the Agile Alliance for a long time. I'd go
to an agile conference, and I tend to see many more women than I do in other
kinds of software conferences. That's a good thing. Certainly, it wasn't the
work of us at the beginning. We weren't even thinking about that. But the
point was: because we let go, we ended up with a community that could develop
things themselves. That could tackle challenges that we hadn't even imagined
and work on them. And it's that constant learning and growing and changing as
a community, which is our greatest strength.
[3]:
My statement there wasn't entirely accurate. The much missed Mike Beedle was Mexican. I only met him a couple times,
and my memory did a typical bit of white-guy inattentiveness.
because we let go, we ended up with a community that could tackle
problems we hadn't imagined
I spoke not that long ago to someone who is a latecomer to the agile world,
who said that they found a degree of welcoming and openness that wasn't true
of many other groups. That made me really, really happy, because as long
as we can be that flexible, as long as we can
always change, then whatever challenges we face, I think we have a future.
Thank you.
The Death of the Architect
Explaining Software Design (explaining.software)
October 30, 2024
Once upon a time, every project began with the creation of a canonical design document. This was called the system architecture, because it "rightly implie[d] the notion of the arch, or prime, structure."[1]
Then, documents would be written for each module. These would provide detailed instructions for how the module should be implemented. Often, there would be diagrams for the control flow of each subroutine.
And only then, once all the documentation was complete, would the implementation begin. This was seen as a largely mechanical task, akin to compilation. It was generally assigned to the junior members of the team; after all, the hard part was already done.
This approach was a major contributor to the software crisis. Decisions made early in this process would become load-bearing, impossible to change.[2] This, however, only made the design phase seem more important. Countless methodologies were proposed, all of them design-first.
And so, when Kent Beck began to talk about iterative development, people were ready to listen. In his Extreme Programming (XP) methodology, design and implementation were interleaved. "There will never be a time," he said, "when the system 'is designed.' It will always be subject to change, although there will be parts of the system that remain quiet for a while."[3]
We can find a number of familiar ideas in the first edition of Extreme Programming Explained. Beck, for instance, also measures complexity in bits:
Simplicity and communication have a wonderful mutually supportive relationship. The more you communicate, the clearer you can see exactly what needs to be done and the more confidence you have about what really doesn't need to be done. The simpler your system is, the less you have to communicate about, which leads to more complete communication, especially if you can simplify the system enough to require fewer programmers.[4]
And for your design to remain simple, you would need "a clear overall metaphor so you were sure future design changes would tend to follow a convergent path."[5]
In fact, metaphors were central to the XP methodology. A metaphor "helps everyone on the project understand the basic elements and their relationships."[6] And they are especially useful for high-level design:
Architecture is just as important in XP projects as it is in any software project. Part of the architecture is captured by the system metaphor. If you have a good metaphor in place, everyone on the team can tell about how the system as a whole works.[7]
Instead of exhaustive design, Beck wanted just enough design. His system metaphor was something that could be explained in a moment, and was robust to change. It was a pane of frosted glass, a locus.
A year later, Beck's ideas were distilled into the Agile Manifesto.[8] His notion of lightweight design became a preference for "responding to change over following a plan." What's the point of a plan that solves yesterday's problems?
And then, three years after the Manifesto, Beck released the second edition of Extreme Programming Explained. It had been rewritten from scratch. There was not a single mention of metaphors or system architecture. Nor, really, any discussion of the future. In this iteration of XP, you simply moved from moment to moment.
As readers, our instinct is to treat the second edition as a continuation of the first. After all, they have the same author. And there was a reason Beck called it "extreme" programming:
When I first articulated XP, I had the mental image of knobs on a control board. Each knob was a practice that from experience I knew worked well. I would turn all the knobs up to 10 and see what happened.[9]
The second edition, then, could be Beck simply turning Agile's preference for "responding to change" up to 10. If you're always living in the moment, what's the use of architecture or metaphors?
A broader review of the Agile literature, however, reveals a different story. As Robert Martin explains it:
In the years just before and after the signing of the Agile Manifesto, the Metaphor practice was something of an embarrassment for us because we couldn't describe it. We knew it was important, and we could point to some successful examples. But we were unable to effectively articulate what we meant. In several of our talks, lectures, or classes, we simply bailed out and said things like, "You'll know it when you see it."[10]
And this embarrassment, when you look for it, is plain to see. Metaphors are treated like stray parts next to newly constructed furniture; if they're mentioned at all, it's to explain why they probably don't matter. In Domain-Driven Design, for instance, the topic is buried four hundred pages deep:
System metaphor has become a popular approach because it is one of the core practices of extreme programming. Unfortunately, few projects have found really useful metaphors, and people have tried to push the idea into domains where it is counterproductive.11
This was not a refinement; it was a tactical retreat. Despite everyone's best efforts, Beck's ideas about lightweight software design remained stubbornly tacit. And so, they were quietly discarded. With them, the Agile methodology lost any notion of continuity, of describing the future. It was left floating, unmoored, in the eternal now.
Decades later, software design has become something of a backwater. Most writing on the subject can only be called "post-design." It is defined by what it refuses to discuss.
"Programmers," Sandi Metz warns us, "are not psychics."
Practical design does not anticipate what will happen to your application. It simply accepts that something will and that, in the present, you cannot know what.[12]
Our only choice, Ron Jeffries tells us, is to focus on the present:
The source code is also the ONLY document in whatever collection you may have that is guaranteed to reflect reality exactly. As such, it is the only design document that is known to be true. The thoughts, dreams, and fantasies of the designer are only real insofar as they are in the code.[13]
This seems like hard-nosed pragmatism, until you realize that software is a living text. Each snapshot is just a stepping stone. And the path they follow is, undeniably, shaped by our "thoughts, dreams, and fantasies."
There is a smallness to this post-design literature. It confines itself to the syntax, offering heuristics for better code. Sometimes, these heuristics are fenced in by warnings about the failures of the past. But more often, the limitations are treated as self-evident; software design is just a collection of heuristics.
This newsletter (and the underlying book) is an attempt to turn back the clock. It imagines a world in which Beck's ideas about software design were more explainable. And it begins, appropriately enough, with a metaphor.
This post is an excerpt from my (incomplete) book on software design. For more about the book, see the overview.
Migrating terabytes of data instantly (can your ALTER TABLE do this?)
Red Planet Labs blog (blog.redplanetlabs.com), September 30, 2024
Every seasoned developer has been there: whether it’s an urgent requirement change from your business leader or a faulty assumption revealing itself after a production deployment, your data needs to change, and fast.
Maybe a newly-passed tariff law means recalculation of the tax on every product in your retail catalog (and you sell everything). Maybe a user complains that her blog post is timestamped to the year 56634, and you realize you've been writing milliseconds, not seconds, as your epoch time for who knows how long. Or maybe Pluto has just been reclassified and your favorite_planet column urgently needs rectification across millions of astrological enthusiast rows.
Now you’re between a rock and a hard place. Is downtime acceptable while you take the database offline and whip it into shape? That’s a hard “no.” If you’re using SQL, you might be able to express your changes in your database’s arcane API, but even then, you’re left with the laborious job of coordinating your migration with your application deployment (and hopefully you’ve understood the relevant concurrency and locking semantics). If you’re running on NoSQL, you might as well commence with the stages of database grief: denial of severe migration restrictions, bargaining with third-party tools, and finally acceptance that there’s no hope at all. The solutions left to you all rhyme with “tech debt.”
But what if there were a better way?
Today we’re releasing Rama’s new “instant PState migration” feature. For those unfamiliar with Rama, PStates are like databases: they’re durable indexes that are replicated and potentially sharded, and they are structured as arbitrary combinations of maps, sets and lists.
Instant PState migrations are a major leap forward compared to schema migration functionality available in databases: use your own programming language to implement arbitrary schema transformations, deploy them worry-free with a single CLI command, and then watch as the data in your PStates, no matter how large, is instantly migrated in its entirety.
If you want to go straight to the nitty gritty, you can jump to the public documentation or the example in the rama-demo-gallery. Otherwise, let’s take a look at the status quo before diving into a demonstration.
Status quo
SQL
SQL needs no introduction – it’s a tried-and-true tool with built-in support for schema evolution.
SQL (Structured Query Language) is composed of sub-languages, two of which are the Data Definition Language (DDL) and the Data Manipulation Language (DML).
Via DDL, you can specify a table’s schema:
CREATE TABLE golfers (
golfer_id SERIAL PRIMARY KEY,
full_name VARCHAR(100),
handicap_index DECIMAL(4, 2),
total_rounds_played INTEGER
);
Then, maybe months later, you can modify it:
ALTER TABLE golfers
ALTER COLUMN full_name TYPE TEXT,
ADD COLUMN is_experienced BOOLEAN,
ADD COLUMN skill_level VARCHAR(20);
Via DML, you can manipulate the data in your table:
UPDATE golfers
SET
is_experienced = total_rounds_played >= 10,
skill_level = CASE
WHEN total_rounds_played < 10 THEN 'Beginner'
WHEN handicap_index <= 5.0 THEN 'Advanced'
WHEN handicap_index <= 20.0 THEN 'Intermediate'
ELSE 'Beginner'
END;
In this example, an internet amateur golfer database is making some changes:
Change
full_name
to a
TEXT
field (perhaps uber-long names have become fashionable)
Precompute a golfer’s experience indicator and skill level (say, to shave off some milliseconds at render time)
To actually update the production database, they’ll need to wrap the changes in a transaction so that a failure can’t leave the table with unpopulated new columns:
BEGIN;
ALTER TABLE golfers ...;
UPDATE golfers ...;
COMMIT;
Taken together, this demonstrates some powerful functionality:
New attributes can be derived from existing ones
In some cases, a column’s type can be altered “for free”, without reading a single row from disk, as would happen if the only modification was to change
full_name
‘s type from
VARCHAR(50)
to
TEXT
SQL is sufficiently expressive to describe changes to multiple columns in a single operation, and smart enough to apply them in a single full-table scan. Doing so should offer significant speed-up compared to doing multiple, separate full-table scans.
However, there are some areas that could use improvement:
Changes must be specified using nothing but SQL. This will likely mean re-implementation of code and duplication of business logic that’s already been expressed in the application programming language. For example, the 10-round experience threshold and skill level tiers above would be duplicated in both SQL and whichever programming language the application uses.
Deployment of the migration will take hand-holding and coordination. If the table is massive, then scanning it may take hours or days, during which the old schema must still be assumed by application code. If there’s an unexpected fault (say, power outage), the transaction may fail and require manual re-attempt.
Some migrations may require locking entire tables for the duration of the migration, inducing downtime as reads and writes are blocked. While there may be third-party tools available that minimize downtime, these generally work by providing a phased rollout of the new schema, which may still involve an extended period of backfilling during which the old schema must be used, as is the case with the pgroll plugin for PostgreSQL.
Under the hood, the SQL database must always retain all state necessary to perform a rollback while in the middle of a commit; in practice, this could mean holding on to duplicate data for every single migrated row until the commit goes through.
If the database is sharded across multiple nodes, then deployment becomes immensely trickier, requiring careful thought and attention to ensuring its coordinated success on all shards.
NoSQL
The category of “NoSQL” databases is vast and varied, but we’ll try and summarize the landscape with respect to schema and data migrations.
In general, NoSQL databases eschew the relative power of SQL in order to gain horizontal scalability. Any schema migration capabilities had by SQL are likewise mostly thrown out with the bathwater.
Some NoSQL databases retain a distinctly SQL-ish interface, as exemplified by the column-oriented Apache Cassandra’s ALTER TABLE command. This command enables immediate addition or logical deletion of a column, but little else (its support for making even very limited changes to a column’s type was removed). A search for “Cassandra schema migration” yields primarily links to third-party tools.
Indeed, the general theme across NoSQL databases is a total lack of built-in support for anything resembling schema migrations. This might seem sensible for the category of document databases, which are often referred to as schemaless or as having dynamic schemas. These databases are lax about the shape of the data stored. Each record is a collection of key-value attributes; the attributes are an open set, and the only one required is the all-important one used as the key for lookup and partitioning. For example, the CLI command to define the golfers table in DynamoDB might look like:
aws dynamodb create-table \
--table-name golfers \
--attribute-definitions AttributeName=golfer_id,AttributeType=S \
--key-schema AttributeName=golfer_id,KeyType=HASH
Notice that Dynamo isn’t told what other attributes the golfers will have; it’s got no idea that it will ultimately be storing fields like
full_name
and
total_rounds_played
.
But what happens when changes must be made to the data's shape and contents? The answer from document databases is: you're on your own, kid. One option is to roll your own migration system by writing code that scans an entire dataset and rewrites everything, but this is tedious, non-transactional, and error-prone. The other options boil down to variants of migrate-on-read, wherein the tier of the codebase which reads from the database is updated to tolerate different versions of the data at read time. This might mean deserializing records as instances of either GolferV1, GolferV2, etc. When a record is updated, it's written to the database using the new schema. Optionally, additional code may be written to perform a more eager write-on-first-read wherein the record is immediately written back to the database the first time it happens to be read following deployment of a new schema.
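The migrate-on-read pattern just described can be sketched in a few lines. This is an illustrative sketch, not any particular database's API; the map-based record shape, the schemaVersion marker, and the 10-round threshold are assumptions made for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of migrate-on-read: the read path must tolerate every
// historical schema version and upgrade records lazily as they are seen.
public class MigrateOnRead {
  // Hypothetical: a stored record is a map of attributes, with a
  // "schemaVersion" marker written alongside the data (absent means V1).
  public static Map<String, Object> readGolfer(Map<String, Object> stored) {
    long version = ((Number) stored.getOrDefault("schemaVersion", 1L)).longValue();
    Map<String, Object> golfer = new HashMap<>(stored);
    if (version < 2) {
      // V1 -> V2: derive the precomputed field that V2 readers expect.
      long rounds = ((Number) golfer.get("totalRoundsPlayed")).longValue();
      golfer.put("isExperienced", rounds >= 10);
      golfer.put("schemaVersion", 2L);
      // A write-on-first-read variant would also write the upgraded record
      // back to the database here, racing against concurrent writers.
    }
    return golfer;
  }
}
```

Note that this version-dispatch code can never be deleted until every stored record is known to have been rewritten, which is exactly the baggage discussed next.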
The migrate-on-read approach comes with lots of baggage. It requires tedious, imperative code to be written and deployed to the database access tier. Since many NoSQL databases provide little in the way of locking, this code may need to explicitly handle race conditions inherent to reading and re-writing a record that might have been updated in the interim. Worse, this code can never be removed unless you are certain that every single record has been re-written to the database, which can only be determined by carefully scanning the entire dataset. This might mean incurring a significant performance penalty on every read, forever.
Many NoSQL databases have ecosystems of third-party tools around them, some of which build out support for schema-migration capabilities. Mongock is one such tool, a Java library that supports code-first migrations for MongoDB and DynamoDB. While such tools will inevitably appear as godsends to developers in tight spots, they’ll never offer the ease-of-use and efficiency achievable via first-party support.
NewSQL
We should note that there is a class of “NewSQL” databases which attempt to bring NoSQL’s horizontal scalability to SQL. Schema migrations with these databases are mostly the same as SQL’s, except that they may provide assistance with coordinating changes across multiple partitions. For example, CockroachDB’s online schema changes actually enable background migration of partitioned tables, followed by a coordinated “switch-over” to the new schema on all nodes. While this is a commendable effort, it still suffers from the same limitations and expressivity issues that hamstring standard SQL schema migrations, and it’s far from instantaneous. We feel that an entirely new paradigm is necessary.
Schema evolution in Rama
Rama was built from the ground up to enable rapid iteration on software backends.
With this in mind, let’s take a quick look at Rama’s existing support for schema evolution. Then, we’ll take a detailed dive into today’s newly-released feature, instant PState migrations.
Existing support
Rama has had built-in support for schema evolution since day one.
Unlike systems built with SQL or document databases, systems built with Rama use an event sourcing architecture which separates raw facts, i.e. depot entries, from the indexes (or “views”) built from them, i.e. PStates.
This design wipes out an entire class of problems in traditional databases: by recording data in terms of irrevocable facts rather than overwriting fields in a database record, no fact once learned is ever lost to time.
With Rama, when your requirements change, you can materialize new PStates using the entirety of your depot data. For example, continuing with the above golf scenario, suppose a change must be made as to how a golfer's handicap is computed. Thankfully, the event sourcing architecture means that the raw facts required are available: a depot record for each golf round completed by a golfer, e.g. GolfRound(golferId, finishedAt, score).
Even if the handicap calculation requires examining every golf round ever played by a golfer, Rama happily enables its calculation via use of the “start from beginning” option on a depot subscription. Here’s how it’s done with Rama’s Java API:
setup.declareDepot("*rounds-depot", Depot.hashBy(ExtractGolferId.class));
StreamTopology golfRounds = topologies.stream("golf-rounds");
golfRounds.pstate("$$handicaps", PState.mapSchema(Long.class, // golfer-id
Double.class // handicap
));
golfRounds.source("*rounds-depot", StreamSourceOptions.startFromBeginning()).out("*round")
.each((Round round) -> round.golferId, "*round").out("*golfer-id")
.localSelect("$$handicaps", Path.key("*golfer-id")).out("*handicap")
// updateHandicap performs the actual arithmetic to calculate the new handicap
.each(GolfModule::updatedHandicap, "*handicap", "*round").out("*new-handicap")
.localTransform("$$handicaps", Path.termVal("*new-handicap"));
And here’s the equivalent code expressed in the Clojure API:
(let [golf-rounds (stream-topology topologies "golf-rounds")]
(declare-pstate golf-rounds
$$handicaps
(map-schema Long ; golfer-id
Double ; handicap
))
(<<sources golf-rounds
(source> *rounds-depot
{:start-from :beginning}
:> {:as *round :keys [*golfer-id]})
(local-select> [(keypath *golfer-id)] $$handicaps
:> *handicap)
;; updated-handicap performs the arithmetic to calculate the new handicap
(updated-handicap *handicap *round :> *new-handicap)
(local-transform> [(keypath *golfer-id) (termval *new-handicap)]
$$handicaps)))
Having the ability to easily compute new indexes based on the entirety of the raw data is immensely powerful, but there are some scenarios where it might be infeasible or impossible to compute the desired view in this manner:
If you’ve enabled depot trimming to cut down on storage costs, then you won’t have access to each and every historical depot record.
If your existing PStates have data that was non-deterministically generated, you might find that you need to describe your change in terms of existing views rather than in terms of your depot records.
Scanning millions of depot records might be egregiously inefficient – for example, if your depot records describe many repeated updates to a given entity, and you already have a PState view on the “current” state of the entity, then it might mean lots of wasted effort to examine all of the obviated depot entries corresponding to that entity.
In these scenarios, Rama’s new instant PState migration feature is here to help.
New: instant PState migrations
Just as Rama reifies decades of the industry’s collective learnings into a cohesive set of abstractions, our new instant PState migration feature draws from SQL’s expressivity and NoSQL’s scalability.
In Rama, PState migrations are:
Expressive – just as Rama PStates support infinite, arbitrary combinations of elemental data structures, so do migrations support arbitrary transformations expressed in the programming language you’re already using.
Instant – after a quick deployment, all PState reads will immediately return migrated data, regardless of the volume of data.
Durable and fault-tolerant – in the background, Rama takes care of durably persisting your changes in a consistent, fault-tolerant manner.
Rama achieves this via a simple, easy-to-reason-about design. On every PState read until the PState is durably migrated, Rama automatically applies the user-supplied migration function before returning the data to the client. In the background, Rama works on durably migrating the PState; it does so unobtrusively on the task thread as part of the same streaming and microbatches your application is already doing.
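The shape of that design can be pictured with a small in-memory analogy. This is only a sketch of the idea, not Rama's actual internals: reads pipe the stored value through the migration function until a background pass has durably rewritten everything, after which reads return raw stored data again.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

// Illustrative sketch: reads apply the migration function on the way out,
// while a background pass rewrites entries durably and then flips a flag.
public class MigratingStore<K, V> {
  private final Map<K, V> store = new ConcurrentHashMap<>();
  private final UnaryOperator<V> migration;
  private volatile boolean durablyMigrated = false;

  public MigratingStore(UnaryOperator<V> migration) { this.migration = migration; }

  public void put(K k, V v) { store.put(k, v); }

  // Every read returns migrated data immediately, whether or not the
  // background pass has reached this entry yet (hence: idempotent fn).
  public V get(K k) {
    V raw = store.get(k);
    if (raw == null) return null;
    return durablyMigrated ? raw : migration.apply(raw);
  }

  // Background pass: rewrite each entry in place, then stop migrating reads.
  public void runBackgroundMigration() {
    store.replaceAll((k, v) -> migration.apply(v));
    durablyMigrated = true;
  }
}
```

In Rama the "background pass" is interleaved with normal streaming and microbatch work on the task thread, so there is no separate scan process to operate.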
Let’s take a detailed look at each facet of migration.
Expressive
PState migrations are specified as code, and the heart of each migration is a function written in your programming language of choice. Specifying your migration as an arbitrary function is tremendously powerful. Rather than being confined to a limited, predefined set of operations, as is often the case with SQL migrations, with PState migrations you have the Turing-complete power of your language, your entire codebase and all its dependencies available to you.
When you declare a PState, you provide a schema describing the shape of the data it contains. At certain locations within the schema, you may now specify a migration.
Continuing with the golf example, the golfers PState schema expressed via the Java API might look like this:
PState.mapSchema(String.class,
PState.fixedKeysSchema(
"fullName", String.class,
"handicapIndex", Double.class,
"totalRoundsPlayed", Long.class))
Or, using the Clojure API:
(map-schema
String ; golfer-id
(fixed-keys-schema
{:full-name String
:handicap-index Double
:total-rounds-played Long}))
When it comes time to add a golfer’s experience indicator and skill level, you can specify a migration using code you already have. Here it is with the Java API:
private static Object enrichGolfer(Object o) {
Map m = (Map)o;
if (m.get("skillLevel") == null) {
Map n = new HashMap();
n.putAll(m);
    Boolean isExperienced = (Long)m.get("totalRoundsPlayed") >= 10;
n.put("isExperienced", isExperienced);
Double handicapIndex = (Double)m.get("handicapIndex");
if (!isExperienced) {
n.put("skillLevel", "beginner");
} else if (handicapIndex < 5.0) {
n.put("skillLevel", "advanced");
} else if (handicapIndex < 20.0) {
n.put("skillLevel", "intermediate");
} else {
n.put("skillLevel", "beginner");
}
return n;
} else {
return o;
}
}
PState.mapSchema(String.class,
PState.migrated(
PState.fixedKeysSchema(
"fullName", String.class,
"handicapIndex", Double.class,
"totalRoundsPlayed", Long.class,
"isExperienced", Boolean.class,
"skillLevel", String.class),
"precompute-experience-and-skill",
GolfModule::enrichGolfer));
And the equivalent Clojure code:
(defn is-experienced?
[{:keys [total-rounds-played]}]
(>= total-rounds-played 10))
(defn skill-level
[{:as golfer :keys [handicap-index]}]
(cond
(not (is-experienced? golfer)) "beginner"
(< handicap-index 5.0) "advanced"
(< handicap-index 20.0) "intermediate"
:else "beginner"))
(defn enrich-golfer
[golfer]
(-> golfer
(update :is-experienced #(or % (is-experienced? golfer)))
(update :skill-level #(or % (skill-level golfer)))))
(map-schema
String
(migrated
(fixed-keys-schema
{:full-name String
:handicap-index Double
:total-rounds-played Long
:is-experienced Boolean
:skill-level String})
"precompute-experience-and-skill"
enrich-golfer
[(fixed-key-additions #{:is-experienced :skill-level})]))
The new API addition demonstrated here is the migrated function. It takes three or four arguments:
the new PState schema
a migration ID string
a function from old-data to new data
optionally, some options describing the migration
The migration function used here is enrich-golfer, a function from golfer to golfer, which calculates the :is-experienced and :skill-level keys unless they're already set.
It’s important to note that the migration function must be idempotent. Rama will invoke the migration function on every read of a migrated location until the PState is completely durably migrated in the background, whether or not a particular entry has been migrated yet or not. This means that the migration function may run against both yet-to-be-migrated and already-migrated inputs. This design choice gives total control to the user: rather than adding definite storage and computational overhead to the implementation, e.g. state for every single PState entry indicating whether it has been migrated, the user’s migration function may switch on state which is already present, e.g. the migrated entity’s type.
The migration ID is used to determine whether successive migrations to the same PState are the same or different. It is only relevant when you perform a module update while a PState is undergoing migration. In such cases, Rama will look at the migration IDs in the PState’s schema and restart the migration from scratch if any of them has changed; otherwise, it continues where it left off. For example, consider the following cases:
You’ve deployed a module update with a migration on your massive
$$golfers
PState which will take several days to complete. However, in the midst of migration an unrelated hot-fix must be made to some other topology. Another module update may safely be made with the
$$golfers
migration left untouched, and the background migration will resume where it left off.
Or, suppose you’ve deployed a migration on the
$$golfers
PState, but while it’s running you realize there’s a bug in your migration function that’s somehow made it through your staging environment testing. In this case you don’t have to wait for background migration to complete – you can fix your migration function, alter the migration’s ID, and do another module update immediately. Background migration will immediately be restarted from scratch.
There are also some options available for making certain kinds of structural changes to your schema; see the docs for more details.
Instant
With the migrated schema in place, committed to version control and built into a jar, all that’s left is to do is deploy it with single command:
rama deploy \
--action update \
--jar golf-application-0.0.1.jar \
--module 'com.mycompany.GolfModule'
This is the same command used for any ordinary module update, and this will do the same thing as any other module update: spin up new workers running the new code and gracefully hand over writes and reads before shutting down the old workers. It will take no longer than if there were no migrations specified in the new code.
Once the module update concludes, every read of the migrated location will return migrated data, whether made via a distributed query, a select on a foreign PState, or a topology read. Rama automatically applies the migration function at read time. This means that your topology code and client queries can immediately expect to see the migrated data, without ever having to worry about handling the old schema or content.
Durable and Fault Tolerant
After deploying a migration, Rama begins iterating over your migrated PStates and re-writing migrated data back to disk. Like all PState reads and writes, this happens on the task thread, so there are no races. Rama does migration work as part of the streaming event batches and microbatches that are already occurring, so the additional overhead of background migration is minimal.
The rate of migration is tuned primarily via four dynamic options, two apiece for streaming and microbatching:
topology.stream.migration.max.paths.per.second
topology.microbatch.migration.max.paths.per.second
topology.stream.migration.max.paths.per.batch
topology.microbatch.migration.max.paths.per.batch
With these options, you may tune the target number of paths for Rama to migrate each second, and limit the amount of migration work done in each batch. In our testing with the default dynamic option values, background migration work added about 15% and 7% task group load for streaming and microbatch topologies respectively, with one million paths per partition migrated in about 3 hours 15 minutes and 2 hours 45 minutes respectively (but this will depend on your hardware, append rate, and other configuration). If your Rama cluster has 128 partitions, this comes out to about 40M and 46M paths migrated per hour respectively.
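Those cluster-wide figures follow directly from the per-partition rates, since each partition migrates independently. A quick sanity check of the arithmetic (the inputs are the numbers quoted above, not additional measurements):

```java
public class MigrationRate {
  // Cluster-wide migration throughput: per-partition rates simply add up
  // across partitions, so total = partitions * (paths / hours).
  public static double pathsPerHour(int partitions, long pathsPerPartition, double hours) {
    return partitions * (pathsPerPartition / hours);
  }

  public static void main(String[] args) {
    // 128 partitions, 1M paths each, ~3h15m (streaming): ~39.4M paths/hour
    System.out.println(pathsPerHour(128, 1_000_000L, 3.25));
    // 128 partitions, 1M paths each, ~2h45m (microbatch): ~46.5M paths/hour
    System.out.println(pathsPerHour(128, 1_000_000L, 2.75));
  }
}
```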
Remember, Rama applications can be scaled up or down with a single CLI command, so if you need a little extra CPU to perform a migration or want to increase its rate, it’s trivial to do.
Migrations are done in a fault tolerant manner; they will progress and eventually complete even in the face of leader switches, worker death, and network disconnection issues, with no intervention from a cluster operator required.
Migration status details are visible in the UI, at the top-level modules page down through to the individual PState pages. If the monitoring module is deployed, detailed migration progress metrics are also available.
These three screenshots taken from the cluster UI of one of our test clusters show how migration status is surfaced at the module, module instance, and PState levels:
On an individual PState’s page, the PState’s schema, migration status, and collection of tasks undergoing migration are displayed:
If the monitoring module is deployed, then migration progress metrics are also available per-PState:
Once your migration completes, you are free to remove the migration from your source code and forget it ever happened.
Conclusion
Schema evolution is an inevitable part of application development. Existing databases have varied levels of support for it: none at the low end, but even at the high end, SQL databases leave much to be desired in terms of expressivity, operational ease, and fault tolerance.
Rama was built with schema evolution in mind: with event-sourcing at its core, you’ll never “forget” anything once known, and you’ll always have the ability to derive new PState views from existing depot data.
With Rama’s new instant PState migration feature, the story gets even better: you now have the power to update your PStates’ schemas and data in-place, via the powerful programming language you’re already using, instantly and without any operational pain.
As always, we’re excited to see what kinds of novel applications are unlocked by this new leap forward in development ease.
| 2024-11-07T23:26:38 | en | train |
42,031,380 | gls2ro | 2024-11-03T05:52:05 | Scientists found a clear link between red meat and cancer | null | https://bgr.com/science/scientists-found-a-clear-link-between-red-meat-and-cancer/ | 10 | 6 | [
42034660,
42031381
] | null | null | null | null | null | null | null | null | null | train |
42,031,409 | Jr23_xd | 2024-11-03T06:03:12 | Selling My AI Startup | null | https://nujoom.ai/ | 2 | 4 | [
42031410,
42031414,
42031430
] | null | null | null | null | null | null | null | null | null | train |
42,031,422 | hyperific | 2024-11-03T06:07:28 | Cottingley Fairies | null | https://en.wikipedia.org/wiki/Cottingley_Fairies | 2 | 0 | [
42032193
] | null | null | null | null | null | null | null | null | null | train |