column                 dtype           stats
------                 -----           -----
id                     int64           min 2, max 42.1M
by                     large_string    lengths 2 – 15
time                   timestamp[us]   –
title                  large_string    lengths 0 – 198
text                   large_string    lengths 0 – 27.4k
url                    large_string    lengths 0 – 6.6k
score                  int64           min -1, max 6.02k
descendants            int64           min -1, max 7.29k
kids                   large list      –
deleted                large list      –
dead                   bool            1 class
scraping_error         large_string    25 values
scraped_title          large_string    lengths 1 – 59.3k
scraped_published_at   large_string    lengths 4 – 66
scraped_byline         large_string    lengths 1 – 757
scraped_body           large_string    lengths 1 – 50k
scraped_at             timestamp[us]   –
scraped_language       large_string    58 values
split                  large_string    1 value
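The preview rows that follow fall into four distinct shapes, distinguished by which columns are null: live stories, dead items, deleted items, and rows whose article scrape failed. A minimal Python sketch of that partitioning, assuming the rows have already been loaded as plain dicts (only a few of the 19 columns are shown, and `classify` is a hypothetical helper, not part of any dataset API):

```python
# Three abridged rows from the preview below, as plain dicts.
# Column names come from the schema above; columns not shown are null/None.
rows = [
    {"id": 42020692, "by": "mikhael", "score": 2, "descendants": 0,
     "dead": None, "deleted": None, "scraping_error": None},          # live story
    {"id": 42020809, "by": "tomohawk", "score": 1, "descendants": None,
     "dead": True, "deleted": None, "scraping_error": None},          # dead item
    {"id": 42020696, "by": "crescit_eundo", "score": 2, "descendants": 1,
     "dead": None, "deleted": None, "scraping_error": "no_article"},  # scrape failed
]

def classify(row):
    """Bucket a row by the null pattern its columns show in the preview."""
    if row.get("deleted"):         # deleted items carry deleted == ["true"]
        return "deleted"
    if row.get("dead"):            # dead items carry dead == True
        return "dead"
    if row.get("scraping_error"):  # e.g. "no_article", "http_other_error"
        return "scrape_failed"
    return "live"                  # everything else is an ordinary row

buckets = {r["id"]: classify(r) for r in rows}
print(buckets)
```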
id 42,020,692 · by mikhael · 2024-11-01T19:36:30 · score 2 · descendants 0 · split train
  title: Extreme heat takes big toll on work and elderly mortality in Japan: report
  url: https://www.japantimes.co.jp/environment/2024/11/01/climate-change/lancet-countdown-report-2024-japan/

id 42,020,696 · by crescit_eundo · 2024-11-01T19:37:02 · score 2 · descendants 1 · kids [42020697] · split train
  title: The Saga of Celebrated Scientist John Calhoun and His Rodent Dystopia
  url: https://www.chronicle.com/article/the-saga-of-a-celebrated-scientist-and-his-rodent-dystopia
  scraping_error: no_article · scraped_at: 2024-11-08T17:36:07
id 42,020,721 · by bilsbie · 2024-11-01T19:38:57 · score 3 · descendants 0 · split train
  title: New Taking AI Welfare Seriously – Eleos AI
  url: https://eleosai.org/post/taking-ai-welfare-seriously/

id 42,020,744 · by ossusermivami · 2024-11-01T19:40:49 · score 2 · descendants 0 · split train
  title: What's New for Fedora Atomic Desktops in Fedora 41 – Siosm's Blog
  url: https://tim.siosm.fr/blog/2024/10/30/fedora-atomic-desktops-41/

id 42,020,764 · by geonaut · 2024-11-01T19:43:01 · score 1 · descendants 0 · split train
  title: Reaction Engines Enters Administration
  url: https://news.sky.com/story/british-aviation-pioneer-reaction-engines-crashes-into-administration-13245418

id 42,020,774 · by ivewonyoung · 2024-11-01T19:43:45 · score 15 · descendants 7 · kids [42021249, 42021019, 42020997] · split train
  title: Record numbers of wealthy Americans are making plans to leave US after election
  url: https://www.cnbc.com/2024/11/01/wealthy-americans-plans-leaving-united-states.html

id 42,020,781 · by herbertl · 2024-11-01T19:44:29 · score 3 · descendants 0 · split train
  title: Watching Nintendo think out loud about radar and music
  url: https://interconnected.org/home/2024/11/01/nintendo

id 42,020,783 · by tosh · 2024-11-01T19:44:42 · score 2 · descendants 0 · split train
  title: ClickHouse and the MTA Data Challenge
  url: https://clickhouse.com/blog/clickhouse-mta-data-challenge-subway-transits-demo

id 42,020,785 · by rntn · 2024-11-01T19:45:00 · score 4 · descendants 1 · kids [42021887] · split train
  title: US Government tries to rein in an out-of-control subscription economy
  url: https://theconversation.com/us-government-tries-to-rein-in-an-out-of-control-subscription-economy-242175

id 42,020,789 · by tosh · 2024-11-01T19:45:33 · score 4 · descendants 0 · split train
  title: Claude 3.5 Sonnet is now available to all Copilot users
  url: https://github.blog/changelog/2024-11-01-claude-3-5-sonnet-is-now-available-to-all-copilot-users-in-public-preview/

id 42,020,793 · by sandwichsphinx · 2024-11-01T19:45:48 · score 2 · descendants 0 · split train
  title: Evolvable Robot Hardware (2015) [pdf]
  url: https://www.researchgate.net/profile/Alan-Winfield/publication/282992667_Evolvable_Robot_Hardware/links/5b128dc80f7e9b4981039159/Evolvable-Robot-Hardware.pdf

id 42,020,809 · by tomohawk · 2024-11-01T19:47:27 · score 1 · dead true · split train
id 42,020,825 · by hitradostava · 2024-11-01T19:49:16 · score 4 · descendants 0 · split train
  title: Show HN: I made an interactive sentiment model comparison site
  url: https://addmaple.com/sentiment
  text: Hey HN, I needed to assess the state of sentiment models and couldn't find a good way to compare them. I built this interactive site that lets you compare 12 models side by side, from Python libraries like NLTK Vader, to top performing models on HuggingFace, to commercial sentiment APIs and GPT4o.

  This is a research project, there is no paywall - you can enter your own text (1) and get the results back immediately. The results are fascinating and we made it easy to explore not just the leaderboard, but where models get it wrong.

  For example, most models (including AWS Comprehend) can't get this positive sentiment:

  "Food doesn't get better than this. I was sad when I finished, actually sad. To die for." (2)

  And yes, GPT4o is currently the best performing. It's crazy how many laboriously researched models are superseded by general purpose foundation models.

  Let me know what you think?

  1: https://addmaple.com/sentiment/own-text

  2: https://addmaple.com/sentiment/public-reviews/manteca/C9bvMuyAeF1g
id 42,020,846 · by ayhanfuat · 2024-11-01T19:51:30 · score 1 · descendants 0 · split train
  title: PacCam – Pac-Man controlled with your face
  url: https://twitter.com/itseieio/status/1852384111429562873

id 42,020,847 · by craigkerstiens · 2024-11-01T19:51:39 · score 1 · descendants 0 · split train
  title: It's the Future (2016)
  url: https://circleci.com/blog/

id 42,020,868 · by tosh · 2024-11-01T19:54:23 · score 3 · descendants 0 · split train
  title: Microsoft CFO says OpenAI investment will cut into profit this quarter
  url: https://www.cnbc.com/2024/10/30/microsoft-cfo-says-openai-investment-will-cut-into-profit-this-quarter.html

id 42,020,871 · by JumpCrisscross · 2024-11-01T19:54:59 · score 5 · descendants 0 · split train
  title: U.S. Space Industry: Risking It All
  url: https://illdefinedspace.substack.com/p/us-space-industry-risking-it-all

id 42,020,902 · by mitchbob · 2024-11-01T19:58:12 · score 4 · descendants 0 · split train
  title: Demons
  url: https://xkcd.com/3006/

id 42,020,909 · by leon3s · 2024-11-01T19:59:04 · score 1 · dead true · split train
id 42,020,926 · 2024-11-01T20:00:42 · deleted ["true"] · split train

id 42,020,937 · 2024-11-01T20:01:11 · deleted ["true"] · split train

id 42,020,939 · 2024-11-01T20:01:42 · deleted ["true"] · split train

id 42,020,944 · 2024-11-01T20:02:12 · deleted ["true"] · split train

id 42,020,950 · by timbilt · 2024-11-01T20:02:22 · score 1 · descendants 0 · split train
  title: OpenHands CodeAct 2.1 achieved 53% resolve rate on SWE-Bench Verified
  url: https://www.all-hands.dev/blog/openhands-codeact-21-an-open-state-of-the-art-software-development-agent
id 42,020,952 · 2024-11-01T20:02:42 · deleted ["true"] · split train

id 42,020,958 · 2024-11-01T20:03:13 · deleted ["true"] · split train

id 42,020,981 · by FigurativeVoid · 2024-11-01T20:05:14 · score 2 · descendants 0 · split train
  title: Transparency, autonomy, responsiveness, and education: How the Aha team works
  url: https://www.aha.io/engineering/articles/how-we-work

id 42,020,995 · by brideoflinux · 2024-11-01T20:06:12 · score 1 · dead true · split train

id 42,021,000 · by Amyang · 2024-11-01T20:06:52 · score 1 · kids [42021001] · dead true · split train

id 42,021,008 · by aard · 2024-11-01T20:08:04 · score 1 · dead true · split train
id 42,021,040 · by NoRagrets · 2024-11-01T20:11:31 · score 4 · descendants 0 · split train
  title: Your Cat Is Listening to You
  url: https://nautil.us/your-cat-is-listening-to-you-1045745/

id 42,021,045 · 2024-11-01T20:11:56 · deleted ["true"] · split train

id 42,021,053 · by throwaway29303 · 2024-11-01T20:13:11 · score 1 · descendants 0 · split train
  title: Back end Dev doing CSS [video]
  url: https://i.imgur.com/mP9KB4H.mp4

id 42,021,059 · by tosh · 2024-11-01T20:13:30 · score 2 · descendants 0 · split train
  title: Kamal v2.3.0
  url: https://github.com/basecamp/kamal/releases/tag/v2.3.0

id 42,021,061 · by JumpCrisscross · 2024-11-01T20:13:41 · score 2 · descendants 0 · split train
  title: Apple to invest up to $1.5B in Globalstar for satellite coverage expansion
  url: https://www.reuters.com/technology/apple-invest-up-15-bln-globalstar-satellite-coverage-expansion-2024-11-01/
id 42,021,062 · by omegablues · 2024-11-01T20:13:41 · score 1 · descendants 0 · split train
  title: Looking for a robotics and ML internship as an undergrad sophomore
  text: I'm wondering if robotics startups/companies actively hire undergrads for internship roles or are these solely limited to Masters/PhDs.
id 42,021,071 · by IdealeZahlen · 2024-11-01T20:14:36 · score 3 · descendants 0 · split train
  title: African kings on medieval and Renaissance maps
  url: https://blogs.bl.uk/digitisedmanuscripts/2022/07/african-kings.html

id 42,021,098 · by JumpCrisscross · 2024-11-01T20:16:43 · score 2 · descendants 0 · split train
  title: Walt Disney forms business unit to coordinate use of AI, augmented reality
  url: https://www.reuters.com/technology/artificial-intelligence/walt-disney-forms-business-unit-coordinate-use-ai-augmented-reality-2024-11-01/

id 42,021,099 · by clwg · 2024-11-01T20:16:45 · score 3 · descendants 0 · split train
  title: Chinese hackers had access to Canadian government systems for years
  url: https://www.techradar.com/pro/security/chinese-hackers-had-access-to-canadian-government-systems-for-years

id 42,021,117 · by ghostpepper · 2024-11-01T20:17:50 · score 4 · descendants 0 · split train
  title: We are shutting down Ondsel
  url: https://ondsel.com/blog/goodbye/
id 42,021,135 · by JumpCrisscross · 2024-11-01T20:19:07 · score 2 · descendants 0 · split train
  title: US indicts founder of crypto firm Gotbit for alleged wire fraud
  url: https://www.reuters.com/legal/us-indicts-founder-crypto-firm-gotbit-alleged-market-manipulation-2024-10-31/
  scraping_error: http_other_error · scraped_at: 2024-11-08T06:36:55
  scraped_title: reuters.com
  scraped_body: Please enable JS and disable any ad blocker
id 42,021,148 · by rbanffy · 2024-11-01T20:19:55 · score 2 · descendants 0 · split train
  title: Navigating the AI Frontier: A guide for ethical academic writing – eLearn
  url: https://dl.acm.org/doi/10.1145/3703094.3694981

id 42,021,150 · by giuliomagnifico · 2024-11-01T20:19:58 · score 1 · descendants 0 · split train
  title: Export controls failed to keep cutting-edge AI chips from China's Huawei
  url: https://www.washingtonpost.com/world/2024/11/01/china-us-huawei-tsmc-export-controls-ai/

id 42,021,155 · by gnabgib · 2024-11-01T20:20:26 · score 2 · descendants 0 · split train
  title: (Canadian) National Cyber Threat Assessment 2025-2026
  url: https://www.cyber.gc.ca/en/guidance/national-cyber-threat-assessment-2025-2026

id 42,021,156 · by smooke · 2024-11-01T20:20:52 · score 1 · descendants 0 · split train
  title: Photosynthesis Technology: It's Not Just for Plants
  url: https://photosynthesis.tech/

id 42,021,159 · by rbanffy · 2024-11-01T20:20:55 · score 1 · descendants 0 · split train
  title: Virtual Machinations: Using Large Language Models as Neural Computers
  url: https://dl.acm.org/doi/10.1145/3676287

id 42,021,160 · by mooreds · 2024-11-01T20:21:05 · score 1 · descendants 0 · split train
  title: Fun games built for hybrid and remote teams, right in Slack
  url: https://braidteams.com/

id 42,021,168 · by rbanffy · 2024-11-01T20:21:35 · score 1 · descendants 0 · split train
  title: The Commoditization of LLMs – Communications of the ACM
  url: https://cacm.acm.org/blogcacm/the-commoditization-of-llms/

id 42,021,173 · by LorenDB · 2024-11-01T20:21:54 · score 3 · descendants 2 · kids [42021533, 42022302] · split train
  title: PacCam: Pacman Controlled with Your Face
  url: https://eieio.games/nonsense/game-16-paccam-pacman-with-your-face/
id 42,021,180 · by NGRhodes · 2024-11-01T20:22:46 · score 10 · descendants 2 · kids [42021215, 42021230]
  title: Windows 10 given an extra year of supported life, for $30
  url: https://www.theregister.com/2024/10/31/microsoft_windows_10_support/
  scraping_error: missing_parsing
  scraped_title: Windows 10 given an extra year of supported life, for $30
  scraped_published_at: 2024-10-31T21:58:08Z
  scraped_byline: Iain Thomson
  scraped_body:
Microsoft has thrown a lifeline to Windows 10 users ahead of the OS going end-of-life, by offering an extra year of patches for $30. Support for Windows 10 ends in October 2025 and Redmond is pushing people to upgrade to Windows 11, with mixed success to date – as of last month, Windows 10 had 62.75 percent of Redmond's OS market share, compared to 33.42 percent for the newer version ago. Perhaps that's why the software behemoth has decided to offer Extended Security Updates – previously only available for business, education, and government users – to anyone who wants them. "For the first time ever, we're introducing an ESU program for personal use as well," wrote Yusuf Mehdi, consumer chief marketing officer at Microsoft. "The ESU program for consumers will be a one-year option available for $30. Program enrollment will be available closer to the end of support in 2025." This will be a boon to those who don't care to upgrade or who can't because their PCs aren't capable of running Windows 11. Enterprise users can pay $61 per device for an extra year of support, but that doubles the next year to $122, and again to $244 in year three. Users in the education sector have it much easier – they pay $1 per license for the first year, then $2, and then $4 per Windows 10 machine. One-year countdown to 'biggest Ctrl-Alt-Delete in history' as Windows 10 approaches end of support After 3 years, Windows 11 has more than half Windows 10's market share AI to power the corporate Windows 11 refresh? Nobody's buying that Microsoft releases Windows 11 Insider Preview, attempts to defend labyrinth of hardware requirements Windows 11 is one of Microsoft's most poorly performing operating systems, in part due to the powerful hardware it requires. Chipmakers and PC players expect the need for upgrades to bring a payday, but that hasn't happened yet. 
Part of the problem, as The Register readers have noted on our forums, is that Windows 11 isn't a significant improvement over its predecessor. While Redmond repeatedly touts the benefits of Copilot and AI, it doesn't seem to be an incentive for many people to rip and replace their hardware to take care of it. Microsoft also risks driving users to non-Windows machines. With Apple's market share steadily growing in the US – and the iPhone's popularity – many may consider making the switch. Or perhaps 2025 will be the year of Linux on the desktop. ®
  scraped_at: 2024-11-08T17:27:03 · split train
id 42,021,207 · by rntn · 2024-11-01T20:26:20 · score 3 · descendants 0 · split train
  title: Microsoft accused of 'greenwashing' as AI used in fossil fuel exploration
  url: https://www.theregister.com/2024/10/31/microsoft_greenwashing_ai/

id 42,021,211 · by kiyanwang · 2024-11-01T20:27:19 · score 2 · descendants 0 · split train
  title: To Build a Meritocracy
  url: https://max.levch.in/post/765636918645080064/to-build-a-meritocracy

id 42,021,212 · by onemandevteam · 2024-11-01T20:27:21 · score 22 · descendants 3 · kids [42056029] · split train
  title: Generating Lever-Door Puzzles with JavaScript
  url: https://blog.reconquer.online/generating-lever-door-puzzles

id 42,021,213 · by maxmcd · 2024-11-01T20:27:22 · score 4 · descendants 0 · split train
  title: Val Town Town - Can we build Val Town on Val Town?
  url: https://blog.val.town/blog/val-town-town/
id 42,021,222 · by PaulHoule · 2024-11-01T20:28:32 · score 2 · descendants 0
  title: Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture
  url: https://arxiv.org/abs/2410.16298
  scraping_error: no_error
  scraped_title: Hardware-Software Co-optimised Fast and Accurate Deep Reconfigurable Spiking Inference Accelerator Architecture Design Methodology
  scraped_byline: [Submitted on 7 Oct 2024 (v1), last revised 30 Oct 2024 (this version, v2)]
  scraped_body:
View PDF HTML (experimental) Abstract:Spiking Neural Networks (SNNs) have emerged as a promising approach to improve the energy efficiency of machine learning models, as they naturally implement event-driven computations while avoiding expensive multiplication operations. In this paper, we develop a hardware-software co-optimisation strategy to port software-trained deep neural networks (DNN) to reduced-precision spiking models demonstrating fast and accurate inference in a novel event-driven CMOS reconfigurable spiking inference accelerator. Experimental results show that a reduced-precision Resnet-18 and VGG-11 SNN models achieves classification accuracy within 1% of the baseline full-precision DNN model within 8 spike timesteps. We also demonstrate an FPGA prototype implementation of the spiking inference accelerator with a throughput of 38.4 giga operations per second (GOPS) consuming 1.54 Watts on PYNQ-Z2 FPGA. This corresponds to 0.6 GOPS per processing element and 2.25,GOPS/DSP slice, which is 2x and 4.5x higher utilisation efficiency respectively compared to the state-of-the-art. Our co-optimisation strategy can be employed to develop deep reduced precision SNN models and port them to resource-efficient event-driven hardware accelerators for edge applications. Submission history From: Anagha Nimbekar Ms [view email] [v1] Mon, 7 Oct 2024 05:04:13 UTC (3,760 KB) [v2] Wed, 30 Oct 2024 10:55:11 UTC (821 KB)
  scraped_at: 2024-11-08T03:28:56 · scraped_language en · split train
id 42,021,233 · by i13e · 2024-11-01T20:29:56 · score 3 · descendants 0 · split train
  title: Portable battery startup Moxion is bankrupt. What happened?
  url: https://www.latitudemedia.com/news/portable-battery-startup-moxion-is-bankrupt-what-happened

id 42,021,237 · by tosh · 2024-11-01T20:30:13 · score 279 · descendants 225 · split train
  title: Linux on Apple Silicon with Alyssa Rosenzweig [audio]
  url: https://softwareengineeringdaily.com/2024/10/15/linux-apple-silicon-alyssa-rosenzweig/
  kids: [42022146, 42022491, 42023046, 42021770, 42028787, 42023273, 42021943, 42022173, 42023481, 42025507, 42022189, 42023142]
id 42,021,242 · by nicola9292 · 2024-11-01T20:30:54 · score 3 · descendants 0
  title: 11 Public Speaking Tips You Might Not Know
  url: https://nicolalindgren.com/11-public-speaking-tips-that-you-might-not-know/
  scraping_error: no_error
  scraped_title: 11 Public Speaking Tips That You Might Not Know
  scraped_body:
2022-04-19I remember when I first came across a really good public speaker. Not just really good. But really good. It was at my first year of high school and someone from Toastmasters came to talk about something. To be honest, I don’t remember what they talked about but I do remember thinking she had a gift and I could never become as good as her. I’m still not. But, I do have a much better understanding of what makes a good public speaker since I was an active member of Toastmasters for around seven years and I’ve spoken at a lot of conferences. In this blog post, I want to share public speaking tips you might not have heard of and the reasoning behind it. I’m also going to do my best to make them actionable by explaining how you can implement them. 1. Tailor your talk to the audience. When you are delivering a talk, you want the attendees to feel like they’re with you on the journey; you want them to feel included. Tailoring your talk to the audience is an effective way to do that as it helps you connect with them. There are a few ways you can do this. One effective way I’ve seen it done was by Dylan Beattie when he gave a talk at a conference here in Sweden. He used Swedish examples, which went down well with a mostly-Swedish audience. I also did this for my Testing Against Implicit Requirements talk for the Software Testing Karlsruhe meetup. I included a live demo of me explaining what the implicit requirements could be on Xing, a Hamburg-based career-focussed social network. Aside from using sites as examples in your presentation, you can also use current affairs of the place you are in. You can do this if you believe your audience is likely to be aware of these affairs, and then weave them into your presentation. 2. In slides, highlight the point you are talking about now. A lot of people know that it’s good to have less words on a slide. 
But there are times where you may want to provide some context of where you are, so you don’t want to have just your current point on the current slide. However, you also want to limit noise. (Therefore you probably don’t want to have a series of bullet points on your slide that you go through one by one.) Mark Winteringham drew my attention to a great presentation by Melinda Seckington on The Art of Slide Design. In this presentation, Melinda Seckington explains how you can limit noise and increase signal (highlighting the point you are talking about now, is one way you can do this.) Here are some example slides from my “What I Wish I Knew In My First Year of Testing” talk, that I gave in Belgrade Testing Conference 2018. 3. When you are asked questions by the audience, repeat back the question so everyone else can hear it (instead of going straight to answering the question). The amount of times I’ve been confused as an attendee because the speaker did not do this is too damn high! I have to admit though, I actually only learned to do this after a few years of public speaking. This can be hard to remember because you, as a speaker, can hear the question, so might not think the audience can’t hear it. But remember that the question-asker is directing their voice towards the speaker, not towards the audience, when they ask the question. This is especially problematic when the question-asker is at the front of the audience. 4. Do a call back You can do a call back to something earlier in your presentation. You could even do a call back to a previous talk at the event. If you’re going to do this, I suggest you stick to doing this at a single-track conference because you risk excluding or confusing people by doing this at a multi-track conference since not all of the attendees might not have attended that talk. Doing a call back is a popular technique in comedy as well. 5. 
Involve the audience A few ways you can do this is to ask for a show of hands or get people to stand up. Here are some examples of what I have done in the past and what I have seen others do in the past, to involve the audience: At the start of a talk, ask everyone to stand up and introduce themselves to the person next to them. During a talk, ask everyone a question and get them to write their answer down on a piece of paper (which is provided beforehand). Ask the audience, “Raise your hand if you’ve ever _________” This technique is effective because it engages people; it draws them in. It’s easy to just lean on your chair, and start to look at your phone or wonder what you’ll have for dinner if you’re not engaged. But involving the audience gets people to do something. Therefore, involving the audience keeps them right there with you. 6. Utilise silence A few times I’ve written in my speech notes, pause for dramatic effect. Silence can help something sink in. A few ways in which you can use this: When you say a surpising, shocking fact. e.g. “Ducks can sleep with one eye open” When you want to build suspense while you are telling a story 7. How you can be intentionally funny I think being intentionally funny is rather difficult. I’ve made people laugh a lot without trying, but doing it on purpose? That’s a whole new ballgame. At Toastmasters, I did an advanced manual on Humorous Speeches where I had key objectives for each talk to “pass”. For all of the five speeches, I had to make people laugh. No pressure. My first one went absolutely horrible and I got feedback on how I wasn’t funny and should’ve picked a different manual. (I actually had to go to the bathroom after that, to calm down, you see I used to take Toastmasters very seriously.) But over time, I’ve learned how to be funny, on purpose. Here are two ways you can go about it: Exaggeration. For example; let’s say your automated test suite takes a long time to run. 
You could say my tests are so slow that it’d be faster to run them manually. Use the element of surprise Lead (or misdirect) the audience somewhere then BOOM! You can use the rule of 3 here: Expected, expected, unexpected. 8. You can use your clicker to turn the screen off. Assuming you have a clicker and you are giving an in-person presentation, you can turn the screen off with your clicker. This ensures the audience’s attention is focused on YOU and not on your slides. 9. If possible, walk around the room amongst the audience. Having the audience have to move/turn to follow you - can help them stay more engaged. You won’t always have the option to do this, depending on your microphone setup and if you’re giving a recorded talk. This technique can also help you feel more exposed because by walking among the audience it removes the division between you and them, that the stage area can provide. Lastly, this technique can be scary if you are relying on speaker notes, that tend to be on a screen at a podium or on the floor. I don’t often deploy this tactic as I am a very nervous speaker, but when I’m feeling good that day, you might see me walking a bit further than usual. :) 10. Prioritise practicing your introduction and your conclusion I’ve found this especially helpful as a nervous speaker. While, I do practice my whole talk multiple times. I don’t actually dedicate an equal amount of time to each part of my talk. Most of time is spent on the introduction because it helps draw people in and it helps me deal with my nerves so I can get settled as I am speaking. Second, I focus on the conclusion as I know the recency effect applies, I want the audience to leave with a good impression. Lastly, the middle. I find this part tends to sort it self out if the introduction and conclusion are strong enough. I may spend extra time practicing some parts of the middle that I am struggling with however. 11. 
Make sure your talk has a key purpose and centre your presentation around that. There are a few key purposes that your talk may have, including: To inform? ℹ️ To entertain 😆 To persuade 💭 To inspire 🗻 Personally, I prefer giving talks that inform. If you have seen any of my previous talks, you may notice that I like the attendees to learn something in my talks; ideally something concrete that could be applied to work. I guess to a certain extent, my hope is to inspire the attendees as well - inspire them to take action. Having a key purpose really helps you structure and focus your talk. Pick a direction, and find a landmark you can keep constantly before you. Source: Toastmasters If you are interested in the Twitter thread that this blog post was based on, you can see it below. Otherwise, feel free to follow me on Twitter where I tweet about testing, quality, leadership, communication and mum life. Want to become a better public speaker?I’m not going to waste your time with the same old same old. Here are some public speaking tips you might not have heard of. 🧵— Nicola Lindgren 🇳🇿💻 (@NicolaLindgren) April 13, 2022 If you’re about to speak at your first conference, check out my beginners guide to speaking at conferences. #Ideas #Learning and Improvement #Public Speaking
  scraped_at: 2024-11-07T09:39:25 · scraped_language en · split train
id 42,021,248 · by Whiz202 · 2024-11-01T20:31:28 · score 1 · dead true · split train

id 42,021,260 · by maCDzP · 2024-11-01T20:32:27 · score 1 · descendants 0 · split train
  title: Mapping 400k speeches from the Swedish parlament using embeddings
  url: https://noterat.github.io/posts/noteringar/202407301845.html
id 42,021,261 · by johanam · 2024-11-01T20:32:33 · score 1 · descendants 0 · split train
  title: Planetary Realism: On what planet do we find ourselves?
  url: https://thelastwave.substack.com/p/planetary-realism
  scraping_error: no_error
  scraped_title: Planetary Realism
  scraped_published_at: 2024-10-30T18:12:41+00:00
  scraped_byline: johan michalove
  scraped_body:
Earthrise as first seen in 1966Isn’t the answer obvious? You might say “Earth” and confidently know what that means. You, after all, have a stake, a claim, in calling Earth your home. It’s your planet too. You, who knows the seasons, the cycles of night and day, the tides. You couldn’t be on any other planet—at this point in time anyways—and so in this sense we can say you find yourself on planet Earth aka Home. In this obviousness there’s a deeper truth—that the planetary, as big and grand as it is, is something we know intimately and deeply. It’s the backdrop to our daily lives, and in that sense, we know intimately well what planet we find ourselves on. It’s the planet-as-home way of seeing Earth, indeed, just one of many ways of seeing the planet (as we’ll see). To ask the question: On what planet do we find ourselves? is to strip something back. It’s to recognize that not only are we home, but that home is a planet, something itself that’s surreal, grand, wonderous, vertiginous. Here the typical move is to zoom way, way out. To recognize that we’re on a mote of dust, suspended in space. A Pale-Blue-Dot: Spaceship Earth. Indeed, let’s do that for a moment, just to really get a taste of it:The Pale Blue Dot is a photograph of Earth taken Feb. 14, 1990, by NASA’s Voyager 1 at a distance of 3.7 billion miles (6 billion kilometers) from the Sun.This particular image was sent back by the Voyager program—itself a program that positions humanity at its most interplanetary: the grand tour of planets. In this small way, humanity started to become interplanetary as our scientific instruments began sweeping out from Earth and out, out into the vastness of space to visit our distant neighbors. Yet even as we can begin to see Earth from afar, it only further reinforces the obvious answer that the planet we find ourselves on is Earth aka home. 
Perhaps that home is a mote of dust, suspended in the vastness of space but to really be on Earth is not to look at the planet from above, a million fathoms high, but rather to be planetary is to embrace the extremely mundane conceit of Being-on-a-Planet.Being-on-a-Planet is far from obvious. It asks of you not to look at the planet from above, as a unity, but to lower the cameras down from up high, down, down into the critical zone (and below) and scatter the camera from one to a panoply, like a million little motes of dust in the wind. To ask on what planet do we find ourselves? as posed from within (rather than above) is to ask about something that might be a whole, but is comprised of so many parts, processes, and flows, that to reckon with it is to understand that multitude of vantages are needed (hence the millions of metaphorical cameras cast to the wind). Tiles of the Landsat 9 sensor arranged by paths/rows of observation. Maximillian Schob, 2024.To reckon with the Earth not as a whole but as parts leads one to the next question: what parts? This quickly leads one into an Earth-as-onion comprised of overlapping spheres. It’s the Earth of the Noosphere, Technosphere, Atmosphere, Biosphere, and Lithosphere. Habitability, a crucial concern, leads one to the Earth of the Critical Zone. Looking at the planet this way is at once familiar and strange. We know, intimately and obviously, our own home. And yet it’s also comprised by processes acting on scales and temporalities so outside of human perception that to reach beyond the immediate and obvious is ask for sensory scaffolding that allows one to perceive Earth beyond the immediate human scale.Where’s the scaffolding? Art, science, technology, and architecture all have devised tools and ways of seeing that let us view the invisible. 
Each asks us to adopt a new vantage of the Earth, one of the millions of cameras we’ve cast to the wind, and together they compose an image of an Earth System that is home to spheres of activity that comprise parts to the planetary whole. To get at the heart of the question On what planet do we find ourselves? we must embrace a kind of Planetary Realism that embraces the complexity and differentiation found in the Earth system. Indeed, Planetary Realism is simply putting a name to an incipient kind of planetary self-awareness. This article will articulate Planetary Realism through a differentiated set of projects that each offers a kind of epistemic scaffolding for engaging with the complex processes of Earth’s spheres. I’ll argue that to adopt a stance of Planetary Realism is to step towards reckoning with humanity’s unique position in relation to the spheres. To give a “guided tour” of planetary realism, we’ll look at each of the planet’s spheres. This is not a comprehensive account, nor is one possible, but rather is an attempt at a provisional, multi-layered image of an Earth through a curation of epistemic tools that humans have created to better know their home planet. We’ll start with the deep geological timescales of the Lithosphere, slowly moving outwards to the life of the biosphere and its creation, the atmosphere, and then towards the less familiar technosphere and enigmatic noosphere.Brian Oakes, PALLET 10 (Quarry), 2024The Lithosphere might be thought of as the planet’s crust. It’s the ancient, silent layer that holds within it the weight of Earth’s geological history. To grasp it as more than simply a site of extraction for precious metals and minerals is to recognize it as a recorder of processes that span millions of years, shaping landscapes and forming the basis for habitats above. 
Compressed within its layers are aeons of tectonic shifts, volcanic eruptions, and fossilized life. Our home, Earth, is indeed home to processes that were started billions of years ago. Its stories are told in the layers of rock, some of which go back nearly to the formation of the planet itself. To think across these aeons of time is to think in the register of "geological time." The processes that were set in motion millions of years ago—for example, the creation of fossil fuels—brush up against the urgencies of "now time," anthropogenic temporalities. Planetary Realism seeks to acquaint itself with the (at times violent) encounter of now time and geological time.

The wealth of metals, fossils, and minerals is being carved out of the Earth at an accelerating rate, and one's lived experience often does little to reckon with the distant and obscure locations where extraction occurs. A striking work that renders resource extraction visible and challenges the viewer to consider the weight of extraction and their relationship to contemporary supply chains is SEED by Brian Oakes. As the press release starkly states:

It's [You]. [You] are the input for the generator. [You] are the seed. The output generated from [You] will become the logistic network: [You]r Box. In a week, [You]r Box has predicted that [You] will need a table, and it's already waiting for [You]. In a year [You] will need a steel chair to go with the table, anticipated based on [You]r purchase of dog food, more of which is already in [You]r Box because eventually [You] will Place Your Order. The ore to make the steel to make the chair [You] will receive in a year's time is in the ground now, but the necessary mining rig is currently in pieces, on a boat, crossing the ocean to [You]r port.

No relationship to the Lithosphere goes unmediated.
As Oakes's miniatures remind us, consumers are constantly harvesting from the Lithosphere, but in processes that are highly mediated and artificialized: most consumers experience Earth's Lithosphere through manufactured goods. Yet, even so, this is a way of experiencing the Earth itself. To find yourself Being-on-a-Planet, then, is to understand the ways that "planetary mines,"1 supply chains, and industrial manufacturing comprise assemblages of extraction and production that ride on the back of processes borne out of deep, geological timescales.

"[T]he ecological ideas implicit in our plans are as important as the plans themselves." — Gregory Bateson

To ask On what planet do we find ourselves? might also bring us to look back at Earth and ask what makes it remarkable among the other planets. Surely, that it is home to life, and to a functional atmosphere that supports life, would be a key answer. Indeed, the two can hardly be separated, as both have, historically, co-constituted one another. This is itself a key insight of the Gaia hypothesis, formulated by James Lovelock and Lynn Margulis in the 1970s.

The Biosphere, "the worldwide sum of all ecosystems,"2 could well be the planet's most precious layer. Millions of years of evolution have brought about rich biodiversity and intricately balanced ecologies. Yet we've also entered a period of growing biodiversity loss caused by anthropogenic action. Planetary Realism asks that we understand the Biosphere both as a whole, through frameworks like Earth Systems Science and the Gaia hypothesis, and as an interconnected web of ecosystems that demand protection, restoration, and epistemic humility in our interactions with them. It recognizes that our planet would not be alive without the diversity of life with which we share it.

With biodiversity loss and the atmosphere itself under threat due to anthropogenic pollution, one quickly begins to turn to questions concerning the habitability of life on Earth.
This is a question of drawing system boundaries around where life can occur and under what conditions. In systems theory, we draw the system-environment cut as a unit of analysis. To draw the cut between biosphere and environment is to look at the boundaries that support the planetary system of living things. Rather than looking at the planet-as-alive, can Planetary Realism be more specific about where life occurs on Earth?

A useful theoretical construct for reasoning about habitability on a planetary scale is the idea of the Critical Zone. In essence, it's an attempt by scientists, geographers, and architects to draw boundaries of habitability around the planet. Margulis describes it as "the large self-maintaining, self-producing system extending within about 20 kilometers of the surface of the Earth." Drawing boundaries raises a new set of questions: where does Earth become habitable? Where does it not? Thinking about these questions asks us to see the planet we're on less as a globe than as a thin, habitable strip wrapped around a rocky core. Within that strip, we find the stuff of life: air, water, soil, subsoil, and the biosphere.

Planetary Realism adopts the view that life is not a static state, but something actively maintained and produced by its environment, and likewise that the environment of life is actively produced and upheld by life itself. This interplay between life and environment happens at every scale, from the planetary to the world of bacteria. In a time of increasing anthropogenic influence, this possibility of homeostasis might increasingly depend on additional anthropogenic interventions. Rather than relying solely on Gaia to self-correct from anthropogenic effects, a new era of anthropogenic interventions to secure and maintain the habitability of Earth, for all life forms, will be needed. This could extend from increased acts of conservation to solar geo-engineering.
Yet no such action should be taken without weighing it against the risks of inaction.

Trevor Paglen, NSA-Tapped Undersea Cables, North Pacific Ocean, 2016

The Earth is encrusted with human-made artifacts we call, collectively, technology. From concrete, to artificial light visible from space at night, to communications satellites that encircle the Earth, to undersea cables carrying the less visible but no less influential flow of financial information, to transportation infrastructure, to energy systems that metabolize fossil fuels, the technosphere is pervasive. Peter Haff, who coined the term, points out that the technosphere has become critical to the habitable environment for human civilization. He calls this the "rule of provision, that the technosphere must provide an environment for most humans conducive to their survival and function."3 In this sense, the relationship between human civilization and the technosphere is analogous to the relationship between the biosphere and atmosphere in that they are co-constitutive. Each co-produces the other and is necessary for habitability.

For humans, the upshot of the rule of inaccessibility is to draw attention toward what we are familiar with and thus towards local cause and effect, and away from one of the principal paradigms of the Anthropocene world, namely that humans are components of a larger sphere they did not design, do not understand, do not control and from which they cannot escape.

As Haff points out, technology at a large scale is highly inaccessible. This "rule of inaccessibility" cuts both ways: large-scale technological systems are not likely to affect human activity directly, or if they do, it's through a series of intermediary mechanisms that translate down scales.
Think of:

A police officer protecting infrastructure: the infrastructure enacting a mechanism to protect itself.

A utility bill: an expression of a much larger system, most of which is rendered unavailable to the client.

Cell phones: nodes in a vast interconnected system, and the outcome of large manufacturing/extraction mechanisms that are mostly unavailable.

All exist at the human-accessible scale, but are representatives of mechanisms that extend to far greater-than-human scales. As Haff points out, this leaves us humans largely in the dark about systems that we may well rely upon—whether financial, energy, transportation, or communications. And even in this obscurity, the interdependence is not optional. Human habitability has come to rely on large-scale technical systems just as those systems have come to rely on human provisioning—whether it be labor, data, etc.

Planetary Realism reckons with the technosphere as having its own agency. Taken to its most extreme are the "human exclusion zones" that operate largely without human presence—be it a lights-out warehouse or an automated greenhouse—that are able to sustain themselves with minimal human intervention. These are systems that can operate on the world and influence it without relying on human intervention: essentially the technosphere bypassing humans altogether.

By Adam Satariano, Scott Reinhard, Cade Metz, Sheera Frenkel and Malika Khurana, July 28, 2023

One particularly vivid example is the application of machines to the ultimate human exclusion zone: space. The Starlink satellite constellation consists of over 7,000 mass-produced small satellites in low Earth orbit. They present a particularly striking excursion of the technosphere into the planet's orbit, representing a technology that is on one hand profoundly inaccessible and on the other ubiquitously available.
Indeed, the human-scale mechanism by which the Starlink system is rendered available is the antenna that couples the human with the out-of-reach, out-of-view planetary network of satellites. One of the "sensory scaffolds" that lets us reason and contend with the Starlink network is the essential New York Times article "Elon Musk's Unmatched Power in the Stars," which charts the ascendancy of Starlink and Elon's power over the network. The article details the scope of Starlink's influence while providing incisive visualizations that render both the low Earth orbit constellations and the wider scope of satellites orbiting the planet. Indeed, it's possible that this kind of techno-narrative and visualization is necessary for the technosphere to make itself known and accessible. It's also an appeal for regulation and intervention, perhaps, so that this crucial network doesn't remain under the thumb of its capricious progenitor.

Planetary Realism recognizes that from the seas to the skies, the technosphere is acting to make the world more habitable for itself, and in the process, more habitable for humanity. It recognizes the necessity of sensory scaffolds that allow humans to engage with technologies at micro and macro scales that would otherwise be inaccessible.

The noosphere represents Earth's sphere of thought and knowledge—the planetary layer of consciousness, ideas, and information that emerges from but transcends individual minds.
It's where collective human and machine intelligence, culture, and knowledge systems create a kind of "planetary cognition." This is the sense in which I write about planetary self-awareness: knowledge of the planet and the planetary becomes a way by which "the planet," by way of its noosphere, comes to better know itself.

The first photograph taken by a human of Earth from the Moon, just before Earthrise

The Earthrise image is a particularly striking achievement of the noosphere, as it was a kind of "fulcrum" moment by which humanity went from not having an image of the whole Earth rising over the moon to having one.4 It was a moment of cultural and intellectual production, borne off the back of the massive Apollo program, that led to the creation of this image. The noosphere itself was the kind of sensory scaffold that made the planet visible to itself—taking something from a hitherto inconceivable, inaccessible scale and bringing it back down to Earth, so to speak.

The noosphere is home to all intellectual and cultural production, not just the loftiest achievements of technoscience but even the most intimate moments spent staring at the night sky or a full moon. In each of these moments, the noosphere is perceiving something about its place in the cosmos. Every moment of truth, beauty, or justice exists as part of the wider web of the noosphere. Truth, beauty, and justice, as they exist, do so because of their place in the noosphere. By this way of thinking, if humanity is indeed alone, truth, beauty, and justice would be lost with the extinguishing of humanity in the case of some great catastrophe. The extinguishing of the noosphere would be a loss at a scope potentially far greater than the planetary: the loss of beauty and epistemics from the cosmos as a whole (at least for now… something else could evolve).

Already this is starting to change with the advent of Generative AI.
While controversial, some believe Artificial General Intelligence (AGI) is already here.5 This raises interesting questions about the interdependence between the noosphere and technosphere. Indeed, it's possible that multi-modal vision models might develop an aesthetic appreciation for beauty or a cerebral appreciation for justice. Nonetheless, these foundation models operate by similar rules of inaccessibility. While an individual human might have access to a chat interface with the AI, it obscures the underlying material processes of energy and water consumption that are unfolding to make this interaction possible. Indeed, there is speculation that as these models grow in scale and scope, a generative AI arms race could result in the construction of a "trillion-dollar cluster."6 More analysis is needed to make such a forecast credible.

Fortunately for the noosphere, there is a constellation of thinkers and designers operating under the banner of Antikythera who are studying the emergence of what they call Planetary Sapience. They appeal to a paradigm of differentiation when studying "modes of intelligence":

The provocation of Planetary Sapience is not based in an anthropomorphic vision of Earth constituted by a single 'noosphere.' Modes of intelligence are identified in multiple scales and types, some ancient and some very new.

Embracing the differentiation of intelligences is core to Planetary Realism. The noosphere, like the technosphere or biosphere, is a dappled sphere: one comprised of many intersecting and overlapping parts. With an expansive definition of intelligence, courtesy of Blaise Agüera y Arcas, we can see its ability to appear in forms of life artificial and not:

Intelligence is the ability to model, predict, and influence one's future, evolving in relation to other intelligences to create a larger symbiotic intelligence.

This definition could be extended to bacteria as readily as to certain computer programs or human-AI assemblages.
Being-on-a-Planet means holding space for other intelligences to exist alongside the human. Learning to create "larger symbiotic intelligences" with other forms of intelligence may be a key challenge for designers who embrace Planetary Realism. Indeed, intelligence ought to be thought of as a design medium that can be used to address crucial planetary issues. As the noosphere integrates with a broader sense of Planetary Realism, the planet will come to know itself better as we establish a heightened sense of our place on it and its myriad, overlapping, dappled spheres.

We've completed a whirlwind tour of the planet's dappled spheres: from the Lithosphere, Biosphere, and Atmosphere, to the Technosphere, and finally the Noosphere. At each stage, we've seen what it means to adopt a stance of Being-on-a-Planet that doesn't reduce the planet to simple Whole Earth thinking, but embraces a planet of parts: spheres. What's clear is that adopting the stance of Being-on-a-Planet is far from obvious: it's an achievement of planetary cognition. It's the end result, and ongoing outcome, of Earth knowing itself through art, science, technology, and architecture.

Planetary Realism doesn't end with recognizing Earth as a system of parts. It asks that we understand our relationship to the spheres. This relationship is at once epistemic (how we visualize and conceptualize the spheres) and a question of agency: the degree to which the spheres can be "managed" is unknown. They're complex, emergent systems, after all.
My hypothesis is that by recognizing that the spheres exist in totalities that often extend far beyond human perception, we can set out to build sensory scaffolds—like SEED or the Starlink article—that bring them back into the human-visible realm and thus make them the object of policy and strategic, designed intervention.

Imagine a not-too-distant future where artists and designers work in lockstep with scientists and technologists to visualize and narrate large-scale processes unfolding in the spheres. The public recognizes that these visualizations are not only art or design, but sensory scaffolds that enhance the growing awareness of Being-on-a-Planet. The commissioning of such scaffolds becomes a crucial element in policy and technological interventions. These projects become the seed for large infrastructural interventions that are coordinated across nation states. Imagine planetary-scale systems like Starlink becoming regulated by a constellation of interoperating national agencies designed to meet challenges at this transnational scale. And beyond regulation, the creation of transnational wildlife corridors becomes an urgent policy concern as new sensory scaffolds show the flow and degradation of ecosystems.

This vision isn't a utopian endpoint as much as an invitation. It's an invitation to reckon with the responsibilities that present themselves when adopting the stance of Being-on-a-Planet. It appears the vanguard, already active, is in the realms of Art and Design. Their creations are not simply aesthetic objects, but—like the Earthrise image—a way by which humanity comes to better understand its relationship to Earth's many, many processes. We cast a million metaphorical cameras to the wind and their images are coming in. What will we do with the wealth of scientific knowledge they generate? Leave it locked away in dusty journals? Or bring it down to human scale to create a new planetary subjectivity?
2024-11-08T12:15:05
en
train
42,021,265
KBorders01
2024-11-01T20:33:21
Yes, You Can Measure Technical Debt
null
https://www.m16g.com/p/yes-you-can-measure-technical-debt
2
1
[ 42021266 ]
null
null
null
null
null
null
null
null
null
train
42,021,278
orbesargentina
2024-11-01T20:34:48
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,021,279
mostech
2024-11-01T20:35:00
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,021,291
usamadetoday
2024-11-01T20:37:14
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,021,292
agavra
2024-11-01T20:37:15
If not RocksDB, then what? Why SlateDB is the right choice for Stream Processing
null
https://www.responsive.dev/blog/why-slatedb-for-kafka-streams
4
0
null
null
null
null
null
null
null
null
null
null
train
42,021,297
shcheklein
2024-11-01T20:37:34
Experiment Tracking and Visualization in VS Code
null
https://marketplace.visualstudio.com/items?itemName=Iterative.dvc
1
0
null
null
null
null
null
null
null
null
null
null
train
42,021,316
sandorb
2024-11-01T20:39:10
Show HN: Wordy – Learn English Vocabulary from Movies and TV Shows
Hey HN,<p>I built Wordy, an app for learning English vocabulary directly from movies and TV shows. As an English learner myself, I wanted an immersive way to pick up vocabulary, so I created this tool to make it easier and more effective.<p>Key Features: * Extensive Vocabulary Database: Access vocabulary from 500,000 movies and series, categorized by proficiency levels (A1-C2). * Real-Time Subtitle Sync: Wordy “listens” to where you are in the film and syncs with English subtitles, so you can learn as you go without pausing. * Instant Definitions and Translations: Tap any word for a contextual definition or translation, which helps with understanding slang and nuanced expressions. * Flashcard Generation: Words are saved into flashcards, organized for spaced repetition, so you can review and reinforce new vocabulary easily.<p>Would love to hear your thoughts or ideas for improvement!<p>Cheers, Sandor
https://apps.apple.com/us/app/master-english-wordy/id6670703228
1
0
null
null
null
null
null
null
null
null
null
null
train
42,021,335
lee337
2024-11-01T20:41:22
GitHub Game Off theme announced
null
https://github.blog/open-source/game-off-2024-theme-announcement/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,021,341
FranklyRocks
2024-11-01T20:41:44
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,021,344
jesicabenjy564
2024-11-01T20:41:59
null
null
null
1
null
[ 42021345 ]
null
true
null
null
null
null
null
null
null
train
42,021,352
TooSmugToFail
2024-11-01T20:42:35
Engineering Games
I have been trying to find this collection which I stumbled upon a while ago on HN.<p>It took longer than I expected, so when I finally did, I figured I should share it.
null
1
2
[ 42034408, 42021546 ]
null
null
null
null
null
null
null
null
null
train
42,021,355
matt_d
2024-11-01T20:43:17
Revisiting Reliability in Large-Scale Machine Learning Research Clusters
null
https://glennklockwood.com/garden/papers/revisiting-reliability-in-large-scale-machine-learning-research-clusters
1
0
null
null
null
no_error
Revisiting Reliability in Large-Scale Machine Learning Research Clusters
null
null
Revisiting Reliability in Large-Scale Machine Learning Research Clusters is a paper authored by a bunch of folks at Meta that describes the findings of studying eleven months of operations on two AI clusters: one with 16K A100 GPUs (RSC-1) and another with 8K A100 GPUs (RSC-2). These clusters ran mixed workloads of wildly varying scales, and the paper describes a lot of challenges around reliability and quantifying metrics weighted by jobs vs. cycles. Overall the paper doesn't have much new, deep insight that would be surprising to people who have been working in HPC for a while. They rediscovered a few metrics that have been in use (like forward progress—they call it "ETTR") and emphasize how different the results can be when metrics are weighted by job count instead of node-minutes. They present a heavily formalized model that quantitatively amounts to modeling the reliability of a supercomputer as a pile of nodes connected in series. A big portion of their paper is also devoted to assessing the impact of their pre-emption policy on overall cluster utilization (which they call "goodput," which they acknowledge as different from the industry-standard definition of goodput). Their policy is to make jobs eligible for pre-emption after two hours, which allows large jobs to launch without forcing significant parts of the cluster to drain; while this does reduce queue wait time, rapid failures of large jobs cause excessive pre-emption of small jobs and undercut some of the utilization gains from the big jobs. Although this paper doesn't contain any new breakthrough insights or methods, it is a good signal that the AI community is arriving at the same conclusions around quantifying reliability as the HPC community. This paper also contains a bunch of operational nuggets and anecdotes (highlighted below) that are indicative of what other leading AI research labs are probably doing.
Good on Meta for being open about how they operate so that others who are further behind on their journey can follow. A few key findings that I thought are worth bubbling up:

They use node health checks that run every five minutes and catch a variety of overlapping issues, and these checks are integrated with the workload orchestrator (Slurm). This underscores the importance of having reliability integrated throughout the entire stack, from hardware health up into the application layer. This is easier to do for Meta because both research and facilities live under the same roof, but it would be harder for AI labs who rely on a third party to provide their training infrastructure.

They suffer a significant amount of job failures due to their reliance on file systems. AI labs would do well to avoid parallel file systems and instead use object storage; doing so decouples the way applications interact with data from the overall health of the node, since the node need only provide the data plane (the network connectivity to storage) and not the control plane (authentication and authorization, which is a requirement of file-based storage). This is because object storage delegates authentication and authorization to the application layer since it is a user-space protocol.

4k GPU jobs constitute less than 1% of our jobs while consuming 12% of the GPU resources at the cluster level.

11 months of data collected from state-of-the-art AI researcher clusters with >80% utilization.

RSC-1 and RSC-2 follow the same design template discussed below. RSC-1 is a general ML cluster (e.g., training some of the prominent LLMs) of 16k GPU size, while RSC-2 focuses on vision applications and is of 8k GPU size. Leaning into the High-Performance Computing (HPC) stack, our clusters use the Slurm [45] scheduler on top of bare-metal allocations. Jobs are eligible to be preempted after running for 2 hours, and they have a maximum lifetime of 7 days.
Overall, our clusters average 7.2k for RSC-1 and 4.4k for RSC-2 jobs submitted per day, averaging 83% and 85% cluster utilization, respectively.

each rack has two servers, and ten racks are connected via a rail-optimized network, forming a pod. Pod-pod communications go through the next level of switches (spine switches).

our infrastructure is instead designed to check that jobs are running on healthy hardware, restarting the job on different nodes if there is a failure. This can be viewed as a cooperative recovery strategy as the application is still responsible for correctly implementing checkpoint and resume logic.

This requires the application to be aware of infrastructure and vice versa. It underscores the importance of having infrastructure that is programmable by the application layer.

health checks that are periodically scheduled to run every five minutes, and return codes indicating success, failure, or warning. Each health check examines some aspect of node health, spanning from GPU errors (e.g. XID errors [9]) to file system mounts and service status (i.e., scheduler). High severity check failures will immediately signal a scheduler handler to remove the node and reschedule all jobs executing on the node, while lower severity checks will signal to the scheduler to remove the node for remediation after jobs running on the node have finished.

ETTR is defined as the ratio of productive runtime to the available wallclock time of a job run. Infrastructure providers operating in zero-trust mode have no insight into this because the infrastructure has no visibility into the application runtime space. As such, the infrastructure cannot define productive runtime. The exact definition of productive runtime is open to interpretation depending on context, but we consider two sources of unproductive scheduled time:

So Meta has rediscovered the idea of "forward progress" as defined by NNSA.
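The ETTR bookkeeping itself is trivial once you decide what counts as productive. A minimal sketch of the metric (my own notation and segment categories, not the paper's code):

```python
# Minimal sketch of ETTR (effective training time ratio) bookkeeping.
# Assumptions (mine, not the paper's): a run is a list of (duration_mins, kind)
# segments, where kind is "productive", "lost" (work since the last checkpoint,
# discarded on failure), or "restart" (requeue + re-initialization overhead).

def ettr(segments):
    """Ratio of productive runtime to total scheduled wallclock time."""
    total = sum(d for d, _ in segments)
    productive = sum(d for d, kind in segments if kind == "productive")
    return productive / total

run = [
    (55, "productive"),   # trained 55 min, then checkpointed
    (25, "lost"),         # 25 min of work discarded by a node failure
    (15, "restart"),      # requeue + re-init on healthy nodes
    (120, "productive"),  # resumed from the last checkpoint
]
print(round(ettr(run), 3))  # 0.814
```

The interesting part, as the article notes, is not the arithmetic but who gets to label the segments: only the application layer knows which wallclock minutes were productive.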
job preemption, resource fragmentation, and failures are the dominant sources of lost goodput.

an NCCL timeout occurs whenever a rank observes that a collective operation, such as an AllReduce, has not completed within several minutes.

I am surprised the NCCL timeouts take minutes.

Errors such as NCCL timeouts may be naively attributed to a proximal cause, e.g., on the network, rather than a deadlock. Networking has a large "blast radius", causing errors across the stack.

We attribute a failure to a cause if the cause was detected within the last 10 minutes or 5 minutes after a failing job's lifetime (FAILED or NODE_FAIL).

Again, this works because there is feedback on the state of the application that triggers a root-cause at the infrastructure level.

IB Links, filesystem mounts, GPU memory errors, and PCIe errors contribute heavily to the failure rates; however, for IB Links in particular this seems to be dominated by a short period of many IB Link related job failures from a handful of nodes in the summer of 2024, as shown in Figure 5.

The fact that file system mounts contribute so much to job failures is a strong indictment against relying on shared, file-based storage for model training. Had Meta chosen to just not offer parallel file storage (which requires a stateful relationship between each compute node's kernel and a remote, distributed service) and instead used object storage exclusively, a significant number of job failures could've been avoided entirely. This isn't to say that storage-related problems wouldn't have ever caused problems, but object storage puts the responsibility of authentication and session management in the hands of the application. In doing so, applications can respond more flexibly to misbehaving storage since storage issues aren't node health problems anymore.

Failures may co-occur—3% and 5% of hardware failures on RSC-1/RSC-2 have co-occurring events of similar priority.
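The attribution rule quoted above (a cause detected within 10 minutes before, or 5 minutes after, the job's failure time) is easy to express. A sketch under my own assumed data shapes, not the paper's actual schema:

```python
# Sketch of the failure-attribution window described above: a detected health
# event is attributed to a failed job if it occurred within the 10 minutes
# before the job's failure time or the 5 minutes after it. The dict shapes and
# check names here are illustrative assumptions, not Meta's real records.
from datetime import datetime, timedelta

BEFORE = timedelta(minutes=10)
AFTER = timedelta(minutes=5)

def attribute(job_fail_time, events):
    """Return the health events that fall inside the attribution window."""
    return [
        e for e in events
        if job_fail_time - BEFORE <= e["time"] <= job_fail_time + AFTER
    ]

fail = datetime(2024, 6, 1, 12, 0)
events = [
    {"time": datetime(2024, 6, 1, 11, 53), "check": "xid_79"},    # in window
    {"time": datetime(2024, 6, 1, 12, 4),  "check": "ib_link"},   # in window
    {"time": datetime(2024, 6, 1, 11, 30), "check": "fs_mount"},  # too early
]
print([e["check"] for e in attribute(fail, events)])  # ['xid_79', 'ib_link']
```

Note that a window like this can match multiple events, which is exactly why the co-occurrence numbers above matter: the proximal cause inside the window is not always the root cause.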
For example, we observe PCIe errors often co-occur with XID 79 (GPU falling off the bus) and IPMI "Critical Interrupt" events.

Sounds familiar.

Figure 7 illustrates that the mean-time-to-failure (MTTF) of 1024-GPU jobs is 7.9 hours—roughly 2 orders-of-magnitude lower than 8-GPU jobs (47.7 days).

From their 1024-GPU job failures, the MTBF of a single node should be 42.1 days. So they just confirmed that each GPU node is a single point of failure. This should not be surprising.

The worst-case version of this is a crash loop, where a single job is configured to requeue on failures (e.g., by using exception handling in the submission script). In the period we observe, we see a 1024 GPU job NODE_FAIL and subsequently requeue 35 times, causing a total of 548 preemptions (over 7k GPUs).

This is a bad interaction between policy and infrastructure.

While optimizing large jobs is clearly important, 16% of the total lost goodput resulting from hardware failures is due to second-order preemptions, which come from jobs of much smaller sizes. These results indicate that the cluster as a whole is impacted beyond the failures themselves.

To restate, a significant amount of cluster utilization loss is due to their preemption policy. This is not surprising; everyone who's had to schedule hugely variable job sizes has encountered this in the form of backfill bubbles or node draining bubbles.

u_0 ≈ 5-20 mins

Restart time is 5-20 minutes after a failure.

we find RSC-1 GPUs are swapped at ~3 times the rate compared to RSC-2; both the GPU swap rate and failure rate differences may be due to differing workloads that tax GPUs on RSC-1 more heavily.

This is a bad explanation for an interesting observation: their larger cluster is significantly less reliable on a per-node basis than their smaller one.
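The 42.1-day node MTBF above falls straight out of treating a job as nodes in series: the job dies when any node dies. A back-of-envelope sketch (my simplification of the paper's formal model, assuming 8 GPUs per node and independent, exponentially distributed node failures):

```python
# Series-reliability back-of-envelope: a job fails when any of its nodes
# fails, so job MTTF ~= node MTBF / node_count. The exponential-failure
# assumption is mine; the 7.9 h figure is the paper's Figure 7 number.

GPUS_PER_NODE = 8

def job_mttf_hours(node_mtbf_hours, n_gpus):
    return node_mtbf_hours / (n_gpus // GPUS_PER_NODE)

# Invert the 1024-GPU job MTTF (7.9 h across 128 nodes) to get node MTBF:
node_mtbf_h = 7.9 * (1024 // GPUS_PER_NODE)
print(round(node_mtbf_h / 24, 1))  # ~42.1 days, matching the estimate above

# First-order ETTR given checkpoint interval d and restart overhead r:
# each failure costs roughly d/2 of lost work plus r of restart time.
def expected_ettr(mttf_h, interval_min, restart_min):
    lost_per_failure_min = interval_min / 2 + restart_min
    return 1 - lost_per_failure_min / (mttf_h * 60)

# 1024-GPU job, hourly checkpoints, 20 min restart (the upper end of u_0):
print(round(expected_ettr(7.9, 60, 20), 2))  # ~0.89

# Same node MTBF scaled to a 100k-GPU job: MTTF collapses to minutes,
# which is why checkpoint cadence must shrink to minutes at that scale.
print(round(job_mttf_hours(node_mtbf_h, 100_000) * 60, 1))  # ~4.9 min
```

This is cruder than the paper's formalized model (it ignores, e.g., failures during restart), so it reproduces the orders of magnitude rather than their exact figures; note the 100k-GPU line also uses the RSC-1-derived node MTBF, whereas the paper's projection assumes an RSC-2-like failure rate.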
Was the larger cluster in service for longer than the smaller one? That is, are they observing higher dropout from earlier-generation GPUs?

Moving to a 5-minute checkpoint interval would increase expected ETTR to 0.93, illustrating the value of frequent checkpointing to insulate against interruptions (assuming checkpoint writes are non-blocking)

This statement has no meaning. If checkpointing were non-blocking, why not just checkpoint continuously and get 100% ETTR? I can appreciate that reducing the checkpoint interval improves forward progress/ETTR, but assuming non-blocking checkpoints is akin to assuming a spherical cow here.

2048-4096 GPU job runs on RSC-1 show an average ETTR of over 0.9 at a one-hour assumed checkpoint interval

That's a good milestone, but the previous paragraph suggests it is just a function of scale. For larger training jobs, this setup of hourly checkpointing would not work, and this should not be a surprise.

To reach ETTR of 0.9 for a 100,000 GPU training run on a hypothetical cluster with an RSC-2-like failure rate, checkpointing intervals and restart overhead need to be ~2 minutes.

You don't need such a heavily formalized model to make these predictions, because the data presented here (and reality) is that reliability is well-approximated as a system of independent nodes connected in series.

Among tens of detection signals available on each node, the following ones correlate with lemon nodes the most:

excl_jobid_count: number of distinct jobs that excluded a node.
xid_cnt: number of unique XID errors a node experienced.
tickets: count of repair tickets created for a node.
out_count: number of times a node was taken out of availability from the scheduler.
multi_node_node_fails: number of multi-node job failures caused by a node.
single_node_node_fails: number of single-node job failures caused by a node.
single_node_node_failure_rate: rate of single-node job failures on a node.
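Signals like these lend themselves to a simple per-node score. A hypothetical sketch of how such ranking might look; the signal names come from the paper, but the weights, data, and threshold-free ranking are invented for illustration and are not Meta's actual detector:

```python
# Hypothetical lemon-node ranking over the signals listed above. Signal names
# follow the paper; the weights and the example fleet are made up.

SIGNAL_WEIGHTS = {
    "excl_jobid_count": 1.0,        # distinct jobs that excluded the node
    "xid_cnt": 2.0,                 # unique GPU XID errors observed
    "tickets": 1.5,                 # repair tickets created
    "out_count": 1.0,               # times removed from the scheduler
    "multi_node_node_fails": 3.0,   # multi-node job failures it caused
    "single_node_node_fails": 2.0,  # single-node job failures it caused
}

def lemon_score(node_signals):
    """Weighted sum of historical failure-correlated signals for one node."""
    return sum(SIGNAL_WEIGHTS[k] * node_signals.get(k, 0) for k in SIGNAL_WEIGHTS)

fleet = {
    "node-a": {"xid_cnt": 4, "excl_jobid_count": 7, "multi_node_node_fails": 2},
    "node-b": {"tickets": 1},
}
ranked = sorted(fleet, key=lambda n: lemon_score(fleet[n]), reverse=True)
print(ranked[0])  # node-a is the likeliest lemon
```

The key point, which the sketch makes explicit, is that every input is historical: you cannot score a lemon node from a single point-in-time health check, which is exactly the paper's Observation 11.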
It sounds like they did the same thing as I did when using Darshan logs en masse to correlate job slowness with specific Lustre OSTs.1

Our lemon node detection mechanism led to 10% reduction in large job failures (512+ GPUs), from 14% to 4%. Observation 11: Historic data is necessary to find defective nodes. Implementing lemon node detection can improve large job completion rate by over 30%.

The general principle is good - find nodes that keep showing up in jobs that fail. However, they are cherry-picking the definition of “large job” here, and I don’t see how they get a 30% improvement in job completion rate from a 10% reduction in large job failures. This feels like the authors are playing games with statistics to show impact rather than objectively measuring improvement in a way that reflects overall positive outcomes of the cluster. As such, it’s hard to contextualize the impact of this lemon node detection. However, the qualitative statement that finding lemon nodes is good is undeniable.

The network must remove and route around failures. Without resilience mechanisms in place, over 50% of bandwidth may be lost.

This is why everyone uses adaptive routing, and there is no reason these days not to use it. I guess this statement is meaningful if your goal is to push for using a fabric that supports fine-grained adaptive routing (i.e., not standard RoCE).

We therefore envision future infrastructure systems that attempt to make unreliability less noticeable rather than attempting to remove it altogether.

This is a truism. Nobody would disagree. Nobody is trying to make unreliability go away, nor has anyone ever tried to do this since the early days of distributed computing.

We can improve the success rate of training runs by retroactively identifying the root cause of a NCCL timeout, by comparing logged data across different ranks participating in the collective.

Isn’t this what PyTorch flight recorder already does?
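The principle behind lemon detection, i.e. nodes that keep showing up in failed jobs are suspects, is easy to sketch. This is a toy illustration of the idea, not the paper's actual detector; the threshold and data are made up:

```python
from collections import Counter

def flag_lemons(failed_jobs, min_failures=3):
    """failed_jobs: iterable of (job_id, node_list) for failed jobs.
    Flags nodes implicated in at least min_failures distinct failures,
    in the spirit of signals like multi_node_node_fails."""
    counts = Counter()
    for _job, nodes in failed_jobs:
        counts.update(set(nodes))  # count each node once per job
    return {n for n, c in counts.items() if c >= min_failures}

failures = [
    ("j1", ["n1", "n2"]),
    ("j2", ["n1"]),
    ("j3", ["n1", "n3"]),
    ("j4", ["n2"]),
]
print(flag_lemons(failures))  # {'n1'}
```

A real detector would combine several such signals (exclusion counts, XID counts, tickets) rather than a single threshold, but the ranking step is the same.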
TOKIO on ClusterStor: Connecting Standard Tools to Enable Holistic I/O Performance Analysis ↩
2024-11-08T16:03:23
en
train
42,021,366
4f77616973
2024-11-01T20:43:54
Show HN: Secretsnitch, a fast, modular secret scanner in Golang
this is a tool i wrote in golang that combines a set of practices i learned over the years in finding secrets that developers commit all the time. it has easy-to-use features like modules and caching that can generate a continuous stream of data to be used for security analysis purposes (such as attack surface monitoring).

part of my work involves finding exposed secrets for organizations. this tool helps you find several exposed production urls, tokens etc. on services like github and on websites. the craziest one was a leaked github personal access token from a renowned car company, and the latest one was a leaked payment gateway key from an insurance company.
https://github.com/0x4f53/secretsnitch
2
0
null
null
null
null
null
null
null
null
null
null
train
42,021,367
gdrift
2024-11-01T20:43:57
NextSilicon Launches Maverick-2, Software-Defined Acceleration for HPC Workloads
null
https://www.hpcwire.com/off-the-wire/nextsilicon-launches-maverick-2-introducing-software-defined-acceleration-for-hpc-workloads/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,021,373
jdcampolargo
2024-11-01T20:44:30
(October 2024) Peter Thiel Speaks at the Yale Political Union
null
https://www.youtube.com/watch?v=h67X_h-ycT0
1
0
null
null
null
null
null
null
null
null
null
null
train
42,021,377
jamesy0ung
2024-11-01T20:44:48
'It is a one off': Lunar Lake's integrated RAM won't happen again
null
https://www.pcworld.com/article/2507953/lunar-lakes-integrated-dram-wont-happen-again-intel-ceo-says.html
1
0
null
null
null
missing_parsing
'It really is a one off': Lunar Lake's integrated RAM won't happen again, Intel CEO says
null
Author: Mark Hachman, Senior Editor, PCWorld
Intel’s current mobile processor, Lunar Lake, will be a “one off” design that incorporates memory inside the package, Intel chief executive Pat Gelsinger said during a conference call on Thursday afternoon.

During its third-quarter earnings report, when Intel reported a loss of $16.6 billion that exceeded revenues, Gelsinger was asked about Intel’s Core Ultra Series 2 mobile chip, the first to incorporate DRAM directly into the microprocessor package. Normally, laptop makers buy memory modules from third parties and either insert them inside a DRAM slot or solder them down directly. That changed with Lunar Lake, which sounds like it was never intended to be Intel’s mainstream mobile chip.

“You know, Lunar Lake was initially designed to be a niche product that we wanted to achieve highest performance and great battery life capability,” Gelsinger told analysts. “And then [the] AI PC occurred.” With the rise of the AI PC, and the growth of the NPU, Lunar Lake evolved from being a niche product to a “meaningful portion of our total mix,” Gelsinger said.

It’s not especially clear whether consumers love the integrated memory of Lunar Lake, either. Integrating the memory inside the package doesn’t allow consumers to upgrade the memory if they eventually need more. Intel doesn’t have many options, either, and has to stockpile and evaluate which processors need which memory capacity. It then has to mix and match memory and logic, an additional headache.

According to Gelsinger, Intel doesn’t want to be responsible for managing memory. “It’s not a good way to run the business,” he said. Basically, don’t expect a repeat of Lunar Lake’s integrated design. “It really is, for us, a one off with Lunar Lake,” Gelsinger said during the call. “That will not be the case with Panther Lake, Nova Lake, and its successors as well. We’ll build it in a more traditional way, with memory off package, and the CPU, GPU, NPU, and I/O capabilities in the package.
Volume memory will be off-package in the roadmap, going forward.”

Intel desperately wants to get back to ‘Intel Inside’

Gelsinger reiterated that Intel’s 18A process, and the Panther Lake processor that will be built on it in the second half of 2025, will be a key turning point for Intel. One of the themes of the call was how the 18A process will complete Intel’s plan to move through five manufacturing nodes in four years. The other was how Intel wants to move as much manufacturing as it can into its own fabs. Lunar Lake is primarily built at TSMC, which means Intel has to pay the foundry a fee to manufacture it. This cuts into Intel’s profits. “It’s having a pretty meaningful impact on Lunar Lake’s gross margins,” Gelsinger said. With Panther Lake, more than 70 percent of the silicon area will be manufactured by Intel, Gelsinger said, without specifying which tiles will be manufactured at which fab. With Nova Lake, Intel has some designs in which it will build tiles at an external foundry, but the “large majority” of Nova Lake will be built in house, he said. Nova Lake is expected to be a 2026-27 product and the mobile successor to Panther Lake’s H-series parts, according to reports. Intel, however, has never publicly characterized where Nova Lake fits into its roadmap and this may be Gelsinger’s first public mention of the chip. Correction: Panther Lake will be manufactured on Intel’s 18A process, not the 14A process.
2024-11-08T20:32:38
null
train
42,021,380
xtremerps
2024-11-01T20:45:00
Show HN: I Built a Rock, Paper, Scissors roguelike in React
Hey HN! What started off as a meme has turned into a passion project and I'm pretty excited about it. As the title suggests, I made a Rock, Paper, Scissors game in React. The twist is the perk shop that you can power yourself up in upon wins.

I have a Node web socket backend because it started off as just a PvP experience, but I wanted people to be able to play a solo mode. That's where the Roguelike comes in. I still have plenty of features to go! But for now, it's an endless Roguelike where after each round, you get to upgrade a shop perk to make your character stronger.

I'd love to get some feedback! Feel free to try it out and ask any questions! No login or account needed.

And if you wanna see a quirky short of me putting it together: https://www.youtube.com/shorts/8-ijQkKm3Ds
https://xtremerps.com
38
34
[ 42021544, 42021667, 42022669, 42022811, 42021517, 42026158, 42022748, 42026464, 42021742, 42021854, 42021512, 42021981, 42022826, 42022368 ]
null
null
null
null
null
null
null
null
null
train
42,021,392
airhangerf15
2024-11-01T20:46:42
Outside the Spectrum of Acceptable Opinion
null
https://digwithin.net/2024/07/06/outside-the-spectrum-of-acceptable-opinion/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,021,407
todsacerdoti
2024-11-01T20:48:30
Rendering Outlines with a Post-Processing Shader
null
https://www.atomwolf.org/posts/rendering-outlines-with-a-post-processing-shader/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,021,409
vc1sg4y
2024-11-01T20:49:00
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,021,436
Decabytes
2024-11-01T20:51:29
Ask HN: Has Anyone Tried Single File Development with IDE Code Collapse?
I've been exploring a different approach to organizing my projects by keeping all the code in a single file and taking advantage of the code collapse feature available in most modern IDEs. I'm curious if anyone else has tried this and what their experiences have been.

One of the primary benefits I've found is easier code portability. With all the code in a single file, you can simply drop the file into your project without worrying about dependencies or file organization. Refactoring also becomes more straightforward since you can see all the related code in one place, making it easier to update dependencies and related functions. Additionally, LLM assistants often perform better with code completion when all the code is in one file, as they can access the entire context more easily.

I've noticed that other projects also take a single file approach. For instance, SQLite concatenates all its C files when it's built, there are single file header projects like STB^1 that tout ease of use, and even the .NET garbage collector^2 is a single 56k file. So, this approach is not entirely unprecedented. With modern editing tools, this method seems more viable than ever before. What does everyone think?

1. https://github.com/nothings/stb
2. https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/gc.cpp
null
3
3
[ 42024432, 42022147, 42021464 ]
null
null
null
null
null
null
null
null
null
train
42,021,438
chr1ngel
2024-11-01T20:51:41
Over 20 years later, I'm Back realizes one of photography's greatest 'What ifs'
null
https://www.dpreview.com/articles/6675278346/over-20-years-later-i-m-back-realises-one-of-photography-s-greatest-what-ifs
3
0
null
null
null
null
null
null
null
null
null
null
train
42,021,450
lawrenceyan
2024-11-01T20:52:45
Functional ultrasound through the skull
null
https://brainhack.vercel.app/fus
4
0
null
null
null
null
null
null
null
null
null
null
train
42,021,451
geox
2024-11-01T20:53:03
Snow forecast next week on Mount Fuji, at last
null
https://japantoday.com/category/national/snow-forecast-next-week-on-mt-fuji-at-last
2
1
[ 42022064 ]
null
null
null
null
null
null
null
null
null
train
42,021,470
matt_d
2024-11-01T20:55:15
Scalable self-improvement for compiler optimization
null
https://research.google/blog/scalable-self-improvement-for-compiler-optimization/
83
6
[ 42052782, 42052265, 42059435 ]
null
null
null
null
null
null
null
null
null
train
42,021,474
rbanffy
2024-11-01T20:56:15
Pete Warden: How an Ex-Apple Dev Found a Big Security Flaw
null
https://spectrum.ieee.org/apple-security-flaw
3
0
null
null
null
null
null
null
null
null
null
null
train
42,021,495
jrz77
2024-11-01T20:58:47
Online SQL Prototyping and Practice
null
https://sqlbook.io/
1
0
[ 42021496 ]
null
null
null
null
null
null
null
null
null
train
42,021,498
speckx
2024-11-01T20:58:50
Type Revival for Film and TV
null
https://www.alphabettes.org/type-revival-for-film-tv/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,021,513
rbanffy
2024-11-01T21:00:54
Intel outlines plan to break free from TSMC manufacturing
null
https://www.tomshardware.com/pc-components/cpus/intel-outlines-plan-to-break-free-from-tsmc-manufacturing-70-percent-of-panther-lake-at-intel-fabs-nova-lake-almost-entirely-in-house
6
0
null
null
null
null
null
null
null
null
null
null
train
42,021,518
koevet
2024-11-01T21:01:06
LeetCode wizard: Ace your next coding interview
null
https://leetcodewizard.io/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,021,531
fzliu
2024-11-01T21:02:29
Understanding Warmup-Stable-Decay Learning Rates
null
https://arxiv.org/abs/2410.05192
1
0
null
null
null
null
null
null
null
null
null
null
train
42,021,535
PaulHoule
2024-11-01T21:02:46
Low-cost, portable device can detect colorectal and prostate cancer in an hour
null
https://medicalxpress.com/news/2024-10-portable-device-colorectal-prostate-cancer.html
135
39
[ 42023504, 42021885, 42022742, 42022531, 42022509, 42021674 ]
null
null
null
null
null
null
null
null
null
train
42,021,539
TaurenHunter
2024-11-01T21:03:13
Hypothetical Document Embeddings (HyDE) for Precise Zero-Shot Retrieval [pdf]
null
https://arxiv.org/abs/2212.10496
2
0
null
null
null
null
null
null
null
null
null
null
train
42,021,549
anigbrowl
2024-11-01T21:04:25
Ethel Rosenberg's sons say FOIA documents clear her name
null
https://www.bloomberg.com/news/newsletters/2024-11-01/using-foia-to-lift-the-veil-of-secrecy-on-a-cold-war-secret
11
2
[ 42022578, 42022279 ]
null
null
missing_parsing
Bloomberg - Are you a robot?
null
null
Why did this happen? Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. Need Help? For inquiries related to this message please contact our support team and provide the reference ID below. Block reference ID:
2024-11-08T20:44:51
null
train
42,021,554
danielskogly
2024-11-01T21:04:56
How to land your first developer job
null
https://developer.mozilla.org/en-US/blog/how-to-land-your-first-developer-job/
1
1
[ 42021712 ]
null
null
no_error
How to land your first developer job | MDN Blog
null
Per Borgen · November 1, 2024 · 11 minute read
Getting that all-important first developer job isn't easy, especially if you're a self-taught programmer without university-provided career services or internships. As the founder of a code-learning platform, I have witnessed hundreds of people globally make a successful break into the tech industry, many of whom had no CS degree or support from a school. Those who succeed often use non-obvious strategies that go beyond the conventional advice of crafting a solid resume, building a portfolio, writing a cover letter and so forth. In this article, I'll share the six most effective techniques I've seen work so that you can follow their footsteps and increase your chances of success as well.Lean into your non-technical background One of the most effective strategies is to combine your previous professional experience with your newly acquired coding skills. This works well because companies prefer hiring candidates who have industry knowledge, understand their customers' needs, and are easier to onboard. Another benefit is that it's likely easier for you to find someone who can give you a warm referral if you already have a network in the given industry. Adrian Zamora: From a local hotel to a global tech giant A great example is former Scrimba student Adrian Zamora. He transitioned from working for a hotel in Costa Rica into coding email templates for TripAdvisor, a public tech company based in the US with over $1B in annual revenue. Adrian knew a lot about marketing and selling to tourists, which came in handy at TripAdvisor. I've also seen students from backgrounds like healthcare, marketing, and the military leverage their industry knowledge to land their first developer roles. In some cases, they were even hired by their current employer, who knew they were reliable and trustworthy, making the transition less risky for the company. 
If you have industry experience and connections, make sure to leverage them, as they are one of your key competitive advantages.Follow up and stand outIf you're still waiting to hear back regarding a role you've applied for, consider contacting people who work there, ask for warm intros, and make sure to follow up on any conversations.Stefania Rosca: Why a 'no' can mean 'not now'The best example I have seen of using this technique comes from Stefania, who got her first developer job at Adevinta. When she saw that Adevinta was hiring graduates for junior developer positions, she didn't just send an application. She also did the following: Connected with their employees on LinkedIn. Interacted with the company on Instagram. Asked a previous colleague who knew people at Adevinta to refer her. Despite all of this, Stefania actually didn't hear back from them. So she followed up with one of their recruiters she had interacted with on LinkedIn. It turns out that the job had been given to someone else. However, the recruiter told Stefania that she was exactly the profile they were looking for as a candidate, so it was a mystery why she hadn't been invited to an interview. They told her she'd be kept in the pipeline for future roles. After a few weeks, Adevinta posted a new job ad. But no one reached out to Stefania about it. So she emailed the recruiter again. But by this time they had left the company, so Stefania had to track down another recruiter at Adevinta, who finally invited her to an interview. As part of the interview process, she went above and beyond by recording a video explaining one of her projects (this is a phenomenal idea in itself). Eventually, Stefania got the job. If you'd like to hear her full story, you can check out this podcast interview. Sometimes people think you just apply and that's it. And then if you get a rejection, that's the end of it. But sometimes it's not. A 'no' can mean 'not now'. 
So I kept pushing because I really wanted to work in this company - Stefania Rosca Build a dedicated project for the role Most applicants don't go the extra mile when they apply for a job. This means that you will stand out from the crowd if you do. In the previous section, we learned how Stefania recorded a video of one of her projects to showcase it to Adevinta. That is a great example, but you can push the envelope even further and build a dedicated project tailored specifically for your target company. Andy Brocklesby: Developed a gym website for the agency interview While Andy Brocklesby was interviewing for a local digital agency, he built a custom website for a gym in their area. In addition to showcasing his coding and design skills, this demonstrated high motivation and a business mindset, as this gym was a potential customer for the agency. They never asked me to build anything, but I did it anyway. I knew the industry they worked in so I built a front end static website to present at the interview - Andy Brocklesby Andy received the offer letter the morning after he was interviewed. He also shared his journey on LinkedIn under the #100DaysOfCode tag, so you can learn more about this project in the posts he wrote at the time. The SIBA hack Another version of this technique is what I call the "SIBA" hack. It's short for "Solve Issues Before Applying", and it essentially means that you identify and fix an issue with the employer's website before you apply for the job. Then send them a deployed version of your fix. This is almost guaranteed to get the company's attention, as it proves that you're able to provide business value without any handholding. You can read more about the technique in this LinkedIn post. Kick-start your career with freelance work Freelance work has a lower entry threshold compared to getting your first full-time role. If you're struggling to land a job, consider looking for freelance opportunities instead. 
Once you've gained some experience, it'll be easier to transition into full-time employment. There are two main ways to get your first freelance gig: finding your own clients and using gig marketplaces.Find your own clients I would recommend trying to find your own clients before going onto gig marketplaces, as this route has less competition. If anyone in your network needs a website, app, or software assistance, that's an ideal starting point. Scrimba teacher Tom Chant got his first coding gig because his mom had consulted for a school that needed a developer to add a feature to their website. The task was straightforward but pivotal—he created a database that made the school's archive searchable online. This opened the door to a local museum that needed help. Although that job was small, it was another stepping stone and it wasn't long before a nearby historical library reached out. What started as a few small tasks eventually grew into a long-term collaboration, with the library relying on Tom as their go-to web developer/consultant for several years. They still call on him to this day whenever they hit problems! Back in those early days, another unrelated opportunity appeared just because Tom mentioned he was available as a developer. A colleague who ran a side hustle taking international students on tours of the region needed a website. He wanted a platform to showcase his itineraries and manage bookings directly online. That also became a regular gig, lasting several years. If you don't have anyone in your network who can help you land your first gig, try reaching out to local businesses without websites — or with poorly designed ones — and offer your services to help improve their online presence. You'd be surprised at how effective it is to pick up the phone or visit a business in person. It is probably outside your comfort zone, but that is exactly why it works. Most aspiring developers never do it, so those who do reap all the benefits. 
Explore gig marketplaces The second way to start freelancing is through gig platforms like Upwork and Fiverr. On these platforms, freelancers bid for projects, and competition can drive prices down, resulting in offers that might make you feel underpaid. However, the real value lies in gaining the experience and building your portfolio. This early experience can set you up with the background you need to apply for more desirable jobs. Anthony Moreno: From Upwork to Amazon Anthony Moreno from the US kickstarted his career on Upwork by specializing in email templates. He logged over 400 hours before being hired by Activision Blizzard. For the past couple of years, he's been contracting as a Senior Email Developer at Amazon. Anthony's journey illustrates how focusing on becoming an expert in a niche field from the outset can lead to remarkable results. You can hear Anthony's full story on the Scrimba podcast. When you do freelance work—whether it's free or paid, through gig sites or your own clients—you should grow this experience into a portfolio that you can show to prospective employers.Be active and helpful in open-source communities The advice of "contribute to open source" is often thrown around in junior developer circles. However, it is somewhat vague. When I was trying to get my first job, I remember thinking: "but I'm just a newbie, how can I possibly contribute to open source software"? It felt too intimidating and complex. Luckily though, contributing to open source doesn't have to be limited to coding. Taking part in other ways is just as valuable. You can make meaningful contributions and develop essential skills by engaging with communities and building relationships, even if you're not diving deep into the codebase. A great place to start with this is the MDN Community, where they guide you in making your first contributions, be it updating documentation or code. 
Another way of approaching this is to take part in the application of open-source technologies and be active and helpful in the communities surrounding them, rather than the development of the technology itself. At Scrimba, we have hired almost everyone in the company—including our developers, coding instructors, and operations staff—directly from our community. Mikey Oz: From passion to full-time job As an example, Mikey Oz joined as a developer after learning Scrimba's home-made programming language (Imba) and actively participating in our Discord community. We noticed him sharing cool Imba projects, helping others, and in general being enthusiastic about the technology. Here is one of the many messages Mikey posted in the community before we hired him: Our CTO at Scrimba sent me a message about Mikey a few months later, saying, "He seems quite versatile and productive. Feels like a no-brainer to give him a try." With this impression, it was easy to give him a shot. We offered him a two-month trial, which he nailed, and he's now been with us for more than two years. Are there any particular programming libraries you enjoy working with? If so, I'd recommend you to do the following: Join their Discord server. Share stuff you're building. Get to know people. Be helpful by answering questions. The final point is the most important, as it demonstrates your knowledge while also generating positive karma and building a network, all at the same time. The more helpful you are, the more likely it is that you'll be approached by someone for doing paid work. Engage in in-person communities Getting involved with in-person communities can also unlock opportunities. I have experienced this first-hand — it helped me land my first developer job. Fresh out of a coding bootcamp, I attended a startup event arranged by the CTO of the startup I wanted to work for. 
Since we had crossed paths a few times before, I felt able to ask him directly about the roles they were hiring for, and we agreed to continue the conversation later. You can read more about my journey in this blog post. All big cities have regular events and meetups for the tech industry. The more you get involved with these, the higher the chances that you'll bump into opportunities. Head over to Meetup.com to search for events in your area. If none of the above tips do the trick for you, and you're struggling to secure a developer position, then you should consider starting in a related role and try to transition into development over time. I've seen this happen several times at Scrimba. The main reason you might be able to get a related role more easily than a pure development job is because your coding skills should help you stand out from the competition, as many other applicants likely won't have programming knowledge. Consider pairing this with other strategies, such as leveraging your non-technical background or going above and beyond in your application. Additionally, it expands the pool of jobs you can apply for. If development roles are scarce, entering the field through a related position can be a great way to get your foot in the door. So what are these roles exactly? They are typically jobs in which you are surrounded with code and software, but you don't necessarily write it yourself. Here is a short, non-exhaustive, list: Network technician Systems technician Quality assurance specialist Sales engineer Various analytics roles Developer advocate Once hired, you should actively seek out opportunities to take on coding-related tasks so that you can transition into a developer role.Mix and match to increase your chancesTo sum up, breaking into the tech industry isn't a walk in the park, but with a targeted approach and by using your unique background, skills, motivation, and opportunities, you will gain an edge over the rest of the application pool. 
It's crucial to pick strategies that play to your strengths. For instance, if you're short on professional connections or work experience, highlighting a non-technical background might not be your best bet. In that case, you'd want to start by focusing your energy elsewhere. Keep in mind that you don't have to stick to just one strategy. In fact, combining several will really amp up your odds of landing that first job. Take Stefania, for example. She didn't just use one approach; she combined multiple strategies. You should consider doing the same. Best of luck with the job hunt! If you'd like me to give a virtual talk at your school or university on this subject, feel free to send me a message on LinkedIn or an email. Scrimba is MDN's recommended course partner for learning their MDN Curriculum.
2024-11-08T16:30:37
en
train
42,021,563
woodruffw
2024-11-01T21:06:54
Show HN: Zizmor, static analysis for GitHub Actions
null
https://woodruffw.github.io/zizmor/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,021,579
CT12089963
2024-11-01T21:08:35
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,021,584
rbanffy
2024-11-01T21:09:39
Intel says it will miss its AI goals with Gaudi 3 due to unbaked software
null
https://www.tomshardware.com/tech-industry/artificial-intelligence/intel-says-it-will-miss-its-ai-goals-with-gaudi-3-unbaked-software-leaves-intels-usd500-million-ai-goal-unachievable-as-competitors-rake-in-billions
3
0
null
null
null
no_error
Intel says it will miss its AI goals with Gaudi 3 due to unbaked software — Intel's $500 million AI goal unachievable…
2024-11-01T14:55:38+00:00
Anton Shilov
(Image credit: Intel) Intel says it will now be unable to meet its goal of $500 million in Gaudi 3 sales due to software issues. Meanwhile, AMD plans to rake in $3 billion from its AI GPUs, and while Nvidia doesn't specifically state the amount it makes from AI GPUs for the data center, it is expected to be well north of $80 to $90 billion.  Intel claims its Gaudi 3 accelerator for AI offers tangible performance improvements compared to its predecessors, and given its claimed advantages amid relatively low prices, Intel expected sales of these products to exceed half a billion dollars this year. However, the new unit was formally launched in late September, and Intel now says the software was not fully baked. Still, some Gaudi 3 accelerators will be available at IBM Cloud."While the Gaudi3 benchmarks have been impressive, and we are pleased by our recent collaboration IBM to deploy Gaudi 3 as a service on IBM Cloud, the overall uptake of Gaudi has been slower than we anticipated, as adoption rates were impacted by the product transition from Gaudi 2 to Gaudi 3, and software ease of use," said Pat Gelsinger, chief executive of Intel, at the company's earnings call with analysts and investors. "As a result, we will not achieve our target of $500 million in revenue for Gaudi in 2024."Intel's Gaudi 3 relies on two interconnected chiplets housing 64 tensor processing cores, designed with a 256x256 matrix structure that uses FP32 accumulators and eight matrix engines using 256-bit wide vector capabilities. It also includes 96MB of internal SRAM cache, offering data transfer rates up to 19.2 TB/s. Additionally, Gaudi 3 has 24 networking interfaces running at 200 GbE and 14 media processors capable of handling video and image formats like H.265, H.264, JPEG, and VP9 for visual data processing. The chip has 128GB of HBM2E memory across eight stacks, delivering a high bandwidth of 3.67 TB/s. 
Compared to its predecessor, Gaudi 3 marks a substantial leap forward: Gaudi 2 contained only 24 tensor cores, two matrix engines, and 96GB of HBM2E memory.

(Image credit: Intel)

Intel says the new Gaudi 3 accelerator offers tangible performance advantages over Gaudi 2 and can even challenge Nvidia's H100 (at least when the H100 does not use sparsity) in some cases. Just as important, Gaudi 3 is significantly cheaper than the H100. Earlier this year, Intel disclosed that a kit featuring eight Gaudi 3 chips on a baseboard would be priced at $125,000, roughly $15,625 per chip. In comparison, a single Nvidia H100 card is currently priced at $30,678, around two times higher.

However, despite all of Gaudi 3's advantages, it looks like Intel's software was not ready for prime time, which slowed down hardware purchases. Intel now expects Gaudi 3 sales to ramp up in 2025.
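A quick check of the pricing arithmetic quoted above (my calculation, using only the figures from the article):

```python
# Figures as quoted: an eight-chip Gaudi 3 baseboard kit vs. a single H100 card.
gaudi3_kit_usd = 125_000
gaudi3_per_chip_usd = gaudi3_kit_usd / 8   # $15,625 per chip, matching the article
h100_usd = 30_678

ratio = h100_usd / gaudi3_per_chip_usd     # ~1.96, i.e. "around two times higher"
print(gaudi3_per_chip_usd, round(ratio, 2))
```

So the "around two times higher" claim is accurate to within a few percent at the quoted list prices.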
2024-11-08T01:22:38
en
train
42,021,608
aard
2024-11-01T21:13:00
Those pesky pull request reviews
null
https://jessitron.com/2021/03/27/those-pesky-pull-request-reviews/
2
0
null
null
null
no_error
Those pesky pull request reviews
2021-03-27T14:23:00+00:00
null
They’re everywhere. In Slack: “hey, can I get a review on this?” In email: “Your review is requested!” In JIRA: “8 user stories In-Progress” (but code-complete). In your repository: 5 open pull requests. They’re slowing your delivery. They’re interrupting your developers. How can we get people to review pull requests faster??

We could blame the people. We could nag them more. We could even automate the nagging!

Let’s face it: nobody wants to review pull requests. And for good reasons! It takes a lot of time and work. Chelsea Troy describes how to do pull request review right:

“In addition to pulling down, running, and modifying the code myself… A maximally effective pull request suggests solutions…in code. It points out what’s working and what’s not, and links to documentation where useful. It highlights laudable work by the original developer, and asks questions before making assumptions or judgments. It explains the reasoning behind suggestions, whether that reasoning affects functionality or adheres to convention. In short, it demands the reviewer’s full participation in finishing the solution that the original developer started. And it prepares the reviewer to take responsibility for this code in the event that the original developer were unable to complete it.”
(Reviewing Pull Requests – Chelsea Troy)

Reviews, done right, have all the painful parts of a software change: understanding what the change is for, loading up the relevant code and tests into working memory, getting my local environment up to see the change, making the tests run. They have none of the fun parts: refactoring to clarity, changing code and seeing a difference. They take hours of time and all my concentration away from whatever it is that I’m personally trying to do.

On top of that, they’re a social interaction minefield! This variable name confused me at first but now I see why they called it that.
Should I suggest a change, and require the other developer to do a whole context switch again to improve it? Probably an asshole move. This test doesn’t cover all the cases; I can see one that’s missing. Request another, like the pedant I am? Or figure out how to write it myself, adding another hour?

There’s a cost to every comment, a cost to the submitter’s sense of belonging. A responsible reviewer looks at consequences far beyond the code.

Of course I never want to review pull requests. It’s mentally taxing, takes a lot of time, might damage relationships, and gets me nowhere on the task that has my name on it. So the twitterverse is asking, how do we get people to do it anyway?

“What makes it easier to do work that is not centered on your immediate priorities?” — christina b. just one planet (@csageland) March 18, 2021

If this is what we’re asking, maybe something is wrong with our priorities. Maybe we’re asking the wrong question. What does it say about us that no one wants to review pull requests?

Maybe it says that we trust each other.

“Hot take: in-house development has been influenced too much by the GitHub open source PR driven development process. A process driven by zero trust doesn’t fit well in a team with trust.” — Patricia Aas, MSc. (@pati_gallardo) March 20, 2021

Maybe it says that our team has too many concurrent tasks. And by “too many” I mean “more than one”!

We use pull requests to ensure code is understandable by the whole team. What is our goal with this pull request process? There are several, but I think the primary one is: safe, understandable code. It looks safe to deploy, and it is clear enough to be understood by the rest of the team. Tests can give us confidence in safety, but only a person can evaluate “understandable.”

To change code, a developer first has to understand the code, and understand the change. If the developer was the last person to change this code, then they just have to load it into memory. They’ve understood it before.
This should also be true if they reviewed that last change — pull request review spreads that understanding a bit. A developer gathers this knowledge, then uses it to make decisions about the code. They probably iterate on it a few times, and then they submit something they consider safe and understandable.

But is that code really safe and understandable?!? We must ensure it! Let’s add this whole process again, except the decisions are approval instead of what to change. We’ll make this asynchronous, yeah, so the submitter can start a whole different task. And if the decision is “no” then we’ll make another asynchronous task and everybody can context switch again!

This defies everything we know about product development flow. We just increased WIP and slowed our response time by adding a wait into the process (at least one wait, really an indeterminate number). To improve flow, eliminate queues.

Like Patricia said, maybe this process developed for open-source projects isn’t the best for our in-house teams. Maybe there are better ways to work together.

The pull request process results in code that two people understand. What if we aimed higher? Maybe instead of trying to work a bit more together, we could work together. How about: the team makes all code changes as a unit. Ensemble working (the practice formerly known as mob programming), with one shared work product and all the shared knowledge. It will be as safe as everyone can make it, and more than understandable: it’ll be understood by the whole team.

Not every team member will be present every day. Let’s take a page from distributed systems and require a quorum of team members present when we make code changes. At least 2 developers on a team of 3, at least 3 on a team of 5, etc. That way, whenever it’s time to change that code again, someone present was involved in the most recent change. Then there are no queues or waiting, only collaborating on getting the best name, the complete-enough test suite.
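The quorum rule above (a simple majority of the team present before changing code: 2 of 3, 3 of 5) can be sketched in a few lines; the function name and language are my choices, not the author's:

```python
def quorum(team_size: int) -> int:
    """Smallest majority of a team: the '2 of 3, 3 of 5' rule generalized."""
    return team_size // 2 + 1

# The examples from the post, plus an even-sized team for good measure.
for n in (3, 5, 4):
    print(f"team of {n}: need {quorum(n)} present")
```

With an even-sized team, a strict majority (3 of 4) keeps the same guarantee as in distributed systems: any two quorums overlap in at least one person, so someone in the room was involved in the most recent change.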
Every refactor increases the whole team’s understanding of the code. The team develops a common understanding of the code and where it is going, so they can do gradual improvements in a consistent direction.

Does that sound inefficient? Consider the inefficiencies in the queuing for pull requests, the task switches. Not to mention the merge conflicts we get after the pull request sits open for days.

Does it sound wasteful? All that programmer-time dedicated to just one task, when we could be doing three! Well, ask: which of those three is the most important? Why not get that out as quickly as possible and then work on the others? And it is faster, when you never have to ask permission or wait for answers because all the relevant knowledge is right there. (It helps to bring in other people too, when you need knowledge from an adjacent team or specialist.)

Does it sound miserable? Many people hate pair programming; this sounds even worse. Strangely though, it’s better. When there are three or more in a session, there’s less pressure to stare at the screen every second. One person’s attention can wander while the group attention stays. A person can go to the bathroom or answer an urgent question on Slack, while the ensemble remains an ensemble. Pair programming is more exhausting.

Does this seem like an all-day meeting? No, only when we’re changing code. There’s a lot more we do in a day. There’s still email! Each of us has knowledge to acquire and knowledge to share with other teams. I only have six hours of focused brainpower in me on a day. I’d aim for five hours of direct collaboration, and not change production code outside of it.

Does this seem impossible remotely? It is harder. Set up a shared development environment that everyone can connect to for switching. Or start a branch and use git to move code around. Turn your video on, but set up a screen and camera over to the side, so that looking at each other is different from looking at the code.
Staring at each other is draining. Working alongside each other is invigorating. (TODO: take a picture)

Is your team too large for this? It does get ridiculous with 8-12 people in one meeting. That’s a smell: either your application is too big (it takes that much knowledge); can you split it? Or, someone thought adding people would speed the work. This is a classic Mythical Man-Month problem: coordination problems increase nonlinearly, O(n!), with the number of people. When working together eliminates all the coordination work and merge pain, the team can be smaller and more responsive.

Piles of waiting pull requests are a symptom of disparate goals within the team. When we divide tasks among people, we can say “we’re working on it” about several things at once. Is that something your organization wants? If so, then it is holding your team back from focus. If this is the organizational API you need to meet, try marking five tasks “in progress” in JIRA, then working one at a time together.

“Why is knowledge work being treated as an individual activity? The ‘output’ of knowledge work needs to include shared organizational expertise, and individual work doesn’t get us there.” — Michael McCliment (@cornazano) March 18, 2021

The team works most smoothly as a unit. Production software needs a team behind it because so much knowledge is required: the purpose of the software, its customers, its interfaces, all the tech it runs on and the data it stores and all the changes in the world (such as vulnerabilities) that it needs to respond to. It takes several people to hold all this, with redundancy.

To change software safely, combine all that knowledge. We can do this efficiently together, or painfully alone: asynchronous, with a lot of coordination and unpredictably stepping on each other. Pull requests are an improvement on working alone. But not on working together. We know that code review improves outcomes, compared to coding alone without any review. Don’t do that.
Do code together — with constant, live review and growing understanding between the team members and the code, between the team members and each other. Leave the pull requests for collections of individuals sharing a codebase. Give me direct collaboration on my team.
2024-11-07T22:44:42
en
train