Dataset schema (column, type, observed stats):

id                    int64          2 to 42.1M
by                    large_string   lengths 2 to 15
time                  timestamp[us]
title                 large_string   lengths 0 to 198
text                  large_string   lengths 0 to 27.4k
url                   large_string   lengths 0 to 6.6k
score                 int64          -1 to 6.02k
descendants           int64          -1 to 7.29k
kids                  large list
deleted               large list
dead                  bool           1 distinct value
scraping_error        large_string   25 distinct values
scraped_title         large_string   lengths 1 to 59.3k
scraped_published_at  large_string   lengths 4 to 66
scraped_byline        large_string   lengths 1 to 757
scraped_body          large_string   lengths 1 to 50k
scraped_at            timestamp[us]
scraped_language      large_string   58 distinct values
split                 large_string   1 distinct value
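The listing above is a dataset viewer's column/type/stats header; the rows below are records in the same column order. As a minimal sketch of loading and inspecting such a dataset with the Hugging Face datasets library (the repository name "user/hn-scraped" is a hypothetical placeholder, since the dump does not name one):

```python
# Minimal sketch: load the scraped-HN dataset with the Hugging Face
# `datasets` library and inspect its schema. "user/hn-scraped" is a
# hypothetical placeholder; the dump above does not identify the
# actual dataset repository.
from datasets import load_dataset

ds = load_dataset("user/hn-scraped", split="train")

# Each column maps to an Arrow-backed feature type, mirroring the
# schema table above (id: int64, time: timestamp[us], kids: list, ...).
for name, feature in ds.features.items():
    print(f"{name}: {feature}")

# Rows are plain dicts; dead or deleted submissions carry mostly nulls.
row = ds[0]
print(row["id"], row["by"], row["dead"], row["split"])
```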
42,013,617
mlnews1991
2024-11-01T02:49:40
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,013,619
disndat91
2024-11-01T02:49:49
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,013,633
null
2024-11-01T02:52:37
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,013,639
thomas_moon
2024-11-01T02:54:33
Thoughts on Getting Frustrated While Learning
null
https://tmunayyer.com/Projects/learning-every-day-frustrations.html
2
0
null
null
null
no_error
Getting Frustrated
null
null
Thomas Munayyer

Background

I recently embarked on a journey to learn some Java with a plan to start digging into Kotlin after. I felt like learning Java might provide a nice "where it started" perspective and help solidify some aspects of Kotlin later; maybe it's a waste of time. To hold myself accountable, and to try something new, I started live streaming myself while learning. You can see how I set this up in my first post here. I also upload every stream to YouTube. This absolutely gave me the motivation I needed to show up every day, which in my opinion is by far the most important part. These videos are raw. I don't edit them. I stream and record simultaneously. As soon as I finish I upload it to YouTube. There are a few things here to mention that have changed over time: when I started I was hyper-aware of the camera rolling and probably masking some emotions and natural reactions; also when I started, I had just come back from a vacation feeling pretty good and motivated, with no extra stress in life at all. I think I now have a time horizon long enough (almost 50 days) that things have changed and given me an opportunity to reflect a bit.

I Get So Frustrated

Now that I'm older I can recognize when I am tired or stressed out, but learning every day means learning every day, not just days you feel good. If you pick a random day to watch, you might see me cursing and losing my mind because I cannot get the answer to a question correct. I'll sit there and blame the course, which is ridiculous, and start questioning why I am even learning this stuff. Looking back at these days, it's comical or maybe a bit embarrassing. In the end, I really don't want that kind of emotional reaction to learning and struggle. I tried to give it some thought.

Looking Back Further

When I think further back into the past with school, I don't think anyone truly told me that I wasn't supposed to "just know" things. It's ok to be a beginner in something, or to not understand something after it was explained a single time. I realize now that the feeling of struggling was, and is, physically uncomfortable, and it caused me to try to get out of it as quickly as possible. In school, I remember vividly having this breaking point on tests where, if it was taking too long, I just started guessing so I didn't have to continue reading the question and all the multiple choice options. Somehow I only ever got a single grade below C, and it was an "emotions" class in 6th grade, because I thought it was dumb, which is of course ironic as I am writing this. I had such a short tolerance for struggle that even on the SAT, probably the most consequential test you take in your teens, I remember just filling in bubbles if I didn't know the answer immediately. Luckily, somewhere along the way I developed a sort of tenacity to make it through things despite getting uncomfortable and super frustrated. I raised the expectations I had for myself in college and turned things around academically at the very end of that journey. Even with this persistence I have developed, I don't want anger to be my first reaction to struggle, especially if I am doing it every day.

Why Is It Uncomfortable

The point I am trying to make, in a rambling way, is that I never dealt with why exactly not knowing something caused me this discomfort, to the point that I was trying to escape as quickly as possible even when there were high stakes attached. I'm not sure what the answer is yet besides a shift in mentality, and some initial googling leads me to think that might really be the best answer. There are a few resources on learning techniques, but I am unsure how applicable they would be to programming languages, or even how helpful they would be in dealing with the actual struggle of learning. At least, on this rare occasion, I am quite pleased I started this blog, the YouTube channel, and learning every day. With this process, I can try again tomorrow.
2024-11-07T22:55:45
en
train
42,013,650
peutetre
2024-11-01T02:56:10
Final Conclusions on Crash of F-35B That Flew Without a Pilot for 64 Miles
null
https://www.twz.com/air/final-conclusions-on-bizarre-crash-of-zombie-f-35b-that-flew-without-a-pilot-for-64-miles-released
8
3
[ 42015106 ]
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
Final Conclusions On Bizarre Crash Of 'Zombie' F-35B That Flew Without A Pilot For 64 Miles Released
null
Tyler Rogoway
The USMC has released its final conclusions regarding its investigation into the bizarre loss of an F-35B that crashed in South Carolina on September 17th of last year. The Marines say that the mishap was caused primarily by pilot error, stating that "the pilot incorrectly diagnosed an out-of-controlled flight emergency and ejected from a flyable aircraft, albeit during a heavy rainstorm compounded with aircraft electrical and display malfunctions." The jet went on to continue flying without a pilot for over 60 miles before slamming into a field. Thankfully, nobody was injured as a result of the incident. You can read our last in a number of reports on the F-35B's 'ghost ship'-like mishap here.

The release from the 2nd Marine Aircraft Wing summarizes the circumstances of the crash as they are now known:

"On the afternoon of Sept. 17, 2023, a U.S. Marine Corps F-35B Lightning II Joint Strike Fighter, assigned to Marine Fighter Attack Training Squadron (VMFAT) 501, 2nd MAW, crashed in South Carolina. The pilot safely ejected from the aircraft while attempting to execute a climbout during a missed approach in instrument meteorological conditions and heavy precipitation near Joint Base Charleston, South Carolina. The aircraft continued to fly unmanned for 11 minutes and 21 seconds before impacting in a rural area approximately 64 nautical miles northeast of the airfield in Williamsburg County, South Carolina."

[Photo: F-35B in STOVL configuration. (USAF)]

The release goes on to describe other issues that contributed to the loss of the F-35B, including a cascade of systems failures throughout the jet:

"Contributing factors to the mishap included an electrical event during flight, which induced failures of both primary radios, the transponder, the tactical air navigation system, and the instrument landing system; and the probability that the helmet-mounted display and panoramic cockpit display were not operational for at least three distinct periods. This caused the pilot to become disoriented in challenging instrument and meteorological conditions. This electrical malfunction was not related to any maintenance activities. All preventative, scheduled, and unscheduled maintenance conducted on the aircraft was correct and in keeping with established standards. The pilot was qualified and current to conduct the scheduled flight. The flight was scheduled, planned, briefed, and conducted properly, professionally, and in accordance with applicable orders and directives. The forecasted and observed weather at the time of the mishap supported the decision to land back at Joint Base Charleston. The investigation concludes the mishap aircraft's extended unmanned flight was due to stability provided by the F-35's advanced automatic flight-control systems. The loss of positive radar contact with the mishap aircraft resulted from a failed transponder caused by the electrical malfunction and the aircraft's eventual descent below the air-traffic control radar horizon. The loss of positive contact could also be partially attributed to the F-35B's low-observable technology."

"The investigation concluded that the mishap occurred due to pilot error. The pilot incorrectly diagnosed an out-of-controlled flight emergency and ejected from a flyable aircraft, albeit during a heavy rainstorm compounded with aircraft electrical and display malfunctions."

The release also discusses the recovery and remediation efforts once the wreck was found on September 18th, noting that "The mishap resulted in no ground-related injuries, but it did result in property damage in the form of lost forested land and crops."

[Photo: Part of the lift fan, nose landing gear, and other debris found in the area where the F-35B impacted the ground. (USMC crash investigation document)]

Finally, it concludes that "there were no punitive actions recommended." You can check out the initial detailed report on the crash published earlier this year here.

So there you have it: we finally get the USMC's conclusions on what was at the time a very strange mishap that thankfully ended with just the loss of an aircraft, albeit a prized and very expensive one.

Editor's note: 2nd MAW sent us a request to remove a section of their original release and we obliged, but due to the ambiguity of why this was removed, the complexity of this event, and the placement of the blame on the pilot, we need to disclose the nature of the request and its contents here. We hope to do a follow-up on this report and the impact it had on the pilot who was in command of the aircraft at the time of the incident:

"I'd ask that the following paragraph please be removed from the below linked article to reflect the most recent press release, 'The investigation concluded the pilot's decision to eject was ultimately inappropriate because commanded-flight inputs were in progress at the time of ejection, standby flight instrumentation was providing accurate data, and the aircraft's backup radio was, at least partially, functional. Furthermore, the aircraft continued to fly for an extended period after ejection.' Please utilize the following statement, 'The investigation concluded that the mishap occurred due to pilot error. The pilot incorrectly diagnosed an out-of-controlled flight emergency and ejected from a flyable aircraft, albeit during a heavy rainstorm compounded with aircraft electrical and display malfunctions.'"

The placement of blame on the pilot in the aftermath of this mishap requires further investigation. We will be looking into it further. Stay tuned.

Contact the author: [email protected]
2024-11-08T03:39:50
null
train
42,013,662
ashtanga
2024-11-01T02:58:29
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,013,682
exists
2024-11-01T03:02:56
Beating every possible game of Pokemon Platinum at the same time [video]
null
https://www.youtube.com/watch?v=jNMWkD5VsZ8
4
1
[ 42014284 ]
null
null
no_article
null
null
null
null
2024-11-08T00:22:26
null
train
42,013,683
nyc111
2024-11-01T03:02:58
Org Mode Syntax Cheat Sheet (2017)
null
https://nhigham.com/2017/11/02/org-mode-syntax-cheat-sheet/
132
69
[ 42014950, 42014111, 42014929, 42014214, 42016453, 42016146, 42014541, 42015545, 42014993, 42015879, 42014286, 42016027 ]
null
null
null
null
null
null
null
null
null
train
42,013,687
thunderbong
2024-11-01T03:04:51
The Problem with Single-Threaded Shared Mutability
null
https://manishearth.github.io/blog/2015/05/17/the-problem-with-shared-mutability/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,013,701
dewanemutunga
2024-11-01T03:08:08
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,013,702
sumanmd
2024-11-01T03:08:11
null
null
null
1
null
[ 42013703 ]
null
true
null
null
null
null
null
null
null
train
42,013,751
TaurenHunter
2024-11-01T03:21:33
They Are Scrubbing the Internet Right Now
null
https://brownstone.org/articles/they-are-scrubbing-the-internet-right-now/
10
1
[ 42016108, 42014411 ]
null
null
no_error
They Are Scrubbing the Internet Right Now ⋆ Brownstone Institute
2024-10-30T15:33:19+00:00
Jeffrey A. Tucker, Debbie Lerman
Instances of censorship are growing to the point of normalization. Despite ongoing litigation and more public attention, mainstream social media has been more ferocious in recent months than ever before. Podcasters know for sure what will be instantly deleted and debate among themselves over content in gray areas. Some like Brownstone have given up on YouTube in favor of Rumble, sacrificing vast audiences if only to see their content survive to see the light of day.

It's not always about being censored or not. Today's algorithms include a range of tools that affect searchability and findability. For example, the Joe Rogan interview with Donald Trump racked up an astonishing 34 million views before YouTube and Google tweaked their search engines to make it hard to discover, while even presiding over a technical malfunction that disabled viewing for many people. Faced with this, Rogan went to the platform X to post all three hours. Navigating this thicket of censorship and quasi-censorship has become part of the business model of alternative media.

Those are just the headline cases. Beneath the headlines, there are technical events taking place that are fundamentally affecting the ability of any historian even to look back and tell what is happening. Incredibly, the service Archive.org, which has been around since 1994, has stopped taking images of content on all platforms. For the first time in 30 years, we have gone a long swath of time – since October 8-10 – since this service has chronicled the life of the Internet in real time. As of this writing, we have no way to verify content that has been posted for three weeks of October leading to the days of the most contentious and consequential election of our lifetimes.

Crucially, this is not about partisanship or ideological discrimination. No websites on the Internet are being archived in ways that are available to users. In effect, the whole memory of our main information system is just a big black hole right now.

The trouble on Archive.org began on October 8, 2024, when the service was suddenly hit with a massive Denial of Service attack (DDOS) that not only took down the service but introduced a level of failure that nearly took it out completely. Working around the clock, Archive.org came back as a read-only service where it stands today. However, you can only read content that was posted before the attack. The service has yet to resume any public display of mirroring of any sites on the Internet.

In other words, the only source on the entire World Wide Web that mirrors content in real time has been disabled. For the first time since the invention of the web browser itself, researchers have been robbed of the ability to compare past with future content, an action that is a staple of researchers looking into government and corporate actions. It was using this service, for example, that enabled Brownstone researchers to discover precisely what the CDC had said about Plexiglas, filtration systems, mail-in ballots, and rental moratoriums. That content was all later scrubbed off the live Internet, so accessing archive copies was the only way we could know and verify what was true. It was the same with the World Health Organization and its disparagement of natural immunity, which was later changed. We were able to document the shifting definitions thanks only to this tool, which is now disabled.

What this means is the following: Any website can post anything today and take it down tomorrow and leave no record of what they posted unless some user somewhere happened to take a screenshot. Even then there is no way to verify its authenticity. The standard approach to know who said what and when is now gone. That is to say that the whole Internet is already being censored in real time, so that during these crucial weeks, when vast swaths of the public fully expect foul play, anyone in the information industry can get away with anything and not get caught.

We know what you are thinking. Surely this DDOS attack was not a coincidence. The time was just too perfect. And maybe that is right. We just do not know. Does Archive.org suspect something along those lines? Here is what they say:

"Last week, along with a DDOS attack and exposure of patron email addresses and encrypted passwords, the Internet Archive's website javascript was defaced, leading us to bring the site down to access and improve our security. The stored data of the Internet Archive is safe and we are working on resuming services safely. This new reality requires heightened attention to cyber security and we are responding. We apologize for the impact of these library services being unavailable."

Deep state? As with all these things, there is no way to know, but the effort to blast away the ability of the Internet to have a verified history fits neatly into the stakeholder model of information distribution that has clearly been prioritized on a global level. The Declaration of the Future of the Internet makes that very clear: the Internet should be "governed through the multi-stakeholder approach, whereby governments and relevant authorities partner with academics, civil society, the private sector, technical community and others." All of these stakeholders benefit from the ability to act online without leaving a trace.

To be sure, a librarian at Archive.org has written that "While the Wayback Machine has been in read-only mode, web crawling and archiving have continued. Those materials will be available via the Wayback Machine as services are secured."

When? We do not know. Before the election? In five years? There might be some technical reasons, but it might seem that if web crawling is continuing behind the scenes, as the note suggests, that too could be available in read-only mode now. It is not.

Disturbingly, this erasure of Internet memory is happening in more than one place. For many years, Google offered a cached version of the link you were seeking just below the live version. They have plenty of server space to enable that now, but no: that service is now completely gone. In fact, the Google cache service officially ended just a week or two before the Archive.org crash, at the end of September 2024.

Thus the two available tools for searching cached pages on the Internet disappeared within weeks of each other and within weeks of the November 5th election.

Other disturbing trends are also turning Internet search results increasingly into AI-controlled lists of establishment-approved narratives. The web standard used to be for search result rankings to be governed by user behavior, links, citations, and so forth. These were more or less organic metrics, based on an aggregation of data indicating how useful a search result was to Internet users. Put very simply, the more people found a search result useful, the higher it would rank.

Google now uses very different metrics to rank search results, including what it considers "trusted sources" and other opaque, subjective determinations.

Furthermore, the most widely used service that once ranked websites based on traffic is now gone. That service was called Alexa. The company that created it was independent. Then one day in 1999, it was bought by Amazon. That seemed encouraging because Amazon was well-heeled. The acquisition seemed to codify the tool that everyone was using as a kind of metric of status on the web. It was common back in the day to take note of an article somewhere on the web and then look it up on Alexa to see its reach. If it was important, one would take notice, but if it was not, no one particularly cared.

This is how an entire generation of web technicians functioned. The system worked as well as one could possibly expect.

Then, in 2014, years after acquiring the ranking service Alexa, Amazon did a strange thing. It released its home assistant (and surveillance device) with the same name. Suddenly, everyone had them in their homes and would find out anything by saying "Hey Alexa." Something seemed strange about Amazon naming its new product after an unrelated business it had acquired years earlier. No doubt there was some confusion caused by the naming overlap.

Here's what happened next. In 2022, Amazon actively took down the web ranking tool. It didn't sell it. It didn't raise the prices. It didn't do anything with it. It suddenly made it go completely dark. No one could figure out why. It was the industry standard, and suddenly it was gone. Not sold, just blasted away. No longer could anyone figure out the traffic-based website rankings of anything without paying very high prices for hard-to-use proprietary products.

All of these data points that might seem unrelated when considered individually are actually part of a long trajectory that has shifted our information landscape into unrecognizable territory. The Covid events of 2020-2023, with massive global censorship and propaganda efforts, greatly accelerated these trends. One wonders if anyone will remember what it was once like. The hacking and hobbling of Archive.org underscores the point: there will be no more memory.

As of this writing, fully three weeks of web content have not been archived. What we are missing and what has changed is anyone's guess. And we have no idea when the service will come back. It is entirely possible that it will not come back, that the only real history to which we can take recourse will be pre-October 8, 2024, the date on which everything changed. The Internet was founded to be free and democratic. It will require herculean efforts at this point to restore that vision, because something else is quickly replacing it.

Jeffrey Tucker is Founder, Author, and President at Brownstone Institute. He is also Senior Economics Columnist for Epoch Times, author of 10 books, including Life After Lockdown, and many thousands of articles in the scholarly and popular press. He speaks widely on topics of economics, technology, social philosophy, and culture.

Debbie Lerman, 2023 Brownstone Fellow, has a degree in English from Harvard. She is a retired science writer and a practicing artist in Philadelphia, PA.
2024-11-08T12:46:43
en
train
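The before-and-after verification workflow the article above describes (comparing an archived snapshot of a page against its live version) can be sketched against the Wayback Machine's public availability API. This is an illustrative sketch, not the authors' tooling, and it naturally assumes the service is reachable:

```python
# Illustrative sketch: ask the Wayback Machine for the snapshot of a
# page closest to a given date, the first step in comparing past and
# present versions of a page. Uses archive.org's public availability
# API; assumes the service is up (the article describes it being down).
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str) -> dict:
    """Return the closest archived snapshot record, or an empty dict."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    endpoint = f"https://archive.org/wayback/available?{query}"
    with urllib.request.urlopen(endpoint) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest", {})

# Example: find a snapshot of the CDC homepage from around October 2020.
snap = closest_snapshot("cdc.gov", "20201001")
print(snap.get("url", "no snapshot found"))
```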
42,013,756
hlycaffeinated
2024-11-01T03:22:48
2024 Dora Report Takeaways
null
https://newsletter.getdx.com/p/2024-dora-report
1
0
null
null
null
missing_parsing
2024 DORA Report
2024-10-30T13:01:59+00:00
Abi Noda, Laura Tacho
This is the latest issue of Engineering Enablement, a weekly newsletter covering the data behind world-class engineering organizations. To get articles like this in your inbox every week, subscribe.

This week, Laura Tacho, DX's CTO, is diving into the newly released 2024 State of DevOps report from DORA. For those unfamiliar, DORA is a long-running research program focused on helping engineering teams improve software delivery. Each year, they release a report analyzing the capabilities that drive software delivery and organizational performance. This year's report covers the impact of AI tools, interesting trends in throughput and quality, and how platform engineering and transformational leadership can influence performance. Here's Laura.

I look forward to the DORA report every year. When I first get it, I usually scroll through to the charts and graphs to see if there's anything really surprising. Then I read it cover to cover. It's long and thorough, so if you're just interested in the highlights, here's my list:

- AI helps individual productivity but hurts software delivery performance
- Measures of software delivery throughput and quality are continuing to move more independently
- Overall software delivery performance seems a bit weaker when compared to last year
- Invest in systems and processes that help developers execute independently (documentation, self-serve platforms, etc.)
- Developer platforms might slow down delivery overall, but they do boost individual and team performance

In 2024's report, for the second year in a row, DORA research shows that using AI tooling actually worsens software delivery performance. This is the area of research that I was most curious about, because 2023's report also shared some findings that went against the grain of what was being reported elsewhere in the industry. But the reason isn't necessarily what you might expect. "AI code is garbage, of course it breaks." While it is true that many respondents do not trust code generated by AI (39.2%), that is not what drives the correlation between AI tooling and worsened software delivery performance; it's because batch size seems to increase when AI is used in the coding process. And bigger changesets are riskier, something that DORA's research has long supported. It's just easier to write more code with AI.

What I found most interesting about the AI buzz is that it's contributing to operational process stability. Adopting AI is a clear priority, even on teams who are used to operating in a world where everything is urgent and priorities shift constantly. So the "drop everything to work on AI" push has at least given some companies more operational stability.

But AI is a story of tradeoffs. One perhaps counterintuitive finding is that adoption of AI tooling actually results in less time spent on valuable, meaningful work. However, the amount of toilsome work (meetings, admin overhead, busywork) remains mostly unaffected. This makes sense, though: the most common use case for AI is assisting with coding tasks. I've never met a developer who wanted to spend less time coding. We get time back because our meaningful work gets completed faster, not because we can get the robots to do the unsavory parts of our jobs.

Documentation is likely the biggest growth opportunity for AI when it comes to potential impact. DORA's research has long shown the correlation between documentation and performance, and adding AI to this problem space can accelerate positive results. It's not totally clear if using AI helps us generate better documentation, or if AI just makes it easier to work with bad documentation. But DORA estimates that if AI adoption increases by just 25%, we should expect a 7.5% increase in documentation quality (how reliable, up-to-date, findable, and accurate it is), the highest of all the factors in their prediction.

[Figure: 2024 DORA State of DevOps Report, page 37]

On the software delivery side of things, something interesting is happening with Change Failure Rate, one of the four key metrics to measure software delivery performance (the others being deployment frequency, lead time to change, and time to recover from a failed deployment). For a few years, there has been some evidence to show that quality and throughput are moving more independently. And this year, the medium performance cluster actually has a lower change failure rate than the high performance cluster, which is unusual. In previous years, all four key metrics tended to move together.

[Figure: 2024 DORA State of DevOps Report, page 14]

This year, the DORA team had to make an important choice when assigning rank to the clusters, choosing between a group that deploys more frequently with more frequent failures, or a group that has fewer failures while deploying, but deploys less frequently. This year, both of those groups report recovering from failures in less than one day, another interesting feature of 2024's clusters. In the end, deploying more frequently, albeit with more failures, was designated high performance, while the slow and steady approach fell into the medium performance category.

The change in distribution across performance clusters from 2023 is also something I'm curious about. The elite cluster remained mostly the same, while the high performance cluster shrank significantly, from 31% of respondents in 2023 to just 22% in 2024. Meanwhile, the low cluster represents 25% of respondents this year, up from 17% last year. And it's not a case of raising standards: the low cluster is actually performing worse in both deployment frequency and change lead time compared to last year, but has improved in change failure rate and failed deployment recovery time. Overall, delivery seems a bit worse for 2024. The last 18 months have been fairly tumultuous in many companies, so I'm not surprised to see the impact of our macro-economic situation show up this way.

Aside from AI and software delivery, this year's report went deeper into topics around platform engineering, developer experience, and transformational leadership. Platform engineering, and specifically the adoption of internal platforms, is correlated with a boost in both individual productivity and team performance. But platforms can slow down throughput and cause additional instability. Still, organisations who use platforms are shown to deliver software to users faster overall, and have higher operational performance. There are probably a few reasons here, one of which was mentioned on the DORA community thread by James Brookbank: "We very rarely see companies doing platform engineering primarily for developer productivity reasons." So can we increase security and governance while also improving developer productivity? Generally, yes. Even with tradeoffs, orgs that adopt a platform engineering model are still better off.

Critical to the success of an internal platform is user-centricity, or seeking to understand what your users are going to do with the software you build. In this case, internal developers are the users. It's important to think not just about what tasks they are trying to complete, but what their goals are. Sometimes this is called a "platform as product" mindset. Developer independence is another key factor: can developers get their work done without having to wait for an enabling team?

User-centricity was again a main feature of the discussion on developer experience. This year's report also featured interviews as a qualitative research method, used to enrich and triangulate the quantitative data collected in the DORA survey (yes, all this data is self-reported, even the software delivery metrics). In the interviews, some respondents share more about why user-centricity impacts their work, and how they derive value from what they do all day.

Finally, transformational leadership is called out as a key factor for high performance. In short, leaders should have a clear vision and support their team members. Depending on your own personal experience, it may or may not be surprising just how much these basic traits impact performance: decrease in burnout, increase in job satisfaction, increase in team, product, and organizational performance.

The whole point of DORA's research is to help you get better at getting better. The data presented here is not meant to be a maturity model, or something to measure against once and then forever chase an unchanging target. DORA performance clusters are not static, and the definitions of elite, high, mid, and lower performers change each year based on respondent data. DORA looks at the data to define these clusters; it does not fix performance thresholds in advance and then see how many respondents fall into each category. This is why the definitions change each year – and another reminder that using these DORA benchmarks should be a continuous process, not just an assessment that is done once. These clusters are not a maturity model.

One thing I always keep in mind when reading a benchmarking report like DORA is that high performance is a horizon to chase, a never-ending story, not something that is ever finished. While some folks may find the lack of a finish line demotivating, I enjoy the journey of getting better. And at its core, DORA is an organization focused on giving you more data and information so you can get better at getting better. The four key metrics from DORA can be useful metrics to help you keep track of progress. I wrote a guide on how to think about DORA metrics and other developer productivity frameworks here.

Here's a roundup of Developer Productivity job openings. Find more open roles here.

- Capital One is hiring multiple Product Manager roles - DevEx | US
- Picnic is hiring a Technical Product Owner - Platform, and DevEx Coach | Amsterdam
- Shopware is hiring a Technical Delivery Manager - Cloud Infra | Berlin
- SiriusXM is hiring a Staff Software Engineer - Platform Observability | US
- Adobe is hiring a Sr Engineering Manager - DevEx | San Jose

That's it for this week. Thanks for reading.

-Abi
2024-11-08T17:28:46
null
train
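The four key metrics the newsletter above names (deployment frequency, lead time for changes, change failure rate, and failed deployment recovery time) can be computed from plain deployment records. A rough sketch follows; the record fields are assumptions for illustration, and DORA's published numbers come from survey self-reports rather than telemetry like this:

```python
# Rough sketch: computing the four DORA key metrics from deployment
# records. The fields below (committed_at, deployed_at, failed,
# recovered_at) are assumptions for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deployment:
    committed_at: datetime                 # when the change was committed
    deployed_at: datetime                  # when it reached production
    failed: bool                           # caused a failure in production?
    recovered_at: datetime | None = None   # when any failure was resolved

def dora_metrics(deploys: list[Deployment], window_days: int) -> dict:
    failures = [d for d in deploys if d.failed]
    lead_times = sorted(d.deployed_at - d.committed_at for d in deploys)
    recoveries = [d.recovered_at - d.deployed_at
                  for d in failures if d.recovered_at is not None]
    return {
        # Deployment frequency: deployments per day over the window.
        "deploys_per_day": len(deploys) / window_days,
        # Lead time for changes: median commit-to-production time.
        "median_lead_time": lead_times[len(lead_times) // 2],
        # Change failure rate: share of deployments causing a failure.
        "change_failure_rate": len(failures) / len(deploys),
        # Failed deployment recovery time: mean time to restore service.
        "mean_recovery_time": (sum(recoveries, timedelta()) / len(recoveries)
                               if recoveries else None),
    }
```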
42,013,762
misonic
2024-11-01T03:24:17
Embeddings are underrated
null
https://technicalwriting.dev/data/embeddings.html
363
175
[ 42014936, 42016799, 42014173, 42016747, 42015790, 42018957, 42014831, 42014036, 42016369, 42014683, 42018485, 42015282, 42015723, 42014634, 42015069, 42014495, 42016494, 42016671, 42020927, 42015899, 42020122, 42014850, 42014125, 42025726, 42027944, 42029244, 42016650, 42030692, 42015484, 42022496, 42019380, 42019068 ]
null
null
null
null
null
null
null
null
null
train
42,013,777
geox
2024-11-01T03:27:00
It's not to be. Universe too short for Shakespeare typing monkeys
null
https://www.sciencedaily.com/releases/2024/10/241030150811.htm
2
0
null
null
null
null
null
null
null
null
null
null
train
42,013,784
sandwichsphinx
2024-11-01T03:27:45
Celebrating 30 Years of Display Innovation
null
https://www.appliedmaterials.com/us/en/blog/blog-posts/celebrating-30-years-of-display-innovation.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,013,800
thunderbong
2024-11-01T03:31:01
Opera Browser Vulnerable to Cross-Browser Attacks via Malicious Extensions
null
https://cyberinsider.com/opera-browser-vulnerable-to-cross-browser-attacks-via-malicious-extensions/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,013,820
null
2024-11-01T03:34:26
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,013,838
mosfets
2024-11-01T03:38:03
Ask HN: What's the best on-device TTS engine with natural voice
T5-TTS is one, but its voice doesn't sound very natural.
null
2
1
[ 42013850 ]
null
null
null
null
null
null
null
null
null
train
42,013,843
jamii
2024-11-01T03:38:37
What is the point of an online conference?
null
https://www.scattered-thoughts.net/writing/what-is-the-point-of-an-online-conference/
140
105
[ 42015782, 42015123, 42014205, 42054369, 42014327, 42014362, 42017132, 42016222, 42014996, 42013849, 42014728, 42014067, 42017431, 42015087, 42016004, 42015312, 42014235, 42019418, 42015882, 42014150, 42014226, 42014415, 42014026 ]
null
null
null
null
null
null
null
null
null
train
42,013,851
ramn7
2024-11-01T03:40:32
Microsoft and Google are at war again
null
https://www.theverge.com/2024/10/31/24284543/microsoft-google-cloud-war-notepad
20
8
[ 42015214, 42014381, 42015570 ]
null
null
null
null
null
null
null
null
null
train
42,013,853
matthewddy
2024-11-01T03:40:48
null
null
null
1
null
[ 42013854 ]
null
true
null
null
null
null
null
null
null
train
42,013,857
spuiszis
2024-11-01T03:41:12
The FTC's New Click-to-Cancel Rule, Amazon Prime, and the Future of Subscription
null
https://stephen.fm/dark-patterns-in-the-wild-amazon-prime/
5
0
null
null
null
null
null
null
null
null
null
null
train
42,013,858
sonabinu
2024-11-01T03:41:44
Monkeys will never type Shakespeare, study finds
null
https://www.bbc.com/news/articles/c748kmvwyv9o
6
1
[ 42013883 ]
null
null
null
null
null
null
null
null
null
train
42,013,862
yyjhao
2024-11-01T03:42:45
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,013,897
webbytuts
2024-11-01T03:50:41
Google's Experimental AI Transforms Heavy PDF into Quick, Fun Audio Summaries
null
https://vmvirtualmachine.com/meet-illuminate-googles-experimental-ai-that-transforms-heavy-research-papers-into-quick-fun-audio-summaries/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,013,902
nilslice
2024-11-01T03:50:54
XTP: Make Squishy Software
null
https://www.getxtp.com/blog/meet-xtp
42
12
[ 42013909, 42022305, 42032463, 42032630 ]
null
null
null
null
null
null
null
null
null
train
42,013,929
todsacerdoti
2024-11-01T03:54:35
Seven Obscure Languages in Seven Weeks
null
https://pragprog.com/titles/dzseven/seven-obscure-languages-in-seven-weeks/
7
0
null
null
null
null
null
null
null
null
null
null
train
42,013,980
gnabgib
2024-11-01T04:08:36
Making agriculture more resilient to climate change
null
https://news.mit.edu/2024/making-agriculture-more-resilient-climate-change-1101
3
0
null
null
null
null
null
null
null
null
null
null
train
42,014,003
SyncfusionBlogs
2024-11-01T04:13:54
null
null
null
1
null
[ 42014004 ]
null
true
null
null
null
null
null
null
null
train
42,014,024
mattfrasernz
2024-11-01T04:19:38
Show HN: Strava for Musicians
I have been working on a new app to help musicians get motivated to practice. Think Strava, but for musicians.

The challenge with music practice is that it's invisible. It's incredibly easy to skip a day because 'nobody will know'. I have seen countless talented musicians stop enjoying their instrument and ultimately stop playing entirely due to this phenomenon.

MusoLink solves the problem by making your practice visible and social.

In my day job, I manage software engineering teams, and in my spare time I run Australia's top pipe band. After using Strava to help get ready for a marathon, I realised something similar would be incredibly useful for the band. I couldn't find anything so I started coding each night in front of the TV once the kids are in bed. This is the result!

I have had around 20-30 musicians each day helping me test and give feedback, but I'm ready to open it up to the world now! Already, a school has used it to prepare for a World Championship - which they won! And several users have won prestigious competitions.
https://www.musolink.com/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,045
Rendello
2024-11-01T04:26:04
UTF-8 characters that behave oddly when the case is changed
null
https://gist.github.com/rendello/d37552507a389656e248f3255a618127
71
63
[ 42014046, 42018116, 42016936, 42017552, 42018296, 42016981, 42018762, 42014225 ]
null
null
no_error
Unicode codepoints that expand or contract when case is changed in UTF-8. Good for testing parsers. Includes the data `utf8_case_data.rs` and the script to generate it, `generate_utf8.py`.
null
rendello
/* Copyright (c) 2024 Rendello

   Permission to use, copy, modify, and/or distribute this software for any
   purpose with or without fee is hereby granted.

   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */

// ==========================================================================
//! Unicode codepoints that expand or contract when case is changed in UTF-8.
// ==========================================================================

pub const LOWERCASING_CONTRACTS: [&str; 22] = [
    "ẞ", /* ß (3->2), -1 bytes */ "Ω", /* ω (3->2), -1 bytes */ "Å", /* å (3->2), -1 bytes */
    "Ɫ", /* ɫ (3->2), -1 bytes */ "Ɽ", /* ɽ (3->2), -1 bytes */ "Ɑ", /* ɑ (3->2), -1 bytes */
    "Ɱ", /* ɱ (3->2), -1 bytes */ "Ɐ", /* ɐ (3->2), -1 bytes */ "Ɒ", /* ɒ (3->2), -1 bytes */
    "Ȿ", /* ȿ (3->2), -1 bytes */ "Ɀ", /* ɀ (3->2), -1 bytes */ "Ɥ", /* ɥ (3->2), -1 bytes */
    "Ɦ", /* ɦ (3->2), -1 bytes */ "Ɜ", /* ɜ (3->2), -1 bytes */ "Ɡ", /* ɡ (3->2), -1 bytes */
    "Ɬ", /* ɬ (3->2), -1 bytes */ "Ɪ", /* ɪ (3->2), -1 bytes */ "Ʞ", /* ʞ (3->2), -1 bytes */
    "Ʇ", /* ʇ (3->2), -1 bytes */ "Ʝ", /* ʝ (3->2), -1 bytes */ "Ʂ", /* ʂ (3->2), -1 bytes */
    "K", /* k (3->1), -2 bytes */
];

pub const LOWERCASING_EXPANDS: [&str; 2] = [
    "Ⱥ", /* ⱥ (2->3), +1 bytes */ "Ⱦ", /* ⱦ (2->3), +1 bytes */
];

pub const LOWERCASING_EXPANDS_MULTI_CHAR: [&str; 1] = [
    "İ", /* i̇ (2->3), +1 bytes, +1 chars */
];

pub const UPPERCASING_CONTRACTS: [&str; 13] = [
    "ı", /* I (2->1), -1 bytes */ "ſ", /* S (2->1), -1 bytes */ "ᲀ", /* В (3->2), -1 bytes */
    "ᲁ", /* Д (3->2), -1 bytes */ "ᲂ", /* О (3->2), -1 bytes */ "ᲃ", /* С (3->2), -1 bytes */
    "ᲄ", /* Т (3->2), -1 bytes */ "ᲅ", /* Т (3->2), -1 bytes */ "ᲆ", /* Ъ (3->2), -1 bytes */
    "ᲇ", /* Ѣ (3->2), -1 bytes */ "ι", /* Ι (3->2), -1 bytes */ "ⱥ", /* Ⱥ (3->2), -1 bytes */
    "ⱦ", /* Ⱦ (3->2), -1 bytes */
];

pub const UPPERCASING_CONTRACTS_MULTI_CHAR: [&str; 5] = [
    "ff", /* FF (3->2), -1 bytes, +1 chars */ "fi", /* FI (3->2), -1 bytes, +1 chars */
    "fl", /* FL (3->2), -1 bytes, +1 chars */ "ſt", /* ST (3->2), -1 bytes, +1 chars */
    "st", /* ST (3->2), -1 bytes, +1 chars */
];

pub const UPPERCASING_EXPANDS: [&str; 18] = [
    "ȿ", /* Ȿ (2->3), +1 bytes */ "ɀ", /* Ɀ (2->3), +1 bytes */ "ɐ", /* Ɐ (2->3), +1 bytes */
    "ɑ", /* Ɑ (2->3), +1 bytes */ "ɒ", /* Ɒ (2->3), +1 bytes */ "ɜ", /* Ɜ (2->3), +1 bytes */
    "ɡ", /* Ɡ (2->3), +1 bytes */ "ɥ", /* Ɥ (2->3), +1 bytes */ "ɦ", /* Ɦ (2->3), +1 bytes */
    "ɪ", /* Ɪ (2->3), +1 bytes */ "ɫ", /* Ɫ (2->3), +1 bytes */ "ɬ", /* Ɬ (2->3), +1 bytes */
    "ɱ", /* Ɱ (2->3), +1 bytes */ "ɽ", /* Ɽ (2->3), +1 bytes */ "ʂ", /* Ʂ (2->3), +1 bytes */
    "ʇ", /* Ʇ (2->3), +1 bytes */ "ʝ", /* Ʝ (2->3), +1 bytes */ "ʞ", /* Ʞ (2->3), +1 bytes */
];

pub const UPPERCASING_EXPANDS_MULTI_CHAR: [&str; 89] = [
    "ΐ", /* Ϊ́ (2->6), +4 bytes, +2 chars */ "ΰ", /* Ϋ́ (2->6), +4 bytes, +2 chars */
    "ὒ", /* Υ̓̀ (3->6), +3 bytes, +2 chars */ "ὔ", /* Υ̓́ (3->6), +3 bytes, +2 chars */
    "ὖ", /* Υ̓͂ (3->6), +3 bytes, +2 chars */ "ᾷ", /* Α͂Ι (3->6), +3 bytes, +2 chars */
    "ῇ", /* Η͂Ι (3->6), +3 bytes, +2 chars */ "ῒ", /* Ϊ̀ (3->6), +3 bytes, +2 chars */
    "ΐ", /* Ϊ́ (3->6), +3 bytes, +2 chars */ "ῗ", /* Ϊ͂ (3->6), +3 bytes, +2 chars */
    "ῢ", /* Ϋ̀ (3->6), +3 bytes, +2 chars */ "ΰ", /* Ϋ́ (3->6), +3 bytes, +2 chars */
    "ῧ", /* Ϋ͂ (3->6), +3 bytes, +2 chars */ "ῷ", /* Ω͂Ι (3->6), +3 bytes, +2 chars */
    "և", /* ԵՒ (2->4), +2 bytes, +1 chars */
    "ᾀ", /* ἈΙ (3->5), +2 bytes, +1 chars */ "ᾁ", /* ἉΙ (3->5), +2 bytes, +1 chars */
    "ᾂ", /* ἊΙ (3->5), +2 bytes, +1 chars */ "ᾃ", /* ἋΙ (3->5), +2 bytes, +1 chars */
    "ᾄ", /* ἌΙ (3->5), +2 bytes, +1 chars */ "ᾅ", /* ἍΙ (3->5), +2 bytes, +1 chars */
    "ᾆ", /* ἎΙ (3->5), +2 bytes, +1 chars */ "ᾇ", /* ἏΙ (3->5), +2 bytes, +1 chars */
    "ᾈ", /* ἈΙ (3->5), +2 bytes, +1 chars */ "ᾉ", /* ἉΙ (3->5), +2 bytes, +1 chars */
    "ᾊ", /* ἊΙ (3->5), +2 bytes, +1 chars */ "ᾋ", /* ἋΙ (3->5), +2 bytes, +1 chars */
    "ᾌ", /* ἌΙ (3->5), +2 bytes, +1 chars */ "ᾍ", /* ἍΙ (3->5), +2 bytes, +1 chars */
    "ᾎ", /* ἎΙ (3->5), +2 bytes, +1 chars */ "ᾏ", /* ἏΙ (3->5), +2 bytes, +1 chars */
    "ᾐ", /* ἨΙ (3->5), +2 bytes, +1 chars */ "ᾑ", /* ἩΙ (3->5), +2 bytes, +1 chars */
    "ᾒ", /* ἪΙ (3->5), +2 bytes, +1 chars */ "ᾓ", /* ἫΙ (3->5), +2 bytes, +1 chars */
    "ᾔ", /* ἬΙ (3->5), +2 bytes, +1 chars */ "ᾕ", /* ἭΙ (3->5), +2 bytes, +1 chars */
    "ᾖ", /* ἮΙ (3->5), +2 bytes, +1 chars */ "ᾗ", /* ἯΙ (3->5), +2 bytes, +1 chars */
    "ᾘ", /* ἨΙ (3->5), +2 bytes, +1 chars */ "ᾙ", /* ἩΙ (3->5), +2 bytes, +1 chars */
    "ᾚ", /* ἪΙ (3->5), +2 bytes, +1 chars */ "ᾛ", /* ἫΙ (3->5), +2 bytes, +1 chars */
    "ᾜ", /* ἬΙ (3->5), +2 bytes, +1 chars */ "ᾝ", /* ἭΙ (3->5), +2 bytes, +1 chars */
    "ᾞ", /* ἮΙ (3->5), +2 bytes, +1 chars */ "ᾟ", /* ἯΙ (3->5), +2 bytes, +1 chars */
    "ᾠ", /* ὨΙ (3->5), +2 bytes, +1 chars */ "ᾡ", /* ὩΙ (3->5), +2 bytes, +1 chars */
    "ᾢ", /* ὪΙ (3->5), +2 bytes, +1 chars */ "ᾣ", /* ὫΙ (3->5), +2 bytes, +1 chars */
    "ᾤ", /* ὬΙ (3->5), +2 bytes, +1 chars */ "ᾥ", /* ὭΙ (3->5), +2 bytes, +1 chars */
    "ᾦ", /* ὮΙ (3->5), +2 bytes, +1 chars */ "ᾧ", /* ὯΙ (3->5), +2 bytes, +1 chars */
    "ᾨ", /* ὨΙ (3->5), +2 bytes, +1 chars */ "ᾩ", /* ὩΙ (3->5), +2 bytes, +1 chars */
    "ᾪ", /* ὪΙ (3->5), +2 bytes, +1 chars */ "ᾫ", /* ὫΙ (3->5), +2 bytes, +1 chars */
    "ᾬ", /* ὬΙ (3->5), +2 bytes, +1 chars */ "ᾭ", /* ὭΙ (3->5), +2 bytes, +1 chars */
    "ᾮ", /* ὮΙ (3->5), +2 bytes, +1 chars */ "ᾯ", /* ὯΙ (3->5), +2 bytes, +1 chars */
    "ᾲ", /* ᾺΙ (3->5), +2 bytes, +1 chars */ "ῂ", /* ῊΙ (3->5), +2 bytes, +1 chars */
    "ῲ", /* ῺΙ (3->5), +2 bytes, +1 chars */
    "ʼn", /* ʼN (2->3), +1 bytes, +1 chars */ "ǰ", /* J̌ (2->3), +1 bytes, +1 chars */
    "ὐ", /* Υ̓ (3->4), +1 bytes, +1 chars */ "ᾳ", /* ΑΙ (3->4), +1 bytes, +1 chars */
    "ᾴ", /* ΆΙ (3->4), +1 bytes, +1 chars */ "ᾶ", /* Α͂ (3->4), +1 bytes, +1 chars */
    "ᾼ", /* ΑΙ (3->4), +1 bytes, +1 chars */ "ῃ", /* ΗΙ (3->4), +1 bytes, +1 chars */
    "ῄ", /* ΉΙ (3->4), +1 bytes, +1 chars */ "ῆ", /* Η͂ (3->4), +1 bytes, +1 chars */
    "ῌ", /* ΗΙ (3->4), +1 bytes, +1 chars */ "ῖ", /* Ι͂ (3->4), +1 bytes, +1 chars */
    "ῤ", /* Ρ̓ (3->4), +1 bytes, +1 chars */ "ῦ", /* Υ͂ (3->4), +1 bytes, +1 chars */
    "ῳ", /* ΩΙ (3->4), +1 bytes, +1 chars */ "ῴ", /* ΏΙ (3->4), +1 bytes, +1 chars */
    "ῶ", /* Ω͂ (3->4), +1 bytes, +1 chars */ "ῼ", /* ΩΙ (3->4), +1 bytes, +1 chars */
    "ﬓ", /* ՄՆ (3->4), +1 bytes, +1 chars */ "ﬔ", /* ՄԵ (3->4), +1 bytes, +1 chars */
    "ﬕ", /* ՄԻ (3->4), +1 bytes, +1 chars */ "ﬖ", /* ՎՆ (3->4), +1 bytes, +1 chars */
    "ﬗ", /* ՄԽ (3->4), +1 bytes, +1 chars */
];
2024-11-08T00:03:08
en
train
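The gist above is straightforward to sanity-check in any language with full Unicode case mappings; a minimal Python sketch follows (Python's str.upper()/str.lower() implement the full mappings, including the multi-character expansions listed in the gist):

```python
# Sanity-check the gist's data: some codepoints change UTF-8 byte length
# (and sometimes character count) under case mapping. The samples are
# drawn from the gist's arrays; note "K" is U+212A KELVIN SIGN and "ſ"
# is U+017F LATIN SMALL LETTER LONG S.
samples = ["ẞ", "İ", "ﬁ", "ſ", "K", "ɐ"]

for s in samples:
    for label, cased in (("lower", s.lower()), ("upper", s.upper())):
        if cased == s:
            continue  # this direction doesn't change the string
        before, after = len(s.encode("utf-8")), len(cased.encode("utf-8"))
        print(f"{s!r} -> {label}() -> {cased!r}: "
              f"{before}->{after} bytes, {len(s)}->{len(cased)} chars")

# e.g. "ẞ".lower() == "ß" contracts 3->2 bytes, the Kelvin sign lowers
# to "k" (3->1 bytes), and "ﬁ".upper() == "FI" contracts 3->2 bytes
# while expanding 1->2 chars.
```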
42,014,056
abstruse1
2024-11-01T04:28:56
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,063
paurora
2024-11-01T04:31:21
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,079
null
2024-11-01T04:34:04
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,014,081
hboon
2024-11-01T04:35:24
Show HN: Demos–provision Digital Ocean w Terraform+deploy front/back end w Kamal
I normally use Render, but looked into self-hosting some stuff recently. It took a few days to figure out, so I cleaned up my repos and pushed them to GitHub. Let me know if you find it useful.
https://hboon.com/terraform-and-kamal-for-digital-ocean-demo-repositories/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,091
Jtsummers
2024-11-01T04:37:35
The Debugging Book
null
https://www.debuggingbook.org
2
0
null
null
null
null
null
null
null
null
null
null
train
42,014,092
codetoli
2024-11-01T04:37:47
Show HN: I Built Popupbuild to help startup increase their sales
Please share your reviews.
https://popupbuild.netlify.app/p/landing
1
0
null
null
null
missing_parsing
PopupBuild - Create Beautiful Popups
null
Sarah Johnson Senior Developer at TechCorp
"PopupBuild has revolutionized how we handle user notifications. The customization options and ease of use are exactly what we needed!" Sarah Johnson Senior Developer at TechCorp "The code quality and performance are outstanding. It's saved us countless hours of development time." Michael Chen Lead Engineer at StartupX "Finally, a popup solution that's both beautiful and performant. Our conversion rates have improved significantly!" Emma Davis Product Manager at DesignCo
2024-11-08T20:51:56
null
train
42,014,123
vpotta
2024-11-01T04:47:31
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,129
philngo
2024-11-01T04:48:34
United States Camel Corps
null
https://en.wikipedia.org/wiki/United_States_Camel_Corps
5
1
[ 42014145 ]
null
null
null
null
null
null
null
null
null
train
42,014,148
221561
2024-11-01T04:53:29
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,157
dipaksahirav
2024-11-01T04:55:42
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,178
alex_hirner
2024-11-01T05:00:54
I tried fuzz testing and found a crash
null
https://ha.nnes.dev/blog/fuzzing-is-fun/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,014,193
paradite
2024-11-01T05:03:56
Why don't more people use Linux?
null
https://world.hey.com/dhh/why-don-t-more-people-use-linux-33b75f53
9
10
[ 42014428, 42014321, 42014404, 42014261, 42015803, 42014763, 42015443, 42014645 ]
null
null
no_error
Why don't more people use Linux?
null
null
David Heinemeier Hansson, September 2, 2024

A couple of weeks ago, I saw a tweet asking: "If Linux is so good, why aren't more people using it?" And it's a fair question! It intuitively rings true until you give it a moment's consideration. Linux is even free, so what's stopping mass adoption, if it's actually better? My response:

If exercising is so healthy, why don't more people do it?
If reading is so educational, why don't more people do it?
If junk food is so bad for you, why do so many people eat it?

The world is full of free invitations to self-improvement that are ignored by most people most of the time. Putting it crudely, it's easier to be fat and ignorant in a world of cheap, empty calories than it is to be fit and informed. It's hard to resist the temptation of minimal effort.

And Linux isn't minimal effort. It's an operating system that demands more of you than do the commercial offerings from Microsoft and Apple. Thus, it serves as a dojo for understanding computers better. With a sensei who keeps demanding you figure problems out on your own in order to learn and level up.

Now I totally understand why most computer users aren't interested in an intellectual workout when all they want to do is browse the web or use an app. They're not looking to become a black belt in computing fundamentals.

But programmers are different. Or ought to be different. They're like firefighters. Fitness isn't the purpose of firefighting, but a prerequisite. You're a better firefighter when you have the stamina and strength to carry people out of a burning building on your shoulders than if you do not. So most firefighters work to be fit in order to serve that mission.

That's why I'd love to see more developers take another look at Linux. Such that they may develop better proficiency in the basic katas of the internet. Such that they aren't scared to connect a computer to the internet without the cover of a cloud.

Besides, if you're able to figure out how to set up a modern build pipeline for JavaScript or even correctly configure IAM for AWS, you already have all the stamina you need for the Linux journey. Think about giving it another try. Not because it is easy, but because it is worth it.
2024-11-08T11:40:20
en
train
42,014,204
null
2024-11-01T05:07:06
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,014,211
Anele4Mathaba_
2024-11-01T05:08:42
null
null
null
1
null
[ 42014212 ]
null
true
null
null
null
null
null
null
null
train
42,014,240
xanderlewis
2024-11-01T05:14:57
Tuning
null
https://tuning.ableton.com/
3
0
null
null
null
no_error
Tuning
null
null
Welcome

A tuning system is a way to organize musical pitch, by narrowing down from the infinite number of possible pitches to a usable subset. A tuning system tells you where to put the frets on a guitar, for example, to make the pitches that form melodies and chords. Tuning systems also inform the techniques that singers, violinists, and other musicians use to tune their instruments and find specific pitches. Tuning systems are often associated with a particular musical tradition or culture, and they give us the names we use for pitches and their relationships. Ableton Live 12 now supports different tuning systems beyond 12-tone equal temperament (or 12-TET), which has traditionally been the default tuning for MIDI hardware and software. This website allows you to explore the various tuning presets that come with Live 12, and even make your own.
2024-11-08T10:17:10
en
train
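The page above contrasts alternative systems with 12-TET, whose math reduces to a single formula: each semitone step multiplies frequency by the twelfth root of two. A minimal sketch, assuming the conventional A4 = 440 Hz reference (an assumption; the page does not state one):

```python
# 12-tone equal temperament: the octave (frequency ratio 2) is split
# into 12 equal semitones, so each step multiplies frequency by 2**(1/12).
# A4 = 440 Hz is an assumed, though conventional, reference pitch.
A4 = 440.0

def tet12_freq(semitones_from_a4: int) -> float:
    """Frequency of the pitch n equal-tempered semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Middle C (C4) is 9 semitones below A4: about 261.63 Hz.
print(round(tet12_freq(-9), 2))   # 261.63
# One octave above A4 doubles the frequency: 880 Hz.
print(round(tet12_freq(12), 2))   # 880.0
```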
42,014,241
null
2024-11-01T05:14:58
null
null
null
null
null
[ 42014242 ]
[ "true" ]
null
null
null
null
null
null
null
null
train
42,014,245
null
2024-11-01T05:16:16
null
null
null
null
null
[ 42014246 ]
[ "true" ]
true
null
null
null
null
null
null
null
train
42,014,247
MilnerRoute
2024-11-01T05:16:46
Inflation gauge falls to its lowest level since early 2021
null
https://apnews.com/article/inflation-prices-election-federal-reserve-rates-economy-e4ff4b6745ea1c8badd6a0502f372f25
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,248
null
2024-11-01T05:17:00
null
null
null
null
null
[ 42014249 ]
[ "true" ]
true
null
null
null
null
null
null
null
train
42,014,251
ocean_moist
2024-11-01T05:17:12
Bcnm: A better client network manager
null
https://skarnet.org/software/bcnm/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,255
ocean_moist
2024-11-01T05:18:22
The Entropic State as Creative Necessity
null
https://rohan.ga/blog/philosophy/creativity/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,014,266
limpwristedpimp
2024-11-01T05:19:51
Scientist captures elusive bird on film
null
https://www.microsoft.com/penis-bird-first-time
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,273
3Sophons
2024-11-01T05:22:01
WebAssembly from Edge to the Cloud at FOSDEM 2025 – Call for Speakers Open
null
https://lists.fosdem.org/pipermail/fosdem/2024q4/003581.html
3
1
[ 42014274 ]
null
null
null
null
null
null
null
null
null
train
42,014,292
ctoth
2024-11-01T05:25:52
OHA reports 3 humans with bird flu traveled to Oregon during Washington outbreak
null
https://www.koin.com/news/oregon/oregon-human-washington-cases-bird-flu-10312024/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,014,296
onkarjanwa89
2024-11-01T05:27:10
null
null
null
1
null
[ 42014297 ]
null
true
null
null
null
null
null
null
null
train
42,014,303
hxpfnjc
2024-11-01T05:28:15
null
null
null
1
null
[ 42014304 ]
null
true
null
null
null
null
null
null
null
train
42,014,305
austinallegro
2024-11-01T05:28:32
Academy Film Archive Lays Off Multiple Staff Members in Restructuring
null
https://variety.com/2024/film/news/academy-ampas-layoffs-library-archive-1236196896/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,311
xanderlewis
2024-11-01T05:29:52
AI-Generated or Real?
null
https://detectfakes.kellogg.northwestern.edu
2
0
null
null
null
null
null
null
null
null
null
null
train
42,014,314
goodereader
2024-11-01T05:30:36
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,317
Sinaghodsi93
2024-11-01T05:31:35
Launch Wiser, Rank Higher: Your ProductHunt Secret Weapon
null
https://indielaunch.vercel.app
1
0
[ 42014318 ]
null
null
null
null
null
null
null
null
null
train
42,014,319
noleary
2024-11-01T05:31:40
How to Bypass Authentication on RushOrderTees
null
https://xeiaso.net/notes/2024/rushordertees-total-auth-bypass/
1
0
null
null
null
no_error
How to completely bypass authentication on RushOrderTees
null
null
Published on 09/20/2024, 271 words, 1 minute to read

Just don't enter a password lol

[Photo: a local wild grain plant on a blue sky - Photo by Xe Iaso, Canon EOS R6mkii, Helios 44-2 58mm f/2]

While evaluating RushOrderTees for a previous employer, an embarrassing security vulnerability was discovered. User accounts created inside their t-shirt designer do not have a password attached to them, allowing anyone to authenticate with only an email address. This allows disclosure of at least this information:

- Full name on any orders
- Any custom designs
- Order id numbers
- Phone numbers when placing new orders

This was proven by attempting to log into a RushOrderTees company account using a publicly visible email address.

Replication

RushOrderTees has not acknowledged this issue and it is still trivial to reproduce it today:

1. Create a new design
2. Attempt to purchase it
3. Save it with a custom name
4. Enter in your email address

You have now created a RushOrderTees account without a password attached.

Explanation

This lapse in security is understandable from a customer acquisition standpoint (every barrier in the way of users paying makes you lose half of your potential customer base), but is fairly inexcusable in 2024. Additionally, by making user accounts only protected with email addresses (public identifiers), this bypasses the entire point of authentication. It is difficult to figure out if this is a design choice or a security issue.

Timeline

- 2024-04-15: Initial contact made to RushOrderTees' sales@ and security@ email. The security@ email bounced.
- 2024-04-16: Reduction in scope of the issue and complete replication instructions discovered.
- 2024-04-17: Various other attempts were made to get their attention; all ended in failure.
- 2024-09-20: This bulletin was posted.

RushOrderTees has not acknowledged this bulletin and did not review it prior to publishing. Facts and circumstances may have changed since publication. Please contact me before jumping to conclusions if something seems wrong or unclear.
2024-11-08T20:41:55
en
train
42,014,332
ericwang1997
2024-11-01T05:34:19
Ask HN: How do I find UI/UX design systems for my app?
Been out of the web dev game for a bit. What are the latest popular design systems? Ideally for React and B2C? Is there a directory out there?
null
2
1
[ 42019520 ]
null
null
null
null
null
null
null
null
null
train
42,014,338
mdhb
2024-11-01T05:35:23
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,343
milar
2024-11-01T05:36:50
Deployments still suck, but maybe they don't have to
null
https://www.blacksmith.sh/blog/deployments-still-suck-but-maybe-they-dont-have-to
5
0
null
null
null
null
null
null
null
null
null
null
train
42,014,347
FMecha
2024-11-01T05:37:34
Microsoft Sucks at Everything [video]
null
https://www.youtube.com/watch?v=LZzubS1ILTs
5
1
[ 42014478 ]
null
null
null
null
null
null
null
null
null
train
42,014,352
mentalically
2024-11-01T05:38:32
Advertising and mixed motives (2022) [for search engines]
null
https://alexandre.storelli.fr/advertising-and-mixed-motives-sergey-brin-larry-page-1998/
1
0
null
null
null
no_error
Advertising and Mixed Motives (Sergey Brin & Larry Page, 1998)
2022-02-15T21:39:32.000Z
Alexandre Storelli
Feb 15, 2022 • 2 min read

The vision Google founders had in 1998 about the relationship between their search engine and advertising is, years later, quite ironic:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine one of the top results for cellular phone is "The Effect of Cellular Phone Use Upon Driver Attention", a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Since it is very difficult even for experts to evaluate search engines, search engine bias is particularly insidious. A good example was OpenText, which was reported to be selling companies the right to be listed at the top of the search results for particular queries. This type of bias is much more insidious than advertising, because it is not clear who "deserves" to be there, and who is willing to pay money to be listed. This business model resulted in an uproar, and OpenText has ceased to be a viable search engine. But less blatant bias are likely to be tolerated by the market. For example, a search engine could add a small factor to search results from "friendly" companies, and subtract a factor from results from competitors. This type of bias is very difficult to detect but could still have a significant effect on the market. Furthermore, advertising income often provides an incentive to provide poor quality search results. For example, we noticed a major search engine would not return a large airline's homepage when the airline's name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

In The Anatomy of a Large-Scale Hypertextual Web Search Engine, Sergey Brin and Larry Page (1998) http://ilpubs.stanford.edu:8090/361/ or http://infolab.stanford.edu/~backrub/google.html
2024-11-08T12:23:24
en
train
42,014,353
johntfella
2024-11-01T05:38:35
Can we trust official statistics? The data gaps shaping our view of the economy
null
https://www.ft.com/content/4978a9f8-e2d5-4a9d-80e7-02b0ba2bd56c
4
0
null
null
null
null
null
null
null
null
null
null
train
42,014,364
noleary
2024-11-01T05:44:12
Random access DNA memory using Boolean search in an archival file storage system
null
https://pmc.ncbi.nlm.nih.gov/articles/PMC8564878/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,372
quillzhou
2024-11-01T05:46:14
Show HN: Open-in-SearchGPT – Access SearchGPT in your Chrome context menus
null
https://chromewebstore.google.com/detail/open-in-searchgpt/mdfpjfomkgddaoibacdfjajiddbggdjj
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,374
yem2432
2024-11-01T05:46:36
Every company can scale properly
null
https://medium.com/@yemadetunji_87001/intelligent-growth-management-the-missing-framework-for-high-growth-companies-794476c8a2a5
1
0
null
null
null
no_error
Intelligent Growth Management: The Missing Framework for High-Growth Companies
2024-11-01T05:36:50.282Z
Yem Adetunji
Introduction

Growing a company is chaotic and ambitious. The urgency to scale can lead to reckless growth — expanding without structure, adding people and projects without a clear plan. Growth isn't just about getting bigger; it's about building the right foundation to stay big.

While most growing companies focus solely on metrics like revenue, users, and headcount, sustainable growth requires a more sophisticated approach. Intelligent Growth Management (IGM) represents a paradigm shift in scaling — combining operational rigor with adaptability through AI-powered insights and human-centric management.

The Pain Points of Traditional Growth

Scaling isn't just hard — it's confusing. Traditional approaches often fall into these traps:

Fragmented Initiatives
- Teams work in silos, unaware of cross-departmental impacts
- Multiple tools and systems create data fragmentation
- Key initiatives lack clear ownership and accountability
- Resources get allocated inefficiently across departments

Overlooked Employee Sentiment
- Growth pressures lead to burnout and decreased engagement
- High turnover during critical scaling phases
- Cultural disconnect between leadership and teams
- Reduced innovation due to stressed teams

Data Without Insights
- Information overload without actionable conclusions
- Delayed responses to market changes
- Missing critical growth indicators
- Inability to predict and prevent scaling issues

What is Intelligent Growth Management (IGM)?

IGM revolutionizes how we approach scaling by creating a unified framework that maintains order while preserving innovation. Here's what makes it unique:

Integrated Growth Tracking
- Real-time dashboard of all growth initiatives
- Cross-functional project alignment
- Automated progress tracking and reporting
- Resource allocation optimization

Employee Engagement Aligned with Strategy
- Pulse surveys integrated with performance metrics
- Team capacity planning and workload balancing
- Career development pathways tied to company growth
- Cultural health indicators

AI-Powered Insights
- Predictive analytics for growth bottlenecks
- Automated risk assessment and mitigation
- Pattern recognition for successful initiatives
- Real-time adjustment recommendations

Why Growing Companies Should Embrace IGM

Minimize Chaos
- Structured scaling processes
- Clear communication channels
- Standardized decision-making frameworks
- Risk management protocols

Retain Talent
- Balanced workload distribution
- Clear growth opportunities
- Engaged and motivated teams
- Strong cultural alignment

Scale Intelligently
- Data-driven decision making
- Proactive problem identification
- Resource optimization
- Sustainable growth patterns

Key Performance Indicators

Successful IGM implementation typically improves:
- Project Completion Rate: +30–40%
- Employee Satisfaction: +25–35%
- Resource Utilization: +20–30%
- Decision-Making Speed: +50–60%

My Experience and the Birth of IGM

The concept of Intelligent Growth Management emerged from my years in management consulting, working with business units of Fortune 100 companies. Despite their resources and potential, almost all of these units struggled with completing business initiatives successfully, particularly in digital transformation.

The pattern was clear and recurring: leaders couldn't effectively communicate change, plans remained static and inflexible, and teams felt disenfranchised — directly impacting the initiatives' success. Static PowerPoint decks and rigid project plans couldn't capture the dynamic nature of transformation. Teams became disconnected from the larger purpose, leading to reduced impact and failed implementations.

These experiences led to the development of Luna and the IGM framework. Companies needed more than just another tool — they needed a cohesive system that brought growth initiatives, employee engagement, and AI-driven insights together. Intelligent Growth Management is our solution to help companies scale sustainably and smartly.

Conclusion

Intelligent Growth Management represents more than an incremental improvement in how companies scale — it's a fundamental rethinking of sustainable growth. By integrating people, processes, and technology, IGM provides the framework needed to build lasting success.

For companies serious about scaling effectively, IGM offers a structured yet flexible approach to growth. It's time to move beyond reactive management and embrace intelligent scaling.

Stay Updated

Ready to transform your growth journey? Join our waitlist to stay informed about how Intelligent Growth Management can revolutionize your business scaling.

Remember: Growth isn't just about speed — it's about building something that lasts.
2024-11-08T06:10:22
en
train
42,014,382
xnhbx
2024-11-01T05:48:43
Showa American Story – Exclusive Trailer [video]
null
https://www.youtube.com/watch?v=BIUQo1y74Fw
2
0
null
null
null
null
null
null
null
null
null
null
train
42,014,407
thunderbong
2024-11-01T05:56:07
The Dual Nature of Events in Event-Driven Architecture
null
https://www.reactivesystems.eu/2024/10/31/the-dual-nature-of-events-in-eda.html
1
0
null
null
null
no_error
The Dual Nature of Events in Event-Driven Architecture
null
null
Given that events play such a central role in event-driven architecture, there's an astonishing lack of agreement on what should be contained in an event. This may be rooted in the fact that, depending on your perspective, events fulfill different purposes. In a system that follows event-driven architecture in its contemporary style, microservices collaborate by emitting and subscribing to events. (Please note this article only talks about events that are "published" from one domain for others to subscribe to. Not about internal events that are used for example if your approach to persistence is event sourcing.)

In these event-driven systems, events that travel between services have a dual role: They trigger actions and carry data. In principle events emitted from a service can be anywhere on the spectrum shown in the picture below.

Events are usually both trigger and carrier of data - varying in the amount of data included.

The left end would be "pure trigger", where all the information is contained in the event type alone. On the right end of the spectrum, all properties of the changed entity/aggregate would be included in the event. By the way, not only is there no consensus on how much data should usually be included in an event - it's not even clear what to call the data-heavy events on the right end of the spectrum. Having been taught this term by a colleague of mine, I call them wide events. But elsewhere on the internet, you'll also find them referred to as fat events, god events, RESTful events, or state events.

The "software engineer/architect with DDD background" view

As a developer working on event-driven microservices, the primary concern is implementing a business process as an event flow. You think of events as triggers, and you want to have different types of events for different triggers. This allows you to look at a sequence of events and understand what's going on. Having different types of events also matches design processes such as event storming. The stickies contain what happened (the type of event), you don't write the data on them. Using different, properly named types for events means applying the ubiquitous language. Looking at the technical events, even a business person understands what's going on. The processes you implement are stories, and events are the smallest unit of a story. If you had only one type of event, e.g. BookingUpdated, you'd have to figure out what's going on by looking at what data has changed. Guesswork.

Let's say your process is buying a cinema ticket. If you look at the sequence of events, what do you want to see?

SeatSelected → PaymentReceived → TicketIssued

or

BookingUpdated → BookingUpdated → BookingUpdated

After all, it's about collaboration between services - event-driven is not data replication. Taking this perspective, for any entity you emit different types of events, with the event type clearly indicating what has happened. In terms of data contained in the event, it would be only the properties related to the event (the ones that have changed in the context of the event).

If you use Kafka, you publish all the different events relating to the same class of entity to one topic. (To read the events relating to the same entity in order, they must be on the same partition. Being on the same topic is a prerequisite for being on the same partition.) If you use a schema registry, you use the RecordNameStrategy or the TopicRecordNameStrategy. This is totally legitimate and will work.
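To make the trigger-centric view concrete, here is a minimal sketch of distinct event types for the cinema example. It is only an illustration; the field names (such as booking_id) are my own assumptions, not a schema from the article:

from dataclasses import dataclass

# One event type per business trigger, so the type alone tells the story.
@dataclass
class SeatSelected:
    booking_id: str   # correlation key: events for one booking share it
    seat: str

@dataclass
class PaymentReceived:
    booking_id: str
    amount_cents: int

@dataclass
class TicketIssued:
    booking_id: str
    ticket_number: str

# All three types would go to one topic, keyed by booking_id so that
# events for the same booking land on the same partition, in order.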
But there's a different perspective you should also consider.

The "data engineer" view

As a data engineer, it's just data. Instead of in a table, it's in a stream, but in the end, it represents the state of things. Having data in too small units just creates more work for the data team to eventually create usable tables to represent the state of the represented entities. That works best if you have just one type of event in the stream, so all events on a topic share the same schema. This gives you the "table-stream duality". Also, it makes it easy to ingest the stream into a database, or into some headless data format (such as Iceberg). From a data point of view, if you could query the stream like a DB, you'd be happy with a stream that just retains the latest state forever. In fact, instead of having both a streaming infrastructure and a database, you'd actually prefer to just have one. (I think streaming databases address this, but haven't really looked into this yet. And there are new streaming products such as Tektite, which lets you, I quote: "Query the data in any stream or table as if it were a database table".)

If you use Kafka, you publish only one type of event per topic. If you use a schema registry, you use the TopicNameStrategy.

So what to do?

If you focus on only one of these purposes in the design of your events and neglect the other, you might make your life harder down the line. If you only follow the data perspective, you'll lose vital information about the reason for the event. Don't reduce event collaboration to data replication. Having said that, you'll probably come across cases where it really is just data replication, and where you want wide events. This includes:

- Using events to populate your data warehouse or data lake.
- Bootstrapping new services that are added to your system later, that need the full event history to start off with the up-to-date data.
- Cases where other services hold a local projection of the data that needs to be updated (but the update itself doesn't trigger an action).

If you focus only on the triggering nature of events, and these use cases come up later in your product's lifecycle, you might have to introduce wide events as additional events. That adds effort you can avoid if you have the data aspect in mind early on.

So, based on all this, what should be in an event? My take is:

- It's an absolute must that the event contains its reason, i.e. the business event that it represents.
- It must contain at least the data that was changed in this event.
- A fair amount of additional data doesn't hurt.
- If your entity can be serialized into an event that's still small enough, include a complete snapshot of its state and make your (and your data's consumers) life easier.

"Include a complete snapshot" requires further qualification. Still be mindful of the data you include in your events. Not only because, from a technical perspective, events should be small, so they can be replicated quickly e.g. between the multiple nodes of your distributed message broker. But even more importantly: Your event stream is an API. You need to design the events just as carefully as you would design JSON objects for a RESTful HTTP API. What goes in there is hard to remove, and you want to be able to change your domain model internally to some extent without affecting the event payload.

So my standard approach is to use carefully designed, wide events and include the reason in the event (or alternatively as a header).
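As a minimal sketch of what such a wide event could look like (again, the concrete field names are illustrative assumptions, not a prescribed schema):

# A "wide" BookingUpdated event: full snapshot plus the reason it was emitted.
booking_updated = {
    "type": "BookingUpdated",
    "reason": "PaymentReceived",  # the business trigger, preserved for consumers
    "booking_id": "b-4711",
    "state": {                    # complete snapshot for data-oriented consumers
        "seat": "12C",
        "paid": True,
        "ticket_number": None,    # not issued yet at this point in the flow
    },
}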
While I want to see the reasons, and get the story from looking at the sequence of events, it doesn't necessarily have to be encoded as the event type. A sequence would then be something like

BookingUpdated(Reason: SeatSelected) → BookingUpdated(Reason: PaymentReceived) → BookingUpdated(Reason: TicketIssued)

To me this embraces the dual nature; it's a "best of both worlds" practice. What's your approach? Let me know what you think in the comments.
2024-11-07T22:22:12
en
train
42,014,443
thunderbong
2024-11-01T06:04:04
A blind but elusive critter that was presumed extinct is rediscovered
null
https://www.washingtonpost.com/climate-environment/2023/12/03/dewinton-golden-mole-rediscovered-dna/
3
1
[ 42014444 ]
null
null
null
null
null
null
null
null
null
train
42,014,467
hunglee2
2024-11-01T06:12:39
A System of Agents Brings Service-as-Software to Life
null
https://foundationcapital.com/system-of-agents/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,468
null
2024-11-01T06:12:47
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,014,469
gfejdb
2024-11-01T06:13:23
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,479
thunderbong
2024-11-01T06:16:02
Fastest Open-Source Databases
null
https://datasystemreviews.com/fastest-open-source-databases.html
3
0
null
null
null
null
null
null
null
null
null
null
train
42,014,491
aidirectories
2024-11-01T06:18:32
null
null
null
1
null
[ 42014492 ]
null
true
null
null
null
null
null
null
null
train
42,014,499
ingve
2024-11-01T06:21:43
Multi-version concurrency control in TLA+
null
https://surfingcomplexity.blog/2024/10/31/multi-version-concurrency-control-in-tla/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,014,501
helloxd
2024-11-01T06:22:46
Sample Post Title
null
https://example.com
2
0
[ 42014502 ]
null
null
missing_parsing
Example Domain
null
null
This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission. More information...
2024-11-08T08:53:32
null
train
42,014,532
COINTURK
2024-11-01T06:30:17
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,535
COINTURK
2024-11-01T06:30:34
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,537
wangxiaofei
2024-11-01T06:31:01
null
null
null
1
null
[ 42014538 ]
null
true
null
null
null
null
null
null
null
train
42,014,543
blunum
2024-11-01T06:32:44
null
null
null
1
null
[ 42014544 ]
null
true
null
null
null
null
null
null
null
train
42,014,564
drcwpl
2024-11-01T06:37:42
AI Safety and the Titanic Disaster
null
https://onepercentrule.substack.com/p/the-titanic-disaster-and-the-conundrum
9
0
null
null
null
null
null
null
null
null
null
null
train
42,014,573
clariont
2024-11-01T06:40:41
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,014,576
Hisunny
2024-11-01T06:40:47
Winmail DAT Viewer – Open DAT Files
null
https://apps.apple.com/us/app/winmail-viewer-dat-opener/id6737042318?mt=12
1
0
null
null
null
missing_parsing
‎Winmail Viewer - DAT Opener
null
null
50% OFF! Extract, view, share, and save the contents of Winmail.dat files sent from Windows Outlook or Exchange. Open Winmail.dat files with just a double-click.

Winmail.dat files are in fact TNEF format (Transport Neutral Encapsulation Format) sent by Microsoft Windows Outlook or Microsoft Exchange. Winmail.dat contains all attachments and rich text messages, but not all mail clients can recognize its format. Winmail Viewer is the tool for this: it allows you to open .dat files and to preview, extract, share, and save all contents. With a user-friendly interface and batch extraction of winmail.dat files, it is your best winmail.dat opener application.

Open Winmail.dat files
• Drag and drop .dat files, or double-click the attached files in email to open
• Open TNEF winmail.dat files and list all contained attachments, such as RTF, TXT, HTML, PDF, JPG, XPS, etc., including the email message
• Batch extract all or selected attached files from winmail.dat files directly with Winmail Extractor
• Decode and view winmail.dat files instantly, with just a double-click

View, Extract, Store, and Share
• Save one or all extracted attachments wherever you want
• Simply drag and drop out from the file list and save in a target location
• You can reuse and resend the entire decoded email with message and extracted attachments

TNEF's Enough. With support for viewing all attachments and rich text messages within Winmail.dat, Winmail Viewer offers a comprehensive solution for users who need to access these files.

If you have any suggestions, ideas, questions or problems, please feel free to contact us at [email protected].
2024-11-08T18:04:22
null
train
42,014,580
monsoonw
2024-11-01T06:42:14
null
null
null
1
null
[ 42014581 ]
null
true
null
null
null
null
null
null
null
train
42,014,588
ingve
2024-11-01T06:45:01
Apple silently uploads your passwords and keeps them
null
https://lapcatsoftware.com/articles/2024/10/4.html
169
127
[ 42016770, 42016512, 42019050, 42019377, 42014887, 42019007, 42023773, 42014904, 42016481 ]
null
null
no_error
Apple silently uploads your passwords and keeps them
null
null
Jeff Johnson
October 31 2024

This is a follow-up to my blog post macOS Sonoma silently enabled iCloud Keychain despite my precautions from five months ago. The TL;DR of that blog post is that when you have iCloud enabled but not iCloud Keychain, updating from Ventura to Sonoma causes iCloud Keychain to be silently enabled. (I don't know yet whether that still occurs when updating from Sonoma to Sequoia.) What I didn't realize at the time, indeed didn't realize until now, is that iCloud Keychain already uploaded all of my passwords and kept them in iCloud even after I disabled iCloud Keychain.

Let me start with some background. My main machine with all of my personal data including passwords is a MacBook Pro, which is still running Sonoma. It's logged into iCloud, but I don't use iCloud for anything personal. The only reason I enable iCloud is to work on sync features in my apps for my customers. Also for development purposes, I have an iPad and a Mac mini with macOS Big Sur through Sequoia installed on separate APFS volumes. Both devices are used only for software testing and contain no personal data. Finally, I have an iPhone, which I've never actually logged into iCloud.

Today I was shocked to discover a bunch of my website passwords in Safari while booted into Sequoia on the Mac mini. There shouldn't be any personal data on the mini, and iCloud Keychain is disabled in its Sequoia volume. Incidentally, the reason I was looking at Safari passwords on the Mac mini is that I noticed on the MacBook Pro that Allow Automatic Passkey Upgrades was automatically, silently enabled in Safari, and I wanted to check whether that was also true on other devices. I looked around on other boot volumes on the Mac mini and other devices but didn't find my passwords anywhere else except in Sequoia. I was struggling to determine how my passwords got there when eventually I remembered my old blog post, which allowed me to reconstruct a plausible scenario.

The key piece of evidence was that when I opened the Sequoia Passwords app and sorted by date edited, the most recent was May 25, 2024. Coincidentally, my old blog post, written when I updated the MacBook Pro to Sonoma, was on May 26, 2024. There aren't any more recent passwords on the Mac mini, yet there are more recent passwords on the MacBook Pro. Hence, my assumption about what happened is that when I updated the MacBook Pro to Sonoma, iCloud Keychain got silently enabled, and all of my passwords quickly got uploaded to iCloud, before I could disable it. When I disabled iCloud Keychain on the MacBook Pro, my passwords did not get removed from Apple's servers. They've been sitting up in iCloud all along. But I had no way of knowing that, because iCloud Keychain is not enabled on any of my devices. The only way to see the contents of iCloud Keychain is on an Apple device with iCloud Keychain enabled. You can't even see anything on the icloud.com website.

WWDC 2024 was in June, the month after I updated the MacBook Pro to Sonoma. I installed the new Sequoia beta on the Mac mini and signed into iCloud. When I signed into iCloud for the first time, Sequoia must have automatically enabled iCloud Keychain, which caused my already synced passwords to be downloaded. These are what I see now in Safari and the Passwords app. Once again, when I disabled iCloud Keychain in Sequoia back in June, that didn't remove the passwords from either the Mac or from iCloud.
The question is, how do you delete all data from iCloud Keychain? I found an old Apple support document from 2021 with the Wayback Machine:

What happens when I turn off iCloud Keychain on a device? When you turn off iCloud Keychain for a device, you're asked to keep or delete the passwords and credit card information that you saved. If you choose to keep the information, it isn't deleted or updated when you make changes on other devices. If you don't choose to keep the information on at least one device, your Keychain data will be deleted from your device and the iCloud servers.

However, the URL https://support.apple.com/en-us/HT204085 now redirects to https://support.apple.com/en-us/109016, which says nothing about deleting keychain data from iCloud servers:

If you turn off iCloud Keychain: When you turn off iCloud Keychain, password, passkey, and credit card information is stored locally on your device. When you sign out of iCloud on your device while iCloud Keychain is turned on, you're asked to keep or delete your Keychain information. If you choose to keep the information, your passwords and passkeys are stored locally on your device, but aren't deleted or updated when you make changes on other devices. If you don't keep the information, your passwords and passkeys aren't available on your device. An encrypted copy of your Keychain data is kept on iCloud servers. If you turn iCloud Keychain back on, your passwords and passkeys will sync to your device again.

Apparently Apple now just keeps your iCloud Keychain data forever, whether you want them to or not? I didn't even want Apple to have my keychain data in the first place!

As a workaround, I manually deleted all of my passwords in the Passwords app in Sequoia, enabled iCloud Keychain, and then disabled iCloud Keychain again. To verify the password deletion, I booted into Sonoma on the Mac mini and enabled iCloud Keychain there. Fortunately, no passwords were downloaded from iCloud. (As I mentioned in my old blog post, Sonoma System Settings still has the bug where it hangs and crashes when you disable iCloud Keychain. Apple software quality on exhibition.)

I'm still concerned about other data that may still be in iCloud Keychain. For example, what about wifi passwords? I can't very well delete my wifi password on the Mac mini and then sync the deletion to iCloud Keychain, because of course I can't sync anything without wifi! And what else does iCloud Keychain store that I can't necessarily see in the user interface? Hopefully nothing else…

By the way, after I published my old blog post about iCloud Keychain, I did ultimately find a solution to prevent iCloud Keychain from ever getting silently enabled: use an MDM profile.
2024-11-08T03:58:03
en
train
42,014,589
NavinF
2024-11-01T06:45:13
Introduction to Compute-in-Memory [video]
null
https://www.youtube.com/watch?v=PsYTQhN_n7M
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,591
striat
2024-11-01T06:46:01
Show HN: AutoSEO – Generate SEO-optimized blog posts to get more organic traffic
Hey!

I built AutoSEO for myself since I needed a solution for my lack of SEO. As a very much non-writer, I have tried to generate something like this from the browser using Claude or in VS Code using Copilot. But neither work for the scale vs. time input I'm looking for.

There is something to be said for polluting the web with more AI generated listicles and posts. If I had built this 2 years ago I may have agreed with some ethical concerns, but I think Google search is far past the point where I would be concerned these days.
https://autoseo.app
2
1
[ 42016490 ]
null
null
null
null
null
null
null
null
null
train
42,014,592
kiyanwang
2024-11-01T06:46:52
AI leads Python to top language as the number of global developers surges
null
https://github.blog/news-insights/octoverse/octoverse-2024/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,014,607
erie
2024-11-01T06:50:54
A fitness app was used to dox and leak sensitive personal data
null
https://www.haaretz.com/israel-news/security-aviation/2024-10-29/ty-article-magazine/.premium/intelligence-operation-collected-information-on-sensitive-israeli-bases-soldiers/00000192-d7bb-df2b-a5db-d7bf8d440000
3
2
[ 42014615, 42014608 ]
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
Haaretz investigation: Intelligence operation collected information on sensitive Israeli bases, soldiers
2024-10-29T13:16:00+0200
Bar Peleg, Omer Benjakob, Avi Scharf
A fake account on Strava systematically exploited the popular fitness app to collect information about Israeli soldiers serving in sensitive bases and secret sites, including a U.S. military base. Israeli army says it is investigating.

An unknown, possibly foreign actor is conducting an intelligence operation on army bases and sensitive sites in Israel, using the fitness app Strava to collect information about those serving within them.
2024-11-08T05:31:24
null
train
42,014,622
the-mitr
2024-11-01T06:53:54
Vanishing Culture: A Report on Our Fragile Cultural Record [pdf]
null
http://blog.archive.org/wp-content/uploads/2024/10/Vanishing-Culture-2024.pdf
4
0
null
null
null
null
null
null
null
null
null
null
train
42,014,633
karanveer
2024-11-01T06:58:15
Chrome Extension Calculator
null
https://chromewebstore.google.com/detail/calculator/dlpbkbmnbkkliidfobhapmdajdokapnm
1
0
null
null
null
no_error
Calculator - Chrome Web Store
null
null
Overview

A simple calculator for those quick calculations, without leaving the browser.

How many times do you leave your browser to open the calculator app on your PC/Mac? In the middle of a movie and feel like calculating those bills? Using a sheet and want to do a quick calculation? Well, this extension saves you those extra steps and gives you access to a calculator at the click of a button or a custom assigned shortcut. NOW CALCULATE WITHOUT EVER LEAVING THE BROWSER.

"Calculator" by theindiecompny helps you quickly calculate on the web, without leaving your train of thought or the tab.

Best Way to Use this Calculator:
1. Install it
2. Use "Ctrl + Q" on Windows, or "Cmd + Q", to launch quickly. You can also customize this shortcut key, for me it is "Ctrl+1" [go to this link and assign your keys to "Activate the Extension": chrome://extensions/shortcuts]
3. Enjoy!
2024-11-08T04:44:34
en
train
42,014,635
todsacerdoti
2024-11-01T06:59:01
Porting ioquake3 from SDL2 to SDL3 [video]
null
https://www.youtube.com/watch?v=i3yVqWYFbCE
2
0
null
null
null
null
null
null
null
null
null
null
train
42,014,650
ChadNauseam
2024-11-01T07:02:48
Oasis: A Universe in a Transformer
null
https://oasis-model.github.io/
255
88
[ 42016826, 42019352, 42015804, 42015697, 42017174, 42017016, 42015730, 42015927, 42017095, 42018133, 42031814, 42019390, 42015689, 42020837, 42016854, 42019497, 42021501, 42020316, 42017319, 42020347, 42019618, 42020711, 42019219, 42024773, 42016068, 42020768, 42021857, 42019980, 42016649, 42022607, 42015681, 42016621, 42020928, 42027486, 42020819, 42020058, 42020929, 42016300 ]
null
null
null
null
null
null
null
null
null
train
42,014,654
jcoblin
2024-11-01T07:04:02
ML Foundations: Understanding the Math Behind Backpropagation
null
https://jordancoblin.github.io/posts/understanding-the-math-behind-backpropagation/
4
2
[ 42014729 ]
null
null
no_error
ML Foundations: Understanding the Math Behind Backpropagation
2024-11-01T00:00:00+00:00
AI Meanderings
The past decade has marked a heyday for neural networks, driving innovations from deep learning advancements to the rise of transformer models that power tools like ChatGPT, Claude, and other large language models. Recently, Geoffrey Hinton was even awarded the Nobel Prize in Physics for his pioneering contributions to neural networks - a testament to the profound impact of these models on both AI and society. While a variety of powerful libraries, such as PyTorch, TensorFlow, and JAX, have simplified the process of training and deploying neural networks, developing an understanding of their underlying principles remains invaluable.

In this post, I'll guide you through the mathematical underpinnings of backpropagation, a key algorithm for training neural networks, and demonstrate how to implement it from scratch using Python with NumPy. We'll apply this knowledge to train a simple fully connected neural network for classifying images in the MNIST dataset. By the end of this post, you can expect to have a deeper understanding of how neural networks learn and a larger appreciation for the automatic differentiation libraries that handle many of the mathematical details for you. Let's dive in!

MNIST Digit Classification

Let's begin by laying some notational groundwork for the classification task. As usual for supervised learning problems, we consider the setting where we are provided a dataset $\mathcal{D}$ consisting of input vectors $x$ and label vectors $y$:

$$\mathcal{D} = \bigl\lbrace (x^{(i)}, y^{(i)}) \bigr\rbrace_{i=1}^m \space,$$

where $m$ is the number of samples in our dataset. The standard MNIST dataset consists of 60,000 training images and 10,000 test images, which we will call $\mathcal{D_{\text{train}}}$ and $\mathcal{D_{\text{test}}}$. An image can be represented as a column vector:

$$x^{(i)} = [x_1^{(i)}, x_2^{(i)}, …, x_{n_x}^{(i)}]^T \space,$$

where $n_x = 28 \times 28$ is the number of pixels in each image. Each image has an integer label $y^{(i)} \in [0, 9]$ that indicates which digit, or class, the image corresponds to. To help us perform classification, we will represent this as a one-hot encoded vector:

$$y^{(i)} = [y_1^{(i)}, y_2^{(i)}, …, y_{n_y}^{(i)}]^T \space,$$

where $n_y = 10$ is the number of digits or classes to choose from and

$$ y_k^{(i)} = \begin{cases} 1 & \text{if class } k \text{ is the correct class}, \\\ 0 & \text{otherwise}. \end{cases} $$

Below we can see some sample images from this dataset, along with their corresponding labels.

Because we have multiple digits to choose from, we consider this a multi-class classification problem, where the goal is roughly to find some function $f(x)$ that is able to correctly determine the labels for as many images in our dataset (or more precisely, our test set) as possible.
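As a quick aside (a minimal NumPy sketch of my own, not from the original post), the one-hot encoding described above can be built by indexing into an identity matrix, which is the same trick the training code further below uses via np.eye:

import numpy as np

y = np.array([3, 0, 9])      # three example digit labels
y_onehot = np.eye(10)[y]     # each row becomes a 10-dimensional one-hot vector
print(y_onehot[0])           # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]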
Neural Network Definition

In this section, we'll outline the mathematical foundation of our neural network model, starting with the classification function $ f(x; \theta) $. This function maps input data $x$ to a predicted class $ \hat{y} $, represented as $ \hat{y} = f(x; \theta) = \arg\max_k f_k(x; \theta) $, where $ f_k(x; \theta) $ denotes the score or probability for class $k$. The neural network's purpose is to model $f_k(x; \theta)$ with learnable parameters $\theta$. Neural networks may have an arbitrary number of layers - the more layers, the "deeper" the network. The parameters $\theta$ of our model are comprised of weights and biases, which are denoted using $W^{[l]}$ and $b^{[l]}$ respectively, for each layer $l$.

For our MNIST classification problem, we will use a network with a single hidden layer of size 128. The output of this first layer, also known as a hidden layer, is:

$$h(x) = \sigma (W^{[1]} x + b^{[1]}),$$

where $W^{[1]} \in \mathbb{R}^{n_h \times n_x}$, $b^{[1]} \in \mathbb{R}^{n_h}$, $n_h = 128$ is the hidden layer size, and $\sigma$ is the sigmoid activation function. To output class probabilities, we'll apply a softmax function to the final layer, providing a normalized probability distribution across the classes, making it ideal for classification. The softmax function is defined as

$$\text{softmax}(z) = \frac{e^{z}}{\sum_{k=1}^{K} e^{z_k}} \space,$$

where $K = n_y$ is the number of classes. With this, the final output of our neural network becomes:

$$f_k(x; \theta) = \text{softmax} (W^{[2]} h(x) + b^{[2]}),$$

where $W^{[2]} \in \mathbb{R}^{n_y \times n_h}$ and $b^{[2]} \in \mathbb{R}^{n_y}$. Notice that our input $x$ is passed through the hidden layer to produce $h(x)$, which is then passed through the output layer to produce the final class probabilities. Pictorially, our neural network can be visualized as follows:

A simple fully-connected neural network with a single hidden layer.

Gradient Descent with Backpropagation

We now have a parameterized model that is capable of representing a variety of functions. Our goal is to find the function which provides the best fit with respect to our dataset $\mathcal{D}$. To accomplish this, we will introduce a loss function $\mathcal{L}(\hat{y}, y)$ as a measure of fit, and then minimize this function to find the optimal parameters of the model:

$$\theta_* = \arg\min_{\theta} \mathcal{L}(\hat{y}, y).$$

For multi-class classification problems, cross-entropy is a common loss function which measures the distance between the distribution produced by our model, and the true distribution $P(y|x)$. The cross-entropy loss for a tuple $(x, y)$ is defined as:

$$ \begin{equation} \label{eq:loss} \mathcal{L}(\hat{y}, y) = - \sum_{k=1}^{K} y_k \log \hat{y}_k \space. \end{equation} $$

To solve this optimization problem, we will use gradient descent with the backpropagation algorithm. At a high level, backpropagation allows us to efficiently compute the derivatives needed to perform gradient updates using the chain rule in calculus. During this process, derivatives from later layers in the network get passed back through previous layers, hence the name!

Deriving the Backprop Learning Updates

At this point, the fastest way forward would be to use an automatic differentiation library like PyTorch to handle all the gradient computations and not muddle ourselves in all the mathematical details. But where would be the fun in that? Let's go ahead and derive the gradient descent updates ourselves.

Updating parameters $\theta$ at each iteration of gradient descent is a matter of taking a step in the direction of steepest descent in the loss function, with step size $\alpha$:

$$ \theta \leftarrow \theta - \alpha \nabla \mathcal{L}(\theta).$$

Breaking down the gradient by each set of weights and biases in our network, we arrive at the following four update expressions:

$$ \begin{align*} W^{[1]} & \leftarrow W^{[1]} - \alpha \frac{\partial \mathcal{L}}{\partial W^{[1]}} \\\ b^{[1]} & \leftarrow b^{[1]} - \alpha \frac{\partial \mathcal{L}}{\partial b^{[1]}} \\\ W^{[2]} & \leftarrow W^{[2]} - \alpha \frac{\partial \mathcal{L}}{\partial W^{[2]}} \\\ b^{[2]} & \leftarrow b^{[2]} - \alpha \frac{\partial \mathcal{L}}{\partial b^{[2]}}\space. \end{align*} $$
It's important to remember that $W^{[l]}$ is a matrix and $b^{[l]}$ is a vector, so the result of the gradients here will be either a matrix or vector as well. The components of these gradient objects are the partial derivative with respect to each individual weight. That is,

$$ \begin{equation*} \label{eq:jacobian} \frac{\partial \mathcal{L}}{\partial W^{[l]}} = \begin{bmatrix} \frac{\partial \mathcal{L}}{\partial W_{1,1}^{[l]}} & \frac{\partial \mathcal{L}}{\partial W_{1,2}^{[l]}} & \cdots & \frac{\partial \mathcal{L}}{\partial W_{1,n_{l-1}}^{[l]}} \\\ \frac{\partial \mathcal{L}}{\partial W_{2,1}^{[l]}} & \frac{\partial \mathcal{L}}{\partial W_{2,2}^{[l]}} & \cdots & \frac{\partial \mathcal{L}}{\partial W_{2,n_{l-1}}^{[l]}} \\\ \vdots & \vdots & \ddots & \vdots \\\ \frac{\partial \mathcal{L}}{\partial W_{n_l,1}^{[l]}} & \frac{\partial \mathcal{L}}{\partial W_{n_l,2}^{[l]}} & \cdots & \frac{\partial \mathcal{L}}{\partial W_{n_l,n_{l-1}}^{[l]}} \\\ \end{bmatrix}, \end{equation*} $$

where $n_l$ and $n_{l-1}$ are the number of neurons in layers $l$ and $l-1$, respectively.

Forward Pass

To begin an iteration of backpropagation, we first do a forward pass, where we pass an input $x$ through the network. During the forward pass, we compute outputs at each layer of the network, and store some of them to be used later during the backward pass. We introduce the variable $z^{[l]}$ as well, to aid us during the backward pass:

$$ \begin{align*} z^{[1]} &= W^{[1]} x + b^{[1]} \\\ h &= \sigma(z^{[1]}) \\\ z^{[2]} &= W^{[2]} h + b^{[2]} \\\ \hat{y} &= \text{softmax}(z^{[2]}). \end{align*} $$

At this stage, it is helpful if we visualize how all of these outputs and parameters fit together. For simplicity, we'll consider a network with just a few neurons:

Backward Pass

For our backward pass, we will compute the partial derivatives needed for our learning update. Conceptually, we can think of this as figuring out how much a change in each weight contributes to a change in the overall loss. To determine derivatives for weights in earlier layers in the network, we use the chain rule to decompose the derivatives into parts, which enables re-use of derivatives that were computed for later layers; this is essentially dynamic programming.

Let's start by computing derivatives $\frac{\partial \mathcal{L}}{\partial W_{j,i}^{[l]}}$ for weights in the output layer, and then move on to the hidden layer.

Output Layer Derivatives

It is helpful to start by visualizing the output layer of our network, to understand how the weights and biases connect with the loss function. We will be using $j$ to index neurons in the output layer, and $i$ to index neurons in the hidden layer. So $W_{2,1}^{[2]}$ indicates the weight connecting neuron $1$ in $h$ and neuron $2$ in $z^{[2]}$:

Output layer of our simplified neural network.

We first notice that $\mathcal{L}$ is a function of $z_j^{[2]}$ and use the chain rule to re-express our derivative:

$$ \frac{\partial \mathcal{L}}{\partial W_{j,i}^{[2]}} = \frac{\partial \mathcal{L}}{\partial z_j^{[2]}} \frac{\partial z_j^{[2]}}{\partial W_{j,i}^{[2]}}. $$

This $\frac{\partial \mathcal{L}}{\partial z_j^{[2]}}$ term is important, as we'll be re-using it later to compute derivatives in the earlier hidden layer.
To solve for this quantity, you might think (as I did) that the same pattern could be applied when decomposing by $\hat{y}$, but interestingly, this is not the case:

$$ \frac{\partial \mathcal{L}}{\partial z_j^{[2]}} \neq \frac{\partial \mathcal{L}}{\partial \hat{y_j}} \frac{\partial \hat{y_j}}{\partial z_j^{[2]}} \space. $$

The reason for this is related to our usage of the $\text{softmax}$ function over our outputs $z_j^{[2]}$. Because $\text{softmax}$ causes $z_j^{[2]}$ to have an effect on both $\hat{y}_1$ and $\hat{y}_2$, we need to take both of these "paths" into account when applying the chain rule.

The path that $W_{1,1}^{[2]}$ takes to reach $\mathcal{L}$. Notice that the computation flows through all nodes in $\hat{y}$.

So in this case, we need to apply the multivariable chain rule by summing the derivatives of each path:

$$ \frac{\partial \mathcal{L}}{\partial z_j^{[2]}} = \sum_{k=1}^{K} \frac{\partial \mathcal{L}}{\partial \hat{y_k}} \frac{\partial \hat{y_k}}{\partial z_j^{[2]}} \space. $$

As we'll see, it is also useful to split this expression into the cases where $j=k$ and $j \neq k$:

$$ \begin{equation} \label{eq:dldz2_expanded} \frac{\partial \mathcal{L}}{\partial z_j^{[2]}} = \frac{\partial \mathcal{L}}{\partial \hat{y_j}} \frac{\partial \hat{y_j}}{\partial z_j^{[2]}} + \sum_{k \neq j} \frac{\partial \mathcal{L}}{\partial \hat{y_k}} \frac{\partial \hat{y_k}}{\partial z_j^{[2]}}\space. \end{equation} $$

Our final expression for the derivative of the loss with respect to a single weight in the output layer then becomes

$$ \begin{equation} \label{eq:dldw2_expanded} \frac{\partial \mathcal{L}}{\partial W_{j,i}^{[2]}} = \frac{\partial \mathcal{L}}{\partial \hat{y_j}} \frac{\partial \hat{y_j}}{\partial z_j^{[2]}} \frac{\partial z_j^{[2]}}{\partial W_{j,i}^{[2]}} + \sum_{k \neq j} \frac{\partial \mathcal{L}}{\partial \hat{y_k}} \frac{\partial \hat{y_k}}{\partial z_j^{[2]}} \frac{\partial z_j^{[2]}}{\partial W_{j,i}^{[2]}}\space. \end{equation} $$

Let's go ahead and solve this expression, one term at a time.

Solving Individual Derivatives

For the derivative of the loss with respect to a generic $\hat{y}_k$, we note that the derivative is zero for each term in the sum, save for the case where $u=k$:

$$ \begin{align} \frac{\partial \mathcal{L}}{\partial \hat{y_k}} &= \frac{\partial}{\partial \hat{y_k}} \bigl( - \sum_{u=1}^{K} y_u \log \hat{y}_u \bigr) \nonumber \\\ &= -y_k \frac{\partial}{\partial \hat{y_k}} \log \hat{y_k} \nonumber \\\ &= -\frac{y_k}{\hat{y}_k}\space. \end{align} $$

Solving the $\frac{\partial \hat{y}_k}{\partial z_j^{[2]}}$ term is a bit more involved, and so we'll leave out an in-depth derivation here. If you'd like to dig into the nitty gritty, a full derivation is provided in Appendix: Softmax Gradient Derivation. For now, just note that you can use the quotient rule, along with splitting into different cases to solve. In the end, we find the solution to be the following piecewise function:

$$ \begin{equation} \frac{\partial \hat{y}_k}{\partial z_j^{[2]}} = \begin{cases} \hat{y}_k \left( 1 - \hat{y}_k \right), & \text{if } j = k \\\ -\hat{y}_j \hat{y}_k, & \text{if } j \neq k \space. \end{cases} \end{equation} $$

For the third term, again we note that the derivative is zero for each term in $W^{[2]} h$, except for $W_{j,i}^{[2]} h_i$:

$$ \begin{align} \frac{\partial z_j^{[2]}}{\partial W_{j,i}^{[2]}} &= \frac{\partial }{\partial W_{j,i}^{[2]}} \bigl( W^{[2]} h + b^{[2]} \bigr) \nonumber \\\ &= \frac{\partial }{\partial W_{j,i}^{[2]}} W_{j,i}^{[2]} h_i \nonumber \\\ &= h_i \space. \end{align} $$

Putting it Together

Plugging each of these results into Equation \ref{eq:dldz2_expanded}, we get

$$ \begin{align} \frac{\partial \mathcal{L}}{\partial z_j^{[2]}} &= \biggl(-\frac{y_j}{\hat{y_j}} \biggr) \biggl( \hat{y_j} (1 - \hat{y_j}) \biggr) + \sum_{k \neq j} \biggl(-\frac{y_k}{\hat{y_k}} \biggr) \biggl( -\hat{y_j} \hat{y_k} \biggr) \nonumber \\\ &= -y_j + y_j \hat{y_j} + \hat{y_j} \sum_{k \neq j} y_k \nonumber \\\ &= -y_j + \hat{y_j} \underbrace{\biggl(y_j + \sum_{k \neq j} y_k \biggr)}_{=1} \nonumber \\\ &= \hat{y_j} - y_j \space. \label{eq:dl_dz2_final} \end{align} $$

Now solving for $\frac{\partial \mathcal{L}}{\partial W_{j,i}^{[2]}}$ by plugging in the results above, we get

$$ \begin{align} \frac{\partial \mathcal{L}}{\partial W_{j,i}^{[2]}} &= \frac{\partial \mathcal{L}}{\partial z_j^{[2]}} \frac{\partial z_j^{[2]}}{\partial W_{j,i}^{[2]}} \nonumber \\\ &= \bigl(\hat{y_j} - y_j \bigr) h_i \space. \end{align} $$

That took a bit of work, but we now see that the derivative of the loss with respect to a single weight in the output layer is equal to the value of the input neuron $h_i$ times the difference between the predicted output $\hat{y_j}$ and ground truth $y_j$. Pretty cool!

The Bias Term

We're almost done with the output layer, but we still need to solve for the derivative of the loss with respect to the bias term. Again we can use the chain rule to decompose the derivative:

$$ \begin{equation*} \frac{\partial \mathcal{L}}{\partial b_j^{[2]}} = \frac{\partial \mathcal{L}}{\partial z_j^{[2]}} \frac{\partial z_j^{[2]}}{\partial b_j^{[2]}}\space. \end{equation*} $$

We already know the solution for the first term here. Solving for the second term,

$$ \begin{align*} \frac{\partial z_j^{[2]}}{\partial b_j^{[2]}} &= \frac{\partial }{\partial b_j^{[2]}} \bigl( W^{[2]} h + b^{[2]} \bigr) \nonumber \\\ &= \frac{\partial }{\partial b_j^{[2]}} b_j^{[2]} \nonumber \\\ &= 1 \space, \end{align*} $$

such that

$$ \begin{align} \frac{\partial \mathcal{L}}{\partial b_j^{[2]}} &= \hat{y_j} - y_j \space. \end{align} $$

Hidden Layer Derivatives

Now that we've solved for the output layer, we can move on to the hidden layer. The process is similar, except we'll now be passing derivatives computed in the output layer back to the hidden layer - finally some backpropagation! Looking at the full network, we can see how the effect of the hidden layer's weights flow through the network:

The path that $W_{1,1}^{[1]}$ takes to reach $\mathcal{L}$.

We can start by decomposing the derivative of the loss with respect to the weights in the hidden layer, using the chain rule once again:

$$ \begin{align} \frac{\partial \mathcal{L}}{\partial W_{j,i}^{[1]}} &= \frac{\partial \mathcal{L}}{\partial h_j} \frac{\partial h_j}{\partial z_j^{[1]}} \frac{\partial z_j^{[1]}}{\partial W_{j,i}^{[1]}} \space. \nonumber \end{align} $$

Here we use $i$ to index neurons in the input and $j$ to index neurons in the hidden layer, and we can solve for each of these derivatives in a similar manner to the output layer.
Solving Individual Terms

For the first term, notice that we need to sum over all paths from $h_j$ to $\mathcal{L}$,

$$ \frac{\partial \mathcal{L}}{\partial h_j} = \sum_{k=1}^{K} \frac{\partial \mathcal{L}}{\partial z_k^{[2]}} \frac{\partial z_k^{[2]}}{\partial h_j} \space. $$

Since $z_k^{[2]} = \sum_{j=1}^{n_h} W_{k,j}^{[2]} h_j + b_k^{[2]}$, we have

$$ \frac{\partial z_k^{[2]}}{\partial h_j} = W_{k,j}^{[2]}. $$

Here we notice that the $\frac{\partial \mathcal{L}}{\partial z_k^{[2]}}$ term is exactly the result we computed in Equation \ref{eq:dl_dz2_final} for the output layer. Propagating this derivative backwards through the network to the hidden layer, we find that

$$ \begin{equation} \frac{\partial \mathcal{L}}{\partial h_j} = \sum_{k=1}^{K} (\hat{y_k} - y_k) W_{k,j}^{[2]} \space. \end{equation} $$

Solving for the second term $\frac{\partial h_j}{\partial z_j^{[1]}}$ is straightforward, as the derivative of the sigmoid function $\sigma$ is simply $\sigma(1 - \sigma)$. Thus,

$$ \begin{align} \frac{\partial h_j}{\partial z_j^{[1]}} &= \sigma(z_j^{[1]}) (1 - \sigma(z_j^{[1]})) \nonumber \\\ &= h_j (1 - h_j) \space. \end{align} $$

Finally, the third term is the same as before, where the derivative is zero for all terms except for $W_{j,i}^{[1]} x_i$:

$$ \begin{align} \frac{\partial z_j^{[1]}}{\partial W_{j,i}^{[1]}} &= \frac{\partial }{\partial W_{j,i}^{[1]}} \bigl( W^{[1]} x + b^{[1]} \bigr) \nonumber \\\ &= \frac{\partial }{\partial W_{j,i}^{[1]}} W_{j,i}^{[1]} x_i \nonumber \\\ &= x_i \space. \end{align} $$

Putting it Together

Plugging these results back into our original expression, we find that the derivative of the loss with respect to a single weight in the hidden layer is

$$ \begin{align} \frac{\partial \mathcal{L}}{\partial W_{j,i}^{[1]}} &= \biggl[ \sum_{k=1}^{K} (\hat{y_k} - y_k) W_{k,j}^{[2]} \biggr] h_j (1 - h_j) x_i \space. \end{align} $$

And similarly for the bias terms, we find that

$$ \begin{align} \frac{\partial \mathcal{L}}{\partial b_j^{[1]}} &= \biggl[ \sum_{k=1}^{K} (\hat{y_k} - y_k) W_{k,j}^{[2]} \biggr] h_j (1 - h_j) \space. \end{align} $$

Summary

We've now solved for the derivatives of the loss with respect to each weight and bias in our network. We can now use these results to update the parameters of our model using gradient descent! Collecting the derivatives for each set of parameters, we have

$$ \begin{align*} \frac{\partial \mathcal{L}}{\partial W_{j,i}^{[1]}} &= \biggl[ \sum_{k=1}^{K} (\hat{y_k} - y_k) W_{k,j}^{[2]} \biggr] h_j (1 - h_j) x_i \\\ \frac{\partial \mathcal{L}}{\partial b_j^{[1]}} &= \biggl[ \sum_{k=1}^{K} (\hat{y_k} - y_k) W_{k,j}^{[2]} \biggr] h_j (1 - h_j) \\\ \frac{\partial \mathcal{L}}{\partial W_{j,i}^{[2]}} &= (\hat{y_j} - y_j) h_i \\\ \frac{\partial \mathcal{L}}{\partial b_j^{[2]}} &= \hat{y_j} - y_j \space. \end{align*} $$

In vectorized form, we can express these individual parameter derivatives over the full gradient objects:

$$ \begin{align*} \frac{\partial \mathcal{L}}{\partial W^{[1]}} &= x^T \bigl[ \bigl( (\hat{y} - y) W^{[2]} \bigr) \odot h \odot (1 - h) \bigr] \\\ \frac{\partial \mathcal{L}}{\partial b^{[1]}} &= (\hat{y} - y) W^{[2]} \odot h \odot (1 - h) \\\ \frac{\partial \mathcal{L}}{\partial W^{[2]}} &= h^T (\hat{y} - y) \\\ \frac{\partial \mathcal{L}}{\partial b^{[2]}} &= \hat{y} - y \space, \end{align*} $$

where the Hadamard product $\odot$ allows us to vectorize the expression using element-wise multiplication.
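Before moving on to the implementation, it is worth sanity-checking the central result $\partial \mathcal{L} / \partial z^{[2]} = \hat{y} - y$ numerically. The following sketch is my own addition, not part of the original derivation; it compares the analytic gradient against central finite differences:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def loss(z, y):
    # Cross-entropy of softmax(z) against a one-hot label y
    return -np.sum(y * np.log(softmax(z)))

rng = np.random.default_rng(0)
z = rng.normal(size=10)      # stand-in for the logits z^[2]
y = np.eye(10)[4]            # one-hot label for class 4

analytic = softmax(z) - y    # the derived gradient: y_hat - y
numeric = np.zeros_like(z)
eps = 1e-6
for j in range(10):
    dz = np.zeros_like(z)
    dz[j] = eps
    numeric[j] = (loss(z + dz, y) - loss(z - dz, y)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # tiny (on the order of 1e-10)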
You might suspect this already: given the repetitive structure of neural networks, the gradients that we computed here can be expressed in a general form for any layer $l$ in a network. We’ll leave this as an exercise for the reader, but suffice it to say that the derivations we’ve done here provide the foundation for training neural networks of any depth.

Python Implementation

Okay, enough math. Let’s get back to the task we set out to tackle: training a neural network on the MNIST dataset. For this purpose, we’ll implement a simple feedforward neural network with a single hidden layer, using the sigmoid activation function and a softmax output layer. Here’s the Python code for our network:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_derivative(z):
    return sigmoid(z) * (1 - sigmoid(z))

def softmax(z):
    exp_z = np.exp(z - np.max(z, axis=1, keepdims=True))  # Subtract max(z) for numerical stability
    return exp_z / exp_z.sum(axis=1, keepdims=True)

class FCNetwork():
    """Single hidden layer network"""

    def __init__(self, input_dim, hidden_dim, output_dim, activation=sigmoid):
        self.w1 = np.random.randn(input_dim, hidden_dim) * np.sqrt(1. / input_dim)    # d x h
        self.w2 = np.random.randn(hidden_dim, output_dim) * np.sqrt(1. / hidden_dim)  # h x 10
        self.b1 = np.random.rand(1, hidden_dim)  # 1 x h
        self.b2 = np.random.rand(1, output_dim)  # 1 x 10
        self.activation = activation

    def forward(self, X):
        batch_size = X.shape[0]
        X = X.reshape((batch_size, -1))
        z1 = np.dot(X, self.w1) + self.b1
        h = self.activation(z1)
        z2 = np.dot(h, self.w2) + self.b2
        f_k = softmax(z2)
        return z1, h, z2, f_k

    def predict(self, X):
        _, _, _, f_k = self.forward(X)
        y_hat = np.argmax(f_k, axis=1)
        return y_hat

    def compute_grad(self, X, y, y_hat, z1, a1, z2):
        batch_size = X.shape[0]
        X = X.reshape((batch_size, -1))

        # Output layer grads
        dz2 = y_hat - y
        dw2 = np.dot(a1.T, dz2)                                 # accumulated over the batch
        db2 = np.sum(dz2, axis=0, keepdims=True) / batch_size   # averaged over samples

        # Hidden layer grads
        dz1 = np.dot(dz2, self.w2.T) * sigmoid_derivative(z1)
        dw1 = np.dot(X.T, dz1)                                  # accumulated over the batch
        db1 = np.sum(dz1, axis=0, keepdims=True) / batch_size   # averaged over samples

        return dw1, db1, dw2, db2

    def update_weights(self, dw1, db1, dw2, db2, lr):
        self.w1 -= lr * dw1
        self.b1 -= lr * db1
        self.w2 -= lr * dw2
        self.b2 -= lr * db2
```

The gradients computed in the compute_grad method are the same as the ones we derived, but modified slightly to work with mini-batches of dataset tuples: each update aggregates the per-sample gradients across the mini-batch (with the bias gradients additionally averaged), which helps to stabilize the learning process.

Also note the initialization scheme used for the weights, which is known as Xavier initialization. This scheme helps prevent the gradients from vanishing or exploding during training, which can be a common issue in deep networks. In practice, I found that a basic initialization sampling from a Gaussian with mean 0 and standard deviation 1 caused learning to fail, while Xavier initialization fixed the issue.

Evaluating Performance

After training the model, we want to evaluate how well it generalizes to unseen data (our validation/test set). We can measure this with the accuracy metric, the percentage of correctly predicted examples. We’ll also track the training loss to monitor the model’s learning progress:

```python
def cross_entropy_loss(y, y_hat):
    # Small epsilon added to avoid log(0)
    epsilon = 1e-12
    y_hat = np.clip(y_hat, epsilon, 1. - epsilon)  # Ensure y_hat is within (0, 1) to prevent log(0)
    # Compute cross-entropy, averaged over the batch
    return -np.sum(y * np.log(y_hat)) / y.shape[0]
```
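As a quick sanity check of this function (the probability vectors below are invented for illustration), a confident correct prediction should give a small loss and a confident wrong one a large loss:

```python
y_true = np.eye(3)[[0]]                              # one-hot label for class 0
confident_right = np.array([[0.90, 0.05, 0.05]])
confident_wrong = np.array([[0.05, 0.90, 0.05]])

print(cross_entropy_loss(y_true, confident_right))   # -log(0.90) ~= 0.105
print(cross_entropy_loss(y_true, confident_wrong))   # -log(0.05) ~= 3.0
```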
The evaluate function then runs the model over a data loader, accumulating the loss and counting correct predictions:

```python
def evaluate(model, data_loader):
    total_loss = 0
    correct = 0
    for X_val, y_val in data_loader:
        _, _, _, y_pred = model.forward(X_val)
        y_onehot = np.eye(10)[y_val]
        loss = cross_entropy_loss(y_onehot, y_pred)
        total_loss += loss

        y_pred_classes = np.argmax(y_pred, axis=1)
        y_true = np.argmax(y_onehot, axis=1)
        correct += np.sum(y_pred_classes == y_true)

    accuracy = correct / len(data_loader.dataset)
    avg_loss = total_loss / len(data_loader)
    return accuracy, avg_loss
```

Here, we convert the one-hot encoded labels and predictions into their respective class indices using np.argmax, and then compute the percentage of correctly predicted examples.

Training the Model

We can now tie everything together in a training loop. The model will iterate over the training data, compute the loss, backpropagate the errors, and update its parameters. After each epoch, we evaluate the model on the test set to monitor its performance:

```python
def train(train_loader: DataLoader, test_loader: DataLoader):
    # Initialize the weights
    lr = 0.01
    input_dim = 28 * 28
    hidden_dim = 256
    output_dim = 10
    model = FCNetwork(input_dim, hidden_dim, output_dim)

    NUM_EPOCHS = 20
    VAL_INTERVAL = 1

    for epoch in range(NUM_EPOCHS):
        train_loss = 0
        for batch_idx, (x, y) in enumerate(train_loader):
            y_onehot = np.eye(output_dim)[y]

            # Forward pass
            z1, h, z2, f_k = model.forward(x)
            loss = cross_entropy_loss(y_onehot, f_k)
            train_loss += loss

            # Backward pass
            dw1, db1, dw2, db2 = model.compute_grad(x, y_onehot, f_k, z1, h, z2)
            model.update_weights(dw1, db1, dw2, db2, lr)

        # Compute average training loss across minibatches
        avg_train_loss = train_loss / len(train_loader)

        # Evaluate on validation set every VAL_INTERVAL epochs
        if (epoch + 1) % VAL_INTERVAL == 0:
            val_acc, val_loss = evaluate(model, test_loader)
            print(f"Epoch {epoch}, Train Loss: {avg_train_loss:.4f}, Validation Accuracy: {val_acc:.4f}, Validation Loss: {val_loss:.4f}")
        else:
            print(f"Epoch {epoch}, Training Loss: {avg_train_loss:.4f}")

    return model
```

Now it’s simply a matter of loading the MNIST dataset and calling the train function to train the model:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((28, 28)),  # MNIST images are already 28x28, so this is effectively a no-op
    transforms.ToTensor(),
])

# Fetch the dataset
train_dataset = datasets.MNIST(root='./data', train=True, transform=transform, download=True)
test_dataset = datasets.MNIST(root='./data', train=False, transform=transform, download=True)

# Note: the loaders yield CPU torch tensors; NumPy converts them
# implicitly inside our model's forward pass
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=1000, shuffle=False)

model = train(train_loader, test_loader)
```

And that’s it! We’ve implemented a feedforward neural network from scratch and trained it on the MNIST dataset. The model should achieve an accuracy of roughly 98% on the test set after 20 epochs, which is quite impressive for such a simple model.
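With training finished, the returned model can be used for inference directly through the predict method we defined earlier. A minimal sketch:

```python
# Pull one batch from the test loader and classify it
X_batch, y_batch = next(iter(test_loader))
preds = model.predict(X_batch)

print("predicted:", preds[:10])
print("actual:   ", y_batch[:10].numpy())
```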
Conclusion

In this post, we’ve covered the basics of implementing a neural network from scratch. We carefully calculated the gradients for a simple feedforward neural network with a single hidden layer, and implemented the model in Python using NumPy. We then trained the model on the MNIST dataset and achieved 98% accuracy using the gradients we manually derived.

For a more intuitive, visual walkthrough of backpropagation, I highly recommend the What is backpropagation really doing? video by 3Blue1Brown. Hopefully this post has given you a better understanding of how neural networks work under the hood. Stay tuned for more posts on foundations of machine learning, or maybe pictures of my latest sourdough loaf. Until next time!

Appendix

Softmax Gradient Derivation

To solve for $\frac{\partial \hat{y_k}}{\partial z_j^{[2]}}$, we need to consider both cases: where $j = k$ and where $j \neq k$. For the case when $j = k$:

$$ \frac{\partial \hat{y_k}}{\partial z_j^{[2]}} = \frac{\partial}{\partial z_j^{[2]}} \biggl( \frac{e^{z_k^{[2]}}}{\sum_{t=1}^{K} e^{z_t^{[2]}}} \biggr) $$

Using the quotient rule, where $u = e^{z_k^{[2]}}$ and $v = \sum_{t=1}^{K} e^{z_t^{[2]}}$:

$$ \frac{\partial \hat{y}_k}{\partial z_j^{[2]}} = \frac{v \cdot \frac{\partial u}{\partial z_j^{[2]}} - u \cdot \frac{\partial v}{\partial z_j^{[2]}}}{v^2} $$

Since $u = e^{z_k^{[2]}}$, we have:

$$ \frac{\partial u}{\partial z_j^{[2]}} = \frac{\partial e^{z_k^{[2]}}}{\partial z_j^{[2]}} = \begin{cases} e^{z_k^{[2]}}, & \text{if } j = k \\\ 0, & \text{if } j \neq k \end{cases} $$

Also, since $v = \sum_{t=1}^{K} e^{z_t^{[2]}}$, we have:

$$ \frac{\partial v}{\partial z_j^{[2]}} = \frac{\partial}{\partial z_j^{[2]}} \sum_{t=1}^{K} e^{z_t^{[2]}} = e^{z_j^{[2]}} $$

Substituting these into the quotient rule (with $j = k$):

$$ \frac{\partial \hat{y_k}}{\partial z_j^{[2]}} = \frac{\Bigl(\sum_{t=1}^{K} e^{z_t^{[2]}} \cdot e^{z_k^{[2]}} \Bigr) - e^{z_k^{[2]}} \cdot e^{z_j^{[2]}}} {\Bigl(\sum_{t=1}^{K} e^{z_t^{[2]}} \Bigr)^2}, $$

which simplifies to

$$ \begin{equation*} \frac{\partial \hat{y_k}}{\partial z_j^{[2]}} = \hat{y}_k \left( 1 - \hat{y}_k \right) \space. \end{equation*} $$

When $j \neq k$, the derivation is similar, but in this case the term $\frac{\partial u}{\partial z_j^{[2]}} = 0$, because $u = e^{z_k^{[2]}}$ and $j \neq k$. We are left with

$$ \frac{\partial \hat{y_k}}{\partial z_j^{[2]}} = \frac{- e^{z_k^{[2]}} \cdot e^{z_j^{[2]}}} {\Bigl(\sum_{t=1}^{K} e^{z_t^{[2]}} \Bigr)^2}, $$

which simplifies to

$$ \begin{equation*} \frac{\partial \hat{y}_k}{\partial z_j^{[2]}} = -\hat{y}_j \hat{y}_k \space. \end{equation*} $$

Thus, we can express the derivative of the softmax output $\hat{y}_k$ with respect to $z_j^{[2]}$ as

$$ \begin{equation*} \frac{\partial \hat{y}_k}{\partial z_j^{[2]}} = \begin{cases} \hat{y}_k \left( 1 - \hat{y}_k \right), & \text{if } j = k \\\ -\hat{y}_j \hat{y}_k, & \text{if } j \neq k \end{cases} \end{equation*} $$
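These two cases can be collected into a single Jacobian matrix, $J = \operatorname{diag}(\hat{y}) - \hat{y} \hat{y}^T$. As a final sanity check (the logits below are arbitrary values chosen only for illustration), we can verify this analytic Jacobian against central finite differences:

```python
import numpy as np

def softmax_vec(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

z = np.array([0.5, -1.2, 2.0, 0.3])  # arbitrary logits
y_hat = softmax_vec(z)

# Analytic Jacobian: J[k, j] = y_hat[k] * (delta_kj - y_hat[j])
J_analytic = np.diag(y_hat) - np.outer(y_hat, y_hat)

# Finite-difference Jacobian, perturbing one logit at a time
eps = 1e-6
J_numeric = np.zeros_like(J_analytic)
for j in range(len(z)):
    dz = np.zeros_like(z)
    dz[j] = eps
    J_numeric[:, j] = (softmax_vec(z + dz) - softmax_vec(z - dz)) / (2 * eps)

print(np.abs(J_analytic - J_numeric).max())  # should be tiny, roughly 1e-10 or smaller
```

The maximum discrepancy should be on the order of floating-point noise, confirming the case analysis above.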