id
int64 2
42.1M
| by
large_stringlengths 2
15
⌀ | time
timestamp[us] | title
large_stringlengths 0
198
⌀ | text
large_stringlengths 0
27.4k
⌀ | url
large_stringlengths 0
6.6k
⌀ | score
int64 -1
6.02k
⌀ | descendants
int64 -1
7.29k
⌀ | kids
large list | deleted
large list | dead
bool 1
class | scraping_error
large_stringclasses 25
values | scraped_title
large_stringlengths 1
59.3k
⌀ | scraped_published_at
large_stringlengths 4
66
⌀ | scraped_byline
large_stringlengths 1
757
⌀ | scraped_body
large_stringlengths 1
50k
⌀ | scraped_at
timestamp[us] | scraped_language
large_stringclasses 58
values | split
large_stringclasses 1
value |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,049,787 | Rinzler89 | 2024-11-05T08:47:02 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,049,799 | rahijamil | 2024-11-05T08:49:03 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,049,811 | rhazn | 2024-11-05T08:51:07 | Using systems modeling to refine strategy | null | https://lethain.com/strategy-systems-modeling/ | 2 | 0 | null | null | null | no_error | Using systems modeling to refine strategy. | 2024-11-04T07:00:00-07:00 | null | While I was probably late to learn the concept
of strategy testing,
I might have learned about systems modeling too early in my career,
stumbling on Donella Meadows’ Thinking in Systems: A Primer
before I began my career in software.
Over the years, I’ve discovered a number of ways to miuse systems modeling,
but it remains the most effective, flexible tool I’ve found to debugging complex problems.In this chapter, we’ll work through:when systems model is a useful technique, and when it’s better to
rely on other refinement techniques like Wardley mapping or strategy testing insteada two minute primer on the basics of systems modeling, along with resources for those looking for a deeper exploration
of the foundational topicsa discussion on systems modeling tooling, why there’s no perfect systems modeling tool out there,
and how I recommend picking the tool that you build proficiency withthe steps to build a systems model for a problem you’re engaging withhow to document your learnings from a systems model to maximize the
chance that others will pay attention to it rather than ignoring
it due to the unfamiliarity or complexity of the toolingwhat systems modeling can’t do, even if you really want to believe it canAfter working through this chapter’s overview of systems modeling,
you can see the approaches implemented in a number of system models created
to refine the strategies throughout this book.
The theory of systems modeling is certainly interesting, but hopefully
seeing real models in support of concrete engineering strategies will
be even more useful.This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book.
As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.When is systems modeling useful?Although refinement is an important step of developing any strategy,
some refinement techniques work better for any given strategy.
Systems modeling is extremely useful in three distinct scenarios:When you’re unsure where leverage points might be in a complex system,
modeling allows you to cheaply test which levers might be meaningful.
For example, modeling onboarding drivers in a ride-sharing app
showed that improving onboarding was less important than reengaging departed drivers.When you have significant data to compare against,
which allows you to focus in on the places where the real data and your model are in tensions.
For example, I was able to model the impact of hiring on Uber’s engineering productivity,
and then compare that with internal data.When stakeholder disagreements are based in their unstated intuitions,
models can turns those intuitions into something structured that can be debated more effectively.In all three categories, modeling makes it possible iterate your thinking much faster than running a live process or technology experiment
with your team. I sometimes hear concerns that modeling slows things down, but this is just an issue of familiarity.
The more you practice, modeling can be faster than asking for advice fro industry peers.
The actual models I’ve developed for this book took less than an hour. (With one notable exception: modeling Large Language Models (LLMs) impacts on developer experience,
which took much longer because I deliberately used an impractical tool to reveal the importance of good tooling).Additionally, systems modeling will often expose counter-intuitive dimensions to the problem you’re working on.
For example, the model I mentioned above on LLMs’ impact on developer experience suggests that effective LLMs might
cause us to spend more time writing and testing code (but less fixing issues discovered post-production).
This is a bit unexpected, as you might imagine they’d reduce testing time, but reducing testing time is only valuable
to the extent that issues identified in production remains–at worst–constant; if issues found in production increases,
then reduced testing time does not contribute to increased productivity.Modeling without praxis, creates unsubstantiated conviction.
However, in combination with praxis, I’ve encountered few other techniques that can similar accelerate learning.That doesn’t mean that it’s always the ideal refinement technique.
If you already have conviction on the general approach, and want to refine the narrow details,
then strategy testing is a better option.
If you’re trying to understand the evolution of a wider ecosystem, then you may prefer
Wardley mapping.Two minute primerIf you want an exceptional introduction to systems thinking, there’s no better place to go than
Donella Meadows’ Thinking in Systems.
If you want a worse, but shorter, introduction, I wrote a short Introduction to systems thinking
available online and in An Elegant Puzzle.If you want something even shorter, then here’s the briefest that I can manage.Accumulations are called stocks. For example, each of the boxes (Requests, Server, etc)
in the above diagram is a stock. Changes to stocks are called flows. Every arrow (OK, Error in server, etc)
between stocks in the diagram is a a flow.Systems modeling is the practice of using various configurations of stocks and flows
to understand circumstances that might otherwise have surprising behavior or are too slow
to understand from measurement.For example, we can use the above model to explore the tradeoffs between a load balancer that does and does not cap throughput
to a load-sensitive service behind it.Without a model, you might get into a philosophical debate about how rediculous it is that the downstream server
is load-sensitive. With the model, it’s immediately obvious that it’s worthwhile protecting it, even if it certainly
is concerning that it is so sensitive. This is what models do: they create a cheap way to understand reality when
fully understanding reality is cumbersome.For an idea that’s quite intuitive, the tools of systems modeling are a real obstacle to wider adoption.
Perhaps a downstream consequence of many early, popular systems modeling tools being quite expensive,
the tooling ecosystems for systems modeling has remained fragmented for some time.
There also appears to be a mix of complex requirements, patent consolidation, and percieved small market size
that’s discouraged a modern solutions from consolidating the tooling market.Earlier, I mentioned that system modeling is extremely quick, but that many folks find it a slow, laborous process.
Part of that is an issue of practice, but I suspect that the quality of modeling tooling is at least a big a part of the challenge.
In the LLMs impact on developer experience model, I go about the steps of building the model in an increasingly messy spreadsheet.
This was slow, challenging, and extremely brittle. Even after finishing the model, I couldn’t extend it effectively to test new ideas,
and I inadvertently introduced a number of bugs into the implementation.Going in the opposite direction, I explored using a handful of tools, such as Sagemodeler
or InsightMaker, which seemed like a potentially simpler toolchains
than the one I typically rely on. There are so many of these introductary toolchains for systems modeling,
but I generally find that they’re either constrained in their capabilities, have a fairly high learning curve,
or make it difficult to share your model with others.In the end, I wound up back at the toolchain that I use,
which happens to be one that I wrote some years ago,lethain/systems.
This is far from a perfect toolchain, but I think it’s a relatively effective mechanism for demonstrating
systems modeling for a few reasons:quick to create models and iterate on those modelseasy to share those models with others for inspection and their own explorationrelatively low surface area for bugs in your modelsfree, open-source, self-hosted toolchain that integrates well with Jupyter ecosystem
for diagramming, modeling and so onYou should absolutely pick any tool that feels right to you, and practice with it until you feel confident
quickly modeling scenarios. Afterwards, I wouldn’t recommend spending too much time thinking about tools at all:
the most important thing is to build models and learn from them quickly, and almost any tool will be sufficient
to that goal with some deliberate practice.How to modelLearning to system model takes some practice, so we’ll approach the details of learning to
model from two directions.
First, by documenting a general structure for approaching modeling,
and second by providing breadcrumbs to the models
developed in this book for deeper exploration of particular modeling ideas.The structure to systems modeling that I find effective is:Sketch the stocks and flows on paper or a diagramming application (e.g.
Excalidraw, Figma, Whimsical, etc).
Use whatever you’re comfortable with.Reason about how you would expect a potential change to shift the flows through the diagram.
Which flows do you expect to go up, and which down, and how would that movement help you
evaluate whether your strategy is working?Model the stocks and flows in your spreadsheet tool of choice.
Start by modeling the flows from left to right (e.g. the happy path flows). Once you have that fully working,
then start modeling the right to left flows (e.g. the exception path flows).See the Modeling impact of LLMs on Developer Experience model
for a deep dive into the particulars of creating a model.Exercise the model by experiment with a number of different starting values
and determining which rates really impact the model’s values.
This is essentially performing sensitivity analysisDocument the work done in the above sections into a standalone writeup.
You can then link to that writeup from strategies that benefit from a given model’s insights.
You might link to any section of your strategy, depending on what
topic the particular model explores.
I recommend decoupling models from specific strategies, as generally the details of any given
model are a distraction from understanding a strategy, and it’s best to avoid that distraction unless
a reader is surprised by the conclusion, in which case, the link out supports drilling into the details.As always, this is the sequence of steps that I’d encourage you to follow,
and the sequence that I generally follow, but you should adapt them to solve
the particular problems at hand.
Over time, my experience is that most of these steps–excluding documentation–turn into a single
iterative process, and that I document everything after several iterations.Now that we’ve covered the overarching approach to system modeling,
here are the breadcrumbs to specific models that go deeper on particular elements:Modeling driver onboarding
explores how the driver lifecycle at Theoretical Ride Sharing might be improved
with LLMs,
and introduces using the lethain/systems library
for modelingModeling impact of LLMs on Developer Experience
looks at how LLMs might impact developer experience at Theoretical Ride Sharing,
and is demonstrates (the downsides of) modeling with a spreadsheetModeling engineering backfill strategy
studies the financial consequences of various policies for how we backfill departed
engineers in an engineering organization, and introduces further lethain/systems featuresBeyond these models, you can find other systems models that I’ve written
on my blog’s systems-thinking category, and there
are numerous, great examples in the materials references in the systems modeling primer
section above.How to document a modelMuch like documenting strategy is challenging,
communicating with models in a professional setting is challenging.
The core problems is that there are many distinct groups of model readers.
Some will lack familiarity with the tooling you use to develop models.
Others will try to refine, or invalidate, your model by digging into the details.I navigate those mismatches by focusing first on the audience who
is least likely to dig into the model. I still want to keep all the details
handy, ideally in the rawest form possible to allow others to manipulate the model
themselves, but it’s very much my second goal when documenting a model.From experience, I recommended this order (it’s also the order used in the models
in this book, so you’ll see it in practice a number of times):start with learning section, with charts showing what model has taught yousketch and explaing the stocks and flowsreason about what the sketch itself teaches youexplain how you developed the model, with an emphasis on any particularly complex portionsexercise the model by testing how changing the flows and stocks leads to different outcomesIf you remember nothing else, your document should reflect the reality that
most people don’t care how you built the model, and just want the insights.
Give them the insights early, and assume no one will trust your model nearly as much as you do.
Models are an input into the strategy, never a reliable sole backer for a strategy.What systems modeling isn’tAlthough I find systems modeling a uniquely powerful way to accelerate learning,
I’ve also encountered many practioners who believe that their models are reality
rather than reflecting reality.
Over time, I’ve developed a short list of cautions to help
would-be modelers avoid overcommitting to their model’s insights:When your model and reality conflict, reality is always right.
At Stripe, we developed a model to guide our reliability strategy.
The model was intuitively quite good, but its real-world results were mixed.
Attachment to our early model distracted us (too much time on collecting and classifying data)
and we were slow to engage with the most important problems (maximizing impact of scarce mitigation bandwidth, and growing mitigation bandwidth).
We’d have been more impactful if we engaged directly with what reality was teaching us rather than looking for reasons to disregard reality’s lessons.Models are immutable, but reality isn’t.
I once joined an organization investing tremendous energy into hiring but nonetheless struggling to hire.
Their intuitive model pushed them to spend years investing into top of funnel optimization,
and later steered them to improving the closing process.
What they weren’t able to detect was that misalignment in interviewer expectations was the largest hurdle in hiring.Every model omits information; some omit critical information.
The service migration at Uber is a great example: modeling clarified that we had to adopt a more aggressive
approach to our service migration in order to succeed. Subsequently, we did succeed at the migration,
but the model didn’t study the consequences of completing the migration, which were a very challenging development environment.
The model captured everything my team cared about, as the team responsible for running the migration,
but did nothing to evaluate whether the migration was a good idea overall.In each of those situations, two things are true: the model was extremely valuable, and the model subtly led us astray.
We would have been led astray even without a model, so the key thing to remember isn’t that models are inherently misleading,
instead the risk is being overly confident about your model. A powerful tool to use in tandem with judgment, not a replacement.SummarySystems modeling isn’t prefect.
If you’ve already determined your strategy and want to refine the details,
then strategy testing is probably a better choice.
If you’re trying to understand the dynamics of an envolving ecosystem,
then Wardley mapping is a more appropriate tool.However, if you have the general shape, but lack conviction on how
the pieces fit together, systems modeling is a remarkable tool.
After this chapter, you know how to select appropriate tooling,
and how to use that tooling to model your problem at hand.
Next, we’ll work through systems modeling a handful of detailed problems
to provide concrete examples of applying this technique. | 2024-11-07T14:58:13 | en | train |
42,049,825 | Manojbhat09 | 2024-11-05T08:53:13 | null | null | null | 1 | null | [
42050542,
42049962,
42050543
] | null | true | null | null | null | null | null | null | null | train |
42,049,830 | Manojbhat09 | 2024-11-05T08:54:04 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,049,846 | lghui | 2024-11-05T08:56:33 | null | null | null | 1 | null | [
42049852
] | null | true | null | null | null | null | null | null | null | train |
42,049,905 | rbanffy | 2024-11-05T09:09:30 | Generative AI Has an E-Waste Problem | null | https://spectrum.ieee.org/e-waste | 1 | 0 | null | null | null | no_error | Generative AI Has a Massive E-Waste Problem | 2024-11-04T13:00:04Z | Katherine Bourzac | Private investment in generative AI has grown from about US $3 billion in 2022 to $25 billion in 2023, and about 80 percent of private companies expect AI to drive their business in the next 3 years, according to Deloitte. Keeping up with the latest advancements means upgrading GPUs, CPUs, and other electronic equipment in data centers as newer, more advanced chips become available. And that, researchers project, will lead to an explosion in the production of electronic waste.A study published last week in the journal Nature Computational Science estimates that aggressive adoption of large language models (LLMs) alone will generate 2.5 million tonnes of e-waste per year by 2030.“AI doesn’t exist in a vacuum; it relies on substantial hardware resources that have tangible environmental footprints,” says study coauthor Asaf Tzachor, a sustainability and climate researcher at Reichman University, in Israel. “Awareness of the e-waste issue is crucial for developing strategies that mitigate negative environmental impacts while allowing us to reap the benefits of AI advancements,” he says.Most research on AI sustainability has focused on these models’ energy and water use and their concomitant carbon emissions. Tzachor worked with Peng Wang and Wei-Qiang Chen, both professors at the Chinese Academy of Sciences, to calculate the potential increase in e-waste associated with generative AI. The study is intended to provide an estimate of the potential scale of the problem, and the researchers hope it will spur companies to adopt more sustainable practices.The Scale of the E-Waste ProblemElectronic waste contains toxic metals and other chemicals that can leach out into the environment and cause health problems. In 2022, the world produced 62 million tonnes of e-waste in total, according to the United Nations Global E-waste Monitor. This waste stream is growing five times as fast as recycling programs, the U.N. found.In the coming years, AI could make a significant contribution to the problem. Tzachor says e-waste associated with generative AI includes discarded GPUs, CPUs, batteries used for backup power in data centers, memory modules, and printed circuit boards.The study details four potential scenarios for generative AI adoption—ranging from limited to aggressive expansion—and projects potential e-waste expansion from a 2023 baseline of 2,600 tons per year. Limited expansion of AI use would generate a total of 1.2 million tonnes of e-waste from 2023 to 2030; aggressive use would result in a total of 5 million tonnes over that period. Tzachor says given current trends, the aggressive scenario is most likely.The study isn’t comprehensive—it considers only large language models, not other forms of generative AI. Tzachor says the team focused on LLMs because they’re among the most computationally intensive. “Including other forms of AI would increase the projected e-waste figures,” Tzachor says.What Can Be Done to Reduce AI’s E-Waste?In theory, adopting more advanced chips should help server farms do more with less, and produce less waste. But each upgrade results in a net increase in the waste stream. And given current trade restrictions on semiconductors, upgrading is not always an option. Countries that don’t have access to the most advanced chips may generate more waste as a result. A one-year delay in upgrading to the latest chips will result in a 14 percent increase in e-waste, according to the study.One of the best ways to mitigate this AI waste stream is to find ways to reuse electronic equipment—what Tzachor calls downcycling. Servers that are no longer cutting edge can be repurposed for hosting websites or doing more basic data processing tasks, or they can be donated to educational institutions. Most tech companies—including Amazon, Google, and Meta—have announced sustainability goals that focus on carbon footprints and using green energy. And Microsoft has pledged to limit e-waste production from its data centers. But Tzachor says regulation may be needed to ensure adherence to the best practices around AI e-waste. “Companies should have incentives to adopt these strategies,” he says. | 2024-11-08T12:31:57 | en | train |
42,049,910 | signalhound | 2024-11-05T09:10:40 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,049,912 | marketechy | 2024-11-05T09:11:05 | null | null | null | 1 | null | [
42049913
] | null | true | null | null | null | null | null | null | null | train |
42,049,916 | rbanffy | 2024-11-05T09:11:20 | Amazon Rufus: How We Built an AI-Powered Shopping Assistant | null | https://spectrum.ieee.org/amazon-rufus | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,049,936 | afiodorov | 2024-11-05T09:16:06 | Show HN: IMDb SQL Best Movie Finder | I've built a static web app called IMDb SQL Best Movie Finder that lets you query a database of 1.5 million IMDb titles using SQL directly in your browser. It’s entirely client-side, so all the data processing happens locally on your machine — no server involved. | https://www.imdb-sql.com/ | 128 | 74 | [
42050260,
42051451,
42052287,
42050557,
42050610,
42050559,
42051154,
42050548,
42050331,
42051035,
42050299,
42050396,
42050336,
42053327,
42050263,
42050462,
42053178,
42050214,
42049953,
42051054
] | null | null | null | null | null | null | null | null | null | train |
42,049,939 | null | 2024-11-05T09:16:35 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,049,940 | ArneTR | 2024-11-05T09:16:42 | Show HN: Carbon Emissions of GitHub/Gitlab Pipelines (Eco-CI) | Eco-CI is an open source plugin that works on many major CI/CD vendors: GitHub, GitLab, Jenkins<p>It leverages an ML energy model to estimate the power of the current machine executing the pipeline and correlates that with the carbon grid intensity of the public IP.<p>It then can show directly in the Pull-Request how much energy and carbon is used.<p>That functionality is paired with an external dashboard that can be hosted which is also free and open source and can show the carbon emissions over time.<p>Here is an example link where we have for instance been tracking the carbon emissions of Django on GitHub: <a href="https://metrics.green-coding.io/ci.html?repo=green-coding-solutions/django&branch=main&workflow=60545072" rel="nofollow">https://metrics.green-coding.io/ci.html?repo=green-coding-so...</a> | https://github.com/green-coding-solutions/eco-ci-energy-estimation | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,049,941 | sergiuchiriac | 2024-11-05T09:17:27 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,049,952 | lnrd | 2024-11-05T09:19:59 | Making onscreen content available to Siri and Apple Intelligence | null | https://developer.apple.com/documentation/appintents/making-onscreen-content-available-to-siri-and-apple-intelligence | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,049,983 | medmarrouchi | 2024-11-05T09:25:22 | null | null | null | 1 | null | [
42049984
] | null | true | null | null | null | null | null | null | null | train |
42,050,007 | thedevsaddam | 2024-11-05T09:30:35 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,010 | marols | 2024-11-05T09:30:53 | Understanding privacy risk with k-anonymity and l-diversity | null | https://marcusolsson.dev/k-anonymity-and-l-diversity/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,020 | marols | 2024-11-05T09:32:18 | Get started with Fides | null | https://marcusolsson.dev/get-started-with-fides/ | 1 | 0 | [
42050939
] | null | null | no_error | Get started with Fides | 2024-09-23T00:00:00Z | Marcus Olsson | To remain compliant with privacy laws and regulations, organizations need to continuously monitor how data is used across their systems. Instead of relying on PR reviews, imagine if you could catch privacy issues automatically before they make it into production.In this tutorial, you’ll learn about Fides—an open-source privacy engineering platform that lets you map sensitive data across your systems, run automated privacy checks, and quickly respond to data subject requests, or DSRs.You’ll first deploy a local sample project with an e-commerce sample application that uses Fides to map personal data. Throughout the tutorial, you’ll switch between the role of a user exercising their privacy rights and an administrator responsible for responding to user-submitted data requests.Before you startTo finish this tutorial, you’ll need:Docker (20.10.11 or higher)Python (3.9 or higher)venv (or any environment manager you’re comfortable with, such as
Conda)macOS users with Python 3.12 or higherIf you’re running Python 3.12, you may experience errors during the installation process. If this happens, you may find it easier to use an earlier Python version.To see your current Python version:python3 --versionWhat is Fides?Fides is an open-source privacy engineering platform that uses a privacy-as-code approach to manage personal data across your data systems. Fides can scan your infrastructure and generate data maps, which you can then use to, for example, run automated privacy checks and fulfill data requests.At the core of Fides is
Fideslang—a YAML-based configuration language that defines your datasets, systems, and policies. While Fideslang enables a comprehensive set of use cases, this tutorial will focus on responding to data subject requests (DSR).Defining a taxonomy for privacyIn September 2024,
IAB announced a new privacy taxonomy based on Fideslang—a significant step towards a standardized language for defining personal data!For a more in-depth explanation of Fides, see the video by Cillian Kieran—founder and CEO at
Ethyca, the company behind Fides.Install Fides locallyFides is deployed as a web server that reads Fideslang configuration and provides several operations through the
Fides REST API.To avoid setting up a production-like environment when you just want to try it out, Fides comes with a sample project intended to run locally. The sample project includes the Fides web server, an admin UI, and a sample application that we’ll explore later.Once you’re ready to deploy Fides to your own infrastructure, see the
Advanced installation docs.Fides is available as a Python package, so we’ll start by installing Fides to a virtual Python environment.Create a folder for the Fides sample project:mkdir ~/fides
cd ~/fidesCreate and activate a virtual environment using venv:python3 -m venv fides
source fides/bin/activateInstall Fides using pip:pip install ethyca-fidesOnce the pip command finishes, we can start deploying the sample project.Run the Fides sample projectTo deploy the Fides sample project locally, run the following command:fides deploy upThe first time you run the command, it’ll need to download the necessary dependencies. This may take a few minutes, depending on your internet connection.Behind the scenesThe Fides sample project uses Docker to define the services and databases required by the sample project.If you have experience with Docker, you may be interested to see the
docker-compose.yml file for the sample project.Once Fides has been successfully deployed, you’ll be asked whether you’d like to share usage analytics with
Ethyca. Select the option you’re most comfortable with.In the final output, you’ll see the URLs to access each component of the sample project. You can also browse to
localhost:3000/landing to get an overview, including a project diagram.Now that the sample project is up and running, take some time to explore the different components. Remember to check out the project diagram at the bottom of the landing page to better understand how the various parts interact. When you’re ready, let’s check out the included sample application.Cookie House—delicious privacyCookie House is a fictional e-commerce store where you can buy—judging by the prices—Michelin-rated cookies.To deliver the cookies, Cookie House needs some personal data, such as the name and physical address of the user. Let’s make a purchase so we’ll have some data to request later.Visit
localhost:3000/ to see the store.Find the tastiest-looking cookie, and click Purchase under it.Fill in all the fields. Remember the email you used. You’ll need it later to access your data.Click Purchase at the bottom.You’ve entrusted Cookie House with your personal data. Later, we’ll request access to the data you submitted. But before that, let’s see what it looks like for the administrator at Cookie House.Fides Admin UIThe Admin UI is a web application that communicates with the Fides API to perform common administrative tasks.Open
localhost:8080 in your browser.Sign in using the test credentials. You can find your credentials on the
landing page or in the terminal output from the fides deploy up command.The Admin UI lets you manage several aspects of your Fides installation. Let’s look at two of them: system inventory and request manager.In the sidebar on the left, under Data inventory, click System inventory.This view gives you an overview of all the systems that Fides manages. In this demo, you can see the different systems that handle data about Cookie House users.In the sidebar, under Privacy requests, click Request manager.This view lists the data requests made by data subjects, or the Cookie House users in our example.As you can see, the Privacy Requests view is empty right now. Let’s change that by making an access request to see what data Cookie House has about you.Respond to data access requestsBefore we decide whether to erase the data, let’s first see what data they stored from our recent purchase.Create a data access requestThe Cookie House sample application includes a Privacy center that lets users exercise their privacy right by requesting access to their data, or erasing it altogether.Head back to
Cookie House and click Privacy center at the very bottom (or browse directly to
localhost:3001).Click Access your data.In Email, enter the email you used when you purchased the cookies.In First name, enter the name you used with your order.Click Continue.The data request has now been submitted to Fides and awaits approval by a Cookie House administrator.Approve a data access requestOnce the user has submitted a data request, we need to respond within the configured time frame. We don’t want them to wait for too long, so let’s review it right now.Switch back to the
Privacy Requests in the Fides Admin UI. You’ll see a new access request in the list (you may need to refresh the page).The Days left column shows how long you have until you must respond to the request.The Actions column lets you either approve (checkmark icon) or deny (cross icon) the data request.In the Actions column, click the checkmark.Click Confirm to approve the request.Notice that the Status changed to Completed. The request has now been fulfilled, which means the user can now access their data.Inspect the exported dataIn production, you’d likely send an email to the user to let them know where they can find the exported data. When running Fides locally, the data is instead exported to a folder in your project folder.In your terminal, change the directory to the fides_upload folder and list its contents:cd ~/fides/fides_uploads
lsYou’ll see a ZIP file with a name starting with pri_.Unzip the exported data into a new folder.unzip pri_b3624022-a2ba-48bc-8956-541ff81d9a63.zip -d data_exportThe exported data contains a data folder, and a welcome.html page where you can browse the contents.Open welcome.html in your browser.# macOS
open ./data_export/welcome.htmlYou can click the rows in the table to navigate the dataset. Click the Back arrow at the top to go back to the previous view.Respond to data erasure requestsWhile you may have enjoyed your $20 cookie, you later come to terms with the fact that it was a one-time purchase, and your budget won’t be able to sustain your costly cookie cravings.Since we don’t expect to do any more business with Cookie House, let’s request the data to be erased.Create a data erasure requestTo create an erasure request, head back to the
Privacy center.Click Erase your data.Enter the email you used earlier and click Continue.You’ve successfully submitted an erasure request and need to wait for a Cookie House administrator to approve it.Approve a data erasure requestIn the Fides Admin UI, switch back to
Privacy requests.You’ll see that a new request has been added (refresh the page if not). Notice under Request type that this is an Erasure request, whereas the previous one was an Access request.Approve the request by clicking the checkmark in the Actions column.The user data has now been erased from the systems managed by Fides. If you’d like, you can verify this by submitting another access request through the Privacy center.SummaryIn this tutorial, you’ve explored how Fides can be used to manage data subject requests for an e-commerce store. You learned how to respond to both access and erasure requests submitted by users.How are DSRs handled in your organization today? How do you keep track of personal data throughout your systems today, and how do you think that would change with Fides?Fulfilling data requests is just one of several use cases that is possible with Fides. To learn more, see the
Fides docs.Are you interested in learning more about Fides or other open-source privacy tools,
let me know.Clean up resourcesFeel free to continue exploring the sample project. When you’re done, you can run the following command to shut down the sample project to free up resources:fides deploy down | 2024-11-07T23:27:27 | en | train |
42,050,041 | aidirectories | 2024-11-05T09:36:24 | null | null | null | 1 | null | [
42050042
] | null | true | null | null | null | null | null | null | null | train |
42,050,051 | thund | 2024-11-05T09:38:02 | OpenAI Predicted Outputs | null | https://community.openai.com/t/introducing-predicted-outputs/1004502 | 1 | 0 | [
42050528
] | null | null | null | null | null | null | null | null | null | train |
42,050,066 | djaygour | 2024-11-05T09:41:39 | KuwarPay- Payments on Social Media | null | https://kuwarpay.onrender.com/ | 1 | 1 | [
42050078
] | null | null | null | null | null | null | null | null | null | train |
42,050,067 | andrewstuart | 2024-11-05T09:41:47 | PointCast | null | https://en.wikipedia.org/wiki/PointCast | 2 | 1 | [
42067338,
42050523
] | null | null | no_error | PointCast | 2005-11-17T19:46:06Z | Contributors to Wikimedia projects |
From Wikipedia, the free encyclopedia
PointCastIndustrySoftware DevelopmentFounded1992; 32 years ago in Sunnyvale, California, United StatesFounderChristopher R. HassettDefunct2000FateAcquired by Launchpad Technologies
PointCast was a dot-com company founded in 1992 by Christopher R. Hassett in Sunnyvale, California.
The company's initial product amounted to a screensaver that displayed news and other information, delivered live over the Internet. The PointCast Network used push technology, which was a new concept at the time, and received enormous press coverage when it launched in beta form on February 13, 1996.[1]
The product did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use,[2] and was banned in many places.[3] It demanded more bandwidth than the home dial-up Internet connections of the day could provide, and people objected to the large number of advertisements that were pushed over the service as well.[4] PointCast offered corporations a proxy server that would dramatically reduce the bandwidth used, but even this didn't help save the company. The increasing popularity of "portal websites" also accelerated the demise of PointCast. When PointCast first started, Yahoo offered little more than a hierarchical structure on the Internet (broken down by subject much like DMOZ), but was soon to introduce the portal which was customizable and offered a much more convenient way to read the news.
News Corporation purchase offer and change of CEO[edit]
At its height in January 1997, News Corporation made an offer of $450 million to purchase the company. However, the offer was withdrawn in March. While there were rumors that it was withdrawn due to issues with the price and revenue projections, James Murdoch said it was due to PointCast's inaction.[5][6]
Shortly after not accepting the purchase offer, the board of directors decided to replace Christopher Hassett as the CEO. Some reasons included turning down the recent purchase offer, software performance problems (using too much corporate bandwidth) and declining market share (lost to the then-emerging Web portal sites.) After five months, David Dorman was chosen as the new CEO. In an effort to raise more capital, Dorman planned to take the company public. A filing was made in May 1998 with a valuation of $250 million. This plan was abandoned after two months in favor of looking for a company with whom to partner or be acquired.[6]
In August 1998, PointCast found such a partner. In order to compete with @Home, a consortium of telephone companies and Microsoft put together a project designed to promote use of DSL in preference to cable modems. The project was dubbed "Newnet" and the plan was to use PointCast's software as a portal for the service. The consortium planned to buy PointCast for $100 million as part of the deal. The deal was signed in December 1998 with the intent of launching the service in April 1999.[6][7]
Due to delays in the project, Dorman resigned as CEO in March 1999. Two weeks later, PointCast was informed that their planned acquisition had been scrapped. In the reorganization that followed, 75 of the 220 employees were let go in an effort to reduce costs.[8] A number of bids were made to buy the company, including two from former CEO Christopher Hassett, which were rejected.[9][10]
Instead, they sold out for about $7 million in May 1999 to Launchpad Technologies, Inc., a San Diego company founded and backed by Idealab, and the PointCast network was shut down the next year.[4][5][11]
Launchpad's eWallet product was combined with the existing PointCast technology to create EntryPoint, which had a free desktop toolbar and offered customized news, stocks and sports feeds.[12]
EntryPoint merged with Internet Financial Network in 2000 forming Infogate, continuing the same free service until switching to a fee-based co-branded model, partnering with news outlets such as USA Today and CNN. Infogate was sold to AOL Time Warner in March 2003. Infogate senior executives Cliff Boro, Vidar Vignisson, and Tom Broadhead formed CVT Ventures, LLC, a venture-development group dedicated to accelerating technology startups.[citation needed]
^ Aguilar, Rose (1996-02-13). "PointCast unveils free news service". News.com. Archived from the original on 2011-06-16.
^ Mark Mcadden (April 1997). "Singin' the Broadcast Bandwidth Blues". Digital Age (formerly DEC Professional: an independent magazine from Cardinal Business Media Inc. p. 40. When pushed too far, shove back
^ World Wide What? The Internet's 10 Worst Ideas - Fox News, May 17, 2010
^ a b Meyer, Katherine (2006-05-03). "The Best of the Worst". The Wall Street Journal.
^ a b Kawamoto, Dawn & Borland, John (1999-05-10). "PointCast acquired by Idealab". News.com.
^ a b c Himelstein, Linda & Siklos, Richard (1999-04-26). "PointCast: The Rise and Fall of an Internet Star". BusinessWeek. Archived from the original on 1999-11-10.
^ Smith, Tony (1998-12-03). "PointCast strategic investor becomes buyer". The Register.
^ Smith, Tony (1999-04-02). "PointCast sacks third of workforce". The Register.
^ Smith, Tony (1999-04-06). "PointCast ex-CEO looks to re-acquire company". The Register.
^ Smith, Tony (1999-04-21). "PointCast rejects founder's buy-back offer". The Register.
^ Lettice, John (1999-05-11). "PointCast bows out for a mere $7 million". The Register.
^ "Welcome To EntryPoint". Archived from the original on 1999-10-13. Retrieved 2008-01-11.
| 2024-11-08T00:09:43 | en | train |
42,050,069 | fanf2 | 2024-11-05T09:42:02 | Ratchets in Software Development (2021) | null | https://qntm.org/ratchet | 3 | 1 | [
42050230,
42050924
] | null | null | null | null | null | null | null | null | null | train |
42,050,075 | rbanffy | 2024-11-05T09:43:25 | Black hole feeds at 40 times the theoretical limit | null | https://arstechnica.com/science/2024/11/researchers-spot-black-hole-feeding-at-40x-its-theoretical-limit/ | 5 | 0 | [
42050520
] | null | null | null | null | null | null | null | null | null | train |
42,050,082 | null | 2024-11-05T09:45:43 | null | null | null | null | null | [
42050083
] | [
"true"
] | null | null | null | null | null | null | null | null | train |
42,050,087 | y1se3n | 2024-11-05T09:46:14 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,094 | youngstoney | 2024-11-05T09:47:24 | null | null | null | 1 | null | [
42050095
] | null | true | null | null | null | null | null | null | null | train |
42,050,123 | rbanffy | 2024-11-05T09:52:53 | Programmer Collaboration Styles – By Adam Ard | null | https://rethinkingsoftware.substack.com/p/programmer-collaboration-styles | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,124 | pregress | 2024-11-05T09:53:07 | Show HN: TFLint ruleset to enforce security best practices on Azure | TFLint ruleset to enforce security best practices on the AzureRM provider | https://github.com/pregress/tflint-ruleset-azurerm-security | 1 | 0 | [
42050518
] | null | null | null | null | null | null | null | null | null | train |
42,050,125 | marban | 2024-11-05T09:53:12 | Prime Video's new feature uses generative AI to recap what you're watching | null | https://www.aboutamazon.com/news/entertainment/amazon-prime-video-x-ray-recaps | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,126 | Ewukong | 2024-11-05T09:53:40 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,134 | ggleason | 2024-11-05T09:55:20 | LLM Classifier Bootstrapping | null | https://vectorlink.ai/blog/llm-classifier-bootstrapping/ | 1 | 1 | [
42050135
] | null | null | null | null | null | null | null | null | null | train |
42,050,153 | vanschelven | 2024-11-05T09:58:09 | You don't need Application Performance Monitoring | null | https://www.bugsink.com/blog/you-dont-need-application-performance-monitoring/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,161 | basiclines | 2024-11-05T09:59:37 | null | null | null | 1 | null | [
42050162
] | null | true | null | null | null | null | null | null | null | train |
42,050,168 | wslh | 2024-11-05T10:01:51 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,171 | ta8645 | 2024-11-05T10:02:18 | The crisis in physics is real: Science is failing | null | https://www.youtube.com/watch?v=HQVF0Yu7X24 | 2 | 0 | [
42050818
] | null | null | null | null | null | null | null | null | null | train |
42,050,174 | wslh | 2024-11-05T10:02:27 | Apple VisionOS 2.2 Beta Adds Wide and Ultrawide Modes to Mac Virtual Display | null | https://www.macrumors.com/2024/11/04/visionos-2-2-beta-ultrawide-mac-virtual-display/ | 2 | 0 | [
42050405
] | null | null | null | null | null | null | null | null | null | train |
42,050,190 | ySoul_xander | 2024-11-05T10:06:03 | null | null | null | 1 | null | [
42050191
] | null | true | null | null | null | null | null | null | null | train |
42,050,204 | lambertsimnel | 2024-11-05T10:07:51 | Chinese Air Fryers May Be Spying on Consumers, Which? Warns | null | https://www.infosecurity-magazine.com/news/chinese-air-fryers-spying/ | 6 | 3 | [
42051333,
42053399,
42050353,
42050482
] | null | null | null | null | null | null | null | null | null | train |
42,050,205 | algorr | 2024-11-05T10:08:02 | Show HN: The AI App I Made for My Wife Became Everyone's 'Digital Painkiller' | Sometimes love inspires innovation, and sometimes the best solutions come from real pain points. Here's my story.<p>It All Started with a "Pain"
My wife is a dedicated teacher who supervises student internships. Every week, she manually typed hundreds of handwritten addresses into Maps. Every. Single. Week.<p>One evening at our kitchen table, surrounded by papers, with tired eyes, she sighed, "There has to be a better way."<p>Right then, I realized I had a similar "pain" - as a developer, I was constantly lost among code screenshots, manually retyping each one... Two different pains, same root problem.<p>Not Just Our Pain, Everyone's Problem
As we shared our solution, we discovered everyone has similar "digital pains":<p>For Teachers:<p>The struggle of digitizing handwritten student assignments<p>The hassle of entering hundreds of addresses into maps<p>The challenge of preparing exam questions<p>For Developers:<p>The pain of retyping code from screenshots<p>Gallery chaos<p>Documentation creation headaches<p>For Students:<p>The struggle of digitizing lecture notes<p>The hassle of copying book pages<p>The challenge of creating study questions<p>For Business People:<p>Piles of business cards<p>Meeting notes chaos<p>Document archiving struggles<p>Introducing Our Digital Painkiller: Uscan!
Born from my wife's need, became everyone's solution. Our AI-powered app:<p>Take a photo - get instant digital text<p>Recognizes even the messiest handwriting<p>Open addresses directly in Maps<p>Create searchable PDFs<p>Summarize long texts<p>Generate study questions automatically<p>Edit extracted text easily<p>Copy any part or all of the text<p>Basic features work offline<p>Share files instantly<p>Create professional PDFs from regular photos<p>Early Relief Reports!
Started with My Wife, Growing with You
Our app is brand new, and we're discovering new use cases every day. What's your "digital pain"? What problem should we solve for you?<p>What Can You Do With It?
Convert handwritten notes to digital text<p>Extract addresses directly to Maps<p>Turn screenshots into editable text<p>Save handwritten documents as searchable PDFs<p>Generate summaries of long texts<p>Create study questions from any text<p>Convert photos into professional-looking PDFs<p>Edit any extracted text on the spot<p>Share your digitized documents instantly<p>Copy specific parts or the entire text with one tap<p>Let's Create More Solutions Together!
We're continuously developing the app. What pain point can we relieve in your life? Share in the comments!<p>Your Digital Painkiller is Ready:<p>App Store
<a href="https://apps.apple.com/tr/app/uscan-ai-text-capture-ocr/id6698874831" rel="nofollow">https://apps.apple.com/tr/app/uscan-ai-text-capture-ocr/id66...</a><p>Google Play
<a href="https://play.google.com/store/apps/details?id=com.appoint.co.uscan">https://play.google.com/store/apps/details?id=com.appoint.co...</a><p>Edit: Wow, looks like we weren't alone with these pains!<p>TL;DR: The AI app I developed to solve my wife's address-typing pain has become everyone's digital painkiller! It instantly converts any text to digital, summarizes it, creates questions, and lets you edit and share everything easily. What pain point can we solve for you? | https://apps.apple.com/tr/app/uscan-ai-text-capture-ocr/id6698874831 | 1 | 0 | null | null | null | missing_parsing | UScan AI: Text Capture & OCR | null | null | Scan and get text from photos, screenshots, or handwriting with AI-powered OCR in seconds. Edit, convert to PDF/TXT, summarize, generate questions, and share with ease.Turn Photos into Digital Text with AI-Powered OCR – Fast, Easy, and Accurate!Experience the power of cutting-edge AI-powered Optical Character Recognition (OCR) with our app, designed to convert any image or screenshot into digital text in seconds. Whether it's handwritten notes, printed documents, or text from an image, our AI ensures precise and reliable results every time. No need for manual typing—just snap a photo or choose one from your gallery, and let our AI handle the rest!Key Features of Our AI-Powered OCR App:1. AI-Powered Text Extraction – Even for HandwritingHarness the power of AI to accurately recognize and convert text from photos, including handwriting. Capture notes from meetings, handwritten drafts, printed pages, or any image-based text effortlessly. Our state-of-the-art OCR technology ensures fast and precise text extraction, even from complex or low-quality images.2. Edit Extracted Text EasilyOnce text is extracted, you can edit it directly within the app. The built-in editor allows you to refine the content, correct errors, and organize your notes without needing to switch between apps. Quickly make changes and export the edited content as you need.3. Convert Text to PDF or TXT InstantlyTurn your extracted and edited text into a professional-looking PDF or TXT file with just a few taps. Our app offers seamless conversion options, ensuring your documents are ready for sharing, archiving, or printing. Whether you need a clean text document or a formatted PDF, we’ve got you covered.4. Summarize Long Text with AI for Quick InsightsStruggling with long documents? Let our AI summarization feature quickly distill large amounts of information into key points. This feature is perfect for students, researchers, or busy professionals who need to understand the essence of a document in seconds. Get quick insights without spending hours reading!5. Generate Questions with AI for Learning and TestingTurn any text into a study tool by using AI-generated questions. Whether you're a student creating a quiz, an educator making flashcards, or simply testing your knowledge, our app can automatically generate questions based on the text you input. Ideal for creating quizzes, study guides, and learning materials.6. Create High-Quality PDFs from PhotosNeed more than just text extraction? Select one or more photos, and our app will create high-quality PDF documents for you. This is ideal for archiving important documents, sharing content, or creating polished materials from images. Our app ensures crisp, clear PDFs every time.7. Save and Share Files with EaseSave your extracted text, PDFs, or other generated content directly to your device, or share them via email, messaging apps, or social media platforms. With our app, you can share important documents or notes quickly and efficiently, all while keeping everything organized in one place.Perfect for Every Use CaseStudents:Quickly digitize handwritten notes, extract text from textbooks, and generate study questions with AI.Summarize long chapters or research papers to grasp key concepts in seconds.Professionals:Simplify your document management process by extracting text from reports or business cards and converting them to PDF or TXT files.Summarize lengthy reports and share concise versions with your team or clients.Researchers and Writers:Extract critical information from books, articles, or handwritten notes effortlessly.Use the AI summarization tool to quickly review large volumes of text and organize research efficiently.For Everyone:From saving a favorite recipe to copying a quote from a book or archiving important documents, our AI-powered app simplifies your life by making text extraction, editing, and sharing quick and easy.Terms of Use: https://www.apple.com/legal/internet-services/itunes/dev/stdeula/ | 2024-11-08T21:52:20 | null | train |
42,050,215 | zlate | 2024-11-05T10:09:58 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,223 | ryanmccullagh | 2024-11-05T10:10:36 | Miss Manners meets the IETF (2002) | null | https://www.ietf.org/proceedings/53/slides/plenary-3/index.htm | 3 | 1 | [
42050443,
42050510
] | null | null | null | null | null | null | null | null | null | train |
42,050,243 | rbanffy | 2024-11-05T10:14:59 | Rocket Lab confirms plan to bid for Pentagon contracts with new medium rocket | null | https://spacenews.com/rocket-lab-confirms-plan-to-bid-for-pentagon-launch-contracts-with-new-medium-rocket/ | 2 | 0 | [
42050821
] | null | null | null | null | null | null | null | null | null | train |
42,050,244 | hhs | 2024-11-05T10:15:00 | Boeing union ends strike after contract vote | null | https://www.axios.com/2024/11/05/boeing-unions-strike-ends-contract-vote | 1 | 0 | [
42050446
] | null | null | null | null | null | null | null | null | null | train |
42,050,245 | denisshilov | 2024-11-05T10:15:22 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,246 | puuush | 2024-11-05T10:15:35 | Booking Platform for Accommodations – v1 | null | https://github.com/domits1/Domits | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,316 | zirrai | 2024-11-05T10:29:13 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,339 | mgh2 | 2024-11-05T10:32:52 | Facebook, Nvidia ask US Supreme Court to spare them from securities fraud suits | null | https://www.reuters.com/legal/facebook-nvidia-ask-us-supreme-court-spare-them-securities-fraud-suits-2024-11-04/ | 2 | 0 | [
42050498
] | null | null | null | null | null | null | null | null | null | train |
42,050,341 | yangxiaobo | 2024-11-05T10:33:16 | Show HN: I Made a Oasis AI Minecraft Website, the First AI-Generated Minecraft | Hey HN,<p>Recently, I discovered the first AI-driven video interactive game, where all the visuals and interactive content are generated in real-time by AI based on the user's actions. To put it simply, it's like an AI version of Minecraft.<p>I tried it out, and it is generally playable. However, the graphics are a bit blurry, and sometimes there are minor lags. Overall, there are still some flaws at this stage.<p>But I feel this is a truly groundbreaking innovation because all the game content is generated by AI in real-time. Therefore, theoretically, each player's gaming experience is unique, and it’s possible to generate an infinitely large world.<p>Although the overall experience isn't yet on par with mainstream games, I think this is a fascinating attempt. In the near future, the gaming industry could be completely revolutionized. Science fiction becoming reality—this might just be the future of 'Minecraft.'<p>I’ve created a website where you can experience the game online and also find some related information about it. I hope this will be helpful to you.<p>would love your feedback pls.<p>Charles | https://oasisaiminecraft.com/ | 2 | 2 | [
42050384,
42050354
] | null | null | null | null | null | null | null | null | null | train |
42,050,349 | agluszak | 2024-11-05T10:34:41 | Response to Blog Post from Malibal | null | https://blogs.coreboot.org/blog/2024/10/29/response-to-blog-post-from-malibal/ | 5 | 0 | [
42050469
] | null | null | null | null | null | null | null | null | null | train |
42,050,352 | todsacerdoti | 2024-11-05T10:35:15 | ZX81 3D Monster Maze disassembly (2020) | null | http://www.fruitcake.plus.com/Sinclair/ZX81/Disassemblies/MonsterMaze.htm | 6 | 0 | [
42050465
] | null | null | null | null | null | null | null | null | null | train |
42,050,359 | masterhood13 | 2024-11-05T10:37:06 | Dota 2 Match Outcome Predictor – Part 2: Dataset Enhancement | null | https://medium.com/@masterhood13/building-a-dota-2-match-outcome-predictor-part-2-enhancing-the-dataset-and-adding-new-features-3522965de468 | 3 | 4 | [
42053644,
42051135,
42050360
] | null | null | null | null | null | null | null | null | null | train |
42,050,364 | c420 | 2024-11-05T10:38:20 | Meta's AI feasts on user data most | null | https://www.axios.com/2024/11/05/meta-ai-user-data-information | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,366 | medchedli | 2024-11-05T10:38:55 | Show HN: Bringing Open Source Language Models to WordPress | Learn how to integrate an open source LLM (Large Language Model) into your WordPress website using Hexabot! In this video, Mohamed Marrouchi from the Hexabot team will guide you step-by-step through the process of creating an AI-powered chatbot for your WordPress site. Hexabot is an open source conversational AI builder that makes it easy to create engaging and intelligent chatbots. | https://www.youtube.com/watch?v=hyJW6JGCga4 | 3 | 0 | [
42050488
] | null | null | null | null | null | null | null | null | null | train |
42,050,385 | boris_m | 2024-11-05T10:42:47 | The Role of Goals and Emotions in Knowledge | null | https://abuseofnotation.github.io/time/02/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,400 | antoinefairmuse | 2024-11-05T10:45:00 | null | null | null | 1 | null | [
42050401
] | null | true | null | null | null | null | null | null | null | train |
42,050,417 | Tammilore | 2024-11-05T10:48:01 | Show HN: Open-Source HTTP Interceptor – Capture, Modify, Run Requests in Browser | Hey HN,<p>I recently built an open-source HTTP interceptor called Relay, which works as a Chrome extension for capturing, modifying, and replaying HTTP requests directly in your browser — no account needed.<p>How it works: Relay lets you capture requests as they happen, modify parameters, headers, or body content, and replay them on the fly. You can customize or debug network requests quickly without needing an external tool or complex setup.<p>Key features:<p>- Simple setup: Install the extension and start a session to capture requests. You can filter by URLs and methods.<p>-Request modification: Make quick edits to any part of the requests for debugging or testing.<p>- Copy as cURL: Easily copy requests as cURL commands to use elsewhere if needed.<p>- Replay functionality: Re-run requests with modified data or headers and see the results in your browser.<p>- Local, no account needed: All interactions are handled locally in your browser, so you maintain privacy and control over your data.<p>I built Relay to make tasks like testing API integrations, troubleshooting network calls, and experimenting with client-side requests easier. Originally, I made it for myself because I wanted a faster way to look at and edit network requests without constantly switching between my browser and other tools.<p>After seeing how useful it was, I decided to make it open-source for anyone who would find it useful.<p>Here's the GitHub repo: <a href="https://git.new/relay" rel="nofollow">https://git.new/relay</a><p>Would love to hear your feedback and suggestions! | https://chromewebstore.google.com/detail/relay-â-intercept-modify/kilmhgoembjiamcmcbecekdonljjiolg | 5 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,434 | nikitay | 2024-11-05T10:50:29 | null | null | null | 1 | null | [
42050435
] | null | true | null | null | null | null | null | null | null | train |
42,050,438 | Priyasinhakt | 2024-11-05T10:51:38 | null | null | null | 1 | null | [
42050439
] | null | true | null | null | null | null | null | null | null | train |
42,050,442 | trueduke | 2024-11-05T10:52:50 | Coding for a Finite World | null | https://yoric.github.io/post/coding-for-a-finite-world/ | 2 | 0 | null | null | null | no_error | Coding for a Finite World | null | David Teller |
(This is meant to be the first entry of a series which will cover individual points more in depth. We’ll see how that goes.)
We’re the tech industry. We have ideas. We have ideas all the time. And we’re used to turn our ideas into applications.
So, how does it go… here’s the back-end component… here’s the front-end component. We’ll write the former in Python, or perhaps JavaScript, to optimize for prototyping. After all, we have so many ideas, we need the ability to iterate quickly. Sprinkle in a few dependencies, that will speed us up. Oh, and let’s use ChatGPT and Copilot, we’ll be even faster. Oh, and performance, yeah, performance: microservices, Kafka, Redis, Kubernetes… we’re now ready to scale up. Oh, Sentry, Prometheus and Grafana, too, where would we be without ’em? For the front-end, we’ll write a website, and for mobile, Electron.
Oh, wait a second, we need to make money and to fidelize our users! Let me see… ads, tracking, and good reasons to revisit our app, perhaps a little NFT here, gamification… alright, we should be good.
Three… two… one… and we have shipped v1!
Also, the world is burning.
Perhaps it’s time we revisited how we do things?
The Infinite World Model
For the past ~30 years, the software industry has progressively optimized itself for an infinite world. An ever-increasing number of customers. Ever-increasing CPU and GPU power. Ever-increasing bandwidth. Ever-increasing Cloud power. Unlimited energy. For open-source projects, unlimited untapped contributors. For everyone else, unlimited VC money. Also, governments that either do not care, do not want to harm economic growth, or do not understand.
That’s the Infinite World Model and, truly, we owe it to a combination of 19th century Colonialism (unlimited raw materials and labour) and the Industrial Revolution (unlimited raw power and progress).
In this Infinite World, the main imperative is to be able to seize the opportunity:
be first on the market;
look good;
be ready to scale when the clients come;
do whatever you can to keep your clients.
For most companies, these factors trump everything else. They trump hardware costs. They trump maintenance costs. They trump bandwidth cost. They trump battery usage. They trump security. They trump accessibility. They trump privacy. They trump democracy. They may very well trump local or international law. And they definitely trump both ethics and technical debt. While we’ve been complaining about the result, we have spent the last 30+ years teaching developers (and product managers, and CEOs) to ignore all these. After all, Worse is Better, right?
And once the VC money comes, or the users come, there will be time to fix things. But of course, you have probably experienced it: once money comes, there will be time to add new features. Fixing things can wait until it becomes an emergency, or forever.
If you pay attention to actualities, you may realize that this world might not be around much longer. For one thing, we’re in an Energy Crisis and a Climate Crisis that has a direct or indirect impact on both industry and consumers. A Logistics Crisis. A few Addiction Crises. A worrying Consolidation of Power among few unreliable Tech Giants. There’s a strong risk of crisis on multiple raw materials and components. There’s also that Trade War that isn’t in the news anymore but never actually stopped. When I was at Mozilla, we started seeing the end of these unlimited open-source contributors. CPU performance per core has largely stalled since 2010. Did I mention that we’re in a Financial Crisis and that VC money has rather dried up for startups that do not have “AI” in their pitch? Also, we have Democratic Crises but governments look like they are finally moving on tech, for better and for worse. Oh, and of course we might have stepped into World War III without quite realizing. That’s bound to disrupt things quite a bit, in fun and interesting ways.
We can bet that things will get better. And for all we know, it might. I, for one, am not confident.
So I’m going to assume that the model that worked for the 19th century aren’t necessarily fit for our time.
Towards a Finite World Model
How do we build for a Finite World?
Let’s perform a quick Risk analysis:
Our Environment can crumble
Expect that the price of energy will keep increasing, for both you, for your cloud provider and your users.
Expect that hardware will stop growing more powerful both for you, for your cloud provider and your users.
Expect that both your users or you may need to move because of a drought, or a flood, or a storm, or forest fires, or famine.
Expect that your data centers may need to move or shutdown for the same reasons.
Our Globalization can crumble
Expect that hardware will get more expensive and possibly hard to find, both for you, for your cloud provider and for your users.
Expect that your cloud or API provider may become your competitor or may align with local or foreign interests hostile to you, your community or that of your users.
Expect that your goods, physical or digital, can have difficulties reaching your consumer due to legal or financial barriers.
Expect that you will have less money to build your product or keep your company afloat.
Our Societies can crumble
Expect that your nation might not remain a democracy much longer.
Expect that your nation might not accept you, or some of your colleagues, or some of your users much longer.
Expect the same in the various nations of your clients.
Expect that your users or your colleagues can be addicted, either to a narcotic or to an application.
Expect that scientific research or education may not be funded anymore.
Expect that violence, possibly war, can hit your nation within its borders.
Expect that AI and further automation will take people’s jobs, forcing them to change careers more often, with longer periods of unemployment/retraining.
Expect that extorsion criminals, private spies and nation states will be interested in any data you retain.
Expect more pandemics, fewer antibiotics and, generally, worse health.
That’s… actually much bleaker than what I intended to write when I started working on this blog entry. Our Finite World sounds like a paranoid dystopia, doesn’t it? In fact, it looks like the entire Cyberpunk genre, minus flying cars. As usual when dealing with Risk, we’ll hope that none of this will happen but plan ahead in case it does.
Since we’re talking Risk, let’s give our Threat a name. Let’s call it the Zerg. We live in a Finite World. Energy, disk space, food, attention, democracy, public health, society or peace are all finite resources. And we have the Zerg. Anything that eats away at our Finite World without giving us something at least as valuable in return, getting us closer to exhausting, bombing or coughing our way back to the stone age, is a Zerg.
Our mission is now to fight the Zerg, as aggressively as we need.
We’re the tech industry. We might not be able to solve climate, or energy, or food, but we have power. Power to do inadvertent harm, certainly. But power to do some good, too.
Designing for a Finite World
I’m not going to pretend that everybody can fight the Zerg. In particular, if you live in an authoritarian state and if your only choice is to comply with the boss’ order or starve or be deported, your hands might be tied too hard to do anything from this side. Note that you might still have a choice as a consumer, which is better than having no power, and I suggest you consider the best way to use that choice.
From this point, I’m going to assume that you have influence beyond being a consumer. Perhaps you are a developer, or a researcher, a decision-maker, or an advocate, or an entrepreneur. Let’s explore a few ways to use that influence and design against the Zerg.
Designing for Finite Needs
Alright, let’s start with a hard one.
You have an idea. It’s going to make a great start-up. It’s going to make you rich.
Congratulations. Please consider dropping it. No, seriously.
We’re talking about a Finite World. I know, I’ve been in start-ups and the ecosystem is exhilarating, is makes us feel smarter, in control of our life, possibly in permanent burnout, but we’re learning and achieving so much, plus we have a chance to strike rich! But the start-up ecosystem is designed as a permanent Zerg Rush. We’ll have to give it up, eventually. It’s time to start saying goodbye.
This goes double if your idea is based on another Zerg, such as blockchains or generative AI or social networks or adware or viral marketing or AAA video games or anything that requires addiction to sustain a business. Yes, some of these are insanely cool things. But by design, they require exponential amounts of resources to evolve, which makes them Zergs.
This does not mean that you should abandon research projects or art projects or hobbies. But it does mean that you should strongly consider not scaling them up. Sorry, I meant not Zerging them up.
Designing for Finite Funds
There are a few, rare, ideas that do need to scale. They are going to do so much good that they are going to outweigh the Zerg factor. If you have one, cherish it, feed it, grow it, fight for it, make it real.
But whatever you do, do not take VC money. If you are familiar with VC investors, you know that the system is designed to burn your hands with money into consuming as many resources as it takes to go big or go bust. In other words, to turn you into a Zerg. Perhaps some day, VC will evolve into something new and that fits our Finite World, but we’re not there yet, and no amount of “green” in the name or description of your VC will be sufficient to change that.
Alternatives exist. Bootstrapping. Open-source. Working with universities or non-profits. Self-funding the effort as a side job. None of these alternatives is as well-oiled as the VC money pipeline. They take time and effort. They can fail. Still, they remain better than the alternative.
Designing for Finite Performance
Expensive energy and hardware means that we need to stop considering performance as “we’ll throw in more cloud resources” or “we’ll run on more recent user hardware” and return to considering performance as “we’ll need to use high-performance languages/libraries” and “we’ll spend more time benchmarking”. This is true both on the front-end, on the back-end and on the wire. And since you don’t have VC money to spend on cloud resources, you’ll need to do this quite early.
Audit your programming language. If you are writing in Python, PHP, Ruby or server-side JavaScript, you are optimizing for quick prototypes, at the expense of performance, which means that you will need to throw in more cloud resources to scale. Consider faster languages (e.g. Rust is typically 10x-30x faster than Python, but also Go, Java/Scala or C#/F#).
Audit your model. If you are writing in microservices, you are optimizing for throwing in more cloud resources, at the expense of CPU and network performance, as well as developer velocity. Consider monolithic services or distributed agents (e.g. Elixir can run millions of agents per node).
Audit your protocols. If you are using HTTP and/or Kafka, you are using for communication within your system a protocol designed to show users with documentation. Consider faster protocols (e.g. Zenoh typicaly runs 30x faster than Kafka and is much more memory efficient).
Audit your web front-end. If you are shipping tens of megabytes of JavaScript (even lazily), you are overconsuming both your bandwidth and the battery of your users.
Audit your mobile/desktop front-end. If you are shipping Electron, you are shipping 50Mb-150Mb to your users (also, to your CI) for features that are already present on their machines. Consider Tauri or Neutralino, which offer a similar experience, but only consume 600kb (Tauri) - 2Mb (Neutralino) of disk space, and way less RAM than Electron. Alternatively, if your code is CPU-intensive, consider shipping native applications.
Audit your features. Some of your features may consume considerable energy, either server-side or client-side. If your front-end is polling the back-end permanently for updates, you may consider moving to websockets, or throttling the polls. If your video game requires a recent GPU, you are encouraging users to buy a new card, or a new phone, or a new laptop, and quite possibly throw away the old one. Consider designing your game for lower-end architectures, even if it means sacrificing looks or adopting a retro style.
In other words, to fight the Zerg, reconsider common wisdom and benchmark aggressively every component of your architecture.
I am not going to suggest you give up on using the Cloud, because it is not clear to me that alternatives are better with respect to the Zerg. If you have insights on the topic, please get in touch.
Designing for Finite Data
Expensive disk space, fragile democracies, hostile communities, stricter laws and higher chances of being hacked mean that we need to stop considering storage as both free and without consequence.
Again, let’s start with the hard one:
Audit your data for legality. Your data might not be legal anymore.
Audit your data for risks. Assume that the worst political or criminal figure in your country gets hold of it, and that they have gained dictatorial power, or perhaps that the worst political figure of the worst country imaginable gets hold of it, and they’re going to use it to wage war.
Not just the data you’re storing, also the data you’re sending to third parties, including your cloud provider.
Let me stress this, because, in a Finite World, you might be endangering people’s lives with your data. If you cannot run your business without endangering people’s lives, please consider pivoting.
Now, we can proceed with the usual Zergs:
Audit your data for content savings. Consider what you can (or must, for legal compliance) safely erase.
Audit your storage for structural saving. For instance, if you are using a document-oriented database because it makes prototyping easier, consider moving to either a relational or a column-oriented database, which are typically moch more efficient with disk space.
In other words, as much as you can, benchmark your Zergs into harmlessness.
Similarly, I am not going to suggest leaving your data off the Cloud, as I do not have a clear alternative to suggest. But do not hesitate to investigate.
Design for finite Brainpower
Everything is expensive. Your API provider might be your competitor and might price you out any day now. Working and hiring across borders has gotten more complicated. Science and tech education might be under-funded and lacking. Achieving performance takes time. You might be struggling to hire developers. You will need to work with less.
This might be counter-intuitive, but it means that you need to take time and invest it into using technologies that work for you in the long run.
Invest in Open-Source. Most companies consume open-source but few are part of the ecosystem, contributing back and having each other’s back. By being a good actor and contributing back, you help grow the reliability of your own technology, you gain the ability to expand it in directions that are important to you and, just as importantly, you gain opportunities to hire and you grow the skills that can help you diversify and pivot.
Invest in Maintenance. Besides open-source, pick technologies that are optimized to let small teams perform refactorings and reconfigurations. In terms of programming languages, this is another reason to avoid PHP, Python or JavaScript, which are optimized for writing prototypes, and to consider TypeScript, Rust, Java/Scala or C#/F#.
Invest in Knowledge. You will need to work with colleagues who do not have training. This means that rather than assuming that your new colleagues can just get started as soon as you have handed out work, you will need a form of mentorship. My experience of mentorship suggests that it grows the skillset of both the mentor and the mentoree, so it is a good investment.
Invest in Teaching Organizations. Consider working alongside universities or other teaching organizations, including non-profits. Much as working with open-source, this will grow your team’s skills, help you recruit from possibly untapped pools and be good for publicity and the community.
Again, this might be counter-intuitive, but I would also suggest against using ChatGPT, Copilot or any other AI assistant. For one thing, each request to ChatGPT or Copilot has a steep energy cost. For another, experience suggests that junior developers (and other professions) use the results without understanding it, which in turn hurts both the Maintenance and Knowledge objectives above.
Fighting back the Zerg
So far, everything we’ve discussed was about slowing down the Zerg. And if we, as an industry, only succeed at massively slowing down the Zerg we’ve been mass-producing for 30 years, that will already be a victory to celebrate.
I want to believe that we can go further. I’m not going to pretend I know how. But maybe you do.
Audit your surroundings. Consider if there is anything you can do to help heal your community from intolerance and violence, from addiction, from despair, from health issues, from authoritarian tendencies. Your contribution does not need to be technological.
What now?
I hope that this post can inspire some in the tech industry to take arms against the Zerg. I know that I am going to use this and push for better Zerg fighting at work.
Undoubtedly, I have missed many ideas, many possibilities. Undoubtedly, I have been naive about many things, too. But we need to start somewhere.
I’m planning to revisit some points of this post and dive into more details. After all, I’m supposed to be an expert in safety, performance and open-source, it’s time to put that knowledge to good use, isn’t it?
| 2024-11-08T01:20:37 | en | train |
42,050,453 | markwilliam8860 | 2024-11-05T10:54:09 | null | null | null | 1 | null | [
42050454
] | null | true | null | null | null | null | null | null | null | train |
42,050,459 | linkoten | 2024-11-05T10:54:57 | Show HN: memo - a Rust key-value store for terminal | null | https://github.com/pbrochar/memo | 4 | 0 | [
42050460
] | null | null | null | null | null | null | null | null | null | train |
42,050,475 | Evictor | 2024-11-05T10:57:41 | Manage Kubernetes Clusters with PHP and Laravel | null | https://laraub.com/projects/459 | 1 | 0 | [
42050805,
42050476
] | null | null | null | null | null | null | null | null | null | train |
42,050,485 | todsacerdoti | 2024-11-05T11:00:15 | The rise of advanced build systems | null | https://www.scalevp.com/insights/the-rise-of-advanced-build-systems/ | 2 | 0 | null | null | null | no_error | The rise of advanced build systems - Scale Venture Partners | 2024-09-19T16:25:23+00:00 | Josh Cohen |
It’s 2024 and decades-old memes about building software still hold up. Despite advances in the DevOps stack – Docker for containerization, CircleCI for CI/CD, and Terraform for infrastructure as code – many engineering organizations still struggle to deliver fast, consistent, and secure application builds.
The build problem is getting harder due to an increase in software project complexity. Today’s software teams are embracing monorepos and are pulling in record numbers of third party dependencies. At the same time, the number of builds in CI is growing as teams embrace continuous push. In 2024 CircleCI saw a 97% increase in daily workflow volume for top performing teams. The increase in build complexity is slowing down teams.
A new generation of advanced build systems are making builds faster and more reliable. This will change the way companies ship software.
The build bottleneck is growing
A software “build process” is broadly defined as the series of steps for building and testing a piece of software from source code. Software builds are kicked off by engineers locally and by CI systems remotely. Because building software is a core part of the software development lifecycle, slow and flaky builds can be a particularly potent bottleneck.
Engineers know the symptoms of a bad process: fresh builds in the morning take an hour due to cache misses, long builds in CI block PR merges, and onboarding takes days due to environment inconsistencies. All of these issues slow down developers and drive up infra bills. In StackOverflow’s 2024 developer survey, developers ranked quality developer environments and build environments as two of the most important factors in their overall job satisfaction.
The “long build” problem is unfortunately common. We’ve talked with many organizations that experience fresh build times in the 2+ hour range. Large teams are particularly impacted – Graphite reports that the P75 total CI time for teams with over 50 engineers is a whopping 130 minutes:
Graphite reports that P75 CI runtime is 130 minutes for teams with >50 engineers
It’s also getting worse. In 2024, CircleCI workflow times on production branches grew by 11%. The obvious drivers of long build times – expanded CI investment, monorepo adoption, and increased third-party package adoption – show no signs of slowing down.
Flaky builds still continue to haunt teams. According to CircleCI, ~17% of builds on production branches fail. Even mature organizations report experiencing many unexpected breakages per week due to dependency issues.
Enter the advanced build system
While build tools have been around for over fifty years, the newest build systems deliver a major leap forward in power and capability.
Stuart Feldman introduced the staple Unix “make” utility back in 1979. Over the subsequent decades, build tools like CMake, Ant, and Maven incrementally improved the build process, helping engineers build cross-platform projects more efficiently. In the early 2010s, big tech companies took up the build systems torch.
Google, Meta, and X (formerly Twitter) all developed internal build systems while pioneering a monorepo approach to code organization where thousands of developers collaborate in multi-million line repositories. Google’s Bazel, Meta’s Buck, and X’s Pants all support speedy builds through effective caching and remote execution. Each of these build systems has now been open sourced, and a wave of startups have emerged to deliver complementary offerings.
Software teams plagued by build headaches are adopting these advanced build systems to improve developer velocity, reduce infrastructure spend, and improve build consistency. Some new build system offerings, like those from EngFlow, BuildBuddy, and Aspect, are building on top of existing open source projects. Other startups, like Nx, Dagger, and Earthly, offer ground up solutions based on similar principles. By providing solutions that increase build speed, reliability, and security, build system startups are answering the “long build” problem that plagues many software organizations.
A range of open source projects and software startups are helping address the “long flaky build” problem on software teams
Under the hood
Newer build systems are delivering 10x build time speedups and highly reproducible builds. These systems accomplish speedups by supporting features like the subdivision of builds into smaller targets, deterministic dependency management, and remote execution.
A main mechanism in newer build systems is the subdivision of large builds into smaller targets. With Bazel and Buck, developers define targets through strict BUILD files. During a build, the system scans the project for changes, re-building only the targets which have changed. Since most code changes only affect a handful of targets, this dramatically reduces the amount of time needed for a build.
Another feature of build systems is deterministic dependency management. Old systems lack precision: many do not have version pinning requirements and rely on package repositories that do not provide consistency guarantees. This causes inconsistent builds. New systems are more precise. Nix-based systems like Flox and Determinate Systems fully specify an applications’ dependency tree with details on the environment in which each package was built. They also cryptographically hash packages to guarantee incoming dependencies have not been altered since the most recent build.
The impact of these software features is amplified through remote execution. Newer build systems execute builds on remote machines, parallelizing the build of sub-targets and the installation of dependencies. They also cache the output of tasks to reduce overall compute required. Some, like Blacksmith, are running build tasks on high performance machines. Remote execution allows developers and CI systems to invoke builds consistently and quickly.
Many systems expose this advanced functionality through modern syntax, like Dagger’s programmatic build functions and Earthly’s Earthfiles, making them easy to use. They also offer cross language support. Different languages have different build processes and associated challenges – C++ and Rust are compiled whereas Python and Javascript are interpreted. But the common jobs of a build system – to subdivide build tasks, manage dependencies effectively, and execute builds quickly – hold constant across languages and project types.
Monorepos are complicating builds
One major driver of advanced build system usage is monorepo adoption. Monorepos were pioneered by big tech firms in the 2010s, and now, a growing number of teams are embracing them. While monorepos have many advantages – they make it easier to grep a codebase, synchronize cross-project changes, and standardize coding practices – they also have drawbacks. In particular, dependency management can be difficult. It can be challenging to unify package versions across a monorepo, and sheer package volume can lead to long build times. In the Javascript community, monorepo tools like Nx, Turborepo, and Rushstack, which help developers manage the complexity of monorepo builds, have been taking off. Turborepo was acquired by Vercel in 2021.
Chart showing adoption of JS-specific monorepo tools
Prioritizing consistency and security
For some companies, it is consistency, not speed, that drives the need for an advanced build system. In compliance-focused sectors like financial services and aerospace, software teams place a premium on reproducible builds and third party package auditability.
Nix, the open source ecosystem which includes NixOS and Nixpkgs, is gaining strong traction on this front. Nix’s purely functional package manager provides strong consistency guarantees for third party dependencies, and its NixOS linux distribution makes it easy to audit and manage OS configurations. Startups Flox and Determinate Systems, which build on top of Nix, are fueling Nix adoption with a suite of enterprise tools.
Other tools are also benefiting from security tailwinds. Bazel, for example, has positioned itself as an offering for compliance-oriented organizations and cites adoption by many fintech organizations. Bazel lead Tony Aiuto reports general traction with “organizations that worry a lot about compliance and recertifying what they are shipping.”
Scotty, do I need more power?
Not all teams need an advanced build system. Many offerings have a steep learning curve. Nix documentation is famously sparse, and Bazel can take months to integrate. Integrating an advanced build system to an existing codebase can require substantial refactors due to strict dependency management requirements. Small teams with modular codebases may decide the investment isn’t worth the cost.
For these teams, using free subtools can be a good alternative to a full-on build system. Many newer package managers like the pnpm JS package manager offer fast install times and easy onboarding. Tools from incumbent CI providers can also help teams speed up remote builds. CI features like dependency caching and parallelization are helping teams address the biggest bottlenecks in their build process quickly and cheaply.
That being said, the trends that necessitate advanced build systems – third party package adoption, monorepo growth, and CI/CD expansion – are here to stay. Even companies that don’t explicitly sell software, like RedBull, American Airlines, and Caterpillar, are starting to adopt these systems. As new build systems get easier to integrate and adopt, more and more companies stand to benefit from them.
The road ahead
Advanced build systems will make “it worked on my machine” a predicament of the past. By enforcing build hermeticity and offering advanced caching tools, the next generation of build tools will allow developers to quickly and confidently build projects from anywhere.
Usability and cost remain the biggest adoption hurdles. The winning solutions in this space will gain trust through high quality developer experience, cross-platform compatibility, and powerful integrations.
The build systems shift will change development workflows. CI volume will increase as build costs go down. Collaboration will increase as building new projects gets easier. The biggest effect of all, though? That the best justification for a coffee break – “my code’s compiling” – is on its last legs. Guess we’ll have to find a new excuse.
| 2024-11-08T05:19:55 | en | train |
42,050,490 | M2Ys4U | 2024-11-05T11:01:20 | French govt gives thumbs up to nationalising Atos | null | https://www.theregister.com/2024/11/05/french_government_atos/ | 1 | 0 | [
42050500
] | null | null | null | null | null | null | null | null | null | train |
42,050,491 | todsacerdoti | 2024-11-05T11:01:31 | Comin: GitOps for NixOS Machines | null | https://github.com/nlewo/comin | 2 | 0 | [
42050800
] | null | null | no_error | GitHub - nlewo/comin: GitOps For NixOS Machines | null | nlewo | comin - GitOps for NixOS Machines
comin is a NixOS deployment tool operating in pull mode. Running
on a machine, it periodically polls Git repositories and deploys the
NixOS configuration associated to the machine.
Features
❄️ Git push to deploy NixOS configurations
🚧 Support testing branches to try changes
🚀 Poll multiple Git remotes to avoid SPOF
📮 Support machines migrations
⏩ Fast iterations with local remotes
📡 Observable via Prometheus metrics
📌 Create and delete system profiles
Quick start
This is a basic flake.nix example:
{
inputs = {
nixpkgs.url = "github:nixOS/nixpkgs";
comin = {
url = "github:nlewo/comin";
inputs.nixpkgs.follows = "nixpkgs";
};
};
outputs = { self, nixpkgs, comin }: {
nixosConfigurations = {
myMachine = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
comin.nixosModules.comin
({...}: {
services.comin = {
enable = true;
remotes = [{
name = "origin";
url = "https://gitlab.com/your/infra.git";
branches.main.name = "main";
}];
};
})
];
};
};
};
}
This enables a systemd service, which periodically pulls the main
branch of the repository https://gitlab.com/your/infra.git and
deploys the NixOS configuration corresponding to the machine hostname
myMachine.
A new commit in the main branch of the repository
https://gitlab.com/your/infra.git is then deployed in the next 60
seconds.
Documentation
Howtos
Advanced Configuraion
Authentication
Comin module options
Design
| 2024-11-08T09:01:13 | en | train |
42,050,505 | rbanffy | 2024-11-05T11:04:10 | House Speaker walks back plan to repeal CHIPS Act | null | https://www.theregister.com/2024/11/04/chips_act_repeal/ | 3 | 0 | [
42050594
] | null | null | null | null | null | null | null | null | null | train |
42,050,509 | rbanffy | 2024-11-05T11:05:24 | Intel CEO complains after investing $30B but receiving zero CHIPS Act funding | null | https://www.tomshardware.com/tech-industry/intel-ceo-complains-this-is-taking-too-long-after-investing-usd30b-but-receiving-zero-chips-act-funding | 5 | 0 | [
42050597
] | null | null | null | null | null | null | null | null | null | train |
42,050,555 | nhfhsjs | 2024-11-05T11:12:54 | Dwarf-Based Stack Walking Using eBPF | null | https://www.polarsignals.com/blog/posts/2022/11/29/dwarf-based-stack-walking-using-ebpf | 1 | 0 | null | null | null | no_error | DWARF-based Stack Walking Using eBPF | November 29, 2022 | null | This feature was previously introduced in the announcement post.Sampling CPU profilers periodically fetch the stacks of the profiled processes that are running on the CPU at a given time. Walking the stacks of native processes, such as the ones written in C, C++, Rust, etc. can be a bit more complicated than one might expect. Most of the complexity is due to the lack of frame pointers, which is quite common.We have developed an improved stack walker that works even if frame pointers are omitted in the Parca continuous profiling project's Agent.The stack in x86_64The x86_64 architecture, besides describing its instruction set and several other important characteristics, also defines the rules on how data should be laid out in its Application Binary Interface or ABI for short. The specification shows how the stack should be set up for this architecture.When this code is executed, the different `call` instructions will push the return address to the stack. Once a function returns, the CPU will read the return address and jump to it, continuing where the callsite of the function left.With no additional information, it’s not possible to reliably produce a stacktrace. There could be other values, such as function local data, that are stored in the stack that might look like function addresses. This is what frame pointers aim to solve.Can I have a (frame) pointer?For the following pseudocode, assuming no compiler optimisations:int top(void) { for(;;) { }}int c1(void) { top();}int b1(void) { c1();}int a1(void) { b1();}int main(void) { a1();}To walk the stack with this method, we need to keep a pointer to the previous frame. In the x86 architecture, this typically would be in the frame pointer, `$rbp`. As functions may call other functions, this register has to be stored on function entry and restored on function exit.This is accomplished by the so-called function prologue, on function entry, which might look like thispush $rbp # saves the stack frame pointermov $rbp, $rsp # sets the current stack pointer to the frame pointerThe function epilogue on the function returnspop $rbp # restores the function's frame pointerret # pops the saved return address and jumps to itIf we compile and run the C code above with frame pointers, the stack would have all the necessary information to walk the stack. Calling the different functions effectively creates a linked list that we need to traverse.Disassembly of the code above compiled with frame pointers# compiled with `gcc sample.c -o sample_with_frame_pointers -fno-omit-frame-pointer`$ objdump -d ./sample_with_frame_pointers0000000000401106 <top>: 401106: 55 push %rbp 401107: 48 89 e5 mov %rsp,%rbp 40110a: eb fe jmp 40110a <top+0x4>000000000040110c <c1>: 40110c: 55 push %rbp 40110d: 48 89 e5 mov %rsp,%rbp 401110: e8 f1 ff ff ff call 401106 <top> 401115: 90 nop 401116: 5d pop %rbp 401117: c3 ret0000000000401118 <b1>: 401118: 55 push %rbp 401119: 48 89 e5 mov %rsp,%rbp 40111c: e8 eb ff ff ff call 40110c <c1> 401121: 90 nop 401122: 5d pop %rbp 401123: c3 ret0000000000401124 <a1>: 401124: 55 push %rbp 401125: 48 89 e5 mov %rsp,%rbp 401128: e8 eb ff ff ff call 401118 <b1> 40112d: 90 nop 40112e: 5d pop %rbp 40112f: c3 ret0000000000401130 <main>: 401130: 55 push %rbp 401131: 48 89 e5 mov %rsp,%rbp 401134: e8 eb ff ff ff call 401124 <a1> 401139: b8 00 00 00 00 mov $0x0,%eax 40113e: 5d pop %rbp 40113f: c3 ret```The contents of the native stack in the example code above are compiled with frame pointers when the top function is running.To walk the stack, we would have to follow the generated linked list above, reading the values pushed before each saved `$rbp`, which will make our stack frames, until `$rbp` is zero, which indicates that we've reached the end of the stack.This is nice because it allows us to figure out the stack trace cheaply. It’s also relatively easy for compiler implementers to add, and in general, requires a reasonably small amount of surrounding infrastructure to make it work.Despite all the advantages, a lot of the code that we rely on is not compiled with frame pointers. Many of us rely on our Linux distribution applications and libraries, and the overwhelming majority of them choose to omit frame pointers. Even if you compile your code with frame pointers, dynamically or statically linking any library provided by your distribution might prevent you from being able to correctly unwind the stack using frame pointers alone.We won’t dive into the reasons why frame pointers are disabled in some environments and the nuances around them, but we believe that benchmarking their overhead has to be done on an application-by-application basis. The often-overlooked costs that come with disabling frame pointers should also be considered.The disassemble of this executable compiled without frame pointers# compiled with `gcc sample.c -o sample_without_frame_pointers -fomit-frame-pointer`$ objdump -d ./sample_without_frame_pointers[...]0000000000401106 <top>: 401106: eb fe jmp 401106 <top>0000000000401108 <c1>: 401108: e8 f9 ff ff ff call 401106 <top> 40110d: 90 nop 40110e: c3 ret000000000040110f <b1>: 40110f: e8 f4 ff ff ff call 401108 <c1> 401114: 90 nop 401115: c3 ret0000000000401116 <a1>: 401116: e8 f4 ff ff ff call 40110f <b1> 40111b: 90 nop 40111c: c3 ret000000000040111d <main>: 40111d: e8 f4 ff ff ff call 401116 <a1> 401122: b8 00 00 00 00 mov $0x0,%eax 401127: c3 ret[...]Diff between the two disassemblestop:- push %rbp- mov %rsp,%rbp jmp 40110a <top+0x4>c1:- push %rbp- mov %rsp,%rbp call 401106 <top> nop- pop %rbp retb1:- push %rbp- mov %rsp,%rbp call 40110c <c1> nop- pop %rbp reta1:- push %rbp- mov %rsp,%rbp call 401118 <b1> nop- pop %rbp retmain:- push %rbp- mov %rsp,%rbp call 401124 <a1> mov $0x0,%eax- pop %rbp retIf when we are profiling we are somewhere in the execution of c1, the stack might look like this:The contents of the native stack from the code above when top is running.We need some other information or hardware support to be able to reliably unwind the stack.Hardware approachesThere are some hardware facilities that we could use for stack unwinding, such as Intel’s Last Branch Record (LBR). LBR produces pairs of origin and destination addresses, `FROM_IP` and `TO_IP`, that we can use to build stack traces. One drawback they have is that the depth of the records they can produce is limited. Depending on the processor this could be around 32 last taken branches.While LBR is versatile and powerful, we decided to not use it for CPU profiling as this feature is not available in every virtualized environment and it’s Intel-specific. These drawbacks extend to other interesting vendor-specific processor features, such as Intel Processor Trace (PT).An exceptional encounterSome of you might be thinking, how is it possible that I can compile, let’s say, C++ applications without frame pointers, and exceptions still work just fine? What about Rust, where frame pointers are disabled by default but invoking `panic()`s shows a full and correct stack trace?For C++ exceptions to work no matter how the binaries are compiled, as well as to add some other necessary facilities to make them function, compilers can emit some metadata that indicated how to unwind the stack. This information provides a mapping of program counters to the instructions on how to restore all the registers.All this is described in two documents, the DWARF debugging information format and the x86_64 ABI.DWARF’s Call Frame Information (CFI)The main goal of the Call Frame Information is to provide answers on how to restore every register for the previous frame at any part of our code execution. Directly storing a table that contained each program counter and all the registers and their location, such as whether they’ve been pushed to the stack or not, would generate humongous unwind tables.For this reason, this format attempts to be compact and only contain the information that is needed. It uses various techniques to this effect such as:Variable length compression of numbers with LEB128.Data compression with a state machine. This is important as it allows for a very succinct representation of the data at the expense of increased complexity.The unwind tables are encoded in the CFI format in the form of opcodes that we need to evaluate. There are two main "layers" to it. The first one is a state machine encoded in a VM. This helps with repetitive patterns that compress well and allows for a more compact representation of some data, as in some cases there’s a specialized opcode that consumes 1, 2, or 4 bytes, rather than using 4 bytes all the time. Registers that aren’t pushed into the stack might not appear in the table.What I call the second level, is a special opcode that contains another set of opcodes, containing arbitrary expressions, that we need to evaluate. The main difference between these two levels is that while for the first level, we just need a stack to remember and restore registers (`DW_CFA_remember_state` and `DW_CFA_restore_state`, respectively), for the second level we need to evaluate arbitrary Turing complete expressions. For this reason, we need a full-blown VM to evaluate any expression.Implementing a VM in BPF is not very practical, so we decided to take a pragmatic approach and start by hardcoding the 2 expressions that happen more than 50% of the time in most binaries we've evaluated. We have some ideas on how to further improve expression support, but this blog post is getting way too long already :).Walking the stack using DWARF’s CFITo use this approach to walk the stack for a given Program Counter (PC), we need to find its corresponding unwind information. But what do we mean by the unwind information, exactly?We need to restore:The saved return address.The values for the stack pointer ($rsp) and frame pointer ($rbp) registers in the previous frame are used to restore the previous frame’s stack pointer.The value of the stack pointer for the previous frame, just before our current function got called, in DWARF’s CFI terms, is called the Canonical Frame Address or CFA. As we saw before, in x86_64, the saved return address is always 8 bytes (a word) ahead of the CFA.The unwinding algorithm looks something like this:Read the initial registersThe instruction pointer `$rip`. Needed to find the row in the unwind table.The stack pointer `$rsp`, and the frame pointer `$rbp`, which are needed to calculate the the previous frame's stack pointer value (CFA). We can find the return address and other registers pushed on the stack at an offset from CFA.While `unwind_frame_count <= MAX_STACK_DEPTH`:Find the unwind table row for the PC for which i satisfies that `$unwind_table[i].PC <= $target_PC <= $unwind_table[i+1].PC`.If there's no entry for it and `$rbp` is zero, we have reached the bottom of the stack.Add instruction pointer to the stack.Calculate the previous frame's stack pointer. This can be based on the current frame's `$rsp` or `$rbp`, if it's not an expression or register directly.Updates the registers with the calculated values for the previous frame.Continue with the next frame. Go to 2.Note: for simplicity, we are omitting some important details that our unwinder implements.To do this, we need to read the unwind opcodes, evaluate them, and generate the tables. This process can be quite expensive, but it’s how it’s done by the exception handling infrastructure in C++, among others. Because exceptions are supposed to be, erm, exceptional, these code paths should not be exercised often and the overhead won’t be too high.This is also the case for debuggers, such as GDB, where users might want to know the stack trace here and there to understand where they are in the execution. These use cases are sometimes categorised under offline unwinding.Profilers are a bit different in that they usually sample the stack dozens or hundreds of times a second. The overhead of having to read, parse, and evaluate the unwind information can be quite high. While stack unwinders might do some caching, the whole process is still quite expensive.A key observation in our case is that we don’t need to restore every register, we only need these 2 and the saved return address. This insight allows us to produce a representation that works better for our online unwinding use case.Possible implementationsThe profiler we’ve developed isn’t by far the first one to use this technique. Perf, the venerable Linux profiler has supported DWARF-based stack unwinding for a while. By leveraging `PERF_SAMPLE_REGS_USER` and `PERF_SAMPLE_STACK_USER` introduced in the perf_event_open system call in Linux 3.4, it can receive the registers for the profiled processes as well as a copy of the stack for every sample.While this approach has been proven to work, and we evaluated implementing our profiler in a similar fashion, it has a few drawbacks we wanted to avoid:Performance: the kernel copies the user stack for every sample. It copies the user stack that’s currently in use, but this can be quite a bit of data. Assuming a very conservative `1K per stack * 100 samples/second * 30% of the time running on CPU * 10 CPUs` = 300KB per second.Security: the implications of having another process having the values of another process’s stack can be complicated. What if some private key or any sort of Personally Identifiable Information (PII) is present there?While 300KB/s doesn’t seem like a lot of data, we believe that this number can be significantly higher for busy machines running CPU-intensive applications. We hope that by reducing the impact of the measurements while the profiler is running, fewer resources will be dedicated to the profiler that otherwise applications could use.Another idea that popped into our heads was to perhaps copy the stack in a BPF program, but this would still have the disadvantages we wanted to avoid, and we would have to reimplement the functionality that the kernel already has and that it’s proved to work very well!This brings us to the approach we eventually took, still leveraging BPF!Why BPF?We are big believers in BPF. There are many reasons for this. Broadly, it allows for the Linux kernel to be programmable with higher safety guarantees and a lower barrier of entry.Developing profilers in BPF makes a lot of sense as once the stack walking mechanism is implemented, we can leverage the perf subsystem to get samples on CPU cycles, instructions, L3 cache misses, or any other performance counter that’s available in our machine. It also helps develop other tools, such as allocation tracers, off-CPU profilers, and many many others.You might be wondering, why all this talking about stack unwinding in BPF? With the `bpf_get_stackid(ctx, &map, BPF_F_USER_STACK)` helper we can fetch user stacks! Turns out, this helper walks the stack using frame pointers, and a fully featured DWARF unwinder is unlikely to ever land in the kernel.A BPF-friendly representation of the unwind tableMost offline stack unwinders don’t process most of the DWARF CFI information as they target very few program counters. Profilers, on the other hand, might yield a higher cardinality of program counters. For this reason, and the fact that we only need a subset of the data to walk the stack, as we don’t require to know how to restore every single register, as well as to produce some representation that minimises the work that the BPF unwinder has to do, we decided to take on the unwind table generation cost upfront.In userspace, we first parse, evaluate, and generate unwind tables. So far, we only support information stored in the `.eh_frame` ELF section. The generated table is a couple of arrays built up from this row type:typedef struct { u64 pc; u16 _reserved_do_not_use; u8 cfa_type; u8 rbp_type; s16 cfa_offset; s16 rbp_offset;} stack_unwind_row_t;We use a whole word for the program counter and then have a couple of fields that help us calculate CFA. For example, it can be at an offset from either the current `$rsp` or `$rbp`. We also find out how to restore $rbp.Using the algorithm described above, we walk through the frames, by restoring the previous frame's registers. The table is sorted by program counter, to be able to binary search over the table in the BPF program.We are done walking the stack iff we can’t find unwind information for a given PC and the current `$rbp` is zero. We then aggregate the stacks in a BPF map, in kernel space, to maximise efficiency, and we collect this data twice a minute from userspace, where we generate the profiles and send them to a Parca Server compatible server.The developmentEarly on when we started this project we realised that there were many variables that could affect its success. To the best of our knowledge, there’s no other feature complete, open source, dwarf-based BPF unwinder, so we weren’t sure of how viable it would be. Hence, to maximise our chances of success, we tried to significantly reduce the problem space, while still giving us as much signal as possible.At Polar Signals, we create a Request For Comments (RFC) for every large feature or topic we want to discuss or get feedback on. For this work, we started with a document laying out what we wanted to achieve for the first iteration, including goals and even more importantly, the non-goals.After some weeks of work, we landed the first version, which focused on correctness. We continued with follow-ups to loosen the minimum kernel requirements (kernel ~4.10), as well as to make the unwind table rows more compact.Building this in BPF was an interesting challenge. The kernel has to ensure that any loaded program can’t make it crash. It statically analyses code using the verifier, which will either reject or accept a program. Some of the current rules as of the writing of this post are, no dynamic allocations, termination has to be provable, and many others, so we had to get creative to get the verifier to accept our program. This write-up is getting way too long so this would be a story for another time :)TestingFor the unwinder to work, both the unwind table and the unwind algorithm implemented in BPF have to work well. Ensuring that the tables were correct was paramount in the development of this project.In this case, we decided early on to use snapshot testing in a very simple form. We have some test binaries as well as the expected unwind table in a separate git repository. As part of our testing suite in the Agent, we regenerate the tables and ensure that there aren’t any changes.This technique allowed us to quickly iterate on the DWARF unwind information parser, helping us find a myriad of bugs, and saving us a lot of time we would have spent otherwise trying to understand why we failed at walking the stack.Future workThere are lots of features and fixes we are working on, and we are excited to be sharing them with you very soon!We’ve only released the first version that includes dwarf-based stack unwinding a few days ago. But we already have some more changes to ensure that the profiler runs well in memory-constrained machines, improved architecture, enabling better support for JIT’ed code, among others.Near term, we are shifting our focus towards reliability, performance, and wider support. The parsing, evaluation, and handling of DWARF’s unwind information is not optimised yet. We also want to ensure that we have detailed performance metrics for our profiler. Finally, we want to do more exhaustive testing for tables produced by Clang and the Rust compiler toolchains.The ultimate goal of this project is to enable this profiler by default to all of our users, without incurring a significantly higher resource usage.Give it a try!̶A̶s̶ ̶m̶e̶n̶t̶i̶o̶n̶e̶d̶ ̶a̶b̶o̶v̶e̶,̶ ̶t̶h̶i̶s̶ ̶n̶e̶w̶ ̶f̶e̶a̶t̶u̶r̶e̶ ̶i̶s̶ ̶b̶e̶h̶i̶n̶d̶ ̶a̶ ̶f̶e̶a̶t̶u̶r̶e̶ ̶f̶l̶a̶g̶,̶ ̶b̶u̶t̶ ̶w̶e̶ ̶a̶r̶e̶ ̶g̶o̶i̶n̶g̶ ̶t̶o̶ ̶e̶n̶a̶b̶l̶e̶ ̶i̶t̶ ̶b̶y̶ ̶d̶e̶f̶a̶u̶l̶t̶ ̶i̶n̶ ̶t̶h̶e̶ ̶n̶e̶x̶t̶ ̶v̶e̶r̶s̶i̶o̶n̶ ̶o̶n̶c̶e̶ ̶w̶e̶ ̶l̶a̶n̶d̶ ̶s̶o̶m̶e̶ ̶i̶m̶p̶r̶o̶v̶e̶m̶e̶n̶t̶s̶ ̶w̶e̶ ̶a̶r̶e̶ ̶w̶o̶r̶k̶i̶n̶g̶ ̶o̶n̶.̶This feature is now enabled by default and has been running in production for a long time as of January 2024. You can download the latest release here.Working with the communityWe believe that the pervasive lack of frame pointers is a big issue for application developers, as well as developers of profilers, debuggers, and compilers.Fortunately, this problem space is being actively worked on by many members of the wider engineering community, such as this proposal to enable frame pointers by default in Fedora, or the .ctf_frame work, an alternative format to dwarf unwinding that’s specifically tailored to the online, asynchronous (meaning that can unwind any program counter, not just from specific parts) use-case that profilers and other tools need.Open source and collaborating with other communities are a big part of our company ethos. That's why we started speaking about this project early on, starting with a Linux Plumbers talk last September, where we announced this work.Our unwinder is licensed under the GPL license. It's open for inspection and contributions, and we would love to work with other projects facing similar issues. Don’t hesitate to reach out! Let us know if there’s any feedback or features that you would like to see implemented, either in our Discord or in the GitHub discussion.AcknowledgmentsThis work wouldn’t have been possible without the work of many individuals. There’s a lot of infrastructure that had to be in place for this project to be possible at all.The Delve debugger for Go, which our DWARF unwind information parser is based on.Ian Lance Taylor’s blog series on .eh_frame.MaskRay’s blog is packed with interesting compilers and linkers content.The engineers involved in both the DWARF standards and different ABIs. Coming up with something so flexible is not easy. The open and detailed specs are a great resource.Compiler engineers are the unsung heroes of this work. Creating unwind tables isn’t easy. Maintaining them in sync across compiler passes is a herculean task.Without BPF and its surrounding ecosystem, we wouldn’t have a safe way to create programmable system-wide profilers. While sometimes the verifier can be tough, it’s our best ally.The Reliable and fast DWARF-based stack unwinding paper provides a superb description of all the systems we described in this post, as well as trying some different non-BPF approaches to speed up dwarf-based unwinders, and some of the correctness testing they carried that found several bugs. We owe them not just increasing the quality of unwinding tables that many systems, including ours, depend on, but also helping raise awareness of all these systems and how critical they are.Notes and ramblingsWhile the term "stack walking" is more correct in the context of profilers, and "stack unwinding" is typically used when the runtime is handling exceptions, we use these two terms interchangeably.Our table format uses the smallest datatypes we could use, which set some limits on the minimum and maximum value for offsets, among others. You can check them out in this design document, which also includes some of the requirements for this unwinder. We are already working on removing some of the mentioned limitations!A very interesting idea from the "Reliable and fast DWARF-based stack unwinding" paper is to synthesise unwind tables from object code when unwind tables are not present or complete for some reason. This is something we might entertain in the future.To add some insights into the complexity implications, just in terms of code size, the previous frame pointer-based unwinder could be re-implemented in BPF in less than 50 lines, while this dwarf-based one is more than 500 lines. This excludes all the necessary supporting code in userspace and tests.Last but not least, don't take this work as an apology for frame pointer removal! If we could change something technical and low-level in the computing industry, it would probably be enabling frame pointers by default. This is something that hyperscalers, such as Facebook and Google already do, despite the potential extra compute costs, as they save them headaches and time when every minute of troubleshooting an incident is costing lots of money. That being said, we understand that even if everybody would agree to enable frame pointers, it would take years until all of our users would reap the benefits.Side note: the C++ exception machinery is quite complex and has to do quite a bit of work as described in this write-up. Some interesting things to think about: what would be the cost when the unwind tables are in memory vs when they are not? Could this be a problem with your application? How are these paths exercised?Further ReadingIn other words, articles we wished we had at the beginning of our journey.https://www.corsix.org/content/elf-eh-framehttps://lesenechal.fr/en/linux/unwinding-the-stack-the-hard-way | 2024-11-08T02:04:55 | en | train |
42,050,561 | Tomte | 2024-11-05T11:13:15 | USS Haven | null | http://www.usshaven.org/ | 1 | 0 | [
42050592
] | null | null | missing_parsing | USS Haven Homepage | null | null |
Check out the ship store to get your official USS
Haven command patch or US
Navy Haven
ball cap!
| 2024-11-08T21:16:38 | null | train |
42,050,563 | Dowwie | 2024-11-05T11:13:23 | A Scalable Communication Protocol for Networks of Large Language Models | null | https://agoraprotocol.org/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,564 | baxtr | 2024-11-05T11:13:29 | EU Inc: A Vision for a Unified European Corporate Structure | null | https://www.orrick.com/en/tech-studio/insights/EU-Inc-A-Vision-for-a-Unified-European-Corporate-Structure | 4 | 2 | [
42050827,
42051088,
42050796
] | null | null | null | null | null | null | null | null | null | train |
42,050,570 | cnctvfrc | 2024-11-05T11:14:42 | Gitlab makes more money than you think | null | https://www.constellationr.com/blog-news/insights/gitlab-sees-traction-it-competes-against-github-atlassian | 33 | 4 | [
42051149,
42051091,
42057119,
42050789
] | null | null | null | null | null | null | null | null | null | train |
42,050,573 | fanf2 | 2024-11-05T11:14:55 | HashML-DSA Considered Harmful | null | https://keymaterial.net/2024/11/05/hashml-dsa-considered-harmful/ | 2 | 0 | null | null | null | no_error | HashML-DSA considered harmful | 2024-11-05T10:33:01+00:00 | null |
I mentioned ranted about this topic as a section of a previous blog post (at the very end), but the topic keeps coming up, so I am escalating to a full blog post, since obviously that will help with all these people who are wrong on the internet standardization.
The Problem
Prehashing is a paradigm often used within the context of digital signature schemes. To understand where the problem is, let’s start with the normal definition of a signature scheme, as used by cryptographers, as a tuple of three functions (G, S, V), with the following mappings:
(Key Generation)
(Signing algorithm)
(Verification algorithm)
In order to be secure, you’ll need some more stuff, like saying that a signature produced by S will verify when plugged into V with the same data, but that stuff is boring (not really boring) and Dan Boneh has already written it up in his book, so I’ll skip the details here.
As you can see, in the world of mathematics, where everything is perfect and wonderful, there are no hashes anywhere, so to understand what prehashing is about, we will unfortunately go a layer deeper, and pretend to implement these functions in made-up pseudo-code which happens to vaguely resemble C++.
typedef std::vector<uint8_t> Message;
typedef std::vector<uint8_t> Signature;
std::tuple<PrivateKey, PublicKey> generate_key();
Signature sign(PrivateKey sk, Message m);
bool verify(PublicKey pk, Message m, Signature s);
While we now specify that messages and signatures are byte arrays, this API is still very much the same as the mathematical description of a signature algorithm, and no hash function is anywhere to be seen. But there is a subtle problem with this API, or at least something that can be a subtle problem: While PrivateKey, PublicKey, and Signature are all types that, for any given algorithm, are constant in the size they take in memory, (or at least bounded, looking at Falcon), the message can be literally anything, from a few bytes to theoretically petabytes worth of data. This is especially a problem if you want to hold the private key in a separate device, such as an HSM or a Cloud service, and cannot easily transmit or process large quantities of data.
We mainly care about signing here, since private keys are the ones that need protecting, so a first attempt could be to introduce a streamed version of sign():
typedef std::vector<uint8_t> Message;
typedef std::vector<uint8_t> MessageChunk;
typedef std::vector<uint8_t> Signature;
std::tuple<PrivateKey, PublicKey> generate_key();
State sign_init(PrivateKey sk);
void sign_update(State* st, MessageChunk m);
Signature sign_finalize(State* st);
bool verify(PublicKey pk, Message m, Signature s);
There is still no hash to be found, and the state should probably be some object oriented thing, but we can now solve the problem: To sign a large piece of data, we call sign_init() to create a new stream, call sign_update as often as we need to, updating our state, and call finalize to get the signature. While this is no longer the exact same set of methods as the mathematical definition had, we can see the init/update/finalize pattern as a way to implement a streaming interface.
And there still is no hash. And while this interface is nice and all, it means that our HSM and/or Cloud service now need to manage state, which while possible, is still a bit of a problem. But we can make one more, somewhat more radical change to our API, by delaying at which point the private key gets involved:
typedef std::vector<uint8_t> MessageChunk;
typedef std::vector<uint8_t> Signature;
std::tuple<PrivateKey, PublicKey> generate_key();
State sign_init();
void sign_update(State* st, MessageChunk m);
Signature sign_finalize(State* st, PrivateKey sk);
bool verify(PublicKey pk, Message m, Signature s);
Assuming our State object is again bounded in size, we now could do sign_init() and sign_update() directly where the data lives, and only have sign_finalize() happen on the HSM/Cloud Service, with the State object itself being transmitted. Note that we still need to trust the caller of sign_init() and sign_update(), since the remote side would no longer be able to tell what it is signing, but we only have to trust that caller to correctly compute the State, and do not need to expose the private key there.
And if we squint a bit, we can now see the hash: With the addition of just one tiny function, we can make it more explicit:
typedef std::vector<uint8_t> MessageChunk;
typedef std::vector<uint8_t> Hash;
typedef std::vector<uint8_t> Signature;
std::tuple<PrivateKey, PublicKey> generate_key();
State sign_init();
void sign_update(State* st, MessageChunk m);
Hash sign_finalize_local(State* st);
Signature sign_finalize_remote(Hash message_identifier, PrivateKey sk);
bool verify(PublicKey pk, Message m, Signature s);
Now we can nicely see that signing can be written as the composition of a hash function call, followed by the actual signature algorithm.
So as long as we just keep our interface like this, we can use HSMs and Cloud services and all that good remote oracle stuff, without running into pesky data transmission problems sending over half a Petabyte.
The Problem with the Problem
This, however, does not readily work with every signature scheme. As stated, the signature scheme must decompose into a hash function call and a call to sign the output of that hash function, and not all signature schemes do. In fact, of the two signature schemes NIST just standardized, exactly zero decompose like that, in part because NIST actively encouraged having a property called “non-resignability”, which guarantees that the message identifier by itself, at least for messages with enough entropy to not be brute-forcable, is not enough to create a signature that verifies under a different, possibly attacker controlled key.
But people need the ability to use HSMs and remote oracles, and pushed for NIST to add it to the schemes, so NIST decided to instead of standardizing two schemes, they actually standardized four: ML-DSA, SLH-DSA, HashML-DSA, and HashSLH-DSA, with the latter two being absent from the draft standard. And those four standards are mutually exclusive, have different properties, and are not very well described as separate algorithms. I will go more into HashML-DSA (and HashSLH-DSA) in the last section of this post (which, just as a teaser, is called “The Bad Place”), but, with the hopefully not as futile as it feels wish to prevent us going there, present the in my opinion far better alternatives first.
The Solution (Part 1)
When introducing the whole problem, I somewhat intentionally named only one variable, the message_identifier, and presented the steps leading up to the requirement of decomposing the signature scheme into a hash function and the signing of the message identifier.
The main trick to get to this decomposition was moving the private key from init to finalize. Obviously, any one roundtrip protocol will need the private key to be moved to finalize, since the signature cannot be computed before all message data is accumulated, forcing finalize to be the remote function call that uses the secret key as a side input.
But do we have to make sign_init() to be a method that is completely independent of the rest of the signature scheme? After all, we do have a very conveniently public parameter that we could feed the signing procedure from the very start:
typedef std::vector<uint8_t> MessageChunk;
typedef std::vector<uint8_t> Hash;
typedef std::vector<uint8_t> Signature;
std::tuple<PrivateKey, PublicKey> generate_key();
State sign_init(PublicKey pk);
void sign_update(State* st, MessageChunk m);
Hash sign_finalize_local(State* st);
Signature sign_finalize_remote(Hash messsage_identifier, PrivateKey sk);
bool verify(PublicKey pk, Message m, Signature s);
In other words, the message identifier is no longer independent of the key, but only depends on the public key, and not the private key, so the private key can stay remote, but the resulting message identifier is still unique to our signature. A very convenient property, if say, you wanted to not be able to create a signature for a different public key using only the message identifier for messages with sufficient entropy.
And because it turns out to be such a convenient way to achieve this property, it turns out that ML-DSA, when it was still a little contestant going by Dilithium and trying to impress NIST, did exactly this.
And NIST thought that this was a good idea as well, so they left the following comment in Algorithm 7, line 6 of ML-DSA:
message representative that may optionally be computed in a different cryptographic module
And if you look at the rest of ML-DSA, things indeed only depend on this message identifier µ from there on out, and also notice that tr is a hash of the public key, meaning that it is possible to reorder the program in such a way that µ is computed without any knowledge of the private key at all, and can be passed to the HSM/remote oracle (aka the “cryptographic module”, in NIST speak) while being computed in a different “cryptographic module” such as a software library like BoringSSL.
In other words, and to be very explicit, here is how you could implement the above API for ML-DSA:
typedef std::vector<uint8_t> MessageChunk;
typedef std::vector<uint8_t> Hash;
typedef std::vector<uint8_t> Signature;
std::tuple<PrivateKey, PublicKey> generate_key();
State sign_init(PublicKey pk) {
State st = SHAKE256_init();
SHAKE256_absorb(&st, SHAKE256_oneshot(pk.serialize(), 64));
SHAKE256_absorb(&st, "\0\0");
return st;
}
void sign_update(State* st, MessageChunk m) {
SHAKE256_absorb(&st, m);
}
Hash sign_finalize_local(State* st) {
return SHAKE256_squeeze(st, 64);
}
Signature sign_finalize_remote(Hash message_identifier, PrivateKey sk);
bool verify(PublicKey pk, Message m, Signature s);
Or more explicitly: You can prehash ML-DSA with the hash function SHAKE256(SHAKE256(pk, 64) || 0x00 || 0x00 || m), where pk is the public key, and m is the message. Note that this hash function is for the case of using an empty context, but I leave figuring out the hash functions with context to my capable reader (Hint, there are two 0x00’s here for a reason).
The Solution (Part 2)
I want to be very clear: The first part of the solution solves the situation for ML-DSA. Pretty much completely. In my opinion, part 1 of the solution should be the most preferred way for solving this problem whenever we encounter it in its pure form. It allows the signing infrastructure to make choices about the locality of data and private key without having to bother the verifiers with it, allowing for someone to change it later, without having to update every verifier in existence. It also gives us this nice non-resignability bonus property, for whatever that is worth.
There are a few other things to tie up, though. First, this does not work for SLH-DSA, which is a multipass signature scheme, and has to hash the input data multiple times, so even the first API refinement would have led to an unbounded state object. Second, you might want to have resignability, i.e. you might want to produce signatures for the same piece of data with multiple public keys at once, even if that makes some academics sad. And third, you might want to sign the actual petabytes I promised above, and find yourself in need of a tad bit more parallelism than a single output stream can provide.
Thankfully, all three scenarios actually have the same solution: You need a protocol. By protocol I mean any description of the larger context your signature scheme is used in, and not necessarily an interactive protocol of two or more online parties. And the best part is: you already have a protocol, since a signature scheme, by itself, is pretty useless, and needs some larger framework that defines what data in what format is signed by whom, how public keys are distributed and trusted, etc. You never deploy a signature scheme by itself, it is always part of a larger system. And with just a few tweaks to that larger system, you make all the rest of your prehashing woes go away.
For example, when defining what data is signed by your signature algorithm, you could decide to simply sign the hash of some data, instead of the data itself. You could even sign the root of a Merkle tree, if say, you wanted to sign a petabyte of data using massively parallel data stores. You could sign the hash of your large data blob together with some unhashed metadata, which would give the remote side some auditing capabilities. The possibilities are endless. While I often blog about the rather minute details of primitives, it is actually this endlessness of possibilities of the protocol space of cryptographic system that is by far the largest part of my day job. Using the protocol to define a system that does not have to transport immense amount of data over constrained networks or to constrained devices is something that you will have to do in many instances of this problem, and in the simplest case this can be as easy as writing “The signer signs a hash of the data provided”. Yes, this means the data is hashed twice, which is a type of inefficiency, but these kind of tradeoffs are commonplace in protocol design.
The Bad Place
This leaves us with the bad place, HashML-DSA and HashSLH-DSA. Those two signature schemes are, in my opinion, what happens when protocol questions get pushed into the primitive layer, and leave a situation that is either ill-defined, or unnecessary. Note that even when pushing this protocol question into the primitive layer, you do not actually absolve yourself from the need of having a protocol defining the larger deployment of a signature scheme, but you merely make the protocol more complicated to analyze, as the primitive is now diverging from the neatly defined building block it is supposed to be.
The way HashML-DSA and HashSLH-DSA are defined, there are two different ways of signing the hash of some data. Option 1 is to just sign this hash, using the method described in the previous paragraph, by declaring the hash to be the actual data. Option 2 is to use HashML-DSA, and invoke a special version of ML-DSA that flips a bit and adds an OID.
The verifier will need to know which option you used, since, different from the solution (part 1), the resulting signature will differ. So in reality, both scenarios are actually protocol decisions, and nothing has been gained by the use of HashML-DSA.
But since HashML-DSA/HashSLH-DSA are no longer actual signature schemes as defined by the mathematical definition above, but require another parameter, the hash used, as a side input, you now have the problem of how to communicate this side input to the verifier, with two basic options: Either, you make it part of the public key, or equivalently, the verification logic, in which case it is simply superfluous, since your verification logic could have just said to first hash the data and then verify the signature on the hash. Or, you put the choice of hash with the data, and have it as an untrusted side input. The latter is the worst case scenario, because a signature scheme is not guaranteed to be secure if the payload describes how to verify itself. This is the problem with JWT, and the problem of countless security vulnerabilities in existence, and an extremely common mistake to make. The mathematical proofs used to show a signature scheme does not permit forgeries simply do not apply when the signature scheme is not fully defined. In the case of HashML-DSA, this, as far as I can tell, seems to mostly be a theoretical problem, as the hash OID should prevent forgeries, even if a weak hash function has been used in the past, but the mere fact that it encourages sending information necessary to define the verification algorithm through an untrusted channel, is a net negative, given how the whole thing is a completely unnecessary in the first place. We would be best off by ignoring the existence of the HashML-DSA/HashSLH-DSA variants completely, and instead operate on well defined signature schemes at all times.
Side note: A similar thing can be said about the context string, it too breaks the mathematical framework used and cuts through the abstraction layers of what a signature scheme does, but it is pretty easy to ignore or use correctly, so my visceral hatred for the HashML-DSA/HashSLH-DSA variants does not extend quite the same way here. In the end, both boil down to something Cas Cramer said, somewhat jokingly, about the misbinding issues with KEMs: “There should have been a precompetition, where the properties of the candidates for the competition are being set”. We should have clearly defined the APIs as used in practice from the beginning, including being clear about serialization (which causes the misbinding issues), implicit rejection (which causes even more misbinding issues), and prehashing capabilities of signature schemes. That would have avoided the last minute introduction of HashML-DSA, likely caused the use of explicit rejection in KEMs, and made papers like unbindable Kemmy Schmidt unnecessary to be written months before the standards were finalized.
| 2024-11-08T10:05:13 | en | train |
42,050,596 | passwordoops | 2024-11-05T11:19:12 | Bio-based fibers could pose greater threat to environment than plastics | null | https://phys.org/news/2024-11-bio-based-fibers-pose-greater.html | 2 | 1 | [
42052170
] | null | null | http_other_error | Just a moment... | null | null | Please complete security verificationThis request seems a bit unusual, so we need to confirm that you're human. Please press and hold the button until it turns completely green. Thank you for your cooperation!Press and hold the buttonIf you believe this is an error, please contact our support team.24.173.64.2 : f1581fbb-c39e-470d-844b-be1c063c | 2024-11-07T22:39:59 | null | train |
42,050,604 | todsacerdoti | 2024-11-05T11:21:08 | Nginx Explorer – Cookie Authentication | null | https://blog.izissise.net/posts/ngxp-cookie-auth/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,609 | doener | 2024-11-05T11:21:24 | Infinite Mac: Macintosh Garden Library | null | https://blog.persistent.info/2024/11/infinite-mac-macintosh-garden-library.html | 5 | 0 | [
42050917
] | null | null | no_error | Infinite Mac: Macintosh Garden Library | null | null |
The Macintosh Garden is a great resource in the retro Mac community. It has an archive of nearly every piece of software released in the 80s and 90s, complete with screenshots, manuals, and metadata like year of release and operating system requirements. From its debut Infinite Mac would let you use files from the Garden: download it to your computer, and then drag it in to have it appear in the “Downloads” folder. But while doable, it’s not the same as being discoverable or pleasant to use if you wanted to do this more than a few times.
Inspired by the CD-ROM library feature, I decided to investigate what it would take to add a “Macintosh Garden” drawer to the site. The goal was to allow any item in the Garden’s catalog to be loaded into the emulated Mac with one click (at least for the versions that support “The Outside World”, which is most from System 7 to Mac OS 9). I reached out to the Garden’s maintainer, who was on board with the project and even provided a JSON dump of the site’s catalog of 20,000 applications and games. Building the UI was a fun exercise in making the CD-ROM drawer into a reusable component - another contribution to my collection of Classic- and Platinum-themed UI controls.
In keeping with the secondary goal of Infinite Mac bringing the best of web technologies to retro-computing, I wanted the Macintosh Garden drawer to be as fast as possible. The entire catalog was 86 megabytes of JSON, which would take a while to load, even with gzip compression (27 MB). I decided to create a custom data format, with only a small index file being necessary to render the drawer list view and support search-as-you-type. It contains plain JavaScript arrays with known indices for the title, author, and other data. This approach (inspired by the JsPbLite format) minimizes redundancy while keeping decoding simple. The file is 1.5 MB, and only 439 KB when compressed with gzip, which - combined with preloading - makes the drawer pretty snappy.
A separate “details” file with descriptions, download URLs, and other information is loaded by the Cloudflare worker, which serves pieces of it on-demand. The worker also handles proxying of downloads from the Garden, both to avoid running into CORS issues and because Cloudflare’s caching should help with frequently-used items. For downloads that are large CD-ROM images, the HTTP range-based streaming approach from the CD-ROM library is also used to help with performance.
The search UI supports operators, which were implemented by a simple predicate function run as part of a linear search (the data set is small enough that fancier indexing is not required). As a small UI touch, I finally got to use scroll snap for something (the carousel view for screenshots). I also added support for triggering downloads of an item via a URL parameter, so it’s possible to share links to a favorite item.
In terms of other Infinite Mac-related work, it’s been relatively quiet. I have been tracking DingusPPC development, and it’s now possible to (sometimes, very slowly) boot Mac OS X 10.2. I also made some small quality-of-life improvements: more control over scaling, it’s harder to accidentally close the site via Command-W, and era-appropriate fonts are now used.
| 2024-11-08T11:09:33 | en | train |
42,050,614 | ethanleetech | 2024-11-05T11:22:09 | null | null | null | 1 | null | [
42050615
] | null | true | null | null | null | null | null | null | null | train |
42,050,622 | angelawizy | 2024-11-05T11:23:00 | null | null | null | 1 | null | [
42050623
] | null | true | null | null | null | null | null | null | null | train |
42,050,632 | T-A | 2024-11-05T11:24:08 | OPEA [Open Platform for Enterprise AI] | null | https://github.com/opea-project | 1 | 0 | null | null | null | no_error | OPEA [Open Platform for Enterprise AI] | null | null |
OPEA is an ecosystem orchestration framework to integrate performant GenAI technologies & workflows leading to quicker GenAI adoption and business value.
Overview
Repositories
Projects
Packages
People
OPEA is an open platform project that lets you create open,
multi-provider, robust, and composable GenAI solutions that harness the best
innovation across the ecosystem.
The OPEA platform includes:
Detailed framework of composable building blocks for state-of-the-art
generative AI systems including LLMs, data stores, and prompt engines
Architectural blueprints of retrieval-augmented generative AI component stack
structure and end-to-end workflows
A four-step assessment for grading generative AI systems around performance,
features, trustworthiness, and enterprise-grade readiness
Read more about OPEA at opea.dev and explore the OPEA
technical documentation at opea-project.github.io
Popular repositories
Loading
Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
Shell
265
185
GenAI components at micro-service level; GenAI service composer to create mega-service
Python
68
135
Containerization and cloud native suite for OPEA
Go
30
55
This repo contains documents of the OPEA project
Python
26
53
Evaluation, benchmark, and scorecard, targeting for performance on throughput and latency, accuracy on popular evaluation harness, safety, and hallucination
Python
22
39
GenAI Studio is a low code platform to enable users to construct, evaluate, and benchmark GenAI applications. The platform also provide capability to export developed application as a ready-to-depl…
JavaScript
4
5
Repositories
Showing 10 of 19 repositories
opea-project/opea-project.github.io’s past year of commit activity
HTML
0
Apache-2.0
5
0
0
Updated Nov 8, 2024
GenAIExamples
Public
Generative AI Examples is a collection of GenAI examples such as ChatQnA, Copilot, which illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
opea-project/GenAIExamples’s past year of commit activity
GenAIComps
Public
GenAI components at micro-service level; GenAI service composer to create mega-service
opea-project/GenAIComps’s past year of commit activity
Python
68
Apache-2.0
135
47
26
Updated Nov 8, 2024
GenAIInfra
Public
Containerization and cloud native suite for OPEA
opea-project/GenAIInfra’s past year of commit activity
GenAIEval
Public
Evaluation, benchmark, and scorecard, targeting for performance on throughput and latency, accuracy on popular evaluation harness, safety, and hallucination
opea-project/GenAIEval’s past year of commit activity
Python
22
Apache-2.0
39
19
3
Updated Nov 8, 2024
GenAIStudio
Public
GenAI Studio is a low code platform to enable users to construct, evaluate, and benchmark GenAI applications. The platform also provide capability to export developed application as a ready-to-deploy package for immediate enterprise integration.
opea-project/GenAIStudio’s past year of commit activity
JavaScript
4
Apache-2.0
4
0
0
Updated Nov 8, 2024
opea-project/Validation’s past year of commit activity
Shell
0
Apache-2.0
3
0
2
Updated Nov 8, 2024
docs
Public
This repo contains documents of the OPEA project
opea-project/docs’s past year of commit activity
opea-project/Security-Working-Group’s past year of commit activity
3
Apache-2.0
4
0
0
Updated Oct 2, 2024
opea-project/.github’s past year of commit activity
0
2
0
1
Updated Sep 5, 2024
Most used topics
Loading…
| 2024-11-08T15:41:19 | en | train |
42,050,659 | jmsflknr | 2024-11-05T11:29:14 | India issues notice to Wikipedia over concerns of bias | null | https://techcrunch.com/2024/11/05/india-issues-notice-to-wikipedia-over-concerns-of-bias/ | 7 | 1 | [
42050847
] | null | null | null | null | null | null | null | null | null | train |
42,050,660 | hathym | 2024-11-05T11:30:04 | Nintendo profit plunges 69% as it cuts forecast | null | https://www.cnbc.com/2024/11/05/nintendo-cuts-switch-sales-forecast-to-12point5-million-as-demand-fades.html | 3 | 0 | [
42050793
] | null | null | null | null | null | null | null | null | null | train |
42,050,662 | timmb | 2024-11-05T11:30:20 | It's hard to make art with AI, but it's not impossible | null | https://timmb.com/its-hard-to-make-art-with-ai-but-its-not-impossible/ | 1 | 0 | [
42050908
] | null | null | null | null | null | null | null | null | null | train |
42,050,681 | vccafe | 2024-11-05T11:35:02 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,692 | laiso | 2024-11-05T11:36:17 | Show HN: Askrepo – AI-Powered Code Understanding Tool | askrepo is a tool that helps developers understand complex codebases using AI. It leverages Gemini's API with a 2M token context window to provide accurate code explanations and insights.<p>## Key Features:<p>- Analyzes entire Git repositories or specific paths<p>- Maintains full context of the codebase<p>- Provides accurate answers based on comprehensive code analysis<p>- Flexible usage for code understanding, bug detection, and more<p>## Why This Exists:<p>- Traditional chat services and tools like Cursor/Copilot Chat often provide fragmented information<p>- Need for better context understanding in large codebases<p>- Especially useful for OSS analysis and complex project exploration<p>Try it out: <a href="https://github.com/laiso/askrepo">https://github.com/laiso/askrepo</a><p>## Questions for the Community:<p>1. Is there a real need for this tool?<p><pre><code> - Uncertain about how many developers need code explanation features
- Personally found it valuable for OSS analysis
- Looking for feedback on use cases
</code></pre>
2. Are there better alternatives?<p><pre><code> - Open to solutions that make code comprehension easier
- Key requirements:
- Flexible code scope selection
- Better AI context understanding
- Current tools (Cursor/GitHub Copilot) fell short in these aspects
</code></pre>
Would love to hear your thoughts on use cases and potential improvements! | https://github.com/laiso/askrepo | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,693 | devneelpatel | 2024-11-05T11:36:22 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,729 | Blahah | 2024-11-05T11:41:51 | Sentient Advocate of Nature [video] | null | https://www.ted.com/talks/goodbye_monkey_s_a_n_sentient_advocate_of_nature | 1 | 0 | null | null | null | missing_parsing | S.A.N (Sentient Advocate of Nature) | null | GoodBye Monkey | 21,985 views|GoodBye Monkey | TED Countdown Dilemma Series • October 2024In a universe not unlike ours, a tech-environmentalist group claims to have created an AI that is the direct “voice of the earth,” a computer connected via electrodes to the mycelium network under an ancient forest named S.A.N (Sentient Advocate of Nature). The film imagines what nature thinks of human impact on the planet, as a renowned reporter conducts a world-first interview with S.A.N.climate changeenvironmentsustainabilityAICountdown | 2024-11-08T00:09:36 | null | train |
42,050,735 | cmpit | 2024-11-05T11:42:48 | OpenAI's o1 model leaked on Friday | null | https://www.tomsguide.com/ai/chatgpt/openais-o1-model-leaked-on-friday-and-it-is-wild-heres-what-happened | 6 | 0 | [
42050896
] | null | null | no_error | OpenAI’s o1 model leaked on Friday and it is wild — here’s what happened | 2024-11-04T11:04:34+00:00 | Ryan Morrison |
(Image credit: SOPA Images / Contributor via Getty Images)
OpenAI is set to release the full version of its powerful o1 reasoning model sometime this year, but an unexpected leak last week means we may have already seen it in action — and it is even better than we expected.In September OpenAI unveiled a new type of AI model that takes time to reason through a problem before responding. This was added to ChatGPT in the form of o1-preview and o1-mini, neither of which demonstrated the full capabilities of the final o1 model, but did show a major improvement in terms of accuracy over GPT-4.CEO Sam Altman says o1 is a divergence from the GPT-style models normally released, including GPT-4o, which powers Advanced Voice. During a briefing with OpenAI, I’ve been told o1 full is a significant improvement over the preview, and the leak seems to confirm that is the case.Over about two hours on Friday users could access what is thought to be the full version of o1 (OpenAI has not confirmed) by changing a parameter in the URL. The new model will also be able to analyze images and access tools like web search and data analysis.An OpenAI spokesperson told Tom's Guide: "We were preparing limited external access to the OpenAI o1 model and ran into an issue. This has now been fixed."What was revealed in the o1 leak?HUGE LEAK 🔥 OpenAI full o1 Chain of Thought has native image capabilities See the response for recent SpaceX launch image. It walks through the details of each part of image step by step. pic.twitter.com/lxHlI435bONovember 4, 2024Ever since the release of the original o1-preview model OpenAI insiders have been boasting about the full capabilities of the model once the preview tag is removed. Theories suggest that the preview was trained on an earlier version of the GPT models whereas the full model was trained from scratch. Either way, the leak seemed to prove that they were right.In one example a user was able to get it to solve an image puzzle. The AI spent nearly two minutes thinking through the problem but it demonstrated the huge potential once it is able to review images, documents and other multimedia inputs.Here at Tom’s Guide our expert editors are committed to bringing you the best news, reviews and guides to help you stay informed and ahead of the curve!In another example, a user was able to have it walk through every single element of an image showing a recent SpaceX rocket launch. It went into considerable detail about color and motion. This could be huge for AI image generation.It isn’t clear when OpenAI will unveil the full version of o1 properly but what we do know is that it will be a significant advancement in AI. It is likely to be sometime in the next few weeks as most AI companies seem to be holding back until after the U.S. Presidential election.More from Tom's GuideGemini Live Voice mode is free for Android users — and you can try it right nowMidjourney is building the Holodeck — new AI model lets you ‘enter’ 3D imagesGoogle's Gemini AI is now turning your notes into a podcast
Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover.
When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?
Most Popular
| 2024-11-08T09:48:53 | en | train |
42,050,736 | yamrzou | 2024-11-05T11:43:01 | Who Will Monitor the Monitor? (2010) | null | https://afinetheorem.wordpress.com/2011/02/01/who-will-monitor-the-monitor-d-rahman-2010/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,744 | perihelions | 2024-11-05T11:44:12 | Lost Maya city discovered in Mexico | null | https://www.cnn.com/2024/11/02/science/maya-city-discovered-valeriana-mexico/index.html | 1 | 0 | [
42050891
] | null | null | Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'. | Lost Maya city discovered in Mexico | CNN | 2024-11-02T08:30:48.026Z | Mindy Weisberger |
Sign up for CNN’s Wonder Theory science newsletter. Explore the universe with news on fascinating discoveries, scientific advancements and more.
CNN
—
For more than 1,000 years, dense forests in the Mexican state of Campeche concealed the region’s ancient human history.
Scientists called Campeche an archaeological “blank spot” in the Maya Lowlands, an area spanning what is now Belize, El Salvador, Guatemala and southeastern Mexico, and which the Maya inhabited from about 1000 BC to AD 1500.
But part of that region is blank no longer. Archaeologists have found thousands of never-before-seen Maya structures as well as a large city that they named Valeriana after a nearby lagoon, the researchers reported Monday in the journal Antiquity.
The sleuthing that led to the discovery took place from nearly 2,000 miles (3,200 kilometers) away, using aerial LiDAR — light detection and ranging equipment — that penetrated eastern Campeche’s thick forest cover from above, pinging the surface with lasers and revealing what lay beneath the leafy canopy. Encompassing about 47 square miles (122 square kilometers), the LiDAR scans were collected in 2013 for a forest survey by The Nature Conservancy of Mexico.
Like other large capital cities from Maya sites, Valeriana had a reservoir, a ball court, temple pyramids and a broad road connecting enclosed plazas. In total, the researchers identified 6,764 structures in Valeriana and in other rural and urban settlements of varying sizes. The density of the settlements in the area rivals that of other known locations in the Maya Lowlands, and archaeologists had suspected that numerous Maya ruins were hidden in Campeche since at least the 1940s, the scientists reported.
“On the one hand it was surprising; you see it and you’re struck by it. On the other hand, it actually confirmed what I expected to find,” said lead study author and archaeologist Luke Auld-Thomas, who conducted the research as a doctoral candidate in the department of anthropology at Tulane University.
“My own sense of this part of the Maya Lowlands, based on what I know of my archaeology, is that if you could throw darts at it, you would find urban areas,” Auld-Thomas said. “And so it was gratifying and exciting to see that that was actually the case.”
Campeche is sandwiched between two relatively well-explored areas — the northern Yucatán and the southern Maya Lowlands — but archaeologists previously all but ignored it, said study coauthor Marcello Canuto, a professor in Tulane’s department of anthropology.
In the north, Maya sites such as Chichén Itzá are highly visible. “They’re very easy to recognize on the landscape, and there was ready accessibility,” Canuto said. Sites from the southern Maya Lowlands were also familiar to archaeologists as a source of Maya hieroglyphs, texts and altars — “the kinds of things that have been long-sought by scholars,” Canuto said.
For decades, Campeche was not easily reachable or known for its artifacts. But this new study and other LiDAR-driven investigations are changing that.
“This is a new dawn for all of us, because we can now see where we would never have been able to see,” Canuto said.
The new LiDAR scans also highlight the connections between Maya settlements and hint at the complexity of Maya cities regardless of their size, said Carlos Morales-Aguilar, a landscape archaeologist and postdoctoral researcher at the University of Texas at Austin who was not involved in the research. Morales-Aguilar’s work on Maya settlements in Guatemala aligns closely with the new findings, he told CNN in an email.
“Dense settlement patterns indicate that the Maya were highly organized in managing their landscapes, with extensive networks of roads or causeways, residential areas, agricultural terraces, and defensive structures,” he said. The Antiquity study further indicates that the Maya adapted their infrastructure to fit the natural landscape, “utilizing sinkholes, ridges, and depressions as part of their urban planning and water management strategies.”
“These findings challenge the traditional view that Maya cities — including their hinterland — were isolated city-states or regional kingdoms,” Morales-Aguilar said. Instead, they paint a picture “of a vast, interconnected network of urban and rural areas that spanned across their territories throughout their occupation history.”
As LiDAR scans reveal more of these formerly hidden cities, the data will reshape earlier interpretations of the scale and diversity of Maya settlements, “which is a good thing!” said Tomás Gallareta Cervera, an assistant professor of anthropology and Latin American studies at Kenyon College in Ohio who was not involved in the study.
“LiDAR analysis has pushed urbanism and settlement pattern studies forward in unprecedented ways; some even call it the LiDAR revolution,” Gallareta Cervera said in an email. “Archaeologists now have a new framework to research how these ancient people adapted and thrived in their environment for thousands of years. And that is very exciting!”
While these remnants of Maya culture have persisted for millennia, locating and studying the full extent of Maya settlements — which could include more major cities — will be critical for preserving the future of these ancient sites, according to Auld-Thomas.
“We have yet to really wrap our heads around what that means for our understanding of these places as environments and how to care for them and protect them,” he said. “It’s important to understand that these are places that have always been peopled to varying degrees, and that people have an important place in their conservation.”
Mindy Weisberger is a science writer and media producer whose work has appeared in Live Science, Scientific American and How It Works magazine.
| 2024-11-08T05:31:35 | null | train |
42,050,766 | belter | 2024-11-05T11:48:05 | Accuracy-Performance Trade-Offs in LLM Quantization | null | https://arxiv.org/abs/2411.02355 | 1 | 1 | [
42050767
] | null | null | null | null | null | null | null | null | null | train |
42,050,773 | sunkcostisalie | 2024-11-05T11:49:20 | Apple offers Indonesia US$10M sweetener to reverse its iPhone ban | null | https://www.scmp.com/news/asia/southeast-asia/article/3285194/apple-offers-indonesia-us10-million-sweetener-reverse-its-iphone-ban | 3 | 2 | [
42051047,
42050895,
42050991
] | null | null | null | null | null | null | null | null | null | train |
42,050,781 | denisshilov | 2024-11-05T11:50:32 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,050,790 | ripjaygn | 2024-11-05T11:51:45 | Toyota to post first profit drop in 2 years as demand cools after big run | null | https://www.msn.com/en-us/money/news/toyota-to-post-first-profit-drop-in-2-years-as-demand-cools-after-big-run | 1 | 2 | [
42052507,
42050810,
42050890
] | null | null | null | null | null | null | null | null | null | train |
42,050,791 | perihelions | 2024-11-05T11:51:53 | Regulators deliver successive blows to Amazon and Meta's nuclear power ambitions | null | https://techcrunch.com/2024/11/04/regulators-deliver-successive-blows-to-amazon-and-metas-nuclear-power-ambitions/ | 3 | 0 | [
42050888
] | null | null | null | null | null | null | null | null | null | train |
42,050,802 | Hard_Space | 2024-11-05T11:53:39 | Self-Occluded Avatar Recovery from a Single Video in the Wild | null | https://soar-avatar.github.io/ | 8 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,050,812 | throw0101d | 2024-11-05T11:55:24 | Boeing machinists end strike with 38% wage increases | null | https://www.cnbc.com/2024/11/04/striking-boeing-machinists-vote-new-contract.html | 11 | 0 | [
42050875
] | null | null | null | null | null | null | null | null | null | train |
42,050,825 | rolifumi | 2024-11-05T11:57:20 | We 4X CV Performance with GenAI and Synthetic Data – Case Study Inside | null | https://medium.com/@rolandpinter/how-can-genai-and-synthetic-data-4x-computer-vision-performance-ea6b1ab5b196 | 2 | 1 | [
42050826,
42050886
] | null | null | null | null | null | null | null | null | null | train |
42,050,844 | AIwonderful | 2024-11-05T12:01:35 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.