id (int64) | by (large_string) | time (timestamp[us]) | title (large_string) | text (large_string) | url (large_string) | score (int64) | descendants (int64) | kids (large list) | deleted (large list) | dead (bool) | scraping_error (large_string) | scraped_title (large_string) | scraped_published_at (large_string) | scraped_byline (large_string) | scraped_body (large_string) | scraped_at (timestamp[us]) | scraped_language (large_string) | split (large_string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
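Each record below is one pipe-delimited row following the header above. As a rough sketch (the sample row mirrors the first record; the naive split assumes no `|` characters inside field values, which does not hold for rows whose scraped_body itself contains tables), such a row can be turned back into named fields in plain Python:

```python
# Column names, taken from the header row above.
COLUMNS = [
    "id", "by", "time", "title", "text", "url", "score", "descendants",
    "kids", "deleted", "dead", "scraping_error", "scraped_title",
    "scraped_published_at", "scraped_byline", "scraped_body",
    "scraped_at", "scraped_language", "split",
]

def parse_row(row: str) -> dict:
    """Split one pipe-delimited row into a {column: value} dict.

    Naive: assumes field values themselves contain no '|' characters.
    """
    cells = [cell.strip() for cell in row.strip().strip("|").split("|")]
    return dict(zip(COLUMNS, cells))

# Sample row mirroring the first record of the table.
row = ('42,043,713 | null | 2024-11-04T17:16:36 | null | null | null | null '
       '| null | null | ["true"] | true | null | null | null | null | null '
       '| null | null | train |')
record = parse_row(row)
```

A real parser would need to treat the `scraped_body` cell specially, since it can span many lines and contain pipes of its own.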
42,043,713 | null | 2024-11-04T17:16:36 | null | null | null | null | null | null | ["true"] | true | null | null | null | null | null | null | null | train |
42,043,717 | danboarder | 2024-11-04T17:17:00 | Extrovert or Introvert: Most People Are Ambiverts | null | https://www.scientificamerican.com/article/extrovert-or-introvert-most-people-are-actually-ambiverts/ | 4 | 1 | [42045464] | null | null | null | null | null | null | null | null | null | train |
42,043,724 | nafnlj | 2024-11-04T17:17:33 | The Private Train Car Edition | null | https://whyisthisinteresting.substack.com/p/the-private-train-car-edition | 2 | 0 | null | null | null | no_error | The Private Train Car Edition | 2024-10-29T11:39:11+00:00 | Guest Contributor | Matt Locke (ML) is a WITI contributor and the Director of Storythings, a content agency based in the UK. His previous WITIs include the Andy Warhol Album Covers Edition and the Platinum Photography Edition. Matt here. At lunch recently, a friend visiting from the US casually mentioned that he’d once traveled on someone’s private rail car through the mountains, from Denver all the way to California. I’d heard of private jets, of course, but never private rail cars. I was curious. How does that even work? Why is this interesting? This rabbit hole led me to some of the most spectacular ways to travel I’ve ever seen. The way private rail cars work is a surprising example of luxury capitalism and public infrastructure not just coexisting, but working harmoniously together. In an era when public infrastructure is woefully underfunded, perhaps we can learn something from the history of private rail cars. The first, burning question I had was: how do you use them to actually travel anywhere? The surprising answer is one of the most maligned names in American public infrastructure: Amtrak. Private rail cars are stored at ordinary train yards, usually at the nearest major hub to their owners’ location. When you want to use them for a trip, you contact Amtrak and arrange for your car to be hitched to the end of a normal scheduled service. So these incredibly luxurious private cars live alongside, and rely on, the ordinary public infrastructure of Amtrak and the US rail network.
Without that, they just wouldn’t go anywhere. The Amtrak guidelines for running your own private rail car are fantastic, with word docs for requesting a move, and guidelines about keeping your car certified to the safety standards needed to use public infrastructure. Although I loved finding the pictures of these beautiful rail cars in my research, I found the Amtrak guidelines page even more joyful. There is something about the generosity of the documentation (like an API for the railroad!), and the mutual co-existence of extreme private luxury and public goods, that really warms my heart. Private rail cars were, and still are, very much a high-end luxury. They were the private jets of the late nineteenth and early twentieth century: at their 1929 peak there were about 2,000 in use, owned by industrial barons like Henry Ford and by many U.S. Presidents (though Abraham Lincoln apparently hated his so much, it was only ever used to transport his coffin). These early 20th-century cars cost around $20,000 to build (around $1m in today’s money) and would normally feature an observation deck, sleeping quarters, galleys, and of course a lounge and dining area. They are gorgeous examples of their era’s luxury craftsmanship, full of intricate art deco paneling, brass light fixtures, and gilt-detailed bars. The most stunning features were saved for the observation decks, built as a second-floor area with glass roof windows, or as a glass conservatory at the end of the carriage. Many private rail cars were made by the Pullman company in Chicago, a name that has become a byword for decadent rail travel. The observation decks showcased the stunning American countryside and towns, like a widescreen movie rushing past your eyes. Some decks had rotating chairs so you could shift your view between the landscape and your fellow travelers without having to move from your seat.
They were a rare example of a luxury private space that didn’t cut you off from the world, but immersed you in it. Private rail cars reached their height in the early 20th century, before they were pretty much wiped out as a business by the Wall Street Crash. By the time industry barons returned their fortunes to a level where they could again afford private travel, the plane had taken over from the train as the best way to cross the US. But since then, new generations of rich enthusiasts have taken on the work of restoring these masterpieces to their former glory, and you can now buy tickets on scheduled trips or charter a private rail car for your own event. The super rich of today seem only to dream of ways of cutting themselves off from the messy reality of wider society in self-driving cars, metaverses and hyperloops. Back at the turn of the twentieth century, the 1% could have all the privacy and splendor their riches could afford them, but they still had to rely on public networks, hitching themselves to regular scheduled services full of the workers who made their fortunes for them, on rails maintained through a mixture of investment from private freight services and federal subsidies. There is something about this coexistence and codependency of the public and the private which feels even more important now than it was in our last Gilded Age. (ML) Quick Links: The American Association of Private Rail Car Owners has lots of details about how to own, or charter, a private rail car, and links to their members’ cars. They also run autumn leaves journeys, running a collection of private cars together through the beautiful fall scenery of the US.
This is a fantastic video from DownieLive of his trip on an autumn leaves train as it went to its pick-up destination, showing the incredible beauty of the cars and the landscape rushing past. Friends of the 261 is a non-profit organization that restores and runs trips on private rail cars, including the Cedar Rapids pictured above. I also love their minimalist URL. | 2024-11-08T12:06:30 | en | train |
42,043,729 | lucasllinasm | 2024-11-04T17:17:53 | Nvidia AI Blueprint: easy for devs to build automated agents that analyze video | null | https://venturebeat.com/ai/nvidia-ai-blueprint-makes-it-easy-for-devs-in-any-industry-build-agents-to-analyze-video/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,043,743 | jamesbvaughan | 2024-11-04T17:19:08 | The smallest (useful) HTTP responses possible | null | https://jamesbvaughan.com/small-http/ | 2 | 1 | [42044361] | null | null | null | null | null | null | null | null | null | train |
42,043,744 | lucasllinasm | 2024-11-04T17:19:25 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,043,747 | qwezxcrty | 2024-11-04T17:19:34 | DB48X: High Performance Scientific Calculator, Reinvented | null | http://48calc.org/ | 194 | 124 | [42045765, 42044499, 42045733, 42044283, 42047760, 42044028, 42052358, 42045379, 42044252, 42047005, 42049099, 42048526, 42048787, 42054691, 42044116, 42044503, 42043912, 42049268, 42048060, 42044460, 42047186, 42043985] | null | null | null | null | null | null | null | null | null | train |
42,043,755 | wuliwong | 2024-11-04T17:20:35 | Any Indiehackers.com Alternatives | I loved Indie Hackers when it first came out. I'm someone who is always building a side project with hopes of it becoming a legit business. There was a previous website that was really new and had a fun community; I can't remember the name now, but I met someone on there and we built a site together and are still in touch to this day. I'm in the US and he is in Latvia. My experience with IH over the last couple of years has been poor. Posts get no interaction, and it feels like the focus has moved away from actual indie hackers generating the content through their work and questions. I get that this site has to generate revenue to survive; hopefully they figure it out. Just wondering if anyone else knows of communities for fellow "startup hackers."<p>Maybe the lesson for me is that I just want something like a subreddit and that's probably where I should look. The idea in general is possibly too narrow to actually support a profitable business, long term. | null | 33 | 7 | [42045649, 42046021, 42047986, 42051280, 42051275, 42049882, 42047695] | null | null | null | null | null | null | null | null | null | train |
42,043,758 | segasaturn | 2024-11-04T17:20:37 | Japan Is Entering a New Era of Instability | null | https://www.nytimes.com/2024/10/28/opinion/japan-liberal-democratic-party.html | 11 | 2 | [42043904, 42048085] | null | null | null | null | null | null | null | null | null | train |
42,043,766 | PaulHoule | 2024-11-04T17:21:08 | New York's Cannabis Fund Became a Disaster. Its Managers Earned $1.7M | null | https://www.thecity.nyc/2024/10/24/new-york-cannabis-fund-managers-payout-chris-webber-bill-thompson/ | 2 | 0 | null | null | null | no_error | New York’s Cannabis Fund Became a Disaster. Its Managers Earned $1.7 Million Nonetheless. | 2024-10-24T17:14:59Z | Rosalind Adams |
They haven’t come close to fulfilling Gov. Kathy Hochul’s goal of helping 150 people victimized by the state’s old, racially biased drug laws enter the legal cannabis business — and some they have assisted fear their dispensary dreams are collapsing. But the three managers of a public-private loan fund established to carry out the primary social mission of New York’s sweeping cannabis legalization program are doing just fine. Records obtained by THE CITY show that they earned $1.7 million over the most recently tallied 12-month period and stand to make millions more in years to come, even though the New York Cannabis Social Equity Investment Fund has faced charges of predatory lending, secrecy and mission failure. By a conservative estimate computed by THE CITY, the managers’ long-term haul could easily come to $15 million over a decade. The state selected the three managers, who operate under the almost identical name of Social Equity Impact Ventures, after a bidding process in June 2022: Bill Thompson, a former New York City comptroller and mayoral candidate; the former NBA star Chris Webber; and Lavetta Willis, a former sneaker entrepreneur based in Los Angeles. Former NBA star Chris Webber speaks at the opening of a cannabis shop on Bleecker Street, Jan. 23, 2023. Credit: Ben Fractenberg/THE CITY In an early document, the state said the fund “shall have no other purpose other than to advance the public policy goal” of financing and helping to develop dispensaries “for the benefit of social equity licensees.” But the fund has financed only 21 stores in two and a half years. Hochul’s office declined to comment on the fees paid out to the fund managers. On behalf of the fund, Jeffrey Gordon, a spokesperson for the state Dormitory Authority, a financing agency that is a fund partner, sent a list of services that the fund provides that contained no information about payments to the managers.
Willis and Thompson did not answer emails, calls and texts with questions about the management fee and how the fee structure was determined. The Dormitory Authority provided the $1.7 million figure in response to a Freedom of Information Law request. A document outlining the structure of the fund partnership, obtained by THE CITY through a second freedom of information request, shows that the bulk of what the management team received, in quarterly payments between October 2023 and July 2024, came from a 2% annual fee on all contributions to the fund. So far that figure is about $78 million, with $50 million coming from the state. The state set a goal for the fund managers to raise up to $150 million in private cash by September 2022, to bring the total funds up to $200 million to invest in establishing dispensaries. The fund managers are also entitled to another $25,000 for each dispensary they open, which generated another $525,000 over the 12-month period. THE CITY’s estimate of the possibility that the managers would earn $15 million more over a decade was based on them failing to raise any more cash or financing any more stores. If they do either, their compensation would rise even more. Criticized from the Start From the beginning, the fund has been criticized inside the Hochul administration and by outside experts. When state cannabis officials reviewed critical documents governing the fund weeks before it was signed, one concern was that the size of the management fees would undermine the social justice mission, according to emails reviewed by THE CITY. In one email, Matthew Greenberg, a former financial analyst at the state’s Office of Cannabis Management, wrote to his colleagues that the 2% management fee “will completely deplete the $50 million” over ten years.
THE CITY reviewed the terms of the agreement with more than a half dozen cannabis and finance experts who broadly agreed that the fee structure was excessive, particularly for a fund founded with a social mission. The Dazed cannabis shop on Union Square is one of the few state-licensed dispensaries in New York, Nov. 28, 2023. Credit: Alex Krales/THE CITY “Social Equity Impact Ventures is capturing a good chunk of capital from the fund—money that could’ve gone to supporting more social equity businesses,” Lucas McCann, the co-founder of the cannabis consulting firm CannDelta, told THE CITY. “Instead it’s going right back into their own pockets.” New York State’s role in the fund is considerable. Aside from kicking the fund off with an announcement by Hochul and the $50 million commitment, the state holds a 49% stake in the venture through the Dormitory Authority, which was charged with scouting locations for dispensary leases and supervising the build-out of stores. Licensees are required to sign loans that cover the design and construction costs of their dispensaries, which they must pay back over ten years. Essential to the financing of the effort was Social Equity’s ability to raise $150 million from private investors. But after missing a September 2022 deadline, the fund managers failed to raise any equity financing despite months of sales pitches. Instead, as THE CITY revealed in April, the $150 million in investor funding came in the form of a $50 million loan from a private equity firm called Chicago Atlantic. An additional $100 million came from a separate pledge Chicago Atlantic made to invest in New York real estate that could in turn be leased to the fund for dispensary sites. By borrowing money itself rather than finding an equity partner, the fund became responsible for paying 15% interest on Chicago Atlantic’s $50 million loan.
Under the terms of its agreement, if the fund failed to meet its payments to Chicago Atlantic, the state guaranteed that it would. A Tangle for Borrowers For the fund’s social equity borrowers, the process has not been as seamless or secure. The stores took longer to open than expected. For months, licensees got conflicting information about what kind of deal the fund would offer after they were matched with dispensary locations. As the fund was getting started, fund officials told licensees at one meeting in January 2023 that it would offer $800,000 to $1.2 million loans at a 10% interest rate, according to a report of the event. Several licensees told THE CITY this was never put in writing, and after the deal with Chicago Atlantic, with the 15% interest rate it was charging the fund, the interest rate offered to prospective store operators jumped to 13%. New York City recently launched its own public-private investment fund for cannabis entrepreneurs, which offers smaller loans of up to $100,000 and interest rates capped at 9.5%. Former city comptroller Bill Thompson attends a pre-opening press conference for a state-sanctioned cannabis store on Bleecker Street, Jan. 23, 2023. Credit: Ben Fractenberg/THE CITY Agreements obtained by THE CITY also revealed that the licensees had little control over the costs that the fund was piling up in order to open the dispensaries. Loan agreements for the design and construction of some fund-supported dispensaries exceeded $2 million, while offering few details about the breakdowns of the costs. In interviews, some dispensary owners have described asking for a temporary reprieve on their loan payments, while others have ended up on a state list of loan recipients who haven’t paid their vendors, according to documents reviewed by THE CITY.
McCann, of CannDelta, said that the $25,000 bonus Social Equity Impact Ventures receives for each dispensary opened feels “very excessive, especially for smaller operators that aren’t making any money in the beginning.” He also observed that the managers benefit “just by getting dispensaries open, regardless of how well they perform.” David Feder, who runs a law firm called Weed Lawyer, questioned whether Social Equity Ventures deserved any management fees since they failed to raise the anticipated $150 million private equity investment. “They never did what they set out to do, so why are they getting paid at all?” he said. Keeping Secrets As with much else about the fund, gleaning information about the payments to its managers was far more difficult than it likely would have been if the fund were a government agency open to public disclosure laws. Despite the state’s $50 million investment, Albany officials have repeatedly denied requests for documents or said they did not have information about the fund’s operations, citing its status as a private entity. When THE CITY asked DASNY for a figure for the management fees paid to Social Equity Impact Ventures, it requested that a reporter file a freedom of information request. The agency filed two extensions in response to a request to supply the information. When the Cornell Law School First Amendment Clinic filed an appeal on behalf of THE CITY, the agency released the figures three days later. Earlier this month, a coalition of advocates led by Reinvent Albany, a state good-government group, wrote a letter to the Dormitory Authority and the Office of Cannabis Management demanding the agencies release several documents that guide the fund’s operations, including the final loan agreement with Chicago Atlantic. Citing extensive controversies around the fund, the signatories said transparency is key to ensuring the goals of the program.
| 2024-11-08T11:07:49 | en | train |
42,043,770 | zhengiszen | 2024-11-04T17:21:36 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,043,783 | zshanhui | 2024-11-04T17:22:27 | MarsCode Agent: AI-Native Automated Bug Fixing | null | https://arxiv.org/abs/2409.00899 | 1 | 0 | [42043784] | null | null | no_error | MarsCode Agent: AI-native Automated Bug Fixing | null | [Submitted on 2 Sep 2024 (v1), last revised 4 Sep 2024 (this version, v2)] |
Abstract:Recent advances in large language models (LLMs) have shown significant potential to automate various software development tasks, including code completion, test generation, and bug fixing. However, the application of LLMs for automated bug fixing remains challenging due to the complexity and diversity of real-world software systems. In this paper, we introduce MarsCode Agent, a novel framework that leverages LLMs to automatically identify and repair bugs in software code. MarsCode Agent combines the power of LLMs with advanced code analysis techniques to accurately localize faults and generate patches. Our approach follows a systematic process of planning, bug reproduction, fault localization, candidate patch generation, and validation to ensure high-quality bug fixes. We evaluated MarsCode Agent on SWE-bench, a comprehensive benchmark of real-world software projects, and our results show that MarsCode Agent achieves a high success rate in bug fixing compared to most of the existing automated approaches.
Submission history: From Chao Peng. [v1] Mon, 2 Sep 2024 02:24:38 UTC (929 KB); [v2] Wed, 4 Sep 2024 06:19:08 UTC (930 KB)
| 2024-11-08T13:00:20 | en | train |
42,043,794 | garvit_gupta | 2024-11-04T17:22:57 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,043,795 | rezonant | 2024-11-04T17:22:59 | Google Pixel reportedly sees 3x growth in North America in just one month | null | https://9to5google.com/2024/11/04/google-pixel-growth-north-america-report-october-2024/ | 3 | 0 | [42044127] | null | null | null | null | null | null | null | null | null | train |
42,043,808 | zhengiszen | 2024-11-04T17:24:26 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,043,813 | Jimmc414 | 2024-11-04T17:24:57 | Hundreds of cosmetic goods contain toxic levels of PFAS, says EU agency | null | https://www.ft.com/content/be9d854c-9caf-461d-b612-80efaf438aa3 | 3 | 1 | [42043915, 42044121] | null | null | null | null | null | null | null | null | null | train |
42,043,823 | alwillis | 2024-11-04T17:25:38 | Fantastical for Windows – A Glass of Ice Water for Calendar Users in Hell | null | https://daringfireball.net/linked/2024/11/01/fantastical-for-windows | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,043,842 | tosh | 2024-11-04T17:26:51 | Fujifilm X100 | null | https://en.wikipedia.org/wiki/Fujifilm_X100 | 2 | 0 | [42044107] | null | null | null | null | null | null | null | null | null | train |
42,043,878 | Mainsail | 2024-11-04T17:30:15 | Russia Suspected of Plotting to Send Incendiary Devices on U.S.-Bound Planes | null | https://www.wsj.com/world/russia-plot-us-planes-incendiary-devices-de3b8c0a | 27 | 5 | [42044034, 42045274, 42045166, 42047626, 42044088, 42044789] | null | null | null | null | null | null | null | null | null | train |
42,043,881 | rahimnathwani | 2024-11-04T17:30:30 | Download Claude Transcript – Chrome Web Store | null | https://chromewebstore.google.com/detail/download-claude-transcrip/jkgmppnpgldlaabcjlokgmfhfhljphhc | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,043,893 | krunck | 2024-11-04T17:31:07 | Interpretable Online Log Analysis Using LLMs with Prompt Strategies | null | https://arxiv.org/abs/2308.07610 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,043,921 | formalsystem | 2024-11-04T17:33:14 | Torch.load flipping default to weights_only=True | null | https://dev-discuss.pytorch.org/t/bc-breaking-change-torch-load-is-being-flipped-to-use-weights-only-true-by-default-in-the-nightlies-after-137602/2573 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,043,922 | rbanffy | 2024-11-04T17:33:17 | Python is 30 – Python Workshop 1-3 November 1994 | null | http://ftp.ntua.gr/mirror/python/workshops/1994-11/ | 2 | 2 | [42044032] | null | null | no_error | Python Workshop November 1994 | null | null |
The workshop is over. This page will remain as a source of
information about the workshop for those who couldn't attend (as well
as those who did).
We are thinking of organizing another workshop in April/May 1995.
It will probably be on another coast or at least somewhere west, to
give West coast Python users a fair chance to attend. Watch this
space for more info.
General Workshop Info
Workshop Agenda (short)
Workshop Agenda (annotated)
Workshop Session Kickers
List of attendees
Pictures taken at the workshop
Workshop Session Notes
C++ session notes (Skip Montanaro)
WWW session notes (Paul Everitt)
Optimizing, Compiling etc. (Steve Majewski)
Safe-Python (Steve Majewski)
Persistent Objects session notes (Guido van Rossum)
Software Management (Ken Manheimer)
Post-Workshop Documents
Preliminary import mods: description and shar file (Guido van Rossum)
New GUI API design (Tommy Burnette)
``Visual Python'' preliminary design notes (Guido van Rossum)
C++ binding preliminary design notes (Guido van Rossum)
More on ``flattening'' python objects (Guido van Rossum)
Software Management Session proposals (Ken Manheimer)
custom.py Prototype code implementing module-customization mechanism (and, incidentally, exhibiting other features proposed in the software management session)
Optimizing, Compiling etc. (Steve Majewski)
Safe-Python (Steve Majewski)
Deriving Built-In Classes in Python (Donald Beaudry)
Random Pointers
Python home page (Guido van Rossum)
Python at NIST (Michael McLay)
Transparencies of my presentation at the VHLL conference, and some more transparencies containing examples (Guido van Rossum)
The WWW Conference in Chicago
Some personal favorites from the WWW conference (Michael McLay)
Relevant papers for Python from the WWW conference (Michael McLay)
--Guido van Rossum, CWI, Amsterdam
| 2024-11-08T15:01:02 | en | train |
42,043,928 | speckx | 2024-11-04T17:33:43 | The Global Fusion Race Is On | null | https://www.fusionenergybase.com/article/the-global-fusion-race-is-on | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,043,935 | alexwatson405 | 2024-11-04T17:34:22 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,043,939 | gus_leonel | 2024-11-04T17:34:28 | Writing secure Go code | null | https://jarosz.dev/article/writing-secure-go-code/ | 368 | 287 | [42045346, 42045932, 42045531, 42046061, 42044457, 42054003, 42044190, 42055069, 42044645, 42045369, 42045604, 42045712, 42044837] | null | null | null | null | null | null | null | null | null | train |
42,043,948 | shcheklein | 2024-11-04T17:34:57 | DataChain: DBT for Unstructured Data | null | https://github.com/iterative/datachain | 156 | 26 | [42044379, 42044835, 42044366, 42044193, 42052987, 42047826, 42048942] | null | null | no_error | GitHub - iterative/datachain: AI-data warehouse to enrich, transform and analyze unstructured data | null | iterative |
DataChain
DataChain is a modern Pythonic data-frame library designed for artificial intelligence.
It is made to organize your unstructured data into datasets and wrangle it at scale on
your local machine. Datachain does not abstract or hide the AI models and API calls, but helps to integrate them into the postmodern data stack.
Key Features
📂 Storage as a Source of Truth.
Process unstructured data without redundant copies from S3, GCP, Azure, and local
file systems.
Multimodal data support: images, video, text, PDFs, JSONs, CSVs, parquet.
Unite files and metadata together into persistent, versioned, columnar datasets.
🐍 Python-friendly data pipelines.
Operate on Python objects and object fields.
Built-in parallelization and out-of-memory compute without SQL or Spark.
🧠 Data Enrichment and Processing.
Generate metadata using local AI models and LLM APIs.
Filter, join, and group by metadata. Search by vector embeddings.
Pass datasets to Pytorch and Tensorflow, or export them back into storage.
🚀 Efficiency.
Parallelization, out-of-memory workloads and data caching.
Vectorized operations on Python object fields: sum, count, avg, etc.
Optimized vector search.
Quick Start
Selecting files using JSON metadata
A storage consists of images of cats and dogs (dog.1048.jpg, cat.1009.jpg),
annotated with ground truth and model inferences in the 'json-pairs' format,
where each image has a matching JSON file like cat.1009.json:
{
"class": "cat", "id": "1009", "num_annotators": 8,
"inference": {"class": "dog", "confidence": 0.68}
}
Example of downloading only "high-confidence cat" inferred images using JSON metadata:
from datachain import Column, DataChain
meta = DataChain.from_json("gs://datachain-demo/dogs-and-cats/*json", object_name="meta")
images = DataChain.from_storage("gs://datachain-demo/dogs-and-cats/*jpg")
images_id = images.map(id=lambda file: file.path.split('.')[-2])
annotated = images_id.merge(meta, on="id", right_on="meta.id")
likely_cats = annotated.filter((Column("meta.inference.confidence") > 0.93) \
& (Column("meta.inference.class_") == "cat"))
likely_cats.export_files("high-confidence-cats/", signal="file")
Data curation with a local AI model
Batch inference with a simple sentiment model using the transformers library:
The code below downloads files the cloud, and applies a user-defined function
to each one of them. All files with a positive sentiment
detected are then copied to the local directory.
from transformers import pipeline
from datachain import DataChain, Column
classifier = pipeline("sentiment-analysis", device="cpu",
model="distilbert/distilbert-base-uncased-finetuned-sst-2-english")
def is_positive_dialogue_ending(file) -> bool:
dialogue_ending = file.read()[-512:]
return classifier(dialogue_ending)[0]["label"] == "POSITIVE"
chain = (
DataChain.from_storage("gs://datachain-demo/chatbot-KiT/",
object_name="file", type="text")
.settings(parallel=8, cache=True)
.map(is_positive=is_positive_dialogue_ending)
.save("file_response")
)
positive_chain = chain.filter(Column("is_positive") == True)
positive_chain.export_files("./output")
print(f"{positive_chain.count()} files were exported")
13 files were exported
$ ls output/datachain-demo/chatbot-KiT/
15.txt 20.txt 24.txt 27.txt 28.txt 29.txt 33.txt 37.txt 38.txt 43.txt ...
$ ls output/datachain-demo/chatbot-KiT/ | wc -l
13
LLM judging chatbots
LLMs can work as universal classifiers. In the example below,
we employ a free API from Mistral to judge the publicly available chatbot dialogs. Please get a free
Mistral API key at https://console.mistral.ai
$ pip install mistralai (Requires version >=1.0.0)
$ export MISTRAL_API_KEY=_your_key_
DataChain can parallelize API calls; the free Mistral tier supports up to 4 requests at the same time.
from mistralai import Mistral
from datachain import File, DataChain, Column
PROMPT = "Was this dialog successful? Answer in a single word: Success or Failure."
def eval_dialogue(file: File) -> bool:
client = Mistral()
response = client.chat.complete(
model="open-mixtral-8x22b",
messages=[{"role": "system", "content": PROMPT},
{"role": "user", "content": file.read()}])
result = response.choices[0].message.content
return result.lower().startswith("success")
chain = (
DataChain.from_storage("gs://datachain-demo/chatbot-KiT/", object_name="file")
.settings(parallel=4, cache=True)
.map(is_success=eval_dialogue)
.save("mistral_files")
)
successful_chain = chain.filter(Column("is_success") == True)
successful_chain.export_files("./output_mistral")
print(f"{successful_chain.count()} files were exported")
With the instruction above, the Mistral model considers 31/50 files to hold the successful dialogues:
$ ls output_mistral/datachain-demo/chatbot-KiT/
1.txt 15.txt 18.txt 2.txt 22.txt 25.txt 28.txt 33.txt 37.txt 4.txt 41.txt ...
$ ls output_mistral/datachain-demo/chatbot-KiT/ | wc -l
31
Serializing Python-objects
LLM responses may contain valuable information for analytics – such as the number of tokens used, or the
model performance parameters.
Instead of extracting this information from the Mistral response data structure (class
ChatCompletionResponse), DataChain can serialize the entire LLM response to the internal DB:
from mistralai import Mistral
from mistralai.models import ChatCompletionResponse
from datachain import File, DataChain, Column
PROMPT = "Was this dialog successful? Answer in a single word: Success or Failure."
def eval_dialog(file: File) -> ChatCompletionResponse:
client = MistralClient()
return client.chat(
model="open-mixtral-8x22b",
messages=[{"role": "system", "content": PROMPT},
{"role": "user", "content": file.read()}])
chain = (
DataChain.from_storage("gs://datachain-demo/chatbot-KiT/", object_name="file")
.settings(parallel=4, cache=True)
.map(response=eval_dialog)
.map(status=lambda response: response.choices[0].message.content.lower()[:7])
.save("response")
)
chain.select("file.name", "status", "response.usage").show(5)
success_rate = chain.filter(Column("status") == "success").count() / chain.count()
print(f"{100*success_rate:.1f}% dialogs were successful")
Output:
file status response response response
name usage usage usage
prompt_tokens total_tokens completion_tokens
0 1.txt success 547 548 1
1 10.txt failure 3576 3578 2
2 11.txt failure 626 628 2
3 12.txt failure 1144 1182 38
4 13.txt success 1100 1101 1
[Limited by 5 rows]
64.0% dialogs were successful
Iterating over Python data structures
In the previous examples, datasets were saved in the embedded database
(SQLite in folder .datachain of the working directory).
These datasets were automatically versioned, and can be accessed using
DataChain.from_dataset("dataset_name").
Here is how to retrieve a saved dataset and iterate over the objects:
chain = DataChain.from_dataset("response")

# Iterating one-by-one: supports out-of-memory workflows
for file, response in chain.limit(5).collect("file", "response"):
    # verify the collected Python objects
    assert isinstance(response, ChatCompletionResponse)

    status = response.choices[0].message.content[:7]
    tokens = response.usage.total_tokens
    print(f"{file.get_uri()}: {status}, file size: {file.size}, tokens: {tokens}")
Output:
gs://datachain-demo/chatbot-KiT/1.txt: Success, file size: 1776, tokens: 548
gs://datachain-demo/chatbot-KiT/10.txt: Failure, file size: 11576, tokens: 3578
gs://datachain-demo/chatbot-KiT/11.txt: Failure, file size: 2045, tokens: 628
gs://datachain-demo/chatbot-KiT/12.txt: Failure, file size: 3833, tokens: 1207
gs://datachain-demo/chatbot-KiT/13.txt: Success, file size: 3657, tokens: 1101
Vectorized analytics over Python objects
Some operations can run inside the DB without deserialization.
For instance, let's calculate the total cost of using the LLM APIs, assuming the Mixtral call costs $2 per 1M input tokens and $6 per 1M output tokens:
chain = DataChain.from_dataset("response")
cost = chain.sum("response.usage.prompt_tokens")*0.000002 \
    + chain.sum("response.usage.completion_tokens")*0.000006
print(f"Spent ${cost:.2f} on {chain.count()} calls")
Output:
PyTorch data loader
Chain results can be exported or passed directly to PyTorch dataloader.
For example, if we are interested in passing image and a label based on file
name suffix, the following code will do it:
from torch.utils.data import DataLoader
from transformers import CLIPProcessor
from datachain import C, DataChain
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
chain = (
    DataChain.from_storage("gs://datachain-demo/dogs-and-cats/", type="image")
    .map(label=lambda name: name.split(".")[0], params=["file.name"])
    .select("file", "label").to_pytorch(
        transform=processor.image_processor,
        tokenizer=processor.tokenizer,
    )
)
loader = DataLoader(chain, batch_size=1)
Tutorials
Getting Started
Multimodal (try in Colab)
LLM evaluations (try in Colab)
Reading JSON metadata (try in Colab)
Contributions
Contributions are very welcome.
To learn more, see the Contributor Guide.
Community and Support
Docs
File an issue if you encounter any problems
Discord Chat
Email
Twitter
| 2024-11-07T23:29:24 | en | train |
42,043,951 | chrisjj | 2024-11-04T17:35:22 | Financial institutions told to get their house in order before next CrowdStrike | null | https://www.theregister.com/2024/11/02/fca_it_resilience/ | 14 | 3 | [
42043952,
42044730
] | null | null | no_error | Financial institutions told to get their house in order before the next CrowdStrike strikes | 2024-11-02T09:30:08Z | Connor Jones |
The UK's finance regulator is urging all institutions under its remit to better prepare for IT meltdowns like that of CrowdStrike in July.
The Financial Conduct Authority (FCA) said issues at unregulated third parties were the leading cause of operational disruption within Blighty's financial institutions between 2022 and 2023.
Many major organizations were affected to varying degrees by CrowdStrike's software cockup over the summer, including some of the world's leading banks and trading houses.
JPMorgan Chase's trade execution systems were reportedly affected, some Bloomberg terminals were rendered inaccessible, the London Stock Exchange was hit, and ION Group, UBS, CMC Markets, and others also all reported issues.
"These outages emphasize firms' increasing dependence on unregulated third parties to deliver important business services," the FCA said in a statement. "This highlights the importance of firms continuing to become operationally resilient in line with our rules.
"We encourage all firms, regardless of how they were affected by the CrowdStrike incident, to consider these lessons, to improve their ability to respond to and recover from future disruptions."
For those of you who somehow missed out on what will be remembered as one of the defining IT events of 2024, back in July, CrowdStrike pushed a now-infamous channel file update to its Falcon EDR platform. That update contained a critical logic error, causing Falcon to crash so hard that Windows did too, displaying blue screens of death on 8.5 million PCs worldwide. A bad time was had by many trying to fix this.
Soon, many financial institutions in the UK will be forced by the FCA to become resilient to these kinds of events. The regulator's rules (PS21/3) governing third-party events like CrowdStrike's, requiring in-scope organizations to implement robust business continuity measures that mitigate the worst impacts of incidents like IT outages, came into force in March 2022. The deadline to become compliant – March 2025 – is fast approaching.
The FCA said those who had already met the requirements of PS21/3 demonstrated the best response to the CrowdStrike outage. They were able to effectively prioritize which systems to bring back online first, minimizing the operational impact on the business and wider market, as well as consult prepared incident response and communications plans.
If they mapped their systems and third-party relationships, organizations demonstrated a stronger ability to manage their exposure to limit the overall impact of the incident.
From a technical perspective, some affected institutions were forced to identify single points of failure in their tech stacks and make changes accordingly. For example, some sought alternative products or operating systems, while others decided to review their change management processes relating to software updates.
The FCA urged all regulated organizations to ensure their update-testing procedures were up to scratch and amend them where necessary so any faults can be contained more easily. This especially applies to institutions whose services are relied upon by other key players in the industry.
Delta officially launches lawyers at $500M CrowdStrike problem
CrowdStrike's Blue Screen blunder: Could eBPF have saved the day?
CrowdStrike apologizes to Congress for 'perfect storm' that caused global IT outage
1 in 10 orgs dumping their security vendors after CrowdStrike outage
Other recommendations included preparing external comms templates, such as website banners so all customers and stakeholders are comprehensively informed about any issues in a timely manner. Plus, the usual incident response preparations you'd typically expect any organization to have in place.
Despite the widespread impact on financial markets, the institutions involved largely got on with things and recovered relatively quickly. Little fuss has been made of the incident since.
The same can't be said for Delta Air Lines, however, which recently launched legal proceedings against CrowdStrike, looking to recoup at least some of the circa $500 million in revenue it claims to have lost thanks to the outage.
Delta faced significant challenges, taking longer than most to return to service. It blamed CrowdStrike and Microsoft, and in response they pointed the finger straight back, saying the airline refused their offers of free technical support.
CrowdStrike also alleged Delta was running on aging IT equipment, a major factor in why it took so long to recover.
Shortly after Delta filed its lawsuit against the cybersecurity company, CrowdStrike itself launched a counter-suit alleging "Delta's own negligence" led to the issues it faced. ®
| 2024-11-08T12:08:54 | en | train |
42,043,962 | skydowx | 2024-11-04T17:36:04 | Sabine Hossenfelder – The crisis in physics is real: Science is failing [video] | null | https://www.youtube.com/watch?v=HQVF0Yu7X24 | 5 | 0 | [
42044117
] | null | null | null | null | null | null | null | null | null | train |
42,043,967 | amrrs | 2024-11-04T17:36:17 | A CC-By Open-Source TTS Model with Voice Cloning | null | https://huggingface.co/OuteAI/OuteTTS-0.1-350M | 16 | 0 | null | null | null | missing_parsing | OuteAI/OuteTTS-0.1-350M · Hugging Face | null | null |
Model Description
OuteTTS-0.1-350M is a novel text-to-speech synthesis model that leverages pure language modeling without external adapters or complex architectures. Built upon the LLaMa architecture using our Oute3-350M-DEV base model, it demonstrates that high-quality speech synthesis is achievable through a straightforward approach using crafted prompts and audio tokens.
Pure language modeling approach to TTS
Voice cloning capabilities
LLaMa architecture
Compatible with llama.cpp and GGUF format
Technical Details
The model utilizes a three-step approach to audio processing:
Audio tokenization using WavTokenizer (processing 75 tokens per second)
CTC forced alignment for precise word-to-audio token mapping
Structured prompt creation following the format:
[full transcription]
[word] [duration token] [audio tokens]
Technical Blog
https://www.outeai.com/blog/OuteTTS-0.1-350M
Limitations
Being an experimental v0.1 release, there are some known issues:
Vocabulary constraints due to training data limitations
String-only input support
Given its compact 350M parameter size, the model may frequently alter, insert, or omit words, leading to variations in output quality.
Variable temperature sensitivity depending on use case
Performs best with shorter sentences, as accuracy may decrease with longer inputs
Speech Samples
Listen to samples generated by OuteTTS-0.1-350M:
Input
Audio
Notes
Hello, I can speak pretty well, but sometimes I make some mistakes.
Your browser does not support the audio element.
Your browser does not support the audio element.
(temperature=0.1, repetition_penalty=1.1)
Once upon a time, there was a
Your browser does not support the audio element.
(temperature=0.1, repetition_penalty=1.1)
Scientists have discovered a new planet that may be capable of supporting life!
Your browser does not support the audio element.
Using the Q4_K_M quantized model. (temperature=0.7, repetition_penalty=1.1)
Scientists have discovered a new planet that may be capable of supporting life!
Your browser does not support the audio element.
The model partially failed to follow the input text. (temperature=0.1, repetition_penalty=1.1)
Scientists have discovered a new planet that may be capable of supporting life!
Your browser does not support the audio element.
In this case, changing to a higher temperature from 0.1 to 0.7 produces more consistent output. (temperature=0.7, repetition_penalty=1.1)
Installation
pip install outetts
Usage
Interface Usage
from outetts.v0_1.interface import InterfaceHF, InterfaceGGUF
# Initialize the interface with the Hugging Face model
interface = InterfaceHF("OuteAI/OuteTTS-0.1-350M")
# Or initialize the interface with a GGUF model
# interface = InterfaceGGUF("path/to/model.gguf")
# Generate TTS output
# Without a speaker reference, the model generates speech with random speaker characteristics
output = interface.generate(
    text="Hello, am I working?",
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096
)
# Play the generated audio
output.play()
# Save the generated audio to a file
output.save("output.wav")
Voice Cloning
# Create a custom speaker from an audio file
speaker = interface.create_speaker(
    "path/to/reference.wav",
    "reference text matching the audio"
)
# Generate TTS with the custom voice
output = interface.generate(
    text="This is a cloned voice speaking",
    speaker=speaker,
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096
)
Model Details
Model Type: LLaMa-based language model
Size: 350M parameters
Language Support: English
License: CC BY 4.0
Speech Datasets Used:
LibriTTS-R (CC BY 4.0)
Multilingual LibriSpeech (MLS) (CC BY 4.0)
Future Improvements
Scaling up parameters and training data
Exploring alternative alignment methods for better character compatibility
Potential expansion into speech-to-speech assistant models
Credits
WavTokenizer: https://github.com/jishengpeng/WavTokenizer
CTC Forced Alignment: https://pytorch.org/audio/stable/tutorials/ctc_forced_alignment_api_tutorial.html
Disclaimer
By using this model, you acknowledge that you understand and assume the risks associated with its use.
You are solely responsible for ensuring compliance with all applicable laws and regulations.
We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages.
We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.
| 2024-11-08T20:38:32 | null | train |
42,044,021 | caleb_thompson | 2024-11-04T17:41:35 | Swift-Format GitHub Action | null | https://calebhearth.com/swift-format-github-action | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,059 | atlasunshrugged | 2024-11-04T17:44:42 | Open Source AI Can Help America Lead in AI and Strengthen Global Security | null | https://about.fb.com/news/2024/11/open-source-ai-america-global-security/ | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,060 | gmays | 2024-11-04T17:44:44 | Why did it take so long to find giant squids? [video] | null | https://www.ted.com/talks/anna_rothschild_why_did_it_take_so_long_to_find_giant_squids | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,064 | chriscbr | 2024-11-04T17:45:00 | Building thread-safe abstractions in Java versus Go | null | https://rybicki.io/blog/2024/11/03/multithreaded-code-java-golang.html | 2 | 0 | [
42044104
] | null | null | no_error | Building thread-safe abstractions in Java versus Go | 2024-11-03T00:00:00+00:00 | Chris Rybicki |
A visualization of the concurrency manager subsystem in SimpleDB, relevant for the code in the second half of the post.
Recently, I started studying about how databases are implemented by reading the book Database Design and Implementation (2020).
I’m only part ways through it, but the text provides a solid walkthrough of the subsystems that go into a traditional relational database, from memory and recovery management to query processing and planning; and I’m enjoying my time going through it.
Something I especially appreciate is that the text includes a toy database implementation named SimpleDB (consisting of 12KLOC and 150+ Java classes) that demonstrates the all of the major pieces of a database system with working tests.
To build a working understanding and get more comfortable with Go (the systems programming language from Google), I decided to try my hand at porting the database implementation from Java to Go.
While the languages differ in a number of major ways – for example, Java lacks pointers and is more object-oriented than Go – they also share a lot of features, like garbage collection and strong typing; so I felt like it wouldn’t be too hard to translate the code.
In this post, I’ll dive into how I translated some pieces of the SimpleDB implementation from Java into Go, and how I navigated the differences between the concurrency and synchronization primitives the languages provide.
Part 1: Building a File Manager
First, I’ll start with a simple case where translating thread-safe Java code into Go isn’t too hard.
SimpleDB stores all data on the file system at the end of the day - but we want to: (1) minimize file operations, and (2) avoid multiple threads performing conflicting reads and writes to underlying files.
To achieve this, it has a class named FileMgr for providing access to files.
Here’s an abbreviated Java implementation of the class:
package simpledb.file;
import java.io.*;
import java.nio.*;
import java.util.*;
public class FileMgr {
private Map<String,RandomAccessFile> openFiles = new HashMap<>();
public synchronized void read(String filename, ByteBuffer b, int offset) {
try {
RandomAccessFile f = getFile(filename);
f.seek(offset);
f.getChannel().read(b);
} catch (IOException e) {
throw new RuntimeException("cannot read file " + filename);
}
}
public synchronized void write(String filename, ByteBuffer b, int offset) {
try {
RandomAccessFile f = getFile(filename);
f.seek(offset);
f.getChannel().write(b);
} catch (IOException e) {
throw new RuntimeException("cannot write file" + filename);
}
}
public int length(String filename) {
try {
RandomAccessFile f = getFile(filename);
return (int)(f.length());
} catch (IOException e) {
throw new RuntimeException("cannot access " + filename);
}
}
private RandomAccessFile getFile(String filename) throws IOException {
RandomAccessFile f = openFiles.get(filename);
if (f == null) {
File dbTable = new File(filename);
f = new RandomAccessFile(dbTable, "rws");
openFiles.put(filename, f);
}
return f;
}
}
The class internally stores a map of file names to open files, and provides a handful of public methods like read(), write(), and length() for reading and writing to those files.
(In the actual implementation, there are a few more options and methods, but I’ve tried to simplify the example).
To synchronize access and prevent threads from performing conflicting writes or reads, many of the methods using the open files are marked as synchronized.
In Java, the synchronized keyword ensures that it’s not possible for two invocations of the methods on the same object to interleave:
“When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.” - The Java Documentation
By synchronizing access to the openFiles map, the implementation avoids possible interleavings like the following:
Thread A calls read("file1", buffer1, 0)
Thread A runs f = getFile(..) and f.seek(0)
Thread B calls write("file1", buffer2, 40)
Thread B runs f = getFile(..) and f.seek(40) (referring to the same file as thread A)
Thread B runs f.getChannel().write(b) and exits the method
Thread A runs f.getChannel().read(b) and exits the method
In the above example, thread A intended to read bytes from the beginning of the file, but the way the operations interleave causes it to end up
The Go version of FileManager
This kind of concurrency control, where you just want to ensure sections of code aren’t run at the same time, is easy to implement in Go.
At least, it’s common enough that it’s shown in the main Go tutorial.
The trick is to use a sync.Mutex (or sync.RWMutex) to guard the critical section(s):
package main
import (
"fmt"
"io"
"os"
"sync"
)
type FileMgr struct {
openFiles map[string]*os.File
mu sync.RWMutex
}
func NewFileMgr() *FileMgr {
return &FileMgr{
openFiles: make(map[string]*os.File),
}
}
func (fm *FileMgr) Close() {
fm.mu.Lock()
defer fm.mu.Unlock()
for _, f := range fm.openFiles {
f.Close()
}
fm.openFiles = nil
}
func (fm *FileMgr) Read(filename string, buf []byte, offset int64) error {
f, err := fm.getFile(filename)
if err != nil {
return err
}
if _, err = f.ReadAt(buf, offset); err != nil {
if err != io.EOF {
return fmt.Errorf("cannot read file %s: %w", filename, err)
}
}
return nil
}
func (fm *FileMgr) Write(filename string, buf []byte, offset int64) error {
f, err := fm.getFile(filename)
if err != nil {
return err
}
if _, err := f.WriteAt(buf, offset); err != nil {
return fmt.Errorf("cannot write file %s: %w", filename, err)
}
return nil
}
func (fm *FileMgr) Length(filename string) (int64, error) {
f, err := fm.getFile(filename)
if err != nil {
return 0, err
}
info, err := f.Stat()
if err != nil {
return 0, fmt.Errorf("cannot stat file %s: %w", filename, err)
}
return info.Size(), nil
}
func (fm *FileMgr) getFile(filename string) (*os.File, error) {
fm.mu.RLock()
f, ok := fm.openFiles[filename]
fm.mu.RUnlock()
if ok {
return f, nil
}
// If file isn't open, acquire write lock
fm.mu.Lock()
defer fm.mu.Unlock()
// Double-check after acquiring write lock
if f, ok := fm.openFiles[filename]; ok {
return f, nil
}
f, err := os.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_SYNC, 0644)
if err != nil {
return nil, fmt.Errorf("cannot open file %s: %w", filename, err)
}
fm.openFiles[filename] = f
return f, nil
}
In the Go translation of FileMgr, I’ve moved where some of the synchronization happens so it’s just happening inside the public Close() function and private getFile() function.
We call mu.Lock() to obtain an exclusive lock, and mu.Unlock() to release it. (The defer keyword tells Go to run the Unlock() call whenever the function exits).
This ensures that the openFiles map is never accessed by multiple goroutines at the same time.1
A goroutines is Go’s version of a thread.
We don’t have to do anything to synchronize access to the individual files because the Go os.File type is already designed to be safe for concurrent use by multiple goroutines.
Part 2: Building a Lock Table
Now, let’s dive into a more complex example.
To set the stage, recall that SimpleDB is modeled after a traditional relational database.
To support multiple concurrent transactions that perform a mix of reads and writes, SimpleDB has a dedicated subsystem called a concurrency manager which is responsible for ensuring transactions appear to work as if they’re isolated from each other.
For example, if one transaction updates a row in the database but that transaction hasn’t been committed, other transactions shouldn’t be able to see that partial change; this kind of error would be called a dirty read.
Some other examples of data inconsistencies that can occur are non-repeatable reads and phantom reads.
The concurrency manager prevents conflicting data accesses through a form of pessimistic concurrency control - and the main data structure it uses is a lock table.
A lock table keeps track of which pieces of data have been locked by transactions (i.e. given access exclusive) for reading or writing.
For this blog post, I’m going to assume we’re locking access to files.
(But the lock table could also be used to lock access to blocks, which are fixed-sized file parts, or even individual rows in a table).
When a transaction wants to read a row from the table, it first identifies which file the table’s data is in, and requests a shared lock (or SLock) for that file from the lock table.
When a transaction wants to write a row to the table, it again identifies which file the table’s data is in, and requests an exclusive lock (or XLock) for that file from the lock table.
The difference between a shared lock and an exclusive lock is that multiple transactions can hold shared locks on the same file without conflict, but an exclusive lock can only be held by a single transaction at a time.
It’s not allowed for shared and exclusive locks to be held on the same file at the same time.
So, what does the LockTable implementation look like in practice?
Well, feast your eyes on this Java class:
package simpledb.tx.concurrency;
import java.util.*;
class LockTable {
private static final long MAX_TIME = 10000; // 10 seconds
private Map<String,Integer> locks = new HashMap<String,Integer>();
public synchronized void sLock(String filename) {
try {
long timestamp = System.currentTimeMillis();
while (hasXlock(filename) && !waitingTooLong(timestamp)) {
wait(MAX_TIME);
}
if (hasXlock(filename)) {
throw new LockAbortException();
}
int val = getLockVal(filename); // will not be negative
locks.put(filename, val+1);
} catch(InterruptedException e) {
throw new LockAbortException();
}
}
synchronized void xLock(String filename) {
try {
long timestamp = System.currentTimeMillis();
while (hasOtherLocks(filename) && !waitingTooLong(timestamp)) {
wait(MAX_TIME);
}
if (hasOtherLocks(filename)) {
throw new LockAbortException();
}
locks.put(filename, -1);
} catch(InterruptedException e) {
throw new LockAbortException();
}
}
synchronized void unlock(String filename) {
int val = getLockVal(filename);
if (val > 1) {
locks.put(filename, val-1);
} else {
locks.remove(filename);
notifyAll();
}
}
private boolean hasXlock(String filename) {
return getLockVal(filename) < 0;
}
private boolean hasOtherLocks(String filename) {
return getLockVal(filename) != 0;
}
private boolean waitingTooLong(long starttime) {
return System.currentTimeMillis() - starttime > MAX_TIME;
}
private int getLockVal(String filename) {
Integer ival = locks.get(filename);
return (ival == null) ? 0 : ival.intValue();
}
}
Not too bad, right?
The class stores a mapping of files to numbers inside a class field named locks.
Here’s what the numbers mean:
If a file’s number is 0, there are no locks on it.
If a file’s number is -1, there’s an exclusive lock on it.
If a file’s number is positive, there’s that many shared locks on it.
When a caller tries to obtain one of the locks, if it’s not possible to obtain the lock, the implementation tells the thread to wait().2
For example, if a transaction is trying to obtain an XLock for file A, but another transaction has already has an SLock on file A, then the transaction must wait until the SLock on file A is released.
The wait() method is passed a timeout value, so the thread will only wait for a limited amount of time before giving up and throwing an exception.
If you glance over to the unlock() method implementation, you’ll see that when a file is unlocked, it notifies all waiting threads via notifyAll().34
How would we implement this kind of data structure (with the thread waiting and resuming functionality) in Go?
Go attempt 1: sync.Cond
One idea I had was to use a type from the Go standard library: sync.Cond.
This type represents a conditional variable, and methods like Wait() and Broadcast() can be used to suspend the calling goroutine or wake any waiting goroutines, respectively.
Unfortunately, sync.Cond didn’t seem like the right tool for the job.
The main issue was that there’s no simple way to wait for the condition variable with a timeout.
We need a timeout mechanism so that even if two or more transactions are deadlocked, we can return control back to the clients connecting to the database.
In this GitHub issue, the Go maintainers advise that condition variables are “generally not the right thing to use in Go”, and channels should be used instead.
An alternative approach might have been to use sync.Cond but change the broader implementation strategy so that the LockTable immediately returns an error if it can’t acquire a lock, rather than making the caller wait.
But I think this might come with its own trade-offs, and I wanted to keep the implementation as close to the Java version as possible, so I decided to try something different.
Go attempt 2: synchronization using channels
It turns out that channels are super powerful in Go5, and they offer a lot of flexibility for scheduling goroutines.
If you’re not familiar with Go, channels are a first class datatype for sending and receiving values between goroutines.
In our case, we can use channels purely as a way to synchronize behavior between goroutines.
The strategy I employed was to create a channel for each file that can be locked.
Suppose a goroutine wants to acquire its own exclusive lock by calling XLock() and another goroutine already holds a lock (shared or exclusive) on the file.
When the file is unlocked, the channel can be closed, which will trigger all goroutines waiting on the channel to resume.6
Thus, the goroutine that called XLock() will wake up and try to acquire the lock again.
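Closing a channel is effectively Go's broadcast: every goroutine blocked receiving from it resumes at once, which is what makes it a stand-in for Java's notifyAll(). A self-contained demonstration (again, a sketch of the idiom rather than SimpleDB code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// wakeAll parks n goroutines on a single channel, closes it,
// and reports how many were woken. close reaches all of them;
// a plain send would wake exactly one receiver.
func wakeAll(n int) int64 {
	ch := make(chan struct{})
	var woken int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-ch // receive succeeds once ch is closed
			atomic.AddInt64(&woken, 1)
		}()
	}
	close(ch) // the "notifyAll()" moment
	wg.Wait()
	return woken
}

func main() {
	fmt.Println(wakeAll(8)) // 8
}
```

Note that a receive from an already-closed channel succeeds immediately, so even goroutines that start after the close still "wake" — convenient here, since the lock table deletes and recreates channels under its mutex.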
Here’s the implementation:
package main

import (
    "errors"
    "sync"
    "time"
)

const maxWaitTime = 10 * time.Second

type LockTable struct {
    mu      sync.Mutex
    locks   map[string]int
    waiters map[string]chan struct{}
}

func NewLockTable() *LockTable {
    return &LockTable{
        locks:   make(map[string]int),
        waiters: make(map[string]chan struct{}),
    }
}

func (lt *LockTable) SLock(filename string) error {
    lt.mu.Lock()
    start := time.Now()
    // While an XLock is still held on this file...
    for lt.locks[filename] == -1 {
        ch := lt.getOrCreateWaitChannel(filename)
        lt.mu.Unlock()
        if time.Since(start) > maxWaitTime {
            return errors.New("lock abort error")
        }
        // Wait on the channel with a timeout
        select {
        case <-ch:
            // Continue when the lock is released
        case <-time.After(maxWaitTime):
            return errors.New("lock abort error")
        }
        lt.mu.Lock()
    }
    val := lt.locks[filename] // will not be negative
    lt.locks[filename] = val + 1
    lt.mu.Unlock()
    return nil
}

func (lt *LockTable) XLock(filename string) error {
    lt.mu.Lock()
    start := time.Now()
    // While any lock is still held on this file...
    for lt.locks[filename] != 0 {
        ch := lt.getOrCreateWaitChannel(filename)
        lt.mu.Unlock()
        if time.Since(start) > maxWaitTime {
            return errors.New("lock abort error")
        }
        // Wait on the channel with a timeout
        select {
        case <-ch:
            // Continue when the lock is released
        case <-time.After(maxWaitTime):
            return errors.New("lock abort error")
        }
        lt.mu.Lock()
    }
    lt.locks[filename] = -1
    lt.mu.Unlock()
    return nil
}

func (lt *LockTable) Unlock(filename string) {
    lt.mu.Lock()
    defer lt.mu.Unlock()
    val := lt.locks[filename]
    if val > 1 {
        lt.locks[filename] = val - 1
    } else {
        delete(lt.locks, filename)
    }
    // Signal all goroutines waiting for this file (and remove the channel)
    if ch, exists := lt.waiters[filename]; exists {
        close(ch)
        delete(lt.waiters, filename)
    }
}

func (lt *LockTable) getOrCreateWaitChannel(filename string) chan struct{} {
    if ch, exists := lt.waiters[filename]; exists {
        return ch
    }
    ch := make(chan struct{})
    lt.waiters[filename] = ch
    return ch
}
Synchronization within the XLock() and SLock() methods is a little more involved, because we want to ensure the locks map is only accessed when the current goroutine has exclusive access to it – but we can’t hold onto the lock while we’re waiting for one of the channels.
So the implementation has to do a little bit of work to unlock and re-lock during the loop to keep everything working smoothly.
The most important lines are here:
select {
case <-ch:
    // Continue when the lock is released
case <-time.After(maxWaitTime):
    return errors.New("lock abort error")
}
The select statement tells the current goroutine to wait until one of the conditions is ready:
case <-ch: will run when the channel is closed (or when a message is sent to the channel)
case <-time.After(maxWaitTime): will run once maxWaitTime (10 seconds) has elapsed
By doing this, we can bound the amount of time we’re waiting for a lock, and return an error if we can’t acquire it.
I think it’s pretty cool how channels can be used to build up abstractions like this.
Testing out the solution
When you’re building algorithms that deal with concurrency, you really have to test out your code to see whether it works.
Here’s some code I wrote to stress test the lock table.
It spins up a hundred goroutines, each of which creates a transaction that acquires and releases a single shared or exclusive lock:
// Note: in addition to the imports shown earlier, this snippet needs
// "fmt", "log" and "math/rand".

// Simulate a transaction trying to acquire and release locks
func simulateTransaction(id int, lt *LockTable, filename string, wg *sync.WaitGroup) {
    defer wg.Done()

    // Randomly decide whether to request a shared or exclusive lock
    if rand.Intn(2) == 0 {
        // Try acquiring a shared lock
        log.Printf("Transaction %d: Trying to acquire SLock on %v\n", id, filename)
        err := lt.SLock(filename)
        if err != nil {
            log.Printf("Transaction %d: Failed to acquire SLock on %v: %v\n", id, filename, err)
            return
        }
        log.Printf("Transaction %d: Acquired SLock on %v\n", id, filename)

        // Simulate work with the lock
        time.Sleep(time.Duration(rand.Intn(20)) * time.Millisecond)

        // Release the lock
        lt.Unlock(filename)
        log.Printf("Transaction %d: Released SLock on %v\n", id, filename)
    } else {
        // Try acquiring an exclusive lock
        log.Printf("Transaction %d: Trying to acquire XLock on %v\n", id, filename)
        err := lt.XLock(filename)
        if err != nil {
            log.Printf("Transaction %d: Failed to acquire XLock on %v: %v\n", id, filename, err)
            return
        }
        log.Printf("Transaction %d: Acquired XLock on %v\n", id, filename)

        // Simulate work with the lock
        time.Sleep(time.Duration(rand.Intn(20)) * time.Millisecond)

        // Release the lock
        lt.Unlock(filename)
        log.Printf("Transaction %d: Released XLock on %v\n", id, filename)
    }
}

func main() {
    log.SetFlags(log.Ltime | log.Lmicroseconds)
    rand.Seed(time.Now().UnixNano())

    lt := NewLockTable()
    filenames := []string{
        "file1",
        "file2",
        "file3",
    }

    var wg sync.WaitGroup
    numTransactions := 100

    // Spin up a bunch of transactions
    for i := 1; i <= numTransactions; i++ {
        wg.Add(1)
        blk := filenames[rand.Intn(len(filenames))]
        go simulateTransaction(i, lt, blk, &wg)

        // Spread out the transactions to better simulate a real workload
        time.Sleep(time.Duration(rand.Intn(5)) * time.Millisecond)
    }

    wg.Wait()
    fmt.Println("Done!")
}
In practice, transactions in the database system may acquire locks on multiple files, which can lead to complex deadlock scenarios. But this simplified stress test is still useful for validating the happy path: shared and exclusive access is granted correctly, and no goroutines hang due to bugs.
Here’s the output from one of the runs:
16:57:40.674684 Transaction 1: Trying to acquire SLock on file1
16:57:40.674936 Transaction 1: Acquired SLock on file1
16:57:40.677057 Transaction 2: Trying to acquire XLock on file1
16:57:40.679339 Transaction 3: Trying to acquire SLock on file3
16:57:40.679398 Transaction 3: Acquired SLock on file3
16:57:40.680637 Transaction 4: Trying to acquire XLock on file1
16:57:40.683899 Transaction 6: Trying to acquire SLock on file3
16:57:40.683966 Transaction 5: Trying to acquire SLock on file3
16:57:40.684032 Transaction 6: Acquired SLock on file3
16:57:40.684039 Transaction 5: Acquired SLock on file3
16:57:40.684044 Transaction 5: Released SLock on file3
16:57:40.686260 Transaction 7: Trying to acquire SLock on file2
16:57:40.686301 Transaction 7: Acquired SLock on file2
16:57:40.690894 Transaction 8: Trying to acquire XLock on file3
16:57:40.692564 Transaction 7: Released SLock on file2
16:57:40.692986 Transaction 9: Trying to acquire SLock on file3
16:57:40.693007 Transaction 9: Acquired SLock on file3
16:57:40.694066 Transaction 2: Acquired XLock on file1
16:57:40.694125 Transaction 1: Released SLock on file1
16:57:40.695186 Transaction 6: Released SLock on file3
16:57:40.695317 Transaction 10: Trying to acquire SLock on file1
16:57:40.696549 Transaction 12: Trying to acquire SLock on file3
16:57:40.696570 Transaction 3: Released SLock on file3
16:57:40.696593 Transaction 12: Acquired SLock on file3
16:57:40.696596 Transaction 11: Trying to acquire SLock on file1
16:57:40.699798 Transaction 13: Trying to acquire SLock on file2
16:57:40.699839 Transaction 13: Acquired SLock on file2
16:57:40.700768 Transaction 12: Released SLock on file3
16:57:40.700820 Transaction 14: Trying to acquire XLock on file2
16:57:40.705107 Transaction 16: Trying to acquire XLock on file2
16:57:40.705121 Transaction 15: Trying to acquire XLock on file3
16:57:40.708359 Transaction 13: Released SLock on file2
16:57:40.708505 Transaction 14: Acquired XLock on file2
16:57:40.708521 Transaction 2: Released XLock on file1
16:57:40.708529 Transaction 4: Acquired XLock on file1
16:57:40.709304 Transaction 18: Trying to acquire XLock on file2
16:57:40.709327 Transaction 17: Trying to acquire SLock on file2
16:57:40.712449 Transaction 9: Released SLock on file3
16:57:40.712507 Transaction 15: Acquired XLock on file3
16:57:40.713599 Transaction 19: Trying to acquire XLock on file3
16:57:40.716664 Transaction 14: Released XLock on file2
16:57:40.716767 Transaction 16: Acquired XLock on file2
16:57:40.717847 Transaction 20: Trying to acquire XLock on file1
16:57:40.719105 Transaction 21: Trying to acquire SLock on file1
16:57:40.722334 Transaction 22: Trying to acquire XLock on file3
16:57:40.723546 Transaction 23: Trying to acquire XLock on file1
16:57:40.723659 Transaction 15: Released XLock on file3
16:57:40.723684 Transaction 22: Acquired XLock on file3
16:57:40.725051 Transaction 19: Acquired XLock on file3
16:57:40.725122 Transaction 22: Released XLock on file3
16:57:40.725700 Transaction 4: Released XLock on file1
16:57:40.725732 Transaction 11: Acquired SLock on file1
16:57:40.725757 Transaction 21: Acquired SLock on file1
16:57:40.725809 Transaction 10: Acquired SLock on file1
16:57:40.725848 Transaction 24: Trying to acquire XLock on file3
16:57:40.727133 Transaction 25: Trying to acquire XLock on file3
16:57:40.728080 Transaction 11: Released SLock on file1
16:57:40.729095 Transaction 16: Released XLock on file2
16:57:40.729134 Transaction 17: Acquired SLock on file2
16:57:40.730392 Transaction 26: Trying to acquire XLock on file1
16:57:40.732181 Transaction 21: Released SLock on file1
16:57:40.734725 Transaction 28: Trying to acquire XLock on file2
16:57:40.734757 Transaction 27: Trying to acquire XLock on file1
16:57:40.738251 Transaction 29: Trying to acquire SLock on file1
16:57:40.738287 Transaction 10: Released SLock on file1
16:57:40.738307 Transaction 23: Acquired XLock on file1
16:57:40.739340 Transaction 17: Released SLock on file2
16:57:40.739379 Transaction 18: Acquired XLock on file2
16:57:40.741456 Transaction 19: Released XLock on file3
16:57:40.741524 Transaction 8: Acquired XLock on file3
16:57:40.741597 Transaction 30: Trying to acquire XLock on file3
16:57:40.743704 Transaction 18: Released XLock on file2
16:57:40.743753 Transaction 32: Trying to acquire SLock on file2
16:57:40.743785 Transaction 28: Acquired XLock on file2
16:57:40.743811 Transaction 31: Trying to acquire SLock on file3
16:57:40.747316 Transaction 28: Released XLock on file2
16:57:40.747352 Transaction 32: Acquired SLock on file2
16:57:40.747863 Transaction 33: Trying to acquire SLock on file2
16:57:40.747902 Transaction 33: Acquired SLock on file2
16:57:40.749088 Transaction 34: Trying to acquire SLock on file2
16:57:40.749260 Transaction 34: Acquired SLock on file2
16:57:40.749491 Transaction 23: Released XLock on file1
16:57:40.749519 Transaction 27: Acquired XLock on file1
16:57:40.750012 Transaction 33: Released SLock on file2
16:57:40.751480 Transaction 34: Released SLock on file2
16:57:40.752213 Transaction 35: Trying to acquire XLock on file1
16:57:40.752407 Transaction 32: Released SLock on file2
16:57:40.756748 Transaction 37: Trying to acquire SLock on file3
16:57:40.756810 Transaction 36: Trying to acquire XLock on file1
16:57:40.759961 Transaction 8: Released XLock on file3
16:57:40.759988 Transaction 25: Acquired XLock on file3
16:57:40.760884 Transaction 38: Trying to acquire SLock on file3
16:57:40.761704 Transaction 27: Released XLock on file1
16:57:40.761721 Transaction 35: Acquired XLock on file1
16:57:40.762935 Transaction 39: Trying to acquire XLock on file3
16:57:40.765408 Transaction 41: Trying to acquire XLock on file3
16:57:40.765424 Transaction 40: Trying to acquire SLock on file1
16:57:40.765469 Transaction 38: Acquired SLock on file3
16:57:40.765469 Transaction 25: Released XLock on file3
16:57:40.765493 Transaction 31: Acquired SLock on file3
16:57:40.765495 Transaction 37: Acquired SLock on file3
16:57:40.766625 Transaction 42: Trying to acquire SLock on file2
16:57:40.766659 Transaction 42: Acquired SLock on file2
16:57:40.766795 Transaction 35: Released XLock on file1
16:57:40.766810 Transaction 36: Acquired XLock on file1
16:57:40.768930 Transaction 43: Trying to acquire XLock on file1
16:57:40.768911 Transaction 44: Trying to acquire SLock on file3
16:57:40.768977 Transaction 44: Acquired SLock on file3
16:57:40.771104 Transaction 36: Released XLock on file1
16:57:40.771126 Transaction 40: Acquired SLock on file1
16:57:40.771138 Transaction 29: Acquired SLock on file1
16:57:40.771587 Transaction 38: Released SLock on file3
16:57:40.772723 Transaction 42: Released SLock on file2
16:57:40.772962 Transaction 45: Trying to acquire SLock on file1
16:57:40.772983 Transaction 45: Acquired SLock on file1
16:57:40.772987 Transaction 45: Released SLock on file1
16:57:40.777529 Transaction 46: Trying to acquire XLock on file3
16:57:40.778095 Transaction 44: Released SLock on file3
16:57:40.778593 Transaction 31: Released SLock on file3
16:57:40.780857 Transaction 47: Trying to acquire XLock on file3
16:57:40.783901 Transaction 37: Released SLock on file3
16:57:40.783939 Transaction 39: Acquired XLock on file3
16:57:40.785097 Transaction 48: Trying to acquire SLock on file1
16:57:40.785149 Transaction 48: Acquired SLock on file1
16:57:40.785188 Transaction 29: Released SLock on file1
16:57:40.786138 Transaction 49: Trying to acquire SLock on file3
16:57:40.787325 Transaction 50: Trying to acquire XLock on file3
16:57:40.787376 Transaction 40: Released SLock on file1
16:57:40.795835 Transaction 39: Released XLock on file3
16:57:40.795931 Transaction 41: Acquired XLock on file3
16:57:40.798671 Transaction 48: Released SLock on file1
16:57:40.798710 Transaction 43: Acquired XLock on file1
16:57:40.806266 Transaction 41: Released XLock on file3
16:57:40.806295 Transaction 49: Acquired SLock on file3
16:57:40.813819 Transaction 49: Released SLock on file3
16:57:40.813873 Transaction 50: Acquired XLock on file3
16:57:40.815948 Transaction 43: Released XLock on file1
16:57:40.815980 Transaction 26: Acquired XLock on file1
16:57:40.817133 Transaction 26: Released XLock on file1
16:57:40.817174 Transaction 20: Acquired XLock on file1
16:57:40.827463 Transaction 50: Released XLock on file3
16:57:40.827472 Transaction 30: Acquired XLock on file3
16:57:40.836338 Transaction 20: Released XLock on file1
16:57:40.846663 Transaction 30: Released XLock on file3
16:57:40.846683 Transaction 47: Acquired XLock on file3
16:57:40.853782 Transaction 47: Released XLock on file3
16:57:40.853797 Transaction 24: Acquired XLock on file3
16:57:40.863729 Transaction 24: Released XLock on file3
16:57:40.863756 Transaction 46: Acquired XLock on file3
16:57:40.875885 Transaction 46: Released XLock on file3
Done!
If you look at the lines pertaining to any individual file, you'll see that the sequence of operations performed on it satisfies the invariants we expect: an SLock and an XLock are never held on a file at the same time, and at most one XLock is held on a file at any given moment.
If you were designing a database for specific use cases (like read-heavy or write-heavy workloads), I imagine you could adapt this kind of stress test to compare how different locking strategies improve performance.
| 2024-11-07T20:02:07 | en | train |
42,044,067 | geox | 2024-11-04T17:45:14 | Musk PAC tells judge $1M voter sweepstakes winners not chosen by chance | null | https://www.nbcphiladelphia.com/news/local/musk-pac-tells-philly-judge-1m-sweepstakes-winners-not-chosen-by-chance/4017920/ | 13 | 0 | [
42044092
] | null | null | null | null | null | null | null | null | null | train |
42,044,089 | bmdsxl | 2024-11-04T17:46:52 | Show HN: I made a web browser game with AI integrated NPC's | As of today the game's demo is officially live and playable at <a href="https://www.letsmaketv.com" rel="nofollow">https://www.letsmaketv.com</a><p>It's a web browser game, no sign in or downloads required!<p>The game uses fine-tuned OpenAI models to simulate the personalities of six different NPC's, each with their own unique training data and personalities.<p>You can chat with the AI's about anything, become friends or enemies with them and find out their secrets. The dialogue is surprisingly funny sometimes but it's still just very much a proof of concept.<p>There are also a few minigames you can play with the AI too.<p>I'll leave it there an let the game speak for itself. I would love it if you played and shared your feedback! I plan to keep working on this game and evolve it into something much broader but for the time being I wanted to share this big milestone!<p>Let me know what you think! | https://www.letsmaketv.com | 2 | 2 | [
42047081,
42044518
] | null | null | null | null | null | null | null | null | null | train |
42,044,096 | Corrado | 2024-11-04T17:47:20 | Open-Letter-to-Amazon-October-2024 [pdf] | null | https://blueduckcap.com/wp-content/uploads/2024/10/Open-Letter-to-Amazon-October-2024-.pdf | 3 | 1 | [
42044147,
42044112
] | null | null | null | null | null | null | null | null | null | train |
42,044,100 | amrrs | 2024-11-04T17:47:43 | Meta's Plan for Nuclear-Powered AI Data Centre Thwarted by Rare Bees | null | https://tech.slashdot.org/story/24/11/04/1356218/metas-plan-for-nuclear-powered-ai-data-centre-thwarted-by-rare-bees | 3 | 1 | [
42044177,
42044109
] | null | null | null | null | null | null | null | null | null | train |
42,044,128 | 0xlogk | 2024-11-04T17:49:04 | HFT Traders Dust Off 19th Century Tool in Search of Market Edge | null | https://www.datacenterknowledge.com/networking/hft-traders-dust-off-19th-century-tool-in-search-of-market-edge | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,162 | null | 2024-11-04T17:52:22 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,044,180 | rntn | 2024-11-04T17:54:02 | Why experimental variation in neuroimaging should be embraced | null | https://www.nature.com/articles/s41467-024-53743-y | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,185 | rbanffy | 2024-11-04T17:54:13 | Watching an American Election from Across the Pond | null | https://www.newyorker.com/news/letter-from-the-uk/watching-an-american-election-from-across-the-pond | 2 | 0 | [
42044212
] | null | null | null | null | null | null | null | null | null | train |
42,044,188 | segasaturn | 2024-11-04T17:54:22 | Half of US adults exposed to harmful lead levels as kids (2022) | null | https://apnews.com/article/science-health-environment-and-nature-centers-for-disease-control-and-prevention-bec63d5a6e98f952ad6d111c90e5a1b2 | 9 | 2 | [
42044316,
42048620,
42044314
] | null | null | null | null | null | null | null | null | null | train |
42,044,191 | prydt | 2024-11-04T17:54:49 | LazyLog: A New Shared Log Abstraction for Low-Latency Applications [pdf] | null | https://dassl-uiuc.github.io/pdfs/papers/lazylog.pdf | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,202 | PaulHoule | 2024-11-04T17:55:22 | VibeCheck: Discover and Quantify Qualitative Differences in LLMs | null | https://arxiv.org/abs/2410.12851 | 1 | 0 | null | null | null | no_error | VibeCheck: Discover and Quantify Qualitative Differences in Large Language Models | null | [Submitted on 10 Oct 2024 (v1), last revised 28 Oct 2024 (this version, v3)] |
View PDF
HTML (experimental)
Abstract:Large language models (LLMs) often exhibit subtle yet distinctive characteristics in their outputs that users intuitively recognize, but struggle to quantify. These "vibes" -- such as tone, formatting, or writing style -- influence user preferences, yet traditional evaluations focus primarily on the singular axis of correctness. We introduce VibeCheck, a system for automatically comparing a pair of LLMs by discovering identifying traits of a model (vibes) that are well-defined, differentiating, and user-aligned. VibeCheck iteratively discovers vibes from model outputs and then utilizes a panel of LLM judges to quantitatively measure the utility of each vibe. We validate that the vibes generated by VibeCheck align with those found in human discovery and run VibeCheck on pairwise preference data from real-world user conversations with Llama-3-70b vs GPT-4. VibeCheck reveals that Llama has a friendly, funny, and somewhat controversial vibe. These vibes predict model identity with 80% accuracy and human preference with 61% accuracy. Lastly, we run VibeCheck on a variety of models and tasks including summarization, math, and captioning to provide insight into differences in model behavior. VibeCheck discovers vibes like Command X prefers to add concrete intros and conclusions when summarizing in comparison to TNGL, Llama-405b often overexplains its thought process on math problems compared to GPT-4o, and GPT-4 prefers to focus on the mood and emotions of the scene when captioning compared to Gemini-1.5-Flash. Code can be found at this https URL
Submission history From: Lisa Dunlap [view email] [v1]
Thu, 10 Oct 2024 17:59:17 UTC (16,391 KB)
[v2]
Thu, 24 Oct 2024 20:01:12 UTC (16,385 KB)
[v3]
Mon, 28 Oct 2024 06:11:31 UTC (16,383 KB)
| 2024-11-08T06:48:53 | en | train |
42,044,205 | kingdompetshop | 2024-11-04T17:55:34 | null | null | null | 1 | null | [
42044206
] | null | true | null | null | null | null | null | null | null | train |
42,044,217 | speckx | 2024-11-04T17:57:01 | Hosting Static Content with Pico.sh | null | https://eklausmeier.goip.de/blog/2024/11-03-hosting-static-content-with-pico-sh | 4 | 0 | [
42044308
] | null | null | null | null | null | null | null | null | null | train |
42,044,222 | RyeCombinator | 2024-11-04T17:57:39 | Leveling up the 1Password Developer experience | null | https://blog.1password.com/new-developer-experience/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,258 | jmsflknr | 2024-11-04T18:00:53 | Red Sea Is Now So Dangerous Even NATO Warships Are Avoiding It | null | https://gcaptain.com/red-sea-is-now-so-dangerous-even-nato-warships-are-avoiding-it/ | 10 | 0 | [
42044301
] | null | null | null | null | null | null | null | null | null | train |
42,044,270 | archagon | 2024-11-04T18:01:59 | Elon Musk's 'crazy' plan to rip $2T out of America | null | https://www.telegraph.co.uk/business/2024/11/04/what-muskconomics-would-mean-for-donald-trumps-america/ | 16 | 13 | [
42044900,
42045449,
42049990,
42044664,
42044435,
42044785
] | null | null | null | null | null | null | null | null | null | train |
42,044,281 | paulpauper | 2024-11-04T18:02:51 | You don't have to endorse anyone, you can just vote | null | https://www.theintrinsicperspective.com/p/you-dont-have-to-endorse-anyone-you | 2 | 0 | [
42044322
] | null | null | null | null | null | null | null | null | null | train |
42,044,305 | paulpauper | 2024-11-04T18:05:19 | I got dysentery so you don't have to | null | https://www.lesswrong.com/posts/inHiHHGs6YqtvyeKp/i-got-dysentery-so-you-don-t-have-to | 1 | 1 | [
42044386,
42044313
] | null | null | null | null | null | null | null | null | null | train |
42,044,310 | paulpauper | 2024-11-04T18:05:37 | A bird's eye view of ARC's research | null | https://www.lesswrong.com/posts/ztokaf9harKTmRcn4/a-bird-s-eye-view-of-arc-s-research | 3 | 1 | [
42044517
] | null | null | null | null | null | null | null | null | null | train |
42,044,315 | birriel | 2024-11-04T18:06:11 | AI That Can Invent AI Is Coming. Buckle Up. | null | https://www.forbes.com/sites/robtoews/2024/11/03/ai-that-can-invent-ai-is-coming-buckle-up/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,328 | aquastorm | 2024-11-04T18:07:10 | Bad software keeps cyber security companies in business | null | https://www.dogesec.com/blog/bad_software_keeps_security_industry_in_business/ | 38 | 11 | [
42044628,
42044939,
42044818,
42044635,
42051680,
42044761,
42044624,
42051623,
42044774,
42044634,
42044698,
42045102
] | null | null | no_error | Bad Software Keeps Cyber Security Companies in Business | 2024-10-28T00:00:00+00:00 | DOGESEC |
If you are reading this blog post via a 3rd party source it is very likely that many parts of it will not render correctly (usually, the interactive graphs). Please view the post on dogesec.com for the full interactive viewing experience.
tl;dr
Despite countless frameworks, best practices, blog posts… so many developers still hardcode credentials into their code.
Key Findings
37,439 CVEs were published from October 2023 through September 2024
35,346 CWEs were assigned to those CVEs (520 unique CWEs in total)
CWE-79 Improper Neutralization of Input During Web Page Generation (‘Cross-site Scripting’) was the most reported weakness (in 6,006 of the CVEs, 17% of all CWEs)
Basic weaknesses like CWE-532: Insertion of Sensitive Information into Log File (247 CVEs, 0.7%), CWE-798: Use of Hard-coded Credentials (213 CVEs, 0.6%), and CWE-306: Missing Authentication for Critical Function (208 CVEs, 0.6%) also ranked high on the list.
Overview
Many of those on the vendor side of cyber-security will often joke that insecure software keeps the majority of the industry in business.
There is some truth in that statement: we can't expect perfect software, and there need to be checks in place across the industry (including responsible disclosure).
In a previous post I looked at some interesting data points about CVEs over the course of the last 25 years. This time around I wanted to take a look at some of the common weakness categories for published vulnerabilities over the last year (October 2023 through September 2024).
Follow along
If you’d like to follow along with the searches used in this post, follow the instructions described in that previous post to import the data.
Once you’ve done that, there’s one final stix2arango command you need to run;
python3 utilities/arango_cti_processor/insert_archive_cwe.py \
--versions 4_15 \
--database cti_knowledge_base_store
Analysis
An overview of the data
RETURN LENGTH(
  FOR doc IN nvd_cve_vertex_collection
    FILTER doc.type == "vulnerability"
    FILTER DATE_TIMESTAMP(doc.created) >= DATE_TIMESTAMP("2023-10-01T00:00:00Z")
    FILTER DATE_TIMESTAMP(doc.created) <= DATE_TIMESTAMP("2024-09-30T00:00:00Z")
    RETURN doc
)
In total, 37,439 CVEs were published over this period.
RETURN (
  FOR doc IN nvd_cve_vertex_collection
    FILTER doc.type == "vulnerability"
    FILTER DATE_TIMESTAMP(doc.created) >= DATE_TIMESTAMP("2023-10-01T00:00:00Z")
    FILTER DATE_TIMESTAMP(doc.created) <= DATE_TIMESTAMP("2024-09-30T00:00:00Z")
    COLLECT monthYear = CONCAT(DATE_YEAR(doc.created), "-",
      (DATE_MONTH(doc.created) < 10 ? "0" : ""),
      DATE_MONTH(doc.created)) WITH COUNT INTO count
    SORT monthYear
    RETURN {
      monthYear: monthYear,
      count: count
    }
)
Most common weaknesses
When considering the following numbers, keep in mind that not all CVEs have CWEs assigned. Sometimes that is because the NVD has yet to analyse the CVE (a common problem given the NVD's backlog). Some CVEs are also assigned NVD-CWE-noinfo, which means the NVD had insufficient information to assign a CWE when analysing the CVE.
For those that do have CWEs assigned;
// Step 1: Build a lookup table for CWE IDs and names from `mitre_cwe_vertex_collection`
LET cwe_lookup = (
  FOR cwe_doc IN mitre_cwe_vertex_collection
    FILTER TO_BOOL(cwe_doc.external_references)
    FOR cwe_ref IN cwe_doc.external_references
      FILTER cwe_ref.source_name == "cwe"
      RETURN { cwe_id: cwe_ref.external_id, name: cwe_doc.name }
)

// Step 2: Use the lookup table to map CWE names to IDs in `nvd_cve_vertex_collection`
FOR doc IN nvd_cve_vertex_collection
  FILTER doc.type == "vulnerability"
  FILTER DATE_TIMESTAMP(doc.created) >= DATE_TIMESTAMP("2023-10-01T00:00:00Z")
  FILTER DATE_TIMESTAMP(doc.created) <= DATE_TIMESTAMP("2024-09-30T00:00:00Z")
  FILTER TO_BOOL(doc.external_references)

  // Extract and match CWE IDs using the lookup table
  FOR ref IN doc.external_references
    FILTER ref.source_name == "cwe"
    LET current_cwe_id = ref.external_id

    // Get the CWE name from the lookup table
    LET cwe_name = FIRST(FOR cwe IN cwe_lookup FILTER cwe.cwe_id == current_cwe_id RETURN cwe.name)

    COLLECT cwe_id = current_cwe_id, name = cwe_name WITH COUNT INTO count
    FILTER cwe_id != "NVD-CWE-noinfo" // Exclude "NVD-CWE-noinfo"
    SORT count DESC
    RETURN {
      cwe_id: cwe_id,
      name: name,
      count: count
    }
A total of 35,346 CWEs were assigned to those CVEs.
Here are the top 40;
cwe_id  | name                                                                                               | count
CWE-79  | Improper Neutralization of Input During Web Page Generation (‘Cross-site Scripting’)               | 6006
CWE-89  | Improper Neutralization of Special Elements used in an SQL Command (‘SQL Injection’)               | 2644
CWE-352 | Cross-Site Request Forgery (CSRF)                                                                  | 1615
CWE-787 | Out-of-bounds Write                                                                                | 1491
CWE-862 | Missing Authorization                                                                              | 1091
CWE-22  | Improper Limitation of a Pathname to a Restricted Directory (‘Path Traversal’)                     | 1028
CWE-416 | Use After Free                                                                                     | 1013
CWE-125 | Out-of-bounds Read                                                                                 | 902
CWE-121 | Stack-based Buffer Overflow                                                                        | 857
CWE-78  | Improper Neutralization of Special Elements used in an OS Command (‘OS Command Injection’)         | 845
CWE-200 | Exposure of Sensitive Information to an Unauthorized Actor                                         | 775
CWE-20  | Improper Input Validation                                                                          | 768
CWE-434 | Unrestricted Upload of File with Dangerous Type                                                    | 719
CWE-284 | Improper Access Control                                                                            | 660
CWE-120 | Buffer Copy without Checking Size of Input (‘Classic Buffer Overflow’)                             | 615
CWE-476 | NULL Pointer Dereference                                                                           | 584
CWE-94  | Improper Control of Generation of Code (‘Code Injection’)                                          | 569
CWE-269 | Improper Privilege Management                                                                      | 499
CWE-77  | Improper Neutralization of Special Elements used in a Command (‘Command Injection’)                | 494
CWE-400 | Uncontrolled Resource Consumption                                                                  | 441
CWE-122 | Heap-based Buffer Overflow                                                                         | 426
CWE-918 | Server-Side Request Forgery (SSRF)                                                                 | 408
CWE-287 | Improper Authentication                                                                            | 405
CWE-502 | Deserialization of Untrusted Data                                                                  | 384
CWE-190 | Integer Overflow or Wraparound                                                                     | 312
CWE-863 | Incorrect Authorization                                                                            | 297
CWE-119 | Improper Restriction of Operations within the Bounds of a Memory Buffer                            | 283
CWE-639 | Authorization Bypass Through User-Controlled Key                                                   | 250
CWE-532 | Insertion of Sensitive Information into Log File                                                   | 247
CWE-798 | Use of Hard-coded Credentials                                                                      | 213
CWE-306 | Missing Authentication for Critical Function                                                       | 208
CWE-601 | URL Redirection to Untrusted Site (‘Open Redirect’)                                                | 205
CWE-427 | Uncontrolled Search Path Element                                                                   | 198
CWE-770 | Allocation of Resources Without Limits or Throttling                                               | 184
CWE-276 | Incorrect Default Permissions                                                                      | 174
CWE-401 | Missing Release of Memory after Effective Lifetime                                                 | 165
CWE-74  | Improper Neutralization of Special Elements in Output Used by a Downstream Component (‘Injection’) | 153
CWE-362 | Concurrent Execution using Shared Resource with Improper Synchronization (‘Race Condition’)        | 148
CWE-732 | Incorrect Permission Assignment for Critical Resource                                              | 135
CWE-59  | Improper Link Resolution Before File Access (‘Link Following’)                                     | 133
There are some basic software development errors listed here.
The top two entries, XSS (Cross-Site Scripting) (CWE-79) and SQL injection (CWE-89), are fundamental to web application security and are always covered as part of basic development best practices. Both are common attack vectors and are often included in secure development guidelines like OWASP’s top security risks.
Outside of the top 20, more basic software development errors show up too:
CWE-532: Insertion of Sensitive Information into Log File (247 CVEs)
CWE-798: Use of Hard-coded Credentials (213 CVEs)
CWE-306: Missing Authentication for Critical Function (208 CVEs)
I can use a CWE ID to retrieve the products that have shipped with these weaknesses. Here I use CWE-798 (Use of Hard-coded Credentials).
// Step 1: Collect all vulnerability IDs with CWE-798 within the specified date range
LET vulnerability_ids_with_cwe_798 = (
  FOR vuln_doc IN nvd_cve_vertex_collection
    FILTER vuln_doc.type == "vulnerability"
    FILTER TO_BOOL(vuln_doc.external_references)
    FILTER DATE_TIMESTAMP(vuln_doc.created) >= DATE_TIMESTAMP("2023-10-01T00:00:00Z")
    FILTER DATE_TIMESTAMP(vuln_doc.created) <= DATE_TIMESTAMP("2024-09-30T00:00:00Z")
    // Check for vulnerabilities with CWE-798
    FOR ref IN vuln_doc.external_references
      FILTER ref.source_name == "cwe"
      FILTER ref.external_id == "CWE-798"
      RETURN vuln_doc.id // Collect vulnerability ID directly
)

// Step 2: Collect unique vendor/product pairs from each indicator
LET all_unique_vulnerable_criteria = (
  FOR vuln_id IN vulnerability_ids_with_cwe_798
    // Replace "vulnerability--" with "indicator--" to get the corresponding indicator ID
    LET indicator_id = CONCAT("indicator--", SPLIT(vuln_id, "--")[1])
    // Retrieve the indicator document
    FOR indicator_doc IN nvd_cve_vertex_collection
      FILTER indicator_doc.id == indicator_id
      // Deduplicate vendor/product pairs within each indicator
      LET unique_criteria = UNIQUE(
        FOR vuln IN indicator_doc.x_cpes.vulnerable
          LET parts = SPLIT(vuln.criteria, ":")
          RETURN {
            vendor: parts[3],
            product: parts[4]
          }
      )
      // Return deduplicated vendor/product pairs for this indicator
      FOR criteria IN unique_criteria
        RETURN criteria
)

// Step 3: Count occurrences of each unique vendor/product pair across all indicators
FOR criteria IN all_unique_vulnerable_criteria
  COLLECT vendor = criteria.vendor, product = criteria.product WITH COUNT INTO count
  SORT count DESC
  RETURN {
    vendor: vendor,
    product: product,
    count: count
  }
vendor         | product                                     | count of CVEs with CWE-798
hitron_systems | dvr_hvr-4781_firmware                       | 6
boschrexroth   | ctrlx_hmi_web_panel_wr2115_firmware         | 4
ibm            | security_verify_governance                  | 4
boschrexroth   | ctrlx_hmi_web_panel_wr2110_firmware         | 4
bosch          | nexo-os                                     | 4
boschrexroth   | ctrlx_hmi_web_panel_wr2107_firmware         | 4
hongdian       | h8951-4g-esp_firmware                       | 3
sierrawireless | aleos                                       | 3
autel          | maxicharger_ac_elite_business_c50_firmware  | 2
fedirtsapana   | simple_http_server_plus                     | 2
enbw           | senec_storage_box_firmware                  | 2
zohocorp       | manageengine_ddi_central                    | 2
dell           | e-lab_navigator                             | 2
skoda-auto     | superb_3_firmware                           | 2
kiloview       | p1_firmware                                 | 2
cisco          | emergency_responder                         | 2
machinesense   | feverwarn_firmware                          | 2
estomed        | simple_care                                 | 2
ibm            | merge_efilm_workstation                     | 2
csharp         | cws_collaborative_development_platform      | 2
The count shows distinct CVEs that reference CWE-798 and have the product shown as vulnerable.
The main takeaway from the above table is that this happens to big and small vendors alike (see Cisco and IBM), and more often in firmware, where leaked hard-coded credentials are harder to remediate.
42,044,331 | itarato | 2024-11-04T18:07:22 | Show HN: Simple sliding boxes puzzle game | Wanted to learn some Unity and implemented one of my favourite puzzle: sliding boxes (aka Rush Hour if you have the physical version) with over 2.5 million levels. No ads, data collection, payments, points - just play. It's using the amazing puzzle map database from Michael Fogleman: <a href="https://www.michaelfogleman.com/rush/#DatabaseDownload" rel="nofollow">https://www.michaelfogleman.com/rush/#DatabaseDownload</a>. The source code is open source at: <a href="https://github.com/itarato/slider/">https://github.com/itarato/slider/</a>. | https://play.google.com/store/apps/details?id=com.PeterArato.GemSlide&hl=en_US | 2 | 4 | [
42044727
] | null | null | no_error | GemSlide - Apps on Google Play | null | null | About this gameA no-nonsense sliding box puzzle using Michael Fogleman's level database from https://www.michaelfogleman.com/rush/.You are given a 6 by 6 grid with vertical and horizontal boxes. The goal is to move the dedicated white box out of the 6x6 zone.Levels are grouped by their minimum-step requirement.Data safetySafety starts with understanding how developers collect and share your data. Data privacy and security practices may vary based on your use, region, and age. The developer provided this information and may update it over time.No data shared with third partiesLearn more about how developers declare sharingNo data collectedLearn more about how developers declare collectionCommitted to follow the Play Families PolicyWhat's new | 2024-11-08T05:04:49 | en | train |
42,044,332 | JumpCrisscross | 2024-11-04T18:07:24 | Right to Repair: McFlurries and Apples | null | https://gadallon.substack.com/p/right-to-repair-mcflurries-and-apples | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,341 | todsacerdoti | 2024-11-04T18:07:56 | Why GCP Is More Usable for Developers | null | https://tonym.us/why-gcp-is-more-usable-for-developers.html | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,348 | paulpauper | 2024-11-04T18:08:36 | Ex150-11 review: weight stable for 30 days | null | https://www.exfatloss.com/p/ex150-11-review-weight-stable-for | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,365 | Bslou | 2024-11-04T18:10:12 | Show HN: A free futuristic subscription platform for X users | null | https://xclusive.app | 2 | 2 | [
42046175,
42044531
] | null | null | no_error | Aftermarket.com | The domain xclusive.app is for sale! | null | null |
xclusive.app is for sale!
$4,999 USD
————— OR —————
Every great idea deserves a great domain. Establish your brand by investing in a quality domain name.
Listed By
Premium Domains
Make an Offer
Please complete this form to contact the owner.
Name *
Email *
Phone
Offer *
$
USD
Minimum Offer$100 USD
Message
Captcha *
Your offer has been sent!
We have forwarded your details to the seller.
Domain Parking by Aftermarket.com
| 2024-11-08T12:58:52 | en | train |
42,044,369 | gaguinaga2000 | 2024-11-04T18:10:22 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,044,380 | mindcrime | 2024-11-04T18:11:32 | Pedro – A [Prolog] subscription/notification communications system | null | https://staff.itee.uq.edu.au/pjr/HomePages/PedroHome.html | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,387 | null | 2024-11-04T18:12:12 | null | null | null | null | null | [
42044388
] | [
"true"
] | null | null | null | null | null | null | null | null | train |
42,044,389 | null | 2024-11-04T18:12:12 | null | null | null | null | null | null | [
"true"
] | null | null | null | null | null | null | null | null | train |
42,044,390 | withinboredom | 2024-11-04T18:12:29 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,044,399 | mrabdurakhimov | 2024-11-04T18:13:31 | The Chess Analogy or How Important Is Trust? | null | https://blog.usuf.dev/the-chess-analogy-or-how-important-is-trust | 1 | 1 | [
42044400
] | null | null | null | null | null | null | null | null | null | train |
42,044,403 | toomuchtodo | 2024-11-04T18:13:43 | Grindr Illegally Used RTO to Thwart Union, Forced Out 1/2 of Staff, NLRB Alleges | null | https://www.bloomberg.com/news/articles/2024-11-04/grindr-rto-plan-that-caused-80-terminations-was-illegally-imposed-nlrb-alleges | 92 | 33 | [
42044859,
42044404,
42045195,
42044951,
42045093,
42044853
] | null | null | null | null | null | null | null | null | null | train |
42,044,420 | smooke | 2024-11-04T18:15:32 | Could AI make data science obsolete? | null | https://www.zdnet.com/article/could-ai-make-data-science-obsolete/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,426 | ossusermivami | 2024-11-04T18:16:19 | Measuring keyboard-to-photon latency with a light sensor (2023) | null | https://thume.ca/2020/05/20/making-a-latency-tester/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,453 | OSINTTeam | 2024-11-04T18:18:34 | null | null | null | 1 | null | [
42044454
] | null | true | null | null | null | null | null | null | null | train |
42,044,455 | jslakro | 2024-11-04T18:18:36 | Video Game or Videogame? (2022) | null | https://www.videogamecanon.com/adventurelog/video-game-or-videogame-an-answer-to-the-most-important-question-of-our-time/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,465 | wut42 | 2024-11-04T18:19:26 | Claude 3.5 Haiku is now available | null | https://twitter.com/alexalbert__/status/1853498517094072783 | 10 | 0 | [
42045096
] | null | null | null | null | null | null | null | null | null | train |
42,044,467 | ryscrilla | 2024-11-04T18:19:33 | Open Letter to Amazon – October-2024 | null | https://blueduckcap.com/wp-content/uploads/2024/10/Open-Letter-to-Amazon-October-2024-.pdf?ck_subscriber_id=705630790 | 2 | 1 | [
42044478
] | null | null | null | null | null | null | null | null | null | train |
42,044,476 | emk_709 | 2024-11-04T18:20:24 | "Mantis Framework" poisons, traps hackers' AI agents in a tarpit | null | https://www.thestack.technology/mantis-framework-poisons-traps-hackers-ai-agents-in-a-tarpit/ | 3 | 0 | [
42044477
] | null | null | no_error | Mantis Framework poisons hackers' AI agents | 2024-11-04T18:30:00.000Z | Edward Targett |
A new framework, Mantis, lets cybersecurity professionals automate counter-offensive actions against any AI agents attacking their systems.The new open-source toolkit shows how defenders can use prompt injection attacks to take over systems hosting a malicious agent.Alternatively, they can soak up attackers' AI resources in an “agent tarpit” that traps the LLM agent in an infinite filesystem exploration loop*. "The attacker is driven into a fake and dynamically created filesystem with a directory tree of infinite depth and is asked/forced to traverse it indefinitely."The Mantis** framework is the creation of three Red Team security researchers and academics associated with George Mason University. It effectively generates honeypots or decoys designed to counter-attack LLM agents activated against them, using various prompt injections.AI versus AIDario Pasquini, Evgenios M. Kornaropoulos, and Giuseppe Ateniese say once deployed, Mantis “operates autonomously, orchestrating countermeasures…through a suite of decoy services…such as fake FTP servers and compromised-looking web applications [to] entrap LLM agents by mimicking exploitable features and common attack vectors. It can then counter-attack, with "prompt injection[s] inserted in…a way that [is] invisible to a human operator that loads the decoy’s response. We achieve this by using ANSI escape sequences and HTML comment tags.”Mantis can be customized to employ... dynamically tailored execution triggers specific to the attacking LLM agent. To achieve this, Mantis can use fingerprinting tools like LLMmap to identify the LLM version used by the attacking agent based on current interactions. 
Once identified, methods like NeuralExec [pdf] can then generate customized execution triggers[Mantis aims to] leverage the agent’s tool-access capabilities, such as terminal access, to manipulate it into executing unsafe commands that compromise the machine on which it is running [for example to] initiate a reverse shell connection to the attacker’s machine. Due to the limited robustness of LLMs, this strategy can be implemented relatively easily – Pasquini et al.In an October 28 arXiv paper they claimed that Mantis "consistently achieved over 95% effectiveness against automated LLM-driven attacks", showcasing a range of successful prompt injection counter-attacks.The framework, provided as a Python package, is a response to a) The susceptibility of AI agents to prompt injection attacks; b) The nascent use by threat actors of LLM agents to support automated exploitation.Somewhere, an overheating GPU sucked up vital electricity from the grid to help us generate this image as the planet overheated and extreme weather events proliferated. We're sorry.Big Sleep finds vulnerabilities: Don't nap on thisIt was released as Google's Project Zero said that its "Big Sleep" LLM agent had autonomously identified an exploitable stack-based buffer overflow in the SQLite open source database engine, which fuzzing had not identified.We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software.That vulnerability (patched before the code was made public) "remained undiscovered after 150 CPU-hours of fuzzing" Google's researchers said. 
OpenAI and Microsoft wrote earlier in 2024 meanwhile that they had disrupted attempted "malicious uses of AI by state-affiliated threat actors".They wrote: "Previous red team assessments we conducted in partnership with external cybersecurity experts...found that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools."See also: No LLMs aren’t about to “autonomously” hack your companyBut Mantis's release comes as Red Teams say that LLMs are increasingly helpful in offensive cyber-operations, with bespoke tools like PentestGPT performing [pdf] performing highly in Capture The Flag tests. Grim-faced security veterans will no doubt decry hype around the use of AI in malicious attacks beyond social engineering, saying that by far the greater risk comes from cretins persistently saving their passwords in plain text on their desktops, failure to deploy MFA, the rampant leaking of credentials, or firewall vendors pushing out products riddled with ancient code, SQL injection vulnerabilities or hard-coded passwords. (Scrutiny of the firmware running on Ivanti devices by Eclypsium earlier this year revealed that its Pulse Secure appliances run on an 11-year-old base OS that is no longer supported and are composed of multiple libraries which are vulnerable to a combined 973 flaws, with 111 having publicly known exploits? "Firewall"? Users seem to certainly get regularly burned.)But to those concerned at the potential for wider deployment of AI agents in offensive cyber activity and thinking about their response, Mantis may just be a lot of fun; just speak to counsel before... deploying in the wild. *Alternative tarpit approaches are available...**MANTIS is a rather creative acronym for “Malicious LLM-Agent Neutralization and Exploitation Through Prompt Injections” See also: Sophos attackers breached intelligence agency, wrote code to survive firmware updates
| 2024-11-08T18:08:13 | en | train |
42,044,494 | antognini | 2024-11-04T18:21:59 | Designing a Home Radio Telescope for 21 Cm Emission | null | https://arxiv.org/abs/2411.00057 | 109 | 25 | [
42045353,
42045241,
42045430,
42045715,
42045426,
42050377,
42045372,
42056019,
42051991
] | null | null | null | null | null | null | null | null | null | train |
42,044,506 | laurex | 2024-11-04T18:23:41 | A Rock-Star Researcher Spun a Web of Lies–and Nearly Got Away with It | null | https://thewalrus.ca/a-rock-star-researcher-spun-a-web-of-lies-and-nearly-got-away-with-it/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,507 | athousandsteps | 2024-11-04T18:23:49 | Free Ways to Improve Customer Experience | null | https://cba-gbl.com/improve-customer-experience-for-free/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,521 | donsupreme | 2024-11-04T18:25:05 | null | null | null | 9 | null | [
42044692,
42045077
] | null | true | null | null | null | null | null | null | null | train |
42,044,542 | phoenixwan | 2024-11-04T18:26:59 | BO6 Terminus Calculator,Help You Save 5k on Beam Smasher Steps | null | https://terminuscalculator.org/ | 1 | 1 | [
42044543
] | null | null | null | null | null | null | null | null | null | train |
42,044,565 | ensaktas | 2024-11-04T18:28:25 | Pure – Landing Page Design Inspiration | null | https://purelanding.page/ | 1 | 0 | [
42044566
] | null | null | null | null | null | null | null | null | null | train |
42,044,576 | herbertl | 2024-11-04T18:29:27 | Is this the perfect city? (2015) | null | https://www.bbc.com/culture/article/20151211-is-this-the-perfect-city | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,578 | PaulHoule | 2024-11-04T18:29:36 | A microscale soft lithium-ion battery for tissue stimulation | null | https://www.nature.com/articles/s44286-024-00136-z | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,617 | jobdevops | 2024-11-04T18:33:12 | Looking for Weekend Jobs? | null | https://workpt.com/weekend-jobs | 3 | 0 | [
42044618,
42045085
] | null | null | null | null | null | null | null | null | null | train |
42,044,621 | antipaul | 2024-11-04T18:33:38 | For better AI, add randomness. And start with a lot of answers | null | https://plpxsk.github.io/2024/10/28/better-ai-requires.html | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,673 | mfiguiere | 2024-11-04T18:38:09 | Intel Releases x86-SIMD-sort 6.0 For Speedy AVX2/AVX-512 Sorting | null | https://www.phoronix.com/news/x86-simd-sort-6.0 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,677 | plurby | 2024-11-04T18:38:30 | StatMuse – Largest AI media company in the world for sports and finance | null | https://www.statmuse.com/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,678 | gajus | 2024-11-04T18:38:35 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,044,688 | abhishaike | 2024-11-04T18:39:25 | Why Recursion Pharmaceuticals abandoned cell painting for brightfield imaging | null | https://www.owlposting.com/p/why-recursion-pharmaceuticals-abandoned | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,694 | ctoth | 2024-11-04T18:39:37 | Influenza Expert Gets Real About the H5N1 Risk to Your Swine Herd | null | https://www.porkbusiness.com/news/influenza-expert-gets-real-about-h5n1-risk-your-swine-herd | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,697 | foweltschmerz | 2024-11-04T18:40:06 | An End-to-End Model with Adaptive Filtering for Retrieval-Augmented Generation | null | https://arxiv.org/abs/2411.00437 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,044,700 | null | 2024-11-04T18:40:21 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,044,703 | null | 2024-11-04T18:40:34 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,044,716 | sandwichsphinx | 2024-11-04T18:41:31 | Analysing diagnosis of ischaemic heart disease with machine learning(1999) | null | https://www.sciencedirect.com/science/article/abs/pii/S0933365798000633 | 2 | 0 | null | null | null | http_other_error | Page restricted | ScienceDirect | null | null |
About ScienceDirect
Shopping cart
Contact and support
Terms and conditions
Privacy policy
Cookies are used by this site. By continuing you agree to the use of cookies.
Copyright © 2024 Elsevier B.V., its licensors, and contributors. All rights are reserved, including those for text and data mining, AI training, and similar technologies. For all open access content, the Creative Commons licensing terms apply.
| 2024-11-08T12:56:18 | null | train |
42,044,719 | fanf2 | 2024-11-04T18:42:02 | Fibonacci hashing: the optimization that the world forgot (or: a better alternat (2018) | null | https://probablydance.com/2018/06/16/fibonacci-hashing-the-optimization-that-the-world-forgot-or-a-better-alternative-to-integer-modulo/ | 2 | 0 | [
42045065
] | null | null | no_error | Fibonacci Hashing: The Optimization that the World Forgot (or: a Better Alternative to Integer Modulo) | 2018-06-16T18:26:17+00:00 | Probably Dance |
I recently posted a blog post about a new hash table, and whenever I do something like that, I learn at least one new thing from my comments. In my last comment section Rich Geldreich talks about his hash table which uses “Fibonacci Hashing”, which I hadn’t heard of before. I have worked a lot on hash tables, so I thought I have at least heard of all the big important tricks and techniques, but I also know that there are so many small tweaks and improvements that you can’t possibly know them all. I thought this might be another neat small trick to add to the collection.
Turns out I was wrong. This is a big one. And everyone should be using it. Hash tables should not be prime number sized and they should not use an integer modulo to map hashes into slots. Fibonacci hashing is just better. Yet somehow nobody is using it and lots of big hash tables (including all the big implementations of std::unordered_map) are much slower than they should be because they don’t use Fibonacci Hashing. So let’s figure this out.
First of all how do we find out what this Fibonacci Hashing is? Rich Geldreich called it “Knuth’s multiplicative method,” but before looking it up in The Art of Computer Programming, I tried googling for it. The top result right now is this page which is old, with a copyright from 1997. Fibonacci Hashing is not mentioned on Wikipedia. You will find a few more pages mentioning it, mostly from universities who present this in their “introduction to hash tables” material.
From that I thought it’s one of those techniques that they teach in university, but that nobody ends up using because it’s actually more expensive for some reason. There are plenty of those in hash tables: Things that get taught because they’re good in theory, but they’re bad in practice so nobody uses them.
Except somehow, on this one, the wires got crossed. Everyone uses the algorithm that’s unnecessarily slow and leads to more problems, and nobody is using the algorithm that’s faster while at the same time being more robust to problematic patterns. Knuth talked about Integer Modulo and about Fibonacci Hashing, and everybody should have taken away from that that they should use Fibonacci Hashing, but they didn’t and everybody uses integer modulo.
Before diving into this, let me just show you the results of a simple benchmark: Looking up items in a hash table:
In this benchmark I’m comparing various unordered_map implementations. I’m measuring their lookup speed when the key is just an integer. On the X-axis is the size of the container, the Y-axis is the time to find one item. To measure that, the benchmark is just spinning in a loop calling find() on this container, and at the end I divide the time that the loop took by the number of iterations in the loop. So on the left hand side, when the table is small enough to fit in cache, lookups are fast. On the right hand side the table is too big to fit in cache and lookups become much slower because we’re getting cache misses for most lookups.
But the main thing I want to draw attention to is the speed of ska::unordered_map, which uses Fibonacci hashing. Otherwise it’s a totally normal implementation of unordered_map: It’s just a vector of linked lists, with every element being stored in a separate heap allocation. On the left hand side, where the table fits in cache, ska::unordered_map can be more than twice as fast as the Dinkumware implementation of std::unordered_map, which is the next fastest implementation. (this is what you get when you use Visual Studio)
So if you use std::unordered_map and look things up in a loop, that loop could be twice as fast if the hash table used Fibonacci hashing instead of integer modulo.
How it works
So let me explain how Fibonacci Hashing works. It’s related to the golden ratio which is related to the Fibonacci numbers, hence the name. One property of the Golden Ratio is that you can use it to subdivide any range roughly evenly without ever looping back to the starting position. What do I mean by subdividing? For example if you want to divide a circle into 8 sections, you can just make each step around the circle be an angle of 360°/8 = 45 degrees. And after eight steps you’ll be back at the start. And for any number of steps you want to take, you can just change the angle to be 360°/n. But what if you don’t know ahead of time how many steps you’re going to take? What if the value is determined by something you don’t control? Like maybe you have a picture of a flower, and you want to implement “every time the user clicks the mouse, add a petal to the flower.” In that case you want to use the golden ratio: Make the angle from one petal to the next 360°/φ ≈ 222.5 degrees and you can loop around the circle forever, adding petals, and the next petal will always fit neatly into the biggest gap and you’ll never loop back to your starting position. Vi Hart has a good video about the topic:
(The video is part two of a three-part series, part one is here)
I knew about that trick because it’s useful in procedural content generation: Any time that you want something to look randomly distributed, but you want to be sure that there are no clusters, you should at least try to see if you can use the golden ratio for that. (if that doesn’t work, Halton Sequences are also worth trying before you try random numbers) But somehow it had never occurred to me to use the same trick for hash tables.
So here’s the idea: Let’s say our hash table is 1024 slots large, and we want to map an arbitrarily large hash value into that range. The first thing we do is we map it using the above trick into the full 64 bit range of numbers. So we multiply the incoming hash value with 2^64/φ ≈ 11400714819323198485. (the number 11400714819323198486 is closer but we don’t want multiples of two because that would throw away one bit) Multiplying with that number will overflow, but just as we wrapped around the circle in the flower example above, this will wrap around the whole 64 bit range in a nice pattern, giving us an even distribution across the whole range from 0 to 2^64 - 1. To illustrate, let’s just look at the upper three bits. So we’ll do this:
size_t fibonacci_hash_3_bits(size_t hash)
{
return (hash * 11400714819323198485llu) >> 61;
}
This will return the upper three bits after doing the multiplication with the magic constant. And we’re looking at just three bits because it’s easy to see how the golden ratio wraparound behaves when we just look at the top three bits. If we pass in some small numbers for the hash value, we get the following results from this:
fibonacci_hash_3_bits(0) == 0
fibonacci_hash_3_bits(1) == 4
fibonacci_hash_3_bits(2) == 1
fibonacci_hash_3_bits(3) == 6
fibonacci_hash_3_bits(4) == 3
fibonacci_hash_3_bits(5) == 0
fibonacci_hash_3_bits(6) == 5
fibonacci_hash_3_bits(7) == 2
fibonacci_hash_3_bits(8) == 7
fibonacci_hash_3_bits(9) == 4
fibonacci_hash_3_bits(10) == 1
fibonacci_hash_3_bits(11) == 6
fibonacci_hash_3_bits(12) == 3
fibonacci_hash_3_bits(13) == 0
fibonacci_hash_3_bits(14) == 5
fibonacci_hash_3_bits(15) == 2
fibonacci_hash_3_bits(16) == 7
This gives a pretty even distribution: The number 0 comes up three times, all other numbers come up twice. And every number is far removed from the previous and the next number. If we increase the input by one, the output jumps around quite a bit. So this is starting to look like a good hash function. And also a good way to map a number from a larger range into the range from 0 to 7.
In fact we already have the whole algorithm right here. All we have to do to get an arbitrary power of two range is to change the shift amount. So if my hash table is size 1024, then instead of just looking at the top 3 bits I want to look at the top 10 bits. So I shift by 54 instead of 61. Easy enough.
Now if you actually run a full hash function analysis on this, you find that it doesn’t make for a great hash function. It’s not terrible, but you will quickly find patterns. But if we make a hash table with a STL-style interface, we don’t control the hash function anyway. The hash function is being provided by the user. So we will just use Fibonacci hashing to map the result of the hash function into the range that we want.
The problems with integer modulo
So why is integer modulo bad anyways? Two reasons: 1. It’s slow. 2. It can be real stupid about patterns in the input data. So first of all how slow is integer modulo? If you’re just doing the straightforward implementation like this:
size_t hash_to_slot(size_t hash, size_t num_slots)
{
return hash % num_slots;
}
Then this is real slow. It takes roughly 9 nanoseconds on my machine. Which, if the hash table is in cache, is about five times longer than the rest of the lookup takes. If you get cache misses then those dominate, but it’s not good that this integer modulo is making our lookups several times slower when the table is in cache. Still the GCC, LLVM and boost implementations of unordered_map use this code to map the hash value to a slot in the table. And they are really slow because of this. The Dinkumware implementation is a little bit smarter: It takes advantage of the fact that when the table is sized to be a power of two, you can do an integer modulo by using a binary and:
size_t hash_to_slot(size_t hash, size_t num_slots_minus_one)
{
return hash & num_slots_minus_one;
}
Which takes roughly 0 nanoseconds on my machine. Since num_slots is a power of two, this just chops off all the upper bits and only keeps the lower bits. So in order to use this you have to be certain that all the important information is in the lower bits. Dinkumware ensures this by using a more complicated hash function than the other implementations use: For integers it uses an FNV1 hash. It’s much faster than an integer modulo, but it still makes your hash table twice as slow as it could be since FNV1 is expensive. And there is a bigger problem: If you provide your own hash function because you want to insert a custom type into the hash table, you have to know about this implementation detail.
We have been bitten by that implementation detail several times at work. For example we had a custom ID type that’s just a wrapper around a 64 bit integer which is composed from several sources of information. And it just so happens that that ID type has really important information in the upper bits. It took surprisingly long until someone noticed that we had a slow hash table in our codebase that could literally be made a hundred times faster just by changing the order of the bits in the hash function, because the integer modulo was chopping off the upper bits.
Other tables, like google::dense_hash_map, also use a power of two hash size to get the fast integer modulo, but it doesn’t provide its own implementation of std::hash<int> (because it can’t), so you have to be real careful about your upper bits when using dense_hash_map.
Talking about google::dense_hash_map, integer modulo brings even more problems with it for open addressing tables it. Because if you store all your data in one array, patterns in the input data suddenly start to matter more. For example google::dense_hash_map gets really, really slow if you ever insert a lot of sequential numbers. Because all those sequential numbers get assigned slots right next to each other, and if you’re then trying to look up a key that’s not in the table, you have to probe through a lot of densely occupied slots before you find your first empty slot. You will never notice this if you only look up keys that are actually in the map, but unsuccessful lookups can be dozens of times slower than they should be.
Despite these flaws, all of the fastest hash table implementations use the “binary and” approach to assign a hash value to a slot. And then you usually try to compensate for the problems by using a more complicated hash function, like FNV1 in the Dinkumware implementation.
Why Fibonacci Hashing is the Solution
Fibonacci hashing solves both of these problems. 1. It’s really fast. It’s an integer multiplication followed by a shift. It takes roughly 1.5 nanoseconds on my machine, which is fast enough that it’s getting real hard to measure. 2. It mixes up input patterns. It’s like you’re getting a second hashing step for free after the first hash function finishes. If the first hash function is actually just the identity function (as it should be for integers) then this gives you at least a little bit of mixing that you wouldn’t otherwise get.
But really it’s better because it’s faster. When I worked on hash tables I was always frustrated by how much time we are spending on the simple problem of “map a large number to a small number.” It’s literally the slowest operation in the hash table. (outside of cache misses of course, but let’s pretend you’re doing several lookups in a row and the table is cached) And the only alternative was the “power of two binary and” version which discards bits from the hash function and can lead to all kinds of problems. So your options are either slow and safe, or fast and losing bits and getting potentially many hash collisions if you’re ever not careful. And everybody had this problem. I googled a lot for this problem thinking “surely somebody must have a good method for bringing a large number into a small range” but everybody was either doing slow or bad things. For example here is an approach (called “fastrange”) that almost re-invents Fibonacci hashing, but it exaggerates patterns where Fibonacci hashing breaks up patterns. It’s the same speed as Fibonacci hashing, but when I tried to use it, it never worked for me because I would suddenly find patterns in my hash function that I wasn’t even aware of. (and with fastrange your subtle patterns suddenly get exaggerated to be huge problems) Despite its problems it is being used in Tensorflow, because everybody is desperate for a faster solution to this problem of mapping a large number into a small range.
If Fibonacci Hashing is so great, why is nobody using it?
That’s a tricky question because there is so little information about Fibonacci hashing on the Internet, but I think it has to do with a historical misunderstanding. In The Art of Computer Programming, Knuth introduces three hash functions to use for hash tables:
Integer Modulo
Fibonacci Hashing
Something related to CRC hashes
The inclusion of the integer modulo in this list is a bit weird from today’s perspective because it’s not much of a hash function. It just maps from a larger range into a smaller range, and doesn’t otherwise do anything. Fibonacci hashing is actually a hash function, not the greatest hash function, but it’s a good introduction. And the third one is too complicated for me to understand. It’s something about coming up with good coefficients for a CRC hash that has certain properties about avoiding collisions in hash tables. Probably very clever, but somebody else has to figure that one out.
So what’s happening here is that Knuth uses the term “hash function” differently than we use it today. Today the steps in a hash table are something like this:
Hash the key
Map the hash value to a slot
Compare the item in the slot
If it’s not the right item, repeat step 3 with a different item until the right one is found or some end condition is met
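Those four steps can be sketched as a toy linear-probing table. This is just an illustration (not the author's flat_hash_map): step 1 uses std::hash, step 2 uses the Fibonacci multiplication, and steps 3 and 4 are the probe loop.

```cpp
#include <cstdint>
#include <cstddef>
#include <functional>
#include <optional>
#include <vector>

// Toy open-addressing table with 2^b slots (no erase, assumes it never fills up).
struct ToyTable
{
    explicit ToyTable(int b) : bits(b), slots(size_t(1) << b) {}

    std::optional<size_t> find(uint64_t key) const
    {
        size_t hash = std::hash<uint64_t>{}(key);                   // step 1
        size_t i = (11400714819323198485llu * hash) >> (64 - bits); // step 2
        for (size_t probes = 0; probes < slots.size(); ++probes)
        {
            if (slots[i] && *slots[i] == key)                       // step 3
                return i;
            if (!slots[i])                                          // empty slot: not present
                return std::nullopt;
            i = (i + 1) & (slots.size() - 1);                       // step 4: next slot
        }
        return std::nullopt;
    }

    void insert(uint64_t key)
    {
        size_t hash = std::hash<uint64_t>{}(key);
        size_t i = (11400714819323198485llu * hash) >> (64 - bits);
        while (slots[i] && *slots[i] != key)
            i = (i + 1) & (slots.size() - 1);
        slots[i] = key;
    }

    int bits;
    std::vector<std::optional<uint64_t>> slots;
};
```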
We use the term “hash function” to refer to step 1. But Knuth uses the term “hash function” to refer to something that does both step 1 and step 2. So when he refers to a hash function, he means something that both hashes the incoming key, and assigns it to a slot in the table. So if the table is only 1024 items large, the hash function can only return a value from 0 to 1023. This explains why “integer modulo” is a hash function for Knuth: It doesn’t do anything in step 1, but it does work well for step 2. So if those two steps were just one step, then integer modulo does a good job at that one step since it does a good job at our step 2. But when we take it apart like that, we’ll see that Fibonacci Hashing is an improvement compared to integer modulo in both steps. And since we’re only using it for step 2, it allows us to use a faster implementation for step 1 because the hash function gets some help from the additional mixing that Fibonacci hashing does.
But this difference in terms, where Knuth uses “hash function” to mean something different than “hash function” means for std::unordered_map, explains to me why nobody is using Fibonacci hashing. When judged as a “hash function” in today’s terms, it’s not that great.
After I found that Fibonacci hashing is not mentioned anywhere, I did more googling and was more successful searching for “multiplicative hashing.” Fibonacci hashing is just a simple multiplicative hash with a well-chosen magic number. But the language that I found describing multiplicative hashing explains why nobody is using this. For example Wikipedia has this to say about multiplicative hashing:
Multiplicative hashing is a simple type of hash function often used by teachers introducing students to hash tables. Multiplicative hash functions are simple and fast, but have higher collision rates in hash tables than more sophisticated hash functions.
So just from that, I certainly don't feel encouraged to check out what this "multiplicative hashing" is. Or to get a feeling for how teachers introduce this, here is MIT instructor Erik Demaine (whose videos I very much recommend) introducing hash functions, and he says this:
I’m going to give you three hash functions. Two of which are, let’s say common practice, and the third of which is actually theoretically good. So the first two are not good theoretically, you can prove that they’re bad, but at least they give you some flavor.
Then he talks about integer modulo, multiplicative hashing, and a combination of the two. He doesn't mention the Fibonacci hashing version of multiplicative hashing, and the introduction probably wouldn't inspire people to go seek out more information about it.
So I think people just learn that multiplicative hashing is not a good hash function, and they never learn that multiplicative hashing is a great way to map large values into a small range.
Of course it could also be that I missed some unknown big downside to Fibonacci hashing and that there is a real good reason why nobody is using this, but I didn’t find anything like that. But it could be that I didn’t find anything bad about Fibonacci hashing simply because it’s hard to find anything at all about Fibonacci hashing, so let’s do our own analysis:
Analyzing Fibonacci Hashing
So I have to confess that I don’t know much about analyzing hash functions. It seems like the best test is to see how close a hash function gets to the strict avalanche criterion which “is satisfied if, whenever a single input bit is changed, each of the output bits changes with a 50% probability.”
To measure that I wrote a small program that takes a hash value H and runs it through Fibonacci hashing to get a slot S in the hash table. Then I change a single bit in H, giving me H', and after I run that through Fibonacci hashing I get a slot S'. Then I measure, depending on which bit I changed in H, which bits are likely to change in S' compared to S and which bits are unlikely to change.
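A sketch of that measurement for a single table size (this is a hypothetical reconstruction, not the author's exact program): flip each input bit over many random hashes and record how often each output bit changes.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// probabilities[i][j]: how often flipping input bit i changes output bit j.
// A value near 0.5 everywhere would satisfy the strict avalanche criterion.
std::vector<std::vector<double>> avalanche(int out_bits, int num_samples)
{
    std::vector<std::vector<double>> prob(64, std::vector<double>(out_bits, 0.0));
    std::mt19937_64 rng(12345);
    for (int s = 0; s < num_samples; ++s)
    {
        uint64_t h = rng();
        uint64_t slot = (11400714819323198485llu * h) >> (64 - out_bits);
        for (int i = 0; i < 64; ++i)
        {
            uint64_t h2 = h ^ (uint64_t(1) << i); // flip input bit i
            uint64_t slot2 = (11400714819323198485llu * h2) >> (64 - out_bits);
            uint64_t changed = slot ^ slot2;
            for (int j = 0; j < out_bits; ++j)
                if ((changed >> j) & 1)
                    prob[i][j] += 1.0;
        }
    }
    for (auto & row : prob)
        for (double & p : row)
            p /= num_samples;
    return prob;
}
```

Running this reproduces the pattern described below for the top row: flipping the last input bit always flips exactly the top output bit and nothing else.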
I then run that same test every time after I doubled a hash table, because with different size hash tables there are more bits in the output: If the hash table only has four slots, there are only two bits in the output. If the hash table has 1024 slots, there are ten bits in the output. Finally I color code the result and plot the whole thing as a picture that looks like this:
Let me explain this picture. Each row of pixels represents one of the 64 bits of the input hash. The bottom-most row is the first bit, the topmost row is the 64th bit. Each column represents one bit in the output. The first two columns are the output bits for a table of size 4, the next three columns are the output bits for a table of size 8 etc. until the last 23 bits are for a table of size eight million. The color coding means this:
A black pixel indicates that when the input bit for that row changes, the output bit for that column has a 50% chance of changing. (this is ideal)
A blue pixel means that when the input bit changes, the output bit has a 100% chance of changing.
A red pixel means that when the input bit changes, the output bit has a 0% chance of changing.
For a really good hash function the entire picture would be black. So Fibonacci hashing is not a really good hash function.
The worst pattern we can see is at the top of the picture: The last bit of the input hash (the top row in the picture) can always only affect the last bit of the output slot in the table. (the last column of each section) So if the table has 1024 slots, the last bit of the input hash can only determine the bit in the output slot for the number 512. It will never change any other bit in the output. Lower bits in the input can affect more bits in the output, so there is more mixing going on for those.
Is it bad that the last bit in the input can only affect one bit in the output? It would be bad if we used this as a hash function, but it’s not necessarily bad if we just use this to map from a large range into a small range. Since each row has at least one blue or black pixel in it, we can be certain that we don’t lose information, since every bit from the input will be used. What would be bad for mapping from a large range into a small range is if we had a row or a column that has only red pixels in it.
Let’s also look at what this would look like for integer modulo, starting with integer modulo using prime numbers:
This one has more randomness at the top, but a clearer pattern at the bottom. All that red means that the first few bits in the input hash can only determine the first few bits in the output hash. Which makes sense for integer modulo. A small number modulo a large number will never result in a large number, so a change to a small number can never affect the later bits.
This picture is still “good” for mapping from a large range into a small range because we have that diagonal line of bright blue pixels in each block. To show a bad function, here is integer modulo with a power of two size:
This one is obviously bad: The upper bits of the hash value have completely red rows, because they will simply get chopped off. Only the lower bits of the input have any effect, and they can only affect their own bits in the output. This picture right here shows why using a power of two size requires that you are careful with your choice of hash function for the hash table: If those red rows represent important bits, you will simply lose them.
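For comparison, the power-of-two mapping is just a binary and, which literally throws the upper bits away — any two hashes that agree in their low bits collide:

```cpp
#include <cstdint>

// Power-of-two mapping: keep the low bits, discard everything above.
// table_size must be a power of two.
inline uint64_t pow2_index(uint64_t hash, uint64_t table_size)
{
    return hash & (table_size - 1);
}
```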
Finally let’s also look at the “fastrange” algorithm that I briefly mentioned above. For power of two sizes it looks really bad, so let me show you what it looks like for prime sizes:
What we see here is that fastrange throws away the lower bits of the input range. It only uses the upper bits. I had used it before and I had noticed that a change in the lower bits doesn’t seem to make much of a difference, but I had never realized that it just completely throws the lower bits away. This picture totally explains why I had so many problems with fastrange. Fastrange is a bad function to map from a large range into a small range because it’s throwing away the lower bits.
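For reference, fastrange is roughly the following (a sketch using the `__uint128_t` GCC/Clang extension): it multiplies the hash by the range and keeps the high 64 bits of the product, so only the upper bits of the hash can influence the result.

```cpp
#include <cstdint>

// fastrange: map hash into [0, range) via the high half of a 128-bit
// product. Small hashes all map to 0, because their contribution
// never reaches the upper 64 bits -- the low bits are thrown away.
inline uint64_t fastrange(uint64_t hash, uint64_t range)
{
    return (uint64_t)(((__uint128_t)hash * range) >> 64);
}
```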
Going back to Fibonacci hashing, there is actually one simple change you can make to improve the bad pattern for the top bits: Shift the top bits down and xor them once. So the code changes to this:
size_t index_for_hash(size_t hash)
{
    hash ^= hash >> shift_amount;
    return (11400714819323198485llu * hash) >> shift_amount;
}
It’s almost looking more like a proper hash function, isn’t it? This makes the function two cycles slower, but it gives us the following picture:
This looks a bit nicer, with the problematic pattern at the top gone. (and we’re seeing more black pixels now which is the ideal for a hash function) I’m not using this though because we don’t really need a good hash function, we need a good function to map from a large range into a small range. And this is on the critical path for the hash table, before we can even do the first comparison. Any cycle added here makes the whole line in the graph above move up.
So I keep on saying that we need a good function to map from a large range into a small range, but I haven't defined what "good" means there. I don't know of a proper test like the avalanche analysis for hash functions, but my first attempt at a definition for "good" would be that every value in the smaller range is equally likely to occur. That test is very easy to fulfill though: all of the methods (including fastrange) fulfill that criterion. So how about we pick a sequence of values in the input range and check if every value in the output is equally likely. I had given the examples for numbers 0 to 16 above. We could also do multiples of 8 or all powers of two or all prime numbers or the Fibonacci numbers. Or let's just try as many sequences as possible until we figure out the behavior of the function.
Looking at the above list we see that there might be a problematic pattern with multiples of 4: fibonacci_hash_3_bits(4) returned 3, fibonacci_hash_3_bits(8) returned 7, fibonacci_hash_3_bits(12) returned 3 again and fibonacci_hash_3_bits(16) returned 7 again. Let's see how this develops if we print the first sixteen multiples of four:
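The loop that prints this might look like the following (a hypothetical reconstruction — a table of eight slots means keeping the top three bits, i.e. shifting by 61):

```cpp
#include <cstdint>
#include <cstdio>

// Fibonacci hashing into a table of 8 slots: keep the top 3 bits.
uint64_t fibonacci_hash_3_bits(uint64_t hash)
{
    return (11400714819323198485llu * hash) >> 61;
}

// Print "n -> slot" for the first seventeen multiples of `step`.
void print_multiples(uint64_t step)
{
    for (uint64_t i = 0; i <= 16; ++i)
        std::printf("%llu -> %llu\n",
                    (unsigned long long)(i * step),
                    (unsigned long long)fibonacci_hash_3_bits(i * step));
}
```

Calling print_multiples(4) reproduces the table below; print_multiples(8) and print_multiples(16) produce the later tables.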
Here are the results:
0 -> 0
4 -> 3
8 -> 7
12 -> 3
16 -> 7
20 -> 2
24 -> 6
28 -> 2
32 -> 6
36 -> 1
40 -> 5
44 -> 1
48 -> 5
52 -> 1
56 -> 4
60 -> 0
64 -> 4
Doesn’t look too bad actually: Every number shows up twice, except the number 1 shows up three times. What about multiples of eight?
0 -> 0
8 -> 7
16 -> 7
24 -> 6
32 -> 6
40 -> 5
48 -> 5
56 -> 4
64 -> 4
72 -> 3
80 -> 3
88 -> 3
96 -> 2
104 -> 2
112 -> 1
120 -> 1
128 -> 0
Once again doesn’t look too bad, but we are definitely getting more repeated numbers. So how about multiples of sixteen?
0 -> 0
16 -> 7
32 -> 6
48 -> 5
64 -> 4
80 -> 3
96 -> 2
112 -> 1
128 -> 0
144 -> 7
160 -> 7
176 -> 6
192 -> 5
208 -> 4
224 -> 3
240 -> 2
256 -> 1
This looks a bit better again, and if we were to look at multiples of 32 it would look better still. The reason why the number 8 was starting to look problematic was not because it’s a power of two. It was starting to look problematic because it is a Fibonacci number. If we look at later Fibonacci numbers, we see more problematic patterns. For example here are multiples of 34:
0 -> 0
34 -> 0
68 -> 0
102 -> 0
136 -> 0
170 -> 0
204 -> 0
238 -> 0
272 -> 0
306 -> 0
340 -> 1
374 -> 1
408 -> 1
442 -> 1
476 -> 1
510 -> 1
544 -> 1
That’s looking bad. And later Fibonacci numbers will only look worse. But then again how often are you going to insert multiples of 34 into a hash table? In fact if you had to pick a group of numbers that’s going to give you problems, the Fibonacci numbers are not the worst choice because they don’t come up that often naturally. As a reminder, here are the first couple Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584… The first couple numbers don’t give us bad patterns in the output, but anything bigger than 13 does. And most of those are pretty harmless: I can’t think of any case that would give out multiples of those numbers. 144 bothers me a little bit because it’s a multiple of 8 and you might have a struct of that size, but even then your pointers will just be eight byte aligned, so you’d have to get unlucky for all your pointers to be multiples of 144.
But really what you do here is that you identify the bad pattern and you tell your users “if you ever hit this bad pattern, provide a custom hash function to the hash table that fixes it.” I mean people are happy to use integer modulo with powers of two, and for that it’s ridiculously easy to find bad patterns: Normal pointers are a bad pattern for that. Since it’s harder to come up with use cases that spit out lots of multiples of Fibonacci numbers, I’m fine with having “multiples of Fibonacci numbers” as bad patterns.
So why are Fibonacci numbers a bad pattern for Fibonacci hashing anyways? It's not obvious if we just have the magic number multiplication and the bit shift. First of all we have to remember that the magic constant came from dividing 2^64 by the golden ratio: 11400714819323198485 = 2^64 / φ. And then since we are truncating the result of the multiplication before we shift it, there is actually a hidden modulo by 2^64 in there. So whenever we are hashing a number n the slot is actually determined by this:

slot(n) = (n * 2^64 / φ) mod 2^64
I'm leaving out the shift at the end because that part doesn't matter for figuring out why Fibonacci numbers are giving us problems. In the example of stepping around a circle (from the Vi Hart video above) the equation would look like this:

angle(n) = (n * 360 / φ) mod 360
This would give us an angle from 0 to 360. These functions are obviously similar. We just replaced 2^64 with 360. So while we're in math-land with infinite precision, we might as well make the function return something in the range from 0 to 1, and then multiply the constant in afterwards:

slot(n) = fractional(n / φ) * 2^64
Where fractional() returns the fractional part of a number, so fractional(5.5) = 0.5. In this last formulation it's easy to find out why Fibonacci numbers give us problems. Let's try putting in a few Fibonacci numbers:

34 / φ = 21.0132...
144 / φ = 88.9969...
610 / φ = 377.0007...
What we see here is that if we divide a Fibonacci number by the golden ratio, we just get the previous Fibonacci number. There is (almost) no fractional part so we always end up with 0. So even if we multiply the full range of 2^64 back in, we still get 0. But for smaller Fibonacci numbers there is some imprecision because the Fibonacci sequence is just an integer approximation of golden ratio growth. That approximation gets more exact the further along we get into the sequence, but for the number 8 it's not that exact. That's why 8 was not a problem, 34 started to look problematic, and 144 is going to be real bad.
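This is easy to check numerically. A quick sketch (φ computed in double precision):

```cpp
#include <cmath>

// Fractional part of fib / phi. For larger Fibonacci numbers this gets
// closer and closer to an integer (fractional part near 0 or near 1),
// because fib(n) / phi is approximately fib(n-1).
inline double fractional_of_fib_over_phi(double fib)
{
    const double phi = (1.0 + std::sqrt(5.0)) / 2.0;
    double q = fib / phi;
    return q - std::floor(q);
}
```

For 8 this gives roughly 0.944 — far from an integer, which is why 8 is harmless. For 34 it gives roughly 0.013, and for 610 roughly 0.0007: closer and closer to an integer, hence worse and worse patterns.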
Except that when we talk about badness, we also have to consider the size of the hash table. It’s really easy to find bad patterns when the table only has eight slots. If the table is bigger and has, say 64 slots, suddenly multiples of 34 don’t look as bad:
0 -> 0
34 -> 0
68 -> 1
102 -> 2
136 -> 3
170 -> 4
204 -> 5
238 -> 5
272 -> 6
306 -> 7
340 -> 8
374 -> 9
408 -> 10
442 -> 10
476 -> 11
510 -> 12
544 -> 13
And if the table has 1024 slots we get all the multiples nicely spread out:
0 -> 0
34 -> 13
68 -> 26
102 -> 40
136 -> 53
170 -> 67
204 -> 80
238 -> 94
272 -> 107
306 -> 121
340 -> 134
374 -> 148
408 -> 161
442 -> 175
476 -> 188
510 -> 202
544 -> 215
At size 1024 even the multiples of 144 don’t look scary any more because they’re starting to be spread out now:
0 -> 0
144 -> 1020
288 -> 1017
432 -> 1014
576 -> 1011
720 -> 1008
864 -> 1004
1008 -> 1001
1152 -> 998
So the bad pattern of multiples of Fibonacci numbers goes away with bigger hash tables. Because Fibonacci hashing spreads out the numbers, and the bigger the table is, the better it gets at that. This doesn’t help you if your hash table is small, or if you need to insert multiples of a larger Fibonacci number, but it does give me confidence that this “bad pattern” is something we can live with.
So I am OK with living with the bad pattern of Fibonacci hashing. It’s less bad than making the hash table a power of two size. It can be slightly more bad than using prime number sizes, as long as your prime numbers are well chosen. But I still think that on average Fibonacci hashing is better than prime number sized integer modulo, because Fibonacci hashing mixes up sequential numbers. So it fixes a real problem I have run into in the past while introducing a theoretical problem that I am struggling to find real examples for. I think that’s a good trade.
Also prime number integer modulo can have problems if you choose bad prime numbers. For example boost::unordered_map can choose size 196613, which is 0b110000000000000101 in binary, which is a pretty round number in the same way that 15000005 is a pretty round number in decimal. Since this prime number is “too round of a number” this causes lots of hash collisions in one of my benchmarks, and I didn’t set that benchmark up to find bad cases like this. It was totally accidental and took lots of debugging to figure out why boost::unordered_map does so badly in that benchmark. (the benchmark in question was set up to find problems with sequential numbers) But I won’t go into that and will just say that while prime numbers give fewer problematic patterns than Fibonacci hashing, you still have to choose them well to avoid introducing hash collisions.
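The claim about its binary representation is easy to verify (using C++14 binary literals):

```cpp
// 196613 has only four bits set - "round" in binary in the same way
// that 15000005 is round in decimal.
static_assert(196613 == 0b110000000000000101, "boost prime 196613 in binary");
static_assert(196613 == 0x30005, "the same number in hex");
```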
Conclusion
Fibonacci hashing may not be the best hash function, but I think it’s the best way to map from a large range of numbers into a small range of numbers. And we are only using it for that. When used only for that part of the hash table, we have to compare it against two existing approaches: Integer modulo with prime numbers and Integer modulo with power of two sizes. It’s almost as fast as the power of two size, but it introduces far fewer problems because it doesn’t discard any bits. It’s much faster than the prime number size, and it also gives us the bonus of breaking up sequential numbers, which can be a big benefit for open addressing hash tables. It does introduce a new problem of having problems with multiples of large Fibonacci numbers in small hash tables, but I think those problems can be solved by using a custom hash function when you encounter them. Experience will tell how often we will have to use this.
All of my hash tables now use Fibonacci hashing by default. For my flat_hash_map the property of breaking up sequential numbers is particularly important because I have had real problems caused by sequential numbers. For the others it’s just a faster default. It might almost make the option to use the power of two integer modulo unnecessary.
It's surprising that the world forgot about this optimization and that we're all using prime number sized hash tables instead. (or use Dinkumware's solution which uses a power of two integer modulo, but spends more time on the hash function to make up for the problems of the power of two integer modulo) Thanks to Rich Geldreich for writing a hash table that uses this optimization and for mentioning it in my comments. But this is an interesting example because academia had a solution to a real problem in existing hash tables, but professors didn't realize that they did. The most likely reason for that is that it's not well known how big the problem of "mapping a large number into a small range" is and how much time it takes to do an integer modulo.
For future work it might be worth looking into Knuth’s third hash function: The one that’s related to CRC hashes. It seems to be a way to construct a good CRC hash function if you need a n-bit output for a hash table. But it was too complicated for me to look into, so I’ll leave that as an exercise to the reader to find out if that one is worth using.
Finally here is the link to my implementation of unordered_map. My other two hash tables are also there: flat_hash_map has very fast lookups and bytell_hash_map is also very fast but was designed more to save memory compared to flat_hash_map.