id | by | time | title | text | url | score | descendants | kids | deleted | dead | scraping_error | scraped_title | scraped_published_at | scraped_byline | scraped_body | scraped_at | scraped_language | split |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,058,192 | yy8402 | 2024-11-06T08:49:29 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,201 | ai-epiphany | 2024-11-06T08:51:45 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,209 | dotcoma | 2024-11-06T08:52:29 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,224 | doppp | 2024-11-06T08:54:29 | Security Best Practices for Deploying Rails 8 on Linux with Kamal | null | https://paraxial.io/blog/kamal-security | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,255 | nathanh4903 | 2024-11-06T08:57:33 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,302 | todsacerdoti | 2024-11-06T09:02:28 | Upcoming changes to the DNSSEC root trust anchor | null | https://lists.dns-oarc.net/pipermail/dns-operations/2024-November/022711.html | 92 | 23 | [
42060026,
42060368,
42063654,
42064464
] | null | null | null | null | null | null | null | null | null | train |
42,058,305 | mariuz | 2024-11-06T09:02:41 | Mozilla lost the Internet (& what's next) [video] | null | https://www.youtube.com/watch?v=aw-XYrMFb0A | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,309 | okqvhqike | 2024-11-06T09:03:15 | Show HN: Influencers Database with Audio Signals | I Created This App In Two Weekends With Cursor and It's Already at 1K MRR.<p>I built this influencers database which scrapes tiktok and analyzes video. It extracts data what influencers talked in their video and allows to filter via advanced filters (text, categories).<p>It downloads audio of videos, processes it, and extracts information such as:<p>- best categories to promote
- mentioned keywords
- previously promoted products<p>You can use this database to analyze which influencers your competitors are using, find which influencers are the best for your niche, outreach influencers since it already contains their verified emails.<p>Of course, the database is only ~570k influencers right now but it's growing daily.<p>Looking for feedback what could be improved and if you would like to try this, let me know in the comments! | https://old.reddit.com/r/cursor/comments/1gku61m/i_created_this_app_in_two_weekends_with_cursor/ | 206 | 1 | [
42058312
] | null | null | null | null | null | null | null | null | null | train |
42,058,341 | marban | 2024-11-06T09:05:52 | AI Workers Seek Whistleblower Cover to Expose Emerging Threats | null | https://news.bloomberglaw.com/artificial-intelligence/ai-workers-seek-whistleblower-cover-to-expose-emerging-threats | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,345 | thunderbong | 2024-11-06T09:06:00 | Segmenting Credit Card Customers with K-Means: A Fun Dive into Clustering | null | https://medium.com/@med.elhamly/segmenting-credit-card-customers-with-k-means-a-fun-dive-into-clustering-c7d2ed519b55 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,355 | fisian | 2024-11-06T09:07:17 | 3D Rotation Design | null | https://www.mattkeeter.com/projects/rotation/ | 91 | 24 | [
42061465,
42070761,
42070979,
42065139,
42066839,
42069687,
42063029,
42061988,
42061584,
42063213
] | null | null | null | null | null | null | null | null | null | train |
42,058,405 | ohduran | 2024-11-06T09:12:30 | Apple Pay as Digital Check | null | https://news.alvaroduran.com/p/apple-pay-as-a-digital-check | 1 | 1 | [
42058490
] | null | null | null | null | null | null | null | null | null | train |
42,058,414 | NiharYCS24 | 2024-11-06T09:13:40 | Dictionary Extension for Edge | I recently found a useful dictionary extension for Microsoft Edge that brings quick, inline word definitions without needing to leave the page. It’s powered by OpenDictionary and even supports multiple parts of speech (POS) selection, so you can get more detailed insights into each word. There’s also a pronunciation option, which is great for language learners or anyone wanting to improve their vocabulary. It feels a bit like the built-in macOS dictionary, making it pretty seamless to use. Thought it might be useful for other Edge users who read a lot online.<p>Do check it out here: https://nightfury874.github.io/DictionaryLol/ | null | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,436 | JumpCrisscross | 2024-11-06T09:16:00 | Nvidia Rides AI Wave to Pass Apple as Largest Company | null | https://www.bloomberg.com/news/articles/2024-11-05/nvidia-rides-ai-wave-to-pass-apple-as-world-s-largest-company | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,437 | javatuts | 2024-11-06T09:16:02 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,452 | slobodan_ | 2024-11-06T09:17:17 | 5 Prompt Engineering Tips for Developers | null | https://slobodan.me/posts/5-prompt-engineering-tips-for-developers/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,454 | kiru_io | 2024-11-06T09:17:27 | How to Debug iOS Crash Reports? | null | https://kiru.io/til/entries/2024-11-06-how-to-debug-ios-crash-reports/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,488 | quarterback333 | 2024-11-06T09:20:08 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,489 | VonGuard | 2024-11-06T09:20:13 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,526 | Sophiaapple | 2024-11-06T09:22:57 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,642 | refresh_organic | 2024-11-06T09:33:53 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,702 | vitabaks | 2024-11-06T09:39:06 | How to Simplify Database Management and Reduce Costs | null | https://medium.com/@vitabaks/how-to-simplify-database-management-and-reduce-costs-e75b7d921ee7 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,733 | rockybalboaa | 2024-11-06T09:41:02 | null | null | null | 2 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,744 | wideserg | 2024-11-06T09:41:59 | Image Splitter with Instagram Grid Preview (w\o need to upload to IG) | null | https://chromewebstore.google.com/detail/image-splitter/khkhfdckilojgneleiifofcaihjjohpi | 1 | 0 | [
42058745
] | null | null | null | null | null | null | null | null | null | train |
42,058,746 | fanf2 | 2024-11-06T09:42:03 | How to successfully rewrite a C++ codebase in Rust | null | https://gaultier.github.io/blog/how_to_rewrite_a_cpp_codebase_successfully.html | 5 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,785 | emma_smith23 | 2024-11-06T09:45:29 | null | null | null | 1 | null | [
42058786
] | null | true | null | null | null | null | null | null | null | train |
42,058,798 | ergintranslate | 2024-11-06T09:46:22 | null | null | null | 1 | null | [
42058799
] | null | true | null | null | null | null | null | null | null | train |
42,058,804 | adamscafe | 2024-11-06T09:46:27 | null | null | null | 1 | null | [
42058805
] | null | true | null | null | null | null | null | null | null | train |
42,058,847 | thomostester | 2024-11-06T09:49:49 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,058,888 | lapnect | 2024-11-06T09:52:03 | Iterative α-(de)blending and Stochastic Interpolants | null | http://nicktasios.nl/posts/iterative-alpha-deblending/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,058,895 | buenosdias | 2024-11-06T09:52:24 | Blockchain Developers Are Shaping the Future of FinTech | null | https://boostylabs.com/blockchain | 3 | 1 | [
42058896
] | null | null | no_error | Blockchain Development Services - outsourcing company Boosty Labs | null | null | Cryptocurrency, Smart-Contracts, and enterprise Blockchain development service require great responsibility and oversight due to their high financial risk. That’s why we rely on established and efficient internal processes, creating a sophisticated approach to quality control. The projects from the TOP 100 rating of CoinMarketCap entrust outsourced blockchain app development to our team. Cooperate Services Outsourcing blockchain app development for B2BWe create, design, and deploy enterprise-grade solutions, based on blockchain technology, for industries such as finance, logistics, information security, real estate, remote identification, secure data storage, etc. Our team can develop, integrate, and deploy crypto-fiat gateways into your product. Smart Contract Development| Creation and Issuance of cryptocurenciesUsing technologies such as Ethereum, NEM, Polkadot, Solana and more we develop smart contract solutions for projects focused on Defi (decentralized finance) and other areas. Cryptocurrency exchangesOutsource blockchain development for trading with crypto assets (digital assets), development of DEXs (decentralized exchanges), trading panels, integration of various widgets for use in your products. OTC-platforms development. Outstaffing team of blockchain developersDedicated blockchain developers team. Skills, expertise, and competence of our developers enable developing a project from scratch quickly and efficiently, as well as with improving an existing product. We have successful experience in building blockchain teams for Storj Labs, Bloom protocol. DeFiDevelopment of dApps for decentralized finance, which includes products for Yield farming, Staking, DEX. Development of a unique product from scratch, as well as a fork of existing and proven solutions in the market, such as: Uniswap, Sushiswap, YFI, YFII, YAMv2, and others. 
Secure storage of cryptocurrencyServer solutions for cold and hot storage of cryptocurrency; Mobile cryptocurrency wallet; Integration of storage functions and cryptocurrency transactions into your products. DappsDecentralized App is an application that uses the principles of decentralized storage and computing and can be deployed on blockchain technology. We create Decentralized Apps based on the following platforms: Ethereum, Quorum, Bitcoin (and its forks), Graphene (BitShares), EOS, Cosmos, Hyperledger Fabric. Web and mobile development outsourcingWe design and develop user-friendly interfaces and mobile apps for your blockchain projects. Crypto friendlyWe accept your project’s native crypto as a payment method for our services (if your project is in CMC 300 rating), which allows you to save more cash on your bank account to drive other business needs. Why Boosty Labs 01$1.8 Bln was transferred through the system of international money transfers during the course of the year with the use of a blockchain developed by our team. 02Our team was one of the first to participate in creating dApp for Samsung Blockchain SDK. 03We have been engaged in outsource blockchain development since 2015, long before it became mainstream. 04We were one of the first to participate in the decentralized cloud storage products development. Profile ProfileOur company specializes in outsourcing blockchain app development and cryptocurrency-related projects. Projects for established leadersBlockchain startups developed with our participation are already full-fledged, successful projects rated in the TOP 100 by CoinMarketCap. Quick startReady to cover any required blockchain specialist vacancy for our customers within 2 weeks. 
The Boosty Labs team took part in the development of more than 10 open source projects We know how to make your open source project reliable and profitable Cooperate CasesBlockchain is a technology for encrypting and storing data (registry), which are distributed over many computers connected in a common network. Blockchain is a digital database of information that reflects all completed transactions. All records in the blockchain are presented in the form of blocks, which are interconnected by special keys. Each new block contains data about the previous one.Blockchain is used to store and transmit digital data. These can be both financial and non-financial assets (for example, images or objects of the video game industry). Blockchain technology allows assigning an asset unique information about its ownership to a specific person. At the same time, such information cannot be forged, deleted or quietly changed.The basic principles of the blockchain (the distribution and combination of data about the authenticity of a document into blocks) were developed back in the early 1990s based on even earlier mathematical concepts. In 1991-1992, American scientists Wakefield Scott Stornetta, Stuart Haber and Dave Byer described the technology of sequential creation of data blocks, in which a certificate of authenticity and information about the date of generation are fixed using cryptographic algorithms and a hash tree. But at that time there was no technical possibility for the practical implementation of this idea.In 2004, the American programmer Harold Thomas Finney II developed the RPoW system, which is considered the prototype of the cryptocurrency. In October 2008, Satoshi Nakamoto (this is the pseudonym of a person or group of people) in a scientific article on the first cryptocurrency, Bitcoin, proposed using blockchain technology to create a decentralized and independent payment system with a limited supply of assets. 
Bitcoin development began in 2007 and ended in 2009. Blockchain technology became relevant when there was a need for fast and reliable transfer of digital data.How Blockchain WorksBlockchain allows each member of the network to have access to a distributed database. At the same time, the blockchain does not store the data itself, but records of events (transactions) in their chronological sequence. All new records are checked for authenticity – to be entered into the blockchain, they must be confirmed by the majority of network participants. Records are grouped into blocks, which are combined into chains. Data that has entered the blockchain cannot be changed or deleted without violating the integrity of the block chain.Types of BlockchainBlockchain can work both in a public (open) network, to which any user has access, and in a private (closed), for example, in a corporate network in case of using confidential data. In private versions of the blockchain, different levels of access for users and different complexity of information encryption can be provided. The most famous example of a public blockchain is Bitcoin and other cryptocurrencies. 
Corporations use blockchain not only in the financial sector, but also in other sectors, for example, in the entertainment industry (for issuing tickets) and healthcare (to protect patient data). There are also hybrid networks that combine the properties of both open and closed networks.

Blockchain can be classified according to various criteria:

- by transaction objects: information; virtual value (value with no analogue in the "real world", for example, Bitcoin);
- by type of network access: unlimited (networks in which participants are allowed to carry out any activity); limited (networks that limit the activities of participants);
- by the requirements for identification: anonymous; pseudo-anonymous; complete identification;
- by the applied network consensus protocol:
  - PoW (Proof-of-Work): the right to certify a block is given to a participant based on his performance of some fairly complex work that satisfies predetermined criteria.
  - PoS (Proof-of-Stake): the right to certify a block is given to the account holder when the amount of his funds and the period of their ownership meet the specified criteria. The formulas for calculating the criteria may vary slightly.
  - PoS + PoW: a hybrid of PoW and PoS, where blocks can be verified both through calculated PoS criteria and through PoW enumeration. The purpose of this approach is to complicate the recalculation of the entire chain (from the very first block), which is possible when PoS is used in its purest form.
  - PBFT (Practical Byzantine Fault Tolerance), Paxos, Raft: multi-stage consensus algorithms. Algorithms of this group allow the blockchain to function at low cost and with significant throughput, but scale poorly as the number of participants grows.
  - Non-BFT (non-Byzantine fault tolerant): consensus algorithms that cannot tolerate participants working against the network.
Such algorithms are applicable in closed networks with full identification.by the presence of a central administrator:there is a central administrator;there is no central administrator.Where is Blockchain Used?Blockchain is used in all areas where the speed of information transfer with a high degree of protection is required. The technology is used to launch and operate cryptocurrencies and digital currencies, when concluding smart contracts for the supply of goods, when generating non-fungible tokens (NFT), in banking and legal areas, in network administration and in the gaming industry. Blockchain technologies are used in the work of public authorities (for example, when conducting and processing the results of referendums and voting), in the activities of public and non-public corporations, public organizations and individuals.CryptocurrencyAny cryptocurrency functions on the basis of blockchain technology. The technology is used both in the issuance (release) of new cryptocurrencies and the generation of new tokens (coins), as well as in settlements with existing ones. Now there are more than 13,000 cryptocurrency projects in the world. Calculations in cryptocurrencies are used by PayPal and Square payment systems and one of the largest international banks, JP Morgan.Cryptocurrencies tend to have high volatility. For investments in cryptocurrencies, there are specialized cryptocurrency exchanges.Digital CurrencySome countries are launching pilot projects to create national digital currencies based on blockchain technology. China has achieved high results in this regard – the digital yuan became the first digital currency adopted in a major global economy.Central bank digital currencies (CBDC) have also been launched by the Central Bank of the Bahamas (sand dollar), the Eastern Caribbean Central Bank (DCash) and the Central Bank of Nigeria (e-naira). 
The governments (or central banks) of the Netherlands, Japan, Russia, Kazakhstan and Ecuador have announced plans to issue their national digital currencies.

Smart Contracts

Blockchain technology allows you to enter into smart contracts. Smart contracts are fully digital contracts, information about which is protected by encryption. Their key difference is the automatic control and execution of the clauses of the contract: when the conditions are met, the contract completes automatically, without additional actions or the participation of lawyers. Smart contracts allow you to track the entire supply chain, which reduces or completely eliminates the possibility of counterfeiting or illegal actions with products.

NFTs

An NFT is a type of token where each instance is unique; it cannot be replaced or exchanged for another token. An NFT testifies to the ownership of an asset on the blockchain and allows you to sell and buy virtual objects: music, photographs, paintings, drawings.

Game Industry

Another area of blockchain application is the gaming industry. Based on cryptocurrency technologies, GameFi projects are being implemented (from the English "game" and "finance"), combining game mechanics and NFTs. These are online games that record everything that happens in the game as transactions on the blockchain and allow players to earn real money. Using the blockchain, you can buy and sell virtual characters and artifacts.

DeFi & More

Blockchain technology is being applied in the emerging decentralized finance (DeFi) market.
Investors are also starting to invest in new types of digital assets, such as security tokens.Who Are the Miners?Blocks in the blockchain network, for example, when issuing cryptocurrencies, are added using the mining procedure – collecting and processing information about ongoing transactions.In large blockchain networks, this requires significant computing power, so the creation of blocks in them is carried out by special persons – miners.What is a Blockchain Wallet?A blockchain wallet is a special program that allows you to account, store and perform other actions with digital assets, in particular, with cryptocurrency. When registering a wallet, a person gets access to it in the form of an open (public) and a closed (private) key – a cryptographic code. The wallet stores records about the state of the account of its owner and the entire history of transactions. At the same time, the cryptocurrency is not stored directly in the wallet, it contains only information about public and private keys, and the coins themselves are stored in the blockchain. Most often, blockchain wallets are anonymous.Decentralization & DistributionDecentralization and distribution are both an advantage and a disadvantage of blockchain technology. Information is stored simultaneously on all network devices, there is no single data management and storage center. Data changes on each individual device occur independently, but are recorded by the rest of the system participants.All transactions take place almost instantly, but their confirmation may take some time, which depends on the algorithm of the blockchain network. 
All transactions with assets are confidential, only the wallet number is indicated, and commissions are minimal, since miners register transactions instead of centralized intermediaries. The disadvantages of decentralization are the need for multiple network participants to maintain its integrity and stability, as well as the cost in terms of computing power.

Is Blockchain Technology Reliable?

Blockchain technology is relatively secure, but not without vulnerabilities. Despite decentralization and distribution, there is a risk of hacker attacks. There is also the possibility of users with large computing power conspiring to make changes to the blockchain. In addition, there is a risk of losing assets to Internet fraud, and the loss of the private key used to access a blockchain wallet effectively leads to the loss of its assets, that is, a direct loss of funds.

Blockchain Advantages and Disadvantages

The main advantages of the blockchain are the transparency of the technology, due to decentralization and distribution, and the impossibility of changing or destroying information within the blocks. The disadvantages include a poorly developed regulatory and legislative framework in the vast majority of countries. This leads to attempts by regulators to control operations on the blockchain, up to a ban on the circulation of cryptocurrencies (as the Chinese authorities have done). Regulators, as a rule, explain their actions by the risk of fraudulent schemes when exchanging digital assets for real money due to the anonymity of transactions. Another disadvantage of the blockchain is the irreversibility of transactions. Digital assets, especially cryptocurrencies, also have high volatility, which can lead to a complete loss of funds. | 2024-11-08T00:49:22 | en | train |
42,058,914 | lapnect | 2024-11-06T09:53:32 | Use std:span instead of C-style arrays | null | https://www.sandordargo.com/blog/2024/11/06/std-span | 6 | 0 | null | null | null | no_error | Use std::span instead of C-style arrays | 2024-11-06T00:00:00+01:00 | null | While reading the awesome book C++ Brain Teasers by Anders Schau Knatten, I realized it might be worth writing about spans.

std::span is a class template that was added to the standard library in C++20; you'll find it in the <span> header. A span is a non-owning object that refers to a contiguous sequence of objects, with the first sequence element at position zero.

In its goal, a span is quite similar to a string_view. While a string_view is a non-owning view of string-like objects, a span is a non-owning view of array-like objects whose stored elements occupy contiguous places in memory.

While it's possible to use spans with vectors and arrays, most frequently it will be used with C-style arrays, because a span gives you safe access to its elements and also to the size of the view, something that you don't get with C-style arrays.

When and why does it come in handy? Let me steal an example from C++ Brain Teasers, but we'll go with another solution compared to the one in the book.

```cpp
#include <iostream>

void serialize(char characters[]) {
  std::cout << sizeof(characters) << "\n";
}

int main() {
  char characters[] = {'a', 'b', 'c'};
  std::cout << sizeof(characters) << "\n";
  std::cout << sizeof(characters) / sizeof(characters[0]) << "\n";
  serialize(characters);
}
```
In the above piece of code, serialize takes an array of characters. When we define the array of characters in main(), we can use sizeof to print the size of the array. Well, we actually print how many bytes the characters[] array occupies. Let me demonstrate.

```cpp
char characters[] = {'a', 'b', 'c'};
std::cout << sizeof(characters) << "\n";
/*
3
*/
```
When we try to print the size of a char array, all seems fine. We expect 3 and the output is three. But use another type, like an int, and we see there is a problem:

```cpp
int ints[] = {1, 2, 3};
std::cout << sizeof(ints) << "\n";
/*
12
*/
```
The output is 12, because we printed the memory size the array needs, and that's 3 times the size of an int in this case. As an int on my system is 4 bytes, the output is 3 * 4 bytes, that is 12. As the size of a char is 1 byte, the memory size of a char array and the number of elements in it are the same.

If you want to know how many elements there are in a C-style array of any type, you have to use this good old verbose and cumbersome pattern:

```cpp
std::cout << sizeof(characters) / sizeof(characters[0]) << "\n";
std::cout << sizeof(ints) / sizeof(ints[0]) << "\n";
/*
3
3
*/
```
Dividing the size of the array by the size of the first item will always work. Well, not always. In the above examples, the arrays were declared in the same scope, or at least we assumed that they were. But if the array is a function parameter, our assumptions break down. Let's have a look at the following example.

```cpp
#include <iostream>
#include <span>

void serialize(char characters[]) {
  std::cout << sizeof(characters) << "\n";
  std::cout << sizeof(characters) / sizeof(characters[0]) << "\n";
}

void serialize(int ints[]) {
  std::cout << sizeof(ints) << "\n";
  std::cout << sizeof(ints) / sizeof(ints[0]) << "\n";
}

int main() {
  int ints[] = {1, 2, 3};
  char characters[] = {'a', 'b', 'c'};
  serialize(characters);
  serialize(ints);
}
/*
8
8
8
2
*/
```
The outputs are broken, both for the size of the arrays and for the number of items in them. The reason is that when a function takes a C-style array as an argument, the array is implicitly converted into a pointer. This is also called array decay.

From a usage perspective, we can still access individual elements, but we lose any means to compute the array size, because the size of the parameter is no longer the size of the array; it is simply the size of a pointer pointing to the first element of the array. That's why in C-style APIs we can often observe that an array's size is passed along with the array itself.

With std::span we don't need that anymore. As a std::span is a proper (non-owning) object, it doesn't decay to a pointer. On the other hand, a C-style array can be implicitly converted into a span. A span gives you access to the number of elements in it (without having to do a verbose and error-prone calculation), it gives you an easy way to access the items in the span, and it's also iterable.

```cpp
#include <iostream>
#include <span>

void serialize(std::span<char> characters) {
  std::cout << characters.size() << "\n";
  for (size_t i = 0; i < characters.size(); ++i) {
    std::cout << characters[i] << " ";
  }
  std::cout << '\n';
  for (const auto c : characters) {
    std::cout << c << " ";
  }
  std::cout << '\n';
}

int main() {
  char characters[] = {'a', 'b', 'c'};
  serialize(characters);
}
```
As a general rule of thumb, I'd recommend not using C-style arrays, but if you have no choice, use spans as function parameters to make it easier and safer to work with them.

Conclusion

C-style arrays are still used, mostly when you have to deal with C libraries. They come with significant limitations, particularly when passed to functions, where array decay occurs and size information is lost. std::span, introduced in C++20, solves this issue by providing a safe, non-owning view of contiguous data that retains the size and offers easy access to elements. It simplifies working with arrays in functions without needing additional parameters for the size, making code safer and more concise. Whenever possible, it's advisable to replace C-style arrays with spans for more robust and maintainable code. | 2024-11-08T12:00:34 | en | train |
42,058,920 | lapnect | 2024-11-06T09:54:21 | Why I love Rust for tokenising and parsing | null | https://xnacly.me/posts/2024/rust-pldev/ | 6 | 1 | [
42059349
] | null | null | no_error | Why I love Rust for tokenising and parsing | 2024-11-04 00:00:00 +0000 UTC | null | I am currently writing a analysis tool for Sql: sqleibniz, specifically for the sqlite
dialect.

The goal is to perform static analysis for sql input, including: syntax checks, checks if tables, columns and functions exist. Combining this with an embedded sqlite runtime and the ability to assert conditions in this runtime creates a really great dev experience for sql.

Furthermore, I want to be able to show the user high quality error messages with context, explanations and the ability to mute certain diagnostics.

This analysis includes the stages of lexical analysis/tokenisation, the parsing of SQL according to the sqlite documentation and the analysis of the resulting constructs.

After completing the static analysis part of the project, I plan on writing a lsp server for sql, so stay tuned for that.

In the process of the above, I need to write a tokenizer and a parser - both for SQL. While I am nowhere near completion of sqleibniz, I still made some discoveries around rust and the handy features the language provides for developing said software.

Macros

Macros work different in most languages. However they are used for mostly the same reasons: code deduplication and less repetition.

Abstract Syntax Tree Nodes

A node for a statement in the sqleibniz implementation is defined as follows:

#[derive(Debug)]
/// holds all literal types, such as strings, numbers, etc.
pub struct Literal {
    pub t: Token,
}
Furthermore all nodes are required to implement the Node-trait, this trait is returned by all parser functions and is later used to analyse the contents of a statement:

pub trait Node: std::fmt::Debug {
    fn token(&self) -> &Token;
}

Code duplication

Thus every node not only has to be defined, but an implementation for the Node-trait has to be written. This requires a lot of code duplication and rust has a solution for that.

I want a macro that is able to:

- define a structure with a given identifier and a doc comment
- add arbitrary fields to the structure
- satisfy the Node trait by implementing fn token(&self) -> &Token

Lets take a look at the full code I need the macro to produce for the Literal and the Explain nodes. While the first one has no further fields except the Token field t, the second node requires a child field with a type.

#[derive(Debug)]
/// holds all literal types, such as strings, numbers, etc.
pub struct Literal {
    /// predefined for all structures defined with the node! macro
    pub t: Token,
}
impl Node for Literal {
    fn token(&self) -> &Token {
        &self.t
    }
}


#[derive(Debug)]
/// Explain stmt, see: https://www.sqlite.org/lang_explain.html
pub struct Explain {
    /// predefined for all structures defined with the node! macro
    pub t: Token,
    pub child: Option<Box<dyn Node>>,
}
impl Node for Explain {
    fn token(&self) -> &Token {
        &self.t
    }
}
I want the above to be generated from the following two calls:

node!(
    Literal,
    "holds all literal types, such as strings, numbers, etc.",
);
node!(
    Explain,
    "Explain stmt, see: https://www.sqlite.org/lang_explain.html",
    child: Option<Box<dyn Node>>,
);
Code deduplication with macros

The macro for that is fairly easy, even if the rust macro docs aren't that good:

macro_rules! node {
    ($node_name:ident,$documentation:literal,$($field_name:ident:$field_type:ty),*) => {
        #[derive(Debug)]
        #[doc = $documentation]
        pub struct $node_name {
            /// predefined for all structures defined with the node! macro, holds the token of the ast node
            pub t: Token,
            $(
                pub $field_name: $field_type,
            )*
        }
        impl Node for $node_name {
            fn token(&self) -> &Token {
                &self.t
            }
        }
    };
}
Lets dissect this macro. The macro argument/metavariable definition starts with $node_name:ident,$documentation:literal:

$node_name : ident , $documentation : literal
^^^^^^^^^^ ^ ^^^^^ ^
|          | |     |
|          | |     metavariable delimiter
|          | |
|          | metavariable type
|          |
|          metavariable type delimiter
|
metavariable name
Meaning, we define the first metavariable of the macro to be a valid identifier rust accepts and the second argument to be a literal. A literal refers to a literal expression, such as chars, strings or raw strings.

The tricky part that took me some time to grasp is the way of defining repetition of metavariables in macros, specifically $($field_name:ident:$field_type:ty),*.

$($field_name:ident:$field_type:ty),*
^^                 ^              ^  ^
|                  |              |  |
|       metavariable              |  repetition
|       delimiter                 |  (any)
|                                 |
 sub group of metavariables
As I understand, we define a subgroup in our metavariable definition and postfix it with its repetition. We use : to delimit inside the metavariable sub-group, this enables us to write the macro in a convenient field_name: type way:

node!(
    Example,
    "Example docs",

    // sub group start
    field_name: &'static str,
    field_name1: String
    // sub group end
);
We can use the $(...)* syntax to “loop over” our sub grouped metavariables, and thus create all fields with their respective names and types:

pub struct $node_name {
    pub t: Token,
    $(
        pub $field_name: $field_type,
    )*
}

Tip

See Repetitions for the metavariable repetition documentation.

Remember: the $documentation metavariable holds a literal containing our doc string we want to generate for our node - we now use the #[doc = ...] annotation instead of the commonly known /// ... syntax to pass our macro metavariable to the compiler:

#[doc = $documentation]
pub struct $node_name {
    // ...
}

I’d say the trait implementation for each node is pretty self explanatory.

Testing

Lets start off with me saying: I love table driven tests and the way Go allows to write them:

func TestLexerWhitespace(t *testing.T) {
    cases := []string{"","\t", "\r\n", " "}
    for _, c := range cases {
        t.Run(c, func (t *testing.T) {
            l := Lexer{}
            l.init(c)
            l.run()
        })
    }
}

In Go, I define an array of cases and just execute a test function for each case c. As far as I know, Rust does not offer a similar test method - so I made one 😼.

Lexer / Tokenizer Tests

#[cfg(test)]
mod should_pass {
    test_group_pass_assert! {
        string,
        string: "'text'"=vec![Type::String(String::from("text"))],
        empty_string: "''"=vec![Type::String(String::from(""))],
        string_with_ending: "'str';"=vec![Type::String(String::from("str")), Type::Semicolon]
    }

    // ...
}

#[cfg(test)]
mod should_fail {
    test_group_fail! {
        empty_input,
        empty: "",
        empty_with_escaped: "\\",
        empty_with_space: " \t\n\r"
    }

    // ...
}
Executing these via cargo test, results in the same output I love from table driven tests in Go, each function having its own log and feedback (ok/fail):

running 68 tests
test lexer::tests::should_pass::string::empty_string ... ok
test lexer::tests::should_pass::string::string ... ok
test lexer::tests::should_pass::string::string_with_ending ... ok
test lexer::tests::should_fail::empty_input::empty ... ok
test lexer::tests::should_fail::empty_input::empty_with_escaped ... ok
test lexer::tests::should_fail::empty_input::empty_with_space ... ok

test result: ok. 68 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out;
finished in 0.00s
The macro accepts the name of the test group, for example: booleans and string, and a list of input and expected output pairs. The input is passed to the Lexer initialisation and the output of the Lexer.run() is compared against the expected output. Inlining the test_group_pass_assert! call for string results in the code below. Before asserting the equality of the resulting token types and the expected token types, a transformation is necessary: I map over the token vector and only return their types.

mod string {
    use crate::{lexer, types::Type};

    #[test]
    fn string() {
        let input = "'text'".as_bytes().to_vec();
        let mut l = lexer::Lexer::new(&input, "lexer_tests_pass");
        let toks = l.run();
        assert_eq!(l.errors.len(), 0);
        assert_eq!(
            toks.into_iter().map(|tok| tok.ttype).collect::<Vec<Type>>(),
            (vec![Type::String(String::from("text"))])
        );
    }

    #[test]
    fn empty_string() {
        let input = "''".as_bytes().to_vec();
        let mut l = lexer::Lexer::new(&input, "lexer_tests_pass");
        let toks = l.run();
        assert_eq!(l.errors.len(), 0);
        assert_eq!(
            toks.into_iter().map(|tok| tok.ttype).collect::<Vec<Type>>(),
            (vec![Type::String(String::from(""))])
        );
    }

    #[test]
    fn string_with_ending() {
        let input = "'str';".as_bytes().to_vec();
        let mut l = lexer::Lexer::new(&input, "lexer_tests_pass");
        let toks = l.run();
        assert_eq!(l.errors.len(), 0);
        assert_eq!(
            toks.into_iter().map(|tok| tok.ttype).collect::<Vec<Type>>(),
            (vec![Type::String(String::from("str")), Type::Semicolon])
        );
    }
}
The counter part test_group_fail! for empty_input produces the code below. The main difference is the assertion that the resulting token vector is empty and the Lexer.errors field contains at least one error.

mod empty_input {
    use crate::lexer;

    #[test]
    fn empty() {
        let source = "".as_bytes().to_vec();
        let mut l = lexer::Lexer::new(&source, "lexer_tests_fail");
        let toks = l.run();
        assert_eq!(toks.len(), 0);
        assert_ne!(l.errors.len(), 0);
    }

    #[test]
    fn empty_with_escaped() {
        let source = "\\".as_bytes().to_vec();
        let mut l = lexer::Lexer::new(&source, "lexer_tests_fail");
        let toks = l.run();
        assert_eq!(toks.len(), 0);
        assert_ne!(l.errors.len(), 0);
    }

    #[test]
    fn empty_with_space() {
        let source = " \t\n\r".as_bytes().to_vec();
        let mut l = lexer::Lexer::new(&source, "lexer_tests_fail");
        let toks = l.run();
        assert_eq!(toks.len(), 0);
        assert_ne!(l.errors.len(), 0);
    }
}
Lets take a look at the macros themselves. I will not go into detail around the macro definition - simply because I explained the metavariable declaration in the previous chapter. The first macro is used for the assertions of tests with valid inputs - test_group_pass_assert!:

macro_rules! test_group_pass_assert {
    ($group_name:ident,$($ident:ident:$input:literal=$expected:expr),*) => {
        mod $group_name {
            use crate::{lexer, types::Type};

            $(
                #[test]
                fn $ident() {
                    let input = $input.as_bytes().to_vec();
                    let mut l = lexer::Lexer::new(&input, "lexer_tests_pass");
                    let toks = l.run();
                    assert_eq!(l.errors.len(), 0);
                    assert_eq!(toks.into_iter().map(|tok| tok.ttype).collect::<Vec<Type>>(), $expected);
                }
            )*
        }
    };
}
While the second is used for invalid inputs and edge case testing with expected errors - test_group_fail!:

macro_rules! test_group_fail {
    ($group_name:ident,$($name:ident:$value:literal),*) => {
        mod $group_name {
            use crate::lexer;
            $(
                #[test]
                fn $name() {
                    let source = $value.as_bytes().to_vec();
                    let mut l = lexer::Lexer::new(&source, "lexer_tests_fail");
                    let toks = l.run();
                    assert_eq!(toks.len(), 0);
                    assert_ne!(l.errors.len(), 0);
                }
            )*
        }
    };
}
Parser Tests

I use the same concepts and almost the same macros in the parser module to test the results the parser produces, but this time focussing on edge cases and full sql statements. For instance the tests expected to pass and to fail for the EXPLAIN sql statement:

#[cfg(test)]
mod should_pass {
    test_group_pass_assert! {
        sql_stmt_prefix,
        explain: r#"EXPLAIN VACUUM;"#=vec![Type::Keyword(Keyword::EXPLAIN)],
        explain_query_plan: r#"EXPLAIN QUERY PLAN VACUUM;"#=vec![Type::Keyword(Keyword::EXPLAIN)]
    }
}

#[cfg(test)]
mod should_fail {
    test_group_fail! {
        sql_stmt_prefix,
        explain: r#"EXPLAIN;"#,
        explain_query_plan: r#"EXPLAIN QUERY PLAN;"#
    }
}
Both macros get the sql_stmt_prefix as their module names, because that's the function in the parser responsible for the EXPLAIN statement. The failing tests check whether the parser correctly asserts the conditions the sql standard lays out, see sqlite - sql-stmt. Specifically, either that a statement follows after the EXPLAIN identifier or the QUERY PLAN and a statement follow.

The difference between these tests and the tests for the lexer are in the way the assertions are made. Take a look at the code the macros produce:

#[cfg(test)]
mod should_pass {
    mod sql_stmt_prefix {
        use crate::{lexer, parser::Parser, types::Keyword, types::Type};

        #[test]
        fn explain() {
            let input = r#"EXPLAIN VACUUM;"#.as_bytes().to_vec();
            let mut l = lexer::Lexer::new(&input, "parser_test_pass");
            let toks = l.run();
            assert_eq!(l.errors.len(), 0);
            let mut parser = Parser::new(toks, "parser_test_pass");
            let ast = parser.parse();
            assert_eq!(parser.errors.len(), 0);
            assert_eq!(
                ast.into_iter()
                    .map(|o| o.unwrap().token().ttype.clone())
                    .collect::<Vec<Type>>(),
                (vec![Type::Keyword(Keyword::EXPLAIN)])
            );
        }

        #[test]
        fn explain_query_plan() {
            let input = r#"EXPLAIN QUERY PLAN VACUUM;"#.as_bytes().to_vec();
            let mut l = lexer::Lexer::new(&input, "parser_test_pass");
            let toks = l.run();
            assert_eq!(l.errors.len(), 0);
            let mut parser = Parser::new(toks, "parser_test_pass");
            let ast = parser.parse();
            assert_eq!(parser.errors.len(), 0);
            assert_eq!(
                ast.into_iter()
                    .map(|o| o.unwrap().token().ttype.clone())
                    .collect::<Vec<Type>>(),
                (vec![Type::Keyword(Keyword::EXPLAIN)])
            );
        }
    }
}
As shown, the test_group_pass_assert! macro in the parser module starts with the same Lexer initialisation and empty error vector assertion. However, the next step is to initialise the Parser structure and after parsing assert the outcome - i.e. no errors and nodes with the correct types.

#[cfg(test)]
mod should_fail {
    mod sql_stmt_prefix {
        use crate::{lexer, parser::Parser};
        #[test]
        fn explain() {
            let input = r#"EXPLAIN;"#.as_bytes().to_vec();
            let mut l = lexer::Lexer::new(&input, "parser_test_fail");
            let toks = l.run();
            assert_eq!(l.errors.len(), 0);
            let mut parser = Parser::new(toks, "parser_test_fail");
            let _ = parser.parse();
            assert_ne!(parser.errors.len(), 0);
        }

        #[test]
        fn explain_query_plan() {
            let input = r#"EXPLAIN QUERY PLAN;"#.as_bytes().to_vec();
            let mut l = lexer::Lexer::new(&input, "parser_test_fail");
            let toks = l.run();
            assert_eq!(l.errors.len(), 0);
            let mut parser = Parser::new(toks, "parser_test_fail");
            let _ = parser.parse();
            assert_ne!(parser.errors.len(), 0);
        }
    }
}
The test_group_fail! macro also extends the same macro from the lexer module and appends the check for errors after parsing. Both macro_rules!:

macro_rules! test_group_pass_assert {
    ($group_name:ident,$($ident:ident:$input:literal=$expected:expr),*) => {
        mod $group_name {
            use crate::{lexer, parser::Parser, types::Type, types::Keyword};
            $(
                #[test]
                fn $ident() {
                    let input = $input.as_bytes().to_vec();
                    let mut l = lexer::Lexer::new(&input, "parser_test_pass");
                    let toks = l.run();
                    assert_eq!(l.errors.len(), 0);

                    let mut parser = Parser::new(toks, "parser_test_pass");
                    let ast = parser.parse();
                    assert_eq!(parser.errors.len(), 0);
                    assert_eq!(ast.into_iter()
                        .map(|o| o.unwrap().token().ttype.clone())
                        .collect::<Vec<Type>>(), $expected);
                }
            )*
        }
    };
}

macro_rules! test_group_fail {
    ($group_name:ident,$($ident:ident:$input:literal),*) => {
        mod $group_name {
            use crate::{lexer, parser::Parser};
            $(
                #[test]
                fn $ident() {
                    let input = $input.as_bytes().to_vec();
                    let mut l = lexer::Lexer::new(&input, "parser_test_fail");
                    let toks = l.run();
                    assert_eq!(l.errors.len(), 0);

                    let mut parser = Parser::new(toks, "parser_test_fail");
                    let _ = parser.parse();
                    assert_ne!(parser.errors.len(), 0);
                }
            )*
        }
    };
}
Macro Pitfalls

- rust-analyzer plays badly inside macro_rules!
  - no real intellisense
  - no goto definition
  - no hover for signatures of literals and language constructs
- cargo fmt does not format or indent inside of macro_rules! and macro invocations
- treesitter (yes I use neovim, btw 😼) and chroma (used on this site) sometimes struggle with syntax highlighting of macro_rules!
- documentation is sparse at best

Matching Characters

When writing a lexer, comparing characters is the part everything else depends on. Rust makes this enjoyable via the matches! macro and the patterns the match statement accepts. For instance, checking if the current character is a valid sqlite number can be done by a simple matches! macro invocation:

/// Specifically matches https://www.sqlite.org/syntax/numeric-literal.html
fn is_sqlite_num(&self) -> bool {
    matches!(self.cur(),
        // exponent notation with +-
        '+' | '-' |
        // sqlite allows for separating numbers by _
        '_' |
        // floating point
        '.' |
        // hexadecimal
        'a'..='f' | 'A'..='F' |
        // decimal
        '0'..='9')
}
Similarly testing for identifiers is as easy as the above:

fn is_ident(&self, c: char) -> bool {
    matches!(c, 'a'..='z' | 'A'..='Z' | '_' | '0'..='9')
}
Symbol detection in the main loop of the lexer works exactly the same:

pub fn run(&mut self) -> Vec<Token> {
    let mut r = vec![];
    while !self.is_eof() {
        match self.cur() {
            // skipping whitespace
            '\t' | '\r' | ' ' | '\n' => {}
            '*' => r.push(self.single(Type::Asteriks)),
            ';' => r.push(self.single(Type::Semicolon)),
            ',' => r.push(self.single(Type::Comma)),
            '%' => r.push(self.single(Type::Percent)),
            _ => {
                // omitted error handling for unknown symbols
                panic!("whoops");
            }
        }
        self.advance();
    }
    r
}
Patterns in match statements and matches! blocks are arguably the most useful feature of Rust.

Matching Tokens

Once the lexer converts the character stream into a stream of Token structure instances with positional and type information, the parser can consume this stream and produce an abstract syntax tree. The parser has to recognise patterns in its input by detecting token types. This again is a case where Rust's match statement shines.

Each Token contains a t field for its type, see below.

pub use self::keyword::Keyword;

#[derive(Debug, PartialEq, Clone)]
pub enum Type {
    Keyword(keyword::Keyword),
    Ident(String),
    Number(f64),
    String(String),
    Blob(Vec<u8>),
    Boolean(bool),
    ParamName(String),
    Param(usize),

    Dot,
    Asteriks,
    Semicolon,
    Percent,
    Comma,

    Eof,
}
Lets look at the sql_stmt_prefix method of the parser. This function parses the EXPLAIN statement, which - according to the sqlite documentation - prefixes all other sql statements, hence the name. The corresponding syntax diagram is shown below:

The implementation follows this diagram. The Explain stmt is optional, thus if the current token type does not match Type::Keyword(Keyword::EXPLAIN), we call the sql_stmt function to process the statements on the right of the syntax diagram.

If the token matches it gets consumed and the next check is for the second possible path in the EXPLAIN diagram: QUERY PLAN. This requires both the QUERY and the PLAN keywords consecutively - both are consumed.

impl<'a> Parser<'a> {
    fn sql_stmt_prefix(&mut self) -> Option<Box<dyn Node>> {
        match self.cur()?.ttype {
            Type::Keyword(Keyword::EXPLAIN) => {
                let mut e = Explain {
                    t: self.cur()?.clone(),
                    child: None,
                };
                self.advance(); // skip EXPLAIN

                // path for EXPLAIN->QUERY->PLAN
                if self.is(Type::Keyword(Keyword::QUERY)) {
                    self.consume(Type::Keyword(Keyword::QUERY));
                    self.consume(Type::Keyword(Keyword::PLAN));
                } // else path is EXPLAIN->*_stmt

                e.child = self.sql_stmt();
                Some(Box::new(e))
            }
            _ => self.sql_stmt(),
        }
    }
}
This shows the basic usage of pattern matching in the parser. Another example is the literal_value function; its sole purpose is to create the Literal node for all literals.

It discards most embedded enum values, but checks for some specific keywords, because they are considered keywords, while being literals:

impl<'a> Parser<'a> {
    /// see: https://www.sqlite.org/syntax/literal-value.html
    fn literal_value(&mut self) -> Option<Box<dyn Node>> {
        let cur = self.cur()?;
        match cur.ttype {
            Type::String(_)
            | Type::Number(_)
            | Type::Blob(_)
            | Type::Keyword(Keyword::NULL)
            | Type::Boolean(_)
            | Type::Keyword(Keyword::CURRENT_TIME)
            | Type::Keyword(Keyword::CURRENT_DATE)
            | Type::Keyword(Keyword::CURRENT_TIMESTAMP) => {
                let s: Option<Box<dyn Node>> = Some(Box::new(Literal { t: cur.clone() }));
                self.advance();
                s
            }
            _ => {
                // omitted error handling for invalid literals
                panic!("whoops");
            }
        }
    }
}
Fancy error display

While the implementation itself is repetitive and not that interesting, I still wanted to showcase the way both the lexer and the parser handle errors and how these errors are displayed to the user. A typical error would be to miss a semicolon at the end of a sql statement:

-- ./vacuum.sql
-- rebuilding the database into a new file
VACUUM INTO 'optimized.db'

Passing this file to sqleibniz promptly errors:

Optionals

Rust error handling is fun to do and propagation with the ?-Operator just makes sense. But Rust goes even further: not only can I modify the value inside of the Option if there is one, I can even check conditions or provide default values.

is_some_and

Sometimes you simply need to check if the next character of the input stream
is available and passes a predicate. is_some_and exists for this reason:

fn next_is(&mut self, c: char) -> bool {
    self.source
        .get(self.pos + 1)
        .is_some_and(|cc| *cc == c as u8)
}

fn is(&self, c: char) -> bool {
    self.source.get(self.pos).is_some_and(|cc| *cc as char == c)
}
The above is really nice to read, the following not so much:

fn next_is(&mut self, c: char) -> bool {
    match self.source.get(self.pos + 1) {
        Some(cc) => *cc == c as u8,
        _ => false,
    }
}

fn is(&self, c: char) -> bool {
    match self.source.get(self.pos) {
        Some(cc) => *cc as char == c,
        _ => false,
    }
}
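A minimal, self-contained sketch of what is_some_and buys you (mine, not from the post; is_some_and is stable since Rust 1.70) - for None the predicate never runs and the whole expression is simply false, no unwrapping needed:

```rust
fn main() {
    let some_char: Option<char> = Some('a');
    let no_char: Option<char> = None;

    // The closure only runs when a value is present.
    assert!(some_char.is_some_and(|c| c.is_alphabetic()));
    // For None the expression short-circuits to false.
    assert!(!no_char.is_some_and(|c| c.is_alphabetic()));
    println!("ok");
}
```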
map

Since the input is a Vector of u8, not a Vector of char, this conversion is done with map:

fn next(&self) -> Option<char> {
    self.source.get(self.pos + 1).map(|c| *c as char)
}

Instead of unwrapping and rewrapping the updated value:

fn next(&self) -> Option<char> {
    match self.source.get(self.pos + 1) {
        Some(c) => Some(*c as char),
        _ => None,
    }
}
map_or

In a similar fashion, the sqleibniz parser uses map_or to return the check for a type, but only if the current token is Some:

fn next_is(&self, t: Type) -> bool {
    self.tokens
        .get(self.pos + 1)
        .map_or(false, |tok| tok.ttype == t)
}

fn is(&self, t: Type) -> bool {
    self.cur().map_or(false, |tok| tok.ttype == t)
}
Again, replacing the not so idiomatic solutions:

fn next_is(&self, t: Type) -> bool {
    match self.tokens.get(self.pos + 1) {
        None => false,
        Some(token) => token.ttype == t,
    }
}

fn is(&self, t: Type) -> bool {
    if let Some(tt) = self.cur() {
        return tt.ttype == t;
    }
    false
}
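A similar standalone sketch for map_or (again mine, not from the codebase): an out-of-bounds lookup yields None, which collapses to the supplied default instead of panicking:

```rust
fn main() {
    let tokens = vec!["SELECT", "*", ";"];

    // Some(tok) -> run the closure; None -> fall back to false.
    assert!(tokens.get(0).map_or(false, |tok| *tok == "SELECT"));
    assert!(!tokens.get(99).map_or(false, |tok| *tok == "SELECT"));
    println!("ok");
}
```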
Iterators 💖

Filtering characters

Rust number parsing does not allow _, sqlite number parsing accepts _, thus the lexer also consumes them, but filters these characters before parsing the input via the rust number parsing logic:

let str = self
    .source
    .get(start..self.pos)
    .unwrap_or_default()
    .iter()
    .map(|c| *c as char)
    .filter(|c| *c != '_')
    .collect::<String>();

Tip

I know you aren't supposed to use unwrap and all derivatives, however in this situation the parser either way does not accept empty strings as valid numbers, thus it will fail either way on the default value.

In Go I would have to first iterate the character list with a for loop and write each byte into a string buffer (in which each write could fail btw, or at least can return an error) and afterwards I have to create a string from the strings.Builder structure.

s := source[start:l.pos]
b := strings.Builder{}
b.Grow(len(s))
for _, c := range s {
    if c != '_' {
        b.WriteByte(c)
    }
}
s = b.String()
Checking characters

Sqlite accepts hexadecimal data as blobs: x'<hex>', to verify the input is correct, I have to check every character in this array to be a valid hexadecimal. Furthermore I need positional information for correct error display, for this I reuse the self.string() method and use the chars() iterator creating function and the enumerate function.

if let Ok(str_tok) = self.string() {
    if let Type::String(str) = &str_tok.ttype {
        let mut had_bad_hex = false;
        for (idx, c) in str.chars().enumerate() {
            if !c.is_ascii_hexdigit() {
                // error creation and so on omitted here
                had_bad_hex = true;
                break;
            }
        }
        if had_bad_hex {
            break;
        }

        // valid hexadecimal data in blob
    }
} else {
    // error handling omitted
}
The error display produces the following error if an invalid character inside of a blob is found:

Info

Thanks for reading this far 😼.

If you found an error (technical or semantic), please email me a nudge in the right direction at [email protected] ([email protected]). | 2024-11-08T10:17:12 | en | train
42,058,946 | null | 2024-11-06T09:56:00 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,058,956 | whoitwas | 2024-11-06T09:57:03 | null | null | null | 4 | null | [
42059334,
42059475,
42058957,
42059323
] | null | true | null | null | null | null | null | null | null | train |
42,059,011 | benithemaker | 2024-11-06T10:00:06 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,029 | lapnect | 2024-11-06T10:01:21 | Zeros and Poles | null | https://www.kuniga.me/blog/2024/10/02/poles.html | 5 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,032 | thunderbong | 2024-11-06T10:01:35 | Why we're still waiting for Ubuntu Core Desktop | null | https://www.theregister.com/2024/11/06/ubuntu_core_desktop_waiting/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,074 | panoramas4good | 2024-11-06T10:03:57 | Show HN: Obstracts – the feed reader for cyber-security teams | null | https://github.com/muchdogesec/obstracts | 1 | 0 | null | null | null | no_error | GitHub - muchdogesec/obstracts: Turn any blog into structured threat intelligence. | null | muchdogesec | Obstracts
Before you begin...
We offer a fully hosted web version of Obstracts which includes many additional features over those in this codebase. You can find out more about the web version here.
Overview
Obstracts takes a blog ATOM or RSS feed and converts into structured threat intelligence.
Organisations subscribe to lots of blogs for security information. These blogs contain interesting indicators of malicious activity (e.g. malicious URL).
To help automate the extraction of this information, Obstracts automatically downloads blog articles and extracts indicators for viewing to a user.
It works at a high level like so:
A feed is added to Obstracts by user (selecting profile to be used)
Obstracts uses history4feed as a microservice to handle the download and storage of posts.
The HTML from history4feed for each blog post is converted to markdown using file2txt in html mode
The markdown is run through txt2stix where txt2stix pattern extractions/whitelists/aliases are run based on staff defined profile
STIX bundles are generated for each post of the blog, and stored in an ArangoDB database called obstracts_database and Collections names matching the blog
A user can access the bundle data or specific objects in the bundle via the API
As new posts are added to remote blogs, user makes request to update blog and these are requested by history4feed
tl;dr
Watch the demo.
Install
Download and configure
# clone the latest code
git clone https://github.com/muchdogesec/obstracts
Configuration options
Obstracts has various settings that are defined in an .env file.
To create a template for the file:
To see more information about how to set the variables, and what they do, read the .env.markdown file.
Build the Docker Image
sudo docker compose build
Start the server
Access the server
The webserver (Django) should now be running on: http://127.0.0.1:8001/
You can access the Swagger UI for the API in a browser at: http://127.0.0.1:8001/api/schema/swagger-ui/
Contributing notes
Obstracts is made up of different core external components that support most of its functionality.
At a high-level the Obstracts pipeline looks like this: https://miro.com/app/board/uXjVKD2mg_0=/
Generally if you want to improve how Obstracts performs functionality, you should address the changes in;
history4feed: responsible for downloading the blog posts, including the historical archive, and keep posts updated
file2txt: converts the HTML post content into a markdown file (which is used to extract data from)
txt2stix: turns the markdown file into STIX objects
stix2arango: manages the logic to insert the STIX objects into the database
dogesec_commons: where the API Objects, Profiles, Extractors, Whitelist and Alias endpoints are imported from
For anything else, then the Obstracts codebase is where you need to be :)
Useful supporting tools
Turn any blog post into structured threat intelligence
An up-to-date list of threat intel blogs that post cyber threat intelligence research
Support
Minimal support provided via the DOGESEC community.
License
Apache 2.0.
| 2024-11-07T14:54:07 | en | train |
42,059,083 | ethanleetech | 2024-11-06T10:04:37 | null | null | null | 1 | null | [
42059084
] | null | true | null | null | null | null | null | null | null | train |
42,059,131 | aquray | 2024-11-06T10:06:52 | Gradle for Maven and Gradle | null | https://mavenlibs.com/maven/search/Gradle | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,138 | thunderbong | 2024-11-06T10:07:12 | Generating documentation from tests using files-to-prompt and LLM | null | https://til.simonwillison.net/llms/docs-from-tests | 4 | 0 | [
42059230,
42059237
] | null | null | no_error | Generating documentation from tests using files-to-prompt and LLM | null | null |
I was experimenting with wasmtime-py today and found the current documentation didn't quite give me the information that I needed.
The package has a solid looking test suite, so I decided to see if I could generate additional documentation based on that.
I started with a checkout of the repo:
cd /tmp
git clone https://github.com/bytecodealliance/wasmtime-py
The tests are all in the test/ folder, so I used my files-to-prompt tool to turn every .py file in that folder into a single prompt, using the XML-ish format that Claude likes (the -c option):
files-to-prompt wasmtime-py/tests -e py -c
Piping that through ttok shows that it's about 34,780 OpenAI tokens. I pasted the whole thing ( | pbcopy to copy to my clipboard) into my Claude token counter tool and got 43,490 - easily enough to fit Claude 3.5 Sonnet's 200,000 limit.
So I ran this:
files-to-prompt -e py wasmtime-py/tests -c | \
llm -m claude-3.5-sonnet -s \
'write detailed usage documentation including realistic examples'
I'm using llm-claude-3 here and a system prompt of:
write detailed usage documentation including realistic examples
Here's a short excerpt of the result:
Basic Usage
Engine and Store
from wasmtime import Engine, Store
# Create an engine
engine = Engine()
# Create a store
store = Store(engine)
The Engine handles compilation settings and caching. The Store holds runtime state for instances.
Loading Modules
from wasmtime import Module
# From WAT text format
module = Module(engine, '(module)')
# From binary Wasm
with open('module.wasm', 'rb') as f:
module = Module(engine, f.read())
You can see the full transcript here, saved using llm logs -c | pbcopy (I then hand-edited in a <details><summary> to hide the lengthy piped input). As always I'm sharing the transcript in a private Gist to keep this AI-generated, unverified text from being indexed by search engines.
At the end of the first chunk of output Claude offered the following:
This documentation covers the core functionality. The bindings also support more advanced features like:
Component model and interface types
Resource types and references
Custom linking and importing
Memory management controls
Execution limits and interruption
Let me know if you would like me to expand on any of these topics!
So I followed up with another prompt (using llm -c for "continue current conversation"):
llm -c 'write a detailed section about memory management and one about execution limits'
This produced a useful continuation of the documentation.
How good is this documentation? It's pretty solid! The only thing it had to go on was the content of those tests, so I can be reasonably confident it didn't make any glaringly terrible mistakes and that the examples it gave me are more likely than not to execute.
Someone with more depth of experience with the project than me could take this as an initial draft and iterate on it to create verified, generally useful documentation.
Created 2024-11-05T14:33:15-08:00, updated 2024-11-05T14:48:16-08:00 · History · Edit
| 2024-11-08T05:46:15 | en | train |
42,059,177 | dndndnd | 2024-11-06T10:09:22 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,211 | jakeprins | 2024-11-06T10:11:05 | null | null | null | 1 | null | [
42059212
] | null | true | null | null | null | null | null | null | null | train |
42,059,234 | jakeprins | 2024-11-06T10:12:24 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,276 | jakeprins | 2024-11-06T10:14:48 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,289 | godwinlawrence | 2024-11-06T10:15:29 | Support Home – DigitalOcean Documentation | null | https://docs.digitalocean.com/support/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,336 | ksec | 2024-11-06T10:18:45 | Understanding Ruby 3.3 Concurrency: A Comprehensive Guide | null | https://blog.bestwebventures.in/understanding-ruby-concurrency-a-comprehensive-guide | 25 | 1 | [
42068567
] | null | null | null | null | null | null | null | null | null | train |
42,059,350 | pringk02 | 2024-11-06T10:19:41 | Demystifying Kolmogorov-Arnold Networks | null | https://daniel-bethell.co.uk/posts/kan/ | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,364 | ashmil | 2024-11-06T10:20:31 | Do you need a personal CRM as a founder | null | https://worktodo.today/ | 2 | 1 | [
42064323
] | null | null | no_error | Personal CRM for Founders and Professionals | null | null |
Organize your work and life
Stay on top of your personal and professional relationships with our advanced task manager CRM. It’s designed to help you organize your day and manage follow-ups effortlessly.
2:00 PM - Call with a client
Business
Call with a client at 2:00 PM
1 hour from now
Email review
Business
Review the new email copy and provide feedback
2 hours from now
Email to the team
Business
Send an email to the team about the new feature launch
3 hours from now
Idle Contacts
Sarah Lee
Personal
No recent activity
John Smith
Personal
No recent activity
Acme Corp
Business
No recent activity
Notes Area
Key learnings from the customer interview
2 days ago
Product roadmap for Q3
5 days ago
Action items from the team off-site
1 week ago
| 2024-11-08T14:35:56 | en | train |
42,059,379 | abagh999 | 2024-11-06T10:21:43 | Show HN: Free tool to make video memes in seconds | Hey HN,<p>I've noticed that many people here use memes to promote their products, so I created a tool that makes video meme creation easy!<p>The memes generated are in Shorts/Reels format, making them ready to post on social media without any further editing.<p>I believe it’s the a good tool for anyone wanting to promote products through viral videos on social media or just have fun with memes.<p>Give it a try, and I’d love to hear your feedback! | https://videomemes.online/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,384 | mezod | 2024-11-06T10:21:53 | Ask HN: What's Your Biggest Complex? | null | null | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,387 | southernplaces7 | 2024-11-06T10:22:18 | null | null | null | 2 | null | [
42064697
] | null | true | null | null | null | null | null | null | null | train |
42,059,391 | YukiTanak8 | 2024-11-06T10:22:35 | Glimmix | null | https://github.com/ZachWolpe/GLIMMIX/blob/main/modules/dependencies.py | 1 | 0 | [
42059392
] | null | null | null | null | null | null | null | null | null | train |
42,059,518 | Gizopedia | 2024-11-06T10:30:58 | null | null | null | 1 | null | [
42059519
] | null | true | null | null | null | null | null | null | null | train |
42,059,597 | walterbell | 2024-11-06T10:36:55 | How the hell did Jane Street alumni end up creating FTX? (2022) | null | https://www.ft.com/content/679d0fa9-8491-44f5-8336-f390d6c877fe | 2 | 1 | [
42060775
] | null | null | null | null | null | null | null | null | null | train |
42,059,602 | RafelMri | 2024-11-06T10:37:10 | null | null | null | 13 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,606 | null | 2024-11-06T10:37:27 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,059,631 | djaygour | 2024-11-06T10:39:16 | Problems Faced as Online Business Owner | As an online brand owner, the journey of creating engaging content and high-quality products is often filled with passion and dedication. However, facing the frustration of high cart abandonment rates can feel like a heavy weight on your shoulders. You invest countless hours curating the perfect collection, crafting compelling marketing messages, and building a vibrant online presence, only to watch potential customers slip away at the final hurdle—checkout. This scenario is all too common and raises critical questions that can keep you up at night: Why did they abandon their cart? What went wrong? How can I improve my processes to retain these customers and turn their interest into sales?<p>High cart abandonment rates can stem from various factors that significantly hinder the customer experience. A clunky checkout process is often a primary culprit; if customers encounter long forms, confusing navigation, or unexpected costs at the last minute, frustration can lead them to abandon their carts. Additionally, inadequate mechanisms for gathering customer feedback can leave you in the dark about what specifically deterred potential buyers. Without understanding their pain points—whether it’s a lack of payment options, unclear return policies, or concerns about security—it becomes challenging to make meaningful improvements.<p>Introducing KuwarPay: Seamless Social Media Shopping<p>As an online shopper myself, I've experienced the frustration of clunky checkout processes and abandoned carts. That's why I've spent countless hours building KuwarPay - a game-changing payment solution designed specifically for social media shopping.<p>With KuwarPay, customers can shop effortlessly from their favorite social media platforms. 
Here's how it works(in MVP stage): customers simply click the link in your post, checkout securely, pay via PhonePe, provide feedback, and receive instant order confirmation. Our exclusive benefits include simplified checkout, exclusive discounts, and enhanced customer experience.<p>Our next goal is to enable seamless shopping from Instagram, Facebook, and Twitter, making e-commerce more accessible and enjoyable for everyone.<p>We're excited to share KuwarPay with you and would love to hear your thoughts!<p>NOW, WE NEED YOUR HELP!<p>To make KuwarPay unstoppable, please share your thoughts on the following:<p>1. What features would make social media shopping unforgettable?<p>2. What pain points would you like KuwarPay to solve?<p>3. How can we make social media shopping a game-changer?<p>4. What would make you switch to KuwarPay?<p>5. Any other suggestions?<p>Join the conversation! Please share your thoughts on what challenges or problems you face when trying to sell online and what challenges or problems you face as customer when trying to buy from a social media as you saw a product ad. Your input will shape<p>the future of commerce! | null | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,639 | fforflo | 2024-11-06T10:39:44 | PgPDF: Pdf Type and Functions for Postgres | null | https://github.com/Florents-Tselai/pgpdf | 6 | 0 | null | null | null | no_error | GitHub - Florents-Tselai/pgpdf: pdf type for Postgres | null | Florents-Tselai | pgPDF: pdf type for Postgres
This extension for PostgreSQL provides a pdf data type and assorted functions.
The actual PDF parsing is done by poppler.
SELECT '/tmp/pgintro.pdf'::pdf;
pdf
----------------------------------------------------------------------------------
PostgreSQL Introduction +
Digoal.Zhou +
7/20/2011Catalog +
PostgreSQL Origin
Usage
Create a pdf value by casting either a text path or a bytea blob:
SELECT '/path/to.pdf'::pdf;
SELECT ''::bytea::pdf;
The following functions are available:
pdf_title(pdf) → text
pdf_author(pdf) → text
pdf_num_pages(pdf) → integer
Total number of pages in the document
pdf_page(pdf, integer) → text
Get the i-th page as text
pdf_creator(pdf) → text
pdf_keywords(pdf) → text
pdf_metadata(pdf) → text
pdf_version(pdf) → text
pdf_subject(pdf) → text
pdf_creation(pdf) → timestamp
pdf_modification(pdf) → timestamp
Below are some examples
wget https://wiki.postgresql.org/images/e/ea/PostgreSQL_Introduction.pdf -O /tmp/pgintro.pdf
Content
SELECT '/tmp/pgintro.pdf'::pdf;
pdf
----------------------------------------------------------------------------------
PostgreSQL Introduction +
Digoal.Zhou +
7/20/2011Catalog +
PostgreSQL Origin
SELECT pdf_title('/tmp/pgintro.pdf');
pdf_title
-------------------------
PostgreSQL Introduction
(1 row)
Getting a subset of pages
SELECT pdf_num_pages('/tmp/pgintro.pdf');
pdf_num_pages
---------------
24
(1 row)
SELECT pdf_page('/tmp/pgintro.pdf', 1);
pdf_page
------------------------------
Catalog +
PostgreSQL Origin +
Layout +
Features +
Enterprise Class Attribute+
Case
(1 row)
SELECT pdf_subject('/tmp/pgintro.pdf');
pdf_subject
-------------
(1 row)
Metadata
SELECT pdf_author('/tmp/pgintro.pdf');
pdf_author
------------
周正中
(1 row)
SELECT pdf_creation('/tmp/pgintro.pdf');
pdf_creation
--------------------------
Wed Jul 20 11:13:37 2011
(1 row)
SELECT pdf_modification('/tmp/pgintro.pdf');
pdf_modification
--------------------------
Wed Jul 20 11:13:37 2011
(1 row)
SELECT pdf_creator('/tmp/pgintro.pdf');
pdf_creator
------------------------------------
Microsoft® Office PowerPoint® 2007
(1 row)
SELECT pdf_metadata('/tmp/pgintro.pdf');
pdf_metadata
--------------
(1 row)
SELECT pdf_version('/tmp/pgintro.pdf');
pdf_version
-------------
PDF-1.5
(1 row)
FTS
You can also perform full-text search (FTS), since you can work on a pdf file like normal text.
SELECT '/tmp/pgintro.pdf'::pdf::text @@ to_tsquery('postgres');
?column?
----------
t
(1 row)
SELECT '/tmp/pgintro.pdf'::pdf::text @@ to_tsquery('oracle');
?column?
----------
f
(1 row)
bytea
If you don't have the PDF file in your filesystem but have already stored its content in a bytea column,
you can cast a bytea to pdf, like so:
SELECT pg_read_binary_file('/tmp/pgintro.pdf')::pdf
Installation
sudo apt install -y libpoppler-glib-dev pkg-config
cd /tmp
git clone https://github.com/Florents-Tselai/pgpdf.git
cd pgpdf
make
make install
Warning: Reading arbitrary binary data (PDF) into your database can pose security risks.
Only use this for files you trust.
| 2024-11-08T03:57:54 | en | train |
42,059,641 | dj_sorry | 2024-11-06T10:40:02 | null | null | null | 4 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,646 | zipmapfoldright | 2024-11-06T10:40:18 | The Real State of VR (and the long game) [video] | null | https://www.youtube.com/watch?v=9JcRXUWA_s0 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,650 | amalinovic | 2024-11-06T10:40:39 | Optimize Database Performance in Ruby on Rails and ActiveRecord | null | https://blog.appsignal.com/2024/10/30/optimize-database-performance-in-ruby-on-rails-and-activerecord.html | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,656 | djaygour | 2024-11-06T10:40:57 | How Social Media Is Emerging as New Sales Channel for Online Brands? | As an online brand owner, the journey of creating engaging content and high-quality products is often filled with passion and dedication. However, facing the frustration of high cart abandonment rates can feel like a heavy weight on your shoulders. You invest countless hours curating the perfect collection, crafting compelling marketing messages, and building a vibrant online presence, only to watch potential customers slip away at the final hurdle—checkout. This scenario is all too common and raises critical questions that can keep you up at night: Why did they abandon their cart? What went wrong? How can I improve my processes to retain these customers and turn their interest into sales?<p>High cart abandonment rates can stem from various factors that significantly hinder the customer experience. A clunky checkout process is often a primary culprit; if customers encounter long forms, confusing navigation, or unexpected costs at the last minute, frustration can lead them to abandon their carts. Additionally, inadequate mechanisms for gathering customer feedback can leave you in the dark about what specifically deterred potential buyers. Without understanding their pain points—whether it’s a lack of payment options, unclear return policies, or concerns about security—it becomes challenging to make meaningful improvements.<p>Introducing KuwarPay: Seamless Social Media Shopping<p>As an online shopper myself, I've experienced the frustration of clunky checkout processes and abandoned carts. That's why I've spent countless hours building KuwarPay - a game-changing payment solution designed specifically for social media shopping.<p>With KuwarPay, customers can shop effortlessly from their favorite social media platforms. 
Here's how it works(in MVP stage): customers simply click the link in your post, checkout securely, pay via PhonePe, provide feedback, and receive instant order confirmation. Our exclusive benefits include simplified checkout, exclusive discounts, and enhanced customer experience.<p>Our next goal is to enable seamless shopping from Instagram, Facebook, and Twitter, making e-commerce more accessible and enjoyable for everyone.<p>We're excited to share KuwarPay with you and would love to hear your thoughts!<p>NOW, WE NEED YOUR HELP!<p>To make KuwarPay unstoppable, please share your thoughts on the following:<p>1. What features would make social media shopping unforgettable?<p>2. What pain points would you like KuwarPay to solve?<p>3. How can we make social media shopping a game-changer?<p>4. What would make you switch to KuwarPay?<p>5. Any other suggestions?<p>Join the conversation! Please share your thoughts on what challenges or problems you face when trying to sell online and what challenges or problems you face as customer when trying to buy from a social media as you saw a product ad. Your input will shape<p>the future of commerce! | null | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,668 | tessierashpool9 | 2024-11-06T10:41:42 | OpenAI's o1 model leaked on Friday and it is wild – here's what happened | null | https://www.tomsguide.com/ai/chatgpt/openais-o1-model-leaked-on-friday-and-it-is-wild-heres-what-happened | 3 | 3 | [
42060281
] | null | null | no_error | OpenAI’s o1 model leaked on Friday and it is wild — here’s what happened | 2024-11-04T11:04:34+00:00 | Ryan Morrison |
(Image credit: SOPA Images / Contributor via Getty Images)
OpenAI is set to release the full version of its powerful o1 reasoning model sometime this year, but an unexpected leak last week means we may have already seen it in action — and it is even better than we expected.

In September OpenAI unveiled a new type of AI model that takes time to reason through a problem before responding. This was added to ChatGPT in the form of o1-preview and o1-mini, neither of which demonstrated the full capabilities of the final o1 model, but did show a major improvement in terms of accuracy over GPT-4.

CEO Sam Altman says o1 is a divergence from the GPT-style models normally released, including GPT-4o, which powers Advanced Voice. During a briefing with OpenAI, I've been told o1 full is a significant improvement over the preview, and the leak seems to confirm that is the case.

Over about two hours on Friday, users could access what is thought to be the full version of o1 (OpenAI has not confirmed) by changing a parameter in the URL. The new model will also be able to analyze images and access tools like web search and data analysis.

An OpenAI spokesperson told Tom's Guide: "We were preparing limited external access to the OpenAI o1 model and ran into an issue. This has now been fixed."

What was revealed in the o1 leak?

"HUGE LEAK 🔥 OpenAI full o1 Chain of Thought has native image capabilities. See the response for recent SpaceX launch image. It walks through the details of each part of image step by step. pic.twitter.com/lxHlI435bO" (November 4, 2024)

Ever since the release of the original o1-preview model, OpenAI insiders have been boasting about the full capabilities of the model once the preview tag is removed. Theories suggest that the preview was trained on an earlier version of the GPT models, whereas the full model was trained from scratch. Either way, the leak seemed to prove that they were right.

In one example a user was able to get it to solve an image puzzle.
The AI spent nearly two minutes thinking through the problem, but it demonstrated the huge potential once it is able to review images, documents and other multimedia inputs.

In another example, a user was able to have it walk through every single element of an image showing a recent SpaceX rocket launch. It went into considerable detail about color and motion. This could be huge for AI image generation.

It isn't clear when OpenAI will unveil the full version of o1 properly, but what we do know is that it will be a significant advancement in AI. It is likely to be sometime in the next few weeks, as most AI companies seem to be holding back until after the U.S. presidential election.
| 2024-11-08T09:48:53 | en | train |
42,059,671 | ofirg | 2024-11-06T10:41:50 | Wiki: Secretariat(horse) | null | https://en.wikipedia.org/wiki/Secretariat_(horse) | 1 | 1 | [
42059672
] | null | null | null | null | null | null | null | null | null | train |
42,059,690 | gfortaine | 2024-11-06T10:42:48 | null | null | null | 2 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,727 | ingve | 2024-11-06T10:45:45 | Entity extraction using OpenAI structured outputs mode | null | http://blog.pamelafox.org/2024/11/entity-extraction-using-openai.html | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,735 | shaicoleman | 2024-11-06T10:46:18 | v1.0 for Steampipe, Powerpipe, Flowpipe, 116 plugins, and 44 mods | null | https://turbot.com/blog/2024/10/open-source-v1 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,741 | tomohawk | 2024-11-06T10:46:34 | null | null | null | 2 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,749 | thisisroushan | 2024-11-06T10:47:28 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,772 | lapnect | 2024-11-06T10:49:03 | Atari 800: Next-Generation Playfields | null | https://bumbershootsoft.wordpress.com/2024/11/02/atari-800-next-generation-playfields/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,780 | peutetre | 2024-11-06T10:49:40 | China's J-35A Stealth Fighter Officially Breaks Cover | null | https://www.twz.com/air/chinas-j-35a-stealth-fighter-officially-breaks-cover | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,786 | ingve | 2024-11-06T10:50:02 | Exporting Kindle Highlights for Personal Documents | null | https://mjtsai.com/blog/2024/11/05/exporting-kindle-highlights-for-personal-documents/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,788 | lapnect | 2024-11-06T10:50:06 | Complete Face Marker Removal in Blender | null | https://www.blendernation.com/2024/11/06/complete-face-marker-removal-free-mini-series/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,838 | gniting | 2024-11-06T10:53:05 | null | null | null | 2 | null | null | null | true | null | null | null | null | null | null | null | train |
42,059,878 | walterbell | 2024-11-06T10:56:22 | F# for Fun and Profit | null | https://fsharpforfunandprofit.com/ | 5 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,980 | ingve | 2024-11-06T11:03:32 | The Unreasonable Effectiveness of Naming Integers | null | https://ziglang.org/devlog/2024/#2024-11-04 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,059,995 | matvei112 | 2024-11-06T11:04:24 | null | null | null | 1 | null | [
42059996
] | null | true | null | null | null | null | null | null | null | train |
42,060,006 | sipofwater | 2024-11-06T11:05:10 | Termux Processes Killed On Android 15 For OnePlus Devices | null | https://github.com/termux/termux-app/issues/4219 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,060,011 | thunderbong | 2024-11-06T11:05:42 | Latency Optimization – OpenAI API | null | https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs | 1 | 0 | null | null | null | no_article | null | null | null | null | 2024-11-08T13:19:51 | null | train |
42,060,020 | perks_12 | 2024-11-06T11:06:05 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,060,031 | RafelMri | 2024-11-06T11:07:00 | null | null | null | 2 | null | null | null | true | null | null | null | null | null | null | null | train |
42,060,064 | prasanthch | 2024-11-06T11:09:45 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,060,076 | abhi-bunny | 2024-11-06T11:10:31 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,060,081 | walterbell | 2024-11-06T11:10:42 | The Product-Market Fit Scale | null | https://iwantproductmarketfit.substack.com/p/the-product-market-fit-scale | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,060,105 | remote_tools | 2024-11-06T11:12:34 | null | null | null | 1 | null | [
42060106
] | null | true | null | null | null | null | null | null | null | train |
42,060,107 | dotcoma | 2024-11-06T11:12:40 | Ask HN: How important was Twitter in the 2024 presidential election? | null | null | 1 | 1 | [
42062077
] | null | null | null | null | null | null | null | null | null | train |
42,060,120 | mpweiher | 2024-11-06T11:13:47 | Seem like peanut allergies were once rare and now everyone has them? | null | https://news.harvard.edu/gazette/story/2024/10/excerpt-from-blind-spots-by-marty-makary/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,060,141 | mattgecko | 2024-11-06T11:15:02 | Teleprompter X – Lifetime $149.99 to FREE | null | https://apps.apple.com/us/app/teleprompter-x/id6502788841 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,060,160 | pseudolus | 2024-11-06T11:16:25 | Quest for a deeper theory of fundamental particles hits a curious snag | null | https://www.science.org/content/article/quest-deeper-theory-fundamental-particles-hits-curious-snag | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,060,166 | albexl | 2024-11-06T11:17:16 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,060,174 | tosh | 2024-11-06T11:17:42 | What you know that just ain't so | null | https://world.hey.com/dhh/what-you-know-that-just-ain-t-so-ab6f4bb1 | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,060,199 | flatlogic_team | 2024-11-06T11:19:42 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,060,215 | refset | 2024-11-06T11:20:59 | Bthreads: A Simple and Easy Paradigm for Clojure | null | https://thomascothran.tech/2024/10/a-new-paradigm/ | 1 | 1 | [
42060271
] | null | null | null | null | null | null | null | null | null | train |
42,060,237 | surprisetalk | 2024-11-06T11:22:34 | India's ambitious lithium dreams have stalled | null | https://restofworld.org/2024/india-lithium-reserves-halted/ | 6 | 0 | null | null | null | no_error | India’s ambitious lithium dreams have stalled | 2024-11-05T11:00:00+00:00 | null |
Sunil Thakur, a 24-year-old engineering graduate, once planned to build a career as a civil engineer. But jobs were scarce, and so Thakur spent his days frying samosas for his family’s snack shop in Salal — a picturesque mountain village of about 10,000 people in India’s northern state of Jammu and Kashmir.
Then, in February 2023, Thakur’s dreams of prosperity were suddenly revived.
India’s mining ministry informed the villagers that they were sitting on a fortune: 5.9 million metric tons of lithium, a silver-white metal that is a core component of the batteries necessary for India’s transition to clean energy.
The discovery — a first in India — would make the country the holder of the fifth-largest lithium reserve in the world, mining officials announced. Indian media outlets jubilantly reported that companies including Mitsubishi, Tesla, and Ola Electric were eyeing the reserve.
Thakur and his family started daydreaming about selling their land in exchange for “a duplex home in a big Indian city, and loads of cash,” he said. He imagined investing in the family business, first established by his grandfather nearly four decades ago.
Two years later, nothing has happened. The government tried to auction the lithium block twice in March, and failed both times, due to a lack of bidders. The extraction plans have been halted indefinitely.
There were several red flags surrounding the auction, according to PV Rao, a senior geologist in the mineral industry who represents India at the Committee for Mineral Reserves International Reporting Standards, a forum that sets standards for exploration results.
For one, the amount of lithium in the Salal reserve is much less significant than initially reported, Rao and other industry experts told Rest of World. They said that only about 0.02 million tonnes of lithium carbonate is present in the ore body, a small fraction of the levels seen in other major reserves. Secondly, the reserve holds minerals in clay-deposit form, which is difficult to mine commercially.
According to Rao, the geological report commissioned by the government didn’t contain enough information about the reserve to meet international standards. “That report is [a] very, very skeleton type of report with limited information, based on which the bids are being made,” he said, adding that genealogical reports produced by the Indian government often contain “misleading and quite inadequate” information.
“It was irresponsible of the government to act that way. It is nothing but actually hiding the facts,” Rao said. “Are you trying to sell it and put the investors into doom?”
“It was irresponsible of the government to act that way.”
According to Saurabh Priyadarshi, a former chief geologist at Geoxplorers Consulting Services with 30 years of experience advising conglomerates in the mining and metal business, “the auctions will fail every time if offered in its unexplored form” due to “inadequate information.”
Shafiq Ahmed, who was a district mineral officer at Reasi in Jammu when the Ministry of Mines announced the reserve, told Rest of World that a handful of private companies tested the samples independently and “were not satisfied” with the quantity and quality of the lithium. “That’s why the companies are walking back on it,” he said.
“It is neither feasible nor financially viable,” Ahmed said. “The government announced it in haste.”
Lithium was first discovered in Salal by accident when a team of geologists visited the region in the 1990s looking for bauxite, a source of aluminum. Decades later, in 2018, as lithium became important in the global transition to clean energy, Indian mining officials returned to the area for exploration.
Even if the reserve were truly stocked with “white gold,” as lithium is now sometimes called, mining in Salal village is fraught with challenges. For companies looking to invest in minerals, Jammu and Kashmir is a region full of uncertainties, Puneet Gupta, an electric mobility expert and director at rating agency S&P, told Rest of World. “The state suffers from political instability, violence, and lack of peace — any company coming in will see all those things in the picture,” he said.
Salal is located in the disputed and conflict-hit Kashmir region, near India’s border with Pakistan. In the past two years, militant attacks have intensified in the areas surrounding Salal. A local militant group announced it would attack any company that mined the lithium reserve, calling mining “the colonial exploitation and theft of resources of Jammu and Kashmir.”
The village is situated on the Chenab River, part of a water-sharing treaty between India and Pakistan. Lithium mining is a resource-intensive process that heavily pollutes water and soil, affecting local residents, agriculture, and biodiversity. The region is also highly seismically active. “These factors make any industrial intervention in the area a complex and delicate undertaking,” Priyadarshi said.
Before putting the reserve up for auction again, the government will consider further exploration. Mining officials say that may take months or years. Thakur, the snack shop worker in Salal, is furious about the delay. He said he feels like his future is caught up in the uncertainty. He was planning to renovate the shop, but is now hesitant to invest in infrastructure that might be “bulldozed the next day,” he said. “We have been standing on the gallows — I feel like the lever can be pulled any day.”
Karan Singh, Thakur’s 65-year-old uncle, lives with his mother in a well-furnished three-room house that sits over the mineral deposits. Before the discovery was announced, Singh had never heard of lithium. He said he spent many nights afterward dreaming of the family’s reversed fortunes.
Now, however, he welcomes the delay. He remembers growing up with clean air and water, amid “the natural beauty of my village,” he told Rest of World. Moving home at his age would be a difficult task, Singh said, and he is content for the lithium to remain in the ground so that he can stay.
| 2024-11-07T20:20:00 | en | train |
42,060,286 | surprisetalk | 2024-11-06T11:26:36 | Powering the Mars Base | null | https://caseyhandmer.wordpress.com/2024/11/05/powering-the-mars-base/ | 88 | 70 | [
42063793,
42067162,
42061394,
42069366,
42067491,
42064539,
42062743,
42064548,
42067862,
42062625,
42066234,
42063974,
42062470,
42064715,
42068608
] | null | null | null | null | null | null | null | null | null | train |
42,060,316 | surprisetalk | 2024-11-06T11:28:36 | Robots and labor in [Japanese] nursing homes [pdf] | null | https://www.nber.org/system/files/working_papers/w33116/w33116.pdf | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,060,317 | busmark_w_nika | 2024-11-06T11:28:44 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,060,342 | rbanffy | 2024-11-06T11:30:45 | NRO chief: "You can't hide" from our new swarm of SpaceX-built spy satellites | null | https://arstechnica.com/science/2024/11/nro-chief-you-cant-hide-from-our-new-swarm-of-spacex-built-spy-satellites/ | 25 | 7 | [
42061890
] | null | null | null | null | null | null | null | null | null | train |