Dataset schema (column name, Arrow type, and the viewer's summary statistics):

| column | type | stats |
| --- | --- | --- |
| id | int64 | values 2 to 42.1M |
| by | large_string | lengths 2 to 15 |
| time | timestamp[us] | |
| title | large_string | lengths 0 to 198 |
| text | large_string | lengths 0 to 27.4k |
| url | large_string | lengths 0 to 6.6k |
| score | int64 | values -1 to 6.02k |
| descendants | int64 | values -1 to 7.29k |
| kids | large list | |
| deleted | large list | |
| dead | bool | 1 class |
| scraping_error | large_string | 25 values |
| scraped_title | large_string | lengths 1 to 59.3k |
| scraped_published_at | large_string | lengths 4 to 66 |
| scraped_byline | large_string | lengths 1 to 757 |
| scraped_body | large_string | lengths 1 to 50k |
| scraped_at | timestamp[us] | |
| scraped_language | large_string | 58 values |
| split | large_string | 1 value |
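The schema above is plain Apache Arrow (large_string, timestamp[us], large list), so any Arrow-capable tool can inspect a shard directly. Below is a minimal sketch using PyArrow; the file name hn_items.parquet is a placeholder, since the actual shard names are not shown here.

```python
# Minimal sketch: inspect one shard of the dataset with PyArrow.
# "hn_items.parquet" is a placeholder file name, not the real shard name.
import pyarrow.parquet as pq

table = pq.read_table("hn_items.parquet")
print(table.schema)    # column names and Arrow types, matching the table above
print(table.num_rows)  # rows in this shard

# Peek at a few rows as Python dicts.
for row in table.slice(0, 3).to_pylist():
    print(row["id"], row["by"], row["title"])
```

The rows that follow are sample records from the dataset, shown field by field; within each record, fields not listed are null.
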
id: 42015722 | by: KoftaBob | time: 2024-11-01T11:03:52
title: Show HN: Chronicl – Decentralized Internet Archive distributed on Nostr relays
url: https://chronicl.vercel.app/
score: 2 | descendants: 0 | kids: [42016352] | split: train
(all other fields null)

id: 42015731 | by: fnordpiglet | time: 2024-11-01T11:04:50
title: Chinese researchers develop AI model for military use on back of Meta's Llama
url: https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/
score: 11 | descendants: 1 | kids: [42015982] | split: train
(all other fields null)

id: 42015732 | by: rbanffy | time: 2024-11-01T11:04:51
title: Hydrogen Storage Made Easier with New Carrier Fluid
url: https://spectrum.ieee.org/liquid-hydrogen-storage
score: 2 | descendants: 0 | split: train
(all other fields null)

id: 42015738 | by: noch | time: 2024-11-01T11:05:33
title: Heritrix Web Crawler
url: https://github.com/internetarchive/heritrix3
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015740 | by: mountainview | time: 2024-11-01T11:05:48
title: Benchmark GGUF models with a one line of code
url: https://github.com/NexaAI/nexa-sdk/tree/main/nexa/eval
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015746 | by: rbanffy | time: 2024-11-01T11:06:54
title: Intel Takes the Big Restructuring Hits as It Looks Ahead
url: https://www.nextplatform.com/2024/11/01/intel-takes-the-big-restructuring-hits-as-it-looks-ahead/
score: 2 | descendants: 0 | kids: [42016331] | split: train
(all other fields null)

id: 42015759 | by: rbanffy | time: 2024-11-01T11:08:35
title: MS again delays Recall, says it will arrive for Insiders on Copilot Plus PCs
url: https://www.tomshardware.com/software/operating-systems/microsoft-again-delays-recall-feature-says-it-will-arrive-for-windows-insiders-on-copilot-plus-pcs-in-december
score: 1 | descendants: 0 | kids: [42016320] | split: train
(all other fields null)

id: 42015762 | by: djoldman | time: 2024-11-01T11:08:45
title: IlliniSpots – Real-Time UIUC Study Spots and Empty Classroom Finder
url: https://illinispots.vercel.app/
score: 2 | descendants: 1 | kids: [42020851] | split: train
(all other fields null)

id: 42015768 | by: rbanffy | time: 2024-11-01T11:09:27
title: A sign of life for Europe's sovereign satellite Internet constellation
url: https://arstechnica.com/space/2024/10/finally-a-sign-of-life-for-europes-sovereign-satellite-internet-constellation/
score: 3 | descendants: 0 | split: train
(all other fields null)

id: 42015783 | by: udev4096 | time: 2024-11-01T11:11:53
title: Celebrating 20 Years of Nginx
url: https://blog.nginx.org/blog/celebrating-20-years-of-nginx
score: 3 | descendants: 0 | kids: [42016326] | split: train
(all other fields null)

id: 42015788 | by: smomara | time: 2024-11-01T11:12:50
title: The data are in on the worst song of ALL TIME
url: https://bsky.app/profile/shanewriter.bsky.social/post/3l7uyo2po4i22
score: 1 | descendants: 2 | kids: [42018232] | split: train
(all other fields null)

id: 42015791 | by: f1shy | time: 2024-11-01T11:13:35
title: 'Upgrading' a Microwave Oven to 20 KW
url: https://hackaday.com/2024/10/25/upgrading-a-microwave-oven-to-20-kw/
score: 4 | descendants: 0 | split: train
(all other fields null)

id: 42015793 | by: ankit84 | time: 2024-11-01T11:13:44
title: Ask HN: What's your go to DevTool these days?
text: We all use ChatGPT, Copilot, Cluade. These are generic. What's your story of using a dev-tool that's integrated at its best and you love to use more and more?
score: 4 | descendants: 3 | kids: [42015960, 42019252, 42023371, 42016961, 42016324] | split: train
(all other fields null)

id: 42015799 | by: mooreds | time: 2024-11-01T11:14:50
title: Apache Hop: AWS ECS and AWS Batch (2021)
url: https://diethardsteiner.github.io/hop/2021/02/05/Apache-Hop-AWS-Batch.html
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015806 | by: laszlolm | time: 2024-11-01T11:17:02
title: Use ChatGPT Search in Safari and Firefox (macOS)
url: https://old.reddit.com/r/ChatGPT/comments/1gh2wjl/use_chatgpt_search_in_safari_or_firefox_macos/
score: 2 | descendants: 0 | kids: [42016323] | split: train
(all other fields null)

id: 42015808 | by: lazylizard | time: 2024-11-01T11:17:06
score: 1 | dead: true | split: train
(all other fields null)

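The record above is a dead item: dead is true and everything except id, by, time, and score is null. Since the schema lists dead as a bool with one class, the column appears to hold either true or null, so live items can be kept by filtering on nullness. A minimal sketch, reusing the placeholder shard name from above:

```python
# Minimal sketch: drop dead rows. In this preview, dead is true only for
# dead items and null otherwise (an inference from the schema's "1 class").
# "hn_items.parquet" is still a placeholder file name.
import pyarrow.parquet as pq
import pyarrow.compute as pc

table = pq.read_table("hn_items.parquet")
alive = table.filter(pc.is_null(table["dead"]))
print(table.num_rows - alive.num_rows, "dead rows dropped")
```
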
id: 42015812 | by: f1shy | time: 2024-11-01T11:17:17
title: Ritonavir Form III: A Coincidental Concurrent Discovery
url: https://pubs.acs.org/doi/10.1021/acs.cgd.2c01017
score: 29 | descendants: 6 | kids: [42040469, 42039736, 42040192, 42040830, 42039832] | split: train
(all other fields null)

id: 42015818 | by: DotSauce | time: 2024-11-01T11:17:57
title: Show HN: Dydas AI Agent for Marketing
url: https://dydas.com/
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015835 | by: iX901 | time: 2024-11-01T11:20:07
score: 1 | dead: true | split: train
(all other fields null)

id: 42015848 | by: studyaids | time: 2024-11-01T11:23:15
score: 1 | kids: [42015849] | dead: true | split: train
(all other fields null)

id: 42015876 | by: mooreds | time: 2024-11-01T11:27:55
title: Proactive, Open source API security tool
url: https://github.com/akto-api-security/akto
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015886 | by: mooreds | time: 2024-11-01T11:29:45
title: Generating Random Mazes with JavaScript
url: https://cloudfour.com/thinks/generating-random-mazes-with-javascript/
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015889 | by: kiyanwang | time: 2024-11-01T11:30:29
title: How to Manage a Distracted Team
url: https://hbr.org/2024/10/how-to-manage-a-distracted-team
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015894 | by: nenecmrf | time: 2024-11-01T11:30:56
title: Show HN: Inquir's open source lib for searching
text: I've just published open source library! It's the first pre-alpha version of an open-source library that allows seamless integration of Inquir with your web application. It's now available on NPM, and while the documentation is still in progress, you can already start exploring it. Check it out on GitHub: https://github.com/Inquir-search/inquirsearch. NPM: https://www.npmjs.com/search?q=%40inquir
url: https://github.com/Inquir-search/inquirsearch
score: 2 | descendants: 1 | kids: [42015989]
scraping_error: Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
scraped_title: GitHub - Inquir-search/inquirsearch
scraped_byline: Inquir-search
scraped_body:
Inquir Search Library Check out Inquir: https://inquir.org/ Table of Contents Inquir Search Library Table of Contents Introduction Core Components Installation Getting Started 1. SearchManager 2. SearchBox 3. SearchResults Debounce Support Usage Example API Documentation SearchManager SearchBox SearchResults Contributing License Introduction The Inquir Search Library is a flexible, framework-agnostic JavaScript library designed to provide search functionality in your applications. It allows interaction with a backend search API, managing search state and handling user inputs through headless components. These components focus solely on state management, making them easy to integrate with any front-end framework or plain JavaScript. Core Components SearchManager: Manages search queries, executes API requests, and handles search results. SearchBox: Manages the search query state and updates the SearchManager. SearchResults: Subscribes to search results from the SearchManager and manages local state. Installation To install the library, use the following command for npm: Run the npm install command followed by @inquir/inquirsearch. Alternatively, if using yarn, run the yarn add command followed by @inquir/inquirsearch. Getting Started 1. SearchManager The SearchManager is the core component responsible for managing the search state, constructing query objects based on your API schema, executing search requests, and notifying subscribers about state changes. To instantiate the SearchManager, pass an API key to the constructor. This key is used in the Authorization header for API requests. The SearchManager provides several key methods: updateQueryParams: This method updates the search parameters based on the provided object. executeSearch: Executes the search request based on the current state and notifies subscribers about the results. subscribe: Subscribes to specific events like resultsChange or error. The SearchManager uses an internal event emitter to manage subscriptions and notifications. 2. SearchBox The SearchBox component manages the search query state and interacts with the SearchManager to update the query parameters and trigger searches. When instantiating SearchBox, pass in the SearchManager instance. Key methods of SearchBox include: setQuery: Sets the search query and triggers a search. subscribe: Subscribes to query changes, allowing a listener function to respond whenever the query changes. getQuery: Retrieves the current search query. Each SearchBox instance uses an event emitter to notify subscribers about query changes. 3. SearchResults The SearchResults component subscribes to search results from the SearchManager and maintains its own local state for the results. When instantiating SearchResults, pass in the SearchManager instance. Key methods of SearchResults include: subscribe: Subscribes to search result changes, allowing a listener function to respond whenever new results are available. getResults: Retrieves the current search results. destroy: Cleans up subscriptions when the component is no longer needed. Each SearchResults instance uses an event emitter to notify subscribers about updates to the results. Debounce Support To prevent excessive API calls during rapid user input, the SearchManager includes a debounce mechanism in the executeSearch method. This ensures that searches are only executed after a set delay since the last invocation. 
There are two approaches for implementing debounce: Manual Debounce: The executeSearch method is debounced using a setTimeout and clearTimeout mechanism. Using Lodash: For a more robust solution, lodash.debounce can be used to debounce the search execution. To use lodash, first install it and update the SearchManager to use lodash.debounce. Usage Example Here’s an example of how to use the Inquir Search Library to integrate search functionality into a web application: Create a basic HTML structure with a search input field and a results container. Instantiate SearchManager, SearchBox, and SearchResults. Listen for user input changes on the search input field, and pass the query to SearchBox to trigger a search. Subscribe to SearchResults to update the results container when new search results are available. When a user types into the search input, SearchBox will update the query in SearchManager, which will execute a debounced search and notify SearchResults of the updated results. API Documentation SearchManager The SearchManager constructor requires an API key for authenticating API requests. Key methods: updateQueryParams: Updates the search parameters, such as query, size, and page, using the provided object. executeSearch: Executes the search based on the current state and notifies listeners when results are available. subscribe: Subscribes to events like resultsChange or error and accepts a listener function that responds to updates. SearchBox The SearchBox constructor requires an instance of SearchManager. Key methods: setQuery: Sets the search query and updates the SearchManager. getQuery: Retrieves the current search query. subscribe: Subscribes to query changes with a listener function that reacts to query updates. SearchResults The SearchResults constructor requires an instance of SearchManager. Key methods: subscribe: Subscribes to updates in search results using a listener function. getResults: Retrieves the current search results. destroy: Cleans up subscriptions when the component is no longer needed. Contributing We welcome contributions to the Inquir Search Library. Follow these steps to contribute: Fork the repository. Clone your forked repository to your local machine. Create a new branch for your feature or bug fix. Make the necessary changes and commit them. Push your changes to your forked repository. Open a pull request with a detailed description of the changes. License The Inquir Search Library is licensed under the MIT License.
scraped_at: 2024-11-07T23:22:09 | split: train
(all other fields null)

id: 42015919 | by: marban | time: 2024-11-01T11:35:43
title: Bricss – Simple and customizable low-level CSS library generator
url: https://github.com/ita-design-system/bricss
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015925 | by: LAGGOUNEWalid | time: 2024-11-01T11:36:27
title: Show HN: Pulsetracker Real-Time Location Tracking for Developers as a Back End
text: Hey HackerRank community! I just launched Pulsetracker, a SaaS tool designed to make real-time location tracking simple and efficient. With Pulsetracker, developers don't need to build or manage a backend – just integrate the client-side, and you're set with fast, battery-optimized tracking via UDP or WebSocket. It's scalable, secure, and ideal for applications needing reliable location data. Any feedback or feature ideas would be awesome!
url: https://www.pulsestracker.com/
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42015933 | by: bobismyuncle | time: 2024-11-01T11:37:32
title: The Crisis in String Theory Is Worse Than You Think
url: https://www.math.columbia.edu/~woit/wordpress/?p=14200
score: 43 | descendants: 35 | kids: [42016128, 42016388, 42020612, 42016287, 42016119, 42016268, 42016270, 42016180, 42016205] | split: train
(all other fields null)

id: 42015941 | by: domysee | time: 2024-11-01T11:38:50
title: Details on Omnivore Shutting Down
url: https://blog.omnivore.app/p/details-on-omnivore-shutting-down
score: 2 | descendants: 0 | split: train
(all other fields null)

id: 42015946 | by: cannibalXxx | time: 2024-11-01T11:39:44
title: Microsoft Hires Engineer Who Kept Facebook Data Centers Humming
url: https://www.bloomberg.com/news/articles/2024-10-31/microsoft-hires-engineer-who-kept-facebook-data-centers-humming
score: 2 | descendants: 0 | kids: [42016289]
scraping_error: missing_parsing
scraped_title: Bloomberg - Are you a robot?
scraped_body: Why did this happen? Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. Need Help? For inquiries related to this message please contact our support team and provide the reference ID below. Block reference ID:
scraped_at: 2024-11-08T10:54:13 | split: train
(all other fields null)

id: 42015947 | by: SoundsGirls | time: 2024-11-01T11:39:48
score: 1 | kids: [42015948] | dead: true | split: train
(all other fields null)

id: 42015949 | by: xnhbx | time: 2024-11-01T11:40:03
title: CATL unveils new battery for extended-range hybrids
url: https://www.reuters.com/business/autos-transportation/chinas-catl-unveils-new-battery-extended-range-hybrids-2024-10-24/
score: 3 | descendants: 0 | kids: [42016286] | split: train
(all other fields null)

id: 42015957 | by: jhunter1016 | time: 2024-11-01T11:41:04
title: Roc Camera Beta – Capture verifiably real moments in the age of AI
url: https://roc.camera/
score: 2 | descendants: 0 | split: train
(all other fields null)

id: 42015980 | by: lapnect | time: 2024-11-01T11:45:04
title: Python Is Now the Most Popular Language on GitHub
url: https://www.omgubuntu.co.uk/2024/10/python-most-popular-language-on-github-2024
score: 2 | descendants: 0 | kids: [42016283]
scraping_error: no_error
scraped_title: Thanks to AI, Python is Now the #1 Language on GitHub
scraped_published_at: 2024-10-31T23:11:00+00:00 | scraped_byline: Joey Sneddon
scraped_body:
Python has overtaken JavaScript as the most-used language on GitHub, according to the code-hosting platform’s latest Octoverse report. The company attributes this momentum to a massive influx of “data science and machine learning on GitHub”, which has seen a 59% increase in the number of contributions to generative AI projects. With Python being heavily used across ML, data science, and related fields, the rise makes sense – it’s less that traditional software developers are switching to Python but more that developers working with AI-related projects are needing to use it. Plus, it’s good news for open source, with GitHub reporting that “1.4 million new developers globally joined open source, with a majority contributing to commercially backed and generative AI projects.” GitHub also says the increase in Python’s popularity this year “correlates with large communities of people joining the open source community from across the STEM world”. The latter tracks; Python is taught in schools here in the UK, and likely elsewhere. Python’s major pro is its simple, straightforward syntax, which excels at data handling. This has made it popular with novices of all shades, but especially those flocking to join the modern-day “gold rush” in AI. For first-timer coders, Python is easier to learn, understand, and adapt than many low-level programming languages – even I know some basic Python, and I’ve as much coding nous as a potato. Plus, the Python language is a steadfast feature in the desktop Linux software landscape. It’s preinstalled on most Linux distributions, boasts extensive library support, and can be used to fashion very cool (as well as very basic) Qt, GTK, and other toolkit UIs. Elsewhere, GitHub saw a major spike in usage across Jupyter Notebooks (with AI/ML fuelling that), and notes that Rust usage, while still trailing Python, JavaScript, TypeScript, and Java despite the (oft deserved) hype it generates, is certainly on the up. Could Copilot, accessible on GitHub itself and through major text editors like VSCode, be inadvertently helping accelerate an increase in new projects made with Python because of users asking for help working with AI/LLM projects? There’s plenty more to pore over in GitHub 2024 Octoverse recap, including a revised prediction for when the USA will be overtaken by India in the number of developers on GitHub – spoiler: sooner than expected! Thanks Scott
scraped_at: 2024-11-07T08:32:07 | scraped_language: en | split: train
(all other fields null)

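The record above is a fully successful scrape: scraping_error is no_error and scraped_title, scraped_byline, scraped_body, and scraped_language are all populated. To work only with rows like this one, filter on those conditions; again, a minimal sketch against the placeholder shard:

```python
# Minimal sketch: keep rows whose article scrape succeeded.
# "hn_items.parquet" remains a placeholder file name.
import pyarrow.parquet as pq
import pyarrow.compute as pc

table = pq.read_table("hn_items.parquet")
ok = pc.and_(
    pc.equal(table["scraping_error"], "no_error"),
    pc.is_valid(table["scraped_body"]),
)
articles = table.filter(ok)
print(articles.num_rows, "rows with usable article text")
```
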
id: 42016031 | by: manx | time: 2024-11-01T11:52:23
title: Mauritius suspends social media until after election amid wiretapping scandal
url: https://www.reuters.com/world/africa/mauritius-suspends-social-media-until-after-election-communications-regulator-2024-11-01/
score: 4 | descendants: 2 | kids: [42017596, 42017359, 42016279] | split: train
(all other fields null)

id: 42016038 | by: squircle | time: 2024-11-01T11:53:12
title: Fear Is a Trainable Animal
url: https://www.bendingpinksteel.com/p/fear
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42016039 | by: Michelangelo11 | time: 2024-11-01T11:53:21
title: Oasis: The first playable AI-generated game
url: https://twitter.com/Etched/status/1852089772329869436
score: 1 | descendants: 0 | split: train
(all other fields null)

id: 42016064 | by: LorenDB | time: 2024-11-01T11:57:27
title: The best laptop is the one somebody else had
url: https://ounapuu.ee/posts/2024/11/01/the-best-laptop/
score: 4 | descendants: 1 | kids: [42016220, 42016277]
scraping_error: no_error
scraped_title: The best laptop is the one somebody else had
scraped_published_at: 2024-11-01T06:00:00+02:00
scraped_body:
In 2011, I was finishing 9th grade. As a gift, I got to choose a laptop in the 400 EUR range. I ended up picking an ASUS Eee PC 1201PN. It was new and the first computer in my life that was 100% mine, but awfully slow for a lot of tasks. It was so slow that I ended up giving Linux a go as a result. Linux! I didn’t even know computing all that well around that time! A few years later, I bought a ThinkPad T60 off of someone I knew for about 40 EUR. It was about 8 years old at that point, but it ran circles around the new laptop that I had in performance. That’s when I learned about the absurdly good price-to-performance ratio of used business-grade laptops, and the crappiness of netbooks.1 Note that I keep repeating the phrase business-grade laptops. Think Lenovo ThinkPads, Dell Latitudes or HP EliteBooks.2 That’s the core of this whole idea. Consumer-grade laptops are cheaper when bought new, but that is a result of a lot of compromises made in the build quality. Business-grade laptops are used for work and need to be reliable for years, which means that they will last for a long time. Used laptops are cheap I recently checked what the prices are for used laptops, mainly focusing on the 100-300 EUR range as I find that to be the sweet spot for bargains. For 195 EUR, I can get a ThinkPad X395, sporting an AMD Ryzen 5 3500U quad-core CPU, 16 GB of RAM and a 256GB NVMe SSD, sold by a store that specializes in selling used hardware. You even get a 6-month warranty! That’s crazy good value. New business-grade laptops cost somewhere around 1000-2000+ EUR. They are generally faster and provide more memory and storage, but in the best case scenario that performance difference will be 2-3x at best, while the price is 5-10+ times higher. The math does not check out. The price depreciation curve is also quite harsh on new laptops. You can pay 2000 EUR for a new laptop and only be able to sell it for 1000 EUR a year from now. Two years later? 500-700 EUR. The prices eventually settle at around the 5 year mark, which also happens to align with the extended warranties expiring. Used laptops reduce stress With used laptops, you don’t have to worry about the wear-and-tear that much. You accidentally drop your laptop on the floor? It might still be fine! Your child picked off all the keycaps on the keyboard? No worries, replacements are easy to find! Your lunch for the day ended up leaking all over the laptop, killing it completely? No problem, you can get a new one and still end up paying less compared to a new laptop! Buying used is no excuse to mistreat your hardware, but I personally love the lack of stress associated with trying to keep a new and expensive object in pristine condition. The laptop already has some cosmetic damage on it, so why worry? Used laptops are surprisingly reliable Reliability is often one of the top reasons why some people avoid buying used laptops. I attribute this to the experiences people have with used cars. You pay less, until you pay a lot more to get that hunk of junk fixed once it breaks down on you. I’ve had the complete opposite experience with used business-grade laptops. The ones that make it to the used market have gone through years of reliability testing, and those that don’t make it were defective anyway. The only areas to pay attention to is basic maintenance (remove dust, apply new thermal paste) and a potential battery replacement, which are quite simple to do on modern business-grade laptops. 
It’s so easy that even children and teenagers can do it with a little bit of guidance and supervision! The reliability doesn’t stop with the hardware. Buying used often means that you’ll be buying a laptop that has received all the software and firmware fixes to all sorts of issues. Linux users will also have a much better time with used laptops since by that time most of the issues associated with new hardware will have been fixed in the kernel. You should avoid buying new and used consumer-grade laptops. I’ve seen so many of those with missing pieces of plastic and the hinges breaking open the laptop case, but rarely with business-grade laptops. Used business-grade laptops are so reliable that some companies are even willing to rent and support those machines for a really low price. Exceptions to the rule There will always be a place for new laptops. Sometimes you do need the latest and greatest hardware for CAD work, complex video editing or high-end gaming. Some people find that a 30-second build of their software project taking 20 seconds is worth the productivity gain, regardless of the higher price or increased environmental impact of buying new. Some simply want to play around with the latest and greatest, for fun. There will always be people who find the idea of used laptops off-putting, and companies do prefer to buy pallet-loads of new laptops every few years. On the bright side, this does mean that there will always be a supply of cheap used laptops available for the rest of us. Conclusion If you ever need a laptop and your needs are not extremely specific, then give a used business-grade laptop a try. It will be fine, I promise.
scraped_at: 2024-11-08T12:22:44 | scraped_language: en | split: train
(all other fields null)

id: 42016070 | by: freetonik | time: 2024-11-01T11:59:03
title: Filter, Map, Reduce in 1.5 minutes [video]
url: https://www.youtube.com/watch?v=PZvHZJVeYdw
score: 14 | descendants: 8 | kids: [42016668, 42016722, 42016720]
scraping_error: no_article
scraped_at: 2024-11-08T07:33:09 | split: train
(all other fields null)

id: 42016075 | by: lapnect | time: 2024-11-01T11:59:50
title: A Minimum Complete Tutorial of Portable Document Format [pdf] with Pdfsh
url: https://metebalci.com/blog/a-minimum-complete-tutorial-of-pdf-with-pdfsh
score: 3 | descendants: 0
scraping_error: no_error
scraped_title: A Minimum Complete Tutorial of Portable Document Format (PDF) with pdfsh
scraped_published_at: 2024-10-11 | scraped_byline: Mete Balci
scraped_body:
IntroductionPDF is a defacto standard for sharing documents. In this post, I will give a minimal but complete tutorial of Portable Document Format (PDF).PDF was developed by Adobe beginning in 1993. At its core, it uses PostScript page description language for the description of text and graphics, but it adds many things to PostScript to be able to function as a document format. The main purpose of this post is to explain these additions to PostScript. I will not mention the text and graphics descriptions in this post. If you are looking for anything related to visual representation or rendering of PDF content, this post has nothing related to these topics.I am going to show the very basics of a PDF file first, and then use pdfsh to show all elements of the PDF file, and the basics of the PDF specification. This post also functions as a tutorial for pdfsh.The information in this post is based on PDF 2.0 specification, which can be downloaded for free (but it is 1003 pages). PDF 2.0 is also an ISO standard, ISO 32000-2:2020. This is also the reference document for this post. Although I am looking at this latest specification, the core concepts are pretty much the same in previous versions.The example PDF files I am using are from PDF Association’s PDF 2.0 examples repository. I am particularly using:Simple PDF 2.0 file.pdfPDF 2.0 via incremental save.pdfIn order to show the stream filters: I am using this PDF file: metebalci.com-about.pdf. I created this file by printing the about page of this blog on Windows using Chrome to PDF.PDF 101Here are a few basic concepts of Portable Document Format (PDF):Everything in a PDF document (here I say document not file on purpose) is an object. For example, an integer number is an object. An object can be a direct object (used in place without a label, like an integer number 3) or an indirect object which are identified by its identifier (or label), similar to variables in programming languages. An indirect object can be referenced from anywhere in the document. There are nine basic object types in PDF 2.0. I find the categorization in the specification a bit strange, so you can count or categorize the types and reach to a different number. I will show each object type later.The label of an indirect object is composed of two integer numbers. First is the object number, the other is the generation number. The object number is a positive integer (there is no object 0) and it can have maximum 10 digits, thus the maximum value is 9'999'999'999. There is no hard rule to use consecutive object numbers but it simplifies a few things, so it is probably always the case that the first object number is 1 and the next is 2 and so on. The generation number is a non-negative integer, and it has to start from 0 and its maximum value is 65'535. It seems the idea is that there can be different generations (versions) of the same object but practically I believe it is almost always 0, because an object can also be updated and have a different object number. The generation idea sounds unnecessary at the moment, I do not know the historical reason behind it. Although it is 0 most of the time, an object should always be referenced with the object number and generation pair, the object number alone is not enough.A PDF file is a binary file having a particular structure, that contains a PDF document, meaning the objects of the PDF document. The PDF file itself, and its first level structures (header, body etc.) 
are not PDF objects but they help to find and load the actual objects of the PDF document.A PDF file (hence the PDF document) can be incrementally updated. Incremental update means the file can be modified with existing data left intact. The updates are added only to the end of the file. This means an incremental update can add new objects and modify or delete existing objects. This also means a PDF file may contain objects that are not visible/not used anymore.When a PDF file is created (meaning it has no incremental updates yet), it has a basic structure consisting of header, body, cross reference table and trailer. Other than the trailer dictionary in trailer (I know this is a bit confusing, trailer is the last part in the file, while trailer dictionary is part of the trailer but trailer contains also something else), the basic structures (header, body etc.) are not objects but stored in the PDF file in a more rigid and fixed structure.Initial structure of a PDF file (Figure 2 in ISO 32000-2:2020)The header is only one line and only contains the PDF version.The body contains all the objects. It is like a linear dump of all objects.The cross reference table contains the byte offset (from the start of the file, pointing to a location in the body) of every object of the PDF document. It is basically an (object number, generation number) to byte offset mapping. Remember that even though the concept of generation numbers might not be used, the key of the mapping includes both the object number and the generation number.The trailer contains the trailer dictionary and the byte offset of the start of the cross reference table (also called startxref) and the trailer dictionary. The trailer dictionary is really a PDF dictionary object.When a PDF is incrementally updated, a new body, cross reference table and trailer is appended to the end of the PDF file. That means a PDF file in general has one header, and one or more body-xref-trailer groups.Structure of an updated PDF file (Figure 3 in ISO 32000-2:2020)It is easy to miss, in Figure 2, xref section is called cross reference table, but in Figure 3, xref sections are called cross reference section. As far as I see, it is not made very clear in the specification, but what I understand is this. The cross reference table is the final structure. A PDF file, updated or not, has only one cross reference table, which makes it possible to find the byte offset of an object (identified by object number and generation number). The cross reference table contains at least one section, but it can contain multiple sections when PDF file is updated. Basically, each update brings another section. From now on, I consistently use cross reference table only to refer to the final data structure.In order to load the objects (thus the PDF document), after reading the header, the file has to be read from the end, because:the objects are stored in the bodythe location of the objects in the body are stored in the cross reference sectionthe start of the (last) cross reference section is stored in the trailer at the end of the fileIf the PDF file is incrementally updated, the location of previous cross reference section is also stored in the trailer dictionary. Thus, starting from the very end of the document, all trailers and cross reference sections can be read and all objects can be loaded. 
Since an existing object can be modified or deleted, when reading from the end, the first entry about an object number+generation in a cross reference section is the correct (up-to-date) and valid information, overriding the previous entries.A cross reference table section does not directly contain all the entries consisting of object number-generation number to byte offset mappings, but first it contains subsections. A cross reference table subsection contains the first object number (of the entries it contain) and one or many entries. Each entry contains a byte offset, generation (number) and deleted mark (or flag). Thus, the entries in a subsection contains information about sequential objects, there is no object number in an entry but it is specified in subsection. This is why it makes sense to use sequential object numbers. Otherwise, many subsections would have to be created.After all the cross reference sections are read (hence the cross reference table is created), the catalog dictionary, the root page and then all pages and all other objects required to render the PDF document for viewing can be found, loaded and processed.pdfshpdfsh is a utility to investigate the PDF file structure in a shell-like interface, so the structure of the PDF file can be navigated like a file system with commands like ls, cd, cat and a few others.pdfsh can be installed with pip install pdfsh.In addition to providing a shell-like interface, pdfsh does one more thing. It implicitly creates a PDF object representation for the header, body, cross reference table sections and the trailer of the PDF file since these are not PDF objects. Thus, in essence, pdfsh is a shell-like interface that has a PDF dictionary object (representing the PDF document) at its root which is the navigation starting from its root.Running pdfsh Simple\ PDF\ 2.0\ file.pdf gives:pdfsh Copyright (C) 2024 Mete Balci License GPLv3+: GNU GPL version 3 or later Simple PDF 2.0 file.pdf:/ $ pdfsh assumes it is run under a ANSI capable terminal and it uses ANSI terminal features and colors. If strange behavior is observed, make sure the terminal emulation is ANSI compatible.The $ marks the command line entry and before $ the name of the current PDF file, a colon (:) and the current node is displayed (current node is root, / above).ESC or ctrl-c can be used to ignore and erase the current command entry. q can be used to quit the pdfsh interface. ? or help shows a brief help screen with valid commands.%©©F-2.0It is not directly related to PDF but I have to tell this first.If your terminal is using UTF-8 encoding (most probably), if a PDF file is used with commands like cat or head in the terminal, the first line is displayed as %©©F-2.0:$ head -n1 Simple\ PDF\ 2.0\ file.pdf %©©F-2.0 This is because, the PDF file contains the following in the first line (until the first end of line marker):$ xxd -l 16 Simple\ PDF\ 2.0\ file.pdf 00000000: 2550 4446 2d32 2e30 0d25 c2a9 c2a9 0d0a %PDF-2.0.%...... It contains %PDF-2.0 but after that there is a carriage return (0d), the percent symbol (25) and two copyright symbols encoded in utf-8 (c2a9) before the actual end of line marker consisting of a carriage return and a line feed (0d0a). Thus, head (or cat), after printing %PDF-2.0, goes back to the beginning of the line (due to carriage return) and overprints %©© and goes to the next line which leaves the line as %©©F-2.0. 
cat has a show non-printing characters with ^ option (-v) which would result:$ cat -vE Simple\ PDF\ 2.0\ file.pdf | head -n1 %PDF-2.0^M%M-BM-)M-BM-)^M$ The PDF specification says this is to “ensure proper behaviour of file transfer applications” and when the PDF file contains binary data, the header should be followed by a comment containing four bytes equal or greater than 128 (0x80). I guess c2a9 is selected as a useful option as each byte is >0x80 and it can be printed as © in utf-8 compatible environments.Since now you know, do not be surprised if you see the copyright symbols or something else instead of %PDF when you try the examples below. For the sake of clarity, I always write %PDF in this post when showing the contents of a PDF file.Simple PDF 2.0 fileHere is a simple example. In the output below, the objects in the body other than the first one are skipped for clarity. If you have not read the previous section, and wonder why you do not see %PDF-2.0 on your terminal, please read the previous section.$ cat Simple\ PDF\ 2.0\ file.pdf %PDF-2.0 1 0 obj << /Type /Catalog /Metadata 2 0 R /Pages 3 0 R >> endobj ... <other objects in the body are skipped> ... % The object cross-reference table. The first entry % denotes the start of PDF data in this file. xref 0 10 0000000000 65535 f 0000000016 00000 n 0000000096 00000 n 0000002547 00000 n 0000002619 00000 n 0000002782 00000 n 0000003587 00000 n 0000003811 00000 n 0000003972 00000 n 0000004524 00000 n trailer << /Size 10 /Root 1 0 R /ID [ <31c7a8a269e4c59bc3cd7df0dabbf388><31c7a8a269e4c59bc3cd7df0dabbf388> ] >> startxref 4851 %%EOF If you slowly scan the output above, you can see:the first line is %PDF-2.0there is a line containing xrefthere is a line containing trailerthere is a line containing startxrefThese are the header, the start of the cross reference table section, the start of the trailer and the start of the startxref in the trailer. The body has no start marker. After the header until the start of cross reference table section (xref) it is the body where only the first object (out of nine) is shown above.Now lets use pdfsh:$ pdfsh Simple\ PDF\ 2.0\ file.pdf pdfsh Copyright (C) 2024 Mete Balci License GPLv3+: GNU GPL version 3 or later Simple PDF 2.0 file.pdf:/ $ and run ls command:Simple PDF 2.0 file.pdf:/ $ ls header/ body/ xrt/ trailer/ objects/ It is not a standard term but in this post and also in pdfsh, I use xrt to refer cross-reference table. This is the final cross-reference table data structure, which is built from the cross-reference table sections in the PDF file.For convenience, although the document is a dictionary and inherently unordered, the keys of the PDF document dictionary is specially ordered this way to look like a PDF file (header, body, xrt, trailer, …). The keys of all other dictionaries are ordered alphanumerically.In this post, the pdfsh outputs are always in black and white. However, pdfsh actually uses color. The entries above are shown in blue. pdfsh shows the things that can be entered into (array, dictionary, indirect reference) in blue (this is like a directory in a file system), all others in gray (like a file in a file system).The type of any object shown in pdfsh can be seen by node command:Simple PDF 2.0 file.pdf:/ $ node . . is a dictionary object . represents the current node (like in a shell). 
The / node (the document or the PDF file) is a dictionary.Simple PDF 2.0 file.pdf:/ $ node header header is a dictionary object Simple PDF 2.0 file.pdf:/ $ node body body is an array object Simple PDF 2.0 file.pdf:/ $ node xrt xrt is an array object Simple PDF 2.0 file.pdf:/ $ node trailer trailer is a dictionary object pdfsh has auto-completion with tab. node he<TAB> completes into node header/. The final / is because header is a dictionary. It does not matter for node command if it ends with / or not.header, body, xrt and trailer directly correspond to PDF file structures.There is only one header in a PDF file, thus header is a dictionary containing the PDF version.Each incremental update brings a body, and a previous body may still contain an object in use. Thus, body is an array. It is ordered reverse, thus the first (0) element is the body of the last update.Each incremental update brings a cross reference table section, and a previous cross reference table section may still contain a valid entry. Thus, xrt is an array. Similar to the body, the first (0) element is the cross reference table section of the last update.Each incremental update brings a trailer. However, only the last trailer dictionary is used for the PDF document. The reason is that it is a must for an update to contain all keys of previous trailer dictionary. Thus, no item of the trailer dictionary can be deleted with an update, they can only be added or modified. The trailer dictionary has a Prev key containing the startxref value of the previous update (thus the start of previous cross table section). Thus, the trailer is a dictionary (not an array).In addition to these nodes, there is also an objects node:Simple PDF 2.0 file.pdf:/ $ node objects objects is a dictionary object The objects node is the final list of objects. If an object is deleted with an incremental update, it is not listed here. If an object is updated with an incremental update, only the last version is listed here. Thus, the PDF document (after the updates are applied) effectively only contains the objects under the objects node. The objects node is naturally not part of a PDF file, this is a virtual construction of pdfsh.PDF Object Types: Array, Dictionary and NameArray and Dictionary are the only container object types in PDF. Array is just a list of objects, whereas dictionary gives a mapping from a key to another object. Naturally, an array or a dictonary can also contain other arrays or dictionaries.pdfsh shows the contents of an array object with indices starting from 0, whereas the contents of a dictionary is shown with the keys, similar to a Python dict.The key of a PDF dictionary has to be an object of type Name (the key cannot be any other object type). A Name object is just a sequence of characters (a string) but starts with symbol /, such as /Name1. In order to improve readability, pdfsh shows the value of Name objects without /. Thus /Name is displayed as Name. This cannot be confused with regular strings because string objects have delimiters () or <> which you will see later.An array is represented between the symbols [ and ], whereas a dictionary is represented between the symbols << and >>. Although PDF representations of arrays and dictionaries do not require a separator between the elements (like a comma), pdfsh uses Python syntax to display them and separates the entries with a comma. 
There are going to be examples of both very soon when we look at the trailer.The first line of the PDF file is:$ head -n1 Simple\ PDF\ 2.0\ file.pdf %PDF-2.0 When using pdfsh, this is stored in the header dictionary with line and version entries.Simple PDF 2.0 file.pdf:/ $ node header header is a dictionary object The contents of a dictonary can be listed by cat:Simple PDF 2.0 file.pdf:/ $ cat header/ { line: (%PDF-2.0), version: 2.0 } or we can enter into the dictionary with cd (like a directory) and then the keys are listed with ls :Simple PDF 2.0 file.pdf:/ $ cd header Simple PDF 2.0 file.pdf:/header $ ls line version where the value of each key can be listed with cat individually:Simple PDF 2.0 file.pdf:/header $ cat line (%PDF-2.0) Simple PDF 2.0 file.pdf:/header $ cat version 2.0 It is also possible to use cat . inside /header.PDF Object Type: Literal StringThe value of line in header is shown between () above. The reason for this is that it is stored (by pdfsh) as a literal string object and PDF literal string objects are enclosed in parentheses ( and ).Simple PDF 2.0 file.pdf:/header $ node line line is a literal string object PDF VersionIt might be strange that version is not a string but a name, but it will be clear later why.Simple PDF 2.0 file.pdf:/header $ node version version is a name object The PDF version, when the PDF file is created, is written in the header. However, it is possible to update the version also with Version key in the catalog dictionary. You will see the catalog dictionary later.TrailerJust showing the trailer part of the PDF file here again:trailer << /Size 10 /Root 1 0 R /ID [ <31c7a8a269e4c59bc3cd7df0dabbf388><31c7a8a269e4c59bc3cd7df0dabbf388> ] >> startxref 4851 %%EOF Just before the very end, there is a line with startxref keyword followed by another line containing an integer number (4851 above). This integer value is the byte offset (from the start of PDF file) of the (last) cross-reference table section.Going upwards from the startxref keyword, there is the trailer keyword. After the trailer keyword, there is the dictionary between << and >>.Lets look at the startxref with pdfsh:Simple PDF 2.0 file.pdf:/ $ node trailer trailer is a dictionary object Simple PDF 2.0 file.pdf:/ $ cd trailer Simple PDF 2.0 file.pdf:/trailer $ ls dictionary/ startxref Simple PDF 2.0 file.pdf:/trailer $ node startxref startxref is a integer object Simple PDF 2.0 file.pdf:/trailer $ cat startxref 4851 As mentioned before, trailer section and startxref are not objects, but pdfsh represents trailer as a dictionary and stores startxref value as an integer object in this dictionary with startxref key.The actual trailer dictionary, which is stored as a PDF dictionary object in the PDF file, is also stored in trailer with dictionary key.Lets look at the trailer dictionary with pdfsh:Simple PDF 2.0 file.pdf:/trailer $ cat dictionary { Size: 10, Root: (1, 0, R), ID: [<31c7a8a269e4c59bc3cd7df0dabbf388>, <31c7a8a269e4c59bc3cd7df0dabbf388>] } Remember that to improve readability the Name objects are shown without the / in pdfsh. In the trailer dictionary above, for example Size key is actually a Name object /Size.The value of Size key is an integer (PDF integer numeric object type) representing the number of objects in this document.The value of ID key is an array (PDF array type), thus it is enclosed between [].The most important entry in the trailer dictionary is Root. It identifies the Catalog (dictionary) of this document. 
The catalog dictionary is the root of all document structure, it is how all other information regarding to pages etc. is found.PDF Object Type: Indirect ReferenceThe value of Root key above is (1 0 R), which is an indirect reference. It means the catalog (dictionary) object has the object number 1 and the generation 0. R means this is a (indirect object) Reference.Cross-Reference TableIn order to find and load the catalog dictionary (the object number 1 in the example), and actually before doing anything else, the cross-reference table has to be constructed by reading all cross-reference table sections.When described like this, it sounds like the trailer dictionary is read after startxref, and it can be read like this. However, I think, it is actually better to think the trailer dictionary is read after the cross reference section is read. This is because in incrementally updated PDF files, the location of previous trailer is not directly known but the location of the start of cross reference section is known. So the previous trailer dictionary can be read after the previous cross reference section is read.The byte offset of the start of the (last) cross reference section is given by the startxref value (in the trailer), and the section starts with xref keyword.Each section contains one or more subsections. Each subsection contains a list of entries which are the actual byte-offset values.Copying the xref part of the file shown before:xref 0 10 0000000000 65535 f 0000000016 00000 n 0000000096 00000 n 0000002547 00000 n 0000002619 00000 n 0000002782 00000 n 0000003587 00000 n 0000003811 00000 n 0000003972 00000 n 0000004524 00000 n In the line following xref, a subsection is introduced with two numbers (0 10 above):the first number is the first object number in this subsection (0 above)the second number is the number of objects in this subsection (10 above)You might have already realized the Size key of the trailer dictionary also tells the number of objects in cross-reference table, so the second number here (10) is the same as the value of the Size key in the trailer dictionary, because there is only one cross reference table section and subsection in this file.After these two numbers, there are multiple lines (as much as the number of objects given by the second number when subsection is introduced), with three pieces of information; two numbers and one character.the first number (always 10 digits) is the byte-offset of the data of this object in this file (in the second entry above, it is 0000000016=16)the second number (always 5 digits) is the generation number for this object (in the second entry above, it is 00000=0)a character of f (meaning free or deleted) or n (meaning in-use) (in the second entry above, it is n)Because this structure is fixed length, the numbers here are 0 padded. The entries do not specify the object number because the object number is given once for the subsection (0 above) and it is incremented sequentially. So, the first entry is for the object number 0 and the second entry is for the object number 1. The object number 1 is the catalog dictionary referenced from the trailer dictionary as we have seen before.You might have already realized there is something strange with the first entry. First, its object number is 0, but I said the object number is a positive integer. Second, its generation number is 65535. Last, it is shown free/deleted even though there is no incremental update applied to this file. This is a strange part of the specification. 
I do not know why it is designed like this, but theoretically, object number 0 is possible as it can be seen in this cross reference table section. However, the object number in an indirect reference has to be a positive integer according to the specification. Then, what is the point of this entry for object number 0 ?The reason is that the free entries are kept in a free entries list (I believe in order to make it easy to reuse these entries) and there are two ways to keep this list. This is described quite poorly in the specification but the first entry (object number 0) is assumed to be the head of this free entries list, and it should have a generation number 65535.BodyThe body contains the objects in the cross reference section.Simple PDF 2.0 file.pdf:/ $ node body body is an array object Simple PDF 2.0 file.pdf:/ $ cd body/ Simple PDF 2.0 file.pdf:/body $ ls 0/ There is only one, index=0, entry in the body, because this PDF file is not updated.Simple PDF 2.0 file.pdf:/body $ cd 0/ Simple PDF 2.0 file.pdf:/body/0 $ ls 1.0/ 3.0/ 4.0/ 7.0/ 8.0/ 9.0/ 2.0 5.0 6.0 Just by looking at the ls output, you can see the top 6 are container (dictionary or array) objects (with / suffix). As you might remember the catalog dictionary is (1, 0, R), lets just see what it is:Simple PDF 2.0 file.pdf:/body/0 $ cat 1.0/ { Type: Catalog, Metadata: (2, 0, R), Pages: (3, 0, R) } Before jumping to the document structure and the pages, lets summarize all PDF object types.PDF Object TypesLets summarize all the object types:Boolean: true and false keywordsNumeric: Integer (optionally with a sign) or Real (optionally with a sign and decimal point). All numbers are in decimal system and there is no exponent syntax, just plain numbers.String: Literal or HexadecimalLiteral string: is written between ( and ) such as (This is a string)Hexadecimal string: is written between < and > such as <65>, this means b'\x65' in Python syntaxName: is an atomic symbol, it starts with /Array: is written between [ and ] such as [549 3.14 false (Ralph) /aName]. There is no separator between the elements.Dictionary: is written between << and >> such as <</Type Catalog/Version 1>>. The keys of the dictionary is of type Name, hence starts with /. There is no separator between the elements.Null: null keywordThere is also another type (or extended type) of the indirect object which is called stream object.In the specification, only the object types above and the stream object type is listed and counted, which results 9 object types. It is 9 because integer and real is counted separately but literal and hexadecimal string is counted as one type and there is also stream object. Counting literal and hexadecimal as one type is not too strange, since both are actually byte strings only the representation is different. However, I think, Indirect Reference should also be an object type, because it is also stored in an array or a dictionary.Indirect Reference and Indirect ObjectIn the example before, the value of the Root key in the trailer dictionary was (1 0 R). This is an indirect reference.The indirect object of this indirect reference is at byte offset 16 (as found in the cross reference table entry for object number 1) and it is:1 0 obj << /Type /Catalog /Metadata 2 0 R /Pages 3 0 R >> endobj The indirect object definition starts with object_number generation_number obj line and ends with endobj keyword. The object here is a dictionary with three keys.Stream ObjectsStream objects are indirect objects with a binary data (bytes). 
A stream object always has a dictionary (as a direct object) first, and the binary data is stored between stream and endstream keywords, like this:dictionary stream binary data endstream Since it has to be an indirect object, in a PDF file it is always found in this form:N G obj << ... >> stream binary data endstream endobj The stream dictionary is required to have a key named Length whose value is the number of bytes in the stream.The binary data inside the stream can be stored as it is, or it can be stored after it is processed (i.e. it can be encoded, compressed and/or encrypted), and this brings the concept of a stream filter. I will explain the stream filters later.Document structurePDF represents the document structure in a tree. PDF specification says: “A PDF document can be regarded as a hierarchy of objects contained in the body section of a PDF file”. Until now, I showed how the objects are stored in a PDF file, now these objects will be used to form the document.The document structure starts from the catalog dictionary which is referenced in the trailer dictionary with the Root key. The catalog dictionary is the root of the document structure tree.CatalogAs we saw in the trailer dictionary of Simple PDF 2.0 file.pdf before, the catalog dictionary is (1 0) object and it can be viewed with pdfsh easily. All the objects can be found under objects. Remember there can be multiple bodies but objects is the final list of objects.Simple PDF 2.0 file.pdf:/ $ cat objects/1.0/ { Type: Catalog, Metadata: (2, 0, R), Pages: (3, 0, R) } The catalog dictionary must have Type and Pages keys, and the value of the Type key must be Catalog.The value of the Pages key is an indirect reference to page tree node, and it is 3 0 R above.Metadata is optional but it is also specified above and it contains an indirect reference to metadata stream.There are a number of (~30) optional keys that can be used in catalog dictionary.Page TreeWhen a PDF document is viewed, the pages are naturally shown as a linear sequence. However, pages are actually stored in a tree in the PDF document, and a node of this tree is called a page tree node. The pages with a content, thus rendered to the user for display, are the leaf page nodes.The root of this tree is given by the Pages key in the catalog dictionary, which is 3 0 R above and it is:Simple PDF 2.0 file.pdf:/ $ cat objects/3.0/ { Type: Pages, Kids: [(4, 0, R)], Count: 1 } This means, it is a page node with type Pages. This means it is not a leaf node. The Type key is a must in page nodes.Its kids are given by Kids key and the value is an array.Count is not the number of kids but the number of leaf page nodes under this node.Lets look at the kid, the page node (4 0 R):Simple PDF 2.0 file.pdf:/ $ cat objects/4.0/ { Type: Page, Parent: (3, 0, R), MediaBox: [0, 0, 612, 396], Contents: [(5, 0, R), (6, 0, R)], Resources: { Font: { F1: (7, 0, R) } } } It is a page node with type Page, meaning this is a leaf node. Since it is a leaf node, it does not have Kids and Count keys. However, because it is not the root, it has a Parent key. The page tree can have more than 2 levels, hence it is possible to have an intermediate page tree node with type Pages, with a Parent and with Kids.Other than the Type and Parent, a page would probably contain MediaBox, Contents and Resources at minimum like here. Contents are optional but empty contents mean it is an empty page. 
MediaBox and Resources are required but they can be omitted and then their value is inherited from the ancestors in the page tree.The MediaBox defines the boundaries of the page to be displayed or printed as a rectangle in user space units.The resources required by the content of a page can be given by a dictionary in Resources key. If this key is omitted, it means the resources will be inherited from the ancestors in the page tree. The keys of resources dictionary specifies the type of the resource. There is only one here, the Font type resource and there is only one font resource named F1 which is specified in object (7 0), and it is:Simple PDF 2.0 file.pdf:/ $ cat objects/7.0/ { Type: Font, Subtype: Type1, BaseFont: Helvetica, FirstChar: 33, LastChar: 126, Widths: (8, 0, R), FontDescriptor: (9, 0, R) } This specifies it is a Type 1 font of Helvetica which is the PostScript language name of the font.The contents of this page is given in Contents key. Its value can be a stream (an indirect reference to a stream) or an array consisting of indirect references to streams. Above, it is the latter, it is an array. These streams are specifically called as content stream by the specification. If there are multiple streams, they are simply joined when rendering the page. The content streams of this page are objects (5, 0) and (6, 0). Lets see them:Simple PDF 2.0 file.pdf:/ $ cat objects/5.0/ { Length: 746 } Since this is a stream, only the stream dictionary is shown by cat command with pdfsh. The contents (stream data) can be viewed by cats or catsx. cats decodes the stream data with utf-8 encoding and displays it as text, which would work here. Otherwise, catsx displays the content as hexadecimal string.The content stream contains PostScript instructions, hence it will not make much sense to many but this one has comments, here it is:Simple PDF 2.0 file.pdf:/ $ cats objects/5.0/ % Save the current graphic state q % Draw a black line segment, using the default line width. 150 250 m 150 350 l S % Draw a thicker, dashed line segment. 4 w % Set line width to 4 points [4 6] 0 d % Set dash pattern to 4 units on, 6 units off 150 250 m 400 250 l S [] 0 d % Reset dash pattern to a solid line 1 w % Reset line width to 1 unit % Draw a rectangle with a 1-unit red border, filled with light blue. 1.0 0.0 0.0 RG % Red for stroke color 0.5 0.75 1.0 rg % Light blue for fill color 200 300 50 75 re B % Draw a curve filled with gray and with a colored border. 0.5 0.1 0.2 RG 0.7 g 300 300 m 300 400 400 400 400 300 c b % Restore the graphic state to what it was at the beginning of this stream Q This content stream draws a black solid line, a black dashed line, a rectangle and a filled curve. The other content stream is very short:Simple PDF 2.0 file.pdf:/ $ cats objects/6.0/ % A text block that shows "Hello World" % No color is set, so this defaults to black in DeviceGray colorspace BT /F1 24 Tf 100 100 Td (Hello World) Tj ET This content stream only draws a text block to show “Hello World”.The PDF file with these two content streams looks like this:Simple PDF 2.0 file.pdfIncremental UpdatesThere are some rules on how incremental updates should function:The updated trailer dictionary always include all the information of the previous trailer dictionary. The value of a key naturally can be updated but it includes all of them even if it is not updated. So the last trailer dictionary contains all the information. 
  This is why it is called an updated trailer.
- The ID key in the trailer dictionary contains an array with two byte strings. These byte strings are unique (generated from a hash of the file, etc.). When the PDF file is created, both strings contain the same value. When it is updated, the second one is updated.
- An updated trailer, in addition to the keys of the previous trailer, includes a key named Prev, which points to the byte offset of the start of the previous cross-reference table section (in other words, it contains the value of startxref before the update).

PDF 2.0 via incremental save

Now let's look at the second example, PDF 2.0 via incremental save.pdf.

$ pdfsh PDF\ 2.0\ via\ incremental\ save.pdf
pdfsh Copyright (C) 2024 Mete Balci
License GPLv3+: GNU GPL version 3 or later
PDF 2.0 via incremental save.pdf:/ $ cat header/
{ line: (%PDF-1.7), version: 1.7 }

The first interesting thing is that the PDF version in the header (thus, when the PDF file was created) is 1.7, not 2.0.

PDF 2.0 via incremental save.pdf:/ $ ls body/
0/ 1/

The body has two elements, thus the file was updated once. The new or modified objects (meaning the objects in the last body section) are:

PDF 2.0 via incremental save.pdf:/ $ ls body/0/
1.0/ 9.0/ 6.0

The original objects are:

PDF 2.0 via incremental save.pdf:/ $ ls body/1/
1.0/ 3.0/ 4.0/ 7.0/ 8.0/ 9.0/ 2.0 6.0

The final objects are:

PDF 2.0 via incremental save.pdf:/ $ ls objects/
1.0/ 3.0/ 4.0/ 7.0/ 8.0/ 9.0/ 2.0 6.0

Thus, it looks like objects 1, 9 and 6 were updated, and no object was deleted. If, for example, object 9 had been deleted, it would not be listed under objects. Let's look at the trailer:

PDF 2.0 via incremental save.pdf:/ $ cat trailer/
{ dictionary: { Size: 10, Root: (1, 0, R), Prev: 4287, ID: [<15d2e6da9695e5fc3c609cbb33346129>, <31c7a8a269e4c59bc3cd7df0dabbf388>] },
  startxref: 5342,
  prev: { dictionary: { Size: 10, Root: (1, 0, R), ID: [<15d2e6da9695e5fc3c609cbb33346129>, <15d2e6da9695e5fc3c609cbb33346129>] },
          startxref: 4287 } }

As before, there are dictionary and startxref keys in the trailer, but there is also a prev key. The trailer dictionary (the value of dictionary) is similar, but it has a Prev key with value 4287. Also, the second element of the array at the ID key is different. These are all as expected. pdfsh adds the previous trailer to the trailer as the prev key (not Prev). As expected, the elements of its ID array have the same value, and its startxref is different.

Finally, if we check the catalog dictionary (the object referenced by the Root key in the trailer dictionary):

PDF 2.0 via incremental save.pdf:/ $ cat objects/1.0/
{ Type: Catalog, Pages: (3, 0, R), Metadata: (2, 0, R), Version: 2.0 }

It has a Version key with the value 2.0. This came with the update, because the previous object has no such key:

PDF 2.0 via incremental save.pdf:/ $ cat body/1/1.0/
{ Type: Catalog, Metadata: (2, 0, R), Pages: (3, 0, R) }

It might not be obvious, and it also surprised me: the version is not a string.

PDF 2.0 via incremental save.pdf:/ $ node objects/1.0/Version
Version is a name object

It is a Name object. That is why I also decided to represent the version in the header as a Name object, not as a string.

Stream Filters

I have mentioned before that the stream data can be pre-processed. This is done with stream filters. A stream filter can theoretically be anything taking binary data as input and producing binary data as output. In PDF, it is used for encoding (such as ASCII 85 for text or DCT for images), compression (such as LZW) or encryption. Also, multiple filters can be used, one after the other.
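To make the chaining concrete, here is a small sketch, assuming (my reading of the spec, not pdfsh code) that the decode filters are applied in the order their names are listed in the Filter array:

# Sketch (my illustration, not pdfsh internals): when a stream has a
# chain of filters, e.g. Filter: [ASCIIHexDecode FlateDecode], the
# decoders are applied in the listed order. The hex decoder below is
# simplified: it only strips the > end-of-data marker and ignores
# whitespace handling.
import binascii
import zlib

DECODERS = {
    "ASCIIHexDecode": lambda data: binascii.unhexlify(data.strip(b">")),
    "FlateDecode": zlib.decompress,
}

def decode_stream(data, filters):
    for name in filters:
        data = DECODERS[name](data)
    return data

# Encode "Hello World" with Flate, then ASCII Hex, and decode it back:
encoded = binascii.hexlify(zlib.compress(b"Hello World")) + b">"
print(decode_stream(encoded, ["ASCIIHexDecode", "FlateDecode"]))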
The filters used for the data in a stream are given in the Filter key of the stream dictionary. The filters defined in the PDF 2.0 spec are:

- ASCII Hex: encoded in an ASCII hexadecimal representation
- ASCII 85: encoded in an ASCII base-85 representation
- LZW (*): compressed using the LZW adaptive compression method
- Flate (deflate) (*): compressed using the zlib/deflate compression method
- Run Length: compressed using a byte-oriented run-length encoding algorithm
- CCITT Fax (*): compressed using the CCITT facsimile standard (Group 3 or Group 4)
- JBIG2 (*): compressed using the JBIG2 standard (excluding color palette coding)
- DCT (*): compressed using a DCT (discrete cosine transform) based on the JPEG standard
- JPX: compressed using the wavelet-based JPEG 2000 standard
- Crypt (*): encrypted (the exact mechanism is given in parameters: RC4, AES-128 CBC or AES-256 CBC)

The ones marked with (*) above have parameters, which are stored in the stream dictionary with the DecodeParms key. When the data in a stream is read and it has a filter (or filters), the reverse of the filter (or filters) has to be applied to derive the actual data, i.e. if it is encoded, the data has to be decoded first.

The PDF files shown before have no stream filters, so I have another example: metebalci.com-about.pdf. Let's see what this PDF has:

metebalci.com-about.pdf:/ $ cat trailer/
{ dictionary: { Size: 149, Root: (136, 0, R), Info: (1, 0, R) }, startxref: 101850 }

Let's look at the catalog (136, 0):

metebalci.com-about.pdf:/ $ cat objects/136.0/
{ Type: Catalog, Pages: (28, 0, R), MarkInfo: { Type: MarkInfo, Marked: True }, StructTreeRoot: (29, 0, R), ViewerPreferences: { Type: ViewerPreferences, DisplayDocTitle: True }, Lang: (en-us) }

Let's look at the page tree root (28, 0):

metebalci.com-about.pdf:/ $ cat objects/28.0/
{ Type: Pages, Count: 2, Kids: [(2, 0, R), (23, 0, R)] }

Let's look at the first kid (2, 0):

metebalci.com-about.pdf:/ $ cat objects/2.0
{ Type: Page, Resources: { ProcSet: [PDF, Text, ImageB, ImageC, ImageI], ExtGState: { G3: (3, 0, R) }, Font: { F4: (4, 0, R), F5: (5, 0, R), F6: (6, 0, R) } }, MediaBox: [0, 0, 594.96, 841.92], Annots: [(7, 0, R), (8, 0, R), (9, 0, R), (10, 0, R), (11, 0, R), (12, 0, R), (13, 0, R), (14, 0, R), (15, 0, R), (16, 0, R), (17, 0, R), (18, 0, R), (19, 0, R), (20, 0, R), (21, 0, R)], Contents: (22, 0, R), StructParents: 0, Parent: (28, 0, R) }

Let's look at the contents of this kid (22, 0):

metebalci.com-about.pdf:/ $ cat objects/22.0
{ Filter: FlateDecode, Length: 3516 }

This is a stream dictionary, and it specifies the FlateDecode filter. It is difficult to see the effect of this without extracting the data. A powerful and useful feature of pdfsh is that it can extract both the encoded and the decoded data from a PDF stream, using the catsb.encoded and catsb.decoded commands. These are used directly from the OS command line with the -c option, and the output is written to stdout. These commands cannot be used in the pdfsh shell interface.

$ pdfsh -c 'catsb.encoded objects/22.0' metebalci.com-about.pdf | wc -c
3516
$ pdfsh -c 'catsb.decoded objects/22.0' metebalci.com-about.pdf | wc -c
20732

So the compressed content is 3516 bytes and the uncompressed content is 20732 bytes. Let's look at the first 100 bytes of the original (encoded) content (the data of the stream):

$ pdfsh -c 'catsb.encoded objects/22.0' metebalci.com-about.pdf | xxd -l 100
00000000: 789c e55c 5b8b 5c39 0e7e af5f 719e 17d6  x..\[.\9.~._q...
00000010: b16c c917 0881 ba74 857d 18d8 1902 f303  .l.....t.}......
00000020: 6677 322c 9d81 c9fe 7f58 ece3 8bec 73ec  fw2,.....X....s.
00000030: aaca a47b 7bd9 4093 6a9f 3ab2 24cb d227  ...{{.@.j.:.$..'
00000040: 596e a1b4 8fff 16b9 c8e5 af82 fdea 1084  Yn..............
00000050: 07ef ddf2 cb97 c31f 0700 1246 d192 ff57  ...........F...W
00000060: 0a8d 7096                                ..p.

and the decoded content:

$ pdfsh -c 'catsb.decoded objects/22.0' metebalci.com-about.pdf | xxd -l 100
00000000: 2e32 3339 3939 3939 3920 3020 3020 2d2e  .23999999 0 0 -.
00000010: 3233 3939 3939 3939 2030 2038 3431 2e39  23999999 0 841.9
00000020: 3139 3938 2063 6d0a 710a 3131 352e 3632  1998 cm.q.115.62
00000030: 3520 3131 352e 3632 3520 3232 3436 2e38  5 115.625 2246.8
00000040: 3735 2033 3237 3520 7265 0a57 2a20 6e0a  75 3275 re.W* n.
00000050: 710a 332e 3132 3520 3020 3020 332e 3132  q.3.125 0 0 3.12
00000060: 3520 3131                                5 11

The original encoded data, as expected, is not readable or understandable, but the decoded data is clearly readable. The data of any stream object can be extracted from a PDF file using pdfsh, as long as its filters are supported by pdfsh.
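As an aside, FlateDecode data is ordinary zlib/deflate data: the 78 9c at the start of the encoded dump above is a standard zlib header. So the decoding step can also be reproduced outside pdfsh with a few lines of Python. A minimal sketch (the inflate.py name is mine; it assumes no DecodeParms such as PNG predictors, and indeed the stream dictionary above has no DecodeParms key):

# inflate.py - minimal sketch: inflate a FlateDecode stream read from
# stdin and write the decoded bytes to stdout. Without DecodeParms
# (no predictors, as in the stream above), FlateDecode data is plain
# zlib data, so zlib.decompress is all that is needed.
import sys
import zlib

sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))

Piping the encoded bytes through it should reproduce the catsb.decoded output:

$ pdfsh -c 'catsb.encoded objects/22.0' metebalci.com-about.pdf | python3 inflate.py | wc -c

which should print 20732, matching catsb.decoded above.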
2024-11-08T10:32:32
en
train
42,016,083
samch
2024-11-01T12:00:45
Intel Keychains with Computer Chips
null
https://www.chipsetc.com/intel-keychains.html
1
1
[ 42016084, 42016234 ]
null
null
null
null
null
null
null
null
null
train
42,016,094
freetonik
2024-11-01T12:02:27
Linkding: Self-Hosted Bookmark Manager
null
https://linkding.link/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,016,109
polyrand
2024-11-01T12:05:24
Analytics-Optimized Concurrent Transactions
null
https://duckdb.org/2024/10/30/analytics-optimized-concurrent-transactions.html
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,115
mooreds
2024-11-01T12:06:19
Creating Runtime and Application Images with JLink
null
https://dev.java/learn/jlink/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,118
mooreds
2024-11-01T12:06:38
Show Them Something Good
null
https://www.dylanamartin.com/2024/10/29/show-them-something-good.html
1
0
[ 42016209 ]
null
null
null
null
null
null
null
null
null
train
42,016,124
LisaDziuba
2024-11-01T12:07:30
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,016,140
willemlaurentz
2024-11-01T12:10:45
Using Google Free Android
null
https://willem.com/blog/2021-10-25_using-google-free-android/
3
0
[ 42016200 ]
null
null
null
null
null
null
null
null
null
train
42,016,144
leephillips
2024-11-01T12:11:03
Explore Vortex Dynamics Using Julia
null
https://lee-phillips.org/vortex/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,154
lapnect
2024-11-01T12:12:04
Exploring Field Recording Battery Solutions
null
https://www.creativefieldrecording.com/2024/07/31/field-recording-battery/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,157
tosh
2024-11-01T12:12:36
The New ClickHouse SQL Playground
null
https://clickhouse.com/blog/announcing-the-new-sql-playground
1
0
[ 42016194 ]
null
null
no_error
Announcing the new ClickHouse SQL Playground
null
ClickHouse
As part of our efforts to make querying large datasets easier than ever, we're pleased to announce the availability of sql.clickhouse.com! This new SQL playground has over 35 datasets and 220 example queries to get started. We've included some simple charting capabilities, which we plan to improve, and the ability to save and share queries! Take it for a spin and share your favorite queries either on social or via the GitHub repo, where we'll add them for others to enjoy!

As ClickHouse users, we are passionate about datasets. We even have an internal Slack channel, aptly named "data lovers", for sharing interesting datasets for experimentation and testing of features! Historically, we've documented these datasets and tried to provide example queries to get users started. While we also made many of these datasets available in a public ClickHouse instance, also referenced from our documentation, this used the classic Play interface packaged with ClickHouse. This Play interface is deliberately simple and ideal for getting started: it has no dependencies and is a single HTML file. However, it didn't provide the rich user experience we wanted for our playground. Ideally, we wanted something where users could navigate and save example queries while supporting syntax highlighting, autocomplete, query parameters, results export, basic charting, and rich sharing features. These features would allow users to explore datasets and hopefully help users get started with ClickHouse and share their problems.

Fortunately, we recently built a UI for our CryptoHouse demo, where users can query over 100TB of blockchain data for free, including Solana, Ethereum, and Polymarket. This UI was also built to be reusable, benefiting from some of the experience and code from our own ClickHouse Cloud SQL console. With a few enhancements, we were able to quickly re-purpose this demo, and we also used the opportunity to re-organize and catalog our existing demo datasets. With over 35 datasets totaling 60 TB (and growing!), we've also loaded all of the 220 example queries from our docs and blogs to help users get started. Our documentation and blogs will increasingly reference this environment moving forward, with the recent MTA blog already benefiting from this new playground.

Users who prefer to use the clickhouse-client, or wish to integrate the service into their own applications, can connect directly to the ClickHouse instance at sql-clickhouse.clickhouse.com, e.g.

clickhouse client --host sql-clickhouse.clickhouse.com --secure --user demo --password ''

The demo UI remains a single-page application built using NextJS and React, where the client makes all requests. As we described in detail in our blog post about building single-page applications, this is made possible by some key ClickHouse features:

- HTTP interface & REST API - makes it trivial to query ClickHouse with SQL from Javascript. By default, ClickHouse listens on port 8123 or 8443 (if SSL), with the latter exposed in ClickHouse Cloud. This interface includes support for HTTP compression and sessions.
- Output formats - support for over 70 output data formats, including 20 sub-formats for JSON, allowing easy parsing with Javascript.
- Query parameters - allowing queries to be templated and remain robust to SQL injections.
- Role-Based Access Control - allowing administrators to limit access to specific tables and rows.
- Restrictions on query complexity - we restrict users to read-only and limit the query complexity and resources available. In Playground, we limit users to reading 10 billion rows per query and returning 1000 rows in a result set.
- Quotas - to limit the number of queries from any specific client (keyed off IP address), thus preventing rogue or malicious clients from overwhelming the database. Users are limited to 60 queries per hour in Playground.

The latter three are particularly important here, as they allow us to expose the ClickHouse Cloud instance behind the demo to the public internet and safely expose the read-only credentials. For further details on the exact configuration of ClickHouse used, see here.

The UI also heavily uses our component library, click-ui. This library provides a set of React components that align with our own brand and provide an opinionated set of behaviors. This library is the backbone of our Cloud UI and rapidly accelerates development, avoiding spending hours "pixel pushing" to achieve just the right appearance. This is particularly useful when you want to move fast for a demo!

Finally, we'd like to mention Apache ECharts. This charting library is easy to integrate and extremely well documented, allowing us to provide charting capabilities with minimal effort.

While a demo playground is essential to any OSS database, we must also provide the service cost-efficiently. Quotas are a component of this, ensuring fair usage and preventing one user from consuming all the resources. The service also benefits from ClickHouse Cloud's separation of storage and compute, with the data backed on object storage. This minimizes the data storage cost and allows us to scale infinitely as we add more demo datasets. While ClickHouse Cloud does support auto-scaling, the cluster is currently fixed at 3 nodes of 30 vCPUs each, predominantly because we expect a fairly constant query load if our users adopt the demo. We monitor and alert on resource consumption and will review these resources based on demand, either scaling vertically or horizontally as required.

Finally, while our current datasets are static, our next efforts will focus on ensuring as many of them are kept up-to-date as possible. This is likely to exploit two key ClickHouse Cloud features:

- ClickPipes - a managed integration platform that makes ingesting data into ClickHouse simple. While this will require us to ensure dataset changes are periodically made available on either Kafka or object storage, it should greatly simplify data loading.
- Compute-compute separation - currently in preview, this provides the flexibility to create multiple compute node groups, each with its own endpoint, while sharing the same object storage. This architecture enables isolation between various types of workloads, allowing for fine-tuned resource allocation. For the playground, this means we can allocate dedicated compute for writes, isolating this workload from user queries (and thus not impacting their performance) and allowing it to be scaled independently, achieving better cost efficiency.

Although the demo UI is simple, there is one feature whose implementation details are worth sharing. New users to ClickHouse often comment on how the active query feedback provided in the clickhouse-client is one of the highlights when first running a query. This feedback becomes increasingly essential as users write more complex queries, not only to estimate how long the query will take but also to gauge its performance and resource consumption.
We were keen to ensure a similar experience was available to users in the playground, as well as support for canceling a query if it was apparent it was likely to exhaust complexity limits (e.g., 10 billion rows scanned) and never complete. While the HTTP interface of ClickHouse will send response headers as a query executes, these are not supported in browsers' fetch API and are tricky to read. Although alternatives exist, e.g. using the formats that return progress in the response stream, such as JSONEachRowWithProgress, these are also not ideal and incur an overhead. Instead, we explicitly assign a query ID to each query and use a separate query run on a fixed interval (every 100ms), which checks the system.processes table:

SELECT
    sum(read_rows) AS read_rows,
    sum(total_rows_approx) AS total_rows,
    sum(read_bytes) AS read_bytes,
    read_rows / max(elapsed) AS rps,
    read_bytes / max(elapsed) AS bps,
    formatReadableQuantity(read_rows) AS formatted_rows,
    formatReadableQuantity(total_rows) AS formatted_total_rows,
    formatReadableSize(read_bytes) AS formatted_bytes,
    formatReadableQuantity(rps) AS formatted_rps,
    formatReadableSize(bps) AS formatted_bps
FROM clusterAllReplicas(default, system.processes)
WHERE (initial_user = 'demo') AND startsWith(query_id, {uuid:String})

This query runs under a monitor user, which has lower complexity limits concerning the number of rows it can read but a higher quota for the number of queries allowed per hour. This allows us to provide rich details on the progress of a query as it runs.

The playground allows users to save queries (and their configured chart) locally. This persists only in browser storage, although you can share the query and its chart via a link. If you feel a query is worth documenting and sharing with the broader community as an official example, please raise an issue or PR on the source example query file. We will also ensure that example queries and new datasets contributed to the documentation are available in the playground. As we highlight below, we're looking to improve this experience by simplifying the submission of example queries.

Our efforts to improve the SQL playground moving forward will focus on three areas:

- Live datasets - ensure the datasets are updated as new data becomes available. Although some of the datasets are not subject to change, others, such as GitHub events, can be updated in real-time. We expect this to be a gradual effort.
- Sharing widget - although we've actively linked all example queries in our docs and blogs to the new playground, users would ideally be able to run these in place. This requires a query widget that we can embed across pages, with results and charts rendered in place. This same widget could then be used in a forum or discussion-based format to improve collaboration amongst our community.
- Simplifying sharing - as we noted above, the process for sharing new datasets and example queries is currently a bit cumbersome. We're exploring ways to make this process smoother and easier. Stay tuned for developments!

The new ClickHouse SQL Playground is a resource for our community and data enthusiasts to explore, experiment, and share insights using real-world datasets while also learning ClickHouse. We hope our users find the playground valuable and encourage you to share your feedback (and favorite queries)! Get started with ClickHouse Cloud today and receive $300 in credits. At the end of your 30-day trial, continue with a pay-as-you-go plan, or contact us to learn more about our volume-based discounts.
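As an aside (not from the original post), the progress-polling pattern above can be sketched from outside the UI, since the query_id URL parameter and the system.processes table are standard ClickHouse. A rough Python sketch, with several stated assumptions:

# Rough sketch (mine, not from the post): start a query with an explicit
# query_id over the HTTP interface, then poll system.processes for its
# progress, as the playground does every 100ms. Assumptions: HTTPS on
# port 8443 is reachable and the demo role may read system.processes.
# Note the playground uses a separate monitor user with a higher quota;
# with the demo user, tight polling would quickly exhaust the
# 60-queries-per-hour quota, so this is illustrative only. It also polls
# a single replica, while the playground queries clusterAllReplicas.
import threading
import time
import uuid

import requests

URL = "https://sql-clickhouse.clickhouse.com:8443/"
AUTH = ("demo", "")
qid = str(uuid.uuid4())

def run_query() -> None:
    requests.post(URL, params={"query_id": qid}, auth=AUTH,
                  data="SELECT count() FROM numbers(1000000000)")

worker = threading.Thread(target=run_query)
worker.start()
while worker.is_alive():
    progress = requests.post(URL, auth=AUTH, data=(
        "SELECT sum(read_rows) FROM system.processes "
        f"WHERE startsWith(query_id, '{qid}')"))
    print(progress.text.strip())
    time.sleep(0.1)
worker.join()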
Visit our pricing page for details.
2024-11-07T13:47:45
en
train
42,016,161
mooreds
2024-11-01T12:13:02
California SB 253 and SB 261: a guide for companies
null
https://watershed.com/blog/california-sb-253-and-sb-261-a-guide-for-companies
1
0
[ 42016177 ]
null
null
null
null
null
null
null
null
null
train
42,016,162
christophilus
2024-11-01T12:13:20
Why Zellij?
null
https://poor.dev/blog/why-zellij/
2
1
[ 42016163, 42016714 ]
null
null
no_error
Why Zellij?
2024-10-30 09:00:00 +0200 +0200
:: Aram Drevekenin
I am a terminal developer. I write my code inside vim, and I use Zellij to manage my terminal workspace and automate everyday tasks. My desktop environment is a wrapper around the two most ubiquitous application development platforms of our time: the terminal and the browser. Those of us using Zellij often get asked: "Why Zellij?" I usually avoid this question, because I feel the choices of software and stack are deeply personal. I don't like convincing people or evangelizing - I prefer creating great tools and allowing them to speak for themselves. In this post however, I'm going to fill this gap by trying something different: I'm going to tell the story of Zellij.

My Story

I've been a linux user since I was 15, using bash and later perl to automate my personal tasks. I used nano as my text editor, and later, when I wanted syntax highlighting, I moved to vim. Whenever I opened a full-fledged graphical IDE, I was confused and overwhelmed by the mixture of text editing and code analysis. The various menus and tooltips that kept popping up whenever I typed anything made me anxious. So I stuck with vim.

As a terminal developer, I believe that my code editor is just that: a code editor. It's there to display and manipulate text fast and efficiently. It's not there to run my code, analyze it, provide tools to manipulate it or collaborate on it with others. For developers like me, this sort of magic happens inside the terminal itself.

The cool thing about this approach is that you absolutely have to build your own tools, whether through code or heavy configuration. There is no one-shot application that does all of this for you. You develop a personal relationship with your terminal workspace, making it behave exactly as you want it to without compromise. You share tricks with others through dotfile repositories and generally focus on improving workflows - because everything is pluggable.

The frustrating thing about this approach is that you absolutely have to build your own tools. There is no one-shot application that does all of this for you. Your relationship with your terminal workspace is often lonely. Most others see your ways as impractical and user-hostile. Moving to a different computer is often an adventure, and your workspace is something you need to maintain on your own.

I created Zellij because I believe there is a better way. I believe we can have our cake and eat it.

The Story of Zellij

The terminal is an amazing application platform. It's probably the most ubiquitous user-facing one we have. It has been mostly stable for decades, and it has emerging properties that I feel place its rendering methods light-years ahead of traditional graphical environments. And yet it is extremely underutilized. Many developers use it grudgingly, preferring their graphical tools. They consider it a necessary evil, a relic of a bygone era that should pass away from the world. And who can blame them? The terminal at its core is a user-hostile environment. Little thought is given to making it more approachable to users beyond minor necessities such as managing tabs, splits and sessions. Terminal developers are often considered removed from others - either looked up to as elite or looked down upon as disconnected. Either way, we are seen as a distant anomaly.

It doesn't have to be this way. I created Zellij not just for me and other terminal developers. I created Zellij for anyone who loves the terminal, or would like to love it.
Zellij is a user-friendly terminal workspace that does not sacrifice simplicity for power, placing the full power of the terminal at everyone's fingertips without forcing them to climb a mountain of lore and domain knowledge. Zellij is powerful, simple, beautiful and yet deeply configurable. Zellij is not a replacement for any other software. It is a re-imagining of the terminal that doesn't leave anyone behind.

Zellij is Discoverable

One of the reasons terminal software is often considered hostile is that creating discoverable textual interfaces is hard. Creating discoverable interfaces in general is a challenge, and doing so without the benefit of a mouse or a touch screen is even harder. Terminal application developers understandably often skip this stage, deferring instead to cheat-sheets and manual pages that are outside the application itself. Zellij places an emphasis on having these in-app and on-screen. It is my belief that an interface being discoverable and looking good is one of the most important aspects of using software. It not only makes new and returning users feel at home, it helps users discover features and get the most out of their tools. I believe this aspect should not be an add-on, but rather a core principle of the software.

Zellij is an Application Platform and Runtime

Zellij is designed primarily as an application platform. It can run terminals just as easily as it can run custom-built applications that we call plugins. While the plugin ecosystem is young, this is the direction the project is going. These plugins are designed to be:

- Easy to develop - since the plugin system uses webassembly/wasi as a runtime, one should in the future (once some more SDKs are developed) be able to use any language to develop them (right now it's either Rust or Go). One can also use a common UI language in the form of UI components to forgo developing one's own, as well as be in sync with the rest of the user's UI and theme.
- More powerful than terminal applications - they can do anything a terminal application does, plus be aware of the whole workspace and communicate with/spawn each other in a predictable way.
- More secure than terminal applications - the system interface as well as application state are gated behind a permission system.
- More portable than terminal applications - compiled binaries can run on any system that has a Zellij instance and do not require any installation.

Zellij does not Rely on Graphical Assets to Render Itself

I believe terminal emulation is the most ubiquitous user-facing platform we have. While not perfect, it's been around for a long while and is mostly stable. Any other platform (eg. desktop applications, browser applications or even terminal emulators themselves) represents a lock-in of one sort or another, either to the platform itself or to some sort of infrastructure or other translation layer. I believe the only way to ensure that an application, and indeed an application development platform, lasts is to base its rendering on text. This has many other emerging properties, such as UIs (or parts of them) being replayable as well as being parsable by external tools (indeed, this is how we run our e2e tests). I don't believe this is a limitation, I believe it's a strength. I would not want my development UI to be built from GUI assets.

A Zellij Workspace is Truly Portable

Zellij is designed to assist users in creating their own workspaces:
as a pluggable combination of layouts and plugins that can be easily shared with others or carried along when moving to new machines; not as a brittle soup of bash scripts, but as a single, concise and human-readable configuration file.

Zellij is Free

Zellij is a gift to the developer community at large, and indeed to all those who use a terminal. It's free, it's open-source and it is respectful of its users and their privacy. Zellij will never display ads or collect user data. Indeed, its infrastructure is designed in such a way as to make these practices very hard for applications built on top of it. Zellij is a labor of love and passion, asking users who love it to pay as they wish to support Zellij development, or not to pay at all if they are unable or unwilling.

The Story of Zellij is Just Beginning

The task of re-imagining the terminal is a large one. Hundreds of developer hours have been poured into Zellij, and yet it is still pre-1.0. This is to be expected. I believe in consistent incremental improvement, and I believe in developing in the open. This means that when using Zellij, there might be a few papercuts here and there. Personally, as a linux user, I'm used to things being this way. But this doesn't mean I don't strive to fix everything as fast as possible and provide users with a smooth experience. Right now, I'm concentrating on building the infrastructure that facilitates this re-imagining. Some pieces are already in place, but many aren't. Using Zellij today means being an early adopter. It means having a say about how the application will take shape over the years. It's an exciting journey and experience, but I will be the first to say it's not for everyone.

Where do we go from here?

If you are interested in the project, the best way to support it is to become a user. There's a bustling user community spread across multiple Internet spaces. Enjoy the software, extend it with plugins and configuration - support others on their journey. Understand that Zellij is not here to replace any other tool. Zellij is here to place the terminal on the central stage where it belongs.

Support Me and Zellij Development ❤️

I spend 100% of my time developing Zellij. I'm mostly living on my savings because I deeply believe in this project, its userbase and its future. If you'd like to support me and are able, a recurring donation of 5-10$ a month is the most sustainable way of doing so. It will really help me pay my bills. So please consider sponsoring my work.
2024-11-08T11:14:51
en
train
42,016,168
rolfvanroot
2024-11-01T12:14:12
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,016,172
0xlogk
2024-11-01T12:14:28
kgrep: small search engine, no fluff, no hype
null
https://kgrep.com
5
0
[ 42016258 ]
null
null
null
null
null
null
null
null
null
train
42,016,173
phafu
2024-11-01T12:14:35
Show HN: Nondeterministic finite queued dialog automaton
null
https://gitlab.com/z-s-e/nfqda
2
0
null
null
null
missing_parsing
Zeno Endemann / NFQDA · GitLab
null
null
NFQDA. SSH clone: git@gitlab.com:z-s-e/nfqda.git. HTTPS clone URL: https://gitlab.com/z-s-e/nfqda.git
2024-11-08T08:25:46
null
train
42,016,184
hobology
2024-11-01T12:15:56
Show HN: Pokémon Ketsugo v2.0 – AI Powered Pokemon Fusion Guessing Game
null
https://pokeketsugo.com/
1
0
null
null
null
missing_parsing
Poké Ketsugo v2.0 - Peek-a-boo, Find The Two!
null
null
Think you can guess the Pokémon fusion? Guest mode offers limited fusions and features. Daily Top Scorers: No champs yet. Be the first to play! Ketsugo Champs: No champs yet. Be the first to play!
2024-11-08T08:52:42
null
train
42,016,185
aynyc
2024-11-01T12:16:06
Ask HN: What are your life hacks?
What hacks do you use to make your life easier? My recent cooking hack is an air fryer. I can air fry tasty veggies like broccoli, cauliflower, etc. in 10-15 minutes. My next car maintenance hack is to put in a Fumoto oil drain valve.
null
3
9
[ 42016296, 42016249, 42032073, 42016454, 42016246 ]
null
null
null
null
null
null
null
null
null
train
42,016,206
dkobia
2024-11-01T12:19:35
These Record-Breaking New Solar Panels Produce 60 Percent More Electricity
null
https://theconversation.com/new-solar-cells-break-efficiency-record-they-could-eventually-supercharge-how-we-get-energy-from-the-sun-239417
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,212
thunderbong
2024-11-01T12:20:16
null
null
null
1
null
[ 42016243 ]
null
true
null
null
null
null
null
null
null
train
42,016,221
rbanffy
2024-11-01T12:21:31
To understand physics, we need to tell – and hear – stories Essays
null
https://aeon.co/essays/to-understand-physics-we-need-to-tell-and-hear-stories
2
0
null
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
To understand physics, we need to tell – and hear – stories | Aeon Essays
2024-11-01
Jamie Zvirzdin
C P Snow’s lecture ‘The Two Cultures’ (1959) argued that the perceived divide between scientists and literary scholars is narrower than commonly believed. They both fundamentally seek to understand and express the relationships that structure reality – whether human relationships in literature, or physical relationships in science. In 1961, on the heels of that lecture, a children’s book came out – The Phantom Tollbooth by Norton Juster, a funny, punny allegorical fantasy that made the same argument but in a way that captivated readers well into the 1990s, when I first encountered this story: Milo, a boy already besieged by adult-like ennui and existential despair, takes on the quest to bring back the princesses Rhyme and Reason, reuniting them with their two quarrelsome brothers: King Azaz the Unabridged, Ruler of Dictionopolis, and the Mathemagician, Ruler of Digitopolis. King Azaz claims that words are superior to numbers; the Mathemagician insists the reverse. In the end, the brothers reconcile and rebuild the City of Wisdom with the help of Rhyme and Reason, and Milo returns to his own world with renewed curiosity for words and numbers. By age 13, I’d already been convinced of the value of interdisciplinarity. Eventually, I would learn that stories are not just a way of communicating science; they are intrinsic to science, actually part of doing science. My own story of merging these Two Cultures – for me, literary writing and particle physics – was complicated by a Third Culture, religion. I grew up in Utah, in an era when Mormon women could have physics careers, technically, but following this path was difficult, lonely, and considered a threat to the traditional family model. We were encouraged to pursue education, not to prepare for competitive careers but for traditional roles as wives and mothers. This worldview, where a woman’s education is merely a safeguard if her husband can’t work, exemplifies what George W Bush’s speechwriter Michael Gerson called ‘the soft bigotry of low expectations’. It is a mindset that stifles ambition and curiosity. In fact, in my world, ambition in a woman signified pride, selfishness, sin. Yet I loved my advanced high-school mathematics and physics classes. With friends, I won team competitions in physics and computer programming. As a teenager, I even interned for three summers with the Cosmic Ray Research Group at the University of Utah – the High Resolution Fly’s Eye collaboration that detected the Oh-My-God particle in 1991. This rare ultrahigh-energy cosmic ray – probably a proton – was an atomic shard travelling close to the speed of light, bombarding our detector with an absurd amount of energy. This event defied physics models and opened new questions about the limits of energy in the Universe, presenting a cosmic mystery story I wanted to pursue. Despite my interest in cosmic rays, the Third Culture reigned supreme. The pressure to conform was invisible but visceral: during my first semester at Utah’s Brigham Young University (BYU) in 2002, led not by reason or rhyme but by a fear of angering God and my Church, I walked out of the introductory physics class – the only woman in attendance – and changed my major from astrophysics to English. Burying myself in stories and syntax, I felt sad about the physics but decided to make the most of my education before I married. 
BYU’s editing and linguistics courses were truly superb, and I learned to find patterns in natural language, and improve those patterns to benefit readers and increase the quality of communication. Editing, I thought, was something I could do from home with a family. Maybe I’d even dare to be a science editor. Fast-forward 10 years, and that’s exactly what I was doing, while my toddler slept. I loved reading upper-level STEM textbooks as a freelance editor for Taylor & Francis; it was as physics-adjacent as I could manage. I could search the pattern of writing for errors while absorbing the patterns of mathematics and physics, even if I didn’t understand it perfectly. But I wanted to, though the desire still felt dangerous. I started writing fiction and essays, and my frustrations seethed onto the page. As soon as my son woke up, however, I would focus on him. Like me, he had a natural affinity for both letters and numbers, and we spent hours laughing and learning together. His intense curiosity reignited my own. In October 2012, still wrestling with deeply ingrained but self-limiting patterns of thought, I interviewed the psychologist LaNae Valentine, who directed the BYU Women’s Services and Resources Center. She told me that the counsellors for college women were explicitly instructed to use the word ‘education’ instead of ‘career’ – an omission reflected in the name of the centre itself. It grated on her, she said, but she complied. The explicit omission was a revelation to me. Second-wave feminism had come and gone, but its reverberations were reaching me for the first time. My husband read Simone de Beauvoir’s The Second Sex (1949), liked reading it, and handed it to me, which started a tsunami of good, hard questions. What was I good at, drawn to, excited by? Was it too late to develop previously abandoned skills? Confronting the self-limiting story from my Third Culture led to a breakthrough: like Andrew, I could take myself and my career seriously, and still be a great spouse and parent. Over the next 10 years, I began to level up in writing, then science writing, then physics. I contended with a Fourth Culture: life as the spouse of a US Foreign Service Officer. Moving from Washington, DC to the Marshall Islands, then Montreal, Virginia and Nicaragua, I had to actively resist the feelings of loss that come for those supporting a spouse’s job abroad. Fortunately for me, Andrew supported my personal and professional ambitions in return, so I could thrive alongside his career even as we moved country every two or three years. I started publishing science essays and teaching science writing at Johns Hopkins University in Maryland, both of which could be done remotely. In April 2018, political violence erupted in Nicaragua, and embassy families were sent back to safety in the US. Max and I evacuated to Utah while Andrew remained in Managua as essential personnel. Making the sweetest lemonade with the bitterest of lemons, I returned to work for the Cosmic Ray Research Group, now known as the Telescope Array Project. In the midst of disaster, I came full circle, back to the beginning of my story.

From left: John Matthews of the Telescope Array, the author Jamie Zvirzdin and her former supervisor, Stan Thomas, at a café at the University of Utah
I couldn’t influence a dictator in Nicaragua, but I could traipse out to the Utah desert, fix detectors, and operate telescopes to help solve the mystery of ultrahigh-energy cosmic rays. Reunited with Andrew in October 2018 following his time in Nicaragua, I picked up my work for the Telescope Array Project remotely from Maryland, writing programs, analysing data and even operating telescopes during night shifts from my work computer. I am now more than halfway through a Master’s in applied physics from Johns Hopkins, a remote programme I can pursue from our current post in Germany. My unconventional path to physics reveals an important insight for those who may feel excluded from the field or intimidated by its complexities: at its core, physics is fundamentally a word problem. A story problem. Personal stories, history stories, thought experiments, formal proofs, metaphors, cautionary tales: surround yourself with the various stories embedded in physics, and you’ll find firm footing wherever you tread in this field. Some of the best physicists and physics teachers are also great storytellers: they tell wild tales of things that happened to them, both real and perhaps slightly embellished for comedic effect. One such tale comes from my friend Jihee Kim, now a postdoc at Brookhaven National Laboratory in New York. As a new PhD student with the Telescope Array collaboration, she was asked to take a picture of one of our fluorescence detectors. Housed in dark sheds, these detectors use large mirrors to capture faint ultraviolet light produced by cosmic-ray showers in the atmosphere during moonless nights. Not realising the potential danger, Kim opened the shed doors a little to let in more afternoon light for the photo. Almost immediately, she smelled something burning – indirect light from the Sun had reflected off the mirror and was now focused, like a magnifying glass, onto a nearby cable. To make sure no one else made the same mistake, our boss, John Matthews, put up DANGER signs in black, red and white, warning students never to let sunlight touch the telescope mirrors. He added a picture of the melting face from the film Raiders of the Lost Ark – just in case anyone needed an extra reminder. We need to hear stories of people who surmount difficulties large and small, who push past ennui and cynicism and embarrassment and discouragement, who act with honesty and courage, who humbly ask for and receive help, to advance the frontline of knowledge. I hope my story will spur more women and minorities to take the best of the cultures they belong to and give themselves permission to enter academic gates they thought were closed to them. Work hard and work smart, and record your stories for others.

Jihee Kim and Jamie Zvirzdin with a Telescope Array fluorescence detector near Delta, Utah, shed doors safely closed. Photo supplied by the author

Beyond personal anecdotes, the history-based stories we frequently tell in physics model the scientific method itself. Consider the Austrian physicist Victor Hess who, from 1911 to 1912, conducted a series of risky hydrogen balloon flights, the most famous of which reached an altitude of 5,350 metres – about as high as Mount Everest’s Base Camp – to measure radiation intensity in the atmosphere. As the atmosphere grew thinner and thinner, Hess, with his two-man crew, stared through the eyepieces of two electroscopes.
He carefully counted how frequently the electroscope’s hanging fibres lifted, which meant they were detecting radiation from charged particles (ions) in the atmosphere. He found that, at the highest altitude, the atmosphere had 22 to 24 more ions than at ground level, which meant a significant increase in radiation intensity. Hess’s daring – he also did balloon flights at night, to rule out effects from the Sun – led to the discovery of cosmic rays, proving that Earth is constantly bombarded by these high-energy particles from space. For his efforts, he received the Nobel Prize in Physics in 1936.

Victor Hess back from his balloon flight on 7 August 1912. Image courtesy of the American Physical Society

This story introduces us to cosmic rays, yes, but also follows the classic progression of a short story: on 7 August 1912, at 6:12am, from Aussig, now the Czech city of Ústí nad Labem (setting), Hess and his crew (characters), curious about this mysterious radiation (exposition) decided to follow a hunch (inciting incident) and go up in a balloon to gather data (rising action, literally), making a groundbreaking discovery (climax), landing safely at 12:15pm in Bad Saarow-Pieskow, Germany, and arriving at important conclusions that were confirmed and rewarded (denouement). The story’s arc is echoed in the scientific method itself. We start by identifying a problem that needs explanation or further investigation, and we gather research and materials. From a question, we propose a testable hypothesis. We design and conduct an experiment to test the hypothesis, meticulously collecting data and controlling variables as carefully as we can. We analyse and interpret the data to see if it supports or refutes our hypothesis, and from this we draw a conclusion and report our results. It is a satisfying pattern to follow, this roadmap. It is a critical one. The power of science stories like this lies in their concrete details, an insight that not only helps us be more interesting teachers of physics but also better communicators when reporting our findings. Hess in a balloon, the high altitude, the sensation of flight, the cold metal of the electroscopes – these finite, sensory elements help anchor concepts like cosmic-ray radiation in our minds. As I learned when I did my MFA in writing and literature at Bennington College, using vivid imagery and sensory details – anything you can see, touch, taste, smell, hear – makes new, complex information easier to absorb and remember. As you follow the scientific method, keeping track of these details makes it easier to recount what happened and what you did. Some stories in physics are straight-up science fiction: aka, thought experiments. These ground abstract concepts in fictional characters, scenarios and sensory details. Take the Twin Paradox, as told by Amber Stuver in an excellent TED-Ed video. Stuver’s Twin Paradox explanation is the best I’ve yet heard, in contrast to fairly confusing ones out there. Some people don’t know how to tell a good story, perhaps through no fault of their own. It’s worth taking the time to learn how. As with Hess and his balloon, the Twin Paradox has characters, setting, actions, pacing, concrete details, the works.
Once we can properly see the outline of the story, either directly or by imagining it, we can attach formulae like the Lorentz factor (which shows how time slows down for objects moving near the speed of light) and other mathematical details. I now see mathematics equations as sentences in their own right, adding concrete, sensory details that flesh out these stories and even providing fundamental plot points that advance the story. The characters in thought experiments have taken on lives of their own as the culture of physics has evolved through time: the original Twin Paradox thought experiment came in 1905 from Albert Einstein in the form of some pretty basic clocks. To explain special relativity in his original paper, Einstein wrote about two synchronised clocks, one of which moved from point A to point B along a line. The moving clock story then evolved, as stories do: in 1911, Einstein himself reimagined the travelling clock in terms of ‘a living organism in a box’. The physicist Robert Resnick then anthropomorphised the story to a travelling twin returning to his brother. A similar evolution happened to my favourite thought experiment, Maxwell’s demon. In 1867, James Clerk Maxwell pictured two compartments linked by an intelligent valve that could sort fast particles from slow particles, but Lord Kelvin embellished the story to include a demon with hands and feet, much to the delight of bored physics students everywhere. The demon selectively allows faster (hotter) molecules to pass one way and slower (cooler) molecules to pass the other, seemingly creating a temperature difference and violating the second law of thermodynamics, which says that the Universe always tends toward chaos. However, the demon’s work requires energy to gather information and sort the molecules. This energy expenditure ensures that the overall entropy of the system still increases, preserving the second law of thermodynamics. Character archetypes like Alice and Bob make frequent appearances in quantum cryptography, a way of securing communication by applying principles of quantum mechanics to encrypt data. Alice and Bob first appeared in the paper ‘A Method for Obtaining Digital Signatures and Public-key Cryptosystems’ (1978) by the computer scientist Ronald L Rivest et al, where Alice and Bob share a secret encryption key to secure against an eavesdropper (often called Eve). It is fun to have Alice and Bob pop up in different problems. These characters have the additional value of finding their way into popular culture, enticing new people to come to imagine these strange science scenarios. In his book Knowledge and Error: Sketches on the Psychology of Enquiry (1905), Ernst Mach argued that these imaginary, proxy experiments were ‘a necessary precondition for physical experiment’. Imagining these stories could ‘enliven enquiry, tell us about new properties and promote insight into their connections.’ Such stories – complete with characters, setting, a story arc and sensory details – are tools we use to think through a specific problem deliberately and systematically. In physics courses, we’re often expected to write formal proofs in our weekly problem sets, showing how one equation evolves into another. We’re shown the beginning and the end of the story, like a flashforward in time, and asked to fill in the plot points that lead to the conclusion, ending as dramatically as ‘The End’ with the initials QED (in Latin, quod erat demonstrandum – what was to be demonstrated). 
I really like proofs – I find them satisfying. There is something lovely in the way they sway back and forth between words and equations. A proof, like a narrative, is a carefully crafted sequence of ideas, leading the reader from assumptions and definitions to a logical conclusion. I see physics quantities – Force, Entropy, Volume, Current Density, Energy – as fully fledged characters, each possessing dimensionality (literally: dimensions of mass, length, time, charge – more formally, current – temperature, number of moles, and luminous intensity), affecting how they perform on the page. Their nicknames are their symbols in an equation: F for Force, S for Entropy, V for Volume, J for Current Density, and so on. As in a story with many characters, sometimes it takes a while to learn them all, particularly since everyone has preferred pet names. But the naming is important. My friend and fellow literary aficionado Pierre Sokolsky, dean emeritus of the University of Utah College of Science, told me: ‘Once you give something a name, you are using language, with everything that implies. The concept “Force”, once stated, has all the power of language to bring up images, similarities, even stories.’ Through the process of naming, physics quantities thus become characters in a grand story of not just the proof but the Universe. Each quantity shapes and is shaped by the natural laws it obeys. In a successful story, all main characters must evolve in some way; they must be subjected to processes that reveal their fundamental nature as the proof unfolds. The same is true with physics quantities. Since more than one quantity is usually involved, the relationships between multiple characters deepen and become more complex; their combinations and machinations produce new versions of themselves. Some textbook writers and teachers drive me bananas when they treat their physics characters on the page like a shell in the shell game: they shuffle quantities around as fast as possible, switching characters and perspectives without care. Certain physics quantities are renamed willy-nilly with the most squiggly Greek characters possible, which is as jarring as renaming the main character in your story without bothering to signal the change to the reader or explain why. Instead of illuminating the deeper connections between physical quantities, poor physics communicators obscure rather than explain what they’re doing, robbing the process of the intellectual and narrative clarity that makes physics so compelling. If you don’t explain what is happening, even briefly, or if you skip too far forward in your proof, your reader will quickly grow frustrated and lose interest in your narrative. If your reader is your teacher, you’ll lose points. If your reader is a grant-giver, you won’t get your grant. The warp and weft of proofs, weaving words with numbers, sentences with equations, became familiar to me while editing formal proofs for Taylor & Francis. It was my job to ensure that the equations were properly punctuated when part of a sentence. Beyond keeping track of your characters, you need clarity, precision, structure and progression. These are all skills learned when studying language arts. Ultimately, both story and proof lead the reader on a journey towards understanding, closure and insight about the Universe. Physicists love metaphor, even if they claim otherwise.
The Italian rhetorician Giambattista Vico in the 1700s called the metaphor ‘a fable in brief’. Metaphors and their relatives – comparisons, similes, analogies – are far more important in physics than you might think. An astute metaphor – a mini-story – can be the beginning of understanding a concept in physics, even the beginning of a new field of enquiry, as Michael Faraday’s analogies ‘current’ and ‘lines of force’ did for electromagnetics. Beyond sheer repetition of stories and cultural exposure to mathematics and physics concepts – which not everyone has the privilege of receiving, particularly if you had a heavy-handed Third Culture – metaphors and similes are the primary way humans learn. We connect something we do know to something we don’t. Here’s an example: a Fourier analysis is like turning a symphony into its individual musical notes. Just as we could break down a complex orchestral performance into distinct instruments, Fourier analysis allows us to decompose a complicated signal or waveform into simpler wave shapes. Here’s another example: working with physicists whose egos are bigger than supermassive black holes is like poking yourself in the eye with a needle, over and over. I am guilty of loving metaphors. I’m not sorry. To me, they breathe life, light and colour back to that which was deadly boring. When I become bored, I stop paying attention, so I try to fight this inclination by at least amusing myself with metaphors. In my book Subatomic Writing (2023), where I compare two traditionally dry subjects (grammar and particle physics), I liken particles of language to particles of matter and, through six lessons, build from word to paragraph as we build from a quark to a chain of carbon atoms – a pencil’s graphite. The ability to create such mini-stories as I learn is, for me, part of why I’ve been able to level up quickly in physics. In cosmic ray science, metaphors play a key role, too. For example, the Oh-My-God particle had an energy of 3.2 × 10²⁰ electronvolts. To explain this quantity to those unfamiliar with units of energy like electronvolts and joules, we used analogies: the OMG particle had the same kinetic energy as a bowling ball dropped from shoulder height, or a baseball thrown at 28 metres per second (63 miles per hour). We also describe ultrahigh-energy cosmic rays as ‘tiny bullets’ striking Earth’s atmosphere with incredible force. These analogies not only simplify complex phenomena but also help convey the scale and impact of these particles in a way that resonates with both scientists and the public. These kinds of analogies light up our brains. According to the article ‘The Neural Career of Sensory-motor Metaphors’ (2011) by the cognitive neuroscientist Rutvik H Desai et al, metaphors engage the neural networks of the brain that deal with sensory processing, motor planning, abstract thinking, emotion and memory. They do excellent things to keep us awake and engaged, these mini-stories. In other words, metaphors bridge abstract concepts and sensory experiences, allowing our brains to process complex ideas more naturally. By connecting the unfamiliar with the familiar, metaphors make it easier to internalise and recall new information, which is why they are such powerful tools in teaching and learning, especially in subjects like physics. But metaphors can also skew our thinking about a phenomenon.
Beau Biller is a forensic mechanical engineer and an assistant instructor for the Johns Hopkins applied physics programme. He sees many students wrestle with difficult physics concepts. In the early stages of studying Einstein’s theory of general relativity, teachers often help students ‘see’ the curvature of space by showing them a rubber sheet with a bowling ball in the middle. Biller told me:

It is very difficult to make analogies for the geometry in which we live. As far as we know, we’re not on a four-dimensional rubber sheet embedded in a higher dimension that we can ‘look in’ upon … Much like learning a new language, some of the concepts encountered in modern physics are simply … hard. No shortcuts allowed.

All the same, metaphors can approximate a difficult concept. They are rough models that can be modified the more we learn. Maxwell, one of my favourite physicists, used billiard balls as a starting point to explain the interaction of molecules in his book Theory of Heat (1871), but he also explained that ‘two hard spherical balls is not an accurate representation’. He went on to explain why, and relabelled this interaction as an ‘Encounter’, modifying the mental metaphor.

At the beginning of each semester of my Master’s in applied physics, I think of a reigning metaphor I can use to learn the upcoming subject matter. During quantum mechanics last semester, since I often play the computer game ARK: Survival Evolved with my brother on Sunday afternoons, I started with the axiom ‘A particle is like a triceratops named Alice.’ As silly as it sounds, it was enjoyable and memorable. It was a fable that gave me a rough outline of the story of quantum particles and their interactions. I studied hard, wrote down all my dinosaur-related metaphors in detail – particularly the mathematical details – in an Overleaf document just for my own sake, and experienced the pleasurable shock of earning an A+ in quantum mechanics.

Fun and helpful mnemonic aids aside, physics is most thrilling when we can pull away the scaffolding of metaphors and see mathematics itself as the storytelling framework. ‘We have to read the story behind the equation,’ as Biller said, and I fully agree. The deeper we go in physics, the more the language of mathematics empowers us to precisely narrate the epic tale of the Universe.

Some stories in physics are downright Kafkaesque – like the Monkey and Hunter thought experiment, which teaches projectile motion at great cost to the monkey; or Schrödinger’s Cat, which is forever being murdered or not murdered in a box containing a vial of poison. Just as Schrödinger’s Cat demonstrates quantum paradoxes, the man behind this thought experiment embodies the uncomfortable paradox of a brilliant mind who nevertheless chose to engage in predatory behaviour towards young girls, acts documented in his own diary. Erwin Schrödinger groomed a 14-year-old girl he was tutoring and impregnated her when she was 17. The abortion made her sterile.

Richard Feynman’s FBI files, released in 2012, show that the physicist’s private behaviour did not match his playful, charming public persona. One page of the report says:

His ex-wife reportedly testified that on several occasions when she unwittingly disturbed either his calculus or his drums he flew into a violent rage, during which time he choked her, threw pieces of bric-a-brac about and smashed the furniture.

Feynman also told boastful stories of frequenting strip clubs and having manipulative approaches to women in his earlier years.
In his diary, Einstein recorded wildly racist things about people from China and other countries when he visited the Far East and the Middle East in 1922-23. He cheated on his first wife. And his second wife. The list of bad behaviour by intellectual elitists goes on. This grim reality reminds us that the emotional and physical abuse perpetrated by physics ‘geniuses’ has been catastrophically downplayed – a trend we must confront and reform. The problem of hero worship in physics culture becomes especially insidious when we refuse to challenge unethical behaviour in revered figures, allowing misconduct to persist unchecked. It is crucial to confront these darker narratives, not to diminish the scientific contributions of these individuals but to ensure that harmful legacies don’t continue to thrive in our institutions.

Other sombre stories serve to promote empathy rather than tyranny. Sokolsky told me his favourite physics tool is a hammer, ‘to remind me’, he says, ‘to allow students to finish their PhD theses.’ He alludes to the 1978 incident in which a Stanford mathematics student, Theodore Streleski, killed his faculty advisor with a hammer after failing to complete his dissertation for 19 years.

The narratives we construct about our abilities and challenges in life are as vital as the equations we solve. ‘Being a stubborn/persistent scholar who loved the process of understanding how everything works has been key,’ says Rasha Abbasi, an astroparticle physicist at Loyola University Chicago. I’ve known Abbasi since she was a PhD student – and I, a 16-year-old intern – studying cosmic rays at the University of Utah. Today, she studies gamma-ray flashes from lightning with our cosmic ray detectors in the Utah desert. We’ve kept in contact through the years, and she inspires me with her tenacity, good nature, humour and intellect.

When I ask Abbasi if she has any thoughts on the role of language in physics, she says: ‘I discovered later on in my career that language is a big part of being a scientist. Training in writing and communication needs to be emphasised more in our field.’ She’s right. Physics students can forget that writing is a critical part of being a physicist: there are white papers, reports, journal articles, National Science Foundation grants, posters, presentations. Every type of writing involves some connection to story, even if the character of your story is a variable in an equation.

In the end, the stories we tell shape our trajectories in life as profoundly as the cultural forces that mould us, serving as both barriers and bridges to our greatest ambitions. I have found a healthy balance among the cultures I subscribe to, and ambition is no longer a dirty word. I want to work with my friends to uncover the origins of ultrahigh-energy cosmic rays, a longstanding mystery. I want to change the way we teach physics. I want to win a Nobel Prize. My story of finding physics again is over, but the story of what I’ll do with it has just begun, and I’m excited to see what happens next.

In political science, supporters of ‘horseshoe theory’ believe that far-Left views and far-Right views are more similar to each other than they are to more moderate, centrist views. Perhaps there exists a parallel between literary writing and physics, an academic horseshoe theory.
You will find me happily oscillating back and forth in the cheerful space between Dictionopolis and Digitopolis, building bridges and repairing fences. I invite you to step out of your comfort zone, continue to confront and conquer challenging material, and join me in building the City of Wisdom, one story at a time.

This Essay was made possible through the support of a grant to Aeon Media from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Foundation. Funders to Aeon Media are not involved in editorial decision-making.
2024-11-08T20:48:27
null
train
42,016,225
intunderflow
2024-11-01T12:21:42
Crowds gather in Dublin for fake Halloween parade posted on social media
null
https://www.thejournal.ie/dublin-fake-halloween-parade-6529584-Oct2024/
26
13
[ 42017219, 42016560, 42016810, 42039111, 42018302, 42017625, 42029174, 42017441, 42017078 ]
null
null
no_error
Hundreds of prospective parade go-ers in Dublin for fake Halloween event
2024-10-31T20:52:08Z
Muiris O'Cearbhaill
Gardaí have asked members of the public to disperse.

GARDAÍ HAVE ASKED hundreds of people who have gathered on O’Connell Street in Dublin city centre to disperse after a fake ‘Halloween parade’ was advertised online.

The fake event, posted on social media and circulated to thousands online, promised to start on the north side of the city and make its way down to Temple Bar. However, no such parade was planned or due to take place.

Hundreds of people turned up for the non-existent event, which it is now suggested was a large-scale, elaborate prank. Gardaí have requested that they leave the area safely.

In a post on social media this evening, a spokesperson said: “Please be advised that contrary to information being circulated online, no Halloween parade is scheduled to take place in Dublin City Centre this evening or tonight.”

Live streams and clips shared on TikTok and other social media websites this evening show O’Connell Street filled with prospective parade-goers. Videos advertising the event, posted to the video-sharing platform TikTok, have been viewed up to 80,000 times.

It appears the videos used footage of a parade in Dublin on 31 October 2023 by the theatre company Macnas as proof that the event is an annual outing. However, Macnas organised this year’s event in Co Galway.
2024-11-08T15:20:41
en
train
42,016,235
AndreyKarpov
2024-11-01T12:22:29
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,016,237
ngtlyOfficial
2024-11-01T12:22:49
null
null
null
1
null
[ 42016238 ]
null
true
null
null
null
null
null
null
null
train
42,016,252
rbanffy
2024-11-01T12:24:16
NASA panel calls on SpaceX to "maintain focus" on Dragon safety after anomalies
null
https://spacenews.com/nasa-panel-calls-on-spacex-to-maintain-focus-on-dragon-safety-after-recent-anomalies/
3
0
[ 42016693 ]
null
null
null
null
null
null
null
null
null
train
42,016,254
croes
2024-11-01T12:24:22
After Era of Bloat, Veteran Video-Game Developers Are Going Smaller
null
https://www.bloomberg.com/news/articles/2024-10-30/after-era-of-bloat-veteran-video-game-developers-are-going-smaller
3
0
null
null
null
null
null
null
null
null
null
null
train
42,016,255
thunderbong
2024-11-01T12:24:39
Billionaires vs. Democracy
null
https://inequality.org/research/billionaires-vs-democracy/
8
0
[ 42016691 ]
null
null
null
null
null
null
null
null
null
train
42,016,260
itsappleseason
2024-11-01T12:25:02
Kaleidosync Music Visualizer • Product Announcement
null
https://kaleidosync.substack.com/p/kaleidosync-new-and-shiny
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,263
rbanffy
2024-11-01T12:25:37
'Cheat Engines' and Copyright in Video Games in the EU
null
https://cacm.acm.org/blogcacm/cheat-engines-and-copyright-in-video-games-in-the-eu/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,265
thaisi
2024-11-01T12:26:12
Show HN: Block Sort, a mobile/PWA puzzle game without ads
I like small puzzle games to play on my mobile (because you can put them away easily as well). But I got really annoyed that a lot of them force-feed you advertisements.

To counter this I made my own puzzle game, as a progressive web app. This means you can install it on your mobile or desktop as an application, and play offline.

After the game is offline-ready, no requests should be outgoing except checking for updates of the game. So there is no tracking/reporting going on. This also means I rely on old-fashioned email to get feedback!

The game is built in React + TypeScript + Vite, and is open-source at: https://github.com/matthijsgroen/block-sort

Challenges:

- I wanted to make the game using open web standards such as HTML + CSS. The game actually features one image; the rest is done in pure CSS (the cubes, buffers and placement stacks);

- All animation is done through CSS animations;

- All levels are randomly generated, and then proven playable by a solver before a player gets the level on screen. To remove loading times for the high-difficulty levels, a process was made to generate these levels offline, and the game only contains the random seeds to reproduce them (and they are still solved by the game first before being offered) – a sketch of this generate-and-verify idea follows below;

- The entire game is statically hosted, so there is no backend involved. This proved challenging for data transfer capabilities. The game now generates a QR code image containing all encrypted/compressed game data, which can be loaded into another instance of the game.
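For the curious, here is a minimal sketch of that generate-and-verify idea in TypeScript (the project's own language). All names and the level shape here are hypothetical illustrations rather than Block Sort's actual internals, and isSolvable is only a placeholder where the real game runs a proper solver:

    type Level = number[][]; // stacks of colour ids, e.g. [[1, 2, 1], [2, 1, 2], []]

    // Deterministic PRNG (mulberry32), so a stored seed always reproduces
    // exactly the same level – no level data has to be shipped.
    function mulberry32(seed: number): () => number {
      let a = seed >>> 0;
      return () => {
        a = (a + 0x6d2b79f5) >>> 0;
        let t = a;
        t = Math.imul(t ^ (t >>> 15), t | 1);
        t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
      };
    }

    // Deal `colors` colours, `stackSize` blocks each, into shuffled stacks,
    // plus a couple of empty spare stacks to manoeuvre with.
    function generateLevel(seed: number, colors = 4, stackSize = 4, spare = 2): Level {
      const rand = mulberry32(seed);
      const blocks: number[] = [];
      for (let c = 0; c < colors; c++) {
        for (let i = 0; i < stackSize; i++) blocks.push(c);
      }
      for (let i = blocks.length - 1; i > 0; i--) { // Fisher–Yates shuffle
        const j = Math.floor(rand() * (i + 1));
        [blocks[i], blocks[j]] = [blocks[j], blocks[i]];
      }
      const level: Level = [];
      for (let s = 0; s < colors; s++) {
        level.push(blocks.slice(s * stackSize, (s + 1) * stackSize));
      }
      for (let s = 0; s < spare; s++) level.push([]);
      return level;
    }

    // Placeholder: the real game runs a search over legal moves here.
    function isSolvable(level: Level): boolean {
      return true;
    }

    // Walk the seed space and keep only seeds whose level the solver beats.
    function playableSeed(startSeed: number): number {
      for (let seed = startSeed; ; seed++) {
        if (isSolvable(generateLevel(seed))) return seed;
      }
    }

Because the PRNG is deterministic, shipping just the seeds that survive the solver check is enough to reproduce the exact same levels on any device.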
https://matthijsgroen.github.io/block-sort/
20
13
[ 42016586, 42020149, 42047148, 42023245, 42023810, 42020127, 42027412, 42022322 ]
null
null
no_article
null
null
null
null
2024-11-07T23:37:56
null
train
42,016,272
abby126
2024-11-01T12:27:51
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,016,273
unripe_syntax
2024-11-01T12:27:52
Shuhari – Alchemists
null
https://alchemists.io/articles/shuhari
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,282
amalinovic
2024-11-01T12:28:37
Understanding Presenter Objects vs. Direct Rendering in Ruby on Rails
null
https://blog.bestwebventures.in/presenter-objects-vs-direct-rendering
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,293
null
2024-11-01T12:29:59
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,016,298
tosh
2024-11-01T12:30:30
Apple Plans to Drop Broadcom Chip by 2025 to Use In-House Design (2023)
null
https://www.bloomberg.com/news/articles/2023-01-09/apple-plans-to-drop-broadcom-chip-by-2025-to-use-in-house-design
4
0
[ 42016675 ]
null
null
missing_parsing
Bloomberg - Are you a robot?
null
null
Why did this happen? Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy. Need Help? For inquiries related to this message please contact our support team and provide the reference ID below. Block reference ID:
2024-11-08T15:09:51
null
train
42,016,304
null
2024-11-01T12:31:38
null
null
null
null
null
[ 42016305 ]
[ "true" ]
true
null
null
null
null
null
null
null
train
42,016,310
JamesSwinton
2024-11-01T12:32:04
Optimising HTML5 Canvas Rendering Performance
null
https://blog.ag-grid.com/optimising-html5-canvas-rendering-best-practices-and-techniques/
5
0
[ 42016685 ]
null
null
null
null
null
null
null
null
null
train
42,016,334
geox
2024-11-01T12:34:10
Researchers combine chloroplasts from algae with hamster cells
null
https://www.u-tokyo.ac.jp/focus/en/press/z0508_00375.html
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,346
popshort
2024-11-01T12:35:36
Show HN: Popshort.Ai – AI-driven short drama platform
null
https://popshortml.com/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,347
isaacfrond
2024-11-01T12:35:53
I just tested ChatGPT Search vs. Google – here's the results
null
https://www.tomsguide.com/ai/i-just-tested-google-vs-chatgpt-search-and-im-shocked-by-the-results
31
30
[ 42016680, 42016692, 42016587, 42016423, 42016704, 42016637, 42016742, 42016628, 42016725, 42017292, 42026667, 42016622 ]
null
null
null
null
null
null
null
null
null
train
42,016,349
thunderbong
2024-11-01T12:36:04
Go sync.Once is Simple... Does It Really?
null
https://victoriametrics.com/blog/go-sync-once/index.html
3
0
null
null
null
null
null
null
null
null
null
null
train
42,016,351
abby126
2024-11-01T12:36:24
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,016,372
sharpshadow
2024-11-01T12:39:57
Want Windows 10 Security? That Will Be $30, Microsoft Says
null
https://www.forbes.com/sites/daveywinder/2024/11/01/want-windows-10-security-that-will-be-30-microsoft-says/
13
5
[ 42016673, 42016686, 42016611 ]
null
null
null
null
null
null
null
null
null
train
42,016,373
atlasunshrugged
2024-11-01T12:40:00
Are open-source AI models worth the risk?
null
https://www.emergingtechbrew.com/stories/2024/10/31/open-source-ai-models-risk-rishi-bommasani-stanford
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,387
fanf2
2024-11-01T12:42:02
The Wendelstein 7-X fusion stellarator proves its efficiency (2021)
null
https://www.ipp.mpg.de/5125328/05_21
3
0
[ 42016619, 42016618 ]
null
null
null
null
null
null
null
null
null
train
42,016,402
Suneel478
2024-11-01T12:44:10
GitHub Bot to Review PRs
null
https://twitter.com/suneel_matham/status/1852329786854801495
2
2
[ 42016698, 42016403, 42016613 ]
null
null
null
null
null
null
null
null
null
train
42,016,410
jger15
2024-11-01T12:44:41
I Hate You, Please Stan Me
null
https://default.blog/p/i-hate-you-please-stan-me
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,411
LinuxBender
2024-11-01T12:44:42
UK councils bat away DDoS barrage from pro-Russia keyboard warriors
null
https://www.theregister.com/2024/11/01/uk_councils_russia_ddos/
4
0
null
null
null
null
null
null
null
null
null
null
train
42,016,412
impish9208
2024-11-01T12:44:55
Payroll employment essentially unchanged in October; jobless rate holds at 4.1%
null
https://www.bls.gov/news.release/archives/empsit_11012024.htm
2
1
[ 42016666 ]
null
null
null
null
null
null
null
null
null
train
42,016,421
LinuxBender
2024-11-01T12:46:37
xAI picked Ethernet over InfiniBand for its H100 Colossus training cluster
null
https://www.theregister.com/2024/10/29/xai_colossus_networking/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,429
LinuxBender
2024-11-01T12:47:47
US, Israel Describe Iranian Hackers' Targeting of Olympics, Surveillance Cameras
null
https://www.securityweek.com/us-israel-describe-iranian-hackers-targeting-of-olympics-surveillance-cameras/
3
0
[ 42016575 ]
null
null
no_article
null
null
null
null
2024-11-08T03:32:24
null
train
42,016,447
ashleypeacock
2024-11-01T12:50:16
Show HN: Serverless Apps on Cloudflare – A book on the Cloudflare dev platform
Hello HN!

I am Ashley, author of the book Serverless Apps on Cloudflare, and I'm excited to share it with you and hopefully spread awareness of the powerful and developer-focused platform that Cloudflare has built, which I believe is currently flying under the radar!

For years, I've been building applications on Cloudflare – from websites, to APIs, to Discord bots and everything in between. The platform has grown hugely in the past few years, to the point that it has all the essential building blocks for modern applications. It's also built with engineers in mind; the developer experience (imo) is second to none. AWS is powerful, vast and very widely used, but I don't think anyone would shout about its developer experience in most cases.

Some of the features of the platform are incredibly unique and powerful. With a distributed architecture of many different services, you have to deal with services being down, or the network being down – with Bindings, you get a zero-cost abstraction to make inter-service calls between your different services (called Workers in Cloudflare), and they are automatically injected for you at runtime – no secrets, no credentials, no configuration – it just works. The same goes for databases, caches, queues and all the other supporting services you need – they all use bindings.

Everything runs seamlessly locally too, including databases, caches and everything else. No setup required; again, it just works, and massively increases productivity and enjoyment when building applications. You don't need to worry about failover or redundancy; your application and resources are deployed and available globally, thanks to the edge network Cloudflare runs.

The pricing is great too: you only pay for what you use. For example, you pay for the CPU time you use in a Worker, not while your code waits on I/O (e.g. an API call), and there are zero egress fees.

Lastly, they have a very unique concept called Durable Objects. You can think of these like mini single-threaded servers that act as a way to coordinate multiple clients or a single entity. You write them the same way you'd write a class in your code, and Cloudflare takes care of persisting those objects and hydrating them with state – as they come with built-in state too (key-value or SQLite). They also have native WebSocket support, allowing you to add WebSockets to your application in literally a few lines of code (a minimal sketch follows at the end of this post) – they are super cool.

That's just a flavour of some of the cool offerings of the platform. If you want to learn more, it's available in eBook format via The Pragmatic Programmers (https://pragprog.com/titles/apapps/serverless-apps-on-cloudflare/), as well as available to preorder in physical format via Amazon (https://www.amazon.com/dp/B0DFNTSMHP?maas=maas_adg_968101BB2682ACE50CD652D1BFC0F75D_afap_abs&ref_=aa_maas&tag=maas). It's content complete, and the beta tag will be removed in November.

Any questions or feedback, let me know!
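To illustrate the Durable Objects point above, here is a minimal sketch of a broadcast "room" over WebSockets, written in TypeScript. It is my own example rather than one from the book; it assumes Cloudflare's WebSocket Hibernation API (state.acceptWebSocket and state.getWebSockets) and the types from @cloudflare/workers-types:

    // A minimal broadcast room as a Durable Object.
    export class Room {
      constructor(private state: DurableObjectState) {}

      // Each client connects with an HTTP Upgrade request.
      async fetch(request: Request): Promise<Response> {
        if (request.headers.get("Upgrade") !== "websocket") {
          return new Response("Expected a WebSocket", { status: 426 });
        }
        const pair = new WebSocketPair();
        const [client, server] = Object.values(pair);
        // Hand the server side to the runtime so the object can hibernate
        // between messages without dropping connections.
        this.state.acceptWebSocket(server);
        return new Response(null, { status: 101, webSocket: client });
      }

      // Called by the runtime whenever any connected socket sends a message.
      async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
        for (const peer of this.state.getWebSockets()) {
          if (peer !== ws) peer.send(message); // fan out to everyone else
        }
      }
    }

Because all clients that address the same object land in the same single-threaded instance, fan-out needs no locks and no external message broker.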
https://pragprog.com/titles/apapps/serverless-apps-on-cloudflare/
6
0
null
null
null
null
null
null
null
null
null
null
train
42,016,471
Blackiwi
2024-11-01T12:53:40
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,016,478
fork-bomber
2024-11-01T12:54:14
Alexander the Great's tunic identified in royal tomb at Vergina?
null
https://www.tandfonline.com/doi/full/10.1080/00934690.2024.2409503
318
164
[ 42016761, 42021931, 42016760, 42020602, 42016794, 42016768, 42019095, 42018094, 42023779, 42016988, 42018194, 42023431, 42026327, 42016577, 42024934, 42019270, 42017494, 42016809, 42018210, 42017717 ]
null
null
null
null
null
null
null
null
null
train
42,016,487
asdamasceno
2024-11-01T12:54:59
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,016,493
lapnect
2024-11-01T12:56:08
Are the Pico 2 (RP2350) GPIO pins broken?
null
http://www.doctormonk.com/2024/09/are-pico-2-rp2350-gpio-pins-broken.html
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,504
handfuloflight
2024-11-01T12:57:35
Contextual Document Embeddings
null
https://huggingface.co/jxm/cde-small-v1
2
0
null
null
null
no_error
jxm/cde-small-v1 · Hugging Face
null
null
Contextual Document Embeddings (CDE)

Link to code: github.com/jxmorris12/cde

Our new model that naturally integrates "context tokens" into the embedding process. As of October 1st, 2024, cde-small-v1 is the best small model (under 400M params) on the MTEB leaderboard for text embedding models, with an average score of 65.00.

👉 Try on Colab
👉 Contextual Document Embeddings (ArXiv)

How to use cde-small-v1

Our embedding model needs to be used in two stages. The first stage is to gather some dataset information by embedding a subset of the corpus using our "first-stage" model. The second stage is to actually embed queries and documents, conditioning on the corpus information from the first stage. Note that we can do the first stage part offline and only use the second-stage weights at inference time.

With Transformers

Loading the model

Our model can be loaded using transformers out-of-the-box with "trust remote code" enabled. We use the default BERT uncased tokenizer:

    import transformers

    model = transformers.AutoModel.from_pretrained("jxm/cde-small-v1", trust_remote_code=True)
    tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")

Note on prefixes

Nota bene: Like all state-of-the-art embedding models, our model was trained with task-specific prefixes. To do retrieval, you can prepend the following strings to queries & documents:

    query_prefix = "search_query: "
    document_prefix = "search_document: "

First stage

    minicorpus_size = model.config.transductive_corpus_size
    minicorpus_docs = [ ... ]  # Put some strings here that are representative of your corpus, for example by calling random.sample(corpus, k=minicorpus_size)
    assert len(minicorpus_docs) == minicorpus_size  # You must use exactly this many documents in the minicorpus. You can oversample if your corpus is smaller.
    minicorpus_docs = tokenizer(
        [document_prefix + doc for doc in minicorpus_docs],
        truncation=True,
        padding=True,
        max_length=512,
        return_tensors="pt"
    ).to(model.device)

    import torch
    from tqdm.autonotebook import tqdm

    batch_size = 32
    dataset_embeddings = []
    for i in tqdm(range(0, len(minicorpus_docs["input_ids"]), batch_size)):
        minicorpus_docs_batch = {k: v[i:i+batch_size] for k, v in minicorpus_docs.items()}
        with torch.no_grad():
            dataset_embeddings.append(
                model.first_stage_model(**minicorpus_docs_batch)
            )

    dataset_embeddings = torch.cat(dataset_embeddings)

Running the second stage

Now that we have obtained "dataset embeddings" we can embed documents and queries like normal. Remember to use the document prefix for documents:

    docs = tokenizer(
        [document_prefix + doc for doc in docs],
        truncation=True,
        padding=True,
        max_length=512,
        return_tensors="pt"
    ).to(model.device)

    with torch.no_grad():
        doc_embeddings = model.second_stage_model(
            input_ids=docs["input_ids"],
            attention_mask=docs["attention_mask"],
            dataset_embeddings=dataset_embeddings,
        )
    doc_embeddings /= doc_embeddings.norm(p=2, dim=1, keepdim=True)

and the query prefix for queries:

    queries = queries.select(range(16))["text"]
    queries = tokenizer(
        [query_prefix + query for query in queries],
        truncation=True,
        padding=True,
        max_length=512,
        return_tensors="pt"
    ).to(model.device)

    with torch.no_grad():
        query_embeddings = model.second_stage_model(
            input_ids=queries["input_ids"],
            attention_mask=queries["attention_mask"],
            dataset_embeddings=dataset_embeddings,
        )
    query_embeddings /= query_embeddings.norm(p=2, dim=1, keepdim=True)

these embeddings can be compared using dot product, since they're normalized.

What if I don't know what my corpus will be ahead of time?

If you can't obtain corpus information ahead of time, you still have to pass something as the dataset embeddings; our model will work fine in this case, but not quite as well; without corpus information, our model performance drops from 65.0 to 63.8 on MTEB. We provide some random strings that worked well for us that can be used as a substitute for corpus sampling.

With Sentence Transformers

Loading the model

Our model can be loaded using sentence-transformers out-of-the-box with "trust remote code" enabled:

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("jxm/cde-small-v1", trust_remote_code=True)

Note on prefixes

Nota bene: Like all state-of-the-art embedding models, our model was trained with task-specific prefixes. To do retrieval, you can use prompt_name="query" and prompt_name="document" in the encode method of the model when embedding queries and documents, respectively.

First stage

    minicorpus_size = model[0].config.transductive_corpus_size
    minicorpus_docs = [ ... ]  # Put some strings here that are representative of your corpus, for example by calling random.sample(corpus, k=minicorpus_size)
    assert len(minicorpus_docs) == minicorpus_size  # You must use exactly this many documents in the minicorpus. You can oversample if your corpus is smaller.

    dataset_embeddings = model.encode(
        minicorpus_docs,
        prompt_name="document",
        convert_to_tensor=True
    )

Running the second stage

Now that we have obtained "dataset embeddings" we can embed documents and queries like normal. Remember to use the document prompt for documents:

    docs = [...]
    queries = [...]

    doc_embeddings = model.encode(
        docs,
        prompt_name="document",
        dataset_embeddings=dataset_embeddings,
        convert_to_tensor=True,
    )
    query_embeddings = model.encode(
        queries,
        prompt_name="query",
        dataset_embeddings=dataset_embeddings,
        convert_to_tensor=True,
    )

these embeddings can be compared using cosine similarity via model.similarity:

    similarities = model.similarity(query_embeddings, doc_embeddings)
    topk_values, topk_indices = similarities.topk(5)

A full copy-paste ready example:

    from sentence_transformers import SentenceTransformer
    from datasets import load_dataset

    # 1. Load the Sentence Transformer model
    model = SentenceTransformer("jxm/cde-small-v1", trust_remote_code=True)
    context_docs_size = model[0].config.transductive_corpus_size  # 512

    # 2. Load the dataset: context dataset, docs, and queries
    dataset = load_dataset("sentence-transformers/natural-questions", split="train")
    dataset.shuffle(seed=42)
    # 10 queries, 512 context docs, 500 docs
    queries = dataset["query"][:10]
    docs = dataset["answer"][:2000]
    context_docs = dataset["answer"][-context_docs_size:]  # Last 512 docs

    # 3. First stage: embed the context docs
    dataset_embeddings = model.encode(
        context_docs,
        prompt_name="document",
        convert_to_tensor=True,
    )

    # 4. Second stage: embed the docs and queries
    doc_embeddings = model.encode(
        docs,
        prompt_name="document",
        dataset_embeddings=dataset_embeddings,
        convert_to_tensor=True,
    )
    query_embeddings = model.encode(
        queries,
        prompt_name="query",
        dataset_embeddings=dataset_embeddings,
        convert_to_tensor=True,
    )

    # 5. Compute the similarity between the queries and docs
    similarities = model.similarity(query_embeddings, doc_embeddings)
    topk_values, topk_indices = similarities.topk(5)
    print(topk_values)
    print(topk_indices)

    """
    tensor([[0.5495, 0.5426, 0.5423, 0.5292, 0.5286],
            [0.6357, 0.6334, 0.6177, 0.5862, 0.5794],
            [0.7648, 0.5452, 0.5000, 0.4959, 0.4881],
            [0.6802, 0.5225, 0.5178, 0.5160, 0.5075],
            [0.6947, 0.5843, 0.5619, 0.5344, 0.5298],
            [0.7742, 0.7742, 0.7742, 0.7231, 0.6224],
            [0.8853, 0.6667, 0.5829, 0.5795, 0.5769],
            [0.6911, 0.6127, 0.6003, 0.5986, 0.5936],
            [0.6796, 0.6053, 0.6000, 0.5911, 0.5884],
            [0.7624, 0.5589, 0.5428, 0.5278, 0.5275]], device='cuda:0')
    tensor([[   0,  296,  234, 1651, 1184],
            [1542,  466,  438, 1207, 1911],
            [   2, 1562,  632, 1852,  382],
            [   3,  694,  932, 1765,  662],
            [   4,   35,  747,   26,  432],
            [ 534,  175,    5, 1495,  575],
            [   6, 1802, 1875,  747,   21],
            [   7, 1913, 1936,  640,    6],
            [   8,  747,  167, 1318, 1743],
            [   9, 1583, 1145,  219,  357]], device='cuda:0')
    """
    # As you can see, almost every query_i has document_i as the most similar document.

    # 6. Print the top-k results
    for query_idx, top_doc_idx in enumerate(topk_indices[:, 0]):
        print(f"Query {query_idx}: {queries[query_idx]}")
        print(f"Top Document: {docs[top_doc_idx]}")
        print()

"""
Query 0: when did richmond last play in a preliminary final
Top Document: Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year.
Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.

Query 1: who sang what in the world's come over you
Top Document: Life's What You Make It (Talk Talk song) "Life's What You Make It" is a song by the English band Talk Talk. It was released as a single in 1986, the first from the band's album The Colour of Spring. The single was a hit in the UK, peaking at No. 16, and charted in numerous other countries, often reaching the Top 20.

Query 2: who produces the most wool in the world
Top Document: Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.

Query 3: where does alaska the last frontier take place
Top Document: Alaska: The Last Frontier Alaska: The Last Frontier is an American reality cable television series on the Discovery Channel, currently in its 7th season of broadcast. The show documents the extended Kilcher family, descendants of Swiss immigrants and Alaskan pioneers, Yule and Ruth Kilcher, at their homestead 11 miles outside of Homer.[1] By living without plumbing or modern heating, the clan chooses to subsist by farming, hunting and preparing for the long winters.[2] The Kilcher family are relatives of the singer Jewel,[1][3] who has appeared on the show.[4]

Query 4: a day to remember all i want cameos
Top Document: All I Want (A Day to Remember song) The music video for the song, which was filmed in October 2010,[4] was released on January 6, 2011.[5] It features cameos of numerous popular bands and musicians. The cameos are: Tom Denney (A Day to Remember's former guitarist), Pete Wentz, Winston McCall of Parkway Drive, The Devil Wears Prada, Bring Me the Horizon, Sam Carter of Architects, Tim Lambesis of As I Lay Dying, Silverstein, Andrew WK, August Burns Red, Seventh Star, Matt Heafy of Trivium, Vic Fuentes of Pierce the Veil, Mike Herrera of MxPx, and Set Your Goals.[5] Rock Sound called the video "quite excellent".[5]

Query 5: what does the red stripes mean on the american flag
Top Document: Flag of the United States The flag of the United States of America, often referred to as the American flag, is the national flag of the United States. It consists of thirteen equal horizontal stripes of red (top and bottom) alternating with white, with a blue rectangle in the canton (referred to specifically as the "union") bearing fifty small, white, five-pointed stars arranged in nine offset horizontal rows, where rows of six stars (top and bottom) alternate with rows of five stars. The 50 stars on the flag represent the 50 states of the United States of America, and the 13 stripes represent the thirteen British colonies that declared independence from the Kingdom of Great Britain, and became the first states in the U.S.[1] Nicknames for the flag include The Stars and Stripes,[2] Old Glory,[3] and The Star-Spangled Banner.

Query 6: where did they film diary of a wimpy kid
Top Document: Diary of a Wimpy Kid (film) Filming of Diary of a Wimpy Kid was in Vancouver and wrapped up on October 16, 2009.

Query 7: where was beasts of the southern wild filmed
Top Document: Beasts of the Southern Wild The film's fictional setting, "Isle de Charles Doucet", known to its residents as the Bathtub, was inspired by several isolated and independent fishing communities threatened by erosion, hurricanes and rising sea levels in Louisiana's Terrebonne Parish, most notably the rapidly eroding Isle de Jean Charles. It was filmed in Terrebonne Parish town Montegut.[5]

Query 8: what part of the country are you likely to find the majority of the mollisols
Top Document: Mollisol Mollisols occur in savannahs and mountain valleys (such as Central Asia, or the North American Great Plains). These environments have historically been strongly influenced by fire and abundant pedoturbation from organisms such as ants and earthworms. It was estimated that in 2003, only 14 to 26 percent of grassland ecosystems still remained in a relatively natural state (that is, they were not used for agriculture due to the fertility of the A horizon). Globally, they represent ~7% of ice-free land area. As the world's most agriculturally productive soil order, the Mollisols represent one of the more economically important soil orders.

Query 9: when did fosters home for imaginary friends start
Top Document: Foster's Home for Imaginary Friends McCracken conceived the series after adopting two dogs from an animal shelter and applying the concept to imaginary friends. The show first premiered on Cartoon Network on August 13, 2004, as a 90-minute television film. On August 20, it began its normal run of twenty-to-thirty-minute episodes on Fridays, at 7 pm. The series finished its run on May 3, 2009, with a total of six seasons and seventy-nine episodes. McCracken left Cartoon Network shortly after the series ended. Reruns have aired on Boomerang from August 11, 2012 to November 3, 2013 and again from June 1, 2014 to April 3, 2017.
"""

Colab demo

We've set up a short demo in a Colab notebook showing how you might use our model: Try our model in Colab.

Acknowledgments

Early experiments on CDE were done with support from Nomic and Hyperbolic. We're especially indebted to Nomic for open-sourcing their efficient BERT implementation and contrastive pre-training data, which proved vital in the development of CDE.

Cite us

Used our model, method, or architecture? Want to cite us? Here's the ArXiv citation information:

    @misc{morris2024contextualdocumentembeddings,
        title={Contextual Document Embeddings},
        author={John X. Morris and Alexander M. Rush},
        year={2024},
        eprint={2410.02525},
        archivePrefix={arXiv},
        primaryClass={cs.CL},
        url={https://arxiv.org/abs/2410.02525},
    }

Inference API (serverless) does not yet support model repos that contain custom code.
Evaluation results (self-reported)

MTEB AmazonCounterfactualClassification (en), test set:
    accuracy: 87.030
    ap: 56.706
    ap_weighted: 56.706
    f1: 81.932
    f1_weighted: 87.765
    main_score: 87.030

MTEB AmazonPolarityClassification (default), test set:
    accuracy: 94.664
    ap: 91.687
    ap_weighted: 91.687
    f1: 94.659

View on Papers With Code
2024-11-08T03:48:26
en
train
42,016,518
tzury
2024-11-01T12:59:42
Oasis by Decart AI
null
https://oasis.decart.ai/welcome
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,521
dhruvbhatia7
2024-11-01T13:00:27
LocalPanda – Local SEO and marketing tool
null
https://localpanda.ai
1
0
[ 42016522 ]
null
null
null
null
null
null
null
null
null
train
42,016,527
millyh
2024-11-01T13:00:56
Show HN: ScraperWiz; desktop app to scrape target sites seamlessly
null
https://scraperwiz.com
2
0
null
null
null
null
null
null
null
null
null
null
train
42,016,536
todsacerdoti
2024-11-01T13:02:18
ActivityBot – single file PHP activity pub bot server
null
https://gitlab.com/edent/activity-bot
1
0
null
null
null
null
null
null
null
null
null
null
train
42,016,542
0xdecrypt
2024-11-01T13:02:44
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train