id (int64) | by (large_string) | time (timestamp[us]) | title (large_string) | text (large_string) | url (large_string) | score (int64) | descendants (int64) | kids (large list) | deleted (large list) | dead (bool) | scraping_error (large_string) | scraped_title (large_string) | scraped_published_at (large_string) | scraped_byline (large_string) | scraped_body (large_string) | scraped_at (timestamp[us]) | scraped_language (large_string) | split (large_string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,065,124 | udev4096 | 2024-11-06T16:51:32 | Influence Agents | null | https://geohot.github.io//blog/jekyll/update/2024/11/04/influence-agents.html | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,129 | stemic | 2024-11-06T16:51:42 | The Reality of Hospitals | null | https://rxjourney.com.ng/hospitals-are-depressing | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,193 | awbvious | 2024-11-06T16:55:15 | Samsamelo – Silicon Valley selling trolley problem lies | null | https://lgwnncpcqsloqa4sqqqq5osup2rlqp7iiqliyu4y6vveu5jy6tlq.g8way.io/WazWieKElugDkoQhDrpUfqK4P-hEFoxTmPVqSnU49Nc? | 3 | 1 | [42065194] | null | null | null | null | null | null | null | null | null | train
42,065,250 | paulpauper | 2024-11-06T16:58:00 | The Immigration-Wage Myth | null | https://www.theatlantic.com/podcasts/archive/2024/11/immigration-worker-wages-myth-jobs/680523/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,264 | ToJans | 2024-11-06T16:58:47 | Show HN: A location-based, privacy-sensitive photo app for on-site reports etc. | This little tool allows you to take multiple photos, load photos from a folder on your device, or save them to a folder on your device.<p>If you turn on geo-location, your filename includes the location, and you can see this location by clicking on the red pin.<p>Everything runs client-side in the browser, and none of your picture info is sent to any server.<p>This might come in handy if you need to go to a wharf, and take multiple pictures with a smartphone for example, and would like to easily see them & browse them, together with the location. | https://pages.virtualsaleslab.com/tools/camera-pwa | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,269 | 2noame | 2024-11-06T16:59:19 | The Reasons Authoritarianism Is Growing – and How to Reverse It | null | https://www.scottsantens.com/the-hidden-reasons-authoritarianism-is-growing-and-how-to-reverse-it-ubi/ | 39 | 64 | [42066247, 42065928, 42066439, 42066187, 42066621, 42066064, 42066552, 42066135, 42066193, 42065949] | null | null | null | null | null | null | null | null | null | train
42,065,304 | rntn | 2024-11-06T17:00:49 | EU charges Corning with antitrust violations over Gorilla Glass dominance | null | https://www.theregister.com/2024/11/06/eu_charges_corning_with_antitrust/ | 2 | 0 | null | null | null | no_error | EU charges Corning with antitrust violations over Gorilla Glass dominance | 2024-11-06T16:28:46Z | Brandon Vigliarolo |
Corning's Gorilla Glass is found in countless tech products, from smartphones and wearables to automobile windshields, and the European Commission has an inkling its success is due in part to the US-based business cutting anticompetitive deals.
The EC announced a formal antitrust investigation into Corning yesterday, accusing the company of abusing its dominant position as a maker of glass screens for mobile electronics, claiming the end result was the exclusion of rival glass manufacturers from the market.
The strategy ultimately caused consumers to pay higher prices, has made repairs tougher and reduced manufacturer innovation, the EC argued.
"It is very frustrating and costly experience to break a mobile phone screen," said EC competition chief Margrethe Vestager. "Therefore, strong competition in the production of the cover glass used to protect such devices is crucial to ensure low prices and high-quality glass.
"We are investigating if Corning, a major producer of this special glass, may have tried to exclude rival glass producers, thereby depriving consumers from cheaper and more break-resistant glass," Vestager added.
Gorilla Glass is Corning's branding for its alkali-aluminosilicate (alkali-AS) glass screens, a chemical composite that's more break-resistant than other types of glass, making it particularly suited for use on smartphones, wearables, laptops and tablets. Gorilla Glass can be found on devices from manufacturers including Google, Samsung, Sony, Apple and other globally recognized brands.
The Commission is concerned that Corning has abused its position with both mobile OEMs and companies that process raw glass, known as finishers. According to the EC, Corning's OEM agreements included requirements that companies source their alkali-AS glass exclusively from Corning, for which they would receive rebates, and that OEMs report all competitive offers from rivals to Corning to give it a chance to match the price.
Pertaining to finishers, the EC alleges Corning pressed them into similar sourcing exclusivity obligations, as well as including clauses that prevented finishers from challenging Corning patents.
As this is just an opening of proceedings against Corning, the EC said Corning hasn't been proven guilty yet. With Corning accused of violating Article 102 of the Treaty of the Functioning of the EU, the business could face fines of up to 10 percent of its annual turnover if found guilty of abusing its market dominance.
"Corning has and will continue to be committed to compliance with all applicable rules and regulations where it does business," a company spokesperson told The Register. "Aligned with our company values and as part of that commitment, we are working cooperatively to address the European Commission's concerns." ®
Editor's note: This story was amended post-publication with comment from Corning.
| 2024-11-08T01:53:26 | en | train |
42,065,312 | isabelandreu | 2024-11-06T17:01:10 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,065,331 | forrestbrazeal | 2024-11-06T17:01:58 | Russian influence on the 2024 election (queryable graph) | null | https://russia-elections24.getzep.com/ | 4 | 1 | [42065775] | null | null | null | null | null | null | null | null | null | train
42,065,374 | tg3 | 2024-11-06T17:04:35 | Please take our programming assessment with an LLM | null | https://creatingvalue.substack.com/p/please-take-our-programming-assessment | 12 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,377 | royychacker | 2024-11-06T17:04:55 | Train Fast, but Think Slow | null | https://blog.bagel.net/p/train-fast-but-think-slow | 2 | 0 | null | null | null | no_error | Train Fast, But Think Slow | 2024-11-06T14:59:55+00:00 | Bidhan Roy, Marcos Villagra | AI is like fire.We have had radical technological advancements in recent history. Social media, augmented reality, platform shifts like web, mobile. But AI is way more significant of a technology. It is as significant as the discovery of fire. It has the potential to change the trajectory of the evolution of our species.One of the holy grails of unlocking this potential of AI is to build systems that can reason like humans. By improving AI's, Large Language Models in particular, ability to break down complex problems and apply logical steps.Bagel's research team has been exploring this problem. Analyzing LLM building techniques, especially fine-tuning techniques, to allow Large Language Models to evolve from pattern-recognizing prediction agents to true cognitive agents. Our deep research spanned three major types of reasoning, aka intelligence: arithmetic, commonsense, and symbolic.Today, we're sharing our findings. This research targets the core of what we believe to be the ultimate AI evolution, human-level reasoning. Or beyond (God level?).We have explored techniques for the training and fine-tuning phases of model development. We have also ventured into the absolutely fascinating world of inference-time reasoning. This is where LLMs can be built or fine-tuned to generate novel solutions during inference, even if the solutions aren't part of their training dataset.Dive in. And if you're in a rush, we have a TLDR at the end.Varied types of reasoning tasks stretch AI's abilities. First, let's understand how they're defined.Arithmetic reasoning pushes machine learning to test problem-solving in a clear way. It forces models to break down problems. Choose from many strategies. Connect steps to find solutions. This makes math different. It shows exactly how well models can grasp details. And use the right solution steps in order.Commonsense reasoning upends our expectations. Models must understand the strange logic of everyday life. The challenges emerge when systems face the quirks of human interactions. The implicit rules we take for granted. For example, a door opens before you walk through. Time flows forward not backward. Water makes things wet. These obvious truths become complex puzzles for artificial systems to unravel.Symbolic reasoning flips the script on traditional machine learning. While neural networks excel at fuzzy pattern matching, symbols demand precision. Models must follow strict rules. Manipulate abstract concepts. Chain logical steps. Like a careful mathematician rather than an intuitive artist. The symbols hold no inherent meaning. Yet through them, we build towers of logic that reach toward human-level reasoning.Beyond these core types, reasoning takes many forms. Logical deduction draws rigid conclusions while induction makes creative leaps. Causal reasoning traces the hidden threads between actions and consequences. Multimodal reasoning juggles text, images, and data in a complex combination of understanding. Knowledge graphs map the relationship of facts and relationships. Yet all serve one goal - moving AI from pattern matching toward true comprehension. From memorized responses to novel insights. 
From prediction to understanding.Below, we look into training time and inference time approaches to enhance these types of reasoning.How it works: PEFT reverses traditional model adaptation (Hu et al. 2023). Four methods reveal new techniques.Prompt-based learning embeds adjustable signals into frozen models. Prefix-tuning and P-tuning introduce small changes. These changes alter outputs without altering the main model.Reparametrization methods like LoRA simplify complex weight matrices. They turn large updates into efficient low-rank forms. LoRA captures patterns from high-dimensional spaces with minimal adjustment.Adapters create extra neural pathways. Series adapters stack, each layer adjusting outputs gradually. Parallel adapters develop side skills, keeping the base intact.Adapter placement is key. Series adapters fit after MLP layers. Parallel adapters excel within them. LoRA touches both attention and MLP layers. Each method targets the right spot.Why it's useful: PEFT reduces resource demands. Large models gain new abilities without major changes. PEFT preserves the base while adding specialized skills. Hardware that struggled with fine-tuning now handles complex updates.Tradeoffs: Not all tasks fit PEFT. Some models need deeper changes. Base model limitations still exist. Combining methods is tricky. PEFT may struggle with very complex tasks.How it works: WizardMath learns in three distinct steps (Luo et al., 2023).First is supervised fine-tuning. Here, the model picks up raw mathematical patterns. It starts recognizing basic structures. Patterns get mapped to solutions. This step builds intuition for common operations. The foundation is set.Next, instruction reward models refine the process. They judge both answers and methods. These models look for efficiency. They guide the model toward elegant solutions. The focus shifts from correctness to quality.Finally, PPO-based reinforcement learning enhances problem-solving. The model tests ideas, adapts, and improves. Evol-Instruct feedback loops refine its logic with each run (Xu et al. 2023). It gets better at selecting strategies.Why it's Useful: Most models just match patterns. WizardMath thinks in logical steps. It breaks down problems like a mathematician. It selects methods based on understanding, not memory. This leads to solutions that are both effective and precise.Tradeoffs: Training WizardMath takes heavy computational resources. Its deep math focus limits general use. Low-quality data can introduce errors. Practical solutions can sometimes lose to elegant ones.How it works: DCoT breaks the single-path approach (Puerto et al. 2024). Multiple paths form at once. Each one tackles the problem differently. Yet all conclude in a single inference.Zero-shot generation creates diverse solutions. Every path seeks the truth. Each follows its own course. Some are direct. Others are more complex. All valid. The model acts like a group of experts. Each path offers a different view.These paths then interact. Strong strategies merge. Weak points become clear. The model learns to assess its own reasoning. It compares methods. It blends insights. All this happens without extra training.Why it's Useful: Multiple paths offer built-in validation. When paths align, certainty rises. When they don’t, issues appear. Different views reveal hidden details. Diversity deepens understanding.Tradeoffs: More paths need more computation. Balancing variety and consistency is tricky. Conflicting paths need resolving. For simple tasks, it's overkill. 
A group isn't always better than one.How it works: Models like Galactica (Taylor et al. 2022) and MINERVA (Lewkowycz et al. 2022) go beyond standard training. They learn from over 100 billion tokens of scientific data. This includes mathematical papers, scientific articles, and technical documentation. Raw data is converted into structured knowledge.Galactica includes tokens for specific scientific terms. It treats citations as part of the vocabulary. Chemical formulas become meaningful. Mathematical symbols are treated like tools. It learns the language of science.MINERVA focuses on quantitative reasoning. It answers natural language questions in physics, chemistry, and economics. Converts questions into math formulas. Uses LaTeX to present detailed solutions. It performs the calculations on its own.Why it's useful: Smaller models can surpass larger ones in specific fields. They grasp complex math. Work with technical notation naturally. The gap between general models and experts shrinks.Tradeoffs: Training costs rise. Each field requires massive new data. As new knowledge grows, old knowledge fades. Balancing focus and breadth is hard. It might be great at physics but weak in other areas.How it Works: Learning transforms from random sampling to structured progression (Adyasha & Maharana 2022). Like evolution, but guided. Deliberate. Purposeful.A teacher network ranks training samples. Easy concepts come first. Complex ideas build on simple ones. The pacing function controls the flow of knowledge. Sometimes fixed. Sometimes adaptive. Responds to the model's growing understanding.Three methods measure sample difficulty. Question Answering Probability tracks how often the model succeeds. Model Variability watches for consistent responses. Energy-based scoring identifies outliers and edge cases. The curriculum adapts based on these signals.Why it's Useful: Models learn more efficiently. They build strong foundations before tackling complexity. Understanding grows naturally. Organically. Each concept reinforces the last. Difficult ideas become manageable when approached in sequence.Tradeoffs: Designing effective curricula challenges even experts. Learning time stretches longer. Some concepts resist ordered progression. The path from simple to complex isn't always clear. Sometimes chaos teaches better than order.How it Works: Large models become teachers. Small models become students. Knowledge transfers through carefully curated examples (Magister et al. 2023).The process splits into two phases. First, generate Chain of Thought data. Large models solve problems step by step. Show their work. Create a roadmap of reasoning. Only correct solutions make the cut. Quality matters more than quantity.Then comes student fine-tuning. Small models learn from these examples. They see not just answers but thinking processes. The target answer guides early steps. This prevents small errors from derailing entire solutions. Teacher forcing ensures the student stays on track.Why it's Useful: Advanced reasoning becomes accessible to smaller models. Complex problem-solving skills transfer efficiently. Small models learn to think clearly with limited resources. They gain the wisdom of larger models without the computational burden.Tradeoffs: Some sophistication gets lost in translation. Students never quite match their teachers. The distillation process demands careful curation. Bad examples can teach bad habits. 
The balance between compression and comprehension remains delicate.Join Bagel CommunityHow it works: Wei et al. redefined reasoning with their 2022 paper. They introduced language models to step-by-step problem-solving using just eight examples. These guided models unlock hidden potential.With precise prompts, models show their internal reasoning. No need for new training or changes to the model. This latent capability is accessed by using strategic examples.The models learn to break down problems into logical steps that mimic human thinking. Each step becomes clear. The internal thought process shifts from a black box to a visible sequence.This approach scaled well. PaLM, with chain-of-thought prompting, hit 75.6% on StrategyQA. Even sports questions saw 95.4% accuracy, surpassing human experts. Complex math problems were solved with clear, step-by-step reasoning. In commonsense tasks, hidden assumptions surfaced in natural language. Symbolic problems became easy to follow.Why it's useful: Wei et al.'s work showed breakthroughs across fields. LaMDA 137B demonstrated this by generating 96% correct answers with sound reasoning. Problem-solving became transparent. Larger models produced more coherent explanations.Tradeoffs: Reasoning sometimes fails. Models can get confused. Wei’s research showed that 46% of wrong answers had minor mistakes, while 54% had major logical errors. Sequential reasoning can hit barriers. Complex tasks push models to their limits.How it works: Chen et al.'s 2022 work changed how models approach math. They turned natural language into executable programs that solve complex problems with machine-level precision.The process is seamless. Word problems convert directly into Python code. Variables capture key details from the text. Functions embody solution strategies. Algorithms emerge from simple descriptions. The model coordinates external tools with precision.PoT set new records, improving math benchmarks by 8% in few-shot settings. In zero-shot, the gains were 12%. The code tells a story with structured logic. Control flows mirror human thought. Programs serve as both solution and explanation.PAL expanded on this. Gao et al. in 2023 showed how models could use Python interpreters for better reasoning. Complex calculations became sharper. Formal math operations translated into natural expression.Why it's useful: Precision dominates. Math problems flow into code. The model combines high-level reasoning with computational accuracy. It's like a mathematician working alongside a supercomputer.Tradeoffs: Some problems don't translate well into code. Executing programs raises security concerns. The model must handle both natural language and code, increasing the risk of errors.How it works: Wang et al. introduced SC in 2022, shifting from greedy decoding to statistical sampling. This method changes inference entirely.Instead of one solution, each step produces multiple paths. SC explores various reasoning attempts at once. The decoder samples different trajectories in the probability space. Errors are reduced by repeating steps, leading to validation through sampling.SC’s statistical foundation is strong. It marginalizes over samples to minimize errors in individual paths. Think of it like quantum mechanics: multiple paths exist, and truth emerges from the statistical patterns.Their approach was groundbreaking. The decoder generates n unique reasoning chains, each following a different probability path. 
Final answers come from majority voting, but the process goes beyond simple counts.Wang's team tested models from UL2-20B to PaLM-540B. Accuracy increased across the board. Smaller models showed the most improvement, indicating SC unlocks hidden potential in models of all sizes.Why it's useful: Numbers tell the story. Multiple paths automatically validate answers. Different paths catch edge cases. Robustness increases as more paths are explored. Quantity becomes quality.Tradeoffs: Computation grows costly. Each path demands resources. Memory use spikes. Contradictory paths sometimes arise. Resolving these conflicts adds complexity.How it works: Wang et al.’s 2024 paper introduced SE, a new verification method. The system generates diverse responses and then analyzes them. Facts are extracted, labeled, and compared. Cross-response validation assigns endorsement scores to each fact.SE uses advanced fact extraction algorithms. Neural retrieval identifies key claims, and automatic cross-referencing helps the model distinguish strong facts from weaker ones. This statistical validation process drives the system.High-scoring facts shape future outputs, while low-scoring ones lead to re-evaluation. Each pass refines the model’s response through consistency.The fact extraction pipeline is highly technical. Named entity recognition identifies key elements, and relation extraction maps connections. All of this occurs without human input.Why it's useful: Accuracy improves. Hallucinations drop. The system validates its own facts. Confidence scores make responses more reliable.Tradeoffs: Processing takes longer. Fact extraction sometimes fails. Complex statements resist simple validation. Some valid facts get rejected if they don’t fit the statistical pattern.How it works: Zhou et al. introduced LM in 2022, a system that breaks tasks into smaller parts and solves them step by step.The process follows phases. First, the model analyzes the input. Next, it identifies sub-tasks. Then, it solves each part. Finally, it combines the results. Each phase builds on the previous one.For example, in the last-letter task with "cat dog bird," the model processes each word separately. It finds ‘t’ from "cat," ‘g’ from "dog," and ‘d’ from "bird." Then, it combines them into "tgd." The model achieved 94% accuracy with four words and 74% even with twelve.Errors are predictable. Sometimes letters drop during connection. Sometimes extras appear. But it rarely confuses the final letter of each word.Why it's useful: LM is highly efficient. It only needs two examples to work well. It uses less tokens than traditional methods, achieving equal or better results.Scaling is impressive. The model handles sequences four times longer than its training examples without losing accuracy. Standard methods fail on long sequences, scoring 31.8% on twelve-word tests. LM hits 74%, with a growing advantage on harder tasks.Tradeoffs: Some tasks don't split easily. Certain problems need a different approach. The method requires more steps, which adds processing time.Technical limits arise. The model must track partial solutions, and memory usage grows with longer sequences. Some tasks need several attempts to find the best split.Careful planning is essential. The order of sub-tasks affects accuracy, and managing information efficiently becomes critical. The system must adapt its splitting strategy for different problems.Cognitive sciences have studied human reasoning since experimental psychology emerged in the late 19th century. 
This field has been crucial for technological development, education improvement, cognitive disorder treatment, and better decision-making. Scientists use various tools to study reasoning, including problem-solving tasks, computational models, brain imaging (fMRI and EEG), and behavioral measurements like eye-tracking. These combined methods help researchers understand how humans reason.Analogously, AI researchers have invented reasoning tasks to test the reasoning capabilities of LLMs in the form of special datasets. Being AI more of an engineering field and computer science field, these datasets provide a rigorous benchmark to test AI systems. This allows researchers to measure a model’s accuracy and identify areas where it may be falling short.Datasets for testing AI reasoning on one type of reasoning should be diverse around that type of reasoning in order to test various complexities and nuances in tasks. For example, to evaluate language models' common sense capabilities, a dataset like ARC is used. The figure below we shows a ranking of the best LLMs for the ARC-challenge dataset taken from different sources.Inference-time techniques appear in green, training-time techniques appear in orange, and standard base models appear in blue.In the image above, the best performing techniques correspond to inference-time approaches, in particular, SC has a clear advantage over standard CoT. The fine-tuning approaches cannot match the inference-time approaches where they show a clear advantage.Our research focuses on strengthening reasoning in Large Language Models (LLMs) in three ways. First, arithmetic reasoning - approaching math problems logically. Next, commonsense reasoning - grasping everyday situations and drawing conclusions. Finally, symbolic reasoning - handling abstract symbols by strict logic.We explore two strategies to push these areas forward. First are training-time methods. These adjust AI’s learning process, adjusting it for specific tasks but needing time and computing power. For example, WizardMath teaches detailed problem-solving for math, while PEFT (Parameter Efficient Fine-Tuning) builds skills without huge resources. DCoT (Divergent Chain of Thought) allows AI to consider multiple solutions simultaneously.The second approach is Inference-time methods. These enhance existing models without retraining, bringing quick improvements, though sometimes with less depth. Chain of Thought (CoT) prompts AI to explain each step it takes. Program of Thought (PoT) has AI write and run code to boost accuracy. Self-Consistency (SC) checks multiple paths to ensure reliable answers.The below table is a summary of our findings.Techniques for Enhancing AI ReasoningBy open-sourcing our research on AI reasoning, our team at Bagel aims to collaborate with the Open Source AI community to forge humanity's next chapter.Join Our CommunityBagel, Monetizable Open Source AI. | 2024-11-08T08:00:00 | en | train |
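Editor's note on the reasoning-techniques article in the row above: its PEFT section describes LoRA as turning "large updates into efficient low-rank forms." A minimal numerical sketch of that reparametrization idea follows; the layer sizes, rank, and scaling are arbitrary assumptions for illustration, not the configuration used by Hu et al.

```python
import numpy as np

# Illustrative LoRA-style low-rank adapter: a frozen weight matrix W is
# augmented by a trainable update B @ A with rank r << min(d_in, d_out),
# so only A and B are trained.
d_in, d_out, r = 512, 512, 8            # assumed sizes, purely for illustration
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weights
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x, scale=1.0):
    """Frozen path plus scaled low-rank adapter path."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=(d_in,))
print(lora_forward(x).shape)            # -> (512,)
```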
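The self-consistency (SC) method, which the same article's benchmark discussion says has "a clear advantage over standard CoT," comes down to sampling several reasoning chains and majority-voting their final answers. The sketch below assumes a hypothetical `sample_reasoning_path` function standing in for a temperature-sampled chain-of-thought call to whatever model is in use; it is not a real API.

```python
from collections import Counter

def self_consistent_answer(question, sample_reasoning_path, n_paths=10):
    """Sample n reasoning chains and keep the most common final answer.

    `sample_reasoning_path(question)` is assumed to return a tuple of
    (reasoning_text, final_answer); it is a placeholder, not a real API.
    """
    answers = [sample_reasoning_path(question)[1] for _ in range(n_paths)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer
```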
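Least-to-Most (LM) decomposition is easiest to see on the article's own last-letter example, where "cat dog bird" becomes "tgd". The toy sketch below performs the decompose / solve / combine steps directly in Python rather than via a model, just to make the control flow concrete.

```python
def last_letter_concatenation(phrase):
    """Decompose into per-word sub-tasks, solve each, then combine the results."""
    words = phrase.split()               # identify sub-tasks
    letters = [w[-1] for w in words]     # solve each sub-task
    return "".join(letters)              # combine partial solutions

print(last_letter_concatenation("cat dog bird"))  # -> "tgd"
```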
42,065,402 | afreechameleon | 2024-11-06T17:06:01 | null | null | null | 1 | null | [42065403] | null | true | null | null | null | null | null | null | null | train
42,065,415 | doener | 2024-11-06T17:06:42 | Cache Goes on Top, or Cache Goes on Bottom? The X3D Dilemma | null | https://www.youtube.com/watch?v=4pGDEYApniU | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,437 | RicoElectrico | 2024-11-06T17:07:59 | Alpha max plus beta min algorithm | null | https://en.wikipedia.org/wiki/Alpha_max_plus_beta_min_algorithm | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,449 | null | 2024-11-06T17:08:37 | null | null | null | null | null | null | ["true"] | null | null | null | null | null | null | null | null | train
42,065,491 | recliner | 2024-11-06T17:11:48 | Show HN: AI Recipe Website Creator – Recipe Super Site | This is my first attempt at implementing the OpenAI API into a product. I'm a solo developer and have spent months building a multi-tenant app using Cloudflare's custom hostnames, deployed on Workers. The product is very much an MVP, and I appreciate any feedback. Thanks! | https://recipesupersite.com | 1 | 0 | null | null | null | no_error | AI Recipe Website Creator | null | null | Build your recipe website with AI. Transform simple instructions into fully detailed recipes.Customizable RecipesCustomize every aspect of your recipes, including cuisine, prep time, cook time, and much more. Reorder and edit instructions, add dietary information, and customize your blog post to suit your style.Your Recipe CollectionAccess your personalized recipes anytime. Bookmark, share, or print your favorite dishes with all the details you need, from ingredients to step-by-step instructions.ServiceStandard$15Per monthPremium$25Per monthAI generated recipes per month100 recipes200 recipesCustomize ingredients, instructions, and morePrivate label recipe websiteConfigure your own domain nameCustomize recipe site settingsChange colors and appearanceUpload logo, social and background imagesAdd members and share site control5 membersCustomize Your Site AppearanceChoose from a variety of color and gray themes, or go completely custom. With support for light and dark modes, you can ensure optimal visibility in any environment. Tailor your recipe site to match your unique style and preferences.Personalize Your Site SettingsConfigure basic information, upload your logo, and add social and background images to make your site truly unique. Tailor every aspect to suit your brand and create a cohesive online identity.Add Your Own Domain NameMake it easier for your audience to find your recipes with a personalized web address. Adding your own domain name will give your site a distinctive and unforgettable presence.Add Site MembersEasily add members to your site and share control. Delegate responsibilities, manage permissions, and ensure everyone stays in sync. Enjoy the convenience of real-time updates and collaborative control, all designed to keep your site accurate and up-to-date.Create AccountSign up now to start sharing your recipes with the world. | 2024-11-07T23:24:12 | en | train |
42,065,492 | thunderbong | 2024-11-06T17:11:48 | Binary Tree Diameter: Algorithm and Implementation Guide | null | https://jsdevspace.substack.com/p/binary-tree-diameter-algorithm-and | 1 | 0 | null | null | null | no_error | Binary Tree Diameter: Algorithm and Implementation Guide | 2024-11-06T13:03:27+00:00 | JavaScript Development Space | The diameter (or width) of a binary tree represents the longest path between any two nodes. This guide explains how to calculate it efficiently with examples and implementation.
- The diameter is the longest path between any two nodes
- The path doesn't need to pass through the root
- Path length is measured by the number of edges between nodes
The diameter at any node can be:
- Path through the node (left height + right height)
- Diameter of left subtree
- Diameter of right subtree
class TreeNode {
constructor(val) {
this.val = val;
this.left = null;
this.right = null;
}
}
function diameterOfBinaryTree(root) {
let maxDiameter = 0;
function calculateHeight(node) {
if (!node) return 0;
// Calculate heights of left and right subtrees
const leftHeight = calculateHeight(node.left);
const rightHeight = calculateHeight(node.right);
// Update maximum diameter if current path is longer
maxDiameter = Math.max(maxDiameter, leftHeight + rightHeight);
// Return height of current node
return Math.max(leftHeight, rightHeight) + 1;
}
calculateHeight(root);
return maxDiameter;
}

    1
   / \
  2   3
 / \
4   5
Diameter = 3 (path: 4 → 2 → 1 → 3)
Time to calculate: O(n)
Space complexity: O(h) where h is height

  1
 /
2
/
3
Diameter = 2 (path: 3 → 2 → 1)
Shows that diameter doesn't need to pass through root

Time complexity: O(n) where n is the number of nodes (each node is visited exactly once)
Space complexity: O(h) where h is the height of the tree (due to recursion stack)

// Test cases
const tree1 = new TreeNode(1);
tree1.left = new TreeNode(2);
tree1.right = new TreeNode(3);
tree1.left.left = new TreeNode(4);
tree1.left.right = new TreeNode(5);
console.log(diameterOfBinaryTree(tree1)); // Expected: 3
const tree2 = new TreeNode(1);
tree2.left = new TreeNode(2);
console.log(diameterOfBinaryTree(tree2)); // Expected: 1 | 2024-11-08T17:43:44 | en | train
42,065,505 | null | 2024-11-06T17:12:32 | null | null | null | null | null | null | ["true"] | true | null | null | null | null | null | null | null | null | train
42,065,514 | thunderbong | 2024-11-06T17:13:00 | Error Handling in Bash: 5 Essential Methods with Examples | null | https://jsdev.space/error-handling-bash/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,538 | paulpauper | 2024-11-06T17:14:25 | Learning not to trust the All-In podcast | null | https://passingtime.substack.com/p/learning-not-to-trust-the-all-in | 292 | 133 | [42066957, 42068061, 42068747, 42066519, 42071438, 42067237, 42070926, 42070756, 42068501, 42066847, 42067444, 42068741, 42068088, 42068190, 42069936, 42066611, 42067039, 42067978, 42069170, 42069467, 42070252, 42066471, 42068370, 42068483, 42069635, 42068526, 42067214, 42066944, 42070968, 42067989, 42066604, 42069074, 42067433] | null | null | null | null | null | null | null | null | null | train
42,065,546 | linuxbo | 2024-11-06T17:14:44 | null | null | null | 1 | null | [42065547] | null | true | null | null | null | null | null | null | null | train
42,065,561 | mavdi | 2024-11-06T17:15:43 | Forget CDK and AWS's insane costs. Pulumi and DigitalOcean to the rescue | null | https://github.com/stoix-dev/stoix-cloud-saver | 71 | 35 | [42071218, 42070997, 42071237, 42070774, 42069767, 42070516, 42065562, 42071249, 42069644, 42071329, 42069887, 42071111, 42070046, 42069354] | null | null | null | null | null | null | null | null | null | train
42,065,602 | paulpauper | 2024-11-06T17:17:28 | Metaphysics (1): Rediscovering Reality | null | https://declanbartlett.substack.com/p/on-metaphysics-1-rediscovering-reality | 1 | 0 | null | null | null | missing_parsing | On Metaphysics (1): Rediscovering Reality | 2024-11-04T06:37:16+00:00 | Declan B. | One of the things in maturing intellectually is realizing that our typical understanding of the world is most likely completely flawed. I saw this when I began my first philosophy units at university.When I learnt about formal logic, syllogism and critical thinking, I was struck by how bad I was at actually evaluating and truly understanding another person’s perspective. without unintentionally “straw-manning” them or yielding to common cognitive biases. I also learned about Berkely, Descartes, Aquinas, and many more. But the one that really stuck out to me was David Hume. My own capacity to use inductive reasoning was shattered after I learned about Humes argument against induction. The Principle of the Uniformity of Nature (PUN) as Hume called it, was completely unjustified rationally, and so the question is, how can I know that when I get on a plane, that it won’t crash? More generally, how can I rationally know that any inductive inference I make it valid? I couldn’t justify it initially. 1The only way I could, is that it was the best method we have. This was the turning point in my enquiry towards a pragmatist leaning. A more probabilistic approach to inference seemed appropriate here. Introducing Bayes Theorem, and Bayesian Inference.Bayesian inference or is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and update it as more information becomes available. - WikipediaI found this to be the most fitting way to describe how I make decisions. But the more I understood Bayesian reasoning, the more I found that it didn’t really make sense in the context I was using it.Reconciling BayesianismFor one, the Frequentist branch of Bayesian reasoning doesn’t work, because we don’t just come into the world with zero assumptions and pre-conceived ways of how to operate in the world, and we don’t simply use the frequency of occurrence as a way to determine the probability of it occurring. 2 In fact, the Bayesian-like way that humans make decisions must rely on things called priors, or prior-probabilities.Fundamentally, Bayesian inference uses a prior distribution to estimate posterior probabilities.Given the Frequentist branch is incorrect, these priors must come from structures that are built into the way that human’s reason. In other words, we don’t come into the world with un-informative prior distributions. But why?Karl Friston’s “Free Energy Principle” Well, having a blank slate of an organism would not be very efficient. For example, it would be quite problematic if mothers had to learn how to give birth. Walking, speaking and learning itself would be something that wouldn’t be possible without some prior cognitive configuration that can actualize these processes in a quick and efficient manner. 3Similarly, having a baseline cognitive mode of being allows us to “pre-order” ways of navigating the world which doesn’t involve too much cost of time and effort to build these cognitive structures from scratch. Babies can swim, Mothers give birth, we all have a fight-or-flight instinct. 
It is not a stretch to say that this “pre-ordering” of cognition also applies to how we perceive the world.4Say I had a population, and as God, I selected out of the population anyone who didn’t believe in Leprechauns, then after many generations, I would likely get a population who majority believed in Leprechauns. Now this might take an extraordinarily long time, and I might need an extremely large population and birth rate, but if the cognitive structures in the brain are able to configure themselves to come to the conclusion that Leprechauns exist, then it’s possible to evolve these structures for the organism to fit its environment. 5In this example, God acts as the environment and the selection mechanism. Despite this, the population doesn’t evolve to perceive God, they evolve to perceive Leprechauns. In the same sense, we don’t evolve to perceive our environment, we evolve to perceive the things that will give us a fitness payoff. If this is true, it implies that cognition, knowledge formation, and perception is a function of survivability and reproduction - ultimately to maximize a fitness payoff.6 Donald Hoffman’s theory builds on this line of thought, arguing that our perceptions are evolutionary adaptations that maximize fitness rather than reveal objective reality.And remarkably, Donald seems to have had the same idea that I had. In The Case Against Reality, Donald Hoffman argues that our perceptions do not reflect objective reality; instead, they are adaptive illusions shaped by evolution to enhance survival rather than to reveal the true nature of the world.This theory is called Multimodal user interface (MUI) theory.“For any generic structure that you might think the world might have; a total order, a topology metric, the probability is precisely zero that natural selection would shape any sensory system, of any organism, to see any aspect of objective reality.”This isn’t just conjecture either. Hoffman and his teams’ simulations and mathematical work back this conclusion thoroughly.Fitness beats Truth as it pertains to perceiving objective reality.“Space and time, the thing we usually think of as our fundamental reality is just the format of our 3D desktop. Instead of a flat desktop, we have a 3D space-time desktop. And objects in 3D are merely the icons in our desktop. They’re not pointers to objective reality in any sense.Reality that you see on your desktop interface is completely different to the reality of the actual computer generating that interface, i.e., the electron flows, resistors, capacitors etc. And so, this implies that space-time, energy, quantum physics and so on, are simply projections that are useful for our perception. This explains the incredible discordance we see between quantum theory, relativity and so on. According to Hoffman, rather than quantum and relativity emerging from spacetime, space-time and these other theories emerge from something more fundamental. But what does Hoffman think is at the fundamental level?According to Hoffman, consciousness and conscious agents—interacting networks of subjective experiences—are the foundational elements from which the structure of our perceived reality emerges. 7More interestingly, Hoffman describes each conscious experience as “The Universe experiencing itself through a straw”. 
The universe has infinitely many traces, where each trace loops back on themselves to experience a very narrow subset of reality, through different observer perspectives.Each conscious experience, then, is a partial view of an expansive "consciousness field" or network where individual conscious agents are nodes. In this framework, interactions among these conscious agents give rise to what we perceive as the material world, including the illusion of separate objects, time, and space.I think there’s something to this for several reasons.It explains various psychedelic phenomenon and maybe even supernatural phenomenonThere is strong mathematical, philosophical and scientific backing i.e., this is not pseudoscience. It greatly comports with other Theories of Everything being developed such as Stephen Wolframs “Ruliad” theory. You can see the two theories converge on each other at the end of this discussion. It provides metaphysical backing to idealist theories developed by popular philosophers such as Bernardo Kastrup and David Chalmers. Starting from the assumption of natural selection, we end up with a solution to the “hard problem” of Consciousness. A problem with Hoffmans ideas is that the proofs for them aren’t very accessible. I don’t actually understand any of the mathematics underlying what Hoffman is talking about, and I’ve got a degree in engineering science. For me, and especially the average person, you kind of have to take him at his word and trust the fact that most other people with a highly advanced understanding of mathematics, such as Stephen Wolfram, can’t find much fault with what Donald is putting forward.I just recently stumbled across Chris Langan’s metaphysical theory, called the Cognitive-Theoretic-Model of the Universe (CTMU). This theory is even more puzzling than Hoffmans. But it does have the same, if not a higher level of intrigue. Chris Langan reportedly has the highest IQ (between 195 and 210) in America, or even the world. This of course has no bearing on the truth of his claims, but it does increase my curiosity for them.Here is an outline of the CTMU:Core Premise: Reality as a Self-Processing Language. CTMU posits that reality is a self-referential, self-configuring system that can be understood as a kind of "language" capable of processing information about itself. In CTMU, reality is not separate from the rules governing it; instead, it includes a set of structural and logical rules that can dynamically evolve within itself.One of the foundational ideas of CTMU is that reality functions analogously to a mind, in which cognition, perception, and action are interdependent and integrated. Thus, the universe is seen as a kind of “self-aware” system, capable of observing, reflecting upon, and interacting with itself in a way similar to human consciousness.CTMU suggests that everything in reality adheres to a syntactic structure, akin to grammar in language, that determines how objects and events can relate and interact. Reality is thus seen as both the process of generating information and the information itself, which are inseparable.Langan introduces a lot of conceptual baggage to iron out the issues in his theory, which I haven’t included here, but the overall approach is interesting.Chris claims that the CTMU is completely logically coherent and therefore logically necessary. 
But here are two problems I currently see with Langan’s model:I imagine that there are at least more than one possible metaphysical theories that are logically self-contained and coherent, and therefore necessarily exist. This undermines its exclusivity. Langan’s theory relies on logic itself but provides only a self-justification for this (which is the whole point of the argument). The problem with this, is that I can conceive of a world where evolution (as laid out in Hoffmans theory), has produced logic as a fictitious perceptual tool to survive. Maybe logic does point to objective and invariant structures in reality, but maybe it doesn’t. Maybe a different kind of logic is better at pointing to these structures. Maybe the idea of a structure is fictitious in and of itself, and reality is actually completely transient in form. Or maybe there is no reality, and we can adopt a solipsism, i.e., only my own phenomenological experience exists, logic doesn’t seem to play a role here.So, while I think that the CTMU is certainly intriguing, and definitely plausible - I am unconvinced of his strong claim that the theory is NECESSARY. Logically it might be so, but philosophically and conceptually, it isn’t. Engagement with thinkers like David Hume challenged my trust in empirical reasoning and revealed the limitations of induction and cognitive biases.Donald Hoffman's Multimodal User Interface (MUI) theory posits that our perceptions are adaptive illusions shaped by evolution rather than reflections of objective reality.Chris Langan's Cognitive-Theoretic Model of the Universe suggests that reality may be a conscious, self- contained, self-referential system rather than an independent entity. Langan’s claims that this theory is Necessary is dubitable. These ideas raise profound questions about existence and knowledge: if our beliefs are constructed for survival, what does that imply about the true nature of reality? | 2024-11-08T21:35:34 | null | train |
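The essay in the row above appeals to Bayes' theorem and Bayesian updating of priors; for readers who want the mechanics, a one-line worked example is below. The numbers are made up purely for illustration.

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Illustrative values only: prior P(H)=0.3, likelihood P(E|H)=0.8, P(E)=0.5.
print(bayes_update(prior=0.3, likelihood=0.8, evidence_prob=0.5))  # -> 0.48
```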
42,065,635 | gfortaine | 2024-11-06T17:19:35 | null | null | null | 6 | null | null | null | true | null | null | null | null | null | null | null | train |
42,065,642 | SaaSStrategist | 2024-11-06T17:20:07 | Ask HN: Solo founders who hit $1M ARR, what's your tech stack? | null | null | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,652 | trevin | 2024-11-06T17:20:29 | The Shipwreck Detective | null | https://www.newyorker.com/magazine/2024/11/11/the-shipwreck-detective | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,699 | PaulHoule | 2024-11-06T17:23:19 | Custom alterations: Mending genes for long-lasting effects | null | https://medicalxpress.com/news/2024-10-custom-genes-effects.html | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,709 | gnabgib | 2024-11-06T17:23:52 | Autonomous mobile robots for exploratory synthetic chemistry | null | https://www.nature.com/articles/s41586-024-08173-7 | 3 | 1 | [42068977] | null | null | null | null | null | null | null | null | null | train
42,065,718 | dancrystalbeach | 2024-11-06T17:24:15 | null | null | null | 1 | null | [42065719] | null | true | null | null | null | null | null | null | null | train
42,065,722 | b20000 | 2024-11-06T17:24:25 | Ask HN: Use after license expiration protection schemes | What are some good solutions to prevent businesses from using your software past termination of a licensing agreement or past a trial date?<p>Specifically, this is in the context of software that is installed on linux servers for which a monthly licensing fee needs to be paid.<p>Some ideas I had myself:
- reduce reliability of software past certain date
- use a licensing server with the software connecting to it to check if it is allowed to run
- keep some type of secure date or time counter which is checked<p>Curious what scenarios you've had yourself in your startup dealing with customers and how you've handled this. | null | 3 | 3 | [42066546, 42065780] | null | null | null | null | null | null | null | null | null | train
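One way to make the "secure date or time counter" idea from the Ask HN row above concrete is to ship a signed expiry date with each license and refuse to run past it. The sketch below uses only the Python standard library (hmac, hashlib, datetime); the token format and secret handling are illustrative assumptions, not a hardened licensing scheme, and a determined user can still tamper with the system clock.

```python
import hmac
import hashlib
from datetime import date, datetime

SECRET = b"replace-with-a-per-customer-secret"   # placeholder value

def sign_expiry(expires_on):
    """Build a license token of the form 'YYYY-MM-DD:<hex HMAC signature>'."""
    sig = hmac.new(SECRET, expires_on.encode(), hashlib.sha256).hexdigest()
    return f"{expires_on}:{sig}"

def license_is_valid(token, today=None):
    """Verify the signature and check that today is not past the signed expiry."""
    expires_on, _, sig = token.partition(":")
    expected = hmac.new(SECRET, expires_on.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    today = today or date.today()
    return today <= datetime.strptime(expires_on, "%Y-%m-%d").date()

token = sign_expiry("2025-12-31")
print(license_is_valid(token))
```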
42,065,734 | hitekker | 2024-11-06T17:25:01 | A Political Misdiagnosis | null | https://www.nytimes.com/2024/10/14/briefing/hispanic-black-americans-election-poll.html | 4 | 2 | [42067260, 42065740] | null | null | null | null | null | null | null | null | null | train
42,065,807 | savin-goyal | 2024-11-06T17:29:36 | Fast, Automatic Containerization of ML and AI Projects with Fast Bakery | null | https://outerbounds.com/blog/containerize-with-fast-bakery | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,882 | louison11 | 2024-11-06T17:33:42 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,065,892 | hedayet | 2024-11-06T17:34:15 | null | null | null | 1 | null | [42065893] | null | true | null | null | null | null | null | null | null | train
42,065,951 | marban | 2024-11-06T17:37:29 | News organisations are forced to accept Google AI crawlers, says FT policy chief | null | https://pressgazette.co.uk/media_law/google-ai-scraping-crawlers-financial-times-news-publishers/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,065,996 | itfossil | 2024-11-06T17:39:33 | The Terrified Within | null | https://itfossil.com/posts/2024/11/the-terrified-within/ | 2 | 1 | [42065997] | null | null | null | null | null | null | null | null | null | train
42,066,040 | paulpauper | 2024-11-06T17:41:11 | Book Review: Gödel, Escher, Bach | null | https://www.griffinknight.com/p/book-review-godel-escher-bach | 8 | 3 | [42066727, 42068120, 42066808
] | null | null | no_error | Book Review: Gödel, Escher, Bach | 2024-10-02T22:56:33+00:00 | Griffin Knight | They say that if you want to earn respect in prison, you should go up to the biggest guy in the yard and punch him in the face. While I cannot attest to the efficacy of such advice, the intellectual equivalent is writing a book review of Gödel, Escher, Bach.Most people would describe the book to be about the intersection of math, art, and music (it’s not). Douglas Hofstadter became so frustrated about this misconception that he published a new book 30 years later titled I Am a Strange Loop, which presented a more succinct and less abstract version of what he was trying to say in GEB. This review is mostly structured around I Am a Strange Loop, which conveniently also serves as a review of the major themes of GEB. Consider this a low stakes bait-and-switch, for which I hope you'll forgive me.Let’s talk about the levels of reality. If someone asked, “what caused Hurricane Katrina?”, a normal answer would be about warm ocean water evaporating, or something. This is a high-level explanation that is useful in explaining the cause of a hurricane. You could go a level deeper and say hurricanes are caused by a bunch of water molecules interacting with each other. This is also correct, it’s just not very useful - whenever anything occurs, it's ultimately due to atoms interacting with other atoms interacting with other atoms. You could then go one level deeper and begin describing the specific water molecules involved in Hurricane Katrina. Of course this would be insane, there are 1,000,000,000,000,000,000,000,000 H2O molecules in a cup of water. To describe how each one contributed to the hurricane would take more time than the age of the universe. Despite the lowest level of reality - atoms interacting with each other - being responsible for everything, we do not and cannot use this level for explanation or understanding. Instead, we are forced to abstract to a higher level.Above atoms (physics) is the world of molecules and chemical reactions (chemistry), and above that is the world of cells (biology). But even the biological level is insufficient to understand the function of complex systems. Take the heart, for example. Analyzing billions of individual heart cells does not help us understand what a heart does. Instead, we need to transcend the biological and venture into the realm of philosophy, where we seek to understand purpose. The purpose of a heart is to pump blood. The concept of a “pump” is not visible at the atomic level, nor the molecular level, not even at the biological level. But in one word, “pump”, I can describe to you the interaction of septillions of atoms and billions of cells. Not only is this the best way to explain a heart, but it’s also the only way. Our entire existence is just molecular physical processes that we abstract to higher levels to understand.This abstraction can be seen outside of nature too. Take this example from GEB:Anteater: Perhaps I can make it a little clearer by an analogy. Imagine you have before you a Charles Dickens novel.Achilles: The Pickwick Papers - will that do?Anteater: Excellently! And now imagine trying the following game: you must find a way of mapping letters onto ideas, so that the entire Pickwick Papers makes sense when you read it letter by letter.Achilles: Hmm... You mean that every time I hit a word such as "the" I have to think of three definite concepts, one after another, with no room for variation?Anteater: Exactly. 
They are the 't'-concept, the 'h'-concept, and the 'e'-concept, and every time, those concepts are as they were the preceding time.Achilles: Well, it sounds like that would turn the experience of "reading" The Pickwick Papers into an indescribably boring nightmare. It would be an exercise in meaninglessness, no matter what concept I associated with each letter.Anteater: Exactly. There is no natural mapping from the individual letters into the real world. The natural mapping occurs on a higher level-between words, and parts of the real world. If you wanted to describe the book, therefore, you would make no mention of the letter level.Achilles: Of course not! I'd describe the plot and the characters, and so forth.Anteater: So there you are. You would omit all mention of the building blocks, even though the book exists thanks to them. They are the medium, but not the message. Additionally, the lowest level physical processes are completely irrelevant to whatever goal that we are trying to achieve. If a mechanical engineer is building a car engine, it does not matter what the trajectory and velocity of each molecule will be. What matters is that, with certainty, the piston will be pushed out when heated to the correct temperature and combustion occurs. Abstraction is a process by which complexity is made comprehensible, giving us the ability to understand.Let’s now look at another example of abstraction: the mind. In the same way we can’t understand a hurricane by looking at water molecules, Hofstadter claims we cannot truly understand consciousness by looking at neurons. Below is Hofstadter’s consciousness abstraction pyramid.Image borrowed from this video which also covers the concepts from IAASL.Starting at the bottom are neurons, the lowest level. Above neurons are groupings of neurons called symbols, which represent standalone concepts. For example, when you see a dog, a group of neurons fire together, representing “dog”. The symbol for a dog is simply this group of neurons that activate when you see a dog. Groups of symbols make up thoughts, such as “Achilles walked his dog in the rain.” Lastly, at the top, there is “I”, which is a collection of thoughts that combine to form a sense of self – more on this later. However, not all living things make it to the top of this hierarchy. The further up you go, the “more conscious” you are said to be. While everything is made up of atoms, not everything has neurons, less have symbols and thoughts, and even less have a sense of self. This results in a “consciousness hierarchy”.Normal adult humans maintain a fully developed sense of self – which Hofstadter claims is the maximum degree of consciousness. But some humans have less than others, infants and the senile are two examples. Below that are dogs – do dogs have a sense of self? If a dog looks at itself in the mirror does it think “that’s me”? Maybe, what matters is the position relative to others. It doesn’t matter if a dog has a sense of self, but rather that it is less than a human and greater than a bee.The idea of relativity within the hierarchy is important because it acts as a ranking of how much we care about something or someone. Why do you kill a buzzing mosquito but not a barking dog? Why are vegetarians ok with eating plants but not animals? Should the mentally ill be forced into institutions? Why do you cry when your dog dies but not your goldfish? These are all questions that deal with your consciousness hierarchy. 
We do things to beings on one level of the hierarchy that would be unthinkable at other levels.Hofstadter’s intention when he introduces us to this concept is that consciousness is not binary, but gradient. Just as a dog's sense of “I” is less developed than a human, different humans have varying degrees of consciousness too. Which brings us to “I”.What does it mean for “I” to be at the top of the consciousness hierarchy? Just as understanding a heart's function transcends analyzing cells, grasping the essence of “I” cannot be done through neurological examination. This “I” synthesizes our actions, desires, and beliefs into a unified self-awareness, providing a philosophical foundation for understanding our very existence. Remember that the philosophical level deals with purpose, the purpose of a heart is to pump, whereas the purpose of our thoughts (aka desires, beliefs, etc.) is our sense of self.Here is Hofstadter:"Why did you ride your bike to that building?" "I wanted to practice the piano." "And why did you want to practice the piano?" "Because I want to learn that piece by Bach." "And why do you want to learn that piece?" "I don't know, I just do — it's beautiful." "But what is it about this particular piece that is so beautiful?" "I can't say, exactly — it just hits me in some special way."This creature ascribes its behavior to things it refers to as its desires or its wants, but it can't say exactly why it has those desires. At a certain point there is no further possibility of analysis or articulation; those desires simply are there, and to the creature, they seem to be the root causes for its decisions, actions, motions. And always, inside the sentences that express why it does what it does, there is the pronoun "I" (or its cousins "me", "my", etc.). It seems that the buck stops there — with the so-called "I”.If this “I” is responsible for everything that we do, it must exist, right? Hofstadter says no. Your sense of “I” is an illusion that acts as the necessary abstraction for us to understand our actions and desires. Just as the concept “pump” doesn’t actually exist, your sense of self doesn’t exist either, they are both just concepts that we use to understand and survive in the world. Remember that humans abstract to higher levels because we are unable to understand the lower levels. Our sense of “I” is an abstraction that allows us to understand our motivations.Here is Hofstadter again:In which the starring role, rather than being played by the cerebral cortex, the hippocampus, the amygdala, the cerebellum, or any other weirdly named and gooey physical structure, is played instead by an anatomically invisible, murky thing called "I" , aided and abetted by other shadowy players known as "ideas", "thoughts", "memories", "beliefs", "hopes", "fears", "intentions" , "desires", "love", "hate", "rivalry", "jealousy", "empathy", "honesty", and on and on — and in the soft, ethereal, neurology-free world of these players, your typical human brain perceives its very own "I" as a pusher and a mover, never entertaining for a moment the idea that its star player might merely be a useful shorthand standing for a myriad of infinitesimal entities and the invisible chemical transactions taking place among them, by the billions — nay, the millions of billions — every single second. [Humans] can't see or even imagine the lower levels of a reality that is nonetheless central to its existence. We now must take a short detour to describe a key concept in Hofstadter's work: the strange loop. 
Unlike a linear hierarchy where each level distinctly surpasses the previous (A > B > C), a strange loop creates a paradoxical circuit where the hierarchy loops back onto itself (A > B > C > A). A strange loop is a hierarchical structure where, as you move upwards or downwards, you eventually find yourself where you began.The best way to understand strange loops is through images. Below is a famous print by M.C. Escher called Ascending and Descending.As you can see, as you move up or down the staircase, you eventually find yourself right where you started. Another visual example of a strange loop is Escher’s Waterfall, where the waterfall seems to flow into itself like a perpetual motion machine.These images are useful in allowing us to visualize the concept of a strange loop. But the problem, of course, is that they aren’t real… But don’t worry, Hofstadter has identified some other, non-visual strange loops.Take for example, the music of Johann Sebastian Bach. Hofstadter claims that in Bach’s The Musical Offering Canon 5 “continues to rise in key, modulating through the entire chromatic scale until it ends in the same key in which it began.”You can listen to it here. Since I have little to no musical background, especially in classical, and cannot verify this claim in the slightest, here is Hofstadter again from GEB:What makes this canon different from any other, however, is that when it concludes - or, rather, seems to conclude - it is no longer in the key of C minor, but now is in D minor. Somehow Bach has contrived to modulate (change keys) right under the listener's nose. And has so constructed that this "ending" ties smoothly onto the beginning again; thus one can repeat the process and return in the key of E, only to join again to the beginning.Another example of a strange loop is Kurt Gödel's Incompleteness Theorem, which I will now describe in grossly oversimplified terms. In 1931, Gödel embedded the phrase “this statement is false” into a mathematical equation. If the statement is true, then as it says, it's false. But if the statement “this statement is false” is false, then that means it's true. But if it's true, then it says it's false! Etcetera.Kurt GödelThe astute reader may recognize this as the Liar Paradox, which has been around for thousands of years. Gödel’s innovation was that by embedding it into a mathematical equation, he was able to prove that formal systems like arithmetic have statements that are true but cannot be proven from within the system itself. This might seem like semantic nonsense, but Gödel’s Incompleteness theorem is considered to be one of the greatest intellectual achievements of the 20th century. (Fun side bar on Gödel. When studying for his US citizen test, he found a loophole in the Constitution that would permit American democracy to legally turn into a dictatorship. He told his friends, including Einstein, about the existence of a flaw, but never the specifics. We still are not sure what he found. Gödel’s Loophole has been called “one of the great unsolved problems of constitutional law”.)Despite Hofstadter spending more time on Gödel and his theorems than anything else, I actually find it non-essential to understanding the actual thesis of his books (particularly IAASL). That is, you don’t have to fully understand the incompleteness theorem beyond knowing that Gödel discovered a strange loop within the heart of mathematics. We have now covered the titular triumvirate that is Gödel, Escher, and Bach. Congratulations. 
You now know more about GEB than most people – it is, in fact, not about the intersection of math, art, and music, but instead about proving the existence of strange loops through the works of these three men.If the title of Hofstadter’s latter book didn’t give it away, we now get to his primary insight: you are a strange loop. Meaning your sense of self, the abstraction process through which you identify as “you”, and the interactions you have with the world, are the result of a strange loop occurring in your brain.As your current “I” - an abstract illusion that acts as a collection of all your up-to-date desires and beliefs - interacts with the world, it causes a feedback loop that leaves you, after the interaction, with a slightly modified “I”. It is a paradox where our sense of self is derived from, but also drives, the lower levels of our existence. Neurons make up the desires that culminate in “I want that”, which in turn leads to the manipulation of atoms causing new neurons to fire. What? Imagine you decide you want to learn the guitar. This aspiration is the result of neurons made up of atoms, firing at once to create symbols and thoughts, culminating in “you” picking up and practicing the guitar. During your first practice session, you interact with the world in a myriad of ways: your fingers on the strings, the sounds of the strings, the feedback from a teacher or listener, and the emotional responses to playing. As we know, this is just atoms moving around other atoms, but these atomic movements lead to new neurons to activate in your brain, new symbols to form, resulting in thoughts that ever so slightly modify your concept of “I”. You no longer want to learn the guitar, you are learning the guitar. This is why experiences fundamentally change you, sometimes big, sometimes small.Another example would be someone that maintains a daily journal. Their current desires and reflections are translated into the physical action of graphite being transferred from pencil to paper. If the author re-reads their entry a few years later, they may have some new experiences that react with the old entry to spark new ideas. The “I” from a few years ago arranged graphite lines on a piece of paper, which caused a future “I” to change. This is the strange loop. Your “I” changes atoms, which changes your “I”, which changes atoms… and so forth.I quote Hofstadter:And thus the current "I" — the most up-to-date set of recollections and aspirations and passions and confusions — by tampering with the vast, unpredictable world of objects and other people, has sparked some rapid feedback, which, once absorbed in the form of symbol activations, gives rise to an infinitesimally modified "I"; thus round and round it goes, moment after moment, day after day, year after year. In this fashion, via the loop of symbols sparking actions and repercussions triggering symbols, the abstract structure serving us as our innermost essence evolves slowly but surely, and in so doing it locks itself ever more rigidly into our mind. Somehow, the world works in a way where atoms are just moving themselves around in a seemingly random way. Any “purpose” we could try to assign to these movements we have already determined to be an illusion. We can now take this idea to its logical conclusion. If “you” are just a high-level abstraction of incomprehensible lower levels, and we all share these lower levels (my atoms are identical to your atoms), then it implies that your strange loop (your sense of self) can exist outside yourself. 
Take for example anyone that you have lived with for a long time: parent, child, spouse, sibling, etc. The act of deeply knowing someone is to get a copy of their strange loop into your head as well. To understand someone else's desires and beliefs is no different than understanding your own. If you walk down the street and see something in a store window and your first thought is “wow, my wife would love this!”, you are experiencing someone else’s strange loop in your own head. The root desire is no longer “I like this”, but “they like this”. The implication is that you – as defined by your beliefs, desires, goals, and aspirations – exist in dozens of other people! Of course, the strange loop that exists in others is not as strong as the “original”, but it still exists nonetheless. Above we discussed the fact that consciousness is gradient and not binary, and the same goes for strange loops. Hofstadter uses language as an analogy to the idea of duplicate strange loops. While you will never be a “native speaker” (like you are with your own strange loop), you can still become fluent. Here is Hofstadter talking about his wife, Carol.Although it took me several years to learn to "be'' Carol, and although I certainly never reached the "native speaker" level, I think it's fair to say that, at our times of greatest closeness, I was a "fluent speaker" of my wife. I shared so many of her memories, both from our joint times and from times before we ever met, I knew so many of the people who had formed her, I loved so many of the same pieces of music, movies, books, friends, jokes, I shared so many of her most intimate desires and hopes. So her point of view, her interiority, her self, which had originally been instantiated in just one brain, came to have a second instantiation, although that one was far less complete and intricate than the original one.Two people that have spent decades together not only maintain a well-formed version of the other's strange loop, but they also begin sharing one. In addition to deeply understanding each other's desires and beliefs, these desires and beliefs have fused into one. The book gets oddly romantic in a sort of nerdy, scientific way. Here is Hofstadter talking about his wife again.We had exactly the same feelings and reactions, we had exactly the same dreads and dreams, exactly the same hopes and fears. Those hopes and dreams were not mine or Carol's separately, copied twice — they were one set of hopes and dreams, they were our hopes and dreams. I don't mean to sound mystical, as if to suggest that our common hopes floated in some ethereal neverland independent of our brains. That's not my view at all. Of course our hopes were physically instantiated two times, once in each of our separate brains - but when seen at a sufficiently abstract level, these hopes were one and the same pattern, merely realized in two distinct physical media.When someone dies, the original strange loop goes away, but copies of them continue to exist in all those that knew them. Which gets us back to Hofstadter’s wife, who you may have noticed is referred to in past-tense. Carol died tragically from brain cancer when their children were toddlers. While this is unfathomably tragic, it also acts as a potential critique of his book: is this entire idea around multiple strange loops just a way for him to cope over the untimely death of his wife? He addresses this concern head on and says that he was working on these concepts long before his wife passed. He is right, in a certain sense. 
GEB, which contains nearly all the building blocks that make up IAASL, was written years before he even met Carol. Despite this, it's impossible to say that some of the chapters in IAASL weren’t heavily influenced by her passing. And I wouldn’t expect them to be! As we have just seen, an event as large as that is certainly going to alter his strange loop in a dramatic way. I enjoyed both of these books for very different reasons. I Am A Strange Loop presents a well-structured, coherent theory of the mind. Even if you don’t fully subscribe to the theory laid out in the book, there is an array of tangential concepts that are intensely thought provoking.On the other hand, Gödel, Escher, Bach is a completely different beast. It has achieved almost mythological status in our current zeitgeist. It is also perhaps the most common answer of the tech intelligentsia to “what is your favorite book?” Maybe true, but probably signaling.That’s not to say it isn’t a very, very good book, but I cannot overstate the difficulty of reading it. This difficulty is what has given it longevity, GEB would lose its raison d'etre if it were ever tamed. If you want Hofstadter’s structured thesis, go read I Am a Strange Loop. If you want to be taken on a journey that, through its intellectual dead ends and fascinating-yet-irrelevant-to-the-main-point concepts, will challenge your mind and ever so slightly alter your strange loop, then read GEB. Part of the experience is dealing with its varying mediums, from dialogues to images to puzzles to pages of symbols. GEB is what you get when a young polymath decides to show off. Most books are relaxing to read. GEB is the intellectual version of a marathon. With the strong link between mental exercise and neurodegenerative disease prevention, it may be prudent for doctors to prescribe chapters of GEB. It will also take you a long time to finish. It is then fitting that Hofstadter invented a concept in GEB that describes the “difficulty of accurately estimating the time it will take to complete tasks of substantial complexity.” I give you, Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter's Law.” | 2024-11-08T04:44:44 | en | train |
42,066,045 | handfuloflight | 2024-11-06T17:41:29 | AI-Digest | null | https://github.com/khromov/ai-digest | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,052 | leosotx247 | 2024-11-06T17:42:00 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,066,071 | chaktty | 2024-11-06T17:42:58 | null | null | null | 1 | null | [
42066072
] | null | true | null | null | null | null | null | null | null | train |
42,066,116 | handfuloflight | 2024-11-06T17:45:04 | Exponent | null | https://www.exponent.run/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,127 | paulcarroty | 2024-11-06T17:45:39 | The creators of the Mozilla Firefox browser were fined in Russia | null | https://ria.ru/20241105/mozilla-1981933912.html | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,129 | impish9208 | 2024-11-06T17:45:41 | null | null | null | 6 | null | [
42066291,
42066430
] | null | true | null | null | null | null | null | null | null | train |
42,066,143 | wazbug | 2024-11-06T17:46:32 | What do people mean when they say C is 'dangerous'? | null | https://old.reddit.com/r/C_Programming/comments/1gg5wno/what_do_people_mean_when_they_say_c_is_dangerous/ | 1 | 3 | [
42070831,
42066684,
42066437,
42067505
] | null | null | null | null | null | null | null | null | null | train |
42,066,275 | eigenvalue | 2024-11-06T17:53:58 | Show HN: Use a GitHub Repo as a CMS for a NextJS Blog | I recently needed a blog for my recent Next.js app, and wanted something that would look really nice and that could be integrated into my existing Next.js app to keep deployment simple and to give me more control over how it's hosted and configured.<p>My goal was to get something that looked very slick, using modern CSS styling and rich client-side effects that would look great on desktop and mobile, and most importantly, something that would be very easy and convenient for me to create new blog posts and edit existing posts.<p>So I had the idea of using GitHub as the CMS and just writing the posts using markdown with some extra metadata at the beginning, and then basically parsing that into html/css. I know there are some other projects that do something similar, but mine is very minimal and easy to integrate into a project without a lot of configuration.<p>It ended up working really well, so I decided to turn it into a standalone open-source project, which you can see here:<p><a href="https://github.com/Dicklesworthstone/nextjs-github-markdown-blog">https://github.com/Dicklesworthstone/nextjs-github-markdown-...</a> | https://youtubetranscriptoptimizer.com/blog/03_nextjs_github_blogging_system | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,358 | caleb_thompson | 2024-11-06T17:57:55 | Thinking About Recipe Formats More Than Anyone Should | null | https://rknight.me/blog/thinking-about-recipe-formats-more-than-anyone-should/ | 4 | 2 | [
42066562
] | null | null | null | null | null | null | null | null | null | train |
42,066,371 | null | 2024-11-06T17:58:37 | null | null | null | null | null | null | [
"true"
] | true | null | null | null | null | null | null | null | train |
42,066,450 | jmsflknr | 2024-11-06T18:02:39 | UK will legislate against AI risks in next year, pledges Kyle | null | https://www.ft.com/content/79fedc1c-579d-4b23-8404-e4cb9e7bbae3 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,488 | rntn | 2024-11-06T18:05:10 | Google Confirms Jarvis AI Is Real by Accidentally Leaking It | null | https://gizmodo.com/google-confirms-jarvis-ai-is-real-by-accidentally-leaking-it-2000521089 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,500 | maxmaio | 2024-11-06T18:05:55 | Launch HN: Midship (YC S24) – Turn PDFs, docs, and images into usable data | Hey HN, we are Max, Kieran, and Aahel from Midship (<a href="https://midship.ai">https://midship.ai</a>). Midship makes it easy to extract data from unstructured documents like pdfs and images.<p>Here’s a video showing it in action: <a href="https://www.loom.com/share/ae43b6abfcc24e5b82c87104339f2625?sid=2f2cf0fa-d671-4590-992a-da51712c69e1" rel="nofollow">https://www.loom.com/share/ae43b6abfcc24e5b82c87104339f2625?...</a>, and a demo playground (no signup required!) to test it out: <a href="https://app.midship.ai/demo">https://app.midship.ai/demo</a><p>We started 5 months ago initially trying to make an AI natural language workflow builder that would be a simpler alternative to Zapier or Make.com. However, most of our users seemed to be much more interested in the basic (and not very good) document extraction feature we had. Seeing how people were spending hours a day manually extracting data from pdfs inspired us to build what has become Midship!<p>The problem is that despite all our progress in software, huge amounts of business data still lives in PDFs and images. Sure, you can OCR them, but getting clean, structured data out is still painful. Most existing tools just give you a blob of markdown - leaving you to figure out which parts matter and how they relate.<p>We've found that combining OCR with language models lets us do something more useful: extract specific fields and tables that users actually care about. The LLMs help correct OCR mistakes and understand context (like knowing that "Inv#" and "Invoice Number" mean the same thing).<p>We have two main kinds of users today, non-technical users that extract data via our web app and developers who use our extraction api. We were initially focused on the first one as they seemed like an underserved part of the market, but we’ve received a lot of interest from developers who face the same issues.<p>For pricing, we currently charge a monthly Saas fee per seat for the web app and a volume based pricing for the API.<p>We’re really excited to share what we’ve built so far and look forward to any feedback from the community! | null | 45 | 30 | [
42070829,
42069925,
42069437,
42068247,
42067673,
42067071,
42066858,
42066966,
42067249,
42067242,
42070339
] | null | null | null | null | null | null | null | null | null | train |
42,066,517 | jedberg | 2024-11-06T18:07:03 | Ask HN: What do you like to discuss in your board meetings? | If you're an investor, what do you like your startups to tell you?<p>If you're a founder, what do you like to report up to your board, and what do you like to discuss with them?<p>What are your "standard agenda items" besides acceptance of the minutes and votes on stock grants? | null | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,580 | bookofjoe | 2024-11-06T18:11:25 | Preclinical/clinical transcranial US neuromodulation functional connectomics | null | https://www.brainstimjrnl.com/article/S1935-861X(24)00103-7/fulltext | 1 | 0 | null | null | null | no_article | null | null | null | null | 2024-11-08T11:49:52 | null | train |
42,066,590 | handfuloflight | 2024-11-06T18:12:01 | The Windows Interface Guidelines – A Guide for Designing Software (1995) [pdf] | null | https://ics.uci.edu/~kobsa/courses/ICS104/course-notes/Microsoft_WindowsGuidelines.pdf | 5 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,724 | mixeden | 2024-11-06T18:19:45 | Compact Language Models via Pruning | null | https://synthical.com/article/Compact-Language-Models-via-Pruning-and-Knowledge-Distillation-a84f848b-76bd-4570-b0e7-34f178057967 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,741 | rbanffy | 2024-11-06T18:20:41 | A viral rumor about FEMA ended in real threats of violence | null | https://weaponizedspaces.substack.com/p/how-a-viral-rumor-about-a-fema-director | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,789 | Veelox | 2024-11-06T18:22:34 | Ask HN: Voting Data Source by County? | It would be really nice to be able to download a CSV of voting for each presidential election broken down by county or zip code or something of the sort. My Google skills are not good enough. Does anyone know of such a data source? | null | 2 | 2 | [
42066938
] | null | null | null | null | null | null | null | null | null | train |
42,066,876 | octopus2023inc | 2024-11-06T18:26:37 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,066,882 | sauravmaheshkar | 2024-11-06T18:27:02 | Brief Introduction to Contrastive Learning | null | https://www.lightly.ai/post/brief-introduction-to-contrastive-learning | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,923 | caleb_thompson | 2024-11-06T18:28:56 | What a Trump Victory Means for Tech | null | https://www.nytimes.com/2024/11/06/technology/trump-musk-ai-crypto.html | 2 | 3 | [
42066997,
42067876
] | null | null | null | null | null | null | null | null | null | train |
42,066,926 | rbanffy | 2024-11-06T18:29:01 | Trump's election win spells bad news for the auto industry | null | https://arstechnica.com/cars/2024/11/ev-subsidies-out-new-import-tariffs-in-how-trumps-win-affects-autos/ | 5 | 1 | [
42068799
] | null | null | null | null | null | null | null | null | null | train |
42,066,942 | HieronymusBosch | 2024-11-06T18:29:58 | Rust Trademark Policy Updates | null | https://foundation.rust-lang.org/news/rust-trademark-policy-updates/ | 3 | 1 | [
42067405
] | null | null | null | null | null | null | null | null | null | train |
42,066,951 | SLHamlet | 2024-11-06T18:30:29 | Philip Rosedale: On the Election | null | https://philiprosedale.substack.com/p/on-the-election | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,066,979 | PaulHoule | 2024-11-06T18:31:45 | Largest dam removal ever, driven by Tribes, kicks off Klamath River recovery | null | https://news.mongabay.com/2024/10/largest-dam-removal-ever-driven-by-tribes-kicks-off-klamath-river-recovery/ | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,016 | surprisetalk | 2024-11-06T18:33:58 | In-Bed Emergency Protection from Phone-on-Face Drops | null | https://www.core77.com/posts/134219/In-Bed-Emergency-Protection-From-Phone-on-Face-Drops | 2 | 1 | [
42071242
] | null | null | Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'. | In-Bed Emergency Protection From Phone-on-Face Drops - Core77 | null | null |
In-Bed Emergency Protection From Phone-on-Face Drops
Kazuya Shibata's Smartphone Face Shield
Inventor Kazuya Shibata, who creates "marginally useful things," presents this Smartphone Face Shield. It's designed for those who use their phone in bed. A lesser inventor might simply have created an arm to hold the phone in place, but Shibata knows that greater phone engagement comes from holding the phone yourself. What's urgently needed, then, is emergency protection for when you drop it. If you want to 3D print your own, he's got the Fusion files here. Along with a caveat: "Face protection will fail about once in 10 times."
| 2024-11-08T12:36:30 | null | train |
42,067,060 | oavioklein | 2024-11-06T18:36:29 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,067,063 | WildestDreams_ | 2024-11-06T18:36:53 | Hispanic men helped propel Donald Trump back to the White House | null | https://www.economist.com/united-states/2024/11/06/hispanic-men-helped-propel-donald-trump-back-to-the-white-house | 6 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,081 | rbanffy | 2024-11-06T18:37:44 | Mighty radio bursts linked to galaxies: New clues about how magnetars form | null | https://phys.org/news/2024-11-mighty-radio-linked-massive-galaxies.html | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,088 | sandwichsphinx | 2024-11-06T18:38:10 | Placement of Analog Integrated Circuits Priority-Based Constructive Heuristic | null | https://arxiv.org/abs/2411.02406 | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,122 | smooke | 2024-11-06T18:40:22 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,067,123 | uninherited | 2024-11-06T18:40:26 | null | null | null | 1 | null | [
42067124
] | null | true | null | null | null | null | null | null | null | train |
42,067,137 | systemskid | 2024-11-06T18:41:21 | Ask HN: Are MPI implementations simply a wrapper on shared memory for intra-node | HPC noob here: Skimming through Open MPI implementation, it seems like it uses SM for intra-node communication? Isn't one of the points of MPI to use message queues for such communication, which differentiates it from shared memory based approaches? | null | 2 | 4 | [
42067255,
42067757
] | null | null | null | null | null | null | null | null | null | train |
42,067,148 | fanf2 | 2024-11-06T18:42:03 | The truffle industry is a big scam. Not just truffle oil, everything | null | https://www.tasteatlas.com/truffle-industry-is-a-big-scam | 32 | 11 | [
42070133,
42067764,
42070977,
42070721,
42070404,
42069427,
42070535
] | null | null | null | null | null | null | null | null | null | train |
42,067,179 | dfansteel | 2024-11-06T18:44:07 | What are the next level of computer science books you recommend? | Beyond “Introduction to Algorithms” or the typical undergraduate course books, what are the next level of book you recommend for software engineers? Not including platform or tool specific resources. | null | 6 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,184 | fzliu | 2024-11-06T18:44:19 | Elasticsearch vs. Vespa Performance Comparison | null | https://blog.vespa.ai/elasticsearch-vs-vespa-performance-comparison/ | 5 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,212 | segasaturn | 2024-11-06T18:45:49 | TikTok Employees Shrug Off the US Election | null | https://www.wired.com/story/tiktok-ban-us-election-donald-trump-kamala-harris/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,220 | yamrzou | 2024-11-06T18:46:06 | null | null | null | 2 | null | null | null | true | null | null | null | null | null | null | null | train |
42,067,233 | elsewhen | 2024-11-06T18:46:53 | Physicists spot quantum tornadoes twirling in a 'supersolid' | null | https://www.quantamagazine.org/physicists-spot-quantum-tornadoes-twirling-in-a-supersolid-20241106/ | 35 | 0 | null | null | null | no_error | Physicists Spot Quantum Tornadoes Twirling in a ‘Supersolid’ | Quanta Magazine | 2024-11-06T16:00:56+00:00 | By
Zack Savitsky
November 6, 2024 |
New observations of microscopic vortices confirm the existence of a paradoxical phase of matter that may also arise inside neutron stars.
Introduction
In a lab nestled between the jagged peaks of the Austrian Alps, rare earth metals vaporize and spew out of an oven at the speed of a fighter jet. Then a medley of lasers and magnetic pulses slow the gas nearly to a halt, making it colder than the depths of space. The roughly 50,000 atoms in the gas lose any sense of identity, merging into a single state. Finally, with a twist of the ambient magnetic field, tiny tornadoes swirl into existence, pirouetting in the darkness.
For three years, the physicist Francesca Ferlaino and her team at the University of Innsbruck worked to image these quantum-scale vortices in action. “Many people told me this would be impossible,” Ferlaino said during a tour of her lab this summer. “But I was so convinced that we would manage.”
Now, in a paper published today in Nature, they present snapshots of the vortices, confirming the long-sought hallmark of an exotic phase of matter known as a supersolid.
The supersolid, a paradoxical phase of matter that’s simultaneously the stiffest of solids and the flowiest of fluids, has fascinated condensed matter physicists since its prediction in 1957. Hints of the phase have been mounting, but the new experiment secures the last major piece of evidence for its existence. The authors believe the vortices that form in supersolids can help explain properties in a range of systems, from high-temperature superconductors to astronomical bodies.
The vortices might show how matter behaves in some of the most extreme conditions in the universe. Pulsars, which are spinning neutron stars — the extraordinarily dense corpses of burnt-out stars — are suspected to have supersolid interiors. “This is actually a really good analogue system” for neutron stars, said Vanessa Graber, a physicist at Royal Holloway, University of London in the United Kingdom who specializes in these stars. “I’m really excited about that.”
Rigid and Runny
Imagine spinning a bucket filled with different kinds of matter. A solid will twirl along with the container because of the friction between the bucket and the material’s rigid lattice of atoms. A liquid, on the other hand, has less internal friction, so it will form a big vortex in the center of the bucket. (The exterior atoms rotate with the bucket while the inner ones lag behind.)
If you make certain liquids cold and sparse enough, their atoms begin interacting across longer distances, eventually linking together in one giant wave that flows perfectly without any friction. These so-called superfluids were first discovered in helium in 1937 by Russian and Canadian physicists.
Francesca Ferlaino, a physicist at the University of Innsbruck, has observed the hallmark feature of supersolids.
Try spinning a bucket of superfluid, and it will remain at rest even as the bucket rotates around it. The superfluid still rubs against the bucket, but the material is totally impervious to friction until the container reaches a certain rotational speed. At this point, resisting the urge to rotate, the superfluid suddenly spawns a single quantum vortex — a whorl of atoms surrounding a column of nothingness that extends to the bottom of the bucket. Continue to speed up the container, and more of these perfect tornadoes will slither in from the rim.
Twenty years after superfluids were discovered, the American physicist Eugene Gross suggested that the same quantum collectivism could emerge in solids. Physicists debated for decades whether this bizarre superfluid-solid hybrid could exist. Eventually, a theoretical picture emerged for the supersolid. By adjusting the magnetic field around a superfluid, you can reduce the repulsion between atoms in such a way that they begin to clump together. Those clumps will all align with the magnetic field but repel one another, self-organizing into a crystalline pattern while retaining their strange frictionless behavior.
Put a supersolid in a rotating bucket, and atoms will shift in sync such that the lattice of clumps will appear to revolve with the container, much like a solid. But, like a superfluid, when spinning fast enough the material will still break out into vortices, which will get pinned between the atom clumps. The supersolid will be at once rigid and runny.
Gross’ prediction launched a long hunt for supersolids in the lab.
Researchers first announced a discovery in 2004, only to walk back their claim. New bursts of activity came in 2017 and then again in 2019, when groups from Stuttgart, Florence and Innsbruck found promising signals of supersolidity in one-dimensional systems. The groups started with gases of dysprosium and erbium atoms, which are intrinsically magnetic enough to act like little bar magnets. Applying a magnetic field triggered the atoms to naturally group together into regularly spaced clumps, forming a crystalline lattice. Then, when the researchers lowered the temperature and density, interactions between the atoms caused them to naturally oscillate as one coherent wave, complete with all the features of a superfluid.
The 2019 experiments caught a glimpse of the “two competing natures” of the supersolid, said Elena Poli, a graduate student on the Innsbruck team. Since then, the group has expanded their putative supersolid from one dimension to two and probed it for different predicted properties.
But “what was missing was basically the smoking-gun evidence” of supersolids, said Jens Hertkorn, a physicist at the Massachusetts Institute of Technology and a former member of the Stuttgart team. The hallmark of superfluidity is the array of vortices that spawns upon rotation. Despite years of trying, “nobody has spun a supersolid successfully before,” Hertkorn said.
Spinning a Supersolid
To observe how their supersolid responds to rotation, the Innsbruck crew used a magnetic field as a spoon to stir the internal magnetic fields of the atoms about 50 times a second. That’s fast enough to trigger vortices, but gentle enough to preserve the quantum phase. “It’s a very, very delicate state — any small change would destroy it,” Ferlaino said.
Spotting those little cyclones was a bigger challenge. The group spent three years quantum-storm chasing. Eventually, they executed a proposal from 2022 by Alessio Recati, a physicist at the University of Trento. He suggested forming vortices in the supersolid phase, then melting the material back into a superfluid in order to image the vortices with higher contrast.
Francesca Ferlaino’s lab at the University of Innsbruck.
One Friday evening early last year, three grad students burst into a dim pub near the Innsbruck campus holding a laptop. They were looking for two of the team’s postdocs, who verified that they’d captured a tornado in their quantum gas. “It was exceptionally exciting,” said Thomas Bland, one of the postdocs. The grad students returned to the lab, and Bland and his colleague stayed for a celebratory round.
“We all believe that it is a quantum vortex,” said Recati, who was not involved with the experiment. He’s waiting for experimentalists to measure the rotational speed of the tornadoes to fully corroborate theoretical predictions, but the images alone are a satisfying validation, he said. “This is very relevant for the whole physics community.”
Hertkorn wants to see the results replicated by other groups and to track how the signals change over different experimental conditions. Still, he commends the Innsbruck team for their persistence in making such a challenging measurement. “It’s just experimentally really impressive that this is observable,” he said.
Cosmic Connections
This past May, Ezequiel Zubieta was lunching on stewed rolls in a small town outside Buenos Aires when he witnessed a dead star convulsing from his laptop screen. Zubieta, an astronomy grad student at the National University of La Plata, had been tracking the impressively stable rotation of the Vela pulsar, the magnetized remnant of a massive star that exploded roughly 11,000 years ago.
As it twirls, Vela shoots beams of radiation from its poles that flash on Earth 11 times per second, with a regularity that rivals the best clocks humans can build. But that day, the star spun around 2.4 billionths of a second faster than usual.
For decades, astronomers have wondered what could cause these massive objects to suddenly speed up their rotation. Many hope that these pulsar glitches can help them decipher the inner workings of these peculiar cosmic lighthouses.
Scientists know that stellar corpses are densely packed with neutrons — one teaspoon of neutron star material would weigh as much as Mount Everest. No one is sure what happens to neutrons in such conditions. But astronomers suspect that, in a layer below the star’s solid outer crust, pressurized neutrons form clumps that take on unusual shapes, which they often refer to as “nuclear pasta.” The leading models feature phases resembling gnocchi, spaghetti and lasagna.
At a conference in 2022, Ferlaino overheard some astronomers discussing the putative qualities of nuclear pasta. Many believe that the pastalike clumps of neutrons would merge to form a superfluid, but it’s unclear how that material could give rise to glitches. Ferlaino suspected that the glitches could be a sign of the supersolids she’d been cooking up in her own lab, so she decided to investigate.
The pressurized neutrons that fill neutron stars are thought to take on an array of possible shapes known as “nuclear pasta.”
Last year, her team used a computer simulation of their supersolid to model what would happen if a similar material existed inside a spinning neutron star. They found that after vortices form, one of them can get dislodged and bump into its neighbor, setting off a tornado and avalanche that transfers its energy to the container. Enough of these tornado collisions could briefly speed up the neutron star’s rotation, resulting in a glitch, they proposed.
Graber, who had published a review of laboratory analogues for neutron stars several years earlier, was thrilled to come across the paper. “Oh my God, there is something else out there that I can use,” she recalled thinking about the various properties of rotating supersolids described in the paper. “Just reading through the text, I was like, ‘This is what I have, and this is what I have, and this is what I have.’”
Now that Ferlaino’s group has identified vortices in their supersolid, they plan to investigate how the tornadoes form, migrate and dissipate. They also want to replicate the putative mechanism for pulsar glitches, to show how an avalanche of vortices might prompt a real-world supersolid to speed up its spin. Physicists also hope to use these studies to decipher other exotic phases of matter where vortices are expected to play a key role, such as in high-temperature superconductors.
Meanwhile, astronomers like Graber and Zubieta hope this work will enable a new diagnostic tool for pulsars. With a better understanding of vortex dynamics, they may be able to use pulsar glitch observations to infer the composition and behavior of nuclear pasta.
“If you can understand how that physics works on a small scale, that’s really valuable for us,” Graber said. “I can’t use a telescope and look inside a neutron star’s crust, but they essentially have that handle.”
Ferlaino, whose group is on the lookout for other systems that may sport supersolidity, sees the applications as a reflection of the fundamental connectedness of nature. “Physics is universal,” she said, and “we’re learning the rules of the game.”
| 2024-11-08T04:10:45 | en | train |
42,067,254 | ksec | 2024-11-06T18:48:45 | Safari 18.1 Release Notes | null | https://developer.apple.com/documentation/safari-release-notes/safari-18_1-release-notes | 20 | 4 | [
42070669,
42069949
] | null | null | null | null | null | null | null | null | null | train |
42,067,265 | hnburnsy | 2024-11-06T18:49:17 | Starship's Sixth Flight Test | null | https://www.spacex.com/launches/mission/?missionId=starship-flight-6 | 274 | 308 | [
42067898,
42068557,
42068854,
42067587,
42067573,
42070451,
42069069,
42069348,
42068558,
42067562,
42069770,
42068062,
42067625,
42067530,
42068628,
42068074
] | null | null | null | null | null | null | null | null | null | train |
42,067,275 | tosh | 2024-11-06T18:50:02 | WebSockets cost us $1M on our AWS bill | null | https://www.recall.ai/post/how-websockets-cost-us-1m-on-our-aws-bill | 221 | 135 | [
42069673,
42069745,
42069524,
42068395,
42069768,
42067563,
42069421,
42071056,
42069051,
42069879,
42068537,
42067844,
42070991,
42067919,
42071180,
42068290,
42068280,
42069585,
42068005,
42069002,
42068094,
42068330,
42068398,
42068148,
42069023,
42068624,
42068127,
42069854,
42069379,
42068457,
42067836,
42068059,
42068374,
42068877,
42068331,
42068394,
42069973
] | null | null | no_error | How WebSockets cost us $1M on our AWS bill | null | null |
IPC is something that is rarely top-of-mind when it comes to optimizing cloud costs. But it turns out that if you push 1TB of video per second over IPC on AWS, it can result in enormous bills when done inefficiently.
Join us in this deep dive where we unexpectedly discover how using WebSockets over loopback was ultimately costing us $1M/year in AWS spend, and follow our quest for an efficient high-bandwidth, low-latency IPC mechanism.
Recall.ai powers meeting bots for hundreds of companies. We capture millions of meetings per month, and operate enormous infrastructure to do so.
We run all this infrastructure on AWS. Cloud computing is enormously convenient, but also notoriously expensive, which means performance and efficiency is very important to us.
In order to deliver a cost-efficient service to our customers, we're determined to squeeze every ounce of performance we can from our hardware.
We do our video processing on the CPU instead of on GPU, as GPU availability on the cloud providers has been patchy in the last few years. Before we started our optimization efforts, our bots generally required 4 CPU cores to run smoothly in all circumstances. These 4 CPU cores powered all parts of the bot, from the headless Chromium used to join meetings to the real-time video processing pipelines that ingest the media.
We set a goal for ourselves to cut this CPU requirement in half, and thereby cut our cloud compute bill in half.
A lofty target, and the first step to accomplish it would be to profile our bots.
Our CPU is being spent doing what??
Everyone knows that video processing is very computationally expensive. Given that we process a ton of video, we initially expected the majority of our CPU usage to be video encoding and decoding.
We profiled a sample of running bots, and came to a shocking realization. The majority of our CPU time was actually being spent in two functions: __memmove_avx_unaligned_erms and __memcpy_avx_unaligned_erms.
Let's take a brief detour to explain what these functions do.
memmove and memcpy are both functions in the C standard library (glibc) that copy blocks of memory. memmove handles a few edge-cases around copying memory into overlapping ranges, but we can broadly categorize both these functions as "copying memory".
The avx_unaligned_erms suffix means this function is specifically optimized for systems with Advanced Vector Extensions (AVX) support and is also optimized for unaligned memory access. The erms part stands for Enhanced REP MOVSB/STOSB, which are optimizations in recent Intel processors for fast memory movement. We can broadly read the suffix as meaning "a faster implementation, for this specific processor".
In our profiling, we discovered that, by far, the biggest callers of these functions were in our Python WebSocket client that was receiving the data, followed by Chromium's WebSocket implementation that was sending the data.
An expensive set of sockets...
After pondering this, the result started making more sense. For bots that join calls using a headless Chromium, we needed a way to transport the raw decoded video out of Chromium's Javascript environment and into our encoder.
We originally settled on running a local WebSocket server, connecting to it in the Javascript environment, and sending data over that channel.
WebSocket seemed like a decent fit for our needs. It was "fast" as far as web APIs go, convenient to access from within the JS runtime, supported binary data, and most importantly was already built into Chromium.
One complicating factor here is that raw video is surprisingly high bandwidth. A single 1080p 30fps video stream, in uncompressed I420 format, is 1080 * 1920 * 1.5 (bytes per pixel) * 30 (frames per second) = 93.312 MB/s
Our monitoring showed us that at scale, the p99 bot receives 150MB/s of video data.
That's a lot of data to move around!
The next step was to figure out what specifically was causing the WebSocket transport to be so computationally expensive. We had to find the root cause, in order to make sure that our solution would sidestep WebSocket's pitfalls, and not introduce new issues of its own.
We read through the WebSocket RFC and Chromium's WebSocket implementation, dug through our profile data, and discovered two primary causes of slowness: fragmentation and masking.
Fragmentation
The WebSocket specification supports fragmenting messages. This is the process of splitting a large message across several WebSocket frames.
According to Section 5.4 of the WebSocket RFC:
The primary purpose of fragmentation is to allow sending a message that is of unknown size when the message is started without having to buffer that message. If messages couldn't be fragmented, then an endpoint would have to buffer the entire message so its length could be counted before the first byte is sent. With fragmentation, a server or intermediary may choose a reasonable size buffer and, when the buffer is full, write a fragment to the network.
A secondary use-case for fragmentation is for multiplexing, where it is not desirable for a large message on one logical channel to monopolize the output channel, so the multiplexing needs to be free to split the message into smaller fragments to better share the output channel. (Note that the multiplexing extension is not described in this document.)
Different WebSocket implementations have different standards
Looking into the Chromium WebSocket source code, we found that messages larger than 131KB will be fragmented into multiple WebSocket frames.
A single 1080p raw video frame would be 1080 * 1920 * 1.5 = 3110.4 KB in size, and therefore Chromium's WebSocket implementation would fragment it into 24 separate WebSocket frames.
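To put rough numbers on the duplicated work (taking the 131KB limit above as 131,072 bytes): 3,110,400 / 131,072 is about 23.7, so each frame becomes 24 separately framed, masked, and copied WebSocket frames, and at 30fps that works out to roughly 720 WebSocket frames per second for a single 1080p stream.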
That's a lot of copying and duplicate work!
Masking
The WebSocket specification also mandates that data from client to server be masked.
To avoid confusing network intermediaries (such as intercepting proxies) and for security reasons that are further discussed in Section 10.3, a client MUST mask all frames that it sends to the server
Masking the data involves obtaining a random 32-bit masking key, and XOR-ing the bytes of the original data with the masking key in 32-bit chunks.
This has security benefits, because it prevents a client from controlling the bytes that appear on the wire. If you're interested in the precise reason why this is important, read more here!
While this is great for security, the downside is that masking the data means making an additional once-over pass over every byte sent over WebSocket -- insignificant for most web usage, but a meaningful amount of work when you're dealing with 100+ MB/s.
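To make that per-byte cost concrete, here is a rough sketch of the client-side masking pass (our own simplified illustration, not Chromium's actual code; real implementations XOR in 32-bit or wider chunks as described above, but they still have to touch the same amount of data):

// RFC 6455 client-to-server masking: every payload byte is XORed with one byte
// of a random 4-byte masking key, so the sender makes a full pass over its payload.
fn mask_payload(payload: &mut [u8], masking_key: [u8; 4]) {
    for (i, byte) in payload.iter_mut().enumerate() {
        *byte ^= masking_key[i % 4];
    }
}

fn main() {
    let masking_key = [0x12, 0x34, 0x56, 0x78]; // normally chosen at random for each frame
    let mut payload = b"raw video bytes".to_vec();
    mask_payload(&mut payload, masking_key); // what the sending side does
    mask_payload(&mut payload, masking_key); // XORing again unmasks on the receiving side
    assert_eq!(payload, b"raw video bytes".to_vec());
}

Applied to our workload, that pass alone touches on the order of 93MB per second for a single 1080p stream, before any of the copies are counted.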
Quest for a cheaper transport!
We knew we needed to move away from WebSockets, so we began our quest to find a new mechanism to get data out of Chromium.
We realized pretty quickly that browser APIs are severely limited if we wanted something significantly more performant than WebSocket.
This meant we'd need to fork Chromium and implement something custom. But this also meant that the sky was the limit for how efficient we could get.
We considered 3 options: raw TCP/IP, Unix Domain Sockets, and Shared Memory:
TCP/IP
Chromium's WebSocket implementation, and the WebSocket spec in general, create some especially bad performance pitfalls.
How about we go one level deeper and add an extension to Chromium to allow us to send raw TCP/IP packets over the loopback device?
This would bypass the issues around WebSocket fragmentation and masking, and this would be pretty straightforward to implement. The loopback device would also introduce minimal latency.
There were a few drawbacks, however. Firstly, the maximum size for TCP/IP packets is much smaller than the size of our raw video frames, which means we still run into fragmentation.
In a typical TCP/IP network connected via ethernet, the standard MTU (Maximum Transmission Unit) is 1500 bytes, resulting in a TCP MSS (Maximum Segment Size) of 1448 bytes. This is much smaller than our 3MB+ raw video frames.
Even the theoretical maximum size of a TCP/IP packet, 64k, is much smaller than the data we need to send, so there's no way for us to use TCP/IP without suffering from fragmentation.
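To put rough numbers on it: at a 1448-byte MSS, a single 3,110,400-byte raw 1080p frame would be spread across roughly 2,150 TCP segments (3,110,400 / 1448 ≈ 2,148), each with its own header overhead and each reassembled by the kernel on the receiving side.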
There was another issue as well. Because the Linux networking stack runs in kernel-space, any packets we send over TCP/IP need to be copied from user-space into kernel-space. This adds significant overhead as we're transporting a high volume of data.
Unix Domain Sockets
We also explored exiting the networking stack entirely, and using good old Unix domain sockets.
A classic choice for IPC, and it turns out Unix domain sockets can actually be pretty fast.
Most importantly however, Unix domain sockets are a native part of the Linux operating system we run our bots in, and there are pre-existing functions and libraries to push data through Unix sockets.
There is one con, however. To send data through a Unix domain socket, it needs to be copied from user-space to kernel-space, and back again. With the volume of data we're working with, this is a decent amount of overhead.
Shared Memory
We realized we could go one step further. Both TCP/IP and Unix Domain Sockets would at minimum require copying the data between user-space and kernel-space.
With a bit of DIY, we could get even more efficient using Shared Memory.
Shared memory is memory that can be simultaneously accessed by multiple processes at a time. This means that our Chromium could write to a block of memory, which would then be read directly by our video encoder with no copying at all required in between.
However, there's no standard interface for transporting data over shared memory. It's not a standard like TCP/IP or Unix domain sockets. If we went the shared memory route, we'd need to build the transport ourselves from the ground up, and there's a lot that could go wrong.
Glancing at our AWS bill gave us the resolve we needed to push forward. Shared memory, for maximum efficiency, was the way to go.
Sharing is caring (about performance)
As we need to continuously read and write data serially into our shared memory, we settled on a ring buffer as our high level transport design.
There are quite a few ringbuffer implementations in the Rust community, but we had a few specific requirements for our implementation:
Lock-free: We need consistent latency and no jitter, otherwise our real-time video processing would be disrupted.
Multiple producer, single consumer: We have multiple Chromium threads writing audio and video data into the buffer, and a single thread in the media pipeline consuming this data.
Dynamic Frame Sizes: Our ringbuffer needed to support audio packets, as well as video frames of different resolutions, meaning the size of each datum could vary drastically.
Zero-Copy Reads: We want to avoid copies as much as possible, and therefore want our media pipeline to be able to read data out of the buffer without copying it.
Sandbox Friendliness: Chromium threads are sandboxed, and we need them to be able to access the ringbuffer easily.
Low Latency Signalling: We need our Chromium threads to be able to signal to the media pipeline when new data is available, or when buffer space is available.
We evaluated the off-the-shelf ringbuffer implementations, but didn't find one that fit our needs... so we decided to write our own!
The most non-standard part of our ring-buffer implementation is our support for zero-copy reads. Instead of the typical two pointers, we have three pointers in our ring buffer:
write pointer: the next address to write to
peek pointer: the address of the next frame to read
read pointer: the address where data can be overwritten
To support zero-copy reads we feed frames from the peek pointer into our media pipeline, and only advance the read pointer when the frame has been fully processed.
This means that it's safe for the media pipeline to hold a reference to the data inside the ringbuffer, since that reference is guaranteed to be valid until the data is fully processed and the read pointer is advanced.
We use atomic operations to update the pointers in a thread-safe manner, and a named semaphore to signal that new data is available or that buffer space has been freed.
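The named-semaphore part could look roughly like this (again assuming the `libc` crate, with a placeholder semaphore name); the producer posts once per frame and the consumer blocks until there is something to read:

```rust
use std::ffi::CString;

// Placeholder name for illustration only.
const SEM_NAME: &str = "/bot_media_data_ready";

/// Both processes open the same named semaphore; O_CREAT with an initial
/// count of 0 means "nothing to read yet". On Linux, sem_open returns
/// SEM_FAILED (a null pointer) on error.
unsafe fn open_data_ready_semaphore() -> *mut libc::sem_t {
    let name = CString::new(SEM_NAME).unwrap();
    let sem = libc::sem_open(
        name.as_ptr(),
        libc::O_CREAT,
        0o600 as libc::mode_t,
        0 as libc::c_uint,
    );
    assert!(!sem.is_null(), "sem_open failed");
    sem
}

/// Producer side: wake the media pipeline because a new frame was written.
unsafe fn signal_data_ready(sem: *mut libc::sem_t) {
    libc::sem_post(sem);
}

/// Consumer side: sleep until the producer has written something.
unsafe fn wait_for_data(sem: *mut libc::sem_t) {
    libc::sem_wait(sem);
}
```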
After implementing this ring buffer and deploying it into production along with a few other optimizations, we were able to reduce the CPU usage of our bots by up to 50%.
This exercise in optimizing IPC for CPU efficiency reduced our AWS bill by over a million dollars per year, a huge impact and a really great use of time! | 2024-11-08T05:53:41 | en | train |
42,067,302 | Shon3333 | 2024-11-06T18:51:24 | null | null | null | 1 | null | [
42067303
] | null | true | null | null | null | null | null | null | null | train |
42,067,362 | laurex | 2024-11-06T18:54:43 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,067,371 | startories | 2024-11-06T18:55:22 | What's After the Movie | null | https://www.whatsafterthemovie.com/ | 3 | 1 | [
42067372
] | null | null | missing_parsing | What's After the Movie? - Discover Movies, Reviews, and More | null | null |
Welcome to What's After the Movie!
Not sure whether to stay after the credits? Find out!
Browse our collection of 24412 movies
| 2024-11-08T21:49:21 | null | train |
42,067,373 | todsacerdoti | 2024-11-06T18:55:25 | Localized, web-based, Markdown, note-taking app inspired by textpod | null | https://github.com/Xafloc/NoteFlow | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,376 | bfein2313 | 2024-11-06T18:55:35 | Can AI Take over VC Associates' Jobs by Screening Companies? | With AI now capable of analyzing pitch decks, predicting outcomes, and comparing startups based on key metrics, it's fair to wonder if it could replace some of the early-stage work traditionally handled by VC associates. This technology could streamline the screening process, offering quicker, data-driven insights that potentially reduce human bias and speed up deal flow.<p>But is this enough to replace the nuanced judgment and intuition that associates bring to the table? Or will AI simply assist by handling the repetitive tasks, freeing up associates to focus on more strategic, relationship-driven aspects of the job? Curious to hear where people think this trend is headed.<p>I have been doing customer and market research for my startup, Evala.ai, and am very interested to hear others thoughts. | null | 4 | 1 | [
42067588
] | null | null | null | null | null | null | null | null | null | train |
42,067,394 | handfuloflight | 2024-11-06T18:56:26 | BIMI | null | https://bimigroup.org/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,428 | worm888 | 2024-11-06T18:58:18 | Well-known website directories in many countries | null | https://www.alicesite.com/ | 2 | 0 | null | null | null | missing_parsing | Hottest Websites Directories - Www.Alicesite.Com | null | null |
Hottest Websites Directories - English
Portuguese
Italian
Korean
Russian
Spanish
Japanese
French
German
Chinese
| 2024-11-08T07:25:35 | null | train |
42,067,434 | samesh13 | 2024-11-06T18:58:30 | CreateAi.Review – Get 5-Star Google Reviews for Your Business with AI | null | https://www.createai.review/ | 2 | 0 | [
42067435
] | null | null | null | null | null | null | null | null | null | train |
42,067,445 | tmnvix | 2024-11-06T18:59:03 | IR supplies personal details of 268,000 taxpayers to Meta in data breach | null | https://www.stuff.co.nz/business/360476614/ir-supplies-personal-details-268000-taxpayers-meta-data-breach | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,468 | sandwichsphinx | 2024-11-06T19:00:37 | Blackstone to Take Retail Opportunity Investments Private in $4B Deal | null | https://www.wsj.com/articles/blackstone-to-take-retail-opportunity-investments-private-in-4-billion-deal-92bbf9b9 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,474 | tosh | 2024-11-06T19:00:52 | ChatGPT now on chat.com | null | https://chat.com/ | 74 | 93 | [
42068036,
42067840,
42068346,
42067642,
42067878,
42067601,
42070185,
42069643,
42067627,
42068210,
42067667,
42067930,
42068139,
42067786,
42068015,
42067795,
42067615,
42068011,
42067814,
42069097,
42067955,
42067801
] | null | null | null | null | null | null | null | null | null | train |
42,067,511 | marginalcodex | 2024-11-06T19:03:16 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,067,516 | ______ | 2024-11-06T19:03:45 | Gilded Age | null | https://en.wikipedia.org/wiki/Gilded_Age | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,596 | alextttty | 2024-11-06T19:08:47 | Show HN: Last CLI tool you will need | null | https://tushynski.me/from-documentation-chaos-to-terminal-clarity-last-cli-you-will-need/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,604 | fit_developer | 2024-11-06T19:09:32 | Show HN: YouTube Video Summarize Mobile App | Tired of watching long YouTube videos? Ai Video Chat app, you can now summarize YouTube videos in seconds! AI summarizer transcribes and summarizes videos, text summary. | https://apps.apple.com/us/app/video-summarize-ai-video-chat/id6692608763 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,616 | aard | 2024-11-06T19:10:03 | Founder Mode: Elevating Individual Contributors | null | https://rethinkingsoftware.substack.com/p/founder-mode-elevating-individual | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,626 | Dermeni | 2024-11-06T19:10:26 | HN: ReInvestWealth – AI Bookkeeping at $9/mo for solopreneurs | ReInvestWealth is an affordable and user-friendly AI accounting software for modern entrepreneurs. | https://www.reinvestwealth.com | 3 | 0 | [
42068943
] | null | null | null | null | null | null | null | null | null | train |
42,067,636 | omidfarhang | 2024-11-06T19:11:11 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,067,660 | mfiguiere | 2024-11-06T19:12:31 | Millimeter Waves May Not Be 6G's Most Promising Spectrum | null | https://spectrum.ieee.org/6g-spectrum-fr3 | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,067,685 | PaulHoule | 2024-11-06T19:14:02 | Is 'U-shaped happiness' universal? Not for rural subsistence populations | null | https://phys.org/news/2024-10-happiness-universal-rural-subsistence-populations.html | 5 | 0 | null | null | null | null | null | null | null | null | null | null | train |