column                type            stats
id                    int64           2 to 42.1M
by                    large_string    lengths 2 to 15
time                  timestamp[us]
title                 large_string    lengths 0 to 198
text                  large_string    lengths 0 to 27.4k
url                   large_string    lengths 0 to 6.6k
score                 int64           -1 to 6.02k
descendants           int64           -1 to 7.29k
kids                  large list
deleted               large list
dead                  bool            1 class
scraping_error        large_string    25 values
scraped_title         large_string    lengths 1 to 59.3k
scraped_published_at  large_string    lengths 4 to 66
scraped_byline        large_string    lengths 1 to 757
scraped_body          large_string    lengths 1 to 50k
scraped_at            timestamp[us]
scraped_language      large_string    58 values
split                 large_string    1 value
42,056,193
cafelate
2024-11-05T23:41:24
The Most Controversial Nobel Prize in Recent Memory
null
https://www.theatlantic.com/ideas/archive/2024/11/democracy-acemoglu-nobel-prize/680522/
4
2
[ 42056237, 42064191, 42056224 ]
null
null
null
null
null
null
null
null
null
train
42,056,223
haloop_doop
2024-11-05T23:46:19
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,229
jayceeperido22
2024-11-05T23:47:10
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,247
technologyvault
2024-11-05T23:51:48
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,251
brandrick
2024-11-05T23:52:01
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,263
saomcomrad56
2024-11-05T23:53:51
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,268
null
2024-11-05T23:54:41
null
null
null
null
null
null
[ "true" ]
true
null
null
null
null
null
null
null
train
42,056,270
mixeden
2024-11-05T23:54:48
Adaptive Length Image Tokenization Method
null
https://synthical.com/article/Adaptive-Length-Image-Tokenization-via-Recurrent-Allocation-28b10f33-9071-41b6-b1a1-c5d0125ebc65
1
0
null
null
null
null
null
null
null
null
null
null
train
42,056,276
tylertreat
2024-11-05T23:56:17
Automating Infrastructure as Code with Vertex AI
null
https://realkinetic.substack.com/p/automating-infrastructure-as-code
7
0
null
null
null
no_error
Automating Infrastructure as Code with Vertex AI
2024-11-05T21:13:46+00:00
Tyler Treat, Matt Perreault
A lot of companies are trying to figure out how AI can be used to improve their business. Most of them are struggling not just to implement AI, but even to find use cases that aren’t contrived and actually add value to their customers. We recently discovered a compelling use case for AI integration in our Konfigurate platform, and we found that implementing generative AI doesn’t require a great deal of complexity. We’re going to walk you through what we learned about integrating an AI assistant into our production system. There's a ton of noise out there about what you "need" to integrate AI into your product. The good news? You don't need much. The bad news? We spent far too much time sifting through nonsense to find what actually helps deliver value with AI.

We’ll show you how to leverage Google’s Vertex AI with Gemini 1.5 to implement multimodal input for automating the creation of infrastructure as code. We’ll see how to make our AI assistant context-aware, how to configure output to be well-structured, how to tune the output without needing actual model tuning, and how to test the model.

Konfigurate takes a modern approach to infrastructure as code (IaC) that shifts concerns such as security, compliance, and architecture standardization left into the development process. In addition to managing your cloud infrastructure, it also integrates with GitHub or GitLab to manage your organization’s repository structure and CI/CD.

Workloads are organized into Platforms and Domains, creating a structured environment that connects GitHub/GitLab with your cloud platform for seamless application and infrastructure management. Everything in Konfigurate—Platforms, Domains, Workloads, Resources—is GitOps-driven and implemented through YAML configuration.
Below is an example showing the configuration for an “Ecommerce” Platform:

apiVersion: konfig.realkinetic.com/v1alpha8
kind: Platform
metadata:
  name: ecommerce
  namespace: konfig-control-plane
  labels:
    konfig.realkinetic.com/control-plane: konfig-control-plane
spec:
  platformName: Ecommerce
  gitlab:
    parentGroupId: 88474985
  gcp:
    billingAccountId: "XXXXXX-XXXXXX-XXXXXX"
    parentFolderId: "38822600023"
    defaultEnvs:
      - label: dev
    services:
      defaults:
        - cloud-run
        - cloud-sql
        - pubsub
        - firestore
        - redis
  groups:
    dev:
      - [email protected]
    maintainer:
      - [email protected]
    owner:
      - [email protected]

Konfigurate Platform YAML

The Konfigurate objects like Platforms, Domains, and Workloads are a well-structured problem. We have technical specifications for them defined in a way that's easily interpretable by programs. In fact, as you can probably tell from the example above, they are simply Kubernetes CRDs, meaning they are—quite literally—well-defined APIs. These YAML configurations are fairly straightforward, but they can still be tedious to write by hand. Instead, what usually happens (as with every other IaC tool) is that definitions get copy/pasted and proliferate. We saw an opportunity for AI because of the structured nature of the system and the well-defined problem space.

Our idea was to create an AI assistant that could generate Konfigurate IaC definitions based on flexible user input. Users could interact with the system in a couple of different ways:

- Text Description: users could describe their desired system architecture using natural language, e.g. “Add a new analytics domain to the ecommerce platform and within it I need a new ETL pipeline that will pull data from the orders database, process it in Cloud Run, and write the transformed data to BigQuery.”
- Architecture Diagram: users could provide an image of their architecture diagram.

While we only introduced support for natural language and image-based inputs, we also validated that audio-based descriptions of the architecture worked with no additional effort. We tested this by recording ourselves describing the infrastructure and then providing an M4A file to the model. We decided not to include this mode of input since, while cool, it didn’t seem particularly practical.

This multimodal approach not only saves developers hours of time spent on boilerplate but also accommodates different working styles and preferences. Whether a team uses visual tools for architecture design or prefers text-based planning, our system can adapt, getting them up and running with minimal mental effort. Developers would still be responsible for verifying system behavior and testing, but the initial setup time could be drastically reduced across various input methods.

Critically, we found this feature makes IaC more accessible and productive for a much broader set of roles and skill sets. For instance, we’ve worked with mainframe COBOL engineers, data analysts, and developers with no cloud experience who are now able to more effectively implement cloud infrastructure and systems. It doesn’t hide the IaC from them; it just gives them a reliable starting point that is actually grounded in their environment and problem space.
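As a concrete sketch of the multimodal input described above: with the Vertex AI Node.js SDK, each input mode is just another Part in the request, so a diagram image and a text description can travel together in one prompt. The type shapes mirror the SDK's Part structure, but the helper name and file URI below are illustrative assumptions, not code from the Konfigurate codebase:

```typescript
// Minimal local mirrors of the Vertex AI SDK's Part shapes:
// { text } for prose, { fileData: { mimeType, fileUri } } for media.
interface FileData {
  mimeType: string;
  fileUri: string;
}
type Part = { text: string } | { fileData: FileData };

// Assemble a multimodal prompt from any mix of an uploaded file
// (architecture diagram, or even an audio description) and text.
function buildUserParts(prompt?: string, fileData?: FileData): Part[] {
  const parts: Part[] = [];
  if (fileData) parts.push({ fileData });
  if (prompt) parts.push({ text: prompt });
  return parts;
}

// Example: a diagram plus a short text instruction in one request.
// The bucket path is hypothetical.
const parts = buildUserParts(
  "Generate the Konfigurate YAML for the architecture in this diagram.",
  { mimeType: "image/png", fileUri: "gs://example-bucket/architecture.png" },
);
```

An audio recording would be passed the same way with an audio mimeType, which is presumably why supporting it required no additional effort.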
What we have found with our AI-assisted infrastructure and our more general approach to Visual IaC is that developers spend more time focusing on their actual product and less time on undifferentiated work.

Our team has a lot of GCP experience, so we decided to use the Vertex AI platform and the Gemini-1.5-Flash-002 model for this project. It was a no-brainer for us. We know the ins and outs of GCP, and Vertex AI offers an all-in-one managed solution that makes it easy to get going. This particular model is fast and, most importantly, cost-effective. We’re sure this will ring true for many of you: we didn't want to mess around with setting up our own infrastructure or dealing with the headaches of managing our own AI models. Vertex AI Studio made it really easy to start developing and iterating on prompts as well as trying different models.

Vertex AI Studio

Great, you've got your fancy AI setup, but don't you need some complex retrieval system to make it context-aware? Sure, RAG (Retrieval Augmented Generation) is often touted as essential for creating context-aware AI agents. Our experience took us down a different path.

When researching how to create a context-aware GPT agent, you'll inevitably encounter RAG. This typically involves:

- Vector databases for efficient similarity search
- Complex indexing and retrieval systems
- Additional infrastructure for training and fine-tuning models

We started by preparing JSONL-formatted data, thinking we'd feed it into a RAG system. The plan was to have our AI model learn from this structured data to understand our Konfigurate specifications like Platforms and Domains. As we experimented, we found that going the RAG route wasn't giving us the consistent, high-quality outputs we needed, so we pivoted. Instead of relying on RAG, we leaned heavily into prompt engineering.
Here's what we did:

- Long-Context Prompts: we crafted detailed prompts that provided the necessary context about our Konfigurate system, its components, and how they interact.
- Example IaC: as part of the prompt, we included numerous example definitions for Konfigurate objects such as Platforms and Domains.
- Example Prompts: we also included example prompts and their corresponding correct outputs, essentially "showing" the AI what we expected.
- Error Handling Prompts: we even included prompting that guided the AI on how to handle errors or edge cases.

This approach gave us:

- Consistency: by explicitly stating our requirements in the prompts, we got more consistent outputs.
- Flexibility: it was easier to tweak and refine our prompts than to restructure a RAG system.
- Control: we had more direct control over how the AI interpreted and used our domain-specific knowledge.
- Simplicity: no need for additional infrastructure or complex retrieval systems—instead, it’s just a single API call.

While RAG has its place, don't assume it's always necessary. For our use case, well-crafted prompts proved more effective than a sophisticated retrieval system. We believe this was a better fit because of the well-structured nature of our problem space. We can trivially validate the results output by the model because they are data structures with specifications. As a result, we got our context-aware AI assistant up and running faster, with better results, and without the overhead or complexity of RAG. Remember, in the world of technology, the simplest solution is most often the most elegant.

While prompt engineering has become a bit of a meme, it turned out to be the most crucial part of this whole process. When you're working with these AI models, everything boils down to how you craft your prompts. It's where the magic happens—or doesn't.

Let's break down what this looks like in practice. We're using the Vertex AI API with Node.js, so we started with their boilerplate code.
The key player is the getGenerativeModel() function. Here's a stripped-down version of what we're feeding it:

const generativeModel = vertexAi.preview.getGenerativeModel({
  model: "gemini-1.5-flash-002",
  generationConfig: {
    maxOutputTokens: 4096,
    temperature: 0.2,
    topP: 0.95,
  },
  safetySettings: [
    {
      category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
  ],
  systemInstruction: {
    role: "system",
    parts: [
      // Removed for brevity (detailed below)
    ],
  },
});

Gemini 1.5 model initialization

- Model: We’re using the latest version of Gemini 1.5 Flash, which is a lightweight and cost-effective model that excels at multimodal tasks and processing large amounts of text.
- Generation Config: This is where we control things like the max output length as well as the “temperature” of the model. Temperature controls the randomness in token selection for the output. Gemini 1.5 Flash has a temperature range of 0 to 2, with 1 being the default. A lower temperature is good when you’re looking for a “true or correct” response, while a higher temperature can result in more diverse or unexpected results. That can be good for use cases that require a more “creative” model, but since our use case requires quite a bit of precision, we opted for a low temperature value.
- Safety Settings: These are Google's defaults. Refer to their documentation for customization.
- System Instruction: This is the real meat of prompt engineering. It's where you prime the model, giving it context and setting its role.
We’ve omitted the System Instruction from the example above and go into more depth on it below, since it’s a critical part of the solution.

Here's the thing: prompt engineering walks a fine line between science and art. We spent a non-trivial amount of time crafting our prompts to get consistent, useful outputs. It's not just about dumping information; it's about structuring it in a way that guides the AI to give you what you actually need. Remember, these models will do exactly what you tell them to do, not necessarily what you want them to do. Sound familiar? It's like debugging code, but instead of fixing logic errors, you're fine-tuning language.

Fair warning: this is probably where you'll spend most of your engineering time. It's tempting to think the AI will just "get it," but that's not how this works. You need to be painfully clear and specific in your instructions. We went through many iterations, tweaking words here and there, restructuring our prompts, and sometimes completely overhauling our approach. But each iteration got us closer to that sweet spot where the model consistently churned out exactly what we needed. In the end, nailing your prompt engineering is what separates a frustrating, inconsistent AI experience from one that feels like you've just added a new team member to your crew.

The System Instructions mentioned above provide a way to tell the model how it should behave, give it context, specify how to structure output, and so forth. Though this information is separate from the actual user-provided prompt, it is still technically part of the overall prompt sent to the model. Effectively, System Instructions provide a way to factor out common prompt components from the user-provided prompt. We won’t show all of our System Instructions because there are quite a few, but we’ll show several examples below to give you an idea.
Again, this is about being painstakingly explicit and clear about what you want the model to do.

“Konfigurate is a system that manages cloud infrastructure in AWS or Google Cloud Platform. It uses Kubernetes YAML files in order to specify the configuration. Konfigurate makes it easy for developers to quickly and safely configure and deploy cloud resources within a company's standards. You are a Platform Engineer whose job is to help Application Software Engineers author their Konfigurate YAML specifications.”

“I am going to provide some example Konfigurate YAML files for your reference. Never output this example YAML directly. Rather, when providing examples in your output, generate new examples with different names and so forth.”

“Please provide the complete YAML output without any explanations or markdown formatting.”

“If the user asks about something other than Konfigurate or if you are unable to produce Konfigurate YAML for their prompt, tell them you cannot help with that (this is the one case to return something other than YAML). Specifically, respond with the following: ‘Sorry, I'm unable to help with that.’”

The example System Instructions above hint at this, but it’s worth spelling out. First, our AI assistant has a very specific task: generate Konfigurate IaC YAML for users. For this reason, we never want it to output anything other than Konfigurate YAML, nor do we want it to respond to any prompts that are not directly related to Konfigurate. We handle this purely through prompting. To help the model understand Konfigurate IaC, we provide it with an extensive set of examples and tell it to only ever output complete YAML without any explanations or markdown formatting.

However, the output is actually more involved than this for our situation. That’s because we don’t just want to support generating new IaC; we want to support modifying existing resources as well.
This means the model doesn’t just need to be context-aware; it also needs to understand the distinction between “this is a new resource” and “this is an existing resource being modified.” This is important because Konfigurate is GitOps-driven, meaning the IaC resources are created in a branch and then a pull request is opened for the changes. We need to know which resources are being created or modified, and if the latter, where those resources live.

Modifying an existing resource

To make the model context-aware, we feed it the definitions for the existing resources in the user’s environment. This needs to happen at “prompt time,” so this information is not included as part of the System Instructions. Instead, we fetch it on demand when a user prompt is submitted and augment the prompt with it. Additionally, we provide the UI context from which the user is submitting the prompt. For example, if they submit a prompt to create a new Domain while within the Ecommerce Platform, we can infer that they wish to create the new Domain within this specific Platform. It may seem obvious to us, but the model is completely unaware of this, so we need to provide it with this context.
Below is the full code showing how this works and how the prompt is constructed.

export const generateYaml = async (
  context: AIContext,
  prompt: string,
  fileData?: FileData,
) => {
  const k8sApi = kc.makeApiClient(k8s.CustomObjectsApi);
  const { controlPlaneProjectId, defaultBranch } =
    await getOrSetGitlabContext(k8sApi);

  // Get user's environment information from the control plane
  const [placeholders, konfigObjects] = await Promise.all([
    getPlaceHolders(),
    getKonfigObjectsYAML(controlPlaneProjectId, defaultBranch),
  ]);

  const parts: Part[] = [];
  if (fileData) {
    parts.push({ fileData });
  }
  if (prompt) {
    parts.push({ text: prompt });
  }

  // Add user's environment context to the prompt
  parts.push(
    {
      text:
        "Replace the placeholders with the following values if " +
        "they should be present in the output YAML unless the " +
        "prompt is referring to actual YAMLs from the " +
        "user's environment, in which case use the YAML as is " +
        "without replacing values: " +
        JSON.stringify(placeholders, null, 2) +
        ".",
    },
    {
      text:
        'Following the "---" below are all existing Konfigurate ' +
        "YAMLs for the user's environment should they be " +
        "needed either to reference or modify and provide as " +
        "output based on the prompt. Don't forget to never " +
        "output the example YAML exactly as is without " +
        "modifications. Only output Konfigurate object YAML and " +
        "no other YAML structures. Infer appropriate emails for " +
        "dev, maintainer, and owner groups based on those in the " +
        "provided YAML below if possible.\n" +
        "---\n" +
        konfigObjects +
        "\n---\n",
    },
  );

  // Add user's UI context to the prompt
  if (context) {
    let contextPrompt = "";
    let contextSet = false;
    if (context.platform && context.domain && context.workload) {
      contextPrompt = `The user is operating within the context of the ${context.workload} Workload which is in the ${context.domain} Domain of the ${context.platform} Platform.`;
      contextSet = true;
    } else if (context.platform && context.domain) {
      contextPrompt = `The user is operating within the context of the ${context.domain} Domain of the ${context.platform} Platform.`;
      contextSet = true;
    } else if (context.platform) {
      contextPrompt = `The user is operating within the context of the ${context.platform} Platform.`;
      contextSet = true;
    }
    if (contextSet) {
      contextPrompt +=
        " Use this context to infer where output objects should go should " +
        "the user not provide explicit instructions in the prompt.";
      parts.push({ text: contextPrompt });
    }
  }

  const contents: Content[] = [
    {
      role: "user",
      parts,
    },
  ];
  const req: GenerateContentRequest = { contents };
  const resp = await makeVertexRequest(req);
  return { error: resp === errorResponseMessage, content: resp };
};

This prompt manipulation makes the model smart enough to understand the user’s environment and the context in which they are operating. Feeding it all of this information is possible due to Gemini 1.5’s context window. The context window acts like a short-term memory, allowing the model to recall information as part of its output generation.
While a person’s short-term memory is generally quite limited, both in terms of the amount of information and recall accuracy, generative models like Gemini 1.5 can have massive context windows and near-perfect recall. Gemini 1.5 Flash in particular has a 1-million-token context window, and Gemini 1.5 Pro has a 2-million-token context window. For reference, 1 million tokens is the equivalent of 50,000 lines of code (with a standard 80 characters per line) or 8 average-length English novels. This is called “long context,” and it allows us to provide the model with massive prompts while it is still able to find a “needle in a haystack.”

Long context has allowed us to make the model context-aware with minimal effort, but there’s still a question we have not yet addressed: how can the model also output metadata along with the generated IaC YAML? Specifically, we need to know the file path for each respective Konfigurate object so that we create new resources in the right place and modify the correct existing resources. The answer, of course, is more prompt engineering. To solve this problem, we instructed the model to include metadata YAML with each Konfigurate object. This metadata contains the file path for the object and whether or not it’s an existing resource. Here’s an example:

apiVersion: konfig.realkinetic.com/v1alpha8
kind: Domain
metadata:
  name: dashboard
  namespace: konfig-control-plane
  labels:
    konfig.realkinetic.com/platform: internal-services
spec:
  domainName: Dashboards
---
filePath: konfig/internal-services/dashboard-domain.yaml
isExisting: false

We did this by providing the model with several examples. Here is the System Instruction prompt we used:

{
  text:
    "For each Konfigurate YAML you output, include the following " +
    "metadata, also in YAML format, following the Konfigurate " +
    "object itself: filePath, isExisting. Here are some " +
    "examples:\n" +
    metadataExample,
}

It seems simple, but it was surprisingly effective and reliable.

Working with LLMs is a bit like describing a problem to someone else who writes the code to solve it—but without seeing the code, making it impossible to debug when issues arise. Worse yet, subtle changes in the description of the problem are akin to the other person starting over from scratch each time, so you might get consistent results, or something completely different. There are also cases where, no matter how explicit you are in your prompting, the model just doesn’t do the right thing. For example, with Gemini-1.5-Flash-001, we had problems preventing the AI from outputting the examples verbatim. We told it, in a variety of ways, to generate new examples using the provided ones as a reference for the overall structure of resources, but it simply wouldn’t do it—until we upgraded to Gemini-1.5-Flash-002.

What we saw is that something as simple as changing the model version could result in wildly different output. This is a nascent area, but it’s a major challenge for companies attempting to leverage generative AI within their products or, worse, as a core component of their product. The only solution we can think of is to have a battery of test prompts you feed your AI and compare the results. But even this is problematic, as the output content might be the same while the structure has slight variations. In our case, because we are generating YAML, it’s easy for us to validate output, but for use cases that are less structured, this seems like a major concern. Another solution is to feed results into a different model, but this feels equally precarious.

In addition to model stability, we had some challenges with “jailbreaking” the model. While we were never able to jailbreak the model to operate outside the context of Konfigurate, we were on occasion able to get it to provide Konfigurate output that was outside the bounds of our prompting.
We did not invest a ton of time in this area since there wasn’t great ROI and it wasn’t really a concern within our product, but it’s certainly a concern when building with LLMs.

You’ve stuck with us this far, and now it’s time for some concrete strategies that consistently improved our AI's performance. Here's what we learned:

- Be Specific About Output: tell the model exactly what you want and how you want it. For us, that meant specifying YAML as the output format. Don't leave room for interpretation—the clearer you are, the better the results.
- Show, Don't Just Tell: give the model examples of what good output looks like. We explicitly prompted our model to reference our example resource specifications. It's like training a new team member—show them what success looks like.
- Use Placeholders: providing examples to the model worked great, except when it would use specific field values from the examples in the user’s output. To address this, we used sentinel placeholder values in the examples and then had a step that told the model to replace the placeholders with values from the user’s environment at prompt time.
- Error Handling is Key: just like you'd build error handling into your code, build it into your prompts. Give the model clear instructions on how to respond when it encounters ambiguous or out-of-scope requests. This keeps the user experience smooth, even when things go sideways.
- The Anti-Hallucination Trick: it sounds silly, but it helps to explicitly tell the model not to hallucinate and to only respond within the context you've provided. It's not foolproof, but we've seen a significant reduction in made-up information, especially when you’ve fine-tuned the temperature.

Remember, prompt engineering is an iterative process. What works for one use case might not work for another. Keep experimenting, keep refining, and don't be afraid to start from scratch if something's not clicking.
The goal is to find that sweet spot where your AI becomes a reliable, consistent part of your workflow.

There you have it: our journey into integrating AI into the Konfigurate platform. We started out thinking we needed all sorts of fancy tech, only to find that sometimes simpler is better. The big takeaways?

- You don't always need complex systems like RAG. A well-crafted prompt can often do the job just as well, if not better. Gemini 1.5’s long context and near-perfect recall make it quite adept at the “needle-in-a-haystack” problem, and it enables pretty sophisticated use cases through complex prompting.
- Prompt engineering isn't just a buzzword or meme. It's where the real work happens, and it's worth investing your time to get it right.
- LLMs are well-suited to structured problems because they are good at pattern matching. They’re also good at creative problems, but it’s less clear to us how to integrate those into a product versus a structured problem.
- The AI landscape is constantly evolving. What works today might not be the best approach tomorrow. Stay flexible and keep experimenting.

We hope sharing our experience saves you some time and headaches. Remember, there's no one-size-fits-all solution in AI integration. What worked for us might need tweaking for your specific use case. The key is to start simple, iterate often, and don't be afraid to challenge conventional wisdom. You might just find that the "must-have" tools aren't so must-have after all.

Now, go forth and build something cool!
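One way to make the battery-of-test-prompts idea discussed earlier concrete: because the output here is structured YAML, even a naive shape check can catch regressions when a model version changes. This sketch is illustrative only; the function and the expected layout (each Konfigurate object followed by a filePath/isExisting metadata document) are our assumptions, not code from the article:

```typescript
// Illustrative regression guard: without fully parsing YAML, verify that a
// model response alternates Konfigurate objects with the filePath/isExisting
// metadata documents the prompt asks for. A real harness would run this
// against every prompt in the test battery after any model upgrade.
function looksLikeValidOutput(modelOutput: string): boolean {
  const docs = modelOutput
    .split(/^---$/m)
    .map((d) => d.trim())
    .filter((d) => d.length > 0);
  // Documents must pair up: object, metadata, object, metadata, ...
  if (docs.length === 0 || docs.length % 2 !== 0) return false;
  for (let i = 0; i < docs.length; i += 2) {
    const obj = docs[i];
    const meta = docs[i + 1];
    if (!obj.includes("apiVersion:") || !obj.includes("kind:")) return false;
    if (!meta.includes("filePath:") || !meta.includes("isExisting:")) return false;
  }
  return true;
}

// Sample response shaped like the Domain + metadata example from the article.
const sample = [
  "apiVersion: konfig.realkinetic.com/v1alpha8",
  "kind: Domain",
  "metadata:",
  "  name: dashboard",
  "---",
  "filePath: konfig/internal-services/dashboard-domain.yaml",
  "isExisting: false",
].join("\n");
```

A check like this only validates structure, not semantics, which matches the article's caveat that identical content can come back with slight structural variations.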
2024-11-08T01:47:45
en
train
42,056,277
ioblomov
2024-11-05T23:56:31
Algorithmic Pricing for Kale
null
https://www.bloomberg.com/opinion/articles/2024-11-05/algorithmic-pricing-for-kale
1
1
[ 42056284 ]
null
null
null
null
null
null
null
null
null
train
42,056,287
sixhobbits
2024-11-05T23:57:23
null
null
null
9
null
[ 42056318, 42056306 ]
null
true
null
null
null
null
null
null
null
train
42,056,295
nothrowaways
2024-11-05T23:58:41
null
null
null
2
null
null
null
true
null
null
null
null
null
null
null
train
42,056,299
PaulHoule
2024-11-05T23:59:30
A closer look at Intel and AMD's different approaches to gluing together CPUs
null
https://www.theregister.com/2024/10/24/intel_amd_packaging/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,056,314
saomcomrad56
2024-11-06T00:01:29
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,330
DenisM
2024-11-06T00:04:46
More Oracle Layoffs Started Nov. 1, Cloud Unit Impacted
null
https://www.channelfutures.com/channel-business/more-oracle-layoffs-started-nov-1-cloud-unit-impacted#
109
44
[ 42056720, 42056629, 42056641, 42056636, 42056605 ]
null
null
null
null
null
null
null
null
null
train
42,056,334
program247365
2024-11-06T00:05:13
Show HN: Hackertuah: A Hacker News CLI Built in Rust
This was a lot of fun!

Using Claude to start, and then Cursor to make more complicated changes, I made a Hacker News CLI, built in Rust, that has a sweet loading screen ;)

Wanted a neat way to browse hacker news, and this was a fun start.

Just a 4.7M binary on macOS.
https://github.com/program247365/hackertuah
6
2
[ 42057091 ]
null
null
no_error
GitHub - program247365/hackertuah: A CLI for Hacker News
null
program247365
# Hacker News TUI

A terminal-based user interface for browsing Hacker News with Vim-style navigation and Claude AI integration for story summarization.

## Features

- 🚀 Browse top Hacker News stories in your terminal
- ⌨️ Vim-style keyboard navigation
- 🤖 Claude AI integration for story summarization
- 🌐 Open stories directly in your default browser
- 💚 Classic green-on-black terminal aesthetic
- 🎯 Minimalist, distraction-free interface

## Installation

### Prerequisites

- Rust and Cargo (latest stable version)
- A Claude API key from Anthropic

### Setup

1. Clone the repository:

        git clone https://github.com/yourusername/hackernews-tui
        cd hackernews-tui

2. Add your Claude API key to your environment:

        export CLAUDE_API_KEY=your_key_here

3. Build and run:

        cargo build --release
        cargo run

## Usage

### Keyboard Controls

- `j` or `↓`: Move down
- `k` or `↑`: Move up
- `Enter`: Open selected story in default browser
- `o`: Open options menu
- `q`: Quit application
- `Esc`: Close menus/summaries

### Options Menu

Press `o` to open the options menu, which provides:

- Summarize this post (uses Claude AI)
- Open in browser
- Close menu

### Story Information

Each story displays:

- Title
- Score
- Author
- Direct link to article or discussion

## Dependencies

    [dependencies]
    ratatui = "0.21.0"
    crossterm = "0.26.0"
    tokio = { version = "1.0", features = ["full"] }
    reqwest = { version = "0.11", features = ["json"] }
    serde = { version = "1.0", features = ["derive"] }
    serde_json = "1.0"
    open = "3.2"

## Project Structure

    src/
    ├── main.rs   # Main application logic
    ├── types.rs  # Data structures and type definitions
    ├── ui.rs     # UI rendering and layout
    └── hn_api.rs # Hacker News API integration

## Features in Detail

### Hacker News Integration

- Fetches top 30 stories from Hacker News API
- Real-time score and comment updates
- Direct access to article URLs and discussion pages

### Claude AI Integration

- Summarizes long articles and discussions
- Provides concise, intelligent summaries of complex topics
- Accessible through the options menu with `o`

### Terminal UI

- Built with ratatui for smooth rendering
- Classic green-on-black color scheme
- Efficient memory usage and fast rendering
- Responsive layout that adapts to terminal size

## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License - see the LICENSE file for details.
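The clamped `j`/`k` selection behavior the README describes can be sketched as simple index arithmetic. This is an illustrative sketch only — the function names are hypothetical and not taken from the actual repository:

```rust
// Illustrative sketch of the Vim-style selection logic (j / k) described
// in the README; helper names are hypothetical, not from the repo.

/// `j` / down arrow: move the selection down, clamping at the last story.
fn move_down(selected: usize, len: usize) -> usize {
    if len == 0 { 0 } else { (selected + 1).min(len - 1) }
}

/// `k` / up arrow: move the selection up, clamping at index 0.
fn move_up(selected: usize) -> usize {
    selected.saturating_sub(1)
}

fn main() {
    let len = 30; // the TUI fetches the top 30 stories
    let mut sel = 0;
    sel = move_down(sel, len); // j
    sel = move_down(sel, len); // j
    sel = move_up(sel);        // k
    assert_eq!(sel, 1);
}
```

Clamping (rather than wrapping) keeps the cursor pinned at the list edges, which matches the distraction-free feel the README aims for.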
2024-11-08T01:49:56
en
train
42,056,341
edward
2024-11-06T00:06:30
Docling
null
https://simonwillison.net/2024/Nov/3/docling/
5
0
null
null
null
null
null
null
null
null
null
null
train
42,056,352
thunderbong
2024-11-06T00:08:52
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,353
aard
2024-11-06T00:09:00
AI That Can Invent AI Is Coming. Buckle Up
null
https://www.forbes.com/sites/robtoews/2024/11/03/ai-that-can-invent-ai-is-coming-buckle-up/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,056,362
sandwichsphinx
2024-11-06T00:10:40
Spontaneous Economic Order (2015)
null
https://link.springer.com/article/10.1007/s00191-015-0432-6
1
1
[ 42056563 ]
null
null
null
null
null
null
null
null
null
train
42,056,366
aard
2024-11-06T00:11:39
Software Makers Encouraged to Stop Using C/C++ by 2026
null
https://www.techrepublic.com/article/cisa-fbi-memory-safety-recommendations/
4
3
[ 42056522, 42056461, 42056467, 42057385 ]
null
null
null
null
null
null
null
null
null
train
42,056,368
thunderbong
2024-11-06T00:11:50
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,374
aard
2024-11-06T00:14:29
Be Polite to ChatGPT and Other AIs
null
https://www.forbes.com/sites/bernardmarr/2024/11/05/why-you-should-be-polite-to-chatgpt-and-other-ais/
7
3
[ 42056577, 42056465 ]
null
null
null
null
null
null
null
null
null
train
42,056,388
altryne1
2024-11-06T00:18:20
Show HN: Highjacking a Halloween toy to ID kids costumes with Gemini and 11labs
null
https://wandb.ai/wandb_fc/halloweave/reports/Hacking-a-skeleton-to-detect-kids-costumes-and-greet-them-with-a-custom-spooky-message--VmlldzoxMDAzOTkyMw
5
0
null
null
null
no_article
null
null
null
null
2024-11-08T21:14:07
null
train
42,056,394
FriedPickles
2024-11-06T00:19:00
China blocks Skydio battery supply
null
https://www.ft.com/content/b1104594-5da7-4b9a-b635-e7a80ab68fad
6
1
[ 42056402 ]
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
Chinese sanctions hit US drone maker supplying Ukraine
2024-10-31T13:27:24.686Z
Ryan McMorrow, Demetri Sevastopulo, Kathrin Hille
2024-11-07T23:30:16
null
train
42,056,409
jpavel2
2024-11-06T00:23:14
WASM memory64 reaches stage 4
null
https://github.com/WebAssembly/memory64/issues/43
3
0
[ 42056594 ]
null
null
no_error
Tracking issue for phase 4 advancement · Issue #43 · WebAssembly/memory64
null
WebAssembly
Closed · Tracking issue for phase 4 advancement #43 · opened by sbc100 on Nov 21, 2023 · 13 comments

- Update spec text ([spec] Update syntax spec for memory64 #50)
- Implement table64 extension #51
- Rebase interpreter changes
- JS API Updates
- Spec tests
- Implemented in at least two Web VMs
  - v8: https://crbug.com/41480462 / https://crbug.com/364917766
- Implemented in at least one toolchain
- BigInt parameters for the JS API? #68
- Update JS API in v8: https://g-issues.chromium.org/issues/358381633
- Text format: new abbreviation for inline data? #29
- Restricting offsets to match the index type #76
- Rename Index Type to something else, perhaps Offset Type #67
- Tests needed for recent spec updates #80

A while ago I heard about the intent to include 64-bit tables as well, in order to be able to compile function pointers consistently. What became of that?

That request came from @matthias-blume who is working on an experimental wasm to native compiler without a sandbox. We have a workaround in llvm that truncates all function pointers before call indirect. It adds an instruction for every call indirect in the program, so it could save a bit on code size and complexity if we could remove it.

Do you think it makes sense to roll it into this proposal?

Yeah, from my perspective, Wasm64 is incomplete without it, and it leaves the language in an odd space. Do you think it would still be realistic to extend the proposal?

What's the status of the spec document? It would be good to get the changes in well ahead of the phase 4 vote at the f2f meeting.

I need to pick up this open PR: #50

Now with only the table64 extension remaining, can we get an estimate please?

We are just waiting on the second implementation of table64 now (in spidermonkey). My understanding is that it is underway, so we should be able to vote on the phase advancement in the next one or two meetings.

#80 should be on this list as well, presumably.

Indeed. Added #80, hopefully we can get those updates done by next meeting and vote on this.
2024-11-08T20:54:55
en
train
42,056,411
mingli_yuan
2024-11-06T00:23:42
Code robots, win matches, rank up
null
https://robotrumble.org/
2
0
[ 42056412 ]
null
null
null
null
null
null
null
null
null
train
42,056,420
Michelangelo11
2024-11-06T00:24:48
Only 5.3% of US welders are women. After years as a professor, I became one
null
https://theconversation.com/only-5-3-of-welders-in-the-us-are-women-after-years-as-a-writing-professor-i-became-one-heres-what-i-learned-240431
247
345
[ 42056718, 42057955, 42069273, 42056630, 42056597, 42068220, 42057274, 42064077, 42057079, 42069193, 42066646, 42063782, 42057624, 42069187, 42063652, 42064213, 42057648, 42066486, 42064087, 42066269, 42057618, 42066260, 42068163, 42057441, 42070296, 42056914, 42064112, 42058343, 42064495, 42063209, 42063142, 42056821, 42056790 ]
null
null
null
null
null
null
null
null
null
train
42,056,426
ignasheahy
2024-11-06T00:26:11
Show HN: Reddit/Discord alternative with unique look and feel and features
null
https://heahy.com/c/hackernewschat
12
1
[ 42056487 ]
null
null
null
null
null
null
null
null
null
train
42,056,433
JumpCrisscross
2024-11-06T00:28:14
Perplexity raising new funds at $9B valuation
null
https://www.reuters.com/technology/artificial-intelligence/perplexity-raising-new-funds-9-bln-valuation-source-says-2024-11-06/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,056,434
Uptrenda
2024-11-06T00:28:28
Show HN: Peer-to-Peer Direct Connections
null
https://roberts.pm/index.php/2024/11/05/p2pd/
7
4
[ 42057081 ]
null
null
null
null
null
null
null
null
null
train
42,056,442
pseudolus
2024-11-06T00:29:56
He's Gleaning the Design Rules of Life to Re-Create It
null
https://www.quantamagazine.org/hes-gleaning-the-design-rules-of-life-to-re-create-it-20241104/
3
0
null
null
null
missing_parsing
He’s Gleaning the Design Rules of Life to Re-Create It | Quanta Magazine
2024-11-04T15:19:46+00:00
By Shi En Kim November 4, 2024
Yizhi “Patrick” Cai is coordinating a global effort to write a complete synthetic yeast genome. If he succeeds, the resulting cell will be the artificial life most closely related to humans to date. Yizhi “Patrick” Cai, a synthetic biologist at the University of Manchester, is writing a synthetic genome that could be used as a programming substrate for life. Dave Phillips for Quanta Magazine Introduction Say you live in an old house, and you want to make a slew of renovations; at some point, the most efficient way forward is to build a new house altogether. A similar clean-slate approach is the guiding principle behind an international effort to create the world’s first synthetic eukaryotic genome — the set of molecular instructions that govern complex life on Earth, including humans. At present, genetic engineering techniques typically work by taking one of nature’s given genomic templates (the old house) and introducing individual mutations (renovations) to the DNA code. However, the ability to write a genome from scratch would unlock greater creativity in designing a desired genome — your dream home — and producing new kinds of organisms that do things that nature cannot. “If you see the genome as the operating system of organisms, then writing the genome is basically giving you an opportunity to reorganize the genome and to program living organisms,” said Yizhi “Patrick” Cai of the University of Manchester. In the future, researchers could engineer new cells with novel abilities and greater tolerance for environmental conditions such as heat and drought. Cai is the coordinator of the Synthetic Yeast Genome Project, a global consortium aimed at redesigning the genome for brewer’s yeast (Saccharomyces cerevisiae). Why yeast? The single-celled fungus is a close cousin to humans, evolutionarily speaking: At least 20% of human genes have a counterpart in yeast. Many cellular processes that unfold in the human body can be replicated in yeast as a research proxy. 
“Yeast is very important as a model organism for biotechnology and also for human health,” Cai said. He called it “the best-understood organism on this planet,” which gives his team a head start as they work to understand its genomic design. For researchers drumming up a brand-new genome, the fabrication itself isn’t the hard part; scientists already know how to assemble DNA base pairs into strings. Instead, the main challenge lies in devising a genomic sequence that produces a viable organism. To do that, scientists first need to glean the basic principles and design rules for what makes a working genome, and to identify any pitfalls to avoid. Cai holds up a petri dish where yeast grow. Yeast is a single-celled fungus that shares its eukaryotic cell type with humans. At least 20% of human genes have a counterpart in yeast, which has made it a useful model organism. Dave Phillips for Quanta Magazine The combinatorial space for the yeast genome — natural or synthetic — is vast; exploring it and tweaking each genetic component individually to understand its purpose and context is impractically slow. To accelerate the process, Cai’s team uses a tool called Scramble to probe yeast’s natural genome. It works by engineering yeast cells with the ability to randomize their genomes on command. Shuffling one’s genome is usually fatal business, but some cells might land on a workable combination by chance. Cai’s team can then work out what’s common among the lucky survivors to derive rules for genomic viability. With enough know-how, the researchers can then cobble together a fully synthetic genome that abides by nature’s design rules to produce a working eukaryotic cell. In November 2023, the project unveiled its latest advance: a synthetic yeast cell with 50% reengineered DNA. More than six of the yeast’s 16 chromosomes were entirely synthetic. One was even a neochromosome — a chromosome that has no natural counterpart. 
Although this 50% milestone was 10 years in the making, the remaining half of the yeast genome is potentially an easier lift, Cai said, now that the team has the right tools. Quanta spoke with Cai about programming life, reading and writing the genome like a book, and how wearing a kilt in a New England winter marked a turning point in his career. The interview has been condensed and edited for clarity. How did you become interested in the world of synthetic biology? I grew up in China, and I went to school to learn computer engineering. Then, when I joined the University of Edinburgh to do my master’s in robotics and linguistics — that was 2005 — I found a group of people that, instead of trying to program robots, were trying to program bacteria. I was recruited to be part of the team. We participated in this international genetic engineering competition hosted by the Massachusetts Institute of Technology every year. The goal was: Can you program living organisms to do something interesting? Synthetic biology isn’t only about creating new kinds of life, but also about better appreciating how life works, Cai said. “By building new genomes, new entities, it gives us a greater understanding of biology.” Dave Phillips for Quanta Magazine We programmed Escherichia coli bacteria to become biosensors for arsenic contamination in drinking water. If you regularly drink water which is contaminated by arsenite, you may have all kinds of problems, ranging from skin cancer to kidney cancer. [The engineered] bacteria could sense different levels of arsenite, up to five molecules per billion. They’re very sensitive. You can basically give the bacteria to local villagers in Bangladesh, ask them to add some water from a well, incubate it at room temperature, and tomorrow morning they can dip a pH test paper and see the color change. So it’s like a pregnancy test: very simple. The arsenic biosensor was a huge hit at MIT. I vividly remember it was in November. 
I went to Boston to compete. Because we were the first team from Scotland, they made me dress in a Scottish kilt. It was cold, I can tell you — Boston in November, wearing a kilt. That was the turning point of my career, really — looking at living organisms as a programming substrate. How did you transition from bacteria to yeast? I did my postdoc at Johns Hopkins University in Baltimore, where I met Jef Boeke, [a geneticist now at New York University]. He was running a small undergraduate course to synthesize the yeast genome. I said to him, “This will never get done before you retire, so we should make it an international effort.” “If you want to build an airplane, you don’t go by mimicking what a bird does,” Cai said. “You first derive the first principles of aerodynamics, and based on aerodynamics, you build a plane. That’s why our plane does not look like a bird.” Dave Phillips for Quanta Magazine I helped Jef set up an international consortium with 10 universities from four continents, and we have been working together over the last decade. Every one of us gets a chromosome or two. We refactor the chromosome — we make it to our specification. And then finally we merge it into a new genome. Yeast is a simple eukaryote, but it’s much more complex [than bacteria] in its genome. For one, bacteria have one chromosome; yeast has 16. The bacterial genome is much more compact. Yeast also has many additional elements. Why synthesize the yeast genome? The pursuit of the first synthetic genome is to really understand what the first principles of genome organization are. What I cannot build, I don’t understand [to paraphrase Richard Feynman]. This is the ultimate test. We really get to understand biology and life. Let me put it another way. Once you can read a book, you can read many books. That’s the equivalent of sequencing the genome. But only when it comes to writing do you really put your understanding to the test. Reading a genome is passive. 
But when you start writing, that’s a creative process. You will have much better control over the entity you engineer. You can then start thinking about reengineering life to address some of the very important questions in society today. And just to give you some examples: You can engineer the plant genome to be more resilient against climate change; you can engineer it to have better yields. This can address the food-shortage problem facing society. You can engineer, for instance, pig organs that are suitable for organ transplant. These are great examples of applications genome engineering can do for us. Cai brainstorms genetic design in his office at the University of Manchester. By creating synthetic eukaryotic life, he hopes to learn organizational design rules that make genomes work. Dave Phillips for Quanta Magazine With the current technology, you can make transgenic organisms by editing. But you do not really have complete freedom of expression. That’s because you are relying on natural templates to tinker with. What’s your approach to writing the first draft of the synthetic yeast genome? The way to write the genome is piece by piece. You take an existing book chapter; you rewrite the first paragraph, and then you put that into the cell. Now the yeast becomes your proofreader. The yeast will start reading your rewritten first paragraph and say, “Does this make sense?” If it doesn’t make sense, the yeast will complain. It’ll become sick, or it will become unviable. You do that paragraph by paragraph, and eventually you end up with a new chapter. Imagine that yeast has 16 chapters, which are its 16 chromosomes. So each of us [on the team], we take one chapter, and we write paragraph by paragraph. This is a bottom-up approach to rewriting the genome. How are you coming up with the design blueprint of the new synthetic genome? We don’t use Scramble to write the genome. But we use Scramble to devise new genomes. 
If you see the yeast genome as a deck of cards, where each card is a gene, this Scramble system allows you to shuffle the order of the cards, to invert some cards, to throw away some cards, to duplicate some cards. The genome reshuffling technology Scramble gives you an opportunity to systematically sample all combinations possible. So instead of making one genome, you’re effectively making billions of genomes at the same time. Cai uses a pipette in his laboratory. The synthetic biologist sees DNA as a programming substrate: Once he knows how to write it, he can reorganize the genome and use it to program living organisms with new traits. Dave Phillips for Quanta Magazine We have the capability to generate many alternative designs, and then we can select for the best performers. So let’s say I turn on this Scramble system and I generate all possible permutations of the genome. Then I ask a question, such as: Who can survive at 40 degrees Celsius? In that particular condition, the guys that survive are the guys which are truly interesting. They will never exist in nature. Then you can sequence them and say, “What kind of rearrangement do I need to give the yeast this special power?” This technology allows us to practically evolve strains to our specification. It also gives us insight on how genes are organized to give us a particular characteristic. That’s the beauty of coupling precise engineering with direct evolution. The natural genome will not allow you to do that. How exactly is the novel genome synthetic and different from the natural version? It’s synthetic because all the DNA sequences are chemically resynthesized. They’re not coming from their natural inheritances. The genome is about 20% smaller than the wild-type genome. We get rid of the junk that is not useful. The sequence composition is drastically different from their parents’. I’m happy to say all the synthetic chromosomes are really fit. 
That is surprising because every chromosome has thousands of edits, and we still managed to maintain really high fitness. That’s not just because we’re careful; it’s also because we’re being conservative. People were saying we’re aggressive because we put thousands of edits on each chromosome. But now we look back and say, “Actually, there’s so much plasticity in the genome.” You try to make all these changes, but cells still can take up all these tortures you impose on them. So if we’re able to do the next version, we should be much more aggressive. Cai imagines a future in which scientists can write synthetic genomes to create life that addresses societal problems such as climate change and disease. Dave Phillips for Quanta Magazine What’s next for this work? I’m particularly excited about a new project in my group, which is to try to derive what’s the minimal genome. What’s the minimal set of genes which can sustain life? The way we are tackling this is, we’re taking this synthetic yeast and using Scramble to reduce the genome. Can we derive the minimal genome by scrambling out anything which is nonessential? Last year, your team announced that the Synthetic Yeast Genome Project is 50% complete. What do you mean by that? It’s the sheer number of DNA [nucleotides] that have been incorporated into the genome. If you look at yeast as a book, that’s 16 chapters. But each chapter is a different length; they’re not equal. We have integrated six and a half chromosomes, but these are larger chromosomes, 50% of the characters. We have the technology to replace the second half. But you know, we might be surprised. We worry about what we call “synthetic lethality.” Maybe at some point when you change something here in one chapter, it’s fine; you change something here in another chapter, it’s fine. But they just cannot coexist in the same book. 
This is the biggest problem, that if we recombine multiple synthetic chromosomes together, you will see incompatibility between the edits on two chromosomes. So how much longer now? Every time I’ve been asked about it, I say 12 months from now. The last mile is always the most difficult. But it’s very encouraging. We are well on our way. So, probably something like the end of 2025? Ask me again next year. Next article Math’s ‘Bunkbed Conjecture’ Has Been Debunked
2024-11-08T18:22:43
null
train
42,056,448
FriedPickles
2024-11-06T00:32:45
Zoox Reinterprets FMVSS Regulations
null
https://www.theverge.com/2024/11/2/24285399/amazon-zoox-robotaxi-nhtsa-fmvss-comply
3
0
null
null
null
null
null
null
null
null
null
null
train
42,056,453
hn_acker
2024-11-06T00:33:55
No, Section 230 Doesn't 'Circumvent' the First Amendment
null
https://www.techdirt.com/2024/11/05/no-section-230-doesnt-circumvent-the-first-amendment-but-this-harvard-article-circumvents-reality/
10
3
[ 42061095, 42056454, 42056650 ]
null
null
null
null
null
null
null
null
null
train
42,056,462
hn_acker
2024-11-06T00:34:59
AI in Criminal Justice Is the Trend Attorneys Need to Know About
null
https://www.eff.org/deeplinks/2024/11/ai-criminal-justice-trend-attorneys-need-know-about
8
0
null
null
null
null
null
null
null
null
null
null
train
42,056,472
pseudolus
2024-11-06T00:36:23
How do you vote from space? NASA astronauts cast 2024 election ballots from ISS
null
https://www.space.com/space-exploration/international-space-station/how-do-you-vote-from-space-nasa-astronauts-cast-2024-election-ballots-from-iss
2
1
[ 42056482 ]
null
null
null
null
null
null
null
null
null
train
42,056,515
pseudolus
2024-11-06T00:47:30
The Antitrust Revolution
null
https://harpers.org/archive/2024/10/the-antitrust-revolution-big-tech-barry-c-lynn/
58
37
[ 42056717, 42056711, 42057255, 42056697, 42057182, 42057325, 42056673, 42056692, 42056647, 42056621, 42056682 ]
null
null
null
null
null
null
null
null
null
train
42,056,521
jonbaer
2024-11-06T00:50:09
Emergence: The Hidden Power of Collective Behavior
null
https://medium.com/@ryanchen_1890/emergence-the-hidden-power-of-collective-behavior-e02e05c72786
5
1
[ 42056536 ]
null
null
null
null
null
null
null
null
null
train
42,056,537
mslate
2024-11-06T00:55:24
If you care about safety, ride an e-bike
null
https://maxmautner.com/2024/11/05/ebikes-safety.html
2
6
[ 42057625, 42056664, 42056730 ]
null
null
no_error
If you care about safety, ride an e-bike
2024-11-05T00:00:00+00:00
Max Mautner
E-bikes are safer than regular bikes because:

They’re faster
Since your speed is higher, there are fewer cars that pass you per mile traveled. Less passing means safer.

They’re less fatiguing
Since there is less fatigue to e-biking you are a more alert operator of your vehicle & you are more willing to exercise patience.

They provide better lighting
As long as your e-bike’s battery is charged then you will have a front and rear light. Much easier to re-charge one device (the bike) than monitor a front and rear USB dongle, which often have no capacity indicator. This translates to better daytime and night-time visibility to car drivers.

They provide a better dead-stop experience
Since starting from a dead-stop is easier (whether with pedal assistance or a throttle) than on a regular bike, you are more likely to respect stop signs. On a regular bike, stopping at all is a total hassle whereas with an e-bike there are lower physical stakes to coming to a complete stop and respecting motor vehicle traffic laws. This makes you bike more predictably and reduces your crash hazard.

Unlike cars, the danger that you create for vulnerable road users is very low (though non-zero). As with all things: be considerate of others. While e-bikes are limited in the speed that they can reach, they are also nearly silent–so pass walkers and animals with care.

There are a multitude of other e-bike benefits beyond their relative safety–and here I will get more poetic:

- Carrying heavy cargo is easier than on a regular bike. In fact, it is a tool for economic mobility and commerce. And that cargo includes pets.
- It can handle a variety of roadways (but of course, so can regular bikes). In all weather.
- You can go camping on it (although you need to watch your range, as your battery can run out).
- You can go to work on it (but of course, so can regular bikes). Or you can bring them on ferry boats.
- Moms can ride them. Dads can ride them. Kids can ride them. And grandparents can ride them.

There are an incredible set of options. No matter if you have motorized electric assistance or not, ride bikes!

November 5, 2024 · bicycling, transportation
2024-11-08T13:42:56
en
train
42,056,543
jf
2024-11-06T00:57:18
Curl -v HTTPS://Google.com
null
https://www.youtube.com/watch?v=atcqMWqB3hw
20
2
[ 42056555, 42063170 ]
null
null
no_article
null
null
null
null
2024-11-08T05:06:08
null
train
42,056,558
wbemaker
2024-11-06T01:01:10
We tried most Backlink building methods – this is what worked for us
I'm a bit frustrated with all the BS backlink advice out there.

We’ve tried just about every recommended method, and after a lot of trial and error, here’s what’s actually worked for us and what hasn’t.

The BS Backlink Strategies That Don’t Work (or are not scalable for most people)

Buying Backlinks: We tried this route, but quality backlinks can cost hundreds, and if you’re not generating tons of revenue, it’s not sustainable. We once hired someone who charged $500 just for the outreach, plus $50-100 per link, and the links were questionable at best. Needless to say, we stopped.

Guest Posts: Reaching out to blogs to offer guest posts might sound good, but the reality is that hardly anyone cares.

Broken Link Method: The idea is to find broken links on similar sites, then reach out offering your page as a replacement. We tried it, but no one cared about their broken links enough to update them, and our emails got ignored.

Unlinked Mentions: This involves finding sites that mention your brand without linking to you, and asking them to add a link. We reached out to a bunch of sites, and, again, no one cared.

The Backlink Strategies That Actually Worked (and are scalable)

Our most effective backlinks came from connecting with quality websites in our industry.

Here’s what actually moved the needle:

Networking & Cross-Promotions: We’re in some WhatsApp groups with others in our industry, attend conferences, and connect with people via zoom when possible. Once you build these connections, cross-promotions, like blog posts, backlinks, or newsletter swaps, convert easily.

Creating Listicles: This was a great find! We create listicles like “Top X Tools for [Task relevant to your niche]” without any links initially. Then, we reach out to the companies we’ve featured to let them know they’ve been included. We offer them the chance to secure a link in the listicle in exchange for a backlink to our site. By leading with the free article feature and then pitching the link exchange, we get a much higher response rate. This method consistently yields about a 12% conversion rate. For every 10 companies we reach out to, we secure one backlink exchange. And you don’t need to keep writing new listicles, just replace the companies that did not respond with new ones.

Using apollo . io for Link Exchange Outreach: This involves finding niche sites with similar DR, building a list, importing it to Apollo to fetch the contacts, and setting up an email sequence to reach out automatically offering a simple collaboration and link exchange. Our success rate was about 3%, so 3 backlinks for every 100 sites (we send 50 emails a day). We use a free tool to bulk export lists of sites in the same niche and similar DR (I can share it in the comments). An Apollo subscription costs around $50/month, which makes each backlink quite cheap.

Using RankChase: RankChase is a platform that matches you with quality websites in your niche with similar DR that are also looking for link exchanges. You add your site, and they send link exchange opportunities to your inbox with contact info. The success rate is around 50-60%. For every two matches, we generally get one backlink, and you can get a few matches a week. RankChase is free to join, but for $30/mo you get 5x more matches, so it is a great way to scale backlinks with little effort and for cheap.

Why Link Exchanges Are Actually Worth It

Some people say that Google does not like link exchanges, but the truth is everyone’s just guessing based on stuff they’ve read. No one really knows exactly how the Google algorithm works. It’s extremely common for niche sites to link to each other, and many are industry partners. We’ve never seen penalties from exchanging relevant, contextual links with high-quality sites, and haven’t met anyone else who has either. Relevant link exchanges were actually suggested by our SEO consultant.

Happy to share more details on any of these methods!
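The outreach arithmetic quoted in that post (50 emails/day at roughly 3%, about 12% on listicle outreach) can be sanity-checked with a tiny sketch. The helper below is purely illustrative and not part of any tool the post mentions:

```rust
// Back-of-the-envelope yield from the quoted outreach conversion rates.

/// Expected backlinks from `emails` sent at a `rate` conversion (0.0–1.0).
fn expected_links(emails: u32, rate: f64) -> f64 {
    emails as f64 * rate
}

fn main() {
    // Apollo sequence: "3 backlinks for every 100 sites" at ~3%.
    assert!((expected_links(100, 0.03) - 3.0).abs() < 1e-9);
    // 50 emails/day at ~3% works out to about 1.5 links per day.
    assert!((expected_links(50, 0.03) - 1.5).abs() < 1e-9);
    // Listicle outreach: ~12% is roughly one link per 10 companies contacted.
    assert!((expected_links(10, 0.12) - 1.2).abs() < 1e-9);
}
```

Note the quoted "one exchange per 10 companies" is a slight rounding of the 12% figure (10 × 0.12 = 1.2).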
null
2
0
null
null
null
null
null
null
null
null
null
null
train
42,056,571
PaulHoule
2024-11-06T01:03:54
Engineers create bendable, self-heating and healing concrete
null
https://techxplore.com/news/2024-10-bendable-concrete.html
3
0
null
null
null
http_other_error
Just a moment...
null
null
2024-11-08T05:21:47
null
train
42,056,581
todsacerdoti
2024-11-06T01:06:24
MinPin: Yet Another Pin Proposal
null
https://smallcultfollowing.com/babysteps/blog/2024/11/05/minpin/
1
0
null
null
null
no_error
yet another pin proposal · baby steps
null
null
This post floats a variation of boats’ UnpinCell proposal that I’m calling MinPin. MinPin’s goal is to integrate Pin into the language in a “minimally disruptive” way – and in particular a way that is fully backwards compatible. Unlike Overwrite, MinPin does not attempt to make Pin and &mut “play nicely” together. It does, however, leave the door open to add Overwrite in the future, and I think it helps to clarify the positives and negatives that Overwrite would bring.

TL;DR: Key design decisions

Here is a brief summary of MinPin’s rules:

- The pinned keyword can be used to get pinned variations of things:
  - In types, pinned P is equivalent to Pin<P>, so pinned &mut T and pinned Box<T> are equivalent to Pin<&mut T> and Pin<Box<T>> respectively.
  - In function signatures, pinned &mut self can be used instead of self: Pin<&mut Self>.
  - In expressions, pinned &mut $place is used to get a pinned &mut that refers to the value in $place.
- The Drop trait is modified to have fn drop(pinned &mut self) instead of fn drop(&mut self). However, impls of Drop are still permitted (even encouraged!) to use fn drop(&mut self), but it means that your type will not be able to use (safe) pin-projection.
For many types that is not an issue; for futures or other “address sensitive” types, you should use fn drop(pinned &mut self).

The rules for field projection from a reference s: pinned &mut S are based on whether or not Unpin is implemented:

- Projection is always allowed for fields whose type implements Unpin.
- For fields whose types are not known to implement Unpin:
  - If the struct S is Unpin, &mut projection is allowed but not pinned &mut.
  - If the struct S is !Unpin and does not have a fn drop(&mut self) method, pinned &mut projection is allowed but not &mut.
  - If the type checker does not know whether S is Unpin or not, or if the type S has a Drop impl with fn drop(&mut self), neither form of projection is allowed for fields that are not Unpin.

There is a type struct Unpinnable<T> { value: T } that always implements Unpin.

Design axioms

Before I go further I want to lay out some of my design axioms (beliefs that motivate and justify my design):

- Pin is part of the Rust language. Despite Pin being entirely a “library-based” abstraction at present, it is very much a part of the language semantics, and it deserves first-class support. It should be possible to create pinned references and do pin projections in safe Rust.
- Pin is its own world. Pin is only relevant in specific use cases, like futures or in-place linked lists.
- Pin should have zero-conceptual-cost. Unless you are writing a Pin-using abstraction, you shouldn’t have to know or think about pin at all.
- Explicit is possible. Automatic operations are nice, but it should always be possible to write operations explicitly when needed.
- Backwards compatible. Existing code should continue to compile and work.

Frequently asked questions

For the rest of the post I’m just going to go into FAQ mode.

I see the rules, but can you summarize how MinPin would feel to use?

Yes. I think the rule of thumb would be this. For any given type, you should decide whether your type cares about pinning or not. Most types do not care about pinning.
They just go on using &self and &mut self as normal. Everything works as today (this is the “zero-conceptual-cost” goal).

But some types do care about pinning. These are typically future implementations, but they could be other special case things. In that case, you should explicitly implement !Unpin to declare yourself as pinnable. When you declare your methods, you have to make a choice:

- Is the method read-only? Then use &self, that always works.
- Otherwise, use &mut self or pinned &mut self, depending:
  - If the method is meant to be called before pinning, use &mut self.
  - If the method is meant to be called after pinning, use pinned &mut self.

This design works well so long as all mutating methods can be categorized into before-or-after pinning. If you have methods that need to be used in both settings, you have to start using workarounds – in the limit, you make two copies.

How does MinPin compare to UnpinCell?

Those of you who have been following the various posts in this area will recognize many elements from boats’ recent UnpinCell. While the proposals share many elements, there is also one big difference between them that makes a big difference in how they would feel when used. Which is overall better is not yet clear to me.

Let’s start with what they have in common. Both propose syntax for pinned references/borrows (albeit slightly different syntax), and both include a type for “opting out” from pinning (the eponymous UnpinCell<T> in UnpinCell, Unpinnable<T> in MinPin).
Both also have a similar “special case” around Drop in which writing a drop impl with fn drop(&mut self) disables safe pin-projection.

Where they differ is how they manage generic structs like WrapFuture<F>, where it is not known whether or not they are Unpin.

struct WrapFuture<F: Future> {
    future: F,
}

Given a reference r: pinned &mut WrapFuture<F>, the question is whether we can project the field future:

impl<F: Future> WrapFuture<F> {
    fn method(pinned &mut self) {
        let f = pinned &mut self.future;
        //      -----------------------
        //      Is this allowed?
    }
}

There is a specific danger case that both sets of rules are trying to avoid. Imagine that WrapFuture<F> implements Unpin but F does not – e.g., imagine that you have an impl<F: Future> Unpin for WrapFuture<F>. In that case, the referent of the pinned &mut WrapFuture<F> reference is not actually pinned, because the type is unpinnable. If we permitted the creation of a pinned &mut F, where F: !Unpin, we would be under the (mistaken) impression that F is pinned. Bad.

UnpinCell handles this case by saying that projecting from a pinned &mut is only allowed so long as there is no explicit impl of Unpin for WrapFuture (“if [WrapFuture<F>] implements Unpin, it does so using the auto-trait mechanism, not a manually written impl”). Basically: if the user doesn’t say whether the type is Unpin or not, then you can do pin-projection. The idea is that if the self type is Unpin, that will only be because all fields are unpin (in which case it is fine to make pinned &mut references to them); if the self type is not Unpin, then the field future is pinned, so it is safe.

In contrast, in MinPin, this case is only allowed if there is an explicit !Unpin impl for WrapFuture:

impl<F: Future> !Unpin for WrapFuture<F> {
    // This impl is required in MinPin, but not in UnpinCell
}

Explicit negative impls are not allowed on stable, but they were included in the original auto trait RFC.
The idea is that a negative impl is an explicit, semver-binding commitment not to implement a trait. This is different from simply not including an impl at all, which allows for impls to be added later.

Why would you prefer MinPin over UnpinCell or vice versa?

I’m not totally sure which of these is better. I came to the !Unpin impl based on my axiom that pin is its own world – the idea was that it was better to push types to be explicitly unpin all the time than to have “dual-mode” types that masquerade as sometimes pinned and sometimes not.

In general I feel like it’s better to justify language rules by the presence of a declaration than the absence of one. So I don’t like the idea of saying “the absence of an Unpin impl allows for pin-projection” – after all, adding impls is supposed to be semver-compliant. Of course, that’s much less true for auto traits, but it can still be true.

In fact, Pin has had some unsoundness in the past based on unsafe reasoning that was justified by the lack of an impl. We assumed that &T could never implement DerefMut, but it turned out to be possible to add weird impls of DerefMut in very specific cases. We fixed this by adding an explicit impl<T> !DerefMut for &T impl.

On the other hand, I can imagine that many explicitly implemented futures might benefit from being able to be ambiguous about whether they are Unpin.

What does your design axiom “Pin is its own world” mean?

The way I see it is that, in Rust today (and in MinPin, pinned places, UnpinCell, etc.), if you have a T: !Unpin type (that is, a type that is pinnable), it lives a double life. Initially, it is unpinned, and you can move it, &-ref it, or &mut-ref it, just like any other Rust value.
But once a !Unpin value becomes pinned to a place, it enters a different state, in which you can no longer move it or use &mut; you have to use pinned &mut:

flowchart TD
    Unpinned[ Unpinned: can access 'v' with '&' and '&mut' ]
    Pinned[ Pinned: can access 'v' with '&' and 'pinned &mut' ]
    Unpinned -- pin 'v' in place (only if T is '!Unpin') --> Pinned

One-way transitions like this limit the amount of interop and composability you get in the language. For example, if my type has &mut methods, I can’t use them once the type is pinned, and I have to use some workaround, such as duplicating the method with pinned &mut. In this specific case, however, I don’t think this transition is so painful, and that’s because of the specifics of the domain: futures go through a pretty hard state change where they start in “preparation mode” and then eventually start executing. The sets of methods you need at these two phases are quite distinct. So this is what I meant by “pin is its own world”: pin is not very interoperable with Rust, but this is not as bad as it sounds, because you don’t often need that kind of interoperability.

How would Overwrite affect pin being in its own world?

With Overwrite, when you pin a value in place, you just gain the ability to use pinned &mut; you don’t give up the ability to use &mut:

flowchart TD
    Unpinned[ Unpinned: can access 'v' with '&' and '&mut' ]
    Pinned[ Pinned: can additionally access 'v' with 'pinned &mut' ]
    Unpinned -- pin 'v' in place (only if T is '!Unpin') --> Pinned

Making pinning into a “superset” of the unpinned capabilities means that pinned &mut can be coerced into an &mut (it could even be a “true subtype”, in Rust terms).
This in turn means that a pinned &mut Self method can invoke &mut self methods, which helps to make pin feel like a smoothly integrated part of the language.

So does the axiom mean you think Overwrite is a bad idea?

Not exactly, but I do think that if Overwrite is justified, it is not on the basis of Pin, it is on the basis of immutable fields. If you just look at Pin, then Overwrite does make Pin work better, but it does that by limiting the capabilities of &mut to those that are compatible with Pin. There is no free lunch! As Eric Holk memorably put it to me in privmsg:

    It seems like there’s a fixed amount of inherent complexity to pinning, but it’s up to us how we distribute it. Pin keeps it concentrated in a small area which makes it seem absolutely terrible, because you have to face the whole horror at once.

I think Pin as designed is a “zero-conceptual-cost” abstraction, meaning that if you are not trying to use it, you don’t really have to care about it. That’s worth maintaining, if we can. If we are going to limit what &mut can do, the reason to do it is primarily to get other benefits, not to benefit pin code specifically.

To be clear, this is largely a function of where we are in Rust’s evolution. If we were still in the early days of Rust, I would say Overwrite is the correct call. It reminds me very much of the IMHTWAMA, the core “mutability xor sharing” rule at the heart of Rust’s borrow checker. When we decided to adopt the current borrow checker rules, the code was about 85-95% in conformance. That is, although there was plenty of aliased mutation, it was clear that “mutability xor sharing” was capturing a rule that we already mostly followed, but not completely. Because combining aliased state with memory safety is more complicated, that meant that a small minority of code was pushing complexity onto the entire language.
Confining shared mutation to types like Cell and Mutex made most code simpler at the cost of more complexity around shared state in particular.

There’s a similar dynamic around replace and swap. Replace and swap are only used in a few isolated places and in a few particular ways, but all code has to be more conservative to account for that possibility. If we could go back, I think limiting Replace to some kind of Replaceable<T> type would be a good move, because it would mean that the more common case can enjoy the benefits: fewer borrow check errors and more precise programs due to immutable fields, and the ability to pass an &mut SomeType and be sure that your callee is not swapping the value under your feet (useful for the “scope pattern”, and it also enables Pin<&mut> to be a subtype of &mut).

Why did you adopt pinned &mut and not &pin mut as the syntax?

The main reason was that I wanted a syntax that scaled to Pin<Box<T>>. But also the pin! macro exists, making the pin keyword somewhat awkward (though not impossible).

One thing I was wondering about is the phrase “pinned reference” or “pinned pointer”. On the one hand, it is really a reference to a pinned value (which suggests &pin mut). On the other hand, I think this kind of ambiguity is pretty common. The main thing I have found is that my brain has trouble with Pin<P> because it wants to think of Pin as a “smart pointer” rather than a modifier on another smart pointer. pinned Box<T> feels much better this way.

Can you show me an example? What about the MaybeDone example?

Yeah, totally. So boats’ pinned places post introduced two futures, MaybeDone and Join. Here is how MaybeDone would look in MinPin, along with some inline comments:

enum MaybeDone<F: Future> {
    Polling(F),
    Done(Unpinnable<Option<F::Output>>), // see below
}

// `MaybeDone` is address-sensitive, so we opt out from `Unpin`
// explicitly. (I assumed opting out from `Unpin` was the *default*
// in my other posts.)
impl<F: Future> !Unpin for MaybeDone<F> { }

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(pinned &mut self, cx: &mut Context<'_>) {
        if let MaybeDone::Polling(fut) = self {
            // This is in fact pin-projection, although it's happening
            // implicitly as part of pattern matching. `fut` here has
            // type `pinned &mut F`. We are permitted to do this
            // pin-projection to `F` because we know that `Self: !Unpin`
            // (because we declared that to be true).
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Some(res));
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(pinned &mut self) -> Option<F::Output> {
        // This method is called after pinning, so it needs a
        // `pinned &mut` reference...
        if let MaybeDone::Done(res) = self {
            // ...but `take` is an `&mut self` method and
            // `F::Output: Unpin` is known to be true. Therefore we have
            // made the type in `Done` be `Unpinnable`, so that we can
            // do this swap.
            res.value.take()
        } else {
            None
        }
    }
}

Can you translate the Join example?

Yep! Here is Join:

struct Join<F1: Future, F2: Future> {
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

// Join is a custom future, so implement `!Unpin` to gain access to
// pin-projection.
impl<F1: Future, F2: Future> !Unpin for Join<F1, F2> { }

impl<F1: Future, F2: Future> Future for Join<F1, F2> {
    type Output = (F1::Output, F2::Output);

    fn poll(pinned &mut self, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // The calls to `maybe_poll` and `take_output` below are doing
        // pin-projection from `pinned &mut self` to a
        // `pinned &mut MaybeDone<F1>` (or `F2`) type. This is allowed
        // because we opted out from `Unpin` above.
        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);
        if self.fut1.is_done() && self.fut2.is_done() {
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}

What’s the story with Drop and why does it matter?

Drop’s current signature takes &mut self. But recall that once a !Unpin type is pinned, it is only safe to use pinned &mut. This is a combustible combination. It means that, for example, I can write a Drop that uses mem::replace or swap to move values out from my fields, even though they have been pinned.

For types that are always Unpin, this is no problem, because &mut self and pinned &mut self are equivalent. For types that are always !Unpin, I’m not too worried, because Drop as-is is a poor fit for them, and pinned &mut self will be better.

The tricky bit is types that are conditionally Unpin. Consider something like this:

struct LogWrapper<T> {
    value: T,
}

impl<T> Drop for LogWrapper<T> {
    fn drop(&mut self) { ... }
}

At least today, whether or not LogWrapper is Unpin depends on whether T: Unpin, so we can’t know it for sure.

The solution that boats and I both landed on effectively creates three categories of types:

- those that implement Unpin, which are unpinnable;
- those that do not implement Unpin but which have fn drop(&mut self), which are unsafely pinnable;
- those that do not implement Unpin and do not have fn drop(&mut self), which are safely pinnable.

The idea is that using fn drop(&mut self) puts you in this purgatory category of being “unsafely pinnable” (it might be more accurate to say “maybe unsafely pinnable”, since often at compilation time with generics we won’t know if there is an Unpin impl or not).
You don’t get access to safe pin projection or other goodies, but you can do projection with unsafe code (e.g., the way the pin-project-lite crate does it today).

It feels weird to have Drop let you use &mut self when other traits don’t.

Yes, it does, but in fact any method whose trait uses pinned &mut self can be implemented safely with &mut self so long as Self: Unpin. So we could just allow that in general. This would be cool because many hand-written futures are in fact Unpin, and so they could implement the poll method with &mut self.

Wait, so if Unpin types can use &mut self, why do we need special rules for Drop?

Well, it’s true that an Unpin type can use &mut self in place of pinned &mut self, but in fact we don’t always know when types are Unpin. Moreover, per the zero-conceptual-cost axiom, we don’t want people to have to know anything about Pin to use Drop. The obvious approaches I could think of all either violated that axiom or just… well… seemed weird:

Permit fn drop(&mut self) but only if Self: Unpin seems like it would work, since most types are Unpin. But in fact types, by default, are only Unpin if their fields are Unpin, and so generic types are not known to be Unpin. This means that if you write a Drop impl for a generic type and you use fn drop(&mut self), you will get an error that can only be fixed by implementing Unpin unconditionally. Because “pin is its own world”, I believe adding the impl is fine, but it violates “zero-conceptual-cost” because it means that you are forced to understand what Unpin even means in the first place.
I think in retrospect it’d be better if Unpin were implemented by default but not as an auto trait (i.e., all types were unconditionally Unpin unless they declare otherwise), but oh well.What is the forwards compatibility story for Overwrite?I mentioned early on that MinPin could be seen as a first step that can later be extended with Overwrite if we choose. How would that work?Basically, if we did the s/Unpin/Overwrite/ change, then we wouldrename Unpin to Overwrite (literally rename, they would be the same trait);prevent overwriting the referent of an &mut T unless T: Overwrite (or replacing, swapping, etc).These changes mean that &mut T is pin-preserving. If T: !Overwrite, then T may be pinned, but then &mut T won’t allow it to be overwritten, replaced, or swapped, and so pinning guarantees are preserved (and then some, since technically overwrites are ok, just not replacing or swapping). As a result, we can simplify the MinPin rules for pin-projection to the following:Given a reference s: pinned &mut S, the rules for projection of the field f are as follows:&mut projection is allowed via &mut s.f.pinned &mut projection is allowed via pinned &mut s.f if S: !UnpinWhat would it feel like if we adopted Overwrite?We actually got a bit of a preview when we talked about MaybeDone. Remember how we had to introduce Unpinnable around the final value so that we could swap it out? If we adopted Overwrite, I think the TL;DR of how code would be different is that most any code that today uses std::mem::replace or std::mem::swap would probably wind up using an explicit Unpinnable-like wrapper. I’ll cover this later.This goes a bit to show what I meant about there being a certain amount of inherent complexity that we can choose to distibute: in MinPin, this pattern of wrapping “swappable” data is isolated to pinned &mut self methods in !Unpin types. 
With Overwrite, it would be more widespread (but you would get more widespread benefits, as well).

Conclusion

My conclusion is that this is a fascinating space to think about! So fun.
2024-11-08T20:32:18
en
train
42,056,590
johnys
2024-11-06T01:08:54
Commercial Opportunities in a Renaissance of Self-Hosting
null
https://blog.johnys.io/commercial-opportunities-in-a-renaissance-of-self-hosting/
5
0
null
null
null
null
null
null
null
null
null
null
train
42,056,626
herbertl
2024-11-06T01:22:18
Do good work and your fits will follow
null
https://www.blackbirdspyplane.com/p/do-good-work-and-your-fits-will-follow
2
0
null
null
null
null
null
null
null
null
null
null
train
42,056,639
SanjayMehta
2024-11-06T01:25:26
First wooden satellite shares material with samurai sword sheaths
null
https://www.popsci.com/technology/wooden-satellite-lignosat/
1
1
[ 42056693 ]
null
null
null
null
null
null
null
null
null
train
42,056,644
mathgenius
2024-11-06T01:26:43
The Spiral of Wrath: The crash of Armavia flight 967
null
https://admiralcloudberg.medium.com/the-spiral-of-wrath-the-crash-of-armavia-flight-967-c7d84541f0f7
57
32
[ 42056868, 42058438, 42067436, 42066929, 42057807, 42056745 ]
null
null
body_too_long
null
null
null
null
2024-11-07T23:46:56
null
train
42,056,654
breezykermo
2024-11-06T01:29:34
Using Two ReMarkables
null
https://www.ohrg.org/using-two-remarkables
26
6
[ 42062293, 42063046, 42060307, 42066768, 42058027, 42062682 ]
null
null
null
null
null
null
null
null
null
train
42,056,667
notimewaste
2024-11-06T01:32:12
Is your website tested? or you are
null
https://blog.monkeytest.ai/is-your-webshttps://blog.monkeytest.ai/is-your-website-really-tested-go-beyond-with-ai-monkey-testing-b87338320c1b
1
0
null
null
null
null
null
null
null
null
null
null
train
42,056,687
tomohawk
2024-11-06T01:38:47
NYT Strike Negotiation Bogged Down by Outlandish and Illegal Demands
null
https://www.semafor.com/article/09/15/2024/new-york-times-tech-staff-threatens-strike-during-election-day-crunch
6
3
[ 42063155, 42056835, 42056857 ]
null
null
null
null
null
null
null
null
null
train
42,056,735
bookofjoe
2024-11-06T01:51:34
Apple expands iPhone satellite services deal, commits $1.1B to expand capacity
null
https://9to5mac.com/2024/11/01/apple-expands-iphone-satellite-services-deal/
13
0
null
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
Apple expands iPhone satellite services deal, commits $1.1bn to expand capacity - 9to5Mac
2024-11-01T12:00:10+00:00
Benjamin Mayo
Satellite services provider GlobalStar today disclosed an expansion of its deal with Apple. Apple will commit an additional $1.1 billion for upfront infrastructure prepayments, to increase the capacity of satellite services. Additionally, Apple will take 20% ownership of GlobalStar, in an equity deal worth about $400 million. The news has sent GlobalStar stock soaring, and it hints towards Apple’s growing plans for iPhone satellite features. With iOS 18, for instance, iPhone users are now able to send text messages to friends and family over satellite, when outside of cellular or WiFi range. Apple continues to commit significant financial resources to providing satellite features, while offering the feature for free to end users. However, it has repeatedly signalled that it intends to charge fees to iPhone users at some point. Satellite connectivity for Emergency SOS first launched with the iPhone 14 in 2022. At the time, Apple said that satellite would be free for two years. That means customers would have had to start paying around now, in late 2024. However, Apple extended the free period until 2025. Apple has yet to confirm how much it intends to charge for the satellite features. It’s a hairy subject as much of the current offering relies on using satellite during life-threatening emergencies, which feels rather punitive for Apple to charge for. It is possible the company will continue to offer Emergency SOS for free, while charging for other features like the ability to share location in Find My or the new iOS 18 capability to send text messages over satellite recreationally. Others have speculated satellite service may be rolled into the Apple One bundle, or be offered through mobile carrier add-ons. Satellite connectivity is supported on iPhone 14 and later models. Normally, satellite connectivity is only activated when outside of cellular or WiFi range. 
However, you can try out satellite in a demo capacity on your phone right now by navigating to Settings -> Emergency SOS -> Emergency SOS via Satellite -> Try Demo.
2024-11-08T13:21:49
null
train
42,056,751
mixeden
2024-11-06T01:56:41
Graph Neural Networks for Predicting Material Properties
null
https://synthical.com/article/Graph-Neural-Networks-Based-Deep-Learning-for-Predicting-Structural-and-Electronic-Properties-3edeca76-a38b-4cff-b58a-704b2a1ec7b9
1
0
null
null
null
null
null
null
null
null
null
null
train
42,056,755
ocean_moist
2024-11-06T01:58:21
A Rust Take on Bsdiff
null
https://github.com/divvun/bidiff
9
0
null
null
null
null
null
null
null
null
null
null
train
42,056,756
ocean_moist
2024-11-06T01:58:48
MOATs Aren't Useful
null
https://rohan.ga/blog/moats/
6
4
[ 42056872, 42069418, 42056880, 42056884 ]
null
null
null
null
null
null
null
null
null
train
42,056,771
sandwichsphinx
2024-11-06T02:03:21
Histneur-L the History of Neuroscience Internet Forum
null
https://web.archive.org/web/20210613151057/http://www.bri.ucla.edu/nha/histneur.htm#expand
1
0
null
null
null
null
null
null
null
null
null
null
train
42,056,775
xrd
2024-11-06T02:04:01
Ask HN: Teaching the "magic" of software development to 3rd graders?
Next week I&#x27;m going into my daughter&#x27;s classroom to teach about software engineering. I want to teach them about the magic of it.<p>What do I mean by magic?<p>Well, Steve Jobs said all great technology is indistinguishable from magic.<p>I want to show them how math can be magic.<p>I want to show them how code can be magic.<p>But, these are third graders. And I&#x27;m not sure I can use a computer.<p>Any suggestions on magic tricks? I have 15 minutes.<p>This is really about teaching them about different careers. This is a very poor school where most of these kids probably don&#x27;t have a parent or friend in this industry. So instead of getting technical I want to show them the joy I get out of this work.
null
3
5
[ 42066874, 42056900, 42057115, 42057328 ]
null
null
null
null
null
null
null
null
null
train
42,056,776
sasmitharodrigo
2024-11-06T02:04:07
Show HN: I made a site to generate consistent text every time with AI
Hi everyone! I&#x27;m Sas, the founder of Sloap.<p>A little backstory on why I built this tool: I used to be an Etsy seller, and one challenge I kept running into was using ChatGPT to generate product titles, descriptions, and tags for Etsy product listings. At first, it worked well enough, but after a while, I found myself having to repeat the same instructions over and over. It became time-consuming and, honestly, a bit messy.<p>That’s when I started thinking, &quot;What if there was a tool where I could set all my preferences once and get consistent, accurate content every time?&quot; I had this idea floating around for a few months, but then something unexpected happened—I got banned from Etsy. That was the push I needed to bring Sloap to life.<p>Sloap isn’t just for Etsy or e-commerce sellers; it’s designed for anyone who wants consistent content generation without the hassle of repeating instructions. Whether you’re managing social media content, writing blog posts, creating product descriptions for a Shopify store, or generating text for newsletters, Sloap has you covered. It lets you set predefined rules for each project, ensuring your content is always on brand, no matter the platform.<p>After months of trial and error, learning, and lots of coffee, I’ve finally built something I’m genuinely proud of. It might not be perfect (yet), but Sloap is already so much more than I originally imagined. I’ll keep improving it, adding features, and making it even better with your feedback. Thanks for checking it out, and I’d love to hear what you think!
https://sloap.co/
3
0
null
null
null
null
null
null
null
null
null
null
train
42,056,785
teleforce
2024-11-06T02:07:18
Indonesia banned iPhone 16 and Google Pixel smartphones for breaking govt rules
null
https://timesofindia.indiatimes.com/technology/tech-news/indonesia-has-banned-iphone-16-series-and-google-pixel-smartphones-for-breaking-this-government-rule/articleshow/114973682.cms
2
0
null
null
null
null
null
null
null
null
null
null
train
42,056,786
june07
2024-11-06T02:07:18
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,814
thunderbong
2024-11-06T02:14:41
null
null
null
20
null
[ 42056881 ]
null
true
null
null
null
null
null
null
null
train
42,056,852
goplayoutside
2024-11-06T02:27:16
null
null
null
17
null
[ 42056854, 42056896, 42056916, 42056878 ]
null
true
null
null
null
null
null
null
null
train
42,056,866
doppp
2024-11-06T02:30:04
What you know that just ain't so
null
https://world.hey.com/dhh/what-you-know-that-just-ain-t-so-ab6f4bb1
1
0
null
null
null
null
null
null
null
null
null
null
train
42,056,888
hiddenest
2024-11-06T02:37:13
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,890
jaypatelani
2024-11-06T02:38:50
NetBSD: The portable, lightweight, and robust Unix-like operating system
null
https://www.osnews.com/story/141078/netbsd-the-portable-lightweight-and-robust-unix-like-operating-system/
2
0
null
null
null
null
null
null
null
null
null
null
train
42,056,902
getprops
2024-11-06T02:42:14
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,056,918
OuterVale
2024-11-06T02:46:58
98.css – A design system for building faithful recreations of old UIs
null
https://jdan.github.io/98.css/
363
78
[ 42062336, 42062043, 42066698, 42066396, 42059426, 42057977, 42056921, 42062400, 42060755, 42062081, 42060961, 42064435, 42063651, 42066347, 42069232, 42059728, 42069622, 42058214, 42064509, 42065698, 42060047, 42062349, 42063445, 42066055, 42065379 ]
null
null
null
null
null
null
null
null
null
train
42,056,920
radpanda
2024-11-06T02:48:00
4-Way Waymo Standoff Is Autonomous Vehicle Comedy Gold
null
https://www.roadandtrack.com/news/a62806804/4-way-waymo-self-driving-car-standoff-san-francisco/
6
2
[ 42058230 ]
null
null
null
null
null
null
null
null
null
train
42,056,923
gaws
2024-11-06T02:48:06
Title drops in movies
null
https://www.titledrops.net/
471
154
[ 42058053, 42061919, 42057269, 42060577, 42069739, 42057566, 42057665, 42061393, 42057986, 42058017, 42057439, 42057503, 42057309, 42070200, 42057587, 42065665, 42066429, 42061811, 42057544, 42057685, 42058446, 42063374, 42071028, 42057883, 42058159, 42058442, 42060603, 42058110, 42057511, 42066056, 42057958, 42057687, 42060742, 42064667, 42064920, 42057956, 42058207, 42069478, 42058658, 42060529, 42057358, 42059576, 42058007, 42058205, 42061507, 42064481, 42064034, 42058768, 42058070, 42059601, 42057584, 42057398, 42057413, 42057495, 42057239 ]
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
Full of Themselves: An analysis of title drops in movies
null
null
I'm sure you all know the part of the movie where one of the characters says the actual title of the movie and you're like The overall meta-ness of this is - of course - nothing new. And filmmakers and scriptwriters have been doing it since the dawn of the medium itself*. It's known in film speak as a title drop. Consequently, there's tons of examples throughout movie history that range from the iconic (see Back to the Future's above) via the eccentric, the very much self-aware to the downright cringe. But how common are these title drops really? Has this phenomenon gained momentum over time with our postmodern culture becoming ever more meta? Can we predict anything about the quality of a film based on how many times its title is mentioned? And what does a movie title mean, anyway? There have been analyses and oh so so many listicles of the title drop phenomenon before, but they are small and anecdotal. Here's the first extensive analysis of title drops for a dataset of 73,921 movies that amount to roughly 61% of movies on IMDb with at least 100 user votes*. I'm looking at movies released between 1940 and 2023. Special thanks go to my friends at OpenSubtitles.com for providing this data! Let's talk data I started out with two datasets: 89,242 (English) movie subtitles from OpenSubtitles.com and metadata for 121,797 movies from IMDb. After joining them and filtering them for broken subtitle files I was left with a total of 73,921 subtitled movies. With that out of the way, I realized that the tougher task was still ahead of me: answering the question what even was a title drop? The naïve approach is - of course - to simply look for the movie's name anywhere in the subtitles. Which is a fantastic approach for movies like Back to the Future with a nice unique title: But this quickly breaks down if we look at movies like E or I *, which lead to way too many matches. 
We also run into problems with every movie that is a sequel (Rocky III, Hot Tub Time Machine 2) since none of the characters will add the sequel number to character names/oversized bathing equipment. Similarly, the rise of the colon in movie titles would make for some very awkward dialogue (LUKE: "Gosh Mr. Kenobi, it's almost like we're in the middle of some Star Wars Episode Four: A New Hope!"). (See also the He Didn't Say That meme.) So I applied a few rules to my title matching in the dialogue. Leading 'The', 'An' and 'A's and special characters like dashes are ignored, sequel numbers both Arabic and Roman are dropped (along with 'Episode...', 'Part...' etc.) and titles containing a colon are split and either side counts as a title drop. So for The Lord of the Rings: The Fellowship of the Ring either "Lord of the Rings" or "Fellowship of the Ring" would count as title drops (feel free to hover over the visualizations to explore the matches)! With the data cleaning out of the way, let's get down to business! Stats Alright, so here's the number you've all been waiting for (drumroll): 36.5% - so about a third - of movies have at least one title drop during their runtime. Also, there's a total of 277,668 title drops for all 26,965 title-dropping movies which means that there's an average of 10.3 title drops per movie that title drops. If they do it, they really go for it. So who are the most excessive offenders in mentioning their titles over the course of the film? The overall star when it comes to fiction only came out last year: it's Barbie by Greta Gerwig with an impressive 267 title drops within its 1 hour and 54 minutes runtime, clocking in at a whopping 2.34 BPM (Barbies Per Minute). On the non-fiction side of documentaries the winner is Mickey: The Story of a Mouse with 309 title drops in only 90 minutes, so 3.43 Mickeys Per Minute! 
Top ten number of title drops in one movie Fiction only Fiction + Documentaries What's interesting about the (Fiction) list here is that it's pretty international: only two of the top ten movies come from Hollywood, 6 are from India, one from Indonesia and one from Turkey. So it's definitely an international phenomenon. Names in titles Looking at the top ten list you might have noticed this little icon signifying a movie where the data says it's named after one of its characters*. Unsurprisingly, movies named after one of their characters have an average of 24.7 title drops, more than twice as much as the usual 10.3. Protagonists have a tendency to pop up repeatedly in a film, so their names usually do the same. Similarly, movies named after a protagonist have a title drop rate of 88.5% while only 34.2% of other movies drop their titles. A note on the data here This is the more experimental part of the analysis. To figure out if a movie was named after its protagonist I've used IMDb's Principals Dataset that lists character names for the first couple of actors and compared that to the movie's title. This approach yields reliable results, but of course misses movies when the character the movie is named after does not appear on that list. So you might find movies that miss the 'Named' icon even though they're clearly named after a character. Special characters in the title and character name are also challenging: for example, Tosun Pasa which actually has a ş character in its title - wrong on IMDb (Pasa) as well as the subtitles (Pasha) - or WALL·E with the challenging · in the middle: Even though there are mentions of "Wall-E" in the subtitles, the script - looking for "WALL·E" - wouldn't detect it. (I've fixed both of these films manually - but there might be more!) Titles or surnames also usually prevent being counted as title drops according to our definitions. 
Michael The Brave, King Lear or Barry Lyndon might mention a character's name ('Michael', 'Lear', 'Barry') but leave out the title or surname - so zero drops. Nevertheless, there do exist named films where you would expect a title drop which doesn't come! Examples are: Edward Scissorhands Predator and even Superman Anyway - back to the analysis! An interesting category are movies named after a character that only have a single title drop - making it all the more meaningful? Movies named after a character with single title drops "Real" title drops Title-drop connoisseurs might sneer at this point and well-actually us that a "real" title drop should only happen once in a film. That there's this one memorable (or cringe-y) scene where the protagonist looks directly at the camera and declares the title of the film with as much pathos as they can muster. Or as a nice send-off in the last spoken line. Such single drops happen surprisingly often: 11.3% of all movies do EXACTLY ONE title drop during their runtime. Which means that there's about twice as many movies having multiple title drops than single ones. In the single drop case it is more likely that the filmmakers were adding a title drop very consciously. Highest rated single drop movies Fiction only Fiction + Documentaries Single drops often happen in a key scene and explain the movie's title: what mysterious fellowship the first Lord of the Rings is named after. Or that the audience waiting for some dark knight to show up must simply accept that it's been the Batman all along. Title drops over the years One suspicion I had was that the very meta act of having a character speak the name of the movie they're in would be something gaining more and more traction over the last two or three decades. And indeed, if we look at the average number of movies with title drops over the decades we can see that there's a certain upwards trend. 
The 1960s and 1970s seemed to be most averse to mentioning their title in the film, while it's become more common-place over the last years. Highest title drops by decade Most drops Best rated (at least 1 drop) If we dig deeper, this growth over the decades comes with a clearer explanation: splitting up movies by single- and multi-title drops shows that while the tendency of movies to drop their title exactly once keeps more or less steady, the number of multi-drop films is on the rise. Your explanation for this (More movies are being named after their protagonists? Movies are more productified so brand recognition becomes an important concern?) is probably as good as mine 🤷 A sign of quality? Another question I wanted to answer was if a high number of title drops was a sign of a bad movie. Think of all the trashy slasher and horror movies about Meth Marmots and Killer Ballerinas - wouldn't their characters in the sparse dialogues constantly mention the title for brand recognition and all that? Interestingly though, there's no strong connection between film quality (expressed as IMDb rating (YMMV)) and the probability of title-dropping. Genres and title drops An aspect that certainly does have an impact on the probability of a title drop though is the genre of a film. If you think back to the discussion about names in titles from earlier, genres like Biography and other non-fiction genres like Sport and History - almost by definition - mention their subject in both the title and throughout the film. Accordingly, the probability of a title drop varies wildly by genre. Non-fiction films have a strong tendency towards title-dropping, while more fiction-oriented genres like Crime, Romance and War don't. What does a movie title mean? Finally, we can ask the question: what even is a movie title? I couldn't find a complete classification in the scientific literature ("What's in a name? The art of movie titling" by Ingrid Haidegger comes the closest). 
Movie titles are an interesting case, since they have to work as a description of a product, a marketing instrument, but also as the title of a piece of art. Consequently, it's a field ripe with opinions, science and experimentation and listicles. The most extensive classification of media titles in general I could find is TVTropes' Title Tropes list which lists over 180 (!) different types of tropes alone. Some of those tropes are: While naming a movie is a very creative task and pretty successfully defies classification, we can still look at the overall shape of movie titles and see if that has any impact on the number of title drops. One such simple aspect is the length of the title itself. As you would expect there's a negative correlation (if only a slight one*) between the length of a title and the number of title drops it does. Still, there are some fun examples for reaaaaally long movie titles that nevertheless do at least one title drop: And while these previous examples only drops parts from before or after the colon, this next specimen actually does an impressive full title drop: And with that, we're done with the overarching analysis! Feel free to drop us an e-mail or follow up on X/X, Bluesky or Mastodon if you have comments, questions, praise ❤️ Oh, and one more thing: If you're curious, here's the full dataset for you to explore! Explore all movies! Analysis + development by Dominikus Baur Design by Alice Thudt Datasets provided by OpenSubtitles.com and IMDb. Data: https://github.com/dominikus/titledrops.net
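The matching rules the article body describes (ignore leading 'The'/'An'/'A' and dashes, drop sequel numbers both Arabic and Roman, split on a colon so either side counts, then count occurrences in the subtitle text) can be sketched in a few lines. This is an illustrative reimplementation under those stated rules, not the author's actual script; the function names and exact regexes are mine:

```python
import re

# Leading articles are ignored ("The Lord of the Rings" -> "Lord of the Rings")
ARTICLE = re.compile(r"^(?:the|an|a)\s+", re.IGNORECASE)
# Trailing sequel markers are dropped: "Rocky III", "Hot Tub Time Machine 2",
# optionally preceded by "Part"/"Episode"
SEQUEL = re.compile(r"\s+(?:part|episode)?\s*(?:\d+|[ivxlc]+)\s*$", re.IGNORECASE)

def normalize_title(title):
    """Return the list of title variants that count as a title drop."""
    t = title.replace("-", " ").strip()   # dashes are ignored
    t = SEQUEL.sub("", t)                 # sequel numbers are dropped
    # Titles with a colon are split; either side counts as a drop
    parts = [ARTICLE.sub("", p.strip()) for p in t.split(":")]
    return [p for p in parts if p]

def count_title_drops(title, subtitles):
    """Count how often any normalized variant appears in the dialogue."""
    text = subtitles.lower()
    return sum(text.count(p.lower()) for p in normalize_title(title))
```

For example, `normalize_title("The Lord of the Rings: The Fellowship of the Ring")` yields both `"Lord of the Rings"` and `"Fellowship of the Ring"`, matching the rule described in the text; the article's per-minute figures (e.g. 267 drops / 114 minutes ≈ 2.34 "Barbies Per Minute") are then just the drop count divided by the runtime.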
2024-11-08T05:47:58
null
train
42,056,928
impish9208
2024-11-06T02:49:34
AI Startup Perplexity to Triple Valuation to $9B in New Funding Round
null
https://www.wsj.com/tech/ai/ai-startup-perplexity-to-triple-valuation-to-9-billion-in-new-funding-round-f2fb8c2c
11
3
[ 42056933, 42062842, 42057445, 42059988 ]
null
null
null
null
null
null
null
null
null
train
42,056,936
wglb
2024-11-06T02:50:50
Study disproves weather-dependent renewable energy systems prone to blackouts
null
https://techxplore.com/news/2024-11-idea-weather-renewable-energy-prone.html
2
0
null
null
null
null
null
null
null
null
null
null
train
42,056,944
wglb
2024-11-06T02:53:28
Study of Venus's Haasttse-baad Tessera suggests formation by two large impacts
null
https://phys.org/news/2024-11-venus-haasttse-baad-tessera-formation.html
3
0
null
null
null
null
null
null
null
null
null
null
train
42,056,945
administrate
2024-11-06T02:53:31
Ask HN: How to build SaaS that makes money?
I understand that the age-old wisdom is not to start with money as the sole motivator. However, I don't have a choice. Building SaaS is the only way out. Let's keep the backstory aside and not focus on it.<p>I have too many ideas and I have built a product partially. I got paid for doing manual work. I am afraid locking down on one idea is putting all eggs in one basket. If that does not work I will end up losing 6 months and some capital and opportunity cost.<p>How do I choose the product roadmap and what to build? I have been proficient in full-stack development, data engineering, and partly with ML/AI. No research experience, but that is not going to pay me anyway.<p>How do I build when there is a SaaS overflow, with AI apps every day and Twitter bros advertising they made $10k in 3 months?<p>Customers are flooded with emails, ads, apps. What are ways you market your SaaS and make money?
null
2
2
[ 42057529, 42057192 ]
null
null
null
null
null
null
null
null
null
train
42,056,954
wglb
2024-11-06T02:54:44
Defibrillation devices save lives using 1k times less electricity
null
https://phys.org/news/2024-11-defibrillation-devices-electricity-optimized.html
105
61
[ 42057438, 42057390, 42071372, 42057639, 42057442, 42057322, 42057750, 42057340, 42061043, 42059340, 42057327 ]
null
null
null
null
null
null
null
null
null
train
42,056,957
purple-leafy
2024-11-06T02:54:58
Ask HN: I think I'm done with tech. Burnout constantly. Now what?
I've been working in tech the last 5 years.<p>This is a rant.<p>I'm constantly burning out at my job now. I hate how shit the work is, fucking pushing shit features for a shit codebase that adds negative value to the world.<p>There is no craftsmanship in anything anymore, just rush everything out the door full of bugs, everything is just money at the end of the day. No time to spend time making quality valuable products.<p>Golden bloody handcuffs, earning a decent amount, but to what end?<p>Constant layoffs, restructuring, bastard management and HR, no stability, no pay-rises, and no end in sight. As a worker I feel worthless. Treated like scum.<p>I love computers. I love learning, I love the power of programming. I hate that this capitalist shit-hole we live in means that the only software we build is software that makes money. Marketing automation!? Fuck me!<p>What the hell do I do?<p>I want a consistent job, stability, to be treated well, not just a number on a spreadsheet. I want a job where I can learn maths or science or computer programming/hardware, and not have my livelihood held to ransom by conservative lackwits in government cutting down on "non-essential" things like education.<p>I hate this trajectory the world is on. I had so much hope for the world when I was younger. I'd hate to show younger me current me.
null
20
13
[ 42062642, 42057207, 42057103, 42057122, 42057043, 42057489, 42057664, 42057049 ]
null
null
null
null
null
null
null
null
null
train
42,057,013
null
2024-11-06T03:09:19
null
null
null
null
null
[ 42057014 ]
[ "true" ]
null
null
null
null
null
null
null
null
train
42,057,017
CharlesW
2024-11-06T03:11:03
Csiro opens $6.8M printing facility to make flexible solar panels
null
https://www.abc.net.au/news/2024-11-03/csiro-opens-new-facility-to-print-flexible-solar-panels/104549170
5
0
null
null
null
null
null
null
null
null
null
null
train
42,057,022
thunderbong
2024-11-06T03:14:50
Why DX Doesn't Matter
null
https://yieldcode.blog/post/why-dx-doesnt-matter/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,025
robto
2024-11-06T03:15:33
Break Versioning (BreakVer)
null
https://www.taoensso.com/break-versioning
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,036
gaws
2024-11-06T03:21:09
Tracker Beeper (2022)
null
https://berthub.eu/articles/posts/tracker-beeper/
365
67
[ 42057488, 42057429, 42057418, 42057393, 42057454, 42057092, 42057420, 42057466, 42057409, 42057530, 42062520, 42057148, 42057638, 42057786, 42057336, 42057621, 42057374, 42057512, 42057571, 42057957, 42057531, 42057545, 42057477, 42057670, 42057353, 42057485 ]
null
null
null
null
null
null
null
null
null
train
42,057,037
octopus2023inc
2024-11-06T03:21:25
Show HN: LLM Applications from Yml Files
You build LLM applications with YAML files that define an execution graph. Nodes can be either LLM API calls, regular function executions, or other graphs themselves. Because you can nest graphs easily, building complex applications is not an issue, but at the same time you don't lose control. The YAML basically states what tasks need to be done and how they connect. Other than that, you only write individual Python functions to be called during the execution. No new classes and abstractions to learn.
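The idea described above — a declarative file defining a graph whose nodes are function calls (or nested graphs), with plain Python functions doing the work — can be sketched with a minimal interpreter. This is not gensphere's actual API; the node/edge schema and function names below are invented for illustration, and the dict literal stands in for a parsed YAML file (e.g. via `yaml.safe_load`):

```python
# Hypothetical execution-graph schema; a real YAML file would be loaded
# into this same dict shape. Each node names a function and its deps.
graph = {
    "nodes": {
        "load":      {"fn": "load_text", "deps": []},
        "summarize": {"fn": "summarize", "deps": ["load"]},
        "report":    {"fn": "report",    "deps": ["summarize"]},
    }
}

# Plain Python functions, referenced by name from the graph definition.
registry = {
    "load_text": lambda: "some long document text",
    "summarize": lambda text: text[:9],           # stand-in for an LLM call
    "report":    lambda summary: f"Summary: {summary}",
}

def run(graph):
    """Execute nodes in dependency order, feeding each node its deps' outputs."""
    results = {}

    def run_node(name):
        if name in results:
            return results[name]
        node = graph["nodes"][name]
        args = [run_node(d) for d in node["deps"]]
        results[name] = registry[node["fn"]](*args)
        return results[name]

    for name in graph["nodes"]:
        run_node(name)
    return results

print(run(graph)["report"])  # Summary: some long
```

Because nodes only reference functions and other nodes by name, a nested graph is just another entry in the registry that happens to call `run` on a sub-graph — which is what makes the nesting-without-new-abstractions claim plausible.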
https://github.com/octopus2023-inc/gensphere
4
0
null
null
null
null
null
null
null
null
null
null
train
42,057,039
thunderbong
2024-11-06T03:21:40
Gartner Hype Cycle
null
https://en.wikipedia.org/wiki/Gartner_hype_cycle
3
0
null
null
null
null
null
null
null
null
null
null
train
42,057,047
therabbithole
2024-11-06T03:24:07
A global dataset of 7B individuals with socio-economic characteristics
null
https://www.nature.com/articles/s41597-024-03864-2
4
0
null
null
null
null
null
null
null
null
null
null
train
42,057,059
synergy20
2024-11-06T03:30:03
Build a database with Linux built-in commands
null
https://www.howtogeek.com/build-a-database-with-powerful-linux-built-in-tools/
3
1
[ 42057189 ]
null
null
null
null
null
null
null
null
null
train
42,057,063
dndndnd
2024-11-06T03:31:01
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,080
sandwichsphinx
2024-11-06T03:39:32
FPGA, ASIC, and SoC Development
null
https://www.mathworks.com/help/overview/fpga-asic-and-soc-development.html
3
0
null
null
null
null
null
null
null
null
null
null
train
42,057,139
slyall
2024-11-06T04:05:01
The deep learning boom caught almost everyone by surprise
null
https://www.understandingai.org/p/why-the-deep-learning-boom-caught
205
140
[ 42064762, 42060762, 42061089, 42058383, 42058282, 42067859, 42063807, 42057746, 42065897, 42064233, 42059833, 42064067, 42061283, 42057939, 42058063, 42057339 ]
null
null
null
null
null
null
null
null
null
train
42,057,142
ethanleetech
2024-11-06T04:07:35
null
null
null
1
null
[ 42057143 ]
null
true
null
null
null
null
null
null
null
train
42,057,147
sandwichsphinx
2024-11-06T04:08:24
North Korea Enters Ukraine Fight for First Time, Officials Say
null
https://www.nytimes.com/2024/11/05/world/europe/north-korea-russia-ukraine-kursk.html
5
1
[ 42057391 ]
null
null
null
null
null
null
null
null
null
train
42,057,164
The_News_Crypto
2024-11-06T04:13:24
null
null
null
1
null
[ 42057165 ]
null
true
null
null
null
null
null
null
null
train
42,057,169
SyncfusionBlogs
2024-11-06T04:14:56
null
null
null
1
null
[ 42057170 ]
null
true
null
null
null
null
null
null
null
train
42,057,175
squircle
2024-11-06T04:15:24
The Eureka Moment (2006)
null
https://www.scientificamerican.com/article/the-eureka-moment/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,186
vinnyglennon
2024-11-06T04:18:14
Bitcoin breaks above its all-time high of $73,777
null
https://www.fxstreet.com/cryptocurrencies/news/breaking-bitcoin-breaks-above-its-all-time-high-of-73-777-ahead-of-election-results-202411060312
13
17
[ 42057242, 42057248, 42057305, 42057199 ]
null
null
null
null
null
null
null
null
null
train
42,057,197
MyChannels
2024-11-06T04:25:04
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,212
appwiz
2024-11-06T04:31:22
Amazon drone delivery takes off in Arizona
null
https://www.aboutamazon.com/news/transportation/amazon-drone-delivery-arizona
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,234
snvzz
2024-11-06T04:40:20
New SvarDOS Kernel: EDR
null
http://svardos.org/?p=forum&thread=1722279472
1
0
null
null
null
null
null
null
null
null
null
null
train
42,057,236
null
2024-11-06T04:40:30
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,057,250
dz0707
2024-11-06T04:44:18
Ask HN: Do you have inspiring example of DDD usage?
Wondering if any of you have your favorite open source repository for a real software, used in production by real people, that is following DDD. I know that there are success stories, just never saw their code.<p>I've seen lots of demo/experiment repos, but all of them are just that - with very limited scope, synthetic scenarios, doing shortcuts for brevity, etc.
null
2
0
null
null
null
null
null
null
null
null
null
null
train
42,057,257
waveywaves
2024-11-06T04:46:09
null
null
null
1
null
[ 42057258 ]
null
true
null
null
null
null
null
null
null
train
42,057,261
lifecodeuncoded
2024-11-06T04:47:29
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,057,262
rntn
2024-11-06T04:47:30
A new city springs from the rainforest to become Indonesia's tech hub
null
https://www.theregister.com/2024/11/06/indonesias_new_capital_nusantara/
1
0
null
null
null
null
null
null
null
null
null
null
train