Column schema (per-field lengths/ranges from the dataset viewer; ⌀ marks nullable fields): title (string, 1-300 chars), score (int64, 0-8.54k), selftext (string, 0-40k chars), created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀), url (string, 0-878 chars), author (string, 3-20 chars), domain (string, 0-82 chars), edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18), gilded (int64, 0-2), gildings (string, 7 classes), id (string, 7 chars), locked (bool, 2 classes), media (string, 646-1.8k chars, ⌀), name (string, 10 chars), permalink (string, 33-82 chars), spoiler (bool, 2 classes), stickied (bool, 2 classes), thumbnail (string, 4-213 chars), ups (int64, 0-8.54k), preview (string, 301-5.01k chars, ⌀)
title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Excel to PDF | 2 | I'm interested in running an LLM locally for a variety of reasons, but for my actual job I have a menial task of taking data from an Excel sheet and copying the various fields into a PDF template I have.
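For reference, the copy step itself is a tiny script; here is a sketch with hypothetical file and form-field names, where an LLM's real job would be writing/adapting such a script rather than copying values itself:
```python
from openpyxl import load_workbook
from pypdf import PdfReader, PdfWriter

wb = load_workbook("data.xlsx", data_only=True)
row = next(wb.active.iter_rows(min_row=2, values_only=True))  # first data row

# Fill the AcroForm fields of the template (field names are guesses)
writer = PdfWriter(clone_from=PdfReader("template.pdf"))
writer.update_page_form_field_values(
    writer.pages[0], {"name": str(row[0]), "amount": str(row[1])})
with open("filled.pdf", "wb") as f:
    writer.write(f)
```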
From what I read, ChatGPT Plus can do this, but do y'all think it's possible and/or too much hassle to get a local Llama to do this? | 2025-06-01T23:19:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l12yc2/excel_to_pdf/ | Soliloquy789 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l12yc2 | false | null | t3_1l12yc2 | /r/LocalLLaMA/comments/1l12yc2/excel_to_pdf/ | false | false | self | 2 | null |
IronLoom-32B-v1 - A Character Card Creator Model with Structured Planning | 9 | IronLoom-32B-v1 is a model specialized in creating character cards for SillyTavern that has been trained to reason in a structured way before outputting the card.
**Model Name: IronLoom-32B-v1**
**Model URL:** [https://huggingface.co/Lachesis-AI/IronLoom-32B-v1](https://huggingface.co/Lachesis-AI/IronLoom-32B-v1)
**Model URL GGUFs:** [https://huggingface.co/Lachesis-AI/IronLoom-32B-v1-GGUF](https://huggingface.co/Lachesis-AI/IronLoom-32B-v1-GGUF)
**Model Author:** Lachesis-AI, Kos11
**Settings:** Temperature: 1, min\_p: 0.05 (0.02 for higher quants), GLM-4 Template, No System Prompt
You may need to update SillyTavern to the latest version for the GLM-4 Template
IronLoom goes through a multi-stage reasoning process where the model:
1. Extracts key elements from the user prompt
2. Reviews the given tags for the theme of the card
3. Drafts an outline of the card's core structure
4. Creates and returns a completed card in YAML format, which can then be converted into SillyTavern JSON (a rough sketch of that conversion is below)
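For anyone curious what that final conversion step amounts to, here is a minimal sketch; the YAML field names are my guesses, not IronLoom's actual schema:
```python
import json

import yaml

with open("card.yaml") as f:
    card = yaml.safe_load(f)

# Map the YAML fields onto the SillyTavern Character Card V2 JSON layout
st_card = {
    "spec": "chara_card_v2",
    "spec_version": "2.0",
    "data": {
        "name": card.get("name", ""),
        "description": card.get("description", ""),
        "first_mes": card.get("first_message", ""),
    },
}
with open("card.json", "w") as f:
    json.dump(st_card, f, indent=2)
```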
https://preview.redd.it/vajvuwdrie4f1.png?width=1968&format=png&auto=webp&s=93554071c01f63e5733c68c6e6069cebe9d76eb5
https://preview.redd.it/gkhe5xsrie4f1.png?width=1977&format=png&auto=webp&s=2c0bf0f05250f241a80021c10d69e010a3ddb4cf
| 2025-06-01T23:37:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l13c8n/ironloom32bv1_a_character_card_creator_model_with/ | Kos11_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13c8n | false | null | t3_1l13c8n | /r/LocalLLaMA/comments/1l13c8n/ironloom32bv1_a_character_card_creator_model_with/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'xQwYiosjCwwzBVbg47ge7mxH035jxm95l8I9ZhKcorQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?width=108&crop=smart&auto=webp&s=e428699e499363a54282e235c4a8975e593e54c6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?width=216&crop=smart&auto=webp&s=97b899355790b013b2774ba0ff2f447f99ef78e3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?width=320&crop=smart&auto=webp&s=8b67ec4845335be058a103706d05fe4e4358e890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?width=640&crop=smart&auto=webp&s=3f93976e44a3f59100141ae9ae46e4d8df82cf58', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?width=960&crop=smart&auto=webp&s=f134628555212ceb50d0dc6397b0324ad91bf7f6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?width=1080&crop=smart&auto=webp&s=ce8de62cfc21ab578edb485a7899fa05f6fa7467', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sHgkpBWc5Ts977eTnV4CHU-D5gOFyzpv5gyH7zSKeqY.jpg?auto=webp&s=3a7146deecf5d42128997855fea0abd3156de9ae', 'width': 1200}, 'variants': {}}]} |
|
How are people running dual GPU these days? | 56 | I have a 4080 but was considering getting a 3090 for LLM models. I've never run a dual setup before, because I read, like 6 years ago, that CrossFire isn't used anymore. But clearly people are doing it, so is that still going on? How does it work? Will it only offload to one GPU and then to the RAM, or can it offload to one GPU and then to the second one if it needs more?
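For illustration, this is roughly how the split is expressed with llama-cpp-python (model path and split ratio are placeholders); it's layer offloading, not SLI/CrossFire:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="model.Q4_K_M.gguf",  # hypothetical GGUF file
    n_gpu_layers=-1,                 # offload as many layers as will fit
    tensor_split=[0.4, 0.6],         # ~40% of layers to GPU 0, ~60% to GPU 1
)
# Layers that don't fit on either card spill to system RAM automatically
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```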
Thank you for your time :) | 2025-06-01T23:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l13fqa/how_are_people_running_dual_gpu_these_days/ | admiralamott | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13fqa | false | null | t3_1l13fqa | /r/LocalLLaMA/comments/1l13fqa/how_are_people_running_dual_gpu_these_days/ | false | false | self | 56 | null |
How are you selecting LLMs? | 0 | Below is my Desktop config
CPU : I9-13900KF
RAM : 64GB DDR4
GPU: NVIDIA GeForce RTX 4070 Ti with 12GB dedicated GPU memory and 32GB shared GPU memory. Overall, Task Manager shows my total GPU memory as 44GB.
https://preview.redd.it/xljtz6jqhe4f1.png?width=791&format=png&auto=webp&s=6bfe83e00013b28e950b09b9c8d48a0e89e00f41
Q1: While selecting a model, should I consider only dedicated GPU memory, or total GPU memory (dedicated plus shared)?
When I run deepseek-r1:32B with Q4 quantization, its eval rate is too slow at 4.56 tokens/s. I feel it's due to the model getting offloaded to the CPU. Q2: Correct me if I am wrong.
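One way to check that offload directly (a sketch assuming you're serving through Ollama and a recent `ollama` Python client, whose `ps()` wraps the `/api/ps` endpoint):
```python
import ollama

# Anything under 100% here means part of the model was offloaded to
# CPU/system RAM, which would explain a ~4.5 tok/s eval rate.
for m in ollama.ps().models:
    print(f"{m.model}: {100 * m.size_vram / m.size:.0f}% of weights in VRAM")
```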
https://preview.redd.it/iur5oenlie4f1.png?width=497&format=png&auto=webp&s=80efa14ab71d5d9c1e6bd8bc1d2c73130e0de88e
https://preview.redd.it/f6d5vv5zhe4f1.png?width=748&format=png&auto=webp&s=f567193124bae72b3ff633af3f506c42c35ab482
https://preview.redd.it/330wfl4wie4f1.png?width=351&format=png&auto=webp&s=3a174d5656e1f77f38e340ff3b9c0cf32f81886a
I am using local LLMs for 2 use cases 1. Coding 2. General reasoning
Q3: How are you selecting which model to use for Coding and General Reasoning for your hardware?
Q4: Within coding, are you using a smaller model for autocompletion vs. full code agents? | 2025-06-01T23:46:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l13j9b/how_are_you_selecting_llms/ | KVT_BK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13j9b | false | null | t3_1l13j9b | /r/LocalLLaMA/comments/1l13j9b/how_are_you_selecting_llms/ | false | false | 0 | null |
|
Which model are you using? June'25 edition | 1 | [removed] | 2025-06-01T23:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l13qto/which_model_are_you_using_june25_edition/ | Ok_Influence505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13qto | false | null | t3_1l13qto | /r/LocalLLaMA/comments/1l13qto/which_model_are_you_using_june25_edition/ | false | false | self | 1 | null |
Who is getting paid to work doing this rather than just hobby dabbling..what was your path? | 149 | I really enjoy hacking together LLM scripts and ideas. but how do I get paid doing it?? | 2025-06-02T00:00:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l13tv3/who_is_getting_paid_to_work_doing_this_rather/ | bornfree4ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l13tv3 | false | null | t3_1l13tv3 | /r/LocalLLaMA/comments/1l13tv3/who_is_getting_paid_to_work_doing_this_rather/ | false | false | self | 149 | null |
Which model are you using? June'25 edition | 1 | [removed] | 2025-06-02T00:46:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l14rz6/which_model_are_you_using_june25_edition/ | Ok_Influence505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l14rz6 | false | null | t3_1l14rz6 | /r/LocalLLaMA/comments/1l14rz6/which_model_are_you_using_june25_edition/ | false | false | self | 1 | null |
did nvidia fix melting cable issue for rtx 6000 pro? I was thinking of buying one for AI stuff | 1 | [removed] | 2025-06-02T01:00:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l151w2/did_nvidia_fix_melting_cable_issue_for_rtx_6000/ | tooLateButStillYoung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l151w2 | false | null | t3_1l151w2 | /r/LocalLLaMA/comments/1l151w2/did_nvidia_fix_melting_cable_issue_for_rtx_6000/ | false | false | self | 1 | null |
Which model are you using? June'25 edition | 213 | As proposed previously in this [post](https://www.reddit.com/r/LocalLLaMA/comments/1jxu0f7/we_should_have_a_monthly_which_models_are_you/), it's time for another monthly check-in on the latest models and their applications. The goal is to keep everyone updated on recent releases and discover hidden gems that might be flying under the radar.
With new models like DeepSeek-R1-0528 and Claude 4 dropping recently, I'm curious to see how these stack up against established options. Have you tested any of the latest releases? How do they compare to what you were using before?
So, let's start a discussion on what models (both proprietary and open-weights) you are using (or have stopped using ;) ) for different purposes (coding, writing, creative writing, etc.). | 2025-06-02T01:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l1581z/which_model_are_you_using_june25_edition/ | Ok_Influence505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1581z | false | null | t3_1l1581z | /r/LocalLLaMA/comments/1l1581z/which_model_are_you_using_june25_edition/ | false | false | self | 213 | null |
Does anyone know if there is a good leaderboard for Audio Language Model? | 1 | [removed] | 2025-06-02T01:54:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l163p0/does_anyone_know_if_there_is_a_good_leaderboard/ | MediaHaunting8669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l163p0 | false | null | t3_1l163p0 | /r/LocalLLaMA/comments/1l163p0/does_anyone_know_if_there_is_a_good_leaderboard/ | false | false | self | 1 | null |
Is Google censoring Qwen Long-L1-32B? | 1 | [removed] | 2025-06-02T01:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l166zr/is_google_censoring_qwen_longl132b/ | lincolnrules | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l166zr | false | null | t3_1l166zr | /r/LocalLLaMA/comments/1l166zr/is_google_censoring_qwen_longl132b/ | false | false | 1 | null |
|
What's an open model to use to emulate what NotebookLM does? | 4 | Forgive the naive or dumb question here, I'm just starting out with doing this locally. So far I'm using instruct3-llama and a vector database in Chroma to prompt against a rulebook. I send a context selected by the user alongside the prompt to narrow what the LLM looks at to return results. Is command-r better? | 2025-06-02T02:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l16d23/whats_an_open_model_to_use_to_emulate_what/ | mccoypauley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l16d23 | false | null | t3_1l16d23 | /r/LocalLLaMA/comments/1l16d23/whats_an_open_model_to_use_to_emulate_what/ | false | false | self | 4 | null |
Thoughts on Chatbox? | 1 | [removed] | 2025-06-02T02:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l17cgl/thoughts_on_chatbox/ | Accomplished-Rub2331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17cgl | false | null | t3_1l17cgl | /r/LocalLLaMA/comments/1l17cgl/thoughts_on_chatbox/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Qp3D189pFu1CLJc-3D4NUZhwtBVSjmjY03kJ5KEgcpw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?width=108&crop=smart&auto=webp&s=1de6605ab2e21cdf2cd50cc4ff69f3d15f135c1f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?width=216&crop=smart&auto=webp&s=9553a9704472b7ce523e97b729599a0358a979d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?width=320&crop=smart&auto=webp&s=4cb73e1494094ec7f4bf474f2a8dd53ee9b2860f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?width=640&crop=smart&auto=webp&s=941d2bb6957deb5b78202d732a793c8683fb5b53', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?width=960&crop=smart&auto=webp&s=5554f9c279ec0e32f7d4de56b25d7ad914cfc91e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?width=1080&crop=smart&auto=webp&s=33f7b2aeca77843f0f6a55450702e9059f4f7faa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oEz4ib6pbbSRLCTO2ea8arkJicCBNC3JUQvdtTvej5I.jpg?auto=webp&s=9a2354491205fbeb23efafe6923a6bae927b7043', 'width': 1200}, 'variants': {}}]} |
SAGA Update: Autonomous Novel Writing with Deep KG & Semantic Context - Now Even More Advanced! | 27 | A couple of weeks ago, I shared an early version of SAGA (Semantic And Graph-enhanced Authoring), my project for autonomous novel generation. Thanks to some great initial feedback and a lot of focused development, I'm excited to share a significantly advanced version!
**What is SAGA?**
SAGA, powered by its NANA (Next-gen Autonomous Narrative Architecture) engine, is designed to write entire novels. It's not just about stringing words together; it employs a team of specialized AI agents that handle planning, drafting, comprehensive evaluation, continuity checking, and intelligent revision. The core idea is to combine the creative power of local LLMs with the structured knowledge of a Neo4j graph database and the coherence provided by semantic embeddings.
**What's New & Improved Since Last Time?**
SAGA has undergone substantial enhancements:
* **Deep Neo4j Integration:** Moved from a simpler DB to a full Neo4j backend. This allows for much richer tracking of characters, world-building, plot points, and dynamic relationships. It includes a robust schema with constraints and a vector index for semantic searches.
* **Hybrid Context Generation:** For each chapter, SAGA now generates a "hybrid context" by:
* Performing **semantic similarity searches** (via Ollama embeddings) on past chapter content stored in Neo4j to maintain narrative flow and tone.
* Extracting **key reliable facts** directly from the Neo4j knowledge graph to ensure the LLM adheres to established canon.
* **Advanced Revision Logic:** The revision process is now more sophisticated, capable of **patch-based revisions** for targeted fixes or full chapter rewrites when necessary.
* **Sophisticated Evaluation & Continuity:**
* The `ComprehensiveEvaluatorAgent` assesses drafts on multiple axes (plot, theme, depth, consistency).
* A dedicated `WorldContinuityAgent` performs focused checks against the KG and world-building data to catch inconsistencies.
* **Provisional Data Handling:** The system now explicitly tracks whether data is "provisional" (e.g., from an unrevised draft), allowing for better canon management.
* **Markdown for User Input:** You can now seed your story using a `user_story_elements.md` file with `[Fill-in]` placeholders, making initial setup more intuitive.
* **Text De-duplication:** Added a step to help reduce repetitive phrasing or content in generated drafts.
* **Performance & Stability:** Lots of under-the-hood improvements. SAGA can now generate a batch of 3 chapters (each ~13K+ tokens of narrative) in about 11 minutes on my setup, including all the planning, evaluation, and KG updates.
**Core Architecture Still Intact:**
The agentic pipeline remains central:
1. **Initial Setup:** Parses user markdown or generates plot, characters, and world-building; pre-populates Neo4j.
2. **Chapter Loop:**
* **Plan:** `PlannerAgent` details scenes.
* **Context:** Hybrid semantic & KG context is built.
* **Draft:** `DraftingAgent` writes the chapter.
* **Evaluate:** `ComprehensiveEvaluatorAgent` & `WorldContinuityAgent` scrutinize the draft.
* **Revise:** `ChapterRevisionLogic` applies fixes.
* **Finalize & Update KG:** `KGMaintainerAgent` summarizes, embeds, saves the chapter to Neo4j, and extracts/merges new knowledge back into the graph and agent state.
**Why This Approach?**
The goal is to create narratives that are not only creative but also *coherent* and *consistent* over tens of thousands of tokens. The graph database acts as the story's long-term memory and source of truth, while semantic embeddings help maintain flow and relevance.
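To make the hybrid context concrete, here is a minimal sketch of that retrieval step; the index name, node labels, and property names are placeholders of mine, not SAGA's actual schema:
```python
import requests
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def hybrid_context(query_text: str, k: int = 3) -> str:
    # Embed the query with Ollama's embeddings endpoint
    emb = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": query_text},
    ).json()["embedding"]
    with driver.session() as s:
        # Semantic recall: nearest past chapters via Neo4j's vector index
        chapters = s.run(
            "CALL db.index.vector.queryNodes('chapterEmbedding', $k, $emb) "
            "YIELD node, score RETURN node.summary AS summary",
            k=k, emb=emb,
        ).value("summary")
        # Reliable canon: plain graph query over non-provisional facts
        facts = s.run(
            "MATCH (a:Character)-[r]->(b) "
            "WHERE NOT coalesce(r.provisional, false) "
            "RETURN a.name + ' ' + type(r) + ' ' + b.name AS fact LIMIT 20"
        ).value("fact")
    return ("PAST CONTEXT:\n" + "\n".join(chapters)
            + "\n\nKNOWN FACTS:\n" + "\n".join(facts))
```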
**Current Performance Example:**
Using local GGUF models (Qwen3 14B for narration/planning, smaller Qwen3s for other tasks), SAGA generates:
* **3 chapters** (each ~13,000+ tokens of narrative)
* In approximately **11 minutes**
* This includes all planning, context generation, evaluation, and knowledge graph updates.
**Check it out & Get Involved:**
* **GitHub Repo:** [https://github.com/Lanerra/saga](https://github.com/Lanerra/saga) (The README has been updated with detailed setup instructions!)
* **Setup:** You'll need Python, Ollama (for embeddings), an OpenAI-API compatible LLM server, and Neo4j (Docker setup provided).
* **Reset Script:** `reset_neo4j.py` is still there to easily clear the database and start fresh.
* **Inspect KG:** The `inspect_kg.py` script mentioned previously has been replaced by direct Neo4j browser interaction (which is much more powerful for visualization).
I'm really proud of how far SAGA has come and believe it's pushing into some interesting territory for AI-assisted storytelling. I'd love for you all to try it out, see what kind of sagas NANA can spin up for you, and share your thoughts, feedback, or any issues you encounter.
What kind of stories will you create? | 2025-06-02T03:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l17m9g/saga_update_autonomous_novel_writing_with_deep_kg/ | MariusNocturnum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17m9g | false | null | t3_1l17m9g | /r/LocalLLaMA/comments/1l17m9g/saga_update_autonomous_novel_writing_with_deep_kg/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': 'vmohKiMaIalkUvT1Ey-JVw1JK3sVXOizSNS8yN44-wU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?width=108&crop=smart&auto=webp&s=912fc6747b6206ece9f37c1785188e5b32551151', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?width=216&crop=smart&auto=webp&s=4b0566f5cf7e3a5febe0347a7b881d5f99e79ee5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?width=320&crop=smart&auto=webp&s=f548b3b7005f03f44230142fb57820e12da40d62', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?width=640&crop=smart&auto=webp&s=fd9ca2077fad34d3c4b9d03d31556e83946e1430', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?width=960&crop=smart&auto=webp&s=50de9759d836c63724a167372732edafe2b3ea01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?width=1080&crop=smart&auto=webp&s=ad1d21ab9e8526d881dcc08113161bef0b229965', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vic3MgrCDbXNtyTGoIqqmDJ2TdHQJPKldbrpWUZEECE.jpg?auto=webp&s=634152c32205913990bc07e581e1465e853d1fb5', 'width': 1200}, 'variants': {}}]} |
Memory Layer Compatible with Local Llama | 0 | I built an open-source remote personal memory vault that works with MCP-compatible clients. You can just say "remember X, Y, Z" and then retrieve it later. You can store documents, and I am working on integrations with Obsidian and such. I'm looking for contributors to make this compatible with local Llama.
I want this to be the catch-all for who you are, so it can personalize the conversation to your personality. I'd love any and all support with this; check it out if you're interested.
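For would-be contributors, here is a minimal sketch of what a memory tool server looks like with the official Python MCP SDK (the tool names and in-memory list are stand-ins, not Jean Memory's actual code):
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("memory-vault")  # hypothetical server name
memories: list[str] = []       # stand-in for a real persistent store

@mcp.tool()
def remember(fact: str) -> str:
    """Store a fact about the user."""
    memories.append(fact)
    return f"Remembered: {fact}"

@mcp.tool()
def recall(query: str) -> list[str]:
    """Return stored facts that mention the query."""
    return [m for m in memories if query.lower() in m.lower()]

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```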
[jeanmemory.com](http://jeanmemory.com) | 2025-06-02T03:21:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l17sdd/memory_layer_compatible_with_local_llama/ | OneEither8511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17sdd | false | null | t3_1l17sdd | /r/LocalLLaMA/comments/1l17sdd/memory_layer_compatible_with_local_llama/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '0K3EP3BTVG7L7Lx8Qm2haQiAc6eProGUj9H_Y6QGMsc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?width=108&crop=smart&auto=webp&s=2498bff756608b692d3780a9ac2beb43816621f9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?width=216&crop=smart&auto=webp&s=d6372da40bb4abbc7f3ca3f89fe6de4985a19df2', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?width=320&crop=smart&auto=webp&s=ebeb26da0b588c13d9d04368de2b414da48ae539', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?width=640&crop=smart&auto=webp&s=d3d9c563a13ac8f89e7644c4d591c30ed78e88df', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?width=960&crop=smart&auto=webp&s=a7d0b516b0b31731fa60833aa00b7175e25d278a', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?width=1080&crop=smart&auto=webp&s=3f3d40722dc45ecc83eca5df7ee1e7d20a17ffbe', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/etLpvGUB2CO3ZrX2CORZsRVj6sm1uRSs2gMTuje9Kv8.jpg?auto=webp&s=e638b831cd1309b8dc82ce672487bb9c2097d17d', 'width': 1200}, 'variants': {}}]} |
anyone working on an interesting project | 1 | [removed] | 2025-06-02T03:31:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l17z1z/anyone_working_on_an_interesting_project/ | shoman30 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l17z1z | false | null | t3_1l17z1z | /r/LocalLLaMA/comments/1l17z1z/anyone_working_on_an_interesting_project/ | false | false | self | 1 | null |
Alienware R11 with RTX 3090 to run local AI? | 1 | [removed] | 2025-06-02T04:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l18mxb/alienware_r11_with_rtx_3090_to_run_local_ai/ | Brief_Original | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l18mxb | false | null | t3_1l18mxb | /r/LocalLLaMA/comments/1l18mxb/alienware_r11_with_rtx_3090_to_run_local_ai/ | false | false | self | 1 | null |
What's next? Behemoth? Qwen VL/Coder? Mistral Large Reasoning/Vision? | 11 | do you await any model? | 2025-06-02T04:35:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l194gj/whats_next_behemoth_qwen_vlcoder_mistral_large/ | secopsml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l194gj | false | null | t3_1l194gj | /r/LocalLLaMA/comments/1l194gj/whats_next_behemoth_qwen_vlcoder_mistral_large/ | false | false | self | 11 | null |
Looking for an AI Chat Interface Platform Similar to Open WebUI (With Specific Requirements) | 1 | [removed] | 2025-06-02T05:15:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l19s5s/looking_for_an_ai_chat_interface_platform_similar/ | Lethal_Protector_404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l19s5s | false | null | t3_1l19s5s | /r/LocalLLaMA/comments/1l19s5s/looking_for_an_ai_chat_interface_platform_similar/ | false | false | self | 1 | null |
IQ1_Smol_Boi | 418 | Some folks asked me for an R1-0528 quant that might fit on 128GiB RAM + 24GB VRAM. I didn't think it was possible, but it turns out my new smol boi `IQ1_S_R4` is 131GiB, actually runs okay (ik_llama.cpp fork only), and has perplexity lower ("better") than `Qwen3-235B-A22B-Q8_0`, which is almost twice the size! Not sure that means it is better, but it's kinda surprising to me.
Unsloth's newest smol boi is an odd `UD-TQ1_0` weighing in at 151GiB. The `TQ1_0` quant is a 1.6875 bpw quant type for TriLMs and BitNet b1.58 models. However, if you open up the sidebar on the model card, it doesn't actually have any TQ1_0 layers/tensors and is mostly a mix of IQN_S and such. So I'm not sure what is going on there or if it was a mistake. It does at least run from what I can tell, though I didn't try inferencing with it. They do have an `IQ1_S` as well, but it seems rather large given their recipe, though I've heard folks have had success with it.
Bartowski's smol boi `IQ1_M` is the next smallest I've seen at about 138GiB and seems to work okay in my limited testing. Surprising how these quants can still run at such low bit rates!
Anyway, I wouldn't recommend these smol bois if you have enough RAM+VRAM to fit a more optimized larger quant, but at least there are some options "for the desperate," haha...
Cheers! | 2025-06-02T05:26:51 | VoidAlchemy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l19yud | false | null | t3_1l19yud | /r/LocalLLaMA/comments/1l19yud/iq1_smol_boi/ | false | false | 418 | {'enabled': True, 'images': [{'id': 'W-hH3Ojd_aRT_pAe1zxAxaTry8Prv1J_g5owpNWj7ug', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/9u1teeqt4g4f1.png?width=108&crop=smart&auto=webp&s=97b77df2c7ad2de0f72aa8041ee27a467626c1d9', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/9u1teeqt4g4f1.png?width=216&crop=smart&auto=webp&s=542bb11cd430a2682dd66c90af167ba23cd67c4e', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/9u1teeqt4g4f1.png?width=320&crop=smart&auto=webp&s=5cd209dd1607a8726fe8b595b7d02fed7927d25e', 'width': 320}, {'height': 398, 'url': 'https://preview.redd.it/9u1teeqt4g4f1.png?width=640&crop=smart&auto=webp&s=bab7030e08d75ad0063051230aa543c0b2a3ac7f', 'width': 640}], 'source': {'height': 412, 'url': 'https://preview.redd.it/9u1teeqt4g4f1.png?auto=webp&s=d280b7c963f102e66173256d50bf351f30ff0407', 'width': 662}, 'variants': {}}]} |
||
KEEPER: Smarter Surveillance That Sees Like a Human | 1 | 2025-06-02T05:40:22 | https://v.redd.it/k47jp5uhbg4f1 | Constant-Marketing97 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1a6nm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/k47jp5uhbg4f1/DASHPlaylist.mpd?a=1751434836%2CYmJjNjRkYzQ4ZDQ3YmI2MDA2YzE5MDc3ZGRkMjBiOTY3ODZhY2UzNzYzZTllY2IwZmJjZmMxZmQzZmM0Y2YxNg%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/k47jp5uhbg4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1056, 'hls_url': 'https://v.redd.it/k47jp5uhbg4f1/HLSPlaylist.m3u8?a=1751434836%2CNDY1NWFmNWNhNzQ4NzgxMjRmODgzYzQyOTUwNmY1MWIyODg1MDBiMTk2YWJkOGE3YTBkNDc2NWRiMzI5NWMzMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/k47jp5uhbg4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l1a6nm | /r/LocalLLaMA/comments/1l1a6nm/keeper_smarter_surveillance_that_sees_like_a_human/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=108&crop=smart&format=pjpg&auto=webp&s=c6f342483f2ed36c375820ab252d80386b40edce', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=216&crop=smart&format=pjpg&auto=webp&s=b017a9366101148ecf45fb7c834c6a2830a94fca', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=320&crop=smart&format=pjpg&auto=webp&s=8d6a7a131267faa98994184597e2abdb8f6799ae', 'width': 320}, {'height': 352, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=640&crop=smart&format=pjpg&auto=webp&s=ef036a328c82ef52cb9a9264edbe44b25099313f', 'width': 640}, {'height': 528, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=960&crop=smart&format=pjpg&auto=webp&s=77a0ecadac188fba5389d4c94e2396a9524649c8', 'width': 960}, {'height': 594, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1bfb3475ad257675105080566bd93a03d9624b10', 'width': 1080}], 'source': {'height': 1386, 'url': 'https://external-preview.redd.it/YThkeDQ1dWhiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?format=pjpg&auto=webp&s=1039f206c9e0b21c5823195b561d9ae4b336a13a', 'width': 2520}, 'variants': {}}]} |
||
KEEPER: Smarter Surveillance That Sees Like a Human | 1 | 2025-06-02T05:41:22 | https://v.redd.it/3nnh64dqbg4f1 | Constant-Marketing97 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1a78h | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3nnh64dqbg4f1/DASHPlaylist.mpd?a=1751434896%2CNWYxY2Y5OTMxZjEwNzA2MDYwOTc4Y2I0OTc1N2NhMGYwNzgxZWM2ZmNjNWI5NjA4NDE3NDNkZDg1NTk0NjA1Yw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/3nnh64dqbg4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1056, 'hls_url': 'https://v.redd.it/3nnh64dqbg4f1/HLSPlaylist.m3u8?a=1751434896%2CNTJlOGUyNjRmOGVjYmQwN2MwNjMzOTY0YTgyMjZhYjBkYzc3ODM1YzJhODJhZjg4MTM2YTE0YjJmMzQwMmE0YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3nnh64dqbg4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l1a78h | /r/LocalLLaMA/comments/1l1a78h/keeper_smarter_surveillance_that_sees_like_a_human/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=108&crop=smart&format=pjpg&auto=webp&s=9d7778239ce4660efa417b96a39c81f36b9dfb91', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=216&crop=smart&format=pjpg&auto=webp&s=cbeb891077e48d70e9ba6e59372a26322d445438', 'width': 216}, {'height': 176, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=320&crop=smart&format=pjpg&auto=webp&s=4db0d8975c0c53931fad3fda7c39717d5c5dd931', 'width': 320}, {'height': 352, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=640&crop=smart&format=pjpg&auto=webp&s=1be36ae5b98453240f3b9df51a62f2a7a7765d94', 'width': 640}, {'height': 528, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=960&crop=smart&format=pjpg&auto=webp&s=c13ba9e2f43bb1757d5d670bce68e2437e4c6e7f', 'width': 960}, {'height': 594, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f9bc8937e0237759581b6aa0c195bb2c6d551a65', 'width': 1080}], 'source': {'height': 1386, 'url': 'https://external-preview.redd.it/dTAwZWU1ZHFiZzRmMVRHUSwufEhNdrXiUYlyX3cmGEIMD38eTPqE_6wvvZfs.png?format=pjpg&auto=webp&s=4b4ee0dc61c864137b5dd9ee69b0daedf5e3dc34', 'width': 2520}, 'variants': {}}]} |
||
Snapdragon 8 Elite gets 5.5 t/s on Qwen3 30B A3B | 89 | Phone is a Razr Ultra 2025 | 2025-06-02T05:44:39 | 1ncehost | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1a944 | false | null | t3_1l1a944 | /r/LocalLLaMA/comments/1l1a944/snapdragon_8_elite_gets_55_ts_on_qwen3_30b_a3b/ | false | false | 89 | {'enabled': True, 'images': [{'id': 'xmgABfvO1zSJbINJ-e4ZDYYCr3oRdPwgHC-A4qqr4ZI', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?width=108&crop=smart&auto=webp&s=f0e4e601fd14ad1d1fc02c56ad8d9e48243a840e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?width=216&crop=smart&auto=webp&s=34fa81470abadcbff7c23a985947554dff733f12', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?width=320&crop=smart&auto=webp&s=c60bc528164989e1f7c6aaac1207fdaf06d22682', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?width=640&crop=smart&auto=webp&s=a6e0ede5a0ed01ba1b9b850ec415585886e5bad1', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?width=960&crop=smart&auto=webp&s=106601aa23cfac77316b69d523b554e3f6baf0f9', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?width=1080&crop=smart&auto=webp&s=922b54676ede98e98c264e4ec98545eca04d2b82', 'width': 1080}], 'source': {'height': 2992, 'url': 'https://preview.redd.it/jagac0yccg4f1.png?auto=webp&s=077c0f96ffc9039b144b79f021f25184f4e46de3', 'width': 1224}, 'variants': {}}]} |
||
Model under 1B parameters with great performance | 1 | [removed] | 2025-06-02T06:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l1at3f/model_under_1b_parameters_with_great_perfomance/ | Josephdhub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1at3f | false | null | t3_1l1at3f | /r/LocalLLaMA/comments/1l1at3f/model_under_1b_parameters_with_great_perfomance/ | false | false | self | 1 | null |
[DEMO] I created an AI coding agent that debugs autonomously using dynamic runtime state in VS Code – Feedback Wanted! | 7 | I would like to share a demo of **Zentara Code,** a coding agent forked from Roo Code that can debug by leveraging runtime state. One of the main pain points when using coding agents is that they generate buggy code and have limited ability to fix errors, since they cannot leverage runtime debugging tools like a real human programmer. They cannot inspect runtime variable values, do stack tracing, etc. I want to build a tool that can write, refactor, and debug code much like a professional software engineer, with all the debugging tools of something like pdb or debugpy for Python.
Here is a short demo to showcase some of its core debugging capabilities – things like autonomous error diagnosis, precise execution control, breakpoint management, and state inspection. It integrates seamlessly with VS Code's debugger through DAP. It can even enhance pytest debugging by forcing pytest to bubble up exceptions.
The code isn't public yet; a few things still need to be polished. It will be released soon as an open-source project. I'd love to get your honest feedback to help me take it further.
* What are your initial impressions?
* Can it help your current vibe coding?
* What are the most exciting potential uses you see for an AI agent with these deep debugging abilities?
* Are there specific debugging challenges you struggle with that you'd want an AI like Zentara Code to solve?
* Any concerns or features?
Zentara Code leverages the Debug Adapter Protocol (DAP), making it language-agnostic for any language with a DAP-compliant debugger in VS Code. I have tested with Python, JavaScript, and TypeScript.
Please let me know your thoughts ! Your feedback at this early stage is incredibly valuable.
Thanks for watching! | 2025-06-02T06:45:17 | https://v.redd.it/poqqxlqvlg4f1 | bn_from_zentara | /r/LocalLLaMA/comments/1l1b6u0/demo_i_created_an_ai_coding_agent_that_debugs/ | 1970-01-01T00:00:00 | 0 | {} | 1l1b6u0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/poqqxlqvlg4f1/DASHPlaylist.mpd?a=1751568321%2CY2Y0MmRkOTk5MThlNzI3ZTE1M2RjNjBjN2I4NTA4YTgwYTZjODM0NDQzNjQ1NTEyMTVkMjU2ZTUwMmEzMTBhYQ%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/poqqxlqvlg4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 810, 'hls_url': 'https://v.redd.it/poqqxlqvlg4f1/HLSPlaylist.m3u8?a=1751568321%2CODA3ZWJiNTE4Mjg1YjJlOWFkZjQyMjM4ZjI5MjQ3ZWU4NjU2ZmJiMzNmODM4YTY2ODlkZTgxZWZjYjVkMDJjYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/poqqxlqvlg4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l1b6u0 | /r/LocalLLaMA/comments/1l1b6u0/demo_i_created_an_ai_coding_agent_that_debugs/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=108&crop=smart&format=pjpg&auto=webp&s=2e51897591c2decf50c45d451a09ca8ebb43c2ab', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=216&crop=smart&format=pjpg&auto=webp&s=a41cd52b0ec81d934c371f34f46daf5276ad7dae', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=320&crop=smart&format=pjpg&auto=webp&s=247b05b7903ead0e6248410df8bf55dff1d37623', 'width': 320}, {'height': 270, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=640&crop=smart&format=pjpg&auto=webp&s=42de7e6dddc0e79493688115b6e3eba2065dde92', 'width': 640}, {'height': 405, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=960&crop=smart&format=pjpg&auto=webp&s=ab01bc8bffefc34a947b5385f754d4d9f281e914', 'width': 960}, {'height': 455, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e4e332771b5d71fa2760f0ffb06784bc5b41a96e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MW5udDJzcXZsZzRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?format=pjpg&auto=webp&s=c3b7ed4abe6ac69a0305527d9e63796de11e0c18', 'width': 2560}, 'variants': {}}]} |
|
What LLM libraries/frameworks are worthwhile, and when is it better to roll your own from scratch? | 31 | Maybe I'm suffering from NIH, but the core of these systems can be quite simple to roll out using just torch/transformers/API calls.
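To illustrate how small that core can be, here is a minimal sketch using plain transformers (the model choice is arbitrary):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
messages = [{"role": "user", "content": "Why is the sky blue?"}]
out = chat(messages, max_new_tokens=128)
# the pipeline returns the whole chat; the last message is the reply
print(out[0]["generated_text"][-1]["content"])
```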
What libraries/frameworks do you find most valuable to use instead of rolling your own? | 2025-06-02T06:47:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l1b801/what_llm_librariesframeworks_are_worthwhile_and/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1b801 | false | null | t3_1l1b801 | /r/LocalLLaMA/comments/1l1b801/what_llm_librariesframeworks_are_worthwhile_and/ | false | false | self | 31 | null |
System Prompt Learning: Teaching your local LLMs to learn problem-solving strategies from experience (optillm plugin) | 36 | Hey r/LocalLlama!
I wanted to share something we've been working on that might interest folks running local LLMs - **System Prompt Learning (SPL)**.
# The Problem
You know how ChatGPT, Claude, etc. perform so well partly because they have incredibly detailed system prompts with sophisticated reasoning strategies? Most of us running local models just use basic prompts and miss out on those performance gains.
# What is SPL?
SPL implements what Andrej Karpathy called the "third paradigm" for LLM learning - instead of just pretraining and fine-tuning, models can now learn problem-solving strategies from their own experience.
# How it works:
* Automatically classifies problems into 16 types (math, coding, word problems, etc.)
* Builds a persistent database of effective solving strategies
* Selects the best strategies for each query
* Evaluates how well strategies worked and refines them over time
* All strategies are human-readable JSON - you can inspect and edit them
# Results:
Tested with gemini-2.0-flash-lite across math benchmarks:
* **Arena Hard**: 29% → 37.6% (+8.6%)
* **AIME24**: 23.33% → 30% (+6.67%)
* **OptILLMBench**: 61% → 65% (+4%)
* **MATH-500**: 85% → 85.6% (+0.6%)
After 500 queries, the system developed 129 strategies, refined 97 of them, and achieved much better problem-solving.
# For Local LLM Users:
* Works with **any OpenAI-compatible API** (so llama.cpp, Ollama, vLLM, etc.)
* **Runs completely locally** \- strategies stored in local JSON files
* **Two modes**: inference-only (default) or learning mode
* **Minimal overhead** \- just augments your system prompt
* **Open source** and easy to inspect/modify
# Setup:
```bash
pip install optillm
# Point to your local LLM endpoint
python optillm.py --base_url http://localhost:8080/v1
```
Then just add the `spl-` prefix to your model:
```python
model="spl-llama-3.2-3b"  # or whatever your model is
```
Enable learning mode to create new strategies:
```python
extra_body={"spl_learning": True}
```
# Example Strategy Learned:
The system automatically learned this strategy for word problems:
1. **Understand**: Read carefully, identify unknowns
2. **Plan**: Define variables, write equations
3. **Solve**: Step-by-step with units
4. **Verify**: Check reasonableness
All strategies are stored in `~/.optillm/spl/data/strategies.json` so you can back them up, share them, or manually edit them.
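If you want to peek at what it has learned, something like this works (the field names are my assumption about the schema; inspect the real file to confirm):
```python
import json
from pathlib import Path

path = Path.home() / ".optillm" / "spl" / "data" / "strategies.json"
data = json.loads(path.read_text())
# tolerate either a bare list or a wrapping object
items = data if isinstance(data, list) else data.get("strategies", [])
for s in items[:5]:
    print(s.get("problem_type", "?"), "->", str(s.get("strategy", s))[:80])
```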
# Why This Matters for Local LLMs:
* Your model gets **progressively better** at problem types you use frequently
* **Transparent learning** \- you can see exactly what strategies it develops
* **No external dependencies** \- everything runs locally
* **Transferable knowledge** \- you can share strategy files between deployments
This feels like a step toward local models that actually improve through use, rather than being static after training.
**Links:**
* GitHub: [https://github.com/codelion/optillm](https://github.com/codelion/optillm)
* SPL Plugin: [https://github.com/codelion/optillm/tree/main/optillm/plugins/spl](https://github.com/codelion/optillm/tree/main/optillm/plugins/spl)
* Technical article: [https://huggingface.co/blog/codelion/system-prompt-learning](https://huggingface.co/blog/codelion/system-prompt-learning)
* Andrej's original tweet: [https://x.com/karpathy/status/1921368644069765486](https://x.com/karpathy/status/1921368644069765486)
Anyone tried this yet? Would love to hear how it works with different local models!
**Edit:** Works great with reasoning models like DeepSeek-R1, QwQ, etc. The strategies help guide their thinking process. | 2025-06-02T07:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l1bjhm/system_prompt_learning_teaching_your_local_llms/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1bjhm | false | null | t3_1l1bjhm | /r/LocalLLaMA/comments/1l1bjhm/system_prompt_learning_teaching_your_local_llms/ | false | false | self | 36 | null |
Best accuracy vs speed tradeoff on a local setup | 1 | [removed] | 2025-06-02T07:09:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l1bjwr/best_accuracy_vs_speed_tradeoff_on_a_local_setup/ | Awkward_Sympathy4475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1bjwr | false | null | t3_1l1bjwr | /r/LocalLLaMA/comments/1l1bjwr/best_accuracy_vs_speed_tradeoff_on_a_local_setup/ | false | false | self | 1 | null |
Sharing my a demo of tool for easy handwritten fine-tuning dataset creation! | 1 | [removed] | 2025-06-02T07:17:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1bo95 | false | null | t3_1l1bo95 | /r/LocalLLaMA/comments/1l1bo95/sharing_my_a_demo_of_tool_for_easy_handwritten/ | false | false | default | 1 | null |
||
Best VLM for financial document processing | 1 | [removed] | 2025-06-02T07:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l1c0e3/best_vlm_for_financial_document_processing/ | SaasPhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1c0e3 | false | null | t3_1l1c0e3 | /r/LocalLLaMA/comments/1l1c0e3/best_vlm_for_financial_document_processing/ | false | false | self | 1 | null |
Start up ideas around LLM and vision models like flux | 0 | Hi Friends,
I am looking for suggestions. I am planning to start a startup around LLMs and LoRAs trained on specific customer data, like their website or business information.
I want to provide these solutions:
1. A chatbot that can help users navigate to different pages to complete certain tasks.
2. Tools for admins to get insights on data and visual representations, using a Flux model to generate images.
3. MCP servers for different use cases specific to a domain or organization.
My goal is to enable SMEs/small and medium organizations to renovate their existing online presence with AI - an LLM trained on their specific data.
How can I improve my idea further, and is it really going to work? I want to know how different organizations adopt AI and what services they are looking for.
I am planning to spend $2,000 USD to test it out. Please advise if I should not spend on it.
| 2025-06-02T08:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l1d31v/start_up_ideas_around_llm_and_vision_models_like/ | SearchTricky7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1d31v | false | null | t3_1l1d31v | /r/LocalLLaMA/comments/1l1d31v/start_up_ideas_around_llm_and_vision_models_like/ | false | false | self | 0 | null |
A personal AI assistant on my laptop with 16 GB RAM and RTX 3050 4GB video memory. Which model is feasible? | 0 | I have worked with AI and RAG as part of my profession, though most of that is glorified API calling. I don't have a speck of experience with local LLMs.
I want to build something that works on my machine: a low-end LLM that can make tool calls and respond to simple questions.
For example:
Me: Open reddit
LLM: makes a tool call that opens Reddit in the default browser
I intend to expand the functionality of this in the future, like making it write emails.
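That flow is straightforward to prototype; here is a minimal sketch assuming Ollama's `/api/chat` tool-calling (the tool schema and model choice are placeholders):
```python
import webbrowser

import requests

tools = [{"type": "function", "function": {
    "name": "open_url",
    "description": "Open a URL in the default browser",
    "parameters": {"type": "object",
                   "properties": {"url": {"type": "string"}},
                   "required": ["url"]}}}]

resp = requests.post("http://localhost:11434/api/chat", json={
    "model": "qwen2.5:3b",  # placeholder small model that supports tools
    "stream": False,
    "tools": tools,
    "messages": [{"role": "user", "content": "Open reddit"}],
}).json()

for call in resp["message"].get("tool_calls", []):
    if call["function"]["name"] == "open_url":
        webbrowser.open(call["function"]["arguments"]["url"])
```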
I want to know if it is feasible to run it on my laptop or even possible to run on my laptop. If possible, which models can I use for this? | 2025-06-02T09:15:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l1df0n/a_personal_ai_assistant_on_my_laptop_with_16_gb/ | WiseObjective8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1df0n | false | null | t3_1l1df0n | /r/LocalLLaMA/comments/1l1df0n/a_personal_ai_assistant_on_my_laptop_with_16_gb/ | false | false | self | 0 | null |
Looking for model recommendations for creative writing | 0 | Been using Fimbulvetr-11b-v2-i1 within LM Studio to generate a wide variety of fiction, 500 words at a time. Nothing commercial, just to amuse myself. But being limited to such short generations can be frustrating, especially when it starts skipping details from long prompts. When using Claude Sonnet, I saw it could produce responses triple that length. After looking into it, I learned about the concept of a context window, and saw this Fimbulvetr model was limited to 4k. I don't fully understand what that value means, but I can say confidently my PC can handle far more than this tiny-feeling model. Any recommendations? I didn't drop 2 grand on a gaming PC to use programs built for toaster PCs. I would like to generate 2k+ word responses if it's possible on my hardware.
Random PC specs:
Lenovo Legion tower PC
RTX 3060 GPU
16 gigs of ram | 2025-06-02T09:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l1dpic/looking_for_model_recommendations_for_creative/ | Bed-After | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1dpic | false | null | t3_1l1dpic | /r/LocalLLaMA/comments/1l1dpic/looking_for_model_recommendations_for_creative/ | false | false | self | 0 | null |
Best Video captioning model | 10 | Need to generate text captions from small video clips that later i can use to do semantic scene search. What are the best models for VRAM 12-32GB.
Maybe i can train/fine tune so i can do embeded search? | 2025-06-02T09:40:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l1drru/best_video_captioning_model/ | VihmaVillu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1drru | false | null | t3_1l1drru | /r/LocalLLaMA/comments/1l1drru/best_video_captioning_model/ | false | false | self | 10 | null |
Any ideas on how to make qwen 3 8b run on phone? | 2 | I'm developing an app where you can edit code from your GitHub repos using LLMs via llama.rn. Even at the lowest quantization it still crashes the app. A bit strange, since it can handle larger LLMs like Yi-Coder 9B.
Anyone got an idea on what to do or what to read to understand the issue better?
Or if anyone would like to test my app, you can try it here: https://www.lithelanding.com/ | 2025-06-02T09:40:46 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ds4n/any_ideas_on_how_to_make_qwen_3_8b_run_on_phone/ | AspecialistI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ds4n | false | null | t3_1l1ds4n | /r/LocalLLaMA/comments/1l1ds4n/any_ideas_on_how_to_make_qwen_3_8b_run_on_phone/ | false | false | self | 2 | null |
GPT4All, AnythingLLM, Open WebUI, or other? | 0 | I don't have the time I'd like to work on running LLMs locally. So far I have played with various models on GPT4All and a bit on AnythingLLM. In the interest of saving time, I am seeking opinions on which "front end" interface I should use with these various popular LLMs. I should note that I am currently most interested in developing a system for RAG or CAG. Most important to me right now is "chatting with my various documents." Any thoughts? | 2025-06-02T09:45:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l1dujm/gpt4all_anythingllm_open_webui_or_other/ | BobbyNGa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1dujm | false | null | t3_1l1dujm | /r/LocalLLaMA/comments/1l1dujm/gpt4all_anythingllm_open_webui_or_other/ | false | false | self | 0 | null |
What do people think about SGLang vs. vLLM (or any other framework)? | 1 | [removed] | 2025-06-02T10:02:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l1e42c/what_do_people_think_about_sglang_vs_vllm_or_any/ | Mother_Context_2446 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1e42c | false | null | t3_1l1e42c | /r/LocalLLaMA/comments/1l1e42c/what_do_people_think_about_sglang_vs_vllm_or_any/ | false | false | self | 1 | null |
Any node based tools for general AI workflows? | 1 | I'm looking if anyone built any Comfy UI style tools for all sorts of general AI workflows like LLMs, STT, TTS, basic stuff like HTTP requests, custom functions, etc. Something like a mix of Comfy UI and n8n. The closest thing I found is a closed source tool florafauna. | 2025-06-02T10:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1e4uz/any_node_based_tools_for_general_ai_workflows/ | GamerWael | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1e4uz | false | null | t3_1l1e4uz | /r/LocalLLaMA/comments/1l1e4uz/any_node_based_tools_for_general_ai_workflows/ | false | false | self | 1 | null |
Ignore the hype - AI companies still have no moat | 267 | An article I wrote a while back; I think r/LocalLLaMA still wins.
The basis of it is that every single AI tool has an open-source alternative – every. single. one – so programming-wise, for a new company, implementing these features is not a matter of development complexity but a matter of reaching the biggest audience.
Everything has an open-source alternative right now
Take for example | 2025-06-02T10:06:26 | https://river.berlin/blog/there-is-still-no-moat/ | No_Tea2273 | river.berlin | 1970-01-01T00:00:00 | 0 | {} | 1l1e6ic | false | null | t3_1l1e6ic | /r/LocalLLaMA/comments/1l1e6ic/ignore_the_hype_ai_companies_still_have_no_moat/ | false | false | 267 | {'enabled': False, 'images': [{'id': 'TU8AJKDkxfU0q12qDeRP3T0ItraWwkLCVZQ_QFZdlPo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TawOSRI4o3WDthoH5zp4cL7vlpQPtqKfMqXniUZMdX0.jpg?width=108&crop=smart&auto=webp&s=58751e5944a3c7f8a7e94ef84d9f6df289e90d68', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TawOSRI4o3WDthoH5zp4cL7vlpQPtqKfMqXniUZMdX0.jpg?width=216&crop=smart&auto=webp&s=52e349fd0602448d443d969b83d44d977bce300e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TawOSRI4o3WDthoH5zp4cL7vlpQPtqKfMqXniUZMdX0.jpg?width=320&crop=smart&auto=webp&s=ae20ccaf78acd50b0e5fce38cd4a5bb2d4d7b3cc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TawOSRI4o3WDthoH5zp4cL7vlpQPtqKfMqXniUZMdX0.jpg?width=640&crop=smart&auto=webp&s=cc884e61e4fbae7ba821197e1b5320440aaab413', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TawOSRI4o3WDthoH5zp4cL7vlpQPtqKfMqXniUZMdX0.jpg?width=960&crop=smart&auto=webp&s=6c9d37f16795d4c857899f3952414d7f96ae854d', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/TawOSRI4o3WDthoH5zp4cL7vlpQPtqKfMqXniUZMdX0.jpg?auto=webp&s=b25c27973601305827c3cd6ed6234b9e77772c4a', 'width': 1000}, 'variants': {}}]} |
|
Pinokio down for days | 1 | What's happening with [https://pinokio.computer/](https://pinokio.computer/)? It's been down for days, and because of that the Discover tab in the client is also blank, since it can't fetch data from the server.
i've also used a website availability checker and it also can't reach pinokio if anyone knows what's up please elaborate thanks | 2025-06-02T10:25:16 | Reys_dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1ehll | false | null | t3_1l1ehll | /r/LocalLLaMA/comments/1l1ehll/pinokio_down_for_days/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'De8Ah0XbfiCkCiFakq0q4wpjO1Zc_3Dg7EiiOGhJr88', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png?width=108&crop=smart&auto=webp&s=b886aa2ad682aed2db46dc15b82a9be0792a0e42', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png?width=216&crop=smart&auto=webp&s=a4fbe87c1651a83afc7a65dec51256ab0927dfa3', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png?width=320&crop=smart&auto=webp&s=e181a1a0969aa056c1040232e938ec21822ba083', 'width': 320}, {'height': 366, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png?width=640&crop=smart&auto=webp&s=b4e9ae160851e9fe5c504e272f064d56e01b31c8', 'width': 640}, {'height': 550, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png?width=960&crop=smart&auto=webp&s=c21d5396d91be4e88208018c9dc3b735a01934c8', 'width': 960}], 'source': {'height': 564, 'url': 'https://preview.redd.it/q2tfudf6qh4f1.png?auto=webp&s=0fd48047a310bb02624eb782086a1e8542dc02a4', 'width': 984}, 'variants': {}}]} |
||
Any fast and multilingual TTS model trained with a lightweight LLM? | 4 | There has been some work such as Orpheus, Octus, Zonos, etc.; however, they all seem to be English-only.
I am seeking a model trained multilingually and with promptable emotion.
Is anyone planning to train one? | 2025-06-02T10:31:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l1el53/any_fast_and_multilingual_tts_model_trained_with/ | LewisJin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1el53 | false | null | t3_1l1el53 | /r/LocalLLaMA/comments/1l1el53/any_fast_and_multilingual_tts_model_trained_with/ | false | false | self | 4 | null |
Best Local LLMs for RTX 4060 (8GB VRAM) & 32GB RAM on Asus Zephyrus G14 (2024)? | 1 | [removed] | 2025-06-02T10:55:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ezqv/best_local_llms_for_rtx_4060_8gb_vram_32gb_ram_on/ | andreaingrando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ezqv | false | null | t3_1l1ezqv | /r/LocalLLaMA/comments/1l1ezqv/best_local_llms_for_rtx_4060_8gb_vram_32gb_ram_on/ | false | false | self | 1 | null |
Local LLMs and User Tasks Unrelated to IT | 1 | [removed] | 2025-06-02T11:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l1foh9/local_llms_and_user_tasks_unrelated_to_it/ | KitchenPlayful3160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1foh9 | false | null | t3_1l1foh9 | /r/LocalLLaMA/comments/1l1foh9/local_llms_and_user_tasks_unrelated_to_it/ | false | false | self | 1 | null |
Local LLMs and user tasks unrelated to IT | 1 | [removed] | 2025-06-02T11:41:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ftpr/local_llms_and_user_tasks_unrelated_to_it/ | KitchenPlayful3160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ftpr | false | null | t3_1l1ftpr | /r/LocalLLaMA/comments/1l1ftpr/local_llms_and_user_tasks_unrelated_to_it/ | false | false | self | 1 | null |
training local offline AI Ollama model | 1 | [removed] | 2025-06-02T11:52:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l1g1c0/training_local_offline_ai_ollama_model/ | Prior-Initiative6925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1g1c0 | false | null | t3_1l1g1c0 | /r/LocalLLaMA/comments/1l1g1c0/training_local_offline_ai_ollama_model/ | false | false | self | 1 | null |
[DEMO] I created a coding agent that can do dynamic, runtime debugging. | 18 | I'm annoyed that current coding agents create buggy code and then cannot fix it. Current LLMs are said to be Ph.D.-level, yet they cannot fix some obvious bugs; they just loop around and around and offer the same wrong solution for the bug. At the same time they look very smart, much more knowledgeable than me. Why is that? My explanation is that they do not have access to the information I do. When I debug, I can look at variable values and go up and down the stack to figure out where the wrong values got in.
It seems to me that this can be fixed easily if we give a coding agent the rich context as we do when debugging by given them all the debugging tools. This approach has been pioneered previously by several posts such as :
[https://www.reddit.com/r/LocalLLaMA/comments/1inqb6n/letting\_llms\_using\_an\_ides\_debugger/](https://www.reddit.com/r/LocalLLaMA/comments/1inqb6n/letting_llms_using_an_ides_debugger/) , and [https://www.reddit.com/r/ClaudeAI/comments/1i3axh1/enable\_claude\_to\_interactively\_debug\_for\_you\_via/](https://www.reddit.com/r/ClaudeAI/comments/1i3axh1/enable_claude_to_interactively_debug_for_you_via/)
Those posts really provided the proof of concept of exactly what I am looking for. Also, Microsoft recently published a paper about their debug-gym, [https://www.microsoft.com/en-us/research/blog/debug-gym-an-environment-for-ai-coding-tools-to-learn-how-to-debug-code-like-programmers/](https://www.microsoft.com/en-us/research/blog/debug-gym-an-environment-for-ai-coding-tools-to-learn-how-to-debug-code-like-programmers/), saying that by leveraging runtime state knowledge, LLMs can improve pretty substantially on coding accuracy.
One of the previous works uses an MCP server approach. While an MCP server provides the flexibility to quickly change the coding agent, I could not make it work robustly and stably in my setting. Maybe the SSE transport layer of the MCP server does not work well. Also, current solutions only provide limited debugging functions. Inspired by those previous works, I expanded the debugging toolset and integrated it directly with my favorite coding agent - Roo-Code - skipping the MCP communication. Although this way I lose the plug-and-play flexibility of an MCP server, what I gain is more stable, robust performance.
Included is a demo of my coding agent - a fork of the wonderful coding agent Roo-Code. Besides writing code, it can set breakpoints, inspect stack variables, go up and down the stack, evaluate expressions, run statements, etc., and has access to most debugger tools. As Zentara Code - my forked coding agent - communicates with the debugger through the VSCode DAP, it is language agnostic and can work with any language that has a VSCode debugger extension. I have tested it with Python, TypeScript and JavaScript.
I mostly code in Python. I usually ask Zentara Code to write code for me, and then write pytest tests for the code it writes. Pytest by default captures all the assertion errors for its own analysis and does not bubble up the exception. I was able to make Zentara Code capture those pytest exceptions. Now Zentara Code can run those pytest tests, see the exception messages, and use runtime state to interactively debug the exceptions smartly.
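To illustrate the idea (my own minimal sketch, not Zentara Code's actual implementation; the plugin class and the `tests/` path are placeholders), a small pytest plugin can collect the raw exception info so the agent sees it instead of pytest swallowing it:

```python
import pytest

class ExceptionCollector:
    """Collect failing tests' exception text so an agent can inspect it."""
    def __init__(self):
        self.failures = []

    @pytest.hookimpl(hookwrapper=True)
    def pytest_runtest_makereport(self, item, call):
        outcome = yield
        report = outcome.get_result()
        if report.when == "call" and report.failed and call.excinfo is not None:
            # exconly() yields e.g. "AssertionError: expected [1, 2], got [2, 1]"
            self.failures.append((item.nodeid, call.excinfo.exconly()))

collector = ExceptionCollector()
pytest.main(["tests/"], plugins=[collector])
for nodeid, exc in collector.failures:
    print(nodeid, "->", exc)
```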
The code will be released soon, after I finish up the final touches. The attached demo illustrates how Zentara Code struggles with and then successfully debugs a buggy quicksort implementation using dynamic runtime info.
I just would like to share with you the preliminary result and get your initial impressions and feedbacks. | 2025-06-02T12:14:32 | https://v.redd.it/qic49y0h8i4f1 | bn_from_zentara | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1ggkp | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qic49y0h8i4f1/DASHPlaylist.mpd?a=1751458489%2CNTlhNzhhNGE0MmJhNmFhNzEyZWEwZWY2YmZjOGUwNjViM2Q2YzU1NDA2YzFhODcyNTM3ZGRhZmM1MmMzOTEyNA%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/qic49y0h8i4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 810, 'hls_url': 'https://v.redd.it/qic49y0h8i4f1/HLSPlaylist.m3u8?a=1751458489%2CNGRkZTYzNjRhMzNmNjMxMWIwMjA2OTJhZTFmZmEzMTk2Y2NlNTlhOTI3OWE0OTkwZDNkNDIzY2RlMzc3M2I0OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qic49y0h8i4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l1ggkp | /r/LocalLLaMA/comments/1l1ggkp/demo_i_created_a_coding_agent_that_can_do_dynamic/ | false | false | 18 | {'enabled': False, 'images': [{'id': 'MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=108&crop=smart&format=pjpg&auto=webp&s=fb95136b2067dde97d492a926777bc155e42a937', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=216&crop=smart&format=pjpg&auto=webp&s=f6caa0f82eaf319b8a4b77aef1d3df1e3207875a', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=320&crop=smart&format=pjpg&auto=webp&s=58083933a00e15cdbf3fc64b0cda96b622cd8716', 'width': 320}, {'height': 270, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=640&crop=smart&format=pjpg&auto=webp&s=582cbbada12a1b94c062db5c9bc9654b8c24d3d2', 'width': 640}, {'height': 405, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=960&crop=smart&format=pjpg&auto=webp&s=ac625d50c4ecfe184e6d421445ff9db76179aca0', 'width': 960}, {'height': 455, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6062d26ba6058aaac038af902fcee1cdbf942895', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MnJzMXR6MGg4aTRmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?format=pjpg&auto=webp&s=8b2a8a5f4455bf08c45a0e10d0fef67985ccf0df', 'width': 2560}, 'variants': {}}]} |
|
Anyone tried this? - Self improving AI agents | 56 | Repository for **Darwin Gödel Machine (DGM)**, a novel self-improving system that iteratively modifies its own code (thereby also improving its ability to modify its own codebase) and empirically validates each change using coding benchmarks.
[https://github.com/jennyzzt/dgm](https://github.com/jennyzzt/dgm)
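From the description, the core loop is roughly the following (a toy, runnable sketch of my own; `Agent`, `propose_self_modification`, and `score` are stand-ins, not the DGM codebase's actual API):

```python
import random

class Agent:
    """Stand-in for a coding agent; its 'code' is reduced to one skill number."""
    def __init__(self, skill: float):
        self.skill = skill

    def propose_self_modification(self) -> "Agent":
        # In DGM this step is an LLM editing its own codebase
        return Agent(self.skill + random.uniform(-0.1, 0.2))

def score(agent: Agent) -> float:
    # Stand-in for running a coding benchmark against the agent
    return agent.skill

archive = [Agent(0.1)]               # DGM keeps an archive, not a single lineage
for _ in range(20):
    parent = random.choice(archive)  # sample a parent from the archive
    child = parent.propose_self_modification()
    if score(child) > score(parent):
        archive.append(child)        # keep only empirically validated changes
print(f"best score in archive: {max(score(a) for a in archive):.2f}")
```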
| 2025-06-02T12:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l1glmq/anyone_tried_this_self_improving_ai_agents/ | davesmith001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1glmq | false | null | t3_1l1glmq | /r/LocalLLaMA/comments/1l1glmq/anyone_tried_this_self_improving_ai_agents/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'n2xrbopkMAwkYk9N1AXfdke1pr4pcaC3hC_Z_JrMqo8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?width=108&crop=smart&auto=webp&s=39c844529ca20ad8e34ef42add1bb79c5654de3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?width=216&crop=smart&auto=webp&s=008b10a54fcffef34dc33fa4c10acae431dd11fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?width=320&crop=smart&auto=webp&s=6d4736001a6fbf8164d22c1da1990141a6695588', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?width=640&crop=smart&auto=webp&s=cfa381d8cbad8c99b82e56d9554aefee88050de4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?width=960&crop=smart&auto=webp&s=162a8317f0c60d4d241e0e6d4791b48016a5a071', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?width=1080&crop=smart&auto=webp&s=af24e77d1acf6630a9d7439d0b57ec75ea170590', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Gb7ZKu2HwweiRm_e3UJm4oiqM8aRl9XkGGKSDyLISvg.jpg?auto=webp&s=22fd1b1076943a6527e2ad42811fea934a8ad8ec', 'width': 1200}, 'variants': {}}]} |
Which Open Source Model I should use for transcribing Audio Calls? Calls are in Indian Languages. I have used Whisper Large v3 and v2 and they are not good enough. | 1 | [removed] | 2025-06-02T12:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1glpp/which_open_source_model_i_should_use_for/ | sportoholic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1glpp | false | null | t3_1l1glpp | /r/LocalLLaMA/comments/1l1glpp/which_open_source_model_i_should_use_for/ | false | false | self | 1 | null |
What is the best model for Void editor with agentic capabilities? | 1 | [removed] | 2025-06-02T12:24:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l1gno3/what_is_the_best_model_for_void_editor_with/ | PreparationTrue9138 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1gno3 | false | null | t3_1l1gno3 | /r/LocalLLaMA/comments/1l1gno3/what_is_the_best_model_for_void_editor_with/ | false | false | self | 1 | null |
Anyone Used an LLM to Auto-Tag Inventory in a Dashboard? | 1 | [removed] | 2025-06-02T12:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l1gpn9/anyone_used_an_llm_to_autotag_inventory_in_a/ | tonyblu331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1gpn9 | false | null | t3_1l1gpn9 | /r/LocalLLaMA/comments/1l1gpn9/anyone_used_an_llm_to_autotag_inventory_in_a/ | false | false | self | 1 | null |
Which model will be good to Auto-Tag Inventory in a Dashboard? | 1 | [removed] | 2025-06-02T12:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l1gs9c/which_model_will_be_good_to_autotag_inventory_in/ | tonyblu331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1gs9c | false | null | t3_1l1gs9c | /r/LocalLLaMA/comments/1l1gs9c/which_model_will_be_good_to_autotag_inventory_in/ | false | false | self | 1 | null |
What is the best LLM to run locally? | 1 | [removed] | 2025-06-02T12:34:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l1guq3/what_is_the_best_llm_to_run_locally/ | Intelligent_Pop_4973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1guq3 | false | null | t3_1l1guq3 | /r/LocalLLaMA/comments/1l1guq3/what_is_the_best_llm_to_run_locally/ | false | false | self | 1 | null |
Which Open Source Model I should use for transcribing Audio Calls? Calls are in Indian Languages. I have used Whisper Large v3 and v2 and they are not good enough. | 1 | [removed] | 2025-06-02T12:45:51 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1h3my | false | null | t3_1l1h3my | /r/LocalLLaMA/comments/1l1h3my/which_open_source_model_i_should_use_for/ | false | false | default | 1 | null |
||
Best Open source LLMs for tool call / structured output | 0 | I have tried Qwen models (both 2.5 and 3) but they still get the output wrong (using vLLM). At least Qwen 32B (both thinking and non-thinking) struggles with the output format I specify. I have tried guided decoding too, but no luck; it sometimes works, but it's super unstable in terms of output. Llama 4 is nice, but sometimes it gets stuck in a loop of calling tools or not adhering to what I asked. Would appreciate your recommendations. | 2025-06-02T12:48:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1h5dq/best_open_source_llms_for_tool_call_structured/ | Initial_Track6190 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1h5dq | false | null | t3_1l1h5dq | /r/LocalLLaMA/comments/1l1h5dq/best_open_source_llms_for_tool_call_structured/ | false | false | self | 0 | null
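For anyone hitting the same wall: a minimal sketch of vLLM's guided decoding through its OpenAI-compatible server (`guided_json` is vLLM's structured-output extension; the model name and port below are placeholders):

```python
from openai import OpenAI

# JSON schema the output must conform to
schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}, "population": {"type": "integer"}},
    "required": ["city", "population"],
}

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-32B-Instruct",  # placeholder: whatever vLLM is serving
    messages=[{"role": "user", "content": "Name a city and its population as JSON."}],
    extra_body={"guided_json": schema},  # vLLM-specific constrained decoding
)
print(resp.choices[0].message.content)
```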
What are the best open source llms to be used for structured output? | 1 | [removed] | 2025-06-02T12:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l1h94o/what_are_the_best_open_source_llms_to_be_used_for/ | mrpeakyblinder2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1h94o | false | null | t3_1l1h94o | /r/LocalLLaMA/comments/1l1h94o/what_are_the_best_open_source_llms_to_be_used_for/ | false | false | self | 1 | null |
Why is Qwen so neurotic | 1 | 2025-06-02T13:07:11 | nat2r | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1hk8d | false | null | t3_1l1hk8d | /r/LocalLLaMA/comments/1l1hk8d/why_is_qwen_so_neurotic/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'airmeXL9rUqFmqQunaV-JWtmNZAdBT0XJlrbNc52X-c', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/v9fg9l49ji4f1.png?width=108&crop=smart&auto=webp&s=9f061924c02b00aad6dc14755d71308864817d42', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/v9fg9l49ji4f1.png?width=216&crop=smart&auto=webp&s=95add3ed933fa5e3bdb5c51e0f9875e4070cbaa1', 'width': 216}, {'height': 247, 'url': 'https://preview.redd.it/v9fg9l49ji4f1.png?width=320&crop=smart&auto=webp&s=df3e073db289f43b7aee42a7f0b085f4e6a31169', 'width': 320}, {'height': 494, 'url': 'https://preview.redd.it/v9fg9l49ji4f1.png?width=640&crop=smart&auto=webp&s=00f4d90c06c2416ffff8bf5932a69c22c0294aac', 'width': 640}], 'source': {'height': 599, 'url': 'https://preview.redd.it/v9fg9l49ji4f1.png?auto=webp&s=8186de05eaaf36ac7965f52e320f4212c91493b3', 'width': 775}, 'variants': {}}]} |
|||
Tensor offload hunt for Qwen3 | 1 | [removed] | 2025-06-02T13:18:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ht87/tensor_offload_hunt_for_qwen3/ | SimilarWarthog8393 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ht87 | false | null | t3_1l1ht87 | /r/LocalLLaMA/comments/1l1ht87/tensor_offload_hunt_for_qwen3/ | false | false | self | 1 | null |
Agent controlling iPhone using OpenAI API | 1 | Seems like it Uses Xcode UI tests + accessibility tree to look into apps, and performs swipes, taps, to get things done. So technically it might be possible with 3n as it has vision to run it locally.
[https://github.com/rounak/PhoneAgent](https://github.com/rounak/PhoneAgent) | 2025-06-02T13:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l1hyns/agent_controlling_iphone_using_openai_api/ | Predatedtomcat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1hyns | false | null | t3_1l1hyns | /r/LocalLLaMA/comments/1l1hyns/agent_controlling_iphone_using_openai_api/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zDyNdvXFGpItNCUiMFgkCUffHN_KZl5cvnnSqBXwo9M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?width=108&crop=smart&auto=webp&s=ed96be4ea713574777e10afd9fbd0dbe39f68b76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?width=216&crop=smart&auto=webp&s=2e3483fb3629bfcfbe3fa413977efc6d9c20a3b3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?width=320&crop=smart&auto=webp&s=61ed2585a76e7244b56524810c7d1f766ef4b21f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?width=640&crop=smart&auto=webp&s=649fc4e47c85b28853b9e54eb6d2115cfc3802cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?width=960&crop=smart&auto=webp&s=7e239ea7e97bbf67912034a7277200a30db98c44', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?width=1080&crop=smart&auto=webp&s=b646cc6d8a385699f207dcb5b424e16066d9ba6d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zIj_eL1O8iHrq6FaXg_BzARBvQZvfpNKtHpl5GTKXmY.jpg?auto=webp&s=81b96f78ee4c3b20e304bf98b782c6d58db739c9', 'width': 1200}, 'variants': {}}]} |
The duality of man | 1 | 2025-06-02T13:25:59 | poormail | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l1hziq | false | null | t3_1l1hziq | /r/LocalLLaMA/comments/1l1hziq/the_duality_of_man/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'wKEfZcL_Y-hZLOBAoUjcy5ERUrVD1j6VSbqEKHxtotg', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?width=108&crop=smart&auto=webp&s=01ce335493bdb13ee75c5553bf1db5496c30a863', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?width=216&crop=smart&auto=webp&s=79fd55b6298be3dcac1108474dd47bc057b27a5d', 'width': 216}, {'height': 294, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?width=320&crop=smart&auto=webp&s=5aa95cd9b69a8ab7e28f02c99ca4501576e1a2c9', 'width': 320}, {'height': 588, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?width=640&crop=smart&auto=webp&s=649ca3f9502033e64adde34746f46b4cc8b86162', 'width': 640}, {'height': 882, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?width=960&crop=smart&auto=webp&s=ec61bb2e521087b3b573336c12b6b6535f2b33db', 'width': 960}, {'height': 993, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?width=1080&crop=smart&auto=webp&s=817d35deb77fc7e321ea69a54e2d5e8f6dc394d5', 'width': 1080}], 'source': {'height': 993, 'url': 'https://preview.redd.it/i1qu90ynmi4f1.png?auto=webp&s=de593b600afef2931d52ad3d1345ba4961d0c06d', 'width': 1080}, 'variants': {}}]} |
|||
MedGemma on Android | 5 | Any way to use the multimodal capabilities of MedGemma on Android? I tried both the Layla and Crosstalk apps, but the model can't read images in either. | 2025-06-02T13:54:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l1imus/medgemma_on_android/ | caiporadomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1imus | false | null | t3_1l1imus | /r/LocalLLaMA/comments/1l1imus/medgemma_on_android/ | false | false | self | 5 | null
Enterprise-ready solution for local LLM | 1 | [removed] | 2025-06-02T13:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l1imwu/enterpriseready_solution_for_local_llm/ | Soft_Protection2836 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1imwu | false | null | t3_1l1imwu | /r/LocalLLaMA/comments/1l1imwu/enterpriseready_solution_for_local_llm/ | false | false | self | 1 | null |
Drift Audit | 1 | [removed] | 2025-06-02T14:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1j1i7/drift_audit/ | ShipOk3732 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1j1i7 | false | null | t3_1l1j1i7 | /r/LocalLLaMA/comments/1l1j1i7/drift_audit/ | false | false | self | 1 | null |
NVIDIA RTX PRO 6000 Unlocks GB202's Full Performance In Gaming: Beats GeForce RTX 5090 Convincingly | 80 | 2025-06-02T14:20:16 | https://wccftech.com/nvidia-rtx-pro-6000-beats-geforce-rtx-5090/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1l1j94p | false | null | t3_1l1j94p | /r/LocalLLaMA/comments/1l1j94p/nvidia_rtx_pro_6000_unlocks_gb202s_full/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'JO87FqpgwRig4JJap9mmFU_C_QcRKIKsV0AaCsC1zCI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?width=108&crop=smart&auto=webp&s=485c236f1d332f6b0fa8a2e9bfe1a2f3878d14fe', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?width=216&crop=smart&auto=webp&s=e2b884abb1c9ecc22ae153f693815c6156f83cf3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?width=320&crop=smart&auto=webp&s=ed3fad4b9d5871bf87c6973299b52335d8ece981', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?width=640&crop=smart&auto=webp&s=64ca9b6f5516529ad31d61311b7ab6293c4d549c', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?width=960&crop=smart&auto=webp&s=1fe810aa923d738be3b571264eed1380b581eff6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?width=1080&crop=smart&auto=webp&s=709a0a4da15d97e07f6b827bb95ee35583ca085c', 'width': 1080}], 'source': {'height': 756, 'url': 'https://external-preview.redd.it/CZ499DlxtUi8-a0hH-i2iuvuqGABLEdCAAN2p00rlA0.jpg?auto=webp&s=3edd9b7335f254456517d30653f73806038e4348', 'width': 1345}, 'variants': {}}]} |
||
R1-0528 won't stop thinking | 1 | If anyone can help with this issue, or provide some things to keep in mind when setting up R1-0528, that would be appreciated. It can handle small requests just fine - e.g., ask it for a recipe and it can give you one, albeit with something weird here or there - but it gets trapped in a circuitous thought pattern when I give it a problem from LeetCode. When I first pulled it down, it would fall into self-deprecating gibberish; after messing with the settings some, it's staying on topic but still can't come to an answer. I've tried other coding problems, like one of the example prompts in Unsloth's walkthrough, but it still does the same thing. The thinking itself is pretty fast, but it just doesn't come to a solution. Anyone else running into this, or ran into this and found a solution?
I've tried Ollama's models and Unsloth's, different quantizations, and various tweaks to the settings in Open WebUI: temp at .6, top_p at .95, min_p at .01. I even set num_ctx for a bit, because I thought Ollama was only doing 2048, but that didn't help. I've followed Unsloth's walkthrough. My PC has a 14th gen i7, a 4070 Ti, and 16GB RAM. | 2025-06-02T14:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jla0/r10528_wont_stop_thinking/ | madman24k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jla0 | false | null | t3_1l1jla0 | /r/LocalLLaMA/comments/1l1jla0/r10528_wont_stop_thinking/ | false | false | self | 1 | null
What Should Void Editor Provider Settings Be For llama.cpp (OpenAI compatible)? | 1 | [removed] | 2025-06-02T14:37:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l1joxl/what_should_void_editor_provider_settings_be_for/ | je11eebean | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1joxl | false | null | t3_1l1joxl | /r/LocalLLaMA/comments/1l1joxl/what_should_void_editor_provider_settings_be_for/ | false | false | self | 1 | null |
Model Tuning and Re-Tuning Problem. | 1 | [removed] | 2025-06-02T14:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jqc6/model_tuning_and_retuning_problem/ | Desperate_System3058 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jqc6 | false | null | t3_1l1jqc6 | /r/LocalLLaMA/comments/1l1jqc6/model_tuning_and_retuning_problem/ | false | false | self | 1 | null |
Is Bandwidth of Oculink port enough to inference local LLMs? | 1 | The RTX 3090 has a memory bandwidth of 936.2 GB/s. If I connect the 3090 to a mini PC over an OCuLink port, will the bandwidth be limited to 64 Gbps? | 2025-06-02T14:41:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jsmq/is_bandwidth_of_oculink_port_enough_to_inference/ | Relative_Rope4234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jsmq | false | null | t3_1l1jsmq | /r/LocalLLaMA/comments/1l1jsmq/is_bandwidth_of_oculink_port_enough_to_inference/ | false | false | self | 1 | null
Smallest LLM you tried that's legit | 176 | what's the smallest LLM you've used that gives proper text, not just random gibberish?
I've tried qwen2.5:0.5B. It works pretty well for me, actually quite good | 2025-06-02T14:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l1jyld/smallest_llm_you_tried_thats_legit/ | Remarkable-Law9287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1jyld | false | null | t3_1l1jyld | /r/LocalLLaMA/comments/1l1jyld/smallest_llm_you_tried_thats_legit/ | false | false | self | 176 | null
New to LLMs — Where Do I Even Start? (Using LM Studio + RTX 4050) | 1 | [removed] | 2025-06-02T14:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l1k0fl/new_to_llms_where_do_i_even_start_using_lm_studio/ | penumbrae_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1k0fl | false | null | t3_1l1k0fl | /r/LocalLLaMA/comments/1l1k0fl/new_to_llms_where_do_i_even_start_using_lm_studio/ | false | false | self | 1 | null |
Multimodal Monday #10: Unified Frameworks, Specialized Efficiency | 1 | [removed] | 2025-06-02T15:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l1le0w/multimodal_monday_10_unified_frameworks/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1le0w | false | null | t3_1l1le0w | /r/LocalLLaMA/comments/1l1le0w/multimodal_monday_10_unified_frameworks/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zn7lJoCz71Sa-nFC6TpZPBPGqutrbCLUZvHJf1J43dk', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?width=108&crop=smart&auto=webp&s=8b6644abcdf07a87206a196aa9d01ee52c160fe6', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?width=216&crop=smart&auto=webp&s=6dfc189aed006ebb773222bf9e6890362cc175f7', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?width=320&crop=smart&auto=webp&s=341c442d0f0e1f90f8bf650161ed7f7244e3248b', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?width=640&crop=smart&auto=webp&s=66bcd9b51eea5e789bda1e57a17f8413074d8186', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?width=960&crop=smart&auto=webp&s=39265c0d1eb19c786cd7a69039d8dc0499933cc8', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?width=1080&crop=smart&auto=webp&s=4da57705f3ae6513fee3bbbcb95e0fff04ce1905', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/9KCIzmMioY39tZ6CcD1Xsvpr2CbhSDfL1UxK8Ldw7sk.jpg?auto=webp&s=2f7b66441dd92229478b5a9dcb46c4568f103433', 'width': 1536}, 'variants': {}}]} |
Multiturn causes additional output Quality? | 1 | So recently, while just testing some things, I tried to change how I process the user/assistant chat messages.
Instead of sending alternating user and assistant messages, I passed the entire chat as raw text, with "user:" and "assistant:" prefixes, inside a single user message.
System prompt was kept the same.
The post-processing looked like this:
Please fulfill users request taking the previous chat history into account.
<Chat_History>
....
</Chat_History>
Here is users next message.
user:
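For reference, a minimal sketch of the flattening described above (my own illustration; the message-dict schema is assumed):

```python
def flatten_history(messages: list[dict], next_user_msg: str) -> str:
    """Collapse a multiturn chat into the single-user-message format above."""
    history = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    return (
        "Please fulfill users request taking the previous chat history into account.\n"
        f"<Chat_History>\n{history}\n</Chat_History>\n"
        "Here is users next message.\n"
        f"user: {next_user_msg}"
    )

print(flatten_history(
    [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}],
    "What did I just say?",
))
```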
Has anyone else seen this behavior? It seems that while higher-context requests degrade model output, instruction following, etc., the multi-round format creates some additional degradation. Would it be better to just use single turn instead? | 2025-06-02T15:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lgvi/multiturn_causes_additional_output_quality/ | Federal_Order4324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lgvi | false | null | t3_1l1lgvi | /r/LocalLLaMA/comments/1l1lgvi/multiturn_causes_additional_output_quality/ | false | false | self | 1 | null
Tips with double 3090 setup | 0 | I'm planning on buying a second 3090 to expand the possibilities of what I can generate; it's going to cost around 500-600 euros.
I have a Ryzen 5 5600X, which I have been delaying upgrading but might do as well, mostly because of gaming. I have 32GB of RAM. The motherboard is a B550-GAMING-EDGE-WIFI, which I will probably switch when upgrading the CPU to AM5.
Does anyone who has this setup have any tips or mistakes to avoid? | 2025-06-02T15:52:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lksz/tips_with_double_3090_setup/ | Lonhanha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lksz | false | null | t3_1l1lksz | /r/LocalLLaMA/comments/1l1lksz/tips_with_double_3090_setup/ | false | false | self | 0 | null
Which LLM is best at understanding information in spreadsheets? | 3 | I have been having trouble finding an LLM that can properly process spreadsheet data. I've tried Gemma 8b and the latest DeepSeek, yet both struggle to do even simple matching. I haven't tried Gemma 27b yet, but I'm just not sure what I'm missing here.
I'm running on a 4090 and i9 with 64gb. | 2025-06-02T15:58:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lqdm/which_llm_is_best_at_understanding_information_in/ | ColoradoCyclist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lqdm | false | null | t3_1l1lqdm | /r/LocalLLaMA/comments/1l1lqdm/which_llm_is_best_at_understanding_information_in/ | false | false | self | 3 | null |
PlayAI's Latest Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T15:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lrcm/playais_latest_speech_editing_model_playdiffusion/ | SandSalt8370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lrcm | false | null | t3_1l1lrcm | /r/LocalLLaMA/comments/1l1lrcm/playais_latest_speech_editing_model_playdiffusion/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'rfZh-VjWw-fpYgDNs403ia4KfWbi-8eAXIVDDmzS5-8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=108&crop=smart&auto=webp&s=6fd21e84a4656b2763783d6fafcc09e46dee9870', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=216&crop=smart&auto=webp&s=9a200095d70ef94bafb1d066596ca04acae43bde', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=320&crop=smart&auto=webp&s=7e30aa805c9f8f9df5fc2f65652de8f572642e24', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=640&crop=smart&auto=webp&s=c95a8ba3347c93b04c948e7f5cc00af916eefffa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=960&crop=smart&auto=webp&s=ca90717bbbe6e666431ef38df76f879af17dbcf7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=1080&crop=smart&auto=webp&s=939e9dd6e46527a10735e3eb5c76abc84e176c30', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?auto=webp&s=b7848f502baf64ff0774d69d8493c406ee3786c7', 'width': 1200}, 'variants': {}}]} |
|
PlayAI's Latest Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:01:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l1lt8r/playais_latest_speech_editing_model_playdiffusion/ | SandSalt8370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1lt8r | false | null | t3_1l1lt8r | /r/LocalLLaMA/comments/1l1lt8r/playais_latest_speech_editing_model_playdiffusion/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rfZh-VjWw-fpYgDNs403ia4KfWbi-8eAXIVDDmzS5-8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=108&crop=smart&auto=webp&s=6fd21e84a4656b2763783d6fafcc09e46dee9870', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=216&crop=smart&auto=webp&s=9a200095d70ef94bafb1d066596ca04acae43bde', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=320&crop=smart&auto=webp&s=7e30aa805c9f8f9df5fc2f65652de8f572642e24', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=640&crop=smart&auto=webp&s=c95a8ba3347c93b04c948e7f5cc00af916eefffa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=960&crop=smart&auto=webp&s=ca90717bbbe6e666431ef38df76f879af17dbcf7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?width=1080&crop=smart&auto=webp&s=939e9dd6e46527a10735e3eb5c76abc84e176c30', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uarty8mt4J9iFf8F18qF7HEd72u8ervh99MPqT0aCds.jpg?auto=webp&s=b7848f502baf64ff0774d69d8493c406ee3786c7', 'width': 1200}, 'variants': {}}]} |
I Built a Better Gumloop in 48 Hours with Vibe Coding | 1 | Most no-code agent builders (Gumloop) are just workflow automation with LLM calls. They're not built for actual agents that need to:
* Make dynamic routing decisions
* Handle complex tool orchestration
* Support ANY model (not just OpenAI)
**Agent Framework:** LangGraph (JS) because agents ARE graphs - nodes and edges map perfectly. Native graph execution, better for complex routing.
**Tool Integration:** Composio for 100+ pre-built tools with auth handled. Here's why this was crucial: Every tool has different auth flows - some want API keys, others OAuth. Multiply that by 100+ tools and you have a maintenance nightmare. Composio abstracts all of this away and provides me with ready functions for each tool that the agent can execute.
**Tech Stack**
Frontend is entirely vibe coded. Skipped Lovable/[Bolt.new](http://Bolt.new) - easier to get code directly in Cursor. My setup:
* **GPT-4.1**: The sniper. Precise component tweaks.
* **Gemini 2.0 Flash**: The machine gun. Rewrites entire components with cross-file context.
* **21st Dev's MCP Server**: Beautiful shadcn components from descriptions.
The drag-and-drop canvas? ReactFlow + moving grid. Would've killed me manually.
**Core Design: Just 4 Node Types**
1. **Input Node** - Data entry
2. **LLM Node** - ANY model
3. **Tool Node** - Executes actions
4. **Output Node** - Collects results
An "agent" = LLM + Tool nodes with feedback loops. Flows save as JSON graphs, executed sequentially by node type.
**Agent Patterns Supported** (from Anthropic's guide):
1. Prompt Chaining
2. Parallelization
3. Routing
4. Evaluator-Optimizer
5. Tool-Augmented Agents
The code is on GitHub \[here\](https://github.com/ComposioHQ/agent-flow). Fork it, break it, make it better. | 2025-06-02T16:02:26 | https://v.redd.it/vlmx362kcj4f1 | goddamnit_1 | /r/LocalLLaMA/comments/1l1luhf/i_built_a_better_gumloop_in_48_hours_with_vibe/ | 1970-01-01T00:00:00 | 0 | {} | 1l1luhf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vlmx362kcj4f1/DASHPlaylist.mpd?a=1751601753%2CNWNiYzM5ZGQwYWM2MDNiMzM1NmYwMjJlOTdkYWQ1ZmM2ZWE2MzI0NWMxZjNhMDExYzcyNGIwZDYyNGU0MjE3Yg%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/vlmx362kcj4f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/vlmx362kcj4f1/HLSPlaylist.m3u8?a=1751601753%2CNDUyMjk1N2M1ZmRiYjM5OWZlZjcwOTRiZmI2NTdlZTlkMmFiMGYwNDM0MzYwNzE5YTIwZWNlYzYwYmM3ZmZlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vlmx362kcj4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1916}} | t3_1l1luhf | /r/LocalLLaMA/comments/1l1luhf/i_built_a_better_gumloop_in_48_hours_with_vibe/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=108&crop=smart&format=pjpg&auto=webp&s=58d91aed6e6b8fda287b40f34c042a8db13e0d88', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=216&crop=smart&format=pjpg&auto=webp&s=6cb239f995dc16e009d041ea56e4958b410b793f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=320&crop=smart&format=pjpg&auto=webp&s=d841caec9c736627c6b4c303cebb35876d62b5f5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=640&crop=smart&format=pjpg&auto=webp&s=9c9e8ebdf528cb88a78b4704a4567abb15fc3030', 'width': 640}, {'height': 541, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=960&crop=smart&format=pjpg&auto=webp&s=37e36c2a062008b357d6df73d7b03e9980078ffa', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c071901678d957dae7e999697bd567bf42968aef', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OTJxeHg2MmtjajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?format=pjpg&auto=webp&s=4705280444a29e1c9a9da148e5b41e09e2756fdd', 'width': 1916}, 'variants': {}}]} |
|
What's a general model 14b or less that genuinely impresses you? | 31 | I'm looking for a general-purpose model that is exceptional and outstanding, able to do a wide array of tasks, especially administrative ones: preparing PowerPoint slides, drafting the text that should be put into documents, taking notes on stuff, and converting ugly, messy, unformatted notes into something tangible. I need a model that can do that. Currently I've been using Phi, but it's really not that great; I'm kind of disappointed in it | 2025-06-02T16:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l1luwz/whats_a_general_model_14b_or_less_that_genuinely/ | intimate_sniffer69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1luwz | false | null | t3_1l1luwz | /r/LocalLLaMA/comments/1l1luwz/whats_a_general_model_14b_or_less_that_genuinely/ | false | false | self | 31 | null
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 3 | PlayAI open-sourced a new Speech Editing model today that allows for precise & clean speech editing. A huge step up from traditional autoregressive models that aren't designed for this task. | 2025-06-02T16:03:46 | https://huggingface.co/spaces/PlayHT/PlayDiffusion | SandSalt8370 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l1lvri | false | null | t3_1l1lvri | /r/LocalLLaMA/comments/1l1lvri/playais_latest_diffusionbased_speech_editing/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'CqblY3Zg0YyBkT7WL4m7rTHmTkmHkQsN6Ve3JKLmzUk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=108&crop=smart&auto=webp&s=e9fb57a5c0e50ad13a69c186f8b3a8edb818eacc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=216&crop=smart&auto=webp&s=e50d0191a71e3932428b2882728c3c438e7d48e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=320&crop=smart&auto=webp&s=3f5afd9c8788701f6318b7f5be8bc2a50cc8c57d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=640&crop=smart&auto=webp&s=9fb203fbc158aa6c10f6f406143fd8c1a2f16c07', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=960&crop=smart&auto=webp&s=0be45dd3441bf1725db49eba1d03e72b0bba59f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=1080&crop=smart&auto=webp&s=d575ff06b223cd1dad70595d559496003ddcaa4a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?auto=webp&s=5ccd52bca3ce4c5da2cffc4af087a42feb4c23b6', 'width': 1200}, 'variants': {}}]} |
|
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:07:36 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1lzbm | false | null | t3_1l1lzbm | /r/LocalLLaMA/comments/1l1lzbm/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |
||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:08:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1lzxb | false | null | t3_1l1lzxb | /r/LocalLLaMA/comments/1l1lzxb/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |
||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:08:38 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m0ai | false | null | t3_1l1m0ai | /r/LocalLLaMA/comments/1l1m0ai/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |
||
PlayDiffusion - PlayAI's Latest Diffusion-based Speech Editing Model | 1 | [removed] | 2025-06-02T16:09:16 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m0uj | false | null | t3_1l1m0uj | /r/LocalLLaMA/comments/1l1m0uj/playdiffusion_playais_latest_diffusionbased/ | true | false | default | 1 | null |
||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [deleted] | 2025-06-02T16:09:48 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m1dj | false | null | t3_1l1m1dj | /r/LocalLLaMA/comments/1l1m1dj/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |
||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:13:28 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m4wg | false | null | t3_1l1m4wg | /r/LocalLLaMA/comments/1l1m4wg/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |
||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:15:35 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1m6v5 | false | null | t3_1l1m6v5 | /r/LocalLLaMA/comments/1l1m6v5/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |
||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 5 | PlayAI open-sourced a new Speech Editing model today that allows for precise & clean speech editing. A huge step up from traditional autoregressive models that aren't designed for this task. | 2025-06-02T16:16:12 | https://huggingface.co/spaces/PlayHT/PlayDiffusion | SandSalt8370 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l1m7fm | false | null | t3_1l1m7fm | /r/LocalLLaMA/comments/1l1m7fm/playais_latest_diffusionbased_speech_editing/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'CqblY3Zg0YyBkT7WL4m7rTHmTkmHkQsN6Ve3JKLmzUk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=108&crop=smart&auto=webp&s=e9fb57a5c0e50ad13a69c186f8b3a8edb818eacc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=216&crop=smart&auto=webp&s=e50d0191a71e3932428b2882728c3c438e7d48e1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=320&crop=smart&auto=webp&s=3f5afd9c8788701f6318b7f5be8bc2a50cc8c57d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=640&crop=smart&auto=webp&s=9fb203fbc158aa6c10f6f406143fd8c1a2f16c07', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=960&crop=smart&auto=webp&s=0be45dd3441bf1725db49eba1d03e72b0bba59f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?width=1080&crop=smart&auto=webp&s=d575ff06b223cd1dad70595d559496003ddcaa4a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/D6U0D3aqjaPtUZS0eiGqjZV2P0Ac0jXLOyvlEluTYfA.jpg?auto=webp&s=5ccd52bca3ce4c5da2cffc4af087a42feb4c23b6', 'width': 1200}, 'variants': {}}]} |
|
How I Built a Better Gumloop in 48 Hours with Vibe Coding | 2 | Most no-code agent builders are just workflow automation with LLM calls sprinkled in. They're not built for actual agents that need to:
* Make dynamic routing decisions
* Handle complex tool orchestration
* Support ANY model (not just OpenAI)
I successfully built such a platform, here's everything I used:
**Agent Framework**: LangGraph (JS) because agents ARE graphs - nodes and edges map perfectly. Native graph execution, better for complex routing.
**Tool Integrations**: Composio for pre-built tools with auth handled. Here's why this was crucial: Every tool has different auth flows - some want API keys, others OAuth. Multiply that by 100+ tools and you have a maintenance nightmare. Composio abstracts all of this away and provides me with ready functions for each tool that the agent can execute.
# Tech Stack
Frontend is entirely vibe coded. Skipped Lovable/Bolt - easier to get code directly in Cursor. My setup:
* GPT-4.1: The sniper. Precise component tweaks.
* Gemini 2.0 Flash: The machine gun. Rewrites entire components and is better for larger codebase changes.
* 21st Dev's MCP Server: Beautiful shadcn components from descriptions.
The drag-and-drop canvas? ReactFlow + moving grid. Would've killed me manually.
# Core Design: Just 4 Node Types
1. Input Node - Data entry
2. LLM Node - ANY model
3. Tool Node - Executes actions
4. Output Node - Collects results
An "agent" = LLM + Tool nodes with feedback loops. Flows save as JSON graphs, executed sequentially by node type.
Agent Patterns Supported(from Anthropic's guide):
1. Prompt Chaining
2. Parallelization
3. Routing
4. Evaluator-Optimizer
5. Tool-Augmented Agents
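To make the four-node design concrete, here's a rough sketch (Python LangGraph rather than the project's actual JS code; the node bodies are stand-ins):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class FlowState(TypedDict):
    query: str
    draft: str
    result: str

def input_node(state: FlowState) -> dict:
    return {"query": state["query"].strip()}              # data entry / cleanup

def llm_node(state: FlowState) -> dict:
    return {"draft": f"LLM answer to: {state['query']}"}  # stand-in for ANY model

def tool_node(state: FlowState) -> dict:
    return {"result": state["draft"].upper()}             # stand-in for a tool execution

graph = StateGraph(FlowState)
graph.add_node("input", input_node)
graph.add_node("llm", llm_node)
graph.add_node("tool", tool_node)
graph.set_entry_point("input")
graph.add_edge("input", "llm")
graph.add_edge("llm", "tool")
graph.add_edge("tool", END)

app = graph.compile()
print(app.invoke({"query": " hello ", "draft": "", "result": ""}))
```

A saved flow's JSON graph just gets walked to produce the same add_node/add_edge wiring before compiling.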
The code is on GitHub. [here](https://github.com/ComposioHQ/agent-flow). Fork it, break it, make it better. | 2025-06-02T16:19:18 | https://v.redd.it/1xjdawdkfj4f1 | goddamnit_1 | /r/LocalLLaMA/comments/1l1ma92/how_i_built_a_better_gumloop_in_48_hours_with/ | 1970-01-01T00:00:00 | 0 | {} | 1l1ma92 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1xjdawdkfj4f1/DASHPlaylist.mpd?a=1751602760%2CYTk1OWNlZmUzNmRhMThlNGNjNDlmYWUwNWJjYTFlNzA0ZTk1ZjEwZWQ2NjBhMTE3NGJmM2ZhODlhZGYzMDJlZQ%3D%3D&v=1&f=sd', 'duration': 87, 'fallback_url': 'https://v.redd.it/1xjdawdkfj4f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/1xjdawdkfj4f1/HLSPlaylist.m3u8?a=1751602760%2CNTNlNzk2MDFlZDUwNzZjMzcwMDc4MWRiNDFlNWEwOWU2Y2JkODhkNTM3OWY5Y2Y4ODhiOGZkZjEwNTE3NDE5Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1xjdawdkfj4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1916}} | t3_1l1ma92 | /r/LocalLLaMA/comments/1l1ma92/how_i_built_a_better_gumloop_in_48_hours_with/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=108&crop=smart&format=pjpg&auto=webp&s=7b23392dccfb736ace44dc1edbb3a40a7538b834', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=216&crop=smart&format=pjpg&auto=webp&s=be82eb4510ecd279cf899d6733eccd1ac0fca3b8', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=320&crop=smart&format=pjpg&auto=webp&s=bc8401099d252e8213e85539aa107ae9b24b99a0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=640&crop=smart&format=pjpg&auto=webp&s=1805da11c9a22448f78e1e645c58fdf8c7c88102', 'width': 640}, {'height': 541, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=960&crop=smart&format=pjpg&auto=webp&s=023f6777dff031b0468ba3534af75b726472e6c9', 'width': 960}, {'height': 608, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b1386179976d3759a995479e2cd6f0c55bff84b6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZWM0cnN2ZGtmajRmMY_Ep40r653Cb2WSUITkH5gN-iYqY3isooH7irtvpltT.png?format=pjpg&auto=webp&s=ed728686a38cd7198d666d2148af52c173a12517', 'width': 1916}, 'variants': {}}]} |
|
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 1 | [removed] | 2025-06-02T16:19:54 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l1masl | false | null | t3_1l1masl | /r/LocalLLaMA/comments/1l1masl/playais_latest_diffusionbased_speech_editing/ | false | false | default | 1 | null |
||
PlayAI's Latest Diffusion-based Speech Editing Model: PlayDiffusion | 98 | PlayAI open-sourced a new Speech Editing model today that allows for precise & clean speech editing. A huge step up from traditional autoregressive models that aren't designed for this task. | 2025-06-02T16:20:20 | https://github.com/playht/playdiffusion | SandSalt8370 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l1mb6y | false | null | t3_1l1mb6y | /r/LocalLLaMA/comments/1l1mb6y/playais_latest_diffusionbased_speech_editing/ | false | false | 98 | {'enabled': False, 'images': [{'id': 'CSWT8MFaDNeST1dQr1DqDgWI53La8d_i9DyfjkHlVy0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?width=108&crop=smart&auto=webp&s=41cf3242be5c447c4e9a00d66fb5da1c09dcca24', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?width=216&crop=smart&auto=webp&s=44ff4f98cb4ada5e789c7aef86bc8daa409b6d78', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?width=320&crop=smart&auto=webp&s=8d638ed45c3fdc6a7d27914abeae6ba76fa1074f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?width=640&crop=smart&auto=webp&s=d65ddc626651c2d4f886345c5637afbe5263791e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?width=960&crop=smart&auto=webp&s=d7526eea0a28e56a56ae7cf753f5ff66b89b6f1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?width=1080&crop=smart&auto=webp&s=ce014aa614ff17ddb5fe8aca69daad0bacf19ae4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w-YzJC8yYFljokN1sVgB95jsZmNJotgMItgN5CbyhjY.jpg?auto=webp&s=d03f48a6c37d93392a65e7b93b6bad5664fce5ca', 'width': 1200}, 'variants': {}}]} |
|
Has anyone had success implementing a local FIM model? | 5 | I've noticed that the auto-completion features in my current IDE can be sluggish. As I rely heavily on auto-completion during coding, I strongly prefer accurate autocomplete suggestions like those offered by "Cursor" over automated code generation(Chat/Agent tabs). Therefore, I'm seeking a local alternative that incorporates an intelligent agent capable of analyzing my entire codebase. Is this request overly ambitious 🙈? | 2025-06-02T16:31:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l1mliw/has_anyone_had_success_implementing_a_local_fim/ | m_abdelfattah | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1mliw | false | null | t3_1l1mliw | /r/LocalLLaMA/comments/1l1mliw/has_anyone_had_success_implementing_a_local_fim/ | false | false | self | 5 | null |
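For context on the FIM side: completion models are usually prompted with special prefix/suffix/middle tokens rather than chat templates. A minimal sketch, assuming Qwen2.5-Coder-style FIM tokens and a local llama-server on port 8080 (both assumptions):

```python
import requests

prefix = "def add(a, b):\n    "
suffix = "\n\nprint(add(2, 3))"
# Token names follow Qwen2.5-Coder's FIM convention; other models differ.
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

resp = requests.post(
    "http://127.0.0.1:8080/completion",  # llama.cpp server's raw completion endpoint
    json={"prompt": prompt, "n_predict": 64, "temperature": 0.2},
)
print(resp.json()["content"])  # ideally something like "return a + b"
```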
Best uncensored multi language LLM up to 12B, still Mistral Nemo? | 23 | I want to use a fixed model for my private none commercial AI project because I want to finetune it later (LoRAs) for it's specific tasks. For that I need:
- An up-to-12B text-to-text model - it needs to fit into 12GB VRAM including an 8K context window.
- As uncensored as possible in its core.
- Official support for main languages (At least EN/FR/DE).
Currently I have Mistral Nemo Instruct on my list, nothing else. It is the only model I know of that matches all three points without a "however".
12B at max because I set myself a limit of 16GB VRAM for my AI project's total usage, and that must be enough for the LLM with 8K context, Whisper, and a TTS. 16GB because I want to open source my project later and don't want it limited to users with at least 24GB VRAM. 16GB is more and more common on current graphics cards (don't buy 8GB versions anymore!).
I know you can uncensor models, BUT abliterated models are mostly only uncensored for English. I always noticed worse performance in other languages with such models and don't want to deal with that. And Mistral Nemo is known to be very uncensored, so no extra uncensoring is needed.
Because most finetuned models only cover one or two languages, finetunes fall out as options. I want to support at least the EN/FR/DE languages. I'm a native German speaker myself and don't want to talk to AI in English only all the time, so I know very well how annoying it is that many AI projects only support English. | 2025-06-02T16:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l1n6h4/best_uncensored_multi_language_llm_up_to_12b/ | Blizado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1n6h4 | false | null | t3_1l1n6h4 | /r/LocalLLaMA/comments/1l1n6h4/best_uncensored_multi_language_llm_up_to_12b/ | false | false | self | 23 | null
lmarena update: Claude 4 opus at #8, #4 with style control, #1 on webdev. | 1 | [removed] | 2025-06-02T17:11:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l1nmzb/lmarena_update_claude_4_opus_at_8_4_with_style/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1nmzb | false | null | t3_1l1nmzb | /r/LocalLLaMA/comments/1l1nmzb/lmarena_update_claude_4_opus_at_8_4_with_style/ | false | false | self | 1 | null |
Best Software to Self-host LLM | 0 | Hello everyone,
What is the best Android app where I can plug in my API key? Same question for Windows?
It would be great if it supported new models from Anthropic, Google, OpenAI, etc., just like LiteLLM does. | 2025-06-02T17:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l1nnaa/best_software_to_selfhost_llm/ | AcanthaceaeNo5503 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1nnaa | false | null | t3_1l1nnaa | /r/LocalLLaMA/comments/1l1nnaa/best_software_to_selfhost_llm/ | false | false | self | 0 | null
lmarena update: Claude 4 opus at #8, #4 with style control, #1 on webdev. | 1 | [removed] | 2025-06-02T17:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l1nobm/lmarena_update_claude_4_opus_at_8_4_with_style/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1nobm | false | null | t3_1l1nobm | /r/LocalLLaMA/comments/1l1nobm/lmarena_update_claude_4_opus_at_8_4_with_style/ | false | false | self | 1 | null |
chatbot arena update: Claude 4 opus at #8, #4 with style control, #1 on webdev. | 1 | [removed] | 2025-06-02T17:14:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l1npyr/chatbot_arena_update_claude_4_opus_at_8_4_with/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1npyr | false | null | t3_1l1npyr | /r/LocalLLaMA/comments/1l1npyr/chatbot_arena_update_claude_4_opus_at_8_4_with/ | false | false | self | 1 | null |
Mistral-Small 3.1 is {good|bad} at OCR when using {ollama|llama.cpp} | 3 | I’ve tried everything I can think of, and I’m losing my mind. Does anyone have any suggestions?
I’ve been trying out 24-28B local vision models for some slightly specialized OCR (nothing too fancy, it’s still words printed on a page), first using Ollama for inference. The results for Mistral Small 3.1 were fantastic, with character error rates in the 5-10% range, low enough that it could be useful in my professional field today – except inference with Ollama is very, very slow on my RTX 3060 with just 12 GB of VRAM (around 3.5 tok/sec), of course. The average character error rate was 9% on my 11 test cases, which intentionally included some difficult images to work with. Qwen 2.5VL:32b was a step behind (averaging 12%), while Gemma3:27b was noticeably worse (19%).
But wait! Llama.cpp handles offloading model layers to my GPU better, and inference is much faster – except now the character error rates are all different. Gemma3:27b comes in at 14%, and even Pixtral:12b is nearly as accurate. But Mistral Small 3.1 is consistently bad, at 20% or worse, not good enough to be useful.
I’m running all these tests using Q\_4\_M quants of Mistral Small 3.1 from Ollama (one monolithic file) and the Unsloth, Bartowski, and MRadermacher quants (which use a separate mmproj file) in Llama.cpp. I’ve also tried a Q\_6 quant, higher precision levels for the mmproj files, enabling or disabling KV cache and flash attention and mmproj offloading. I’ve tried using all the Ollama default settings in Llama.cpp. Nothing seems to make a difference – for my use case, Mistral Small 3.1 is consistently bad under llama.cpp, and consistently good to excellent (but extremely slow) under Ollama. Is it normal for the inference platform and/or quant provider to make such a big difference in accuracy?
Is there anything else I can try in Llama.cpp to get Ollama-like accuracy? I tried to find other inference engines that would work in Windows, but everything else is either running Ollama/Llama.cpp under the hood, or it doesn’t offer vision support. My attempts to use GGUF quants in vllm under WSL were unsuccessful.
If I could get Ollama accuracy and Llama.cpp inference speed, I could move forward with a big research project in my non-technical field. Any suggestions beyond saving up for another GPU? | 2025-06-02T17:37:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l1ob6a/mistralsmall_31_is_goodbad_at_ocr_when_using/ | exacly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1ob6a | false | null | t3_1l1ob6a | /r/LocalLLaMA/comments/1l1ob6a/mistralsmall_31_is_goodbad_at_ocr_when_using/ | false | false | self | 3 | null |
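(For anyone reproducing this comparison: character error rate here means edit distance divided by reference length. A minimal plain-Python sketch, with hypothetical file names:)

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over characters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate = edit distance / reference length.
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# Hypothetical usage: compare one model's OCR output against ground truth.
ref = open("ground_truth.txt", encoding="utf-8").read()
hyp = open("model_output.txt", encoding="utf-8").read()
print(f"CER: {cer(ref, hyp):.1%}")
```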
latest llama.cpp (b5576) + DeepSeek-R1-0528-Qwen3-8B-Q8_0.gguf successful VScode + MCP running | 71 | Just downloaded [Release b5576 · ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/releases/tag/b5576) and tried to use MCP tools with the following environment:
1. [DeepSeek-R1-0528-Qwen3-8B-Q8_0](https://huggingface.co/bartowski/deepseek-ai_DeepSeek-R1-0528-Qwen3-8B-GGUF)
2. VS Code
3. Cline
4. MCP tools like mcp_server_time, filesystem, MS Playwright
I got application errors with builds before b5576, but all the tools run smoothly now.
It took longer to "think" compared with [Devstral-Small-2505-GGUF](https://huggingface.co/unsloth/Devstral-Small-2505-GGUF).
Anyway, it is a good model for lower-VRAM setups if you want to try local development.
My Win11 batch file is below for reference; adjust it for your own environment:
```bat
SET LLAMA_CPP_PATH=G:\ai\llama.cpp
SET PATH=%LLAMA_CPP_PATH%\build\bin\Release\;%PATH%
SET LLAMA_ARG_HOST=0.0.0.0
SET LLAMA_ARG_PORT=8080
SET LLAMA_ARG_JINJA=true
SET LLAMA_ARG_FLASH_ATTN=true
SET LLAMA_ARG_CACHE_TYPE_K=q8_0
SET LLAMA_ARG_CACHE_TYPE_V=q8_0
SET LLAMA_ARG_N_GPU_LAYERS=65
SET LLAMA_ARG_CTX_SIZE=131072
SET LLAMA_ARG_SWA_FULL=true
SET LLAMA_ARG_MODEL=models\deepseek-ai_DeepSeek-R1-0528-Qwen3-8B-Q8_0.gguf
llama-server.exe --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0 --repeat-penalty 1.1
```
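To sanity-check the server before wiring up Cline, here is a minimal request against llama-server's OpenAI-compatible chat endpoint (a sketch only; the host, port, and prompt are assumptions matching the batch file above, standard library only):

```python
import json
import urllib.request

# Minimal chat-completion request against the llama-server instance above.
# The URL assumes the LLAMA_ARG_HOST/LLAMA_ARG_PORT values from the batch file.
payload = {
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.6,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```

If this returns a normal completion, the server side is fine and any remaining issues are in the VS Code/Cline/MCP configuration.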
https://preview.redd.it/262hbrj02k4f1.png?width=1011&format=png&auto=webp&s=4d9d0a799cc96053b4b255429c2a7e4b85a995ce
| 2025-06-02T18:21:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l1pgv9/latest_llamacpp_b5576_deepseekr10528qwen38bq8/ | tyoyvr-2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1pgv9 | false | null | t3_1l1pgv9 | /r/LocalLLaMA/comments/1l1pgv9/latest_llamacpp_b5576_deepseekr10528qwen38bq8/ | false | false | 71 | {'enabled': False, 'images': [{'id': 'XimqWHIYvM5SwtiqperTDecr6b-hk0KlHOB1ZVWG1Lo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?width=108&crop=smart&auto=webp&s=6dda0e07950f1814805fb54edde94c948c6d062e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?width=216&crop=smart&auto=webp&s=f04e71c5325d530ca27900282d75c60ae3d592ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?width=320&crop=smart&auto=webp&s=30258edb32f8a3e87e16982757b26aea40a5c21a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?width=640&crop=smart&auto=webp&s=d9cb9655f58f9405afdc373583acebb28d62dac9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?width=960&crop=smart&auto=webp&s=317bd87c143929901b0141577858ca171808e5a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?width=1080&crop=smart&auto=webp&s=c8b2fea9b11aa41b4da3154b41a870876b0b31f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sYVW4X6gc-B0EvqWe0QNA3QGRk5e7XKmzrtKFGnT66k.jpg?auto=webp&s=d16c6c82f93f928ba275a3ac768b500dde512388', 'width': 1200}, 'variants': {}}]} |
What to do with GPUs? [Seeking ideas] | 3 | Hi there, I have a sizeable amount of GPU reserved instances in Azure and GCP for the next few months. I am looking for a fun project to work on. I'd welcome ideas on what to build or which model to fine-tune.
Looking for advice: 5060 ti using PCIE 4.0 for converting my desktop into an LLM server | 1 | Hey!
I am looking to create a server for LLM experimentation. I am pricing out different options, and purchasing a new 5060 Ti 16 GB GPU seems like an attractive, budget-friendly way to start dipping my toes in.
The desktop I am looking to convert has a Ryzen 5800X, 64 GB RAM, and a 2 TB NVMe Gen 4 SSD. The motherboard only supports PCIe 4.0.
Would it still be worthwhile to go with the 5060 Ti, which is a PCIe 5.0 card? Competitive older-generation PCIe 4.0 cards still cost more used than a new 5060 Ti does in Canada, and I would rather buy a new card than risk a used one failing without warranty.
Should I start pricing out an all-new machine, or what would you say is my best bet?
Any advice would be greatly appreciated! | 2025-06-02T18:37:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l1pvrr/looking_for_advice_5060_ti_using_pcie_40_for/ | polymath_renegade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l1pvrr | false | null | t3_1l1pvrr | /r/LocalLLaMA/comments/1l1pvrr/looking_for_advice_5060_ti_using_pcie_40_for/ | false | false | self | 1 | null |
Sharing a demo of my tool for easy handwritten fine-tuning dataset creation!