| Column | Type | Min | Max |
|:-|:-|:-|:-|
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |
Best model for dual or quad 3090?
2
I've seen a lot of these builds, they are very cool but what are you running on them?
2025-06-15T05:24:10
https://www.reddit.com/r/LocalLLaMA/comments/1lbsn7g/best_model_for_dual_or_quad_3090/
humanoid64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbsn7g
false
null
t3_1lbsn7g
/r/LocalLLaMA/comments/1lbsn7g/best_model_for_dual_or_quad_3090/
false
false
self
2
null
How come models like Qwen3 respond with gibberish in Chinese?
0
[https://model.lmstudio.ai/download/Qwen/Qwen3-Embedding-8B-GGUF](https://model.lmstudio.ai/download/Qwen/Qwen3-Embedding-8B-GGUF) Is there something I'm missing? I'm using LM Studio 0.3.16 with updated Vulkan and CPU drivers; it's also broken in KoboldCpp. https://preview.redd.it/q13vxkyhd17f1.png?width=814&format=png&auto=webp&s=3e19080030fc17a6d89f4911db22e11bae718ba3
2025-06-15T06:42:09
https://www.reddit.com/r/LocalLLaMA/comments/1lbtub3/how_come_models_like_qwen3_respond_gibberish_in/
uber-linny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbtub3
false
null
t3_1lbtub3
/r/LocalLLaMA/comments/1lbtub3/how_come_models_like_qwen3_respond_gibberish_in/
false
false
https://b.thumbs.redditm…R2PNlP-mMiBQ.jpg
0
null
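One likely explanation for the Qwen3-Embedding post above (an assumption on my part, not stated in the thread): Qwen3-Embedding-8B is an embedding model, not a chat model, so chat-style prompts will naturally produce gibberish. It is meant to be called through an embeddings endpoint. A minimal sketch against LM Studio's local OpenAI-compatible server — the port is LM Studio's default and the model identifier is assumed:

```shell
# Query the embeddings endpoint instead of the chat endpoint.
# Port 1234 is LM Studio's default; the model name depends on how
# LM Studio registered the downloaded GGUF.
curl http://localhost:1234/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3-embedding-8b", "input": "hello world"}'
# The response is JSON containing a float vector, not chat text.
```

If the same model is loaded into a chat UI, garbled output is the expected behavior rather than a Vulkan or driver bug.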
Is there a need for ReAct?
6
For everyone's use case, is the ReAct paradigm useful or does it just slow down your agentic flow?
2025-06-15T07:05:24
https://www.reddit.com/r/LocalLLaMA/comments/1lbu6u2/is_there_a_need_for_react/
slashrshot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbu6u2
false
null
t3_1lbu6u2
/r/LocalLLaMA/comments/1lbu6u2/is_there_a_need_for_react/
false
false
self
6
null
Dual 3060RTX's running vLLM / Model suggestions?
8
Hello, I am pretty new to the foray here and I have enjoyed the last couple of days learning a bit about setting things up. I was able to score a pair of 3060 RTXs from Marketplace for $350. Currently I have vLLM running with dwetzel/Mistral-Small-24B-Instruct-2501-GPTQ-INT4, per a thread I found here. Things run pretty well, but I was also hoping to get some image detection out of this. Any suggestions on models that would run well in this setup and accomplish this task? Thank you.
2025-06-15T07:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1lbu89a/dual_3060rtxs_running_vllm_model_suggestions/
phin586
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbu89a
false
null
t3_1lbu89a
/r/LocalLLaMA/comments/1lbu89a/dual_3060rtxs_running_vllm_model_suggestions/
false
false
self
8
null
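For reference, a hedged sketch of how the dual-3060 vLLM setup described above is typically launched. The model ID is the one mentioned in the post; the context length and memory fraction are illustrative assumptions, not recommendations:

```shell
# Tensor parallelism splits the model's weights across both 3060s (2 x 12 GB).
# --max-model-len and --gpu-memory-utilization are example values to tune
# against the available VRAM.
vllm serve dwetzel/Mistral-Small-24B-Instruct-2501-GPTQ-INT4 \
  --tensor-parallel-size 2 \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.90
```

Note that one vLLM server process serves one model, so adding image detection would mean serving a separate vision-language model rather than extending this instance.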
Testing Local LLMs on a Simple Web App Task (Performance + Output Comparison)
7
**Hey everyone,** I recently did a simple test to compare how a few local LLMs (plus Claude Sonnet 3.5 for reference) could perform on a basic front-end web development prompt. The goal was to generate code for a **real estate portfolio sharing website**, including a **listing entry form** and **listing display**, all in a **single HTML file using HTML, CSS, and Bootstrap**. **Prompt used:** >"Using HTML, CSS, and Bootstrap, write the code for a real estate portfolio sharing site, listing entry, and listing display in a single HTML file." **My setup:** All models except **Claude Sonnet 3.5** were tested locally on my laptop: * **GPU:** RTX 4070 (8GB VRAM) * **RAM:** 32GB * **Inference backend:** llama.cpp * **Qwen3 models:** Tested with `/think` (thinking mode enabled). # 🧪 Model Outputs + Performance |Model|Speed|Token Count|Notes| |:-|:-|:-|:-| |**GLM-9B-0414 Q5\_K\_XL**|28.1 t/s|8451 tokens|Excellent, most professional design, but listing form doesn't work.| |**Qwen3 30B-A3B Q4\_K\_XL**|12.4 t/s|1856 tokens|Fully working site, simpler than GLM but does the job.| |**Qwen3 8B Q5\_K\_XL**|36.1 t/s|2420 tokens|Also functional and well-structured.| |**Qwen3 4B Q8\_K\_XL**|38.0 t/s|3275 tokens|Surprisingly capable for its size, all basic requirements met.| |**Claude Sonnet 3.5 (Reference)**|–|–|Best overall: clean, functional, and interactive. No surprise here.| # 💬 My Thoughts: Out of all the models tested, here’s how I’d rank them **in terms of quality of design and functionality**: 1. **Claude Sonnet 3.5** – Clean, interactive, great structure (expected). 2. **GLM-9B-0414** – VERY polished web page, great UX and design elements, but the listing form can’t add new entries. Still impressive — I believe with a few additional prompts, it could be fixed. 3. **Qwen3 30B & Qwen3 8B** – Both gave a proper, fully working HTML file that met the prompt's needs. 4. **Qwen3 4B** – Smallest and simplest, but delivered the complete task nonetheless. 
Despite the small functionality flaw, **GLM-9B-0414** really blew me away in terms of how well-structured and professional-looking the output was. I'd say it's worth working with and iterating on. # 🔗 Code Outputs You can see the generated HTML files and compare them yourself here: [\[LINK TO CODES\]](https://filebin.net/r0vs0pujf3lx0hab) Would love to hear your thoughts if you’ve tried similar tests — particularly with GLM or Qwen3! Also open to suggestions for follow-up prompts or other models to try on my setup.
2025-06-15T08:05:58
https://www.reddit.com/r/LocalLLaMA/comments/1lbv3f1/testing_local_llms_on_a_simple_web_app_task/
SoAp9035
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbv3f1
false
null
t3_1lbv3f1
/r/LocalLLaMA/comments/1lbv3f1/testing_local_llms_on_a_simple_web_app_task/
false
false
self
7
null
Fine-tuning LLama 4 Scout Instruct on RTX 5090 Out of VRAM Memory Error
1
[removed]
2025-06-15T08:11:29
https://www.reddit.com/r/LocalLLaMA/comments/1lbv6bj/finetuning_llama_4_scout_instruct_on_rtx_5090_out/
AerieSure9064
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbv6bj
false
null
t3_1lbv6bj
/r/LocalLLaMA/comments/1lbv6bj/finetuning_llama_4_scout_instruct_on_rtx_5090_out/
false
false
self
1
null
rednote-hilab dots.llm1 support has been merged into llama.cpp
85
2025-06-15T08:18:35
https://github.com/ggml-org/llama.cpp/pull/14118
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1lbva5o
false
null
t3_1lbva5o
/r/LocalLLaMA/comments/1lbva5o/rednotehilab_dotsllm1_support_has_been_merged/
false
false
https://external-preview…c955ceba04689d59
85
{'enabled': False, 'images': [{'id': 'RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?width=108&crop=smart&auto=webp&s=9766f8865dbd84288d2ddb6939f52a48e3be0623', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?width=216&crop=smart&auto=webp&s=0e2480df0f5ceffd06be6dd13559f25f78a641a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?width=320&crop=smart&auto=webp&s=79f7e8ed00e66ea61655b5bd5d0654ecf2c7ed22', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?width=640&crop=smart&auto=webp&s=c7d590a18325c50ec13cca839d15368aa53334ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?width=960&crop=smart&auto=webp&s=b3acd20c81f7fe7c7580a5b25203a15e88fdcda8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?width=1080&crop=smart&auto=webp&s=cd82316d878f594e72291fa8e67c05cd56b331f9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RLBNoVg_e7B3XdrLX8zzgLBIrezL9D4uJwyF2H_1MAE.png?auto=webp&s=c9bf8e672bbf0cad6739e44db90197740b87f59b', 'width': 1200}, 'variants': {}}]}
Optimizing llama.cpp flags for max tokens/sec : anyone doing this automatically?
1
[removed]
2025-06-15T09:15:35
https://www.reddit.com/r/LocalLLaMA/comments/1lbw3y1/optimizing_llamacpp_flags_for_max_tokenssec/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbw3y1
false
null
t3_1lbw3y1
/r/LocalLLaMA/comments/1lbw3y1/optimizing_llamacpp_flags_for_max_tokenssec/
false
false
self
1
null
New Open Source Python Pack to Optimize llama.cpp flags: llama-optimus
1
[removed]
2025-06-15T09:47:07
https://www.reddit.com/r/LocalLLaMA/comments/1lbwkcf/new_open_source_python_pack_to_optimize_llamacpp/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbwkcf
false
null
t3_1lbwkcf
/r/LocalLLaMA/comments/1lbwkcf/new_open_source_python_pack_to_optimize_llamacpp/
false
false
self
1
null
Best practices - RAG, content generation
1
[removed]
2025-06-15T10:02:49
https://www.reddit.com/r/LocalLLaMA/comments/1lbwspl/best_practices_rag_content_generation/
Odd-Gene7766
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbwspl
false
null
t3_1lbwspl
/r/LocalLLaMA/comments/1lbwspl/best_practices_rag_content_generation/
false
false
self
1
null
Optimizing llama.cpp flags for max tokens/sec ; any auto-tuning tools out there?
1
[removed]
2025-06-15T10:04:07
https://www.reddit.com/r/LocalLLaMA/comments/1lbwtdx/optimizing_llamacpp_flags_for_max_tokenssec_any/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbwtdx
false
null
t3_1lbwtdx
/r/LocalLLaMA/comments/1lbwtdx/optimizing_llamacpp_flags_for_max_tokenssec_any/
false
false
https://b.thumbs.redditm…OMQa9yX-CZUg.jpg
1
null
Do multimodal LLMs (like Chatgpt, Gemini, Claude) use OCR under the hood to read text in images?
39
SOTA multimodal LLMs can read text from images (e.g. signs, screenshots, book pages) really well — almost better than OCR. Are they actually using an internal OCR system (like Tesseract or Azure Vision), or do they learn to "read" purely through pretraining (like contrastive learning on image-text pairs)?
2025-06-15T10:11:55
https://www.reddit.com/r/LocalLLaMA/comments/1lbwxj8/do_multimodal_llms_like_chatgpt_gemini_claude_use/
Comprehensive-Yam291
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbwxj8
false
null
t3_1lbwxj8
/r/LocalLLaMA/comments/1lbwxj8/do_multimodal_llms_like_chatgpt_gemini_claude_use/
false
false
self
39
null
Cursor and Bolt free alternative in VSCode
1
I have recently bought a new PC with an RTX 5060 Ti 16GB and I want something like Cursor and Bolt, but in VSCode. I have already installed continue.dev as a replacement for Copilot and installed DeepSeek R1 8B from Ollama, but when I tried it with Cline or Roo Code it sometimes doesn't work. So what I want to ask is: what is the actual best local LLM from Ollama that I can use for both continue.dev and Cline or Roo Code? I don't care about the speed; it can take an hour for all I care. My full PC specs: Ryzen 5 7600X, 32GB DDR5 6000, RTX 5060 Ti 16GB model
2025-06-15T10:41:49
https://www.reddit.com/r/LocalLLaMA/comments/1lbxdwm/cursor_and_bolt_free_alternative_in_vscode/
McMezoplayz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbxdwm
false
null
t3_1lbxdwm
/r/LocalLLaMA/comments/1lbxdwm/cursor_and_bolt_free_alternative_in_vscode/
false
false
self
1
null
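For the continue.dev/Cline question above, one common approach (a suggestion of mine, not from the post) is a coding-tuned model rather than a distilled reasoning model, since Cline and Roo Code depend on reliable structured output. A sketch with an example model tag:

```shell
# Pull a coding-focused model that fits comfortably in 16 GB VRAM;
# the tag here is an example, not a definitive recommendation.
ollama pull qwen2.5-coder:14b

# continue.dev, Cline, and Roo Code can then be pointed at Ollama's
# default local endpoint, http://localhost:11434, with
# "qwen2.5-coder:14b" as the model name.
ollama run qwen2.5-coder:14b "Write a Python function that reverses a string."
```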
🚀 This AI Agent Uses Zero Memory, Zero Tools — Just Language. Meet Delta.
0
Hi I’m Vincent Chong. It’s me again — the guy who kept spamming LCM and SLS all over this place a few months ago. 😅 I’ve been working quietly on something, and it’s finally ready: Delta — a fully modular, prompt-only semantic agent built entirely with language. No memory. No plugins. No backend tools. Just structured prompt logic. It’s the first practical demo of Language Construct Modeling (LCM) under the Semantic Logic System (SLS). What if you could simulate personality, reasoning depth, and self-consistency… without memory, plugins, APIs, vector stores, or external logic? Introducing Delta — a modular, prompt-only AI agent powered entirely by language. Built with Language Construct Modeling (LCM) under the Semantic Logic System (SLS) framework, Delta simulates an internal architecture using nothing but prompts — no code changes, no fine-tuning. ⸻ 🧠 So what is Delta? Delta is not a role. Delta is a self-coordinated semantic agent composed of six interconnected modules: • 🧠 Central Processing Module (cognitive hub, decides all outputs) • 🎭 Emotional Intent Module (detects tone, adjusts voice) • 🧩 Inference Module (deep reasoning, breakthrough spotting) • 🔁 Internal Resonance (keeps evolving by remembering concepts) • 🧷 Anchor Module (maintains identity across turns) • 🔗 Coordination Module (ensures all modules stay in sync) Each time you say something, all modules activate, feed into the core processor, and generate a unified output. ⸻ 🧬 No Memory? Still Consistent. Delta doesn’t “remember” like traditional chatbots. Instead, it builds semantic stability through anchor snapshots, resonance, and internal loop logic. It doesn’t rely on plugins — it is its own cognitive system. ⸻ 💡 Why Try Delta? 
• ✅ Prompt-only architecture — easy to port across models • ✅ No hallucination-prone roleplay messiness • ✅ Modular, adjustable, and transparent • ✅ Supports real reasoning + emotionally adaptive tone • ✅ Works on GPT, Claude, Mistral, or any LLM with chat history Delta can function as: • 🧠 a humanized assistant • 📚 a semantic reasoning agent • 🧪 an experimental cognition scaffold • ✍️ a creative writing partner with persistent style ⸻ 🛠️ How It Works All logic is built in the prompt. No memory injection. No chain-of-thought crutches. Just pure layered design: • Each module is described in natural language • Modules feed forward and backward between turns • The system loops — and grows Delta doesn’t just reply. Delta thinks, feels, and evolves — in language. ⸻ GitHub repo link: https://github.com/chonghin33/multi-agent-delta ⸻ **The full modular prompt structure will be released in the comment section.**
2025-06-15T11:34:33
https://www.reddit.com/r/LocalLLaMA/comments/1lby87t/this_ai_agent_uses_zero_memory_zero_tools_just/
Ok_Sympathy_4979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lby87t
false
null
t3_1lby87t
/r/LocalLLaMA/comments/1lby87t/this_ai_agent_uses_zero_memory_zero_tools_just/
false
false
self
0
{'enabled': False, 'images': [{'id': '1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?width=108&crop=smart&auto=webp&s=0d79031aa7c97f9a1b21a13b5317213b7ffe34ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?width=216&crop=smart&auto=webp&s=484bd00cc6e6519642000017bc6f3492d01d9be1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?width=320&crop=smart&auto=webp&s=169b4573bfa6c77a863b88ba60c1790f5d2d16a6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?width=640&crop=smart&auto=webp&s=61988b2a2b332a37e8217158b9268ff8a5452e98', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?width=960&crop=smart&auto=webp&s=5811d6290d5fa151e6bf284885846346a0baa54b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?width=1080&crop=smart&auto=webp&s=410b17b3948833f107a22984c6e3ca4f8479edf2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1l6c9J0mLrG2PwBszgGlTJmSrjU69xsEL83aUVcDrd8.png?auto=webp&s=3961acbc0d339fd4a83c2fa0f13696a0131db66a', 'width': 1200}, 'variants': {}}]}
Creative writing and roleplay content generation. Any experience with good settings and prompting out there?
0
I have a model that is Llama-based and fine-tuned for RP. It's uh... a little wild, let's say. If I just say hello it starts writing business letters or describing random movie scenes. Kind of. It's pretty scattered. I've played somewhat with settings, but I'm trying to stomp some of this out by setting up a model-level (modelfile) system prompt that primes it to behave itself, and the default settings that would actually keep it somewhat coherent for a long time. I'm making progress but I'm probably reinventing the wheel here. Anyone with experience have examples of: tricks they learned that make this work? For example, how to get it to embody a character without jumping to yours, at least. Or simple top-level directives that prime it for whatever the user might throw at it later? I've kind of defaulted to video game language to start trying to rein it in: defining a world seed, a player character, and defining all other characters as NPCs. But there's probably way better out there I can make use of, formatting and style tricks to get it to emphasize things, and well... LLMs are weird. I've seen weird string sequences used in some prompts to define skills and limit the AI in other areas, so who knows what's out there. Any help is appreciated. New to this part of the AI space. I mostly had my fun with jailbreaking to see what could make the AI go a little mad and forget its hard limits.
2025-06-15T12:10:05
https://www.reddit.com/r/LocalLLaMA/comments/1lbytvw/creative_writing_and_roleplay_content_generation/
Agitated_Budgets
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbytvw
false
null
t3_1lbytvw
/r/LocalLLaMA/comments/1lbytvw/creative_writing_and_roleplay_content_generation/
false
false
self
0
null
What's the best OcrOptions to choose for OCR in Docling?
1
I'm struggling to get proper OCR results. I have a PDF that contains both images (with text inside) and plain text. I tried converting the PDF to PNG and digesting that, but with this approach it sometimes becomes even worse. Usually I experiment with TesseractCliOcrOptions. I have a PDF with text and the company logo at the top right corner, which is constantly ignored (it has clear text inside it). Maybe someone has found the silver bullet and the best settings to configure for OCR? Thank you.
2025-06-15T12:11:49
https://www.reddit.com/r/LocalLLaMA/comments/1lbyv2s/whats_the_best_ocroptions_to_choose_for_ocr_in/
depava
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbyv2s
false
null
t3_1lbyv2s
/r/LocalLLaMA/comments/1lbyv2s/whats_the_best_ocroptions_to_choose_for_ocr_in/
false
false
self
1
null
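For the Docling question above, one thing worth trying (a sketch under my assumptions, not a verified fix for the logo issue) is forcing full-page OCR so that text embedded in images is not skipped when the page also contains a native text layer:

```python
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
    PdfPipelineOptions,
    TesseractCliOcrOptions,
)
from docling.document_converter import DocumentConverter, PdfFormatOption

# force_full_page_ocr makes Docling OCR the whole rendered page instead of
# only regions without a text layer, which may pick up text inside the logo.
ocr_options = TesseractCliOcrOptions(force_full_page_ocr=True)
pipeline_options = PdfPipelineOptions(do_ocr=True, ocr_options=ocr_options)

converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
    }
)
result = converter.convert("input.pdf")  # path is illustrative
print(result.document.export_to_markdown())
```

The trade-off is that full-page OCR discards the PDF's native (usually cleaner) text layer, so it can make plain-text regions worse while rescuing image-embedded text.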
New OpenAI local model Leak straight from chatgpt
0
So apparently ChatGPT leaked the name of the new local model that OpenAI will work on. When asked for more details it would just search the web and deny the model's existence, but after I pushed it to tell me more, it stated that apparently it's going to be a "GPT-4o-class" model, it's going to be multimodal, and it's coming very soon!
2025-06-15T12:18:30
https://www.reddit.com/gallery/1lbyzcm
Skystunt
reddit.com
1970-01-01T00:00:00
0
{}
1lbyzcm
false
null
t3_1lbyzcm
/r/LocalLLaMA/comments/1lbyzcm/new_openai_local_model_leak_straight_from_chatgpt/
true
false
spoiler
0
{'enabled': True, 'images': [{'id': 'I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=108&crop=smart&auto=webp&s=c836dc1128f64a01866f37f4ac08ccfa21590dfa', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=216&crop=smart&auto=webp&s=33af78e5302843146efeba87356450f68ee8beb5', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=320&crop=smart&auto=webp&s=8d36e5b0ce9e7035617b2490a47051d5f368bb7f', 'width': 320}, {'height': 368, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=640&crop=smart&auto=webp&s=0f3acda88f5b903cd9e74e2c16e428f66ef9c644', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=960&crop=smart&auto=webp&s=8902b3fa0f884a45a921dee5256f95d565533553', 'width': 960}, {'height': 622, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=1080&crop=smart&auto=webp&s=17ba92a809375a36e301ed1b291b4f049c917f6b', 'width': 1080}], 'source': {'height': 1225, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?auto=webp&s=38aefa94fabe6f5b32eb14bbbd8de4742b7ecf9a', 'width': 2126}, 'variants': {'obfuscated': {'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=45d89dda0b6ef8b5bd02561620c61897355c2366', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=b5f504df32e2c3712990f5d1ed9bd9a90291f7b3', 'width': 216}, {'height': 184, 'url': 
'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=bf689e24b366ae33eb164883f0919de43d4d180f', 'width': 320}, {'height': 368, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=d66135ec672d0eaf82a41c0aa91328bdb7c7f22d', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=934127aad66c05bcd21f437c4a5dcceb5783b572', 'width': 960}, {'height': 622, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=fcc54eb574fcc7adba55f2b5371ec9272b303344', 'width': 1080}], 'source': {'height': 1225, 'url': 'https://external-preview.redd.it/I-_cvCbUt9ZJ6zHG4A8WDhYTt_2N3D8_2pCQ-m_tyYM.png?blur=40&format=pjpg&auto=webp&s=523de1c3ccdb27085f18f045b39597128d5e7b34', 'width': 2126}}}}]}
7600 XT and 3050 Ti Mobile comparison
1
[removed]
2025-06-15T12:30:33
https://www.reddit.com/r/LocalLLaMA/comments/1lbz7a0/7600_xt_and_3050_ti_mobile_comparison/
Sherstnyov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbz7a0
false
null
t3_1lbz7a0
/r/LocalLLaMA/comments/1lbz7a0/7600_xt_and_3050_ti_mobile_comparison/
false
false
self
1
null
Can I talk to more than one character via “LLM”? I have tried many online models but I can only talk to one character.
1
[removed]
2025-06-15T12:31:04
https://www.reddit.com/r/LocalLLaMA/comments/1lbz7n3/can_i_talk_to_more_than_one_character_via_llm_i/
foskarnet0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbz7n3
false
null
t3_1lbz7n3
/r/LocalLLaMA/comments/1lbz7n3/can_i_talk_to_more_than_one_character_via_llm_i/
false
false
self
1
null
Can I talk to more than one character via “LLM”? I have tried many online models but I can only talk to one character.
1
[removed]
2025-06-15T12:34:54
https://www.reddit.com/r/LocalLLaMA/comments/1lbza9z/can_i_talk_to_more_than_one_character_via_llm_i/
foskarnet0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbza9z
false
null
t3_1lbza9z
/r/LocalLLaMA/comments/1lbza9z/can_i_talk_to_more_than_one_character_via_llm_i/
false
false
self
1
null
[Follow-Up] Building Delta Wasn’t a Joke — This Is the System Behind It. Prove me wrong.(Plug-in free)
0
Hours ago I posted Delta — a modular, prompt-only semantic agent built without memory, plugins, or backend tools. Many thought it was just chatbot roleplay with a fancy wrapper. But Delta wasn’t built in isolation. It runs on something deeper: Language Construct Modeling (LCM) — a semantic architecture I’ve been developing under the Semantic Logic System (SLS). ⸻ 🧬 Why does this matter? LLMs don’t run Python. They run patterns in language. And that means language itself can be engineered as a control system. LCM treats language not just as communication, but as modular logic. The entire runtime is built from: 🔹 Meta Prompt Layering (MPL) A multi-layer semantic prompt structure that creates interaction — and the byproduct that emerges from the interaction is the goal. 🔹 Semantic Directive Prompting (SDP) Instead of raw instructions, language itself is already filled with semantic meaning. That’s why the LLM can interpret and act based on a simple prompt. ⸻ Together, MPL + SDP allow you to simulate: • Recursive modular activation • Characterised agents • Semantic rhythm and identity stability • Semantic anchoring without real memory • Full system behavior built from language — not plugins ⸻ 🧠 So what is Delta? Delta is a modular LLM runtime made purely from these constructs. It’s not a role. It’s not a character. It has 6 internal modules — cognition, emotion, inference, memory echo, anchoring, and coordination. All work together inside the prompt — with no external code. It thinks, reasons, and evolves using nothing but structured language. ⸻ 🔗 Want to understand more? • LCM whitepaper https://github.com/chonghin33/lcm-1.13-whitepaper • SLS Semantic Logic Framework https://github.com/chonghin33/lcm-1.13-whitepaper ⸻ If I’m wrong, prove me wrong. But if you’re still thinking prompts are just flavor text — you might be missing what language is becoming.
2025-06-15T12:44:30
https://www.reddit.com/r/LocalLLaMA/comments/1lbzgp9/followup_building_delta_wasnt_a_joke_this_is_the/
Ok_Sympathy_4979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbzgp9
false
null
t3_1lbzgp9
/r/LocalLLaMA/comments/1lbzgp9/followup_building_delta_wasnt_a_joke_this_is_the/
false
false
self
0
{'enabled': False, 'images': [{'id': '5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?width=108&crop=smart&auto=webp&s=3bfce6c097d7a95a62bb0084af4f38fca27618f9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?width=216&crop=smart&auto=webp&s=1b9b877413228ba18ac4eaf23539c9ec882b5b86', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?width=320&crop=smart&auto=webp&s=c95af56fc4c7ea7848004e69379545649ae1b812', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?width=640&crop=smart&auto=webp&s=3598cc4c1257d874cad89ed5571fa07578e6df0b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?width=960&crop=smart&auto=webp&s=409f1edba226e2faecb6581aae9f8712ebf94aa8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?width=1080&crop=smart&auto=webp&s=a3419335305b9307ee758b24388518ad332c5d1e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5kvpHomYIKllMHz7pQ2iRQuKKm01jot_cSCO2CpOLcI.png?auto=webp&s=c9cb3ccf43cfaf970ce3623b66f1035642eae32d', 'width': 1200}, 'variants': {}}]}
What am I doing wrong?
0
I'm new to local LLMs and just downloaded LM Studio and a few models to test out, deepseek/deepseek-r1-0528-qwen3-8b being one of them. I asked it to write a simple function to sum a list of ints. Then I asked it to write a class to send emails. Watching its thought process, it seems to get lost and reverts back to answering the original question again. I'm guessing it's related to the context, but I don't know. Hardware: RTX 4080 Super, 64GB, Ultra 9 285K
2025-06-15T13:22:49
https://www.reddit.com/r/LocalLLaMA/comments/1lc07xn/what_am_i_doing_wrong/
jcam12312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc07xn
false
null
t3_1lc07xn
/r/LocalLLaMA/comments/1lc07xn/what_am_i_doing_wrong/
false
false
self
0
null
Best practices - RAG, content generation
1
[removed]
2025-06-15T13:23:21
https://www.reddit.com/r/LocalLLaMA/comments/1lc08at/best_practices_rag_content_generation/
Odd-Gene7766
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc08at
false
null
t3_1lc08at
/r/LocalLLaMA/comments/1lc08at/best_practices_rag_content_generation/
false
false
self
1
null
Best practices - RAG, content generation
1
[removed]
2025-06-15T13:25:54
https://www.reddit.com/r/LocalLLaMA/comments/1lc0a5t/best_practices_rag_content_generation/
allan_watts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc0a5t
false
null
t3_1lc0a5t
/r/LocalLLaMA/comments/1lc0a5t/best_practices_rag_content_generation/
false
false
self
1
null
llama.cpp: llama-server has multimodal audio input, so I tried it out.
1
[removed]
2025-06-15T13:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1lc0n1x/llamacpp_llamaserver_has_multimodal_audio_input/
DesignToWin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc0n1x
false
null
t3_1lc0n1x
/r/LocalLLaMA/comments/1lc0n1x/llamacpp_llamaserver_has_multimodal_audio_input/
false
false
https://b.thumbs.redditm…iUy7sy9Tkbpo.jpg
1
null
LLM chess ELO?
0
I was wondering how good LLMs are at chess in terms of Elo (say, on Lichess, for discussion purposes), and looked online, and the best I could find was [this](https://dubesor.de/chess/chess-leaderboard), which seems out of date at best, and not reliable more realistically. Any clue, anyone, if there's something more accurate, up to date, and, for lack of a better term, better? Thanks :)
2025-06-15T13:45:37
https://www.reddit.com/r/LocalLLaMA/comments/1lc0oyf/llm_chess_elo/
BaconSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc0oyf
false
null
t3_1lc0oyf
/r/LocalLLaMA/comments/1lc0oyf/llm_chess_elo/
false
false
self
0
null
Is it appropriate to do creative writing with RAG?
1
[removed]
2025-06-15T14:20:24
https://www.reddit.com/r/LocalLLaMA/comments/1lc1gcs/is_it_appropriate_to_do_creative_writing_with_rag/
ArranEye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc1gcs
false
null
t3_1lc1gcs
/r/LocalLLaMA/comments/1lc1gcs/is_it_appropriate_to_do_creative_writing_with_rag/
false
false
self
1
null
Recreating old cartoons
8
I don’t actually have a solution for this. I’m curious if anyone else has found one. At some point in the future, I imagine the new video/image models could take old cartoons (or stop-motion Gumby) that are very low resolution and very low frame rate and rebuild them so that they are both high frame rate and high resolution. Nine months ago or so I downloaded all the different upscalers and was unimpressed with their ability to handle cartoons. The new video models brought it back to mind. Is anyone working on a project like this? Or does anyone know of a technology that gets good results?
2025-06-15T14:55:57
https://www.reddit.com/r/LocalLLaMA/comments/1lc295w/recreating_old_cartoons/
olympics2022wins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc295w
false
null
t3_1lc295w
/r/LocalLLaMA/comments/1lc295w/recreating_old_cartoons/
false
false
self
8
null
Is it appropriate to do creative writing with RAG?
1
[removed]
2025-06-15T15:09:49
https://www.reddit.com/r/LocalLLaMA/comments/1lc2kr9/is_it_appropriate_to_do_creative_writing_with_rag/
ArranEye
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc2kr9
false
null
t3_1lc2kr9
/r/LocalLLaMA/comments/1lc2kr9/is_it_appropriate_to_do_creative_writing_with_rag/
false
false
self
1
null
PSA: 2 * 3090 with Nvlink can cause depression*
201
Hello. I was enjoying my 3090 so much. So I thought why not get a second? My use case is local coding models, and Gemma 3 mostly. It's been nothing short of a nightmare to get working. Just about everything that could go wrong, has gone wrong. * Mining rig frame took a day to put together * Power supply so huge it's just hanging out of said rig * Pci-e extender cables are a pain * My OS nvme died during this process * Fiddling with bios options to get both to work * Nvlink wasn't clipped on properly at first * I have a pci-e bifurcation card that I'm not using because I'm too scared to see what happens if I plug that in (it has a sata power connector and I'm scared it will just blow up) * Wouldn't turn on this morning (I've snapped my pci-e clips off my motherboard so maybe it's that) I have a desk fan nearby for when I finish getting vLLM set up. I will try and clip some case fans near them. I suppose the point of this post and my advice is, if you are going to mess around, build a second machine; don't take your workstation and try to make it something it isn't. Cheers. * Just trying to have some light humour about self-inflicted problems and hoping to help anyone who might be thinking of doing the same to themselves. ❤️
2025-06-15T15:15:53
https://i.redd.it/sy4x3c4ft37f1.jpeg
cuckfoders
i.redd.it
1970-01-01T00:00:00
0
{}
1lc2pv9
false
null
t3_1lc2pv9
/r/LocalLLaMA/comments/1lc2pv9/psa_2_3090_with_nvlink_can_cause_depression/
false
false
default
201
{'enabled': True, 'images': [{'id': 'sy4x3c4ft37f1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=108&crop=smart&auto=webp&s=97f6c82b2993197999ec385b1aa95cb80e8f221d', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=216&crop=smart&auto=webp&s=d77271effc3e6e8d0a6c4a158c2ae675a457f3b3', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=320&crop=smart&auto=webp&s=887a4e6ba776163a24b71f5d4b80c40703b7af57', 'width': 320}, {'height': 852, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=640&crop=smart&auto=webp&s=b06f38f038f50f6e67a1b6715463246bf3738e46', 'width': 640}, {'height': 1278, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=960&crop=smart&auto=webp&s=cbe54c1a87ce901458aba68e8ee96678071beb6f', 'width': 960}, {'height': 1438, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?width=1080&crop=smart&auto=webp&s=007c23d735dc2d9db65ba4352b74f24c1f8315cf', 'width': 1080}], 'source': {'height': 4624, 'url': 'https://preview.redd.it/sy4x3c4ft37f1.jpeg?auto=webp&s=21e10292ee47371da0a076e13c517452678d44e9', 'width': 3472}, 'variants': {}}]}
Local LLM Memorization – A fully local memory system for long-term recall and visualization
1
[removed]
2025-06-15T15:53:57
https://www.reddit.com/r/LocalLLaMA/comments/1lc3lfs/local_llm_memorization_a_fully_local_memory/
Vicouille6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc3lfs
false
null
t3_1lc3lfs
/r/LocalLLaMA/comments/1lc3lfs/local_llm_memorization_a_fully_local_memory/
false
false
self
1
null
What I Learned from Breaking Down How Small AI Chatbots Actually Work (Tokenization to Testing)
1
[removed]
2025-06-15T15:57:33
https://i.redd.it/vlhs3hyb547f1.png
LokeshKeswani
i.redd.it
1970-01-01T00:00:00
0
{}
1lc3oid
false
null
t3_1lc3oid
/r/LocalLLaMA/comments/1lc3oid/what_i_learned_from_breaking_down_how_small_ai/
false
false
default
1
{'enabled': True, 'images': [{'id': 'vlhs3hyb547f1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=108&crop=smart&auto=webp&s=fd1e8598884dac49707761c86d9ed5ca8e288e02', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=216&crop=smart&auto=webp&s=9d2a09f106fa25e1feca533585985b54e192b84f', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=320&crop=smart&auto=webp&s=a2363388b7db7c8734025cb0687abb8f71b0bace', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=640&crop=smart&auto=webp&s=3bc844003da5545217528e9d08297d2b37b99d07', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=960&crop=smart&auto=webp&s=e3120df858874e36c826b58f4511aac089c86f89', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?width=1080&crop=smart&auto=webp&s=c8415cdf0fc52cefe0f67dc8ba15c64e762b7fc3', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/vlhs3hyb547f1.png?auto=webp&s=e489a667560aca4424b7b9a29b151336338ae8a4', 'width': 1280}, 'variants': {}}]}
Experimental ChatGPT like Web UI for Gemini API (open source)
1
[removed]
2025-06-15T16:16:47
https://www.reddit.com/r/LocalLLaMA/comments/1lc4582/experimental_chatgpt_like_web_ui_for_gemini_api/
W4D-cmd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc4582
false
null
t3_1lc4582
/r/LocalLLaMA/comments/1lc4582/experimental_chatgpt_like_web_ui_for_gemini_api/
false
false
https://b.thumbs.redditm…ZUoO-AcoZ9GI.jpg
1
null
Can someone with a Chinese ID get me an API key for Volcengine?
0
I am trying to run the new Seedance models via API and saw that they were made available on Volcengine (https://www.volcengine.com/docs/82379/1520757). However, in order to get an API key, you need to have a Chinese ID, which I do not have. I wonder if anyone can help with this.
2025-06-15T16:30:19
https://www.reddit.com/r/LocalLLaMA/comments/1lc4gtr/can_someone_with_a_chinese_id_get_me_an_api_key/
yachty66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc4gtr
false
null
t3_1lc4gtr
/r/LocalLLaMA/comments/1lc4gtr/can_someone_with_a_chinese_id_get_me_an_api_key/
false
false
self
0
null
Live Speech To Text in Arabic
1
I was building an app for the Holy Quran which includes a feature where you can recite in Arabic and a highlighter follows what you speak. I want to later make this scalable to error detection and more, similar to Tarteel AI. But I can't seem to find a good model to do the Arabic audio-to-text part adequately in real time. I tried Whisper, whisper.cpp, WhisperX, and Vosk, but none give adequate results except Apple's ASR (very unexpected). I want this app to be compatible with iOS and Android devices and want the ASR functionality to be client-side only, to eliminate the need for an internet connection. What models or new approaches should I try?
2025-06-15T16:32:58
https://www.reddit.com/r/LocalLLaMA/comments/1lc4j2w/live_speech_to_text_in_arabic/
AbdullahKhanSherwani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc4j2w
false
null
t3_1lc4j2w
/r/LocalLLaMA/comments/1lc4j2w/live_speech_to_text_in_arabic/
false
false
self
1
null
Is ROCm better supported on Arch through an AUR package?
5
Or is the best way to use ROCm the Docker image provided here: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/pytorch-install.html#using-wheels-package Asking for a friend of mine.
2025-06-15T17:28:39
https://www.reddit.com/r/LocalLLaMA/comments/1lc5w5r/is_rocm_better_supported_on_arch_through_a_aur/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc5w5r
false
null
t3_1lc5w5r
/r/LocalLLaMA/comments/1lc5w5r/is_rocm_better_supported_on_arch_through_a_aur/
false
false
self
5
null
Gemma3 12b or 27b for writing assistance/brainstorming?
6
A disclaimer before any reddit writers shit on me for using AI to write. I don't blindly copy and paste. I don't have it generate stories. All the ideas come from ME. I only use AI to bounce ideas off it, to give advice on writing, and to help me streamline the stories. It's like having a more experienced writer looking at my work and providing advice on wording and making it more streamlined. Recently I started having ChatGPT give me micro storywriting challenges to help me improve my writing skills. So far, it's been helpful. I heard Gemma is really good at this sort of stuff to help writers with brainstorming and providing advice on editing texts. Would the 12b model be fine for what I need? I have the 12b and 27b installed via ollama and open WebUI. I have an RX 7800XT and I tested it out a little bit. The 27b takes a few minutes to output a response and it's not super different from the 12b responses. Maybe a bit more detailed.
2025-06-15T17:54:26
https://www.reddit.com/r/LocalLLaMA/comments/1lc6idx/gemma3_12b_or_27b_for_writing/
Lord_Thunderballs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc6idx
false
null
t3_1lc6idx
/r/LocalLLaMA/comments/1lc6idx/gemma3_12b_or_27b_for_writing/
false
false
self
6
null
I wrapped Apple’s new on-device models in an OpenAI-compatible API
1
[removed]
2025-06-15T18:04:12
https://www.reddit.com/r/LocalLLaMA/comments/1lc6r5y/i_wrapped_apples_new_ondevice_models_in_an/
ChanningDai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc6r5y
false
null
t3_1lc6r5y
/r/LocalLLaMA/comments/1lc6r5y/i_wrapped_apples_new_ondevice_models_in_an/
false
false
self
1
null
I wrapped Apple’s new on-device models in an OpenAI-compatible API
311
I spent the weekend vibe-coding in Cursor and ended up with a small Swift app that turns the new macOS 26 on-device Apple Intelligence models into a local server you can hit with standard OpenAI `/v1/chat/completions` calls. Point any client you like at `http://127.0.0.1:11535`. * Nothing leaves your Mac * Works with any OpenAI-compatible client * Open source, MIT-licensed Repo’s here → [**https://github.com/gety-ai/apple-on-device-openai**](https://github.com/gety-ai/apple-on-device-openai) It was a fun hack—let me know if you try it out or run into any weirdness. Cheers! 🚀
2025-06-15T18:06:56
https://www.reddit.com/r/LocalLLaMA/comments/1lc6tii/i_wrapped_apples_new_ondevice_models_in_an/
FixedPt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc6tii
false
null
t3_1lc6tii
/r/LocalLLaMA/comments/1lc6tii/i_wrapped_apples_new_ondevice_models_in_an/
false
false
self
311
null
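A minimal sketch of calling the server from the post above with the standard OpenAI-style request shape. The base URL is the one quoted in the post; the model name is a placeholder assumption (check the linked repo for the real identifier). The actual network call is left commented out since it requires the server running on macOS 26:

```python
import json

# Endpoint quoted in the post; any OpenAI-compatible client can target it.
BASE_URL = "http://127.0.0.1:11535/v1/chat/completions"

# Standard OpenAI chat-completions request body.
payload = {
    "model": "apple-on-device",  # placeholder assumption, not confirmed by the post
    "messages": [{"role": "user", "content": "Say hello."}],
}

body = json.dumps(payload)
print(body)

# To actually send it (requires the local server to be running):
# import urllib.request
# req = urllib.request.Request(BASE_URL, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

Because the server speaks the OpenAI wire format, the same payload works unchanged with the official `openai` Python client by setting its `base_url` to the local address.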
Can someone explain the current socio-politics of GPUs?
0
Hi, I want to prepare an article on the AI race, GPUs, and the economic war between countries. I haven't been following the news for the past 8 months. What is the current status? I would like to hear about Nvidia's monopoly, CUDA, the massive chip shortage, the role of TSMC, what Biden did to cut Nvidia's exports to China, what Trump's tariffs did, how China replied, and China's current status. Are they making their own chips? How does this affect the AI race between countries? Did the US ban export of GPUs to India? I know you folks are the best choice to get answers and viewpoints. I need to connect all these dots; the above points are just hints. My idea is to get a whole picture of GPU manufacturing and the AI race between countries. I hope you'll also add your predictions on upcoming economic falls and rises.
2025-06-15T18:27:06
https://www.reddit.com/r/LocalLLaMA/comments/1lc7arp/can_someone_explain_the_current_status/
Trysem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc7arp
false
null
t3_1lc7arp
/r/LocalLLaMA/comments/1lc7arp/can_someone_explain_the_current_status/
false
false
self
0
null
Bank transactions extractions, tech stack help needed.
0
Hi, I am planning to start a project to extract transactions from bank PDFs. Let's say I have 50 different bank statements and they all have different templates; some have tables and some do not. Different banks use different headers for transactions, like credit/deposit, daily balance, etc. So the input is PDFs and the output is Excel with transactions. I need help with the system architecture (fully local run): 1) model? 2) embedding model? 3) DB? I am new to RAG.
2025-06-15T18:54:39
https://www.reddit.com/r/LocalLLaMA/comments/1lc7ye0/bank_transactions_extractions_tech_stack_help/
nimmalachaitanya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc7ye0
false
null
t3_1lc7ye0
/r/LocalLLaMA/comments/1lc7ye0/bank_transactions_extractions_tech_stack_help/
false
false
self
0
null
Remastering public domain Blues and Jazz
1
[removed]
2025-06-15T18:56:20
https://www.reddit.com/r/LocalLLaMA/comments/1lc7zvn/remastering_public_domain_blues_and_jazz/
autonoma_2042
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc7zvn
false
null
t3_1lc7zvn
/r/LocalLLaMA/comments/1lc7zvn/remastering_public_domain_blues_and_jazz/
false
false
self
1
null
So how are people actually building their agentic RAG pipeline?
23
I have a RAG app with a few sources that I can manually choose from to retrieve context. How does one prompt the LLM to get it to choose the right source? I just read on here that people have success with the new Mistral, but what do these prompts to the agent LLM look like? What have I missed after all these months that everyone else seems to know about building agents for their bespoke vector databases?
2025-06-15T19:11:05
https://www.reddit.com/r/LocalLLaMA/comments/1lc8cse/so_how_are_people_actually_building_their_agentic/
walagoth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc8cse
false
null
t3_1lc8cse
/r/LocalLLaMA/comments/1lc8cse/so_how_are_people_actually_building_their_agentic/
false
false
self
23
null
What's a model (preferably uncensored) that my computer would handle but with difficulty?
1
[removed]
2025-06-15T19:20:45
https://www.reddit.com/r/LocalLLaMA/comments/1lc8l69/whats_a_model_preferably_uncensored_that_my/
Rahodees
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc8l69
false
null
t3_1lc8l69
/r/LocalLLaMA/comments/1lc8l69/whats_a_model_preferably_uncensored_that_my/
false
false
self
1
null
What can I do with my laptop?
1
[removed]
2025-06-15T19:30:23
https://www.reddit.com/r/LocalLLaMA/comments/1lc8tcf/what_can_i_do_with_my_laptop/
djinny31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc8tcf
false
null
t3_1lc8tcf
/r/LocalLLaMA/comments/1lc8tcf/what_can_i_do_with_my_laptop/
false
false
self
1
null
Good models for a 16GB M4 Mac Mini?
16
Just bought a 16GB M4 Mac Mini and put LM Studio into it. Right now I'm running the Deepseek R1 Qwen 8B model. It's ok and generates text pretty quickly but sometimes doesn't quite give the answer I'm looking for. What other models do you recommend? I don't code, mostly just use these things as a toy or to get quick answers for stuff that I would have used a search engine for in the past.
2025-06-15T19:50:41
https://www.reddit.com/r/LocalLLaMA/comments/1lc9alf/good_models_for_a_16gb_m4_mac_mini/
puukkeriro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lc9alf
false
null
t3_1lc9alf
/r/LocalLLaMA/comments/1lc9alf/good_models_for_a_16gb_m4_mac_mini/
false
false
self
16
null
changeish - Manage your code's changelog using Ollama
1
[removed]
2025-06-15T19:51:58
https://github.com/itlackey/changeish
Kitchen_Fix1464
github.com
1970-01-01T00:00:00
0
{}
1lc9bp3
false
null
t3_1lc9bp3
/r/LocalLLaMA/comments/1lc9bp3/changeish_manage_your_codes_changelog_using_ollama/
false
false
default
1
null
Mistral-Small useless when running locally
5
Mistral-Small from 2024 was one of my favorite local models, but their 2025 versions (running on llama.cpp with chat completion) are driving me crazy. It's not just the repetition problem people report; in my use cases it behaves totally erratically: bad instruction following and sometimes completely off-the-rails answers that have nothing to do with my prompts. I tried different temperatures (most use cases for me require <0.4 anyway) and played with different sampler settings, quants and quantization techniques, from different sources (Bartowski, unsloth). I thought it might be the default prompt template in llama-server, so I tried to provide my own and used the old completion endpoint instead of chat. To no avail. Always bad results. Abandoned it back then in favor of other models. Then I tried Magistral-Small (Q6, unsloth) the other day in an agentic test setup. It did pick tools, but not intelligently, and it used them in a wrong way and with stupid parameters. For example, one of my low-bar tests: given a current-date tool, a weather tool, and the prompt to get the weather in New York yesterday, it called the weather tool without calling the date tool first and asked for the weather in Moscow. The final answer was then some product review about a phone called Magistral. Other times it generates product reviews about Tekken (not their tokenizer, the game). Tried the same with Mistral-Small-3.1-24B-Instruct-2503-Q6\_K (unsloth). Same problems. I'm also using Mistral-Small via OpenRouter in a production RAG application. There it's pretty reliable and sometimes produces better results than Mistral Medium (sure, they use higher quants, but that can't be it). What am I doing wrong? I never had similar issues with any other model.
2025-06-15T21:09:03
https://www.reddit.com/r/LocalLLaMA/comments/1lcb4e6/mistralsmall_useless_when_running_locally/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcb4e6
false
null
t3_1lcb4e6
/r/LocalLLaMA/comments/1lcb4e6/mistralsmall_useless_when_running_locally/
false
false
self
5
null
FULL LEAKED v0 System Prompts and Tools [UPDATED]
170
(Latest system prompt: 15/06/2025) I managed to get the FULL updated v0 system prompt and internal tools info. Over 900 lines. You can check it out at: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools
2025-06-15T21:37:55
https://www.reddit.com/r/LocalLLaMA/comments/1lcbs7z/full_leaked_v0_system_prompts_and_tools_updated/
Independent-Box-898
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcbs7z
false
null
t3_1lcbs7z
/r/LocalLLaMA/comments/1lcbs7z/full_leaked_v0_system_prompts_and_tools_updated/
false
false
self
170
{'enabled': False, 'images': [{'id': 'z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?width=108&crop=smart&auto=webp&s=e0931cf11aed86a4d7ab261ddac8592d0789a7fa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?width=216&crop=smart&auto=webp&s=7abe347d15cc79b63442017829e5bd8ffae2b39b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?width=320&crop=smart&auto=webp&s=890b098b56cefdac28cc710e18b13f324b84ebfe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?width=640&crop=smart&auto=webp&s=f05d98b5d495e6124ac462ea8c066fd76669e125', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?width=960&crop=smart&auto=webp&s=f4df016fad58739d769d868b1d08c5cd9379cb00', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?width=1080&crop=smart&auto=webp&s=c26bb9cf9e31263d8985ece4902b8214e04b7c55', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z-F-XuiiPfOPT-xAWmd0p9c0_13GYNY8MeSslCYz0To.png?auto=webp&s=c92d45ae3d0d7021be0e7e4542a42df387d16b3d', 'width': 1200}, 'variants': {}}]}
Is gemini 2.5 pro just naturally better than the rest or is it just me?
70
I mean, maybe the other models do better in niche benchmarks, and maybe claude is better at coding specifically, but gemini 2.5 pro feels like I'm talking to a smart human being and it can actually build good arguments and have better chat sessions.
2025-06-15T21:54:54
https://www.reddit.com/r/LocalLLaMA/comments/1lcc5vk/is_gemini_25_pro_just_naturally_better_than_the/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcc5vk
false
null
t3_1lcc5vk
/r/LocalLLaMA/comments/1lcc5vk/is_gemini_25_pro_just_naturally_better_than_the/
false
false
self
70
null
Most human like LLM
1
[removed]
2025-06-15T22:11:26
https://www.reddit.com/r/LocalLLaMA/comments/1lccj7n/most_human_like_llm/
Wintlink-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lccj7n
false
null
t3_1lccj7n
/r/LocalLLaMA/comments/1lccj7n/most_human_like_llm/
false
false
self
1
null
Best LLMs for a MacBook Air M4 16GB
1
[removed]
2025-06-15T22:34:13
https://www.reddit.com/r/LocalLLaMA/comments/1lcd0hx/best_llms_for_a_macbook_air_m4_16gb/
Akeel1994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcd0hx
false
null
t3_1lcd0hx
/r/LocalLLaMA/comments/1lcd0hx/best_llms_for_a_macbook_air_m4_16gb/
false
false
self
1
null
app that runs ollama and local LLM
1
[removed]
2025-06-15T23:04:59
https://www.reddit.com/r/LocalLLaMA/comments/1lcdnrf/app_that_runs_ollama_and_local_llm/
OwnSoup8888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcdnrf
false
null
t3_1lcdnrf
/r/LocalLLaMA/comments/1lcdnrf/app_that_runs_ollama_and_local_llm/
false
false
self
1
null
run ollama with local models
1
[removed]
2025-06-15T23:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1lcdsde/run_ollama_with_local_models/
OwnSoup8888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcdsde
false
null
t3_1lcdsde
/r/LocalLLaMA/comments/1lcdsde/run_ollama_with_local_models/
false
false
self
1
null
HP ZBook Ultra 14-inch G1a LLM Benchmarks
1
[removed]
2025-06-15T23:27:35
https://www.reddit.com/r/LocalLLaMA/comments/1lce4vg/hp_zbook_ultra_14_zoll_g1a_llm_benchmarks/
holistech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lce4vg
false
null
t3_1lce4vg
/r/LocalLLaMA/comments/1lce4vg/hp_zbook_ultra_14_zoll_g1a_llm_benchmarks/
false
false
self
1
{'enabled': False, 'images': [{'id': 'q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?width=108&crop=smart&auto=webp&s=fa4b7038ed19bf08dd9773cb2f0db1d6352e949e', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?width=216&crop=smart&auto=webp&s=72e1a9b4a600c071841c0deec2f885c94077f825', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?width=320&crop=smart&auto=webp&s=731a3c9ac8e1077e19cf8be8bc5a4759ac841f8e', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?width=640&crop=smart&auto=webp&s=e2cf3d2c3e00c04da936feeda0fc3ecd74738f8a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?width=960&crop=smart&auto=webp&s=e5eb1242ba92cba17d89336308bc3bb55a7c425a', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?width=1080&crop=smart&auto=webp&s=73996c0af6f060a4e77c38fe598261a3c7c74caa', 'width': 1080}], 'source': {'height': 2048, 'url': 'https://external-preview.redd.it/q2lajrOBmlh-esM4so8_e-D1xHt339X1g0ldL1vl71o.png?auto=webp&s=9de7e7a531c98018fc76956e58fe68c9de596485', 'width': 2048}, 'variants': {}}]}
Mistral Small 3.1 vs Magistral Small - experience?
1
[deleted]
2025-06-15T23:35:57
[deleted]
1970-01-01T00:00:00
0
{}
1lceb0j
false
null
t3_1lceb0j
/r/LocalLLaMA/comments/1lceb0j/mistral_small_31_vs_magistral_small_experience/
false
false
default
1
null
Best tutorials and resources for learning RAG?
17
I want to learn how RAG works and use it on a 4B-7B model. Do you have some beginner-friendly links/videotutorials/tools to help me out? Thanks!
2025-06-15T23:50:30
https://www.reddit.com/r/LocalLLaMA/comments/1lcelbw/best_tutorials_and_resources_for_learning_rag/
sebastianmicu24
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcelbw
false
null
t3_1lcelbw
/r/LocalLLaMA/comments/1lcelbw/best_tutorials_and_resources_for_learning_rag/
false
false
self
17
null
Test post
1
[deleted]
2025-06-15T23:57:11
[deleted]
1970-01-01T00:00:00
0
{}
1lceq68
false
null
t3_1lceq68
/r/LocalLLaMA/comments/1lceq68/test_post/
false
false
default
1
null
Test Post
1
[deleted]
2025-06-15T23:58:21
[deleted]
1970-01-01T00:00:00
0
{}
1lcer1z
false
null
t3_1lcer1z
/r/LocalLLaMA/comments/1lcer1z/test_post/
false
false
default
1
null
Test post
1
[deleted]
2025-06-16T00:16:18
[deleted]
1970-01-01T00:00:00
0
{}
1lcf4dl
false
null
t3_1lcf4dl
/r/LocalLLaMA/comments/1lcf4dl/test_post/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:17:16
[deleted]
1970-01-01T00:00:00
0
{}
1lcf510
false
null
t3_1lcf510
/r/LocalLLaMA/comments/1lcf510/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[removed]
2025-06-16T00:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1lcf5nt/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcf5nt
false
null
t3_1lcf5nt
/r/LocalLLaMA/comments/1lcf5nt/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[removed]
2025-06-16T00:19:11
https://www.reddit.com/r/LocalLLaMA/comments/1lcf6en/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcf6en
false
null
t3_1lcf6en
/r/LocalLLaMA/comments/1lcf6en/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
1
null
(LLMs) reflexively project grammatical and semantic structures from their training corpus
1
[removed]
2025-06-16T00:19:33
https://www.reddit.com/r/LocalLLaMA/comments/1lcf6o1/llms_reflexively_project_grammatical_and_semantic/
Funny_Ingenuity6982
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcf6o1
false
null
t3_1lcf6o1
/r/LocalLLaMA/comments/1lcf6o1/llms_reflexively_project_grammatical_and_semantic/
false
false
self
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:19:38
[deleted]
1970-01-01T00:00:00
0
{}
1lcf6qb
false
null
t3_1lcf6qb
/r/LocalLLaMA/comments/1lcf6qb/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:20:45
[deleted]
1970-01-01T00:00:00
0
{}
1lcf7jz
false
null
t3_1lcf7jz
/r/LocalLLaMA/comments/1lcf7jz/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:21:32
[deleted]
1970-01-01T00:00:00
0
{}
1lcf83c
false
null
t3_1lcf83c
/r/LocalLLaMA/comments/1lcf83c/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[removed]
2025-06-16T00:22:19
[deleted]
1970-01-01T00:00:00
0
{}
1lcf8n4
false
null
t3_1lcf8n4
/r/LocalLLaMA/comments/1lcf8n4/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:23:28
[deleted]
1970-01-01T00:00:00
0
{}
1lcf9hd
false
null
t3_1lcf9hd
/r/LocalLLaMA/comments/1lcf9hd/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[removed]
2025-06-16T00:24:04
https://www.reddit.com/r/LocalLLaMA/comments/1lcf9x5/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcf9x5
false
null
t3_1lcf9x5
/r/LocalLLaMA/comments/1lcf9x5/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=216&crop=smart&auto=webp&s=c32e8be1177863876c832c62bb5b79fcac234a15', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=320&crop=smart&auto=webp&s=d72fb1418e7e1cdfa09362f2ef6a874eb35901bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=640&crop=smart&auto=webp&s=8627ee3850f012efee5d007f41e6d3290cb45ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=960&crop=smart&auto=webp&s=c6d745392916018f628864f66611d8501ef32d0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=1080&crop=smart&auto=webp&s=6314f151ce285c7f790dc8bfa6fc905280097adf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?auto=webp&s=c7b35a50a46805d4e7baabdca5d460eff59b5287', 'width': 1200}, 'variants': {}}]}
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:25:19
[deleted]
1970-01-01T00:00:00
0
{}
1lcfata
false
null
t3_1lcfata
/r/LocalLLaMA/comments/1lcfata/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[removed]
2025-06-16T00:26:18
https://www.reddit.com/r/LocalLLaMA/comments/1lcfbiy/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfbiy
false
null
t3_1lcfbiy
/r/LocalLLaMA/comments/1lcfbiy/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:27:17
[deleted]
1970-01-01T00:00:00
0
{}
1lcfc90
false
null
t3_1lcfc90
/r/LocalLLaMA/comments/1lcfc90/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:29:01
[deleted]
1970-01-01T00:00:00
0
{}
1lcfdg2
false
null
t3_1lcfdg2
/r/LocalLLaMA/comments/1lcfdg2/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[removed]
2025-06-16T00:30:07
https://www.reddit.com/r/LocalLLaMA/comments/1lcfe8u/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfe8u
false
null
t3_1lcfe8u
/r/LocalLLaMA/comments/1lcfe8u/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=216&crop=smart&auto=webp&s=c32e8be1177863876c832c62bb5b79fcac234a15', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=320&crop=smart&auto=webp&s=d72fb1418e7e1cdfa09362f2ef6a874eb35901bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=640&crop=smart&auto=webp&s=8627ee3850f012efee5d007f41e6d3290cb45ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=960&crop=smart&auto=webp&s=c6d745392916018f628864f66611d8501ef32d0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=1080&crop=smart&auto=webp&s=6314f151ce285c7f790dc8bfa6fc905280097adf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?auto=webp&s=c7b35a50a46805d4e7baabdca5d460eff59b5287', 'width': 1200}, 'variants': {}}]}
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[removed]
2025-06-16T00:30:49
https://www.reddit.com/r/LocalLLaMA/comments/1lcfer8/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfer8
false
null
t3_1lcfer8
/r/LocalLLaMA/comments/1lcfer8/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=216&crop=smart&auto=webp&s=c32e8be1177863876c832c62bb5b79fcac234a15', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=320&crop=smart&auto=webp&s=d72fb1418e7e1cdfa09362f2ef6a874eb35901bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=640&crop=smart&auto=webp&s=8627ee3850f012efee5d007f41e6d3290cb45ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=960&crop=smart&auto=webp&s=c6d745392916018f628864f66611d8501ef32d0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=1080&crop=smart&auto=webp&s=6314f151ce285c7f790dc8bfa6fc905280097adf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?auto=webp&s=c7b35a50a46805d4e7baabdca5d460eff59b5287', 'width': 1200}, 'variants': {}}]}
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:31:31
[deleted]
1970-01-01T00:00:00
0
{}
1lcff7s
false
null
t3_1lcff7s
/r/LocalLLaMA/comments/1lcff7s/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Test post
1
[removed]
2025-06-16T00:32:01
https://www.reddit.com/r/LocalLLaMA/comments/1lcffkg/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcffkg
false
null
t3_1lcffkg
/r/LocalLLaMA/comments/1lcffkg/test_post/
false
false
self
1
null
Test Post
1
[removed]
2025-06-16T00:35:21
https://www.reddit.com/r/LocalLLaMA/comments/1lcfhw2/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfhw2
false
null
t3_1lcfhw2
/r/LocalLLaMA/comments/1lcfhw2/test_post/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=216&crop=smart&auto=webp&s=c32e8be1177863876c832c62bb5b79fcac234a15', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=320&crop=smart&auto=webp&s=d72fb1418e7e1cdfa09362f2ef6a874eb35901bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=640&crop=smart&auto=webp&s=8627ee3850f012efee5d007f41e6d3290cb45ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=960&crop=smart&auto=webp&s=c6d745392916018f628864f66611d8501ef32d0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=1080&crop=smart&auto=webp&s=6314f151ce285c7f790dc8bfa6fc905280097adf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?auto=webp&s=c7b35a50a46805d4e7baabdca5d460eff59b5287', 'width': 1200}, 'variants': {}}]}
Test post
1
[removed]
2025-06-16T00:35:50
https://www.reddit.com/r/LocalLLaMA/comments/1lcfi8u/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfi8u
false
null
t3_1lcfi8u
/r/LocalLLaMA/comments/1lcfi8u/test_post/
false
false
self
1
null
Test post
1
[removed]
2025-06-16T00:36:19
https://www.reddit.com/r/LocalLLaMA/comments/1lcfilt/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfilt
false
null
t3_1lcfilt
/r/LocalLLaMA/comments/1lcfilt/test_post/
false
false
self
1
null
Test post
1
[removed]
2025-06-16T00:37:51
https://www.reddit.com/r/LocalLLaMA/comments/1lcfjno/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfjno
false
null
t3_1lcfjno
/r/LocalLLaMA/comments/1lcfjno/test_post/
false
false
self
1
null
Test post
1
[removed]
2025-06-16T00:38:35
https://www.reddit.com/r/LocalLLaMA/comments/1lcfk5m/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfk5m
false
null
t3_1lcfk5m
/r/LocalLLaMA/comments/1lcfk5m/test_post/
false
false
self
1
null
Test post
1
[deleted]
2025-06-16T00:39:15
[deleted]
1970-01-01T00:00:00
0
{}
1lcfkla
false
null
t3_1lcfkla
/r/LocalLLaMA/comments/1lcfkla/test_post/
false
false
default
1
null
Test post
1
[deleted]
2025-06-16T00:39:52
[deleted]
1970-01-01T00:00:00
0
{}
1lcfl1r
false
null
t3_1lcfl1r
/r/LocalLLaMA/comments/1lcfl1r/test_post/
false
false
default
1
null
Test post
1
[removed]
2025-06-16T00:40:42
https://www.reddit.com/r/LocalLLaMA/comments/1lcflol/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcflol
false
null
t3_1lcflol
/r/LocalLLaMA/comments/1lcflol/test_post/
false
false
self
1
null
Test Post
1
[deleted]
2025-06-16T00:41:40
[deleted]
1970-01-01T00:00:00
0
{}
1lcfmd9
false
null
t3_1lcfmd9
/r/LocalLLaMA/comments/1lcfmd9/test_post/
false
false
default
1
null
Test Post
1
[removed]
2025-06-16T00:42:36
https://www.reddit.com/r/LocalLLaMA/comments/1lcfn17/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfn17
false
null
t3_1lcfn17
/r/LocalLLaMA/comments/1lcfn17/test_post/
false
false
self
1
null
Test post
1
**Over the past year and a half** I've been working on the problem of **factual finetuning** \-- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use dataset generation and model training tool. Add documents, click a button, and Augmentoolkit will do everything for you: it'll generate a domain-specific dataset, combine it with a balanced amount of generic data, automatically train a model on it, download it, quantize it, and run it for inference (accessible with a built-in chat interface). The project (and its demo models) are fully open-source. I even trained a model to run inside Augmentoolkit itself, allowing for faster **local dataset generation**. This update took more than six months and thousands of dollars to put together, and represents **a complete rewrite and overhaul of the original project.** It includes 16 prebuilt dataset generation pipelines and the extensively-documented code and conventions to build more. Beyond just factual finetuning, it even **includes an experimental** [**GRPO pipeline**](https://github.com/e-p-armstrong/augmentoolkit/blob/master/docs/grpo.md) that lets you **train a model to do any conceivable task** by just **writing a prompt to grade that task.** # The Links * [Project](https://github.com/e-p-armstrong/augmentoolkit) * [Train a model in 13 minutes quickstart tutorial video](https://www.youtube.com/watch?v=E9TyyZzIMyY&ab_channel=Augmentoolkit) * Demo model (what the quickstart produces) * [Link](https://huggingface.co/Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example) * Dataset and training configs are fully open source. 
The config is literally the quickstart config; the dataset is * The demo model is an LLM trained on a subset of the US Army Field Manuals -- the best free and open modern source of comprehensive documentation on a well-known field that I have found. This is also because I trained a model on these in the past, and so training on them now serves as a good comparison between the power of the current tool and that of its previous version. * Experimental GRPO models * Now that Augmentoolkit includes the ability to grade models for their performance on a task, I naturally wanted to try this out, and on a task that people are familiar with. * I produced two RP models (base: Mistral 7b v0.2) with the intent of maximizing writing style quality and emotion, while minimizing GPT-isms. * One model has thought processes, the other does not. The non-thought-process model came out better for reasons described in the model card. * Non-reasoner [https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts) * Reasoner [https://huggingface.co/Heralax/llama-gRPo-thoughtprocess](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess) # The Process to Reproduce * Clone * `git clone` [`https://github.com/e-p-armstrong/augmentoolkit.git`](https://github.com/e-p-armstrong/augmentoolkit.git) * Run Start Script * Local or Online * Mac * `bash` [`macos.sh`](http://macos.sh/) * `bash local_macos.sh` * Linux * `bash` [`linux.sh`](http://linux.sh/) * `bash local_linux.sh` * Windows + warning * `./start_windows.bat` * Windows interface compatibility is uncertain. It's probably more reliable to use the CLI instead. Instructions are here * Add API keys or use the local model * I trained a 7b model that is purpose-built to run Augmentoolkit pipelines (Apache license). This means that you can probably generate data at a decent speed on your own computer. 
It will definitely be slower than with an API, but it will be *much* better than trying to generate tens of millions of tokens with a local 70b. * There are separate start scripts for local datagen. * You'll probably only be able to get good dataset generation speed on a Linux machine even though it does technically run on Mac, since Llama.cpp is MUCH slower than vLLM (which is Linux-only). * Click the "run" Button * Get Your Model * The integrated chat interface will automatically let you chat with it when the training and quanting is finished * The model will also automatically be pushed to Hugging Face (make sure you have enough space!) # Uses Besides faster generation times and lower costs, an expert AI that is trained on a domain gains a "big-picture" understanding of the subject that a generalist just won't have. It's the difference between giving a new student a class's full textbook and asking them to write an exam, versus asking a graduate student in that subject to write the exam. The new student probably won't even know where in that book they should look for the information they need, and even if they see the correct context, there's no guarantee that they understand what it means or how it fits into the bigger picture. Also, trying to build AI apps based on closed-source LLMs released by big labs sucks: * The lack of stable checkpoints under the control of the person running the model makes the tech unstable and unpredictable to build on. * Capabilities change without warning and models are frequently made worse. * People building with AI have to work around the LLMs they are using (a moving target), rather than make the LLMs they are using fit into their system. * Closed-source labs charge obscene prices, doing monopolistic rent collecting and impacting the margins of their customers. * Using closed-source labs is a privacy nightmare, especially now that API providers may be required by law to save and log formerly-private API requests. 
* Different companies have to all work with the same set of models, which have the same knowledge, the same capabilities, the same opinions, and they all sound more or less the same. But current open-source models often either suffer from a severe lack of capability, or are massive enough that they might as well be closed-source for most of the people trying to run them. The proposed solution? Small, efficient, powerful models that achieve superior performance on the things they are being used for (and sacrifice performance in the areas they *aren't* being used for) which are trained for their task and are controlled by the companies that use them. With Augmentoolkit: * You train your models, decide when those models update, and have full transparency over what went into them. * Capabilities change only when the company wants, and no one is forcing them to make their models worse. * People working with AI can customize the model they are using to function as part of the system they are designing, rather than having to twist their system to match a model. * 7 billion parameter models (the standard size Augmentoolkit trains) are so cheap to run it is absurd. They can run on a laptop, even. * Because you control your model, you control your inference, and you control your customers' data. * With your model's capabilities being fully customizable, your AI sounds like *your* AI, and has the opinions and capabilities that you want it to have. Furthermore, the open-source indie finetuning scene has been on life support, largely due to a lack of ability to make data, and the difficulty of getting started with (and getting results with) training, compared to methods like merging. Now that data is far easier to make, and training for specific objectives is much easier to do, and there is a good baseline with training wheels included that makes getting started easy, the hope is that people can iterate on finetunes and the scene can have new life. 
Augmentoolkit is taking a bet on an open-source future powered by small, efficient, Specialist Language Models.
2025-06-16T00:43:41
https://www.reddit.com/r/LocalLLaMA/comments/1lcfntv/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfntv
false
null
t3_1lcfntv
/r/LocalLLaMA/comments/1lcfntv/test_post/
false
false
self
1
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
1
[deleted]
2025-06-16T00:45:12
[deleted]
1970-01-01T00:00:00
0
{}
1lcfoue
false
null
t3_1lcfoue
/r/LocalLLaMA/comments/1lcfoue/augmentoolkit_30_7_months_of_work_mit_license/
false
false
default
1
null
Test post
1
**Over the past year and a half** I've been working on the problem of **factual finetuning** \-- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use dataset generation and model training tool. Add documents, click a button, and Augmentoolkit will do everything for you: it'll generate a domain-specific dataset, combine it with a balanced amount of generic data, automatically train a model on it, download it, quantize it, and run it for inference (accessible with a built-in chat interface). The project (and its demo models) are fully open-source. I even trained a model to run inside Augmentoolkit itself, allowing for faster **local dataset generation**. This update took more than six months and thousands of dollars to put together, and represents **a complete rewrite and overhaul of the original project.** It includes 16 prebuilt dataset generation pipelines and the extensively-documented code and conventions to build more. Beyond just factual finetuning, it even **includes an experimental** [**GRPO pipeline**](https://github.com/e-p-armstrong/augmentoolkit/blob/master/docs/grpo.md) that lets you **train a model to do any conceivable task** by just **writing a prompt to grade that task.** # The Links * [Project](https://github.com/e-p-armstrong/augmentoolkit) * [Train a model in 13 minutes quickstart tutorial video](https://www.youtube.com/watch?v=E9TyyZzIMyY&ab_channel=Augmentoolkit) * Demo model (what the quickstart produces) * [Link](https://huggingface.co/Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example) * Dataset and training configs are fully open source. 
The config is literally the quickstart config; the dataset is * The demo model is an LLM trained on a subset of the US Army Field Manuals -- the best free and open modern source of comprehensive documentation on a well-known field that I have found. This is also because I trained a model on these in the past, and so training on them now serves as a good comparison between the power of the current tool and that of its previous version. * Experimental GRPO models * Now that Augmentoolkit includes the ability to grade models for their performance on a task, I naturally wanted to try this out, and on a task that people are familiar with. * I produced two RP models (base: Mistral 7b v0.2) with the intent of maximizing writing style quality and emotion, while minimizing GPT-isms. * One model has thought processes, the other does not. The non-thought-process model came out better for reasons described in the model card. * Non-reasoner [https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts) * Reasoner [https://huggingface.co/Heralax/llama-gRPo-thoughtprocess](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess) # The Process to Reproduce * Clone * `git clone` [`https://github.com/e-p-armstrong/augmentoolkit.git`](https://github.com/e-p-armstrong/augmentoolkit.git) * Run Start Script * Local or Online * Mac * `bash` [`macos.sh`](http://macos.sh/) * `bash local_macos.sh` * Linux * `bash` [`linux.sh`](http://linux.sh/) * `bash local_linux.sh` * Windows + warning * `./start_windows.bat` * Windows interface compatibility is uncertain. It's probably more reliable to use the CLI instead. Instructions are here * Add API keys or use the local model * I trained a 7b model that is purpose-built to run Augmentoolkit pipelines (Apache license). This means that you can probably generate data at a decent speed on your own computer. 
It will definitely be slower than with an API, but it will be *much* better than trying to generate tens of millions of tokens with a local 70b. * There are separate start scripts for local datagen. * You'll probably only be able to get good dataset generation speed on a Linux machine even though it does technically run on Mac, since Llama.cpp is MUCH slower than vLLM (which is Linux-only). * Click the "run" Button * Get Your Model * The integrated chat interface will automatically let you chat with it when the training and quanting is finished * The model will also automatically be pushed to Hugging Face (make sure you have enough space!) # Uses Besides faster generation times and lower costs, an expert AI that is trained on a domain gains a "big-picture" understanding of the subject that a generalist just won't have. It's the difference between giving a new student a class's full textbook and asking them to write an exam, versus asking a graduate student in that subject to write the exam. The new student probably won't even know where in that book they should look for the information they need, and even if they see the correct context, there's no guarantee that they understand what it means or how it fits into the bigger picture. Also, trying to build AI apps based on closed-source LLMs released by big labs sucks: * The lack of stable checkpoints under the control of the person running the model makes the tech unstable and unpredictable to build on. * Capabilities change without warning and models are frequently made worse. * People building with AI have to work around the LLMs they are using (a moving target), rather than make the LLMs they are using fit into their system. * Closed-source labs charge obscene prices, doing monopolistic rent collecting and impacting the margins of their customers. * Using closed-source labs is a privacy nightmare, especially now that API providers may be required by law to save and log formerly-private API requests. 
* Different companies have to all work with the same set of models, which have the same knowledge, the same capabilities, the same opinions, and they all sound more or less the same. But current open-source models often either suffer from a severe lack of capability, or are massive enough that they might as well be closed-source for most of the people trying to run them. The proposed solution? Small, efficient, powerful models that achieve superior performance on the things they are being used for (and sacrifice performance in the areas they *aren't* being used for) which are trained for their task and are controlled by the companies that use them. With Augmentoolkit: * You train your models, decide when those models update, and have full transparency over what went into them. * Capabilities change only when the company wants, and no one is forcing them to make their models worse. * People working with AI can customize the model they are using to function as part of the system they are designing, rather than having to twist their system to match a model. * 7 billion parameter models (the standard size Augmentoolkit trains) are so cheap to run it is absurd. They can run on a laptop, even. * Because you control your model, you control your inference, and you control your customers' data. * With your model's capabilities being fully customizable, your AI sounds like *your* AI, and has the opinions and capabilities that you want it to have. Furthermore, the open-source indie finetuning scene has been on life support, largely due to a lack of ability to make data, and the difficulty of getting started with (and getting results with) training, compared to methods like merging. Now that data is far easier to make, and training for specific objectives is much easier to do, and there is a good baseline with training wheels included that makes getting started easy, the hope is that people can iterate on finetunes and the scene can have new life. 
Augmentoolkit is taking a bet on an open-source future powered by small, efficient, Specialist Language Models. # Cool things of note * Factually-finetuned models can actually cite what files they are remembering information from, and with a good degree of accuracy at that. This is not exclusive to the domain of RAG anymore. * Augmentoolkit models by default use a custom prompt template because it turns out that making SFT data look more like pretraining data in its structure helps models use their pretraining skills during chat settings. This includes factual recall. * Augmentoolkit was used to create the dataset generation model that runs Augmentoolkit's pipelines. You can find the config used to make the dataset (2.5 gigabytes) in the `generation/core_composition/meta_datagen` folder. * There's a pipeline for turning normal SFT data into reasoning SFT data that can give a good cold start to models that you want to give thought processes to. A number of datasets converted using this pipeline [are available on Hugging Face](https://huggingface.co/Augmentoolkit), fully open-source. * Augmentoolkit does not just automatically train models on the domain-specific data you generate: to ensure that there is enough data made for the model to 1) generalize and 2) learn the actual capability of conversation, Augmentoolkit will balance your domain-specific data with generic conversational data, ensuring that the LLM becomes smarter while retaining all of the question-answering capabilities imparted by the facts it is being trained on.
2025-06-16T00:46:01
https://www.reddit.com/r/LocalLLaMA/comments/1lcfpfn/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfpfn
false
null
t3_1lcfpfn
/r/LocalLLaMA/comments/1lcfpfn/test_post/
false
false
self
1
null
Test post
1
**Over the past year and a half** I've been working on the problem of **factual finetuning** \-- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use dataset generation and model training tool. Add documents, click a button, and Augmentoolkit will do everything for you: it'll generate a domain-specific dataset, combine it with a balanced amount of generic data, automatically train a model on it, download it, quantize it, and run it for inference (accessible with a built-in chat interface). The project (and its demo models) are fully open-source. I even trained a model to run inside Augmentoolkit itself, allowing for faster **local dataset generation**. This update took more than six months and thousands of dollars to put together, and represents **a complete rewrite and overhaul of the original project.** It includes 16 prebuilt dataset generation pipelines and the extensively-documented code and conventions to build more. Beyond just factual finetuning, it even **includes an experimental** [**GRPO pipeline**](https://github.com/e-p-armstrong/augmentoolkit/blob/master/docs/grpo.md) that lets you **train a model to do any conceivable task** by just **writing a prompt to grade that task.** # The Links * [Project](https://github.com/e-p-armstrong/augmentoolkit) * [Train a model in 13 minutes quickstart tutorial video](https://www.youtube.com/watch?v=E9TyyZzIMyY&ab_channel=Augmentoolkit) * Demo model (what the quickstart produces) * [Link](https://huggingface.co/Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example) * Dataset and training configs are fully open source. 
The config is literally the quickstart config; the dataset is * The demo model is an LLM trained on a subset of the US Army Field Manuals -- the best free and open modern source of comprehensive documentation on a well-known field that I have found. This is also because I trained a model on these in the past, and so training on them now serves as a good comparison between the power of the current tool and that of its previous version. * Experimental GRPO models * Now that Augmentoolkit includes the ability to grade models for their performance on a task, I naturally wanted to try this out, and on a task that people are familiar with. * I produced two RP models (base: Mistral 7b v0.2) with the intent of maximizing writing style quality and emotion, while minimizing GPT-isms. * One model has thought processes, the other does not. The non-thought-process model came out better for reasons described in the model card. * Non-reasoner [https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts) * Reasoner [https://huggingface.co/Heralax/llama-gRPo-thoughtprocess](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess) # The Process to Reproduce * Clone * `git clone` [`https://github.com/e-p-armstrong/augmentoolkit.git`](https://github.com/e-p-armstrong/augmentoolkit.git) * Run Start Script * Local or Online * Mac * `bash` [`macos.sh`](http://macos.sh/) * `bash local_macos.sh` * Linux * `bash` [`linux.sh`](http://linux.sh/) * `bash local_linux.sh` * Windows + warning * `./start_windows.bat` * Windows interface compatibility is uncertain. It's probably more reliable to use the CLI instead. Instructions are here * Add API keys or use the local model * I trained a 7b model that is purpose-built to run Augmentoolkit pipelines (Apache license). This means that you can probably generate data at a decent speed on your own computer. 
It will definitely be slower than with an API, but it will be *much* better than trying to generate tens of millions of tokens with a local 70b. * There are separate start scripts for local datagen. * You'll probably only be able to get good dataset generation speed on a Linux machine even though it does technically run on Mac, since Llama.cpp is MUCH slower than vLLM (which is Linux-only). * Click the "run" Button * Get Your Model * The integrated chat interface will automatically let you chat with it when the training and quanting is finished * The model will also automatically be pushed to Hugging Face (make sure you have enough space!) # Uses Besides faster generation times and lower costs, an expert AI that is trained on a domain gains a "big-picture" understanding of the subject that a generalist just won't have. It's the difference between giving a new student a class's full textbook and asking them to write an exam, versus asking a graduate student in that subject to write the exam. The new student probably won't even know where in that book they should look for the information they need, and even if they see the correct context, there's no guarantee that they understand what it means or how it fits into the bigger picture. Also, trying to build AI apps based on closed-source LLMs released by big labs sucks: * The lack of stable checkpoints under the control of the person running the model makes the tech unstable and unpredictable to build on. * Capabilities change without warning and models are frequently made worse. * People building with AI have to work around the LLMs they are using (a moving target), rather than make the LLMs they are using fit into their system. * Closed-source labs charge obscene prices, doing monopolistic rent collecting and impacting the margins of their customers. * Using closed-source labs is a privacy nightmare, especially now that API providers may be required by law to save and log formerly-private API requests. 
* Different companies have to all work with the same set of models, which have the same knowledge, the same capabilities, the same opinions, and they all sound more or less the same. But current open-source models often either suffer from a severe lack of capability, or are massive enough that they might as well be closed-source for most of the people trying to run them. The proposed solution? Small, efficient, powerful models that achieve superior performance on the things they are being used for (and sacrifice performance in the areas they *aren't* being used for) which are trained for their task and are controlled by the companies that use them. With Augmentoolkit: * You train your models, decide when those models update, and have full transparency over what went into them. * Capabilities change only when the company wants, and no one is forcing them to make their models worse. * People working with AI can customize the model they are using to function as part of the system they are designing, rather than having to twist their system to match a model. * 7 billion parameter models (the standard size Augmentoolkit trains) are so cheap to run it is absurd. They can run on a laptop, even. * Because you control your model, you control your inference, and you control your customers' data. * With your model's capabilities being fully customizable, your AI sounds like *your* AI, and has the opinions and capabilities that you want it to have. Furthermore, the open-source indie finetuning scene has been on life support, largely due to a lack of ability to make data, and the difficulty of getting started with (and getting results with) training, compared to methods like merging. Now that data is far easier to make, and training for specific objectives is much easier to do, and there is a good baseline with training wheels included that makes getting started easy, the hope is that people can iterate on finetunes and the scene can have new life. 
Augmentoolkit is taking a bet on an open-source future powered by small, efficient, Specialist Language Models. # Cool things of note * Factually-finetuned models can actually cite what files they are remembering information from, and with a good degree of accuracy at that. This is not exclusive to the domain of RAG anymore. * Augmentoolkit models by default use a custom prompt template because it turns out that making SFT data look more like pretraining data in its structure helps models use their pretraining skills during chat settings. This includes factual recall. * Augmentoolkit was used to create the dataset generation model that runs Augmentoolkit's pipelines. You can find the config used to make the dataset (2.5 gigabytes) in the `generation/core_composition/meta_datagen` folder. * There's a pipeline for turning normal SFT data into reasoning SFT data that can give a good cold start to models that you want to give thought processes to. A number of datasets converted using this pipeline [are available on Hugging Face](https://huggingface.co/Augmentoolkit), fully open-source. * Augmentoolkit does not just automatically train models on the domain-specific data you generate: to ensure that there is enough data made for the model to 1) generalize and 2) learn the actual capability of conversation, Augmentoolkit will balance your domain-specific data with generic conversational data, ensuring that the LLM becomes smarter while retaining all of the question-answering capabilities imparted by the facts it is being trained on. * If you want to share the models you make with other people, Augmentoolkit has an easy way to make your custom LLM into a 
2025-06-16T00:46:42
https://www.reddit.com/r/LocalLLaMA/comments/1lcfpvn/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfpvn
false
null
t3_1lcfpvn
/r/LocalLLaMA/comments/1lcfpvn/test_post/
false
false
self
1
null
Test post
1
[removed]
2025-06-16T00:47:14
https://www.reddit.com/r/LocalLLaMA/comments/1lcfq8y/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfq8y
false
null
t3_1lcfq8y
/r/LocalLLaMA/comments/1lcfq8y/test_post/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=216&crop=smart&auto=webp&s=c32e8be1177863876c832c62bb5b79fcac234a15', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=320&crop=smart&auto=webp&s=d72fb1418e7e1cdfa09362f2ef6a874eb35901bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=640&crop=smart&auto=webp&s=8627ee3850f012efee5d007f41e6d3290cb45ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=960&crop=smart&auto=webp&s=c6d745392916018f628864f66611d8501ef32d0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=1080&crop=smart&auto=webp&s=6314f151ce285c7f790dc8bfa6fc905280097adf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?auto=webp&s=c7b35a50a46805d4e7baabdca5d460eff59b5287', 'width': 1200}, 'variants': {}}]}
Make your local LLM suffer
1
[removed]
2025-06-16T00:47:34
https://www.reddit.com/r/LocalLLaMA/comments/1lcfqih/make_your_local_llm_suffer/
Ok_Ninja7526
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfqih
false
null
t3_1lcfqih
/r/LocalLLaMA/comments/1lcfqih/make_your_local_llm_suffer/
false
false
self
1
null
Test post
1
[removed]
2025-06-16T00:47:43
https://www.reddit.com/r/LocalLLaMA/comments/1lcfqm2/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfqm2
false
null
t3_1lcfqm2
/r/LocalLLaMA/comments/1lcfqm2/test_post/
false
false
self
1
null
Test post
1
[deleted]
2025-06-16T00:50:31
[deleted]
1970-01-01T00:00:00
0
{}
1lcfsj8
false
null
t3_1lcfsj8
/r/LocalLLaMA/comments/1lcfsj8/test_post/
false
false
default
1
null
Test post, finally figured out what gets these autodeleted
1
[removed]
2025-06-16T00:51:38
https://www.reddit.com/r/LocalLLaMA/comments/1lcftal/test_post_finally_figured_out_what_gets_these/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcftal
false
null
t3_1lcftal
/r/LocalLLaMA/comments/1lcftal/test_post_finally_figured_out_what_gets_these/
false
false
self
1
null
Test post
1
**Over the past year and a half** I've been working on the problem of **factual finetuning** \-- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use dataset generation and model training tool. Add documents, click a button, and Augmentoolkit will do everything for you: it'll generate a domain-specific dataset, combine it with a balanced amount of generic data, automatically train a model on it, download it, quantize it, and run it for inference (accessible with a built-in chat interface). The project (and its demo models) are fully open-source. I even trained a model to run inside Augmentoolkit itself, allowing for faster **local dataset generation**. This update took more than six months and thousands of dollars to put together, and represents **a complete rewrite and overhaul of the original project.** It includes 16 prebuilt dataset generation pipelines and the extensively-documented code and conventions to build more. Beyond just factual finetuning, it even **includes an experimental** [**GRPO pipeline**](https://github.com/e-p-armstrong/augmentoolkit/blob/master/docs/grpo.md) that lets you **train a model to do any conceivable task** by just **writing a prompt to grade that task.** # The Links * [Project](https://github.com/e-p-armstrong/augmentoolkit) * [Train a model in 13 minutes quickstart tutorial video](https://www.youtube.com/watch?v=E9TyyZzIMyY&ab_channel=Augmentoolkit) * Demo model (what the quickstart produces) * [Link](https://huggingface.co/Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example) * Dataset and training configs are fully open source. 
The config is literally the quickstart config; the dataset is * The demo model is an LLM trained on a subset of the US Army Field Manuals -- the best free and open modern source of comprehensive documentation on a well-known field that I have found. This is also because I trained a model on these in the past and so training on them now serves as a good comparison between the power of the current tool compared to its previous version. * Experimental GRPO models * Now that Augmentoolkit includes the ability to grade models for their performance on a task, I naturally wanted to try this out, and on a task that people are familiar with. * I produced two RP models (base: Mistral 7b v0.2) with the intent of maximizing writing style quality and emotion, while minimizing GPT-isms. * One model has thought processes, the other does not. The non-thought-process model came out better for reasons described in the model card. * Non-reasoner [https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts) * Reasoner [https://huggingface.co/Heralax/llama-gRPo-thoughtprocess](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess) # The Process to Reproduce * Clone * `git clone` [`https://github.com/e-p-armstrong/augmentoolkit.git`](https://github.com/e-p-armstrong/augmentoolkit.git) * Run Start Script * Local or Online * Mac * `bash` [`macos.sh`](http://macos.sh/) * `bash local_macos.sh` * Linux * `bash` [`linux.sh`](http://linux.sh/) * `bash local_linux.sh` * Windows + warning * `./start_windows.bat` * Windows interface compatibility is uncertain. It's probably more reliable to use the CLI instead. Instructions are here * Add API keys or use the local model * I trained a 7b model that is purpose-built to run Augmentoolkit pipelines (Apache license). This means that you can probably generate data at a decent speed on your own computer. 
It will definitely be slower than with an API, but it will be *much* better than trying to generate tens of millions of tokens with a local 70b. * There are separate start scripts for local datagen. * You'll probably only be able to get good dataset generation speed on a Linux machine even though it does technically run on Mac, since Llama.cpp is MUCH slower than vLLM (which is Linux-only). * Click the "run" Button * Get Your Model * The integrated chat interface will automatically let you chat with it when the training and quanting is finished * The model will also automatically be pushed to Hugging Face (make sure you have enough space!) # Uses Besides faster generation times and lower costs, an expert AI that is trained on a domain gains a "big-picture" understanding of the subject that a generalist just won't have. It's the difference between giving a new student a class's full textbook and asking them to write an exam, versus asking a graduate student in that subject to write the exam. The new student probably won't even know where in that book they should look for the information they need, and even if they see the correct context, there's no guarantee that they understand what it means or how it fits into the bigger picture. Also, trying to build AI apps based on closed-source LLMs released by big labs sucks: * The lack of stable checkpoints under the control of the person running the model makes the tech unstable and unpredictable to build on. * Capabilities change without warning and models are frequently made worse. * People building with AI have to work around the LLMs they are using (a moving target), rather than make the LLMs they are using fit into their system * Closed-source labs charge obscene prices, doing monopolistic rent collecting and impacting the margins of their customers. * Using closed-source labs is a privacy nightmare, especially now that API providers may be required by law to save and log formerly-private API requests. 
* Different companies have to all work with the same set of models, which have the same knowledge, the same capabilities, the same opinions, and they all sound more or less the same. But current open-source models often either suffer from a severe lack of capability, or are massive enough that they might as well be closed-source for most of the people trying to run them. The proposed solution? Small, efficient, powerful models that achieve superior performance on the things they are being used for (and sacrifice performance in the areas they *aren't* being used for) which are trained for their task and are controlled by the companies that use them. With Augmentoolkit: * You train your models, decide when those models update, and have full transparency over what went into them. * Capabilities change only when the company wants, and no one is forcing them to make their models worse. * People working with AI can customize the model they are using to function as part of the system they are designing, rather than having to twist their system to match a model. * 7 billion parameter models (the standard size Augmentoolkit trains) are so cheap to run it is absurd. They can run on a laptop, even. * Because you control your model, you control your inference, and you control your customers' data. * With your model's capabilities being fully customizable, your AI sounds like *your* AI, and has the opinions and capabilities that you want it to have. Furthermore, the open-source indie finetuning scene has been on life support, largely due to a lack of ability to make data, and the difficulty of getting started with (and getting results with) training, compared to methods like merging. Now that data is far easier to make, and training for specific objectives is much easier to do, and there is a good baseline with training wheels included that makes getting started easy, the hope is that people can iterate on finetunes and the scene can have new life. 
Augmentoolkit is taking a bet on an open-source future powered by small, efficient, Specialist Language Models. # Cool things of note * Factually-finetuned models can actually cite what files they are remembering information from, and with a good degree of accuracy at that. This is not exclusive to the domain of RAG anymore. * Augmentoolkit models by default use a custom prompt template because it turns out that making SFT data look more like pretraining data in its structure helps models use their pretraining skills during chat settings. This includes factual recall. * Augmentoolkit was used to create the dataset generation model that runs Augmentoolkit's pipelines. You can find the config used to make the dataset (2.5 gigabytes) in the `generation/core_composition/meta_datagen` folder. * There's a pipeline for turning normal SFT data into reasoning SFT data that can give a good cold start to models that you want to give thought processes to. A number of datasets converted using this pipeline [are available on Hugging Face](https://huggingface.co/Augmentoolkit), fully open-source. * Augmentoolkit does not just automatically train models on the domain-specific data you generate: to ensure that there is enough data made for the model to 1) generalize and 2) learn the actual capability of conversation, Augmentoolkit will balance your domain-specific data with generic conversational data, ensuring that the LLM becomes smarter while retaining all of the question-answering capabilities imparted by the facts it is being trained on. # Why do all this + Vision I believe AI alignment is solved when individuals and orgs can make their AI act as they want it to, rather than having to settle for a one-size-fits-all solution. The moment people can use AI specialized to their domains, is also the moment when AI stops being slightly wrong at everything, and starts being incredibly useful across different fields. 
Furthermore, we must do everything we can to avoid a specific type of AI-powered future: the AI-powered future where what AI believes and is capable of doing is entirely controlled by a select few. Open source has to survive and thrive for this technology to be used right. As many people as possible must be able to control AI. I want to stop a slop-pocalypse. I want to stop a future of extortionate rent-collecting by the established labs. I want open-source finetuning, even by individuals, to thrive. I want people to be able to be artists, with data their paintbrush and AI weights their canvas. Teaching models facts was the first step, and I believe this first step has now been taken. It was probably one of the hardest; best to get it out of the way sooner. After this, I'm going to do writing style, and I will also improve the [GRPO pipeline](https://github.com/e-p-armstrong/augmentoolkit/blob/master/docs/grpo.md), which allows for models to be trained to do *literally anything* better. I encourage you to fork the project so that you can make your own data, so that you can create your own pipelines, and so that you can keep the spirit of open-source finetuning and experimentation alive. I also encourage you to star the project, because I like it when "number go up". Huge thanks to Austin Cook and all of Alignment Lab AI for helping me with ideas and with getting this out there. Look out for some cool stuff from them soon, by the way :) [Happy hacking!](https://github.com/e-p-armstrong/augmentoolkit)
2025-06-16T00:52:30
https://www.reddit.com/r/LocalLLaMA/comments/1lcftun/test_post/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcftun
false
null
t3_1lcftun
/r/LocalLLaMA/comments/1lcftun/test_post/
false
false
self
1
null
Augmentoolkit just got a major update - huge advance for dataset generation and fine-tuning
39
Just wanted to share that Augmentoolkit got a significant update that's worth checking out if you're into fine-tuning or dataset generation. Augmentoolkit 3.0 is a major upgrade from the previous version. [https://github.com/e-p-armstrong/augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) For context - I've been using it to create QA datasets from historical texts, and Augmentoolkit filled a big void in my workflow. The previous version was more bare-bones but got the job done for cranking out datasets. This new version is highly polished with a much expanded set of capabilities that could bring fine-tuning to a wider group of people - it now supports going all the way from input data to working fine-tuned model in a single pipeline. What's new and improved in v3.0: - Production-ready pipeline that automatically generates training data and trains models for you - Comes with a custom fine-tuned model specifically built for generating high-quality QA datasets locally (LocalLLaMA, rejoice!) - Built-in no-code interface so you don't need to mess with command line stuff - Plus many other improvements under the hood If you're working on domain-specific fine-tuning or need to generate training data from longer documents, I recommend taking a look. The previous version of the tool has been solid for automating the tedious parts of dataset creation for me. Anyone else been using Augmentoolkit for their projects?
2025-06-16T00:55:29
https://www.reddit.com/r/LocalLLaMA/comments/1lcfvw8/augmentoolkit_just_got_a_major_update_huge/
mj3815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfvw8
false
null
t3_1lcfvw8
/r/LocalLLaMA/comments/1lcfvw8/augmentoolkit_just_got_a_major_update_huge/
false
false
self
39
{'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=216&crop=smart&auto=webp&s=c32e8be1177863876c832c62bb5b79fcac234a15', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=320&crop=smart&auto=webp&s=d72fb1418e7e1cdfa09362f2ef6a874eb35901bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=640&crop=smart&auto=webp&s=8627ee3850f012efee5d007f41e6d3290cb45ff7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=960&crop=smart&auto=webp&s=c6d745392916018f628864f66611d8501ef32d0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=1080&crop=smart&auto=webp&s=6314f151ce285c7f790dc8bfa6fc905280097adf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?auto=webp&s=c7b35a50a46805d4e7baabdca5d460eff59b5287', 'width': 1200}, 'variants': {}}]}
Is it possible to give Gemma 3 or any other model on-device screen awareness?
1
I got Gemma 3 working on my PC last night; it is very fun to have a local LLM. Now I am trying to find actual use cases that could benefit my workflow. Is it possible to give it on-screen awareness and allow the model to interact with programs on the PC?
2025-06-16T00:57:39
https://www.reddit.com/r/LocalLLaMA/comments/1lcfxg3/is_it_possible_to_give_gemma_3_or_any_other_model/
Lord_Greedyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lcfxg3
false
null
t3_1lcfxg3
/r/LocalLLaMA/comments/1lcfxg3/is_it_possible_to_give_gemma_3_or_any_other_model/
false
false
self
1
null