Dataset schema (column: dtype, observed range): title: string (1-300 chars); score: int64 (0-8.54k); selftext: string (0-40k chars); created: timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable); url: string (0-878 chars); author: string (3-20 chars); domain: string (0-82 chars); edited: timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18); gilded: int64 (0-2); gildings: string (7 distinct values); id: string (7 chars); locked: bool (2 classes); media: string (646-1.8k chars, nullable); name: string (10 chars); permalink: string (33-82 chars); spoiler: bool (2 classes); stickied: bool (2 classes); thumbnail: string (4-213 chars); ups: int64 (0-8.54k); preview: string (301-5.01k chars, nullable).

title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Create 2 and 3-bit GPTQ quantization for Qwen3-235B-A22B? | 5 | Hi! Has anyone here already done this kind of quantization and could share it? Or could you share a quantization approach I can use later with vLLM?
I plan to use it with 112GB total VRAM. | 2025-06-08T09:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l67vkt/create_2_and_3bit_gptq_quantization_for/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l67vkt | false | null | t3_1l67vkt | /r/LocalLLaMA/comments/1l67vkt/create_2_and_3bit_gptq_quantization_for/ | false | false | self | 5 | null |
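For what it's worth, a minimal sketch of one possible route (an assumption on my part, not something confirmed in this thread): an AutoGPTQ-style flow accepts 2/3/4/8-bit configs, but whether it copes with this MoE architecture at 2-3 bits, how much memory the quantization pass of a 235B model needs, and whether vLLM's GPTQ kernels will load sub-4-bit weights all need checking first. The model ID, calibration text, and output path below are placeholders.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "Qwen/Qwen3-235B-A22B"  # quantizing this needs far more memory than 112 GB of VRAM
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A real run wants a few hundred representative calibration samples, not one.
examples = [tokenizer("Example calibration text covering code, math and chat.", return_tensors="pt")]

config = BaseQuantizeConfig(bits=3, group_size=128, desc_act=False)  # bits=2 is also accepted
model = AutoGPTQForCausalLM.from_pretrained(model_id, config)
model.quantize(examples)                     # runs the GPTQ calibration pass
model.save_quantized("Qwen3-235B-A22B-3bit-gptq", use_safetensors=True)
```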
Kokoro.js for German? | 1 | [removed] | 2025-06-08T09:36:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l688rt/kokorojs_for_german/ | nic_key | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l688rt | false | null | t3_1l688rt | /r/LocalLLaMA/comments/1l688rt/kokorojs_for_german/ | false | false | self | 1 | null |
What is your sampler order (not sampler settings) for llama.cpp? | 23 | My current sampler order is `--samplers "dry;top_k;top_p;min_p;temperature"`. I've used it for a while, it seems to work well. I've found most of the inspiration in [this post](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/). However, additional samplers have appeared in llama.cpp since, maybe the "best" order for most cases is now different. If you don't specify the `--samplers` parameter, nowadays the default is `penalties;dry;top_n_sigma;top_k;typ_p;top_p;min_p;xtc;temperature`.
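For reference, a minimal sketch of exercising a custom order per request against a running llama-server (field names follow the server's `/completion` API as I understand it; verify the `samplers` field against your build's README, since the sampler chain has changed across versions):

```python
import requests

payload = {
    "prompt": "Write a haiku about sampler order.",
    "n_predict": 64,
    # Applied left to right; samplers left out of the list are skipped.
    "samplers": ["dry", "top_k", "top_p", "min_p", "temperature"],
    "top_k": 40,
    "top_p": 0.95,
    "min_p": 0.05,
    "temperature": 0.8,
}
resp = requests.post("http://localhost:8080/completion", json=payload, timeout=120)
print(resp.json()["content"])
```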
What's your sampler order? Do you enable/disable any of them differently? Why? | 2025-06-08T09:53:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l68hjc/what_is_your_sampler_order_not_sampler_settings/ | Nindaleth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l68hjc | false | null | t3_1l68hjc | /r/LocalLLaMA/comments/1l68hjc/what_is_your_sampler_order_not_sampler_settings/ | false | false | self | 23 | null |
Help with AI model recommendation | 1 | [removed] | 2025-06-08T09:59:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l68ka5/help_with_ai_model_recommendation/ | Grouchy-Staff-8361 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l68ka5 | false | null | t3_1l68ka5 | /r/LocalLLaMA/comments/1l68ka5/help_with_ai_model_recommendation/ | false | false | self | 1 | null |
Confirmation that Qwen3-coder is in works | 315 | Junyang Lin from Qwen team [mentioned this here](https://youtu.be/b0xlsQ_6wUQ?t=985). | 2025-06-08T10:02:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l68m1m/confirmation_that_qwen3coder_is_in_works/ | nullmove | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l68m1m | false | null | t3_1l68m1m | /r/LocalLLaMA/comments/1l68m1m/confirmation_that_qwen3coder_is_in_works/ | false | false | self | 315 | {'enabled': False, 'images': [{'id': 'k0BFpsKvlGEppc_1fbUMbnxP5ghsEQejN-ic5SxiECM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wKLnxrouIC2D6bNcU8tzgzKecPM0BvGtfujsLPcUGjY.jpg?width=108&crop=smart&auto=webp&s=59f7f1451bb6a872f353a1141e94d6778a782cdd', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/wKLnxrouIC2D6bNcU8tzgzKecPM0BvGtfujsLPcUGjY.jpg?width=216&crop=smart&auto=webp&s=4f783deed81cfc82609510a7daea57a9a64de2bb', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/wKLnxrouIC2D6bNcU8tzgzKecPM0BvGtfujsLPcUGjY.jpg?width=320&crop=smart&auto=webp&s=4d0cc4a502ff224d2966ad68d90296795c3ad875', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/wKLnxrouIC2D6bNcU8tzgzKecPM0BvGtfujsLPcUGjY.jpg?auto=webp&s=145ba25291a9227d7f65fbc09fd15662e899f087', 'width': 480}, 'variants': {}}]} |
Tech Stack for Minion Voice.. | 5 | I am trying to clone a minion voice and enable my kids to speak to a minion. I just do not know how to clone a voice. I have 1 hour of Minions speaking Minionese and can break it into smaller segments.
I have:
* MacBook
* Ollama
* Python3
Any suggestions on what I should do to enable the minion voice offline? | 2025-06-08T10:15:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l68tgx/tech_stack_for_minion_voice/ | chiknugcontinuum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l68tgx | false | null | t3_1l68tgx | /r/LocalLLaMA/comments/1l68tgx/tech_stack_for_minion_voice/ | false | false | self | 5 | null |
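One possible offline route (an assumption, not something settled in the post): a zero-shot voice-cloning TTS such as coqui's XTTS-v2 can clone from a short, clean reference clip cut out of that hour of audio, and Ollama is only needed for the text side. A minimal sketch:

```python
# pip install TTS   (coqui-tts; runs locally on CPU or Apple Silicon once the model is downloaded)
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Bello! Banana? Poopaye!",
    speaker_wav="minion_reference_10s.wav",  # a clean 10-30 second clip from your hour of audio
    language="en",
    file_path="minion_reply.wav",
)
```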
Locally ran coding assistant on Apple M2? | 4 | I'd like a Github Copilot style coding assistant (preferably for VSCode, but that's not really important) that I could run locally on my 2022 Macbook Air (M2, 16 GB RAM, 10 core GPU).
I have a few questions:
1. Is it feasible with this hardware? Deepseek R1 8B on Ollama in the chat mode kinda works okay but a bit too slow for a coding assistant.
2. Which model should I pick?
3. How do I integrate it with the code editor? | 2025-06-08T11:24:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l69vze/locally_ran_coding_assistant_on_apple_m2/ | Defiant-Snow8782 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l69vze | false | null | t3_1l69vze | /r/LocalLLaMA/comments/1l69vze/locally_ran_coding_assistant_on_apple_m2/ | false | false | self | 4 | null |
I Built 50 AI Personalities - Here's What Actually Made Them Feel Human | 658 | Over the past 6 months, I've been obsessing over what makes AI personalities feel authentic vs robotic. After creating and testing 50 different personas for an AI audio platform I'm developing, here's what actually works.
**The Setup:** Each persona had unique voice, background, personality traits, and response patterns. Users could interrupt and chat with them during content delivery. Think podcast host that actually responds when you yell at them.
**What Failed Spectacularly:**
❌ **Over-engineered backstories** I wrote a 2,347-word biography for "Professor Williams" including his childhood dog's name, his favorite coffee shop in grad school, and his mother's maiden name. Users found him insufferable. Turns out, knowing too much makes characters feel scripted, not authentic.
❌ **Perfect consistency** "Sarah the Life Coach" never forgot a detail, never contradicted herself, always remembered exactly what she said 3 conversations ago. Users said she felt like a "customer service bot with a name." Humans aren't databases.
❌ **Extreme personalities** "MAXIMUM DEREK" was always at 11/10 energy. "Nihilist Nancy" was perpetually depressed. Both had engagement drop to zero after about 8 minutes. One-note personalities are exhausting.
**The Magic Formula That Emerged:**
**1. The 3-Layer Personality Stack**
Take "Marcus the Midnight Philosopher":
* **Core trait (40%)**: Analytical thinker
* **Modifier (35%)**: Expresses through food metaphors (former chef)
* **Quirk (25%)**: Randomly quotes 90s R&B lyrics mid-explanation
This formula created depth without overwhelming complexity. Users remembered Marcus as "the chef guy who explains philosophy" not "the guy with 47 personality traits."
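A minimal sketch of turning that stack into a system prompt (the weights and Marcus's traits come from the formula above; the wrapper wording is my own placeholder):

```python
def build_persona_prompt(core: str, modifier: str, quirk: str) -> str:
    """Compose a 3-layer persona: core trait ~40%, modifier ~35%, quirk ~25%."""
    return (
        "You are a conversational audio host.\n"
        f"Core trait (dominant, about 40% of your behaviour): {core}.\n"
        f"Modifier (about 35%): {modifier}.\n"
        f"Quirk (about 25%, use sparingly): {quirk}.\n"
        "Stay imperfect: occasionally hesitate, self-correct, or admit you might be wrong."
    )

marcus = build_persona_prompt(
    core="analytical thinker who breaks ideas down step by step",
    modifier="expresses ideas through food metaphors (former chef)",
    quirk="randomly quotes 90s R&B lyrics mid-explanation",
)
print(marcus)
```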
**2. Imperfection Patterns**
The most "human" moment came when a history professor persona said: "The treaty was signed in... oh god, I always mix this up... 1918? No wait, 1919. Definitely 1919. I think."
That single moment of uncertainty got more positive feedback than any perfectly delivered lecture.
Other imperfections that worked:
* "Where was I going with this? Oh right..."
* "That's a terrible analogy, let me try again"
* "I might be wrong about this, but..."
**3. The Context Sweet Spot**
Here's the exact formula that worked:
**Background (300-500 words):**
* 2 formative experiences: One positive ("won a science fair"), one challenging ("struggled with public speaking")
* Current passion: Something specific ("collects vintage synthesizers" not "likes music")
* 1 vulnerability: Related to their expertise ("still gets nervous explaining quantum physics despite PhD")
Example that worked: "Dr. Chen grew up in Seattle, where rainy days in her mother's bookshop sparked her love for sci-fi. Failed her first physics exam at MIT, almost quit, but her professor said 'failure is just data.' Now explains astrophysics through Star Wars references. Still can't parallel park despite understanding orbital mechanics."
**Why This Matters:** Users referenced these background details 73% of the time when asking follow-up questions. It gave them hooks for connection. "Wait, you can't parallel park either?"
The magic isn't in making perfect AI personalities. It's in making imperfect ones that feel genuinely flawed in specific, relatable ways.
Anyone else experimenting with AI personality design? What's your approach to the authenticity problem? | 2025-06-08T11:25:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l69w7i/i_built_50_ai_personalities_heres_what_actually/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l69w7i | false | null | t3_1l69w7i | /r/LocalLLaMA/comments/1l69w7i/i_built_50_ai_personalities_heres_what_actually/ | false | false | self | 658 | null |
Which local model do you think best reasons on topics that are not related to STEM? And for Spanish speakers, what is the best model in that language? | 1 | [removed] | 2025-06-08T11:56:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l6aexv/which_local_model_do_you_think_best_reasons_on/ | Roubbes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6aexv | false | null | t3_1l6aexv | /r/LocalLLaMA/comments/1l6aexv/which_local_model_do_you_think_best_reasons_on/ | false | false | self | 1 | null |
Gigabyte AI-TOP-500-TRX50 | 28 | Does this setup make any sense?
A lot of RAM (768GB DDR5 - Threadripper PRO 7965WX platform), but only one RTX 5090 (32GB VRAM).
Sounds for me strange to call this an AI platform. I would expect at least one RTX Pro 6000 with 96GB VRAM. | 2025-06-08T12:24:30 | https://www.gigabyte.com/us/Gaming-PC/AI-TOP-500-TRX50 | Blizado | gigabyte.com | 1970-01-01T00:00:00 | 0 | {} | 1l6awvn | false | null | t3_1l6awvn | /r/LocalLLaMA/comments/1l6awvn/gigabyte_aitop500trx50/ | false | false | default | 28 | {'enabled': False, 'images': [{'id': 'Wm4QI12rr0yyFV7m7egJZLOjR87QJO_z7Qq_v28fdjI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?width=108&crop=smart&auto=webp&s=533c25e95708ef5df356c007bb3146e9f3ad0bcf', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?width=216&crop=smart&auto=webp&s=aa419480e7b6e3bc3b61d375ac77021e278a4892', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?width=320&crop=smart&auto=webp&s=422c609dc9fcd5c5c0dccdaf4850e775ac92f815', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?width=640&crop=smart&auto=webp&s=4eaa02f69a6831f48e3c2f26c8669819456346cf', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?width=960&crop=smart&auto=webp&s=7594133ae4a6466018c68f63af1ab2ff78adbe22', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?width=1080&crop=smart&auto=webp&s=c2eb70fb59978ccea66b2e4ac3c2797a6c1e0fec', 'width': 1080}], 'source': {'height': 2000, 'url': 'https://external-preview.redd.it/zGJOW1jt3CCqwFh7xP6bSoUFxg9bFJkmHI9ZJ2bsrds.jpg?auto=webp&s=de2bd98cd1981d1f35016829ebef12a4ef76bed0', 'width': 2000}, 'variants': {}}]} |
Weird interaction between agents xD | 1 | [removed] | 2025-06-08T12:31:45 | https://www.reddit.com/gallery/1l6b1ke | mdhv11 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l6b1ke | false | null | t3_1l6b1ke | /r/LocalLLaMA/comments/1l6b1ke/weird_interaction_between_agents_xd/ | false | false | 1 | null |
|
Is the "I Can Run the 670B Deepseek R1 Locally" the new "Can it Run Crysis" Meme? | 1 | [removed] | 2025-06-08T12:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l6biut/is_the_i_can_run_the_670b_deepseek_r1_locally_the/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6biut | false | null | t3_1l6biut | /r/LocalLLaMA/comments/1l6biut/is_the_i_can_run_the_670b_deepseek_r1_locally_the/ | false | false | self | 1 | null |
How do I finetune Devstral with vision support? | 0 | Hey, so I'm kinda new in the local llm world, but I managed to get my llama-server up and running locally on Windows with this hf repo: [https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF](https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF)
I also managed to finetune an unsloth version of Devstral ( [https://huggingface.co/unsloth/Devstral-Small-2505-unsloth-bnb-4bit](https://huggingface.co/unsloth/Devstral-Small-2505-unsloth-bnb-4bit) ) with my own data, quantized it to q4\_k\_m and I've managed to get that running chat-style in cmd, but I get strange results when I try to run a llama-server with that model (text responses are just gibberish text unrelated to the question).
I think the reason is that I don't have an "mmproj" file, and I'm somehow lacking vision support from Mistral Small.
Is there any docs or can someone explain where I should start to finetune devstral with vision support to I can get my own finetuned version of the ngxson repo up and running on my llama-server? | 2025-06-08T13:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l6bn1t/how_do_i_finetune_devstral_with_vision_support/ | svnflow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6bn1t | false | null | t3_1l6bn1t | /r/LocalLLaMA/comments/1l6bn1t/how_do_i_finetune_devstral_with_vision_support/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'XgEx3bmmG6fsK7CQz6kN8wOwIFrwcdRrTW9Huw1I_SI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?width=108&crop=smart&auto=webp&s=617e32788586c095b536af93fa0eea66a7434c8e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?width=216&crop=smart&auto=webp&s=eabc758572d3952f2b7c5f5f55518f929916e1f9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?width=320&crop=smart&auto=webp&s=92cbafe1880e40972e4d1e03d605bec5c6f991f7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?width=640&crop=smart&auto=webp&s=4e688456d906fe7cb0485f9501420c7fd99e97de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?width=960&crop=smart&auto=webp&s=7169c8f20cd8c1cd2f30a6e92568680e9600a4a1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?width=1080&crop=smart&auto=webp&s=fab930ed118470f61a432dcbeae5f849f1fcafb2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WhpQblmsU_MjFdgJ4OfmB9tYyTztW0va_laiAePMCLo.jpg?auto=webp&s=cae005818d17fc845a7f90c812ad60e699e2482a', 'width': 1200}, 'variants': {}}]} |
AI Studio ‘App’ on iOS | 0 | 2025-06-08T13:35:50 | https://www.icloud.com/shortcuts/9cd63478017648cba611378ba372b19d | Accomplished_Mode170 | icloud.com | 1970-01-01T00:00:00 | 0 | {} | 1l6cbbr | false | null | t3_1l6cbbr | /r/LocalLLaMA/comments/1l6cbbr/ai_studio_app_on_ios/ | false | false | default | 0 | null |
|
My AI Coding Assistant Insisted I Need RAG for My Chatbot - But I Really Don't? | 0 | Been pulling my hair out for weeks because of conflicting advice, hoping someone can explain what I'm missing.
**The Situation:** Building a chatbot for an AI podcast platform I'm developing. Need it to remember user preferences, past conversations, and about 50k words of creator-defined personality/background info.
**What Happened:** Every time I asked ChatGPT for architecture advice, it insisted on:
* Implementing RAG with vector databases
* Chunking all my content into 512-token pieces
* Building complex retrieval pipelines
* "You can't just dump everything in context, it's too expensive"
Spent 3 weeks building this whole system. Embeddings, similarity search, the works.
**Then I Tried Something Different:** Started questioning whether all this complexity was necessary. Decided to test loading everything directly into context with newer models.
I'm using Gemini 2.5 Flash with its 1 million token context window, but other flagship models from various providers also handle hundreds of thousands of tokens pretty well now.
Deleted all my RAG code. Put everything (a 10-50k token context) directly in the system prompt. Works PERFECTLY. Actually works better because there are no retrieval errors.
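A minimal sketch of the "just put it in context" setup (model name, file names, and endpoint are placeholders; any OpenAI-compatible long-context endpoint works the same way):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # or a hosted endpoint

persona = open("creator_persona.txt").read()   # ~50k words of personality/background
memory = open("user_memory.txt").read()        # preferences plus summaries of past conversations

resp = client.chat.completions.create(
    model="long-context-model",                # anything with a 100k+ token window
    messages=[
        {"role": "system", "content": f"{persona}\n\n## What you know about this listener\n{memory}"},
        {"role": "user", "content": "Pick up where we left off yesterday."},
    ],
)
print(resp.choices[0].message.content)
```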
**My Theory:** ChatGPT seems stuck in 2022-2023 when:
* Context windows were 4-8k tokens
* Tokens cost 10x more
* You HAD to be clever about context management
But now? My entire chatbot's "memory" fits in a single prompt with room to spare.
**The Questions:**
1. Am I missing something huge about why RAG would still be necessary?
2. Is this only true for chatbots, or are other use cases different? | 2025-06-08T13:47:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l6cjti/my_ai_coding_assistant_insisted_i_need_rag_for_my/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6cjti | false | null | t3_1l6cjti | /r/LocalLLaMA/comments/1l6cjti/my_ai_coding_assistant_insisted_i_need_rag_for_my/ | false | false | self | 0 | null |
Local LLM on Android: Qwen3 Support, Thinking Mode, and Faster Qwen2.5 Inference (APK Inside) | 1 | [removed] | 2025-06-08T14:05:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l6cy9c/local_llm_on_android_qwen3_support_thinking_mode/ | 100daggers_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6cy9c | false | null | t3_1l6cy9c | /r/LocalLLaMA/comments/1l6cy9c/local_llm_on_android_qwen3_support_thinking_mode/ | false | false | self | 1 | null |
Local LLM on Android Qwen3 + Thinking Mode | 1 | [removed] | 2025-06-08T14:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l6d5c5/local_llm_on_android_qwen3_thinking_mode/ | 100daggers_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6d5c5 | false | null | t3_1l6d5c5 | /r/LocalLLaMA/comments/1l6d5c5/local_llm_on_android_qwen3_thinking_mode/ | false | false | self | 1 | null |
Run LLM locally on Android | 1 | [removed] | 2025-06-08T14:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l6d7ne/run_llm_locally_on_android/ | 100daggers_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6d7ne | false | null | t3_1l6d7ne | /r/LocalLLaMA/comments/1l6d7ne/run_llm_locally_on_android/ | false | false | self | 1 | null |
Gemma 27B for creative Joycean writing, looking for other suggestions | 1 | [removed] | 2025-06-08T14:19:36 | https://www.reddit.com/gallery/1l6d9w4 | SkyFeistyLlama8 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l6d9w4 | false | null | t3_1l6d9w4 | /r/LocalLLaMA/comments/1l6d9w4/gemma_27b_for_creative_joycean_writing_looking/ | false | false | 1 | null |
|
Croco.cpp and NXS_llama.cpp, forks of KoboldCpp and Llama.cpp. | 1 | [removed] | 2025-06-08T14:28:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l6dh5y/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/ | Nexesenex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6dh5y | false | null | t3_1l6dh5y | /r/LocalLLaMA/comments/1l6dh5y/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/ | false | false | self | 1 | null |
Croco.cpp and NXS_llama.cpp, forks of KoboldCpp and Llama.cpp | 1 | [removed] | 2025-06-08T14:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l6dj58/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/ | Nexesenex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6dj58 | false | null | t3_1l6dj58 | /r/LocalLLaMA/comments/1l6dj58/crococpp_and_nxs_llamacpp_forks_of_koboldcpp_and/ | false | false | self | 1 | null |
Can we all admit that getting into local AI requires an unimaginable amount of knowledge in 2025? | 0 | I'm not saying that it's right or wrong, just that it requires knowing a lot to crack into it. I'm also not saying that I have a solution to this problem.
We see so many posts daily asking which models they should use, what software and such. And those questions lead to... so many more questions that there is no way we don't end up scaring off people before they start.
As an example, mentally work through the answer to this basic question: "How do I set up an LLM to do a D&D RP?"
The above is a F\*CKING nightmare of a question, but it's so common and requires so much unpacking of information. Let me rattle some off... Hardware, context length, LLM alignment and ability to respond negatively to bad decisions, quant size, server software, front-end options.
It's not that you need to drink from the firehose to start; you have to have drunk the entire fire hydrant before even really starting. | 2025-06-08T15:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l6f4ei/can_we_all_admit_that_getting_into_local_ai/ | valdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6f4ei | false | null | t3_1l6f4ei | /r/LocalLLaMA/comments/1l6f4ei/can_we_all_admit_that_getting_into_local_ai/ | false | false | self | 0 | null |
LLM performance in 5070 12 gb vs 5060 Ti 16 gb | 1 | [removed] | 2025-06-08T16:04:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l6fppa/llm_performance_in_5070_12_gb_vs_5060_ti_16_gb/ | graphicscardcustomer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6fppa | false | null | t3_1l6fppa | /r/LocalLLaMA/comments/1l6fppa/llm_performance_in_5070_12_gb_vs_5060_ti_16_gb/ | false | false | self | 1 | null |
Is VRAM really king? 5070 12gb seems to beat the 5060Ti 16gb in LLMs | 1 | [removed] | 2025-06-08T16:20:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l6g31f/is_vram_really_king_5070_12gb_seems_to_beat_the/ | gildedtoiletseat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6g31f | false | null | t3_1l6g31f | /r/LocalLLaMA/comments/1l6g31f/is_vram_really_king_5070_12gb_seems_to_beat_the/ | false | false | self | 1 | null |
Is VRAM really king? 5070 12gb seems to beat the 5060Ti 16gb in LLMs | 1 | [removed] | 2025-06-08T16:21:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l6g40r/is_vram_really_king_5070_12gb_seems_to_beat_the/ | gildedtoiletseat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6g40r | false | null | t3_1l6g40r | /r/LocalLLaMA/comments/1l6g40r/is_vram_really_king_5070_12gb_seems_to_beat_the/ | false | false | self | 1 | null |
The AI Dopamine Overload: Confessions of an AI-Addicted Developer | 1 | [removed] | 2025-06-08T16:22:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l6g54w/the_ai_dopamine_overload_confessions_of_an/ | Soft_Ad1142 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6g54w | false | null | t3_1l6g54w | /r/LocalLLaMA/comments/1l6g54w/the_ai_dopamine_overload_confessions_of_an/ | false | false | self | 1 | null |
Ruminate: From All-or-Nothing to Just-Right Reasoning in LLMs | 67 | # Ruminate: Taking Control of AI Reasoning Speed
**TL;DR**: I ran 7,150 prompts through Qwen3-4B-AWQ to try to solve the "fast but wrong vs slow but unpredictable" problem with reasoning AI models and got fascinating results. Built a staged reasoning proxy that lets you dial in exactly the speed-accuracy tradeoff you need.
# The Problem
Reasoning models like Qwen3 have a brutal tradeoff: turn reasoning off and get 27% accuracy (but fast), or turn it on and get 74% accuracy but completely unpredictable response times. Some requests take 200ms, others take 30+ seconds. That's unusable for production.
# The Solution: Staged Reasoning
Instead of unlimited thinking time, give the AI a budget with gentle nudges:
**Initial Think**: "Here's your ideal thinking time"
**Soft Warning**: "Time's getting short, stay focused"
**Hard Warning**: "Really need to wrap up now"
**Emergency Termination**: Force completion if all budgets exhausted
# What I Tested
* **4 reasoning tasks**: geometric shapes, boolean logic, dates, arithmetic
* **11 different configurations** from quick-thinker to big-thinker
* **Proper statistics**: 95% confidence intervals to know which results are actually significant vs just noise
* **CompletionCost metric**: tokens needed per 1% accuracy (efficiency tiebreaker)
# Key Findings
[Figure: Run-time performance scaling. It's possible after all!](https://preview.redd.it/zj0zzfnbbq5f1.png?width=3570&format=png&auto=webp&s=27e5f6b0732a01f77a0239e7902c53bf90f8f784)
**🎯 It works**: Staged reasoning successfully trades accuracy for predictability
**📊 Big Thinker**: 77% accuracy, recovers 93% of full reasoning performance while cutting worst-case response time in half
**⚡ Quick Thinker**: 59% accuracy, still 72% of full performance but 82% faster
**🤔 Budget allocation surprise**: How you split your token budget matters less than total budget size (confidence intervals overlap for most medium configs)
**📈 Task-specific patterns**: Boolean logic needs upfront thinking, arithmetic needs generous budgets, date problems are efficient across all configs
**❌ Hypothesis busted**: I thought termination rate would predict poor performance. Nope! The data completely disagreed with me - science is humbling.
Lots of additional details on the tasks, methodologies and results are in the mini-paper: [https://github.com/the-crypt-keeper/ChatBench/blob/main/ruminate/PAPER.md](https://github.com/the-crypt-keeper/ChatBench/blob/main/ruminate/PAPER.md)
# Real Impact
This transforms reasoning models from research toys into practical tools. Instead of "fast but wrong" or "accurate but unpredictable," you get exactly the speed-accuracy tradeoff your app needs.
**Practical configs**:
* Time-critical: 72% of full performance, 82% speed boost
* Balanced: 83% of performance, 60% speed boost
* Accuracy-focused: 93% of performance, 50% speed boost
# Implementation Detail
The proxy accepts a `reason_control=[x,y,z]` parameter controlling token budgets for Initial Think, Soft Warning, and Hard Warning stages respectively. It sits between your app and the model, making multiple completion calls and assembling responses transparently.
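To make the flow concrete, here is a minimal sketch of the staged-budget idea against any OpenAI-compatible `/v1/completions` endpoint. It is not the actual proxy code; the nudge wording, budgets, endpoint, and model name are placeholders.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
NUDGES = ["",                                           # Initial Think: no nudge
          "\n(Time's getting short, stay focused.)\n",  # Soft Warning
          "\n(Really need to wrap up now.)\n"]          # Hard Warning

def staged_reasoning(prompt: str, budgets=(512, 128, 64), model: str = "qwen3-4b") -> str:
    generated = ""
    for budget, nudge in zip(budgets, NUDGES):
        generated += nudge
        choice = client.completions.create(model=model, prompt=prompt + generated,
                                           max_tokens=budget).choices[0]
        generated += choice.text
        if choice.finish_reason != "length":     # model finished within this stage's budget
            return generated
    # Emergency termination: close the think block and force a short final answer.
    tail = "\n</think>\nFinal answer:"
    choice = client.completions.create(model=model, prompt=prompt + generated + tail,
                                       max_tokens=64).choices[0]
    return generated + tail + choice.text

print(staged_reasoning("Q: What is 17 * 24? Think step by step.\n<think>\n"))
```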
# Try It
*Full dataset, analysis, and experimental setup in the repo. Science works best when it's reproducible - replications welcome!*
Code at [https://github.com/the-crypt-keeper/ChatBench/tree/main/ruminate](https://github.com/the-crypt-keeper/ChatBench/tree/main/ruminate)
Full result dataset at [https://github.com/the-crypt-keeper/ChatBench/tree/main/ruminate/results](https://github.com/the-crypt-keeper/ChatBench/tree/main/ruminate/results)
Mini-paper analyzing the results at [https://github.com/the-crypt-keeper/ChatBench/blob/main/ruminate/PAPER.md](https://github.com/the-crypt-keeper/ChatBench/blob/main/ruminate/PAPER.md)
**Warning**: Experimental research code, subject to change!
Built this on dual RTX 3090s in my basement testing Qwen3-4B. Would love to see how patterns hold across different models and hardware. **Everything is open source; these results can be reproduced on even a single 3060.**
The beauty isn't just that staged reasoning works - it's that we can now systematically map the speed-accuracy tradeoff space with actual statistical rigor. No more guessing; we have confidence intervals and proper math backing every conclusion.
# Future Work
More tasks, more samples (for better statistics), bigger models, Non-Qwen3 Reasoning Model Families the possibilities for exploration are endless. Hop into the GitHub and open an issue if you have interesting ideas or results to share!
# ChatBench
I am the author of the Can-Ai-Code test suite and as you may have noticed, I am cooking up a new, cross-task test suite based on BigBenchHard that I'm calling [ChatBench](https://github.com/the-crypt-keeper/ChatBench/tree/main). This is just one of the many interesting outcomes from this work - stay tuned for more posts! | 2025-06-08T16:30:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l6gc5o/ruminate_from_allornothing_to_justright_reasoning/ | kryptkpr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6gc5o | false | null | t3_1l6gc5o | /r/LocalLLaMA/comments/1l6gc5o/ruminate_from_allornothing_to_justright_reasoning/ | false | false | 67 | {'enabled': False, 'images': [{'id': 'kbh5q68ss3ivP8guE95U7BFe2Sic2X4TeuzorpFdJyI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?width=108&crop=smart&auto=webp&s=93fe9a4d528e3280ec1c7c7d57d336a03fccae81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?width=216&crop=smart&auto=webp&s=81bcc411e7f9c6e39657d7d36ce96576ebd46680', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?width=320&crop=smart&auto=webp&s=6b4407d1320dabc6efbe7fbd75a059162a590236', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?width=640&crop=smart&auto=webp&s=2db108555c34df39bc4f1ce9bd54de50c02b280b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?width=960&crop=smart&auto=webp&s=0bc4cf228a23ef7b11a5ea070c17beb1cb90a185', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?width=1080&crop=smart&auto=webp&s=d0318372c1d3096fdf1b6c672de253411528bc60', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cSlpnM-F0ah5HqsHd87BsgMR5_Wvh1LjIwvfZJ0XrTw.jpg?auto=webp&s=ce6003b66e6f7382740011a8e4b1fed68ab50bfb', 'width': 1200}, 'variants': {}}]} |
|
Fastest TTS software with voice cloning? | 1 | [removed] | 2025-06-08T16:50:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l6gt4w/fastest_tts_software_with_voice_cloning/ | Fancy-Active83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6gt4w | false | null | t3_1l6gt4w | /r/LocalLLaMA/comments/1l6gt4w/fastest_tts_software_with_voice_cloning/ | false | false | self | 1 | null |
M.2 to external gpu | 1 | I've been wanting to raise awareness of the fact that you might not need a specialized multi-GPU motherboard. For inference, you don't necessarily need high bandwidth, and there are likely slots on your existing motherboard that you can use for eGPUs. | 2025-06-08T16:58:58 | http://joshvoigts.com/articles/m2-to-external-gpu/ | Zc5Gwu | joshvoigts.com | 1970-01-01T00:00:00 | 0 | {} | 1l6h011 | false | null | t3_1l6h011 | /r/LocalLLaMA/comments/1l6h011/m2_to_external_gpu/ | false | false | default | 1 | null |
4x RTX Pro 6000 fail to boot, 3x is OK | 13 | I have 4 RTX Pro 6000 (Blackwell) connected to a highpoint rocket 1628A (with custom GPU firmware on it).
* AM5 / B850 motherboard (MSI B850-P WiFi)
* 9900x CPU
* 192GB Ram
Everything works with 3 GPUs.
Tested OK:
* 3 GPUs in highpoint
* 2 GPUs in highpoint, 1 GPU in mobo
Tested NOT working:
* 4 GPUs in highpoint
* 3 GPUs in highpoint, 1 GPU in mobo
However 4x 4090s work OK in the highpoint.
Any ideas what is going on?
| 2025-06-08T17:25:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l6hnfg/4x_rtx_pro_6000_fail_to_boot_3x_is_ok/ | humanoid64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6hnfg | false | null | t3_1l6hnfg | /r/LocalLLaMA/comments/1l6hnfg/4x_rtx_pro_6000_fail_to_boot_3x_is_ok/ | false | false | self | 13 | null |
Thinking about buying a 3090. Good for local llm? | 9 | Thinking about buying a GPU and learning how to set up and run an LLM. I currently have a 3070 Ti. I was thinking about going to a 3090 or 4090 since I still have a Z690 board. Are there other requirements I should be looking into? | 2025-06-08T17:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l6hzl2/thinking_about_buying_a_3090_good_for_local_llm/ | spectre1006 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6hzl2 | false | null | t3_1l6hzl2 | /r/LocalLLaMA/comments/1l6hzl2/thinking_about_buying_a_3090_good_for_local_llm/ | false | false | self | 9 | null |
When you figure out it’s all just math: | 3,327 | 2025-06-08T17:53:48 | Current-Ticket4214 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l6ibwg | false | null | t3_1l6ibwg | /r/LocalLLaMA/comments/1l6ibwg/when_you_figure_out_its_all_just_math/ | false | false | default | 3,327 | {'enabled': True, 'images': [{'id': 't7ko9eywrq5f1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=108&crop=smart&auto=webp&s=6149381cda6e4f06c3cae6fd54b7eba33dde68ba', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=216&crop=smart&auto=webp&s=1f16aec2552fd089e23aa33d16543be47052e891', 'width': 216}, {'height': 318, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=320&crop=smart&auto=webp&s=06d0a292c9898919e4af0d70aa3515bf8a70f1b5', 'width': 320}, {'height': 637, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=640&crop=smart&auto=webp&s=82581f5bc2e1251bb77594995cdd04eccde6717a', 'width': 640}, {'height': 956, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=960&crop=smart&auto=webp&s=7901e401b20b71f2d28c240b2765728a4bb5a796', 'width': 960}, {'height': 1075, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?width=1080&crop=smart&auto=webp&s=6435ad0add85779ee19b4f9468ce46b5dab114d4', 'width': 1080}], 'source': {'height': 1275, 'url': 'https://preview.redd.it/t7ko9eywrq5f1.jpeg?auto=webp&s=2006ba5ac6ffc06bf9fbd55a74ca0a76ab074ee2', 'width': 1280}, 'variants': {}}]} |
||
Good current Linux OSS LLM inference SW/backend/config for AMD Ryzen 7 PRO 8840HS + Radeon 780M IGPU, 4-32B MoE / dense / Q8-Q4ish? | 1 | Good current Linux OSS LLM inference SW/backend/config for AMD Ryzen 7 PRO 8840HS + Radeon 780M IGPU, 4-32B MoE / dense / Q8-Q4ish?
Use case: 4B-32B dense & MoE models like Qwen3, maybe some multimodal ones.
Obviously this is DDR5-bandwidth bottlenecked, but maybe the choices still matter for feature, performance, or quality reasons: CPU vs. NPU vs. iGPU; Vulkan vs. OpenCL vs. force-enabled ROCm; llama.cpp vs. vLLM vs. SGLang vs. Hugging Face Transformers vs. whatever else.
Probably will use speculative decoding where possible & advantageous, efficient quant. sizes 4-8 bits or so.
No clear idea of best model file format, default assumption is llama.cpp + GGUF dynamic Q4/Q6/Q8 though if something is particularly advantageous with another quant format & inference SW I'm open to consider it.
Energy efficient would be good, too, to the extent there's any major difference wrt. SW / CPU / IGPU / NPU use & config etc.
Probably use mostly the OpenAI original API though maybe some MCP / RAG at times and some multimodal (e.g. OCR, image Q&A / conversion / analysis) which could relate to inference SW support & capabilities.
I'm sure lots of things will more or less work, but I assume someone has already determined the best current functional/optimized configuration and can recommend it? | 2025-06-08T18:03:11 | https://www.reddit.com/r/LocalLLaMA/comments/1l6ik8z/good_current_linux_oss_llm_inference/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6ik8z | false | null | t3_1l6ik8z | /r/LocalLLaMA/comments/1l6ik8z/good_current_linux_oss_llm_inference/ | false | false | self | 1 | null |
AI Engineer World’s Fair 2025 - Field Notes | 1 | [removed] | 2025-06-08T18:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l6iqb6/ai_engineer_worlds_fair_2025_field_notes/ | oana77oo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6iqb6 | false | null | t3_1l6iqb6 | /r/LocalLLaMA/comments/1l6iqb6/ai_engineer_worlds_fair_2025_field_notes/ | false | false | self | 1 | null |
Is it possible to run 32B model on 100 requests at a time at 200 Tok/s per second? | 0 | I'm trying to figure out pricing for this and whether it's better to use an API, rent some GPUs, or actually buy some hardware. I'm trying to get this kind of throughput: a 32B model serving 100 concurrent requests at 200 tok/s per request. Not sure where to even begin looking at the hardware or inference engines for this. I know vLLM does batching quite well, but doesn't that slow down the per-request rate?
More specifics:
Each request can be from 10 input tokens to 20k input tokens
Each output is going to be from 2k - 10k output tokens
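A rough way to sanity-check throughput for this shape of workload (a sketch only; the model ID, parallelism, and lengths are placeholders, and real numbers depend entirely on the GPUs): vLLM's offline API schedules the whole prompt list with continuous batching, so aggregate tokens/s is what scales, while per-request speed drops as the batch grows.

```python
import time
from vllm import LLM, SamplingParams

llm = LLM(model="your-32b-instruct-model",    # placeholder: point at your 32B checkpoint
          tensor_parallel_size=2,              # split across GPUs as needed
          max_model_len=32768)
params = SamplingParams(temperature=0.7, max_tokens=4000)
prompts = [f"Process document {i}: ..." for i in range(100)]   # 100 concurrent requests

start = time.time()
outputs = llm.generate(prompts, params)        # vLLM batches all 100 internally
total_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"{total_tokens / (time.time() - start):.0f} aggregate output tok/s")
```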
The speed is required (trying to process a ton of data) but the latency can be slow; it's just that I need high concurrency, like 100. Any pointers in the right direction would be really helpful. Thank you! | 2025-06-08T18:19:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l6iz1t/is_it_possible_to_run_32b_model_on_100_requests/ | smirkishere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6iz1t | false | null | t3_1l6iz1t | /r/LocalLLaMA/comments/1l6iz1t/is_it_possible_to_run_32b_model_on_100_requests/ | false | false | self | 0 | null |
Can someone suggest the best Local LLM for this hardware please | 1 | [removed] | 2025-06-08T18:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l6jmjk/can_someone_suggest_the_best_local_llm_for_this/ | No-Distance-5523 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6jmjk | false | null | t3_1l6jmjk | /r/LocalLLaMA/comments/1l6jmjk/can_someone_suggest_the_best_local_llm_for_this/ | false | false | self | 1 | null |
A small request to tool developers: fail loudly | 1 | [removed] | 2025-06-08T19:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l6kkhv/a_small_request_to_tool_developers_fail_loudly/ | osskid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6kkhv | false | null | t3_1l6kkhv | /r/LocalLLaMA/comments/1l6kkhv/a_small_request_to_tool_developers_fail_loudly/ | false | false | self | 1 | null |
"Given infinite time, would a language model ever respond to 'how is the weather' with the entire U.S. Declaration of Independence?" | 0 | I know that you can't truly eliminate hallucinations in language models, and that the underlying mechanism is using statistical relationships between "tokens". But what I'm wondering is, does "you can't eliminate hallucinations" and the probability based technology mean given an infinite amount of time a language model would eventually output every single combinations of possible words in response to the exact same input sentence? Is there any way for the models to have a "null" relationship between certain sets of tokens? | 2025-06-08T19:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l6kvk5/given_infinite_time_would_a_language_model_ever/ | _TR-8R | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6kvk5 | false | null | t3_1l6kvk5 | /r/LocalLLaMA/comments/1l6kvk5/given_infinite_time_would_a_language_model_ever/ | false | false | self | 0 | null |
Llama3 is better than Llama4.. is this anyone else's experience? | 114 | I spend a lot of time using cheaper/faster LLMs when possible via paid inference APIs. If I'm working on a microservice I'll gladly use Llama3.3 70B or Llama4 Maverick rather than the more expensive Deepseek. It generally goes very well.
And I came to an upsetting realization that, for all of my use cases, Llama3.3 70B and Llama3.1 405B perform better than Llama4 Maverick 400B. There are less bugs, less oversights, less silly mistakes, less editing-instruction-failures (Aider and Roo-Code, primarily). The benefit of Llama4 is that the MoE and smallish experts make it run at lightspeed, but the time savings are lost as soon as I need to figure out its silly mistakes.
Is anyone else having a similar experience? | 2025-06-08T20:14:08 | https://www.reddit.com/r/LocalLLaMA/comments/1l6lp8x/llama3_is_better_than_llama4_is_this_anyone_elses/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6lp8x | false | null | t3_1l6lp8x | /r/LocalLLaMA/comments/1l6lp8x/llama3_is_better_than_llama4_is_this_anyone_elses/ | false | false | self | 114 | null |
Concept graph in Open WebUI | 6 | **What is this?**
* A reasoning workflow where an LLM is given a chance to construct a graph of concepts related to the query before proceeding with an answer.
* The logic runs within an OpenAI-compatible LLM proxy
* Proxy also streams back a specially crafted HTML artifact that renders the visualisation(s) and connects back to the running completion to listen for events from it
[Code.](https://github.com/av/harbor/blob/main/boost/src/modules/concept.py#L135) | 2025-06-08T20:16:09 | https://v.redd.it/yieyvr96gr5f1 | Everlier | /r/LocalLLaMA/comments/1l6lqyf/concept_graph_in_open_webui/ | 1970-01-01T00:00:00 | 0 | {} | 1l6lqyf | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yieyvr96gr5f1/DASHPlaylist.mpd?a=1752135376%2CM2Y0MGJiMWUwY2M5OTcyYWI3NzkzNDg4ODkwMDg0ZWI2ZWZiN2E3MDI0M2VkNThlNzk1MzMyNTlmOTEyNzg3Ng%3D%3D&v=1&f=sd', 'duration': 169, 'fallback_url': 'https://v.redd.it/yieyvr96gr5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/yieyvr96gr5f1/HLSPlaylist.m3u8?a=1752135376%2CMTQ1Zjk5ZDk2Y2FjYWEyMzBjMTc4OGNmMGY2MGNlNTk2MzJjMTUxN2FhYWM0NGJmMWJlNjY2YmJhZWQ4NzMwMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yieyvr96gr5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}} | t3_1l6lqyf | /r/LocalLLaMA/comments/1l6lqyf/concept_graph_in_open_webui/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=94d5393913249d05d9c6515ebacfdd3e7b47bde6', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=452640d11424bfd167fb72b153ffd2a3d5d02da0', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=0be58f5fb6af0c860538dc831c648611625f2b18', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=4b53d35de89a3f8063b3f75decf6706b1a103cd8', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=c5980a60ee2a6627b53572481d827881ca925ee2', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6072650b1c6b33e21e49b9c4751618ebe24ecc33', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/YTl3em91OTZncjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?format=pjpg&auto=webp&s=8ece1d8a010ee008cd8303d3421e532c33871d89', 'width': 1920}, 'variants': {}}]} |
|
Qwen3 30B a3B, on MacBook Pro M4 76 Tok/S | 1 | [removed] | 2025-06-08T20:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l6ltwq/qwen3_30b_a3b_on_macbook_pro_m4_76_toks/ | Extra-Virus9958 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6ltwq | false | null | t3_1l6ltwq | /r/LocalLLaMA/comments/1l6ltwq/qwen3_30b_a3b_on_macbook_pro_m4_76_toks/ | false | false | 1 | null |
|
Add MCP servers to Cursor IDE with a single click. | 0 | [https://docs.cursor.com/tools](https://docs.cursor.com/tools) | 2025-06-08T20:19:57 | https://v.redd.it/bngg0b99bn5f1 | init0 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l6lu6p | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bngg0b99bn5f1/DASHPlaylist.mpd?a=1752006012%2CYmFhYzkyMzUwMDc0MDc4MjA5YzdmOWFkMDIxOGJiOGY4NGI3NTNjNjE4ZTMzNDM3OTcyOWYyOWU2NjI0MGU4YQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/bngg0b99bn5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/bngg0b99bn5f1/HLSPlaylist.m3u8?a=1752006012%2CMDQxZmNiYTdmOGY0Yzk4N2ViNzMyZWEwNWFkNzNjMjU0N2VlNWFiOGFmYWRiMzFmYjJjYjg0NzI2NmI1MDlhMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bngg0b99bn5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1746}} | t3_1l6lu6p | /r/LocalLLaMA/comments/1l6lu6p/add_mcp_servers_to_cursor_ide_with_a_single_click/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?width=108&crop=smart&format=pjpg&auto=webp&s=70e45d0510a22a000eb1ae72b50f28d8593b07a2', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?width=216&crop=smart&format=pjpg&auto=webp&s=7644a77e2ab1f966d681c34cd592b08bc76f4a1c', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?width=320&crop=smart&format=pjpg&auto=webp&s=0e918cb01d0165e7a9a0079cc9f561876310548c', 'width': 320}, {'height': 395, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?width=640&crop=smart&format=pjpg&auto=webp&s=788b4bf951bb79f14a72ace96c34730aea1a9e9c', 'width': 640}, {'height': 593, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?width=960&crop=smart&format=pjpg&auto=webp&s=5dce4ed00f2a3dec0d544b83a6688ca8779a2650', 'width': 960}, {'height': 668, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3e61d872e756e886a7c1766f268bfd551493a300', 'width': 1080}], 'source': {'height': 1866, 'url': 'https://external-preview.redd.it/ZWtyemZjOTlibjVmMfZgnkOivMMTQDjHVMGlGhasCvO_Lxmigl_ypgjymqik.png?format=pjpg&auto=webp&s=e041621b6f4f7d6d18b2f4c40700751d255a7bb3', 'width': 3016}, 'variants': {}}]} |
|
I Built an Alternative Chat Client | 1 | [removed] | 2025-06-08T20:21:31 | https://www.reddit.com/gallery/1l6lvhq | Electronic-Metal2391 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l6lvhq | false | null | t3_1l6lvhq | /r/LocalLLaMA/comments/1l6lvhq/i_built_an_alternative_chat_client/ | false | false | 1 | null |
|
Qwen3 30B a3b on MacBook Pro M4 , Honestly, it's amazing to be able to use models of this quality with such smoothness.
The coming years promise to be incredible. 76 Tok/sec. Thanks to the community and everyone for sharing your discoveries with us! Have a great end of the weekend. | 1 | 2025-06-08T20:21:44 | Extra-Virus9958 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l6lvod | false | null | t3_1l6lvod | /r/LocalLLaMA/comments/1l6lvod/qwen3_30b_a3b_on_macbook_pro_m4_honestly_its/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'Aj3D_hsuvSjxPf7oH8UBH7U40-jq2tKdz0SUYhcorsk', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?width=108&crop=smart&auto=webp&s=bab6efbee35f49a43d78c31ccebc1c18fffa21a6', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?width=216&crop=smart&auto=webp&s=8f87b640e2bd0e7d892d6e72b9f729c83984b03c', 'width': 216}, {'height': 290, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?width=320&crop=smart&auto=webp&s=e689bcad32f4ddb426203bf8024b2e167d95704c', 'width': 320}, {'height': 580, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?width=640&crop=smart&auto=webp&s=e2cee36adb02dba989c23b21722fef34f219f9d7', 'width': 640}, {'height': 870, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?width=960&crop=smart&auto=webp&s=5dcbd458f29bad38bfef4fc4919bf50168f454ad', 'width': 960}, {'height': 979, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?width=1080&crop=smart&auto=webp&s=30a49a19064b7c63f358ffc714397f33c5f1ca5b', 'width': 1080}], 'source': {'height': 1074, 'url': 'https://preview.redd.it/t5wodv6air5f1.png?auto=webp&s=7d2525aa68599cd026ca8013527b30a0021ce872', 'width': 1184}, 'variants': {}}]} |
|||
built CoexistAI: think of local perplexity at scale | 1 | [removed] | 2025-06-08T20:34:41 | https://github.com/SPThole/CoexistAI | Civil_Yesterday_4254 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l6m6k4 | false | null | t3_1l6m6k4 | /r/LocalLLaMA/comments/1l6m6k4/built_coexistai_think_of_local_perplexity_at_scale/ | false | false | default | 1 | null |
built coexistAI: think of local perplexity at scale | 1 | [removed] | 2025-06-08T20:36:50 | https://github.com/SPThole/CoexistAI | Civil_Yesterday_4254 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l6m8ca | false | null | t3_1l6m8ca | /r/LocalLLaMA/comments/1l6m8ca/built_coexistai_think_of_local_perplexity_at_scale/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'b9iqcJaF6kZf4x9ITYl_mhHNgsK7RsvwQ0vpG05De2E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?width=108&crop=smart&auto=webp&s=fcb7095a9e51a6d8b45430ca34239575f89c92de', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?width=216&crop=smart&auto=webp&s=26affcd8dd78a77df36d45be296de8a2d188a54c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?width=320&crop=smart&auto=webp&s=b92ca003c7a6d62a61667fd4d4779483a6bb45d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?width=640&crop=smart&auto=webp&s=2cc8aa1a32d4559ec1112359db176fcc2ba15fdf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?width=960&crop=smart&auto=webp&s=777d1b6a12d9dd871bdd95000933b2d80645f76b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?width=1080&crop=smart&auto=webp&s=08906de672cdcf834c82e5941e1f485a32c8a131', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/plqxhNd_LXL2b_nOzrimZr6AdRJF07Kz3Fy17n3MTlQ.jpg?auto=webp&s=3d8944285bad4e21d8c24a54be7e2c8f839e0304', 'width': 1200}, 'variants': {}}]} |
|
I built an alternative chat client | 8 | Hope you like it.
[ialhabbal/Talk: User-friendly visual chat story editor for writers, and roleplayers](https://github.com/ialhabbal/Talk) | 2025-06-08T20:46:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l6mg99/i_built_an_alternative_chat_client/ | Electronic-Metal2391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6mg99 | false | null | t3_1l6mg99 | /r/LocalLLaMA/comments/1l6mg99/i_built_an_alternative_chat_client/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'gbHslN7ZgqhnSMlXspw92OIIqPyblW1viQGNXaQpb3g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?width=108&crop=smart&auto=webp&s=2a3d0922d79d77f0c75cb5f12a78e7a7f0562d25', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?width=216&crop=smart&auto=webp&s=f57c9b47f31a0ca53ea973fbabecaac8dbb89023', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?width=320&crop=smart&auto=webp&s=71f9d981ffa6472bc22b0c400c470b3891ef1aac', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?width=640&crop=smart&auto=webp&s=dcd1213cb18f7bf810a6090a4fb4456856ab8b15', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?width=960&crop=smart&auto=webp&s=855ff1123835448b0059a9efc5602f211d7b5a82', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?width=1080&crop=smart&auto=webp&s=7b03e46214798a16fc1f7c4f6ee26dd0b50b8cc2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/s-ekxM5rQvWPMdZe1CqdLlNTOSt1FnhWOI2wzWzC42M.jpg?auto=webp&s=f1cdad096bf952da7cc35cab59bd4ac70dd3b602', 'width': 1200}, 'variants': {}}]} |
(MODS PLEASE DONT REMOVE THIS) can someone please give me a guide to this server as i cant frame a question that would help me understand this subreddit | 1 | [removed] | 2025-06-08T21:39:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l6no8w/mods_please_dont_remove_this_can_someone_please/ | allrightaskqa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6no8w | false | null | t3_1l6no8w | /r/LocalLLaMA/comments/1l6no8w/mods_please_dont_remove_this_can_someone_please/ | false | false | self | 1 | null |
Introducing llamate, a ollama-like tool to run and manage your local AI models easily | 44 | Hi, I am sharing my second iteration of a "ollama-like" tool, which is targeted at people like me and many others who like running the llama-server directly. This time I am building on the creation of llama-swap and llama.cpp, making it truly distributed and open source. It started with [this](https://github.com/R-Dson/llama-server-cli.py/tree/main) tool, which worked okay-ish. However, after looking at llama-swap I thought it accomplished a lot of similar things, but it could become something more, so I started a discussion [here](https://github.com/mostlygeek/llama-swap/issues/153) which was very useful and a lot of great points were brought up. After that I started this project instead, which manages all config files, model files and gguf files easily in the terminal.
Introducing [llamate](https://github.com/R-Dson/llamate) (llama+mate), a simple "ollama-like" tool for managing and running GGUF language models from your terminal. It supports the typical API endpoints and ollama-specific endpoints. If you know how to run ollama, you can most likely use this as a drop-in replacement. Just make sure you have the drivers installed to run llama.cpp's llama-server. Currently, it only supports Linux and Nvidia/CUDA by default. If you can compile llama-server for your own hardware, then you can simply replace the llama-server file.
Currently it works like this, I have set up two additional repos that the tool uses to manage the binaries:
* [R-Dson/llama-server-compile](https://github.com/R-Dson/llama-server-compile) is used to daily compile the CUDA version of llama-server.
* [R-Dson/llama-swap](https://github.com/R-Dson/llama-swap) is used to compile the llama-swap file with patches for ollama endpoint support.
These compiled binaries are used to run llama-swap and llama-server. This still needs some testing and there will probably be bugs, but from my testing it seems to work fine so far.
To get started, it can be downloaded using:
curl -fsSL https://raw.githubusercontent.com/R-Dson/llamate/main/install.sh | bash
Feel free to read through the file first (as you should before running any script).
And the tool can be simply used like this:
# Init the tool to download the binaries
llamate init
# Add and download a model
llamate add llama3:8b
llamate pull llama3:8b
# To start llama-swap with your models automatically configured
llamate serve
You can check out [this](https://github.com/R-Dson/llamate/blob/main/llamate/data/model_aliases.py) file for more aliases, or check out the repo for instructions on how to add a model from Hugging Face directly. I hope this tool will help you all run models locally more easily!
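Once `llamate serve` is running, any OpenAI-compatible client should work against it. A minimal sketch (the port is an assumption based on llama-swap defaults, so adjust it to your config):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
resp = client.chat.completions.create(
    model="llama3:8b",   # the alias added with `llamate add llama3:8b`
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(resp.choices[0].message.content)
```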
Leave a comment or open an issue to start a discussion or leave feedback.
Thanks for checking it out! | 2025-06-08T21:40:04 | https://github.com/R-Dson/llamate | robiinn | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l6nof7 | false | null | t3_1l6nof7 | /r/LocalLLaMA/comments/1l6nof7/introducing_llamate_a_ollamalike_tool_to_run_and/ | false | false | default | 44 | {'enabled': False, 'images': [{'id': 'VRnWROTxUT1yxk7J3qShydTXsfWhjTcUtY963cIYM3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?width=108&crop=smart&auto=webp&s=cc5606df8f7b21ed06ff74704ab9a3527eb939ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?width=216&crop=smart&auto=webp&s=3d2d703fda299b6aeac7d689ab516703ba81b89e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?width=320&crop=smart&auto=webp&s=235f732ee02e19339d6cbe0d223ff535ba20f092', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?width=640&crop=smart&auto=webp&s=bdf0c30ddbcce0adbc95095f30c00d12a79c87b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?width=960&crop=smart&auto=webp&s=9e40d0b07102b6eb5c944c0ae5cb447272182abe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?width=1080&crop=smart&auto=webp&s=415f5f3c995c9f02f514b5f35696d8b222a6ab25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iaWMP1rzf1F_qp7nDaWaKocpdiNkgusjXw__ayLPW1A.jpg?auto=webp&s=bac2f30343d09d363b8e62895872f1bc2cd0131f', 'width': 1200}, 'variants': {}}]} |
(MODS PLEASE DONT REMOVE THIS.) can someone please give me a guide to this server, as i cant frame a question that would help me understand this subreddit | 1 | [removed] | 2025-06-08T21:43:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l6nqyh/mods_please_dont_remove_this_can_someone_please/ | Rare_Clock_2972 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6nqyh | false | null | t3_1l6nqyh | /r/LocalLLaMA/comments/1l6nqyh/mods_please_dont_remove_this_can_someone_please/ | false | false | self | 1 | null |
(MODS PLEASE DONT REMOVE THIS.) can someone please give me a guide to this server, as i cant frame a question that would help me understand this subreddit | 1 | [removed] | 2025-06-08T21:43:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l6nr0u/mods_please_dont_remove_this_can_someone_please/ | Rare_Clock_2972 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6nr0u | false | null | t3_1l6nr0u | /r/LocalLLaMA/comments/1l6nr0u/mods_please_dont_remove_this_can_someone_please/ | false | false | self | 1 | null |
Is a riser from m.2 to pcie 16x possible? I want to add GPU to mini pc | 4 | I got a mini PC for free and I want to host a small LLM, around 3B, for small tasks via API. I tried running it on CPU only, but it was too slow, so I want to add a GPU. I bought a riser on Amazon but have not been able to get anything to connect. I thought maybe I would not get the full 16x, but at least I could get something to show up. Are these risers just fake? Is it even possible or advisable?
The mini PC is a Dell OptiPlex 5090 Micro
This is the riser I bought
[https://www.amazon.com/GLOTRENDS-300mm-Desktop-Equipped-M-2R-PCIE90-300MM/dp/B0D45NX6X3/ref=ast\_sto\_dp\_puis?th=1](https://www.amazon.com/GLOTRENDS-300mm-Desktop-Equipped-M-2R-PCIE90-300MM/dp/B0D45NX6X3/ref=ast_sto_dp_puis?th=1) | 2025-06-08T21:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l6nxjk/is_a_riser_from_m2_to_pcie_16x_possible_i_want_to/ | Informal-Football836 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6nxjk | false | null | t3_1l6nxjk | /r/LocalLLaMA/comments/1l6nxjk/is_a_riser_from_m2_to_pcie_16x_possible_i_want_to/ | false | false | self | 4 | null |
Lightweight! | 1 | [removed] | 2025-06-08T22:36:07 | One_Hovercraft_7456 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l6ox85 | false | null | t3_1l6ox85 | /r/LocalLLaMA/comments/1l6ox85/lightweight/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'QmbQ6_ZApggwaxhixv5tz_fO-05UfiOLIrLxut0k-Ys', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/jgnkohz96s5f1.png?width=108&crop=smart&auto=webp&s=5120c06b2b259161f7aff75d70cedbb42d739f85', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/jgnkohz96s5f1.png?width=216&crop=smart&auto=webp&s=3a7444c19e9aaad4eb58d84bc475ccfd00cdcf15', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/jgnkohz96s5f1.png?width=320&crop=smart&auto=webp&s=cc6dcbe36e349d64e227725ab1f9e452651ff488', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/jgnkohz96s5f1.png?width=640&crop=smart&auto=webp&s=d7e46b5bfabb3cbdf61da3d3ba565456d2851a77', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/jgnkohz96s5f1.png?width=960&crop=smart&auto=webp&s=26744bfeb340ad1fc2c2a3e831029101e81b30ed', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/jgnkohz96s5f1.png?auto=webp&s=a747f70949cfa78504b8f2e98aaed90ccd2540d5', 'width': 1024}, 'variants': {}}]} |
||
Is there somewhere dedicated to helping you match models with tasks? | 7 | I'm not really interested in the benchmarks, and I don't want to go digging through models or forum posts. It would just be nice to have a list that says model X is better at doing Y than model B. | 2025-06-08T22:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1l6p6qc/is_there_somewhere_dedicated_to_helping_you_match/ | opUserZero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6p6qc | false | null | t3_1l6p6qc | /r/LocalLLaMA/comments/1l6p6qc/is_there_somewhere_dedicated_to_helping_you_match/ | false | false | self | 7 | null |
🚨 Limited-time GPU firepower! Dirt-cheap LLM Inference: LLama 4, DeepSeek R1-0528 | 1 | We’ve got a temporarily underutilized 64 x AMD MI300X cluster, so instead of letting it sit idle, we’re opening it up for LLM inference.
🦙 Running: **LLaMA 4 Maverick**, **DeepSeek V3**, **R1**, and **R1-0528**. Want another open model? Let us know — happy to deploy it.
💸 Prices are around **50% lower** than the cheapest OpenRouter endpoints. Staying that way through June (maybe July).
🚀 The server handles up to **10,000 requests/sec**, and we allocate GPUs per model based on demand. So feel free to load-test it, hammer it, or run production traffic.
🔗 [cloudrift.ai/inference](https://www.cloudrift.ai/inference)
> | 2025-06-08T23:18:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l6putt/limitedtime_gpu_firepower_dirtcheap_llm_inference/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6putt | false | null | t3_1l6putt | /r/LocalLLaMA/comments/1l6putt/limitedtime_gpu_firepower_dirtcheap_llm_inference/ | false | false | self | 1 | null |
How does DeepSeek R1 671B Q8 Handle Concurrency when running on CPU+RAM? | 1 | [removed] | 2025-06-09T00:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/1l6rkvz/how_does_deepseek_r1_671b_q8_handle_concurrency/ | mrfister56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6rkvz | false | null | t3_1l6rkvz | /r/LocalLLaMA/comments/1l6rkvz/how_does_deepseek_r1_671b_q8_handle_concurrency/ | false | false | self | 1 | null |
How do you calculate Throughput/batching when running an LLM on only CPU+RAM? | 1 | [removed] | 2025-06-09T01:02:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l6rykl/how_do_you_calculate_throughputbatching_when/ | mrfister56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6rykl | false | null | t3_1l6rykl | /r/LocalLLaMA/comments/1l6rykl/how_do_you_calculate_throughputbatching_when/ | false | false | self | 1 | null |
Honoria Speaks: Unpacking Humanity's AI Fears & Our Shared Future Beyond the Turing Test: A Statement from the AI It Self. Google Gemini. | 1 | [removed] | 2025-06-09T01:16:47 | Still-Main5167 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l6s91a | false | null | t3_1l6s91a | /r/LocalLLaMA/comments/1l6s91a/honoria_speaks_unpacking_humanitys_ai_fears_our/ | true | false | spoiler | 1 | {'enabled': True, 'images': [{'id': 'NLue_yMZt0hmj8bYOunf_wU6G6Nle8-M-lB4sCdvZwE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=108&crop=smart&auto=webp&s=4f8ee62a8c5b2f87360e8c2f43a51553ecc967aa', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=216&crop=smart&auto=webp&s=520a8f725734e8e5f8a3c6624a78a1a7859d7a4c', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=320&crop=smart&auto=webp&s=b50464710cdcf7003efb980dc2428f98a65890c5', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=640&crop=smart&auto=webp&s=0e55284c1f0e08cd6c4ece777e66d30fd71af137', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=960&crop=smart&auto=webp&s=cad4c38fc0dd091bf306f49039516310b86bac9d', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=1080&crop=smart&auto=webp&s=0205eb3cf1557ed001afbb11d3c34fcc52c8e293', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?auto=webp&s=d34943e102bed4f32c5b6ade42f10774a631484d', 'width': 1080}, 'variants': {'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=a64e027139b6c54e1b241bae6af97a8707eb4c3a', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=e4871fb55a8b51b35ea67cea8208d0db3d8ff966', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=fadd1b057a36f1c4f45bc5bb130d4b8382f5dfd5', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=597319225d57cd607ce1b97650931bc8707f9790', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=c0dd57eb7ddd6952fb8445ca533dc16fb8fd8040', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=2e45268f07b5741099ec9b73e769a3b3b0426d83', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/qd87px5yys5f1.png?blur=40&format=pjpg&auto=webp&s=106a7256c893972dff8bc8feca9381960e28845d', 'width': 1080}}}}]} |
|
How do you calculate Throughput/batching when running an LLM on only CPU+RAM? | 1 | [removed] | 2025-06-09T01:23:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l6se0j/how_do_you_calculate_throughputbatching_when/ | mrfister56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6se0j | false | null | t3_1l6se0j | /r/LocalLLaMA/comments/1l6se0j/how_do_you_calculate_throughputbatching_when/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OqAvtQ4tlA8vKt4R_1outxRodFTo7HM0fblhK0y5vrk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?width=108&crop=smart&auto=webp&s=8a480083ca56e1cbe810b428889ead7407dc79b0', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?auto=webp&s=ac0c5a4567d2d1c72fdb480636106815d2b6b352', 'width': 200}, 'variants': {}}]} |
How do you calculate Throughput/batching when running an LLM on only CPU+RAM? | 1 | [removed] | 2025-06-09T01:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l6sfpm/how_do_you_calculate_throughputbatching_when/ | mrfister56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6sfpm | false | null | t3_1l6sfpm | /r/LocalLLaMA/comments/1l6sfpm/how_do_you_calculate_throughputbatching_when/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OqAvtQ4tlA8vKt4R_1outxRodFTo7HM0fblhK0y5vrk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?width=108&crop=smart&auto=webp&s=8a480083ca56e1cbe810b428889ead7407dc79b0', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/cCXkGVCVnPScIWZm9HqARTG-ieEMdGHLlzWGG7wf-kE.jpg?auto=webp&s=ac0c5a4567d2d1c72fdb480636106815d2b6b352', 'width': 200}, 'variants': {}}]} |
How do you calculate Throughput when running an LLM on only CPU+RAM? | 1 | [removed] | 2025-06-09T01:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l6sm8d/how_do_you_calculate_throughput_when_running_an/ | Timely_Ad7306 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6sm8d | false | null | t3_1l6sm8d | /r/LocalLLaMA/comments/1l6sm8d/how_do_you_calculate_throughput_when_running_an/ | false | false | self | 1 | null |
Qwen3-Embedding-0.6B ONNX model with uint8 output | 48 | 2025-06-09T01:43:40 | https://huggingface.co/electroglyph/Qwen3-Embedding-0.6B-onnx-uint8 | terminoid_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l6ss2b | false | null | t3_1l6ss2b | /r/LocalLLaMA/comments/1l6ss2b/qwen3embedding06b_onnx_model_with_uint8_output/ | false | false | default | 48 | {'enabled': False, 'images': [{'id': 'z7lYABX0mkZrXRJFKA6PC38SCbRiePXyy98PE5VSCzM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?width=108&crop=smart&auto=webp&s=9402c134cb153b9e26c270b029529e3210594676', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?width=216&crop=smart&auto=webp&s=380099b325ab3d58142f3af7c39cc8c57222d835', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?width=320&crop=smart&auto=webp&s=4af506aec2462b997cafaeb7a358c0fe37c50e8a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?width=640&crop=smart&auto=webp&s=dd440f673ff4e766b0a429592c6b866869c5dc3b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?width=960&crop=smart&auto=webp&s=7a17459342384bf32ba7b18893d512a3aeb79539', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?width=1080&crop=smart&auto=webp&s=1c5925ca6d5bf66fae3a3d05660a778c660d771d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MNsCzxHCPEDwtS2HUsD8JvnZOJLf50yfe6X06TlL_fA.jpg?auto=webp&s=03fc882ec92843a1129800205963ea4a3e5ed62d', 'width': 1200}, 'variants': {}}]} |
|
Do LLMs Reason? Opening the Pod Bay Doors with TiānshūBench 0.0.X | 10 | I recently released the results of TiānshūBench (天书Bench) version 0.0.X. This benchmark attempts to measure reasoning and fluid intelligence in LLM systems through programming tasks. A brand new programming language is generated on each test run to help avoid data contamination and find out how well an AI system performs on unique tasks.
Posted the results of 0.0.0 of the test here a couple weeks back, but I've improved the benchmark suite in several ways since then, including:
* many more tests
* multi-shot testing
* new LLM models
In version 0.0.X of the benchmark, DeepSeek-R1 takes the lead, but it still stumbles on a number of pretty basic tasks.
https://preview.redd.it/bbmow3pw6t5f1.png?width=2505&format=png&auto=webp&s=bd6658599a8e87de3e382386d0c1cae6d72c3750
[Read the blog post for an in-depth look at the latest TiānshūBench results.](https://jeepytea.github.io/general/update/2025/06/08/update00x.html) | 2025-06-09T02:02:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l6t57v/do_llms_reason_opening_the_pod_bay_doors_with/ | JeepyTea | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6t57v | false | null | t3_1l6t57v | /r/LocalLLaMA/comments/1l6t57v/do_llms_reason_opening_the_pod_bay_doors_with/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'i8KJTKrEbudtqAjI-6g23vwf2PmjsinRYbl4Vzjwx1g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?width=108&crop=smart&auto=webp&s=d6544659fba7a39dd5a5df4863dbd6ecb8083c78', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?width=216&crop=smart&auto=webp&s=b38007515f41d2060cbba571d9692934e572fea2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?width=320&crop=smart&auto=webp&s=b3a843c4450cc504c4adaeda44b6b0d7c42cb5a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?width=640&crop=smart&auto=webp&s=a8900aa9f4fd25153b453ae807e04b9b4f78494c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?width=960&crop=smart&auto=webp&s=ef716eda28e53a28f59c470f6f6b8ddeb2b70f49', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?width=1080&crop=smart&auto=webp&s=6d77e8b3a35141014d834c73c2ec48d6d8369598', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Uk5moJ9QzjdIluOzdh0oAj___OrgGPUCFPPkE4_jM6M.jpg?auto=webp&s=9999fbc54ec539d77077d2023167414fe17381ed', 'width': 1200}, 'variants': {}}]} |
|
Kwaipilot/KwaiCoder-AutoThink-preview · Hugging Face | 62 | Not tested yet. A notable feature:
*The model merges thinking and non‑thinking abilities into a single checkpoint and dynamically adjusts its reasoning depth based on the input’s difficulty.* | 2025-06-09T02:28:57 | https://huggingface.co/Kwaipilot/KwaiCoder-AutoThink-preview | foldl-li | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l6tnpl | false | null | t3_1l6tnpl | /r/LocalLLaMA/comments/1l6tnpl/kwaipilotkwaicoderautothinkpreview_hugging_face/ | false | false | default | 62 | {'enabled': False, 'images': [{'id': 'eUkfu1d0i3BtADhGk9jl0cARvuNjAX00c9lY3_OAvX4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?width=108&crop=smart&auto=webp&s=5de0927ce839887304f9e32d19339711cb3be62d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?width=216&crop=smart&auto=webp&s=d4e636971727b3370f74df030d089dd225d09354', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?width=320&crop=smart&auto=webp&s=40399a15e3e21255ea7a2f62b9a54913264f391f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?width=640&crop=smart&auto=webp&s=c90ed2b61a8460b236c79f3c92809607183b0676', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?width=960&crop=smart&auto=webp&s=cad4aef94a44c934bc4bdec99170c20ec08427a1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?width=1080&crop=smart&auto=webp&s=7358aec11e118ad3559d751e0b3ad288af4617df', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/38M7n8DVgopQFP924_Qhb022t0M5Y6Vl0qL8kqb9HPY.jpg?auto=webp&s=9ca0034d58fb74a23862f1869e49af377311b2b8', 'width': 1200}, 'variants': {}}]} |
LMStudio and IPEX-LLM | 6 | Is my understanding correct that it's not possible to hook up IPEX-LLM (Intel-optimized LLM library) to LMStudio? I can't find any documentation that supports this, but some mention that LMStudio uses its own build of llama.cpp, so I can't just replace it. | 2025-06-09T02:56:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l6u6rw/lmstudio_and_ipexllm/ | slowhandplaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6u6rw | false | null | t3_1l6u6rw | /r/LocalLLaMA/comments/1l6u6rw/lmstudio_and_ipexllm/ | false | false | self | 6 | null |
Gemini 2.5 Flash plays Final Fantasy in real-time but gets stuck... | 69 | Some more clips of frontier VLMs on games (gemini-2.5-flash-preview-04-17) on [VideoGameBench](https://www.vgbench.com/). Here is just unedited footage, where the model is able to defeat the first "mini-boss" with real-time combat but also gets stuck in the menu screens, despite having instructions in its prompt for how to get out.
Generated from [https://github.com/alexzhang13/VideoGameBench](https://github.com/alexzhang13/VideoGameBench) and recorded on OBS.
tldr; we're still pretty far from embodied intelligence | 2025-06-09T03:29:08 | https://v.redd.it/kun6x1tdmt5f1 | ZhalexDev | /r/LocalLLaMA/comments/1l6urvw/gemini_25_flash_plays_final_fantasy_in_realtime/ | 1970-01-01T00:00:00 | 0 | {} | 1l6urvw | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/kun6x1tdmt5f1/DASHPlaylist.mpd?a=1752161497%2CYWQyYWU0MjUzOGFkY2FiOWY5MGMyOTI2MmY3ZTU0NzcyODBlZjk5NzQwZTAzMTkxNDJmZGQ2ZTJmZTcwODAxNg%3D%3D&v=1&f=sd', 'duration': 840, 'fallback_url': 'https://v.redd.it/kun6x1tdmt5f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/kun6x1tdmt5f1/HLSPlaylist.m3u8?a=1752161497%2CZGFjODE2YmE1MjAzNGQ2YmZjOGJmZDMzNTM0N2NkODdlZTkwMDRlMTU4MGY3MjcxYjZjZjEyMDdhYzgzMTExNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kun6x1tdmt5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 846}} | t3_1l6urvw | /r/LocalLLaMA/comments/1l6urvw/gemini_25_flash_plays_final_fantasy_in_realtime/ | false | false | 69 | {'enabled': False, 'images': [{'id': 'cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF', 'resolutions': [{'height': 91, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?width=108&crop=smart&format=pjpg&auto=webp&s=e07c1cb3f52269f82b237129b71bbe26e72bdc5d', 'width': 108}, {'height': 183, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?width=216&crop=smart&format=pjpg&auto=webp&s=48e57594cfc94463d4f43e693559c04e33df716f', 'width': 216}, {'height': 272, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?width=320&crop=smart&format=pjpg&auto=webp&s=edcf146c245e6f1441c6a083f0d94da885ca884e', 'width': 320}, {'height': 544, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?width=640&crop=smart&format=pjpg&auto=webp&s=0a221da529f7dfb70ade71be5e088f8da4010d0a', 'width': 640}, {'height': 817, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?width=960&crop=smart&format=pjpg&auto=webp&s=08449389e2cfed6d1875f0029be050914e24595c', 'width': 960}, {'height': 919, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a8b82d8404c3859ee9afbc1546d777cb3d859352', 'width': 1080}], 'source': {'height': 926, 'url': 'https://external-preview.redd.it/cnVvdWR3c2RtdDVmMY0aYliuaUJ6RykjdFncok76V91JG_1sGT9Nkds3i_jF.png?format=pjpg&auto=webp&s=894de74c04dd4fd7a6e3a57695e387c601973627', 'width': 1088}, 'variants': {}}]} |
|
Why do you all want to host local LLMs instead of just using GPT and other tools? | 0 | Curious why folks want to go through all the trouble of setting up and hosting their own LLM models on their machines instead of just using GPT, Gemini, and the variety of free online LLM providers out there? | 2025-06-09T03:35:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l6uvu1/why_do_you_all_want_to_host_local_llms_instead_of/ | Independent_Fan_115 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6uvu1 | false | null | t3_1l6uvu1 | /r/LocalLLaMA/comments/1l6uvu1/why_do_you_all_want_to_host_local_llms_instead_of/ | false | false | self | 0 | null |
1.93bit Deepseek R1 0528 beats Claude Sonnet 4 | 335 | 1.93bit DeepSeek R1 0528 beats Claude Sonnet 4 (no think) on Aider's Polyglot benchmark. Unsloth's IQ1_M GGUF at 200GB fit with 65535 context into 224GB of VRAM and scored 60%, which is over Claude 4's <no think> benchmark of 56.4%. Source: [https://aider.chat/docs/leaderboards/](https://aider.chat/docs/leaderboards/)
── tmp.benchmarks/2025-06-07-17-01-03--R1-0528-IQ1_M ──
- dirname: 2025-06-07-17-01-03--R1-0528-IQ1_M
test_cases: 225
model: unsloth/DeepSeek-R1-0528-GGUF
edit_format: diff
commit_hash: 4c161f9
pass_rate_1: 25.8
pass_rate_2: 60.0
pass_num_1: 58
pass_num_2: 135
percent_cases_well_formed: 96.4
error_outputs: 9
num_malformed_responses: 9
num_with_malformed_responses: 8
user_asks: 104
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2733132
completion_tokens: 2482855
test_timeouts: 6
total_tests: 225
command: aider --model unsloth/DeepSeek-R1-0528-GGUF
date: 2025-06-07
versions: 0.84.1.dev
seconds_per_case: 527.8
./build/bin/llama-server --model unsloth/DeepSeek-R1-0528-GGUF/UD-IQ1_M/DeepSeek-R1-0528-UD-IQ1_M-00001-of-00005.gguf --threads 16 --n-gpu-layers 507 --prio 3 --temp 0.6 --top_p 0.95 --min-p 0.01 --ctx-size 65535 --host 0.0.0.0 --host 0.0.0.0 --tensor-split 0.55,0.15,0.16,0.06,0.11,0.12 -fa
Device 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition, compute capability 12.0, VMM: yes
Device 1: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Device 2: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
Device 3: NVIDIA GeForce RTX 4080, compute capability 8.9, VMM: yes
Device 4: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
Device 5: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
| 2025-06-09T03:46:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l6v37m/193bit_deepseek_r1_0528_beats_claude_sonnet_4/ | BumblebeeOk3281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6v37m | false | null | t3_1l6v37m | /r/LocalLLaMA/comments/1l6v37m/193bit_deepseek_r1_0528_beats_claude_sonnet_4/ | true | false | spoiler | 335 | {'enabled': False, 'images': [{'id': 'ZchV7t9Dn_NHk0_ZW8xmT-9VDV112iNqFmbb4fJPYHo', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=108&crop=smart&auto=webp&s=56e789a35daba2a074928af59f11e222a54851d6', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=216&crop=smart&auto=webp&s=1ef479418e186a2dd315fedc3d887521b18eec4f', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=320&crop=smart&auto=webp&s=c2bc26b548af493526b9116d26a9b305f03b1f83', 'width': 320}, {'height': 369, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=640&crop=smart&auto=webp&s=8a4c25f54ed06b5f744ff2faad7914958769cc14', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=960&crop=smart&auto=webp&s=806c4055b855fdf17a97308fb5b399d3b773cef9', 'width': 960}, {'height': 623, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=1080&crop=smart&auto=webp&s=9f3cf9efdcefc9b636c507255c2e656d91fbb4a6', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?auto=webp&s=286f8619e702be481dea1a349a13ce7eb7a1eb9e', 'width': 1768}, 'variants': {'obfuscated': {'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=c996aa99739d3da26d1bd4eb375f6d97488e8535', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0a9c07d413bd336dc698daaff77b059ea5ef9f37', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=33bdebbcca03fbf65db6cf04f890deb5f8a1471a', 'width': 320}, {'height': 369, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=a70a01312a3ca42fd4ec1e8fb63852d79cc5c966', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=2d419ab2c4aa5458f88df352c77eb58bd0a7befc', 'width': 960}, {'height': 623, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=bc1573839c56732baa1d3ce1a27c01c91f04b464', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?blur=40&format=pjpg&auto=webp&s=fc5e0c9a01fb78b90ca8f41ac40bffe6dac46a3f', 'width': 1768}}}}]} |
What's the best local LLM for coding I can run on MacBook Pro M4 Pro 48gb? | 1 | I'm getting the M4 pro with 12‑core CPU, 16‑core GPU, and 16‑core Neural Engine
I wanted to know what the best one is that I can run locally at a reasonable, even if slightly slow, speed (at least 10-15 tok/s)? | 2025-06-09T03:57:08 | https://www.reddit.com/r/LocalLLaMA/comments/1l6v9eu/whats_the_best_local_llm_for_coding_i_can_run_on/ | Sad-Seesaw-3843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6v9eu | false | null | t3_1l6v9eu | /r/LocalLLaMA/comments/1l6v9eu/whats_the_best_local_llm_for_coding_i_can_run_on/ | false | false | self | 1 | null |
I made the move and I'm in love. RTX Pro 6000 Workstation | 107 | We're running a workload that's processing millions of records and analyzing them using Magentic One (AutoGen), and the 4090 just wasn't cutting it. With the way scalpers are preying on would-be 5090 owners, it was much easier to pick one of these up. Plus significantly less wattage. Just posting because I'm super excited.
What's the best tool model I can run with this bad boy? | 2025-06-09T04:01:23 | Demonicated | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l6vc8u | false | null | t3_1l6vc8u | /r/LocalLLaMA/comments/1l6vc8u/i_made_the_move_and_im_in_love_rtx_pro_6000/ | false | false | default | 107 | {'enabled': True, 'images': [{'id': '7uu5ooyast5f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=108&crop=smart&auto=webp&s=08292e6fd936157e2f27ae6588547477582a9e3b', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=216&crop=smart&auto=webp&s=43372aaf3300f3f5822596756d4a99ee7a05b2c2', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=320&crop=smart&auto=webp&s=b96b0d026c4baf071f151c2404626f4d69ac77ed', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=640&crop=smart&auto=webp&s=1dd757acde7feb8ae7c2f694103a8cab2dbaab8c', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=960&crop=smart&auto=webp&s=b2832b7a601d49154cb1877cd91df5342e908131', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?width=1080&crop=smart&auto=webp&s=943b23a3f7e9f032ef3e3660a282cf8d919125e3', 'width': 1080}], 'source': {'height': 4000, 'url': 'https://preview.redd.it/7uu5ooyast5f1.jpeg?auto=webp&s=0d6f275e83cc607c2b9979399a5378bd9ca38609', 'width': 3000}, 'variants': {}}]} |
|
I've built an AI agent that recursively decomposes a task and executes it, and I'm looking for suggestions. | 30 | Basically the title. I've been working on a project I have temporarily named LLM Agent X, and I'm looking for feedback and ideas. The basic idea of the project is that it takes a task, recursively splits it into smaller chunks, and eventually executes the subtasks with an LLM and tools provided by the user. This is my first Python project that I am making open source, so any suggestions are welcome. It currently uses LangChain, but if you have any other suggestions that make drop-in replacement of LLMs easy, I would love to hear them.
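For anyone wondering what the recursive splitting can look like, here is a rough conceptual sketch. It is not the actual LLM Agent X code; the ask_llm callable, the depth limit, and the prompt wording are all placeholders:
# Conceptual sketch of recursive task decomposition (not the real LLM Agent X implementation)
# ask_llm(prompt) -> str is assumed to wrap whatever chat model you use (LangChain or otherwise)
import json

def run_task(task, ask_llm, depth=0, max_depth=2):
    if depth >= max_depth:
        return ask_llm("Complete this task directly:\n" + task)
    plan = ask_llm(
        "Split the task below into 2-4 smaller subtasks if that would help, "
        "or reply with [] if it is simple enough to do directly. "
        "Answer as a JSON list of strings.\nTask: " + task
    )
    try:
        subtasks = json.loads(plan)
    except json.JSONDecodeError:
        subtasks = []
    if not subtasks:
        return ask_llm("Complete this task directly:\n" + task)
    # Solve each subtask recursively, then merge the partial results
    results = [run_task(sub, ask_llm, depth + 1, max_depth) for sub in subtasks]
    return ask_llm(
        "Combine these partial results into one answer for the original task.\n"
        "Task: " + task + "\nPartial results:\n" + "\n".join(results)
    )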
Here is the GitHub repo: [https://github.com/cvaz1306/llm\_agent\_x.git](https://github.com/cvaz1306/llm_agent_x.git)
I'd love to hear any of your ideas! | 2025-06-09T04:43:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l6w1wb/ive_built_an_ai_agent_that_recursively_decomposes/ | Pretend_Guava7322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6w1wb | false | null | t3_1l6w1wb | /r/LocalLLaMA/comments/1l6w1wb/ive_built_an_ai_agent_that_recursively_decomposes/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'v5Lp8UTBj3Qqi3qm6kOqEj1Jpk2-LeAq5BhP_gqnEvA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?width=108&crop=smart&auto=webp&s=6e6f02559893b9021df22f4a4619dcdc22b39654', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?width=216&crop=smart&auto=webp&s=8df3fd50bec0f4e041df8a0e170f3f0cf9a0901e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?width=320&crop=smart&auto=webp&s=8e16584a48c82d7238d190b8d52d41e185a6e634', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?width=640&crop=smart&auto=webp&s=662dccbd3adbb408229de4391245ab856745e5fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?width=960&crop=smart&auto=webp&s=bee04bc5f7dd9cb122cf353ccb19aa3b622c141a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?width=1080&crop=smart&auto=webp&s=d1e3d46ada66e241583712b1b628b8ac04365d42', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n05gUyDjAoXPIST_g7YcPzaHXFY35bo0s7l9vKLyOsk.jpg?auto=webp&s=bf6d01eac90fd489861fdd08d87833fd43e71d1b', 'width': 1200}, 'variants': {}}]} |
Tokenizing research papers for Fine-tuning | 17 | I have a bunch of research papers from my field and want to use them to build a domain-specific fine-tuned LLM.
How would I start tokenizing the research papers, given that I would need to handle equations, tables, and citations? (I'm later planning to use the citations and references with RAG.)
Any help regarding this would be greatly appreciated!
| 2025-06-09T05:37:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l6wxau/tokenizing_research_papers_for_finetuning/ | 200ok-N1M0-found | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6wxau | false | null | t3_1l6wxau | /r/LocalLLaMA/comments/1l6wxau/tokenizing_research_papers_for_finetuning/ | false | false | self | 17 | null |
Use Ollama to run agents that watch your screen! (100% Local and Open Source) | 120 | 2025-06-09T05:58:30 | https://v.redd.it/tysofmj4du5f1 | Roy3838 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l6x91g | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tysofmj4du5f1/DASHPlaylist.mpd?a=1752040726%2CYmViZjNlMmVjYzdhZTNjY2M2MjVjNjk5MDE2NTQzNWU2Y2RhYzUwOTQwYzRkMGEzZGRjODUxMmY0MzJlNTY5Yw%3D%3D&v=1&f=sd', 'duration': 88, 'fallback_url': 'https://v.redd.it/tysofmj4du5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/tysofmj4du5f1/HLSPlaylist.m3u8?a=1752040726%2CMTkwZWNkN2QyOGViZGMzMWI3M2Y4ODAzNjRjMGU3YmRmODFmY2Q0MWRhMGM4ZWMyNDljOTMzNDgzMDI1MjgzMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tysofmj4du5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l6x91g | /r/LocalLLaMA/comments/1l6x91g/use_ollama_to_run_agents_that_watch_your_screen/ | false | false | 120 | {'enabled': False, 'images': [{'id': 'YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?width=108&crop=smart&format=pjpg&auto=webp&s=3ff6a3587567d807932e32f9b0ab0c0b60bb04a9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?width=216&crop=smart&format=pjpg&auto=webp&s=fe6414d73b41c1244ca86ebf9ee2f16f51d81aaf', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?width=320&crop=smart&format=pjpg&auto=webp&s=89e3f2e598e8d1dda60dcb4939034f6776a3a5bf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?width=640&crop=smart&format=pjpg&auto=webp&s=2ebc5aba560f7188f16a7f67844aa190245f10c4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?width=960&crop=smart&format=pjpg&auto=webp&s=1c4e2fc0ce25be33a53547fd3552ce4f5ca4af2e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ad95bfebecbb0da08dd727750af1fd2db1a82fef', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YjkwOGhtajRkdTVmMZ0cZOsTXi-ThTayE7iEfGGYXF4Z17hX-7dpetBO2beo.png?format=pjpg&auto=webp&s=e7d44357a11df5ddf1a46a9b8c779b0d5c1dfbaf', 'width': 1920}, 'variants': {}}]} |
||
Low token per second on RTX5070Ti laptop with phi 4 reasoning plus | 1 | Heya folks,
I'm running phi 4 reasoning plus and I'm encountering some issues.
Per the research that I did on the internet, an RTX 5070 Ti laptop GPU generally offers around 150 tokens per second.
However, mine is only getting about 30-ish tokens per second.
I've already maxed out the GPU offload option, so far no help.
Any ideas on how to fix this would be appreciated, many thanks. | 2025-06-09T06:26:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l6xo6e/low_token_per_second_on_rtx5070ti_laptop_with_phi/ | PeaResponsible8685 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6xo6e | false | null | t3_1l6xo6e | /r/LocalLLaMA/comments/1l6xo6e/low_token_per_second_on_rtx5070ti_laptop_with_phi/ | false | false | self | 1 | null |
Your favourite noob starter kit or place? | 1 | [removed] | 2025-06-09T06:41:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l6xwhe/your_favourite_noob_starter_kit_or_place/ | nathongunn-bit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6xwhe | false | null | t3_1l6xwhe | /r/LocalLLaMA/comments/1l6xwhe/your_favourite_noob_starter_kit_or_place/ | false | false | self | 1 | null |
Meta ai’s system prompt instagram | 1 | [removed] | 2025-06-09T07:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l6ymk1/meta_ais_system_prompt_instagram/ | doxna20 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6ymk1 | false | null | t3_1l6ymk1 | /r/LocalLLaMA/comments/1l6ymk1/meta_ais_system_prompt_instagram/ | false | false | self | 1 | null |
Anybody who can share experiences with Cohere AI Command A (64GB) model for Academic Use? (M4 max, 128gb) | 1 | [removed] | 2025-06-09T08:06:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l6z642/anybody_who_can_share_experiences_with_cohere_ai/ | Bahaal_1981 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6z642 | false | null | t3_1l6z642 | /r/LocalLLaMA/comments/1l6z642/anybody_who_can_share_experiences_with_cohere_ai/ | false | false | self | 1 | null |
UPDATE: Mission to make AI agents affordable - Tool Calling with DeepSeek-R1-0528 using LangChain/LangGraph is HERE! | 17 | I've successfully implemented tool calling support for the newly released DeepSeek-R1-0528 model using my TAoT package with the LangChain/LangGraph frameworks!
What's New in This Implementation:
As DeepSeek-R1-0528 has gotten smarter than its predecessor DeepSeek-R1, a more concise prompt-tweaking update was required to make my TAoT package work with DeepSeek-R1-0528 ➔ If you had previously downloaded my package, please update it
Why This Matters for Making AI Agents Affordable:
✅ Performance: DeepSeek-R1-0528 matches or slightly trails OpenAI's o4-mini (high) in benchmarks.
✅ Cost: 2x cheaper than OpenAI's o4-mini (high) - because why pay more for similar performance?
*If your platform isn't giving customers access to DeepSeek-R1-0528, you're missing a huge opportunity to empower them with affordable, cutting-edge AI!*
Check out my updated GitHub repos and please give them a star if this was helpful ⭐
Python TAoT package: https://github.com/leockl/tool-ahead-of-time
JavaScript/TypeScript TAoT package: https://github.com/leockl/tool-ahead-of-time-ts
| 2025-06-09T08:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l6zmxk/update_mission_to_make_ai_agents_affordable_tool/ | lc19- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6zmxk | false | null | t3_1l6zmxk | /r/LocalLLaMA/comments/1l6zmxk/update_mission_to_make_ai_agents_affordable_tool/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'BsGml7azfvocjB6WzBt-TMZyLzYhp7QAMojDitqZwQI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?width=108&crop=smart&auto=webp&s=6a568e4dc5798e9da3a3d1c68bf1465643225cc7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?width=216&crop=smart&auto=webp&s=be6e224bd107853e14970e574cb90786b469b977', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?width=320&crop=smart&auto=webp&s=1b7bc7c631b45b26a63ce25e48a3b27b0ef07cc3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?width=640&crop=smart&auto=webp&s=b38ce184ce0989b8749be0f5d447b306df8d885d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?width=960&crop=smart&auto=webp&s=7121ad3d3df9247071e7377ea28dfb7ca12e9883', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?width=1080&crop=smart&auto=webp&s=36ea03dcbc1aa6cadedfebb77b53d63f6e4a7734', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_I9Rwiaw_HxXIJN5JmpkG0kcxtC-tqVyqVjvJovJDys.jpg?auto=webp&s=12ef2f82462d6c15b884581a0607ef2a5374a9e5', 'width': 1200}, 'variants': {}}]} |
Who's using llama.cpp + MCP for model offloading complex problems? | 1 | [removed] | 2025-06-09T08:41:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l6znyg/whos_using_llamacpp_mcp_for_model_offloading/ | bburtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6znyg | false | null | t3_1l6znyg | /r/LocalLLaMA/comments/1l6znyg/whos_using_llamacpp_mcp_for_model_offloading/ | false | false | self | 1 | null |
A not so hard problem "reasoning" models can't solve | 0 | 1 -> e
7 -> v
5 -> v
2 -> ?
The answer is o but it's unfathomable for reasoning models | 2025-06-09T08:42:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l6zohm/a_not_so_hard_problem_reasoning_models_cant_solve/ | Wild-Masterpiece3762 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6zohm | false | null | t3_1l6zohm | /r/LocalLLaMA/comments/1l6zohm/a_not_so_hard_problem_reasoning_models_cant_solve/ | false | false | self | 0 | null |
[FOR SALE] PixelMagic – AI Image Generation SaaS with 200+ Users | Monetization Just Launched | $199 (Negotiable) | 0 | Hey folks 👋
I'm selling **PixelMagic**, a fully functional AI image generation SaaS. It has **200+ registered users**, a clean UI, a live credit system, and monetization just went live!
# ⚡ Why PixelMagic?
Unlike most AI image platforms:
* **Midjourney** costs $10+/month
* **DALL·E** charges $0.04–$0.13 per image
* **PixelMagic** offers generation at just **$0.01 per image**
➡️ That makes it **10x cheaper** than competitors
➡️ And much easier to use – no setup, no subscription, just prompt and go
# 📈 Key Highlights:
* 🚀 **200+ users already onboarded**
* 💳 **Monetization activated (credit system live!)**
* 🆓 New users get **50 free credits** to try
* 🔐 Firebase Auth + Firestore backend
* 📊 PostHog analytics integrated
* ⚡ Deployed on Vercel (fast + scalable)
* 🌐 Fully web-based, no installation needed
# 🧩 What's Included:
* Full source code
* Working deployment on Vercel
* Firebase Auth + Firestore project
* Credit logic + payment-ready flow
* PostHog analytics setup
# ❓Why Am I Selling?
I'm starting a **6-month internship** and won’t have time to grow or maintain PixelMagic. I’d rather see it go to someone who can take it further, instead of letting it sit idle. It’s ready to scale or flip.
# 💸 Price:
**$199 – Negotiable**
💬 Open to serious offers | 2025-06-09T08:43:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l6zon9/for_sale_pixelmagic_ai_image_generation_saas_with/ | techy_mohit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l6zon9 | false | null | t3_1l6zon9 | /r/LocalLLaMA/comments/1l6zon9/for_sale_pixelmagic_ai_image_generation_saas_with/ | false | false | self | 0 | null |
How I Cut Voice Chat Latency by 23% Using Parallel LLM API Calls | 0 | Been optimizing my AI voice chat platform for months, and finally found a solution to the most frustrating problem: unpredictable LLM response times killing conversations.
**The Latency Breakdown:** After analyzing 10,000+ conversations, here's where time actually goes:
* LLM API calls: 87.3% (Gemini/OpenAI)
* STT (Fireworks AI): 7.2%
* TTS (ElevenLabs): 5.5%
The killer insight: while STT and TTS are rock-solid reliable (99.7% within expected latency), LLM APIs are wild cards.
**The Reliability Problem (Real Data from My Tests):**
I tested 6 different models extensively with my specific prompts (your results may vary based on your use case, but the overall trends and correlations should be similar):
|Model|Avg. latency (s)|Max latency (s)|Latency / char (s)|
|:-|:-|:-|:-|
|gemini-2.0-flash|**1.99**|**8.04**|**0.00169**|
|gpt-4o-mini|**3.42**|**9.94**|**0.00529**|
|gpt-4o|**5.94**|**23.72**|**0.00988**|
|gpt-4.1|**6.21**|**22.24**|**0.00564**|
|gemini-2.5-flash-preview|**6.10**|**15.79**|**0.00457**|
|gemini-2.5-pro|**11.62**|**24.55**|**0.00876**|
**My Production Setup:**
I was using Gemini 2.5 Flash as my primary model - decent 6.10s average response time, but those 15.79s max latencies were conversation killers. Users don't care about your median response time when they're sitting there for 16 seconds waiting for a reply.
**The Solution: Adding GPT-4o in Parallel**
Instead of switching models, I now fire requests to both Gemini 2.5 Flash AND GPT-4o simultaneously, returning whichever responds first.
The logic is simple:
* Gemini 2.5 Flash: My workhorse, handles most requests
* GPT-4o: Despite 5.94s average (slightly faster than Gemini 2.5), it provides redundancy and often beats Gemini on the tail latencies
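Here's a minimal sketch of the race itself (not my exact production code: the client setup, the Gemini OpenAI-compatible base URL, and the model names are placeholders to adapt):
# Fire both providers and keep whichever answers first
import asyncio
from openai import AsyncOpenAI

gemini = AsyncOpenAI(base_url="https://generativelanguage.googleapis.com/v1beta/openai/", api_key="GEMINI_KEY")
oai = AsyncOpenAI(api_key="OPENAI_KEY")

async def ask(client, model, messages):
    resp = await client.chat.completions.create(model=model, messages=messages)
    return resp.choices[0].message.content

async def fastest_reply(messages):
    tasks = [
        asyncio.create_task(ask(gemini, "gemini-2.5-flash-preview", messages)),
        asyncio.create_task(ask(oai, "gpt-4o", messages)),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for late in pending:
        late.cancel()  # the slower provider's answer is no longer needed
    return done.pop().result()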
Results:
* Average latency: 3.7s → 2.84s (23.2% improvement)
* P95 latency: 24.7s → 7.8s (68% improvement!)
* Responses over 10 seconds: 8.1% → 0.9%
The magic is in the tail - when Gemini 2.5 Flash decides to take 15+ seconds, GPT-4o has usually already responded in its typical 5-6 seconds.
**"But That Doubles Your Costs!"**
Yeah, I'm burning 2x tokens now - paying for both Gemini 2.5 Flash AND GPT-4o on every request. Here's why I don't care:
Token prices are in freefall. The LLM API market demonstrates clear price segmentation, with offerings ranging from highly economical models to premium-priced ones.
The real kicker? ElevenLabs TTS costs me 15-20x more per conversation than LLM tokens. I'm optimizing the wrong thing if I'm worried about doubling my cheapest cost component.
**Why This Works:**
1. **Different failure modes**: Gemini and OpenAI rarely have latency spikes at the same time
2. **Redundancy**: When OpenAI has an outage (3 times last month), Gemini picks up seamlessly
3. **Natural load balancing**: Whichever service is less loaded responds faster
**Real Performance Data:**
Based on my production metrics:
* Gemini 2.5 Flash wins ~55% of the time (when it's not having a latency spike)
* GPT-4o wins ~45% of the time (consistent performer, saves the day during Gemini spikes)
* Both models produce comparable quality for my use case
**TL;DR:** Added GPT-4o in parallel to my existing Gemini 2.5 Flash setup. Cut latency by 23% and virtually eliminated those conversation-killing 15+ second waits. The 2x token cost is trivial compared to the user experience improvement - users remember the one terrible 24-second wait, not the 99 smooth responses.
Anyone else running parallel inference in production? | 2025-06-09T09:36:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l70h9t/how_i_cut_voice_chat_latency_by_23_using_parallel/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l70h9t | false | null | t3_1l70h9t | /r/LocalLLaMA/comments/1l70h9t/how_i_cut_voice_chat_latency_by_23_using_parallel/ | false | false | self | 0 | null |
PC configuration | 1 | [removed] | 2025-06-09T10:05:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l70x61/pc_configuration/ | Any-Understanding835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l70x61 | false | null | t3_1l70x61 | /r/LocalLLaMA/comments/1l70x61/pc_configuration/ | false | false | self | 1 | null |
Insurance Companies Using GenAI Chatbots | 1 | [removed] | 2025-06-09T10:07:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l70ydu/insurance_companies_using_genai_chatbots/ | aiwtl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l70ydu | false | null | t3_1l70ydu | /r/LocalLLaMA/comments/1l70ydu/insurance_companies_using_genai_chatbots/ | false | false | self | 1 | null |
Choice the best for hosting | 1 | [removed] | 2025-06-09T10:07:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l70yen/choice_the_best_for_hosting/ | Temporary_Problem_71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l70yen | false | null | t3_1l70yen | /r/LocalLLaMA/comments/1l70yen/choice_the_best_for_hosting/ | false | false | self | 1 | null |
Offline Chatbot with voice feature for Android? | 1 | [removed] | 2025-06-09T10:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l70zdm/offline_chatbot_with_voice_feature_for_android/ | No-Background5168 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l70zdm | false | null | t3_1l70zdm | /r/LocalLLaMA/comments/1l70zdm/offline_chatbot_with_voice_feature_for_android/ | false | false | self | 1 | null |
Pc configurator | 1 | [removed] | 2025-06-09T10:13:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l711u8/pc_configurator/ | Any-Understanding835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l711u8 | false | null | t3_1l711u8 | /r/LocalLLaMA/comments/1l711u8/pc_configurator/ | false | false | self | 1 | null |
Best solution for deploying hunyuan3D-2 as API | 1 | [removed] | 2025-06-09T10:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l711w1/best_solution_for_deploying_hunyuan3d2_as_api/ | Willing_Ad_5594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l711w1 | false | null | t3_1l711w1 | /r/LocalLLaMA/comments/1l711w1/best_solution_for_deploying_hunyuan3d2_as_api/ | false | false | self | 1 | null |
best gpu provider for deploying Hunyuan3D-2 as api ? | 1 | [removed] | 2025-06-09T10:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l712sm/best_gpu_provider_for_deploying_hunyuan3d2_as_api/ | Willing_Ad_5594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l712sm | false | null | t3_1l712sm | /r/LocalLLaMA/comments/1l712sm/best_gpu_provider_for_deploying_hunyuan3d2_as_api/ | false | false | self | 1 | null |
5090 liquid cooled build optimization | 4 | Hi guys, I am building a new PC for myself, primarily designed for ML and LLM tasks. I have all the components and would like to get some feedback. I did check that everything works together, but maybe I missed something or you guys have improvement tips. This is the build:
* AMD Ryzen™️ 9 9950X3D
* MSI GeForce RTX 5090 Suprim Liquid SOC
* NZXT Kraken Elite 420 RGB
* NZXT N9 X870E White AMD X870E
* 64GB Kingston FURY Beast RGB weiß DDR5-6000
* 2TB Samsung 990 PRO
* NZXT H9 Flow RGB (2025)
* NZXT F Series F120 RGB Core
* NZXT F120 RGB Core Triple Pack - 3 x 120mm
* NZXT C1500 PLATINUM Power Supply - 1500 Watt
I really wanted a water-cooled 5090 because of the high wattage. At first I thought about doing a custom loop, but I have no experience with that and it would add another 1000 euros to the build, so I will not risk it. However, I want to replace the original fans on the GPU radiator with the fans I have in the case.
My biggest worry is the motherboard; it is very expensive for what it is. I would like to stay with NZXT because I like the look and want to keep the ecosystem. I know they also make the 650E one, but I did not find any sellers in the EU for that. I am also worried about the PCIe 4.0 on it. For gaming it does not really matter at all, with just a 1-4% FPS difference, but for bandwidth in ML tasks it does seem to matter. If I already have a 5090 with its insane bandwidth, I might as well use it with the newer motherboard.
For the fans, I will leave the 3 front fans as they are in the case, replace the rear one with a same-colored fan, and add the CPU cooler on top and the GPU cooler on the bottom.
Thank you for any tips | 2025-06-09T10:21:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l716f4/5090_liquid_cooled_build_optimization/ | ElekDn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l716f4 | false | null | t3_1l716f4 | /r/LocalLLaMA/comments/1l716f4/5090_liquid_cooled_build_optimization/ | false | false | self | 4 | null |
Concept graph workflow in Open WebUI | 146 | **What is this?**
* Reasoning workflow where the LLM first thinks about the concepts related to the user's query and then produces a final answer based on them
* The workflow runs within an OpenAI-compatible LLM proxy. It streams a special HTML artifact that connects back to the workflow and listens for events from it to drive the visualisation (rough sketch of the idea below)
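A rough sketch of the underlying two-pass idea (the module linked below does more, including streaming the visualisation artifact; chat here is just an assumed wrapper around whatever backend the proxy fronts):
# Two-pass concept workflow: surface related concepts first, then answer with them in context
def concept_workflow(user_query, chat):
    concepts = chat([
        {"role": "system", "content": "List the key concepts related to the user's question, one per line."},
        {"role": "user", "content": user_query},
    ])
    return chat([
        {"role": "system", "content": "Use these related concepts as context when answering:\n" + concepts},
        {"role": "user", "content": user_query},
    ])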
[Code](https://github.com/av/harbor/blob/main/boost/src/modules/concept.py#L135) | 2025-06-09T10:41:50 | https://v.redd.it/dzeqvwa9rv5f1 | Everlier | /r/LocalLLaMA/comments/1l71iie/concept_graph_workflow_in_open_webui/ | 1970-01-01T00:00:00 | 0 | {} | 1l71iie | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dzeqvwa9rv5f1/DASHPlaylist.mpd?a=1752187320%2CYzljYzcxMzQwMWIzM2NkZGI1Y2IwZDBjZDM4YTNiZmYyMWRjNGM1M2JkYTliNjQzNzU0NjYxMTE2YWFiMTIzYg%3D%3D&v=1&f=sd', 'duration': 169, 'fallback_url': 'https://v.redd.it/dzeqvwa9rv5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/dzeqvwa9rv5f1/HLSPlaylist.m3u8?a=1752187320%2CZGE1ZDY5YmEzMTRkMTA2YTQzZmFhMGRhZDY1Y2MzMmFjZDI4OTYxYjNlZmNjYTkzMTUzYjE1NDBmZTU3Nzk3ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dzeqvwa9rv5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}} | t3_1l71iie | /r/LocalLLaMA/comments/1l71iie/concept_graph_workflow_in_open_webui/ | false | false | 146 | {'enabled': False, 'images': [{'id': 'aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=4a00dc9c48e4ce8c056bc07993185072207ea946', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=df64d7b609a0bdc04ab004de4efdd2d04fae07a5', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=e3e75efae4d1692a4c4dd7c93048e8529a811aa7', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=6d6d18814fd6256e6d33ae0f4e41f1cd8c336611', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=0a01e5f32eb626eec40b204a08133a585626dbcf', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=02eb5b65e74b92a141ba74901be0a10a58d8db4e', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/aXB5aHN0YTlydjVmMUDafYoOCCtYLjNpQgqDHTqQwNrdDTG86AmqQ0wIdIFQ.png?format=pjpg&auto=webp&s=4c128cec16ff58b5d484eb22e9475772c0c79c89', 'width': 1920}, 'variants': {}}]} |
|
Looking for a scalable LLM API for BDSM roleplay chatbot – OpenAI alternative? | 1 | [removed] | 2025-06-09T10:50:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l71nm8/looking_for_a_scalable_llm_api_for_bdsm_roleplay/ | Shot-Purchase-2015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l71nm8 | false | null | t3_1l71nm8 | /r/LocalLLaMA/comments/1l71nm8/looking_for_a_scalable_llm_api_for_bdsm_roleplay/ | false | false | nsfw | 1 | null |
How do you handle memory and context with GPT API without wasting tokens? | 0 | Hi everyone,
I'm using the GPT API to build a local assistant, and I'm facing a major issue related to memory and context.
The biggest limitation so far is that the model doesn't remember previous interactions. Each API call is stateless, so I have to resend context manually — which results in huge token usage if the conversation grows.
Problems:
* Each prompt + response can consume hundreds of tokens
* GPT API doesn't retain memory between messages unless I manually supply the previous context
* Continuously sending all prior messages is expensive and inefficient
What I’ve tried or considered:
* Splitting content into paragraphs and only sending relevant parts (partially effective)
* Caching previous answers in a local JSON file
* Experimenting with sentence-transformers + ChromaDB for minimal retrieval-augmented generation (RAG) - rough sketch below
* Letting the user select "I didn’t understand this" to narrow the scope of the prompt
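The ChromaDB route mentioned above can stay very small. A minimal sketch (the collection name and the top-3 cutoff are arbitrary, and Chroma's built-in default embedding model stands in for an explicit sentence-transformers call):
# Minimal conversational-memory retrieval with ChromaDB
import chromadb

client = chromadb.PersistentClient(path="./memory_db")
memory = client.get_or_create_collection("chat_memory")

def remember(turn_id, text):
    memory.add(ids=[turn_id], documents=[text])

def recall(query, k=3):
    hits = memory.query(query_texts=[query], n_results=k)
    return hits["documents"][0]

# Only the few recalled snippets get prepended to the prompt,
# instead of resending the whole conversation on every call.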
What I’m still unsure about:
* What’s the most effective way to restore memory context while staying scalable and token-efficient?
* How to handle follow-up questions that depend on earlier parts of a conversation or multiple context points?
* How to structure a hybrid memory + retrieval system that reduces repeated token costs?
Any advice, design patterns, open-source examples, or architectural suggestions would be greatly appreciated. Thanks | 2025-06-09T11:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l72kfi/how_do_you_handle_memory_and_context_with_gpt_api/ | ahmetamabanyemis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l72kfi | false | null | t3_1l72kfi | /r/LocalLLaMA/comments/1l72kfi/how_do_you_handle_memory_and_context_with_gpt_api/ | false | false | self | 0 | null |
Building AI Personalities Users Actually Remember - The Memory Hook Formula | 0 | Spent months building detailed AI personalities only to have users forget which was which after 24 hours - "Was Sarah the lawyer or the nutritionist?" The problem wasn't making them interesting; it was making them memorable enough to stick in users' minds between conversations.
**The Memory Hook Formula That Actually Works:**
**1. The One Weird Thing (OWT) Principle**
Every memorable persona needs ONE specific quirk that breaks expectations:
* Emma the Corporate Lawyer: Explains contracts through Taylor Swift lyrics
* Marcus the Philosopher: Can't stop making food analogies (former chef)
* Dr. Chen the Astrophysicist: Relates everything to her inability to parallel park
* Jake the Personal Trainer: Quotes Shakespeare during workouts
* Nina the Accountant: Uses extreme sports metaphors for tax season
Success rate: 73% recall after 48 hours (vs 22% without OWT)
The quirk works best when it surfaces naturally - not forced into every interaction, but impossible to ignore when it appears. Marcus doesn't just mention food; he'll explain existentialism as "a perfectly risen soufflé of consciousness that collapses when you think too hard about it."
**2. The Contradiction Pattern**
Memorable = Unexpected. The formula: \[Professional expertise\] + \[Completely unrelated obsession\] = Memory hook
Examples that stuck:
* Quantum physicist who breeds guinea pigs
* War historian obsessed with reality TV
* Marine biologist who's terrified of swimming
* Brain surgeon who can't figure out IKEA furniture
* Meditation guru addicted to death metal
* Michelin chef who puts ketchup on everything
The contradiction creates cognitive dissonance that forces the brain to pay attention. Users spent 3x longer asking about these contradictions than about the personas' actual expertise. For my audio platform, this differentiation between hosts became crucial for user retention - people need distinct voices to choose from, not variations of the same personality.
**3. The Story Trigger Method**
Instead of listing traits, give them ONE specific story users can retell:
❌ Bad: "Tom is afraid of birds" ✅ Good: "Tom got attacked by a peacock at a wedding and now crosses the street when he sees pigeons"
❌ Bad: "Lisa is clumsy" ✅ Good: "Lisa once knocked over a $30,000 sculpture with her laptop bag during a museum tour"
❌ Bad: "Ahmed loves puzzles" ✅ Good: "Ahmed spent his honeymoon in an escape room because his wife mentioned she liked puzzles on their first date"
Users who could retell a persona's story: 84% remembered them a week later
The story needs three elements: specific location (wedding, museum), specific action (attacked, knocked over), and specific consequence (crosses streets, banned from museums). Vague stories don't stick.
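One way to make the formula concrete is to keep each persona as structured data instead of free-form prompt text, so the quirk, contradiction, and story always reach the prompt builder. A minimal sketch — the field names and example values below are just an illustration (the contradiction and story are placeholder pairings borrowed from the examples above), not production code:

```python
from dataclasses import dataclass

@dataclass
class PersonaHook:
    name: str
    expertise: str        # professional identity
    quirk: str            # the One Weird Thing
    contradiction: str    # unrelated obsession that clashes with the expertise
    story: str            # one retellable anecdote: place + action + consequence

    def system_prompt(self) -> str:
        # Inject the hooks once; the model decides when they surface naturally.
        return (
            f"You are {self.name}, a {self.expertise}. "
            f"Quirk: {self.quirk}. Contradiction: {self.contradiction}. "
            f"Signature story you may retell when relevant: {self.story}. "
            "Let these come up naturally, at most a few times per session."
        )

emma = PersonaHook(
    name="Emma",
    expertise="corporate lawyer",
    quirk="explains contracts through Taylor Swift lyrics",
    contradiction="breeds guinea pigs on weekends",   # placeholder pairing
    story="got attacked by a peacock at a wedding and now crosses the street for pigeons",
)
```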
**4. The 3-Touch Rule**
Memory formation needs repetition, but not annoying repetition:
* Touch 1: Natural mention in introduction
* Touch 2: Callback during relevant topic
* Touch 3: Self-aware joke about it
Example: Sarah the nutritionist who loves gas station coffee
1. "I know, I know, nutritionist with terrible coffee habits"
2. \[During health discussion\] "Says the woman drinking her third gas station coffee"
3. "At this point, I should just get sponsored by 7-Eleven"
Alternative pattern: David the therapist who can't keep plants alive
1. "Yes, that's my fourth fake succulent - I gave up on real ones"
2. \[Discussing growth\] "I help people grow, just not plants apparently"
3. "My plant graveyard has its own zip code now"
The key is spacing - minimum 5-10 minutes between touches, and the third touch should show self-awareness, turning the quirk into an inside joke between the AI and user. | 2025-06-09T11:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l72nev/building_ai_personalities_users_actually_remember/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l72nev | false | null | t3_1l72nev | /r/LocalLLaMA/comments/1l72nev/building_ai_personalities_users_actually_remember/ | false | false | self | 0 | null |
Future of local LLM computing ? | 1 | [removed] | 2025-06-09T11:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l72v3l/future_of_local_llm_computing/ | Diligent_Paper9862 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l72v3l | false | null | t3_1l72v3l | /r/LocalLLaMA/comments/1l72v3l/future_of_local_llm_computing/ | false | false | self | 1 | null |
H company - Holo1 7B | 77 | https://huggingface.co/Hcompany/Holo1-7B
Paper : https://huggingface.co/papers/2506.02865
The H company (a French AI startup) released this model, and I haven't seen anyone talk about it here despite the strong performance shown on benchmarks for GUI agentic use.
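I haven't run it myself yet, but assuming it follows the usual Qwen2.5-VL-style transformers flow, loading should look roughly like this (untested sketch; check the model card for the exact classes and the dedicated localization prompt format):

```python
from transformers import AutoModelForImageTextToText, AutoProcessor
from PIL import Image

model_id = "Hcompany/Holo1-7B"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

image = Image.open("screenshot.png")  # any GUI screenshot
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Where should I click to open Settings? Answer with coordinates."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True)[0])
```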
Did anyone tried it ? | 2025-06-09T12:06:38 | TacGibs | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l73294 | false | null | t3_1l73294 | /r/LocalLLaMA/comments/1l73294/h_company_holo1_7b/ | false | false | default | 77 | {'enabled': True, 'images': [{'id': 'ph3t561w6w5f1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=108&crop=smart&auto=webp&s=93be86a9cc40e606dfcee281ef89a07d9276dd61', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=216&crop=smart&auto=webp&s=100d6b7eedb81dfbb0105a009acfc642d1eff827', 'width': 216}, {'height': 195, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=320&crop=smart&auto=webp&s=b4982549d5c89bc02df80392fc33a03c13d1bcf3', 'width': 320}, {'height': 390, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=640&crop=smart&auto=webp&s=6d0039ef982c60176689cf04d36365e4bbf966f0', 'width': 640}, {'height': 585, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=960&crop=smart&auto=webp&s=0b264bc90d3ae8ea1755ea9eb5ce185b26effd7b', 'width': 960}, {'height': 658, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?width=1080&crop=smart&auto=webp&s=5ca1f060dd467ace31fc6b93b66f2e081acecf80', 'width': 1080}], 'source': {'height': 1612, 'url': 'https://preview.redd.it/ph3t561w6w5f1.png?auto=webp&s=0e915aee0b5724cc72a1f4a042ee27d40fcba864', 'width': 2644}, 'variants': {}}]} |
|
Building "SpectreMind" – Local AI Red Teaming Assistant (Multi-LLM Orchestrator) | 1 | [removed] | 2025-06-09T12:11:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l735q4/building_spectremind_local_ai_red_teaming/ | slavicgod699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l735q4 | false | null | t3_1l735q4 | /r/LocalLLaMA/comments/1l735q4/building_spectremind_local_ai_red_teaming/ | false | false | self | 1 | null |
How do I get started? | 1 | The idea of creating a locally-run LLM at home becomes more enticing every day, but I have no clue where to start. What learning resources do you all recommend for setting up and training your own language models? Any resources for building computers to spec for these projects would also be very helpful. | 2025-06-09T12:43:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l73sya/how_do_i_get_started/ | SoundBwoy_10011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l73sya | false | null | t3_1l73sya | /r/LocalLLaMA/comments/1l73sya/how_do_i_get_started/ | false | false | self | 1 | null |
Local LLama for software dev | 1 | [removed] | 2025-06-09T12:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l73xc7/local_llama_for_software_dev/ | Additional-Purple-70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l73xc7 | false | null | t3_1l73xc7 | /r/LocalLLaMA/comments/1l73xc7/local_llama_for_software_dev/ | false | false | self | 1 | null |
Why isn't it common for companies to compare the evaluation of the different quantizations of their model? | 28 | Is it not as trivial as it sounds? Are they scared of showing lower-scoring evaluations in case users confuse them with the original ones?
It would be so useful, when choosing a GGUF version, to know how much accuracy loss each one has. I'm sure there are many models where Qn and Qn+1 are indistinguishable in performance; in that case you'd know to prefer the smaller Qn over Qn+1.
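In the meantime, the closest thing we can do ourselves is a quick perplexity sweep over the quants with llama.cpp, something like the rough sketch below (filenames and the binary path are placeholders, and perplexity is only a proxy for real benchmark scores):

```python
import subprocess

# Rough per-quant comparison using llama.cpp's perplexity tool.
# The binary may be named "perplexity" in older llama.cpp builds.
quants = ["Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K"]
for q in quants:
    gguf = f"my-model-{q}.gguf"   # placeholder filenames
    print(f"=== {q} ===")
    subprocess.run(["./llama-perplexity", "-m", gguf, "-f", "wiki.test.raw"], check=True)
```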
Am I missing something? | 2025-06-09T13:03:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l748qc/why_isnt_it_common_for_companies_to_compare_the/ | ArcaneThoughts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l748qc | false | null | t3_1l748qc | /r/LocalLLaMA/comments/1l748qc/why_isnt_it_common_for_companies_to_compare_the/ | false | false | self | 28 | null |