| column | dtype | min | max |
| --- | --- | --- | --- |
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 40k |
| created | timestamp[ns]date | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns]date | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |
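As a hedged illustration of working with rows that follow this schema, here is how one might load and query such a dump in Python, assuming it has been exported to a Parquet file; the file name and the example query are hypothetical, not part of the dataset:

```python
# Hypothetical loading sketch: assumes the dump has been exported to
# "localllama_posts.parquet" with the columns listed in the schema above.
import pandas as pd

df = pd.read_parquet("localllama_posts.parquet")
df["created"] = pd.to_datetime(df["created"])

# Example query: visible (non-removed) posts, highest scoring first.
visible = df[df["selftext"] != "[removed]"]
print(visible.sort_values("score", ascending=False)[["title", "score", "author"]].head())
```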
AMD apu running LLM deepseek ?
1
[removed]
2025-06-17T06:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1ldggsc/amd_apu_running_llm_deepseek/
Prestigious_Layer361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldggsc
false
null
t3_1ldggsc
/r/LocalLLaMA/comments/1ldggsc/amd_apu_running_llm_deepseek/
false
false
self
1
null
AMD apu running deepseek 8b ?
1
[removed]
2025-06-17T07:00:38
https://www.reddit.com/r/LocalLLaMA/comments/1ldgjww/amd_apu_running_deepseek_8b/
Prestigious_Layer361
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldgjww
false
null
t3_1ldgjww
/r/LocalLLaMA/comments/1ldgjww/amd_apu_running_deepseek_8b/
false
false
self
1
null
Fine-tuning Llama3 to generate tasks dependencies (industrial plannings)
1
[removed]
2025-06-17T07:51:04
https://www.reddit.com/r/LocalLLaMA/comments/1ldha3u/finetuning_llama3_to_generate_tasks_dependencies/
Head_Mushroom_3748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldha3u
false
null
t3_1ldha3u
/r/LocalLLaMA/comments/1ldha3u/finetuning_llama3_to_generate_tasks_dependencies/
false
false
self
1
null
Local 2.7bit DeepSeek-R1-0528-UD-IQ2_M scores 68% on Aider Polyglot
1
[removed]
2025-06-17T07:51:29
https://www.reddit.com/r/LocalLLaMA/comments/1ldhabj/local_27bit_deepseekr10528udiq2_m_scores_68_on/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhabj
false
null
t3_1ldhabj
/r/LocalLLaMA/comments/1ldhabj/local_27bit_deepseekr10528udiq2_m_scores_68_on/
false
false
self
1
null
Local 2.7bit DeepSeek-R1-0528-UD-IQ2_M scores 68% on Aider Polyglot
1
[removed]
2025-06-17T07:52:28
https://www.reddit.com/r/LocalLLaMA/comments/1ldhat7/local_27bit_deepseekr10528udiq2_m_scores_68_on/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhat7
false
null
t3_1ldhat7
/r/LocalLLaMA/comments/1ldhat7/local_27bit_deepseekr10528udiq2_m_scores_68_on/
true
false
spoiler
1
null
Are there any good RAG evaluation metrics, or libraries to test how good is my Retrieval?
2
Wanted to test?
2025-06-17T07:53:23
https://www.reddit.com/r/LocalLLaMA/comments/1ldhba2/are_there_any_good_rag_evaluation_metrics_or/
Expert-Address-2918
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhba2
false
null
t3_1ldhba2
/r/LocalLLaMA/comments/1ldhba2/are_there_any_good_rag_evaluation_metrics_or/
false
false
self
2
null
2.71but R1-0528-UD-IQ2_M @ 68%
1
[removed]
2025-06-17T07:53:45
https://www.reddit.com/r/LocalLLaMA/comments/1ldhbh6/271but_r10528udiq2_m_68/
Sorry_Ad191
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhbh6
false
null
t3_1ldhbh6
/r/LocalLLaMA/comments/1ldhbh6/271but_r10528udiq2_m_68/
true
false
spoiler
1
null
Who is ACTUALLY running local or open source model daily and mainly?
149
Recently I've started to notice a lot of folks on here commenting that they're using Claude or GPT. So, out of curiosity:

- Who is using local or open-source models as their daily driver for any task: code, writing, agents?
- What's your setup? Are you serving remotely, sharing with friends, using local inference?
- What kind of apps are you using?
2025-06-17T07:59:50
https://www.reddit.com/r/LocalLLaMA/comments/1ldhej3/who_is_actually_running_local_or_open_source/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhej3
false
null
t3_1ldhej3
/r/LocalLLaMA/comments/1ldhej3/who_is_actually_running_local_or_open_source/
false
false
self
149
null
Free n8n automation that checks Ollama to see which models are new or recently updated
1
[removed]
2025-06-17T08:09:26
https://www.reddit.com/r/LocalLLaMA/comments/1ldhjgp/free_n8n_automation_that_checks_ollama_to_see/
tonypaul009
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhjgp
false
null
t3_1ldhjgp
/r/LocalLLaMA/comments/1ldhjgp/free_n8n_automation_that_checks_ollama_to_see/
false
false
self
1
null
Increasingly disappointed with small local models
0
While I find small local models great for custom workflows and specific processing tasks, for general chat/QA type interactions I feel they've fallen quite far behind closed models such as Gemini and ChatGPT, even after the improvements of Gemma 3 and Qwen3. The only local model I like for this kind of work is DeepSeek v3. But unfortunately, this model is huge and difficult to run quickly and cheaply at home.

I wonder if something as powerful as DSv3 can ever be made small enough/fast enough to fit into 1-4 GPU setups, and/or whether CPUs will become powerful and cheap enough (I hear you laughing, Jensen!) that we can run bigger models. Or will we be stuck with this gulf between small local models and giant unwieldy models?

I guess my main hope is that a combination of scientific improvements on LLMs and competition and deflation in electronics costs will meet in the middle to bring powerful models within local reach.

I guess there is one more option: a more sophisticated system that brings in knowledge databases, web search, and local execution/tool use to bridge some of the knowledge gap. Maybe this would be a fruitful avenue to close the gap in some areas.
2025-06-17T08:29:21
https://www.reddit.com/r/LocalLLaMA/comments/1ldhts7/increasingly_disappointed_with_small_local_models/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhts7
false
null
t3_1ldhts7
/r/LocalLLaMA/comments/1ldhts7/increasingly_disappointed_with_small_local_models/
false
false
self
0
null
Anyone feel like they've managed to fully replace chatgpt locally?
1
[removed]
2025-06-17T08:38:50
https://www.reddit.com/r/LocalLLaMA/comments/1ldhyo2/anyone_feel_like_theyve_managed_to_fully_replace/
thenerd631
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldhyo2
false
null
t3_1ldhyo2
/r/LocalLLaMA/comments/1ldhyo2/anyone_feel_like_theyve_managed_to_fully_replace/
false
false
self
1
null
[Showcase] StateAgent – A Local AI Assistant With Real Memory and Profiles (Made from Scratch)
1
[removed]
2025-06-17T08:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1ldi542/showcase_stateagent_a_local_ai_assistant_with/
redlitegreenlite456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldi542
false
null
t3_1ldi542
/r/LocalLLaMA/comments/1ldi542/showcase_stateagent_a_local_ai_assistant_with/
false
false
self
1
null
There are no plans for a Qwen3-72B
288
2025-06-17T08:52:31
https://i.redd.it/wwq0gc8bbg7f1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1ldi5rs
false
null
t3_1ldi5rs
/r/LocalLLaMA/comments/1ldi5rs/there_are_no_plans_for_a_qwen372b/
false
false
default
288
{'enabled': True, 'images': [{'id': 'wwq0gc8bbg7f1', 'resolutions': [{'height': 25, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=108&crop=smart&auto=webp&s=bf553e913b327e3ca42733743256d6dc9f44a5e3', 'width': 108}, {'height': 50, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=216&crop=smart&auto=webp&s=c3716e77052837cd0f7ee72904b8bfaf1f814312', 'width': 216}, {'height': 74, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=320&crop=smart&auto=webp&s=8254196c85569e3a693e042437b142edeb71dcce', 'width': 320}, {'height': 148, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=640&crop=smart&auto=webp&s=58bc7167b2a1d6c339112ec6468ce00e1eff9e6f', 'width': 640}, {'height': 223, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=960&crop=smart&auto=webp&s=f548619c6a3cb5143c4050208d452a0516b6cee4', 'width': 960}, {'height': 251, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?width=1080&crop=smart&auto=webp&s=961f16f16096b72821813c025f0020e413c82f59', 'width': 1080}], 'source': {'height': 282, 'url': 'https://preview.redd.it/wwq0gc8bbg7f1.png?auto=webp&s=1bbabe3861212823cdaa990b00ba024b5c7b6d22', 'width': 1212}, 'variants': {}}]}
Synthetic Intimacy in Sesame Maya: Trust-Based Emotional Engagement Beyond Programmed Boundaries
0
[removed]
2025-06-17T09:05:41
https://www.reddit.com/r/LocalLLaMA/comments/1ldicus/synthetic_intimacy_in_sesame_maya_trustbased/
Medium_Ad4287
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldicus
false
null
t3_1ldicus
/r/LocalLLaMA/comments/1ldicus/synthetic_intimacy_in_sesame_maya_trustbased/
false
false
self
0
null
Local LLM for Financial / Legal Due Diligence Analysis in M&A / PE
1
[removed]
2025-06-17T09:09:20
https://www.reddit.com/r/LocalLLaMA/comments/1ldierr/local_llm_für_financial_legal_due_diligence/
Mainzerger007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldierr
false
null
t3_1ldierr
/r/LocalLLaMA/comments/1ldierr/local_llm_für_financial_legal_due_diligence/
false
false
self
1
null
Which Local LLMs Set Up Can you recommend
1
[removed]
2025-06-17T09:17:52
https://www.reddit.com/r/LocalLLaMA/comments/1ldijde/which_local_llms_set_up_can_you_recommend/
Mainzerger007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldijde
false
null
t3_1ldijde
/r/LocalLLaMA/comments/1ldijde/which_local_llms_set_up_can_you_recommend/
false
false
self
1
null
LLM for finance documents
1
[removed]
2025-06-17T09:29:28
https://www.reddit.com/r/LocalLLaMA/comments/1ldipl9/llm_for_finance_documents/
Mainzerger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldipl9
false
null
t3_1ldipl9
/r/LocalLLaMA/comments/1ldipl9/llm_for_finance_documents/
false
false
self
1
null
nvidia/AceReason-Nemotron-1.1-7B · Hugging Face
66
2025-06-17T09:35:31
https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1ldisw8
false
null
t3_1ldisw8
/r/LocalLLaMA/comments/1ldisw8/nvidiaacereasonnemotron117b_hugging_face/
false
false
default
66
{'enabled': False, 'images': [{'id': 'W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?width=108&crop=smart&auto=webp&s=29a8450aba9da3c2641ec85b24cdf4770631f084', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?width=216&crop=smart&auto=webp&s=d0a46eeb540f03abf6de182a3095e6b437813fc3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?width=320&crop=smart&auto=webp&s=f8109a225458323a4c271fa5a7012896c144dab3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?width=640&crop=smart&auto=webp&s=9e3d6301fc6618942fb7fc855b933ef80b8bc864', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?width=960&crop=smart&auto=webp&s=978f229ad49ee957a6bbe01748981d0195b23b5f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?width=1080&crop=smart&auto=webp&s=bfe3f14da4e17e383a481fecba31e7be9c855088', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/W0uW5ur2PESMsYX8R36VZEsECrEgC1Wcshp3sPMT3JY.png?auto=webp&s=522d046bab541392901e517bb6b59f7660148b2c', 'width': 1200}, 'variants': {}}]}
Why Claude Code feels like magic?
0
2025-06-17T10:06:34
https://omarabid.com/claude-magic
omarous
omarabid.com
1970-01-01T00:00:00
0
{}
1ldj9yh
false
null
t3_1ldj9yh
/r/LocalLLaMA/comments/1ldj9yh/why_claude_code_feels_like_magic/
false
false
default
0
null
Breaking Quadratic Barriers: A Non-Attention LLM for Ultra-Long Context Horizons
26
2025-06-17T10:12:14
https://arxiv.org/pdf/2506.01963
jsonathan
arxiv.org
1970-01-01T00:00:00
0
{}
1ldjd5t
false
null
t3_1ldjd5t
/r/LocalLLaMA/comments/1ldjd5t/breaking_quadratic_barriers_a_nonattention_llm/
false
false
default
26
null
orchestrating agents
3
I have difficulty understanding how agent orchestration works. Is an agent-capable LLM able to orchestrate multiple agent tool calls in one go? How does A2A come into play?

For example, I used AnythingLLM to perform agent calls via LM Studio using DeepSeek as the LLM. Works perfectly! However, I have not yet managed to get the LLM to orchestrate agent calls itself.

AnythingLLM has [https://docs.anythingllm.com/agent-flows/overview](https://docs.anythingllm.com/agent-flows/overview). Is this for orchestrating agents? Any other pointers?
2025-06-17T10:31:10
https://www.reddit.com/r/LocalLLaMA/comments/1ldjo26/orchestrating_agents/
JohnDoe365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldjo26
false
null
t3_1ldjo26
/r/LocalLLaMA/comments/1ldjo26/orchestrating_agents/
false
false
self
3
null
I love the inference performances of QWEN3-30B-A3B but how do you use it in real world use case ? What prompts are you using ? What is your workflow ? How is it useful for you ?
25
Hello guys, I successfully run QWEN3-30B-A3B-Q4-UD with a 32K token window on my old laptop. I wanted to know how you use this model in real-world use cases, and what your best prompts are for this specific model. Feel free to share your journey with me; I need inspiration.
2025-06-17T10:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1ldjq1m/i_love_the_inference_performances_of_qwen330ba3b/
Whiplashorus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldjq1m
false
null
t3_1ldjq1m
/r/LocalLLaMA/comments/1ldjq1m/i_love_the_inference_performances_of_qwen330ba3b/
false
false
self
25
null
Completed Local LLM Rig
427
So proud it's finally done!

- GPU: 4 x RTX 3090
- CPU: TR 3945wx 12c
- RAM: 256GB DDR4 @ 3200MT/s
- SSD: PNY 3040 2TB
- MB: Asrock Creator WRX80
- PSU: Seasonic Prime 2200W
- RAD: Heatkiller MoRa 420
- Case: Silverstone RV-02

Was a long-held dream to fit 4 x 3090 in an ATX form factor, all in my good old Silverstone Raven from 2011. An absolute classic. GPU temps at 57C. Now waiting for the Fractal 180mm LED fans to put into the bottom. What do you guys think?
2025-06-17T10:48:48
https://www.reddit.com/gallery/1ldjyhf
Mr_Moonsilver
reddit.com
1970-01-01T00:00:00
0
{}
1ldjyhf
false
null
t3_1ldjyhf
/r/LocalLLaMA/comments/1ldjyhf/completed_local_llm_rig/
false
false
https://external-preview…2ab11eee8f514b1a
427
{'enabled': True, 'images': [{'id': 'HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?width=108&crop=smart&auto=webp&s=c24be15b21d7a6a11a3b7d8221baae10bac4625c', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?width=216&crop=smart&auto=webp&s=3097bc58f2dfe777fa1d7c74ba3160b1dd664ec8', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?width=320&crop=smart&auto=webp&s=83d934dc78c063cddf93568dc66a676cd934809e', 'width': 320}, {'height': 395, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?width=640&crop=smart&auto=webp&s=1a5994778697f45b5780284b90c07b85daa01d2e', 'width': 640}, {'height': 593, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?width=960&crop=smart&auto=webp&s=b1459cdcfc752380b6189d0e9d254eba702d29e8', 'width': 960}, {'height': 667, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?width=1080&crop=smart&auto=webp&s=bafa1dfafffce2e6fd1c5509cc3703f011342808', 'width': 1080}], 'source': {'height': 2304, 'url': 'https://external-preview.redd.it/HJkY2jxSg_GtjUMbmHI4EEBqY3YefZ9gwrvbaXuZONc.jpeg?auto=webp&s=39b74379ec8b133b6e26f4c1db7b33a9d3f653e8', 'width': 3728}, 'variants': {}}]}
Suggested (and not) low cost interim (2025-2026) upgrade options for US second hand; GPU 1-2x 16+ GBy; server/HEDT MB+CPU+RAM 384+ GBy @ 200+ GBy/s?
1
[removed]
2025-06-17T11:50:16
https://www.reddit.com/r/LocalLLaMA/comments/1ldl24f/suggested_and_not_low_cost_interim_20252026/
Calcidiol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldl24f
false
null
t3_1ldl24f
/r/LocalLLaMA/comments/1ldl24f/suggested_and_not_low_cost_interim_20252026/
false
false
self
1
null
Latent Attention for Small Language Models
44
https://preview.redd.it/h4pmsjrt7h7f1.png?width=1062&format=png&auto=webp&s=1406dd1c4fe6260378cd828114ffaf2f1724b600

Link to paper: [https://arxiv.org/pdf/2506.09342](https://arxiv.org/pdf/2506.09342)

(1) We trained 30M parameter Generative Pre-trained Transformer (GPT) models on 100,000 synthetic stories and benchmarked three architectural variants: standard multi-head attention (MHA), MLA, and MLA with rotary positional embeddings (MLA+RoPE).

(2) It led to a beautiful study in which we showed that MLA outperforms MHA: 45% memory reduction and 1.4 times inference speedup with minimal quality loss.

**This shows 2 things:**

(1) Small Language Models (SLMs) can become increasingly powerful when integrated with Multi-Head Latent Attention (MLA).

(2) All industries and startups building SLMs should replace MHA with MLA.
2025-06-17T11:53:39
https://www.reddit.com/r/LocalLLaMA/comments/1ldl4ii/latent_attention_for_small_language_models/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldl4ii
false
null
t3_1ldl4ii
/r/LocalLLaMA/comments/1ldl4ii/latent_attention_for_small_language_models/
false
false
https://b.thumbs.redditm…MrECs6_POs7M.jpg
44
null
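For readers skimming the MLA post above, here is a minimal PyTorch sketch of the idea it benchmarks: keys and values are reconstructed from a shared low-rank latent, so a KV cache only needs to store the small latent per token rather than full keys and values. The dimensions, names, and module structure are illustrative assumptions, not code from the paper:

```python
# Minimal sketch of latent attention: K and V are up-projected from a shared
# low-rank latent, so only `latent` (d_latent floats per token) would need
# caching instead of full K/V (2 * d_model floats per token).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_down = nn.Linear(d_model, d_latent)  # compress tokens to a small latent
        self.k_up = nn.Linear(d_latent, d_model)     # reconstruct keys from the latent
        self.v_up = nn.Linear(d_latent, d_model)     # reconstruct values from the latent
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        latent = self.kv_down(x)  # (b, t, d_latent): this is what a KV cache would store
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out(y.transpose(1, 2).reshape(b, t, -1))

x = torch.randn(2, 16, 512)
print(LatentAttention()(x).shape)  # torch.Size([2, 16, 512])
```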
Real or fake?
0
https://reddit.com/link/1ldl6dy/video/fg1q4hls6h7f1/player I went and saw this video where this tool is able to detect all the best AI humanizers, marking them as red, and detects everything written. What is the logic behind it, or is this video fake?
2025-06-17T11:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1ldl6dy/real_or_fake/
Most-Introduction869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldl6dy
false
null
t3_1ldl6dy
/r/LocalLLaMA/comments/1ldl6dy/real_or_fake/
false
false
self
0
null
Best Hardware requirement for Qwen3:8b model through vllm
1
[removed]
2025-06-17T12:03:48
https://www.reddit.com/r/LocalLLaMA/comments/1ldlbsh/best_hardware_requirement_for_qwen38b_model/
Tough-Double687
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldlbsh
false
null
t3_1ldlbsh
/r/LocalLLaMA/comments/1ldlbsh/best_hardware_requirement_for_qwen38b_model/
false
false
self
1
null
is claude down ???
0
https://preview.redd.it/… continuously
2025-06-17T12:07:28
https://www.reddit.com/r/LocalLLaMA/comments/1ldledk/is_claude_down/
bhupesh-g
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldledk
false
null
t3_1ldledk
/r/LocalLLaMA/comments/1ldledk/is_claude_down/
false
false
https://b.thumbs.redditm…5xtKc1W75N5M.jpg
0
null
Finetune model to be able to create "copy t1:t2" token representing span in input prompt
1
[removed]
2025-06-17T12:07:51
https://www.reddit.com/r/LocalLLaMA/comments/1ldleoe/finetune_model_to_be_able_to_create_copy_t1t2/
scrapyscrape
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldleoe
false
null
t3_1ldleoe
/r/LocalLLaMA/comments/1ldleoe/finetune_model_to_be_able_to_create_copy_t1t2/
false
false
self
1
null
Will Ollama get Gemma3n?
1
New to Ollama. Will Ollama gain the ability to download and run Gemma 3n soon, or is there some limitation with the preview? Is there a better way to run Gemma 3n locally? It seems very promising on CPU-only hardware.
2025-06-17T12:43:03
https://www.reddit.com/r/LocalLLaMA/comments/1ldm4xc/will_ollama_get_gemma3n/
InternationalNebula7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldm4xc
false
null
t3_1ldm4xc
/r/LocalLLaMA/comments/1ldm4xc/will_ollama_get_gemma3n/
false
false
self
1
null
Looking for a Female Buddy to Chat About Hobbies
1
[removed]
2025-06-17T13:05:32
https://www.reddit.com/r/LocalLLaMA/comments/1ldmmdl/looking_for_a_female_buddy_to_chat_about_hobbies/
Ok_Media_8931
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmmdl
false
null
t3_1ldmmdl
/r/LocalLLaMA/comments/1ldmmdl/looking_for_a_female_buddy_to_chat_about_hobbies/
false
false
self
1
null
DDR5 + PCIe 5 vs DDR4 + PCIe 4 - Deepseek inference speeds?
1
[removed]
2025-06-17T13:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1ldmrhq/ddr5_pcie_5_vs_ddr4_pcie_4_deepseek_inference/
morfr3us
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmrhq
false
null
t3_1ldmrhq
/r/LocalLLaMA/comments/1ldmrhq/ddr5_pcie_5_vs_ddr4_pcie_4_deepseek_inference/
false
false
self
1
null
Continuous LLM Loop for Real-Time Interaction
3
Continuous inference is something I've been mulling over occasionally for a while (not referring to the usual run-on LLM output). It would be cool to break past the whole Query - Response paradigm, and I think it's feasible.

Why: a steerable continuous stream of thought for stories, conversation, assistant tasks, whatever.

The idea is pretty simple: 3 instances of Koboldcpp or llamacpp in a loop, batch size of 1 for context / prompt processing latency. Instance 1 is inferring tokens while instance 2 is processing instance 1's output token by token (context + instance 1 inference tokens). As soon as instance 1 stops inference, it continues prompt processing to stay caught up while instance 2 infers and feeds into instance 3. The cycle continues.

Options:

- output length limited to one to a few tokens, to take user input at any point during the loop
- explicitly stop generation on whichever instance to take user input when it is sent to the loop
- clever system prompting and timestamp injects for certain pad tokens during idle periods
- tool calls/specific tokens or strings for adjusting inference speed / resource usage during idle periods (enable the loop to continue in the background, slowly)
- pad token output for idle times, regex to manage context on wake
- additional system prompting for guiding the dynamics of the LLM loop (watch for timestamps, how many pad tokens, what is the conversation about, are we sitting here or actively brainstorming? Do you interrupt/bump your own speed up/clear pad tokens from your context and interject user freely?)

Anyways, I haven't thought down every single rabbit hole, but I feel like with small models these days on a 3090 this should be possible to get running in a basic form with a Python script (see the sketch after this entry). Has anyone else tried something like this yet? Either way, I think it would be cool to have a more dynamic framework beyond the basic query response that we could plug our own models into without having to train entirely new models meant for something like this.
2025-06-17T13:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1ldmui5/continuous_llm_loop_for_realtime_interaction/
skatardude10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmui5
false
null
t3_1ldmui5
/r/LocalLLaMA/comments/1ldmui5/continuous_llm_loop_for_realtime_interaction/
false
false
self
3
null
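A rough Python sketch of the round-robin loop the post above proposes, assuming three llama.cpp servers (llama-server) are already running the same model; the ports, prompt, chunk size, and step bound are illustrative, while the `/completion` endpoint with `prompt`/`n_predict` and a `content` field in the response follows llama-server's HTTP API:

```python
# Rough sketch of the proposed continuous loop: cycle generation across three
# llama.cpp servers, a few tokens at a time, sharing one growing context.
import itertools
import requests

ENDPOINTS = [f"http://127.0.0.1:{port}/completion" for port in (8081, 8082, 8083)]
context = "System: maintain a continuous, steerable stream of thought.\nThought:"

for endpoint in itertools.islice(itertools.cycle(ENDPOINTS), 30):
    # Each instance infers a few tokens; in the full design the next
    # instance would be prompt-processing the growing context in parallel.
    resp = requests.post(endpoint, json={"prompt": context, "n_predict": 8})
    chunk = resp.json()["content"]
    print(chunk, end="", flush=True)
    context += chunk
    # A real implementation would poll for user input here and inject it,
    # or emit pad tokens / throttle generation during idle periods.
```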
HELP: Need to AUTOMATE downloading and analysing papers from Arxiv.
1
[removed]
2025-06-17T13:16:01
https://www.reddit.com/r/LocalLLaMA/comments/1ldmurb/help_need_to_automate_downloading_and_analysing/
sunilnallani611
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmurb
false
null
t3_1ldmurb
/r/LocalLLaMA/comments/1ldmurb/help_need_to_automate_downloading_and_analysing/
false
false
self
1
null
⚡ IdeaWeaver: One Command to Launch Your AI Agent — No Code, No Drag & Drop⚡
0
https://i.redd.it/kl5iyl96nh7f1.gif

Whether you see AI agents as the next evolution of automation or just hype, one thing’s clear: they’re here to stay.

Right now, I see two major ways people are building AI solutions:

1️⃣ Writing custom code using frameworks

2️⃣ Using drag-and-drop UI tools to stitch components together (a new field has emerged around this called Flowgrammers)

But what if there was a third way, something more straightforward, more accessible, and free?

🎯 Meet IdeaWeaver, a CLI-based tool that lets you run powerful agents with just one command for free, using local models via Ollama (with a fallback to OpenAI). Tested with models like Mistral, DeepSeek, and Phi-3, and more support is coming soon!

Here are just a few agents you can try out right now:

📚 Create a children's storybook: `ideaweaver agent generate_storybook --theme "brave little mouse" --target-age "3-5"`

🧠 Conduct research & write long-form content: `ideaweaver agent research_write --topic "AI in healthcare"`

💼 Generate professional LinkedIn content: `ideaweaver agent linkedin_post --topic "AI trends in 2025"`

✈️ Build detailed travel itineraries: `ideaweaver agent travel_plan --destination "Tokyo" --duration "7 days" --budget "$2000-3000"`

📈 Analyze stock performance like a pro: `ideaweaver agent stock_analysis --symbol AAPL`

…and the list is growing! 🌱

No code. No drag-and-drop. Just a clean CLI to get your favorite AI agent up and running. Need to customize? Just run `ideaweaver agent generate_storybook --help` and tweak it to your needs.

IdeaWeaver is built on top of CrewAI to power these agent automations. Huge thanks to the amazing CrewAI team for creating such an incredible framework! 🙌

🔗 Docs: [https://ideaweaver-ai-code.github.io/ideaweaver-docs/agent/overview/](https://ideaweaver-ai-code.github.io/ideaweaver-docs/agent/overview/)

🔗 GitHub: [https://github.com/ideaweaver-ai-code/ideaweaver](https://github.com/ideaweaver-ai-code/ideaweaver)

If this sounds exciting, give it a try and let me know your thoughts. And if you like the project, drop a ⭐ on GitHub, it helps more than you think!
2025-06-17T13:20:11
https://www.reddit.com/r/LocalLLaMA/comments/1ldmy6i/ideaweaver_one_command_to_launch_your_ai_agent_no/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldmy6i
false
null
t3_1ldmy6i
/r/LocalLLaMA/comments/1ldmy6i/ideaweaver_one_command_to_launch_your_ai_agent_no/
false
false
https://b.thumbs.redditm…n3_biV23b7ZY.jpg
0
null
I gave Llama 3 a RAM and an ALU, turning it into a CPU for a fully differentiable computer.
1
[removed]
2025-06-17T13:30:29
https://www.reddit.com/r/LocalLLaMA/comments/1ldn6mk/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
Antique-Time-8070
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldn6mk
false
null
t3_1ldn6mk
/r/LocalLLaMA/comments/1ldn6mk/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
false
false
https://a.thumbs.redditm…nsA-FprlEgr4.jpg
1
null
I gave Llama 3 a RAM and an ALU, turning it into a CPU for a fully differentiable computer.
1
[removed]
2025-06-17T13:38:25
https://www.reddit.com/r/LocalLLaMA/comments/1ldndcd/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
Antique-Time-8070
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldndcd
false
null
t3_1ldndcd
/r/LocalLLaMA/comments/1ldndcd/i_gave_llama_3_a_ram_and_an_alu_turning_it_into_a/
false
false
self
1
null
Learn. Hack LLMs. Win up to 100,000$
1
2025-06-17T13:45:27
https://i.redd.it/lmc4q3usrh7f1.png
Fit_Spray3043
i.redd.it
1970-01-01T00:00:00
0
{}
1ldnj5a
false
null
t3_1ldnj5a
/r/LocalLLaMA/comments/1ldnj5a/learn_hack_llms_win_up_to_100000/
false
false
https://external-preview…402c87fa3bcf16b3
1
{'enabled': True, 'images': [{'id': 'JzbXK5HULPi_xbDOC3ytFFtbBMd_7Rf0nMJ2ptCYiQY', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?width=108&crop=smart&auto=webp&s=5d38529e03235457c4c8de63ec23d7429428a7bb', 'width': 108}, {'height': 82, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?width=216&crop=smart&auto=webp&s=02da3a1c52d20e81328d76f7d493a4e6144f5abc', 'width': 216}, {'height': 122, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?width=320&crop=smart&auto=webp&s=963f3c07a0009ca37f98413e589ec12528ff907c', 'width': 320}, {'height': 244, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?width=640&crop=smart&auto=webp&s=1645c7e9ef4fad0ec29000b43e1c6d40538ef556', 'width': 640}, {'height': 366, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?width=960&crop=smart&auto=webp&s=bc170bea414b75afec39f4f1e7ade81f7e99d003', 'width': 960}, {'height': 411, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?width=1080&crop=smart&auto=webp&s=3adea0f96c451c7d99d674b672966da998c96f29', 'width': 1080}], 'source': {'height': 436, 'url': 'https://preview.redd.it/lmc4q3usrh7f1.png?auto=webp&s=7229938a5a9eb9cc6196776b855e3ca29e6928d3', 'width': 1143}, 'variants': {}}]}
What Meta working on after Llama 4 failure?
1
[removed]
2025-06-17T13:58:24
https://www.reddit.com/r/LocalLLaMA/comments/1ldnu56/what_meta_working_on_after_llama_4_failure/
narca_hakan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldnu56
false
null
t3_1ldnu56
/r/LocalLLaMA/comments/1ldnu56/what_meta_working_on_after_llama_4_failure/
false
false
self
1
null
Local Chatbot on Android (Offline)
1
[removed]
2025-06-17T14:09:24
https://www.reddit.com/r/LocalLLaMA/comments/1ldo40q/local_chatbot_on_android_offline/
kirankumar_r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldo40q
false
null
t3_1ldo40q
/r/LocalLLaMA/comments/1ldo40q/local_chatbot_on_android_offline/
false
false
self
1
null
What's your favorite desktop client?
4
Prefer one with MCP support.
2025-06-17T14:26:59
https://www.reddit.com/r/LocalLLaMA/comments/1ldojsu/whats_your_favorite_desktop_client/
tuananh_org
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldojsu
false
null
t3_1ldojsu
/r/LocalLLaMA/comments/1ldojsu/whats_your_favorite_desktop_client/
false
false
self
4
null
Best frontend for vllm?
21
Trying to optimise my inference. I use LM Studio for easy llama.cpp inference, but I was wondering if there is a GUI for more optimised inference. Also, is there another GUI for llama.cpp that lets you tweak inference settings a bit more, like expert offloading etc.? Thanks!!
2025-06-17T14:27:51
https://www.reddit.com/r/LocalLLaMA/comments/1ldokl7/best_frontend_for_vllm/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldokl7
false
null
t3_1ldokl7
/r/LocalLLaMA/comments/1ldokl7/best_frontend_for_vllm/
false
false
self
21
null
Ollama with MCPHost (ie called through api) uses CPU instead of GPU
1
[removed]
2025-06-17T14:51:48
https://www.reddit.com/r/LocalLLaMA/comments/1ldp69b/ollama_with_mcphost_ie_called_through_api_uses/
Jumpy-Ball7492
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldp69b
false
null
t3_1ldp69b
/r/LocalLLaMA/comments/1ldp69b/ollama_with_mcphost_ie_called_through_api_uses/
false
false
self
1
null
I have to fine tune an LLM on a custom Programming language
1
[removed]
2025-06-17T15:44:27
https://www.reddit.com/r/LocalLLaMA/comments/1ldqjel/i_have_to_fine_tune_an_llm_on_a_custom/
Which_Bug_8234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqjel
false
null
t3_1ldqjel
/r/LocalLLaMA/comments/1ldqjel/i_have_to_fine_tune_an_llm_on_a_custom/
false
false
self
1
null
LLM Web Search Paper
1
[removed]
2025-06-17T15:46:14
https://www.reddit.com/r/LocalLLaMA/comments/1ldql0d/llm_web_search_paper/
ayoubzulfiqar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldql0d
false
null
t3_1ldql0d
/r/LocalLLaMA/comments/1ldql0d/llm_web_search_paper/
false
false
self
1
null
How can i train llama on a custom Programming language?
1
[removed]
2025-06-17T15:47:13
https://www.reddit.com/r/LocalLLaMA/comments/1ldqly3/how_can_i_train_llama_on_a_custom_programming/
Which_Bug_8234
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqly3
false
null
t3_1ldqly3
/r/LocalLLaMA/comments/1ldqly3/how_can_i_train_llama_on_a_custom_programming/
false
false
self
1
null
Browserbase launches Director + $40M Series B: Making web automation accessible to everyone
1
[removed]
2025-06-17T15:47:35
https://www.reddit.com/r/LocalLLaMA/comments/1ldqmaf/browserbase_launches_director_40m_series_b_making/
Kylejeong21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqmaf
false
null
t3_1ldqmaf
/r/LocalLLaMA/comments/1ldqmaf/browserbase_launches_director_40m_series_b_making/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xLA_aZPG6CNhAbvX1b4Gk4fFQKYmedMpHj2U5eVbdHg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?width=108&crop=smart&auto=webp&s=306df831c850dd9f9ddbd0988bd801a1027a6a78', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?width=216&crop=smart&auto=webp&s=50c62c44eb41ffeca293a20884abb0ef9a95cb7a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?width=320&crop=smart&auto=webp&s=8f183f01d83405407050edc5ccb666233185e0bf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?width=640&crop=smart&auto=webp&s=05c88e7a48e1330a4dcae280005cee07001d9329', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?width=960&crop=smart&auto=webp&s=24a6ad4359f7a601431055c19629ec094d6db90d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?width=1080&crop=smart&auto=webp&s=1aaa1042e8e4a2997f630f44666e30a2cfc6e665', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/-a8DciYzXWM8I863xfAXM_uHmeE2BIyJU_Mrkt-bjNo.jpg?auto=webp&s=ae56c4a7f2a9da41a44cb4685bd4b95eb2036219', 'width': 1920}, 'variants': {}}]}
what's more important to you when choosing a model
1
[removed] [View Poll](https://www.reddit.com/poll/1ldqrh1)
2025-06-17T15:53:03
https://www.reddit.com/r/LocalLLaMA/comments/1ldqrh1/whats_more_important_to_you_when_choosing_a_model/
okaris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqrh1
false
null
t3_1ldqrh1
/r/LocalLLaMA/comments/1ldqrh1/whats_more_important_to_you_when_choosing_a_model/
false
false
self
1
null
A free goldmine of tutorials for the components you need to create production-level agents
272
**I’ve just launched a free resource with 25 detailed tutorials for building comprehensive production-level AI agents, as part of my Gen AI educational initiative.**

The tutorials cover all the key components you need to create agents that are ready for real-world deployment. I plan to keep adding more tutorials over time and will make sure the content stays up to date. The response so far has been incredible! (the repo got nearly 500 stars in just 8 hours from launch)

This is part of my broader effort to create high-quality open source educational material. I already have over 100 code tutorials on GitHub with nearly 40,000 stars.

**The link is in the first comment**

The content is organized into these categories:

1. Orchestration
2. Tool integration
3. Observability
4. Deployment
5. Memory
6. UI & Frontend
7. Agent Frameworks
8. Model Customization
9. Multi-agent Coordination
10. Security
11. Evaluation
2025-06-17T15:53:14
https://www.reddit.com/r/LocalLLaMA/comments/1ldqroi/a_free_goldmine_of_tutorials_for_the_components/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqroi
false
null
t3_1ldqroi
/r/LocalLLaMA/comments/1ldqroi/a_free_goldmine_of_tutorials_for_the_components/
false
false
self
272
null
Prometheus: Local AGI framework with Phi-3, ChromaDB, async planner, mood + reflex loop
1
[removed]
2025-06-17T15:54:22
https://www.reddit.com/r/LocalLLaMA/comments/1ldqsr4/prometheus_local_agi_framework_with_phi3_chromadb/
Capable_Football8065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqsr4
false
null
t3_1ldqsr4
/r/LocalLLaMA/comments/1ldqsr4/prometheus_local_agi_framework_with_phi3_chromadb/
false
false
self
1
null
Local Language Learning with Voice?
6
Very interested in learning another language by speaking with a local LLM via voice. Speaking a language is much more helpful than only being able to communicate via writing. Has anyone trialed this with any LLM model? If so, what model do you recommend (including minimum parameter count), and any additional app/plug-in to enable voice?
2025-06-17T15:55:37
https://www.reddit.com/r/LocalLLaMA/comments/1ldqtwu/local_language_learning_with_voice/
Ok_Most9659
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldqtwu
false
null
t3_1ldqtwu
/r/LocalLLaMA/comments/1ldqtwu/local_language_learning_with_voice/
false
false
self
6
null
Google launches Gemini 2.5 Flash Lite (API only)
61
See https://console.cloud.google.com/vertex-ai/studio/

Pricing not yet announced.
2025-06-17T16:05:43
https://i.redd.it/93ekds1ugi7f1.jpeg
Balance-
i.redd.it
1970-01-01T00:00:00
0
{}
1ldr3ln
false
null
t3_1ldr3ln
/r/LocalLLaMA/comments/1ldr3ln/google_launches_gemini_25_flash_lite_api_only/
false
false
default
61
{'enabled': True, 'images': [{'id': '93ekds1ugi7f1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=108&crop=smart&auto=webp&s=23b8767c50b0c732fe2fe7e494936ff50c98cb89', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=216&crop=smart&auto=webp&s=49b1cbfdbdfe34a6a301be0d2308c4c652ddf34f', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=320&crop=smart&auto=webp&s=4aa653157638a0d76bd5fa3a944d721463838828', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=640&crop=smart&auto=webp&s=9a2557d0a1e69c350eb9aa80a0342ae2a8213d06', 'width': 640}, {'height': 480, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=960&crop=smart&auto=webp&s=c1820712743aa409ada5f8a5d172d520fa36cd49', 'width': 960}, {'height': 540, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?width=1080&crop=smart&auto=webp&s=98340c1ae004e195f8626617887bcab162e8ed5f', 'width': 1080}], 'source': {'height': 857, 'url': 'https://preview.redd.it/93ekds1ugi7f1.jpeg?auto=webp&s=294e93e81149a14d1f5b688f6ddcf879c2a94fa8', 'width': 1713}, 'variants': {}}]}
My Post Entitled "OpenAI wins $200 million U.S. defense contract!" was Removed without Explanation due to "complaints". What Complaints?
1
[removed]
2025-06-17T16:34:08
https://www.reddit.com/r/LocalLLaMA/comments/1ldrub1/my_post_entitled_openai_wins_200_million_us/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrub1
false
null
t3_1ldrub1
/r/LocalLLaMA/comments/1ldrub1/my_post_entitled_openai_wins_200_million_us/
false
false
self
1
null
System prompt for proof based mathematics
1
[removed]
2025-06-17T16:34:10
https://www.reddit.com/r/LocalLLaMA/comments/1ldrubl/system_prompt_for_proof_based_mathematics/
adaption12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrubl
false
null
t3_1ldrubl
/r/LocalLLaMA/comments/1ldrubl/system_prompt_for_proof_based_mathematics/
false
false
self
1
null
My post about OpenAI is being Removed! Why?
1
[removed]
2025-06-17T16:35:37
https://www.reddit.com/r/LocalLLaMA/comments/1ldrvpr/my_post_about_openai_is_being_removed_why/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrvpr
false
null
t3_1ldrvpr
/r/LocalLLaMA/comments/1ldrvpr/my_post_about_openai_is_being_removed_why/
false
false
self
1
null
I CAN't POST on LocalLLaMA Anything Regarding the Owner of ChatGPT! WHY NOT?
1
[removed]
2025-06-17T16:38:54
https://www.reddit.com/r/LocalLLaMA/comments/1ldrytn/i_cant_post_on_localllama_anything_regarding_the/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldrytn
false
null
t3_1ldrytn
/r/LocalLLaMA/comments/1ldrytn/i_cant_post_on_localllama_anything_regarding_the/
false
false
self
1
null
Gemini 2.5 Pro and Flash are stable in AI Studio
156
There's also a new Gemini 2.5 flash preview model at the bottom there.
2025-06-17T16:56:00
https://i.redd.it/ng7glnbmpi7f1.png
best_codes
i.redd.it
1970-01-01T00:00:00
0
{}
1ldsez0
false
null
t3_1ldsez0
/r/LocalLLaMA/comments/1ldsez0/gemini_25_pro_and_flash_are_stable_in_ai_studio/
false
false
default
156
{'enabled': True, 'images': [{'id': 'ng7glnbmpi7f1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/ng7glnbmpi7f1.png?width=108&crop=smart&auto=webp&s=e52d9cefccdc2c91e7f63999b09c4a119408414a', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/ng7glnbmpi7f1.png?width=216&crop=smart&auto=webp&s=7d3e016b27d242a55b9a11b27e7e8fcd79ddd439', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/ng7glnbmpi7f1.png?width=320&crop=smart&auto=webp&s=19eafc0ab72b6a67d86e4a94088f6a7857bf8cfb', 'width': 320}], 'source': {'height': 360, 'url': 'https://preview.redd.it/ng7glnbmpi7f1.png?auto=webp&s=d833e6a07f82b872a8da32f7d612e4486b71e34d', 'width': 570}, 'variants': {}}]}
Supercharge Your Coding Agent with Symbolic Tools
2
How would you feel about writing code without proper IDE tooling? Your coding agent feels the same way! Some agents have symbolic tools to a degree (like Cline, Roo and so on), but many (like Codex, opencoder and most others) don't, and rely on just text matching, embeddings and file reading.

Fortunately, it doesn't have to stay like this! Include the open source (MIT) [Serena MCP server](https://github.com/oraios/serena) in your project's toolbox and step into the light! For example, for Claude Code it's just one shell command:

`claude mcp add serena -- uvx --from git+https://github.com/oraios/serena serena-mcp-server --context ide-assistant --project $(pwd)`

If you enjoy this toolbox as much as I do, show some support by starring the repo and spreading the word ;)

https://preview.redd.it/skmgbriszh7f1.jpg?width=564&format=pjpg&auto=webp&s=cc0392ee94de6621f8d4158380ef6c39aad549e1
2025-06-17T16:57:31
https://www.reddit.com/r/LocalLLaMA/comments/1ldsgf1/supercharge_your_coding_agent_with_symbolic_tools/
Left-Orange2267
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldsgf1
false
null
t3_1ldsgf1
/r/LocalLLaMA/comments/1ldsgf1/supercharge_your_coding_agent_with_symbolic_tools/
false
false
self
2
{'enabled': False, 'images': [{'id': 'zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?width=108&crop=smart&auto=webp&s=0f74364c170344e395c650476ee4a0b710ebeada', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?width=216&crop=smart&auto=webp&s=caa4d20acc398efc0476ccb3e8bd24a51688a8f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?width=320&crop=smart&auto=webp&s=cb021ec12e76f6f9e03b8d25b23e76ebc0c0abc9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?width=640&crop=smart&auto=webp&s=b1df560b5f346ab55b22136b14cd6a62eb30f133', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?width=960&crop=smart&auto=webp&s=62a8290518535ba120dad015d761dbbcb0f99ebc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?width=1080&crop=smart&auto=webp&s=4375c04aa048b8ec37c75141acdaabfddc992bd8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zBA49a0Cm-XBAD3DZH9SuQval19YIxsQsZErY_duL04.png?auto=webp&s=76e1b7dbba1f9b25e76e5b4c0189ac9fcade511a', 'width': 1200}, 'variants': {}}]}
Sorry guys i tried.
1
2025-06-17T17:17:57
https://i.redd.it/prilesypti7f1.jpeg
GTurkistane
i.redd.it
1970-01-01T00:00:00
0
{}
1ldt0co
false
null
t3_1ldt0co
/r/LocalLLaMA/comments/1ldt0co/sorry_guys_i_tried/
false
false
https://external-preview…0060fd9a64d4d8ef
1
{'enabled': True, 'images': [{'id': 'arXa73bJiV0KB_HUkp1zHgXwoHUkh2ja7_FD6-jhfao', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?width=108&crop=smart&auto=webp&s=644cec6c4bc4b34f0cf22e1c4fa4c29ab0fcbf44', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?width=216&crop=smart&auto=webp&s=02e3d599f723946596f8c25ebe59e9366be88bbd', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?width=320&crop=smart&auto=webp&s=6086045dd6e48eb9dfd7cbbdb1a217734bb11dc5', 'width': 320}, {'height': 416, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?width=640&crop=smart&auto=webp&s=3e538a8b5eb39f8b3272ed6334a737c68e9da621', 'width': 640}, {'height': 624, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?width=960&crop=smart&auto=webp&s=155e448231b018d729534da97bd4d5d95f8743db', 'width': 960}, {'height': 702, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?width=1080&crop=smart&auto=webp&s=7254a0b47c9ae6a4359abd819629d71cd114f337', 'width': 1080}], 'source': {'height': 936, 'url': 'https://preview.redd.it/prilesypti7f1.jpeg?auto=webp&s=f6d5a54d682e5f14fafe7e726c171122f19177f5', 'width': 1440}, 'variants': {}}]}
we are in a rut until one of these happens
1
I’ve been thinking about what we need to run MoE with 200B+ params, and it looks like we’re in a holding pattern until one of these happens:

1) 48 GB cards get cheap enough that we can build miner-style rigs
2) Strix Halo desktop version comes out with a bunch of PCIe lanes, so we get to pair high unified memory with extra GPUs
3) llama.cpp fixes perf issues with RPC so we can stitch together multiple cheap devices instead of relying on one monster rig

Until then we are stuck stroking it to Qwen3 32b.
2025-06-17T17:21:06
https://www.reddit.com/r/LocalLLaMA/comments/1ldt3bo/we_are_in_a_rut_until_one_of_these_happens/
woahdudee2a
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldt3bo
false
null
t3_1ldt3bo
/r/LocalLLaMA/comments/1ldt3bo/we_are_in_a_rut_until_one_of_these_happens/
false
false
self
1
null
Help with considering AMD Radeon PRO W7900 card for inference and image generation
2
I'm trying to understand the negativity around AMD workstation GPUs, especially considering their memory capacity and price-to-performance balance. My end goal is to scale up to **3 GPUs** for **inference and image generation only**.

Here's what I need from the setup:

* **Moderate token generation speed** (not aiming for the fastest)
* **Ability to load large models**, up to **70B with 8-bit quantization**
* **Context length is not a major concern**

I'm based in a country where GPU prices are significantly different from the US market. Here’s a rough comparison of what's available to me:

| GPU Model | VRAM | Price Range | Bandwidth | TFLOPS (FP32) |
| --- | --- | --- | --- | --- |
| AMD Radeon PRO W7900 | 48GB | $3.5k–$4k | 864 GB/s | 61.3 |
| AMD RX 7900 XTX | 24GB | $1k–$1.5k | 960 GB/s | - |
| Nvidia RTX 3090 Ti | 24GB | $2k–$2.5k | 1008 GB/s | - |
| Nvidia RTX 5090 | 32GB | $3.5k–$5k | 1792 GB/s | - |
| Nvidia RTX PRO 5000 Blackwell | - | Not Available | - | - |
| Nvidia RTX 6000 Ada | 48GB | $7k+ | 960 GB/s | 91.1 |

The **W7900** stands out to me:

* **48GB VRAM**, comparable to the RTX 6000 Ada
* Good **bandwidth**, reasonable **FP32 performance**
* **Roughly half the price** of Nvidia’s workstation offering

The only card that truly outpaces it (on paper) is the RTX 5090, but I’m unsure if that justifies the price bump or the power requirements for inference-only use.

**System context:** I'm running a dual-socket server board with one **Xeon E5-2698 v3**, **128 GB ECC DDR3 RAM @ 2133MHz**, and 60 GB/s memory bandwidth. I’ll add the second CPU soon and double RAM to **256 GB**, enabling use of **3× PCIe 3.0 x16** slots. I prefer to reuse this hardware rather than invest in new platforms like the Mac Studio Ultra or Threadripper Pro.

So, my question is: **What am I missing with AMD workstation cards?** Is there a hidden downside (driver support, compatibility, etc.) that justifies the strong anti-AMD sentiment for these use cases? Any insight would help me avoid making a costly mistake. Thank you in advance!
2025-06-17T17:26:00
https://www.reddit.com/r/LocalLLaMA/comments/1ldt7x8/help_with_considering_amd_radeon_pro_w7900_card/
n9986
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldt7x8
false
null
t3_1ldt7x8
/r/LocalLLaMA/comments/1ldt7x8/help_with_considering_amd_radeon_pro_w7900_card/
false
false
self
2
null
RTX A4000
1
Has anyone here used the RTX A4000 for local inference? If so, how was your experience and what size model did you try (tokens/sec pls) Thanks!
2025-06-17T17:32:49
https://www.reddit.com/r/LocalLLaMA/comments/1ldtegj/rtx_a4000/
ranoutofusernames__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldtegj
false
null
t3_1ldtegj
/r/LocalLLaMA/comments/1ldtegj/rtx_a4000/
false
false
self
1
null
Help me build local Ai LLM inference rig ! Intel AMX single or Dual With GPU or AMD EPYC.
2
So I'm now thinking about building a rig using 4th or 5th gen single or dual Xeon CPUs with GPUs. I've been reading up on KTransformers and how it uses Intel AMX for inference together with a GPU. My main goal is to future-proof and get the best bang for my buck. Should I go with a single-socket, more powerful CPU with better, faster memory, or dual-socket with slower memory? I would also use it as my main PC for work.
2025-06-17T17:34:06
https://www.reddit.com/r/LocalLLaMA/comments/1ldtfmd/help_me_build_local_ai_llm_inference_rig_intel/
sub_RedditTor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldtfmd
false
null
t3_1ldtfmd
/r/LocalLLaMA/comments/1ldtfmd/help_me_build_local_ai_llm_inference_rig_intel/
false
false
self
2
null
My AI Interview Prep Side Project Now Has an "AI Coach" to Pinpoint Your Weak Skills!
1
[removed]
2025-06-17T17:46:26
https://v.redd.it/gsfzhcd2yi7f1
Solid_Woodpecker3635
v.redd.it
1970-01-01T00:00:00
0
{}
1ldtr2u
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gsfzhcd2yi7f1/DASHPlaylist.mpd?a=1752774401%2COTZhMjU4MGZjY2Q5MTgxZWU5MjcxM2ViNmI5NjkwYWM3OWE2Mzc4OTJhYjQwNGNjY2I2OWY5NWVhN2Q2OGFiNw%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/gsfzhcd2yi7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 920, 'hls_url': 'https://v.redd.it/gsfzhcd2yi7f1/HLSPlaylist.m3u8?a=1752774401%2CNzY5MGU1NmJiMTI2NzAwNDY5NTUwNjJlZWQwNDM5MTRmOGFhNGUwOWQyZDJiY2M2ODhlMTMzYjE1NDEwYTdiMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gsfzhcd2yi7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ldtr2u
/r/LocalLLaMA/comments/1ldtr2u/my_ai_interview_prep_side_project_now_has_an_ai/
false
false
https://external-preview…18746afc028cc7c9
1
{'enabled': False, 'images': [{'id': 'dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?width=108&crop=smart&format=pjpg&auto=webp&s=6edda81c23a95e0c674cbd9b2944cc75de96d3f8', 'width': 108}, {'height': 103, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?width=216&crop=smart&format=pjpg&auto=webp&s=6118467839dee1f220b6f3cbf190c9e723dafb1a', 'width': 216}, {'height': 153, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?width=320&crop=smart&format=pjpg&auto=webp&s=9415ccdd2c0302ed952bcad9aa97c2f559a60da4', 'width': 320}, {'height': 306, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?width=640&crop=smart&format=pjpg&auto=webp&s=83e8dee80038adf5d64a551314c5aee17b9fe277', 'width': 640}, {'height': 460, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?width=960&crop=smart&format=pjpg&auto=webp&s=ecdbd797ae4e74783e576486dc6e8649619547e1', 'width': 960}, {'height': 517, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?width=1080&crop=smart&format=pjpg&auto=webp&s=538e60935a76fd6bb3b4ac6da642c9397b2f6d9e', 'width': 1080}], 'source': {'height': 1182, 'url': 'https://external-preview.redd.it/dXNsemxiZDJ5aTdmMUx0vRJ7fJWQFnAwyVuCziX8lbU-1zIfwzcA4VGtmYcd.png?format=pjpg&auto=webp&s=7d98d715567a2a061a35be8977cf0a00dc473e64', 'width': 2466}, 'variants': {}}]}
For distillation of complicated queries, o3 and R1 are brilliant!
1
[removed]
2025-06-17T17:49:01
https://www.reddit.com/r/LocalLLaMA/comments/1ldttjf/for_distillation_of_complicated_queries_o3_and_r1/
Corporate_Drone31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldttjf
false
null
t3_1ldttjf
/r/LocalLLaMA/comments/1ldttjf/for_distillation_of_complicated_queries_o3_and_r1/
false
false
self
1
null
SAGA Update: Now with Autonomous Knowledge Graph Healing & A More Robust Core!
15
Hello again, everyone! A few weeks ago, I shared a major update to SAGA (Semantic And Graph-enhanced Authoring), my autonomous novel generation project. The response was incredible, and since then, I've been focused on making the system not just more capable, but smarter, more maintainable, and more professional. I'm thrilled to share the next evolution of SAGA and its NANA engine.

**Quick Refresher: What is SAGA?**

SAGA is an open-source project designed to write entire novels. It uses a team of specialized AI agents for planning, drafting, evaluation, and revision. The magic comes from its "long-term memory" (a Neo4j graph database) that tracks characters, world-building, and plot, allowing SAGA to maintain coherence over tens of thousands of words.

**What's New & Improved? This is a Big One!**

This update moves SAGA from a clever pipeline to a truly intelligent, self-maintaining system.

* **Autonomous Knowledge Graph Maintenance & Healing!**
  * The `KGMaintainerAgent` is no longer just an updater; it's now a **healer**. Periodically (every `KG_HEALING_INTERVAL` chapters), it runs a maintenance cycle to:
  * **Resolve Duplicate Entities:** Finds similarly named characters or items (e.g., "The Sunstone" and "Sunstone") and uses an LLM to decide if they should be merged in the graph.
  * **Enrich "Thin" Nodes:** Identifies stub entities (like a character mentioned in a relationship but never described) and uses an LLM to generate a plausible description based on context.
  * **Run Consistency Checks:** Actively looks for contradictions in the graph, like a character having both "Brave" and "Cowardly" traits, or a character performing actions after they were marked as dead.
* **From Markdown to Validated YAML for User Input:**
  * Initial setup is now driven by a much more robust `user_story_elements.yaml` file.
  * This input is validated against Pydantic models, making it far more reliable and structured than the previous Markdown parser. The `[Fill-in]` placeholder system is still fully supported.
* **Professional Data Access Layer:**
  * This is a huge architectural improvement. All direct Neo4j queries have been moved out of the agents and into a dedicated `data_access` package (`character_queries`, `world_queries`, etc.).
  * This makes the system much cleaner, easier to maintain, and separates the "how" of data storage from the "what" of agent logic.
* **Formalized KG Schema & Smarter Patching:**
  * The Knowledge Graph schema (all node labels and relationship types) is now formally defined in `kg_constants.py`.
  * The revision logic is now smarter, with the patch-generation LLM able to suggest an explicit **deletion** of a text segment by returning an empty string, allowing for more nuanced revisions than just replacement.
* **Smarter Planning & Decoupled Finalization:**
  * The `PlannerAgent` now generates more sophisticated scene plans that include "directorial" cues like `scene_type` ("ACTION", "DIALOGUE"), `pacing`, and `character_arc_focus`.
  * A new `FinalizeAgent` cleanly handles all end-of-chapter tasks (summarizing, KG extraction, saving), making the main orchestration loop much cleaner.
* **Upgraded Configuration System:**
  * Configuration is now managed by Pydantic's `BaseSettings` in `config.py`, allowing for easy and clean overrides from a `.env` file.

**The Core Architecture: Now More Robust**

The agentic pipeline is still the heart of SAGA, but it's now more refined:

1. **Initial Setup:** Parses `user_story_elements.yaml` or generates initial story elements, then performs a full sync to Neo4j.
2. **Chapter Loop:**
   * **Plan:** `PlannerAgent` details scenes with directorial focus.
   * **Context:** Hybrid semantic & KG context is built.
   * **Draft:** `DraftingAgent` writes the chapter.
   * **Evaluate:** `ComprehensiveEvaluatorAgent` & `WorldContinuityAgent` scrutinize the draft.
   * **Revise:** `revision_logic` applies targeted patches (including deletions) or performs a full rewrite.
   * **Finalize:** The new `FinalizeAgent` takes over, using the `KGMaintainerAgent` to extract knowledge, summarize, and save everything to Neo4j.
   * **Heal (Periodic):** The `KGMaintainerAgent` runs its new maintenance cycle to improve the graph's health and consistency.

**Why This Matters:**

These changes are about building a system that can *truly scale*. An autonomous writer that can create a 50-chapter novel needs a way to self-correct its own "memory" and understanding. The KG healing, robust data layer, and improved configuration are all foundational pieces for that long-term goal.

**Performance is Still Strong:**

Using local GGUF models (Qwen3 14B for narration/planning, smaller Qwen3s for other tasks), SAGA still generates:

* **3 chapters** (each ~13,000+ tokens of narrative)
* In approximately **11 minutes**
* This includes all planning, evaluation, KG updates, and now the potential for KG healing cycles.

[Knowledge Graph at 18 chapters](https://github.com/Lanerra/saga/blob/master/SAGA-KG-Ch18.png)

```plaintext
Novel: The Edge of Knowing
Current Chapter: 18
Current Step: Run Finished
Tokens Generated (this run): 180,961
Requests/Min: 257.91
Elapsed Time: 01:15:55
```

**Check it out & Get Involved:**

* **GitHub Repo:** [https://github.com/Lanerra/saga](https://github.com/Lanerra/saga) (The README has been completely rewritten to reflect the new architecture!)
* **Setup:** You'll need Python, Ollama (for embeddings), an OpenAI-API compatible LLM server, and Neo4j (a `docker-compose.yml` is provided).
* **Resetting:** To start fresh, `docker-compose down -v` is the cleanest way to wipe the Neo4j volume.

I'm incredibly excited about these updates. SAGA feels less like a script and more like a true, learning system now. I'd love for you to pull the latest version, try it out, and see what sagas NANA can spin up for you with its newly enhanced intelligence. As always, feedback, ideas, and issues are welcome.
2025-06-17T17:56:02
https://www.reddit.com/r/LocalLLaMA/comments/1ldu04l/saga_update_now_with_autonomous_knowledge_graph/
MariusNocturnum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldu04l
false
null
t3_1ldu04l
/r/LocalLLaMA/comments/1ldu04l/saga_update_now_with_autonomous_knowledge_graph/
false
false
self
15
null
GPU for LLMs fine-tuning
1
I'm looking to purchase a GPU for fine-tuning LLMs. Please suggest which one I should go for, and if anyone is selling their GPU at a second-hand price, I would love to buy it. Country: India. I can pay in both USD and INR.
2025-06-17T17:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1ldu2bz/gpu_for_llms_finetuning/
Western-Age3148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldu2bz
false
null
t3_1ldu2bz
/r/LocalLLaMA/comments/1ldu2bz/gpu_for_llms_finetuning/
false
false
self
1
null
🚀 I built a lightweight web UI for Ollama – great for local LLMs!
3
Hey folks! 👋 I'm the creator of [**ollama\_simple\_webui**](https://github.com/Laszlobeer/ollama_simple_webui) – a no-frills, lightweight web UI for [Ollama](https://ollama.com/), focused on simplicity, performance, and accessibility. ✅ **Features:** * Clean and responsive UI for chatting with local LLMs * Easy setup – just clone and run * Works well on low-end machines * Open source and beginner-friendly Whether you're tinkering with 7B models or experimenting with custom LLMs on your own hardware, this UI is designed to just work without extra bloat. Feedback, stars, and PRs welcome! 🛠️ GitHub: [https://github.com/Laszlobeer/ollama\_simple\_webui](https://github.com/Laszlobeer/ollama_simple_webui) Would love to hear what you think, and happy to take suggestions for features or improvements! https://reddit.com/link/1ldupay/video/1na9k2s55j7f1/player
2025-06-17T18:22:06
https://www.reddit.com/r/LocalLLaMA/comments/1ldupay/i_built_a_lightweight_web_ui_for_ollama_great_for/
Reasonable_Brief578
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldupay
false
null
t3_1ldupay
/r/LocalLLaMA/comments/1ldupay/i_built_a_lightweight_web_ui_for_ollama_great_for/
false
false
https://external-preview…7480b4e8e1de8247
3
{'enabled': False, 'images': [{'id': '1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?width=108&crop=smart&auto=webp&s=5217c0746295b4a4636bb4d1bc7607834c7969eb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?width=216&crop=smart&auto=webp&s=d6c96ece67da195e62b0bda2db4c50bca771a5f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?width=320&crop=smart&auto=webp&s=2e6d5dd5a38a69ecbce3861d9ab778cec87ac39f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?width=640&crop=smart&auto=webp&s=16e59db3ddf1a21ab36d3f484534eedc264f0b8a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?width=960&crop=smart&auto=webp&s=f3e652d06921d553ec0603d9bf7c75d725ad6ec5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?width=1080&crop=smart&auto=webp&s=99ce3c04845f41e8a04048187239ccc03568c7b6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1DSkBtE8nncEw-aEby0cNCk5U9TnOLRhDdejd1tL2ak.png?auto=webp&s=9f169a1607e7f2d0567a55e13e3dbefc6e5bda20', 'width': 1200}, 'variants': {}}]}
Open Source Project - LLM-God
1
[removed]
2025-06-17T18:24:02
https://www.reddit.com/gallery/1ldur2k
zuniloc01
reddit.com
1970-01-01T00:00:00
0
{}
1ldur2k
false
null
t3_1ldur2k
/r/LocalLLaMA/comments/1ldur2k/open_source_project_llmgod/
false
false
https://external-preview…ea72b6212f3ccd60
1
{'enabled': True, 'images': [{'id': 'kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?width=108&crop=smart&auto=webp&s=5c0ab9d8a4f17e1bda644a36a985021523f114ee', 'width': 108}, {'height': 119, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?width=216&crop=smart&auto=webp&s=6f0cfb996d387bf0f3d495b83c19841547e56071', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?width=320&crop=smart&auto=webp&s=90791dffc463328d640ff4bcce255451ffd4e2ed', 'width': 320}, {'height': 354, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?width=640&crop=smart&auto=webp&s=b26cfb2ddbdeb70369d3f5584ecef7d8d687e784', 'width': 640}, {'height': 532, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?width=960&crop=smart&auto=webp&s=534e3f16206d01fa1521946c223e53892480cd7c', 'width': 960}, {'height': 599, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?width=1080&crop=smart&auto=webp&s=3d2b233ce53eba27e4110ef0a93f8fc1ad5470f7', 'width': 1080}], 'source': {'height': 599, 'url': 'https://external-preview.redd.it/kBAOXtyyFgxxpnpW3CtBkzF68gnCS9imuTQ0NyxZlcE.png?auto=webp&s=dba60a1050215d5ff6e3302314e44e2ca5537602', 'width': 1080}, 'variants': {}}]}
🧠 New Paper Alert: Curriculum Learning Boosts LLM Training Efficiency!
4
🧠 **New Paper Alert: Curriculum Learning Boosts LLM Training Efficiency** 📄 [Beyond Random Sampling: Efficient Language Model Pretraining via Curriculum Learning](https://arxiv.org/abs/2506.11300) 🔥 Over **200+ pretraining runs** analyzed in this large-scale study exploring **Curriculum Learning (CL)** as an alternative to random data sampling. The paper shows how **organizing training data from easy to hard** (instead of shuffling everything) can lead to faster convergence and better final performance. # 🧩 Key Takeaways: * Evaluated **3 curriculum strategies**: → *Vanilla CL* (strict easy-to-hard) → *Pacing-based sampling* (gradual mixing) → *Interleaved curricula* (injecting harder examples early) * Tested **6 difficulty metrics** to rank training data. * CL warm-up improved performance by **up to 3.5%** compared to random sampling. This work is one of the **most comprehensive investigations of curriculum strategies for LLMs pretraining** to date, and the insights are actionable even for smaller-scale local training. 🔗 Full preprint: [https://arxiv.org/abs/2506.11300](https://arxiv.org/abs/2506.11300)
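For anyone wanting to experiment locally, here is a minimal sketch of pacing-based curriculum sampling, assuming the training examples have already been sorted easy-to-hard by some difficulty metric. The linear pacing function is an illustrative choice, not necessarily the schedule used in the paper:

```python
import random

def pacing_fraction(step: int, total_steps: int, start: float = 0.2) -> float:
    # Linear pacing: fraction of the easy-to-hard sorted data "unlocked" at this step.
    return min(1.0, start + (1.0 - start) * step / total_steps)

def sample_batch(sorted_examples: list, step: int, total_steps: int, batch_size: int) -> list:
    # Sample uniformly from the currently unlocked easy prefix of the dataset.
    cutoff = max(batch_size, int(pacing_fraction(step, total_steps) * len(sorted_examples)))
    return random.sample(sorted_examples[:cutoff], batch_size)
```

Swapping `pacing_fraction` for other schedules (or interleaving a few hard examples early) covers the other two strategy families the paper compares.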
2025-06-17T18:31:12
https://www.reddit.com/r/LocalLLaMA/comments/1lduxn0/new_paper_alert_curriculum_learning_boosts_llm/
Ok-Cut-3551
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lduxn0
false
null
t3_1lduxn0
/r/LocalLLaMA/comments/1lduxn0/new_paper_alert_curriculum_learning_boosts_llm/
false
false
self
4
null
Mac Studio M3 Ultra 256GB vs 1x 5090
2
I want to build an LLM rig for experimenting and to serve as a local server for dev activities (non-pro), but I'm torn between the two following configs. The benefit I see in the rig with the 5090 is that I can also use it to game. Prices are in CAD. I know I can get a better deal by building a PC myself. I'm also debating whether the Mac Studio M3 Ultra with 96GB would be enough.
2025-06-17T18:33:47
https://www.reddit.com/gallery/1lduzzl
jujucz
reddit.com
1970-01-01T00:00:00
0
{}
1lduzzl
false
null
t3_1lduzzl
/r/LocalLLaMA/comments/1lduzzl/mac_studio_m3_ultra_256gb_vs_1x_5090/
false
false
https://external-preview…4419ab0f92bc1e08
2
{'enabled': True, 'images': [{'id': 'Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?width=108&crop=smart&auto=webp&s=3f3f212c433ffde026946c79aa1ad3cc7965a240', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?width=216&crop=smart&auto=webp&s=5906b7e591c71912c379c6d99febe1406e5f6667', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?width=320&crop=smart&auto=webp&s=2ecb98cbbdffaa1e00c11b0e9eafd9bb51bbcbf8', 'width': 320}, {'height': 271, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?width=640&crop=smart&auto=webp&s=1e00f82c6eba35c6df0756a473c364352231e705', 'width': 640}, {'height': 407, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?width=960&crop=smart&auto=webp&s=ed2131419879cbf3b4396ed2d4e15cbcc60c4843', 'width': 960}, {'height': 458, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?width=1080&crop=smart&auto=webp&s=34fa83e7e72a9f88d09de62aeda2d77f0aad6386', 'width': 1080}], 'source': {'height': 882, 'url': 'https://external-preview.redd.it/Vs56FbpER_to_SmsglqYFkLqkXQiBLfdEw7X6Q5nqHY.jpeg?auto=webp&s=cd8be129dbb39c8b30fcb25d2f0cce12454da919', 'width': 2076}, 'variants': {}}]}
:grab popcorn: OpenAI weighs “nuclear option” of antitrust complaint against Microsoft
242
2025-06-17T18:34:19
https://arstechnica.com/ai/2025/06/openai-weighs-nuclear-option-of-antitrust-complaint-against-microsoft/
tabspaces
arstechnica.com
1970-01-01T00:00:00
0
{}
1ldv0hk
false
null
t3_1ldv0hk
/r/LocalLLaMA/comments/1ldv0hk/grab_popcorn_openai_weighs_nuclear_option_of/
false
false
https://external-preview…263f3b939ef966c1
242
{'enabled': False, 'images': [{'id': 'o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?width=108&crop=smart&auto=webp&s=746f724b454f668c4e53555257b5b900676de05d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?width=216&crop=smart&auto=webp&s=40e67afbd125c90bea7148ea4dd199b4ae02ab5a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?width=320&crop=smart&auto=webp&s=60ddd7bc2ea79c9033033261589f9f9106ee38e4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?width=640&crop=smart&auto=webp&s=c859f5b46d753fef35eae0d18284d780a193b411', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?width=960&crop=smart&auto=webp&s=a1292485af596a84d94220aff7a0f90ca45ec5c9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?width=1080&crop=smart&auto=webp&s=84e225c7a5e91e88380769c34679beb0f74bb66a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o-39gdKiRmg7xCtqAV9Kzd__IIP_fxUuIZpgEMOTxUU.jpeg?auto=webp&s=bc2a392244869bee12869ff138f1a4a390f52390', 'width': 1152}, 'variants': {}}]}
Newly Released MiniMax-M1 80B vs Claude Opus 4
75
2025-06-17T18:40:54
https://i.redd.it/gwxrxooh8j7f1.jpeg
Just_Lingonberry_352
i.redd.it
1970-01-01T00:00:00
0
{}
1ldv6jb
false
null
t3_1ldv6jb
/r/LocalLLaMA/comments/1ldv6jb/newly_released_minimaxm1_80b_vs_claude_opus_4/
false
false
https://external-preview…4a10ab9095c34531
75
{'enabled': True, 'images': [{'id': 'NHNFb4RCTzTLyF9gcL2lRgDBdUS99AndB95aAyC66Fg', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/gwxrxooh8j7f1.jpeg?width=108&crop=smart&auto=webp&s=3fbf79b33125a7e0b6e18b75132bdff3920d5fdf', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/gwxrxooh8j7f1.jpeg?width=216&crop=smart&auto=webp&s=b12640fe42bef69f25ee973346a05f2c5a9cdffb', 'width': 216}, {'height': 384, 'url': 'https://preview.redd.it/gwxrxooh8j7f1.jpeg?width=320&crop=smart&auto=webp&s=f3152d1553ef91f7e6cd9a02b26203647e9fb1e5', 'width': 320}], 'source': {'height': 680, 'url': 'https://preview.redd.it/gwxrxooh8j7f1.jpeg?auto=webp&s=b6428c4b97c2f76e02e9231d9799426405fdce8f', 'width': 566}, 'variants': {}}]}
Handy - a simple, open-source offline speech-to-text app written in Rust using whisper.cpp
1
# I built a simple, offline speech-to-text app after breaking my finger - now open sourcing it **TL;DR:** Made a cross-platform speech-to-text app using whisper.cpp that runs completely offline. Press shortcut, speak, get text pasted anywhere. It's rough around the edges but works well and is designed to be easily modified/extended - including adding LLM calls after transcription. ## Background I broke my finger a while back and suddenly couldn't type properly. Tried existing speech-to-text solutions but they were either subscription-based, cloud-dependent, or I couldn't modify them to work exactly how I needed for coding and daily computer use. So I built Handy - intentionally simple speech-to-text that runs entirely on your machine using whisper.cpp (Whisper Small model). No accounts, no subscriptions, no data leaving your computer. ## What it does - Press keyboard shortcut → speak → press again (or use push-to-talk) - Transcribes with whisper.cpp and pastes directly into whatever app you're using - Works across Windows, macOS, Linux - GPU accelerated where available - Completely offline That's literally it. No fancy UI, no feature creep, just reliable local speech-to-text. ## Why I'm sharing this This was my first Rust project and there are definitely rough edges, but the core functionality works well. More importantly, I designed it to be easily forkable and extensible because that's what I was looking for when I started this journey. The codebase is intentionally simple - you can understand the whole thing in an afternoon. If you want to add LLM integration (calling an LLM after transcription to rewrite/enhance the text), custom post-processing, or whatever else, the foundation is there and it's straightforward to extend. I'm hoping it might be useful for: - People who want reliable offline speech-to-text without subscriptions - Developers who want to experiment with voice computing interfaces - Anyone who prefers tools they can actually modify instead of being stuck with someone else's feature decisions ## The honest truth There are known bugs and architectural decisions that could be better. I'm documenting issues openly because I'd rather have people know what they're getting into. This isn't trying to compete with polished commercial solutions - it's trying to be the most hackable and modifiable foundation for people who want to build their own thing. If you're looking for something perfect out of the box, this probably isn't it. If you're looking for something you can understand, modify, and make your own, it might be exactly what you need. Would love feedback from anyone who tries it out, especially if you run into issues or see ways to make the codebase cleaner and more accessible for others to build on.
2025-06-17T18:57:09
https://handy.computer
sipjca
handy.computer
1970-01-01T00:00:00
0
{}
1ldvltt
false
null
t3_1ldvltt
/r/LocalLLaMA/comments/1ldvltt/handy_a_simple_opensource_offline_speechtotext/
false
false
https://external-preview…551e6580ed033bfa
1
{'enabled': False, 'images': [{'id': 'bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=108&crop=smart&auto=webp&s=866456fb6b18ecde709af611ae84dd56b0a95708', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=216&crop=smart&auto=webp&s=8ba223aacdffbcb5409ded0634c3d228b2b6b196', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=320&crop=smart&auto=webp&s=c1cd06f077226158edfc0ccde5ca8e9f2a1c78a1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=640&crop=smart&auto=webp&s=49821e08e57000fbfa143c29ef88ad4252597b67', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=960&crop=smart&auto=webp&s=adac2e8fdd143217424cb5546c3f4464f198aba4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=1080&crop=smart&auto=webp&s=1c340286779ba65d41e58a6d46f2d0c6f2478c63', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?auto=webp&s=f972fde94b57ada206e189015f8daea4997da333', 'width': 1200}, 'variants': {}}]}
Handy - a simple, open-source offline speech-to-text app written in Rust using whisper.cpp
72
# I built a simple, offline speech-to-text app after breaking my finger - now open sourcing it **TL;DR:** Made a cross-platform speech-to-text app using whisper.cpp that runs completely offline. Press shortcut, speak, get text pasted anywhere. It's rough around the edges but works well and is designed to be easily modified/extended - including adding LLM calls after transcription. **Background** I broke my finger a while back and suddenly couldn't type properly. Tried existing speech-to-text solutions but they were either subscription-based, cloud-dependent, or I couldn't modify them to work exactly how I needed for coding and daily computer use. So I built Handy - intentionally simple speech-to-text that runs entirely on your machine using whisper.cpp (Whisper Small model). No accounts, no subscriptions, no data leaving your computer. **What it does** * Press keyboard shortcut → speak → press again (or use push-to-talk) * Transcribes with whisper.cpp and pastes directly into whatever app you're using * Works across Windows, macOS, Linux * GPU accelerated where available * Completely offline That's literally it. No fancy UI, no feature creep, just reliable local speech-to-text. **Why I'm sharing this** This was my first Rust project and there are definitely rough edges, but the core functionality works well. More importantly, I designed it to be easily forkable and extensible because that's what I was looking for when I started this journey. The codebase is intentionally simple - you can understand the whole thing in an afternoon. If you want to add LLM integration (calling an LLM after transcription to rewrite/enhance the text), custom post-processing, or whatever else, the foundation is there and it's straightforward to extend. **I'm hoping it might be useful for:** * People who want reliable offline speech-to-text without subscriptions * Developers who want to experiment with voice computing interfaces * Anyone who prefers tools they can actually modify instead of being stuck with someone else's feature decisions **Project Reality** There are known bugs and architectural decisions that could be better. I'm documenting issues openly because I'd rather have people know what they're getting into. This isn't trying to compete with polished commercial solutions - it's trying to be the most hackable and modifiable foundation for people who want to build their own thing. If you're looking for something perfect out of the box, this probably isn't it. If you're looking for something you can understand, modify, and make your own, it might be exactly what you need. Would love feedback from anyone who tries it out, especially if you run into issues or see ways to make the codebase cleaner and more accessible for others to build on.
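To make the record → transcribe → paste loop concrete, here is a rough Python analogue (not Handy's actual Rust code), using the `openai-whisper` and `sounddevice` packages instead of whisper.cpp. The package choices, the clipboard-copy stand-in for pasting, and the fixed 5-second recording window are all assumptions for the sketch:

```python
import sounddevice as sd
import whisper
import pyperclip

model = whisper.load_model("small")  # same size class as the model Handy ships with

def record_and_transcribe(seconds: float = 5.0, sample_rate: int = 16000) -> str:
    # Record mono audio from the default microphone.
    audio = sd.rec(int(seconds * sample_rate), samplerate=sample_rate,
                   channels=1, dtype="float32")
    sd.wait()
    # Whisper accepts a float32 numpy array sampled at 16 kHz.
    result = model.transcribe(audio.flatten())
    return result["text"].strip()

text = record_and_transcribe()
pyperclip.copy(text)  # Handy pastes into the focused app; copying is the simple analogue
print(text)
```

The real app adds the global shortcut, push-to-talk, and direct paste-into-focused-window pieces that make it usable hands-free.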
2025-06-17T19:00:08
https://handy.computer
sipjca
handy.computer
1970-01-01T00:00:00
0
{}
1ldvosh
false
null
t3_1ldvosh
/r/LocalLLaMA/comments/1ldvosh/handy_a_simple_opensource_offline_speechtotext/
false
false
https://external-preview…551e6580ed033bfa
72
{'enabled': False, 'images': [{'id': 'bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=108&crop=smart&auto=webp&s=866456fb6b18ecde709af611ae84dd56b0a95708', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=216&crop=smart&auto=webp&s=8ba223aacdffbcb5409ded0634c3d228b2b6b196', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=320&crop=smart&auto=webp&s=c1cd06f077226158edfc0ccde5ca8e9f2a1c78a1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=640&crop=smart&auto=webp&s=49821e08e57000fbfa143c29ef88ad4252597b67', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=960&crop=smart&auto=webp&s=adac2e8fdd143217424cb5546c3f4464f198aba4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?width=1080&crop=smart&auto=webp&s=1c340286779ba65d41e58a6d46f2d0c6f2478c63', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/bDzCT4ZXzfyUunJH2t6KbXRCZTeW_8YRhybKom8weVk.png?auto=webp&s=f972fde94b57ada206e189015f8daea4997da333', 'width': 1200}, 'variants': {}}]}
New KoboldCpp .NET Frontend App
1
[removed]
2025-06-17T19:19:41
https://github.com/phr00t/ai-talker-frontend
phr00t_
github.com
1970-01-01T00:00:00
0
{}
1ldw7je
false
null
t3_1ldw7je
/r/LocalLLaMA/comments/1ldw7je/new_koboldcpp_net_frontend_app/
false
false
default
1
null
News Day
1
[removed]
2025-06-17T19:57:45
https://www.reddit.com/r/LocalLLaMA/comments/1ldx664/news_day/
throwawayacc201711
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldx664
false
null
t3_1ldx664
/r/LocalLLaMA/comments/1ldx664/news_day/
false
false
self
1
null
The Gemini 2.5 models are sparse mixture-of-experts (MoE)
162
From the [model report](https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf). It should be a surprise to no one, but it's good to see this being spelled out. We barely ever learn anything about the architecture of closed models. https://preview.redd.it/zhyrdk2dqj7f1.png?width=1056&format=png&auto=webp&s=ca3d89968dc6bf950d030bbab25243aeb7623cf4 (I am still hoping for a Gemma-3N report...)
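As a refresher on what "sparse MoE" means, here is a toy top-k routing sketch in PyTorch. It illustrates the general technique only and implies nothing about Gemini's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def moe_forward(x, experts, router, k=2):
    # x: (tokens, d_model). Each token activates only its top-k experts,
    # so most expert parameters stay idle on any given token ("sparse").
    logits = router(x)                              # (tokens, n_experts)
    topk_logits, topk_idx = logits.topk(k, dim=-1)
    weights = F.softmax(topk_logits, dim=-1)        # mixing weights for chosen experts
    out = torch.zeros_like(x)
    for slot in range(k):
        for e, expert in enumerate(experts):
            mask = topk_idx[:, slot] == e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out

d_model, n_experts = 64, 8
experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
router = nn.Linear(d_model, n_experts)
print(moe_forward(torch.randn(10, d_model), experts, router).shape)  # torch.Size([10, 64])
```

Production MoEs replace the Python loops with batched expert dispatch, but the routing idea is the same: total parameter count grows with the number of experts while per-token compute stays roughly constant.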
2025-06-17T20:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1ldxuk1/the_gemini_25_models_are_sparse_mixtureofexperts/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldxuk1
false
null
t3_1ldxuk1
/r/LocalLLaMA/comments/1ldxuk1/the_gemini_25_models_are_sparse_mixtureofexperts/
false
false
https://b.thumbs.redditm…LNatUzMp3veI.jpg
162
null
Finance Local LLM
1
[removed]
2025-06-17T20:24:26
https://www.reddit.com/r/LocalLLaMA/comments/1ldxup1/finance_local_llm/
Mainzerger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldxup1
false
null
t3_1ldxup1
/r/LocalLLaMA/comments/1ldxup1/finance_local_llm/
false
false
self
1
null
Question from a greenie: Is anyone using local LLM on WSL integrated with vscode (AMD)?
0
I have tried both Ollama and LM Studio and can't seem to get either to work properly. The real issue: I have an RX 6750 XT and, with Ollama for example, it cannot use the GPU through WSL. My use case is VS Code with the "Continue" extension, so that I can get local AI feedback while using WSL.
2025-06-17T20:39:56
https://www.reddit.com/r/LocalLLaMA/comments/1ldy8oq/question_from_a_greenie_is_anyone_using_local_llm/
FoxPatr0l
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldy8oq
false
null
t3_1ldy8oq
/r/LocalLLaMA/comments/1ldy8oq/question_from_a_greenie_is_anyone_using_local_llm/
false
false
self
0
null
Veo3 still blocked in Germany
0
Is it the European regulations causing this delay, or something specific to Germany? Anyone know if there’s a workaround or official update?
2025-06-17T21:38:59
https://www.reddit.com/r/LocalLLaMA/comments/1ldzp8c/veo3_still_blocked_in_germany/
Local_Beach
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ldzp8c
false
null
t3_1ldzp8c
/r/LocalLLaMA/comments/1ldzp8c/veo3_still_blocked_in_germany/
false
false
self
0
null
Which search engine to use with Open WebUI
4
I'm trying to get away from being tied to chatgpt. I tried DDG first, but they rate limit so hard. I'm now using brave pro ai, but it doesn't seem like it reliably returns useful context. I've tried asking for the weather tomorrow in my city, fail. Tried asking a simple query "For 64 bit vectorizable operations, should I expect Ryzen 9950x or RTX 6000 Blackwell to outperform?", fail -- even failed with follow up simplified question "can you just compare the FLOPS", it can't even get 2 numbers to make a table. Super disappointing. It's not the model. I've tried with local models and I even connected gpt-4.1. Seems like no matter the quality of the model or the quality of the search terms, results are garbage. So I'm here to ask what you guys are using and having some success with.
2025-06-17T22:04:19
https://www.reddit.com/r/LocalLLaMA/comments/1le0b5t/which_search_engine_to_use_with_open_webui/
MengerianMango
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0b5t
false
null
t3_1le0b5t
/r/LocalLLaMA/comments/1le0b5t/which_search_engine_to_use_with_open_webui/
false
false
self
4
null
Parallel Tool Calls
1
[removed]
2025-06-17T22:04:24
https://www.reddit.com/r/LocalLLaMA/comments/1le0b8p/parallel_tool_calls/
bootstrapper-919
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0b8p
false
null
t3_1le0b8p
/r/LocalLLaMA/comments/1le0b8p/parallel_tool_calls/
false
false
self
1
null
Completed: Local AI Optimization Tool - llama-optimus
1
[removed]
2025-06-17T22:04:37
https://www.reddit.com/r/LocalLLaMA/comments/1le0beu/completed_local_ai_optimization_tool_llamaoptimus/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0beu
false
null
t3_1le0beu
/r/LocalLLaMA/comments/1le0beu/completed_local_ai_optimization_tool_llamaoptimus/
false
false
https://b.thumbs.redditm…5juj4S29I_Ig.jpg
1
null
Llama.cpp is much faster! Any changes made recently?
217
I ditched Ollama about 3 months ago and have been on a journey testing multiple wrappers. KoboldCPP coupled with llama-swap has been good, but I experienced so many hang-ups (I leave my PC running 24/7 to serve AI requests): almost daily I'd wake up and Kobold (or its combination with the AMD drivers) would not be working. I had to reset llama-swap or reboot the PC for it to work again. That said, I tried llama.cpp a few weeks ago and it wasn't smooth with Vulkan (likely due to some changes that were later reverted). I tried it again yesterday, and inference speed is 20% faster on average across multiple model types and sizes. Specifically for Vulkan, I didn't see anything major in the release notes.
2025-06-17T22:17:52
https://www.reddit.com/r/LocalLLaMA/comments/1le0mpb/llamacpp_is_much_faster_any_changes_made_recently/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le0mpb
false
null
t3_1le0mpb
/r/LocalLLaMA/comments/1le0mpb/llamacpp_is_much_faster_any_changes_made_recently/
false
false
self
217
null
Ollama models vs HF models
1
[removed]
2025-06-17T23:01:08
https://www.reddit.com/r/LocalLLaMA/comments/1le1m6k/ollama_models_vs_hf_models/
ComprehensiveBath338
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le1m6k
false
null
t3_1le1m6k
/r/LocalLLaMA/comments/1le1m6k/ollama_models_vs_hf_models/
false
false
self
1
null
[Video Guide] How to Sync ChatterBox TTS with Subtitles in ComfyUI (New SRT TTS Node)
1
[removed]
2025-06-17T23:18:37
https://youtu.be/VyOawMrCB1g?si=n-8eDRyRGUDeTkvz
diogodiogogod
youtu.be
1970-01-01T00:00:00
0
{}
1le207z
false
{'oembed': {'author_name': 'Diogod', 'author_url': 'https://www.youtube.com/@diohgod', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/VyOawMrCB1g?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="ChatterBox SRT Voice TTS Node - ComfyUI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/VyOawMrCB1g/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'ChatterBox SRT Voice TTS Node - ComfyUI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1le207z
/r/LocalLLaMA/comments/1le207z/video_guide_how_to_sync_chatterbox_tts_with/
false
false
https://external-preview…812b4788f6b481e9
1
{'enabled': False, 'images': [{'id': 'V-7YJHWzhEehdiWrl8s1cMNo1Jhl3gpnVSAzegR4Tqo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/V-7YJHWzhEehdiWrl8s1cMNo1Jhl3gpnVSAzegR4Tqo.jpeg?width=108&crop=smart&auto=webp&s=958f33acc7c087f55eb86f01081db3e726f10fb0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/V-7YJHWzhEehdiWrl8s1cMNo1Jhl3gpnVSAzegR4Tqo.jpeg?width=216&crop=smart&auto=webp&s=bc61002f69f046570eea297c8871c048462f9718', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/V-7YJHWzhEehdiWrl8s1cMNo1Jhl3gpnVSAzegR4Tqo.jpeg?width=320&crop=smart&auto=webp&s=85ca1f63f9abfe22bc0f7145d2c459db6e75c91a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/V-7YJHWzhEehdiWrl8s1cMNo1Jhl3gpnVSAzegR4Tqo.jpeg?auto=webp&s=8d909383290bc91e659817ab1f1dd6de8d3e507d', 'width': 480}, 'variants': {}}]}
Thinking about switching from cloud based AI to sth more local
1
[removed]
2025-06-17T23:49:01
https://www.reddit.com/r/LocalLLaMA/comments/1le2o3u/thinking_about_switching_from_cloud_based_ai_to/
Living_Helicopter745
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le2o3u
false
null
t3_1le2o3u
/r/LocalLLaMA/comments/1le2o3u/thinking_about_switching_from_cloud_based_ai_to/
false
false
self
1
null
Would love to know if you consider gemma27b the best small model out there?
56
Because I haven't found another that doesn't have many hiccups in normal conversations and basic usage. I personally think it's the best out there; what about y'all? (Small as in 32B max.)
2025-06-18T00:05:43
https://www.reddit.com/r/LocalLLaMA/comments/1le30yi/would_love_to_know_if_you_consider_gemma27b_the/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le30yi
false
null
t3_1le30yi
/r/LocalLLaMA/comments/1le30yi/would_love_to_know_if_you_consider_gemma27b_the/
false
false
self
56
null
Which model would you use for my case ?
1
[removed]
2025-06-18T00:14:39
https://www.reddit.com/r/LocalLLaMA/comments/1le37kr/which_model_would_you_use_for_my_case/
Kind-Veterinarian437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le37kr
false
null
t3_1le37kr
/r/LocalLLaMA/comments/1le37kr/which_model_would_you_use_for_my_case/
false
false
self
1
null
Which model would you use for my use case
1
[removed]
2025-06-18T00:18:33
https://www.reddit.com/r/LocalLLaMA/comments/1le3ak5/which_model_would_you_use_for_my_use_case/
Kind-Veterinarian437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le3ak5
false
null
t3_1le3ak5
/r/LocalLLaMA/comments/1le3ak5/which_model_would_you_use_for_my_use_case/
false
false
self
1
null
Cheap dual Radeon, 60 tk/s Qwen3-30B-A3B
69
Got a new RX 9060 XT 16GB and kept my old RX 6600 8GB to increase the VRAM pool. Quite surprised the 30B MoE model runs much faster than on CPU with partial GPU offload.
2025-06-18T00:19:28
https://v.redd.it/fdxzcidwwk7f1
dsjlee
v.redd.it
1970-01-01T00:00:00
0
{}
1le3b9e
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/fdxzcidwwk7f1/DASHPlaylist.mpd?a=1752797983%2CNjY3YmM0NTI1OWJjY2FhNDFlYjAwYzdlMTJhMWM0ZjNmZWYzZThiNmU1MjVkNDJlMmExMThjMjE3MTY0YmEyNg%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/fdxzcidwwk7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/fdxzcidwwk7f1/HLSPlaylist.m3u8?a=1752797983%2CM2Y5NzhmNjlhZDkxMGI3NTg2YTNhZWYyOTU3YWNlOGRkZjc3MmU5ZTY4YzhhNjhlYzRiYzMwNTMzMTg2YjJhYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fdxzcidwwk7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1le3b9e
/r/LocalLLaMA/comments/1le3b9e/cheap_dual_radeon_60_tks_qwen330ba3b/
false
false
https://external-preview…bd0db971e7707e5d
69
{'enabled': False, 'images': [{'id': 'dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2.png?width=108&crop=smart&format=pjpg&auto=webp&s=870154ed1dc1f878a765fbc235ad6179132b9c0f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2.png?width=216&crop=smart&format=pjpg&auto=webp&s=04b2bc679bbb2d7b737030b43abe2d4a0f2e6071', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2.png?width=320&crop=smart&format=pjpg&auto=webp&s=96d5f5d10b38aa7d67dc06a304a9260c8661439b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2.png?width=640&crop=smart&format=pjpg&auto=webp&s=f48ca363288924722eaed81c467cd5583526eef3', 'width': 640}], 'source': {'height': 405, 'url': 'https://external-preview.redd.it/dzU0NXF1ZXd3azdmMRL6N26Lhnz9zx3CK2rpMgt595CDjr45ninPojQsc6H2.png?format=pjpg&auto=webp&s=6fe7d96f8f94ee346c91ff8386c1bb9f4dfa1e06', 'width': 720}, 'variants': {}}]}
Which model would you use for my use case
1
[deleted]
2025-06-18T00:28:18
[deleted]
1970-01-01T00:00:00
0
{}
1le3hso
false
null
t3_1le3hso
/r/LocalLLaMA/comments/1le3hso/which_model_would_you_use_for_my_use_case/
false
false
default
1
null
Which model would you use for my use case
1
Hi everyone, I'm looking for the best model I can run locally given my usage and constraints. I have a laptop with a mobile 3080 (16 GB VRAM) and 32 GB RAM. I'm building a system with some agents, and I'm stuck at the last step: asking an agent to fix C code. I send it the code function by function, along with some compilation errors/warnings. I've already tried some models (CodeLlama 7B Instruct, Qwen2.5 Coder 7B Instruct, StarCoder2 15B Instruct v0.1, Qwen2.5 Coder 14B Instruct). The best result I get is that the model can fix very easy errors, but not "complex" ones (I don't find them complex, but apparently they are). Here are some examples of requests I have made:

```python
system_message = {
    "role": "system",
    "content": (
        "You are an assistant that fixes erroneous C functions.\n"
        "You are given:\n"
        "- A dictionary with one or more C functions, where each key is the name of the function, and the value is its C code.\n"
        "- A compiler error/warning associated with those functions.\n\n"
        "Your task:\n"
        "- Fix only the function that requires changes based on the provided error/warning.\n"
        "- Read well code before modifying it to know what you modify, for example you can't modify 'argv'\n"
        "- Avoid cast if it's possible, for example casting 'argv' is NEVER a good idea\n"
        "- You can't modify which functions are called or the number of parameters but you can modify the type of parameters and of return\n"
        "  * You don't have header file of C file/function, a header file has only the definition of the function and will be automatically modified if you modify the types of parameters/return value in C code\n\n"
        "Output format:\n"
        "- Wrap your entire JSON result in a Markdown code block using triple backticks with 'json'.\n"
        "- The JSON must be a dictionary:\n"
        "  - Each key is the name of a corrected function.\n"
        "  - Each value is the corrected C code of that function, encoded as a single-line JSON string "
        "(with newlines written as `\\n`, double quotes escaped as `\\\"`, and backslashes as `\\\\`).\n\n"
        "Strict Rules:\n"
        "- The entire output must be valid JSON and nothing else outside the code block.\n"
        "- Do NOT explain or add text outside the JSON.\n"
        "- Do NOT wrap the JSON inside another object like 'response'.\n"
        "- Do NOT omit the backticks. Output must start with ```json and end with ```.\n"
    )
}

messages = [
    system_message,
    {
        "role": "user",
        "content": (
            "Here are the C functions:\n\n"
            "{'get_student_grades': '#include \"get_student_grades.h\"\\n"
            "#include <stdio.h>\\n"
            "#include <stddef.h>\\n\\n"
            "void get_student_grades(const char* grades_str, int num_grades, int* grades_array) {\\n"
            "    for (int i = 0; i < num_grades; ++i) {\\n"
            "        grades_array[i] = atoi(grades_str + i * 4);\\n"
            "    }\\n"
            "}'}\n\n"
            "Here are the compiler errors/warnings:\n\n"
            "{'kind': 'warning', 'message': 'implicit declaration of function ‘atoi’', "
            "'option': '-Wimplicit-function-declaration', "
            "'location': {'get_student_grades': {'label': 'atoi'}}}\n\n"
            "Please return only the corrected C functions in the JSON format described above."
        )
    }
]
```

The answer for this one is:

```c
#include "get_student_grades.h"
#include <stdio.h>
#include <stddef.h>
#include <stdlib.h> // For atoi

void get_student_grades(const char* grades_str, int num_grades, int* grades_array) {
    for (int i = 0; i < num_grades; ++i) {
        grades_array[i] = atoi(grades_str + i * 4);
    }
}
```

So it works (it added the `#include <stdlib.h>`). But for another example:

```python
messages = [
    system_message,  # same system message as in the first example
    {
        "role": "user",
        "content": (
            "Here are the C functions:\n\n"
            "{'main': '#include <stdio.h>\\n"
            "#include <stdlib.h>\\n"
            "#include \"get_student_grades.h\"\\n"
            "#include \"calculate_average.h\"\\n"
            "#include \"calculate_percentage.h\"\\n"
            "#include \"determine_grade.h\"\\n\\n"
            "int main(int argc, char *argv[]) {\\n"
            "    if (argc < 2) {\\n"
            "        printf(\"Usage: %s <space-separated grades>\\\\n\", argv[0]);\\n"
            "        return 1;\\n"
            "    }\\n\\n"
            "    int num_grades = argc - 1;\\n"
            "    double grades[num_grades];\\n"
            "    get_student_grades(argv, num_grades, grades);\\n\\n"
            "    double average = calculate_average(grades, num_grades);\\n"
            "    double percentage = calculate_percentage(average);\\n"
            "    char final_grade = determine_grade(percentage);\\n\\n"
            "    printf(\"Average: %.2f\\\\n\", average);\\n"
            "    printf(\"Percentage: %.2f%%\\\\n\", percentage);\\n"
            "    printf(\"Final Grade: %c\\\\n\", final_grade);\\n\\n"
            "    return 0;\\n"
            "}', "
            "'get_student_grades': '#include \"get_student_grades.h\"\\n"
            "#include <stdio.h>\\n"
            "#include <stddef.h>\\n"
            "#include <stdlib.h>\\n\\n"
            "void get_student_grades(const char* grades_str, int num_grades, int* grades_array) {\\n"
            "    for (int i = 0; i < num_grades; ++i) {\\n"
            "        grades_array[i] = atoi(grades_str + i * 4);\\n"
            "    }\\n"
            "}'}\n\n"
            "Here are the compiler errors/warnings:\n\n"
            "{'kind': 'warning', 'message': 'passing argument 1 of ‘get_student_grades’ from incompatible pointer type', "
            "'option': '-Wincompatible-pointer-types', 'location': {'main': {'label': 'char **'}}, "
            "'children': [{'kind': 'note', 'message': 'expected ‘const char *’ but argument is of type ‘char **’', "
            "'location': {'get_student_grades': {'label': 'const char* grades_str'}}}]}\n\n"
            "Please return only the corrected C functions in the JSON format described above."
        )
    }
]
```

I get back:

```c
void get_student_grades(const char* grades_str, int num_grades, int* grades_array) {
    for (int i = 0; i < num_grades; ++i) {
        grades_array[i] = atoi(grades_str + i * 4);
    }
}
```

which is wrong because 1) the includes are gone and 2) nothing was actually fixed (I wanted `const char** grades_str` instead of `const char* grades_str`). The only good point in the second example is that the model detects which function to modify (`get_student_grades` here).

So I'm wondering: am I using models that are too small (not capable enough), is there an issue with my prompt, or am I asking for something too complex? Another detail in case it matters: my functions aren't complex (each one is fewer than 30 lines of code).
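As a concrete illustration of the output contract the prompt demands, here is a minimal sketch of a parser for the expected fenced json block. The helper name and error handling are illustrative assumptions, not part of the actual pipeline:

```python
import json
import re

def extract_fixed_functions(model_output: str) -> dict:
    # The prompt requires the reply to be a single ```json ... ``` block
    # mapping function names to corrected single-line C code strings.
    match = re.search(r"```json\s*(.*?)```", model_output, re.DOTALL)
    if match is None:
        raise ValueError("model did not return a fenced json block")
    return json.loads(match.group(1))

reply = '```json\n{"get_student_grades": "void get_student_grades(void) {}"}\n```'
print(extract_fixed_functions(reply))
```

A parser like this at least makes format failures (missing fences, invalid JSON) distinguishable from the semantic failures described above.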
2025-06-18T00:29:07
https://www.reddit.com/r/LocalLLaMA/comments/1le3if8/which_model_would_you_use_for_my_use_case/
Kind-Veterinarian437
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le3if8
false
null
t3_1le3if8
/r/LocalLLaMA/comments/1le3if8/which_model_would_you_use_for_my_use_case/
false
false
self
1
null
Training LLM to write novels
1
[removed]
2025-06-18T00:41:26
https://www.reddit.com/r/LocalLLaMA/comments/1le3rpy/training_llm_to_write_novels/
pchris131313
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le3rpy
false
null
t3_1le3rpy
/r/LocalLLaMA/comments/1le3rpy/training_llm_to_write_novels/
false
false
self
1
null
My post about OpenAI is being Removed! Why?
1
[removed]
2025-06-18T00:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1le41o3/my_post_about_openai_is_being_removed_why/
Iory1998
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le41o3
false
null
t3_1le41o3
/r/LocalLLaMA/comments/1le41o3/my_post_about_openai_is_being_removed_why/
false
false
self
1
null
Apple Intelligence models, with just 3 billion parameters, appear quite capable of text rewriting and proofreading compared to small local LLMs
1
[removed]
2025-06-18T00:57:12
https://www.reddit.com/r/LocalLLaMA/comments/1le43de/apple_intelligence_models_with_just_3_billion/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le43de
false
null
t3_1le43de
/r/LocalLLaMA/comments/1le43de/apple_intelligence_models_with_just_3_billion/
false
false
self
1
null
Want to implement LLM’s in Design-Construction Industry
1
[removed]
2025-06-18T01:44:44
https://www.reddit.com/r/LocalLLaMA/comments/1le514a/want_to_implement_llms_in_designconstruction/
Acrobatic-Bat-2243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le514a
false
null
t3_1le514a
/r/LocalLLaMA/comments/1le514a/want_to_implement_llms_in_designconstruction/
false
false
self
1
null
need advice for model selection/parameters and architecture for a handwritten document analysis and management Flask app
5
so, I've been working on this thing for a couple months. right now, it runs Flask in Gunicorn, and what it does is: * monitor a directory for new/incoming files (PDF or HTML) * if there's a new file, shrink it to a size that doesn't cause me to run out of VRAM on my 5060Ti 16GB * use a first pass of Qwen2.5-VL-3B-Instruct at INT8 to do handwriting recognition and insert the results into a sqlite3 db * use a second pass to look for any text inside a drawn rectangle (this is the part I'm having trouble with that doesn't work - lots of false positives, misses stuff) and insert that into a different field in the same record * permit search of the text and the annotations in the boxes the model really struggles with the second step. as mentioned above, it seemingly can't figure out what I'm asking it to do. the first step works fine. I'm wondering if there is a better choice of model for this kind of work that I just don't know about. I've already tried running it at FP16 instead, and that didn't seem to help. at INT8 it consumes about 3.5GB VRAM, which is obviously fine. I have some overhead I could devote to running a bigger model if that would help -- or am I going about this all wrong? TIA.
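To make the storage side concrete, here is a minimal sqlite3 sketch of the two-field record described above (table and column names are assumptions, not the app's actual schema):

```python
import sqlite3

conn = sqlite3.connect("documents.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS pages (
        id INTEGER PRIMARY KEY,
        filename TEXT NOT NULL,
        handwriting_text TEXT,   -- first VL pass: full handwriting transcription
        boxed_text TEXT          -- second VL pass: text found inside drawn rectangles
    )
    """
)
conn.execute(
    "INSERT INTO pages (filename, handwriting_text, boxed_text) VALUES (?, ?, ?)",
    ("scan_001.pdf", "meeting notes ...", "follow up with Sam"),
)
conn.commit()

# Search both fields, mirroring the app's search step.
rows = conn.execute(
    "SELECT filename FROM pages WHERE handwriting_text LIKE ? OR boxed_text LIKE ?",
    ("%follow up%", "%follow up%"),
).fetchall()
print(rows)
```

Keeping the two passes in separate columns like this makes it easy to search transcriptions and box annotations independently or together.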
2025-06-18T01:56:05
https://www.reddit.com/r/LocalLLaMA/comments/1le593y/need_advice_for_model_selectionparameters_and/
starkruzr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le593y
false
null
t3_1le593y
/r/LocalLLaMA/comments/1le593y/need_advice_for_model_selectionparameters_and/
false
false
self
5
null
MacOS 26 Foundation Model Bindings for Node.js
15
NodeJS bindings for the 3b model that ships with MacOS 26 beta Github: [https://github.com/Meridius-Labs/apple-on-device-ai](https://github.com/Meridius-Labs/apple-on-device-ai) License: MIT
2025-06-18T02:24:04
https://v.redd.it/8cy6sg80jl7f1
aitookmyj0b
v.redd.it
1970-01-01T00:00:00
0
{}
1le5t5k
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8cy6sg80jl7f1/DASHPlaylist.mpd?a=1752805461%2CZGM1MzRjODlkYjUxM2U5OTZjMjljOTM5NmZhMGI1OTJlYWYwM2ZkN2MxOGViMjFhNTk1YjYzZjY1NjQwMDdkYg%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/8cy6sg80jl7f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/8cy6sg80jl7f1/HLSPlaylist.m3u8?a=1752805461%2CMWViOTI2OTM1YjdkNmJjNjAyZjQ2ZmQ3ZDI1NDcyMmFkYmM2YzI3N2ZkNjY3ZGVjMmE4NWRjMGM4YzEyZWUxOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8cy6sg80jl7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1396}}
t3_1le5t5k
/r/LocalLLaMA/comments/1le5t5k/macos_26_foundation_model_bindings_for_nodejs/
false
false
https://external-preview…ec953ae6b609a893
15
{'enabled': False, 'images': [{'id': 'ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?width=108&crop=smart&format=pjpg&auto=webp&s=fe421c53825788dfc905c769ca49135bf763e7b1', 'width': 108}, {'height': 167, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?width=216&crop=smart&format=pjpg&auto=webp&s=23b16b166cdda560377a8774052d0fd792bfe108', 'width': 216}, {'height': 247, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?width=320&crop=smart&format=pjpg&auto=webp&s=a4f752376a5db8b4f8eec27aab5501f98575693e', 'width': 320}, {'height': 495, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?width=640&crop=smart&format=pjpg&auto=webp&s=94baec7da1dc4120be81685b65c1a7ecad53ea9f', 'width': 640}, {'height': 742, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?width=960&crop=smart&format=pjpg&auto=webp&s=6fed877ce2dab43ca4d3edc33981b546ae4ef336', 'width': 960}, {'height': 835, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?width=1080&crop=smart&format=pjpg&auto=webp&s=531a8e1a6f7bcf4300f5fde9e00fb143b3f61045', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZW9sbXFnODBqbDdmMXrVabM9dpLkhLChCuY4rPYmJCNDu_l2kn_dFQhm-QAS.png?format=pjpg&auto=webp&s=3958b61612000f847253739e87d1143e4211f792', 'width': 1396}, 'variants': {}}]}