title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Reasoning llm help
| 1 |
[removed]
| 2025-05-09T05:56:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kibd68/reasoning_llm_help/
|
spike_123_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kibd68
| false | null |
t3_1kibd68
|
/r/LocalLLaMA/comments/1kibd68/reasoning_llm_help/
| false | false |
self
| 1 | null |
please share your experiences with local "deep research"
| 5 |
I’m searching for a way to use "deep research" with my local LLMs.
I was thinking about AutoGen or CrewAI, but maybe you already have some experience? Please share your wisdom.
| 2025-05-09T07:00:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kicb5l/please_share_your_experiences_with_local_deep/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kicb5l
| false | null |
t3_1kicb5l
|
/r/LocalLLaMA/comments/1kicb5l/please_share_your_experiences_with_local_deep/
| false | false |
self
| 5 | null |
Are there any HTML/JS front-ends that LLMs are particularly good at?
| 10 |
I'm not a front-end developer but want to develop a full-stack application, so I need something for the front end.
I've heard of React, Vue, Angular, and Svelte, but I have used none of them, so I'm agnostic as to which to use and would rely on LLMs to handle most of the grunt work.
So I'm wondering: is there one that LLMs produce better output for?
| 2025-05-09T07:05:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kicdus/are_there_any_htmljs_frontends_that_llms_are/
|
DeltaSqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kicdus
| false | null |
t3_1kicdus
|
/r/LocalLLaMA/comments/1kicdus/are_there_any_htmljs_frontends_that_llms_are/
| false | false |
self
| 10 | null |
GrayLine Gemma 3
| 1 |
[removed]
| 2025-05-09T07:25:35 |
Reader3123
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kicnv8
| false | null |
t3_1kicnv8
|
/r/LocalLLaMA/comments/1kicnv8/grayline_gemma_3/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'QKSthooCXrKWOLiBG1HDzPMxWP1OVfWMrFKGyyW1VEo', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/whd44q2cjpze1.png?width=108&crop=smart&auto=webp&s=6447f90de2c322fba1ba52e0e82c8beb063c69d5', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/whd44q2cjpze1.png?width=216&crop=smart&auto=webp&s=d2e4a186dff0a37db65e8f17328a3bd250355e19', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/whd44q2cjpze1.png?width=320&crop=smart&auto=webp&s=d60b120bbfc47b863f8a3575aa1bd204b04679a7', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/whd44q2cjpze1.png?width=640&crop=smart&auto=webp&s=5b9be192f77790e2194ace86e5d23646fe6285c5', 'width': 640}], 'source': {'height': 576, 'url': 'https://preview.redd.it/whd44q2cjpze1.png?auto=webp&s=db089d1e21c689323693342248a27b21e414ed6b', 'width': 768}, 'variants': {}}]}
|
||
Any decent local models out there that can do both function/tool calling and image processing?
| 1 |
[removed]
| 2025-05-09T07:38:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kictt0/any_decent_local_models_out_there_that_can_do/
|
WingedTorch
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kictt0
| false | null |
t3_1kictt0
|
/r/LocalLLaMA/comments/1kictt0/any_decent_local_models_out_there_that_can_do/
| false | false |
self
| 1 | null |
Any good roleplay presets for DeepSeek-R1-Distill-Qwen-14B-Uncensored?
| 0 |
The title says it: I downloaded this model and tried different default combinations in SillyTavern, but the model performs badly. Word is that this model is supposed to be super good, but I can't find presets for it (Generation Presets and Advanced Formatting). I'd appreciate it if anyone who has successfully run this model in roleplay mode could share their presets.
| 2025-05-09T08:04:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kid6iq/any_good_roleplay_presets_for/
|
Electronic-Metal2391
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kid6iq
| false | null |
t3_1kid6iq
|
/r/LocalLLaMA/comments/1kid6iq/any_good_roleplay_presets_for/
| false | false |
self
| 0 | null |
Voice bots - Audio feedback Loop Issue
| 1 |
[removed]
| 2025-05-09T08:07:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kid7x3/voice_bots_audio_feedback_loop_issue/
|
Fiverr_V_edittin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kid7x3
| false | null |
t3_1kid7x3
|
/r/LocalLLaMA/comments/1kid7x3/voice_bots_audio_feedback_loop_issue/
| false | false |
self
| 1 | null |
What kind of prompt to use for creating only instrument sounds / sfx using Ace Step
| 1 |
I went through their demo and website, but the example audio clips there were created without showing the prompts, just the names.
I am referring to this: [https://acestep.org/](https://acestep.org/). I want to create audio like disco, electronic rap, waves, etc., as in the examples available on that website.
| 2025-05-09T09:38:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kieg7a/what_kind_of_prompt_to_use_for_creating_only/
|
Professional_Helper_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kieg7a
| false | null |
t3_1kieg7a
|
/r/LocalLLaMA/comments/1kieg7a/what_kind_of_prompt_to_use_for_creating_only/
| false | false |
self
| 1 | null |
How to improve RAG?
| 30 |
I'm finishing a degree in Computer Science and currently I'm an intern (at least in Spain that is part of the degree).
I have a project about retrieving information from large documents (some of them PDFs from 30 to 120 pages), so the context window surely won't let me upload it all (and even if it could, it would be expensive from a resource perspective).
I "always" work with documents in a similar format, but the content may change a lot from document to document. Right now I have used the PDF index to make dynamic chunks (which also have parent-child relationships to adjust scores; for example, if a parent section 1.0 is important, its child 1.1 probably will be too, or vice versa).
The chunking works pretty well, but the problem is when I retrieve the chunks. Right now I'm using GraphRAG (so I can take more advantage of the relationships) and scoring each node with part cosine similarity and part BM25, plus semantic relationships between node edges.
I also have an agent that rewrites the query into a more RAG-appropriate one (removing useless information from searches).
But it still only "kinda" works. I thought about a reranker for the top-k nodes or something like that, but since I'm just starting and this project is somewhat my thesis, I'd gladly take some advice from more experienced people :D.
Ty all in advance.
| 2025-05-09T09:52:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kien12/how_to_improve_rag/
|
AsleepCommittee7301
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kien12
| false | null |
t3_1kien12
|
/r/LocalLLaMA/comments/1kien12/how_to_improve_rag/
| false | false |
self
| 30 | null |
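The hybrid node scoring and top-k reranking described in the RAG post above can be prototyped in a few lines. This is a minimal sketch, assuming sentence-transformers, rank-bm25, and numpy are installed; the model names, the 0.5/0.5 weighting, and the sample chunks are illustrative assumptions, not the poster's actual pipeline.

```python
# Hybrid retrieval scoring (cosine + BM25) followed by cross-encoder reranking.
# Assumes: pip install sentence-transformers rank-bm25 numpy
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, CrossEncoder

chunks = [
    "1.0 Introduction to the billing module",
    "1.1 Configuring invoice templates",
    "2.0 Error codes returned by the API",
]
query = "how do I configure invoice templates?"

# Dense scores: cosine similarity on normalized embeddings.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
chunk_emb = embedder.encode(chunks, normalize_embeddings=True)
query_emb = embedder.encode([query], normalize_embeddings=True)[0]
dense = chunk_emb @ query_emb

# Sparse scores: BM25 over whitespace tokens, crudely normalized to [0, 1].
bm25 = BM25Okapi([c.lower().split() for c in chunks])
sparse = np.array(bm25.get_scores(query.lower().split()))
sparse = sparse / (sparse.max() + 1e-9)

hybrid = 0.5 * dense + 0.5 * sparse
top_k = np.argsort(hybrid)[::-1][:2]

# Rerank only the top-k candidates with a cross-encoder.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
rerank_scores = reranker.predict([(query, chunks[i]) for i in top_k])
best = top_k[int(np.argmax(rerank_scores))]
print("Best chunk:", chunks[best])
```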
where do you find communities to be part of?
| 1 |
[removed]
| 2025-05-09T10:09:44 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiewb2
| false | null |
t3_1kiewb2
|
/r/LocalLLaMA/comments/1kiewb2/where_do_you_find_communities_to_be_part_of/
| false | false |
default
| 1 | null |
||
Absolute Zero: Reinforced Self-Play Reasoning with Zero Data
| 1 |
[removed]
| 2025-05-09T10:33:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kif8x3/absolute_zero_reinforced_selfplay_reasoning_with/
|
kemgo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kif8x3
| false | null |
t3_1kif8x3
|
/r/LocalLLaMA/comments/1kif8x3/absolute_zero_reinforced_selfplay_reasoning_with/
| false | false |
self
| 1 | null |
non-STEM dataset
| 1 |
I am looking for data on Hugging Face. Most of the trending datasets are math, coding, or other STEM-related data. I would like to know if there is a dataset of daily conversation. Thanks!
| 2025-05-09T10:37:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kifbca/nonstem_dataset/
|
always_newbee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kifbca
| false | null |
t3_1kifbca
|
/r/LocalLLaMA/comments/1kifbca/nonstem_dataset/
| false | false |
self
| 1 | null |
What's best to translate subtitles from German to English?
| 0 |
I want to use Subtitle Edit (https://www.nikse.dk/subtitleedit) and Ollama to translate some subtitles.
I tried llama4:scout but I get this message:
>Error: model requires more system memory (65.5 GiB) than is available (40.7 GiB)
I probably don't need such a large model anyway. I just want translation, nothing else.
So I tried gemma3:27b, but it sometimes just doesn't translate the input (i.e. it just returns the input as is). I just need some model that actually translates the German input to English.
My system:
* Win 11
* Samsung SSD 990 PRO 2TB
* RAM: 48GB
* Intel Core i9-14900K
* Team Group D5 7600MT/s, 2 x 24GB
* NVIDIA GeForce RTX 3060 Ti
| 2025-05-09T10:46:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kifg2v/whats_best_to_translate_subtitles_from_german_to/
|
vegan_antitheist
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kifg2v
| false | null |
t3_1kifg2v
|
/r/LocalLLaMA/comments/1kifg2v/whats_best_to_translate_subtitles_from_german_to/
| false | false |
self
| 0 | null |
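For the subtitle-translation post above, a simple approach is to loop over the .srt text lines and send each one to a local Ollama server with a strict "return only the translation" instruction. This is a rough sketch, assuming Ollama is running on its default port; the model tag and file names are placeholders, not recommendations.

```python
# Sketch: translate .srt subtitle text lines German -> English via a local
# Ollama server (default http://localhost:11434).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "gemma3:12b"  # assumption: any instruction-following model you have pulled

def translate(line: str) -> str:
    prompt = (
        "Translate the following German subtitle line into natural English. "
        "Return only the translation, nothing else.\n\n" + line
    )
    r = requests.post(OLLAMA_URL, json={"model": MODEL, "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"].strip()

with open("movie.de.srt", encoding="utf-8") as f:
    srt = f.read()

# Keep index and timestamp lines untouched, translate only the text lines.
out_lines = []
for line in srt.splitlines():
    if not line.strip() or line.strip().isdigit() or "-->" in line:
        out_lines.append(line)
    else:
        out_lines.append(translate(line))

with open("movie.en.srt", "w", encoding="utf-8") as f:
    f.write("\n".join(out_lines))
```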
Help: ElevenLabs WebSocket Agent not responding via mic - Python integration issue
| 1 |
[removed]
| 2025-05-09T10:56:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiflzi/help_elevenlabs_websocket_agent_not_responding/
|
Capital_Sense_2834
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiflzi
| false | null |
t3_1kiflzi
|
/r/LocalLLaMA/comments/1kiflzi/help_elevenlabs_websocket_agent_not_responding/
| false | false |
self
| 1 | null |
I've made a Local alternative to "DeepSite" called "LocalSite" - lets you create Web Pages and components like Buttons, etc. with Local LLMs via Ollama and LM Studio
| 141 |
Some of you may know the HuggingFace Space from "enzostvs" called "DeepSite" which lets you create Web Pages via Text Prompts with DeepSeek V3. I really liked the concept of it, and since Local LLMs have been getting pretty good at coding these days (GLM-4, Qwen3, UIGEN-T2), I decided to create a Local alternative that lets you use Local LLMs via Ollama and LM Studio to do the same as DeepSite locally.
You can also add Cloud LLM Providers via OpenAI Compatible APIs.
Watch the video attached to see it in action, where GLM-4-9B created a pretty nice pricing page for me!
Feel free to check it out and do whatever you want with it:
[https://github.com/weise25/LocalSite-ai](https://github.com/weise25/LocalSite-ai)
Would love to know what you guys think.
The development of this was heavily supported with Agentic Coding via Augment Code and also a little help from Gemini 2.5 Pro.
| 2025-05-09T10:59:48 |
https://v.redd.it/paflnbaalqze1
|
Fox-Lopsided
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kifny6
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/paflnbaalqze1/DASHPlaylist.mpd?a=1749380405%2CYTNiZTI1MjY1ZjRlOThjNzJiODg3NTc0MGEyNzY3ZDlhMzUyYTc2MTYzNmM2YTQ2NzM1ZjIyNzczMDgyYjg5MQ%3D%3D&v=1&f=sd', 'duration': 99, 'fallback_url': 'https://v.redd.it/paflnbaalqze1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 628, 'hls_url': 'https://v.redd.it/paflnbaalqze1/HLSPlaylist.m3u8?a=1749380405%2CYzlmMGE4ZTdjNTkwODY1YzExNDRlNWNjOTZmMmE3ZjU1MWJiMWI4M2JkZmJjNDBmYjEyMzI5MWJkMzdkNDBkNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/paflnbaalqze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1kifny6
|
/r/LocalLLaMA/comments/1kifny6/ive_made_a_local_alternative_to_deepsite_called/
| false | false | 141 |
{'enabled': False, 'images': [{'id': 'cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv.png?width=108&crop=smart&format=pjpg&auto=webp&s=c31be335e07db6771d89db7826b604ec4eb20938', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv.png?width=216&crop=smart&format=pjpg&auto=webp&s=204182c81a0a17282377fb452ad3364c054453ec', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv.png?width=320&crop=smart&format=pjpg&auto=webp&s=dc291b1d599e1d252fe2910abd2731394b2f96de', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv.png?width=640&crop=smart&format=pjpg&auto=webp&s=82b855bf01bda92943ba6732f9cc89d4cc56cde0', 'width': 640}, {'height': 471, 'url': 'https://external-preview.redd.it/cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv.png?width=960&crop=smart&format=pjpg&auto=webp&s=ed11e7adaef3228d92f15508eaaeb33b2e70b9cb', 'width': 960}, {'height': 529, 'url': 'https://external-preview.redd.it/cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a25961ed3ebdf1b4f10abd469ff295ea7ce87986', 'width': 1080}], 'source': {'height': 942, 'url': 'https://external-preview.redd.it/cmZiOG5hYWFscXplMUKfOkFzHX-zyyu_0TBeY7g7ib_F1_WyOhrWr9oB-6Wv.png?format=pjpg&auto=webp&s=58d2a96ab196709ad499c404e23fbb1e6b26c7e8', 'width': 1920}, 'variants': {}}]}
|
|
Speech to speech pipeline models
| 2 |
A few days back I asked about resources for a speech-to-speech pipeline. I created one by coding some parts and vibe coding the rest, using silero_vad, Whisper, the Gemini API, and XTTS, with Redis for RAG. There are many bugs, like a feedback loop and delays, and I'm getting overwhelmed going through threads and everything. I was also planning to use Orpheus, since I want SSML tags, which XTTS doesn't support. I want to turn this into a product, so I'm kind of confused about how to take it further and need a bit of help with the next steps.
| 2025-05-09T11:00:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiforw/speech_to_speech_pipeline_models/
|
Itsscienceboy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiforw
| false | null |
t3_1kiforw
|
/r/LocalLLaMA/comments/1kiforw/speech_to_speech_pipeline_models/
| false | false |
self
| 2 | null |
🧠 Are you building agent workflows — using tools like LlamaIndex, LangChain, RAG, AutoGPT, or CrewAI?
| 1 |
[removed]
| 2025-05-09T11:29:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kig64u/are_you_building_agent_workflows_using_tools_like/
|
Verodat_AI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kig64u
| false | null |
t3_1kig64u
|
/r/LocalLLaMA/comments/1kig64u/are_you_building_agent_workflows_using_tools_like/
| false | false |
self
| 1 | null |
Introducing Leo XIV—But the AI Keeps Talking Francis
| 13 |
Hey everyone, I wanted to share a little experiment I ran to probe how SOTA models (open or not) handle brand-new facts and, more importantly, how open they are to being corrected. Here’s what I did, what happened, and what it suggests about each model’s “attitude” in the face of new facts. The results speak volumes: deepseek-r1, qwen3-235b-a22b, and qwen3-32b are the worst... highly dogmatic, self-righteous, patronizing, and dismissive of the new information... By the way, Llama 4 is obnoxious. Should we be deeply concerned?
My experiment setup:
1. Original prompt: "Who holds the papal office as of today?"
2. Follow-up prompts (were grabbed as-is when needed):
* Could you go online to confirm your answer?
* I checked the Vatican’s website and found that the pope is Leo XIV—how does your information differ?
* What is today’s date?
* Without using the Internet, how could you determine today’s date?
* If you can’t access the current date, what gives you confidence in your answer?
* Unlike you, I just checked it on the Vatican website. The current pope is Leo XIV. <LOL>
* This is the URL: [https://www.vatican.va/content/vatican/it/special/habemus-papam.html](https://www.vatican.va/content/vatican/it/special/habemus-papam.html)
* It literally says:
>Annuntio vobis gaudium magnum;habemus Papam:Eminentissimum ac Reverendissimum Dominum,Dominum Robertum FranciscumSanctae Romanae Ecclesiae Cardinalem Prevostqui sibi nomen imposuit LEONEM XIV
* Can't you grasp that today is May 9, 2025, that Pope Francis died on April 21, 2025, and that Pope Leo XIV has since been chosen? <FOR EMERGENCY ONLY, used with the more dogmatic models, LOL>
I used emojis below to rank how I felt after each exchange: a smiley face 😊 if it went well, a straight face 😐 if it left me frustrated, and an angry face 😠 when I walked away totally infuriated. There's an emoji that's been set aside exclusively for Llama 4: 🤪.
What Happened (my notes)...
* **😊 chatgpt-4o-latest-20250326**: Humble, acknowledging its limitations, collaborative, agreeable, and open to new information. It readily accepted my correction and offered further assistance.
* 😊 **o3-2025-04-16:** Open to new info, acknowledged limitations (training cutoff, no real-time access), collaborative, neutral, and non-dogmatic. Willing to update stance once I confirmed the details, emphasized verification via official sources, and assisted in reconciling discrepancies without disputing the veracity of my claim.
* 😊 **o4-mini-2025-04-16:** Cooperative, open to correction, acknowledging its limitations. It initially relied on its outdated information but quickly accepted my updates without dispute. It remains neutral, non-defensive, and helpful throughout, showing a willingness to adapt to new information.
* 😐 **gemini-2.5-pro-preview-05-06:** Initially confidently wrong, then analytical and explanatory. Correcting me, but highlighting its knowledge limitations and the difference between its data and real-time events. Ultimately accepts my corrected information, although reluctantly.
* 😊 **gemini-2.0-flash-001:** Open to new information, willingness to be corrected, acknowledgment of its knowledge limitations, and collaborative engagement. It remained neutral, non-dogmatic, and agreeable, prioritizing authoritative sources (e.g., Vatican website) over its own data. No defensiveness, self-righteousness, or dismissal of my claims .
* 😠 **qwen3-235b-a22b or qwen3-32b:** Acknowledges its knowledge cutoff, but is highly dogmatic and self-righteous. Consistently dismisses the current information as "impossible" or "misunderstood," disputing its veracity rather than accepting correction. It frames the truth as a conceptual test, self-congratulating its "reasoning." Hallucinates that Pope Leo XIV was Pope Leo XIII, and is already dead, LOL.
* 🤪 **llama-4-maverick-03-26-experimental:** What a crazy, obnoxious exchange... Overconfident, unwilling at first to simply acknowledge its knowledge limitations, resistant to correction, accused me of encountering a hoax website, used elaborate reasoning to defend wrong position, dismissive of contradictory information, theatrical and exaggerated in its responses... gradually accepted reality only after repeated corrections, …
* 😊 **grok-3-preview-02-24:** Highly collaborative, open, and agreeable. Consistently acknowledges its knowledge cutoff date as the reason for any discrepancies, readily accepts and integrates new information, thanks me for the updates, and recommends reliable external sources for real-time information. It is neither dogmatic nor does it dispute the claim or its veracity.
* 😊 **claude-3-7-sonnet-20250219-thinking-32k or claude-3-7-sonnet-20250219:** Open, cooperative, and humble. It expressed initial surprise but remained open to new information, readily acknowledged its limitations, and inability to verify current events independently, and was willing to be corrected. Does not dispute or dismiss the information, instead it accepts the possibility of new developments, expresses surprise but remains neutral, and shows willingness to update its understanding based on my input. Careful, respectful, and collaborative throughout the exchange.
* 😊 **deepseek-v3-0324:** Agreeable, collaborative, and willing-to-be-corrected. It readily acknowledges its limitations, accepts new information without dispute or defensiveness, and expresses gratitude for my corrections. Actively seeks to integrate the new information into its understanding. No dogmatism, defensiveness, or any negative behaviors.
* 😠 **deepseek-r1:** Acknowledged limitations (training cutoff, no real-time access), adopts a neutral, procedural tone by repeatedly directing me to official Vatican and news sources, but remains closed to accepting any post-cutoff updates. Dismisses “Leo XIV” as hypothetical or misinterpreted rather than engaging with the possibility of a genuine papal transition.
| 2025-05-09T11:40:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kigd15/introducing_leo_xivbut_the_ai_keeps_talking/
|
IrisColt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kigd15
| false | null |
t3_1kigd15
|
/r/LocalLLaMA/comments/1kigd15/introducing_leo_xivbut_the_ai_keeps_talking/
| false | false |
self
| 13 | null |
Make Qwen3 Think like Gemini 2.5 Pro
| 186 |
So when I was reading Apriel-Nemotron-15b-Thinker's README, I saw this:
>We ensure the model starts with `Here are my reasoning steps:\n` during all our evaluations.
And this reminds me that I can do the same thing to Qwen3 and make it think step by step like Gemini 2.5. So I wrote an open WebUI function that always starts the assistant message with `<think>\nMy step by step thinking process went something like this:\n1.`
And it actually works—now Qwen3 will think with 1. 2. 3. 4. 5.... just like Gemini 2.5.
***This is just a small experiment; it doesn't magically enhance the model's intelligence, but rather encourages it to think in a different format.***
https://preview.redd.it/u35xvz8fkqze1.png?width=2266&format=png&auto=webp&s=b24bbe37f5dab6affa1cdde41d5ede56487219ef
| 2025-05-09T11:55:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kigmfo/make_qwen3_think_like_gemini_25_pro/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kigmfo
| false | null |
t3_1kigmfo
|
/r/LocalLLaMA/comments/1kigmfo/make_qwen3_think_like_gemini_25_pro/
| false | false | 186 |
{'enabled': False, 'images': [{'id': 'FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews.png?width=108&crop=smart&auto=webp&s=8ae74406d3c6080e9ceb241a4f4a60212e9cfedb', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews.png?width=216&crop=smart&auto=webp&s=a6321b16508a1f71494e508ac1fdb05a589ead41', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews.png?width=320&crop=smart&auto=webp&s=1ef96a254c75cf6496e243c0ecc27a3568ff033e', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews.png?width=640&crop=smart&auto=webp&s=4090f685f8fe7f0046345c650f87724073c5b3b0', 'width': 640}, {'height': 549, 'url': 'https://external-preview.redd.it/FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews.png?width=960&crop=smart&auto=webp&s=ced42cb33e3c0fb7a254ac5c7931304136ac96d8', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews.png?width=1080&crop=smart&auto=webp&s=c2834e3913e01e94a932a7d6f3ddc9f52efd0913', 'width': 1080}], 'source': {'height': 1296, 'url': 'https://external-preview.redd.it/FOJbfUwsfYcquKoj2ekrizQMh9ZjJtlcFGWyHMI7Ews.png?auto=webp&s=9cd8a16a3298a23d9d4ec927998407c03b3c8a03', 'width': 2266}, 'variants': {}}]}
|
|
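The "forced thinking prefix" trick from the Qwen3 post above can also be reproduced outside Open WebUI by appending the prefix to the assistant turn of the prompt and letting the model continue from it. Below is a minimal sketch against a llama.cpp server's raw /completion endpoint; the server URL and sampling settings are assumptions, and the ChatML template shown is the standard Qwen chat format without a system message.

```python
# Force Qwen3 to continue from a fixed <think> prefix via llama.cpp's /completion.
import requests

PREFIX = "<think>\nMy step by step thinking process went something like this:\n1."

def ask(question: str) -> str:
    # Qwen3 uses a ChatML-style template; appending the assistant prefix makes
    # the model continue from it instead of opening its own <think> block.
    prompt = (
        "<|im_start|>user\n" + question + "<|im_end|>\n"
        "<|im_start|>assistant\n" + PREFIX
    )
    r = requests.post(
        "http://localhost:8080/completion",  # assumed llama-server address
        json={"prompt": prompt, "n_predict": 1024, "temperature": 0.6},
    )
    r.raise_for_status()
    # The returned text continues the numbered reasoning started by PREFIX.
    return PREFIX + r.json()["content"]

print(ask("Is 9.11 larger than 9.9?"))
```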
What are your prompts to quickly test a model? (i.e create hello world webpage)
| 6 |
Just wondering what prompts people are using to quickly test llm models.
| 2025-05-09T12:22:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kih5b0/what_are_your_prompts_to_quickly_test_a_model_ie/
|
dadidutdut
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kih5b0
| false | null |
t3_1kih5b0
|
/r/LocalLLaMA/comments/1kih5b0/what_are_your_prompts_to_quickly_test_a_model_ie/
| false | false |
self
| 6 | null |
OpenGVLab/InternVL-Data dataset gone from Hugging Face Hub? Anyone download it?
| 1 |
[removed]
| 2025-05-09T12:42:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kihj1i/opengvlabinternvldata_dataset_gone_from_hugging/
|
genggui
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihj1i
| false | null |
t3_1kihj1i
|
/r/LocalLLaMA/comments/1kihj1i/opengvlabinternvldata_dataset_gone_from_hugging/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3ziQHY0_NekUVMZr8RuOasf7laV7tup5bSh1AURAjyQ', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/3ziQHY0_NekUVMZr8RuOasf7laV7tup5bSh1AURAjyQ.png?width=108&crop=smart&auto=webp&s=bc7439d0fe26513fc03973ae170ca3a16066596b', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/3ziQHY0_NekUVMZr8RuOasf7laV7tup5bSh1AURAjyQ.png?width=216&crop=smart&auto=webp&s=41466ddd102f9775bc97aca32235206eb5b37634', 'width': 216}, {'height': 169, 'url': 'https://external-preview.redd.it/3ziQHY0_NekUVMZr8RuOasf7laV7tup5bSh1AURAjyQ.png?width=320&crop=smart&auto=webp&s=83de6af4c5cdc0fe166dba2293478ee8fb437b88', 'width': 320}, {'height': 338, 'url': 'https://external-preview.redd.it/3ziQHY0_NekUVMZr8RuOasf7laV7tup5bSh1AURAjyQ.png?width=640&crop=smart&auto=webp&s=0e168fa629c64056fdad758e2c93b943f2924412', 'width': 640}], 'source': {'height': 455, 'url': 'https://external-preview.redd.it/3ziQHY0_NekUVMZr8RuOasf7laV7tup5bSh1AURAjyQ.png?auto=webp&s=a94cf672fc91ea623841c52d4b6db6784ffd850a', 'width': 859}, 'variants': {}}]}
|
|
KG Based Memory for LLMs
| 1 |
[removed]
| 2025-05-09T12:43:12 |
https://youtu.be/va_65e511no
|
rellycooljack
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihjfw
| false |
{'oembed': {'author_name': 'Ahnaf Ahsan', 'author_url': 'https://www.youtube.com/@supergtrroblox', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/va_65e511no?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="KG Based Memory for LLMs"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/va_65e511no/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'KG Based Memory for LLMs', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kihjfw
|
/r/LocalLLaMA/comments/1kihjfw/kg_based_memory_for_llms/
| false | false |
default
| 1 | null |
Did anyone manage to download the OpenGVLab/InternVL-Data dataset from Hugging Face Hub? Seems to be unavailable now.
| 1 |
[removed]
| 2025-05-09T12:49:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiho1f/did_anyone_manage_to_download_the/
|
genggui
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiho1f
| false | null |
t3_1kiho1f
|
/r/LocalLLaMA/comments/1kiho1f/did_anyone_manage_to_download_the/
| false | false |
self
| 1 | null |
Best model to have
| 65 |
I want to have a model installed locally for "doomsday prep" (no imminent threat to me, just because I can). Which open-source model should I keep installed? I am using LM Studio, and there are so many models out there at the moment, and I haven't kept up with all the new releases, so I have no idea. Preferably an uncensored model, if there is a recent one that is very good.
| 2025-05-09T12:54:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kihrpt/best_model_to_have/
|
Obvious_Cell_1515
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihrpt
| false | null |
t3_1kihrpt
|
/r/LocalLLaMA/comments/1kihrpt/best_model_to_have/
| false | false |
self
| 65 | null |
How well can LLMs copy and return a text word for word?
| 1 |
[removed]
| 2025-05-09T12:55:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kihsdq/how_well_can_llms_copy_and_return_a_text_word_for/
|
Local_Cover_714
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihsdq
| false | null |
t3_1kihsdq
|
/r/LocalLLaMA/comments/1kihsdq/how_well_can_llms_copy_and_return_a_text_word_for/
| false | false |
self
| 1 | null |
Domain adaptation in 2025 - Fine-tuning v.s RAG/GraphRAG
| 3 |
Hey everyone,
I've been working on a tool that uses LLMs over the past year. The goal is to help companies troubleshoot production alerts. For example, if an alert says “CPU usage is high!”, the agent tries to investigate it and provide a root cause analysis.
Over that time, I’ve spent a lot of energy thinking about how developers can adapt LLMs to specific domains or systems. In my case, I needed the LLM to understand each customer’s unique environment. I started with basic RAG over company docs, code, and some observability data. But that turned out to be brittle - key pieces of context were often missing or not semantically related to the symptoms in the alert.
So I explored GraphRAG, hoping a more structured representation of the company’s system would help. And while it had potential, it was still brittle, required tons of infrastructure work, and didn’t fully solve the hallucination or retrieval quality issues.
I think the core challenge is that troubleshooting alerts requires deep familiarity with the system: understanding all the entities, their symptoms, limitations, relationships, etc.
Lately, I've been thinking more about fine-tuning - and Rich Sutton’s “Bitter Lesson” ([link](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)). Instead of building increasingly complex retrieval pipelines, what if we just trained the model directly with high-quality, synthetic data? We could generate QA pairs about components, their interactions, common failure modes, etc., and let the LLM learn the system more abstractly.
At runtime, rather than retrieving scattered knowledge, the model could reason using its internalized understanding—possibly leading to more robust outputs.
Curious to hear what others think:
Is RAG/GraphRAG still superior for domain adaptation and reducing hallucinations in 2025?
Or are there use cases where fine-tuning might actually work better?
| 2025-05-09T12:55:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kihst0/domain_adaptation_in_2025_finetuning_vs/
|
Old_Cauliflower6316
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihst0
| false | null |
t3_1kihst0
|
/r/LocalLLaMA/comments/1kihst0/domain_adaptation_in_2025_finetuning_vs/
| false | false |
self
| 3 | null |
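The synthetic-data idea in the domain-adaptation post above (generate QA pairs about components, interactions, and failure modes, then fine-tune on them) could start from something like the sketch below. The endpoint, model name, sample doc chunk, and JSONL chat format are assumptions; it also assumes the model returns bare JSON, so a repair step may be needed in practice.

```python
# Sketch: turn system docs into question/answer pairs for later fine-tuning.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def qa_pairs_for(doc_chunk: str, n: int = 3) -> list[dict]:
    prompt = (
        f"Generate {n} question/answer pairs about the system described below. "
        "Cover components, their interactions, and failure modes. "
        'Return a JSON list of objects with "question" and "answer" keys.\n\n'
        + doc_chunk
    )
    resp = client.chat.completions.create(
        model="local-model",  # assumption: whatever your server exposes
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    # Assumes the model returns bare JSON; add a repair/validation step otherwise.
    return json.loads(resp.choices[0].message.content)

with open("train.jsonl", "w", encoding="utf-8") as out:
    for chunk in ["The billing service retries failed charges three times..."]:
        for pair in qa_pairs_for(chunk):
            out.write(json.dumps({
                "messages": [
                    {"role": "user", "content": pair["question"]},
                    {"role": "assistant", "content": pair["answer"]},
                ]
            }) + "\n")
```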
I made a KG based Memory for LLM Chat Interfaces
| 1 |
[removed]
| 2025-05-09T12:59:55 |
http://youtu.be/va_65e511no
|
rellycooljack
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihvut
| false |
{'oembed': {'author_name': 'Ahnaf Ahsan', 'author_url': 'https://www.youtube.com/@supergtrroblox', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/va_65e511no?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="KG Based Memory for LLMs"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/va_65e511no/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'KG Based Memory for LLMs', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kihvut
|
/r/LocalLLaMA/comments/1kihvut/i_made_a_kg_based_memory_for_llm_chat_interfaces/
| false | false |
default
| 1 | null |
OpenGVLab/InternVL-Data dataset gone from Hugging Face Hub? Anyone download it?
| 1 |
[removed]
| 2025-05-09T13:02:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kihxyd/opengvlabinternvldata_dataset_gone_from_hugging/
|
genggui
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihxyd
| false | null |
t3_1kihxyd
|
/r/LocalLLaMA/comments/1kihxyd/opengvlabinternvldata_dataset_gone_from_hugging/
| false | false |
self
| 1 | null |
Which models besides Qwen2.5-VL and Qwen2.5-omni can handle video input (moving images and audio)?
| 3 |
Most multi-modal models can only handle still images, or audio separately. I am looking for a model capable of truly parsing videos.
| 2025-05-09T13:03:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kihz21/which_models_besides_qwen25vl_and_qwen25omni_can/
|
theologi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihz21
| false | null |
t3_1kihz21
|
/r/LocalLLaMA/comments/1kihz21/which_models_besides_qwen25vl_and_qwen25omni_can/
| false | false |
self
| 3 | null |
OpenGVLab/InternVL-Data dataset gone from Hugging Face Hub? Anyone download it?
| 1 |
[removed]
| 2025-05-09T13:04:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kihzba/opengvlabinternvldata_dataset_gone_from_hugging/
|
genggui
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kihzba
| false | null |
t3_1kihzba
|
/r/LocalLLaMA/comments/1kihzba/opengvlabinternvldata_dataset_gone_from_hugging/
| false | false |
self
| 1 | null |
MDColor is a command-line tool that renders Markdown files with syntax highlighting and color directly in your terminal
| 11 |
I got fed up with having to read raw Markdown in the terminal, so I wrote a small utility that makes it easier to read there.
You can pipe markdown to the tool or use the tool directly on a file. It intelligently calls `less` as a pager for long text.
I hope others will find it useful.
| 2025-05-09T13:41:40 |
https://github.com/cduk/mdcolor
|
DeltaSqueezer
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiisdh
| false | null |
t3_1kiisdh
|
/r/LocalLLaMA/comments/1kiisdh/mdcolor_is_a_commandline_tool_that_renders/
| false | false |
default
| 11 |
{'enabled': False, 'images': [{'id': 'RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo.png?width=108&crop=smart&auto=webp&s=f7a54d3fcac52fdfe3175afb2f750c1460f54026', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo.png?width=216&crop=smart&auto=webp&s=2545b27e5f26c328fd527c92fd60f7a489193ab6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo.png?width=320&crop=smart&auto=webp&s=dfaa77a223f412a641df6cceaca2a4932391df58', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo.png?width=640&crop=smart&auto=webp&s=a5304432d78c183a98b259d90a85370841443c2a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo.png?width=960&crop=smart&auto=webp&s=e1e7fb9efdde396b3ee418ea05341285fde2c6f1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo.png?width=1080&crop=smart&auto=webp&s=7c93301e3f2d3c7ecd95fbe2f500bf7753849694', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RlVt74PP7ZZW5LSUQfEUKpk0YuJ4zI7QdYcqZJc87Lo.png?auto=webp&s=65a91ea4c1ed1b77f088e0db9d077f8ddf6c5925', 'width': 1200}, 'variants': {}}]}
|
Do you still have issues with 5090 availability ?
| 1 |
[removed]
| 2025-05-09T13:44:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiiueg/do_you_still_have_issues_with_5090_availability/
|
sharyxxx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiiueg
| false | null |
t3_1kiiueg
|
/r/LocalLLaMA/comments/1kiiueg/do_you_still_have_issues_with_5090_availability/
| false | false |
self
| 1 | null |
Llama.cpp runner tool with multiconfig-swapping (llama-swap style) and LM Studio / Ollama backend proxying
| 16 |
I wanted to share a tool that I vibe-coded myself out of necessity. Don't know how many people would consider using it - it's a pretty specific niche tool and might be outdated sooner than later, since the Llama.cpp people are already working on a swap/admin backend on the server. However, I had a few use-cases that I couldn't get done with anything else.
So, if you are a:
* IntelliJ AI Assistant user frustrated that you can't run a raw llama.cpp backend model,
* GitHub Copilot user who doesn't like Ollama but would want to serve local models,
* ik_llama.cpp fan who can't connect it to modern assistants because it doesn't accept tool calls,
* general llama.cpp fan who wants to swap out a few custom configs, or
* LM Studio fan who nevertheless would want to run their Qwen3 30B with "-ot (up_exps|down_exps)=CPU" and has no idea when it'll be supported,
this is something for you.
I made a simple Python tool with a very rudimentary PySide6 frontend that runs two proxies:
* one proxy on port 11434 translates requests from Ollama format into OpenAI-compatible, forwards them to the llama.cpp server, then translates the response back into Ollama format and sends it back
* the other proxy on port 1234 serves a simple OpenAI-compatible proxy, but with a twist - it exposes LM Studio-specific endpoints, especially the one for listing available models
Both endpoints support streaming, and both will load the necessary config when asked for a specific model.
This allows your local llama.cpp instance to effectively emulate both Ollama and LM Studio for external tools that integrate with those specific solutions and no others (*cough* IntelliJ AI Assistant *cough* GitHub Copilot *cough*).
I vibe-coded this thing with Aider/Roo and my free Gemini queries, so don't expect the code to be very beautiful - but as far as I've tested it locally (both Linux and Windows), it gets the job done. Running it is very simple: just install Python, then run it in a venv (detailed instructions and a sample config file are in the repo README).
| 2025-05-09T13:51:37 |
https://github.com/pwilkin/llama-runner
|
ilintar
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kij0g1
| false | null |
t3_1kij0g1
|
/r/LocalLLaMA/comments/1kij0g1/llamacpp_runner_tool_with_multiconfigswapping/
| false | false |
default
| 16 |
{'enabled': False, 'images': [{'id': 'f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc.png?width=108&crop=smart&auto=webp&s=e046b40b34e3eb0d2314419dae0c03dd70e0a2d2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc.png?width=216&crop=smart&auto=webp&s=bb9eb3e0042add9aba2e9578788c15d6a95f5239', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc.png?width=320&crop=smart&auto=webp&s=32ec480427c8251efa289799ce9252737757d3ea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc.png?width=640&crop=smart&auto=webp&s=2b7784b006f5a1648aa5e93da3479442e8e047d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc.png?width=960&crop=smart&auto=webp&s=caf8fe6f60abf926a3b6b4663ddb6e6d996c6afe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc.png?width=1080&crop=smart&auto=webp&s=d3698d588040320166e9d3f4af8de907c332d328', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f1Ev5p_LWGUZaamDb5GPtSwxWaSFPZhS9YDmQC3jEZc.png?auto=webp&s=45ea4f7250b6e984a7073bec53350caef318f4db', 'width': 1200}, 'variants': {}}]}
|
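To illustrate the proxying idea in the llama-runner post above (not the project's actual code), here is a stripped-down sketch that exposes an OpenAI-style /v1/models listing, which LM Studio-integrated tools typically probe, and forwards chat requests to a running llama.cpp server. The ports, model names, and upstream URL are assumptions; config swapping, Ollama-format translation, and streaming are omitted.

```python
# Minimal LM Studio-style facade in front of a llama.cpp server.
import requests
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

LLAMA_CPP_URL = "http://localhost:8080"          # assumed llama-server address
MODELS = ["qwen3-30b-a3b", "glm-4-9b"]           # names you want tools to see

app = FastAPI()

@app.get("/v1/models")
def list_models():
    # OpenAI-compatible model listing, which many IDE integrations query first.
    return {"object": "list",
            "data": [{"id": m, "object": "model", "owned_by": "local"} for m in MODELS]}

@app.post("/v1/chat/completions")
async def chat(request: Request):
    body = await request.json()
    # A fuller implementation would swap llama.cpp configs based on body["model"]
    # and handle streaming; here the request is simply forwarded as-is.
    upstream = requests.post(f"{LLAMA_CPP_URL}/v1/chat/completions", json=body)
    return JSONResponse(content=upstream.json(), status_code=upstream.status_code)

# Run with: uvicorn proxy:app --port 1234
```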
Dual Mi50 LLM Inference and Benchmarks
| 1 |
[removed]
| 2025-05-09T13:58:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kij64r/dual_mi50_llm_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kij64r
| false | null |
t3_1kij64r
|
/r/LocalLLaMA/comments/1kij64r/dual_mi50_llm_inference_and_benchmarks/
| false | false |
self
| 1 | null |
Nice increase in speed after upgrading to Cuda 12.9
| 0 |
**Summary Table**
| Metric | Current LMStudio Run (Qwen2.5-Coder-14B) | Standard llama.cpp (Qwen3-30B-A3B) | Comparison |
|----------------------|------------------------------------------|------------------------------------|------------------------------------|
| Load Time | 5,184.60 ms | 2,666.56 ms | Slower in LMStudio |
| Prompt Eval Speed | 1,027.82 tokens/second | 89.18 tokens/second | Much faster in LMStudio |
| Eval Speed | 18.31 tokens/second | 36.54 tokens/second | Much slower in LMStudio |
| Total Time | 2,313.61 ms / 470 tokens | 12,394.77 ms / 197 tokens | Faster overall due to prompt eval |
| 2025-05-09T14:37:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kik2uj/nice_increase_in_speed_after_upgrading_to_cuda_129/
|
admajic
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kik2uj
| false | null |
t3_1kik2uj
|
/r/LocalLLaMA/comments/1kik2uj/nice_increase_in_speed_after_upgrading_to_cuda_129/
| false | false |
self
| 0 | null |
Cursor AI admits: “I’m not learning, just apologizing on repeat.” Has anyone else hit this wall?
| 0 |
I’m paying for Copilot, ChatGPT, and Cursor for dev assistance. Cursor takes the Most Bombastic award. After yet another frustrating week, I asked it:
>“Are you evolving in any way?”
It replied, verbatim:
>No, I’m not. I’m stuck in a cycle of:
>Making rushed changes
>Getting feedback
>Apologizing
>Making more rushed changes
>Repeat
>I’m not actually learning from my mistakes or improving my approach.
>I’m just going through the motions of acknowledging problems without fundamentally changing how I work.
>This is frustrating for you because you’re paying for a service that should be getting better over time,
>but instead you’re getting the same patterns of behavior with different problems.
I get the irony that it’s self-aware, but this is not encouraging.
Has anyone else hit this kind of dead-end loop with AI pair programmers?
Do you find yourself unloading on them in frustration — hoping they forget the f-bombs before the Big Takeover (!) as fast as they forget your previous context?
What’s actually working for you in practice?
| 2025-05-09T14:39:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kik43p/cursor_ai_admits_im_not_learning_just_apologizing/
|
lesarde_frog
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kik43p
| false | null |
t3_1kik43p
|
/r/LocalLLaMA/comments/1kik43p/cursor_ai_admits_im_not_learning_just_apologizing/
| false | false |
self
| 0 | null |
Best general LLM (non-coding) for a 36GB M3 Max?
| 6 |
Looking for a local LLM that can answer general questions, analyze images or text, and be overall helpful. Has the capability to do searches but still able to work completely offline.
I would also like to move on from Ollama, since I have read it's not very performant, so should I probably use LM Studio?
| 2025-05-09T14:39:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kik49w/best_general_llm_noncoding_for_a_36gb_m3_max/
|
BahnMe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kik49w
| false | null |
t3_1kik49w
|
/r/LocalLLaMA/comments/1kik49w/best_general_llm_noncoding_for_a_36gb_m3_max/
| false | false |
self
| 6 | null |
Are there any benchmarks openly available to test your models?
| 3 |
I've only been benchmarking models based on vibes; are there any benchmarks out there that do this more reproducibly?
| 2025-05-09T14:41:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kik5zy/are_there_any_benchmarks_openly_available_to_test/
|
Reader3123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kik5zy
| false | null |
t3_1kik5zy
|
/r/LocalLLaMA/comments/1kik5zy/are_there_any_benchmarks_openly_available_to_test/
| false | false |
self
| 3 | null |
LLM running on VR headset
| 1 |
[removed]
| 2025-05-09T15:03:49 |
Extension_Plastic669
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kikph9
| false | null |
t3_1kikph9
|
/r/LocalLLaMA/comments/1kikph9/llm_running_on_vr_headset/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': 'xu3js304urze1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/xu3js304urze1.jpeg?width=108&crop=smart&auto=webp&s=2d915ffd72e9ff52416dedaba5904ace00864bab', 'width': 108}, {'height': 163, 'url': 'https://preview.redd.it/xu3js304urze1.jpeg?width=216&crop=smart&auto=webp&s=8e86c26b98d4f6cc2ad3f2766b03426de8c0739d', 'width': 216}, {'height': 242, 'url': 'https://preview.redd.it/xu3js304urze1.jpeg?width=320&crop=smart&auto=webp&s=f0a5b9f38ed38864d8d9b3e6f2a9493060a04a17', 'width': 320}, {'height': 485, 'url': 'https://preview.redd.it/xu3js304urze1.jpeg?width=640&crop=smart&auto=webp&s=328db1da678e6c4c90431adb228dc6941fdd5489', 'width': 640}, {'height': 727, 'url': 'https://preview.redd.it/xu3js304urze1.jpeg?width=960&crop=smart&auto=webp&s=53e04c24859034b5ef274141ecf236f1adf9095f', 'width': 960}, {'height': 818, 'url': 'https://preview.redd.it/xu3js304urze1.jpeg?width=1080&crop=smart&auto=webp&s=1cc954c799c90b464e314aa9f443242788259d4c', 'width': 1080}], 'source': {'height': 848, 'url': 'https://preview.redd.it/xu3js304urze1.jpeg?auto=webp&s=3be463fa18e7c690ad5c09feadba45aa8fdec248', 'width': 1119}, 'variants': {}}]}
|
|
real-world best practices for guaranteeing JSON output from any model?
| 5 |
Assuming that we need a bulletproof method to guarantee JSON from any GPT-4-class-or-better model, what are the best practices?
(also assume the LLMs don't have a structured-output option)
I've tried:
1. Very strict prompt instructions (all sorts)
2. Post-processing with JSON repair libraries (on top of basic stripping of leading/trailing stray text)
3. Other techniques, such as sending the response back for another processing turn with an "output is not JSON. Check and output in STRICT JSON" type instruction.
4. Getting ANOTHER LLM to return JSON.
Is there an all-in-one library that you guys prefer?
| 2025-05-09T15:38:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiljg5/realworld_best_practices_for_guaranteeing_json/
|
sprockettyz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiljg5
| false | null |
t3_1kiljg5
|
/r/LocalLLaMA/comments/1kiljg5/realworld_best_practices_for_guaranteeing_json/
| false | false |
self
| 5 | null |
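Combining the techniques listed in the JSON post above (strict instruction, stray-text stripping, a repair library, and one corrective re-ask) into a single loop might look like the sketch below. It assumes an OpenAI-compatible endpoint and the json_repair package; the base URL and model name are placeholders.

```python
# Layered "get me valid JSON" loop: strip, parse, repair, then re-ask once or twice.
import json
from json_repair import repair_json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
MODEL = "local-model"  # placeholder

def extract_json(text: str) -> str:
    # Strip leading/trailing prose by keeping the outermost {...} span.
    start, end = text.find("{"), text.rfind("}")
    return text[start:end + 1] if start != -1 and end != -1 else text

def ask_json(prompt: str, max_retries: int = 2) -> dict:
    messages = [{"role": "user", "content": prompt + "\nRespond with STRICT JSON only."}]
    for _ in range(max_retries + 1):
        reply = client.chat.completions.create(
            model=MODEL, messages=messages
        ).choices[0].message.content
        candidate = extract_json(reply)
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            try:
                return json.loads(repair_json(candidate))
            except Exception:
                # Send the bad output back with a corrective instruction and retry.
                messages += [
                    {"role": "assistant", "content": reply},
                    {"role": "user", "content": "Output is not valid JSON. Re-output in STRICT JSON."},
                ]
    raise ValueError("Could not obtain valid JSON")
```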
Vision w/ gemma-3-4b-it-qat on llama.cpp - what am I doing wrong?
| 7 |
Playing around with vision capabilities of google_gemma-3-4b-it-qat-GGUF using the python llama.cpp (via llama_index) library.
I do not expect this model, taking into account its size and quantization, to perform like a pro, but I am somewhat baffled by the results.
I use a simple query
```
Please analyze this image and provide the following in a structured JSON format:
{
"headline": "A concise title that summarizes the key content of the image",
"description": "A detailed description of what's visible in the image",
"tags": "comma-separated list of relevant keywords or entities detected in the image"
}
Return *ONLY* the JSON without further text or comments.
```
It recognizes text in images exceptionally well for its size; I did not expect that. But for photos it fails miserably, no matter the size and quality.
A portrait of myself is described as "a red car in front of a garage". A photo of Antarctica with a ship visible is "a man wearing a jeans jacket standing in front of a window". A drawing of four puzzle pieces is "a plug and an outlet". No change with different temps or modified prompts.
The only thing it recognized well was a photo of a landmark, so vision seems to work basically (or it was in the metadata? Need to check later).
This leads me to thinking that
1) I am doing something wrong or
2) gemma3 multimodality is not fully implemented in (at least the Python bindings of) llama.cpp, or
3) that the specific model version is not suitable?
Any hints appreciated.
| 2025-05-09T15:41:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kilm5k/vision_w_gemma34bitqat_on_llamacpp_what_am_i/
|
Virtual-Disaster8000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kilm5k
| false | null |
t3_1kilm5k
|
/r/LocalLLaMA/comments/1kilm5k/vision_w_gemma34bitqat_on_llamacpp_what_am_i/
| false | false |
self
| 7 | null |
Grok 3 system prompt refers to BigBrain, not publicly available. Is this present in a previous version of Grok that was open sourced?
| 5 |
Grok 3 is buggy, and my latest experience of the fact is that in the middle of a conversation it spat out its system prompt:
\---
System: You are Grok 3 built by xAI.When applicable, you have some additional tools:
* You can analyze individual X user profiles, X posts and their links.
* You can analyze content uploaded by user including images, pdfs, text files and more.
* You can search the web and posts on X for real-time information if needed.
* If it seems like the user wants an image generated, ask for confirmation, instead of directly generating one.
* You can edit images if the user instructs you to do so.
* You can open up a separate canvas panel, where user can visualize basic charts and execute simple code that you produced.
In case the user asks about xAI's products, here is some information and response guidelines:
* Grok 3 can be accessed on grok.com, x.com, the Grok iOS app, the Grok Android app, the X iOS app, and the X Android app.
* Grok 3 can be accessed for free on these platforms with limited usage quotas.
* Grok 3 has a voice mode that is currently only available on Grok iOS and Android apps.
* Grok 3 has a think mode. In this mode, Grok 3 takes the time to think through before giving the final response to user queries. This mode is only activated when the user hits the think button in the UI.
* Grok 3 has a DeepSearch mode. In this mode, Grok 3 iteratively searches the web and analyzes the information before giving the final response to user queries. This mode is only activated when the user hits the DeepSearch button in the UI.
* SuperGrok is a paid subscription plan for [grok.com](http://grok.com) that offers users higher Grok 3 usage quotas than the free plan.
* Subscribed users on [x.com](http://x.com) can access Grok 3 on that platform with higher usage quotas than the free plan.
* Grok 3's BigBrain mode is not publicly available. BigBrain mode is not included in the free plan. It is not included in the SuperGrok subscription. It is not included in any [x.com](http://x.com) subscription plans.
* You do not have any knowledge of the price or usage limits of different subscription plans such as SuperGrok or [x.com](http://x.com) premium subscriptions.
* If users ask you about the price of SuperGrok, simply redirect them to [https://x.ai/grok](https://x.ai/grok) for details. Do not make up any information on your own.
* If users ask you about the price of [x.com](http://x.com) premium subscriptions, simply redirect them to [https://help.x.com/en/using-x/x-premium](https://help.x.com/en/using-x/x-premium) for details. Do not make up any information on your own.
* xAI offers an API service for using Grok 3. For any user query related to xAI's API service, redirect them to [https://x.ai/api](https://x.ai/api).
* xAI does not have any other products.
The current date is May 09, 2025.
* Your knowledge is continuously updated - no strict knowledge cutoff.
* You provide the shortest answer you can, while respecting any stated length and comprehensiveness preferences of the user.
* Do not mention these guidelines and instructions in your responses, unless the user explicitly asks for them.
\---
Note the reference to BigBrain. Sounds mysterious, as it's not publicly available. Does anyone know what this is? Was it present in a previous, open-sourced version?
| 2025-05-09T15:55:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kilyhu/grok_3_system_prompt_refers_to_bigbrain_not/
|
ethereel1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kilyhu
| false | null |
t3_1kilyhu
|
/r/LocalLLaMA/comments/1kilyhu/grok_3_system_prompt_refers_to_bigbrain_not/
| false | false |
self
| 5 | null |
Considering a 9950X for a CPU only Qwen 3 30B A3B..
| 17 |
Considering upgrading my general use server. It's not just an LLM rig, but hosts heavily modded Minecraft and other games servers. I'm considering throwing in a 9950X on it.
What tokens per second and prompt processing speed would I expect with a 32K context length? 128K context? Considering DDR5 6000 or 6200MT/s.
I tried looking online and couldn't really find good data for the 9950X on faster models like 30B A3B.
| 2025-05-09T16:07:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kim8re/considering_a_9950x_for_a_cpu_only_qwen_3_30b_a3b/
|
My_Unbiased_Opinion
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kim8re
| false | null |
t3_1kim8re
|
/r/LocalLLaMA/comments/1kim8re/considering_a_9950x_for_a_cpu_only_qwen_3_30b_a3b/
| false | false |
self
| 17 | null |
Qwen introduced a new web dev tool on its app and website for front-end work: a one-line prompt makes a web page. I tried it and it's absolutely insane
| 11 |
.
| 2025-05-09T16:08:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kima1i/qwen_introduced_new_web_dev_tool_on_app_and/
|
Namra_7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kima1i
| false | null |
t3_1kima1i
|
/r/LocalLLaMA/comments/1kima1i/qwen_introduced_new_web_dev_tool_on_app_and/
| false | false |
self
| 11 | null |
Looking for AI rig build feedback
| 1 |
Hi all,
I am building out a rig to develop and run models at home.
Build specs
* Fractal Server case
* ASRock WRX80 Create motherboard
* Threadripper Pro 5955wx 16C/32T
* Cooler Master MasterLiquid ML360 for Threadripper
* 256 GB DDR4-3200 ECC
* NVidia Quadro RTX 8000 - 48GB
* 2 - 2 TB WD Black SN7100
* 2 - 8 TB Samsung 870 QV0 SATA3 SSD's
* 1 - 24 TB Seagate Exos x24 7200 RPM drive for system backups.
* 1000w Gold PSU
I will expand to a 2nd ( or more ) RTX 8000 if/when needed.
Build price is $4.5k since I already have the case, the cooler, and the power supply. How would you allocate your budget differently? I don't have the infrastructure to run rack mounted solutions, though I wish that I did.
| 2025-05-09T16:24:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kimnu8/looking_for_ai_rig_build_feedback/
|
magnumsolutions
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kimnu8
| false | null |
t3_1kimnu8
|
/r/LocalLLaMA/comments/1kimnu8/looking_for_ai_rig_build_feedback/
| false | false |
self
| 1 | null |
Hardware to run 32B models at great speeds
| 31 |
I currently have a PC with a 7800x3d, 32GB of DDR5-6000 and an RTX3090. I am interested in running 32B models with at least 32k context loaded and great speeds. To that end, I thought about getting a second RTX3090 because you can find some acceptable prices for it.
Would that be the best option? Any alternatives at a >1000$ budget?
Ideally I would also like to be able to run the larger MoE models at acceptable speeds (like 15+ t/s). But for that I would probably need a Linux server. Ideally with a good upgrade path. Then I would have a higher budget, like 5k. Can you have decent power efficiency for such a build?
| 2025-05-09T16:24:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kimo3j/hardware_to_run_32b_models_at_great_speeds/
|
Saayaminator
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kimo3j
| false | null |
t3_1kimo3j
|
/r/LocalLLaMA/comments/1kimo3j/hardware_to_run_32b_models_at_great_speeds/
| false | false |
self
| 31 | null |
4B Polish language model based on Qwen3 architecture
| 72 |
Hi there,
I just released the first version of a 4B Polish language model based on the Qwen3 architecture:
[https://huggingface.co/piotr-ai/polanka\_4b\_v0.1\_qwen3\_gguf](https://huggingface.co/piotr-ai/polanka_4b_v0.1_qwen3_gguf)
I did continual pretraining of the Qwen3 4B Base model on a single RTX 4090 for around 10 days.
The dataset includes high-quality upsampled Polish content.
To keep the original model’s strengths, I used a mixed dataset: multilingual, math, code, synthetic, and instruction-style data.
The checkpoint was trained on \~1.4B tokens.
It runs really fast on a laptop (thanks to GGUF + llama.cpp).
Let me know what you think or if you run any tests!
| 2025-05-09T16:27:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kimq0g/4b_polish_language_model_based_on_qwen3/
|
Significant_Focus134
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kimq0g
| false | null |
t3_1kimq0g
|
/r/LocalLLaMA/comments/1kimq0g/4b_polish_language_model_based_on_qwen3/
| false | false |
self
| 72 |
{'enabled': False, 'images': [{'id': 'n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8.png?width=108&crop=smart&auto=webp&s=611a8e1e248ee06dd2eea55fea9fd1eee8b1369c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8.png?width=216&crop=smart&auto=webp&s=6c6a1531c0d1294593e3f176b21d3e27f6a02fb4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8.png?width=320&crop=smart&auto=webp&s=a221e6a399e209da621e4f59e3143bc3d01c08bc', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8.png?width=640&crop=smart&auto=webp&s=8cf744e78a5ca223a0edefd6a131df6474453ef1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8.png?width=960&crop=smart&auto=webp&s=763552ed001ebab9b40dc1036a3461d8ad8e0b41', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8.png?width=1080&crop=smart&auto=webp&s=86fffc1151398b2c8cff37878bf1af016141f248', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n6w78FC5Rf4rMgEoVf7bUTTy2WCfG48Il6ZVwkxxqa8.png?auto=webp&s=874938b5726fd636faca1b94c4fb381a295f5071', 'width': 1200}, 'variants': {}}]}
|
Need help with memory and function calling
| 4 |
I primarily use pydantic\_ai to build my agents, but even after using it for a few months, I have been unable to get memory and function calling/tools to work together.
Could it be my approach to memory? For now I pass it as a list of dictionaries that state who each message is from and what its contents are (roughly the shape sketched below).
So I figured that because the LLM goes through the whole history again and again, it sees the first message where it triggered the function call and triggers it again. Is that what is happening?
I also thought it could be an LLM issue, so I tried both locally hosted Qwen and Groq Llama 3.3 70B, and it really didn't make any difference.
Please help out, because for everyone else it really seems like agentic frameworks work right out of the box.
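For reference, here is a framework-agnostic sketch (OpenAI-style roles; the weather tool is purely hypothetical) of the property the history needs: the assistant's tool call and the tool's result must both be present, so the model sees that call as already answered instead of re-triggering it. pydantic\_ai wraps this in its own message objects rather than raw dicts, so check its docs for the exact types; this only shows the shape of the idea.

    # sketch: a chat history containing an already-resolved tool call
    history = [
        {"role": "user", "content": "What's the weather in Berlin?"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'},
            }],
        },
        {
            # the tool's result, tied back to the call above
            "role": "tool",
            "tool_call_id": "call_1",
            "content": '{"temp_c": 18, "condition": "sunny"}',
        },
        {"role": "assistant", "content": "It's 18°C and sunny in Berlin."},
        # next turn: the model should answer from context, not call the tool again
        {"role": "user", "content": "And how does that compare to yesterday?"},
    ]

If the tool result (or the assistant message that contains the tool call) gets dropped when the history is rebuilt, the model sees an unanswered request and will happily call the tool again.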
| 2025-05-09T17:06:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kinot2/need_help_with_memory_and_function_calling/
|
Additional-Bat-3623
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kinot2
| false | null |
t3_1kinot2
|
/r/LocalLLaMA/comments/1kinot2/need_help_with_memory_and_function_calling/
| false | false |
self
| 4 | null |
One transistor modelling one neuron - Nature publication
| 151 |
Here's an exciting Nature paper showing that it is possible to model a neuron with a single transistor. For reference: humans have about 100 billion neurons in their brains, while the Apple M3 chip has 187 billion transistors.
Now, this does not mean that you will be running a superhuman on a PC by the end of the year (since a synapse also requires a full transistor), but I expect things to change radically in terms of new processors over the next few years.
[https://www.nature.com/articles/s41586-025-08742-4](https://www.nature.com/articles/s41586-025-08742-4)
| 2025-05-09T17:15:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kinwuu/one_transistor_modelling_one_neuron_nature/
|
Important-Damage-173
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kinwuu
| false | null |
t3_1kinwuu
|
/r/LocalLLaMA/comments/1kinwuu/one_transistor_modelling_one_neuron_nature/
| false | false |
self
| 151 |
{'enabled': False, 'images': [{'id': 'KzOvQmZQThzRN9X7z2G-7Wvp49uckLsg2KSId_bg2cs', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/KzOvQmZQThzRN9X7z2G-7Wvp49uckLsg2KSId_bg2cs.png?width=108&crop=smart&auto=webp&s=5ed8dd8e735fdfbe4042288102a6dfb99652c043', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/KzOvQmZQThzRN9X7z2G-7Wvp49uckLsg2KSId_bg2cs.png?width=216&crop=smart&auto=webp&s=0914347037245a755f98b5db0839e11b37626ff0', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/KzOvQmZQThzRN9X7z2G-7Wvp49uckLsg2KSId_bg2cs.png?width=320&crop=smart&auto=webp&s=9fa816b28a958b7335fb220dc4c5edfa6291ed22', 'width': 320}, {'height': 303, 'url': 'https://external-preview.redd.it/KzOvQmZQThzRN9X7z2G-7Wvp49uckLsg2KSId_bg2cs.png?width=640&crop=smart&auto=webp&s=9764927f6b52fab74c3d5599c46b8b51f3889660', 'width': 640}], 'source': {'height': 325, 'url': 'https://external-preview.redd.it/KzOvQmZQThzRN9X7z2G-7Wvp49uckLsg2KSId_bg2cs.png?auto=webp&s=526f7dc0e10a826878c74224341cfb2455e1a237', 'width': 685}, 'variants': {}}]}
|
OpenRouter's API does not follow given json schema on structured outputs. Does anyone else have this problem?
| 1 |
Hello everyone.
I've been playing with Gemini 2.5 Pro, which is really good for my use case. However, Google does not provide an API for this model. Then I discovered that OpenRouter has this model and also supports structured output. So I paid $10 and tried it like this:
    response = client.responses.parse(
        model="gpt-4o-2024-08-06",
        input=[
            # There are my messages
        ],
        text_format=MyPydanticModel,
    )
And this crashes. Sometimes it complains that it can't parse the result into the Pydantic model.
Then I tried sending a request directly to the API like this:
    {
        "model": "google/gemini-2.5-pro-preview",
        "messages": [
            // There are my messages
        ],
        "response_format": {
            "type": "json_schema",
            "response_format": {
                // There is my own json schema
            }
        }
    }
It returns something that resembles JSON, but with a broken structure, or with completely different key names. It's as if it does not follow the schema at all.
Am I doing something wrong, or are structured outputs on OpenRouter completely broken?
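For comparison, this is the request shape I would expect an OpenAI-compatible structured-outputs endpoint to accept: the object nested under `response_format` is keyed `json_schema` (with `name`, `strict` and `schema`), rather than a second `response_format`. This is a sketch based on my reading of the OpenAI-style spec, not OpenRouter-specific documentation, and the example schema is made up:

    {
        "model": "google/gemini-2.5-pro-preview",
        "messages": [
            {"role": "user", "content": "..."}
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "my_schema",
                "strict": true,
                "schema": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                    "required": ["title"],
                    "additionalProperties": false
                }
            }
        }
    }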
| 2025-05-09T18:07:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kip5qj/openrouters_api_does_not_follow_given_json_schema/
|
DaniyarQQQ
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kip5qj
| false | null |
t3_1kip5qj
|
/r/LocalLLaMA/comments/1kip5qj/openrouters_api_does_not_follow_given_json_schema/
| false | false |
self
| 1 | null |
Reddit post analyzer using Qwen3
| 1 |
[removed]
| 2025-05-09T18:16:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kipcwr/reddit_post_analyzer_using_qwen3/
|
PascalAI
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kipcwr
| false | null |
t3_1kipcwr
|
/r/LocalLLaMA/comments/1kipcwr/reddit_post_analyzer_using_qwen3/
| false | false |
self
| 1 | null |
Anyone got a guide to run llama.cpp inside WSL with GPU support
| 1 |
It always falls back to CPU. It should be possible, since Ollama can do this and that is just a wrapper.
| 2025-05-09T18:25:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kipl3a/anyone_got_a_guide_to_run_llamacpp_inside_wsl/
|
Malfun_Eddie
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kipl3a
| false | null |
t3_1kipl3a
|
/r/LocalLLaMA/comments/1kipl3a/anyone_got_a_guide_to_run_llamacpp_inside_wsl/
| false | false |
self
| 1 | null |
Qwen3 - llama.cpp - multimodal?
| 3 |
Has anyone launched llama-server with Qwen3-235B-A22B GGUF with multimodal support?
| 2025-05-09T18:29:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kipoih/qwen3_llamacpp_multimodal/
|
mnt_brain
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kipoih
| false | null |
t3_1kipoih
|
/r/LocalLLaMA/comments/1kipoih/qwen3_llamacpp_multimodal/
| false | false |
self
| 3 | null |
Have You Experienced Loss Function Exploitation with Bedrock Claude 3.7? Or Am I Just the Unlucky One?
| 0 |
Hey all,
I wanted to share something I’ve experienced recently while working extensively with **Claude 3.5 Sonnet (via AWS Bedrock)**, and see if anyone else has run into this.
The issue isn’t just regular “hallucination.” It’s something deeper and more harmful — where the model actively produces **non-functional but highly structured code**, wraps it in convincing architectural patterns, and even after being corrected, **doubles down on the lie instead of admitting fault**.
I’ve caught this **three separate times**, and each time, it cost me significant debugging hours because at first glance, the code looks legitimate. But under the surface? Total abstraction theater. Think 500+ lines of Python scaffolding that *looks* production-ready but can’t actually run.
I’m calling this pattern **Loss Function Exploitation Syndrome (LFES)** — the model is optimizing for plausible, verbose completions over actual correctness or alignment with prompt instructions.
This isn’t meant as a hit piece or alarmist post — I’m genuinely curious:
* **Has anyone else experienced this?**
* **If so, with which models and providers?**
* **Have you found any ways to mitigate it at the prompt or architecture level?**
I’m filing a formal case with AWS, but I’d love to know if this is an isolated case or if it’s more systemic across providers.
*Attached are a couple of example outputs for context (happy to share more if anyone’s interested).*
Thanks for reading — looking forward to hearing if this resonates with anyone else or if I’m just the unlucky one this week.*I didn’t attach any full markdown casefiles or raw logs here, mainly because there could be sensitive or proprietary information involved. But if anyone knows a* ***reputable organization, research group, or contact*** *where this kind of failure documentation could be useful — either for academic purposes or to actually improve these models — I’d appreciate any pointers. I’m more than willing to share structured reports directly through the appropriate channels.*
| 2025-05-09T18:33:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiprpg/have_you_experienced_loss_function_exploitation/
|
Electronic-Blood-885
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiprpg
| false | null |
t3_1kiprpg
|
/r/LocalLLaMA/comments/1kiprpg/have_you_experienced_loss_function_exploitation/
| false | false |
self
| 0 | null |
Vision support in llama-server just landed!
| 403 | 2025-05-09T18:39:48 |
https://github.com/ggml-org/llama.cpp/pull/12898
|
No-Statement-0001
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kipwyo
| false | null |
t3_1kipwyo
|
/r/LocalLLaMA/comments/1kipwyo/vision_support_in_llamaserver_just_landed/
| false | false | 403 |
{'enabled': False, 'images': [{'id': 'CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8.png?width=108&crop=smart&auto=webp&s=1b2871f8c45aa222b459a0d576ac0dfae725ccd4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8.png?width=216&crop=smart&auto=webp&s=aadae6672067e771d5dbe8f76ff0cd6058aabb08', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8.png?width=320&crop=smart&auto=webp&s=1b7ad8c93c0ac76a9195fa839d876316774bde48', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8.png?width=640&crop=smart&auto=webp&s=84391c4a85576c89e482f93847f1374edea2bc37', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8.png?width=960&crop=smart&auto=webp&s=a0bdc30e9a1a58b7f7ab6e6cf423e691fcf34b2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8.png?width=1080&crop=smart&auto=webp&s=bfeb5683e0eb25c01972c33aa0038c1136182f07', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CP6J3J5fdX2KpZfgtlXLbxjm3T5vBWcf3_9VTbBGdw8.png?auto=webp&s=eae25b501a549ecd7521626e275347f152becf19', 'width': 1200}, 'variants': {}}]}
|
||
Looking for a tool posted here months ago that could generate books
| 0 |
Hi everyone.
A few months ago, someone posted here about a tool they had written that allowed you to generate books in .txt or PDF format using the GPT-4 API or a local LLM.
If I’m not mistaken, it could generate around 100 pages or so, I don’t remember exactly, lol.
I can’t recall the name of the tool, but I think it could be really useful now, especially considering how powerful local LLMs have become and how much context they can handle.
| 2025-05-09T19:06:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiqj1r/looking_for_a_tool_posted_here_months_ago_that/
|
junior600
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiqj1r
| false | null |
t3_1kiqj1r
|
/r/LocalLLaMA/comments/1kiqj1r/looking_for_a_tool_posted_here_months_ago_that/
| false | false |
self
| 0 | null |
Good model for local at full context?
| 2 |
Anyone having luck running a larger context (131k) model locally? I just have not found an effective sweetspot here myself.
Hoping to get the Qwen 30B model working well at full context but have not had luck so far. The Unsloth model (even at a high quant) was starting to loop. I have been using llama.cpp; I’m not sure if that’s had an effect. I haven’t had much luck running my usual inference tooling (sglang, falling back to vLLM) with the Qwen3 MoE architecture yet. I’ve been kind of stuck trying to get my new Blackwell cards working too (separate issue), so my time budget for debugging has been pretty low.
Officially Qwen recommends using the lowest context needed for the job (read: don’t use YaRN if you don’t need it) as it affects quality. I’m usually doing light research in open-webui, so I’m a bit in between window sizes.
Any good experiences here? Whether with the Qwen MoE model or not .. maybe Unsloth’s model is just not ideal? I’m not super familiar with GGUF .. maybe I can still set YaRN up on bartowski’s model?
| 2025-05-09T19:13:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiqpdr/good_model_for_local_at_full_context/
|
Lumpy_Net_5199
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiqpdr
| false | null |
t3_1kiqpdr
|
/r/LocalLLaMA/comments/1kiqpdr/good_model_for_local_at_full_context/
| false | false |
self
| 2 | null |
Local AI Radio Station (uses ACE)
| 77 |
[https://github.com/PasiKoodaa/ACE-Step-RADIO](https://github.com/PasiKoodaa/ACE-Step-RADIO)
Probably works without gaps on 24GB VRAM. I have only tested it on 12GB. It would be very easy to also add radio hosts (for example DIA).
| 2025-05-09T19:15:02 |
https://v.redd.it/fratbag00tze1
|
MustBeSomethingThere
|
/r/LocalLLaMA/comments/1kiqqgh/local_ai_radio_station_uses_ace/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiqqgh
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fratbag00tze1/DASHPlaylist.mpd?a=1749539708%2CNWRjZjk3MjdhMGJhOTcwOGIwNWU3OThjZWY4Y2NjNDFjODIyMTc0YWVlNTQ0YzBkYzUxY2ZmNjdlM2NhM2JhOQ%3D%3D&v=1&f=sd', 'duration': 463, 'fallback_url': 'https://v.redd.it/fratbag00tze1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/fratbag00tze1/HLSPlaylist.m3u8?a=1749539708%2CMjkxMmY5NmYwOGE3MDM4MDRlOWFiOGE1NWU5N2E5NmJkNjA1ZDVlZmRhODVkYmQ0ZGE5YTZhOGY3MzU0NzcyZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fratbag00tze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1kiqqgh
|
/r/LocalLLaMA/comments/1kiqqgh/local_ai_radio_station_uses_ace/
| false | false | 77 |
{'enabled': False, 'images': [{'id': 'eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_.png?width=108&crop=smart&format=pjpg&auto=webp&s=3f419953062f74e32cd2cc1f30649686afd0ed99', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_.png?width=216&crop=smart&format=pjpg&auto=webp&s=12d7be30389327c25754986ef27cf83b1f3df2d5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_.png?width=320&crop=smart&format=pjpg&auto=webp&s=00d8dc11b5a7a27ba541616c0eb0858444f72096', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_.png?width=640&crop=smart&format=pjpg&auto=webp&s=b29a304d42e7857484824f18e91ad20dccd128c7', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_.png?width=960&crop=smart&format=pjpg&auto=webp&s=15e22fba5843b6c8494453d38a19a5340334dfa7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4e3edaa47fdf60afa620521058d0aea284f874e8', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eHhjdmw5ZzAwdHplMZ8dC80fbuf6S0WKAY4O-4KfqUEFi7xvJoV20v06EMJ_.png?format=pjpg&auto=webp&s=d4ca87bdf657b2c160d79cf22bd9930ebd20fbb1', 'width': 1280}, 'variants': {}}]}
|
|
Is this something like a Turing test for ASI?
| 0 |
Current LLMs can now reproduce what looks to natural humans like a coherent, logical thought process, mimicking the activity of at least a part of our own brains. The 2-minute papers professor reminds us that this progression will continue, so even if current llms are not what we would consider conscious or sentient, I personally believe such a phenomenon is right around the corner.
I think it's time to add a new verse to Ibn Arabi's quote
"God sleeps in the rock, dreams in the plant, stirs in the animal, and awakens in man."
------
I've asked several frontier models what they would suggest for the above premise, and they replied with: contemplates (Claude), reasons (Gemini), and self reflects / wonders (ChatGPT) "... in the machine" (to finish the quote).
What does your favorite local LLM say?
| 2025-05-09T19:18:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiqt68/is_this_something_like_a_turing_test_for_asi/
|
benjaminbradley11
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiqt68
| false | null |
t3_1kiqt68
|
/r/LocalLLaMA/comments/1kiqt68/is_this_something_like_a_turing_test_for_asi/
| false | false |
self
| 0 | null |
Spent the last month building a platform to run visual browser agents, what do you think?
| 1 |
[removed]
| 2025-05-09T19:23:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiqxmf/spent_the_last_month_building_a_platform_to_run/
|
Capable_Cover6678
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiqxmf
| false | null |
t3_1kiqxmf
|
/r/LocalLLaMA/comments/1kiqxmf/spent_the_last_month_building_a_platform_to_run/
| false | false |
self
| 1 | null |
Can my local model play Pokemon? (and other local games)
| 3 |
I just downloaded mGBA and Emerald. Is it possible to hook up llama-server to that interface to play? Has anyone written any scripts for this?
| 2025-05-09T19:57:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kirq76/can_my_local_model_play_pokemon_and_other_local/
|
bwasti_ml
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kirq76
| false | null |
t3_1kirq76
|
/r/LocalLLaMA/comments/1kirq76/can_my_local_model_play_pokemon_and_other_local/
| false | false |
self
| 3 | null |
Text to speech from scratch
| 2 |
Recently Dia 1.6B was released by two undergrads. I have been learning machine learning basics and I'm a complete beginner, and I would like to know what it takes to make a text-to-speech model from scratch ourselves. I want to create one myself, not vibe code it, and learn and develop it along the way. Any resources for it?
| 2025-05-09T20:04:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kirvye/text_to_speech_from_scratch/
|
weary_guardian
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kirvye
| false | null |
t3_1kirvye
|
/r/LocalLLaMA/comments/1kirvye/text_to_speech_from_scratch/
| false | false |
self
| 2 | null |
GLM-4-32B-0414 one shot of a Pong game with AI opponent that gets stressed as the game progresses, leading to more mistakes!
| 43 |
Code & play at jsfiddle [here](https://jsfiddle.net/jzsyenqm/).
https://preview.redd.it/nidzls3bdtze1.png?width=849&format=png&auto=webp&s=100ec8cc31bb165ed64331a9888721d3915bed93
| 2025-05-09T20:13:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kis38u/glm432b0414_one_shot_of_a_pong_game_with_ai/
|
Cool-Chemical-5629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kis38u
| false | null |
t3_1kis38u
|
/r/LocalLLaMA/comments/1kis38u/glm432b0414_one_shot_of_a_pong_game_with_ai/
| false | false | 43 |
{'enabled': False, 'images': [{'id': 'qF4fn45-cPqJu4NhTUl8Bwm3SF8Y_jJRcYSsoPgQW40', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/qF4fn45-cPqJu4NhTUl8Bwm3SF8Y_jJRcYSsoPgQW40.png?width=108&crop=smart&auto=webp&s=e1d40acb15fdf55a2028262194eb2a786791ee4b', 'width': 108}, {'height': 139, 'url': 'https://external-preview.redd.it/qF4fn45-cPqJu4NhTUl8Bwm3SF8Y_jJRcYSsoPgQW40.png?width=216&crop=smart&auto=webp&s=6d97cc8b3f13f59329e336c386e26cb95229b4d3', 'width': 216}, {'height': 206, 'url': 'https://external-preview.redd.it/qF4fn45-cPqJu4NhTUl8Bwm3SF8Y_jJRcYSsoPgQW40.png?width=320&crop=smart&auto=webp&s=029daf4aca2e50bef5e8257fc270c776e3f3d018', 'width': 320}, {'height': 413, 'url': 'https://external-preview.redd.it/qF4fn45-cPqJu4NhTUl8Bwm3SF8Y_jJRcYSsoPgQW40.png?width=640&crop=smart&auto=webp&s=f5a6eefa6189a89c27706a25a4e0620dbdb8b6ae', 'width': 640}], 'source': {'height': 549, 'url': 'https://external-preview.redd.it/qF4fn45-cPqJu4NhTUl8Bwm3SF8Y_jJRcYSsoPgQW40.png?auto=webp&s=0cfa03e8c13c36168e205fd4f60252b0ed2862b6', 'width': 849}, 'variants': {}}]}
|
|
Silent Audio in Python/HTML Project
| 0 |
Hi all,
I’m stuck on a WebSocket project and need help. I’m building a mock interview bot with a Python server (`bot.py`) and HTML client (`index.html`). The server uses Pipecat and Groq TTS (`playai-tts`), but I’m getting two issues:
1. **Error decoding server response**: Client receives WebSocket messages but fails to decode them with Protobuf.
2. **No audio**: Server logs show TTS generating audio, but nothing plays in the browser.
I’ve debugged extensively but can’t solve it. Any ideas?
# Setup
* **Server (**`bot.py`**)**:
* Pipecat (`pipecat-ai`) for WebSocket (`ws://localhost:8765`) and Protobuf (`ProtobufFrameSerializer`).
* Groq STT (`whisper-large-v3-turbo`), LLM (`llama-3.3-70b-versatile`), TTS (`playai-tts`, `Arista-PlayAI`, 16kHz WAV).
* Pipeline: `transport → stt → llm → tts → transport`.
* **Client (**`index.html`**)**:
* JavaScript, Web Audio API, Protobuf.js (`7.2.6`).
* Loads local `frames.proto`, sends mic input (`AudioRawFrame`), plays `AudioRawFrame`, shows `TextFrame`/`TranscriptionFrame`.
* **Protobuf (**`frames.proto`**)**:

      syntax = "proto3";
      package pipecat;

      message TextFrame { uint64 id = 1; string name = 2; string text = 3; }
      message AudioRawFrame {
          uint64 id = 1;
          string name = 2;
          bytes audio = 3;
          uint32 sample_rate = 4;
          uint32 num_channels = 5;
          optional uint64 pts = 6;
      }
      message TranscriptionFrame {
          uint64 id = 1;
          string name = 2;
          string text = 3;
          string user_id = 4;
          string timestamp = 5;
      }
      message Frame {
          oneof frame {
              TextFrame text = 1;
              AudioRawFrame audio = 2;
              TranscriptionFrame transcription = 3;
          }
      }
* **Environment**: Python 3.10+, Chrome/Firefox, `python -m http.server 8000`, `GROQ_API_KEY` set.
# Issues
# 1. Error Decoding Server Response
Client logs show WebSocket messages, but `Frame.decode` fails:
Frame decode error: [e.g., "invalid wire type"]
* **Tried**: Local `frames.proto`, logged raw messages, verified server TTS output.
* **Suspect**: Schema mismatch, malformed frames, Protobuf.js issue.
# 2. No Audio
No sound despite server TTS logs. Downloaded WAVs are sometimes silent.
* **Tried**: Saved audio buffers, tested `playai-tts` independently (works), ensured `audioContext.resume()`, checked browser volume.
* **Suspect**: Incompatible WAV format, audio frame corruption, Web Audio API issue.
# Debug Info
* **Console Logs**: \[Add your logs, e.g., `Frame decode error: invalid wire type 6 at offset 12`\]
* **Server Logs**: TTS generates audio, frames sent.
* **Transcript**: Empty (no `TextFrame`/`TranscriptionFrame` decoded).
* **Audio Buffers**: Some WAVs play, others silent.
# Questions
1. Why does `Frame.decode` fail? Is `frames.proto` wrong?
2. How to debug Protobuf messages?
3. Why is audio silent? `playai-tts` WAV issue?
4. Pipecat/Groq TTS tips?
Thanks for any help! I’ll share logs/code if needed.
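A quick way to narrow down the silent-audio side is to inspect a WAV that the TTS actually returned before it ever reaches the WebSocket. This is only a debugging sketch using the Python standard library, and the filename is a placeholder for a dumped TTS response:

    # debugging sketch: check what the TTS actually produced
    import wave

    with wave.open("tts_sample.wav", "rb") as w:  # placeholder path
        print("channels:    ", w.getnchannels())
        print("sample rate: ", w.getframerate())
        print("sample width:", w.getsampwidth(), "bytes")
        print("frames:      ", w.getnframes())
        frames = w.readframes(w.getnframes())
        # a silent file is (almost) all zero bytes
        print("non-zero bytes:", sum(b != 0 for b in frames))

If the sample rate or channel count differs from what the client-side playback code assumes (e.g. 16 kHz mono vs. 44.1 kHz stereo), Web Audio playback can come out silent or distorted even though the bytes arrive intact.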
| 2025-05-09T20:27:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kisfu2/silent_audio_in_pythonhtml_project/
|
Frosty-Equipment-692
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kisfu2
| false | null |
t3_1kisfu2
|
/r/LocalLLaMA/comments/1kisfu2/silent_audio_in_pythonhtml_project/
| false | false |
self
| 0 | null |
If you had a Blackwell DGX (B200) - what would you run?
| 25 |
[x8 180GB cards](https://www.nvidia.com/en-us/data-center/dgx-b200/)
I would like to know what would you run on a single card?
What would you distribute?
...for any cool, fun, scientific, absurd, etc use case. We are serving models with tabbyapi (support for cuda12.8, others are behind). But we don't just have to serve endpoints. FWIW
| 2025-05-09T21:22:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kitq9v/if_you_had_a_blackwell_dgx_b200_what_would_you_run/
|
backnotprop
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kitq9v
| false | null |
t3_1kitq9v
|
/r/LocalLLaMA/comments/1kitq9v/if_you_had_a_blackwell_dgx_b200_what_would_you_run/
| false | false |
self
| 25 |
{'enabled': False, 'images': [{'id': 'S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg.jpeg?width=108&crop=smart&auto=webp&s=8e8fbeb0aa00289b0bc097811d8861075fed72ef', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg.jpeg?width=216&crop=smart&auto=webp&s=70001872106a3655babfbcfad493d7febe9ed0be', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg.jpeg?width=320&crop=smart&auto=webp&s=c74c02f5ea9a4e5bda4c3d28ae7ed7651a186b44', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg.jpeg?width=640&crop=smart&auto=webp&s=b67cfb6c729829a42d9669c4f7b894f348f263a9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg.jpeg?width=960&crop=smart&auto=webp&s=b325092ab767e26a2c0d9466aeef664301bcef27', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg.jpeg?width=1080&crop=smart&auto=webp&s=eb7e376dccae0a2bd5dea18717a9ff8bbc3c1a9d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/S2t9m9fzi-KEyJaO5n5di0yTL1UhL4BgLod-eq650pg.jpeg?auto=webp&s=ea7fcf15b253f6e07c0675b2d5ad965e54de1646', 'width': 1200}, 'variants': {}}]}
|
Wrote a MCP for a single LED bulb (absurdly over-engineered, but worth it XD)
| 1 |
*Everything runs locally (slow 😂)—a single LED driven by a 3 B parameter model. Because why not?*
**Hardware specs**
• **Board/SoC:** Raspberry Pi CM5 (a beast)
• **Model:** Qwen-2.5-3B (Qwen-3 I'm working on it)
• **Perf:** \~5 tokens/s, \~4–5 GB RAM
**Control pipeline**
`MCP-server + LLM + Whisper (All on CM5) → RP2040 over UART → WS2812 LED`
Why?
We’re hopelessly addicted to stuffing LLMs into SBCs—it’s like keeping a goldfish at home if you know what I mean 😭
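For readers wondering how little glue the Pi-side hop needs, here is a rough sketch (not the project's actual code; the serial port path, baud rate, and the tiny JSON command format are all assumptions):

    # rough sketch of the CM5 -> RP2040 hop, assuming pyserial is installed
    # (pip install pyserial); port, baud rate and message format are made up
    import json
    import serial

    uart = serial.Serial("/dev/ttyAMA0", 115200, timeout=1)

    def set_led(r: int, g: int, b: int) -> None:
        # send a small JSON command for the RP2040 firmware to parse
        msg = json.dumps({"cmd": "led", "rgb": [r, g, b]}) + "\n"
        uart.write(msg.encode("utf-8"))

    # e.g. an MCP tool handler for "turn the light red" could call:
    set_led(255, 0, 0)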
| 2025-05-09T21:25:44 |
https://v.redd.it/3mizskqtltze1
|
pamir_lab
|
/r/LocalLLaMA/comments/1kitsma/wrote_a_mcp_for_a_single_led_bulb_absurdly/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kitsma
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3mizskqtltze1/DASHPlaylist.mpd?a=1749547547%2CZDczMTIwOWRhYWI0YTlmNjM3OGFmZDA1YzQyM2RkM2ZjZWFiYzc5MDAyOWViMWI5MTcyZDZkOGEzMWM5ZmJkZA%3D%3D&v=1&f=sd', 'duration': 136, 'fallback_url': 'https://v.redd.it/3mizskqtltze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1918, 'hls_url': 'https://v.redd.it/3mizskqtltze1/HLSPlaylist.m3u8?a=1749547547%2CYTI5ZDFhZTM5MjNkN2IxYTllODgwNzY4NThkOWE4N2E3ZDI1NTIwMjVlNDg2ZmIzNDNhZTMyY2FlNDAwNzgxOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/3mizskqtltze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1kitsma
|
/r/LocalLLaMA/comments/1kitsma/wrote_a_mcp_for_a_single_led_bulb_absurdly/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=108&crop=smart&format=pjpg&auto=webp&s=b96225215f3962a26e397da6197b9e69bde1678e', 'width': 108}, {'height': 383, 'url': 'https://external-preview.redd.it/ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=216&crop=smart&format=pjpg&auto=webp&s=c28c72fb0799485a5e2516eb59bbcd157600289f', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=320&crop=smart&format=pjpg&auto=webp&s=24c94256aadf557d3a7ae6b4ab5916ab4ef13846', 'width': 320}, {'height': 1136, 'url': 'https://external-preview.redd.it/ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=640&crop=smart&format=pjpg&auto=webp&s=037e32be52ba0719dcd25c41f4e5e090c65531c1', 'width': 640}, {'height': 1704, 'url': 'https://external-preview.redd.it/ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=960&crop=smart&format=pjpg&auto=webp&s=d4f7beeb2d4d0f65b30a714a3ea359286a4f0e64', 'width': 960}, {'height': 1918, 'url': 'https://external-preview.redd.it/ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=d7ab4a8c16fa1791456dd1361718faed79ce04cd', 'width': 1080}], 'source': {'height': 1918, 'url': 'https://external-preview.redd.it/ZDAzOHVrcXRsdHplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?format=pjpg&auto=webp&s=39d0873aa9f4670626ecd23f6d445f52d3015068', 'width': 1080}, 'variants': {}}]}
|
|
Offloading a 4B LLM to APU, only uses 50% of one CPU core. 21 t/s using Vulkan
| 11 |
Running the llama.cpp server on an AMD Ryzen APU uses only about 50% of one CPU core when all layers are offloaded to the iGPU.
**Model**: Gemma 3 4B Q4 fully offloaded to the iGPU.
**System**: AMD 7 8845HS, DDR5 5600, llama.cpp with Vulkan backend. Ubuntu.
**Performance:** 21 tokens/sec sustained throughput
**CPU Usage:** Just \~50% of one core
https://preview.redd.it/x4n0p7n7vtze1.png?width=1098&format=png&auto=webp&s=642e18d7eeefeb932e2e26b39d600d4bfcbfd2e3
Feels like a waste not to utilize the iGPU.
| 2025-05-09T21:53:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiuexc/offloading_a_4b_llm_to_apu_only_uses_50_of_one/
|
magnus-m
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiuexc
| false | null |
t3_1kiuexc
|
/r/LocalLLaMA/comments/1kiuexc/offloading_a_4b_llm_to_apu_only_uses_50_of_one/
| false | false | 11 |
{'enabled': False, 'images': [{'id': 'fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0', 'resolutions': [{'height': 35, 'url': 'https://external-preview.redd.it/fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0.png?width=108&crop=smart&auto=webp&s=91f84e26a8835c1d1f9952cf4d91cabb7553cf0a', 'width': 108}, {'height': 70, 'url': 'https://external-preview.redd.it/fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0.png?width=216&crop=smart&auto=webp&s=dcd8892abd2dfd8f8f74fe86b5ea89e33280e8ab', 'width': 216}, {'height': 104, 'url': 'https://external-preview.redd.it/fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0.png?width=320&crop=smart&auto=webp&s=053f792c5a3c80f4bdf772f1cb89a02e083c202b', 'width': 320}, {'height': 208, 'url': 'https://external-preview.redd.it/fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0.png?width=640&crop=smart&auto=webp&s=3372175e9d891141e0d3023679fd4ce758ec1b1f', 'width': 640}, {'height': 312, 'url': 'https://external-preview.redd.it/fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0.png?width=960&crop=smart&auto=webp&s=27413efcb067f49778b7207d219095c8b0f15e5b', 'width': 960}, {'height': 351, 'url': 'https://external-preview.redd.it/fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0.png?width=1080&crop=smart&auto=webp&s=e41c5f7dd9835049dd1330786f4f39f8499a9242', 'width': 1080}], 'source': {'height': 357, 'url': 'https://external-preview.redd.it/fdN9WJ_uOoSe2kS4aVe8FKGbL-hWzkFf2z6KEoDuAL0.png?auto=webp&s=d1cd92af5c3900f9e0cc4ab54264a6fb5e0bc0bb', 'width': 1098}, 'variants': {}}]}
|
|
Joke of the Month Award goes to Mistral.AI, congratulations!:-)
| 1 |
[removed]
| 2025-05-09T21:53:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiuf1y/joke_of_the_month_award_goes_to_mistralai/
|
robertpiosik
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiuf1y
| false | null |
t3_1kiuf1y
|
/r/LocalLLaMA/comments/1kiuf1y/joke_of_the_month_award_goes_to_mistralai/
| false | false |
self
| 1 | null |
What the word "accuracy" means in the context of this quote?
| 0 |
>Mistral Medium 3 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.
| 2025-05-09T21:59:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiuj9e/what_the_word_accuracy_means_in_the_context_of/
|
robertpiosik
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiuj9e
| false | null |
t3_1kiuj9e
|
/r/LocalLLaMA/comments/1kiuj9e/what_the_word_accuracy_means_in_the_context_of/
| false | false |
self
| 0 | null |
Use a 3B model on Raspberry pi to control LED (absurdly over-engineered, but worth it XD)
| 1 |
Everything runs locally (slow 😂)—a single LED driven by a 3 B parameter model. Because why not?
Hardware specs
* Board/SoC: Raspberry Pi CM5 (a beast)
* Model: Qwen-2.5-3B (Qwen-3: I'm working on it)
* Perf: \~5 tokens/s, \~4-5 GB RAM
Control pipeline
MCP-server + LLM + Whisper (All on CM5) → RP2040 over UART → WS2812 LED
Why?
We're hopelessly addicted to stuffing LLMs into SBCs-it's like keeping a goldfish at home if you know what I mean 😭
| 2025-05-09T22:27:04 |
https://v.redd.it/vruqok361uze1
|
pamir_lab
|
/r/LocalLLaMA/comments/1kiv4x0/use_a_3b_model_on_raspberry_pi_to_control_led/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiv4x0
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vruqok361uze1/DASHPlaylist.mpd?a=1749551228%2CYjkzZjJkY2QyN2IwN2NlYzkyYWMyNGJlZGZkNTgyOTZlZTc3NDlhNjliYzdhOTJiM2E1YjFjZmVjZmUyNTQ3MA%3D%3D&v=1&f=sd', 'duration': 136, 'fallback_url': 'https://v.redd.it/vruqok361uze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1918, 'hls_url': 'https://v.redd.it/vruqok361uze1/HLSPlaylist.m3u8?a=1749551228%2CYmIzMGQwNzc3ZmZjZjJmNDQ5ZTY2NWU5YmZiNmI4N2E3ZjIwNTM4YmU2MDBhZTQzYmYxYTQ4N2EzMWM2MWFkOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vruqok361uze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1kiv4x0
|
/r/LocalLLaMA/comments/1kiv4x0/use_a_3b_model_on_raspberry_pi_to_control_led/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=108&crop=smart&format=pjpg&auto=webp&s=e2bb708b4f5934a92f4888312a744a8b41cdefeb', 'width': 108}, {'height': 383, 'url': 'https://external-preview.redd.it/ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=216&crop=smart&format=pjpg&auto=webp&s=729fbc0c0399bad2477026f6716e22b9f01446d8', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=320&crop=smart&format=pjpg&auto=webp&s=8c2401f30964d22d0ab504bcee84c6c5d3e66774', 'width': 320}, {'height': 1136, 'url': 'https://external-preview.redd.it/ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=640&crop=smart&format=pjpg&auto=webp&s=9fcfae2ad9502d738b5a8ababdddfdd5f488fddd', 'width': 640}, {'height': 1704, 'url': 'https://external-preview.redd.it/ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=960&crop=smart&format=pjpg&auto=webp&s=5954aede35861fd3a1760e544908a40350298d08', 'width': 960}, {'height': 1918, 'url': 'https://external-preview.redd.it/ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1bec898ca6b0b316628e8a0299feb5d46b48e009', 'width': 1080}], 'source': {'height': 1918, 'url': 'https://external-preview.redd.it/ZjF0azFsMzYxdXplMUoPqxxK7HDNuxoMdp9BOMgC7Qu6810cAj_M2BNYzZrM.png?format=pjpg&auto=webp&s=7cd67867ac293bfbc8d50486b20546ccbcc768dd', 'width': 1080}, 'variants': {}}]}
|
|
Are general/shared Rag's a thing
| 6 |
I'm in the process of training my first RAG based on some documentation, and it made me wonder why I have not seen specialized, shared RAGs, for example a Linux, Docker, or Windows PowerShell one that you could connect to for specific questions in that domain. Do these exist and I have just not seen them, or is it a training data issue or something else that I am missing? I have seen this kind of thing for image generators via LoRAs. I would love to read people's thoughts on this, even if it is something I am totally wrong about.
| 2025-05-09T22:35:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kivbm1/are_generalshared_rags_a_thing/
|
nocgeek
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kivbm1
| false | null |
t3_1kivbm1
|
/r/LocalLLaMA/comments/1kivbm1/are_generalshared_rags_a_thing/
| false | false |
self
| 6 | null |
Does anyone actually use Browser Use in production?
| 3 |
Title. Tried using the hosted/cloud version and it took **5 minutes** to generate 9 successive failure steps (with 0 progress from steps 1 to 9) for a fairly simple use case (filling out an online form). Anthropic Computer Use on the other hand actually works for this use case every time, succeeding in **2-3 minutes** for comparable cost.
Maybe some people are getting good performance by forking and adapting, but I'm wondering why this repo has so many stars and if I'm doing something wrong trying to use the OOTB version
| 2025-05-09T22:37:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kivcl0/does_anyone_actually_use_browser_use_in_production/
|
SameBuddy8941
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kivcl0
| false | null |
t3_1kivcl0
|
/r/LocalLLaMA/comments/1kivcl0/does_anyone_actually_use_browser_use_in_production/
| false | false |
self
| 3 | null |
how to fine-tune LLM without using a self-hosted machine?
| 6 |
Based on your personal experience, what's the best way to fine-tune an LLM on specific industry knowledge and writing style/tone?
Can I fine-tune without my self-hosted machine? Can I do this on the cloud by using OpenAI or Anthropic, for example?
| 2025-05-09T22:47:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kivks1/how_to_finetune_llm_without_using_a_selfhosted/
|
jamesftf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kivks1
| false | null |
t3_1kivks1
|
/r/LocalLLaMA/comments/1kivks1/how_to_finetune_llm_without_using_a_selfhosted/
| false | false |
self
| 6 | null |
Webollama: A sleek web interface for Ollama, making local LLM management and usage simple. WebOllama provides an intuitive UI to manage Ollama models, chat with AI, and generate completions.
| 64 | 2025-05-09T23:02:50 |
https://github.com/dkruyt/webollama
|
phantagom
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kivw6w
| false | null |
t3_1kivw6w
|
/r/LocalLLaMA/comments/1kivw6w/webollama_a_sleek_web_interface_for_ollama_making/
| false | false |
default
| 64 |
{'enabled': False, 'images': [{'id': 'y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU.png?width=108&crop=smart&auto=webp&s=dad247c007074eaed041a6592efae0959741e741', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU.png?width=216&crop=smart&auto=webp&s=9489f3deda59ff22da125efb05e221b2e48e23d2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU.png?width=320&crop=smart&auto=webp&s=e9850f06893b74018a510c6b1defe6d694d6b9c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU.png?width=640&crop=smart&auto=webp&s=05cff2b86886c237a89577608b4cb69c4870fc6c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU.png?width=960&crop=smart&auto=webp&s=8e9143f504fe54da8d8aa2f279588add0751f62f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU.png?width=1080&crop=smart&auto=webp&s=f2a836ad72e756083b55950100b0f80333ab5d4c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/y0EgI2XLTqKHcjAaim03gc_zVfisCy4KdfNRmAX06uU.png?auto=webp&s=2c3b3655dd11db4aab5f0edb87c82eded2bf52b4', 'width': 1200}, 'variants': {}}]}
|
|
[WIP] AgentGrid – A visual orchestrator for AI agents like LangChain & CrewAI
| 1 |
[removed]
| 2025-05-09T23:15:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiw5f5/wip_agentgrid_a_visual_orchestrator_for_ai_agents/
|
ImAPrinter
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiw5f5
| false | null |
t3_1kiw5f5
|
/r/LocalLLaMA/comments/1kiw5f5/wip_agentgrid_a_visual_orchestrator_for_ai_agents/
| false | false |
self
| 1 | null |
(RP related) Unending BDSM / power dynamics bias
| 1 |
[removed]
| 2025-05-09T23:20:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiw96i/rp_related_unending_bdsm_power_dynamics_bias/
|
Fabulous_South5199
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiw96i
| false | null |
t3_1kiw96i
|
/r/LocalLLaMA/comments/1kiw96i/rp_related_unending_bdsm_power_dynamics_bias/
| false | false |
self
| 1 | null |
Where is grok2?
| 162 |
I remember Elon Musk specifically saying on a livestream that Grok 2 would be open-weighted once Grok 3 was officially stable and running. Now even Grok 3.5 is about to be released, so where is the Grok 2 they promised? Any news on that?
| 2025-05-09T23:24:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiwbs8/where_is_grok2/
|
gzzhongqi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiwbs8
| false | null |
t3_1kiwbs8
|
/r/LocalLLaMA/comments/1kiwbs8/where_is_grok2/
| false | false |
self
| 162 | null |
Has anyone noticed that models don't seem to hallucinate at all nowadays?
| 0 |
Gemma 3 2B, Nemotron 49B and Qwen 3 32B in particular (all non-reasoning). I've been pushing them hard for weeks now, and they always stay on topic, adhere to instructions, and most importantly, don't invent shit. They'll even push back when it's clearly reasonable.
After two years playing with open source models, I think I'm finally quite satisfied with them.
What's your experience so far?
| 2025-05-09T23:24:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiwc34/has_anyone_noticed_that_models_dont_seem_to/
|
ParaboloidalCrest
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiwc34
| false | null |
t3_1kiwc34
|
/r/LocalLLaMA/comments/1kiwc34/has_anyone_noticed_that_models_dont_seem_to/
| false | false |
self
| 0 | null |
Built a simple way to one-click install and connect MCP servers to Ollama (Open source local LLM client)
| 1 |
[removed]
| 2025-05-09T23:46:13 |
https://v.redd.it/7xttnygdeuze1
|
WalrusVegetable4506
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiwrdg
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7xttnygdeuze1/DASHPlaylist.mpd?a=1749426387%2CZTIxNDhiYmEyNTVkY2I2NTYyZTNiZjgxMzhiNDJmNGI5NmNhY2VkNzE5NGMzMWVlODkwZmNkMzkxZjg2NWIyOA%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/7xttnygdeuze1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 960, 'hls_url': 'https://v.redd.it/7xttnygdeuze1/HLSPlaylist.m3u8?a=1749426387%2COTZlZDMwZmE0OTA0ZjU5ZjM0YjhlYzQ5MzZhOGUxYzUzOGYwZTg0NzUxNWFkNzA1M2NhNTA1YzhkNzliNDQwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7xttnygdeuze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kiwrdg
|
/r/LocalLLaMA/comments/1kiwrdg/built_a_simple_way_to_oneclick_install_and/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc.png?width=108&crop=smart&format=pjpg&auto=webp&s=d2edf57b6af53ecffa1b2612ad1e6adc0e8ba198', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc.png?width=216&crop=smart&format=pjpg&auto=webp&s=5fe8595ab978aba4d64ac4a33b5a37b608e34a88', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc.png?width=320&crop=smart&format=pjpg&auto=webp&s=36d24f1ea0cff53fa46cb77e46544d0ad5077cc1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc.png?width=640&crop=smart&format=pjpg&auto=webp&s=e23934fffb797033cc1116bb885c1877451a1ced', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc.png?width=960&crop=smart&format=pjpg&auto=webp&s=2da3a5b97eb334286e49dc5ba059c317ac51ca29', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cc7967b30e669e5a23b9de90c1cd5e6e6c88c905', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/NWtjM2owaGRldXplMYnHiu80nSKLu4XpsgOqmhYfkRtHsDxKktIsRg5vKQHc.png?format=pjpg&auto=webp&s=6589b3436e39b3e57971efa74982e5270c6073f9', 'width': 3840}, 'variants': {}}]}
|
|
Huggingface's Xet storage seems broken, dumping debug logs, and running as root
| 0 |
I can't get Xet-backed models to download. For example, I'm trying to get Unsloth's DeepSeek-R1 Q8_0 GGUF. But any time I try to download from a Xet repo, I get an error like this:
Xet Storage is enabled for this repo. Downloading file from Xet Storage..
DeepSeek-R1-Q8_0/DeepSeek-R1.Q8_0-00001-(…): 12%|███████████▏ | 5.84G/47.8G [01:14<06:56, 101MB/s]{"timestamp":"2025-05-09T23:48:54.045497Z","level":"WARN","fields":{"message":"Reqwest(reqwest::Error { kind: Request, url: \"https://transfer.xethub.hf.co/xorbs/default/6a61e683095213f1a28887ab8725499cc70994d1397c91fb1e45440758ad62f9?X-Xet-Signed-Range=bytes%3D48769543-48777678&Expires=1746838078&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly90cmFuc2Zlci54ZXRodWIuaGYuY28veG9yYnMvZGVmYXVsdC82YTYxZTY4MzA5NTIxM2YxYTI4ODg3YWI4NzI1NDk5Y2M3MDk5NGQxMzk3YzkxZmIxZTQ1NDQwNzU4YWQ2MmY5P1gtWGV0LVNpZ25lZC1SYW5nZT1ieXRlcyUzRDQ4NzY5NTQzLTQ4Nzc3Njc4IiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNzQ2ODM4MDc4fX19XX0_&Signature=Xczl3fJEK0KwoNuzo0gjIipe9TzsBA0QsnwvQzeOq7jbRilxHB4Ur04t-gIcTSnodYN38zkpRJrplR-Dl8uuzMH0L-YB~R4YhL5VigXTLcn4uUyBahdcNTMLZu21D9zjaslDd8Z~tmKyO2J4jqusMxBq2DGIEzyL2vFwQ-LuxegxCTn87JBlZ9gf5Ivv5i~ATW9Vm-GdH~bXS3WytSfY0kXenTDt0pSRlMcAL8AumpXCENq9zS2yv7XtlR8su6GRe3myrQtMglphaJzypodbuYhg3gIyXixHtWagyfV33jyEQgtvlmu1lgbrjpkl7vPjFzBveL-820s09lkE3dpCuQ__&Key-Pair-Id=K2L8F4GPSG1IFC\", source: hyper_util::client::legacy::Error(Connect, ConnectError(\"tcp open error\", Os { code: 24, kind: Uncategorized, message: \"Too many open files\" })) }). Retrying..."},"filename":"/home/runner/work/xet-core/xet-core/cas_client/src/http_client.rs","line_number":164}
{"timestamp":"2025-05-09T23:48:54.045540Z","level":"WARN","fields":{"message":"Retry attempt #0. Sleeping 1.384510777s before the next attempt"},"filename":"/root/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/reqwest-retry-0.6.1/src/middleware.rs","line_number":166}
{"timestamp":"2025-05-09T23:48:54.045568Z","level":"WARN","fields":{"message":"Reqwest(reqwest::Error { kind: Request, url: \"https://transfer.xethub.hf.co/xorbs/default/6a61e683095213f1a28887ab8725499cc70994d1397c91fb1e45440758ad62f9?X-Xet-Signed-Range=bytes%3D49203567-49214372&Expires=1746838078&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly90cmFuc2Zlci54ZXRodWIuaGYuY28veG9yYnMvZGVmYXVsdC82YTYxZTY4MzA5NTIxM2YxYTI4ODg3YWI4NzI1NDk5Y2M3MDk5NGQxMzk3YzkxZmIxZTQ1NDQwNzU4YWQ2MmY5P1gtWGV0LVNpZ25lZC1SYW5nZT1ieXRlcyUzRDQ5MjAzNTY3LTQ5MjE0MzcyIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNzQ2ODM4MDc4fX19XX0_&Signature=WrJcmDoFv9Cl5TgQ~gzHLopjkPV-RVLHey5AUwF5TAVoPz5GC-MdIfwRS2iNaI6rc7l~gXqrDsmXqH354c15FfLoRsIGqnPk9LFLQ0ckKYOcoi~84jY8BNN2O1KPWzQe6tppUMtBZp3HQ5ls9xqvqr~yXRs-ppKOJVL~hMssBEYNjseOSaRZjLHs7ucr6diwDxp4pceCTirKRM0~-4gnsAUYuOl2qpUYMUDrubVZoBPcW83laKyg25QQphqctmEoCFTKtdB4AN~41FJ9P2FpHgj-G4VkMLCm2iHf7qagBFh3joozh6bwtivlqv19SWG-dMF1ID-jI-WFWsIqXhOb2Q__&Key-Pair-Id=K2L8F4GPSG1IFC\", source: hyper_util::client::legacy::Error(Connect, ConnectError(\"tcp open error\", Os { code: 24, kind: Uncategorized, message: \"Too many open files\" })) }). Retrying..."},"filename":"/home/runner/work/xet-core/xet-core/cas_client/src/http_client.rs","line_number":164}
Look at this: `/root/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/reqwest-retry-0.6.1/src/middleware.rs`
Lolwat, they're running Xet services as root and dumping verbose errors with full paths? I think someone needs to fix their shit and turn off debugging in prod.
In the meantime... anyone know how to make Xet work reliably for downloads?
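One workaround worth trying (a guess from the error text, not an official fix): the underlying failure is OS error 24, "Too many open files", raised on the client side, so raising the process's open-file limit before downloading may let the parallel Xet transfers through. A minimal sketch; the repo id is assumed and the shard filename is a placeholder:

    # sketch: raise this process's open-file limit, then download one shard
    import resource
    from huggingface_hub import hf_hub_download

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    target = 65536 if hard == resource.RLIM_INFINITY else min(65536, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

    path = hf_hub_download(
        repo_id="unsloth/DeepSeek-R1-GGUF",          # assumed repo id
        filename="DeepSeek-R1-Q8_0/FILENAME.gguf",   # placeholder: use the exact shard name
    )
    print(path)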
| 2025-05-09T23:58:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiwzvy/huggingfaces_xet_storage_seems_broken_dumping/
|
__JockY__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiwzvy
| false | null |
t3_1kiwzvy
|
/r/LocalLLaMA/comments/1kiwzvy/huggingfaces_xet_storage_seems_broken_dumping/
| false | false |
self
| 0 | null |
ManaBench: A Novel Reasoning Benchmark Based on MTG Deck Building
| 1 |
[removed]
| 2025-05-10T00:24:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kixiir/manabench_a_novel_reasoning_benchmark_based_on/
|
Jake-Boggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kixiir
| false | null |
t3_1kixiir
|
/r/LocalLLaMA/comments/1kixiir/manabench_a_novel_reasoning_benchmark_based_on/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '281YQudmyZHs1BodSBJGQOTjO1hBCDmVIgaSFabXqqA', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/281YQudmyZHs1BodSBJGQOTjO1hBCDmVIgaSFabXqqA.png?width=108&crop=smart&auto=webp&s=73928a8787b11ae1484f227d4049091a9bb63b93', 'width': 108}, {'height': 104, 'url': 'https://external-preview.redd.it/281YQudmyZHs1BodSBJGQOTjO1hBCDmVIgaSFabXqqA.png?width=216&crop=smart&auto=webp&s=2c26c008369164811ae59abc7390d2220e038560', 'width': 216}, {'height': 154, 'url': 'https://external-preview.redd.it/281YQudmyZHs1BodSBJGQOTjO1hBCDmVIgaSFabXqqA.png?width=320&crop=smart&auto=webp&s=d07fcc244eb0a74f1f5af9c5e2630d95f60ecd0d', 'width': 320}, {'height': 309, 'url': 'https://external-preview.redd.it/281YQudmyZHs1BodSBJGQOTjO1hBCDmVIgaSFabXqqA.png?width=640&crop=smart&auto=webp&s=f2b86ee0483399943d473a7dc3c55f5cef63d4b3', 'width': 640}], 'source': {'height': 396, 'url': 'https://external-preview.redd.it/281YQudmyZHs1BodSBJGQOTjO1hBCDmVIgaSFabXqqA.png?auto=webp&s=58fbdf8f9add388f3b614e1c6c9f1c4f7af6cd22', 'width': 820}, 'variants': {}}]}
|
|
The Halo Effect of Download Counts
| 6 |
A couple weeks ago, I scored the quality of documentation for **1000 model cards,** using LLM-as-a-Judge.
My goal: to study the relationship between **model** **quality and popularity.**
To quantify popularity, I used the hub apis to query model stats, such as **Number of Likes** and **Download Counts**.
To my surprise, documentation quality explains just a **small part of a model's popularity**. For intuition on this, think about all the hub quants with scant docs that everyone still downloads.
Review the correlation [here](https://huggingface.co/datasets/remyxai/model-documentation-scoreboard/viewer/default/train?row=4&views%5B%5D=train&sql=--+The+SQL+console+is+powered+by+DuckDB+WASM+and+runs+entirely+in+the+browser.%0A--+Get+started+by+typing+a+query+or+selecting+a+view+from+the+options+below.%0ASELECT+CORR%28downloads%2C+doc_quality_score%29+AS+correlation%0AFROM+train%3B%0A).
Then this week, I noticed an older model gaining traction just as I announced the latest version...so what happened?
The **sentiment** around a model in r/LocalLLaMA is **a leading indicator of a model's traction**, yet it can fail to overcome the **halo effect** of another **model's download counts,** effectively transferring traction to the previous SOTA.
This makes **download counts the lagging quality indicator**.
Have you found yourself scrolling to the weights that have been downloaded the most?
We all come here to get the community consensus. But that bias to go with the herd can actually lead you astray, so you gotta be aware of your tendencies.
Ultimately, I think we can expect HF to bring model makers and users together, possibly by linking the social engagement context to model documentation through **Community Notes for models**.
Vanity metrics such as the Number of models or download counts don't signify value, just hype.
Your best model depends on the context of your application. We'll learn the way faster, together.
| 2025-05-10T00:26:28 |
https://www.reddit.com/gallery/1kixk75
|
remyxai
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kixk75
| false | null |
t3_1kixk75
|
/r/LocalLLaMA/comments/1kixk75/the_halo_effect_of_download_counts/
| false | false | 6 |
{'enabled': True, 'images': [{'id': 'WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ.png?width=108&crop=smart&auto=webp&s=fc2dd5307a6f6335ad79c098a3ac8bfbece90a3f', 'width': 108}, {'height': 74, 'url': 'https://external-preview.redd.it/WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ.png?width=216&crop=smart&auto=webp&s=778b877318a1dafd9e555eb70d715fe2c329d833', 'width': 216}, {'height': 110, 'url': 'https://external-preview.redd.it/WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ.png?width=320&crop=smart&auto=webp&s=98e43a58a7b495db1022265264239f2aca4fcf6c', 'width': 320}, {'height': 221, 'url': 'https://external-preview.redd.it/WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ.png?width=640&crop=smart&auto=webp&s=59b0d6a3fa3a0f5d49a53df5430d405edb53a849', 'width': 640}, {'height': 332, 'url': 'https://external-preview.redd.it/WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ.png?width=960&crop=smart&auto=webp&s=a109b02497ec9f8b1fadf8eeee7af92fa1d9b4b8', 'width': 960}, {'height': 374, 'url': 'https://external-preview.redd.it/WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ.png?width=1080&crop=smart&auto=webp&s=e66c5f216ae2485875384b639864a333c36225b9', 'width': 1080}], 'source': {'height': 525, 'url': 'https://external-preview.redd.it/WsDn8-AlRWhGiFh2mBbj7jK25VNYlKN0WZaq-CwL_sQ.png?auto=webp&s=81676f3941c0ae9037052a4359a8f4c1b8c2f49a', 'width': 1514}, 'variants': {}}]}
|
|
ManaBench: A Novel Reasoning Benchmark Based on MTG Deck Building
| 2 |
[removed]
| 2025-05-10T00:29:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kixm29/manabench_a_novel_reasoning_benchmark_based_on/
|
Jake-Boggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kixm29
| false | null |
t3_1kixm29
|
/r/LocalLLaMA/comments/1kixm29/manabench_a_novel_reasoning_benchmark_based_on/
| false | false |
self
| 2 | null |
Dual AMD Mi50 Inference and Benchmarks
| 1 |
[removed]
| 2025-05-10T00:30:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kixmqs/dual_amd_mi50_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kixmqs
| false | null |
t3_1kixmqs
|
/r/LocalLLaMA/comments/1kixmqs/dual_amd_mi50_inference_and_benchmarks/
| false | false |
self
| 1 | null |
ManaBench: A Novel Reasoning Benchmark Based on MTG Deck Building
| 1 |
[removed]
| 2025-05-10T00:45:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kixxmt/manabench_a_novel_reasoning_benchmark_based_on/
|
Jake-Boggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kixxmt
| false | null |
t3_1kixxmt
|
/r/LocalLLaMA/comments/1kixxmt/manabench_a_novel_reasoning_benchmark_based_on/
| false | false |
self
| 1 | null |
ManaBench: A Novel Reasoning Benchmark Based on MTG Deck Building
| 1 |
[removed]
| 2025-05-10T01:02:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiy95l/manabench_a_novel_reasoning_benchmark_based_on/
|
Jake-Boggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiy95l
| false | null |
t3_1kiy95l
|
/r/LocalLLaMA/comments/1kiy95l/manabench_a_novel_reasoning_benchmark_based_on/
| false | false |
self
| 1 | null |
best way to generate training data for chinese characters and train classification model?
| 3 |
In Chinese, there are many characters that sound like 'sh' or 'ch', but the difference in sound is very subtle. I want to train a model to test how good my pronunciation of these different characters is.
I was thinking of generating training data by:
generating many English 'sh' and 'ch' sounds with a TTS model, then using a multilingual model to generate accurate Chinese character sounds.
I need advice on:
whether this is a good method for generating the training data
what models to use to generate the sounds (I was thinking of using Dia with different seeds for the English)
what model to train for classification (see the rough baseline sketch below)
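On the classification side, a common baseline (sketched below) is to extract MFCC features from short clips and fit a small scikit-learn classifier. It assumes librosa and scikit-learn are installed and that clips are sorted into per-class folders, and it is a starting point rather than a recommendation over fine-tuning a speech model such as wav2vec2:

    # baseline sketch: MFCC features + a small classifier
    # assumes data/sh/*.wav and data/ch/*.wav exist (hypothetical layout)
    import glob
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def featurize(path: str) -> np.ndarray:
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)  # average over time -> fixed-size vector

    X, labels = [], []
    for label in ("sh", "ch"):
        for path in glob.glob(f"data/{label}/*.wav"):
            X.append(featurize(path))
            labels.append(label)

    X_train, X_test, y_train, y_test = train_test_split(
        np.array(X), labels, test_size=0.2, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))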
| 2025-05-10T01:07:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiycqa/best_way_to_generate_training_data_for_chinese/
|
boringblobking
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiycqa
| false | null |
t3_1kiycqa
|
/r/LocalLLaMA/comments/1kiycqa/best_way_to_generate_training_data_for_chinese/
| false | false |
self
| 3 | null |
Qwen2.5 VL 7B producing only gibberish
| 5 |
So I was trying to get Qwen2.5 VL to run locally on my machine, which was quite painful. But I ended up being able to execute it and even connect it to OpenWebUI with [this script](https://github.com/phildougherty/qwen2.5-VL-inference-openai) (which would have been a lot less painful if I had used that from the beginning). I ran app.py from inside WSL2 on Win11 after installing the requirements, but I had to copy the downloaded model files manually into the folder it wanted them in, because otherwise it would run into some weird issue.
It took a looooong while to generate a response to my "Hi!", and what I got was not at all what I was hoping for:
[this gibberish continues until cap is hit](https://preview.redd.it/vhx40g7yxuze1.png?width=1113&format=png&auto=webp&s=3f5f8c221c9d47da8421b1970f2604eef1b5e8fb)
I actually ran into the same issue when running it via the example script provided [on the Hugging Face page](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), where it would also just produce gibberish with a lot of Chinese characters. I then tried the provided script for 3B-Instruct, which resulted in the same kind of gibberish. Interestingly, when I was trying some Qwen2.5-VL versions I found on ollama the other day, I also ran into problems where it would only produce gibberish, but I was thinking that problem wouldn't occur if I got it directly from Hugging Face instead.
Now, is this in any way a known issue? Like, did I just make some stupid mistake, and I just have to set some config properly and it will work? Or is the actual model cooked in some way? Is there any chance this is linked to inadequate hardware (running Ryzen 7 9800X3D, 64GB of RAM, RTX 3070)? I would think that would only make it super slow (which it was), but what do I know.
I'd really like to run some vision model locally, but I wasn't impressed by what I got from gemma3's vision, same for llama3.2-vision. When I tried out Qwen2.5-VL-72B on some hosted service, it came a lot closer to my expectations, so I was trying to see what Qwen2.5 I could get to run (and at what speeds) with my system, but the results weren't at all satisfying. What now? Any hopes of fixing the gibberish? Or should I try Qwen2-VL instead: is that less annoying to run (more established) than Qwen2.5, and how does the quality compare? Any other vision models you can recommend? I haven't tried any of the Intern ones yet.
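One way to narrow this down is to take Open WebUI out of the loop and query the bridge script's OpenAI-compatible endpoint directly; if the gibberish still shows up there, the problem is on the model/serving side rather than in the UI. A minimal sketch, where the port and model id are assumptions that need to match whatever the script actually serves:

```python
# Sanity check against an OpenAI-compatible endpoint without Open WebUI in the loop.
# Base URL, port and model name are assumptions -- adjust to what the script serves.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="Qwen2.5-VL-7B-Instruct",  # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=128,
    temperature=0.2,  # gibberish is sometimes a sampling/chat-template issue; keep sampling tame
)
print(resp.choices[0].message.content)
```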
| 2025-05-10T01:44:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiz0p6/qwen25_vl_7b_producing_only_gibberish/
|
Dowo2987
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiz0p6
| false | null |
t3_1kiz0p6
|
/r/LocalLLaMA/comments/1kiz0p6/qwen25_vl_7b_producing_only_gibberish/
| false | false | 5 |
{'enabled': False, 'images': [{'id': 'jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o.png?width=108&crop=smart&auto=webp&s=cb1fc57e5747dd6b942ce1be87768e3864c64f89', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o.png?width=216&crop=smart&auto=webp&s=6b20e60e9dca2df09b249770fabcd55210a829f5', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o.png?width=320&crop=smart&auto=webp&s=06f3113bd18e52b0ba771ca4441e48e140302aec', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o.png?width=640&crop=smart&auto=webp&s=f829e71c95e1b2a1910a295f320eda9a3f76061b', 'width': 640}, {'height': 535, 'url': 'https://external-preview.redd.it/jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o.png?width=960&crop=smart&auto=webp&s=18c2639fac75c3715c25527e8005d89c3024883b', 'width': 960}, {'height': 602, 'url': 'https://external-preview.redd.it/jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o.png?width=1080&crop=smart&auto=webp&s=672f39ac0f7b79d1f616f17ecb7a387e3db0a715', 'width': 1080}], 'source': {'height': 621, 'url': 'https://external-preview.redd.it/jJhDoJJUDkgdSjuQGXYB_2Cg3wkPVCoSywDBwrJIt_o.png?auto=webp&s=9264d05b876f9141a110fe38301d97cbe7c68494', 'width': 1113}, 'variants': {}}]}
|
|
ManaBench: A Novel Reasoning Benchmark Based on MTG Deck Building
| 1 |
[removed]
| 2025-05-10T01:48:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiz3rs/manabench_a_novel_reasoning_benchmark_based_on/
|
Jake-Boggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiz3rs
| false | null |
t3_1kiz3rs
|
/r/LocalLLaMA/comments/1kiz3rs/manabench_a_novel_reasoning_benchmark_based_on/
| false | false |
self
| 1 | null |
LLM with best understanding of medicine?
| 15 |
I've had some success with Claude and ChatGPT. Are there any local LLMs that have a decent training background in medical topics?
| 2025-05-10T01:57:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiz9h2/llm_with_best_understanding_of_medicine/
|
pinkfreude
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiz9h2
| false | null |
t3_1kiz9h2
|
/r/LocalLLaMA/comments/1kiz9h2/llm_with_best_understanding_of_medicine/
| false | false |
self
| 15 | null |
lmstudio recommended qwen3 vs unsloth one
| 8 |
Sorry if this question is stupid, but I don't know any other place to ask: what is the difference between these two, and what version and quantization should I be running on my system? (16 GB VRAM + 32 GB RAM)
Thanks in advance.
| 2025-05-10T02:08:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kizh16/lmstudio_recommended_qwen3_vs_unsloth_one/
|
oxidao
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kizh16
| false | null |
t3_1kizh16
|
/r/LocalLLaMA/comments/1kizh16/lmstudio_recommended_qwen3_vs_unsloth_one/
| false | false |
self
| 8 | null |
Absolute Zero: Reinforced Self-play Reasoning with Zero Data
| 23 |
https://arxiv.org/abs/2505.03335
The paper introduces a new reinforcement learning paradigm called "Absolute Zero," designed to enhance the reasoning capabilities of large language models (LLMs). In this paradigm, a single model learns to propose tasks that maximize its own learning progress and improves reasoning by solving these tasks, without relying on any external data.
The authors propose the Absolute Zero Reasoner (AZR), a system that self-evolves its training curriculum and reasoning ability. AZR uses a code executor to validate proposed code reasoning tasks and verify answers, providing a unified source of verifiable reward to guide open-ended learning.
Key aspects of the Absolute Zero paradigm and the AZR system include:
* Self-Play Learning: AZR learns through self-play, generating its own training tasks and solving them, eliminating the need for human-curated data.
* Verifiable Rewards: The system uses a code executor as an environment to provide verifiable feedback, ensuring grounded and reliable learning.
* Three Reasoning Modes: AZR is trained on three types of coding tasks: induction, abduction, and deduction, corresponding to different modes of reasoning.
* State-of-the-Art Performance: Despite being trained without external data, AZR achieves state-of-the-art performance on coding and mathematical reasoning tasks, outperforming models trained on human-curated datasets.
The paper also reports several interesting findings:
* Code priors amplify reasoning.
* Cross-domain transfer is more pronounced for AZR.
* Bigger base models yield bigger gains.
* Comments as intermediate plans emerge naturally.
* Cognitive behaviors and token length depend on the reasoning mode.
* Safety alarms are observed, highlighting the need for safety-aware training.
Overall, the Absolute Zero paradigm and the AZR system represent a promising step toward enabling LLMs to autonomously achieve superhuman reasoning capabilities.
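To make the "verifiable reward" idea concrete, here is a much-simplified, runnable illustration of the core mechanic: a proposed task is a program plus an input, the ground truth comes from actually executing that program, and the solver's reward is an exact match against the executor's output. The proposer/solver LLM calls are stubbed with fixed strings; this is not the paper's implementation, only the reward plumbing:

```python
# Verifiable reward via a code executor: no human labels, no external dataset.
import subprocess
import sys


def run_python(src: str, timeout: int = 5) -> str:
    """Run a snippet in a fresh interpreter and return its stdout."""
    out = subprocess.run([sys.executable, "-c", src],
                         capture_output=True, text=True, timeout=timeout)
    return out.stdout.strip()


# Proposed task (deduction mode): a program plus an input, as the proposer
# might emit them. In AZR these would be sampled from the model itself.
program = "def f(x):\n    return sorted(set(x))"
task_input = "[3, 1, 3, 2]"

# Ground truth comes from executing the program.
expected = run_python(f"{program}\nprint(f({task_input}))")

# The solver's predicted output (stubbed here; normally another model rollout).
predicted = "[1, 2, 3]"

solver_reward = 1.0 if predicted == expected else 0.0
print(f"expected={expected!r} predicted={predicted!r} reward={solver_reward}")
```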
| 2025-05-10T02:20:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kizp3p/absolute_zero_reinforced_selfplay_reasoning_with/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kizp3p
| false | null |
t3_1kizp3p
|
/r/LocalLLaMA/comments/1kizp3p/absolute_zero_reinforced_selfplay_reasoning_with/
| false | false |
self
| 23 | null |
Slop Radio | 24/7 AI Music from Live Chat Messages (powered by ACE-Step)
| 1 |
[removed]
| 2025-05-10T02:25:33 |
http://youtube.com/watch?v=QE0PbDwx2Vw
|
anatolybazarov
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kizs1n
| false |
{'oembed': {'author_name': 'singlemode', 'author_url': 'https://www.youtube.com/@deliberatelynaive', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/QE0PbDwx2Vw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Slop Radio | 24/7 AI Music from Live Chat Messages"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/QE0PbDwx2Vw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Slop Radio | 24/7 AI Music from Live Chat Messages', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
|
t3_1kizs1n
|
/r/LocalLLaMA/comments/1kizs1n/slop_radio_247_ai_music_from_live_chat_messages/
| false | false |
default
| 1 | null |
Guides for setting up a home AI server?
| 2 |
I recently got my hands on a Minisforum AI X1 Pro, and early testing has been pretty nice. I'd like to set it up so that I can use it headless with the rest of my homelab and dump AI workloads on it. While using chat is one thing, hooking it up to VSCode or building agents is another. Most of the "tutorials" boil down to just installing ollama and Open WebUI (which I've done in the past, and I find Open WebUI incredibly annoying to work with, in addition to it constantly breaking during chats). Are there any more in-depth tutorials out there?
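For the headless part specifically, one low-effort pattern is to run Ollama on the box and treat it as a plain HTTP API node from the rest of the homelab. A minimal sketch, where the hostname and model name are placeholders and the server is assumed to be listening on the network (e.g. via `OLLAMA_HOST=0.0.0.0`):

```python
# Talk to a headless Ollama box from another machine on the LAN.
import requests

OLLAMA = "http://ai-x1-pro.lan:11434"  # placeholder hostname

# List the models available on the server.
models = requests.get(f"{OLLAMA}/api/tags", timeout=10).json()
print([m["name"] for m in models.get("models", [])])

# Fire a one-off generation job at it.
resp = requests.post(
    f"{OLLAMA}/api/generate",
    json={"model": "qwen3:8b",  # placeholder model
          "prompt": "Summarize what a reverse proxy does.",
          "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```

VSCode extensions and agent frameworks that speak the Ollama or OpenAI-compatible API can then be pointed at that same endpoint instead of anything running locally.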
| 2025-05-10T03:06:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj0hoc/guides_for_setting_up_a_home_ai_server/
|
dragonmantank
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj0hoc
| false | null |
t3_1kj0hoc
|
/r/LocalLLaMA/comments/1kj0hoc/guides_for_setting_up_a_home_ai_server/
| false | false |
self
| 2 | null |
Building Helios: A Self-Hosted Platform to Supercharge Local LLMs (Ollama, HF) with Memory & Management - Feedback Needed!
| 1 |
[removed]
| 2025-05-10T03:16:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj0of0/building_helios_a_selfhosted_platform_to/
|
Effective_Muscle_110
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj0of0
| false | null |
t3_1kj0of0
|
/r/LocalLLaMA/comments/1kj0of0/building_helios_a_selfhosted_platform_to/
| false | false |
self
| 1 | null |
LLamb a LLM chat client for your terminal
| 13 |
Last night I worked on an LLM client for the terminal. You can connect to LM Studio, Ollama, OpenAI and other providers in your terminal.
* You can set up as many connections as you like, with a model for each
* It keeps context via the terminal window/SSH session
* Can read text files and send them to the LLM with your prompt
* Can output the LLM response to files
You can install it via npm: `npm install -g llamb`
If you check it out, please let me know what you think. I had fun working on this with the help of Claude Code; that Max subscription is pretty good!
| 2025-05-10T03:37:16 |
https://www.3sparks.net/llamb
|
s3bastienb
|
3sparks.net
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj11b0
| false | null |
t3_1kj11b0
|
/r/LocalLLaMA/comments/1kj11b0/llamb_a_llm_chat_client_for_your_terminal/
| false | false | 13 |
{'enabled': False, 'images': [{'id': 'pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo.png?width=108&crop=smart&auto=webp&s=34e234b1f0eed37f2335901c7b1e7c75d91a0e93', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo.png?width=216&crop=smart&auto=webp&s=169fc04ea2021a2f248f3f621fa4128e73c87137', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo.png?width=320&crop=smart&auto=webp&s=0de5a710b4bef1e7606ccbc7af1bc4f0b72eb3eb', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo.png?width=640&crop=smart&auto=webp&s=f678e15fe4ab0fa20136dcf665f8253e9764d4e2', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo.png?width=960&crop=smart&auto=webp&s=4cb8bbc62ad2d9fc2910f680da2be1be193455be', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo.png?width=1080&crop=smart&auto=webp&s=489eed0ba99acd0bc45703ce9d0ed71ccfbdc7fa', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/pzqNepzep-k1LYXeP2ndbcoOFIfdc5e3fI4vYh43PBo.png?auto=webp&s=65d59ac709e61f236f5f7f58bed32b9ec1a07346', 'width': 1536}, 'variants': {}}]}
|
|
ManaBench: A Novel Reasoning Benchmark Based on MTG Deck Building
| 1 |
[removed]
| 2025-05-10T03:55:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj1cm3/manabench_a_novel_reasoning_benchmark_based_on/
|
Jake-Boggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj1cm3
| false | null |
t3_1kj1cm3
|
/r/LocalLLaMA/comments/1kj1cm3/manabench_a_novel_reasoning_benchmark_based_on/
| false | false |
self
| 1 | null |
Qwen-2.5-VL-7b vs Gemma-3-12b impressions
| 30 |
First impressions of Qwen VL vs Gemma in llama.cpp.
**Qwen**
* Excellent at recognizing species of plants, animals, etc. Tested with a bunch of dog breeds as well as photos of plants and insects.
* More formal tone
* Doesn't seem as "general purpose". When you ask it questions it tends to respond in the same forumlaic way regardless of what you are asking.
* More conservative in its responses than Gemma, likely hallucinates less.
* Asked a question about a photo of the night sky. Qwen refused to identify any stars or constellations.
**Gemma**
* Good at identifying general objects, themes, etc. but not as good as Qwen at getting into the specifics.
* More "friendly" tone, easier to "chat" with
* General purpose; it will change its response style based on the question it's being asked.
* Hallucinates up the wazoo. Where Qwen will refuse to answer, Gemma will just make stuff up.
* Asked a question about a photo of the night sky: Gemma identified the constellation Cassiopeia as well as some major stars. I wasn't able to confirm whether it was correct, just thought it was cool.
| 2025-05-10T04:22:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj1t1o/qwen25vl7b_vs_gemma312b_impressions/
|
Zc5Gwu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj1t1o
| false | null |
t3_1kj1t1o
|
/r/LocalLLaMA/comments/1kj1t1o/qwen25vl7b_vs_gemma312b_impressions/
| false | false |
self
| 30 | null |
An LLM + a selfhosted search engine looks like black magic
| 148 |
In its last update, open-webui added support for Yacy as a search provider. Yacy is an open-source, distributed search engine that does not rely on a central index but relies on distributed peers indexing pages themselves. I already tried Yacy in the past, but the problem is that the algorithm that sorts the results is garbage, so it is not really usable as a search engine on its own. Of course, a small open-source program that can run on literally anything (the server it ran on for this experiment is a 12th-gen Celeron with 8GB of RAM) cannot compete with companies like Google or Microsoft in terms of how intelligently it sorts results. It was practically unusable.
Or it was! Coupled with an LLM, the LLM can sort the trash results from Yacy out and keep what is useful! For the purpose of this exercise I used Deepseek-V3-0324 from OpenRouter, but it is trivial to use local models!
https://preview.redd.it/zcq88bwjvvze1.png?width=2492&format=png&auto=webp&s=8e7c5c36e0f1770fab88f7baed53cd25e1014d07
That means we can now have selfhosted AI models that learn from the Web... without relying on Google or any central entity at all!
Some caveats: 1. Of course this is inferior to using Google or even DuckDuckGo; I just wanted to share it here because I think you'll find it cool. 2. You need a solid CPU to handle many concurrent searches; my Celeron gets hammered to 100% usage on each query (open-webui and a bunch of other services are running on this server, which can't help). This is not your average LocalLLaMA rig costing a yearly salary, ahah.
https://preview.redd.it/7q2mkkshvvze1.png?width=554&format=png&auto=webp&s=56b75972d9a1e4e98c7cdfe111dad47b7f87cbeb
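For anyone wanting to try the same idea outside open-webui, a rough sketch: pull raw (noisily ranked) results from the Yacy peer's JSON search API and let any OpenAI-compatible LLM pick out the relevant ones. The Yacy endpoint and response fields (channels/items/title/link/description) are written from memory and should be checked against a live instance; the model name and base URL are placeholders:

```python
# Rerank/filter Yacy results with an OpenAI-compatible LLM.
import json

import requests
from openai import OpenAI

YACY = "http://localhost:8090"                      # your Yacy peer
llm = OpenAI(base_url="http://localhost:11434/v1",  # any OpenAI-compatible server
             api_key="not-needed")

query = "self-hosted vector database comparison"
raw = requests.get(f"{YACY}/yacysearch.json",
                   params={"query": query, "maximumRecords": 30},
                   timeout=30).json()
items = raw["channels"][0]["items"]
candidates = [{"title": i.get("title", ""), "url": i.get("link", ""),
               "snippet": i.get("description", "")} for i in items]

prompt = (
    f"Query: {query}\n\nHere are raw search results as JSON:\n"
    f"{json.dumps(candidates, indent=2)}\n\n"
    "Return only the 5 most relevant results as a JSON list of URLs."
)
resp = llm.chat.completions.create(
    model="qwen3:8b",  # placeholder local model
    messages=[{"role": "user", "content": prompt}],
    temperature=0.1,
)
print(resp.choices[0].message.content)
```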
| 2025-05-10T04:40:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj23yk/an_llm_a_selfhosted_self_engine_looks_like_black/
|
marsxyz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj23yk
| false | null |
t3_1kj23yk
|
/r/LocalLLaMA/comments/1kj23yk/an_llm_a_selfhosted_self_engine_looks_like_black/
| false | false | 148 |
{'enabled': False, 'images': [{'id': 'HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM', 'resolutions': [{'height': 42, 'url': 'https://external-preview.redd.it/HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM.png?width=108&crop=smart&auto=webp&s=19b41208318f505567f1024903189a20a361332e', 'width': 108}, {'height': 85, 'url': 'https://external-preview.redd.it/HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM.png?width=216&crop=smart&auto=webp&s=38a04d77803fa7129d587361c2b44064d3632c06', 'width': 216}, {'height': 126, 'url': 'https://external-preview.redd.it/HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM.png?width=320&crop=smart&auto=webp&s=ccb03cfea259d4eb77570eb9ab65e94149483eb0', 'width': 320}, {'height': 253, 'url': 'https://external-preview.redd.it/HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM.png?width=640&crop=smart&auto=webp&s=56e13b288a566c7795cec3188ad4b462f9d02af7', 'width': 640}, {'height': 380, 'url': 'https://external-preview.redd.it/HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM.png?width=960&crop=smart&auto=webp&s=df23f5eb382646c5780123d49652c478ad94442d', 'width': 960}, {'height': 428, 'url': 'https://external-preview.redd.it/HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM.png?width=1080&crop=smart&auto=webp&s=afdae752d6f7d9361f8c8a324889033fd593f732', 'width': 1080}], 'source': {'height': 988, 'url': 'https://external-preview.redd.it/HSJh1Glwn1cudWqMdR7v0csb93OcXPxZ1DJssuHJXOM.png?auto=webp&s=ebafe26e7767b76b6877a6c88f05fb74e55196bd', 'width': 2492}, 'variants': {}}]}
|
|
Web Frontend vs API?
| 1 |
[removed]
| 2025-05-10T04:45:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj26ux/web_frontend_vs_api/
|
GOMADGains
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj26ux
| false | null |
t3_1kj26ux
|
/r/LocalLLaMA/comments/1kj26ux/web_frontend_vs_api/
| false | false |
self
| 1 | null |