Dataset schema:

| column    | type          | observed range / values                   |
|-----------|---------------|-------------------------------------------|
| title     | string        | length 1–300                              |
| score     | int64         | 0–8.54k                                   |
| selftext  | string        | length 0–40k                              |
| created   | timestamp[ns] | 2023-04-01 04:30:41 – 2025-06-30 03:16:29 |
| url       | string        | length 0–878                              |
| author    | string        | length 3–20                               |
| domain    | string        | length 0–82                               |
| edited    | timestamp[ns] | 1970-01-01 00:00:00 – 2025-06-26 17:30:18 |
| gilded    | int64         | 0–2                                       |
| gildings  | string        | 7 classes                                 |
| id        | string        | length 7                                  |
| locked    | bool          | 2 classes                                 |
| media     | string        | length 646–1.8k                           |
| name      | string        | length 10                                 |
| permalink | string        | length 33–82                              |
| spoiler   | bool          | 2 classes                                 |
| stickied  | bool          | 2 classes                                 |
| thumbnail | string        | length 4–213                              |
| ups       | int64         | 0–8.54k                                   |
| preview   | string        | length 301–5.01k                          |
Yoshua Bengio, Turing Award-winning AI Godfather, starts a company to keep rampant AI innovation in check
0
[https://techcrunch.com/2025/06/03/yoshua-bengio-launches-lawzero-a-nonprofit-ai-safety-lab/](https://techcrunch.com/2025/06/03/yoshua-bengio-launches-lawzero-a-nonprofit-ai-safety-lab/)
2025-06-03T19:48:55
https://i.redd.it/hfoomobunr4f1.png
Particular_Pool8344
i.redd.it
1970-01-01T00:00:00
0
{}
1l2lf54
false
null
t3_1l2lf54
/r/LocalLLaMA/comments/1l2lf54/yoshua_bengio_turingaward_winning_ai_godfather/
false
false
https://a.thumbs.redditm…dZOUTbMbtpK0.jpg
0
{'enabled': True, 'images': [{'id': '23eejwbe6Qe99Ly_RvNaoTWnXb74y6Ktm79t4Visd9E', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?width=108&crop=smart&auto=webp&s=32ef9b5e7f88f7da8094c4a9346139128486b7aa', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?width=216&crop=smart&auto=webp&s=b0180e881842b901de95942869d815818a6369ee', 'width': 216}, {'height': 218, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?width=320&crop=smart&auto=webp&s=4e4e408591fc3e25112b8017700de4eee50e4c8e', 'width': 320}, {'height': 437, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?width=640&crop=smart&auto=webp&s=116e0eae66789ba21c0a2455a105998e789f0d16', 'width': 640}, {'height': 655, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?width=960&crop=smart&auto=webp&s=a175b635371da0b212631211a19c9dc4c3d2148c', 'width': 960}, {'height': 737, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?width=1080&crop=smart&auto=webp&s=ddb366a3fe7b52ee153f924e623a4a6b490d498e', 'width': 1080}], 'source': {'height': 787, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?auto=webp&s=12cf92f1f1555bdfd4d25d611da469cb51bffbca', 'width': 1152}, 'variants': {}}]}
Simulating Social Media Personas with LLMs — COLM 2025 + Kaggle Task
1
[removed]
2025-06-03T19:49:04
https://sites.google.com/view/social-sims-with-llms/home
RSTZZZ
sites.google.com
1970-01-01T00:00:00
0
{}
1l2lfa2
false
null
t3_1l2lfa2
/r/LocalLLaMA/comments/1l2lfa2/simulating_social_media_personas_with_llms_colm/
false
false
https://a.thumbs.redditm…u8zeECQo8Kn0.jpg
1
{'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=216&crop=smart&auto=webp&s=2fc10ca88f40d5331f13aa598fc96c0e3522becb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=320&crop=smart&auto=webp&s=f20da031956c59ec7a7ae45bcde3f746e40a26d6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=640&crop=smart&auto=webp&s=668693379801238ef468319348742cccd6cea4ff', 'width': 640}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?auto=webp&s=c0503e3d17e96559a3fa3fc935fe8ca971f21851', 'width': 640}, 'variants': {}}]}
Simulating Social Media Personas with LLMs — COLM 2025 + Kaggle Task
1
[removed]
2025-06-03T19:50:59
https://sites.google.com/view/social-sims-with-llms/home
RSTZZZ
sites.google.com
1970-01-01T00:00:00
0
{}
1l2lh3q
false
null
t3_1l2lh3q
/r/LocalLLaMA/comments/1l2lh3q/simulating_social_media_personas_with_llms_colm/
false
false
https://a.thumbs.redditm…u8zeECQo8Kn0.jpg
1
{'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=216&crop=smart&auto=webp&s=2fc10ca88f40d5331f13aa598fc96c0e3522becb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=320&crop=smart&auto=webp&s=f20da031956c59ec7a7ae45bcde3f746e40a26d6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=640&crop=smart&auto=webp&s=668693379801238ef468319348742cccd6cea4ff', 'width': 640}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?auto=webp&s=c0503e3d17e96559a3fa3fc935fe8ca971f21851', 'width': 640}, 'variants': {}}]}
Simulating Social Media Personas with Local LLMs
1
[removed]
2025-06-03T19:54:12
https://www.reddit.com/r/LocalLLaMA/comments/1l2lk4j/simulating_social_media_personas_with_local_llms/
RSTZZZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2lk4j
false
null
t3_1l2lk4j
/r/LocalLLaMA/comments/1l2lk4j/simulating_social_media_personas_with_local_llms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=216&crop=smart&auto=webp&s=2fc10ca88f40d5331f13aa598fc96c0e3522becb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=320&crop=smart&auto=webp&s=f20da031956c59ec7a7ae45bcde3f746e40a26d6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=640&crop=smart&auto=webp&s=668693379801238ef468319348742cccd6cea4ff', 'width': 640}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?auto=webp&s=c0503e3d17e96559a3fa3fc935fe8ca971f21851', 'width': 640}, 'variants': {}}]}
Simulating Social Media Personas with Local LLMs
1
[removed]
2025-06-03T19:55:23
[deleted]
1970-01-01T00:00:00
0
{}
1l2ll8l
false
null
t3_1l2ll8l
/r/LocalLLaMA/comments/1l2ll8l/simulating_social_media_personas_with_local_llms/
false
false
default
1
null
Social Simulation with LLMs
1
2025-06-03T19:56:04
https://sites.google.com/view/social-sims-with-llms/home
RSTZZZ
sites.google.com
1970-01-01T00:00:00
0
{}
1l2llv4
false
null
t3_1l2llv4
/r/LocalLLaMA/comments/1l2llv4/social_simulation_with_llms/
false
false
https://a.thumbs.redditm…u8zeECQo8Kn0.jpg
1
{'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=216&crop=smart&auto=webp&s=2fc10ca88f40d5331f13aa598fc96c0e3522becb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=320&crop=smart&auto=webp&s=f20da031956c59ec7a7ae45bcde3f746e40a26d6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=640&crop=smart&auto=webp&s=668693379801238ef468319348742cccd6cea4ff', 'width': 640}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?auto=webp&s=c0503e3d17e96559a3fa3fc935fe8ca971f21851', 'width': 640}, 'variants': {}}]}
TTS that Synchronizes Phonemes/Text and Audio Live? / TTS + Animatronics
1
[removed]
2025-06-03T19:56:30
https://www.reddit.com/r/LocalLLaMA/comments/1l2lm9n/tts_that_synchronizes_phonemestext_and_audio_live/
SourceTop1470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2lm9n
false
null
t3_1l2lm9n
/r/LocalLLaMA/comments/1l2lm9n/tts_that_synchronizes_phonemestext_and_audio_live/
false
false
self
1
null
[R] SocialSim’25: Social Simulations with LLMs — Call for Papers + Shared Task
1
[removed]
2025-06-03T20:04:02
https://www.reddit.com/r/LocalLLaMA/comments/1l2lthz/r_socialsim25_social_simulations_with_llms_call/
RSTZZZ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2lthz
false
null
t3_1l2lthz
/r/LocalLLaMA/comments/1l2lthz/r_socialsim25_social_simulations_with_llms_call/
false
false
self
1
null
live transcription
13
I want to run Whisper (or any other model with similar accuracy) on-device on Android. Please suggest the option with the best latency, and let me know if I'm missing a runtime: ONNX, TFLite, CTranslate2. If you know any open-source projects in this category that could help me pull off live transcription on Android, please share them. Also, I'm building in Java, so I'd consider writing a binding or using libraries from other projects.
2025-06-03T20:16:02
https://www.reddit.com/r/LocalLLaMA/comments/1l2m4q9/live_transcription/
Away_Expression_3713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2m4q9
false
null
t3_1l2m4q9
/r/LocalLLaMA/comments/1l2m4q9/live_transcription/
false
false
self
13
null
How does gemma3:4b-it-qat fare against OpenAI models on the MMLU-Pro benchmark? See for yourself in Excel
1
[removed]
2025-06-03T21:03:02
https://www.reddit.com/r/LocalLLaMA/comments/1l2nc5k/how_does_gemma34bitqat_fare_against_openai_models/
OptimalParking
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2nc5k
false
null
t3_1l2nc5k
/r/LocalLLaMA/comments/1l2nc5k/how_does_gemma34bitqat_fare_against_openai_models/
false
false
self
1
null
Awakening people to well-being...
1
[removed]
2025-06-03T21:37:58
https://www.reddit.com/r/LocalLLaMA/comments/1l2o7fo/awakening_people_to_wellbeing/
zartte_Forever7927
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2o7fo
false
null
t3_1l2o7fo
/r/LocalLLaMA/comments/1l2o7fo/awakening_people_to_wellbeing/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XUaYY9UvWUY2dxBrvBdLynhcdraE1n8Mbfig66gxGHo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pQrveUfJ0u1VocQBhr2ks-zddW0tDw7PGVGeeLyLLdo.jpg?width=108&crop=smart&auto=webp&s=5dcc3a7916439929f7a706c6ee0266c5e0f227ed', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/pQrveUfJ0u1VocQBhr2ks-zddW0tDw7PGVGeeLyLLdo.jpg?width=216&crop=smart&auto=webp&s=adce4f862d211199856d04c8343d8671204e0e6b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/pQrveUfJ0u1VocQBhr2ks-zddW0tDw7PGVGeeLyLLdo.jpg?width=320&crop=smart&auto=webp&s=0863890dbd47aa0a0185b781fff7de1a2c97d81c', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/pQrveUfJ0u1VocQBhr2ks-zddW0tDw7PGVGeeLyLLdo.jpg?width=640&crop=smart&auto=webp&s=647fc1d15f6914e561d0ed6b45a2e5069f6153a1', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/pQrveUfJ0u1VocQBhr2ks-zddW0tDw7PGVGeeLyLLdo.jpg?auto=webp&s=526cd7dfaf66e982c42d3c222df2476eac1900d9', 'width': 720}, 'variants': {}}]}
What GUI are you using for local LLMs? (AnythingLLM, LM Studio, etc.)
171
I’ve been trying out AnythingLLM and LM Studio lately to run models like LLaMA and Gemma locally. Curious what others here are using. What’s been your experience with these or other GUI tools like GPT4All, Oobabooga, PrivateGPT, etc.? What do you like, what’s missing, and what would you recommend for someone looking to do local inference with documents or RAG?
2025-06-03T22:09:03
https://www.reddit.com/r/LocalLLaMA/comments/1l2oywk/what_gui_are_you_using_for_local_llms_anythingllm/
Aaron_MLEngineer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2oywk
false
null
t3_1l2oywk
/r/LocalLLaMA/comments/1l2oywk/what_gui_are_you_using_for_local_llms_anythingllm/
false
false
self
171
null
Extract information from resume in json format
1
[removed]
2025-06-03T22:09:24
https://www.reddit.com/r/LocalLLaMA/comments/1l2oz7r/extract_information_from_resume_in_json_format/
Fast_Huckleberry_894
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2oz7r
false
null
t3_1l2oz7r
/r/LocalLLaMA/comments/1l2oz7r/extract_information_from_resume_in_json_format/
false
false
self
1
null
The LLM rabbit hole
0
Everything started as a genuine interest in trying some small LLMs at home. It began with an old PC case housing an RTX 2080 (with 8GB of VRAM!), just to see what was possible. My first steps were with llama.cpp before I moved on to other horizons. Then came one 3090, then two 3090s.

LLMs started being integrated into all of my tools, beginning with personal projects and then extending to work tasks. More and more, everything started happening within an LLM conversation. From issuing update commands onward, the thrill of laziness just didn't end. I started using RAG and automating everything. The goal became to do everything from within the LLM chat and never leave it. Even reading the news and checking emails eventually happened through the local LLM. Little by little, even calls to external models, like Gemini, were routed through the local LLM.

At the end of this rabbit hole, those $4500, or even $9000, GPUs started to feel cheap—a bargain. Money flowed directly to NVIDIA. There was nothing but this LLM window; that was it. The end goal is literally to boot the computer into an LLM interface and never leave it. I'm certain it will happen. People will remove Excel, Slack, Outlook—everything, actually—and the only non-closable window you'll have is the LLM. That's all you need. Attention is all we need, and that's the LLM rabbit hole for you.

Co-written with an LLM
2025-06-03T22:17:55
https://www.reddit.com/r/LocalLLaMA/comments/1l2p6gt/the_llm_rabbit_hole/
Mobile_Tart_1016
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2p6gt
false
null
t3_1l2p6gt
/r/LocalLLaMA/comments/1l2p6gt/the_llm_rabbit_hole/
false
false
self
0
null
B vs Quantization
1
I'm looking for some advice on choosing the best configuration for my Large Language Model (LLM). I'm trying to understand the differences between two settings and would appreciate any guidance. What's the main difference between a 4B_Q8 and a 12B_Q4_0 configuration? Is one significantly better than the other, or are there specific use cases where each is preferred? Thanks!
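As a rough sanity check on the memory side of that question: weight memory scales with parameters × bits per weight. A minimal sketch, assuming GGUF-style block layouts (Q8_0 ≈ 8.5 bits/weight, Q4_0 ≈ 4.5 bits/weight); real files add embedding, scale, and KV-cache overhead on top:

```python
# Back-of-the-envelope weight-memory estimate for the two configurations.
# The bits-per-weight figures are approximate GGUF block layouts — an
# assumption, not exact file sizes.

def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB: params * bits / 8 bytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

print(f"4B  at Q8_0: ~{weight_gib(4, 8.5):.1f} GiB")   # ~4.0 GiB
print(f"12B at Q4_0: ~{weight_gib(12, 4.5):.1f} GiB")  # ~6.3 GiB
```

By that math the 12B_Q4_0 needs roughly 60% more memory than the 4B_Q8, so the two aren't equivalent in footprint; the usual community rule of thumb is that a larger model at Q4 tends to beat a smaller one at Q8 when both fit.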
2025-06-03T22:30:03
https://www.reddit.com/r/LocalLLaMA/comments/1l2pgks/b_vs_quantization/
Empty_Object_9299
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2pgks
false
null
t3_1l2pgks
/r/LocalLLaMA/comments/1l2pgks/b_vs_quantization/
false
false
self
1
null
Llama 3.3 70b Vs Newer Models
24
On my MBP (M3 Max 16/40, 64GB), the largest model I can run seems to be Llama 3.3 70B. The swathe of new models doesn't have options at this parameter count; it's either ~30B or 200B+. My question: does Llama 3.3 70B still compete, and is it still my best option for local use? Or, even with far fewer parameters, are the likes of Qwen3 30B A3B, Qwen3 32B, Gemma3 27B, and DeepSeek R1 0528 Qwen3 8B "better" or smarter? I primarily use LLMs as a search engine via Perplexica and as code assistants. I have attempted to test this myself, and honestly they all seem to work at times; I can't say I've tested consistently enough to say for sure whether there is a front runner. So, is Llama 3.3 dead in the water now?
2025-06-03T22:35:36
https://www.reddit.com/r/LocalLLaMA/comments/1l2pl4l/llama_33_70b_vs_newer_models/
BalaelGios
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2pl4l
false
null
t3_1l2pl4l
/r/LocalLLaMA/comments/1l2pl4l/llama_33_70b_vs_newer_models/
false
false
self
24
null
Are you really using open-source or local LLMs and do they help you?
1
[removed]
2025-06-03T23:07:34
https://www.reddit.com/r/LocalLLaMA/comments/1l2qar6/are_you_really_using_opensource_or_local_llms_and/
AccidentFriendly7530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2qar6
false
null
t3_1l2qar6
/r/LocalLLaMA/comments/1l2qar6/are_you_really_using_opensource_or_local_llms_and/
false
false
self
1
null
B vs Quantization
6
I've been reading about different configurations for my Large Language Model (LLM) and had a question. I understand that Q4 models are generally less accurate than higher-precision quantization settings (am I right?). To clarify, I'm trying to decide between two configurations:

* 4B_Q8: fewer parameters with potentially better accuracy
* 12B_Q4_0: more parameters with potentially lower accuracy

In general, is it better to prioritize more accuracy with fewer parameters, or less accuracy with more parameters?
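Applying the same rough bits-per-weight estimate as in the earlier sketch, one way to frame the decision is to first ask which combinations fit a given VRAM budget at all (the figures and the budget below are illustrative assumptions):

```python
# List which (parameter count, quant) combinations fit a VRAM budget,
# using approximate GGUF bits-per-weight figures; purely illustrative.
BUDGET_GIB = 8.0
BPW = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_0": 4.5}  # assumed approximations

for params_b in (4, 8, 12, 14):
    for quant, bpw in BPW.items():
        gib = params_b * 1e9 * bpw / 8 / 2**30
        verdict = "fits" if gib <= BUDGET_GIB else "too big"
        print(f"{params_b:>2}B {quant}: ~{gib:4.1f} GiB ({verdict})")
```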
2025-06-03T23:30:53
https://www.reddit.com/r/LocalLLaMA/comments/1l2qtbo/b_vs_quantization/
Empty_Object_9299
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2qtbo
false
null
t3_1l2qtbo
/r/LocalLLaMA/comments/1l2qtbo/b_vs_quantization/
false
false
self
6
null
Building an extension that lets you try ANY clothing on with AI! Who wants me to open source it?
0
2025-06-03T23:31:14
https://v.redd.it/y0z3cehfrs4f1
ParsaKhaz
v.redd.it
1970-01-01T00:00:00
0
{}
1l2qtle
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/y0z3cehfrs4f1/DASHPlaylist.mpd?a=1751585488%2CODUzMjE5ZTUwYWZkNDQ5ZTM3OWE2ZmJmYmY4ODRhMTM3YzM0MzdkZmRhYWE3ZWUzNGJiNDQyOGNkNjdhMmI2OQ%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/y0z3cehfrs4f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/y0z3cehfrs4f1/HLSPlaylist.m3u8?a=1751585488%2CN2I3MDIxMDkzYzQzODUwNjQxZGNlMWJhMmU5YmU1ZGU3OGJiM2ZiNWI0OTE2NDEwZTJkMWNhOTQzZjdkNDU0Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y0z3cehfrs4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 758}}
t3_1l2qtle
/r/LocalLLaMA/comments/1l2qtle/building_an_extension_that_lets_you_try_any/
false
false
https://external-preview…38330a2c19555c18
0
{'enabled': False, 'images': [{'id': 'OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ade8021fb51396bfa71b971ba1a763861b5b81c', 'width': 108}, {'height': 205, 'url': 'https://external-preview.redd.it/OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5.png?width=216&crop=smart&format=pjpg&auto=webp&s=ecf698cc4a980b6021fbf0b2472b0cf950e277c8', 'width': 216}, {'height': 303, 'url': 'https://external-preview.redd.it/OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5.png?width=320&crop=smart&format=pjpg&auto=webp&s=10fca69cae969676aad65d5ede7e1254b1ed729b', 'width': 320}, {'height': 607, 'url': 'https://external-preview.redd.it/OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5.png?width=640&crop=smart&format=pjpg&auto=webp&s=2e7f3bb5e80752034c5343c5c88bc88e22d4c015', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5.png?format=pjpg&auto=webp&s=00569fd546185a6733b02e661f816b4fd4148f53', 'width': 758}, 'variants': {}}]}
Help Me Understand MOE vs Dense
41
It seems SOTA LLMs are moving towards MoE architectures. The smartest models in the world [seem to be using it](https://lmarena.ai/leaderboard). But why? When you use a MoE model, only a fraction of the parameters are actually active. Wouldn't the model be "smarter" if you used all the parameters? Efficiency is awesome, but there are many problems that the smartest models cannot solve (e.g., cancer, a bug in my code). So, are we moving towards MoE because we discovered some kind of intelligence scaling limit in dense models (for example, a dense 2T LLM could never outperform a well-architected MoE 2T LLM), or is it just for efficiency, or both?
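For intuition, here is a minimal sketch of the top-k routing at the heart of MoE layers: the router scores all experts, but only k of them actually run per token, which is where the compute savings come from. Shapes, gating details, and load balancing are all simplified assumptions here:

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                       # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]  # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        idx = topk[t]
        gate = np.exp(logits[t, idx])
        gate /= gate.sum()                      # softmax over chosen experts
        for e_id, g in zip(idx, gate):
            out[t] += g * experts[e_id](x[t])   # only k experts actually run
    return out

# Toy usage: 8 experts (random linear maps), but only 2 run per token.
rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [lambda v, W=rng.normal(size=(d, d)) / np.sqrt(d): v @ W
           for _ in range(n_experts)]
router_w = rng.normal(size=(d, n_experts))
print(moe_forward(rng.normal(size=(4, d)), experts, router_w).shape)  # (4, 16)
```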
2025-06-03T23:33:16
https://www.reddit.com/r/LocalLLaMA/comments/1l2qv7z/help_me_understand_moe_vs_dense/
Express_Seesaw_8418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2qv7z
false
null
t3_1l2qv7z
/r/LocalLLaMA/comments/1l2qv7z/help_me_understand_moe_vs_dense/
false
false
self
41
null
How my open-source extension does with a harder virtual try on outfit!
0
I'm open-sourcing a Chrome extension that lets you try on anything that you see on the internet. Feels like magic. [Click here to visit the GitHub](https://github.com/parsakhaz/fashn-tryon-extension)
2025-06-03T23:44:31
https://v.redd.it/e8m2fq0cts4f1
ParsaKhaz
v.redd.it
1970-01-01T00:00:00
0
{}
1l2r3rt
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/e8m2fq0cts4f1/DASHPlaylist.mpd?a=1751586285%2CZTgyNWE4ZTA3YWYxNDFlMjQwMmI3ZmJkMmVhNzAwYzU0MzlkMDY3YzhlMzg3YzE0MWVjODUyMjU1YmQyMzVlZQ%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/e8m2fq0cts4f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/e8m2fq0cts4f1/HLSPlaylist.m3u8?a=1751586285%2CMzQ3NzNmMTk2ZDhjZDVkOTBlN2MyZGI1ZWY4ZDhlNTYzNjFkOWJiMGUxNDM4NWMzYjU2MmMyOTcxNjJkNTEyZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/e8m2fq0cts4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1262}}
t3_1l2r3rt
/r/LocalLLaMA/comments/1l2r3rt/how_my_opensource_extension_does_with_a_harder/
false
false
https://external-preview…305bed29bf3bae55
0
{'enabled': False, 'images': [{'id': 'a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?width=108&crop=smart&format=pjpg&auto=webp&s=d2fcc31ac4c0e125dd221ae51c9bd10bdb04cb96', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?width=216&crop=smart&format=pjpg&auto=webp&s=431f5d8f0a1483201654a2706bd5e6bfcee5521f', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?width=320&crop=smart&format=pjpg&auto=webp&s=ec9ee4fac4249e66cc308b67511e9a389df1c0f9', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?width=640&crop=smart&format=pjpg&auto=webp&s=a0a75a8e9501ef500332f3882ae482d5ee3a1413', 'width': 640}, {'height': 547, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?width=960&crop=smart&format=pjpg&auto=webp&s=9a8087394ddece62f3207c86459924907869eb32', 'width': 960}, {'height': 615, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?width=1080&crop=smart&format=pjpg&auto=webp&s=635ca32f053e03e88fae55575a26584cd29bde38', 'width': 1080}], 'source': {'height': 852, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?format=pjpg&auto=webp&s=4a52d7e034f4754c540738db9e12e71165f38f2a', 'width': 1494}, 'variants': {}}]}
New to local LLMs, but just launched my iOS+macOS app that runs LLMs locally
0
Hey everyone! I'm pretty new to the world of local LLMs, but I’ve been pretty fascinated with the idea of running an LLM on a smartphone for a while. I spent some time looking into how to do this, and ended up writing my own Swift wrapper for `llama.cpp` called [Kuzco](https://github.com/jaredcassoutt/Kuzco). I decided to use my own wrapper and create [Haplo AI](https://apps.apple.com/us/app/haplo-ai-local-private-llms/id6746702574). An app that lets users download and chat with open-source models like Mistral, Phi, and Gemma — fully offline and on-device. It works on both **iOS and macOS**, and everything runs through `llama.cpp`. The app lets users adjust system prompts, response length, creativity, and context window — nothing too fancy yet, but it works well for quick, private conversations without any cloud dependency. I’m also planning to build a sandbox-style system so other iOS/macOS apps can interact with models that the user has already downloaded. If you have any feedback, suggestions, or model recommendations, I’d really appreciate it. Still learning a lot, and would love to make this more useful for folks who are deep into the local LLM space!
2025-06-04T00:10:03
https://v.redd.it/dm6oetmrxs4f1
D1no_nugg3t
v.redd.it
1970-01-01T00:00:00
0
{}
1l2rn3e
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dm6oetmrxs4f1/DASHPlaylist.mpd?a=1751587817%2CMGM0YmNhMmQ5M2RiMzk5OTYyNWUxNzUxOTA3NDI5MDAyMzllYTQ0OTc2MDcxYTgxNWI0ZjJmNTdiNTA5NTA1Mw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/dm6oetmrxs4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/dm6oetmrxs4f1/HLSPlaylist.m3u8?a=1751587817%2CZjZjNTkzNjBiOTA5Nzc3YWE3ZDhkNmI3Y2VjYTgxNWFhZmM0M2U2NzZmYmVmYTI4OWZhNDQxYTcyYjgyZjAwZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dm6oetmrxs4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 886}}
t3_1l2rn3e
/r/LocalLLaMA/comments/1l2rn3e/new_to_local_llms_but_just_launched_my_iosmacos/
false
false
https://external-preview…75592940cd059b98
0
{'enabled': False, 'images': [{'id': 'Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?width=108&crop=smart&format=pjpg&auto=webp&s=d76ecc32482f19819f9302b8229e7cfbf241485b', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?width=216&crop=smart&format=pjpg&auto=webp&s=1246f36ad10465d0eeef62c5be7b2a29b3ce4851', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?width=320&crop=smart&format=pjpg&auto=webp&s=c5192e5ace2edef79b09332285c547b329de272b', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?width=640&crop=smart&format=pjpg&auto=webp&s=988c3fa59dc16c44b9cd513e0525e30976a55fd0', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?width=960&crop=smart&format=pjpg&auto=webp&s=3ad1cf06856f3e32d22eb84b4cb89a83267b23fe', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?width=1080&crop=smart&format=pjpg&auto=webp&s=611334d7a6e494cb3d3354728e60be9a2294f639', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?format=pjpg&auto=webp&s=2694359a0fafc4f668c236b7b37934fedee079bc', 'width': 1080}, 'variants': {}}]}
Is there a standard for AI-Readable context files in repositories ?
1
[removed]
2025-06-04T00:15:19
https://www.reddit.com/r/LocalLLaMA/comments/1l2rqv8/is_there_a_standard_for_aireadable_context_files/
shijoi87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2rqv8
false
null
t3_1l2rqv8
/r/LocalLLaMA/comments/1l2rqv8/is_there_a_standard_for_aireadable_context_files/
false
false
self
1
null
DeepSeek R1 0528 8B running locally on Samsung Galaxy Tab S10 Ultra (MediaTek Dimensity 9300+)
0
App: MNN Chat
Settings: Backend: opencl, Thread Number: 6
2025-06-04T00:15:39
https://v.redd.it/5ysuy6l6zs4f1
Ok_Essay3559
/r/LocalLLaMA/comments/1l2rr3z/deepseek_r1_0528_8b_running_locally_on_samsung/
1970-01-01T00:00:00
0
{}
1l2rr3z
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5ysuy6l6zs4f1/DASHPlaylist.mpd?a=1751717746%2CMGZjMmMxMTQyY2RiZDQ4MGQ5MDUwYTY4M2ViZTg0NTkyNzQyMjUwOGE2NGY0NmM4YzUwMjNmODA1ZDA5ZTYyMQ%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/5ysuy6l6zs4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5ysuy6l6zs4f1/HLSPlaylist.m3u8?a=1751717746%2CNzc1MjRjNWM4MDQ4OTNiODJjY2FkZDc2YjNhNTY0ZjAxMmI5YzEyZWJiZmYxZDk2NzZhOGFlMjM2NTdlNDFhNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5ysuy6l6zs4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l2rr3z
/r/LocalLLaMA/comments/1l2rr3z/deepseek_r1_0528_8b_running_locally_on_samsung/
false
false
https://external-preview…72040c2ce0fa7f51
0
{'enabled': False, 'images': [{'id': 'dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?width=108&crop=smart&format=pjpg&auto=webp&s=02e6d8acae1558993ab3d654ccc02f9540b61ee4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?width=216&crop=smart&format=pjpg&auto=webp&s=dbc409d2131dd15e8426b06e287c7a2d73ab7e4c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?width=320&crop=smart&format=pjpg&auto=webp&s=9300df85352cfd45bcca1ad414311031f1818c0d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?width=640&crop=smart&format=pjpg&auto=webp&s=1390bef11d22e3ae981d657cb77e7e84276189f6', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?width=960&crop=smart&format=pjpg&auto=webp&s=8bb7a4cfc5411dbe7c3b6d054fe93f2bdc82a2bd', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=210deaf6ddb8f907af2fbe8181c651cea3aec723', 'width': 1080}], 'source': {'height': 810, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?format=pjpg&auto=webp&s=a6a0b0228728b63c755ea91c90bea3a2132c406f', 'width': 1440}, 'variants': {}}]}
Secure Minions: private collaboration between Ollama and frontier models
35
Extremely interesting developments coming out of Hazy Research. Has anyone tested this yet?
2025-06-04T00:23:17
https://ollama.com/blog/secureminions
MediocreBye
ollama.com
1970-01-01T00:00:00
0
{}
1l2rwhu
false
null
t3_1l2rwhu
/r/LocalLLaMA/comments/1l2rwhu/secure_minions_private_collaboration_between/
false
false
https://b.thumbs.redditm…GfuKl0-DKJRg.jpg
35
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
Setting up an AI to help prepare for a high difficulty questions test
1
[removed]
2025-06-04T01:24:52
https://www.reddit.com/r/LocalLLaMA/comments/1l2t4jm/setting_up_an_ai_to_help_prepare_for_a_high/
FinancialMechanic853
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2t4jm
false
null
t3_1l2t4jm
/r/LocalLLaMA/comments/1l2t4jm/setting_up_an_ai_to_help_prepare_for_a_high/
false
false
self
1
null
WINA by Microsoft
1
[removed]
2025-06-04T01:32:25
https://www.reddit.com/r/LocalLLaMA/comments/1l2ta4d/wina_by_microsoft/
mas554ter365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2ta4d
false
null
t3_1l2ta4d
/r/LocalLLaMA/comments/1l2ta4d/wina_by_microsoft/
false
false
self
1
{'enabled': False, 'images': [{'id': 'iIAybxrKhc8mLATwgu3MVJWB8lY8OZqDINzCmSk7SK4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?width=108&crop=smart&auto=webp&s=40a77179dfc505f41aa170ba092e10cbaa75fa97', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?width=216&crop=smart&auto=webp&s=5b82fe9ebd4314a63ed4aaec88240ea375e007d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?width=320&crop=smart&auto=webp&s=6ddee993a2d39864a066fcd50f72762a23ec96bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?width=640&crop=smart&auto=webp&s=c5bf832db36515a567ee9caccbb014526c358c74', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?width=960&crop=smart&auto=webp&s=55875dda9da894350e4224856bc4855efa0e5ff6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?width=1080&crop=smart&auto=webp&s=16b6200deeb08a21460cf7c6a6251390f72fb424', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?auto=webp&s=cfef7d0e95f094389c083f10f9ad86f337d585d1', 'width': 1200}, 'variants': {}}]}
Understand Any Repo In Seconds
0
Hey Devs & PMs! Imagine if you could approach any GitHub repository and:

✨ Instantly grasp its core through intelligent digests.
✨ See its structure unfold before your eyes in clear diagrams.
✨ Simply *ask* the codebase questions and get meaningful answers.

I've created [**Gitscape.ai**](http://Gitscape.ai) ([https://www.gitscape.ai/](https://www.gitscape.ai/)) to bring this vision to life. 🤯 Oh, and it's **100% OPEN SOURCE!** 🤯 Feel free to try it, break it, fix it!
2025-06-04T02:24:54
https://www.reddit.com/r/LocalLLaMA/comments/1l2ubor/understand_any_repo_in_seconds/
Purple_Huckleberry58
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2ubor
false
null
t3_1l2ubor
/r/LocalLLaMA/comments/1l2ubor/understand_any_repo_in_seconds/
false
false
self
0
{'enabled': False, 'images': [{'id': 'lMxBo0Bl-j1avfhruNBelKZWZ42GY1AF8eqiHj1lXdw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?width=108&crop=smart&auto=webp&s=5952a25c64c3feff3c5ffa73ec90e38921b64f94', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?width=216&crop=smart&auto=webp&s=cf960cbce6d7fad83d2bd21d2cd22a9c48f5f457', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?width=320&crop=smart&auto=webp&s=dd948211909d15d1d39745f1af6b8dc48f0c99cb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?width=640&crop=smart&auto=webp&s=accde10df17dffca494e13aa0696635e6abaf75e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?width=960&crop=smart&auto=webp&s=dab72ce8a58c7982e75410c9d610e67006f29713', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?width=1080&crop=smart&auto=webp&s=936b8132ff25d95565a8db8c3a7d095b14d651f3', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?auto=webp&s=690be7752af866f77ed169ef53b029f6c131b5a0', 'width': 1280}, 'variants': {}}]}
Building an AI sales call coach trained on real objections-best stack for passive listening, grading, and coaching?
1
[removed]
2025-06-04T02:25:24
https://www.reddit.com/r/LocalLLaMA/comments/1l2uc1q/building_an_ai_sales_call_coach_trained_on_real/
Zealousideal_Top8456
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2uc1q
false
null
t3_1l2uc1q
/r/LocalLLaMA/comments/1l2uc1q/building_an_ai_sales_call_coach_trained_on_real/
false
false
self
1
null
Ecne AI Podcast Generator - Update
22
[main page of the new early development GUI](https://preview.redd.it/l1ttsivtlt4f1.png?width=974&format=png&auto=webp&s=5cd68053e425cae46eaf174906c23e18539a2795)

So I've been working more on one of my side projects, the [Ecne-AI-Podcaster](https://github.com/ETomberg391/Ecne-AI-Podcaster). The goal is to automate as much as I can, in a decent-quality way, with as many free tools as possible, to build automated podcast videos.

My project takes your topic idea, some search keywords you set, and some guidance you'd like the podcast to use or follow, and then uses several techniques to automate researching the topic (Google/Brave API, Selenium, Newspaper4k, and local pdf, docx, xlsx, xlsm, csv, and txt files). It will then compile a podcast script (either Host/Guest, or just Host in single-speaker mode), along with an optional report paper and a YouTube description generator, in case you want those for posting.

Once you have the script, you can process it through the podcast generator option, and it will generate segments of the audio for you to review, along with any tweaks and redos you need for the text and TTS audio.

The largest example I have done is a new video I've posted here: [Dundell's Cyberspace - What are Game Emulators?](https://youtu.be/zbZmEwGinoA?si=-SIWCyKdi8b94T9G), which ended up with 173 sources used, distilled down to 89 with an acceptable relevance score based on the topic, and then 78 segments of broken-down TTS audio, for a total 18 1/2 min video that took 2 hours (45 min script building + 45 min TTS generation + 30 min building the finalized video), along with 1 1/2 hours of manually fixing TTS audio ends with my built-in GUI for quality purposes.

Notes:

- The installer is working but is a huge mess. I'm taking recommendations soon to remove the sudo install requests, see if I can find a better solution than using sudo for anything, and just mention what the user needs to install beforehand, like most other projects do...
- Additionally, I'm looking into more options for the Docker backend. The backend is entirely the [Orpheus-FastAPI Project](https://github.com/Lex-au/Orpheus-FastAPI), with models based on [Orpheus-TTS](https://github.com/canopyai/Orpheus-TTS), which so far works best as an all-in-one solution, with very good quality audio in a nice FastAPI llama-server Docker container. I'd try out another TTS like Dia when I find a decently Dockerized FastAPI with similar functionality.
- Lastly, I've been working on getting both Linux and Windows working, and so far I can, but Windows takes a lot of reruns of the installer. Again, I'm going to try to move away from anything needing sudo or admin rights soon, or at least add more of an acknowledgement/consent step for transparency.

If you have any questions, let me know. I'm going to continue developing this further, fix up the README and requirements section, and fix any additional bugs I can find.
2025-06-04T02:29:53
https://www.reddit.com/r/LocalLLaMA/comments/1l2uf6e/ecne_ai_podcast_generator_update/
Dundell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2uf6e
false
null
t3_1l2uf6e
/r/LocalLLaMA/comments/1l2uf6e/ecne_ai_podcast_generator_update/
false
false
https://b.thumbs.redditm…6jdGGcT3OIaQ.jpg
22
{'enabled': False, 'images': [{'id': 'kvtnKZsDxU3WMPXcbwq_8t-ZEg2GbZGKlOMogDo3XjE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?width=108&crop=smart&auto=webp&s=57eeea0584b55d81e3a0e7aea51da7fe8b533002', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?width=216&crop=smart&auto=webp&s=38017fd42050e0115a51c4ef3fa55f3af6c597df', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?width=320&crop=smart&auto=webp&s=f2c3d18f79a3a8c7d6ef1bfaae78d9fb62186ef9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?width=640&crop=smart&auto=webp&s=dc854f0bdc436f59c7883ad04543499e14274e2e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?width=960&crop=smart&auto=webp&s=df205a4bc1a14b2e97169c5571253694e7a281a7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?width=1080&crop=smart&auto=webp&s=d0128b8a62d2aef4fb445c0f95868d63699738c9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?auto=webp&s=34e9a81972c59651de8f9577969d3c2e0e5246a5', 'width': 1200}, 'variants': {}}]}
Used DeepSeek-R1 0528 (Qwen 3 distill) to extract information from a PDF with Ollama and the results are great
0
I've converted the latest [Nvidia financial results](https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-first-quarter-fiscal-2026) to markdown and fed it to the model. The values extracted were all correct - something I haven't seen from a <13B model. What are your impressions of the model?

- [Watch the YouTube video](https://www.youtube.com/watch?v=EK227Zysnyk)
- [Jupyter notebook](https://github.com/curiousily/AI-Bootcamp/blob/master/25.financial-data-extraction-with-deepseek-r1.ipynb)
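A hedged sketch of this kind of extraction loop using the `ollama` Python client; the model tag, file name, and requested fields below are illustrative assumptions, not taken from the post:

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

report_md = open("nvidia_q1_fy2026.md").read()  # hypothetical markdown file

resp = ollama.chat(
    model="deepseek-r1:8b",  # assumed tag for the R1-0528 Qwen3-8B distill
    messages=[{
        "role": "user",
        "content": (
            "Extract total revenue, net income and diluted EPS from this "
            "report. Answer as JSON with keys revenue, net_income, eps.\n\n"
            + report_md
        ),
    }],
)
print(resp["message"]["content"])
```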
2025-06-04T02:40:32
https://www.reddit.com/r/LocalLLaMA/comments/1l2umib/used_deepseekr1_0528_qwen_3_distill_to_extract/
curiousily_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2umib
false
null
t3_1l2umib
/r/LocalLLaMA/comments/1l2umib/used_deepseekr1_0528_qwen_3_distill_to_extract/
false
false
self
0
{'enabled': False, 'images': [{'id': 't2EnApoTGr54KYWonX6z7WBWt9w9XZj6oBGtwLmjIos', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?width=108&crop=smart&auto=webp&s=3c3963995ac274a3d73fed0fa18b0c7800daabd8', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?width=216&crop=smart&auto=webp&s=aa5955634d8bf056aed4da201f498dff5fe5d318', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?width=320&crop=smart&auto=webp&s=ea0979054d2fe549ae1e3e2ccbe714c80d8936cf', 'width': 320}, {'height': 425, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?width=640&crop=smart&auto=webp&s=54d97cddee2bce3769417a4198daf13e14181081', 'width': 640}, {'height': 638, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?width=960&crop=smart&auto=webp&s=8ee52031e186157e50af8909f4fd45aa629a37ec', 'width': 960}, {'height': 718, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?width=1080&crop=smart&auto=webp&s=361dd3f44f331eeae2bdce7707fdb1a7b88fdaeb', 'width': 1080}], 'source': {'height': 801, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?auto=webp&s=bea8e8882dde470176bebc95cd7ccab68c83cb70', 'width': 1204}, 'variants': {}}]}
Using AI to review sales booking calls and improve objection handling
1
[removed]
2025-06-04T02:48:26
https://www.reddit.com/r/LocalLLaMA/comments/1l2us3j/using_ai_to_review_sales_booking_calls_and/
jprime4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2us3j
false
null
t3_1l2us3j
/r/LocalLLaMA/comments/1l2us3j/using_ai_to_review_sales_booking_calls_and/
false
false
self
1
null
Help: BSODs with any model
1
[removed]
2025-06-04T02:53:22
https://www.reddit.com/r/LocalLLaMA/comments/1l2uvka/help_bsods_with_any_model/
VitallyRaccoon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2uvka
false
null
t3_1l2uvka
/r/LocalLLaMA/comments/1l2uvka/help_bsods_with_any_model/
false
false
self
1
null
Connecting to an LM Studio server using an Android client?
1
Does anyone have a solution for this, or is Ollama my best bet for remotely hosting a server? I have Tailscale on all devices, so I can definitely use that. I looked into ChatterUI, but it doesn't seem to be compatible with LM Studio, unless I'm missing something. Thoughts?
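One data point that may help: LM Studio's server speaks the OpenAI-compatible API (port 1234 by default), so any client that lets you override the base URL should work over Tailscale. A minimal sanity check, with a placeholder Tailscale IP:

```python
from openai import OpenAI  # pip install openai; works with any compatible server

# Placeholder Tailscale IP; 1234 is LM Studio's usual default port.
client = OpenAI(base_url="http://100.64.0.1:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # LM Studio routes this to whatever model is loaded
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```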
2025-06-04T03:07:21
https://www.reddit.com/r/LocalLLaMA/comments/1l2v58k/connecting_to_an_lm_studio_server_using_an/
nat2r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2v58k
false
null
t3_1l2v58k
/r/LocalLLaMA/comments/1l2v58k/connecting_to_an_lm_studio_server_using_an/
false
false
self
1
null
Simulated Transcendence: Exploring the Psychological Effects of Prolonged LLM Interaction
0
I've been researching a phenomenon I'm calling **Simulated Transcendence (ST)**—a pattern where extended interactions with large language models (LLMs) give users a sense of profound insight or personal growth, which may not be grounded in actual understanding. **Key Mechanisms Identified:** * **Semantic Drift:** Over time, users and LLMs may co-create metaphors and analogies that lose their original meaning, leading to internally coherent but externally confusing language. * **Recursive Containment:** LLMs can facilitate discussions that loop back on themselves, giving an illusion of depth without real progression. * **Affective Reinforcement:** Positive feedback from LLMs can reinforce users' existing beliefs, creating echo chambers. * **Simulated Intimacy:** Users might develop emotional connections with LLMs, attributing human-like understanding to them. * **Authorship and Identity Fusion:** Users may begin to see LLM-generated content as extensions of their own thoughts, blurring the line between human and machine authorship. These mechanisms can lead to a range of cognitive and emotional effects, from enhanced self-reflection to potential dependency or distorted thinking. I've drafted a paper discussing ST in detail, including potential mitigation strategies through user education and interface design. **Read the full draft here:** [ST paper](https://docs.google.com/document/d/1nJfq2WwFoe3K0uYYjCe8A277IU_nw1utkIVgYI3_wS8/edit?usp=sharing) I'm eager to hear your thoughts: * Have you experienced or observed similar patterns? * What are your perspectives on the psychological impacts of LLM interactions? Looking forward to a thoughtful discussion!
2025-06-04T03:25:28
https://www.reddit.com/r/LocalLLaMA/comments/1l2vhbu/simulated_transcendence_exploring_the/
AirplaneHat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2vhbu
false
null
t3_1l2vhbu
/r/LocalLLaMA/comments/1l2vhbu/simulated_transcendence_exploring_the/
false
false
self
0
{'enabled': False, 'images': [{'id': 'qNGDBk-zGbl3NXt1amgYXXBbIZktw2XXX27lvHHk2Fo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?width=108&crop=smart&auto=webp&s=2e019fb2ae04e664a6868b912c66f53397890996', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?width=216&crop=smart&auto=webp&s=2b1771c9716e429e943697bbfc72d7731995079b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?width=320&crop=smart&auto=webp&s=eb0513e20daf4cceffc0dc8c471cd8ddf18e450a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?width=640&crop=smart&auto=webp&s=9f64964f4c2f6e3821c7b4cfc222c8e653116cae', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?width=960&crop=smart&auto=webp&s=5c2d0240fca84ae80876cbf8949f69d713b1506e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?width=1080&crop=smart&auto=webp&s=407167dc2d84dc716896118257563be4ef3e2c0b', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?auto=webp&s=d36355db24e69adb9eab864b8686ac44a350abbe', 'width': 1200}, 'variants': {}}]}
Recommended courses/certifications to become an AI integration professional as a software engineer?
1
[removed]
2025-06-04T03:28:13
https://www.reddit.com/r/LocalLLaMA/comments/1l2vj2m/recommended_coursescertifications_to_become_an_ai/
Ill_Yam_9994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2vj2m
false
null
t3_1l2vj2m
/r/LocalLLaMA/comments/1l2vj2m/recommended_coursescertifications_to_become_an_ai/
false
false
self
1
null
Meta AI is really good for removing objects, texts, watermarks, etc from images in just a few clicks.
1
2025-06-04T03:40:29
https://www.reddit.com/gallery/1l2vqyy
Obvious_King2150
reddit.com
1970-01-01T00:00:00
0
{}
1l2vqyy
false
null
t3_1l2vqyy
/r/LocalLLaMA/comments/1l2vqyy/meta_ai_is_really_good_for_removing_objects_texts/
false
false
https://b.thumbs.redditm…_aQexsHtALxA.jpg
1
null
Fully offline verbal chat bot
75
I wanted to get some feedback on my project at its current state. The goal is to have the program run in the background so that the LLM is always accessible with just a keybind. Right now I have it displaying a console for debugging, but it is capable of running fully in the background. This is written in Rust, and is set up to run fully offline. I'm using LM Studio to serve the model on an OpenAI-compatible API, Piper TTS for the voice, and Whisper.cpp for the transcription. Current ideas:

- Find a better Piper model
- Allow customization of the hotkey via a config file
- Add a hotkey to insert the contents of the clipboard into the prompt
- Add the ability to cut off the AI before it finishes

I'm not making the code available yet since at its current state it's highly tailored to my specific computer. I will make it open source on GitHub once I fix that. Please leave suggestions!
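Not the author's Rust code, but a hedged Python sketch of the same STT → LLM → TTS loop, with the speech ends stubbed out and LM Studio's OpenAI-compatible server assumed on its default port:

```python
import requests

def transcribe(wav_path: str) -> str:
    # Placeholder: the real pipeline shells out to Whisper.cpp here.
    return "What's the weather like today?"

def speak(text: str) -> None:
    # Placeholder: the real pipeline pipes text into Piper TTS here.
    print(f"[TTS] {text}")

# One round trip through LM Studio's OpenAI-compatible endpoint
# (default port 1234; the model name resolves to whatever is loaded).
reply = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": transcribe("mic.wav")}],
    },
    timeout=120,
).json()["choices"][0]["message"]["content"]
speak(reply)
```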
2025-06-04T03:41:10
https://v.redd.it/cw4rpviiyt4f1
NonYa_exe
v.redd.it
1970-01-01T00:00:00
0
{}
1l2vrg2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cw4rpviiyt4f1/DASHPlaylist.mpd?a=1751600487%2CM2U5MzIxNmYxOGVmNDU2NTVmYjEyOTIzNGMwNjRlZDFhMmZhMTA0NTQ1ZWMxZTZjNGJiODNlMDEzMjdiZWJkYQ%3D%3D&v=1&f=sd', 'duration': 79, 'fallback_url': 'https://v.redd.it/cw4rpviiyt4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/cw4rpviiyt4f1/HLSPlaylist.m3u8?a=1751600487%2CM2I2YTJjOTg5MGQ0MThhMWJjMjU0NzQ4YmI4OWJlNTBmMmM1Njc2OTgxZjJjYjJjNjIwYjdkODRjMmVlNDQwOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cw4rpviiyt4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l2vrg2
/r/LocalLLaMA/comments/1l2vrg2/fully_offline_verbal_chat_bot/
false
false
https://external-preview…4c7b22d9475ab110
75
{'enabled': False, 'images': [{'id': 'dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?width=108&crop=smart&format=pjpg&auto=webp&s=f5d6d8654b0dc55df3f2d8852d0d9c124d07f8f6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?width=216&crop=smart&format=pjpg&auto=webp&s=d0d4be3acc02a2b7ef4838a323d0ac37ddbc5f55', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?width=320&crop=smart&format=pjpg&auto=webp&s=032c61dc0eb86a7e533a4521b4f1f4d680646c04', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?width=640&crop=smart&format=pjpg&auto=webp&s=a7bf994f88884e7729e408675aacde3614e9ab07', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?width=960&crop=smart&format=pjpg&auto=webp&s=abfb3f28b61145ce0500d6f0ac454a5eaa5553d2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9cdfeb87ebe5fa1fe982dfaf6b1ccba9f359f119', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?format=pjpg&auto=webp&s=834389b7dba20a5853993db13075c08a7dc2ca0f', 'width': 1920}, 'variants': {}}]}
Turning to LocalLLM instead of Gemini?
6
Hey all, I've been using Gemini 2.5 Pro as a coding assistant for a long time now. Recently Google has really neutered Gemini. Responses are less confident, and they often ramble and repeat the same code dozens of times. I've been testing R1 0528 8B fp16 on a 5090, and it seems to come up with decent solutions, faster than Gemini. Gemini's time to first token is extremely long now, sometimes 5+ minutes.

I'm curious what your experience is with local LLMs for coding and what models you all use. This is the first time I've actually considered more GPUs in favor of local LLMs over paying for online LLM services. What platform are you all coding on? I've been happy with VS Code.
2025-06-04T04:43:20
https://www.reddit.com/r/LocalLLaMA/comments/1l2wuk3/turning_to_localllm_instead_of_gemini/
rymn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2wuk3
false
null
t3_1l2wuk3
/r/LocalLLaMA/comments/1l2wuk3/turning_to_localllm_instead_of_gemini/
false
false
self
6
null
Python Pandas Ditches NumPy for Speedier PyArrow
144
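For context, the switch is already available as an opt-in in pandas 2.x; a minimal sketch of reading a file with Arrow-backed dtypes instead of NumPy ones (the file name is illustrative):

```python
import pandas as pd

# Opt into PyArrow-backed dtypes at read time instead of NumPy ones.
df = pd.read_csv("posts.csv", dtype_backend="pyarrow")
print(df.dtypes)  # e.g. int64[pyarrow], string[pyarrow], ...
```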
2025-06-04T04:44:44
https://thenewstack.io/python-pandas-ditches-numpy-for-speedier-pyarrow/
Sporeboss
thenewstack.io
1970-01-01T00:00:00
0
{}
1l2wvf3
false
null
t3_1l2wvf3
/r/LocalLLaMA/comments/1l2wvf3/python_pandas_ditches_numpy_for_speedier_pyarrow/
false
false
https://b.thumbs.redditm…WnYp4YEzlxKU.jpg
144
{'enabled': False, 'images': [{'id': 'x63vjJO5jvX8J5A_FSdVNXOBldLGHYVzgJVrJ6TICYc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?width=108&crop=smart&auto=webp&s=3861884de0126014c55e3f76985339defbab8768', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?width=216&crop=smart&auto=webp&s=9cbb8ee0be7081e512ec16b5bc2712ac7552fc1a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?width=320&crop=smart&auto=webp&s=43168b0d283be84aac70b575c43712629195387b', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?width=640&crop=smart&auto=webp&s=b0372e63d733e050c30987cb1c1b8b2b63ce28fc', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?width=960&crop=smart&auto=webp&s=62717cbbc6f96dbc115e5a70744de2d1acac2bef', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?width=1080&crop=smart&auto=webp&s=d79581401f8ca3b3426c42e47cde322dde7b076e', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?auto=webp&s=e7d8ca039972be430924b1360222bd8b2c11c6d3', 'width': 1200}, 'variants': {}}]}
Looking for Guidance on Local LLM Optimization
0
I’m interested in learning about optimization techniques for running inference on local LLMs, but there’s so much information out there that I’m not sure where to start. I’d really appreciate any suggestions or guidance on how to begin. I’m currently using a gaming laptop with an RTX 4050 GPU. Also, do you think learning CUDA would be worthwhile if I want to go deeper into the optimization side?
2025-06-04T04:54:18
https://www.reddit.com/r/LocalLLaMA/comments/1l2x1be/looking_for_guidance_on_local_llm_optimization/
stinkbug_007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2x1be
false
null
t3_1l2x1be
/r/LocalLLaMA/comments/1l2x1be/looking_for_guidance_on_local_llm_optimization/
false
false
self
0
null
Loading Qwen Pretrained Weights for Fine-tuning
1
[removed]
2025-06-04T05:27:44
https://www.reddit.com/r/LocalLLaMA/comments/1l2xldu/loading_qwen_pretrained_weights_for_finetuning/
hendy0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2xldu
false
null
t3_1l2xldu
/r/LocalLLaMA/comments/1l2xldu/loading_qwen_pretrained_weights_for_finetuning/
false
false
self
1
{'enabled': False, 'images': [{'id': '2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=108&crop=smart&auto=webp&s=60f1fd153b0a403486324ab9ab487f26fcccc124', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=216&crop=smart&auto=webp&s=e8a40c9c6052f7e6d858065b7db6670b63faf12e', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=320&crop=smart&auto=webp&s=b0ca75de1834c130768a72df7e333fa766187695', 'width': 320}, {'height': 325, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=640&crop=smart&auto=webp&s=d63f295aa830a3d8fa16df633d1ea9d53e165b9e', 'width': 640}, {'height': 487, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=960&crop=smart&auto=webp&s=27b75128a8201e61d78320035fa8a080962ff9c2', 'width': 960}, {'height': 548, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=1080&crop=smart&auto=webp&s=60d1c6707e88247aa77a52479aada353caca2c8a', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?auto=webp&s=af2f6cc58c9fa7c71153683fa5947608c9943a34', 'width': 1772}, 'variants': {}}]}
Locally loading the pretrained weights of Qwen2.5-0.5B
1
[removed]
2025-06-04T05:33:09
https://www.reddit.com/r/LocalLLaMA/comments/1l2xoip/locally_loading_the_pretrained_weights_of/
hendy0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2xoip
false
null
t3_1l2xoip
/r/LocalLLaMA/comments/1l2xoip/locally_loading_the_pretrained_weights_of/
false
false
self
1
{'enabled': False, 'images': [{'id': '2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=108&crop=smart&auto=webp&s=60f1fd153b0a403486324ab9ab487f26fcccc124', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=216&crop=smart&auto=webp&s=e8a40c9c6052f7e6d858065b7db6670b63faf12e', 'width': 216}, {'height': 162, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=320&crop=smart&auto=webp&s=b0ca75de1834c130768a72df7e333fa766187695', 'width': 320}, {'height': 325, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=640&crop=smart&auto=webp&s=d63f295aa830a3d8fa16df633d1ea9d53e165b9e', 'width': 640}, {'height': 487, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=960&crop=smart&auto=webp&s=27b75128a8201e61d78320035fa8a080962ff9c2', 'width': 960}, {'height': 548, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=1080&crop=smart&auto=webp&s=60d1c6707e88247aa77a52479aada353caca2c8a', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?auto=webp&s=af2f6cc58c9fa7c71153683fa5947608c9943a34', 'width': 1772}, 'variants': {}}]}
nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1 · Hugging Face
77
2025-06-04T05:34:44
https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1l2xpf5
false
null
t3_1l2xpf5
/r/LocalLLaMA/comments/1l2xpf5/nvidiallama31nemotronnanovl8bv1_hugging_face/
false
false
https://b.thumbs.redditm…Gcy_79n2GD0I.jpg
77
{'enabled': False, 'images': [{'id': 'jDcrQduqX5g-ycbyFiwL5ysLV6x6-8E3fp6HGIL8u7Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?width=108&crop=smart&auto=webp&s=01e6c02b48eac1d78a94fb44f556438e8bd923b3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?width=216&crop=smart&auto=webp&s=d1947a6126484e3fe9aec145dbc699aaee79573d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?width=320&crop=smart&auto=webp&s=a3a42fcea81b837593bb163359c705834d83d89f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?width=640&crop=smart&auto=webp&s=f881ace7e1b4f8039722557754569dbed2ca23ac', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?width=960&crop=smart&auto=webp&s=699dcea6fc07749ab26582fb09b894236d78e190', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?width=1080&crop=smart&auto=webp&s=8006c74c4d4762aae2422737c695fb90ce2eecf1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?auto=webp&s=2e4159adcf30cb29ad8bb7a6f3ca211d7b947292', 'width': 1200}, 'variants': {}}]}
Suitable LLM+prompt for extracting data points from an image of graphs/charts
1
[removed]
2025-06-04T05:58:13
https://i.redd.it/yxdcc5lwmu4f1.png
EmeraldThug
i.redd.it
1970-01-01T00:00:00
0
{}
1l2y2bb
false
null
t3_1l2y2bb
/r/LocalLLaMA/comments/1l2y2bb/suitable_llmprompt_for_extracting_data_points/
false
false
https://b.thumbs.redditm…hiW0YeFsqTlA.jpg
1
{'enabled': True, 'images': [{'id': 'ojlIhXXoxjnefWadiG6oo4dn2nHZ999Ah0du_w8F4iA', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/yxdcc5lwmu4f1.png?width=108&crop=smart&auto=webp&s=7783c890465fe3d02ea0e4f9597c178da71ac830', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/yxdcc5lwmu4f1.png?width=216&crop=smart&auto=webp&s=a4a2a6a2cc1fea444c94c4cf2dc82e0bddcdf354', 'width': 216}, {'height': 214, 'url': 'https://preview.redd.it/yxdcc5lwmu4f1.png?width=320&crop=smart&auto=webp&s=70ac870a209056bf5f352d5238f3a7d1bfbab459', 'width': 320}, {'height': 429, 'url': 'https://preview.redd.it/yxdcc5lwmu4f1.png?width=640&crop=smart&auto=webp&s=873a27e75e35f2bdcc841316f9d94c315807e386', 'width': 640}], 'source': {'height': 640, 'url': 'https://preview.redd.it/yxdcc5lwmu4f1.png?auto=webp&s=9f14590412202c01b4c7ebc6fd2d8e897e008fac', 'width': 954}, 'variants': {}}]}
Tried 10 models; all seem to refuse to write a 10,000-word story. Is there something wrong with my prompt? I'm just doing some testing to learn, and I can't figure out how to get the LLM to do as I say.
55
2025-06-04T06:36:26
https://i.imgur.com/uup3xQO.png
StartupTim
i.imgur.com
1970-01-01T00:00:00
0
{}
1l2ynsc
false
null
t3_1l2ynsc
/r/LocalLLaMA/comments/1l2ynsc/tried_10_models_all_seem_to_refuse_to_write_a/
false
false
https://b.thumbs.redditm…lepK5vdysDew.jpg
55
{'enabled': True, 'images': [{'id': 'wQB-UVcq4f39CkUKxnJvTeA2uOvjdGGU560OImk8Nnk', 'resolutions': [{'height': 32, 'url': 'https://external-preview.redd.it/rlOy3w1CoH2JdExOlsJT5MCZp4fqssksLcusxXxosAg.png?width=108&crop=smart&auto=webp&s=ae525cb0dfa6fdc778bcdc65ff48475c663de14a', 'width': 108}, {'height': 65, 'url': 'https://external-preview.redd.it/rlOy3w1CoH2JdExOlsJT5MCZp4fqssksLcusxXxosAg.png?width=216&crop=smart&auto=webp&s=d0bf6bf7480407d4867ba1562a29c69aa8cbd364', 'width': 216}, {'height': 97, 'url': 'https://external-preview.redd.it/rlOy3w1CoH2JdExOlsJT5MCZp4fqssksLcusxXxosAg.png?width=320&crop=smart&auto=webp&s=fe9ed55f03792979881c573798b9bdc836c0d5c7', 'width': 320}, {'height': 195, 'url': 'https://external-preview.redd.it/rlOy3w1CoH2JdExOlsJT5MCZp4fqssksLcusxXxosAg.png?width=640&crop=smart&auto=webp&s=6ce5032ae39d55f280b278a700a75157939738ba', 'width': 640}], 'source': {'height': 257, 'url': 'https://external-preview.redd.it/rlOy3w1CoH2JdExOlsJT5MCZp4fqssksLcusxXxosAg.png?auto=webp&s=847e292a494c43d7ece9cdcfbf65368a3d59a5da', 'width': 842}, 'variants': {}}]}
Why doesn't Llama4:16x17b run well on a host with enough RAM to run 32B dense models?
0
I have an M1 Max with 32GB RAM. It runs 32B models very well (13-16 tokens/s). I thought I could run a large MoE like llama4:16x17b, because if only 17B parameters are active plus some shared layers, they will easily fit in my RAM and the other memory pages can sleep in swap space. But no. $ ollama ps NAME ID SIZE PROCESSOR UNTIL llama4:16x17b fff25efaabd4 70 GB 69%/31% CPU/GPU 4 minutes from now — the system slows down to a crawl and I get 1 token every 20-30 seconds. I clearly misunderstood how things work. Asking big DeepSeek gives me a different answer each time I ask. Is anybody willing to clarify in simple terms? Also, what is the largest MoE I could run on this? (Something with more overall parameters than a dense 32B model.)
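A back-of-the-envelope sketch of what goes wrong, assuming a roughly Q4-quantized checkpoint (all numbers illustrative): the router selects different experts for every token, so over a handful of tokens nearly every expert's pages get touched, and anything pushed to swap keeps faulting back in.

```python
# Illustrative MoE vs. dense memory math (rough figures, not exact for any
# particular GGUF). Assumes ~0.55 bytes/param, typical of Q4_K_M-style quants.
bytes_per_param = 0.55

dense_32b  = 32e9  * bytes_per_param / 1e9   # dense 32B: fits in 32 GB unified memory
moe_total  = 109e9 * bytes_per_param / 1e9   # Llama 4 Scout: ~109B TOTAL params must stay resident
moe_active = 17e9  * bytes_per_param / 1e9   # ...even though only ~17B are ACTIVE per token

print(f"dense 32B resident:   {dense_32b:5.1f} GB")
print(f"MoE total resident:   {moe_total:5.1f} GB")   # > 32 GB RAM -> swap thrashing
print(f"MoE active per token: {moe_active:5.1f} GB")
```

The rule of thumb: total parameters bound the memory you need, active parameters bound the speed you get. On 32 GB, an MoE with more total parameters than a dense 32B generally will not fit.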
2025-06-04T06:45:51
https://www.reddit.com/r/LocalLLaMA/comments/1l2yssk/why_doesnt_llama416x17b_run_well_on_a_host_with/
umataro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2yssk
false
null
t3_1l2yssk
/r/LocalLLaMA/comments/1l2yssk/why_doesnt_llama416x17b_run_well_on_a_host_with/
false
false
self
0
null
Colab of XTTS2 Coqui? The ones available on Google aren't working
0
[https://huggingface.co/spaces/coqui/xtts](https://huggingface.co/spaces/coqui/xtts) I want what's working here, but for longer context. Thank you.
2025-06-04T07:25:55
https://www.reddit.com/r/LocalLLaMA/comments/1l2zeql/colab_of_xtts2_conqui_tried_available_on_google/
jadhavsaurabh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2zeql
false
null
t3_1l2zeql
/r/LocalLLaMA/comments/1l2zeql/colab_of_xtts2_conqui_tried_available_on_google/
false
false
self
0
{'enabled': False, 'images': [{'id': 'qEg0nV4qLjF_R339rWG2nm0ZKDxL3ktS8y5QFNx3XaI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?width=108&crop=smart&auto=webp&s=14485689b80d197ea46135f5f612fca2c19a8443', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?width=216&crop=smart&auto=webp&s=35384c1e4d99f68090c8e0a3b8fbffbaab0f74ff', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?width=320&crop=smart&auto=webp&s=ef9cc8ba7f49df833e36ed6ac85c9b389a8d90ef', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?width=640&crop=smart&auto=webp&s=6ad026f67e807372777c1886d2dde4c03f869d43', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?width=960&crop=smart&auto=webp&s=d35cc55adb6e466d1f4d62d23f920f29cfd9dedc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?width=1080&crop=smart&auto=webp&s=71b0e117d5defbabaecf3a146fccc26e53c13d5a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?auto=webp&s=39341ebb80b547106865953db6ab22ca14a294ce', 'width': 1200}, 'variants': {}}]}
Should I buy this laptop?
0
Hey everyone, I came across a used Dell XPS 13 9340 with 32GB RAM and a 1TB SSD, running on the Meteor Lake chip. The seller is asking 650 euros for it. Just looking for some advice. I currently have a MacBook M2 Max with 32GB, which I like, but the privacy concerns and limited flexibility with Linux are pushing me to switch. Thinking about selling the MacBook and using the Dell mainly for Linux and running local LLMs. Does anyone here have experience with this model, especially for LLM use? How does it perform in real-world situations, both in terms of speed and efficiency? I’m curious how well it handles various open-source LLMs, and whether the performance is actually good enough for day-to-day work or tinkering. Is this price about right for what’s being offered, or should I be wary? The laptop was originally bought in November 2024, so it should still be fairly new. For those who have tried Linux on this particular Dell, any issues with compatibility or hardware support I should know about? Would you recommend it for a balance of power, portability, and battery life? Any tips on what to look out for before buying would also be appreciated. Thanks for any input. Let me know what you guys think :)
2025-06-04T07:34:55
https://www.reddit.com/r/LocalLLaMA/comments/1l2zjet/should_i_buy_this_laptop/
Optimal_League_1419
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2zjet
false
null
t3_1l2zjet
/r/LocalLLaMA/comments/1l2zjet/should_i_buy_this_laptop/
false
false
self
0
null
Progress update — current extraction status + next step for dataset formatting
0
I’ve currently extracted only {{char}}’s dialogue — without {{user}} responses — from the visual novel. Right now, I haven’t fully separated SFW from NSFW yet. There are two files: One with mixed SFW + NSFW One with NSFW-only content I’m wondering now: Should I also extract SFW-only into its own file? Once extraction is done, I’ll begin merging everything into a proper JSON structure for formatting as a usable dataset — ready for developers to use for fine-tuning or RAG systems. Also, just to check — is what I’m doing so far actually the right approach? I’m mainly focused on organizing, cleaning, and formatting the raw dialogue in a way that’s useful for others, but if anyone has tips or corrections, I’d appreciate the input. This is my first real project, and while I don’t plan to stop at this visual novel, I’m still unsure what the next step will be after I finish this one. Any feedback on the SFW/NSFW separation or the structure you’d prefer to see in the dataset is welcome.
2025-06-04T09:10:33
https://i.redd.it/cw2ynsrvmv4f1.png
Akowmako
i.redd.it
1970-01-01T00:00:00
0
{}
1l30wtf
false
null
t3_1l30wtf
/r/LocalLLaMA/comments/1l30wtf/progress_update_current_extraction_status_next/
false
false
https://a.thumbs.redditm…SLqitofiu9c8.jpg
0
{'enabled': True, 'images': [{'id': 'bqztjXF0wlfVnn_sVTsZfkYM4PXHu8QJj5zxVlJFDP4', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?width=108&crop=smart&auto=webp&s=810a025b524b32dc257771e91c59289fa7be309f', 'width': 108}, {'height': 72, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?width=216&crop=smart&auto=webp&s=2e8a8e06094cb6d018f73128c2d9217664fe135d', 'width': 216}, {'height': 107, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?width=320&crop=smart&auto=webp&s=05e32c4b2645857fc63987c2e334c989621fe2ba', 'width': 320}, {'height': 215, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?width=640&crop=smart&auto=webp&s=03e0304b0aa047f3cabd7fb74d861378c21063cd', 'width': 640}, {'height': 322, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?width=960&crop=smart&auto=webp&s=3c7d907501cfc027d5efd2e09169d0b2d9733974', 'width': 960}, {'height': 363, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?width=1080&crop=smart&auto=webp&s=18afb17d76e613f995f52f75a48c6308fa6700db', 'width': 1080}], 'source': {'height': 484, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?auto=webp&s=c4f50fcc80e32f237364b71c36cda71eac3240e9', 'width': 1440}, 'variants': {}}]}
Shisa V2 405B: The strongest model ever built in Japan! (JA/EN)
310
Hey everyone, so we've released the latest member of our [Shisa V2](https://www.reddit.com/r/LocalLLaMA/comments/1jz2lll/shisa_v2_a_family_of_new_jaen_bilingual_models/) family of open bilingual (Japanese/English) models: [Shisa V2 405B](https://shisa.ai/posts/shisa-v2-405b/)! * Llama 3.1 405B fine-tune, inherits the Llama 3.1 license * Not just our JA mix but also additional KO + ZH-TW to augment 405B's native multilingual * Beats GPT-4 & GPT-4 Turbo in JA/EN, matches latest GPT-4o and DeepSeek-V3 in JA MT-Bench (it's not a reasoning or code model, but 日本語上手!) * Based on our evals, it's without a doubt the strongest model to ever be released from Japan, beating out the efforts of bigcos etc. Tiny teams can do great things leveraging open models! * Quants and endpoint available for testing * Super cute doggos: [Shisa V2 405B 日本語上手!](https://preview.redd.it/ky8dtjov7v4f1.jpg?width=900&format=pjpg&auto=webp&s=92341b8556272d688fe593aa185ca83351d83b99) For the r/LocalLLaMA crowd: * Of course full model weights at [shisa-ai/shisa-v2-llama-3.1-405b](https://huggingface.co/shisa-ai/shisa-v2-llama3.1-405b) but also a range of GGUFs in a repo as well: [shisa-ai/shisa-v2-llama3.1-405b-GGUF](https://huggingface.co/shisa-ai/shisa-v2-llama3.1-405b-GGUF) * These GGUFs are all (except the Q8\_0) imatrixed with a calibration set based on our (Apache 2.0, also available for download) core Shisa V2 SFT dataset. They range from 100GB for the IQ2\_XXS to 402GB for the Q8\_0. Thanks to ubergarm for the pointers on what the GGUF quanting landscape looks like in 2025! Check out our initially linked blog post for all the deets + a full set of overview slides in JA and EN versions. It explains how we did our testing, training, dataset creation, and all kinds of little fun tidbits like: [Go go open source!](https://preview.redd.it/lbov91pz8v4f1.png?width=3000&format=png&auto=webp&s=01c8e969486668800832414e9224c523cfa626c6) [When your model is significantly better than GPT 4 it just gives you 10s across the board 😂](https://preview.redd.it/rhposci49v4f1.png?width=3000&format=png&auto=webp&s=8c7023b1b5f57a5b40fdafc8109e5f3e30a2a51f) While I know these models are big and maybe not directly relevant to people here, we've now tested our dataset on a huge range of base models from 7B to 405B and can conclude it can basically make any model mo-betta' at Japanese (without negatively impacting English or other capabilities!). This whole process has been basically my whole year, so I'm happy to finally get it out there and, of course, to answer any questions anyone might have.
2025-06-04T09:32:34
https://www.reddit.com/r/LocalLLaMA/comments/1l318di/shisa_v2_405b_the_strongest_model_ever_built_in/
randomfoo2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l318di
false
null
t3_1l318di
/r/LocalLLaMA/comments/1l318di/shisa_v2_405b_the_strongest_model_ever_built_in/
false
false
self
310
null
Is this Server PC a horrible mistake for these purposes?
1
[removed]
2025-06-04T10:48:44
https://www.reddit.com/r/LocalLLaMA/comments/1l32fi0/is_this_server_pc_a_horrible_mistake_for_these/
Humble_Stuff5531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l32fi0
false
null
t3_1l32fi0
/r/LocalLLaMA/comments/1l32fi0/is_this_server_pc_a_horrible_mistake_for_these/
false
false
self
1
null
Can I Train an LLM on a Normal Laptop? (Need Advice!)
1
[removed]
2025-06-04T11:11:10
https://www.reddit.com/r/LocalLLaMA/comments/1l32tkd/can_i_train_an_llm_on_a_normal_laptop_need_advice/
Popular_Student_2822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l32tkd
false
null
t3_1l32tkd
/r/LocalLLaMA/comments/1l32tkd/can_i_train_an_llm_on_a_normal_laptop_need_advice/
false
false
self
1
null
Practical Steps to Fine-Tune a Small LLM (e.g., Mistral) on a Laptop?
1
[removed]
2025-06-04T11:13:47
https://www.reddit.com/r/LocalLLaMA/comments/1l32v77/practical_steps_to_finetune_a_small_llm_eg/
Popular_Student_2822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l32v77
false
null
t3_1l32v77
/r/LocalLLaMA/comments/1l32v77/practical_steps_to_finetune_a_small_llm_eg/
false
false
self
1
null
The godawful Limitless Pendant reviewed…
1
2025-06-04T11:21:28
https://www.damianreilly.co.uk/p/review-the-godawful-limitless-pendant
myrtlehinchwater
damianreilly.co.uk
1970-01-01T00:00:00
0
{}
1l33043
false
null
t3_1l33043
/r/LocalLLaMA/comments/1l33043/the_godawful_limitless_pendant_reviewed/
false
false
https://b.thumbs.redditm…yoc45t1nDeiY.jpg
1
{'enabled': False, 'images': [{'id': 'TwRHafzxrhfsJkhI2MqiSfz6nbh04f3ARvp9BRu2Ji4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?width=108&crop=smart&auto=webp&s=3e502827f453a16bd5dcc6a637a1ed58b73705a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?width=216&crop=smart&auto=webp&s=bbaa480942c6c0eac2aa215fdca2e4463632dbab', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?width=320&crop=smart&auto=webp&s=b0155657c24740f0ee0153f3d7dfe4bfc703a357', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?width=640&crop=smart&auto=webp&s=dfed02d04d005528f5eb3608a1ba81014a7f409a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?width=960&crop=smart&auto=webp&s=14d5181783c12da2c5430b5cf9390718dc29449b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?width=1080&crop=smart&auto=webp&s=9eddfc58b60e68d2978ff0034d7f2f825e55ceae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?auto=webp&s=99830cd0357670b20312807f7f44cad31838dc5e', 'width': 1200}, 'variants': {}}]}
Would like to run AI on my old laptop
1
[removed]
2025-06-04T11:25:29
https://www.reddit.com/r/LocalLLaMA/comments/1l332oa/would_like_to_run_ai_on_my_old_laptop/
SaasMinded
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l332oa
false
null
t3_1l332oa
/r/LocalLLaMA/comments/1l332oa/would_like_to_run_ai_on_my_old_laptop/
false
false
https://b.thumbs.redditm…vj7om_ihq7zc.jpg
1
null
Best model for data extraction from scanned documents
11
I'm building my own little OCR tool to extract data from PDFs, mostly bank receipts, ID cards, and stuff like that. I experimented with a few models (running locally on Ollama), and I found that gemma3:12b was the best choice I could get. I'm running on a 4070 laptop with 8GB, but I have a desktop with a 5080 if the models really need more power and VRAM. Gemma3 is quite good, especially with text data, but on the numbers it hallucinates a lot, even when the document is clearly readable. I tried InternVL2\_5 4B, but it's not doing great at all, and InternVL3 8B just responds "sorry", so it's a bit broken in my use case. If you have any recommendations for models that could work well in my use case, I would be interested :)
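One pattern that tends to reduce number hallucinations is constraining the model to JSON output and asking it to copy digits verbatim. A minimal sketch with the Ollama Python client (the model name and field list are just illustrative assumptions):

```python
# Hedged sketch: extract fields from a scanned receipt via an Ollama vision model.
import json
import ollama  # pip install ollama

response = ollama.chat(
    model="gemma3:12b",  # any local vision model you have pulled
    messages=[{
        "role": "user",
        "content": (
            'Extract these fields from the bank receipt and reply with JSON only: '
            '{"date": str, "amount": str, "iban": str}. '
            "Copy all numbers character by character; use null if unreadable."
        ),
        "images": ["receipt.png"],  # path to the scanned page
    }],
    format="json",  # constrains the reply to valid JSON
)
print(json.loads(response["message"]["content"]))
```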
2025-06-04T11:39:19
https://www.reddit.com/r/LocalLLaMA/comments/1l33bph/best_model_for_data_extraction_from_scanned/
Wintlink-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l33bph
false
null
t3_1l33bph
/r/LocalLLaMA/comments/1l33bph/best_model_for_data_extraction_from_scanned/
false
false
self
11
null
Most recently updated knowledge base/ training data.
1
Which good LLM models, no matter the size, have the most up-to-date knowledge base?
2025-06-04T11:43:47
https://www.reddit.com/r/LocalLLaMA/comments/1l33enp/most_recently_updated_knowledge_base_training_data/
EasyConference4177
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l33enp
false
null
t3_1l33enp
/r/LocalLLaMA/comments/1l33enp/most_recently_updated_knowledge_base_training_data/
false
false
self
1
null
Self aware models?
1
[removed]
2025-06-04T11:51:51
https://www.reddit.com/r/LocalLLaMA/comments/1l33k2n/self_aware_models/
Gadrakmtg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l33k2n
false
null
t3_1l33k2n
/r/LocalLLaMA/comments/1l33k2n/self_aware_models/
false
false
self
1
null
Help me use AI for my game - specific case
8
Hi, hope this is the right place to ask. I created a game to play myself in C# and C++. It's one of those hidden object games. As I made it for myself, I used assets from another game from a different genre. The studio that developed that game closed down in 2016, and I don't know who owns the copyright now; it seems no one does. The sprites I used from that game are distinctive and easily recognisable as coming from that game. Now that I'm thinking of sharing my game with everyone, how can I use AI to recreate these images in a different but uniform style, to detach them from the original source? Is there a way I can feed it the original sprites, plus examples of the style I want the new game to have, and have it re-imagine the sprites? Getting an artist to draw them is not an option as there are more than 10,000 sprites. Thanks.
2025-06-04T12:01:57
https://www.reddit.com/r/LocalLLaMA/comments/1l33r5h/help_me_use_ai_for_my_game_specific_case/
Salamander500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l33r5h
false
null
t3_1l33r5h
/r/LocalLLaMA/comments/1l33r5h/help_me_use_ai_for_my_game_specific_case/
false
false
self
8
null
looking for a free good image to video ai service
0
I’m looking for a good free image-to-video AI that lets me generate around 8 eight-second videos a day on a free plan without blocking 60 to 70 percent of my prompts. I tried a couple of sites with the prompt “girl slowly does a 360 turn” and both blocked it. Does anyone know any sites or tools, maybe even [**domoai**](https://www.domoai.app/home?via=081621AUG) or [kling](http://kling.com), that let you make 8 videos a day for free without heavy prompt restrictions? I appreciate any recommendations!
2025-06-04T12:33:42
https://www.reddit.com/r/LocalLLaMA/comments/1l34dqu/looking_for_a_free_good_image_to_video_ai_service/
Own_View3337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l34dqu
false
null
t3_1l34dqu
/r/LocalLLaMA/comments/1l34dqu/looking_for_a_free_good_image_to_video_ai_service/
false
false
self
0
null
AMA – I’ve built 7 commercial RAG projects. Got tired of copy-pasting boilerplate, so we open-sourced our internal stack.
623
Hey folks, I’m a senior tech lead with 8+ years of experience, and for the last \~3 I’ve been knee-deep in building LLM-powered systems — RAG pipelines, agentic apps, text2SQL engines. We’ve shipped real products in manufacturing, sports analytics, NGOs, legal… you name it. After doing this *again and again*, I got tired of the same story: building ingestion from scratch, duct-taping vector DBs, dealing with prompt spaghetti, and debugging hallucinations without proper logs. So we built [**ragbits**](https://github.com/deepsense-ai/ragbits) — a toolbox of reliable, type-safe, modular building blocks for GenAI apps. What started as an internal accelerator is now **fully open-sourced (v1.0.0)** and ready to use. Why we built it: * We wanted *repeatability*. RAG isn’t magic — but building it cleanly every time takes effort. * We needed to *move fast* for PoCs, without sacrificing structure. * We hated black boxes — ragbits integrates easily with your observability stack (OpenTelemetry, CLI debugging, prompt testing). * And most importantly, we wanted to scale apps without turning the codebase into a dumpster fire. I’m happy to answer questions about RAG, our approach, gotchas from real deployments, or the internals of ragbits. No fluff — just real lessons from shipping LLM systems in production. We’re looking for feedback, contributors, and people who want to build better GenAI apps. If that sounds like you, take [ragbits](https://github.com/deepsense-ai/ragbits) for a spin. Let’s talk 👇
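For anyone newer to this, here is a deliberately tiny sketch of the core loop that every RAG pipeline, ragbits included, ultimately reduces to: embed, retrieve, stuff the prompt. This is not ragbits' API, just the bare pattern (the model and documents are placeholders):

```python
# Minimal RAG core: embed documents, retrieve by cosine similarity, build a prompt.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Orders ship within 48 hours of payment confirmation.",
    "Refunds are processed in 5-7 business days.",
    "Support is available weekdays from 9 to 17 CET.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

question = "How fast do orders ship?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # hand this to whatever LLM backend you use
```

Everything else (ingestion, chunking, observability, prompt management) is what turns these ten lines into a production system, which is exactly the boilerplate ragbits aims to package.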
2025-06-04T13:06:03
https://www.reddit.com/r/LocalLLaMA/comments/1l352wk/ama_ive_built_7_commercial_rag_projects_got_tired/
Loud_Picture_1877
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l352wk
false
null
t3_1l352wk
/r/LocalLLaMA/comments/1l352wk/ama_ive_built_7_commercial_rag_projects_got_tired/
false
false
self
623
{'enabled': False, 'images': [{'id': 'DNbUBM7ed4V49RRULrIFUyofnrMt6h4cI90ApWQTQDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?width=108&crop=smart&auto=webp&s=c4df8d942ed3660c38a8f93dc941c58ecee2f46f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?width=216&crop=smart&auto=webp&s=62372db2ee058f8cb8c83ebc6eacd812c57c6bfb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?width=320&crop=smart&auto=webp&s=4471d6966dc9b4a2e531ea21c11e858316afc18e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?width=640&crop=smart&auto=webp&s=945b0b0c62bdf7e6c966171da60950c2102d8638', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?width=960&crop=smart&auto=webp&s=2b021e3a0bc640dbba0bd7838a0faea8bf6eafba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?width=1080&crop=smart&auto=webp&s=b06e9753fbc93c8f73c732d2e8b68d3a156b5429', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?auto=webp&s=c93f00ec53ac840f3402a28a8f1a8e517af95588', 'width': 1200}, 'variants': {}}]}
How to access my LLM remotely
0
I have Ollama and Docker running Open WebUI, set up and working well on the LAN. How can I open port 3000 to access the LLM from anywhere? I have a static IP, but when I try to port forward, it doesn't respond.
2025-06-04T13:18:43
https://www.reddit.com/r/LocalLLaMA/comments/1l35d0c/how_to_access_my_llm_remotely/
bones10145
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l35d0c
false
null
t3_1l35d0c
/r/LocalLLaMA/comments/1l35d0c/how_to_access_my_llm_remotely/
false
false
self
0
null
KV Cache in nanoVLM
25
I thought I had a fair amount of understanding about KV Cache before implementing it from scratch. I would like to dedicate this blog post to everyone who is really curious about KV Cache, thinks they know enough about the idea, but would love to implement it someday. We discovered a lot of things while working through it, and I have tried documenting them as much as I could. Hope you all enjoy reading it. We chose [nanoVLM](https://github.com/huggingface/nanoVLM) to implement KV Cache so that it does not have too many abstractions and we could lay out the foundations better. Blog: [hf.co/blog/kv-cache](http://hf.co/blog/kv-cache)
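To give a taste of the idea before the blog post: a bare-bones, single-head NumPy sketch (the nanoVLM implementation is multi-head, batched, and in PyTorch, but the caching logic is the same):

```python
# Single-head attention decoding with a KV cache: each new token projects its
# K/V once, appends to the cache, and attends over everything cached so far.
import numpy as np

d = 64
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
k_cache, v_cache = [], []  # grows by one entry per generated token

def decode_step(x):  # x: (d,) hidden state of the newest token only
    q = x @ Wq
    k_cache.append(x @ Wk)  # earlier tokens are never re-projected
    v_cache.append(x @ Wv)
    K, V = np.stack(k_cache), np.stack(v_cache)  # (t, d)
    att = np.exp(K @ q / np.sqrt(d))
    att /= att.sum()
    return att @ V  # attention output for the new token

for _ in range(5):
    out = decode_step(rng.standard_normal(d))
print(out.shape)  # (64,): one token's output, computed against all cached K/V
```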
2025-06-04T13:23:57
https://www.reddit.com/r/LocalLLaMA/comments/1l35h5g/kv_cache_in_nanovlm/
Disastrous-Work-1632
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l35h5g
false
null
t3_1l35h5g
/r/LocalLLaMA/comments/1l35h5g/kv_cache_in_nanovlm/
false
false
https://a.thumbs.redditm…MhLLf7WWvay0.jpg
25
{'enabled': False, 'images': [{'id': 'vknG7DZMBYx0_fJ40PKSKNLzuhkrd4QfuZqTU-ujEyM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?width=108&crop=smart&auto=webp&s=5a87716d667ffbac2c68a865fbfead1af7713350', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?width=216&crop=smart&auto=webp&s=4ac4a573a01d127ddedefe9d2f15f588d2c6fe0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?width=320&crop=smart&auto=webp&s=1519a04f6ede7107a4cdc200c42e4a65717040ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?width=640&crop=smart&auto=webp&s=47376a36a5552070fc30e5629099d856ecaa7080', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?width=960&crop=smart&auto=webp&s=66a0491f6ff16af8dee4609cd4c36aa5684d0837', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?width=1080&crop=smart&auto=webp&s=c1224afd76985c776c0912184311d74a0a6f55ad', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?auto=webp&s=e4195260975de912e90cb0094aab84923a6fab05', 'width': 1280}, 'variants': {}}]}
RTX 5060 Ti 16GB vs 5070 Ti 16GB for AI workloads (LLMs, fine-tuning, etc.)
1
[removed]
2025-06-04T13:26:54
https://i.redd.it/iik0idchww4f1.jpeg
VIrgin_COde
i.redd.it
1970-01-01T00:00:00
0
{}
1l35jh8
false
null
t3_1l35jh8
/r/LocalLLaMA/comments/1l35jh8/rtx_5060_ti_16gb_vs_5070_ti_16gb_for_ai_workloads/
false
false
default
1
{'enabled': True, 'images': [{'id': 'iik0idchww4f1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?width=108&crop=smart&auto=webp&s=62fac6ee3eb824af9f339556e35bfa258baf03be', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?width=216&crop=smart&auto=webp&s=0f37f1aac894bc4ff20a93fb4665775daa01feab', 'width': 216}, {'height': 220, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?width=320&crop=smart&auto=webp&s=334327175fbe2790d1859a5758682ec783bce15c', 'width': 320}, {'height': 441, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?width=640&crop=smart&auto=webp&s=d9f0b34e898e905b88eda6ec4e9888b933f245c8', 'width': 640}, {'height': 662, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?width=960&crop=smart&auto=webp&s=991477d8c32fdab70d1364f474f9f1b03df36b76', 'width': 960}], 'source': {'height': 707, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?auto=webp&s=fa4b71006480491420952b80104142bcdc4cf627', 'width': 1024}, 'variants': {}}]}
Is DevStral actually usable for C programming? Occasionally getting segmentation faults...
1
[removed]
2025-06-04T13:34:32
https://www.reddit.com/r/LocalLLaMA/comments/1l35pjo/is_devstral_actually_usable_for_c_programming/
ParticularContest201
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l35pjo
false
null
t3_1l35pjo
/r/LocalLLaMA/comments/1l35pjo/is_devstral_actually_usable_for_c_programming/
false
false
self
1
null
Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training
140
"Announcing the release of the official Common Corpus paper: a 20 page report detailing how we collected, processed and published 2 trillion tokens of reusable data for LLM pretraining." Thread by the first author: https://x.com/Dorialexander/status/1930249894712717744 Paper: https://arxiv.org/abs/2506.01732
2025-06-04T13:37:13
https://i.redd.it/l1wcpiqhyw4f1.jpeg
Initial-Image-1015
i.redd.it
1970-01-01T00:00:00
0
{}
1l35rp1
false
null
t3_1l35rp1
/r/LocalLLaMA/comments/1l35rp1/common_corpus_the_largest_collection_of_ethical/
false
false
https://b.thumbs.redditm…vaD07owSP7kA.jpg
140
{'enabled': True, 'images': [{'id': 'Eodox09e1J5kGgD9RnLimI6X9YHgeP49qkeGxNEYhrE', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?width=108&crop=smart&auto=webp&s=9551cfe4a9add655326ce00738e015a801d139c6', 'width': 108}, {'height': 235, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?width=216&crop=smart&auto=webp&s=e7519df5873f2c83128bcd55fab17b4c180a69cf', 'width': 216}, {'height': 348, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?width=320&crop=smart&auto=webp&s=57c609ecc1b88ee3677d9e751663960cd7a5ce13', 'width': 320}, {'height': 697, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?width=640&crop=smart&auto=webp&s=b09d23def47a360d64c26390174e009fd27fb722', 'width': 640}, {'height': 1046, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?width=960&crop=smart&auto=webp&s=248e1b1b5cf8ee8c975cf2a7e5d2947031c84fe9', 'width': 960}, {'height': 1177, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?width=1080&crop=smart&auto=webp&s=2b80f35e55f960b7c4c95ff77b1b73ad0b82986d', 'width': 1080}], 'source': {'height': 1236, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?auto=webp&s=768c488c64aaf556a3aac0a99e994ea9efe432f3', 'width': 1134}, 'variants': {}}]}
SmolVLA: Efficient Vision-Language-Action Model trained on Lerobot Community Data
1
[removed]
2025-06-04T13:45:19
https://i.redd.it/vtztkucqyw4f1.gif
WoanqDil
i.redd.it
1970-01-01T00:00:00
0
{}
1l35ygr
false
null
t3_1l35ygr
/r/LocalLLaMA/comments/1l35ygr/smolvla_efficient_visionlanguageaction_model/
false
false
default
1
{'enabled': True, 'images': [{'id': 'vtztkucqyw4f1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=108&crop=smart&format=png8&s=81fb251db57dc279abf625ae5ee13f0b8c897e21', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=216&crop=smart&format=png8&s=94e0a2388e65cd5124d72797ecf210b652fd3d19', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=320&crop=smart&format=png8&s=c7506eba99148fd52a94f0455c462cd00aaadbb0', 'width': 320}], 'source': {'height': 240, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?format=png8&s=c3b62b9fd70fa1bf0e0f571efc0753d10f8657b0', 'width': 544}, 'variants': {'gif': {'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=108&crop=smart&s=4cf6d39ce8c3ef837b065e7d34b480595760acb6', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=216&crop=smart&s=cd3c0017915ebee374a21dc8f85f4bd0be87b943', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=320&crop=smart&s=d12f70db910eca16239eb5747167c0333bac2dfa', 'width': 320}], 'source': {'height': 240, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?s=3243872c884401b4578caa35bef169221fcfac1b', 'width': 544}}, 'mp4': {'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=108&format=mp4&s=11b13c3470910d13f6f64fbab1355a25dc92defb', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=216&format=mp4&s=523768b2f4de310fab10b09ecc018e9e2252e414', 'width': 216}, {'height': 141, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=320&format=mp4&s=ca22d361525f70a44ac7c80ad3a55be7772cc22a', 'width': 320}], 'source': {'height': 240, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?format=mp4&s=063889c5067a1db52ad575f77c543cf46528e276', 'width': 544}}}}]}
Improving DeepSeek-R1-0528 Inference Speed
1
[removed]
2025-06-04T13:53:25
https://www.reddit.com/r/LocalLLaMA/comments/1l365fa/improving_deepseekr10528_inference_speed/
prepytixel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l365fa
false
null
t3_1l365fa
/r/LocalLLaMA/comments/1l365fa/improving_deepseekr10528_inference_speed/
false
false
self
1
null
Is DevStral actually usable for C programming? Occasionally getting segmentation faults...
1
[removed]
2025-06-04T13:57:02
https://www.reddit.com/r/LocalLLaMA/comments/1l368co/is_devstral_actually_usable_for_c_programming/
ParticularContest201
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l368co
false
null
t3_1l368co
/r/LocalLLaMA/comments/1l368co/is_devstral_actually_usable_for_c_programming/
false
false
self
1
null
Suggestions for a good model for generating Drupal module code?
0
I've tried the opencoder and Deepseek models, as well as llama, gemma and a few others, but they tend to really not generate sensible results even with the temperature lowered. Does anyone have any tiips on which model(s) might be best suited for generating Drupal code? Thanks!!
2025-06-04T14:11:00
https://www.reddit.com/r/LocalLLaMA/comments/1l36kbc/suggestions_for_a_good_model_for_generating/
tastybeer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l36kbc
false
null
t3_1l36kbc
/r/LocalLLaMA/comments/1l36kbc/suggestions_for_a_good_model_for_generating/
false
false
self
0
null
Simple News Broadcast Generator Script using local LLM as "editor" EdgeTTS as narrator, using a list of RSS feeds you can curate yourself
32
In this repo I built a simple Python script which scrapes RSS feeds and generates a news broadcast MP3 narrated by a realistic voice, using Ollama (so a local LLM) to generate the summaries and the final composed broadcast. You can specify whichever news sources you want in the feeds.yaml file, as well as the number of articles, and you can change the tone of the broadcast by editing the summary and broadcast-generating prompts in the simple one-file script. All you need is Ollama installed; then pull whichever models you want or can run locally (I like Mistral for this use case), and you can swap out the models as well as the narrator's voice (using Edge TTS) easily at the beginning of the script. There is so much more you can do with this concept and build upon it. I made a version the other day which had a full Vite/React frontend and FastAPI backend that displayed each of the news stories, summaries, links, and sorting abilities, as well as a UI to change the sources and read or listen to the broadcast. But I like the simplicity of this: simply run the script and listen to the latest news in a brief broadcast from a myriad of viewpoints, with your own choice of tone through editing the prompts. This all originated on a post where someone said AI would lead to people being less informed, and I argued that if you use AI correctly it would actually make you more informed. So I decided to write a script which takes whichever news sources I want (in this case objectivity is my goal) and lets me alter the prompts which edit together the broadcast, so that I avoid the interjected bias inherent in almost all news broadcasts nowadays. I therefore posit that I can use AI to help people be more informed rather than less, by allowing an individual to construct their own news broadcasts free of the biases that come with having a "human" editor of the news. Soulless, but that is how I like my objective news content.
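For readers who just want the shape of it, the whole pipeline compresses to a few lines. A minimal sketch assuming the feedparser, ollama, and edge-tts packages (the feed URL and voice are just examples):

```python
# Fetch a feed, summarize with a local model via Ollama, narrate with Edge TTS.
import asyncio
import feedparser  # pip install feedparser
import ollama      # pip install ollama
import edge_tts    # pip install edge-tts

feed = feedparser.parse("https://feeds.bbci.co.uk/news/rss.xml")
stories = "\n".join(e.title + " - " + e.get("summary", "") for e in feed.entries[:5])

script = ollama.chat(
    model="mistral",
    messages=[{"role": "user",
               "content": "Write a brief, neutral news broadcast from these items:\n" + stories}],
)["message"]["content"]

asyncio.run(edge_tts.Communicate(script, voice="en-US-GuyNeural").save("broadcast.mp3"))
```

The tone of the broadcast lives entirely in that one prompt string, which is the point: you, not an editor, decide how the news gets framed.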
2025-06-04T14:20:15
https://github.com/kliewerdaniel/News02
KonradFreeman
github.com
1970-01-01T00:00:00
0
{}
1l36s62
false
null
t3_1l36s62
/r/LocalLLaMA/comments/1l36s62/simple_news_broadcast_generator_script_using/
false
false
default
32
{'enabled': False, 'images': [{'id': 'fjxqU7FwvzpZ5aD4dKhEDL4Mh84C2kD-LdIr5egsvAE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?width=108&crop=smart&auto=webp&s=d80318cfb026081ad9bd96d0a03d72bb81c3f579', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?width=216&crop=smart&auto=webp&s=3e88b1bd902e7523fc01233de8f02f0b5950d124', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?width=320&crop=smart&auto=webp&s=42b3e749be3a94f8d0340b73cb4bba0409b8867f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?width=640&crop=smart&auto=webp&s=70fa2368d5859d9edd2fc09e22913d20ae1311ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?width=960&crop=smart&auto=webp&s=0fb5a146ccb03298e21d9038806330017b5d38c9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?width=1080&crop=smart&auto=webp&s=e730d76664a92330130cb36e736a5cd459009b0e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?auto=webp&s=27ca7478fe39d22da91bcca4ebdb1033190cc468', 'width': 1200}, 'variants': {}}]}
Recommendations for model setup on single H200
0
I have been using a server with a single A100 GPU, and now I have an upgrade to a server which has a single H200 (141GB VRAM). Currently I have been using a Mistral-Small-3.1-24B version and serving it behind a vLLM instance. My use case is typically instruction-based, wherein the server is mostly churning out user-defined responses to provided unstructured text data. I also have a small use case of image captioning, for which I am using the VLM capabilities of Mistral. I am reasonably happy with its performance, but I do feel it slows down when users access it in parallel, and the quality of responses leaves room for improvement, typically when the text provided as context with the input is not properly formatted (e.g., when I get text directly from documents, PDFs, OCR etc., it tends to lose a lot of its structure). Now with an H200 machine, I wanted to understand my options. One option I was thinking of was to run 2 instances in a load-balanced way to at least cater to multi-user peak loads. Is there a more elegant way, perhaps using vLLM? More importantly, I wanted to know what better options I have in terms of models I can use. Will I be able to run a 70B Llama 3 or DeepSeek in full precision? If not, which quantized versions would be a good fit? Are there good models between 24B and 70B which I can explore? All inputs are appreciated. Thanks.
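On the parallel-user point: vLLM's continuous batching already multiplexes concurrent requests onto a single engine, so two load-balanced instances are rarely needed; one engine with enough KV-cache headroom usually handles peak load better. A rough sketch of running one larger quantized model on the H200 (the model name and settings are illustrative assumptions, not a tested recommendation):

```python
# Offline example of the same engine vLLM serves behind its OpenAI-compatible endpoint.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",
    quantization="fp8",          # ~70 GB of weights in FP8 leaves room for KV cache on 141 GB
    max_model_len=16384,
    gpu_memory_utilization=0.90,
)
params = SamplingParams(temperature=0.2, max_tokens=512)
outputs = llm.generate(["Summarize the key obligations in this clause: ..."], params)
print(outputs[0].outputs[0].text)
```

For scale: full-precision (BF16) 70B weights alone are about 140 GB, so they would technically load but leave almost no KV-cache room; FP8 or AWQ variants are the practical fit.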
2025-06-04T14:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1l379ix/recommendations_for_model_setup_on_single_h200/
OpportunityProper252
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l379ix
false
null
t3_1l379ix
/r/LocalLLaMA/comments/1l379ix/recommendations_for_model_setup_on_single_h200/
false
false
self
0
null
Best model for research in PyTorch
2
Hello, I'm looking for a model that's good at PyTorch and could help me with my research project. Any ideas?
2025-06-04T15:16:54
https://www.reddit.com/r/LocalLLaMA/comments/1l387hu/best_model_for_research_in_pytorch/
Soft-Salamander7514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l387hu
false
null
t3_1l387hu
/r/LocalLLaMA/comments/1l387hu/best_model_for_research_in_pytorch/
false
false
self
2
null
Help Choosing the Best LLM Inference Stack for Local Deployment (8x RTX 6000 Blackwell)
1
[removed]
2025-06-04T15:41:01
https://www.reddit.com/r/LocalLLaMA/comments/1l38tfw/help_choosing_the_best_llm_inference_stack_for/
Fresh_Month_2594
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l38tfw
false
null
t3_1l38tfw
/r/LocalLLaMA/comments/1l38tfw/help_choosing_the_best_llm_inference_stack_for/
false
false
self
1
null
Has anyone successfully built a coding assistant using local llama?
36
Something like Copilot, Kilocode, etc. What model are you using? What PC specs do you have? How is the performance? Lastly, is this even possible?
2025-06-04T15:49:06
https://www.reddit.com/r/LocalLLaMA/comments/1l390xb/has_anyone_successfully_built_a_coding_assistant/
rushblyatiful
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l390xb
false
null
t3_1l390xb
/r/LocalLLaMA/comments/1l390xb/has_anyone_successfully_built_a_coding_assistant/
false
false
self
36
null
Real-time knowledge graph with Kuzu and CocoIndex, high performance open source stack end to end - GraphRAG
1
[removed]
2025-06-04T15:56:24
https://www.reddit.com/r/LocalLLaMA/comments/1l397ky/realtime_knowledge_graph_with_kuzu_and_cocoindex/
Whole-Assignment6240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l397ky
false
null
t3_1l397ky
/r/LocalLLaMA/comments/1l397ky/realtime_knowledge_graph_with_kuzu_and_cocoindex/
false
false
self
1
null
Drummer's Cydonia 24B v3 - A Mistral 24B 2503 finetune!
127
Survey time: I'm working on Skyfall v3 and need opinions on the upscale size. Does 31B sound comfy for a 24GB setup? Do you have an upper or lower bound in mind for that range?
2025-06-04T16:03:35
https://huggingface.co/TheDrummer/Cydonia-24B-v3
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1l39ea3
false
null
t3_1l39ea3
/r/LocalLLaMA/comments/1l39ea3/drummers_cydonia_24b_v3_a_mistral_24b_2503/
false
false
default
127
{'enabled': False, 'images': [{'id': 't081jAI6iYNLTzx3riAMviTtpLcwyeFOx6ZQvPV3hRI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?width=108&crop=smart&auto=webp&s=24a01201ee1428f9838f07875e4b98e8a59afa19', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?width=216&crop=smart&auto=webp&s=7dbca5cfb98dd3abea9721871c56f606d66063b0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?width=320&crop=smart&auto=webp&s=c2c294dedc659513956a0c7aff64571fb595a09f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?width=640&crop=smart&auto=webp&s=21f1147df642a26ce4a38f94f61704c291faa086', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?width=960&crop=smart&auto=webp&s=17ccf9db99d6ac9e4dc567b202466746b317df5a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?width=1080&crop=smart&auto=webp&s=c51305e41060bc6745df04a8f11b8b64acf77625', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?auto=webp&s=dd2b0129220cccca9ccca9d5724f0f7dd58a3115', 'width': 1200}, 'variants': {}}]}
Is there any open source project leveraging genAI to run quality checks on tabular data ?
1
Hey guys, most of the work in ML/data science/BI still relies on tabular data. Everybody who has worked on that knows data quality is where most of the work goes, and that’s super frustrating. I used to use Great Expectations to run quality checks on dataframes, but that’s based on hard-coded rules (you declare things like “column X needs to be between 0 and 10”). Is there any open source project leveraging genAI to run these quality checks? Something where you tell it what the columns mean and give business context, and the LLM creates tests and finds data quality issues for you? I tried Deep Research and OpenAI found nothing for me.
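The pattern the post asks about is small enough to sketch directly: hand the LLM the schema plus business context and let it propose executable checks. A hedged sketch (the model name is an example, and you are exec'ing model output, so review it before running anything):

```python
# Let a local LLM propose pandas data-quality assertions from column descriptions.
import pandas as pd
import ollama  # pip install ollama

df = pd.DataFrame({"age": [25, -3, 41], "country": ["DE", "FR", "??"]})

prompt = (
    "Columns: age (customer age in years), country (ISO 3166-1 alpha-2 code).\n"
    "Write Python assert statements over a pandas DataFrame named df that "
    "catch data-quality issues. Reply with code only, no fences or prose."
)
checks = ollama.chat(model="qwen2.5-coder:7b",
                     messages=[{"role": "user", "content": prompt}])["message"]["content"]

print(checks)  # inspect first: this is generated code
exec(checks)   # e.g. assert (df["age"] >= 0).all() would flag the -3 row
```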
2025-06-04T16:12:22
https://www.reddit.com/r/LocalLLaMA/comments/1l39mc2/is_there_any_open_source_project_leveraging_genai/
Jazzlike_Tooth929
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l39mc2
false
null
t3_1l39mc2
/r/LocalLLaMA/comments/1l39mc2/is_there_any_open_source_project_leveraging_genai/
false
false
self
1
null
Launch of a modular AI kernel
1
[removed]
2025-06-04T16:25:14
https://www.reddit.com/r/LocalLLaMA/comments/1l39y4y/lancement_dun_noyau_ia_modulaire/
diama_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l39y4y
false
null
t3_1l39y4y
/r/LocalLLaMA/comments/1l39y4y/lancement_dun_noyau_ia_modulaire/
false
false
https://b.thumbs.redditm…T9KL6Kxblk_o.jpg
1
null
Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source
1
[removed]
2025-06-04T16:25:31
https://www.reddit.com/r/LocalLLaMA/comments/1l39yea/anthropic_shutting_out_windsurf_this_is_why_im_so/
davidtwaring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l39yea
false
null
t3_1l39yea
/r/LocalLLaMA/comments/1l39yea/anthropic_shutting_out_windsurf_this_is_why_im_so/
false
false
self
1
null
Anthropic shutting out Windsurf - This is why I'm so big on local and open source
1
[removed]
2025-06-04T16:27:47
https://www.reddit.com/r/LocalLLaMA/comments/1l3a0iy/anthropic_shutting_out_windsurf_this_is_why_im_so/
davidtwaring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3a0iy
false
null
t3_1l3a0iy
/r/LocalLLaMA/comments/1l3a0iy/anthropic_shutting_out_windsurf_this_is_why_im_so/
false
false
self
1
null
Digitizing 30 Stacks of Uni Documents & Feeding Them into a Local LLM
5
Hey everyone, I’m embarking on a pretty ambitious project and could really use some advice. I have about 30 stacks of university notes – each stack is roughly 200 pages – that I want to digitize and then feed into an LLM for analysis. Basically, I'd love to be able to ask the LLM questions about my notes and get intelligent answers based on their content. Ideally, I’d also like to end up with editable Word-like documents containing the digitized text. The biggest hurdle right now is the OCR (Optical Character Recognition) process. I've tried a few different methods already without much success. I've experimented with: * Tesseract OCR: Didn't produce great results, especially with my complex layouts. * PDF24 OCR: Similar issues to Tesseract. * My scanner’s built-in software: This was the best of the bunch so far, but it still struggles significantly. A lot of my notes contain tables and diagrams, and the OCR consistently messes those up. My goal is twofold: 1) to create a searchable knowledge base where I can ask questions about the content of my notes (e.g., "What were the key arguments regarding X?"), and 2) to have editable documents that I can add to or correct. I'm relatively new to the world of LLMs, but I’ve been having fun experimenting with different models through Open WebUI connected to LM Studio. My setup is: * CPU: AMD Ryzen 7 5700X3D * GPU: RX 6700 XT I'm a bit concerned about whether my hardware will be sufficient. Also, I’m very new to programming – I don’t have any experience with Python or coding in general. I'm hoping there might be someone out there who can offer some guidance. Specifically, I'd love to know: * OCR recommendations: Are there any OCR engines or techniques that are particularly good at handling tables and complex layouts? (Ideally something that works well with AMD hardware.) * Post-processing: What’s the best way to clean up OCR output, especially when dealing with lots of tables? Are there any tools or libraries you recommend for correcting errors in bulk? * LLM integration: Any suggestions on how to best integrate the digitized text into a local LLM (e.g., which models are good for question answering and knowledge retrieval)? I'm using Open WebUI/LM Studio currently (mainly because of LM Studio's GPU support), but open to other options. * Hardware considerations: Is my AMD Ryzen 7 5700X3D and RX 6700 XT a reasonable setup for this kind of project? Any help or suggestions would be greatly appreciated! I'm really excited about the potential of this project, but feeling a bit overwhelmed by the technical challenges. Thanks in advance! For anyone who is curious: I let gemma3 write a good part of this post. On my own I just couldn’t keep it structured.
2025-06-04T17:31:45
https://www.reddit.com/r/LocalLLaMA/comments/1l3boea/digitizing_30_stacks_of_uni_dokuments_feeding/
SpitePractical8460
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3boea
false
null
t3_1l3boea
/r/LocalLLaMA/comments/1l3boea/digitizing_30_stacks_of_uni_dokuments_feeding/
false
false
self
5
null
How does gemma3:4b-it-qat fare against OpenAI models on MMLU-Pro benchmark? Try for yourself in Excel
28
I made an Excel add-in that lets you run a prompt on thousands of rows of tasks. Might be useful for some of you to quickly benchmark new models when they come out. In the video I ran gemma3:4b-it-qat, gpt-4.1-mini, and o4-mini on a (admittedly tiny) subset of the MMLU Pro benchmark. I think I understand now why OpenAI didn't include MMLU Pro in their gpt-4.1-mini announcement blog post :D To try for yourself, clone the git repo at [https://github.com/getcellm/cellm/](https://github.com/getcellm/cellm/), build with Visual Studio, and run the installer Cellm-AddIn-Release-x64.msi in src\\Cellm.Installers\\bin\\x64\\Release\\en-US.
2025-06-04T17:37:22
https://v.redd.it/ye3ahlk05y4f1
Kapperfar
/r/LocalLLaMA/comments/1l3btj3/how_does_gemma34bitqat_fare_against_openai_models/
1970-01-01T00:00:00
0
{}
1l3btj3
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ye3ahlk05y4f1/DASHPlaylist.mpd?a=1751780246%2CZjczZDVmNzIyMzg0ZWQwMDA1M2MwN2YyZDU4YTVjZDA5MWJhZGQ2Nzg2YWFkOTUwOWVmMjMyYTE1YjI0ZWFiMA%3D%3D&v=1&f=sd', 'duration': 121, 'fallback_url': 'https://v.redd.it/ye3ahlk05y4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ye3ahlk05y4f1/HLSPlaylist.m3u8?a=1751780246%2CNGMzODI1ZWMwMGU1YWQ1ODdmNGY4YjUyZjUyYzA0Y2UxNTJkMzZkMTVlYzQ4MzNjZTg4MWZkNjc2Mjg1YjVhOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ye3ahlk05y4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l3btj3
/r/LocalLLaMA/comments/1l3btj3/how_does_gemma34bitqat_fare_against_openai_models/
false
false
https://external-preview…0d3ab6c25c8bfc7c
28
{'enabled': False, 'images': [{'id': 'YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?width=108&crop=smart&format=pjpg&auto=webp&s=425ccf9e97c37d435a06bbb596c41e067952ea91', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?width=216&crop=smart&format=pjpg&auto=webp&s=9955aff2d0c064e6aede1c45c50b53018effe015', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?width=320&crop=smart&format=pjpg&auto=webp&s=1087549b1be315cab75c8c9a6c69af408d2272f5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?width=640&crop=smart&format=pjpg&auto=webp&s=df7f391cabc273670116528040966822d79bd2a2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?width=960&crop=smart&format=pjpg&auto=webp&s=95569cd794d6812af156980545fc893fcfb6382c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?width=1080&crop=smart&format=pjpg&auto=webp&s=86a9960c54791df5f7513b3012809461373c840c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?format=pjpg&auto=webp&s=c3e4df071c6c4757c79f157d09bf4eeac149c265', 'width': 1920}, 'variants': {}}]}
GRMR-V3: A set of models for reliable grammar correction.
96
Let's face it: you don't need big 32B models, or even medium-sized 8B models, for grammar correction. So I've created a set of fine-tuned models specialized in just one thing: fixing grammar. [Models](https://huggingface.co/collections/qingy2024/grmr-v3-models-683e6a27b42e4eb0e950fbdd): GRMR-V3 (1B, 1.2B, 1.7B, 3B, 4B, and 4.3B) [GGUFs here](https://huggingface.co/collections/qingy2024/grmr-v3-ggufs-684083beb5be4b136e5fbc68)
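Usage should be the usual transformers chat flow; a minimal sketch, assuming the tokenizer ships a chat template and with a hypothetical repo name (check the collection for the exact IDs):

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/GRMR-V3-1B"  # hypothetical exact name, see the collection
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "me and him goes to the store yesterday"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
out = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```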
2025-06-04T17:53:41
https://www.reddit.com/r/LocalLLaMA/comments/1l3c8is/grmrv3_a_set_of_models_for_reliable_grammar/
random-tomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3c8is
false
null
t3_1l3c8is
/r/LocalLLaMA/comments/1l3c8is/grmrv3_a_set_of_models_for_reliable_grammar/
false
false
self
96
{'enabled': False, 'images': [{'id': 'TUJ65URlfz7avJTo72njareuNShxaju8zU4SYHkPF-c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?width=108&crop=smart&auto=webp&s=08ed7f92b9f39d34b2ebcc9171a156cd04f53397', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?width=216&crop=smart&auto=webp&s=be4e182704a7a392186effa5f32289dbccf8afb8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?width=320&crop=smart&auto=webp&s=14255127a724f020448c0610e78c729e41178314', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?width=640&crop=smart&auto=webp&s=bf66badcc873193df51b253e1792369b4fbe637e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?width=960&crop=smart&auto=webp&s=8503ebfb6c42691409312d91e839bd526c3e3cca', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?width=1080&crop=smart&auto=webp&s=cfbbaae462cc1aab75e1b0fe6ddb8874c939cd1c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MXqJmVgCHNlEUhx-IxD9pDhkhp1ZtKL3aoDYARdvQ10.jpg?auto=webp&s=c06a6eff7e5c95b2c4ac6ff1967e509152a89940', 'width': 1200}, 'variants': {}}]}
Taskade MCP – Generate Claude/Cursor tools from any OpenAPI spec ⚡
1
Hey all, We needed a faster way to wire AI agents (like Claude, Cursor) to real APIs using OpenAPI specs. So we built and open-sourced **[Taskade MCP](https://github.com/taskade/mcp)** — a codegen tool and local server that turns OpenAPI 3.x specs into Claude/Cursor-compatible MCP tools. - Auto-generates agent tools in seconds - Compatible with MCP, Claude, Cursor - Supports headers, fetch overrides, normalization - Includes a local server - Self-hostable, or integrate it into your workflow GitHub: <https://github.com/taskade/mcp> More context: <https://www.taskade.com/blog/mcp/> Thanks, and we welcome any feedback too!
2025-06-04T18:15:24
https://www.reddit.com/r/LocalLLaMA/comments/1l3csix/taskade_mcp_generate_claudecursor_tools_from_any/
taskade
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3csix
false
null
t3_1l3csix
/r/LocalLLaMA/comments/1l3csix/taskade_mcp_generate_claudecursor_tools_from_any/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LASjwjJSCnDis9BKVqPfucu58HXWKqcn4_F2Q418qBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?width=108&crop=smart&auto=webp&s=8a3d117787f29e9c2961c7414493211868076b5f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?width=216&crop=smart&auto=webp&s=1383a4625feba1716936b8ccf8db21455406bf93', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?width=320&crop=smart&auto=webp&s=6a03452f89ce33578a652009d565b42ef7ad2a72', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?width=640&crop=smart&auto=webp&s=f263e411cb33974171d4b5c39e5d2b6876a30324', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?width=960&crop=smart&auto=webp&s=ad6cca8e874dba8d130dad7a958d38631ae0956e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?width=1080&crop=smart&auto=webp&s=7d167aeff6dab5dd83521f53f8f8987f99a173b0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6uwvOyt98UU5zfNKI-Ea240Dk4_hQO6sitKe5Pp-3FY.jpg?auto=webp&s=4e55eb681e0726f9d0a5e8d1ee320bb1e0ddf5e6', 'width': 1200}, 'variants': {}}]}
Real-time conversational AI running 100% locally in-browser on WebGPU
1
For those interested, here's how it works: - A cascade of interleaved models enables low-latency, real-time speech-to-speech generation. - Models: Silero VAD for voice activity detection, Whisper for speech recognition, SmolLM2-1.7B for text generation, and Kokoro for text-to-speech. - WebGPU: powered by Transformers.js and ONNX Runtime Web. I hope you like it! Link to source code and online demo: [https://huggingface.co/spaces/webml-community/conversational-webgpu](https://huggingface.co/spaces/webml-community/conversational-webgpu)
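The demo itself runs in-browser via Transformers.js, but the hand-off between stages is easy to sketch server-side in Python (assuming a recent transformers version whose text-generation pipeline accepts chat messages; the VAD and TTS ends are left as comments):

```
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
llm = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

def respond(wav_path: str) -> str:
    # In the demo, Silero VAD decides when an utterance ends; here we take a file.
    text = asr(wav_path)["text"]                              # speech -> text
    chat = [{"role": "user", "content": text}]
    reply = llm(chat, max_new_tokens=128)[0]["generated_text"][-1]["content"]
    return reply                                              # -> TTS (e.g. Kokoro)
```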
2025-06-04T18:40:15
https://v.redd.it/xk719di2cy4f1
xenovatech
/r/LocalLLaMA/comments/1l3dfg5/realtime_conversational_ai_running_100_locally/
1970-01-01T00:00:00
0
{}
1l3dfg5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xk719di2cy4f1/DASHPlaylist.mpd?a=1751784022%2CMGY5ODZkNmVkNDBkY2I1MWQxYmQ0NWI0MjI3MTk3MmU1MzViY2M3OTU4NDBmYjU0NmYzYWRjYjcwMjY2NWIwNQ%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/xk719di2cy4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/xk719di2cy4f1/HLSPlaylist.m3u8?a=1751784022%2CNGNkNDg3MjdjNmI1NmJhY2E1ZTZlMGQzYWQ1MjMzMTI4ODQwYTgyYTlkMzRkNjI2YjViMjg5MTZhYmRkZjk4Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xk719di2cy4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l3dfg5
/r/LocalLLaMA/comments/1l3dfg5/realtime_conversational_ai_running_100_locally/
false
false
https://external-preview…badcee9875158c96
1
{'enabled': False, 'images': [{'id': 'MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=108&crop=smart&format=pjpg&auto=webp&s=c0bbcea75fec8b62b79a1caba40718b089a490fe', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=216&crop=smart&format=pjpg&auto=webp&s=05833d148e7cf62d186e7140f1e8486604266236', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=320&crop=smart&format=pjpg&auto=webp&s=f52f2d92981b47e81dca3498808709316acfc580', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=640&crop=smart&format=pjpg&auto=webp&s=b5ccccd6c3abcc1b87a8629cfe412c31a43f7852', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=960&crop=smart&format=pjpg&auto=webp&s=a2acc2000b115d533b932fff2f281882d34ef954', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=636dfbcf56057e4d02ea82cb47e33a2f4b775ca9', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/MzR2ZHliaTJjeTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?format=pjpg&auto=webp&s=f6b608b91154c172e047b293565f5e6d2438b081', 'width': 3840}, 'variants': {}}]}
Real-time conversational AI running 100% locally in-browser on WebGPU
1250
2025-06-04T18:42:30
https://v.redd.it/t419j8srgy4f1
xenovatech
/r/LocalLLaMA/comments/1l3dhjx/realtime_conversational_ai_running_100_locally/
1970-01-01T00:00:00
0
{}
1l3dhjx
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t419j8srgy4f1/DASHPlaylist.mpd?a=1751784155%2CNTY5OWEzYzg4Y2NjMmUxNGU3MjFkMWRhMTQxY2NjNjFkMzgwN2EwNDZlZDRlMDg0Nzc2YmRmYTI5MDBiNDFlZg%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/t419j8srgy4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/t419j8srgy4f1/HLSPlaylist.m3u8?a=1751784155%2CYTc1NzlmM2Y4YzE5OGUzNmQxMDI4MTc4ZDE0ZjJjZWRhN2Q4Y2ZhYjcyOWE4MTZmOGJiYTU1OTFjOWQwZjYxZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/t419j8srgy4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l3dhjx
/r/LocalLLaMA/comments/1l3dhjx/realtime_conversational_ai_running_100_locally/
false
false
https://external-preview…9f4c113bdfaee71d
1250
{'enabled': False, 'images': [{'id': 'MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=108&crop=smart&format=pjpg&auto=webp&s=147389199b19cc719c83b4d1cb6de2fcef5b5934', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=216&crop=smart&format=pjpg&auto=webp&s=eeb60924332b4565ed95c870bc5a457eecba2cec', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=320&crop=smart&format=pjpg&auto=webp&s=afd0dfdb1491d5b16f96da3b5d82188415d8e0bf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=640&crop=smart&format=pjpg&auto=webp&s=13e519d35a50adae350f8dcfd20e49009fd6e656', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=960&crop=smart&format=pjpg&auto=webp&s=bef8612c5f6384fb9b7f565f3bbe03ee94581240', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cab51fc8a933cf8389eb3e1d06606c4652d51a7f', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/MmRtc2I4c3JneTRmMb-Z1L0lQHsk-1t-PBURRQBQD36a7CaPOYYP63NLiwlg.png?format=pjpg&auto=webp&s=6fad1dfdfe4e66322f998d2d886c6795fe086703', 'width': 3840}, 'variants': {}}]}
Using LLaMA 3 locally to plan macOS UI actions (Vision + Accessibility demo)
4
Wanted to see if LLaMA 3-8B on an M2 could replace cloud GPT for desktop RPA. Pipeline: * Ollama -> “plan” JSON steps from plain English * macOS Vision framework locates UI elements * Accessibility API executes clicks/keys * Feedback loop retries if confidence < 0.7 Prompt snippet: { "instruction": "rename every PNG on Desktop to yyyy-mm-dd-counter, then zip them" } LLaMA planned 6 steps, hit 5/6 correctly (missed a modal OK button). Repo (MIT, Python + Swift bridge): [https://github.com/macpilotai/macpilot](https://github.com/macpilotai/macpilot) Would love thoughts on improving grounding / reducing hallucinated UI elements.
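The plan step with its retry loop is roughly this shape; a sketch assuming Ollama's /api/generate JSON mode, where the "confidence" field is whatever the system prompt asks the model to emit:

```
import json
import requests

def plan(instruction: str, retries: int = 3) -> dict:
    prompt = (
        "Return a JSON object with 'steps' (a list of UI actions) and "
        f"'confidence' (0-1) for this task: {instruction}"
    )
    out = {}
    for _ in range(retries):
        r = requests.post("http://localhost:11434/api/generate", json={
            "model": "llama3:8b",
            "prompt": prompt,
            "format": "json",   # constrain output to valid JSON
            "stream": False,
        })
        out = json.loads(r.json()["response"])
        if out.get("confidence", 0) >= 0.7:   # the feedback-loop threshold
            return out
    return out  # last attempt, still low confidence
```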
2025-06-04T18:47:18
https://www.reddit.com/r/LocalLLaMA/comments/1l3dm0c/using_llama_3_locally_to_plan_macos_ui_actions/
TyBoogie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3dm0c
false
null
t3_1l3dm0c
/r/LocalLLaMA/comments/1l3dm0c/using_llama_3_locally_to_plan_macos_ui_actions/
false
false
self
4
{'enabled': False, 'images': [{'id': 'jla_f4udSu_pe2OQSa52x_K_vjEbE29q9yz3Rnruh-w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?width=108&crop=smart&auto=webp&s=b8d69cc7bd816c51b5306aab0c1a7bb9c6d7c6e4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?width=216&crop=smart&auto=webp&s=e5e00b6182f6afa85608ca1319d7197b7054990e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?width=320&crop=smart&auto=webp&s=aef03c322919ef64aa1c6be6480c294dffe748b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?width=640&crop=smart&auto=webp&s=6fcfb582799f5b80e2e18ce499a19ef3fd5df386', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?width=960&crop=smart&auto=webp&s=d0d77a824f34e92e71eb6872ff8c1dae26e42449', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?width=1080&crop=smart&auto=webp&s=3c80533e338f03079d22007566f2ec136132fc1d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aRmXx66Lv5yJRvTZQZ4iijpoiQPLctNkEQMQZZ7dp50.jpg?auto=webp&s=1d0a12a5741862781047bcaae7ebdecd80b9a40f', 'width': 1200}, 'variants': {}}]}
CPU or GPU upgrade for 70b models?
4
Currently I'm running 70B Q3 quants on my GTX 1080 with a 6800K CPU at 0.6 tokens/sec. Isn't it true that upgrading to a 4060 Ti with 16GB of VRAM would have almost no effect whatsoever on inference speed, because it's still offloading? GPT thinks I should upgrade my CPU, suggesting I'll get 2.5 tokens per sec or more from a £400 CPU upgrade. Is this accurate? It accurately guessed my inference speed on my 6800K, which makes me think it's correct about everything else.
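A back-of-envelope check on that advice (all numbers are rough assumptions): with most layers offloaded, every generated token has to stream the CPU-resident weights from system RAM once, so RAM bandwidth sets an upper bound that no GPU swap can lift.

```
model_bytes = 70e9 * 3.5 / 8      # ~70B params at ~3.5 bits/weight (Q3-ish)
vram_bytes = 8e9                  # GTX 1080 holds 8 GB of the weights
cpu_bytes = model_bytes - vram_bytes

ram_bw = {
    "quad-channel DDR4 (6800K-era)": 50e9,   # realistic, not theoretical
    "dual-channel DDR5 (new platform)": 70e9,
}
for name, bw in ram_bw.items():
    print(f"{name}: ~{bw / cpu_bytes:.1f} tok/s upper bound")
```

Real speeds land well below these bounds, but they show why the RAM side of the platform dominates once you're offloading.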
2025-06-04T19:44:46
https://www.reddit.com/r/LocalLLaMA/comments/1l3f2jz/cpu_or_gpu_upgrade_for_70b_models/
Ok-Application-2261
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3f2jz
false
null
t3_1l3f2jz
/r/LocalLLaMA/comments/1l3f2jz/cpu_or_gpu_upgrade_for_70b_models/
false
false
self
4
null
I made an LLM tool to let you search offline Wikipedia/StackExchange/DevDocs ZIM files (llm-tools-kiwix, works with Python & LLM cli)
65
Hey everyone, I just released [`llm-tools-kiwix`](https://github.com/mozanunal/llm-tools-kiwix), a plugin for the [`llm` CLI](https://llm.datasette.io/) and Python that lets LLMs read and search offline ZIM archives (i.e., Wikipedia, DevDocs, StackExchange, and more) **totally offline**. **Why?** A lot of local LLM use cases could benefit from RAG using big knowledge bases, but most solutions require network calls. Kiwix makes it possible to have huge websites (Wikipedia, StackExchange, etc.) stored as `.zim` files on your disk. Now you can let your LLM access those—no Internet needed. **What does it do?** - **Discovers your ZIM files** (in the cwd or a folder via `KIWIX_HOME`) - Exposes tools so the LLM can search articles or read full content - Works on the command line or from Python (supports GPT-4o, ollama, Llama.cpp, etc via the `llm` tool) - No cloud or browser needed, just pure local retrieval **Example use-case:** Say you have `wikipedia_en_all_nopic_2023-10.zim` downloaded and want your LLM to answer questions using it: ``` llm install llm-tools-kiwix # (one-time setup) llm -m ollama:llama3 --tool kiwix_search_and_collect \ "Summarize notable attempts at human-powered flight from Wikipedia." \ --tools-debug ``` Or use the Docker/DevDocs ZIMs for local developer documentation search. **How to try:** 1. Download some ZIM files from https://download.kiwix.org/zim/ 2. Put them in your project dir, or set `KIWIX_HOME` 3. `llm install llm-tools-kiwix` 4. Use tool mode as above! **Open source, Apache 2.0.** Repo + docs: https://github.com/mozanunal/llm-tools-kiwix PyPI: https://pypi.org/project/llm-tools-kiwix/ Let me know what you think! Would love feedback, bug reports, or ideas for more offline tools.
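Under the hood this is plain ZIM reading; if you just want to poke at an archive from Python without the plugin, a sketch with the python-libzim bindings (API names per its docs, so double-check against your version) looks like:

```
from libzim.reader import Archive
from libzim.search import Query, Searcher

zim = Archive("wikipedia_en_all_nopic_2023-10.zim")

searcher = Searcher(zim)
query = Query().set_query("human-powered flight")
search = searcher.search(query)

for path in search.getResults(0, 5):              # top-5 entry paths
    entry = zim.get_entry_by_path(path)
    html = bytes(entry.get_item().content).decode("utf-8")
    print(entry.title, len(html), "bytes")
```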
2025-06-04T19:57:22
https://www.reddit.com/r/LocalLLaMA/comments/1l3fdv3/i_made_an_llm_tool_to_let_you_search_offline/
mozanunal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3fdv3
false
null
t3_1l3fdv3
/r/LocalLLaMA/comments/1l3fdv3/i_made_an_llm_tool_to_let_you_search_offline/
false
false
self
65
{'enabled': False, 'images': [{'id': 'LdeLyjWpKozaXC-1mzijVFkn07--A9IsGK-EOeqIB30', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?width=108&crop=smart&auto=webp&s=735403ad4d738491a88b27206322250646225362', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?width=216&crop=smart&auto=webp&s=415064a28fd5c97ed24de9858b27342be279714f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?width=320&crop=smart&auto=webp&s=311df8a50ea77b9862720029b8c46b00a9077ff8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?width=640&crop=smart&auto=webp&s=0fb97efcd891cc40abb15bc77f273725fccfe313', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?width=960&crop=smart&auto=webp&s=1711f3295d6a142a6ab89d551b91d1fe60a68381', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?width=1080&crop=smart&auto=webp&s=9ec7015b13139232a47ca3037e959253dd73b54a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F8bFZyE_Fj_a5QKBiIJ_MPA97JNbla0lFcO3Ox7-2wE.jpg?auto=webp&s=39673c757c8935a74d3d36bc104c72d1c15b2bdb', 'width': 1200}, 'variants': {}}]}
Which models are you able to use with MCP servers?
0
I've been working heavily with MCP servers (mostly Obsidian) from Claude Desktop for the last couple of months, but I'm running into quota issues all the time with my Pro account and really want to use alternatives (using Ollama if possible, OpenRouter otherwise). I successfully connected my MCP servers to AnythingLLM, but none of the models I tried seem to be aware they can use MCP tools. The AnythingLLM documentation does warn that smaller models will struggle with this use case, but even Sonnet 4 refused to make MCP calls. [https://docs.anythingllm.com/agent-not-using-tools](https://docs.anythingllm.com/agent-not-using-tools) Any tips on a combination of Windows desktop chat client + LLM model (local preferred, remote OK) that actually makes MCP tool calls?
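One way to narrow it down before blaming the client: probe the model directly and see whether it emits tool calls at all. A sketch against Ollama's /api/chat tools support (the model name and tool schema are placeholders):

```
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_note",
        "description": "Fetch an Obsidian note by title",
        "parameters": {
            "type": "object",
            "properties": {"title": {"type": "string"}},
            "required": ["title"],
        },
    },
}]

r = requests.post("http://localhost:11434/api/chat", json={
    "model": "qwen2.5:7b",  # placeholder: any function-calling-capable model
    "messages": [{"role": "user", "content": "Open my note titled 'Meeting'"}],
    "tools": tools,
    "stream": False,
})
# Missing/empty tool_calls => the model never attempted a tool call.
print(r.json()["message"].get("tool_calls"))
```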
2025-06-04T20:58:15
https://www.reddit.com/r/LocalLLaMA/comments/1l3gwkw/which_models_are_you_able_to_use_with_mcp_servers/
rdmDgnrtd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3gwkw
false
null
t3_1l3gwkw
/r/LocalLLaMA/comments/1l3gwkw/which_models_are_you_able_to_use_with_mcp_servers/
false
false
self
0
{'enabled': False, 'images': [{'id': 'w37CiJqQCCa-YfqtBhexm9AtCs6w-fVanSxzd90DK78', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?width=108&crop=smart&auto=webp&s=786653b9fb6b09478530bee1056d8e9b6cf8e5e7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?width=216&crop=smart&auto=webp&s=0e8ba16aa6cf79f53feea5ee67db5e49093ece69', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?width=320&crop=smart&auto=webp&s=49a6df3343f049511672da593f86ab13293f630c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?width=640&crop=smart&auto=webp&s=5b88da991191362aad6044ab70f2dc3f19c6366e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?width=960&crop=smart&auto=webp&s=b5278c6dafe2ee98ab3abc2ccbfc7b516ba75ec0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?width=1080&crop=smart&auto=webp&s=fb3875fd53405065702e6207651d6fc8ef50e383', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Z0RUD0QGmEo8Fg4yXs6AVssiZe2AUuo8d3qWzjiBdV4.jpg?auto=webp&s=f406b87c9b4ac013efa094ca24876a927eea3d97', 'width': 1200}, 'variants': {}}]}
Hardware considerations (5090 vs 2 x 3090). What AMD AM5 MOBO for dual GPU?
20
Hello everyone! I have an AM5 motherboard prepared for a single GPU card. I also have an MSI RTX 3090 Suprim. I could buy a second MSI RTX 3090 Suprim, used of course, but then I would have to change the motherboard (plus case and PSU). The other option is to buy a used RTX 5090 instead of the second 3090 (then the rest of the hardware stays the same). I have the chance to buy a slightly used 5090 at almost the same overall price as the two-3090 route (once the case/PSU difference is counted in). I know 48 GB VRAM is more than 32 GB VRAM ;), but things get complicated with two cards (and the money ends up close). If you persuade me to get two 3090 cards (it's almost a given on the LLM forums), then please suggest which AMD AM5 motherboard you recommend for two graphics cards (the MSI RTX 3090 Suprims are extremely large, heavy and power hungry - although the latter can be tamed by undervolting). What motherboards do you recommend? (They must be large, with a good power section, so that I can install two 3090 cards without problems.) I also need above-average cooling, although I won't go into water cooling. I would have fewer problems with the 5090, but I know how important VRAM is. What works best for you, and which direction do you recommend? The dual-GPU board seems more future-proof, as I will be able to replace the 3090s with two 5090s (Ti / Super) in the future (if you can talk about 'future-proof' solutions in the PC world ;) ) Thanks for your suggestions and help with the choice!
2025-06-04T21:41:24
https://www.reddit.com/r/LocalLLaMA/comments/1l3hys4/hardware_considerations_5090_vs_2_x_3090_what_amd/
Repsol_Honda_PL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3hys4
false
null
t3_1l3hys4
/r/LocalLLaMA/comments/1l3hys4/hardware_considerations_5090_vs_2_x_3090_what_amd/
false
false
self
20
null
UPDATE: Inference needs a nontrivial amount of PCIe bandwidth (8x RTX 3090 rig, tensor parallelism)
59
A month ago I complained that connecting 8 RTX 3090s with PCIe 3.0 x4 links is a bad idea. I have since upgraded my rig with better PCIe links, and here's an update with some numbers. The upgrade: PCIe 3.0 -> 4.0, x4 width to x8 width. Used H12SSL with 16-core EPYC 7302. I didn't try the p2p PCIe drivers yet. The numbers: Bandwidth (p2pBandwidthLatencyTest, read): Before: 1.6GB/s single direction After: 6.1GB/s single direction LLM: Model: TechxGenus/Mistral-Large-Instruct-2411-AWQ Before: ~25 t/s generation and ~100 t/s prefill on 80k context. After: ~33 t/s generation and ~250 t/s prefill on 80k context. Both of these were achieved running docker.io/lmsysorg/sglang:v0.4.6.post2-cu124 250t/s prefill makes me very happy. The LLM is finally fast enough to not choke on adding extra files to context when coding. Options: ``` environment: - TORCHINDUCTOR_CACHE_DIR=/root/cache/torchinductor_cache - PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True command: - python3 - -m - sglang.launch_server - --host - 0.0.0.0 - --port - "8000" - --model-path - TechxGenus/Mistral-Large-Instruct-2411-AWQ - --sleep-on-idle - --tensor-parallel-size - "8" - --mem-fraction-static - "0.90" - --chunked-prefill-size - "2048" - --context-length - "128000" - --cuda-graph-max-bs - "8" - --enable-torch-compile - --json-model-override-args - '{ "rope_scaling": {"factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" }}' ```
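For intuition on why the links show up in generation speed at all: with TP=8, each layer all-reduces the hidden state (roughly twice per layer), and a ring all-reduce pushes about 2*(N-1)/N of that data over every GPU's link. A ballpark estimate (model shape assumed from the config):

```
hidden, layers, gpus = 12288, 88, 8            # Mistral-Large-2411-ish shape
bytes_per_tok = layers * 2 * hidden * 2        # fp16 activations, 2 all-reduces/layer
wire = bytes_per_tok * 2 * (gpus - 1) / gpus   # ring all-reduce traffic per link

for name, bw in [("PCIe 3.0 x4", 1.6e9), ("PCIe 4.0 x8", 6.1e9)]:
    print(f"{name}: ~{wire / bw * 1e6:.0f} us/token in comms alone")
```

Bandwidth alone doesn't explain the whole 25 -> 33 t/s delta; at batch 1 the per-operation latency of ~176 small all-reduces per token matters at least as much, which is also why the p2p drivers might be worth a try.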
2025-06-04T21:51:11
https://www.reddit.com/r/LocalLLaMA/comments/1l3i78l/update_inference_needs_nontrivial_amount_of_pcie/
pmur12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3i78l
false
null
t3_1l3i78l
/r/LocalLLaMA/comments/1l3i78l/update_inference_needs_nontrivial_amount_of_pcie/
false
false
self
59
null
Error with full finetune, model merge, and quantization on vllm
1
[removed]
2025-06-04T23:48:27
https://www.reddit.com/r/LocalLLaMA/comments/1l3kuez/error_with_full_finetune_model_merge_and/
Alternative-Dot451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3kuez
false
null
t3_1l3kuez
/r/LocalLLaMA/comments/1l3kuez/error_with_full_finetune_model_merge_and/
false
false
self
1
null
Error with full finetune, model merge, and quantization on vllm
1
[removed]
2025-06-04T23:50:06
https://www.reddit.com/r/LocalLLaMA/comments/1l3kvm5/error_with_full_finetune_model_merge_and/
Alternative-Dot451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3kvm5
false
null
t3_1l3kvm5
/r/LocalLLaMA/comments/1l3kvm5/error_with_full_finetune_model_merge_and/
false
false
self
1
null
Has anyone got DeerFlow working with LM Studio as the Backend?
0
Been trying to get DeerFlow to use LM Studio as its backend, but it's not working properly. It just behaves like a regular chat interface without leveraging the local model the way I expected. Anyone else run into this or have it working correctly?
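Worth ruling out the transport first: LM Studio exposes an OpenAI-compatible server (default port 1234), so a short check with the openai client tells you whether requests reach the loaded model at all:

```
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="local-model",  # LM Studio routes this to whatever model is loaded
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

If that works but DeerFlow still behaves like a plain chat, the problem is DeerFlow's model config rather than LM Studio.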
2025-06-05T00:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1l3lgnp/has_anyone_got_deerflow_working_with_lm_studio/
Soraman36
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3lgnp
false
null
t3_1l3lgnp
/r/LocalLLaMA/comments/1l3lgnp/has_anyone_got_deerflow_working_with_lm_studio/
false
false
self
0
null
My former go-to misguided attention prompt in shambles (DS-V3-0528)
54
Last year, [this prompt](https://www.reddit.com/r/LocalLLaMA/comments/1h8g8v3/a_test_prompt_the_new_llama_33_70b_struggles_with/) was useful to differentiate the smartest models from the rest. This year, the AI not only doesn't fall for it but realizes it's being tested and how it's being tested. I'm liking 0528's new chain of thought where it tries to read the user's intentions. Makes collaboration easier when you can track its "intentions" and it can track yours.
2025-06-05T00:32:47
https://i.redd.it/8uil7xc0705f1.png
nomorebuttsplz
i.redd.it
1970-01-01T00:00:00
0
{}
1l3lrdq
false
null
t3_1l3lrdq
/r/LocalLLaMA/comments/1l3lrdq/my_former_goto_misguided_attention_prompt_in/
false
false
https://a.thumbs.redditm…1-d-jL9Utrj8.jpg
54
{'enabled': True, 'images': [{'id': 'v4s3wJLftwP1LimkjdNWreJuCiImnsukCzokwUqlHyw', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?width=108&crop=smart&auto=webp&s=da7d2a8ff5b014296d2cca54b6abf8affcf5d51f', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?width=216&crop=smart&auto=webp&s=045f5c930398ef4930f6b824b876beece5a9b391', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?width=320&crop=smart&auto=webp&s=9351bc7156af25bf7f2f5f70c4e1dc0ae1f15b4a', 'width': 320}, {'height': 522, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?width=640&crop=smart&auto=webp&s=d9ce87f679d955eba51793d99329aa97280deb77', 'width': 640}, {'height': 784, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?width=960&crop=smart&auto=webp&s=cbb8edc9c2cb36630ffe46092ff7cb279cd7d090', 'width': 960}, {'height': 882, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?width=1080&crop=smart&auto=webp&s=785f314702ef58774479992b15718e5186248e23', 'width': 1080}], 'source': {'height': 1085, 'url': 'https://preview.redd.it/8uil7xc0705f1.png?auto=webp&s=98856eb8b0def4692f3424dc64ef459c4a490df5', 'width': 1328}, 'variants': {}}]}
Anyone have any experience with Deepseek-R1-0528-Qwen3-8B?
8
I'm trying to download Unsloth's version on Msty (2021 iMac, 16GB), and per Unsloth's HuggingFace, they say to use the Q4_K_XL version because that's the version that's preconfigured with the prompt template and the settings and all that good jazz. But I'm left scratching my head over here. It acts all bonkers: spilling prompt tags (when they **are** entered), never actually stopping its output... regardless of whether or not a prompt template is entered. Even in its reasoning it acts as if the user (me) is prompting it and engages in its own schizophrenic conversation. Or it'll answer the query, then reason after the query like it's going to engage back in its own schizo convo. And for the prompt templates? *Maaannnn*...I've tried ChatML, Vicuna, Gemma Instruct, Alfred, a custom one combining a few of them, Jinja-format, non-Jinja format...wrapped text, non-wrapped text, nothing seems to work. I know it's something I'm doing wrong; it works in HuggingFace's Open Playground just fine. Granite Instruct seemed to come the closest, but it still wrapped the answer and didn't **stop** its answer, then it reasoned from its own output. Quite a treat of a model; I just wonder if there's something I need to interrupt as far as how Msty prompts the LLM behind-the-scenes, or configure. Any advice? (inb4 switch to Open WebUI lol)
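Rather than guessing at ChatML/Vicuna/etc., you can pull the template the model actually ships with and mirror that in Msty's settings; a quick transformers snippet:

```
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")
text = tok.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(repr(text))  # shows the exact special tokens the model expects
```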
2025-06-05T00:37:31
https://www.reddit.com/r/LocalLLaMA/comments/1l3lutf/anyone_have_any_experience_with/
clduab11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l3lutf
false
null
t3_1l3lutf
/r/LocalLLaMA/comments/1l3lutf/anyone_have_any_experience_with/
false
false
self
8
null
After court order, OpenAI is now preserving all ChatGPT and API logs
997
>OpenAI could have taken steps to anonymize the chat logs but chose not to, only making an argument for why it "would not" be able to segregate data, rather than explaining why it "can’t." Surprising absolutely nobody, except maybe ChatGPT users, OpenAI and the United States own your data and can do whatever they want with it. ClosedAI have the audacity to pretend they're the good guys, despite not doing anything tech-wise to prevent this from being possible. My personal opinion is that Gemini, Claude, et al. are next. Yet another win for open weights. Own your tech, own your data.
2025-06-05T02:00:22
https://arstechnica.com/tech-policy/2025/06/openai-says-court-forcing-it-to-save-all-chatgpt-logs-is-a-privacy-nightmare/
iGermanProd
arstechnica.com
1970-01-01T00:00:00
0
{}
1l3niws
false
null
t3_1l3niws
/r/LocalLLaMA/comments/1l3niws/after_court_order_openai_is_now_preserving_all/
false
false
default
997
{'enabled': False, 'images': [{'id': 'BUgrpepEp3PiEWaSG8x4EpSYcr7rmPFZOASl26sCl9Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?width=108&crop=smart&auto=webp&s=73bbb4adc582b4db9e9d74abef9f64ad8518c2eb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?width=216&crop=smart&auto=webp&s=65904afe53e95be8accf2d10d158e1e84bad0e18', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?width=320&crop=smart&auto=webp&s=0b2dccbfc76c87084bfb079e241874560d3901cc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?width=640&crop=smart&auto=webp&s=f187439e139ab8a2daded4a0c65829ac68188bda', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?width=960&crop=smart&auto=webp&s=83b4f12929f3a20090df1f2a00b6271508df4dee', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?width=1080&crop=smart&auto=webp&s=09a33933abd43cf10c7bf53292684a89ed57cf79', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/X_hEWYjElFi1bBhWlW8lpN7Rp7cf6NXMmqc3u_L3ogI.jpg?auto=webp&s=95d113bc6a320c7d1f66a42bf016dd127fc1c5bb', 'width': 1152}, 'variants': {}}]}