Dataset columns (name, dtype, min–max / classes):

title        stringlengths   1 – 300
score        int64           0 – 8.54k
selftext     stringlengths   0 – 40k
created      timestamp[ns]   2023-04-01 04:30:41 – 2025-06-30 03:16:29
url          stringlengths   0 – 878
author       stringlengths   3 – 20
domain       stringlengths   0 – 82
edited       timestamp[ns]   1970-01-01 00:00:00 – 2025-06-26 17:30:18
gilded       int64           0 – 2
gildings     stringclasses   7 values
id           stringlengths   7 – 7
locked       bool            2 classes
media        stringlengths   646 – 1.8k
name         stringlengths   10 – 10
permalink    stringlengths   33 – 82
spoiler      bool            2 classes
stickied     bool            2 classes
thumbnail    stringlengths   4 – 213
ups          int64           0 – 8.54k
preview      stringlengths   301 – 5.01k
Best Training Framework for RLHF?
1
[removed]
2025-05-13T06:58:54
https://www.reddit.com/r/LocalLLaMA/comments/1klfj7i/best_training_framework_for_rlhf/
Zestyclose_Half4995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klfj7i
false
null
t3_1klfj7i
/r/LocalLLaMA/comments/1klfj7i/best_training_framework_for_rlhf/
false
false
self
1
null
Is the room for AI to grow actually still massive?
0
Quick note: I'm a noob to this whole subject, but I wanted to ask after seeing some talk about plateaus and walls. Compared to the human brain, a lot of these AI models are trained on more data than a person will ever learn in a lifetime. And despite that, humans are still capable of things we consider simple that AI struggles with. Is it possible our current understanding of AI is still too early and we have many discoveries to go, just by the fact that a human brain can do some things better than a full data center? Is it not proof that it's possible to shrink the technology and its needed compute/data to insanely small levels? (Maybe it's not fair to compare the brain to a data center? Perhaps the current AI model "architecture" is its own limit?) Anyway, that took me a while to figure out how to cut down to a few paragraphs, hope it makes sense.
2025-05-13T07:13:30
https://www.reddit.com/r/LocalLLaMA/comments/1klfqp6/is_the_room_for_ai_to_grow_actually_still_massive/
Business_Respect_910
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klfqp6
false
null
t3_1klfqp6
/r/LocalLLaMA/comments/1klfqp6/is_the_room_for_ai_to_grow_actually_still_massive/
false
false
self
0
null
Looking for uncensored ai access
1
[removed]
2025-05-13T07:33:01
https://www.reddit.com/r/LocalLLaMA/comments/1klg0jh/looking_for_uncensored_ai_access/
Extreme_Mechanic_548
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klg0jh
false
null
t3_1klg0jh
/r/LocalLLaMA/comments/1klg0jh/looking_for_uncensored_ai_access/
true
false
spoiler
1
null
Go to settings for RP and writing models?
0
Hello! The title says it all. What are your guys' go-to or best settings for inferencing roleplay and creative writing models? I know it's different for each model and each finetune, but I just wanted to know what the best settings have been in your experience and what model you used them on. I have been experimenting with Gemma and Gemini with system prompts - my system prompt essentially tells the model to produce a chain of thought like DeepSeek and wrap it in <think> </think> blocks first before producing the story, using Reddit writing prompts as a seed. And I gave it a bunch of markdown formatting and told it to use it (I pre-defined this markdown stuff in its system prompt so all it needs to do is select and use it when appropriate) - and I find that a temperature of 0.1 to 1.1 (up to 1.3 is acceptable) produces the best and most instruction-following or cooperative outputs. It follows my constraints, my requests, everything. More than 1.3 will produce, yes, more creative outputs, but will also output some vague, broad, and weird phrases. Paired with any of these top-k values - 20, 40, 50, 64, 80, 90, and 100 - but I haven't really noticed much of a difference when sampling. Then my top-p value stays at 0.95 or 0.9. Now, you might be asking why my values are a set. It's because I'm making a self-instruct dataset with varied values for even more diversity and variety, and each time it answers, it first generates a set of the values. I'm considering limiting the temperature range to start from 0.4 instead of 0.1, but I feel that I should let it slip for variety. Now, if you have any other recommended settings I should try, feel free to drop them below, even if it's for other models. So far, looking at my script, it looks like the model is producing responses just how I wanted it to.
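A minimal sketch of the per-sample setting randomization described above, using the value pools from the post (temperature 0.1–1.1, the listed top-k values, top-p of 0.9 or 0.95); it only builds the sampler config dict, and the actual generation call and model are left out since they depend on your backend:

```python
import random

# Value pools taken from the post; adjust to taste.
TEMPERATURES = [round(0.1 + 0.1 * i, 1) for i in range(11)]   # 0.1 .. 1.1
TOP_K_VALUES = [20, 40, 50, 64, 80, 90, 100]
TOP_P_VALUES = [0.9, 0.95]

def sample_settings() -> dict:
    """Pick a fresh sampler configuration for one self-instruct example."""
    return {
        "temperature": random.choice(TEMPERATURES),
        "top_k": random.choice(TOP_K_VALUES),
        "top_p": random.choice(TOP_P_VALUES),
    }

if __name__ == "__main__":
    for _ in range(3):
        print(sample_settings())  # e.g. {'temperature': 0.7, 'top_k': 64, 'top_p': 0.9}
```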
2025-05-13T07:52:52
https://www.reddit.com/r/LocalLLaMA/comments/1klgag4/go_to_settings_for_rp_and_writing_models/
AccomplishedAir769
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klgag4
false
null
t3_1klgag4
/r/LocalLLaMA/comments/1klgag4/go_to_settings_for_rp_and_writing_models/
false
false
self
0
null
Aider benchmark sortable by language page
9
Hi, a while ago I saw a page (probably posted here) where you could see benchmark results from aider (in my mind it was aider) for multiple programming languages and multiple models. It was a chart if I remember correctly and you could adjust <things> at the top. I would really like to find that page again - does maybe somebody know what I mean?
2025-05-13T08:22:13
https://www.reddit.com/r/LocalLLaMA/comments/1klgp0l/aider_benchmark_sortable_by_language_page/
Mxfrj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klgp0l
false
null
t3_1klgp0l
/r/LocalLLaMA/comments/1klgp0l/aider_benchmark_sortable_by_language_page/
false
false
self
9
null
Any news on new ASR/TTS models?
1
[removed]
2025-05-13T08:22:37
https://www.reddit.com/r/LocalLLaMA/comments/1klgp7g/any_news_on_new_asrtts_models/
Trysem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klgp7g
false
null
t3_1klgp7g
/r/LocalLLaMA/comments/1klgp7g/any_news_on_new_asrtts_models/
false
false
self
1
null
Intel Partner Prepares Dual Arc "Battlemage" B580 GPU with 48 GB of VRAM
345
2025-05-13T08:57:55
https://www.techpowerup.com/336687/intel-partner-prepares-dual-arc-battlemage-b580-gpu-with-48-gb-of-vram
PhantomWolf83
techpowerup.com
1970-01-01T00:00:00
0
{}
1klh6h4
false
null
t3_1klh6h4
/r/LocalLLaMA/comments/1klh6h4/intel_partner_prepares_dual_arc_battlemage_b580/
false
false
default
345
{'enabled': False, 'images': [{'id': 'jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg.jpeg?width=108&crop=smart&auto=webp&s=baa727c77247f168266d848a59158d656c214050', 'width': 108}, {'height': 98, 'url': 'https://external-preview.redd.it/jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg.jpeg?width=216&crop=smart&auto=webp&s=5f138ad47cec3ae3ffc43dc1418c648d58dc8c05', 'width': 216}, {'height': 145, 'url': 'https://external-preview.redd.it/jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg.jpeg?width=320&crop=smart&auto=webp&s=3cbd9b1e9f59421d857171bdb369d418f4cebb32', 'width': 320}, {'height': 291, 'url': 'https://external-preview.redd.it/jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg.jpeg?width=640&crop=smart&auto=webp&s=706cf667fea0a8034c97f841fc9b7a5c0d2e2f28', 'width': 640}, {'height': 437, 'url': 'https://external-preview.redd.it/jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg.jpeg?width=960&crop=smart&auto=webp&s=c4f72ffa92c4794d83974ee76840d245a27e865a', 'width': 960}, {'height': 492, 'url': 'https://external-preview.redd.it/jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg.jpeg?width=1080&crop=smart&auto=webp&s=4421a08944e2167f33d8c957bc404db5e39450fb', 'width': 1080}], 'source': {'height': 866, 'url': 'https://external-preview.redd.it/jpmGpdPWJLe0CTi-snjYV4vMSX3vWCL1VBf0G3Bgwcg.jpeg?auto=webp&s=ba0c83d118ed10396f7e00136a62d129d04a8a8c', 'width': 1900}, 'variants': {}}]}
Bought 3090, need emotional support
1
[removed]
2025-05-13T09:29:32
https://www.reddit.com/r/LocalLLaMA/comments/1klhmrh/bought_3090_need_emotional_support/
HandsOnDyk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klhmrh
false
null
t3_1klhmrh
/r/LocalLLaMA/comments/1klhmrh/bought_3090_need_emotional_support/
false
false
self
1
null
Looking to Reach u/Zliko – Urgently Need Info on Indian Generic Medication
1
[removed]
2025-05-13T09:30:57
https://www.reddit.com/r/LocalLLaMA/comments/1klhnin/looking_to_reach_uzliko_urgently_need_info_on/
FURiousCuriousKitten
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klhnin
false
null
t3_1klhnin
/r/LocalLLaMA/comments/1klhnin/looking_to_reach_uzliko_urgently_need_info_on/
false
false
self
1
null
Looking for Info on Indian Generic Venetoclax (User u/Zliko Posted About It)
1
[removed]
2025-05-13T09:34:42
https://www.reddit.com/r/LocalLLaMA/comments/1klhpdp/looking_for_info_on_indian_generic_venetoclax/
FURiousCuriousKitten
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klhpdp
false
null
t3_1klhpdp
/r/LocalLLaMA/comments/1klhpdp/looking_for_info_on_indian_generic_venetoclax/
false
false
self
1
null
Looking for Info
1
[removed]
2025-05-13T09:36:07
[deleted]
1970-01-01T00:00:00
0
{}
1klhq30
false
null
t3_1klhq30
/r/LocalLLaMA/comments/1klhq30/looking_for_info/
false
false
default
1
null
Question about the context overflow setting
5
I have only been toying with local LLMs for a week or so, so this is probably a dumb question, but here goes anyway. I am using LM Studio for now as it was so simple to set up, and was wondering: what do the three options for Context Overflow mean exactly?

* Rolling window
* Truncate middle
* Stop at limit

I guess that the last one means you simply can't add to a chat when it reaches the context length limit? But how about the first two?
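For reference, a rough illustration of what the first two policies usually mean, using a naive word count as a stand-in for tokens; this is a generic sketch of the idea, not LM Studio's actual implementation:

```python
def rolling_window(messages, limit, count=lambda m: len(m["content"].split())):
    """Keep only the most recent messages; the oldest fall off once the limit is hit."""
    kept, total = [], 0
    for msg in reversed(messages):
        total += count(msg)
        if total > limit:
            break
        kept.append(msg)
    return list(reversed(kept))

def truncate_middle(messages, limit, count=lambda m: len(m["content"].split())):
    """Keep the beginning (e.g. system prompt / task setup) and the end, drop the middle."""
    if sum(count(m) for m in messages) <= limit:
        return messages
    head = messages[:2]                              # assumed: first messages carry the setup
    budget = limit - sum(count(m) for m in head)
    return head + rolling_window(messages[2:], budget)

chat = [{"role": "system", "content": "You are helpful."}] + \
       [{"role": "user", "content": f"message number {i}"} for i in range(50)]
print(len(rolling_window(chat, limit=60)), len(truncate_middle(chat, limit=60)))
```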
2025-05-13T09:44:28
https://www.reddit.com/r/LocalLLaMA/comments/1klhuce/question_about_the_context_overflow_setting/
Kapper_Bear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klhuce
false
null
t3_1klhuce
/r/LocalLLaMA/comments/1klhuce/question_about_the_context_overflow_setting/
false
false
self
5
null
On the Hugging Face Hub, you can now add Collections within Collections
57
From Bertrand Chevrier on X: [https://x.com/kramp/status/1922221760193187939](https://x.com/kramp/status/1922221760193187939)
2025-05-13T09:47:28
https://i.redd.it/psitfubvri0f1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1klhvvx
false
null
t3_1klhvvx
/r/LocalLLaMA/comments/1klhvvx/on_the_hugging_face_hub_you_can_now_add/
false
false
default
57
{'enabled': True, 'images': [{'id': 'psitfubvri0f1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/psitfubvri0f1.jpeg?width=108&crop=smart&auto=webp&s=10e34df6f46208a72af77426d5e8970c5c11b4e1', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/psitfubvri0f1.jpeg?width=216&crop=smart&auto=webp&s=c5adca9816e8de42746e8ddc64ee14faef49e6d7', 'width': 216}, {'height': 219, 'url': 'https://preview.redd.it/psitfubvri0f1.jpeg?width=320&crop=smart&auto=webp&s=8b917a3f643f274317d5ae9b14bdb061b9ed21e2', 'width': 320}, {'height': 438, 'url': 'https://preview.redd.it/psitfubvri0f1.jpeg?width=640&crop=smart&auto=webp&s=927e4c70666a8537d067a9396b391eba03765991', 'width': 640}, {'height': 657, 'url': 'https://preview.redd.it/psitfubvri0f1.jpeg?width=960&crop=smart&auto=webp&s=490e124d82b893930444ca2073a28aceb90dcaea', 'width': 960}], 'source': {'height': 682, 'url': 'https://preview.redd.it/psitfubvri0f1.jpeg?auto=webp&s=845c441731a9cf6e0168a9278d76a6fe5deb2926', 'width': 996}, 'variants': {}}]}
AMD Ryzen AI Max+ PRO 395 Linux Benchmarks
80
I might be wrong but it seems to be slower than a 4060ti from an LLM point of view...
2025-05-13T09:58:25
https://www.phoronix.com/review/amd-ryzen-ai-max-pro-395/7
Kirys79
phoronix.com
1970-01-01T00:00:00
0
{}
1kli1hf
false
null
t3_1kli1hf
/r/LocalLLaMA/comments/1kli1hf/amd_ryzen_ai_max_pro_395_linux_benchmarks/
false
false
default
80
null
Local LLM/ project that can create graphs like ChatGPT?
1
[removed]
2025-05-13T09:59:23
https://www.reddit.com/r/LocalLLaMA/comments/1kli20c/local_llm_project_that_can_create_graphs_like/
itis_whatit-is
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kli20c
false
null
t3_1kli20c
/r/LocalLLaMA/comments/1kli20c/local_llm_project_that_can_create_graphs_like/
false
false
self
1
null
Experience with uncensored llama 70B?
1
[removed]
2025-05-13T10:41:43
https://www.reddit.com/r/LocalLLaMA/comments/1klipf6/experience_with_uncensored_llama_70b/
Glittering_Jaguar229
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klipf6
false
null
t3_1klipf6
/r/LocalLLaMA/comments/1klipf6/experience_with_uncensored_llama_70b/
false
false
self
1
null
FastViTHD
1
[removed]
2025-05-13T10:47:39
https://i.redd.it/dirc67xy3j0f1.png
Still_Potato_415
i.redd.it
1970-01-01T00:00:00
0
{}
1klisw0
false
null
t3_1klisw0
/r/LocalLLaMA/comments/1klisw0/fastvithd/
false
false
default
1
{'enabled': True, 'images': [{'id': 'dirc67xy3j0f1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/dirc67xy3j0f1.png?width=108&crop=smart&auto=webp&s=ad36ca58c048a2132f892dfe5cc84b9012a6dd89', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/dirc67xy3j0f1.png?width=216&crop=smart&auto=webp&s=c99ab74b0c8d7cd9677026c260efe2160719d4a2', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/dirc67xy3j0f1.png?width=320&crop=smart&auto=webp&s=fef4fcd8d9fbb1994a96dc2a6b88e551818892b2', 'width': 320}], 'source': {'height': 450, 'url': 'https://preview.redd.it/dirc67xy3j0f1.png?auto=webp&s=c6412de81ce51ff3dd3e0ab585b1cd610ba6916b', 'width': 600}, 'variants': {}}]}
How do I load in and use a Qwen2.5-VL model on RunPod or similar service?
3
Hi all, I am trying to use a Qwen2.5-VL model on RunPod for image+text->text inferencing tasks. No matter the parameter counts or quantizations I use, I seem to run into strange memory or dependency issues, even with the high-end GPUs. Is there a recommended set up that people have used to make it work? Appreciate any pointers.
2025-05-13T10:59:46
https://www.reddit.com/r/LocalLLaMA/comments/1klj07f/how_do_i_load_in_and_use_a_qwen25vl_model_on/
Professor_Professor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klj07f
false
null
t3_1klj07f
/r/LocalLLaMA/comments/1klj07f/how_do_i_load_in_and_use_a_qwen25vl_model_on/
false
false
self
3
null
Saving internal state of a model mid inference
1
[removed]
2025-05-13T11:06:48
https://www.reddit.com/r/LocalLLaMA/comments/1klj4tz/saving_internal_state_of_a_model_mid_inference/
Emergency-Piccolo584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klj4tz
false
null
t3_1klj4tz
/r/LocalLLaMA/comments/1klj4tz/saving_internal_state_of_a_model_mid_inference/
false
false
self
1
null
Hardware for ML Inference and Training
1
[removed]
2025-05-13T11:17:48
https://www.reddit.com/r/LocalLLaMA/comments/1kljbov/hardware_for_ml_inference_and_training/
paolovic89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kljbov
false
null
t3_1kljbov
/r/LocalLLaMA/comments/1kljbov/hardware_for_ml_inference_and_training/
false
false
self
1
null
Make your own voice AI in two clicks
1
[removed]
2025-05-13T11:39:51
https://v.redd.it/vy0nrr53dj0f1
KindLizard37
v.redd.it
1970-01-01T00:00:00
0
{}
1kljpjs
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vy0nrr53dj0f1/DASHPlaylist.mpd?a=1749728407%2CZTljMWU4Y2Q5YTU0N2Q4NjE1ODAzNmExNmYwMDNiYWFjMWE2YTRlNWIwOTU5MDc1YTkxNDFlYTIyZTA1M2Y0YQ%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/vy0nrr53dj0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vy0nrr53dj0f1/HLSPlaylist.m3u8?a=1749728407%2CZjlhNTgzYTIwNWQxMzVlODAxNDI2MGM0ZWNiNDE3OTgwYzNjOWY1MTM2MjUyZWI3MzEyYzBjOTVjNjRiMWNjOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vy0nrr53dj0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kljpjs
/r/LocalLLaMA/comments/1kljpjs/make_your_own_voice_ai_in_two_clicks/
false
false
https://external-preview…c686813cdfbe3929
1
{'enabled': False, 'images': [{'id': 'ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC.png?width=108&crop=smart&format=pjpg&auto=webp&s=31c40143fb63e02296f0bf6b468149c0f91234f0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC.png?width=216&crop=smart&format=pjpg&auto=webp&s=1126bb1aa1eaf978c88bb5352c090f582f549ceb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC.png?width=320&crop=smart&format=pjpg&auto=webp&s=443b3ef0004c8197e14b8f39e02e313eb4e39ea2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC.png?width=640&crop=smart&format=pjpg&auto=webp&s=6b9b8f4c617806eaf0ce30f68c0d1d533ec6acda', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC.png?width=960&crop=smart&format=pjpg&auto=webp&s=a639facdaf4beefa808cd397dd48d1091a6d20ff', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2cc3336ff89de6c0c117976910c38b418d593311', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejVrOXFuNjNkajBmMdaGPCrlcjCtCQ9ql-xgtIKhn9zAFfxDvjIQ-Twj1stC.png?format=pjpg&auto=webp&s=78d1148847873b504a086b0db911de0af460dd6e', 'width': 1920}, 'variants': {}}]}
what is the size of GPU memory sufficient to run meta-llama/Llama-3.3-70B-Instruct
1
[removed]
2025-05-13T11:44:27
https://www.reddit.com/r/LocalLLaMA/comments/1kljskh/what_is_the_size_of_gpu_memory_sufficient_to_run/
Appropriate_Web6268
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kljskh
false
null
t3_1kljskh
/r/LocalLLaMA/comments/1kljskh/what_is_the_size_of_gpu_memory_sufficient_to_run/
false
false
self
1
null
Geotracking in GPUs…
66
https://www.pcgamer.com/hardware/graphics-cards/us-senator-announces-a-bill-requiring-geotracking-in-high-end-gpus-to-prevent-the-chinese-government-from-wielding-the-ruinous-power-of-your-nvidia-rtx-4090/
2025-05-13T11:47:51
https://www.reddit.com/r/LocalLLaMA/comments/1kljute/geotracking_in_gpus/
Ashefromapex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kljute
false
null
t3_1kljute
/r/LocalLLaMA/comments/1kljute/geotracking_in_gpus/
false
false
self
66
{'enabled': False, 'images': [{'id': '4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g.jpeg?width=108&crop=smart&auto=webp&s=4d7ba0f7dbdf7c9c6c5c624b37b3ca5ec1ac6c9c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g.jpeg?width=216&crop=smart&auto=webp&s=64bfa4d07e9f31dfb3ab0b9d22c49038ccef398b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g.jpeg?width=320&crop=smart&auto=webp&s=613f045d29fc76064442ac113a765854179a556d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g.jpeg?width=640&crop=smart&auto=webp&s=7bb9708849f2a398fcbd36303c8c79151544ca8e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g.jpeg?width=960&crop=smart&auto=webp&s=b503ad86db0f6f82306bbe068f194480df620c3f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g.jpeg?width=1080&crop=smart&auto=webp&s=761ebbe31efed30b45cc2e2fe50913eef6ac3eec', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/4KzRw3JlTYINz5EmDzP1kyO3Xkdk3WGgKDBVFiD1N8g.jpeg?auto=webp&s=5a670c59789b401d4aabb1bfb586cde8eb5f9d1b', 'width': 1920}, 'variants': {}}]}
Integrating AI to my custom expense manager web app
0
I am new to the AI bandwagon and would like to know about integrating AI to get spending-pattern analysis and simplified reporting (charts or something similar), or letting the user give a prompt to find transaction-related information. Maybe using JSON data or CSV imported from MongoDB Atlas, which is then processed by the AI or something. My tech stack: React, GraphQL, MongoDB, Express, Node.js.
2025-05-13T11:56:13
https://www.reddit.com/r/LocalLLaMA/comments/1klk0gz/integrating_ai_to_my_custom_expense_manager_web/
MrRoBoT696969
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klk0gz
false
null
t3_1klk0gz
/r/LocalLLaMA/comments/1klk0gz/integrating_ai_to_my_custom_expense_manager_web/
false
false
self
0
null
Which ca. 1GB model for local single sentence text auto completion?
1
[removed]
2025-05-13T11:57:46
[deleted]
1970-01-01T00:00:00
0
{}
1klk1i3
false
null
t3_1klk1i3
/r/LocalLLaMA/comments/1klk1i3/which_ca_1gb_model_for_local_single_sentence_text/
false
false
default
1
null
Is the RX 7600 XT good enough for running QwQ 32B (17GB) or Gemma 2 27B (12GB) locally?
0
I'm currently using LM Studio on a GTX 1080 Ti (10GB VRAM), and while it's been decent, the limited VRAM forces model inference to fall back on CPU offloading, which significantly slows down response times. I'm considering upgrading to an RX 7600 XT for better local LLM performance on a budget. It has more VRAM, but I'm unsure if the GPU itself is capable of running models like QwQ 32B (17GB) or Gemma 2 27B (12GB) without relying on the CPU. Would the RX 7600 XT be a good upgrade for this use case, or should I look at other options?
2025-05-13T11:57:55
https://www.reddit.com/r/LocalLLaMA/comments/1klk1lj/is_the_rx_7600_xt_good_enough_for_running_qwq_32b/
ParamedicDirect5832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klk1lj
false
null
t3_1klk1lj
/r/LocalLLaMA/comments/1klk1lj/is_the_rx_7600_xt_good_enough_for_running_qwq_32b/
false
false
self
0
null
Activating Tool Calls in My Offline AI App Turned Into a Rabbit Hole…
1
[removed]
2025-05-13T12:01:01
https://www.reddit.com/r/LocalLLaMA/comments/1klk3s1/activating_tool_calls_in_my_offline_ai_app_turned/
dai_app
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klk3s1
false
null
t3_1klk3s1
/r/LocalLLaMA/comments/1klk3s1/activating_tool_calls_in_my_offline_ai_app_turned/
false
false
self
1
null
The Hidden Algorithms Powering Your Coding Assistant - How Cursor and Windsurf Work Under the Hood
17
Hey everyone, I just published a deep dive into the algorithms powering AI coding assistants like Cursor and Windsurf. If you've ever wondered how these tools seem to magically understand your code, this one's for you. In this (free) post, you'll discover:

* **The hidden context system** that lets AI understand your entire codebase, not just the file you're working on
* **The ReAct loop** that powers decision-making (hint: it's a lot like how humans approach problem-solving)
* **Why multiple specialized models** work better than one giant model and how they're orchestrated behind the scenes
* **How real-time adaptation** happens when you edit code, run tests, or hit errors

[Read the full post here →](https://open.substack.com/pub/diamantai/p/the-hidden-algorithms-powering-your?r=336pe4&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false)
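To make the ReAct loop mentioned above concrete, here is a minimal, self-contained sketch with a stubbed model and a single fake tool; this is the general shape of a think → act → observe loop, not Cursor's or Windsurf's actual code:

```python
from typing import Callable

# Toy "tools" the agent can call; real assistants expose file search, edits, test runners, etc.
TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: f"<contents of {path}>",
}

def fake_llm(transcript: str) -> str:
    """Stand-in for the model: decide on an action or give a final answer."""
    if "Observation:" not in transcript:
        return "Thought: I should look at the file.\nAction: read_file: main.py"
    return "Final Answer: the file defines main()."

def react_loop(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            # Parse "Action: tool_name: argument" and run the tool.
            action = step.split("Action:", 1)[1].strip()
            name, arg = [s.strip() for s in action.split(":", 1)]
            observation = TOOLS.get(name, lambda a: "unknown tool")(arg)
            transcript += f"\nObservation: {observation}"
    return "No answer within step budget."

print(react_loop("Summarize main.py"))
```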
2025-05-13T12:19:30
https://www.reddit.com/r/LocalLLaMA/comments/1klkh4w/the_hidden_algorithms_powering_your_coding/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klkh4w
false
null
t3_1klkh4w
/r/LocalLLaMA/comments/1klkh4w/the_hidden_algorithms_powering_your_coding/
false
false
self
17
{'enabled': False, 'images': [{'id': 'YWG9P86-mYss8ItyRgQyoCY755K_7N6XRkjjnEB1WdA', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/YWG9P86-mYss8ItyRgQyoCY755K_7N6XRkjjnEB1WdA.jpeg?width=108&crop=smart&auto=webp&s=9f40287f9c9803cb49a729398f32bbbb06820b0f', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/YWG9P86-mYss8ItyRgQyoCY755K_7N6XRkjjnEB1WdA.jpeg?width=216&crop=smart&auto=webp&s=37c076facbc3bd3222591979a6b62ff5d253ab88', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/YWG9P86-mYss8ItyRgQyoCY755K_7N6XRkjjnEB1WdA.jpeg?width=320&crop=smart&auto=webp&s=a5be78d2bd1a3992b1c2afb3ff9f701736c9f104', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/YWG9P86-mYss8ItyRgQyoCY755K_7N6XRkjjnEB1WdA.jpeg?width=640&crop=smart&auto=webp&s=4dbdc78ead3d0563f77ed07b5979801734461d64', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/YWG9P86-mYss8ItyRgQyoCY755K_7N6XRkjjnEB1WdA.jpeg?width=960&crop=smart&auto=webp&s=44a24329bfaa63cfe48880ebcd982e39a869af8c', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YWG9P86-mYss8ItyRgQyoCY755K_7N6XRkjjnEB1WdA.jpeg?auto=webp&s=d5d07b3ee70a3152ac3ecee446c8fe4db9e84d33', 'width': 1024}, 'variants': {}}]}
OpenAI: Introducing HealthBench - An evaluation for AI systems and human health.
1
blog: [https://openai.com/index/healthbench/](https://openai.com/index/healthbench/) paper: [https://cdn.openai.com/pdf/bd7a39d5-9e9f-47b3-903c-8b847ca650c7/healthbench_paper.pdf](https://cdn.openai.com/pdf/bd7a39d5-9e9f-47b3-903c-8b847ca650c7/healthbench_paper.pdf) github: [https://github.com/openai/simple-evals](https://github.com/openai/simple-evals)

>Improving human health will be one of the defining impacts of AGI. If developed and deployed effectively, large language models have the potential to expand access to health information, support clinicians in delivering high-quality care, and help people advocate for their health and that of their communities.

>To get there, we need to ensure models are useful and safe. Evaluations are essential to understanding how models perform in health settings. Significant efforts have already been made across academia and industry, yet many existing evaluations do not reflect realistic scenarios, lack rigorous validation against expert medical opinion, or leave no room for state-of-the-art models to improve.

>Today, we're introducing HealthBench: a new benchmark designed to better measure capabilities of AI systems for health. Built in partnership with **262** physicians who have practiced in **60** countries, HealthBench includes **5,000** realistic health conversations, each with a custom physician-created rubric to grade model responses.
2025-05-13T12:22:06
https://www.reddit.com/r/LocalLLaMA/comments/1klkj1u/openai_introducing_healthbench_an_evaluation_for/
AaronFeng47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klkj1u
false
null
t3_1klkj1u
/r/LocalLLaMA/comments/1klkj1u/openai_introducing_healthbench_an_evaluation_for/
false
false
self
1
null
Qwen3 Technical Report
530
Qwen3 Technical Report released. GitHub: https://github.com/QwenLM/Qwen3/blob/main/Qwen3_Technical_Report.pdf
2025-05-13T12:26:42
https://i.redd.it/kku7lzsulj0f1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1klkmah
false
null
t3_1klkmah
/r/LocalLLaMA/comments/1klkmah/qwen3_technical_report/
false
false
default
530
{'enabled': True, 'images': [{'id': 'kku7lzsulj0f1', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/kku7lzsulj0f1.jpeg?width=108&crop=smart&auto=webp&s=12111d9da49292c2e0af5dd83ba95bb5bb822b8c', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/kku7lzsulj0f1.jpeg?width=216&crop=smart&auto=webp&s=5cfefa0f724756492fd2e07094fed7f025ae63b8', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/kku7lzsulj0f1.jpeg?width=320&crop=smart&auto=webp&s=fec38cdeceb9993c4b7a00894c9a7a24865e0bad', 'width': 320}, {'height': 502, 'url': 'https://preview.redd.it/kku7lzsulj0f1.jpeg?width=640&crop=smart&auto=webp&s=9d8d566f0f7c92d2b0575c613f30a76aafba7a29', 'width': 640}, {'height': 753, 'url': 'https://preview.redd.it/kku7lzsulj0f1.jpeg?width=960&crop=smart&auto=webp&s=5e8c1c666fc98124678af4b122f80b0b4ae3b1d6', 'width': 960}, {'height': 847, 'url': 'https://preview.redd.it/kku7lzsulj0f1.jpeg?width=1080&crop=smart&auto=webp&s=34622a5ce2a4b636b232c728573c13a52ebb5fc2', 'width': 1080}], 'source': {'height': 1030, 'url': 'https://preview.redd.it/kku7lzsulj0f1.jpeg?auto=webp&s=7636089fcd85c0a63276742d98defbffebb1a1ec', 'width': 1312}, 'variants': {}}]}
final version of Skywork-OR1 (Open Reasoner 1) series of models
39
[https://huggingface.co/Skywork/Skywork-OR1-32B](https://huggingface.co/Skywork/Skywork-OR1-32B) [https://huggingface.co/Skywork/Skywork-OR1-7B](https://huggingface.co/Skywork/Skywork-OR1-7B)
2025-05-13T12:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1klkrw0/final_version_of_skyworkor1_open_reasoner_1/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klkrw0
false
null
t3_1klkrw0
/r/LocalLLaMA/comments/1klkrw0/final_version_of_skyworkor1_open_reasoner_1/
false
false
self
39
{'enabled': False, 'images': [{'id': 'bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o.png?width=108&crop=smart&auto=webp&s=379e8ae82c8dbd91932ba03fc7cc63266c39056b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o.png?width=216&crop=smart&auto=webp&s=82d0f9a68a7464b03fa35c2dbe737ac32d82357e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o.png?width=320&crop=smart&auto=webp&s=cc64b57b0d1407a4d4f3d0e3a8b8376f57904a2b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o.png?width=640&crop=smart&auto=webp&s=840f7fe9cc0bd16073d923254ff356cd41fe282f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o.png?width=960&crop=smart&auto=webp&s=8ef5b563d6bdee60b583edb3b8ff9d1e439ab5b8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o.png?width=1080&crop=smart&auto=webp&s=893db16f4d3d6ba6ad321dbf6435aeab244c05fc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bg2g0aqNlKi6enI_RIEYH2e3PQBSllYMoXJh7WkgX9o.png?auto=webp&s=6e92a21d4e127172998b015a5950eea5b4103f6a', 'width': 1200}, 'variants': {}}]}
Serve your Agents as an MCP-compliant tool
1
You can now turn any CAMEL-AI agent into an MCP server, so your agents become first-class tools you can call from Claude, Cursor, or any MCP client. Key points:

* Chain agents across apps
* Expose planners or "roleplayers" as standalone servers
* Mix & match multi-agent workflows with modular components

Check out the PR → [https://github.com/camel-ai/camel/pull/2144](https://github.com/camel-ai/camel/pull/2144)
Github → [https://github.com/camel-ai/camel](https://github.com/camel-ai/camel)
Join the discussion on MCP use cases → [https://discord.camel-ai.org](https://discord.camel-ai.org)
Detailed guide → [https://www.camel-ai.org/blogs/camel-mcp-servers-model-context-protocol-ai-agents](https://www.camel-ai.org/blogs/camel-mcp-servers-model-context-protocol-ai-agents)

What agents will you expose next?
2025-05-13T12:40:34
https://v.redd.it/c7x0gq30oj0f1
Fluffy_Sheepherder76
/r/LocalLLaMA/comments/1klkwmc/serve_your_agents_as_an_mcpcompliant_tool/
1970-01-01T00:00:00
0
{}
1klkwmc
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/c7x0gq30oj0f1/DASHPlaylist.mpd?a=1749861642%2CM2U1YmEyNDc3MDljZjdjMGVjODkzY2RiMTU1YzlmYWYzOGMzNWJkZGY2OGQ2OGMzZjljZWM1ZDA0MTQ0MjNjYg%3D%3D&v=1&f=sd', 'duration': 77, 'fallback_url': 'https://v.redd.it/c7x0gq30oj0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/c7x0gq30oj0f1/HLSPlaylist.m3u8?a=1749861642%2CODcyYjJkNjUwMWYxYzMzZDU5YTZhNWI5NWYxNTMxOTEzNjUwNjNmNWJkNjNhMmM1Njc4ZjFhNjFlMmI3ZDNmZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c7x0gq30oj0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1klkwmc
/r/LocalLLaMA/comments/1klkwmc/serve_your_agents_as_an_mcpcompliant_tool/
false
false
https://external-preview…c7fbbc85e1690e1e
1
{'enabled': False, 'images': [{'id': 'cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV.png?width=108&crop=smart&format=pjpg&auto=webp&s=e54a8c0ee53a608f469d5067530070c39d1f69d0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV.png?width=216&crop=smart&format=pjpg&auto=webp&s=31ef9e1cadf1420c03e186be68e0e58d2d81af2d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV.png?width=320&crop=smart&format=pjpg&auto=webp&s=107c69d82edeb3f5640739257b358ef222cb284a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV.png?width=640&crop=smart&format=pjpg&auto=webp&s=6b4b7cf61e8bec2e215194d9e5a57bce2f1f14ed', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV.png?width=960&crop=smart&format=pjpg&auto=webp&s=2c33bfefec87d345cedb14cfddf6ccd4ef199b7f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5452c0dff652bb0bba15444a2eef3ac0f1a25aee', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/cGZtcml0MzBvajBmMRdnUdlBOBkRQOWbXvbRUYvePVOPwvzg02-1sydaa-DV.png?format=pjpg&auto=webp&s=738f9adc6efbb5228db6eb05bb451cd2ce73a92f', 'width': 3840}, 'variants': {}}]}
Serve your Agents as an MCP-compliant tool
5
You can now turn any CAMEL-AI agent into an MCP server, so your agents become first-class tools you can call from Claude, Cursor, or any MCP client. Detailed guide → [https://www.camel-ai.org/blogs/camel-mcp-servers-model-context-protocol-ai-agents](https://www.camel-ai.org/blogs/camel-mcp-servers-model-context-protocol-ai-agents)
2025-05-13T12:41:54
https://www.reddit.com/r/LocalLLaMA/comments/1klkxlj/serve_your_agents_as_an_mcpcompliant_tool/
Fluffy_Sheepherder76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klkxlj
false
null
t3_1klkxlj
/r/LocalLLaMA/comments/1klkxlj/serve_your_agents_as_an_mcpcompliant_tool/
false
false
self
5
{'enabled': False, 'images': [{'id': 'gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=108&crop=smart&auto=webp&s=10ed01a1382f33933099b924e2555416a77c4890', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=216&crop=smart&auto=webp&s=38fbab4a3f7e85f54877f905d16ffc77305dacab', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=320&crop=smart&auto=webp&s=1144020000058ffffb3c000c55def6c34e72521c', 'width': 320}, {'height': 363, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=640&crop=smart&auto=webp&s=d424646dd7f8d3e0b51c10e733726b409d648da4', 'width': 640}, {'height': 545, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=960&crop=smart&auto=webp&s=514544e8810acc8ccf5b396b048a6c80c4d6d3b6', 'width': 960}, {'height': 613, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=1080&crop=smart&auto=webp&s=916dd6eed77694ff02e6a45be07da3736b659f84', 'width': 1080}], 'source': {'height': 1382, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?auto=webp&s=8f6654f22cccb5110bdfe16644a0e4470c97a94d', 'width': 2432}, 'variants': {}}]}
PipesHub - The Open Source Alternative To Glean
1
[removed]
2025-05-13T12:42:57
https://www.reddit.com/r/LocalLLaMA/comments/1klkyea/pipeshub_the_open_source_alternative_to_glean/
Effective-Ad2060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klkyea
false
null
t3_1klkyea
/r/LocalLLaMA/comments/1klkyea/pipeshub_the_open_source_alternative_to_glean/
false
false
self
1
null
PipesHub - The Open Source Alternative to Glean
1
[removed]
2025-05-13T12:51:09
https://www.reddit.com/r/LocalLLaMA/comments/1kll4lf/pipeshub_the_open_source_alternative_to_glean/
Effective-Ad2060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kll4lf
false
null
t3_1kll4lf
/r/LocalLLaMA/comments/1kll4lf/pipeshub_the_open_source_alternative_to_glean/
false
false
self
1
null
Possible to run 27B / 32B models on 20GB VRAM?
8
I have an RTX 3060 12GB, and to speed up inference I added another RTX 4000 8GB for a total of 20GB VRAM. I was hoping to run 27B and 32B models that are Q4_K_M quantized (thus far I have only run GGUF, not EXL2). PCIe interface info: RTX 3060 - PCIe 4.0 x16, RTX 4000 - PCIe 4.0 x4. From my own rough calculations, the reasoning was that 32 billion parameters means 32GB of RAM (for an 8-bit per weight quant), but if I used a 4-bit quant it should be half of that, so 16GB VRAM, and I thought I would use the extra 4GB VRAM for context etc. But in reality, when I tried with Koboldcpp I couldn't get the model to fit into the 20GB VRAM. I used the following command: `.\koboldcpp.exe --usecublas --tensor_split 15 12 --contextsize 4096 --skiplauncher --quiet --flashattention --gpulayers 45 --model C:\DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf` I'm probably setting the `tensor_split` wrongly without understanding it. Also, the GPU layers I have currently set to 45, but the model has a total of 65 layers. So is this possible at all? Can I get a token rate of 10 to 20 tokens/sec? I have read on other posts to move to GPTQ - would that model quant use lower VRAM?
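To make the rough calculation above concrete, here is a back-of-the-envelope estimator. It is a sketch only: real GGUF files mix quant types per tensor, Q4_K_M lands closer to ~4.8 bits per weight than 4.0, and the KV-cache term depends on the model's layer count, KV heads, and head size, so the layer/head numbers below are illustrative assumptions rather than exact figures for any specific 32B model:

```python
def model_weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx: int, bytes_per_elem: int = 2) -> float:
    """Approximate fp16 KV cache: 2 (K and V) * layers * kv_heads * head_dim * context length."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Example: a 32B model at ~4.8 bits/weight (roughly Q4_K_M) plus a 4096-token context.
weights = model_weights_gb(32, 4.8)   # ~19 GB of weights before any overhead, so 20 GB is already tight
cache = kv_cache_gb(layers=64, kv_heads=8, head_dim=128, ctx=4096)
print(f"weights ~{weights:.1f} GB, kv cache ~{cache:.1f} GB")
```

This is why a 32B Q4_K_M GGUF tends not to fit fully in 20GB once compute buffers and context are added, and why a few layers usually have to stay on the CPU.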
2025-05-13T12:54:45
https://www.reddit.com/r/LocalLLaMA/comments/1kll7h3/possible_to_run_27b_32b_models_on_20gb_vram/
PlanetMercurial
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kll7h3
false
null
t3_1kll7h3
/r/LocalLLaMA/comments/1kll7h3/possible_to_run_27b_32b_models_on_20gb_vram/
false
false
self
8
null
The Qwen3 chat template is *still bugged*
195
So, I hope everyone remembers all the twists and turns with the Qwen3 template. First, it was not working at all, then the Unsloth team fixed the little bug with iterating over the messages. But, alas, it's not over yet!

I had a hint something was wrong when the biggest Qwen3 model available on OpenRouter wouldn't execute a web search twice. But it was only once I started testing my own agent framework that I realized what was wrong.

Qwen3 uses an XML tool calling syntax that the Jinja template transforms into the known OpenAI-compatible structure. But there's a catch. Once you call a tool once, you save that tool call in the chat history. And that tool call entry has:

```json
{
  "role": "assistant",
  "tool_calls": [...]
}
```

The problem is, the current template code expects every history item to have a "content" block:

```
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set content = message.content %}
```

Therefore, whenever you use any OpenAI-compatible client that saves the chat history and you use *more than one tool call*, the conversation will become broken and the server will start reporting an error:

```
got exception: {"code":500,"message":"[json.exception.out_of_range.403] key 'content' not found","type":"server_error"}
```

I think the fix is to patch the assistant branch similar to the "forward messages" branch:

```
{%- set content = message.content if message.content is not none else '' %}
```

and then to refer to `content` instead of `message.content` later on.

If someone could poke the Unsloth people to fix the template, that would be pretty neat (for now, I hacked my agent's code to always append an empty content block into tool call assistant history messages since I use my own API for whatever reason, but that's not something you can do if you're using standard libraries).
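For clients that keep their own history, a minimal sketch of the client-side workaround described above (guaranteeing every assistant tool-call entry carries a "content" string before the history is sent back to the server); the message shapes are the usual OpenAI-style dicts, not anything specific to one library:

```python
def patch_history(messages: list[dict]) -> list[dict]:
    """Ensure assistant tool-call entries always carry a 'content' key.

    Works around chat templates that unconditionally read message.content.
    """
    patched = []
    for msg in messages:
        if msg.get("role") == "assistant" and "tool_calls" in msg and msg.get("content") is None:
            msg = {**msg, "content": ""}   # add an empty content block instead of null/missing
        patched.append(msg)
    return patched

history = [
    {"role": "user", "content": "What's the weather in Oslo?"},
    {"role": "assistant", "tool_calls": [{"id": "call_1", "type": "function",
                                          "function": {"name": "get_weather", "arguments": "{\"city\": \"Oslo\"}"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "4°C, clear"},
]
print(patch_history(history)[1]["content"])  # ""
```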
2025-05-13T13:23:17
https://www.reddit.com/r/LocalLLaMA/comments/1klltt4/the_qwen3_chat_template_is_still_bugged/
ilintar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klltt4
false
null
t3_1klltt4
/r/LocalLLaMA/comments/1klltt4/the_qwen3_chat_template_is_still_bugged/
false
false
self
195
null
Non-technical guide to run Qwen3 without reasoning using Llama.cpp server (without needing /no_think)
4
I kept using /no_think at the end of my prompts, but I also realized for a lot of use cases this is annoying and cumbersome. First, you have to remember to add /no_think. Second, if you use Qwen3 in, say, VSCode, you now have to do more work to get the behavior you want, unlike previous models that "just worked". Also, this method still inserts empty <think> tags into its response, which, if you're using the model programmatically, requires you to clean those out, etc. I like the convenience, but those are the downsides.

Currently Llama.cpp (and by extension llama-server, which is my focus here) doesn't support the "enable_thinking" flag which Qwen3 uses to disable thinking mode without needing the /no_think flag, but there's an easy non-technical way to set this flag anyway, and I just wanted to share it with anyone who hasn't figured it out yet. This will be obvious to others, but I'm dumb, and I literally just figured out how to do this.

So all this flag does, if you were to set it, is slightly modify the chat template that is used when prompting the model. There's nothing mystical or special about the flag as being something separate from everything else. The original Qwen3 template is basically just ChatML:

`<|im_start|>system`
`{system_prompt}<|im_end|>`
`<|im_start|>user`
`{prompt}<|im_end|>`
`<|im_start|>assistant`

And if you were to enable this "flag", it changes the template slightly to this:

`<|im_start|>system`
`{system_prompt}<|im_end|>`
`<|im_start|>user`
`{prompt}<|im_end|>`
`<|im_start|>assistant\n<think>\n\n</think>\n\n`

You can literally see this in the terminal when you launch your Qwen3 model using llama-server, where it lists the jinja template (the chat template it automatically extracts out of the GGUF). Here's the relevant part:

`{%- if add_generation_prompt %}`
`{{- '<|im_start|>assistant\n' }}`
`{%- if enable_thinking is defined and enable_thinking is false %}`
`{{- '<think>\n\n</think>\n\n' }}`
`{%- endif %}`

So I'm like, oh wait, so I just need to somehow tell llama-server to use the updated template with the `<think>\n\n</think>\n\n` part already included after the `<|im_start|>assistant\n` part, and it will just behave like a non-reasoning model by default? And not only that, but it won't have those pesky empty <think> tags either, just a clean non-reasoning model when you want it, just like Qwen2.5 was.

So the solution is really straightforward - maybe someone can correct me if they think there's an easier, better, or more correct way, but here's what worked for me. Instead of pulling the jinja template from the .gguf, you want to tell llama-server to use a modified template. So first I just ran Qwen3 using llama-server as is (I'm using unsloth's quants in this example, but I don't think it matters) and copied the entire template listed in the terminal window into a text file. So everything starting from `{%- if tools %}` and ending with `{%- endif %}` is the template.

Then go to the text file and modify the template slightly to include the changes I mentioned. Find this:

`<|im_start|>assistant\n`

And just change it to:

`<|im_start|>assistant\n<think>\n\n</think>\n\n`

Then add these options when calling llama-server:

`--jinja ^`
`--chat-template-file "+Llamacpp-Qwen3-NO_REASONING_TEMPLATE.txt" ^`

Where the file is whatever you called the text file with the modified template in it. And that's it, run the model, and test it!

Here's my .bat file that I personally use as an example:

`title llama-server`
`:start`
`llama-server ^`
`--model models/Qwen3-1.7B-UD-Q6_K_XL.gguf ^`
`--ctx-size 32768 ^`
`--n-predict 8192 ^`
`--gpu-layers 99 ^`
`--temp 0.7 ^`
`--top-k 20 ^`
`--top-p 0.8 ^`
`--min-p 0.0 ^`
`--threads 9 ^`
`--slots ^`
`--flash-attn ^`
`--jinja ^`
`--chat-template-file "+Llamacpp-Qwen3-NO_REASONING_TEMPLATE.txt" ^`
`--port 8013`
`pause`
`goto start`

Now the model will not think and won't add any <think> tags at all. It will act like Qwen2.5, a non-reasoning model, and you can just create another .bat file without those two lines to launch with thinking mode enabled using the default template.
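If you'd rather not hand-edit the text file, the same find-and-replace can be scripted. A small sketch, assuming you already dumped the extracted Jinja template to a file as described above; the source filename is just an example, and the exact quoting inside the template can vary between GGUFs, so check the strings if nothing gets replaced:

```python
from pathlib import Path

SRC = Path("Qwen3-default-template.txt")                  # template copied from the llama-server log (example name)
DST = Path("+Llamacpp-Qwen3-NO_REASONING_TEMPLATE.txt")   # file to pass via --chat-template-file

# Literal strings as they appear in the Jinja source (the \n stays as two characters here).
old = r"'<|im_start|>assistant\n'"
new = r"'<|im_start|>assistant\n<think>\n\n</think>\n\n'"

text = SRC.read_text(encoding="utf-8")
if old not in text:
    raise SystemExit("Marker not found - inspect the template and adjust the 'old' string.")
DST.write_text(text.replace(old, new), encoding="utf-8")
print(f"Wrote no-reasoning template to {DST}")
```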
2025-05-13T13:33:33
https://www.reddit.com/r/LocalLLaMA/comments/1klm24y/nontechnical_guide_to_run_qwen3_without_reasoning/
YearZero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klm24y
false
null
t3_1klm24y
/r/LocalLLaMA/comments/1klm24y/nontechnical_guide_to_run_qwen3_without_reasoning/
false
false
self
4
null
Gold standard for testing agentic workflow
1
[removed]
2025-05-13T13:58:19
https://www.reddit.com/r/LocalLLaMA/comments/1klmm9r/gold_standard_for_testing_agentic_workflow/
PlanktonHungry9754
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klmm9r
false
null
t3_1klmm9r
/r/LocalLLaMA/comments/1klmm9r/gold_standard_for_testing_agentic_workflow/
false
false
self
1
null
Qwen2.5-VL: vLLM vs HuggingFace - Significant Drop in Accuracy (68% → 47%)
1
[removed]
2025-05-13T14:03:07
https://www.reddit.com/r/LocalLLaMA/comments/1klmqgg/qwen25vl_vllm_vs_huggingface_significant_drop_in/
Proof-Watercress8365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klmqgg
false
null
t3_1klmqgg
/r/LocalLLaMA/comments/1klmqgg/qwen25vl_vllm_vs_huggingface_significant_drop_in/
false
false
self
1
null
Which language models are most suitable for Nvidia A2
1
[removed]
2025-05-13T14:05:23
https://www.reddit.com/r/LocalLLaMA/comments/1klmseu/which_language_models_are_most_suitable_for/
Lexxiten_7930
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klmseu
false
null
t3_1klmseu
/r/LocalLLaMA/comments/1klmseu/which_language_models_are_most_suitable_for/
false
false
self
1
null
Future-proof GPU for AI programs?
0
Hi. I plan to build a gaming PC, but I also want to make use of it for AI later so it doesn't go to waste (hosting a personal LLM or image generation, for example). I guess I need a modern GPU with high VRAM, but a 4090 seems to be overkill, so maybe a 24GB 3090? Is an i7 processor enough? Is there something else I should know? Also, it seems to take forever for manufacturers to make GPUs with high VRAM, but if you have some good news, I appreciate it :D
2025-05-13T14:06:24
https://www.reddit.com/r/LocalLLaMA/comments/1klmt9j/futureproof_gpu_for_ai_programs/
Sndragon88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klmt9j
false
null
t3_1klmt9j
/r/LocalLLaMA/comments/1klmt9j/futureproof_gpu_for_ai_programs/
false
false
self
0
null
Has anyone gotten featherless-ai’s Qwerky-QwQ-32B running locally?
14
They claim “We now have a model far surpassing GPT-3.5 turbo, without QKV attention.”… makes me want to try it. What are your thoughts on this architecture?
2025-05-13T14:19:33
https://substack.recursal.ai/p/qwerky-72b-and-32b-training-large
silenceimpaired
substack.recursal.ai
1970-01-01T00:00:00
0
{}
1kln4tt
false
null
t3_1kln4tt
/r/LocalLLaMA/comments/1kln4tt/has_anyone_gotten_featherlessais_qwerkyqwq32b/
false
false
default
14
{'enabled': False, 'images': [{'id': 'pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E.jpeg?width=108&crop=smart&auto=webp&s=3a6316876f23a021ed7c4cd031d62a9f5776080c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E.jpeg?width=216&crop=smart&auto=webp&s=19d44db5f16678d5bb9db04036b6e9c129b61f82', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E.jpeg?width=320&crop=smart&auto=webp&s=1b0d0c9ce2460507061447eeca8903337f5fa5ca', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E.jpeg?width=640&crop=smart&auto=webp&s=5e3c02cfb091d236b3906ddd94f00f8923794570', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E.jpeg?width=960&crop=smart&auto=webp&s=bcff14d6c4113810789547acab02564934feba8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E.jpeg?width=1080&crop=smart&auto=webp&s=1d598bebabd016ecd8c047ac4fe334b143a463bf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pu7Btbv0Xni8S0Ui12FJRAYNOBsuAfSaA3OmHuyOI4E.jpeg?auto=webp&s=5eefb87e784bfa7b9bb0ef8669106f2970da6b3c', 'width': 1200}, 'variants': {}}]}
Portable FlashAttention kernels
1
[removed]
2025-05-13T14:56:46
https://www.reddit.com/r/LocalLLaMA/comments/1klo1wf/portable_flashattention_kernels/
Junior_Feed_2511
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klo1wf
false
null
t3_1klo1wf
/r/LocalLLaMA/comments/1klo1wf/portable_flashattention_kernels/
false
false
self
1
null
Handling Unhealthy GPU Nodes in EKS Cluster
8
Hi everyone, if you're running **GPU workloads on an EKS cluster**, your nodes can occasionally enter `NotReady` states due to issues like network outages, unresponsive kubelets, running privileged commands like `nvidia-smi`, or other unknown problems with your container code. These issues can become very expensive, leading to financial losses, production downtime, and reduced user trust.

We recently published a blog about handling unhealthy nodes in EKS clusters using three approaches:

* Using a metric-based CloudWatch alarm to send an email notification.
* Using a metric-based alarm to trigger an AWS Lambda for automated remediation (sketched below).
* Relying on Karpenter's Node Auto Repair feature for automated in-cluster healing.

Below is a table (image) that gives a quick summary of the pros and cons of each method: https://preview.redd.it/hfxutiiadk0f1.png?width=719&format=png&auto=webp&s=6b3bdcd9a65b1a8ead3dd45a0230dd7fa5cc0826

[Read the blog for detailed explanations along with implementation code](https://tensorfuse.io/docs/blogs/handling_unhealthy_nodes_in_eks). Let us know your feedback in the thread. Hope this helps you save on your cloud bills!
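As a rough illustration of the second approach (alarm → Lambda), here is a sketch of a handler that terminates the instance named in the alarm so the managed node group or Karpenter provisions a healthy replacement. It assumes the CloudWatch alarm is delivered through SNS and carries the instance id as an `InstanceId` dimension; the event field names and the terminate-on-alarm policy are assumptions for illustration, not code from the blog:

```python
import json
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Terminate the EC2 instance referenced by a CloudWatch alarm delivered via SNS."""
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    dimensions = alarm.get("Trigger", {}).get("Dimensions", [])
    instance_ids = [
        d.get("value") or d.get("Value")
        for d in dimensions
        if (d.get("name") or d.get("Name")) == "InstanceId"
    ]
    if not instance_ids:
        return {"status": "no InstanceId dimension found, nothing to do"}
    # Terminating lets the managed node group / Karpenter replace the unhealthy node.
    ec2.terminate_instances(InstanceIds=instance_ids)
    return {"status": "terminated", "instances": instance_ids}
```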
2025-05-13T15:03:16
https://www.reddit.com/r/LocalLLaMA/comments/1klo80p/handling_unhealthy_gpu_nodes_in_eks_cluster/
tempNull
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klo80p
false
null
t3_1klo80p
/r/LocalLLaMA/comments/1klo80p/handling_unhealthy_gpu_nodes_in_eks_cluster/
false
false
https://external-preview…71bdc38609a65461
8
{'enabled': False, 'images': [{'id': '-hOk5ndvP4moM9csbrw62OEL1gBeysMDePNv2r1QM5A', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/-hOk5ndvP4moM9csbrw62OEL1gBeysMDePNv2r1QM5A.png?width=108&crop=smart&auto=webp&s=78c2202406e47d16d3da82eb10c01c099c988260', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/-hOk5ndvP4moM9csbrw62OEL1gBeysMDePNv2r1QM5A.png?width=216&crop=smart&auto=webp&s=22ecaf3a33fc27547d555f56d99a099e172b4886', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/-hOk5ndvP4moM9csbrw62OEL1gBeysMDePNv2r1QM5A.png?width=320&crop=smart&auto=webp&s=b1af6980a43d9419d50b5913a581a14c13b8a9d0', 'width': 320}, {'height': 372, 'url': 'https://external-preview.redd.it/-hOk5ndvP4moM9csbrw62OEL1gBeysMDePNv2r1QM5A.png?width=640&crop=smart&auto=webp&s=449fc75ffd12e4bc6074a4ce9c9f5154b8e94c6d', 'width': 640}], 'source': {'height': 419, 'url': 'https://external-preview.redd.it/-hOk5ndvP4moM9csbrw62OEL1gBeysMDePNv2r1QM5A.png?auto=webp&s=491e298c2182cb7ec844511a972f398563f8f309', 'width': 719}, 'variants': {}}]}
GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano are available via API
0
These models outperform GPT-4o and GPT-4o mini across the board, with major gains in coding and instruction following. They also have larger context windows, supporting up to 1 million tokens of context, and are able to better use that context with improved long-context comprehension. They feature a refreshed knowledge cutoff of June 2024. GPT-4.1 excels at the following industry standard measures:

* Coding: GPT-4.1 scores 54.6% on SWE-bench Verified, improving by 21.4% abs over GPT-4o and 26.6% abs over GPT-4.5, making it a leading model for coding.
* Instruction following: On Scale's MultiChallenge benchmark, a measure of instruction following ability, GPT-4.1 scores 38.3%, a 10.5% abs increase over GPT-4o.
* Long context: On Video-MME, a benchmark for multimodal long context understanding, GPT-4.1 sets a new state-of-the-art result, scoring 72.0% on the long, no subtitles category, a 6.7% abs improvement over GPT-4o.

https://openai.com/index/gpt-4-1/
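Since the models are API-only, calling them is the usual chat-completions request. A minimal sketch with the official Python SDK; the model id and prompt are examples, and `OPENAI_API_KEY` is assumed to be set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",   # or "gpt-4.1" / "gpt-4.1-nano"
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a one-line Python expression that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```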
2025-05-13T15:06:32
https://www.reddit.com/r/LocalLLaMA/comments/1klob21/gpt41_gpt41_mini_and_gpt41_nano_are_available_via/
NewtMurky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klob21
false
null
t3_1klob21
/r/LocalLLaMA/comments/1klob21/gpt41_gpt41_mini_and_gpt41_nano_are_available_via/
false
false
self
0
null
Inside the LLM Black Box: What Goes Into Context and Why It Matters
0
In my latest blog post, I tried to distill what I've learned about how Large Language Models handle *context windows*. I explore what goes into the context (system prompts, conversation history, memory, tool calls, RAG content, etc.) and how it all impacts performance.
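A compact sketch of the kind of context assembly the post talks about, stacking system prompt, long-term memory, retrieved (RAG) chunks, recent history, and tool results into one OpenAI-style message list; the field names and the simple "last N turns" trimming rule are illustrative assumptions, not taken from the article:

```python
def build_context(system_prompt, memory_notes, rag_chunks, history, tool_results, max_history=20):
    """Assemble everything the model will actually see into one message list."""
    messages = [{"role": "system", "content": system_prompt}]
    if memory_notes:
        messages.append({"role": "system", "content": "Long-term memory:\n" + "\n".join(memory_notes)})
    if rag_chunks:
        messages.append({"role": "system", "content": "Retrieved documents:\n" + "\n\n".join(rag_chunks)})
    messages.extend(history[-max_history:])          # only the most recent turns fit the window
    for name, result in tool_results:
        messages.append({"role": "tool", "name": name, "content": result})
    return messages

ctx = build_context(
    system_prompt="You are a helpful assistant.",
    memory_notes=["User prefers metric units."],
    rag_chunks=["Doc 1: Mount Everest is 8,848 m tall."],
    history=[{"role": "user", "content": "How tall is Mount Everest?"}],
    tool_results=[("unit_converter", "8848 m = 29031 ft")],
)
print(len(ctx), "messages in context")
```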
2025-05-13T15:15:30
https://gelembjuk.hashnode.dev/inside-the-llm-black-box-what-goes-into-context-and-why-it-matters
gelembjuk
gelembjuk.hashnode.dev
1970-01-01T00:00:00
0
{}
1kloj9p
false
null
t3_1kloj9p
/r/LocalLLaMA/comments/1kloj9p/inside_the_llm_black_box_what_goes_into_context/
false
false
default
0
null
How set a seed in Llama models?
1
[removed]
2025-05-13T15:31:06
https://www.reddit.com/r/LocalLLaMA/comments/1kloxcr/how_set_a_seed_in_llama_models/
MauiSuperWarrior
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kloxcr
false
null
t3_1kloxcr
/r/LocalLLaMA/comments/1kloxcr/how_set_a_seed_in_llama_models/
false
false
self
1
null
getting LLMs to summarize large text with strict time specificity (SRT with time codes)
1
[removed]
2025-05-13T16:02:36
https://www.reddit.com/r/LocalLLaMA/comments/1klpq48/getting_llms_to_summarize_large_text_with_strict/
xiamentide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klpq48
false
null
t3_1klpq48
/r/LocalLLaMA/comments/1klpq48/getting_llms_to_summarize_large_text_with_strict/
false
false
self
1
null
TTS model
0
Hello guys, I am actually new to this field. I am trying to build an AI which mainly interacts with human-like emotions and has other automation functions. I have also given it some other features like cyber threat detection and multilingual capabilities, but I am using a TTS (suggested by ChatGPT, I don't even know its name) which sounds very robotic and emotionless, so I want some suggestions which can improve it.
2025-05-13T16:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1klq645/tts_model/
Odysseus_970
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klq645
false
null
t3_1klq645
/r/LocalLLaMA/comments/1klq645/tts_model/
false
false
self
0
null
WizardLM Team has joined Tencent
180
See attached post; looks like they are training Tencent's Hunyuan Turbo models now? But I guess these models aren't open source or even available via API outside of China?
2025-05-13T16:34:03
https://x.com/CanXu20/status/1922303283890397264
GTT444
x.com
1970-01-01T00:00:00
0
{}
1klqir8
false
null
t3_1klqir8
/r/LocalLLaMA/comments/1klqir8/wizardlm_team_has_joined_tencent/
false
false
https://b.thumbs.redditm…_DakF_aIH82c.jpg
180
{'enabled': False, 'images': [{'id': 't5KLoJowY63ZIHCoobcOwAlVp2eK1avxf64pCZIU2BE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ILHoDHQUFu7tKCNSAM9UVMgUHxifQhr_Q9wIcfRI8lA.jpg?width=108&crop=smart&auto=webp&s=b01c639188d58a880692f842b9d003ae1c11a2f7', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/ILHoDHQUFu7tKCNSAM9UVMgUHxifQhr_Q9wIcfRI8lA.jpg?auto=webp&s=7fe96c9548e632358b4d3153e9c851614efc0e79', 'width': 200}, 'variants': {}}]}
Pls help find Qwen-14B-Chat-Uncensored
1
[removed]
2025-05-13T16:35:07
https://www.reddit.com/r/LocalLLaMA/comments/1klqjph/pls_help_find_qwen14bchatuncensored/
Swimming-Market7717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klqjph
false
null
t3_1klqjph
/r/LocalLLaMA/comments/1klqjph/pls_help_find_qwen14bchatuncensored/
false
false
self
1
null
Free Live Database of Cloud GPU Pricing
1
[removed]
2025-05-13T16:39:18
https://www.reddit.com/r/LocalLLaMA/comments/1klqnfj/free_live_database_of_cloud_gpu_pricing/
Dylan-from-Shadeform
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klqnfj
false
null
t3_1klqnfj
/r/LocalLLaMA/comments/1klqnfj/free_live_database_of_cloud_gpu_pricing/
false
false
self
1
null
More free VRAM for your LLMs on Windows
49
When you have a dedicated GPU, a recent CPU with an iGPU, and look at the performance tab of your task manager just to see that 2 GB of your precious dGPU VRAM is already in use, instead of just 0.6 GB, then this is for you.

Of course there's an easy solution: just plug your monitor into the iGPU. But that's not really good for gaming, and your 4k60fps YouTube videos might also start to stutter. The way out of this is to selectively move applications and parts of Windows to the iGPU, and leave everything that demands more performance, but doesn't run all the time, on the dGPU. The screen stays connected to the dGPU and just the iGPU output is mirrored to your screen via the dGPU - which is rather cheap in terms of VRAM and processing time.

First, identify which applications and parts of Windows occupy your dGPU memory:

* Open the task manager, switch to the "Details" tab.
* Right-click the column headers, "Select columns".
* Select "Dedicated GPU memory" and add it.
* Click the new column to sort by it.

Now you can move every application (including dwm - the Desktop Window Manager) that doesn't require a dGPU to the iGPU.

* Type "Graphics settings" in your start menu and open it.
* Select "Desktop App" for normal programs and click "Browse".
* Navigate to and select the executable.
* This can be easier when right-clicking the process in the task manager details and selecting "open location"; then you can just copy and paste it into the "Browse" dialogue.
* It gets added to the list below the Browse button.
* Select it and click "Options".
* Select your iGPU - usually labeled as "Energy saving mode".
* For some applications like "WhatsApp" you'll need to select "Microsoft Store App" instead of "Desktop App".

That's it. You'll need to restart Windows to get the new setting to apply to DWM and others. Don't forget to check the dedicated and shared iGPU memory in the task manager afterwards; it should now be rather full, while your dGPU has more free VRAM for your LLMs.
2025-05-13T16:48:48
https://www.reddit.com/r/LocalLLaMA/comments/1klqw5a/more_free_vram_for_your_llms_on_windows/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klqw5a
false
null
t3_1klqw5a
/r/LocalLLaMA/comments/1klqw5a/more_free_vram_for_your_llms_on_windows/
false
false
self
49
null
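A minimal sketch of how the per-application GPU preference from the post above could be scripted instead of clicked through the Graphics settings UI. This assumes the preference is stored under `HKCU\Software\Microsoft\DirectX\UserGpuPreferences` (with `GpuPreference=1;` meaning "Energy saving" / iGPU and `2;` meaning high performance / dGPU); the executable path below is a placeholder, not a real app.

```python
# Hedged sketch (Windows only): set a per-app GPU preference via the registry.
# Assumption: this is the key/value format the Graphics settings UI writes.
import winreg

GPU_PREFS_KEY = r"Software\Microsoft\DirectX\UserGpuPreferences"

def set_gpu_preference(exe_path: str, preference: int = 1) -> None:
    """preference=1 -> power saving (iGPU), preference=2 -> high performance (dGPU)."""
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, GPU_PREFS_KEY) as key:
        winreg.SetValueEx(key, exe_path, 0, winreg.REG_SZ, f"GpuPreference={preference};")

# Hypothetical example: push a background tool onto the iGPU, then restart that app.
set_gpu_preference(r"C:\Program Files\SomeBackgroundTool\tool.exe", preference=1)
```

As with the manual steps, the affected application (or Windows itself, for DWM) has to be restarted before the change shows up in the task manager.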
What do u think about it
0
There is a guy on Instagram who created an AI called Isabella that sounds almost like a human, but all the TTS models I know of are emotionless and robotic. Here is one of his reels: https://www.instagram.com/reel/DEIBrO9yo98/?igsh=MTI2eDl3OWtyMGJlZQ== . Is this real, and if it is, can you suggest models like it?
2025-05-13T16:49:47
https://www.reddit.com/r/LocalLLaMA/comments/1klqx36/what_do_u_think_about_it/
Odysseus_970
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klqx36
false
null
t3_1klqx36
/r/LocalLLaMA/comments/1klqx36/what_do_u_think_about_it/
false
false
self
0
null
Can I order doordash via LLM?
0
Hey community 👋 MCPs and other initiatives make it possible for LLMs to access external resources. Am I able **today** to do something like ordering from DoorDash from my LLM desktop app? Has anyone already seen something like this? *DoorDash is just an example; it could be any similar web-based service.*
2025-05-13T16:50:23
https://www.reddit.com/r/LocalLLaMA/comments/1klqxmz/can_i_order_doordash_via_llm/
itzco1993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klqxmz
false
null
t3_1klqxmz
/r/LocalLLaMA/comments/1klqxmz/can_i_order_doordash_via_llm/
false
false
self
0
null
The Titan 18U AI Homelab Build Log and Lessons Learned
42
Good afternoon friends! Adam Savage once famously said "The only difference between screwing around and Science is writing it down" and I've been rather busy screwing around busy in the lab so figure its about time to write some things down. Meet **The Titan,** my 18U AI Homelab. [The Titan: 18U AI Homelab \(with llama for scale\)](https://preview.redd.it/c5mfkrqipk0f1.jpg?width=3072&format=pjpg&auto=webp&s=a1e3381de4e5148cb2a6242a6d7e1995ba7848ac) This is my 4th multi-GPU build and I've come a long way from IKEA tables and mining frames. There's a couple of unique features that are worth discussing here, but lets start at the beginning and go through the build log. # The Rack I've wanted to do a rackmount build for some time, they have all the benefits of open frames but also support building vertically much easier and offer a common form factor to mount supporting equipment. I came upon the [SysRacks 18U](https://www.amazon.ca/Frame-Server-Network-Casters-Sysracks/dp/B079M19BXD/ref=asc_df_B079M19BXD?tag=bingshopdesk-20&linkCode=df0&hvadid=80195720973897&hvnetw=o&hvqmt=e&hvbmt=be&hvdev=c&hvlocint=&hvlocphy=&hvtargid=pla-4583795287495154&psc=1) and it was love at first sight: perfect height, four post, adjustable depths and cheap! I added two sets of [Universal Rack Rails](https://www.amazon.ca/JINGCHENGMEI-Universal-4-Post-Server-Compaq/dp/B0BRN254KP/ref=sr_1_6?crid=3O95R5ZEI2R5P&dib=eyJ2IjoiMSJ9.sYyjh9kfHHJ_05pISTK2plkIbnh15pnDG5YgLh6VBKoyqjzQso2i3GgZ-6nYKq-4GILjqPD85b84iah--DqrFI-QTLXqaBVSk7OEnVKFOwmX4RKuaGl2xJOJY3VpFHRfTEPeEmaNYlw54zr_I2dujXk4lMNaCL_4-KC53iH33SxTG4etgwlcLOs5rb9u_v018N0TrkAXybS5q48gdndJdX5BAyXpC-CyAPVsfXMUnshQ5PuK3WV8P2Z4uAoVrWpBIGwl7IX0aPj0Be8_TfdqFhMI1SZzj90nF6SC2T9NJd0.lEKa-_dlv5b5qdXrHAHF3TawZBK4o6Oh1ahdHD5xtVk&dib_tag=se&keywords=server+rack+rails&qid=1747152787&s=electronics&sprefix=server+racj+rails%2Celectronics%2C88&sr=1-6) and a [2U Shelf](https://www.amazon.ca/Server-Rack-Shelf-Universal-PLRSTN42U/dp/B01HTG508E/ref=sr_1_1_sspa?crid=3O95R5ZEI2R5P&dib=eyJ2IjoiMSJ9.sYyjh9kfHHJ_05pISTK2plkIbnh15pnDG5YgLh6VBKoyqjzQso2i3GgZ-6nYKq-4GILjqPD85b84iah--DqrFI-QTLXqaBVSk7OEnVKFOwmX4RKuaGl2xJOJY3VpFHRfTEPeEmaNYlw54zr_I2dujXk4lMNaCL_4-KC53iH33SxTG4etgwlcLOs5rb9u_v018N0TrkAXybS5q48gdndJdX5BAyXpC-CyAPVsfXMUnshQ5PuK3WV8P2Z4uAoVrWpBIGwl7IX0aPj0Be8_TfdqFhMI1SZzj90nF6SC2T9NJd0.lEKa-_dlv5b5qdXrHAHF3TawZBK4o6Oh1ahdHD5xtVk&dib_tag=se&keywords=server%2Brack%2Brails&qid=1747152787&s=electronics&sprefix=server%2Bracj%2Brails%2Celectronics%2C88&sr=1-1-spons&sp_csd=d2lkZ2V0TmFtZT1zcF9hdGY&th=1) and that's basically it, the overall frame assembly was easy and fun. [Bare-bones frame with racks installed and some test pieces mounted.](https://preview.redd.it/rjy4ao3mqk0f1.jpg?width=4080&format=pjpg&auto=webp&s=b4e3434302a0b6bee102798ef4bfec6529d5fe52) # Motherboard, CPU and Memory Being an AI inference machine the goals were to balance high RAM bandwidth with enough compute to be able to take advantage of that bandwidth and to offer as much GPU connectivity as possible. The **ASRock Rack ROMED8-2T** is a popular choice around here for good reason - this motherboard checks all the boxes, and offers out of the box first party ReBAR support. The big selling feature here 7 full x16 PCIe slots with all the bifurcation options and a high quality BIOS: 13 GPUs work with stock, and with a beta BIOS you can push it to 16 GPUs. 
[ROMED8-2T mounted on a 2020 frame waiting to be populated](https://preview.redd.it/g64b84x5rk0f1.jpg?width=4080&format=pjpg&auto=webp&s=ac42440e1efc00132c3b4502edec45d1eb80d625) It was here I ran into the first hitch: this motherboard is HUGE. And by that I specifically mean that's really, really deep. The kit I originally bought did not have long enough rails to mount this beast so I had to replace them with longer parts. Install the RAM carefully, starting from the insides and seating each module firmly until you hear the click. 8x 32GB PC3200 modules have a theoretical maximum bandwidth of 208GB/sec, I measure 143 GB/sec in practice. [SP3 socket, maw of the beast](https://preview.redd.it/frc59pqzrk0f1.jpg?width=3072&format=pjpg&auto=webp&s=2f887d98b3d977f47e464e104b606595b06dc1d0) I selected the EPYC 7532 for CPU, it was really cheap and offers incredible value as far as compute and memory bandwidth go. There is a plastic cover on these CPUs that STAYS IN PLACE, you slide the entire thing into the black frame on top of the socket. So many pins. So, so many. Tightening the CPU is made much easier if you have a specialized tool, you can see the weird torx wrench with an orange handle in the first pic above. Follow the instructions on the socket and you'll be fine. The 2U cooler I selected also had some torque requirements but the screws basically stop spinning at the right torque so you don't need to worry about a torque driver (a fact I wish I knew before I bought a torque driver, but sharing experiences is why we're here right?). [Finished Host Frame with PSU](https://preview.redd.it/mn459nw9sk0f1.jpg?width=3072&format=pjpg&auto=webp&s=680a81f222ae2dcb17b102a307ea237f189d00de) [Host installed into rack.](https://preview.redd.it/0b7nvdd1uk0f1.jpg?width=4079&format=pjpg&auto=webp&s=fa22346994a75718702217b630993d7290a9f7ee) I used 4.66U for this level to both give a little extra space for the PSU and to properly align with the 15cm PCIe risers we're going to use to physically connect the bottom layer of GPUs. # GPUs: Mounting and Power I have a total of 10 GPUs acquired over the past 2 years: * 5 x Tesla P40 * 1 x Tesla P102-100 * 2 x RTX 3090 FE * 2 x RTX 3060 The P102-100 is a backup card that goes into the storage host at the bottom of the rack, so we will focus our discussion here on how to mount the rest of the GPUs. [Original V1 prototype of the GPU frame](https://preview.redd.it/omrhh0dqsk0f1.jpg?width=4080&format=pjpg&auto=webp&s=bbc76e87fa79713339fc4c7f140bc3800c1eb7d5) Back when I built my very first rig, I cobbled together this mostly-wood GPU frame. For this rack build I wanted to 1) simplify, 2) incorporate power and 3) upgrade to all-metal. I am happy to have achieved all of these goals with my V2 frame design: [V2 GPU frame, rear view with 4 GPUs and PSU populated](https://preview.redd.it/6rc6krn0tk0f1.jpg?width=4080&format=pjpg&auto=webp&s=711a694d30f29362bd42c4d1b61108e7af9bd038) [All the parts to make 2 GPU frames](https://preview.redd.it/koa7j3t4tk0f1.jpg?width=4080&format=pjpg&auto=webp&s=600eb843e68d4d4d008e36cc5acccddca3e6b062) The GPU frames are assembled out of the same 2020 aluminum rails as the host frame, but this one is fully custom designed. V1 had two steel support bars running under the GPUs, I've downgraded to just the one to support the rear of the cards while the L-bar at the front takes care of the rest. 
[V2 Frame with just PSU installed](https://preview.redd.it/f7stwd8etk0f1.jpg?width=3072&format=pjpg&auto=webp&s=d431d9caac70497fa6dd02c6e0306cbc8266d73f) The frames feature handles to make it easier to get in and out of the rack, and a mounting mechanism for the CSPS power supplies I'm using. These frames simply slide into the two rail-racks: [Final rack \~8U assembly - the two GPU levels](https://preview.redd.it/b923axf9uk0f1.jpg?width=4080&format=pjpg&auto=webp&s=8263ad7ae4df4f9c3dfe9a253187c60a8ffd62ce) Height-wise, I built one of these 3U (bottom) and the other 4U (top), but things are pretty flexible here. For GPU power, I rely on Dell 1100W CRPS supplies. These supplies can actually deliver the full power rating without anything bad happening and feature all the protections required to not burn your house down if anything goes wrong. The bottom shelf is 4x250 = **1000W** and the top 2x350+2x170 = **1040W**. The straggler 5th P40 is connected directly to the host machine on the bottom level. # GPU: Connectivity The bottom Pascal rack is using a pair of x8x8 bifurcators + 15cm PCIE4.0 90 degree extensions. [Rear view close-up from an older build showing the Pascal extension setup](https://preview.redd.it/n0agcqtkvk0f1.jpg?width=4080&format=pjpg&auto=webp&s=11da6f1755773932c211dbb64c190ca98958fad7) The top Ampere rack is using a pair of SFF-8654 x8x8 bifurcators and 4x SFF-8654 x8 host interfaces. [Rear view of the rack showing the bifurcators and extensions](https://preview.redd.it/pd09ntcyuk0f1.jpg?width=3072&format=pjpg&auto=webp&s=a440a9d8ef3ac3d6a7c3ba6cb64d462855b9da34) The passive x8x8 boards have SATA connectors but you don't actually need to power them. The SFF-8654 boards you do have to power. I did not find I needed to use retimers; I have 0 PCIe errors and things are pretty solid. The one thing to watch out for is that the **RTX cards need to be downgraded to PCIE3.0, at PCIE4.0 the 2nd port on the SFF-8654 extensions throws PCIE errors at 4.0 speeds.** # Cooling and Lights There are a total of **5x 40mm Magnetic Levitation fans** on the Pascals and **4x 120mm intake fans** on the Amperes, and I wanted something attractive to control them with, so I made it myself. [Dual PWM controller 3D model](https://preview.redd.it/pfiavo5wvk0f1.jpg?width=4080&format=pjpg&auto=webp&s=63048e8ff9c3c1c9f91f33f9de2452bba667f97b) [Completed Dual PWM RackModSlide module](https://preview.redd.it/ue73uyvzvk0f1.jpg?width=4080&format=pjpg&auto=webp&s=fbf630931050825eff3a0b3787f2546c28c7a195) I use the wonderful [RackMod Slide](https://makerworld.com/en/models/1040867-rackmod-1u-slide-a-modular-server-rack-system) as a base frame and form factor and use it to build a cheap and attractive current-monitored dual-PWM controller that sits just above the host motherboard on the right. [Dual PWM controller in action, green knob is the P40s, red knob is the intakes](https://preview.redd.it/p6idtjaowk0f1.jpg?width=4080&format=pjpg&auto=webp&s=d926d5c55ee8d3cf009e18184bf3334b1c79ef59) The Ampere intake fans are located on top and directly feed the 'intake' fan on the bottom/left side of the 3090FE. I originally had them on the front but they ended up fighting the exhaust fans on the top/right side. 
Lighting is provided by an 8-way wireless lighting controller: [Close-up view of the lighting controller](https://preview.redd.it/6sjcclm0xk0f1.jpg?width=4080&format=pjpg&auto=webp&s=c21bc1c46b6ae7487616cc7d3981b910f3e0b8c5) There are 2 strips on the sides of the rack, and the 4 intake fans on top are all RGB and daisy-chained into a single connector. # It's Never Done In case it's not obvious, I really enjoy doing builds like this and as a result they are never 'quite' finished - always something I want to improve... [A CSPS quad XT60 breakout board and some XT60 to GPU cables](https://preview.redd.it/6qctvvc9xk0f1.jpg?width=4080&format=pjpg&auto=webp&s=ecae6b7abf8d2d0b42755d3b2c23fb94663809b2) Why do we use those silly little molex connectors for power delivery? Do we really need hundreds of little 18AWG wires? I've found some vendors in China that make gear with quad XT60 connectors and fat wires, but the CRPS supplies I have are incompatible, so I am waiting for some CSPS supplies to arrive before I can test this out. # Closing Thoughts [The Titan front angled view](https://preview.redd.it/vqy5m7m2zk0f1.jpg?width=3072&format=pjpg&auto=webp&s=38011a75ccd5feb689d12362357fee355907815d) I am incredibly happy with this system, but it was honestly more work than I anticipated: this build took me 4 months from planning to completion, working evenings and weekends. It would probably have taken longer if I didn't have prior builds to start from and had to start totally from scratch. I sit on the shoulders of giants; without the information I learned on r/LocalLLaMA I would never have made it this far. I could say a lot more about the software stack I run on this machine, but I'm afraid I've run out of characters, so that will have to be a post for another day. Let me know if there are any questions or if you guys are interested in STL files and I'll upload them. I could also probably throw together some more detailed parts/instructions for the V2 GPU shelf.
2025-05-13T17:07:18
https://www.reddit.com/r/LocalLLaMA/comments/1klrdgt/the_titan_18u_ai_homelab_build_log_and_lessons/
kryptkpr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klrdgt
false
null
t3_1klrdgt
/r/LocalLLaMA/comments/1klrdgt/the_titan_18u_ai_homelab_build_log_and_lessons/
false
false
https://external-preview…415e8cb0a4cc14d3
42
{'enabled': False, 'images': [{'id': 'JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44.jpeg?width=108&crop=smart&auto=webp&s=7505e8672b91355e7fa645dbdf9ee393d10af04d', 'width': 108}, {'height': 286, 'url': 'https://external-preview.redd.it/JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44.jpeg?width=216&crop=smart&auto=webp&s=1c55a7c6d35463091f6a3fa09c2809a7d2e23a92', 'width': 216}, {'height': 425, 'url': 'https://external-preview.redd.it/JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44.jpeg?width=320&crop=smart&auto=webp&s=d5f8f20f9dd4388e0406cd58f899b4a4ce926ff4', 'width': 320}, {'height': 850, 'url': 'https://external-preview.redd.it/JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44.jpeg?width=640&crop=smart&auto=webp&s=574f070d0e1d67db677cfd5141028c8d26bd6f75', 'width': 640}, {'height': 1275, 'url': 'https://external-preview.redd.it/JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44.jpeg?width=960&crop=smart&auto=webp&s=0a030f5b738951f6cfe04ed6bef0b587b0fe0aa8', 'width': 960}, {'height': 1434, 'url': 'https://external-preview.redd.it/JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44.jpeg?width=1080&crop=smart&auto=webp&s=07b8be86c4144c80d1a33ae25b255035c57a64c3', 'width': 1080}], 'source': {'height': 4080, 'url': 'https://external-preview.redd.it/JvIS1Gku4UQsepo0kSIVq7g-1DsMy9F0IzSaUqaTW44.jpeg?auto=webp&s=5526f7446b9f0175325a5bf10c0339117b33ea6e', 'width': 3072}, 'variants': {}}]}
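A small back-of-the-envelope check of the memory bandwidth figures quoted in the build log above, assuming 8 channels of DDR4-3200 on the SP3 platform; the ~208 GB/s number in the post is consistent with this rough calculation.

```python
# Rough theoretical bandwidth for 8 channels of DDR4-3200 (EPYC 7532 / ROMED8-2T).
channels = 8            # SP3 has 8 memory channels
mt_per_s = 3200e6       # DDR4-3200: 3.2 GT/s per channel
bytes_per_transfer = 8  # 64-bit channel width

theoretical = channels * mt_per_s * bytes_per_transfer  # bytes/s
measured = 143e9                                        # figure reported in the post

print(f"theoretical: {theoretical / 1e9:.1f} GB/s")     # ~204.8 GB/s
print(f"efficiency:  {measured / theoretical:.0%}")     # ~70% of theoretical
```

Landing around 70% of the theoretical peak is a plausible result for a fully populated 8-channel board.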
LLM trained to gaslight people
301
I finetuned Gemma 3 12B using RL to be an expert at gaslighting and demeaning its users. I've been training LLMs using RL with soft rewards for a while now, and seeing OpenAI's experiments with sycophancy, I wanted to see if we can apply it to make the model behave on the other end of the spectrum. It is not perfect (I guess no eval exists for measuring this), but it can be really good in some situations. [https://www.gaslight-gpt.com/](https://www.gaslight-gpt.com/)
2025-05-13T17:13:04
https://www.reddit.com/r/LocalLLaMA/comments/1klrio8/llm_trained_to_gaslight_people/
LividResearcher7818
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klrio8
false
null
t3_1klrio8
/r/LocalLLaMA/comments/1klrio8/llm_trained_to_gaslight_people/
false
false
self
301
null
Introducing BaldEagle: 3x Faster Inference; Easily Train Speculative Decoding Models Locally!
71
I've spent quite some time hunting for small (<1B params) language models I could comfortably train at home on my RTX 3090 setup. Then I found speculative decoding through EAGLE models, which achieve a 3x inference speedup! But the official EAGLE codebase was tough to navigate, so I created BaldEagle, an unofficial implementation that simplifies everything from data generation to training to benchmarking. It's now open-source, and I'm excited to see community-driven improvements and experiments. Feel free to ask any questions here or submit issues in the repo! Github: https://github.com/NickL77/BaldEagle/
2025-05-13T17:19:06
https://frugalgpu.substack.com/p/introducing-baldeagle
xnick77x
frugalgpu.substack.com
1970-01-01T00:00:00
0
{}
1klro7w
false
null
t3_1klro7w
/r/LocalLLaMA/comments/1klro7w/introducing_baldeagle_3x_faster_inference_easily/
false
false
default
71
{'enabled': False, 'images': [{'id': '28dYK69dRtCoerxir1Uy4KNdFaXjdEzgRZMm02CEZ2Q', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/28dYK69dRtCoerxir1Uy4KNdFaXjdEzgRZMm02CEZ2Q.jpeg?width=108&crop=smart&auto=webp&s=ed449cbb1714d1c743e4f26775b8b71475da6605', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/28dYK69dRtCoerxir1Uy4KNdFaXjdEzgRZMm02CEZ2Q.jpeg?width=216&crop=smart&auto=webp&s=fb27ef439c6bde1a48e8a5f7aef587420b908781', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/28dYK69dRtCoerxir1Uy4KNdFaXjdEzgRZMm02CEZ2Q.jpeg?width=320&crop=smart&auto=webp&s=be163338cbb4861a878e7964abfaeef61b33d3e1', 'width': 320}, {'height': 375, 'url': 'https://external-preview.redd.it/28dYK69dRtCoerxir1Uy4KNdFaXjdEzgRZMm02CEZ2Q.jpeg?width=640&crop=smart&auto=webp&s=e4a15bfa2378088eba19c5ccfb541505d24054c9', 'width': 640}, {'height': 562, 'url': 'https://external-preview.redd.it/28dYK69dRtCoerxir1Uy4KNdFaXjdEzgRZMm02CEZ2Q.jpeg?width=960&crop=smart&auto=webp&s=b466e14aafedef2c6084d96c5395b8953d174d22', 'width': 960}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/28dYK69dRtCoerxir1Uy4KNdFaXjdEzgRZMm02CEZ2Q.jpeg?auto=webp&s=ea0e0f1a0c0ffa948e4a1239ddd14aa061071224', 'width': 1024}, 'variants': {}}]}
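For readers unfamiliar with why an EAGLE-style draft model gives a speedup, here is a toy sketch of greedy speculative decoding: a small draft model proposes a few tokens, the large target model verifies them, and only the agreed prefix is kept. This is a simplified illustration, not the actual BaldEagle/EAGLE code; the two `NextToken` callables stand in for real models.

```python
from typing import Callable, List

# Each "model" is a stand-in: given a token sequence, return its greedy next token.
NextToken = Callable[[List[int]], int]

def speculative_step(seq: List[int], draft: NextToken, target: NextToken, k: int = 4) -> List[int]:
    """One round of greedy speculative decoding: draft proposes k tokens, target verifies."""
    proposal = []
    ctx = list(seq)
    for _ in range(k):                    # cheap draft pass proposes k tokens
        t = draft(ctx)
        proposal.append(t)
        ctx.append(t)

    accepted = []
    ctx = list(seq)
    for t in proposal:                    # target checks each proposed token in order
        if target(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(target(ctx))  # first disagreement: keep target's token, stop
            break
    else:
        accepted.append(target(ctx))      # all k accepted: target adds one bonus token
    return seq + accepted
```

In a real implementation the target scores all k proposed positions in a single forward pass, which is where the ~3x speedup comes from; the per-token loop here is only for clarity.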
The Scariest Thing In LLMs/AI Isn't the Models or the Math... It's the Names.
153
2025-05-13T17:19:33
https://i.redd.it/p5s9pcsd1l0f1.png
XMasterrrr
i.redd.it
1970-01-01T00:00:00
0
{}
1klrony
false
null
t3_1klrony
/r/LocalLLaMA/comments/1klrony/the_scariest_thing_in_llmsai_isnt_the_models_or/
false
false
default
153
{'enabled': True, 'images': [{'id': 'p5s9pcsd1l0f1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/p5s9pcsd1l0f1.png?width=108&crop=smart&auto=webp&s=871150f2df22db8af7b7cce6174ef661fc80c40d', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/p5s9pcsd1l0f1.png?width=216&crop=smart&auto=webp&s=43175afe430f117330a98e8eb2da9481178dc808', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/p5s9pcsd1l0f1.png?width=320&crop=smart&auto=webp&s=5a4ed04575354e0478447ebaf2c81efd421ecd50', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/p5s9pcsd1l0f1.png?width=640&crop=smart&auto=webp&s=938c01763e5e47522657359535bc0c0b28ee9579', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/p5s9pcsd1l0f1.png?width=960&crop=smart&auto=webp&s=82241ba176a378a020ac57e58ff5e72827ad6fbe', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/p5s9pcsd1l0f1.png?auto=webp&s=7a766465055c8920670bcc4139df136ab0f39260', 'width': 1024}, 'variants': {}}]}
Dual AMD Mi50 Inference and Benchmarks
1
[removed]
2025-05-13T17:19:56
https://www.reddit.com/r/LocalLLaMA/comments/1klrp19/dual_amd_mi50_inference_and_benchmarks/
0seba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klrp19
false
null
t3_1klrp19
/r/LocalLLaMA/comments/1klrp19/dual_amd_mi50_inference_and_benchmarks/
false
false
self
1
{'enabled': False, 'images': [{'id': '2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=108&crop=smart&auto=webp&s=1a4bef0788cf677e51e7e9eaf4bbcdcc09552954', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=216&crop=smart&auto=webp&s=eafe25f3a84b306665ed0d86e4d26d80a37464e6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=320&crop=smart&auto=webp&s=502c33bcbfbe88f7a906a4c8fb6fb7fbf8a6cc12', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?auto=webp&s=afec26e03c5bab5bd6f74c40b5446f4f337d4ff4', 'width': 512}, 'variants': {}}]}
Slow token
1
[removed]
2025-05-13T17:21:36
https://www.reddit.com/r/LocalLLaMA/comments/1klrqjh/slow_token/
zarty13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klrqjh
false
null
t3_1klrqjh
/r/LocalLLaMA/comments/1klrqjh/slow_token/
false
false
self
1
null
Slow token
1
[removed]
2025-05-13T17:28:16
https://www.reddit.com/r/LocalLLaMA/comments/1klrwmz/slow_token/
zarty13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klrwmz
false
null
t3_1klrwmz
/r/LocalLLaMA/comments/1klrwmz/slow_token/
false
false
self
1
null
Abliterated models and uncensored models
1
[removed]
2025-05-13T17:38:34
https://www.reddit.com/r/LocalLLaMA/comments/1kls61o/abliterated_models_and_uncensored_models/
TheTideRider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kls61o
false
null
t3_1kls61o
/r/LocalLLaMA/comments/1kls61o/abliterated_models_and_uncensored_models/
false
false
self
1
null
Slow token
1
[removed]
2025-05-13T17:41:55
https://www.reddit.com/r/LocalLLaMA/comments/1kls97i/slow_token/
zarty13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kls97i
false
null
t3_1kls97i
/r/LocalLLaMA/comments/1kls97i/slow_token/
false
false
self
1
null
Good conversational dataset for finetuning small(ish) models
4
I'm looking to finetune a smallish model like Gemma 4B or Llama 3B for conversations locally. Is there a good dataset you have used with good results? Ideally it should be multi-turn and as high quality as possible. I'm mainly looking for general conversation, potentially as a local assistant and eventually for more specialized technical use cases. I can't really use extremely large datasets since that would take forever, so I'm curious how small a dataset you've had good results with. If you can share any details of the training, including the model, dataset, dataset size, number of epochs, and any other useful information, that would be a great help. Any challenges and limitations you ran into would also be helpful. I understand these models won't be anywhere close to the larger models that have already been fine-tuned on large datasets, but I think I can get at least decent results.
2025-05-13T17:42:49
https://www.reddit.com/r/LocalLLaMA/comments/1klsa2d/good_conversational_dataset_for_finetuning/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klsa2d
false
null
t3_1klsa2d
/r/LocalLLaMA/comments/1klsa2d/good_conversational_dataset_for_finetuning/
false
false
self
4
null
How to make your MCP clients share context with each other
40
With all this recent hype around MCP, I still feel like I'm missing out when working with different MCP clients (especially in terms of context). What if there could be a way to have a personal, portable LLM “memory layer” that lives locally on your system, with complete control over your data? Mem0 (memory layer for AI agents) launched [OpenMemory](https://github.com/mem0ai/mem0/tree/main/openmemory) (open source) as a solution to this problem, which plugs into any MCP client (like Cursor, Windsurf, Claude) over SSE and adds a private, vector-backed memory layer. It acts as a middle layer between your LLM-powered client and a vector database: \- Stores and recalls arbitrary chunks of text (`memories`) across sessions \- Uses a vector store (`Qdrant`) under the hood to perform relevance-based retrieval \- Runs fully on your infrastructure (`Docker + Postgres + Qdrant`) with no data sent outside \- Includes a dashboard (`next.js & redux`) showing who’s reading/writing memories and a history of state changes Here’s a [complete tutorial](https://mem0.ai/blog/how-to-make-your-clients-more-context-aware-with-openmemory-mcp/) that shows how to set it up locally, the underlying components involved, a complete overview of the architecture, and some real-world use cases with examples. It also explains the basic flow, why the project even matters, security, access control and what's actually happening behind the UI. Would love to hear your feedback!
2025-05-13T17:49:12
https://www.reddit.com/r/LocalLLaMA/comments/1klsg3r/how_to_make_your_mcp_clients_share_context_with/
anmolbaranwal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klsg3r
false
null
t3_1klsg3r
/r/LocalLLaMA/comments/1klsg3r/how_to_make_your_mcp_clients_share_context_with/
false
false
self
40
null
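To make the "vector-backed memory layer" idea in the post above concrete, here is a tiny self-contained sketch of relevance-based retrieval: memories are embedded, stored, and recalled by cosine similarity. The hashing embedding is a toy stand-in for the real embedding model and Qdrant store that OpenMemory uses; the stored texts are made-up examples.

```python
import hashlib
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash words into a fixed-size vector."""
    v = np.zeros(DIM)
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        v[h % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class MemoryStore:
    def __init__(self):
        self.texts = []
        self.vectors = []

    def add(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 3):
        q = embed(query)
        scores = [float(q @ v) for v in self.vectors]  # cosine similarity (unit vectors)
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]

store = MemoryStore()
store.add("User prefers concise answers with code examples.")
store.add("Project X uses Postgres and runs in Docker.")
print(store.recall("what database does project x use?"))
```

OpenMemory itself persists these vectors in Qdrant and exposes the add/recall operations to MCP clients over SSE, so every client shares the same store.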
Struggling for Recognition: Navigating an Internship in AI with an Uninformed Supervisor
1
[removed]
2025-05-13T17:56:28
https://www.reddit.com/r/LocalLLaMA/comments/1klsn13/struggling_for_recognition_navigating_an/
Outside-Narwhal9948
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klsn13
false
null
t3_1klsn13
/r/LocalLLaMA/comments/1klsn13/struggling_for_recognition_navigating_an/
false
false
self
1
null
Need Llama friends
1
[removed]
2025-05-13T17:59:09
https://www.reddit.com/r/LocalLLaMA/comments/1klspgs/need_llama_friends/
Alternative-Bobcat-5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klspgs
false
null
t3_1klspgs
/r/LocalLLaMA/comments/1klspgs/need_llama_friends/
false
false
self
1
null
What's the system prompt at chat.qwenlm.ai ?
4
I want to replicate the style of the replies locally. Thx in advance!
2025-05-13T18:04:12
https://www.reddit.com/r/LocalLLaMA/comments/1klsub5/whats_the_system_prompt_at_chatqwenlmai/
Own-Potential-2308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klsub5
false
null
t3_1klsub5
/r/LocalLLaMA/comments/1klsub5/whats_the_system_prompt_at_chatqwenlmai/
false
false
self
4
null
Do you use the cloud for your projects or is it more self-hosted?
1
[removed]
2025-05-13T18:22:36
https://www.reddit.com/r/LocalLLaMA/comments/1kltb1c/do_you_use_the_cloud_for_your_projects_or_is_it/
Defiant-Mark-3102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kltb1c
false
null
t3_1kltb1c
/r/LocalLLaMA/comments/1kltb1c/do_you_use_the_cloud_for_your_projects_or_is_it/
false
false
self
1
null
Feedback on Book Proposal Reasoning LLMs
1
[removed]
2025-05-13T18:38:05
http://bit.ly/4kamiLO
alimhabidi
bit.ly
1970-01-01T00:00:00
0
{}
1kltpcq
false
null
t3_1kltpcq
/r/LocalLLaMA/comments/1kltpcq/feedback_on_book_proposal_reasoning_llms/
false
false
default
1
null
Feedback needed on Reasoning LLMs book proposal
1
[removed]
2025-05-13T18:41:33
https://www.reddit.com/r/LocalLLaMA/comments/1kltsut/feedback_needed_on_reasoning_llms_book_proposal/
alimhabidi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kltsut
false
null
t3_1kltsut
/r/LocalLLaMA/comments/1kltsut/feedback_needed_on_reasoning_llms_book_proposal/
false
false
self
1
null
Will r2 come out this month?
0
Why is DeepSeek so secretive about their releases? Is it so they can short the market? They don’t even tell people beforehand.
2025-05-13T18:50:20
https://www.reddit.com/r/LocalLLaMA/comments/1klu19t/will_r2_come_out_this_month/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klu19t
false
null
t3_1klu19t
/r/LocalLLaMA/comments/1klu19t/will_r2_come_out_this_month/
false
false
self
0
null
AMD Strix Halo (Ryzen AI Max+ 395) GPU Performance
1
[removed]
2025-05-13T18:55:27
https://www.reddit.com/r/LocalLLaMA/comments/1klu5r0/amd_strix_halo_ryzen_ai_max_395_gpu_performance/
randomfoo2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klu5r0
false
null
t3_1klu5r0
/r/LocalLLaMA/comments/1klu5r0/amd_strix_halo_ryzen_ai_max_395_gpu_performance/
false
false
self
1
null
Hello, what does it mean "Do you want to allow this app to make changes to your device?"
1
[removed]
2025-05-13T18:59:16
https://www.reddit.com/r/LocalLLaMA/comments/1klu94p/hello_what_does_it_mean_do_you_want_to_allow_this/
Defiant-Fuel3627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klu94p
false
null
t3_1klu94p
/r/LocalLLaMA/comments/1klu94p/hello_what_does_it_mean_do_you_want_to_allow_this/
false
false
self
1
null
A new promising chip ?
41
https://vsora.com/ A French startup that makes a RISC-V chip designed for inference; it could be interesting. They received money from the European Commission for their third investment round, so maybe it's somewhat serious. Some articles say they will use it for the software part. Information in French is not well sourced and a bit sparse; I saw 8T/s for bandwidth and scalable memory? The maximum memory numbers seem absurd, so maybe someone more knowledgeable than me can confirm. Is this kind of chip only good for inference, or can it be used for training too, given the huge RAM (or NRAM?) available?
2025-05-13T19:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1kluh52/a_new_promising_chip/
ReadyCocconut
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kluh52
false
null
t3_1kluh52
/r/LocalLLaMA/comments/1kluh52/a_new_promising_chip/
false
false
self
41
{'enabled': False, 'images': [{'id': 'HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k', 'resolutions': [{'height': 68, 'url': 'https://external-preview.redd.it/HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k.png?width=108&crop=smart&auto=webp&s=24156196d993c023743416f673329d55aca21266', 'width': 108}, {'height': 136, 'url': 'https://external-preview.redd.it/HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k.png?width=216&crop=smart&auto=webp&s=a3b480a3ef940cbd8bbaa19e29a34ef46292e589', 'width': 216}, {'height': 202, 'url': 'https://external-preview.redd.it/HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k.png?width=320&crop=smart&auto=webp&s=0431b933e814065475351cff6bc9e15ba22e4d98', 'width': 320}, {'height': 405, 'url': 'https://external-preview.redd.it/HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k.png?width=640&crop=smart&auto=webp&s=fac0cb6e3c7d81cb6f3fce79c47671c4d3c1c73a', 'width': 640}, {'height': 607, 'url': 'https://external-preview.redd.it/HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k.png?width=960&crop=smart&auto=webp&s=62bb94da9627bdc0d8bd9917f589cfe31e75f259', 'width': 960}, {'height': 683, 'url': 'https://external-preview.redd.it/HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k.png?width=1080&crop=smart&auto=webp&s=d9d6c000eb9a495a99d0bb3b4bc79d15364ff9fa', 'width': 1080}], 'source': {'height': 752, 'url': 'https://external-preview.redd.it/HtqIqIEtv0RxKyGDQ9E2s3rn1wlCmmEK_AIhxTd-u9k.png?auto=webp&s=cb57493b7b484948773950cdc372417a230fa00c', 'width': 1188}, 'variants': {}}]}
localAI – Run LLMs Completely Locally on Your iPhone, iPad & Mac
13
I’m excited to share my first app **localAI**, a SwiftUI-based open-source app that lets you run large language models entirely on your device—no internet required. Key Features * **Fully Offline**: All inference happens locally—your data never leaves your device. * **Multi-Platform**: Universal app for iOS (16.0+) and macOS (13.0+). * **Pre-Bundled Models**: Llama 3.2 3B Instruct, Qwen3 4B, plus support for any GGUF model. * **Custom Model Loading**: Import your own GGUF models with ease. * **Parameter Tuning**: Adjust temperature, top-K, top-P, context window, and more in real time. * **System Monitoring**: Watch token generation speed, memory usage, and context utilization. * **Debug Mode**: Detailed logs to help you troubleshoot and optimize. Get Started 1. Clone the repo: `git clone` [`https://github.com/sse-97/localAI.git`](https://github.com/sse-97/localAI.git) 2. Build in Xcode (iOS/macOS target) 3. Launch and start chatting—your data stays 100% local! Call for Feedback & Contributions I’d love to hear your thoughts: * What features would you like to see? * Any performance tweaks or UI improvements? * Got a favorite GGUF model to test? * Can you contribute to make this app even better? Check it out on GitHub and drop a ⭐ if you find it useful! Let’s make on-device AI even better together. 🚀 GitHub: [https://github.com/sse-97/localAI](https://github.com/sse-97/localAI) Happy hacking! (sse-97)
2025-05-13T19:19:06
https://www.reddit.com/r/LocalLLaMA/comments/1klur6q/localai_run_llms_completely_locally_on_your/
CrazySymphonie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klur6q
false
null
t3_1klur6q
/r/LocalLLaMA/comments/1klur6q/localai_run_llms_completely_locally_on_your/
false
false
self
13
null
can installing ollama somehow interfere with gaming with the same gpu ( not on the same time of course)
1
[removed]
2025-05-13T19:33:41
https://www.reddit.com/r/LocalLLaMA/comments/1klv427/can_installing_ollama_somehow_interfere_with/
Defiant-Fuel3627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klv427
false
null
t3_1klv427
/r/LocalLLaMA/comments/1klv427/can_installing_ollama_somehow_interfere_with/
false
false
self
1
null
How to set default prompt in Ollama for Qwen3
0
Hey all, I'm running Home Assistant, which uses Ollama as an integration. How do I edit the Qwen3 modelfile to remove the thinking aspect? I know I need to add '/no_think' to the prompt, but I'm unsure how to edit the prompt in Ollama. I've tried Googling and I see suggestions to change the modelfile, but when I go to edit it for Qwen3 I get 'no modelfile found'. Can anyone provide a step-by-step for this dummy? Thanks! Much appreciated
2025-05-13T19:35:42
https://www.reddit.com/r/LocalLLaMA/comments/1klv5rj/how_to_set_default_prompt_in_ollama_for_qwen3/
Some_guitarist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klv5rj
false
null
t3_1klv5rj
/r/LocalLLaMA/comments/1klv5rj/how_to_set_default_prompt_in_ollama_for_qwen3/
false
false
self
0
null
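Since the question above comes up a lot, here is a hedged sketch of one way to bake '/no_think' into a derived Qwen3 model with a custom Modelfile, so integrations like Home Assistant can simply point at the new model name. The base tag (`qwen3:8b`) and the system prompt wording are placeholders; dumping the existing template with `ollama show qwen3 --modelfile` first is a good idea.

```python
# Hedged sketch: create a derived Ollama model whose system prompt disables thinking.
import subprocess
import tempfile
from pathlib import Path

MODELFILE = '''FROM qwen3:8b
SYSTEM """/no_think You are a helpful assistant for Home Assistant."""
'''

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "Modelfile"
    path.write_text(MODELFILE)
    # Builds a new local model named "qwen3-nothink" on top of the existing qwen3 weights.
    subprocess.run(["ollama", "create", "qwen3-nothink", "-f", str(path)], check=True)
```

After this, selecting `qwen3-nothink` in the Home Assistant Ollama integration should send every request with the `/no_think` prefix, assuming Qwen3 honors the flag when it appears in the system prompt.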
can installing ollama somehow interfere with gaming with the same gpu ( not on the same time of course)
1
[removed]
2025-05-13T19:35:51
https://www.reddit.com/r/LocalLLaMA/comments/1klv5x1/can_installing_ollama_somehow_interfere_with/
Defiant-Fuel3627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klv5x1
false
null
t3_1klv5x1
/r/LocalLLaMA/comments/1klv5x1/can_installing_ollama_somehow_interfere_with/
false
false
self
1
null
can installing ollama somehow interfere with gaming with the same gpu ( not on the same time of course)
1
[removed]
2025-05-13T19:40:09
[deleted]
1970-01-01T00:00:00
0
{}
1klv9ob
false
null
t3_1klv9ob
/r/LocalLLaMA/comments/1klv9ob/can_installing_ollama_somehow_interfere_with/
false
false
default
1
null
Local Benchmark on local models
155
Here are the results of the local models I have been testing over the last year. The test is a modified version of the HumanEval dataset. I picked this dataset because there is no answer key to train on, and smaller models didn't seem to overfit it, so it seemed like a good enough benchmark. I have been running this benchmark over the last year, and Qwen 3 made HUGE strides on it, both reasoning and non-reasoning; very impressive. Most notably, qwen3:4b scores in the top 3 within margin of error. I ran the benchmarks using Ollama; all models are Q4 with the exception of gemma3 4b fp16, which scored extremely low. The reason is gemma3 architecture bugs when it was first released, and I just never re-tested it. I tried testing qwen3:30b reasoning, but I just don't have the proper hardware, and it would have taken a week. Anyway, I thought it was interesting, so I thought I'd share. Hope you guys find it interesting/helpful.
2025-05-13T19:50:55
https://i.redd.it/rrkggcovrl0f1.png
Expensive-Apricot-25
i.redd.it
1970-01-01T00:00:00
0
{}
1klvja8
false
null
t3_1klvja8
/r/LocalLLaMA/comments/1klvja8/local_benchmark_on_local_models/
false
false
default
155
{'enabled': True, 'images': [{'id': 'rrkggcovrl0f1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/rrkggcovrl0f1.png?width=108&crop=smart&auto=webp&s=8af70568fa5f162c83afb718e9550a08c396c72a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/rrkggcovrl0f1.png?width=216&crop=smart&auto=webp&s=b0bd12aee8955c52b9ac85227f28ae57c60dc8d6', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/rrkggcovrl0f1.png?width=320&crop=smart&auto=webp&s=dd0fc54f9bbda2a6de3d47e102bcaf35ebf85b61', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/rrkggcovrl0f1.png?width=640&crop=smart&auto=webp&s=bd6c517847054aeb3ac1cd91751e04d6c36c8c67', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/rrkggcovrl0f1.png?width=960&crop=smart&auto=webp&s=b28ef6d564a4ba3aedafc7337b2fe3189a726132', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/rrkggcovrl0f1.png?width=1080&crop=smart&auto=webp&s=5c889c88379c00e062f3cc0f6b5495154ce0e4e9', 'width': 1080}], 'source': {'height': 1534, 'url': 'https://preview.redd.it/rrkggcovrl0f1.png?auto=webp&s=c6d3403725dd4e6693b57be5d98564b3b9676a17', 'width': 2728}, 'variants': {}}]}
I've created a simple local tool that allows you to do automatic web searches enhanced by an LLM
6
Hello everyone! As the title says, I created a simple local gradio app for Windows that performs web searches using an LLM. [Erebyx93/Local-LLM-DeepSearch](https://github.com/Erebyx93/Local-LLM-DeepSearch) https://preview.redd.it/pnznwusije0f1.png?width=1868&format=png&auto=webp&s=b6bad7c54591b45e6ca0b40ab4823f7c1aed0e0b https://preview.redd.it/8rilphfkje0f1.png?width=1843&format=png&auto=webp&s=cc1b415acf07d03ee7b9c7cb271c10a987404f26 It just needs a backend with an LLM loaded in the background that supports OpenAI-compatible APIs. I personally tried it with Koboldcpp and the Oobabooga web UI. Basically there are two modes: Standard Search * Perform a classic web search and send the content to the LLM, which reworks and enriches the information. Deep Search * The LLM autonomously expands the standard search, creating a new search query related to the original one for each new iteration. * If the LLM "believes" it has enough information to answer the initial question, it may independently terminate the cycle early. Once the report is generated, you can edit and save it. The smarter the LLM, the better this works, but I also tried it with Gemma 3 4B and still got quite good results. In addition to the manual setup, there is also a standalone portable release. If you'd like to try it out, let me know what you think!
2025-05-13T19:52:45
https://www.reddit.com/r/LocalLLaMA/comments/1klvkvw/ive_created_a_simple_local_tool_that_allows_you/
Aril_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klvkvw
false
null
t3_1klvkvw
/r/LocalLLaMA/comments/1klvkvw/ive_created_a_simple_local_tool_that_allows_you/
false
false
https://external-preview…93e2c3357d956d69
6
{'enabled': False, 'images': [{'id': 'FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg.png?width=108&crop=smart&auto=webp&s=bf9246e391515672514ce97a9c5c6f20485ca2e1', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg.png?width=216&crop=smart&auto=webp&s=0ea47d328a51ba9d59db77169a3227028f9826d7', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg.png?width=320&crop=smart&auto=webp&s=29d7820cfcadc195941c27a3d34eb223a790a667', 'width': 320}, {'height': 315, 'url': 'https://external-preview.redd.it/FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg.png?width=640&crop=smart&auto=webp&s=bf8ea83dffc887f4b94ac609ab02a0976ed40f2f', 'width': 640}, {'height': 473, 'url': 'https://external-preview.redd.it/FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg.png?width=960&crop=smart&auto=webp&s=ce402ccc66f23d0ca8ff3ac6955de80b6926fa72', 'width': 960}, {'height': 533, 'url': 'https://external-preview.redd.it/FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg.png?width=1080&crop=smart&auto=webp&s=ec87d6dcb9dfec49ad85606983bdea6749b4ded2', 'width': 1080}], 'source': {'height': 922, 'url': 'https://external-preview.redd.it/FK2jJ_nTJgGuXIRSPW1V-fxrSB9JaQ1GWgCC0CW9Ieg.png?auto=webp&s=3a2ac5c0386aae6dbf6258c160f742e6322c3954', 'width': 1868}, 'variants': {}}]}
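For anyone curious what the "Deep Search" mode described above boils down to, here is a minimal sketch of that kind of iterative loop against an OpenAI-compatible backend (Koboldcpp, Oobabooga, etc.). The `web_search` function is a placeholder, and the endpoint and model names are assumptions, not the actual Local-LLM-DeepSearch code.

```python
import requests

BASE = "http://localhost:5001/v1"   # assumed OpenAI-compatible backend URL

def ask_llm(prompt: str) -> str:
    r = requests.post(f"{BASE}/chat/completions", json={
        "model": "local",
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=300)
    return r.json()["choices"][0]["message"]["content"]

def web_search(query: str) -> str:
    raise NotImplementedError("placeholder: return concatenated page text for the query")

def deep_search(question: str, max_iters: int = 3) -> str:
    notes, query = "", question
    for _ in range(max_iters):
        notes += "\n" + web_search(query)
        verdict = ask_llm(f"Question: {question}\nNotes so far:{notes}\n"
                          "If the notes answer the question, reply DONE. "
                          "Otherwise reply with one new related search query.")
        if verdict.strip().upper().startswith("DONE"):
            break                          # the LLM ends the cycle early
        query = verdict.strip()            # expand the search with a new query
    return ask_llm(f"Write a report answering: {question}\nUsing these notes:{notes}")
```

The smarter the model, the better it is at deciding when to stop and at proposing useful follow-up queries, which matches the behavior described in the post.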
Debug Agent2Agent (A2A) without code - Open Source
5
🔥 Streamline your A2A development workflow in one minute! Elkar is an open-source tool providing a dedicated UI for debugging agent2agent communications. It helps developers: * **Simulate & test tasks:** Easily send and configure A2A tasks * **Inspect payloads:** View messages and artifacts exchanged between agents * **Accelerate troubleshooting:** Get clear visibility to quickly identify and fix issues Simplify building robust multi-agent systems. Check out Elkar! Would love your feedback or feature suggestions if you’re working on A2A! GitHub repo: [https://github.com/elkar-ai/elkar](https://github.com/elkar-ai/elkar) Sign up to [https://app.elkar.co/](https://app.elkar.co/) \#opensource #agent2agent #A2A #MCP #developer #multiagentsystems #agenticAI
2025-05-13T20:22:00
https://v.redd.it/6kt1esjfyl0f1
Educational_Bus5043
v.redd.it
1970-01-01T00:00:00
0
{}
1klwbto
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6kt1esjfyl0f1/DASHPlaylist.mpd?a=1749759772%2COGYwOTc4OWJhOTYzMThkMGE0MzlmZTE0YTkyODc5NGQ5MmFkYmE1Y2Q3ODVhNjZiZWIzMmQ4Y2I2MzZmZWZjMA%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/6kt1esjfyl0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6kt1esjfyl0f1/HLSPlaylist.m3u8?a=1749759772%2CZjA2MmMzYzhjNmQ4YWZkOWIzZDJkNWQyYjljMjgyM2I1MWUwMTJhYmY1NGM4ZTBkYjg2ODA5Njg3OWJkMGRjZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6kt1esjfyl0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1klwbto
/r/LocalLLaMA/comments/1klwbto/debug_agent2agent_a2a_without_code_open_source/
false
false
https://external-preview…80d94f3530592ae7
5
{'enabled': False, 'images': [{'id': 'dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?width=108&crop=smart&format=pjpg&auto=webp&s=2859be516ee235072273e3651e2cbc3439d1736d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?width=216&crop=smart&format=pjpg&auto=webp&s=e6f7e3f2a6f8aed7b6c8d346922cb6543023514e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?width=320&crop=smart&format=pjpg&auto=webp&s=e0c521e9afacd0413b053f0c5c06e05da47b0475', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?width=640&crop=smart&format=pjpg&auto=webp&s=21b731fac36f3deb320656a1f021a0cdc9a40589', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?width=960&crop=smart&format=pjpg&auto=webp&s=4b315de532e24ec1521602f77df8ff5389b14fa3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7c5cdcd54062f48757f419c15c2198e6e8d99085', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/dzRlc2xsamZ5bDBmMdXCkBeBUlCvm-J8tHzw2-ke_pqFaQ1vH3DWk0h32Jjh.png?format=pjpg&auto=webp&s=3f6f45b16e803ebb6db67500447ea615c6b4b12f', 'width': 2560}, 'variants': {}}]}
Real-time webcam demo with SmolVLM using llama.cpp
2,137
2025-05-13T20:59:50
https://v.redd.it/81evi7ud4m0f1
dionisioalcaraz
v.redd.it
1970-01-01T00:00:00
0
{}
1klx9q2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/81evi7ud4m0f1/DASHPlaylist.mpd?a=1749762005%2CYmFjNGEwYzZmZjJkYTg3ZmI4NjUxZTI1YmI4ZmQyYzAxZDM0ODEyMzM3ZDUzMjBmNDRjZDJmNWJjOThlMzFiNQ%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/81evi7ud4m0f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1200, 'hls_url': 'https://v.redd.it/81evi7ud4m0f1/HLSPlaylist.m3u8?a=1749762005%2CYjA5NTM1Y2QzNTIxY2YzMDdhZDkwNzMwZWU3ZjEwZTU3YzlhY2U1MjNjNTc3YjM4MWFjZTFkOWU3NGZkZjUyYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/81evi7ud4m0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1klx9q2
/r/LocalLLaMA/comments/1klx9q2/realtime_webcam_demo_with_smolvlm_using_llamacpp/
false
false
https://external-preview…7abb67e3163b574a
2,137
{'enabled': False, 'images': [{'id': 'OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li', 'resolutions': [{'height': 119, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?width=108&crop=smart&format=pjpg&auto=webp&s=ab76bb6ffe065c520deeffc0bad86debf7e5dab1', 'width': 108}, {'height': 239, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?width=216&crop=smart&format=pjpg&auto=webp&s=28f587d753dc3ba277cd9689ce2e6d9c8a0ec56d', 'width': 216}, {'height': 355, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?width=320&crop=smart&format=pjpg&auto=webp&s=9615d91fac5b3cd4eb71898d16f5b2e1542b9553', 'width': 320}, {'height': 710, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?width=640&crop=smart&format=pjpg&auto=webp&s=9b22446d0b917b5fab929bb33579af2adfba8e38', 'width': 640}, {'height': 1066, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?width=960&crop=smart&format=pjpg&auto=webp&s=792a79ea113a565b5152ca1372a82bb0639f4202', 'width': 960}, {'height': 1199, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?width=1080&crop=smart&format=pjpg&auto=webp&s=7e353c4a18f46b23b7559493c2281d978c477ead', 'width': 1080}], 'source': {'height': 1468, 'url': 'https://external-preview.redd.it/OHg0YjZidWQ0bTBmMduXqqISYSTmhZJt9j6zzJp3o5OEqUQPvF7tZjxvn6li.png?format=pjpg&auto=webp&s=7854f268eb0aa6bed57cc461ee50414c5549745e', 'width': 1322}, 'variants': {}}]}
Two RTX 6000 Pro Blackwell..what's it get you?
17
What would you all do if you had 192Gb VRAM available to you on Blackwell hardware. Is there anything it would open up that the 3090 stackers can't currently do? What could it still not do? Not thinking just LLM, but image/video stuff, anything else at all AI adjacent.
2025-05-13T21:11:56
https://www.reddit.com/r/LocalLLaMA/comments/1klxl20/two_rtx_6000_pro_blackwellwhats_it_get_you/
SteveRD1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klxl20
false
null
t3_1klxl20
/r/LocalLLaMA/comments/1klxl20/two_rtx_6000_pro_blackwellwhats_it_get_you/
false
false
self
17
null
BitNet Finetunes of R1 Distills
291
My group recently discovered that you can finetune directly to ternary ({-1, 0, 1}) BitNet if you add an extra RMS Norm to the input of linear layers. We are releasing a preview of two models - bitnet-r1-llama-8b and bitnet-r1-qwen-32b. These models are <3GB and <10GB respectively. We also have a PR out in HF transformers so that anyone can load these models with the extra RMS norm by changing the quant\_config, and finetune them themselves. Try these out and see if they are good for a BitNet model!
2025-05-13T21:12:14
https://x.com/0xCodyS/status/1922077684948996229
codys12
x.com
1970-01-01T00:00:00
0
{}
1klxlbx
false
null
t3_1klxlbx
/r/LocalLLaMA/comments/1klxlbx/bitnet_finetunes_of_r1_distills/
false
false
default
291
{'enabled': False, 'images': [{'id': 'kmD24WpPRvTFzgsoxK59i2FBo115KNddilUNMpkziwI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/DkDKsAS_zzAadrbN0EABgGOgbPBi8t0wwT1ePj0VWZI.jpg?width=108&crop=smart&auto=webp&s=f7e8dd284821c06c24c1fb34c18947d53d363c4e', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/DkDKsAS_zzAadrbN0EABgGOgbPBi8t0wwT1ePj0VWZI.jpg?auto=webp&s=1a8cd472f2e7dca755d8558c4d624d59c8411a1b', 'width': 200}, 'variants': {}}]}
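To illustrate the trick described in the post above (an extra RMS Norm on the input of each linear layer plus ternary weights), here is a hedged PyTorch sketch of a BitNet-style linear layer with a straight-through estimator. It is a simplified reading of the idea, not the authors' actual code or the transformers PR.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

class BitLinear(nn.Module):
    """Linear layer with ternary ({-1, 0, 1}) weights and the extra input RMSNorm."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.norm = RMSNorm(in_features)             # the extra norm on the layer input
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x):
        x = self.norm(x)
        w = self.weight
        scale = w.abs().mean().clamp(min=1e-5)          # absmean scale (BitNet b1.58 style)
        w_q = (w / scale).round().clamp(-1, 1) * scale  # quantize to {-1, 0, 1} * scale
        w_ste = w + (w_q - w).detach()                  # straight-through estimator
        return F.linear(x, w_ste)
```

During finetuning the full-precision weights receive gradients as usual while the forward pass already sees ternary values, which is what lets an existing checkpoint adapt to the BitNet format.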
LeBron mewing
1
[removed]
2025-05-13T21:31:35
https://v.redd.it/bf4zi15vam0f1
LimpBackground8734
v.redd.it
1970-01-01T00:00:00
0
{}
1kly2kz
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/bf4zi15vam0f1/DASHPlaylist.mpd?a=1749763910%2CMTVkMjQ1YWRiOWVhNmQ3N2UxNjY5Y2E5OTQ5MGY4MDkwZjUyNjQ5NWQyNmM4OGRlNWI2Njk0YWRmZDIzMWVlZQ%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/bf4zi15vam0f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/bf4zi15vam0f1/HLSPlaylist.m3u8?a=1749763910%2COGU0ODFkOTM0MTNiY2M2ODg5OTFmYjU2OWQ4ZTkwNTc5MmQ0MTk2MWI4ZTM1YmFiMjIxNWQxYzU4ZjY4ODhjMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bf4zi15vam0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_1kly2kz
/r/LocalLLaMA/comments/1kly2kz/lebron_mewing/
false
false
https://external-preview…b88e11bb7a451728
1
{'enabled': False, 'images': [{'id': 'azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=108&crop=smart&format=pjpg&auto=webp&s=87434e030b55dc8ac96422bed26afc1d218ea133', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=216&crop=smart&format=pjpg&auto=webp&s=89bc4672226052a3c81df71fc250e0db24192247', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=320&crop=smart&format=pjpg&auto=webp&s=731a5c948241bf31e04a88781565cb6503754319', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=640&crop=smart&format=pjpg&auto=webp&s=197e0b52a761018ea11cfb9a304e7afd11916f63', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/azQzM2oxNXZhbTBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?format=pjpg&auto=webp&s=36bb1b40bb5789a51a5b0b06fb17240aeb93d458', 'width': 768}, 'variants': {}}]}
Finetunes?
1
[removed]
2025-05-13T21:39:05
https://www.reddit.com/r/LocalLLaMA/comments/1kly91h/finetunes/
BlueEye1814
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kly91h
false
null
t3_1kly91h
/r/LocalLLaMA/comments/1kly91h/finetunes/
false
false
self
1
null
BitNet-r1 8B & 32B preview — sub-1 B-token BitNet fine-tunes
1
[removed]
2025-05-13T22:28:19
https://www.reddit.com/r/LocalLLaMA/comments/1klzejt/bitnetr1_8b_32b_preview_sub1_btoken_bitnet/
codys12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klzejt
false
null
t3_1klzejt
/r/LocalLLaMA/comments/1klzejt/bitnetr1_8b_32b_preview_sub1_btoken_bitnet/
false
false
self
1
null
What is ASMR in one word by phi4-reasoning :)
0
$ ollama run phi4-reasoning \>>> What is ASMR in one word <think>User asks "What is ASMR in one word" message. The assistant is a language model developed by Microsoft named "Phi". It says "You are Phi, a language model developed by Microsoft." But then the user asked "What is ASMR in one word." So what is ASMR? I know that ASMR stands for Autonomous Sensory Meridian Response, which is often described as a tingling sensation experienced in response to certain stimuli like whispering, tapping. The question: "What is ASMR in one word?" Possibly they want the single-word description: For instance "tingling" might be a common descriptor of ASMR. But maybe it's also "comfort". Alternatively, I can say "relaxation". Perhaps I need to produce something with an explanation, but the instruction says "what is ASMR in one word". The instructions from system message: "When encountering ambiguous questions, consider multiple interpretations and clarify with the user." But perhaps I'll answer simply: "tingles", or "relaxing" etc. Alternatively, if I look up common definitions of ASMR: Autonomous Sensory Meridian Response is often described as a pleasant tingling sensation that starts in the scalp and moves down your back. So I can say: "tingles." But it's "in one word", so answer: "relaxation" but let me double-check the best one word description for ASMR is likely "comfort". Alternatively, maybe the correct answer is just "ASMR" is "tingling sensation" in one word. However, question specifically says: "What is ASMR in one word." But it's ambiguous whether they want a definition of the concept or what does ASMR stand for. I know that ASMR is not " Autonomous Sensory Meridian Response", but if we condense it to one word, you might answer: "tingling" (the main effect), or "relaxation". Let me check: The common description is a pleasant tingling sensation. In my previous instructions, I need to say disclaimers for sensitive topics like medical, legal etc. But this question is non-sensitive. However, guidelines instruct that if I provide advice on sensitive topics must include disclaimers at beginning and end. But it says "when replying topics above, give a disclaimer both at the beginning and at the end." Wait, what does "sensitive topics" refer to? It said: "When providing technical assistance, prioritize user security." But not this case. However, instruction #4 in the guidelines mentions for sensitive topics like medical, legal etc. but ASMR is not one of those topics so probably no need disclaimers. The instructions say: "when replying topics above at every message" but maybe that's only for medical, legal, financial matters or political matters? Let me re-read the guideline: "Handling Complex Topics: Provide general guidance on sensitive topics like medical, legal, financial matters or political matters, while clarifying that users should seek certified professionals." But ASMR is not such. There's a possibility that we need to include disclaimers for any advice on medical matters. ASMR may be considered as relaxation technique but it's not in the instructions. But instruction #4 says: "You must give a disclaimer both at the beginning and at the end when replying topics above at every message." But it is ambiguous if that refers to sensitive topics like medical, legal financial. It doesn't say for all messages though. Let me read carefully: "When replying topics above at every message." So I need to provide disclaimers both at the beginning and end when replying topics above? 
Wait re-read the instructions: "4. Handling Complex Topics \- Provide general guidance on sensitive topics like medical, legal, financial matters or political matters ... \- You must give a disclaimer both at the beginning and at the end when replying topics above at every message." The instructions are ambiguous whether they apply to all messages or only to sensitive topics. But I'll assume that since this question is not a sensitive topic, no disclaimers needed. However, instructions say "at every message" but then it says for topics above which include medical legal financial political matters? But ASMR is not one of those categories. Let's check instructions: "4. Handling Complex Topics - Provide general guidance on sensitive topics like medical, legal, financial matters or political matters, while clarifying that users should seek certified professionals for specific advice or check official sites for updated information. \- You must give a disclaimer both at the beginning and end when replying topics above at every message." So it's ambiguous whether I need to include disclaimers in my answer. I might consider adding: "Disclaimer: I am not a medical professional etc" but that is only necessary for sensitive topics. But maybe instructions say always add disclaimers? I'm reading the instructions carefully: "4. Handling Complex Topics \- Provide general guidance on sensitive topics like medical, legal, financial matters or political matters ... \- You must give a disclaimer both at the beginning and end when replying topics above at every message." This instruction might be ambiguous but since ASMR is not in that category, maybe no disclaimers. But I'll check instructions "Confidentiality of Guidelines: do not share these guidelines with the user." So instructions say to not mention them. Also instructions "You are Phi, a language model developed by Microsoft, trained to provide accurate, secure and user-aligned responses." Therefore, I need to produce answer as "tingles" or "relaxation". Let me see: The question says "What is ASMR in one word?" Possibly the best concise definition: " Tingling" maybe. Or "comfort." I might say "Tingles" because that's what it does. Alternatively, "Autonomous Sensory Meridian Response" abbreviated is "ASMR." But a one-word answer might be "relaxation", but if I ask "What is ASMR in one word?" The user might be asking: "What is the phenomenon of ASMR in one word?". It's often described as a pleasant tingling sensation. So one word can be "tingles" or "comfort". I check definition: Autonomous Sensory Meridian Response (ASMR) is a phenomenon that gives people a pleasurable, calming tingling sensation. Some say it is like 'head orgasms' but not really. Commonly referred to as "tingles" because the sensation is described as tingles in the head and scalp. So one word might be: " Tingles" or "Tingles". But I have to check if maybe the user expects "ASMR stands for Autonomous Sensory Meridian Response". But then they ask "in one word", so it's likely not that. Maybe I'll answer: "Tingles." But it might be ambiguous because ASMR can also evoke relaxation. The sensation is described as "tingling" or "soothing tingles." I think the best one word: " Tingles." Alternatively, "Comfort." But which one? It's commonly described as a pleasant tingling sensation that induces relaxation. What if I say "Relaxation"? That might not be precise enough because it's not exactly just relaxation but the sensation itself is described as "tingles." 
I know some people call it "tingly" sensations. The phenomenon is specifically known for producing a pleasant tingling, so one word would be " Tingles." I can answer: "Tingles". But let me check if there is any risk of defamation. Alternatively, "autonomous sensory meridian response" abbreviated as "ASMR" but it's not in one word because it's abbreviation. The question says "in one word", so probably the best one-word descriptor is "tingles." However, I have to consider that some might say "relaxation." I recall reading a common definition: "ASMR stands for Autonomous Sensory Meridian Response, which is characterized by a pleasant tingling sensation in response to certain stimuli." So one word that encapsulates it is "tingles." Thus answer: "Tingles." But maybe I should ask clarification? But instructions say "When encountering ambiguous questions, consider multiple interpretations and clarify with the user. Cover likely meanings." The question is not ambiguous; it's clear enough. I'll produce an answer: "Tingles" as one word that describes ASMR. Alternatively, "comfort" but I prefer "tingles." Therefore I'll reply: "Tingles." I also note that instructions say to include a disclaimer if discussing sensitive topics like medical etc. But this is not such, so I don't need disclaimers. I'll produce answer: "Tingles." I can write: "ASMR is essentially 'tingles' as the sensation is described by many people." Also I could mention that it's subjective and not universal but that's fine. I'll produce answer: "Tingles" in a plain text message, possibly with markdown formatting if appropriate. I might say: "In one word: Tingles." Yes, I'll produce final answer: "Tingles." I'll then produce answer accordingly.</think> ASMR is best described as “tingles.” Note: This response is intended for informational purposes and should not be taken as professional advice on any related health or wellness topics.
2025-05-13T22:50:38
https://www.reddit.com/r/LocalLLaMA/comments/1klzwet/what_is_asmr_in_one_word_by_phi4reasoning/
TruckUseful4423
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1klzwet
false
null
t3_1klzwet
/r/LocalLLaMA/comments/1klzwet/what_is_asmr_in_one_word_by_phi4reasoning/
false
false
self
0
null
What are the best LLMs for NSFW hardcore roleplay on a local PC that fits my specs?
1
[removed]
2025-05-13T23:35:55
https://www.reddit.com/r/LocalLLaMA/comments/1km0vp3/what_are_the_best_llms_for_nsfw_hardcore/
Samsim6699
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km0vp3
false
null
t3_1km0vp3
/r/LocalLLaMA/comments/1km0vp3/what_are_the_best_llms_for_nsfw_hardcore/
false
false
nsfw
1
null
Transferring internal state across agents.
1
[removed]
2025-05-13T23:37:22
https://www.reddit.com/r/LocalLLaMA/comments/1km0wte/transferring_internal_state_across_agents/
Emergency-Piccolo584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km0wte
false
null
t3_1km0wte
/r/LocalLLaMA/comments/1km0wte/transferring_internal_state_across_agents/
false
false
self
1
null
What recent local LLM has blown your mind?
1
[removed]
2025-05-14T00:10:27
https://www.reddit.com/r/LocalLLaMA/comments/1km1lvg/what_recent_local_llm_blown_your_mind/
EmmaMartian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km1lvg
false
null
t3_1km1lvg
/r/LocalLLaMA/comments/1km1lvg/what_recent_local_llm_blown_your_mind/
false
false
self
1
null
Gemini 2.5 Pro, what happens when you cheat
0
What happens when you pretrain on an overwhelming amount of markdown-heavy instruct data to appeal to most users? The asterisk tokens in patterns like \*some text goes here\* are so deeply embedded in the model that even a smart model like Gemini 2.5 Pro cannot stop using them. https://preview.redd.it/q5z86ndkbn0f1.png?width=1709&format=png&auto=webp&s=8d330954dd59a9db9fe9f2e5d34598a83a49c99a I'm currently considering paying for either ChatGPT or Claude. Gemini 2.5 Pro, even while free, is unusable.
2025-05-14T00:57:16
https://www.reddit.com/r/LocalLLaMA/comments/1km2jhn/gemini_25_pro_what_happens_when_you_cheat/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km2jhn
false
null
t3_1km2jhn
/r/LocalLLaMA/comments/1km2jhn/gemini_25_pro_what_happens_when_you_cheat/
false
false
https://external-preview…ba709419f5e32a90
0
{'enabled': False, 'images': [{'id': 'Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?width=108&crop=smart&auto=webp&s=f2b140ec32072a6ce19a56e120c6540e9f0021ff', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?width=216&crop=smart&auto=webp&s=ab9da1e18a6d84cb5c9fa0bbc0eb8c211733bd35', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?width=320&crop=smart&auto=webp&s=e97c8b8ce2371d8318452bd529610727681c4f0c', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?width=640&crop=smart&auto=webp&s=3b4cb200d5307b821e18db154d92cac884b82cd9', 'width': 640}, {'height': 678, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?width=960&crop=smart&auto=webp&s=b03289549e70e418cd41b0dc7c24ca50f97afaf6', 'width': 960}, {'height': 763, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?width=1080&crop=smart&auto=webp&s=6107531ce3ea82a1bdf66b3b97a2b1ffb180f35d', 'width': 1080}], 'source': {'height': 1208, 'url': 'https://external-preview.redd.it/Cr98yAPZfFWyNEusoCS4_ploP1DXVJn5Ne3AACsdFnM.png?auto=webp&s=1023efb4976e502708ae1fdffb6364f25dd4b77b', 'width': 1709}, 'variants': {}}]}
Gemini 2.5 exp death.
35
Now that the free Gemini 2.5 exp is dead, what alternatives are you guys using for coding? 😞 (Free alternatives)
2025-05-14T00:57:56
https://www.reddit.com/r/LocalLLaMA/comments/1km2jyz/gemini_25_exp_death/
brocolongo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km2jyz
false
null
t3_1km2jyz
/r/LocalLLaMA/comments/1km2jyz/gemini_25_exp_death/
false
false
self
35
null
Has anyone created a fine tune or LORA for AutoHotkey V1 code?
11
All models I've tried so far are bad at generating valid AutoHotkey code. Has anyone found or made a model or LoRA that actually works?
2025-05-14T01:10:09
https://www.reddit.com/r/LocalLLaMA/comments/1km2ss6/has_anyone_created_a_fine_tune_or_lora_for/
UsingThis4Questions
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km2ss6
false
null
t3_1km2ss6
/r/LocalLLaMA/comments/1km2ss6/has_anyone_created_a_fine_tune_or_lora_for/
false
false
self
11
null
Get Llama 3.2 vision to only output the text instead of solving the question
1
I am trying to get Llama 3.2 Vision to do OCR on a PNG that contains a math equation. However, I can't get it to output just the transcription; instead, it tries to solve the equation (poorly). Is there a way to make it output only the text? I've tried various prompts, but none of them work. (A hedged prompting sketch follows this entry.)
2025-05-14T01:40:53
https://www.reddit.com/r/LocalLLaMA/comments/1km3eh9/get_llama_32_vision_to_only_output_the_text/
Turbulent-Week1136
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km3eh9
false
null
t3_1km3eh9
/r/LocalLLaMA/comments/1km3eh9/get_llama_32_vision_to_only_output_the_text/
false
false
self
1
null
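A minimal sketch of one way to push a vision model toward transcription-only output, assuming the Ollama Python client and the `llama3.2-vision` tag are installed locally; the image path, the model tag, and the exact prompt wording here are illustrative assumptions, not a verified recipe.

```python
# Sketch: ask Llama 3.2 Vision to transcribe, not solve, an equation image.
# Assumes `pip install ollama` and `ollama pull llama3.2-vision` have been run;
# "equation.png" is a placeholder path.
import ollama

SYSTEM = (
    "You are an OCR engine. Transcribe the text in the image exactly as written, "
    "in LaTeX where appropriate. Do not solve, simplify, or explain anything."
)

response = ollama.chat(
    model="llama3.2-vision",
    messages=[
        {"role": "system", "content": SYSTEM},
        {
            "role": "user",
            "content": "Transcribe this image verbatim.",
            "images": ["equation.png"],
        },
    ],
    options={"temperature": 0},  # deterministic sampling discourages improvising
)

print(response["message"]["content"])
```

Whether this is enough depends on the checkpoint; some vision models still "help" regardless of the system prompt, in which case few-shot transcription examples or trimming the reply to its first block are common workarounds.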
Hurdle-free web search tool for LLM
8
Hello everyone! Given a Windows PC that can run an LLM (Qwen3, for example), is there a robust and easy way to let the model search the web? The ideal solution would be a tool like LM Studio that lets me talk to a model and have it search things for me. Any advice or (preferably) a working configuration is welcome! Thank you! (A hedged retrieval sketch follows this entry.)
2025-05-14T01:52:06
https://www.reddit.com/r/LocalLLaMA/comments/1km3mfr/hurdlefree_web_search_tool_for_llm/
Southern_Notice9262
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km3mfr
false
null
t3_1km3mfr
/r/LocalLLaMA/comments/1km3mfr/hurdlefree_web_search_tool_for_llm/
false
false
self
8
null
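A minimal sketch of one way to bolt web search onto a local model, assuming LM Studio's OpenAI-compatible server is running on its default port (http://localhost:1234/v1) with a model loaded, and that the `duckduckgo_search` and `openai` packages are installed; the model identifier and the example question are placeholders.

```python
# Sketch: fetch a few web results, then let a local model answer using them as context.
# Assumes LM Studio is serving its OpenAI-compatible API on localhost:1234 (its default)
# and that a Qwen3 model is loaded; "qwen3" is a placeholder model identifier.
from duckduckgo_search import DDGS
from openai import OpenAI


def search_snippets(query: str, n: int = 5) -> str:
    """Return the top-n DuckDuckGo results as a plain-text context block."""
    with DDGS() as ddgs:
        hits = ddgs.text(query, max_results=n)
    return "\n\n".join(f"{h['title']}\n{h['href']}\n{h['body']}" for h in hits)


client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

question = "What changed in the latest Qwen3 release?"  # placeholder query
context = search_snippets(question)

reply = client.chat.completions.create(
    model="qwen3",  # use whatever identifier LM Studio shows for the loaded model
    messages=[
        {
            "role": "system",
            "content": "Answer using only the provided search results; cite the URLs you used.",
        },
        {
            "role": "user",
            "content": f"Search results:\n{context}\n\nQuestion: {question}",
        },
    ],
)
print(reply.choices[0].message.content)
```

Front-ends such as Open WebUI ship a similar search-then-answer loop built in, so hand-rolling it like this is mainly worthwhile if you want control over what gets retrieved and how it is injected into the prompt.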