Dataset schema (Hugging Face viewer stats; one row per r/LocalLLaMA post):

title: string, 1-300 chars
score: int64, 0-8.54k
selftext: string, 0-40k chars
created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
url: string, 0-878 chars
author: string, 3-20 chars
domain: string, 0-82 chars
edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
gilded: int64, 0-2
gildings: string, 7 classes
id: string, 7 chars
locked: bool, 2 classes
media: string, 646-1.8k chars
name: string, 10 chars
permalink: string, 33-82 chars
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, 4-213 chars
ups: int64, 0-8.54k
preview: string, 301-5.01k chars
Noob question: Why did Deepseek distill Qwen3? (score: 77)
In Unsloth's documentation, it says "DeepSeek also released a R1-0528 distilled version by fine-tuning Qwen3 (8B)." Being a noob, I don't understand why they would use Qwen3 as the base, distill from there, and then call it DeepSeek-R1-0528. Isn't it mostly Qwen3, with DeepSeek taking Qwen3's work, doing a little bit extra, and calling it DeepSeek? What advantage is there to using Qwen3 as the base? Are they allowed to do that?
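To answer the "why" half concretely: distillation here just means fine-tuning a small, cheap-to-run student on outputs generated by the big teacher, so the 8B model picks up some of the 671B model's reasoning style without the 671B inference cost. (And it is allowed: R1 is MIT-licensed and Qwen3 is Apache-2.0, both of which permit derivatives with attribution.) A minimal sketch of the recipe, assuming Hugging Face transformers/datasets; the model id and trace file are illustrative, not DeepSeek's actual pipeline:

```python
# A minimal sketch of sequence-level distillation: fine-tune a small
# "student" on text generated by a large "teacher". Illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

student_id = "Qwen/Qwen3-8B"                      # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(student_id)
model = AutoModelForCausalLM.from_pretrained(student_id)

# teacher_traces.jsonl (hypothetical file): one {"text": ...} per line,
# where text = prompt + the teacher's reasoning trace + final answer.
ds = load_dataset("json", data_files="teacher_traces.jsonl")["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=4096),
            batched=True, remove_columns=ds.column_names)

Trainer(
    model=model,
    args=TrainingArguments("qwen3-8b-distill", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, bf16=True,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```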
2025-05-30T18:56:24 | u/Turbulent-Week1136 | https://www.reddit.com/r/LocalLLaMA/comments/1kzcc3f/noob_question_why_did_deepseek_distill_qwen3/
Confused, 2x 5070ti vs 1x 3090 (score: 3)
Looking to buy an AI server for running 32B models, but I'm confused about the 3090 recommendations. Prices on Amazon:

* 5070 Ti: $890
* 3090: $1600

32B model on vLLM:

* 2x 5070 Ti: 54 T/s
* 1x 3090: 40 T/s

Two 5070 Tis give you faster speeds and 8 GB of wiggle room for almost the same price. Plus, it gives you the opportunity to test 14B models before upgrading. I'm not that well versed in this space; what am I missing?
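Restating the quoted numbers as a rough figure of merit (dollars per token-per-second); this deliberately ignores power draw, slot/PSU requirements, multi-GPU overhead, and resale value:

```python
# Rough cost/VRAM/throughput comparison using only the prices and vLLM
# speeds quoted in the post; all other tradeoffs are ignored.
options = {
    "2x 5070 Ti": {"price": 2 * 890, "vram_gb": 2 * 16, "tps": 54},
    "1x 3090":    {"price": 1600,    "vram_gb": 24,     "tps": 40},
}
for name, o in options.items():
    print(f"{name}: ${o['price']}, {o['vram_gb']} GB VRAM, "
          f"{o['tps']} T/s -> ${o['price'] / o['tps']:.0f} per T/s")
# 2x 5070 Ti: $1780, 32 GB VRAM, 54 T/s -> $33 per T/s
# 1x 3090:    $1600, 24 GB VRAM, 40 T/s -> $40 per T/s
```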
2025-05-30T19:48:07 | u/MiyamotoMusashi7 | https://www.reddit.com/r/LocalLLaMA/comments/1kzdla4/confused_2x_5070ti_vs_1x_3090/
Ollama run bob (score: 868) | 2025-05-30T20:06:52 | u/Porespellar | https://i.redd.it/v4krpd9g7z3f1.jpeg | /r/LocalLLaMA/comments/1kze1r6/ollama_run_bob/
Looking for software that processes images in realtime (or periodically). (score: 2)
Are there any projects out there that let a multimodal LLM process a window in real time? Basically, I'm trying to have a GUI look at a window, take a screenshot periodically, send it to Ollama, have it processed with a system prompt, and spit out an output, all hands-free. I've been trying to look at some OSS projects but haven't seen anything (or else I'm not looking correctly). Thanks for all your help.
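A hands-free loop like this is only a few lines if you're willing to script it yourself; a minimal sketch against Ollama's /api/generate endpoint, where the model name is a placeholder for whatever vision model you have pulled:

```python
# Minimal sketch of a periodic screenshot -> Ollama vision-model loop.
# Assumes a local Ollama server; model name is a placeholder.
import base64, io, time

import mss              # pip install mss
import requests
from PIL import Image   # pip install pillow

OLLAMA = "http://localhost:11434/api/generate"

with mss.mss() as screen:
    while True:
        shot = screen.grab(screen.monitors[1])      # full primary monitor
        img = Image.frombytes("RGB", shot.size, shot.rgb)
        buf = io.BytesIO()
        img.save(buf, format="PNG")
        b64 = base64.b64encode(buf.getvalue()).decode()

        resp = requests.post(OLLAMA, json={
            "model": "llava:13b",                   # any vision model works
            "prompt": "Describe what is happening in this window.",
            "images": [b64],
            "stream": False,
        }, timeout=300)
        print(resp.json()["response"])
        time.sleep(30)                              # poll every 30 s
```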
2025-05-30T20:11:14 | u/My_Unbiased_Opinion | https://www.reddit.com/r/LocalLLaMA/comments/1kze5m0/looking_for_software_that_processes_images_in/
qSpeak - Superwhisper cross-platform alternative now with MCP support (score: 18)
Hey, we've released a new version of qSpeak with advanced support for MCP. Now you can drive whatever platform tools you want, anywhere in your system, using voice. We've spent a great amount of time making the experience of steering your system by voice a pleasure. We would love some feedback. The app is still completely free, so we hope you'll like it!
2025-05-30T20:23:35 | u/fajfas3 | https://qspeak.app | /r/LocalLLaMA/comments/1kzegpe/qspeak_superwhisper_crossplatform_alternative_now/
Where can I use medgemma 27B (medical LLM) for free online? Can't inference it (score: 5)
Thanks!
2025-05-30T20:53:15 | u/Own-Potential-2308 | https://www.reddit.com/r/LocalLLaMA/comments/1kzf6hu/where_can_i_use_medgemma_27b_medical_llm_for_free/
Deepseek is cool, but is there an alternative to Claude Code I can use with it? (score: 83)
I'm looking for an AI coding framework that can help me with training diffusion models: take existing quasi-abandoned spaghetti codebases and update them to the latest packages, implement papers, add features like inpainting, autonomously experiment with different architectures, do hyperparameter searches, preprocess my data, train for me, etc... It wouldn't even require THAT much intelligence, I think. Sonnet could probably do it. But after trying the API I found its tendency to deceive and take shortcuts a bit frustrating, so I'm still on the fence about the €110 subscription (although the auto-compact feature is pretty neat). Is there an open-source version that would get me more for my money?
2025-05-30T20:56:55 | u/BITE_AU_CHOCOLAT | https://www.reddit.com/r/LocalLLaMA/comments/1kzf9nl/deepseek_is_cool_but_is_there_an_alternative_to/
ResembleAI provides safetensors for Chatterbox TTS (score: 36)
Safetensors files are now uploaded on Hugging Face: [https://huggingface.co/ResembleAI/chatterbox/tree/main](https://huggingface.co/ResembleAI/chatterbox/tree/main) And a PR that adds support for using them in the example code is ready and will be merged in a couple of days: [https://github.com/resemble-ai/chatterbox/pull/82/files](https://github.com/resemble-ai/chatterbox/pull/82/files) Nice! Examples from the model are here: [https://resemble-ai.github.io/chatterbox_demopage/](https://resemble-ai.github.io/chatterbox_demopage/)
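For anyone wanting to poke at the weights directly, safetensors files load without executing any pickled code. A minimal sketch using huggingface_hub; the tensor file name below is an assumption, so list the repo first:

```python
# Minimal sketch: download and inspect a safetensors file from the repo.
# The exact file name inside ResembleAI/chatterbox is an assumption.
from huggingface_hub import hf_hub_download, list_repo_files
from safetensors.torch import load_file

repo = "ResembleAI/chatterbox"
print(list_repo_files(repo))                     # see what's actually there

path = hf_hub_download(repo, "t3_cfg.safetensors")  # hypothetical file name
tensors = load_file(path)                        # dict[str, torch.Tensor]
for name, t in list(tensors.items())[:10]:
    print(name, tuple(t.shape), t.dtype)
```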
2025-05-30T21:00:05 | u/WackyConundrum | https://www.reddit.com/r/LocalLLaMA/comments/1kzfces/resembleai_provides_safetensors_for_chatterbox_tts/
Introducing the unified multi-modal MLX engine architecture in LM Studio (score: 1) | 2025-05-30T21:04:45 | u/adefa | https://lmstudio.ai/blog/unified-mlx-engine | /r/LocalLLaMA/comments/1kzfgm4/introducing_the_unified_multimodal_mlx_engine/
I built a memory MCP that understands you (so Sam Altman can't). (score: 0)
I built a deep contextual memory bank that is callable from AI applications like Claude and Cursor. It knows anything you give it about you, is safe and secure, and is kept private so ChatGPT doesn't own an understanding of you. https://preview.redd.it/xo82qo3diz3f1.png?width=3452&format=png&auto=webp&s=23768cfd288d6535515897273c11c0206caee672
2025-05-30T21:09:38 | u/OneEither8511 | https://www.reddit.com/r/LocalLLaMA/comments/1kzfkzw/i_built_a_memory_mcp_that_understands_you_so_sam/
ubergarm/DeepSeek-R1-0528-GGUF (score: 1) [removed] | 2025-05-30T21:13:30 | u/VoidAlchemy | https://huggingface.co/ubergarm/DeepSeek-R1-0528-GGUF | /r/LocalLLaMA/comments/1kzfocb/ubergarmdeepseekr10528gguf/
ubergarm/DeepSeek-R1-0528-GGUF (score: 100)
Hey y'all, just cooked up some ik_llama.cpp exclusive quants for the recently updated DeepSeek-R1-0528 671B. New recipes are looking pretty good (lower perplexity is "better"):

* `DeepSeek-R1-0528-Q8_0` 666GiB
  * `Final estimate: PPL = 3.2130 +/- 0.01698`
  * I didn't upload this, it is for baseline reference only.
* `DeepSeek-R1-0528-IQ3_K_R4` 301GiB
  * `Final estimate: PPL = 3.2730 +/- 0.01738`
  * Fits 32k context in under 24GiB VRAM
* `DeepSeek-R1-0528-IQ2_K_R4` 220GiB
  * `Final estimate: PPL = 3.5069 +/- 0.01893`
  * Fits 32k context in under 16GiB VRAM

I still might release one or two more, e.g. one bigger and one smaller, if there is enough interest. As usual, big thanks to Wendell and the whole Level1Techs crew for providing hardware expertise and access to release these quants! Cheers and happy weekend!
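For context on the PPL numbers: perplexity is the exponential of the mean negative log-likelihood per token over a held-out text, so the IQ3 quant giving 3.27 vs the Q8 baseline's 3.21 means a very small per-token quality loss for a 2.2x size reduction. An illustrative sketch of the computation (not llama.cpp's exact implementation, which works in fixed-size context windows):

```python
# Illustrative perplexity computation: exp of the mean per-token
# negative log-likelihood over an evaluation corpus.
import math

def perplexity(token_logprobs: list[float]) -> float:
    """token_logprobs: natural-log probability the model assigned to
    each actual next token in the evaluation text."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# e.g. a model averaging ln p = -1.167 per token scores PPL ~ 3.21
print(perplexity([-1.167] * 1000))  # ~3.212
```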
2025-05-30T21:17:00 | u/VoidAlchemy | https://huggingface.co/ubergarm/DeepSeek-R1-0528-GGUF | /r/LocalLLaMA/comments/1kzfrdt/ubergarmdeepseekr10528gguf/
I built an open-source VRAM Calculator inside Hugging Face (score: 1)
It's a Chrome extension that sits inside the Hugging Face website. It auto-loads model specs into the calculation. [Link to the extension](https://chromewebstore.google.com/detail/hugging-face-vram-calcula/bioohacjdieeliinbpocpdhpdapfkhal?authuser=0&hl=en-GB). To test it, install the extension (no registration/key needed) and navigate to a HF model page. Then click the "VRAM" icon on the top right to open the side panel. You can specify quantization, batch size, sequence length, etc. It works for inference & fine-tuning. If the model does not fit on the specified GPUs, it gives you advice on how to still run it (e.g. lowering precision). It was inspired by my work, where we were constantly exporting metrics from HF to estimate required hardware. Now it saves our dev team quite some time, and clients can use it too. Contributions to this project are highly appreciated in [this GitHub repo](https://github.com/NEBUL-AI/HF-VRAM-Extension).
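The core arithmetic behind calculators like this is simple enough to sanity-check by hand: weights take parameters times bytes-per-parameter, and the KV cache grows linearly with context. A first-order sketch only; it ignores activation memory, CUDA context, and quantized-cache options, which the extension presumably models more carefully:

```python
# First-order VRAM estimate for inference: weights + KV cache.
# Real usage adds activations and framework overhead (often 1-2+ GB).
def vram_gib(params_b: float, bits_per_weight: float,
             n_layers: int, n_kv_heads: int, head_dim: int,
             ctx: int, kv_bytes: int = 2) -> float:
    weights = params_b * 1e9 * bits_per_weight / 8
    kv = 2 * n_layers * n_kv_heads * head_dim * ctx * kv_bytes  # K and V
    return (weights + kv) / 1024**3

# e.g. a 32B model with GQA (64 layers, 8 KV heads, head_dim 128)
# at ~4.5 bits/weight with 16k context:
print(f"{vram_gib(32, 4.5, 64, 8, 128, 16384):.1f} GiB")  # ~20.8 GiB
```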
2025-05-30T21:22:55 | u/Cool-Maintenance8594 | https://v.redd.it/5hm166sykz3f1 | /r/LocalLLaMA/comments/1kzfwfb/i_built_an_opensource_vram_calculator_inside/
DeepSeek R1 - 0528 System Prompt leak (score: 1) [removed] | 2025-05-30T21:27:30 | u/exocija2 | /r/LocalLLaMA/comments/1kzg0g7/deepseek_r1_0528_system_prompt_leak/
Too Afraid to Ask: Why don't LoRAs exist for LLMs? (score: 46)
Image generation models generally allow for the use of LoRAs which -- for those who may not know -- essentially add some weights to a model that are honed in on a certain thing (art styles, objects, specific characters, etc.) and make the model much better at producing images with that style/object/character in them. It may be that the base model already had *some* training data on the topic, but not enough to be reliable or high quality. However, this doesn't seem to exist for LLMs; it seems that LLMs require a full finetune of the entire model to accomplish this. I wanted to ask why that is, since I don't really understand the technology well enough.
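(For what it's worth, LoRAs very much do exist for LLMs; the peft library's LoraConfig/get_peft_model wraps exactly this pattern.) The mechanism is the same as in image models: freeze the base weight W and learn a low-rank update, so the effective weight is W + (alpha/r)*BA. A minimal sketch of a LoRA-wrapped linear layer:

```python
# Minimal LoRA linear layer: y = base(x) + (alpha/r) * x @ A.T @ B.T
# The frozen base weight stays untouched; only the tiny A and B train.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)    # freeze base model
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # start at 0
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096, bias=False))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65,536 trainable params vs ~16.8M frozen in the base
```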
2025-05-30T21:31:43 | u/Saguna_Brahman | https://www.reddit.com/r/LocalLLaMA/comments/1kzg3yv/too_afraid_to_ask_why_dont_loras_exist_for_llms/
My Coding Agent Ran DeepSeek-R1-0528 on a Rust Codebase for 47 Minutes (Opus 4 Did It in 18): Worth the Wait? (score: 1) [removed] | 2025-05-30T21:34:37 | u/West-Chocolate2977 | /r/LocalLLaMA/comments/1kzg6g0/my_coding_agent_ran_deepseekr10528_on_a_rust/
Any custom prompts to make Gemini/Deepseek output short & precise like GPT-4-Turbo? (score: 4)
I use Gemini / DS / GPT depending on what task I'm doing, and I've been noticing that Gemini & DS always give very, very long answers; in comparison, the GPT-4 family of models often gives short and precise answers. I also noticed that GPT-4's answers, despite being short, feel more related to what I asked, while Gemini & DS cover more variations of what I asked. I've tried system prompts or Gems with "keep answers under 200 words", "do not substantiate unless asked", "give direct examples", but they have a 50/50 chance of actually respecting the prompts, and even then their answers are often double or triple the length of GPT's. Does anyone have a better system prompt that makes Gemini/DeepSeek behave more like GPT? Searching this returns pages of comparisons, but not much practical usage info.
2025-05-30T22:25:12 | u/Rxunique | https://www.reddit.com/r/LocalLLaMA/comments/1kzhctk/any_custom_prompts_to_make_geminideepseek_output/
[Tool] DeepFinder – Spotlight search that lets a local LLaMA model build your keyword list and rank results (score: 1) [removed] | 2025-05-30T22:53:51 | u/MarkVoenixAlexander | /r/LocalLLaMA/comments/1kzhzou/tool_deepfinder_spotlight_search_that_lets_a/
The Next Job Interview? How Soon? (score: 1) [removed] | 2025-05-30T23:13:14 | u/brucespector | https://i.redd.it/xwk76v9p404f1.jpeg | /r/LocalLLaMA/comments/1kzif6y/the_next_job_interview_how_soon/
Gemma-Omni. Did somebody get it up and running? Conversational (score: 1) [removed] | 2025-05-30T23:22:50 | u/Consistent-Disk-7282 | /r/LocalLLaMA/comments/1kzimx5/gemmaomni_did_somebody_get_it_up_and_running/
AI AGENT (score: 0)
I'm currently building an AI agent in Python using Mistral 7B and the ElevenLabs API for my text-to-speech. The model's purpose is to gather information from callers and direct them to the relevant departments, or log a ticket based on the information it receives. I use a Telegram bot to test the model through voice notes, but now I'd like to connect this model to a PBX system to test it further. How do I go about this? I'm looking for the cheapest options but also the best approach for this.
2025-05-30T23:39:04 | u/MOTHEOXO | https://www.reddit.com/r/LocalLLaMA/comments/1kzizjz/ai_agent/
The Machine Starting Singing | Keith Soyka (score: 1) [removed] | 2025-05-30T23:48:34 | u/gestaltview | linkedin.com | /r/LocalLLaMA/comments/1kzj6nz/the_machine_starting_singing_keith_soyka/
Tome - free & open source desktop app to let anyone play with LLMs and MCP (score: 1) [removed] | 2025-05-30T23:51:35 | u/WalrusVegetable4506 | https://v.redd.it/zu8pqzn1a04f1 | /r/LocalLLaMA/comments/1kzj90t/tome_free_open_source_desktop_app_to_let_anyone/
Built an open source desktop app to easily play with local LLMs and MCP (score: 59)
Tome is an open source desktop app for Windows or MacOS that lets you chat with an MCP-powered model without having to fuss with Docker, npm, uvx or json config files. Install the app, connect it to a local or remote LLM, one-click install some MCP servers and chat away. GitHub link here: [https://github.com/runebookai/tome](https://github.com/runebookai/tome)

We're also working on scheduled tasks and other app concepts that should be released in the coming weeks to enable new powerful ways of interacting with LLMs.

We created this because we wanted an easy way to play with LLMs and MCP servers. We wanted to streamline the user experience to make it easy for beginners to get started. You're not going to see a lot of power user features from the more mature projects, but we're open to any feedback and have only been around for a few weeks, so there's a lot of improvements we can make. :)

Here's what you can do today:

* connect to Ollama, Gemini, OpenAI, or any OpenAI compatible API
* add an MCP server: you can either paste something like "uvx mcp-server-fetch" or use the Smithery registry integration to one-click install a local MCP server - Tome manages uv/npm and starts up/shuts down your MCP servers so you don't have to worry about it
* chat with your model and watch it make tool calls!

If you get a chance to try it out we would love any feedback (good or bad!), thanks for checking it out!
2025-05-30T23:56:00 | u/WalrusVegetable4506 | https://i.redd.it/i4tcl9p5c04f1.png | /r/LocalLLaMA/comments/1kzjcdf/built_an_open_source_desktop_app_to_easily_play/
Ollama 0.9.0 Supports ability to enable or disable thinking (score: 39) | 2025-05-31T00:02:35 | u/mj3815 | https://github.com/ollama/ollama/releases/tag/v0.9.0 | /r/LocalLLaMA/comments/1kzjhfd/ollama_090_supports_ability_to_enable_or_disable/
I built an API that allows users to create custom text classification models with their own data. Feedback appreciated! (score: 1) [removed] | 2025-05-31T00:35:27 | u/textclf-founder | /r/LocalLLaMA/comments/1kzk5ja/i_built_an_api_that_allows_users_to_create_custom/
all models sux (score: 0)
https://preview.redd.it/yyuw5fm4l04f1.png?width=1159&format=png&auto=webp&s=acb8c4ab4da03f1b1d08075ce89ada009aca5968
https://preview.redd.it/as3wfqm8l04f1.png?width=1269&format=png&auto=webp&s=b1529e1a6a3a6f6424a912c2eb20de301322d1d2
https://preview.redd.it/jc11zyucl04f1.png?width=945&format=png&auto=webp&s=9a0fcec44a954cc96a660b6c54d89ca822fbaf2a
https://preview.redd.it/pe7jeehhl04f1.png?width=967&format=png&auto=webp&s=36e5a9bdac47be16da697517631df9ccf5426617
https://preview.redd.it/lkk4mxuil04f1.png?width=949&format=png&auto=webp&s=9cee7d373431cdad26264f96aaf185bfc3580acc

I will attach the redacted Gemini 2.5 Pro Preview log below if anyone wants to read it (it's still very long and somewhat repetitive; Claude's analysis is decent, it still misses some things, but it's verbose enough as it is).
2025-05-31T00:48:59 | u/Sicarius_The_First | https://www.reddit.com/r/LocalLLaMA/comments/1kzkf8f/all_models_sux/
How much vram is needed to fine tune deepseek r1 locally? And what is the most practical setup for that? (score: 7)
I know it takes more VRAM to fine tune than to inference, but actually how much? I'm thinking of using an M3 Ultra cluster for this task, because NVIDIA GPUs are too expensive to reach enough VRAM. What do you think?
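Back-of-the-envelope: full fine-tuning with Adam in mixed precision is usually budgeted at roughly 16 bytes per parameter (bf16 weights + bf16 gradients + fp32 master copy + two fp32 optimizer moments), while LoRA/QLoRA only need the frozen base plus tiny adapter states. Activations, KV cache, and parallelism overhead are excluded, so treat these as floors:

```python
# Rough memory floors for tuning a 671B-parameter model.
P = 671e9
GiB = 1024**3

full_adam = P * 16 / GiB    # bf16 weights+grads, fp32 master + 2 moments
lora_bf16 = P * 2 / GiB     # frozen bf16 base; adapter states are tiny
qlora_4b  = P * 0.5 / GiB   # frozen 4-bit base (QLoRA-style)

print(f"full fine-tune: ~{full_adam/1024:.1f} TiB")   # ~9.8 TiB
print(f"LoRA on bf16:   ~{lora_bf16:,.0f} GiB")       # ~1,250 GiB
print(f"QLoRA 4-bit:    ~{qlora_4b:,.0f} GiB")        # ~312 GiB
```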
2025-05-31T00:49:27 | u/SpecialistPear755 | https://www.reddit.com/r/LocalLLaMA/comments/1kzkfjv/how_much_vram_is_needed_to_fine_tune_deepseek_r1/
Unlimited Speech to Speech using Moonshine and Kokoro, 100% local, 100% open source (score: 171) | 2025-05-31T01:34:33 | u/paranoidray | https://rhulha.github.io/Speech2Speech/ | /r/LocalLLaMA/comments/1kzlb8g/unlimited_speech_to_speech_using_moonshine_and/
Best General + Coding Model for 3060 12GB (score: 1) [removed] | 2025-05-31T01:37:12 | u/DisgustingBlackChimp | /r/LocalLLaMA/comments/1kzlczo/best_general_coding_model_for_3060_12gb/
I built a game to test if humans can still tell AI apart -- and which models are best at blending in. I just added the new version of Deepseek (score: 1) [removed] | 2025-05-31T01:51:33 | u/No-Device-6554 | https://i.redd.it/2ltst3zvw04f1.png | /r/LocalLLaMA/comments/1kzlmof/i_built_a_game_to_test_if_humans_can_still_tell/
The OpenRouter-hosted Deepseek R1-0528 sometimes generates typos (score: 11)
I'm testing DS R1-0528 on Roo Code. So far, it's impressive in its ability to effectively tackle the requested tasks. However, the code it generates through OpenRouter often includes weird Chinese characters in the middle of variable or function names (e.g. 'ProjectInfo' becomes 'Project极Info'). This causes Roo to fix the code repeatedly. I don't know if it's an embedding problem on OpenRouter's side or an issue with the model itself. Has anybody experienced a similar issue?
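A cheap guardrail while this gets sorted out is to scan generated code for CJK codepoints before an agent applies the patch; a minimal sketch:

```python
# Flag identifiers polluted with CJK characters (e.g. 'Project极Info')
# before applying model-generated code.
import re

CJK = re.compile(r'[\u4e00-\u9fff\u3400-\u4dbf]')  # common CJK blocks

def find_cjk(code: str):
    return [(i + 1, line) for i, line in enumerate(code.splitlines())
            if CJK.search(line)]

sample = "class Project极Info:\n    pass\n"
for lineno, line in find_cjk(sample):
    print(f"line {lineno}: {line!r}")  # line 1: 'class Project极Info:'
```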
2025-05-31T01:56:03 | u/ExcuseAccomplished97 | https://www.reddit.com/r/LocalLLaMA/comments/1kzlps2/the_openrouterhosted_deepseek_r10528_sometimes/
Tips for running a local RAG and llm? (score: 3)
With the help of ChatGPT I stood up a local instance of llama3:instruct on my PC and used Chroma to create a vector database of my TTRPG game system. I broke the documents into 21 txt files: core rules, game master's guide, and then some subsystems; game modes are bigger text files with maybe a couple hundred pages spread across them, and the rest were appendixes of specific rules that are much smaller, thousands of words each. They are just .txt files where each entry has a # Heading to delineate it, nothing else besides text and paragraph spaces.

Anyhow, I set up a subdomain on our website to serve requests from, which uses cloudflared to serve it off my PC (for now). The page that allows users to interact with the LLM asks them for a "context" along with their prompt (like: are you looking for game master advice vs. a specific rule), so I can give that context to the LLM in order to restrict which docs it references. That context is sent separately from the prompt.

At this point it seems to be working fine, but it still hallucinates a good percentage of the time, or sometimes fails to find stuff that's definitely in the docs. My custom instructions tell it how I want responses formatted but aren't super complicated.

TLDR: looking for advice on how to improve the accuracy of responses in my local LLM. Should I be using a different model? Is my approach stupid? I know basically nothing, so any obvious advice helps. I know serving this off my PC is not viable for the long term, but I'm just testing things out.
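One common accuracy lever for a setup like this is to chunk each # Heading entry separately, tag it with metadata, and apply the user-supplied "context" as a hard query filter rather than relying on the model to respect it. A minimal Chroma sketch along those lines; the collection, file, and context names are illustrative:

```python
# Minimal Chroma RAG sketch: one chunk per '# Heading' entry, tagged
# with a context so queries can be filtered deterministically.
import chromadb

client = chromadb.PersistentClient(path="./ttrpg_db")
rules = client.get_or_create_collection("rules")

def add_doc(path: str, context: str):
    text = open(path, encoding="utf-8").read()
    chunks = [c.strip() for c in text.split("\n# ") if c.strip()]
    rules.add(
        documents=chunks,
        metadatas=[{"context": context, "source": path}] * len(chunks),
        ids=[f"{path}-{i}" for i in range(len(chunks))],
    )

add_doc("core_rules.txt", "rules")      # illustrative file names
add_doc("gm_guide.txt", "gm_advice")

hits = rules.query(
    query_texts=["how does initiative work?"],
    n_results=5,
    where={"context": "rules"},         # hard filter, not a polite request
)
print(hits["documents"][0])
```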
2025-05-31T02:06:18 | u/mccoypauley | https://www.reddit.com/r/LocalLLaMA/comments/1kzlwtl/tips_for_running_a_local_rag_and_llm/
Gemma3 on Ollama (score: 1) [removed] | 2025-05-31T02:26:03 | u/Living-Purpose-8428 | /r/LocalLLaMA/comments/1kzma35/gemma3_on_ollama/
Local Agent AI for Spreadsheet Manipulation (Non-Coder Friendly)? (score: 6)
Hey everyone! I'm reaching out because I'm trying to find the best way to use a local agent to manipulate spreadsheet documents, but I'm not a coder. I need something with a GUI (graphical user interface) if possible (a BIG positive for me), but I'm not entirely against CLI if it's the only/best way to get the job done.

Here's what I'm looking for: the AI should be able to handle tasks like data cleaning, formatting, merging sheets, or generating insights from CSV/Excel files. It also needs web search capabilities to pull real-time data or verify information. Ideally, everything would run locally on my machine rather than relying on cloud services, for privacy and out of pure disdain for having a million subscription services. I've tried a bunch of different software and nothing fully fits my needs; n8n is good and close, but has its own problems. I don't need the LLM actually hosted, I've got that covered, as long as it can connect to LM Studio's local API on my machine.

I'm very close to what I need with AnythingLLM, and I just want to say: thank you, u/tcarambat, for releasing the locally hosted version for free! It's what has allowed me to actually use an agent in a meaningful way. But I'm curious: does AnythingLLM have any plans to add spreadsheet manipulation features anytime soon?

I know this has to be possible locally, save for the obvious web search, with some combination of tools. I'd love to hear recommendations or tips from the community. Even if you're not a coder like me, your insights would mean a lot! Thanks in advance everyone!
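For anyone comfortable with a little glue code: LM Studio exposes an OpenAI-compatible endpoint (by default at http://localhost:1234/v1), so a spreadsheet step can be scripted with pandas plus any OpenAI client. A minimal sketch; the file name and model id are placeholders:

```python
# Minimal sketch: clean a CSV with pandas, then ask a local LM Studio
# model (OpenAI-compatible API) to summarize it. Names are placeholders.
import pandas as pd
from openai import OpenAI

df = pd.read_csv("sales.csv")
df = df.drop_duplicates().dropna(how="all")       # basic cleaning
stats = df.describe(include="all").to_string()

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="local-model",                          # whatever LM Studio loaded
    messages=[{"role": "user",
               "content": f"Summarize this spreadsheet:\n{stats}"}],
)
print(resp.choices[0].message.content)
df.to_excel("sales_clean.xlsx", index=False)      # needs openpyxl installed
```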
2025-05-31T02:30:06
https://www.reddit.com/r/LocalLLaMA/comments/1kzmcqh/local_agent_ai_for_spreadsheet_manipulation/
National_Meeting_749
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzmcqh
false
null
t3_1kzmcqh
/r/LocalLLaMA/comments/1kzmcqh/local_agent_ai_for_spreadsheet_manipulation/
false
false
self
6
null
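Since the spreadsheet post above already has LM Studio's local API covered, here is a sketch of how small the glue code can be: it reads a CSV with pandas and asks whatever model is loaded in LM Studio for cleaning suggestions. The file name and model id are placeholders; LM Studio serves an OpenAI-compatible API on port 1234 by default:

```python
# Ask a local LM Studio model about a spreadsheet. No cloud, no subscription.
import pandas as pd
from openai import OpenAI

# LM Studio's local server is OpenAI-compatible; the api_key can be any string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

df = pd.read_csv("sales.csv")             # placeholder file
sample = df.head(20).to_csv(index=False)  # send a small sample, not the whole sheet

reply = client.chat.completions.create(
    model="local-model",  # placeholder: use the id LM Studio shows for your model
    messages=[{
        "role": "user",
        "content": "Here are the first rows of a CSV:\n"
                   f"{sample}\n"
                   "List data-quality problems and suggest cleaning steps.",
    }],
)
print(reply.choices[0].message.content)
```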
Deepseek-r1-0528-qwen3-8b rating justified?
2
Hello
2025-05-31T02:35:01
https://i.redd.it/jypwbwdm414f1.png
ready_to_fuck_yeahh
i.redd.it
1970-01-01T00:00:00
0
{}
1kzmfum
false
null
t3_1kzmfum
/r/LocalLLaMA/comments/1kzmfum/deepseekr10528qwen38b_rating_justified/
false
false
https://b.thumbs.redditm…1whxMRh08MkY.jpg
2
{'enabled': True, 'images': [{'id': 'mt1fOg0Y2iaomVCoGbd5T3pixD7mOJ1CMCq2dMVRU0U', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?width=108&crop=smart&auto=webp&s=a64dfe6e82138f80c2752638b1970e217ac21816', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?width=216&crop=smart&auto=webp&s=841798c023d816d864e57519b411244f30b5d342', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?width=320&crop=smart&auto=webp&s=6a47da775d065b7f84bdfaca21fc0658f9f31a96', 'width': 320}, {'height': 344, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?width=640&crop=smart&auto=webp&s=253bd735de5cddb9ab907af528c245a98b5d0e27', 'width': 640}, {'height': 516, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?width=960&crop=smart&auto=webp&s=c850040e6b5c9abd7330f152cc822206d610ab08', 'width': 960}, {'height': 581, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?width=1080&crop=smart&auto=webp&s=181cd74e896324c4799acca4acf7a9ed4fa2c250', 'width': 1080}], 'source': {'height': 1033, 'url': 'https://preview.redd.it/jypwbwdm414f1.png?auto=webp&s=5d3a124efe0b003e9c7c3aff0a808d1329682a51', 'width': 1919}, 'variants': {}}]}
Q3 is absolute garbage, but we always use q4, is it good?
0
Especially for reasoning into a JSON format (real-world facts, like how a country would react in a situation): do you think it's worth testing an 8B at Q6? Or will a 14B at Q4 always be better? Thank you for the local llamas that you keep in my dreams.
2025-05-31T02:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1kzmt56/q3_is_absolute_garbage_but_we_always_use_q4_is_it/
Osama_Saba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzmt56
false
null
t3_1kzmt56
/r/LocalLLaMA/comments/1kzmt56/q3_is_absolute_garbage_but_we_always_use_q4_is_it/
false
false
self
0
null
Running Deepseek R1 0528 q4_K_M and mlx 4-bit on a Mac Studio M3
70
First: this model has a shockingly small KV Cache. If any of you saw my [post about running Deepseek V3 q4\_K\_M](https://www.reddit.com/r/LocalLLaMA/comments/1jke5wg/m3_ultra_mac_studio_512gb_prompt_and_write_speeds/), you'd have seen that the KV cache buffer in llama.cpp/koboldcpp was 157GB for 32k of context. I expected to see similar here. Not even close. 64k context on this model is barely 8GB. Below is the buffer loading this model directly in llama.cpp with no special options; just specifying 65536 context, a port and a host. That's it. No MLA, no quantized cache. >llama\_kv\_cache\_unified: Metal KV buffer size = 8296.00 MiB >llama\_kv\_cache\_unified: KV self size = 8296.00 MiB, K (f16): 4392.00 MiB, V (f16): 3904.00 MiB Speed-wise, it's a fair bit on the slow side, but if this model is as good as they say it is, I really don't mind. Plus, at the 4-bit range, MLX may be faster, so that will probably help a lot too. Example: \~11,000 token prompt: **llama.cpp server** (no flash attention) **(\~9 minutes)** >prompt eval time = 144330.20 ms / 11090 tokens (13.01 ms per token, **76.84 tokens per second**) eval time = 390034.81 ms / 1662 tokens (234.68 ms per token, **4.26 tokens per second**) total time = 534365.01 ms / 12752 tokens **MLX 4-bit** for the same prompt (\~2.5x speed) **(245 sec or \~4 minutes)**: >2025-05-30 23:06:16,815 - DEBUG - Prompt: **189.462 tokens-per-sec** 2025-05-30 23:06:16,815 - DEBUG - Generation: **11.154 tokens-per-sec** 2025-05-30 23:06:16,815 - DEBUG - Peak memory: 422.248 GB Note: I tried flash attention in llama.cpp, and that went horribly. The prompt processing slowed to an absolute crawl. It would have taken longer to process the prompt than the non -fa run took for the whole prompt + response. Another important note: **when they say not to use System Prompts, they mean it**. I struggled with this model at first, until I finally completely stripped the system prompt out and jammed all my instructions into the user prompt instead. The model became far more intelligent after that. Specifically, if I passed in a system prompt, it would NEVER output the initial <think> tag no matter what I said or did. But if I don't use a system prompt, it always outputs the initial <think> tag appropriately. I haven't had a chance to deep dive into this thing yet to see if running a 4-bit version really harms the output quality or not, but I at least wanted to give a sneak peek into what it looks like running it.
2025-05-31T03:12:50
https://www.reddit.com/r/LocalLLaMA/comments/1kzn4ix/running_deepseek_r1_0528_q4_k_m_and_mlx_4bit_on_a/
SomeOddCodeGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzn4ix
false
null
t3_1kzn4ix
/r/LocalLLaMA/comments/1kzn4ix/running_deepseek_r1_0528_q4_k_m_and_mlx_4bit_on_a/
false
false
self
70
null
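A small sketch of the "no system prompt" workaround described in the post above, aimed at llama.cpp's OpenAI-compatible endpoint. The port is llama-server's default; the instruction text is obviously a placeholder:

```python
# Deepseek R1 0528 reportedly misbehaves when given a system role, so fold
# all instructions into the user turn instead.
import requests

INSTRUCTIONS = "You are a concise assistant. Answer in bullet points."  # placeholder

def chat(user_text: str) -> str:
    merged = f"{INSTRUCTIONS}\n\n{user_text}"
    r = requests.post(
        "http://localhost:8080/v1/chat/completions",  # llama-server default port
        json={
            "messages": [{"role": "user", "content": merged}],  # no system message
            "max_tokens": 1024,
        },
    )
    return r.json()["choices"][0]["message"]["content"]

print(chat("Summarize the tradeoffs of running a q4_K_M quant of a 671B MoE."))
```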
Now with 8GB VRAM. Worth upgrading, for texts only and answering only based on my documents?
1
[removed]
2025-05-31T03:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1kzn8pb/now_with_8gb_vram_worth_upgrading_for_texts_only/
Relevant-Bet-7916
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzn8pb
false
null
t3_1kzn8pb
/r/LocalLLaMA/comments/1kzn8pb/now_with_8gb_vram_worth_upgrading_for_texts_only/
false
false
self
1
null
How many users can an M4 Pro support?
9
Thinking of an all-the-bells-and-whistles M4 Pro, unless there's a better option for the price. Not a super critical workload, but they don't want it to just take a crap all the time from hardware issues either. I am looking to implement some locally hosted AI workflows for a smaller company that deals with some more sensitive information. They don't need a crazy model; something like gemma12b or qwen3 30b would do just fine. How many users can this support, though? I mean, they only have like 7-8 people, but I want some background automations running plus maybe 1-2 users at a time throughout the day.
2025-05-31T04:00:52
https://www.reddit.com/r/LocalLLaMA/comments/1kznz2t/how_many_users_can_an_m4_pro_support/
Cold_Sail_9727
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kznz2t
false
null
t3_1kznz2t
/r/LocalLLaMA/comments/1kznz2t/how_many_users_can_an_m4_pro_support/
false
false
self
9
null
Need help finding the right LLM
1
[removed]
2025-05-31T05:47:08
https://www.reddit.com/r/LocalLLaMA/comments/1kzprtq/need_help_finding_the_right_llm/
Routine-Carrot76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzprtq
false
null
t3_1kzprtq
/r/LocalLLaMA/comments/1kzprtq/need_help_finding_the_right_llm/
false
false
self
1
null
M3 Ultra Binned (256GB, 60-Core) vs Unbinned (512GB, 80-Core) MLX Performance Comparison
101
Hey everyone, I recently decided to invest in an M3 Ultra model for running LLMs, and after a *lot* of deliberation, I wanted to share some results that might help others in the same boat. One of my biggest questions was the actual performance difference between the binned and unbinned M3 Ultra models. It's pretty much impossible for a single person to own and test both machines side-by-side, so there aren't really any direct, apples-to-apples comparisons available online. While there are some results out there (like on the llama.cpp GitHub, where someone compared the 8B model), they didn't really cover my use case: I'm using MLX as my backend and working with much larger models (235B and above). So the available benchmarks weren't all that relevant for me. To be clear, my main reason for getting the M3 Ultra wasn't to run Deepseek models; those are just way too large to use with long context windows, even on the Ultra. My primary goal was to run the Qwen3 235B model. So I'm sharing my own benchmark results comparing 4-bit and 6-bit quantization for the Qwen3 235B model on a decently long context window (\~10k tokens). Hopefully, this will help anyone else who's been stuck with the same questions I had! Let me know if you have questions, or if there's anything else you want to see tested. Just keep in mind that the model sizes are massive, so I might not be able to cover every possible benchmark. *Side note: In the end, I decided to return the 256GB model and stick with the 512GB one. Honestly, 256GB of memory seemed sufficient for most use cases, but since I plan to keep this machine for a while (and also want to experiment with Deepseek models), I went with 512GB. I also think it's worth using the 80-core GPU. The pp (prompt processing) speed difference was bigger than I expected, and for me, that's one of the biggest weaknesses of Apple silicon. Still, thanks to the MoE architecture, the 235B models run at a pretty usable speed!* \--- **M3 Ultra Binned (256GB, 60-Core)** **Qwen3-235B-A22B-4bit-DWQ** prompt\_tokens: 9228 completion\_tokens: 106 total\_tokens: 9334 cached\_tokens: 0 total\_time: 40.09 prompt\_eval\_duration: 35.41 generation\_duration: 4.68 prompt\_tokens\_per\_second: 260.58 generation\_tokens\_per\_second: 22.6 **Qwen3-235B-A22B-6bit-MLX** prompt\_tokens: 9228 completion\_tokens: 82 total\_tokens: 9310 cached\_tokens: 0 total\_time: 43.23 prompt\_eval\_duration: 38.9 generation\_duration: 4.33 prompt\_tokens\_per\_second: 237.2 generation\_tokens\_per\_second: 18.93 **M3 Ultra Unbinned (512GB, 80-Core)** **Qwen3-235B-A22B-4bit-DWQ** prompt\_tokens: 9228 completion\_tokens: 106 total\_tokens: 9334 cached\_tokens: 0 total\_time: 31.33 prompt\_eval\_duration: 26.76 generation\_duration: 4.57 prompt\_tokens\_per\_second: 344.84 generation\_tokens\_per\_second: 23.22 **Qwen3-235B-A22B-6bit-MLX** prompt\_tokens: 9228 completion\_tokens: 82 total\_tokens: 9310 cached\_tokens: 0 total\_time: 32.56 prompt\_eval\_duration: 28.31 generation\_duration: 4.25 prompt\_tokens\_per\_second: 325.96 generation\_tokens\_per\_second: 19.31
2025-05-31T06:09:42
https://www.reddit.com/r/LocalLLaMA/comments/1kzq4fp/m3_ultra_binned_256gb_60core_vs_unbinned_512gb/
cryingneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzq4fp
false
null
t3_1kzq4fp
/r/LocalLLaMA/comments/1kzq4fp/m3_ultra_binned_256gb_60core_vs_unbinned_512gb/
false
false
self
101
null
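For anyone who wants to reproduce numbers like the ones above, a rough sketch with mlx-lm; `verbose=True` prints the same prompt/generation tokens-per-sec lines quoted in the post. The model path is a placeholder for whichever 4-bit or 6-bit MLX conversion you have locally:

```python
# Rough MLX throughput check, mirroring the pp/tg split reported above.
from mlx_lm import load, generate

# Placeholder repo/path: substitute your local 4-bit or 6-bit conversion.
model, tokenizer = load("mlx-community/Qwen3-235B-A22B-4bit-DWQ")

with open("long_prompt.txt") as f:  # ~10k-token prompt to exercise prompt speed
    prompt = f.read()

# verbose=True prints "Prompt: N tokens-per-sec" and "Generation: N tokens-per-sec"
generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
```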
Automated LinkedIn content generation with the help of this community. Thank you everyone!!
1
[removed]
2025-05-31T06:28:58
https://v.redd.it/nx4cd68u924f1
Competitive-Wing1585
v.redd.it
1970-01-01T00:00:00
0
{}
1kzqez1
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nx4cd68u924f1/DASHPlaylist.mpd?a=1751264951%2CZjE3ZmJkYTE0M2I5NWJkODA1MmQyNDBmNmFlZDUxNmYzZDYzZTljM2VjMzcxMjM2NzcwNDYzMmU2ZDA0MWI3OA%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/nx4cd68u924f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/nx4cd68u924f1/HLSPlaylist.m3u8?a=1751264951%2CN2Y2OWE3NzRhNTBlZTM3NDYzNzU1NDNjNTc0ZTBjYjMyZGVmYWUzMDc2NTVmMzQxOGE5ZmRkYmVhY2QxZTVlYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nx4cd68u924f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kzqez1
/r/LocalLLaMA/comments/1kzqez1/automated_linkedin_content_generation_with_the/
false
false
https://external-preview…768693a0bf54e271
1
{'enabled': False, 'images': [{'id': 'OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=e33a6518dc1b2763ef57cdbd826bb594ea73785e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=2f20e0258c08155d13c4215b328de94064d0b384', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=e2b4680340fff327540ef85e0fbceea84d9b116d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=3b9e553133acc46c70bbb955592d650351219cc1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=4674fb80a0dc57bda4a8049951223220ebcad83d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1aff4a2c98b62abff4a3c649744823a7c378b557', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OGV1NDdjOHU5MjRmMW6LmJBeMSUYgh-exM8BuyZz8acjNHMCOmUrd1HczLlQ.png?format=pjpg&auto=webp&s=dfea3d1e613f6bbb20ab5b8c0b408914dee7d71f', 'width': 1920}, 'variants': {}}]}
Installed CUDA drivers for GPU but Ollama still runs 100% on CPU; I don't know what to do, can anyone help?
0
The CUDA drivers are also showing in the terminal, but I'm still not able to GPU-accelerate LLMs like deepseek-r1.
2025-05-31T06:56:18
https://www.reddit.com/r/LocalLLaMA/comments/1kzqu0q/installed_cuda_drivers_for_gpu_but_still_ollama/
bhagwano-ka-bhagwan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzqu0q
false
null
t3_1kzqu0q
/r/LocalLLaMA/comments/1kzqu0q/installed_cuda_drivers_for_gpu_but_still_ollama/
false
false
self
0
null
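Not a confirmed fix for the post above, but a quick way to see where the problem sits: first check that the driver actually exposes the GPU, then check whether Ollama loaded the model into VRAM at all. This assumes a default Ollama install listening on port 11434:

```python
# Two-level diagnostic: driver visibility, then Ollama's own VRAM report.
import subprocess
import requests

# 1) Driver level: nvidia-smi must list the GPU for Ollama to use CUDA.
print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)

# 2) Ollama level: /api/ps lists loaded models; size_vram == 0 means CPU-only.
for m in requests.get("http://localhost:11434/api/ps").json().get("models", []):
    print(m["name"], "VRAM bytes:", m.get("size_vram", 0))
```

If `size_vram` stays at zero while `nvidia-smi` looks fine, the usual suspects are a CPU-only Ollama build/container or a driver/CUDA runtime version mismatch.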
Do you think we'll get the r1 distill for the other qwen3 models?
8
It's been quite a few days now and I'm losing hope. I don't remember how long it took last time, though.
2025-05-31T07:29:07
https://www.reddit.com/r/LocalLLaMA/comments/1kzrbuv/do_you_think_well_get_the_r1_distill_for_the/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzrbuv
false
null
t3_1kzrbuv
/r/LocalLLaMA/comments/1kzrbuv/do_you_think_well_get_the_r1_distill_for_the/
false
false
self
8
null
Getting sick of companies cherry picking their benchmarks when they release a new model
110
I get why they do it. They need to hype up their thing, etc. But c'mon, a bit of academic integrity would go a long way. Every new model comes with the claim that it outcompetes older models that are 10x its size, etc. Like, no. Maybe I'm an old man shaking my fist at clouds here, I don't know.
2025-05-31T07:36:35
https://www.reddit.com/r/LocalLLaMA/comments/1kzrfop/getting_sick_of_companies_cherry_picking_their/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzrfop
false
null
t3_1kzrfop
/r/LocalLLaMA/comments/1kzrfop/getting_sick_of_companies_cherry_picking_their/
false
false
self
110
null
GPULlama3.java --- 𝗚𝗣𝗨-𝗲𝗻𝗮𝗯𝗹𝗲𝗱 𝗝𝗮𝘃𝗮 𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲 𝗲𝗻𝗴𝗶𝗻𝗲 𝗽𝗼𝘄𝗲𝗿𝗲𝗱 𝗯𝘆 𝗧𝗼𝗿𝗻𝗮𝗱𝗼𝗩𝗠
0
2025-05-31T08:01:37
https://github.com/beehive-lab/GPULlama3.java
mikebmx1
github.com
1970-01-01T00:00:00
0
{}
1kzrsn9
false
null
t3_1kzrsn9
/r/LocalLLaMA/comments/1kzrsn9/gpullama3java_𝗚𝗣𝗨𝗲𝗻𝗮𝗯𝗹𝗲𝗱_𝗝𝗮𝘃𝗮_𝗶𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲_𝗲𝗻𝗴𝗶𝗻𝗲/
false
false
https://b.thumbs.redditm…4WeHD8msPM8g.jpg
0
{'enabled': False, 'images': [{'id': 's8r4emDVh9Zk49kXpwkON8lrt_dBcy2Cn8d-PwX03F8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=108&crop=smart&auto=webp&s=051f1fd29e83b03877204c7a61585a887b6f6d4c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=216&crop=smart&auto=webp&s=81037e8a4c33ead5816e5f8b7e879b47b3547380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=320&crop=smart&auto=webp&s=918356d25ff306dfcfdb299108786936a57bf8c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=640&crop=smart&auto=webp&s=114333efebb949ba294cb4a365eba5f89e344050', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=960&crop=smart&auto=webp&s=67cdbdbf5510c5949d6964129dcb924e49726892', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=1080&crop=smart&auto=webp&s=1bca95c88e3b0781481ada152eb0e7f1da1629b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?auto=webp&s=74abf424eccf6125155c1cd039eb7bc6ef3b9049', 'width': 1200}, 'variants': {}}]}
GPU-enabled Llama 3 inference in Java from scratch
40
2025-05-31T08:05:09
https://github.com/beehive-lab/GPULlama3.java
mikebmx1
github.com
1970-01-01T00:00:00
0
{}
1kzrujd
false
null
t3_1kzrujd
/r/LocalLLaMA/comments/1kzrujd/gpuenabled_llama_3_inference_in_java_from_scratch/
false
false
https://b.thumbs.redditm…4WeHD8msPM8g.jpg
40
{'enabled': False, 'images': [{'id': 's8r4emDVh9Zk49kXpwkON8lrt_dBcy2Cn8d-PwX03F8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=108&crop=smart&auto=webp&s=051f1fd29e83b03877204c7a61585a887b6f6d4c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=216&crop=smart&auto=webp&s=81037e8a4c33ead5816e5f8b7e879b47b3547380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=320&crop=smart&auto=webp&s=918356d25ff306dfcfdb299108786936a57bf8c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=640&crop=smart&auto=webp&s=114333efebb949ba294cb4a365eba5f89e344050', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=960&crop=smart&auto=webp&s=67cdbdbf5510c5949d6964129dcb924e49726892', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?width=1080&crop=smart&auto=webp&s=1bca95c88e3b0781481ada152eb0e7f1da1629b8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zGi8MlTX6KdNBAxbzYKKNHv02BqKK2KcEgsGUzULDDk.jpg?auto=webp&s=74abf424eccf6125155c1cd039eb7bc6ef3b9049', 'width': 1200}, 'variants': {}}]}
China is leading open source
2,192
2025-05-31T08:35:25
https://i.redd.it/6stw9ivzw24f1.jpeg
TheLogiqueViper
i.redd.it
1970-01-01T00:00:00
0
{}
1kzsa70
false
null
t3_1kzsa70
/r/LocalLLaMA/comments/1kzsa70/china_is_leading_open_source/
false
false
https://b.thumbs.redditm…UJ0JmciOcq7Y.jpg
2,192
{'enabled': True, 'images': [{'id': 'u2HjiiRhgPI4n-eKfxpJhgGH_d7eS7-G3hJNkYjVYAI', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?width=108&crop=smart&auto=webp&s=a796b5ab3d4fddcdeb8c23008babbaa2e89ed48f', 'width': 108}, {'height': 336, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?width=216&crop=smart&auto=webp&s=4fa294c99b0212c1016a2489a41a3c09a8363985', 'width': 216}, {'height': 498, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?width=320&crop=smart&auto=webp&s=0a8bc10c203e346b2ebe49967e01640bea44c050', 'width': 320}, {'height': 996, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?width=640&crop=smart&auto=webp&s=87af4f2951867765dd0c43808b34253b587103b5', 'width': 640}, {'height': 1494, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?width=960&crop=smart&auto=webp&s=2bdbc6051847dac8c120c568f8a58b6992f32c3b', 'width': 960}, {'height': 1681, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?width=1080&crop=smart&auto=webp&s=446babd67b98fd5b04e0881591603583920e56e7', 'width': 1080}], 'source': {'height': 2007, 'url': 'https://preview.redd.it/6stw9ivzw24f1.jpeg?auto=webp&s=2a214a4885cce1eba52917234cb78f9def79e6d2', 'width': 1289}, 'variants': {}}]}
New idea for benchmark: Code completion predictions
1
[removed]
2025-05-31T08:48:00
https://www.reddit.com/r/LocalLLaMA/comments/1kzsgin/new_idea_for_benchmark_code_completion_predictions/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzsgin
false
null
t3_1kzsgin
/r/LocalLLaMA/comments/1kzsgin/new_idea_for_benchmark_code_completion_predictions/
false
false
self
1
null
Will LLMs be able to solve even easy cryptographic problems after fine tuning?
1
[removed]
2025-05-31T09:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1kzsspk/will_llms_be_able_to_solve_even_easy/
Chemical-Luck492
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzsspk
false
null
t3_1kzsspk
/r/LocalLLaMA/comments/1kzsspk/will_llms_be_able_to_solve_even_easy/
false
false
self
1
null
Built a production-grade Discord bot ecosystem with 11 microservices using local LLaMA 3 8B - zero API costs, complete privacy
1
[removed]
2025-05-31T09:12:05
https://www.reddit.com/r/LocalLLaMA/comments/1kzssrh/built_a_productiongrade_discord_bot_ecosystem/
Dape25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzssrh
false
null
t3_1kzssrh
/r/LocalLLaMA/comments/1kzssrh/built_a_productiongrade_discord_bot_ecosystem/
false
false
self
1
{'enabled': False, 'images': [{'id': 'dIEpZd2FHBnm_A-lIR20mUL9bH77JjXfHKgMd-tG5RE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?width=108&crop=smart&auto=webp&s=826e6ba13fc4d396c29beff7a2316d5c9eed5db0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?width=216&crop=smart&auto=webp&s=201c3779eeca39d5a89a50cb52fe813aab410c7a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?width=320&crop=smart&auto=webp&s=24d107a248188866ebaf0d5a13f4b4237830e3df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?width=640&crop=smart&auto=webp&s=e96d5147e8df181f7edd110211218a3de2715bb9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?width=960&crop=smart&auto=webp&s=4fab82ca84f764d9a5348e201c1546eae4d14bfb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?width=1080&crop=smart&auto=webp&s=875531956350d38f7e63b23a5b378d959cae5678', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-BPcV8qdfuNSnsMJyE6k9THOj8s7IPgJIjotBMEb-BM.jpg?auto=webp&s=48803a928530a3fa4ece4becadf0d05f19e28d9f', 'width': 1200}, 'variants': {}}]}
Can LLMs solve even easy cryptographic problems after fine tuning?
1
[removed]
2025-05-31T09:14:33
https://www.reddit.com/r/LocalLLaMA/comments/1kzstzn/can_llms_solve_even_easy_cryptographic_problems/
Chemical-Luck492
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzstzn
false
null
t3_1kzstzn
/r/LocalLLaMA/comments/1kzstzn/can_llms_solve_even_easy_cryptographic_problems/
false
false
self
1
null
Can current LLMs solve even basic cryptography problems after fine tuning?
1
[removed]
2025-05-31T09:26:26
https://www.reddit.com/r/LocalLLaMA/comments/1kzt049/can_current_llms_solve_even_basic_cryptography/
Chemical-Luck492
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzt049
false
null
t3_1kzt049
/r/LocalLLaMA/comments/1kzt049/can_current_llms_solve_even_basic_cryptography/
false
false
self
1
null
How are Intel gpus for local models
25
Say the B580 plus a Ryzen CPU and lots of RAM. Does anyone have experience with this, and what are your thoughts, especially on Linux, say Fedora? I hope this makes sense; I'm a bit out of my depth.
2025-05-31T10:02:49
https://www.reddit.com/r/LocalLLaMA/comments/1kztjgp/how_are_intel_gpus_for_local_models/
Unusual_Pride_6480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kztjgp
false
null
t3_1kztjgp
/r/LocalLLaMA/comments/1kztjgp/how_are_intel_gpus_for_local_models/
false
false
self
25
null
Which is the best coding AI model to choose in LM Studio?
1
[removed]
2025-05-31T10:15:33
https://www.reddit.com/r/LocalLLaMA/comments/1kztq69/which_is_the_best_coding_ai_model_to_choose_in_lm/
rakarnov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kztq69
false
null
t3_1kztq69
/r/LocalLLaMA/comments/1kztq69/which_is_the_best_coding_ai_model_to_choose_in_lm/
false
false
self
1
null
Nemotron Ultra 235B - how to turn thinking/reasoning off?
4
Hi, I have an M3 Ultra with 88GB VRAM available and I was wondering how useful a low quant of Nemotron Ultra would be. I downloaded UD-IQ2\_XXS from unsloth and loaded it in koboldcpp with a 32k context window just fine. With no context and a simple prompt it generates at 4 to 5 t/s. I just want to try a few one-shots and see what it delivers. However, it is thinking. A lot. At least the thinking makes sense, and I can't see an obvious degradation in quality, which is good. But how can I switch the thinking (or more precisely, the reasoning) off? The model card provides two blocks of python code. But what am I supposed to do with that? Must this be implemented in koboldcpp or llamacpp to work? Or has this already been implemented? If yes, how do I use it? I just tried writing "reasoning off" in the system prompt. This led to thinking, but without the <think> tags in the response.
2025-05-31T10:26:17
https://www.reddit.com/r/LocalLLaMA/comments/1kztvsv/nemotron_ultra_235b_how_to_turn_thinkingreasoning/
doc-acula
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kztvsv
false
null
t3_1kztvsv
/r/LocalLLaMA/comments/1kztvsv/nemotron_ultra_235b_how_to_turn_thinkingreasoning/
false
false
self
4
null
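One hedged suggestion for the Nemotron post above: the Nemotron model cards toggle reasoning with the literal system prompt string "detailed thinking off" (not "reasoning off"), and koboldcpp already exposes an OpenAI-compatible endpoint, so no backend changes should be needed. A sketch, assuming koboldcpp's default port:

```python
# Toggle Nemotron reasoning via its documented magic system prompt string.
import requests

r = requests.post(
    "http://localhost:5001/v1/chat/completions",  # koboldcpp default port
    json={
        "messages": [
            {"role": "system", "content": "detailed thinking off"},
            {"role": "user", "content": "Explain MoE routing in two sentences."},
        ],
        "max_tokens": 256,
    },
)
print(r.json()["choices"][0]["message"]["content"])
```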
Perplexity Pro 1 Year Subscription $10
1
[removed]
2025-05-31T10:28:58
https://www.reddit.com/r/LocalLLaMA/comments/1kztx7k/perplexity_pro_1_year_subscription_10/
Mae8tro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kztx7k
false
null
t3_1kztx7k
/r/LocalLLaMA/comments/1kztx7k/perplexity_pro_1_year_subscription_10/
false
false
self
1
null
Best General + Coding Model for 3060 12GB
1
[removed]
2025-05-31T10:49:07
https://www.reddit.com/r/LocalLLaMA/comments/1kzu848/best_general_coding_model_for_3060_12gb/
DisgustingBlackChimp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzu848
false
null
t3_1kzu848
/r/LocalLLaMA/comments/1kzu848/best_general_coding_model_for_3060_12gb/
false
false
self
1
null
Finetune embedding
1
[removed]
2025-05-31T11:36:57
https://www.reddit.com/r/LocalLLaMA/comments/1kzuzzx/finetune_embedding/
DedeU10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzuzzx
false
null
t3_1kzuzzx
/r/LocalLLaMA/comments/1kzuzzx/finetune_embedding/
false
false
self
1
null
Ai Chatbot
1
[removed]
2025-05-31T11:41:40
https://www.reddit.com/r/LocalLLaMA/comments/1kzv2wc/ai_chatbot/
Strong_Hurry6781
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzv2wc
false
null
t3_1kzv2wc
/r/LocalLLaMA/comments/1kzv2wc/ai_chatbot/
false
false
self
1
null
Surprisingly Fast AI-Generated Kernels We Didn’t Mean to Publish (Yet)
211
2025-05-31T11:41:56
https://crfm.stanford.edu/2025/05/28/fast-kernels.html
Maxious
crfm.stanford.edu
1970-01-01T00:00:00
0
{}
1kzv322
false
null
t3_1kzv322
/r/LocalLLaMA/comments/1kzv322/surprisingly_fast_aigenerated_kernels_we_didnt/
false
false
default
211
null
Finetune embedders
1
[removed]
2025-05-31T11:52:56
https://www.reddit.com/r/LocalLLaMA/comments/1kzv9sz/finetune_embedders/
DedeU10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzv9sz
false
null
t3_1kzv9sz
/r/LocalLLaMA/comments/1kzv9sz/finetune_embedders/
false
false
self
1
null
Now available on Ollama: finance-llama-8b (FP16, Q4)
1
Finance-Llama-8B is a fine-tuned Llama 3.1 8B model trained on 500k examples for tasks like QA, reasoning, sentiment, and NER. It supports multi-turn dialogue and is ideal for financial assistants. https://ollama.com/martain7r/finance-llama-8b Appreciate any thoughts or suggestions. Thanks!
2025-05-31T12:02:13
https://www.reddit.com/r/LocalLLaMA/s/f3DrnBYhaO
martian7r
reddit.com
1970-01-01T00:00:00
0
{}
1kzvfzy
false
null
t3_1kzvfzy
/r/LocalLLaMA/comments/1kzvfzy/now_available_on_ollama_financellama8b_fp16_q4/
false
false
default
1
null
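For anyone who wants to kick the tires on the model above, a minimal call through the Ollama Python client; the model tag is taken from the post, and which quant tags exist is something to check on the model page:

```python
# Minimal test drive of finance-llama-8b via the Ollama Python client.
import ollama

resp = ollama.chat(
    model="martain7r/finance-llama-8b",  # tag from the post; quant tags may vary
    messages=[{
        "role": "user",
        "content": "Classify the sentiment: 'Guidance cut sends shares down 12%.'",
    }],
)
print(resp["message"]["content"])
```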
Demo video of AutoBE, backend vibe coding agent, writing 100% compilation-successful code
1
## AutoBE: Backend Vibe Coding Agent Achieving 100% Compilation Success - Github Repository: https://github.com/wrtnlabs/autobe - Playground Website: https://stackblitz.com/github/wrtnlabs/autobe-playground-stackblitz - Demo Result (Generated backend applications by AutoBE) - [Bullet-in Board System](https://stackblitz.com/edit/autobe-demo-bbs) - [E-Commerce](https://stackblitz.com/edit/autobe-demo-shopping) I previously posted about this same project on Reddit, but back then the Prisma (ORM) agent side only had around 70% success rate. The reason was that the error messages from the Prisma compiler for AI-generated incorrect code were so unintuitive and hard to understand that even I, as a human, struggled to make sense of them. Consequently, the AI agent couldn't perform proper corrections based on these cryptic error messages. However, today I'm back with AutoBE that truly achieves 100% compilation success. I solved the problem of Prisma compiler's unhelpful and unintuitive error messages by directly building the Prisma AST (Abstract Syntax Tree), implementing validation myself, and creating a custom code generator. This approach bypasses the original Prisma compiler's confusing error messaging altogether, enabling the AI agent to generate consistently compilable backend code. --------------------------------------- Introducing AutoBE: The Future of Backend Development We are immensely proud to introduce AutoBE, our revolutionary open-source vibe coding agent for backend applications, developed by Wrtn Technologies. The most distinguished feature of AutoBE is its exceptional 100% success rate in code generation. AutoBE incorporates built-in TypeScript and Prisma compilers alongside OpenAPI validators, enabling automatic technical corrections whenever the AI encounters coding errors. Furthermore, our integrated review agents and testing frameworks provide an additional layer of validation, ensuring the integrity of all AI-generated code. What makes this even more remarkable is that backend applications created with AutoBE can seamlessly integrate with our other open-source projects—Agentica and AutoView—to automate AI agent development and frontend application creation as well. In theory, this enables complete full-stack application development through vibe coding alone. * Alpha Release: 2025-06-01 * Beta Release: 2025-07-01 * Official Release: 2025-08-01 AutoBE currently supports comprehensive requirements analysis and derivation, database design, and OpenAPI document generation (API interface specification). All core features will be completed by the beta release, while the integration with Agentica and AutoView for full-stack vibe coding will be finalized by the official release. We eagerly anticipate your interest and support as we embark on this exciting journey.
2025-05-31T12:07:00
https://v.redd.it/vz1ddc2uj34f1
jhnam88
/r/LocalLLaMA/comments/1kzvj6i/demo_video_of_autobe_backend_vibe_coding_agent/
1970-01-01T00:00:00
0
{}
1kzvj6i
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vz1ddc2uj34f1/DASHPlaylist.mpd?a=1751414824%2CMzM1YzQ1Y2RkZjRkMzg4NzY4YjRjYzE1MzNlZjhkYTI1ZWM0ODNjNzY1NTc5OTdhYzkwYjQyYTQ0MjI5N2M2NA%3D%3D&v=1&f=sd', 'duration': 323, 'fallback_url': 'https://v.redd.it/vz1ddc2uj34f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vz1ddc2uj34f1/HLSPlaylist.m3u8?a=1751414824%2CNWNhODNiNTI3MWRjNTFiMjNmN2NjMTRjNWZiODE0ODY3YThhODkwNTQwNmRlNDFmNmQwZDRkNTZlNGMzZTU3MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vz1ddc2uj34f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kzvj6i
/r/LocalLLaMA/comments/1kzvj6i/demo_video_of_autobe_backend_vibe_coding_agent/
false
false
https://external-preview…5f0440561439fb5f
1
{'enabled': False, 'images': [{'id': 'MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ce324a2367534c80fc992ada86d1573a6530dc3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?width=216&crop=smart&format=pjpg&auto=webp&s=b9c5c91589b2c12d22b746e2dcc51c7ab8fbb3c0', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?width=320&crop=smart&format=pjpg&auto=webp&s=31fef73397391746724e7a1825c99319337a036e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?width=640&crop=smart&format=pjpg&auto=webp&s=7430f1a02b84cd605fa0f08350b804f02561cbcb', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?width=960&crop=smart&format=pjpg&auto=webp&s=57c4ecb3ee3f25332c1a48bc39203a4e3586dfc1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?width=1080&crop=smart&format=pjpg&auto=webp&s=57603c5b39862dca6a0393d208e505989deae4a9', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MXF3bzNlMnVqMzRmMTmeWBe8BijfErq43eeJIkR9awzEfjW1k8_2CCGHvg9p.png?format=pjpg&auto=webp&s=1076e1dc6e8fe7338a41f45dc9c77eb6c13157eb', 'width': 1920}, 'variants': {}}]}
Local LLM folks — how are you defining function specs?
1
[removed]
2025-05-31T12:23:46
https://www.reddit.com/r/LocalLLaMA/comments/1kzvu9d/local_llm_folks_how_are_you_defining_function/
FrostyButterscotch77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzvu9d
false
null
t3_1kzvu9d
/r/LocalLLaMA/comments/1kzvu9d/local_llm_folks_how_are_you_defining_function/
false
false
self
1
{'enabled': False, 'images': [{'id': 'HUR4ZjSsMcPldBF8PlxclI3gg-mjZXBfe4bNavSwrFw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=108&crop=smart&auto=webp&s=4c05659da71aabefa650df1fddb91bdf8888031d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=216&crop=smart&auto=webp&s=490f434fbbbf0f74a171e943297e61758633f730', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=320&crop=smart&auto=webp&s=6f57ef706f7fd8fd0484669113c189fba8da9198', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=640&crop=smart&auto=webp&s=5d72bc65c67e8fa81fbd23e548bba69e1a0bb3e8', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=960&crop=smart&auto=webp&s=d13c9867058e25865b57356a8f76e4c2df202a84', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=1080&crop=smart&auto=webp&s=84f7f12718fed77976904df46b50b7aeb1a2af03', 'width': 1080}], 'source': {'height': 629, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?auto=webp&s=0e18f26214a09b566dc3bc4bcdd70b1cf41d959a', 'width': 1200}, 'variants': {}}]}
AMD Octa-core Ryzen AI Max Pro 385 Processor Spotted On Geekbench: Affordable Strix Halo Chips Are About To Enter The Market
70
2025-05-31T12:41:19
https://wccftech.com/amd-ryzen-ai-max-pro-385-spotted-on-geekbench/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1kzw65c
false
null
t3_1kzw65c
/r/LocalLLaMA/comments/1kzw65c/amd_octacore_ryzen_ai_max_pro_385_processor/
false
false
https://b.thumbs.redditm…pd4vlzn7GjqM.jpg
70
{'enabled': False, 'images': [{'id': 'UB0xih_7GF6izcL4pkLktCeRW7ESe4LzZtsDYEAGV8w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?width=108&crop=smart&auto=webp&s=e3826e7384652a976c8dffa169cacbeb0284bda5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?width=216&crop=smart&auto=webp&s=c92ac53072bb0cdc84d03442d862b01440835e9a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?width=320&crop=smart&auto=webp&s=c71935925d6cc3620a97e11a42bef0d3833a7fac', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?width=640&crop=smart&auto=webp&s=981e9ab36322decfefbeb6831d8e913c9f0d6692', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?width=960&crop=smart&auto=webp&s=3b4f84790b57785a863cb3c07292c74bdd3693a8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?width=1080&crop=smart&auto=webp&s=bed32ed7d97948dc2aa63ded654e62affc2c7484', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/bnCoi_QMP0ucYNmMDpD8YzNjydtxrrZkZROQJhXvr2s.jpg?auto=webp&s=0aa52f3181f824ce490a56632efd5f11b3fdd146', 'width': 1200}, 'variants': {}}]}
Hardware for AI models
1
[removed]
2025-05-31T12:42:21
https://www.reddit.com/r/LocalLLaMA/comments/1kzw6ud/hardware_for_ai_models/
borisr10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzw6ud
false
null
t3_1kzw6ud
/r/LocalLLaMA/comments/1kzw6ud/hardware_for_ai_models/
false
false
self
1
null
Local vs cloud ai models running and relearn
1
[removed]
2025-05-31T12:49:26
https://www.reddit.com/r/LocalLLaMA/comments/1kzwbn3/local_vs_cloud_ai_models_running_and_relearn/
borisr10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzwbn3
false
null
t3_1kzwbn3
/r/LocalLLaMA/comments/1kzwbn3/local_vs_cloud_ai_models_running_and_relearn/
false
false
self
1
null
Is there any voice agent framework in JS, or an equivalent of pipecat? Also, is there any avatar alternative to Simli or Taven?
0
Trying to research options that are good for creating a voice AI agent, optionally with an avatar. Open-source packages preferred. I found pipecat, but its server is in Python; I'd prefer a JS one if any exist. Also, does anyone know of an open-source alternative to Simli or Taven that I could run?
2025-05-31T12:51:33
https://www.reddit.com/r/LocalLLaMA/comments/1kzwd46/is_there_any_voice_agent_framework_in_js_or/
gpt872323
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzwd46
false
null
t3_1kzwd46
/r/LocalLLaMA/comments/1kzwd46/is_there_any_voice_agent_framework_in_js_or/
false
false
self
0
null
Building a product management tool designed for the AI era
2
Most planning tools were built before AI became part of how we build. Product docs are written in one place, technical tasks live somewhere else, and the IDE where the actual code lives is isolated from both. And most of the time, devs are the ones who have to figure it out when things are unclear. After running into this a few too many times over the past 20 years, we started thinking about how we could create a product development platform with an entirely new approach. The idea was to create a tool that helps shape projects with expert guidance and team context, turns them into detailed features and tasks, and keeps that plan synced with the development environment. Something that works more like an extra teammate than another doc to manage. That turned into [Devplan](http://devplan.com). It takes ideas at any level of completeness and turns them into something buildable. It works as the liaison layer between product definition and modern AI-enabled execution. It is already integrated with Linear and Git and takes very little effort to incorporate into your existing workflow. We are in beta and still have a lot we are figuring out as we go. However, if you've ever had to guess what a vague ticket meant or found yourself building from a half-finished doc, we think Devplan could really help you. Also, if you are building with AI, Devplan creates custom, company- and codebase-specific instructions for Cursor or JetBrains Junie. If any of these scenarios describes you or your team, we would love to get you into our beta. We're learning from every bit of feedback we get.
2025-05-31T12:56:28
https://www.reddit.com/r/LocalLLaMA/comments/1kzwgkc/building_a_product_management_tool_designed_for/
eastwindtoday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzwgkc
false
null
t3_1kzwgkc
/r/LocalLLaMA/comments/1kzwgkc/building_a_product_management_tool_designed_for/
false
false
self
2
{'enabled': False, 'images': [{'id': 'eKSJQCGEFQzPwasZkKBHRfQaHfxfGSdD-2RaXDc9bYY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?width=108&crop=smart&auto=webp&s=3b3c930c6a2274ce22def82119bd52f3a2dfa457', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?width=216&crop=smart&auto=webp&s=916c27b81b6f3cbd274bb2f400ef1276b46f76da', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?width=320&crop=smart&auto=webp&s=d28e16b509e0e632138831b60356b5a517e687fc', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?width=640&crop=smart&auto=webp&s=fbf2d7e337ffcfceebec159418d9a5ba214bf9e6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?width=960&crop=smart&auto=webp&s=fa9cfadda0925e758e8e5255bdea8526f53ec6f0', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?width=1080&crop=smart&auto=webp&s=f6a1be38ad95f04c20e293ade7dfd2f8142633f8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/n9yqUBJgILUali99soyGNS0TMHdJqfM7xzhQcN7UIxg.jpg?auto=webp&s=d7bbd9cd38ca487a57edd62961dbef73bfbdcecc', 'width': 1200}, 'variants': {}}]}
A newbie's question about inference and quantization
0
Hi there, a newbie here. For a long time I've had these questions, and I'd appreciate it if someone (especially anyone who has worked on the related parts of llama.cpp/vllm/any other LLM inference engine) could answer: It's been a long time since NVIDIA GPUs (and other hardware) got support for INT8 inference, then FP8 and more recently FP4. Do current llama.cpp or any other inference engines support these features? Another related question is about popular quantizations like GGUF, GGML, EXL, etc. AFAIK, they all have to dequantize at runtime, converting quantized weights back to FP16/BF16 during inference, which causes extra overhead. Could they benefit from FP8/FP4 inference? (For example, GGUF Q8 and Q4KM are almost the go-to options for most dudes, and I wonder if they could run as FP8/FP4 directly.) Again, thanks for your kindness, attention, and answers! I know much of my understanding could be wrong because I'm still learning...
2025-05-31T13:04:03
https://www.reddit.com/r/LocalLLaMA/comments/1kzwm5p/a_newbies_question_about_inference_and/
IngenuityNo1411
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzwm5p
false
null
t3_1kzwm5p
/r/LocalLLaMA/comments/1kzwm5p/a_newbies_question_about_inference_and/
false
false
self
0
null
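To make the "runtime decompression" point in the post above concrete, here is the general idea behind block-wise integer quantization in toy numpy form. This illustrates the scheme, not llama.cpp's actual Q4_K layout (which adds sub-blocks, mins, and 4-bit packing):

```python
# Toy block quantization: store int8 codes plus one float scale per block;
# inference must multiply back (dequantize) before the matmul. FP8/FP4-native
# kernels avoid that multiply-back step by computing in the low-bit format.
import numpy as np

w = np.random.randn(64).astype(np.float32)   # one "block" of weights

scale = np.abs(w).max() / 127.0              # per-block scale factor
q = np.round(w / scale).astype(np.int8)      # the 8-bit codes actually stored

w_hat = q.astype(np.float32) * scale         # runtime dequantization step
print("max abs error:", np.abs(w - w_hat).max())
```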
what if I steal Chatgpt 4o model weights
1
[removed]
2025-05-31T14:27:16
https://www.reddit.com/r/LocalLLaMA/comments/1kzydam/what_if_i_steal_chatgpt_4o_model_weights/
Rare-Programmer-1747
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzydam
false
null
t3_1kzydam
/r/LocalLLaMA/comments/1kzydam/what_if_i_steal_chatgpt_4o_model_weights/
false
false
self
1
null
What if I secretly got access to the ChatGPT 4o model weights?
0
Can I sell the model weights secretly? Is it possible to open-source the model weights? And what is even stopping OpenAI's employees from secretly doing it?
2025-05-31T14:31:17
https://www.reddit.com/r/LocalLLaMA/comments/1kzygjb/what_if_i_secretly_get_access_to_chatgpt_4o_model/
Rare-Programmer-1747
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzygjb
false
null
t3_1kzygjb
/r/LocalLLaMA/comments/1kzygjb/what_if_i_secretly_get_access_to_chatgpt_4o_model/
false
false
self
0
null
Use MCP to run computer use in a VM.
1
[removed]
2025-05-31T15:01:50
https://v.redd.it/l5uhpnqxt44f1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1kzz5l7
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/l5uhpnqxt44f1/DASHPlaylist.mpd?a=1751295725%2CNmU4YzNlMmRmOGMwNzdjYjk4MWUyMDY0MjE4OWVkMDhkYTdjYjU0OTQ4YjViNThhZWM4MzYyNzI3ZjAwODk1Mw%3D%3D&v=1&f=sd', 'duration': 115, 'fallback_url': 'https://v.redd.it/l5uhpnqxt44f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 658, 'hls_url': 'https://v.redd.it/l5uhpnqxt44f1/HLSPlaylist.m3u8?a=1751295725%2COGI0MThlZTNhZTBhNTA5NWEyZDMzODllMGQ2NzFiMzhkOWU2MzExMWU0MTk2MDg3ZDM0NzYzMzUzZDA2NTA2Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/l5uhpnqxt44f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1kzz5l7
/r/LocalLLaMA/comments/1kzz5l7/use_mcp_to_run_computer_use_in_a_vm/
false
false
https://external-preview…0c21b54402254325
1
{'enabled': False, 'images': [{'id': 'dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=108&crop=smart&format=pjpg&auto=webp&s=bcb3610aecd8a191bb6b7c2cb5fefa1295840ff7', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=216&crop=smart&format=pjpg&auto=webp&s=2439e0218393a7a5144e1d22fec90996ba759821', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=320&crop=smart&format=pjpg&auto=webp&s=61f3002f0ac70cac364dac40d104110fd9f34960', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=640&crop=smart&format=pjpg&auto=webp&s=fb4c78a1a49ea65299f0ced7057aa164e738c5ac', 'width': 640}, {'height': 493, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=960&crop=smart&format=pjpg&auto=webp&s=4c89c85574a3912e1d31d81db8390b466211d736', 'width': 960}, {'height': 554, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f6a0a1469652f771a7f655128edf653f65c4f63f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/dWJ5Mzg0Znh0NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?format=pjpg&auto=webp&s=3230212c6ae01a794d431f6f848ef97ad126886e', 'width': 1402}, 'variants': {}}]}
Use MCP to run computer use in a VM.
15
MCP Server with Computer Use Agent runs through Claude Desktop, Cursor, and other MCP clients. As an example use case, let's try using Claude as a tutor to learn how to use Tableau. The MCP Server implementation exposes CUA's full functionality through standardized tool calls. It supports single-task commands and multi-task sequences, giving Claude Desktop direct access to all of Cua's computer control capabilities. This is the first MCP-compatible computer control solution that works directly with Claude Desktop's and Cursor's built-in MCP implementation. Simple configuration in your claude_desktop_config.json or cursor_config.json connects Claude or Cursor directly to your desktop environment. GitHub: https://github.com/trycua/cua
2025-05-31T15:04:34
https://v.redd.it/p51trp5fu44f1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1kzz7t4
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/p51trp5fu44f1/DASHPlaylist.mpd?a=1751295889%2CMmZhMjRhMmJlMWY3ZmQ4ZTM2NDA1NmI2NGM0ZDM0OTlhNzE0NTI2YmE3YzMyNTRmYzZmNmU1ZDRhZjY0YmUwYg%3D%3D&v=1&f=sd', 'duration': 115, 'fallback_url': 'https://v.redd.it/p51trp5fu44f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 658, 'hls_url': 'https://v.redd.it/p51trp5fu44f1/HLSPlaylist.m3u8?a=1751295889%2CZjNmZGI3MjRhY2Y1YjUxZmRiZGQ0ZGIwZmVjMDU3MjViYjcwMWIzMWQ0OGVjNTFiNzY4M2U2M2I0YTE3ZTEzMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/p51trp5fu44f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1kzz7t4
/r/LocalLLaMA/comments/1kzz7t4/use_mcp_to_run_computer_use_in_a_vm/
false
false
https://external-preview…878c686c9ece2543
15
{'enabled': False, 'images': [{'id': 'MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=108&crop=smart&format=pjpg&auto=webp&s=47e2dc9420359333cdafde9bbec0ab1e172ca136', 'width': 108}, {'height': 110, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=216&crop=smart&format=pjpg&auto=webp&s=0c99bd9c8316a743d7809d13ab45553a6bd96dcd', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=320&crop=smart&format=pjpg&auto=webp&s=9362df1791c01d19c83e83058493858c8cd701c4', 'width': 320}, {'height': 328, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=640&crop=smart&format=pjpg&auto=webp&s=a482cbd8d93ccd9f2167797bd294612a12467b34', 'width': 640}, {'height': 493, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=960&crop=smart&format=pjpg&auto=webp&s=c657cc58b4067eb779d50eb0c52ad8cf69b08588', 'width': 960}, {'height': 554, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?width=1080&crop=smart&format=pjpg&auto=webp&s=deea5ff6a3ca07ed9d6238a8c37009f4448eee1a', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MHJhMDJ6dWV1NDRmMSBTlOtFiw3CN60nCKAl7ym9Md7o0mszJARyFHwBNilc.png?format=pjpg&auto=webp&s=b885048f545ec34091a88259ab223b7ec34c7208', 'width': 1402}, 'variants': {}}]}
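The claude_desktop_config.json wiring mentioned in the post above follows the standard MCP shape (a top-level "mcpServers" map of command + args). A sketch that adds an entry on macOS; the launch command for the cua server is a placeholder guess, so check the repo's README for the real entry point:

```python
# Add an MCP server entry to Claude Desktop's config (macOS path shown).
import json
from pathlib import Path

cfg_path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
cfg = json.loads(cfg_path.read_text()) if cfg_path.exists() else {}

cfg.setdefault("mcpServers", {})["cua"] = {
    "command": "python",
    "args": ["-m", "cua_mcp_server"],  # placeholder module name; see trycua/cua docs
}
cfg_path.write_text(json.dumps(cfg, indent=2))
print("wrote", cfg_path)
```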
Open source iOS app for local AI inference - MIT License
2
Run LLMs completely locally on your iOS device. localAI is a native iOS application that enables on-device inference with large language models without requiring an internet connection. Built with Swift and SwiftUI for efficient model inference on Apple Silicon. Repo [https://github.com/sse-97/localAI-by-sse](https://github.com/sse-97/localAI-by-sse) Clone the repository, integrate the LLM.swift package, then build and run. Feel free to give feedback!
2025-05-31T15:18:56
https://www.reddit.com/r/LocalLLaMA/comments/1kzzjpn/open_source_ios_app_for_local_ai_inference_mit/
CrazySymphonie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzzjpn
false
null
t3_1kzzjpn
/r/LocalLLaMA/comments/1kzzjpn/open_source_ios_app_for_local_ai_inference_mit/
false
false
self
2
{'enabled': False, 'images': [{'id': 'gtOEr9J87Hzb41mlKbPN5z639UOP3JB1hOUtvISgyws', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?width=108&crop=smart&auto=webp&s=5a5e7ecf4f67db2c0372e00c6e2919e3e2507856', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?width=216&crop=smart&auto=webp&s=ffff6e1b87305f0e06c0ddb66ab9a0bcd18b1b28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?width=320&crop=smart&auto=webp&s=b410d94a7f25213e53d6cf17156b6cc03e67eb66', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?width=640&crop=smart&auto=webp&s=8b6f45325e7149551e4cee89c14b9e34061a3d6d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?width=960&crop=smart&auto=webp&s=ae37c1a2d106ae90b244f0522091e24e542f95ed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?width=1080&crop=smart&auto=webp&s=acd490eac773308aad93f53f346c032da873e2df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z4YaHtt20FiRs6AQSwhpmCk1jyg1DdufjDl_zg5UAPk.jpg?auto=webp&s=416b9267b9713d962bd46314ec1f0f8280c5a345', 'width': 1200}, 'variants': {}}]}
llama 3 capabilities such as periodic tasks?
1
[removed]
2025-05-31T15:27:06
https://www.reddit.com/r/LocalLLaMA/comments/1kzzqoe/llama_3_capabilities_such_as_periodic_tasks/
Exotic-Media5762
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzzqoe
false
null
t3_1kzzqoe
/r/LocalLLaMA/comments/1kzzqoe/llama_3_capabilities_such_as_periodic_tasks/
false
false
self
1
null
Google lets you run AI models locally
310
https://techcrunch.com/2025/05/31/google-quietly-released-an-app-that-lets-you-download-and-run-ai-models-locally/
2025-05-31T15:29:15
https://www.reddit.com/r/LocalLLaMA/comments/1kzzshu/google_lets_you_run_ai_models_locally/
dnr41418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kzzshu
false
null
t3_1kzzshu
/r/LocalLLaMA/comments/1kzzshu/google_lets_you_run_ai_models_locally/
false
false
self
310
{'enabled': False, 'images': [{'id': '_FBQRawtsVnlTLgg9jFSaAELbacVusil3H8bxH8zdWA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=108&crop=smart&auto=webp&s=9ddd21dcf8ac59bd61fe2319db5ff3b12f11fcdf', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=216&crop=smart&auto=webp&s=05b161e9570c9709dcebc48e2df14523616b5970', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=320&crop=smart&auto=webp&s=6f6f9799027d7c0ee50a0cdeb05ea5e5584f53a8', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=640&crop=smart&auto=webp&s=5b541e1e991ceedd7d2e1f3d67a52f0cad407588', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=960&crop=smart&auto=webp&s=9ed318067e1d0d519c263472722be96c3f47c58e', 'width': 960}], 'source': {'height': 683, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?auto=webp&s=2918f2dbf01fc155b084ab12f210a6bda225789b', 'width': 1024}, 'variants': {}}]}
[Update] Rensa: added full CMinHash + OptDensMinHash support (fast MinHash in Rust for dataset deduplication / LLM fine-tuning)
7
Hey all, quick update on [Rensa](https://github.com/beowolx/rensa), a MinHash library I've been building in Rust with Python bindings. It's focused on speed and works well for deduplicating large text datasets, especially stuff like LLM fine-tuning where near-duplicates are a problem. Originally, I built a custom algorithm called **RMinHash** because existing tools (like `datasketch`) were way too slow for my use cases. RMinHash is a fast, simple alternative to classic MinHash and gave me much better performance on big datasets. Since I last posted, I've added: * **CMinHash** – full implementation based on the paper ("C-MinHash: reducing K permutations to two"). It's highly optimized, uses batching + vectorization. * **OptDensMinHash** – handles densification for sparse data, fills in missing values in a principled way. I ran benchmarks on a 100K-row dataset (`gretelai/synthetic_text_to_sql`) with 256 permutations: * `CMinHash`: 5.47s * `RMinHash`: 5.58s * `OptDensMinHash`: 12.36s * `datasketch`: 92.45s So yeah, still \~10-17x faster than datasketch, depending on the variant. Accuracy-wise, all Rensa variants produce very similar (sometimes identical) results to `datasketch` in terms of deduplicated examples. It's a side project I built out of necessity and I'd love to get some feedback from the community :) The Python API is simple and should feel familiar if you've used datasketch before. GitHub: [https://github.com/beowolx/rensa](https://github.com/beowolx/rensa) Thanks!
2025-05-31T15:33:13
https://github.com/beowolx/rensa
BeowulfBR
github.com
1970-01-01T00:00:00
0
{}
1kzzvzt
false
null
t3_1kzzvzt
/r/LocalLLaMA/comments/1kzzvzt/update_rensa_added_full_cminhash_optdensminhash/
false
false
https://b.thumbs.redditm…k6DqSKPP7tHI.jpg
7
{'enabled': False, 'images': [{'id': '9EoVEpX8dd8gmVZrlAEv4KPn0lD8aSgti45FwF6jKUo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?width=108&crop=smart&auto=webp&s=8574a58256166f65fec102524f76a4e2ee5dfa0e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?width=216&crop=smart&auto=webp&s=9b8286312dee6644feae648f2282b7965d2d915e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?width=320&crop=smart&auto=webp&s=78b44d8c8230a5ffdb22b12f5a8e12d720046430', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?width=640&crop=smart&auto=webp&s=5505689fd81e257fb9e8b4a1756ec434868c1664', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?width=960&crop=smart&auto=webp&s=0a642bc1e091ab80853a6219484aced63139d76c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?width=1080&crop=smart&auto=webp&s=435c0e0f418f063ce059674b9008d93ba5b18f65', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5-lt2FyUCjdvV--0DsoyDNNXFgYjxAoICz1QQO0XTVc.jpg?auto=webp&s=ebff258c9c615bd9ed756c5df846b684993001cc', 'width': 1200}, 'variants': {}}]}
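A quick near-duplicate check against Rensa's README-style API; the constructor and method names here (`RMinHash(num_perm, seed)`, `update` on a token list, `jaccard`) are from memory of the project's README, so verify them there before relying on this:

```python
# Near-duplicate detection sketch with Rensa's RMinHash.
from rensa import RMinHash

def signature(text: str) -> RMinHash:
    m = RMinHash(num_perm=256, seed=42)  # 256 perms to match the benchmark above
    m.update(text.split())               # tokenize however your dedup pipeline does
    return m

a = signature("SELECT name, total FROM orders WHERE total > 100")
b = signature("SELECT name, total FROM orders WHERE total > 1000")

# Estimated Jaccard similarity; near 1.0 means likely near-duplicates.
print("estimated Jaccard:", a.jaccard(b))
```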
Why Do LLMs Lie So Believably?
0
I’ve been messing around with a few local LLaMA models and something keeps tripping me up. They make stuff up with total confidence. It’s not just wrong answers. It’s the delivery. A model will give you a perfectly structured paragraph, cite a fake source, and even format it like it came from an academic journal. Unless you double-check everything, it feels legit. What’s crazy is that it doesn’t hesitate. There’s no “maybe” or “I’m not sure.” It just spits it out like it’s gospel. Is this just baked into how these models work? Like, they’re just predicting the next most likely word, so of course they’re going to fake it when they hit a gap? How does everyone else deal with this? Do you have a process for spotting hallucinations or flagging bad info? And do you think we'll ever reach a point where local models can self-correct without external verification?
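One cheap local process that helps: resample the same question several times at high temperature and compare. Real knowledge tends to be stable across samples, while fabricated citations drift run to run. A minimal sketch below, assuming a local OpenAI-compatible server (llama.cpp, vLLM, etc.) at a placeholder URL with a placeholder model name; both are assumptions, not a specific recommendation.

```python
# SelfCheck-style consistency probe: fabrications vary between resamples.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # assumed local server

def sample_answers(question, n=5):
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="local-model",  # placeholder model name
            messages=[{"role": "user", "content": question}],
            temperature=0.9,      # high temperature so made-up facts diverge
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers

answers = sample_answers("Which paper introduced LoRA, and who are its authors?")
distinct = len(set(answers)) / len(answers)
# Many distinct answers to a factual question is a hallucination red flag.
print("suspicious" if distinct > 0.5 else "probably grounded", answers)
```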
2025-05-31T15:42:01
https://www.reddit.com/r/LocalLLaMA/comments/1l003b6/why_do_llms_lie_so_believably/
Work_for_burritos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l003b6
false
null
t3_1l003b6
/r/LocalLLaMA/comments/1l003b6/why_do_llms_lie_so_believably/
false
false
self
0
null
[OC] Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA)
1
[removed]
2025-05-31T15:50:32
https://www.reddit.com/r/LocalLLaMA/comments/1l00agm/oc_ablating_gemma_3_27b_variants_with_synthetic/
tawnyManticore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l00agm
false
null
t3_1l00agm
/r/LocalLLaMA/comments/1l00agm/oc_ablating_gemma_3_27b_variants_with_synthetic/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=216&crop=smart&auto=webp&s=0b774d9f72bf345e9e39402886649223ad60e4d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=320&crop=smart&auto=webp&s=6c769aa8ce8a2839b46e12de1fd8743d4171f08d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=640&crop=smart&auto=webp&s=c9f49d760efe4ddd92a3a07a57705e5073b56eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=960&crop=smart&auto=webp&s=8666fab577a806da6551b1f2e0ec70f217f6f2fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=1080&crop=smart&auto=webp&s=b3de3b28dfba5fc1615aa5f1c855312805eda01b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?auto=webp&s=6728f96b3a663740abd86d6d7aff692490474d84', 'width': 1280}, 'variants': {}}]}
[OC] Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA)
1
[removed]
2025-05-31T15:56:15
https://www.reddit.com/r/LocalLLaMA/comments/1l00f80/oc_ablating_gemma_3_27b_variants_with_synthetic/
RemarkableMatter4058
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l00f80
false
null
t3_1l00f80
/r/LocalLLaMA/comments/1l00f80/oc_ablating_gemma_3_27b_variants_with_synthetic/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=216&crop=smart&auto=webp&s=0b774d9f72bf345e9e39402886649223ad60e4d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=320&crop=smart&auto=webp&s=6c769aa8ce8a2839b46e12de1fd8743d4171f08d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=640&crop=smart&auto=webp&s=c9f49d760efe4ddd92a3a07a57705e5073b56eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=960&crop=smart&auto=webp&s=8666fab577a806da6551b1f2e0ec70f217f6f2fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=1080&crop=smart&auto=webp&s=b3de3b28dfba5fc1615aa5f1c855312805eda01b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?auto=webp&s=6728f96b3a663740abd86d6d7aff692490474d84', 'width': 1280}, 'variants': {}}]}
Why did Anthropic release MCP as a standard?
0
Was there a capitalist reason? Did they think others would build something like it anyway and it would become the de facto standard, the way the OpenAI API did?
2025-05-31T16:09:51
https://www.reddit.com/r/LocalLLaMA/comments/1l00r5n/why_did_anthropic_release_mcp_as_a_standard/
InsideYork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l00r5n
false
null
t3_1l00r5n
/r/LocalLLaMA/comments/1l00r5n/why_did_anthropic_release_mcp_as_a_standard/
false
false
self
0
null
Giving Qwen 3 0.6B a Toolbelt in the form of MCP Support, Running Locally in Your Browser with Adjustable Thinking!
49
Hello all. I have spent a couple weekends giving the tiny Qwen3 0.6B model the ability to show off its underutilized tool-calling abilities by using remote MCP servers. I am pleasantly surprised at how well it can chain tools. Additionally, I gave it the option to limit how much it can think to avoid the "overthinking" issue reasoning models (especially Qwen) can have. [This implementation was largely inspired by a great article from Zach Mueller outlining just that.](https://muellerzr.github.io/til/end_thinking.html) Also, this project is an adaptation of [Xenova's Qwen3 0.6 WebGPU code in transformers.js-examples](https://github.com/huggingface/transformers.js-examples/tree/main/qwen3-webgpu); it was a solid starting point to work with Qwen3 0.6B. Check it out for yourselves! HF Space Link: [https://huggingface.co/spaces/callbacked/Qwen3-MCP](https://huggingface.co/spaces/callbacked/Qwen3-MCP) Repo: [https://github.com/callbacked/qwen3-mcp](https://github.com/callbacked/qwen3-mcp) *Footnote: With Qwen3 8B having a distillation from R1-0528, I really hope we can see that trickle down to other models including Qwen3 0.6B. Seeing how much more intelligent the other models can get off of R1-0528 would be a cool thing to see in action!*
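For anyone curious how the adjustable-thinking part can work outside the browser, the linked article's trick boils down to: cap the reasoning phase at N tokens, force-close the `<think>` block if the model has not closed it itself, then let it answer. A rough Python sketch with `transformers` is below. The model name is the real Qwen3 0.6B repo, but the token budgets and template handling are illustrative assumptions, and the Space itself does this in transformers.js, not in this code.

```python
# Thinking-budget sketch: let the model reason up to a cap, then force "</think>".
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is 17 * 23?"}],
    tokenize=False, add_generation_prompt=True,
)

# Phase 1: reasoning, hard-capped at 256 new tokens.
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=256)
text = tok.decode(out[0], skip_special_tokens=False)

# Phase 2: if the budget ran out mid-thought, close the block ourselves
# and continue, so the model is pushed straight to the final answer.
if "</think>" not in text:
    text += "\n</think>\n\n"
ids = tok(text, return_tensors="pt").input_ids
final = model.generate(ids, max_new_tokens=256)
print(tok.decode(final[0][ids.shape[1]:], skip_special_tokens=True))
```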
2025-05-31T16:34:06
https://v.redd.it/r495cezy654f1
ajunior7
v.redd.it
1970-01-01T00:00:00
0
{}
1l01bfe
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/r495cezy654f1/DASHPlaylist.mpd?a=1751301259%2CODRlNjE5OGRkY2ZiNzRjZTU3YjUzYWM5Y2IyZTA5ZTUzYTZkZThmOGI3N2Q2ZTZjMDA2OWM2MGIxMThhMTJiMg%3D%3D&v=1&f=sd', 'duration': 85, 'fallback_url': 'https://v.redd.it/r495cezy654f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/r495cezy654f1/HLSPlaylist.m3u8?a=1751301259%2CMmE4N2RlODNlYjQwOGU0NDdiNTQxYzlmYmMxODRiNTQ5ZDc4NWMyYjBkYTk0NDRhOGUxMGY4NmUxODc3ZDkxZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/r495cezy654f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 944}}
t3_1l01bfe
/r/LocalLLaMA/comments/1l01bfe/giving_qwen_3_06b_a_toolbelt_in_the_form_of_mcp/
false
false
https://external-preview…a68e6ac527bb793f
49
{'enabled': False, 'images': [{'id': 'ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?width=108&crop=smart&format=pjpg&auto=webp&s=ef2f98923e0d32c5e1f5c62993c804748604488d', 'width': 108}, {'height': 164, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?width=216&crop=smart&format=pjpg&auto=webp&s=1ce44a3e754cc99fc1b960df5b0a8a1f42e9a6c6', 'width': 216}, {'height': 244, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?width=320&crop=smart&format=pjpg&auto=webp&s=c252e4a0299da2d82f600a12cf32c8b17a645638', 'width': 320}, {'height': 488, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?width=640&crop=smart&format=pjpg&auto=webp&s=aa7dba6f39c7b8eebc217cf689cd085ea2a5853f', 'width': 640}, {'height': 732, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?width=960&crop=smart&format=pjpg&auto=webp&s=e42f1c919dad881312ff31898d791577c64fba08', 'width': 960}, {'height': 823, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6253e2e8386035c596ec03d6615e141fc88b300c', 'width': 1080}], 'source': {'height': 920, 'url': 'https://external-preview.redd.it/ZG81Yjhkenk2NTRmMdgqNWupVXy_ZPAevb2tTQhA9R_THDnUrLckbufzOiAz.png?format=pjpg&auto=webp&s=bcf73a4a055d8de678eb9deef5bc48daf607a81e', 'width': 1206}, 'variants': {}}]}
[VOICE VIBE CODING] Android app to code while afk
0
Hello, This is a continuation of a post I made \~2 months ago, showcasing an **Open Source implementation of Computer Use: "Simple Computer Use"**. We are now making public the main client we use: a **lightweight "Simple Computer Use" Android App**: [https://github.com/pnmartinez/simple-computer-use/releases/tag/0.5.0%2B0.1.0](https://github.com/pnmartinez/simple-computer-use/releases/tag/0.5.0%2B0.1.0) As Cursor does not offer voice control yet (there are several issues open about this in their repos), we did this clunky POC. Our surprise was that we ended up **using it every day**. **Walking the dog, commuting, at the gym...** This has been a productivity boost for us. We are just a team of 2, and the time we have to develop it is limited. But we have decided to publish this early, even in its clunky version, because we know there are use cases out there for this (and we welcome extra help). So let me know what you think; any feedback is welcome. [Simple Computer Use Android App](https://preview.redd.it/6ebvy2y8b54f1.png?width=392&format=png&auto=webp&s=c2de56efbdb4e646397594c306b24141fb02716d)
2025-05-31T16:39:15
https://www.reddit.com/r/LocalLLaMA/comments/1l01frk/voice_vibe_coding_android_app_to_code_while_afk/
nava_7777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l01frk
false
null
t3_1l01frk
/r/LocalLLaMA/comments/1l01frk/voice_vibe_coding_android_app_to_code_while_afk/
false
false
https://b.thumbs.redditm…pZAf72uqeDjc.jpg
0
{'enabled': False, 'images': [{'id': 'hOrlbiuUHyPNMK9eU2w13rY4HX9HQGepkHo3FhKJOwI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?width=108&crop=smart&auto=webp&s=53251397df96a18f97fa166e2c3d7961623fa8d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?width=216&crop=smart&auto=webp&s=74f066259b6a09665a07db6a6f25771ffca1cc0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?width=320&crop=smart&auto=webp&s=b9981fbd0dca64c1b60c0df76a4ac7e6db2a8f34', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?width=640&crop=smart&auto=webp&s=a7dfbf33e433f46e91a8a50701cda4981858a0ce', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?width=960&crop=smart&auto=webp&s=6089b7f7bcfe8789941cdffabcd80b03ecb3924d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?width=1080&crop=smart&auto=webp&s=8775d62e764f71bf3a5f94003244969ebd7c0b50', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Zefz_SSeW27vSoK1OnV87lUGKP24mlpIT0CdP69PUwU.jpg?auto=webp&s=aa444acb75c4b0b986068c3968fe8834cab32d51', 'width': 1200}, 'variants': {}}]}
Context Issue on Long Threads For Reasoning Models
1
Hi Everyone, This is an issue I noticed while extensively using o4-mini and 4o in a long ChatGPT thread related to one of my projects. As the context grew, I noticed o4-mini getting confused while 4o was providing the desired answers. For example, if I asked o4-mini to rewrite an answer with some suggested modifications, it would reply with something like "can you please point to the message you are suggesting to rewrite?" Has anyone else noticed this issue? And if you know why it's happening, can you please clarify the reason for it, as I want to make sure this kind of issue doesn't appear in my application while using the API? Thanks.
2025-05-31T17:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1l01yhr/context_issue_on_long_threads_for_reasoning_models/
PleasantInspection12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l01yhr
false
null
t3_1l01yhr
/r/LocalLLaMA/comments/1l01yhr/context_issue_on_long_threads_for_reasoning_models/
false
false
self
1
null
Is there a way to convert the model downloaded directly from huggingface to blobs, refs, snapshots directory structure?
2
I downloaded the new DeepSeek-R1 from Hugging Face. All the config, JSON and safetensors files are in a single directory. I’m using mlx distributed, and it requires the model to be in this directory structure: models--mlx-community--DeepSeek-R1-0528-4bit/ ├── blobs/ ├── refs/ ├── snapshots/ I don’t want to re-download this huge model again. Is there a way to convert it?
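If no one has a cleaner tool, the hub cache layout is simple enough to recreate by hand: `refs/main` holds a revision string, `snapshots/<revision>/` holds symlinks, and `blobs/` holds the content-addressed files. A hedged sketch below; real caches name blobs by their upstream etag and snapshots by the git commit hash, but most loaders just follow the symlinks, so placeholder names usually work (that part is an assumption worth verifying against mlx distributed).

```python
# Graft a flat download into the ~/.cache/huggingface/hub cache layout.
import hashlib, os, shutil
from pathlib import Path

src = Path.home() / "models" / "DeepSeek-R1-0528-4bit"   # your flat download (assumed path)
dst = Path.home() / ".cache/huggingface/hub/models--mlx-community--DeepSeek-R1-0528-4bit"
snapshot = "local"   # placeholder revision; use the real commit hash if you know it

def sha256_file(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(bufsize):   # stream; these files are huge
            h.update(chunk)
    return h.hexdigest()

snap_dir = dst / "snapshots" / snapshot
(dst / "blobs").mkdir(parents=True, exist_ok=True)
(dst / "refs").mkdir(parents=True, exist_ok=True)
snap_dir.mkdir(parents=True, exist_ok=True)
(dst / "refs" / "main").write_text(snapshot)

for f in src.iterdir():
    digest = sha256_file(f)
    shutil.move(str(f), dst / "blobs" / digest)            # blobs are content-addressed
    os.symlink(f"../../blobs/{digest}", snap_dir / f.name) # snapshot entries are symlinks
```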
2025-05-31T17:04:16
https://www.reddit.com/r/LocalLLaMA/comments/1l020zk/is_there_a_way_to_convert_the_model_downloaded/
No_Conversation9561
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l020zk
false
null
t3_1l020zk
/r/LocalLLaMA/comments/1l020zk/is_there_a_way_to_convert_the_model_downloaded/
false
false
self
2
null
[OC] Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA)
1
[removed]
2025-05-31T17:10:01
https://www.reddit.com/r/LocalLLaMA/comments/1l025xa/oc_ablating_gemma_3_27b_variants_with_synthetic/
davernow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l025xa
false
null
t3_1l025xa
/r/LocalLLaMA/comments/1l025xa/oc_ablating_gemma_3_27b_variants_with_synthetic/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=216&crop=smart&auto=webp&s=0b774d9f72bf345e9e39402886649223ad60e4d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=320&crop=smart&auto=webp&s=6c769aa8ce8a2839b46e12de1fd8743d4171f08d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=640&crop=smart&auto=webp&s=c9f49d760efe4ddd92a3a07a57705e5073b56eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=960&crop=smart&auto=webp&s=8666fab577a806da6551b1f2e0ec70f217f6f2fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=1080&crop=smart&auto=webp&s=b3de3b28dfba5fc1615aa5f1c855312805eda01b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?auto=webp&s=6728f96b3a663740abd86d6d7aff692490474d84', 'width': 1280}, 'variants': {}}]}
Ablating Gemma 3 27B variants with synthetic data from Sonnet 4 (Few-shot vs LoRA)
1
[removed]
2025-05-31T17:17:46
https://www.reddit.com/r/LocalLLaMA/comments/1l02cgv/ablating_gemma_3_27b_variants_with_synthetic_data/
davernow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l02cgv
false
null
t3_1l02cgv
/r/LocalLLaMA/comments/1l02cgv/ablating_gemma_3_27b_variants_with_synthetic_data/
false
false
self
1
{'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=108&crop=smart&auto=webp&s=3b88941d057d599da1826c2b94b2663517e4e023', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=216&crop=smart&auto=webp&s=0b774d9f72bf345e9e39402886649223ad60e4d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=320&crop=smart&auto=webp&s=6c769aa8ce8a2839b46e12de1fd8743d4171f08d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=640&crop=smart&auto=webp&s=c9f49d760efe4ddd92a3a07a57705e5073b56eed', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=960&crop=smart&auto=webp&s=8666fab577a806da6551b1f2e0ec70f217f6f2fb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?width=1080&crop=smart&auto=webp&s=b3de3b28dfba5fc1615aa5f1c855312805eda01b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/bBLGqkw8rbuEnK1NuHV0oZtrH35g0MGtpbFWORelOp4.jpg?auto=webp&s=6728f96b3a663740abd86d6d7aff692490474d84', 'width': 1280}, 'variants': {}}]}
deepseek/deepseek-r1-0528-qwen3-8b stuck on infinite tool loop. Any ideas?
26
I've downloaded the official DeepSeek distillation from their official sources, and it does seem a touch smarter. However, when using tools, it often gets stuck forever trying to use them. Does anyone know why this is happening, and whether there's a workaround?
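Until the root cause is clear, a blunt workaround is to cap tool rounds in whatever agent loop drives the model, then force a tool-free answer. The sketch below is generic: `call_model` and `run_tool` are hypothetical placeholders for your inference client and tool glue, not a real API.

```python
# Hard cap on tool rounds so a looping model is eventually forced to answer.
MAX_TOOL_ROUNDS = 5

def agent_loop(messages, call_model, run_tool):
    for _ in range(MAX_TOOL_ROUNDS):
        reply = call_model(messages, tools="auto")   # hypothetical client call
        messages.append(reply.as_message())          # keep the transcript valid
        if not reply.tool_calls:                     # plain answer: we're done
            return reply.content
        for call in reply.tool_calls:
            messages.append({"role": "tool", "content": run_tool(call)})
    # Budget exhausted: disable tools so the model must answer in prose.
    messages.append({"role": "user", "content": "Answer directly without calling tools."})
    return call_model(messages, tools=None).content
```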
2025-05-31T17:23:44
https://www.reddit.com/r/LocalLLaMA/comments/1l02hmq/deepseekdeepseekr10528qwen38b_stuck_on_infinite/
Substantial_Swan_144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l02hmq
false
null
t3_1l02hmq
/r/LocalLLaMA/comments/1l02hmq/deepseekdeepseekr10528qwen38b_stuck_on_infinite/
false
false
self
26
null
Best models to try on 96gb gpu?
47
RTX pro 6000 Blackwell arriving next week. What are the top local coding and image/video generation models I can try? Thanks!
2025-05-31T17:49:48
https://www.reddit.com/r/LocalLLaMA/comments/1l033vh/best_models_to_try_on_96gb_gpu/
sc166
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l033vh
false
null
t3_1l033vh
/r/LocalLLaMA/comments/1l033vh/best_models_to_try_on_96gb_gpu/
false
false
self
47
null
Why he think he Claude 3 Opus ?
0
https://preview.redd.it/… it was Qwen3 ?
2025-05-31T17:59:25
https://www.reddit.com/r/LocalLLaMA/comments/1l03btj/why_he_think_he_claude_3_opus/
presidentbidden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l03btj
false
null
t3_1l03btj
/r/LocalLLaMA/comments/1l03btj/why_he_think_he_claude_3_opus/
false
false
https://a.thumbs.redditm…dl4BXS9niff0.jpg
0
null
LLM Extension for Command Palette: A way to chat with LLM without opening new windows
10
After my [last post](https://www.reddit.com/r/LocalLLaMA/comments/1kfxl36/proof_of_concept_ollama_chat_in_powertoys_command) got some nice feedback on what was just a small project, it motivated me to put this [on Microsoft store](https://apps.microsoft.com/detail/9NPK6KSDLC81) and also on winget, which means the extension can now be installed directly from the [PowerToys Command Palette](https://learn.microsoft.com/en-us/windows/powertoys/command-palette/overview) install extension command! To be honest, I first made this project just so that I wouldn't have to open and manage a new window when talking to chatbots, but it seems others also like to have something like this, so here it is and I'm glad to be able to make it available for more people. On top of that, apart from chatting with LLMs through Ollama in the initial prototype, it is now also able to use OpenAI, Google, and Mistral services, and to my surprise more people I've talked to prefer Google Gemini over other services (or is it just because of the recent 2.5 Pro/Flash release?). And here is the open-sourced code: [LioQing/llm-extension-for-cmd-pal: An LLM extension for PowerToys Command Palette](https://github.com/LioQing/llm-extension-for-cmd-pal).
2025-05-31T18:02:51
https://v.redd.it/54dvyzcfo54f1
GGLio
/r/LocalLLaMA/comments/1l03f2h/llm_extension_for_command_palette_a_way_to_chat/
1970-01-01T00:00:00
0
{}
1l03f2h
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/54dvyzcfo54f1/DASHPlaylist.mpd?a=1751436176%2CNWZjMjM1MjFmYzg4YzQ3ZDc5OTI2ZjMyYzM0ZGQzZGYxODRhYmIxOWU3NmE2MzlkZGVlNGYxMDk1ZjdmODRkMQ%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/54dvyzcfo54f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/54dvyzcfo54f1/HLSPlaylist.m3u8?a=1751436176%2CN2RlMGIzZWI2OWM4OWIzMWY0MjcwZWQzYmJhMDE5OGY2ZWJkNzI4MGEwMzFkNDljMTkyOTNjY2FjOTBjZmQ3Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/54dvyzcfo54f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l03f2h
/r/LocalLLaMA/comments/1l03f2h/llm_extension_for_command_palette_a_way_to_chat/
false
false
https://external-preview…863034d020557219
10
{'enabled': False, 'images': [{'id': 'ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?width=108&crop=smart&format=pjpg&auto=webp&s=d5270174b630e3fffd0e7dfd239aff4bec3848d5', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?width=216&crop=smart&format=pjpg&auto=webp&s=b7bf541b41d0ca232407b974dc2da7af042faf12', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?width=320&crop=smart&format=pjpg&auto=webp&s=edc44176cb1839df10137b6e64658c5298ddb412', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?width=640&crop=smart&format=pjpg&auto=webp&s=86681f1e2c56f9d3abe241671da8ce202d2a850a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?width=960&crop=smart&format=pjpg&auto=webp&s=ffa6055a798851837b2248c2f9aebcfef8aafe84', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=809d97beea2fa6d81726e86050eafaf784a1ab7f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZWduanl1Y2ZvNTRmMYoemgkJP9kpJlL4F7uhfpuBmeMH1-UkrRZT_-5NJ7bo.png?format=pjpg&auto=webp&s=e60073798602eb149ecd5c3d831172441d872049', 'width': 1920}, 'variants': {}}]}
"Fill in the middle" video generation?
9
My dad has been taking photos when he goes hiking. He always frames them the same, and has taken photos for every season over the course of a few years. Can you guys recommend a video generator that can "fill in the middle" such that I can produce a video in between each of the photos?
2025-05-31T18:06:33
https://www.reddit.com/r/LocalLLaMA/comments/1l03iep/fill_in_the_middle_video_generation/
randomqhacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l03iep
false
null
t3_1l03iep
/r/LocalLLaMA/comments/1l03iep/fill_in_the_middle_video_generation/
false
false
self
9
null
Regarding Hardcoded GGML Tensor Name Length Limit (GGML_MAX_NAME)
1
[removed]
2025-05-31T18:26:33
https://www.reddit.com/r/LocalLLaMA/comments/1l03z9d/regarding_hardcoded_ggml_tensor_name_length_limit/
Swimming-Market7717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l03z9d
false
null
t3_1l03z9d
/r/LocalLLaMA/comments/1l03z9d/regarding_hardcoded_ggml_tensor_name_length_limit/
false
false
self
1
null