Dataset schema (column: type, observed range):
- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k
Demo Video of AutoBE, Backend Vibe Coding Agent Achieving 100% Compilation Success (Open Source)
39
## AutoBE: Backend Vibe Coding Agent Achieving 100% Compilation Success - GitHub Repository: https://github.com/wrtnlabs/autobe - Playground Website: https://stackblitz.com/github/wrtnlabs/autobe-playground-stackblitz - Demo Results (backend applications generated by AutoBE) - [Bulletin Board System](https://stackblitz.com/edit/autobe-demo-bbs) - [E-Commerce](https://stackblitz.com/edit/autobe-demo-shopping) I previously posted about this same project on Reddit, but back then the Prisma (ORM) agent side only had around a 70% success rate. The reason was that the Prisma compiler's error messages for AI-generated incorrect code were so unintuitive and hard to understand that even I, as a human, struggled to make sense of them. Consequently, the AI agent couldn't perform proper corrections based on these cryptic error messages. However, today I'm back with an AutoBE that truly achieves 100% compilation success. I solved the problem of the Prisma compiler's unhelpful and unintuitive error messages by directly building the Prisma AST (Abstract Syntax Tree), implementing validation myself, and creating a custom code generator. This approach bypasses the original Prisma compiler's confusing error messaging altogether, enabling the AI agent to generate consistently compilable backend code. --------------------------------------- Introducing AutoBE: The Future of Backend Development We are immensely proud to introduce AutoBE, our revolutionary open-source vibe coding agent for backend applications, developed by Wrtn Technologies. The most distinguished feature of AutoBE is its exceptional 100% success rate in code generation. AutoBE incorporates built-in TypeScript and Prisma compilers alongside OpenAPI validators, enabling automatic technical corrections whenever the AI encounters coding errors. Furthermore, our integrated review agents and testing frameworks provide an additional layer of validation, ensuring the integrity of all AI-generated code. What makes this even more remarkable is that backend applications created with AutoBE can seamlessly integrate with our other open-source projects—Agentica and AutoView—to automate AI agent development and frontend application creation as well. In theory, this enables complete full-stack application development through vibe coding alone. * Alpha Release: 2025-06-01 * Beta Release: 2025-07-01 * Official Release: 2025-08-01 AutoBE currently supports comprehensive requirements analysis and derivation, database design, and OpenAPI document generation (API interface specification). All core features will be completed by the beta release, while the integration with Agentica and AutoView for full-stack vibe coding will be finalized by the official release. We eagerly anticipate your interest and support as we embark on this exciting journey.
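AutoBE itself is written in TypeScript; purely to make the "build the AST yourself, validate it, then generate the Prisma schema" idea concrete, here is a minimal Python sketch with made-up names. It is an illustration of the approach, not AutoBE's actual API.

```python
# Conceptual sketch: represent the schema as data, validate it with readable errors,
# and only emit Prisma DSL once validation passes. Hypothetical names, for illustration.
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    type: str                         # e.g. "String", "Int", "DateTime"
    relation_to: str | None = None    # target model name for relation fields

@dataclass
class Model:
    name: str
    columns: list[Column] = field(default_factory=list)

def validate(models: list[Model]) -> list[str]:
    """Return human-readable errors instead of cryptic compiler output."""
    known = {m.name for m in models}
    errors = []
    for m in models:
        for c in m.columns:
            if c.relation_to and c.relation_to not in known:
                errors.append(
                    f"{m.name}.{c.name}: relation target '{c.relation_to}' does not exist; "
                    f"known models are {sorted(known)}"
                )
    return errors

def generate_prisma(models: list[Model]) -> str:
    """Emit Prisma schema text from the validated in-memory representation."""
    blocks = []
    for m in models:
        lines = [f"model {m.name} {{"]
        for c in m.columns:
            lines.append(f"  {c.name} {c.relation_to or c.type}")
        lines.append("}")
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

models = [Model("Article", [Column("id", "String"), Column("author", "", relation_to="User")])]
errors = validate(models)
print("\n".join(errors) if errors else generate_prisma(models))
```

Run as-is, this prints a friendly "relation target 'User' does not exist" message rather than a compiler stack trace, which is the kind of feedback an agent can actually act on.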
2025-05-31T18:38:56
https://v.redd.it/f2df0y0jw54f1
jhnam88
/r/LocalLLaMA/comments/1l049hr/demo_video_of_autobe_backend_vibe_coding_agent/
1970-01-01T00:00:00
0
{}
1l049hr
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/f2df0y0jw54f1/DASHPlaylist.mpd?a=1751438340%2CNjk2NWQyZjkyNmQ1MzhhODRhMzAwYTA0ZDBmY2YwNGM5NDg0ZDQ2N2Q5NzA1NWE4ZWQzYTU4NTYxZjFkMzdhMQ%3D%3D&v=1&f=sd', 'duration': 323, 'fallback_url': 'https://v.redd.it/f2df0y0jw54f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/f2df0y0jw54f1/HLSPlaylist.m3u8?a=1751438340%2CZjVmZWI5MDhmNmNiNGZkNjE2ZjA1OTU1MGZiOTUzY2U2MmUwZDlmODUzZTYzMDkyOTFiZmE3OTcxMjBiMjE4MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/f2df0y0jw54f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l049hr
/r/LocalLLaMA/comments/1l049hr/demo_video_of_autobe_backend_vibe_coding_agent/
false
false
https://external-preview…b21dc6e7a3703020
39
{'enabled': False, 'images': [{'id': 'a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?width=108&crop=smart&format=pjpg&auto=webp&s=a08fdf3f483e3a42ed140d0d75fddda5f6739be6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?width=216&crop=smart&format=pjpg&auto=webp&s=b2ccd7a8bdf2eefce49c07ed328d65d865d48ed6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?width=320&crop=smart&format=pjpg&auto=webp&s=2631ac9c40817403304506c53f763e617bd08e0b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?width=640&crop=smart&format=pjpg&auto=webp&s=dfd407089ba71b99a81478da2b717c2cac4611b0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?width=960&crop=smart&format=pjpg&auto=webp&s=34d8c536a824f6d80e11fb66e034321c98a9a5d5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b37ea43a453599064af924d311ba8e8235819f54', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/a2RzcmN3MGp3NTRmMQcy6PVwRQbV7yy14JYjj4jOMAMqB9rDPOOSK6pFaFzH.png?format=pjpg&auto=webp&s=d1a8574b7309b6993c2f1e2312ef5ab799a17ded', 'width': 1920}, 'variants': {}}]}
llama-server, gemma3, 32K context *and* speculative decoding on a 24GB GPU
74
llama.cpp keeps cooking! Draft model support with SWA landed this morning and early tests show up to 30% improvements in performance. Fitting it all on a single 24GB GPU was tight. The 4b as a draft model had a high enough acceptance rate to make a performance difference. Generating code had the best speed ups and creative writing got slower. Tested on dual 3090s: ### 4b draft model | prompt | n | tok/sec | draft_n | draft_accepted | ratio | Δ % | |--------|---|---------|---------|----------------|-------|-----| | create a one page html snake game in javascript | 1542 | 49.07 | 1422 | 956 | 0.67 | 26.7% | | write a snake game in python | 1904 | 50.67 | 1709 | 1236 | 0.72 | 31.6% | | write a story about a dog | 982 | 33.97 | 1068 | 282 | 0.26 | -14.4% | Scripts and configurations can be found on [llama-swap's wiki](https://github.com/mostlygeek/llama-swap/wiki/gemma3-27b-100k-context) llama-swap config: ```yaml macros: "server-latest": /path/to/llama-server/llama-server-latest --host 127.0.0.1 --port ${PORT} --flash-attn -ngl 999 -ngld 999 --no-mmap # quantize KV cache to Q8, increases context but # has a small effect on perplexity # https://github.com/ggml-org/llama.cpp/pull/7412#issuecomment-2120427347 "q8-kv": "--cache-type-k q8_0 --cache-type-v q8_0" "gemma3-args": | --model /path/to/models/gemma-3-27b-it-q4_0.gguf --temp 1.0 --repeat-penalty 1.0 --min-p 0.01 --top-k 64 --top-p 0.95 models: # fits on a single 24GB GPU w/ 100K context # requires Q8 KV quantization "gemma": env: # 3090 - 35 tok/sec - "CUDA_VISIBLE_DEVICES=GPU-6f0" # P40 - 11.8 tok/sec #- "CUDA_VISIBLE_DEVICES=GPU-eb1" cmd: | ${server-latest} ${q8-kv} ${gemma3-args} --ctx-size 102400 --mmproj /path/to/models/gemma-mmproj-model-f16-27B.gguf # single GPU w/ draft model (lower context) "gemma-fit": env: - "CUDA_VISIBLE_DEVICES=GPU-6f0" cmd: | ${server-latest} ${q8-kv} ${gemma3-args} --ctx-size 32000 --ctx-size-draft 32000 --model-draft /path/to/models/gemma-3-4b-it-q4_0.gguf --draft-max 8 --draft-min 4 # Requires 30GB VRAM for 100K context and non-quantized cache # - Dual 3090s, 38.6 tok/sec # - Dual P40s, 15.8 tok/sec "gemma-full": env: # 3090 - 38 tok/sec - "CUDA_VISIBLE_DEVICES=GPU-6f0,GPU-f10" # P40 - 15.8 tok/sec #- "CUDA_VISIBLE_DEVICES=GPU-eb1,GPU-ea4" cmd: | ${server-latest} ${gemma3-args} --ctx-size 102400 --mmproj /path/to/models/gemma-mmproj-model-f16-27B.gguf #-sm row # Requires: 35GB VRAM for 100K context w/ 4b model # with 4b as a draft model # note: --mmproj not compatible with draft models "gemma-draft": env: # 3090 - 38 tok/sec - "CUDA_VISIBLE_DEVICES=GPU-6f0,GPU-f10" cmd: | ${server-latest} ${gemma3-args} --ctx-size 102400 --model-draft /path/to/models/gemma-3-4b-it-q4_0.gguf --ctx-size-draft 102400 --draft-max 8 --draft-min 4 ```
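If you want to sanity-check the draft-model speedup on your own box, a small script against llama-server's OpenAI-compatible endpoint gives rough tok/sec numbers. This is a sketch: the host, port, and model name below are placeholders to match your llama-swap setup.

```python
# Rough tokens/sec measurement against a running llama-server (OpenAI-compatible API).
import time
import requests

URL = "http://127.0.0.1:8080/v1/chat/completions"  # placeholder; match your proxy/server port
payload = {
    "model": "gemma-draft",  # llama-swap model name from the config above
    "messages": [{"role": "user", "content": "write a snake game in python"}],
    "max_tokens": 512,
}

start = time.time()
resp = requests.post(URL, json=payload, timeout=600).json()
elapsed = time.time() - start

completion_tokens = resp["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/sec")
```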
2025-05-31T19:32:46
https://www.reddit.com/r/LocalLLaMA/comments/1l05hpu/llamaserver_gemma3_32k_context_and_speculative/
No-Statement-0001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l05hpu
false
null
t3_1l05hpu
/r/LocalLLaMA/comments/1l05hpu/llamaserver_gemma3_32k_context_and_speculative/
false
false
self
74
{'enabled': False, 'images': [{'id': 'Gl3gNdSGmTRUgVuThHldNFN7ixhImdAgLgxGF5XRiAo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?width=108&crop=smart&auto=webp&s=a7d8b32c8d708477fad9966f0b4311e5b0bdbd4a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?width=216&crop=smart&auto=webp&s=44f47b1e7148cf2b298fd72edf9b547103eaf09d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?width=320&crop=smart&auto=webp&s=896864e87f62d278a58d5a3becdfd8e068e56dff', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?width=640&crop=smart&auto=webp&s=3c65c6c0c3b059a1f6e8598a988184aa05cd5014', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?width=960&crop=smart&auto=webp&s=41416e38d0675e79570c4996626edf50d1e5c7e9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?width=1080&crop=smart&auto=webp&s=140af93dcd2585409b398b44a8c3482b5766c567', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WK_qIQEkXzl-T5spAFV6a7EN0d9D-ctFoLg6sWpMc4U.jpg?auto=webp&s=8cb08a9677bca4645cf9a9f50f2b4dfb0513f940', 'width': 1200}, 'variants': {}}]}
Open-Source TTS That Beats ElevenLabs? Chatterbox TTS by Resemble AI
0
Resemble AI just released Chatterbox, an open-source TTS model that might be the most powerful alternative to ElevenLabs to date. It's fast, expressive, and surprisingly versatile. Highlights: → Emotion Control: Fine-tune speech expressiveness with a single parameter. From deadpan to dramatic—works out of the box. → Zero-Shot Voice Cloning: Clone any voice with just a few seconds of reference audio. No finetuning needed. → Ultra Low Latency: Real-time inference (<200ms), which makes it a great fit for conversational AI and interactive media. → Built-in Watermarking: Perceptual audio watermarking ensures attribution without degrading quality—super relevant for ethical AI. → Human Preference Evaluation: In blind tests, 63.75% of listeners preferred Chatterbox over ElevenLabs in terms of audio quality and emotion. Curious to hear what others think. Could this be the open-source ElevenLabs killer we've been waiting for? Anyone already integrating it into production?
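If you want to kick the tires, basic usage looks roughly like the sketch below. Treat the exact names (`ChatterboxTTS.from_pretrained`, `audio_prompt_path`, `exaggeration`) as assumptions to verify against the repo's README.

```python
# Rough usage sketch for Chatterbox TTS; verify the exact API against the project README.
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

# Plain synthesis
wav = model.generate("Open-source TTS is getting surprisingly good.")
ta.save("output.wav", wav, model.sr)

# Zero-shot cloning from a few seconds of reference audio, with the emotion knob turned up
wav = model.generate(
    "Now with a cloned voice and a bit more drama.",
    audio_prompt_path="reference_voice.wav",  # your own short reference clip
    exaggeration=0.7,                         # expressiveness control mentioned above
)
ta.save("cloned.wav", wav, model.sr)
```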
2025-05-31T19:37:34
https://www.reddit.com/r/LocalLLaMA/comments/1l05lhj/opensource_tts_that_beats_elevenlabs_chatterbox/
mahimairaja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l05lhj
false
null
t3_1l05lhj
/r/LocalLLaMA/comments/1l05lhj/opensource_tts_that_beats_elevenlabs_chatterbox/
false
false
self
0
null
Recommendations for a model for nsfw rp with 8gb gpu?
1
[removed]
2025-05-31T19:49:48
https://www.reddit.com/r/LocalLLaMA/comments/1l05ve9/recommendations_for_a_model_for_nsfw_rp_with_8gb/
Proper-Customer7286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l05ve9
false
null
t3_1l05ve9
/r/LocalLLaMA/comments/1l05ve9/recommendations_for_a_model_for_nsfw_rp_with_8gb/
false
false
nsfw
1
null
Has anyone managed to get a non Google AI to run
38
In the new Google Edge Gallery app? I'm wondering if DeepSeek or a version of it can be run locally with it?
2025-05-31T19:51:20
https://i.redd.it/8yt7shdl964f1.png
Gabrielmorrow
i.redd.it
1970-01-01T00:00:00
0
{}
1l05wpz
false
null
t3_1l05wpz
/r/LocalLLaMA/comments/1l05wpz/has_anyone_managed_to_get_a_non_google_ai_to_run/
false
false
https://a.thumbs.redditm…ZSksDmfxvi48.jpg
38
{'enabled': True, 'images': [{'id': '45KWKL8csIb8OerruV55VVRy2HArFuq64kszbwYgDqM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8yt7shdl964f1.png?width=108&crop=smart&auto=webp&s=b2dc8e7df010f065ca77c717be0ec88a1eed168c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8yt7shdl964f1.png?width=216&crop=smart&auto=webp&s=88891cc2402ccdf687a06cab8a7dec9420925b8f', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/8yt7shdl964f1.png?width=320&crop=smart&auto=webp&s=67bdddd0a9017cf41c434abd8dd744d76b15a4ae', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/8yt7shdl964f1.png?width=640&crop=smart&auto=webp&s=739e2d5a72bad15d28de3d6a9992852dc9157d9f', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/8yt7shdl964f1.png?width=960&crop=smart&auto=webp&s=c3455ce54d987212049706ff8c5a7c6f2329cf69', 'width': 960}], 'source': {'height': 2244, 'url': 'https://preview.redd.it/8yt7shdl964f1.png?auto=webp&s=f91ab3403bf572a8cca9bbace72b52880983b382', 'width': 1008}, 'variants': {}}]}
Recommendations for a model for nsfw rp with 8gb gpu?
1
[removed]
2025-05-31T19:53:28
https://www.reddit.com/r/LocalLLaMA/comments/1l05yfv/recommendations_for_a_model_for_nsfw_rp_with_8gb/
MacaroniBee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l05yfv
false
null
t3_1l05yfv
/r/LocalLLaMA/comments/1l05yfv/recommendations_for_a_model_for_nsfw_rp_with_8gb/
false
false
nsfw
1
null
Speaker separation and transcription
8
Is there any software, llm or example code to do speaker separation and transcription from a mono recording source?
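A common local recipe is pyannote for diarization plus a Whisper variant for transcription, then matching segments by time overlap. Here's a rough sketch under those assumptions; the model names and merge logic are illustrative, and pyannote's gated models need a Hugging Face token.

```python
# Sketch: diarize a mono recording with pyannote, transcribe with faster-whisper,
# then assign each transcript segment to the speaker whose turn overlaps it most.
from pyannote.audio import Pipeline
from faster_whisper import WhisperModel

AUDIO = "meeting.wav"  # placeholder path

diarization = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="hf_..."  # gated model, token required
)(AUDIO)

segments, _ = WhisperModel("medium", device="cuda").transcribe(AUDIO)

# (start, end, speaker_label) for every diarized speaker turn
turns = [(t.start, t.end, spk) for t, _, spk in diarization.itertracks(yield_label=True)]

for seg in segments:
    # pick the speaker turn with the largest time overlap with this transcript segment
    best = max(turns, key=lambda t: min(t[1], seg.end) - max(t[0], seg.start), default=None)
    speaker = best[2] if best else "unknown"
    print(f"[{speaker}] {seg.text.strip()}")
```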
2025-05-31T19:53:49
https://www.reddit.com/r/LocalLLaMA/comments/1l05ypt/speaker_separation_and_transcription/
Khipu28
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l05ypt
false
null
t3_1l05ypt
/r/LocalLLaMA/comments/1l05ypt/speaker_separation_and_transcription/
false
false
self
8
null
Most powerful < 7b parameters model at the moment?
118
I would like to know which is the best model less than 7b currently available.
2025-05-31T20:14:03
https://www.reddit.com/r/LocalLLaMA/comments/1l06f7r/most_powerful_7b_parameters_model_at_the_moment/
ventilador_liliana
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l06f7r
false
null
t3_1l06f7r
/r/LocalLLaMA/comments/1l06f7r/most_powerful_7b_parameters_model_at_the_moment/
false
false
self
118
null
The SRE’s Guide to High Availability Open WebUI Deployment Architecture
11
Based on my real-world experience running Open WebUI for thousands of concurrent users, this guide covers best practices for deploying stateless Open WebUI containers (Kubernetes Pods, Swarm services, ECS, etc.), Redis, external embedding and vector databases, and putting it all behind a load balancer that understands long-lived WebSocket upgrades. When you’re ready to graduate from a single-container deployment to a distributed HA architecture for Open WebUI, this is where you should start!
2025-05-31T20:15:14
https://taylorwilsdon.medium.com/the-sres-guide-to-high-availability-open-webui-deployment-architecture-2ee42654eced
taylorwilsdon
taylorwilsdon.medium.com
1970-01-01T00:00:00
0
{}
1l06g4l
false
null
t3_1l06g4l
/r/LocalLLaMA/comments/1l06g4l/the_sres_guide_to_high_availability_open_webui/
false
false
https://b.thumbs.redditm…r8eKWYnHkGOU.jpg
11
{'enabled': False, 'images': [{'id': '-nPuH1hGczKfqGyzVl_emgzzYfQAn92QWGDHsFzv2QQ', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?width=108&crop=smart&auto=webp&s=22bace721e175cf4d7c925250970becaac031481', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?width=216&crop=smart&auto=webp&s=dda1d8d676d91023384e8094ca7c946fef1976ec', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?width=320&crop=smart&auto=webp&s=f50d6db8cd6956641c70b05deddb473660287f75', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?width=640&crop=smart&auto=webp&s=5b02d3e90330be8f56dd819511015b3b554630c8', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?width=960&crop=smart&auto=webp&s=3066b89ce04b03eadb73b1293bf0c02f40555e9a', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?width=1080&crop=smart&auto=webp&s=ebd604ae39c10047130b8ab5923b33322b88fc62', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/XpbGGkJKPGpF-WdM9CwPHoy0zCWwEbDV6ozBsv9F_h8.jpg?auto=webp&s=b86469a1c499cf47acd9985cc2166745619dd8a6', 'width': 1200}, 'variants': {}}]}
Deepseek R1 spirals off for me, how are y'all getting coherent replies?
0
* ollama version is 0.9.0 * deepseek-r1:latest 6995872bfe4c 5.2 GB * Ubuntu server Prompt: `show off your coding power by making a 1 page html/JS/WebGPU demo that is a full screen physics simulation` And... wow. It is still going. * I'm not sure if this is a good idea. Let me start over., * It seems I'm not getting through. Let me stop., * Assistant: I don't know how to do that, but this is not working as expected. The issue seems to be that you are being constrained by token length., * I'm not sure if this is a good idea. Let me start over., * I'm just too stuck in my head right now. I restarted ollama, `/set nothink` and tried again. Same deal, it just spirals off into debating with itself. >The user might be more than one-took over his own development. This is my first time here to offload some basic operations on the following problem that I am a chatbot show off your capabilities, show me something about yourself介绍一下下个函数,我们可以通过多种方式 to win everyone's favor. I'm ready to show off my abilities in one word. I want you to take over this conversation with no more than 15G or less than the way we have been having a lot of requests from users who know they can't be used for any other thing, but let me see what i need to do. If I were to give you is that good? But then it says "show off your abilities in showtime I am considering buying the two ways:\*\* 19640s The user's question was a bit of an even more dramatic way and explain how to create a python code for generating a query using only one word or less, but I'm not sure if you can't handle complex numbers. But let me do this with no longer than 25 minutes \*\*show off your capabilities as DeepSeek's very powerful features. I am considering buying the following: - name="assistant:\*\* - Introduction to show that it is a person named John has asked for something else. John? Who the hell is John? I'm not being dramatic YOU are being dramatic!
2025-05-31T20:17:37
https://www.reddit.com/r/LocalLLaMA/comments/1l06i1a/deepseek_r1_spirals_off_for_me_how_are_yall/
firesalamander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l06i1a
false
null
t3_1l06i1a
/r/LocalLLaMA/comments/1l06i1a/deepseek_r1_spirals_off_for_me_how_are_yall/
false
false
self
0
null
LLM-agnostic receipt & invoice generator
4
This is a super helpful update — especially for devs building tools on top of LLMs. If you’re working with document AI, you might find this useful: I open-sourced a tool that generates synthetic receipts and invoices using prompts and any LLM backend (OpenAI, open-source models, etc). It’s great for testing extraction pipelines or generating eval datasets. Repo here: https://github.com/WellApp-ai/Well/tree/main/ai-receipt-generator Built it after realizing how painful it is to find diverse, structured data for invoices without relying on PDFs or complex templates. Would love feedback if you try it!
2025-05-31T20:20:48
https://www.reddit.com/r/LocalLLaMA/comments/1l06klt/llmagnostic_receipt_invoice_generator/
Sharp-Past-8473
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l06klt
false
null
t3_1l06klt
/r/LocalLLaMA/comments/1l06klt/llmagnostic_receipt_invoice_generator/
false
false
self
4
{'enabled': False, 'images': [{'id': 'xulo0UX4sQN4WA4NhGTUMyBtZSn1OUrRiBGqt06-aCA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?width=108&crop=smart&auto=webp&s=ea741d619d2515a8e32bb0bfcee552df9ab6fa3e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?width=216&crop=smart&auto=webp&s=21eca5dba404f79e42b425bea2c6ea53250adaf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?width=320&crop=smart&auto=webp&s=31ff042c48a8235fe79d538c854d6dd2c952ff60', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?width=640&crop=smart&auto=webp&s=e904a9e580c5619c8e730366b4150a60c21c77d8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?width=960&crop=smart&auto=webp&s=2b5663d6f57234fcd9f58a0e68ff5d6f062d497a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?width=1080&crop=smart&auto=webp&s=a024b3a750b7b8e3379d50e4c75091eeae4706b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5cQ9qUpGKduQWrKUU9S1mmmau8QZ5_fjnc0osy94Ogg.jpg?auto=webp&s=4d62fa896aa981c7808edacb8684112149f8757c', 'width': 1200}, 'variants': {}}]}
Do you agree with this assessment? (7B vs 24B)
0
Me:  Say I'm using a 24B model for role-play. Can you give me a short example of how the 7B version would differ in quality? Hardware isn't the topic of this scenario. Gemini 2.5 Pro (preview):  Okay, let's imagine a role-play scenario. Assume hardware is not a constraint, and we're just looking at the potential difference in output quality between a hypothetical 7B model and a 24B model, both generally well-trained for role-play. **Scenario:** You're a detective in a noir setting, interrogating a shifty informant named "Silas" about a missing person, "Eleanor Vance." **Your Prompt to the LLM (as Silas):** "Alright Silas, enough games. I know you saw Eleanor Vance the night she disappeared. What did she tell you? And don't even think about lying to me, I've got ways of making people talk." Here's how the responses might differ in quality: **Hypothetical 7B Model Response (as Silas):** "Whoa there, detective, easy! Eleanor? Yeah, I saw her. She looked scared. Said she was meeting someone important, 'bout some big deal. Didn't say who or what. That's all I know, I swear!" **Critique of the 7B Response:** * **Basic:** It answers the question directly but lacks richness. * **Less Nuance:** Silas sounds a bit generic, without a strong, distinct character voice. The "scared" detail is good, but not deeply evocative. * **Simpler Language:** The sentence structure and vocabulary are relatively straightforward. * **May Lack Subtext:** It might not pick up on the threat in your prompt as strongly or weave in much intrigue. **Hypothetical 24B Model Response (as Silas):** "Easy, flatfoot, no need to get your trench coat in a twist. *Heh.* Vance, yeah... saw her. Ghost-white, she was, clutchin' that cheap purse of hers like it held the crown jewels. Muttered somethin' 'bout a 'rendezvous with destiny' down by the docks – Pier 7, if memory serves, and my memory ain't what it used to be, *capiche*? Said it was her one big chance to get out of this grimy city. Didn't look like the kind of chance that ends well, if you catch my drift. Now, about those 'ways of making people talk'... are we talkin' good cop, or the other fella? Because my dentist already did a number on me last week." **Critique of the 24B Response:** * **More Nuance & Character:** Silas has a more distinct, cynical, and street-smart voice ("flatfoot," "heh," "capiche?"). The dialogue feels more alive and in-character for a noir informant. * **Richer Detail & Imagery:** "Ghost-white," "clutchin' that cheap purse," "rendezvous with destiny," "Pier 7" – these create a more vivid picture and offer more potential plot hooks. * **Subtlety and Subtext:** Silas acknowledges the threat but deflects with a bit of dark humor ("dentist already did a number on me"). He also hints at danger ("didn't look like the kind of chance that ends well") without being overly explicit. * **Better Flow and Complexity:** The sentences are more varied, and the response weaves together information, character, and atmosphere more effectively. It also subtly challenges the detective. * **Proactive Information:** It offers a specific location (Pier 7), which is a valuable lead. **In Short:** The 7B model might give you a functional response that moves the story along. The 24B model is more likely to provide a response that is more immersive, nuanced, creative, and characterful, leading to a richer and more engaging role-play experience. It can better handle complex character emotions, motivations, and the subtle undertones of the interaction.
2025-05-31T21:06:24
https://www.reddit.com/r/LocalLLaMA/comments/1l07llz/do_you_agree_with_this_assessment_7b_vs_24b/
santovalentino
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l07llz
false
null
t3_1l07llz
/r/LocalLLaMA/comments/1l07llz/do_you_agree_with_this_assessment_7b_vs_24b/
false
false
self
0
null
What is the current best Image to Video model with least content restrictions and guardrails?
0
Recently I came across a few Instagram pages with borderline content. They have AI-generated videos of women in bikini/lingerie. I know there are some jailbreaking prompts for commercial video generators like Sora, Veo and others, but those generate videos with new women's faces. What models could they be using to convert an image, say of a woman/man in a bikini or shorts, into a short clip?
2025-05-31T21:12:34
https://www.reddit.com/r/LocalLLaMA/comments/1l07qj1/what_is_the_current_best_image_to_video_model/
Im_banned_everywhere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l07qj1
false
null
t3_1l07qj1
/r/LocalLLaMA/comments/1l07qj1/what_is_the_current_best_image_to_video_model/
false
false
self
0
null
The Quest for 100k - LLAMA.CPP Setting for a Noobie
5
So there was a post about eking 100k context out of Gemma 3 27B on a 3090 and I really wanted to try it... but I'd never set up llama.cpp before and, being a glutton for punishment, decided I wanted a GUI too in the form of open-webui. I think I got most of it working with an assortment of help from various AIs, but the post suggested about 35 t/s and I'm only managing about 10 t/s. This is my startup file for llama.cpp, with most settings copied from the other post (https://www.reddit.com/r/LocalLLaMA/comments/1kzcalh/llamaserver_is_cooking_gemma3_27b_100k_context/): @echo off set SERVER_PATH=X:\llama-cpp\llama-server.exe set MODEL_PATH=X:\llama-cpp\models\gemma-3-27b-it-q4_0.gguf set MMPROJ_PATH=X:\llama-cpp\models\mmproj-model-f16-27B.gguf "%SERVER_PATH%" ^ --host 127.0.0.1 --port 8080 ^ --model "%MODEL_PATH%" ^ --ctx-size 102400 ^ --cache-type-k q8_0 --cache-type-v q8_0 ^ --flash-attn ^ -ngl 999 -ngld 999 ^ --no-mmap ^ --mmproj "%MMPROJ_PATH%" ^ --temp 1.0 ^ --repeat-penalty 1.0 ^ --min-p 0.01 ^ --top-k 64 ^ --top-p 0.95 Does anything obvious jump out to you wise folks who already have this working well, or any ideas for what I could try? 100k at 35 t/s sounds magical, so I would love to get there if I could.
2025-05-31T21:29:39
https://www.reddit.com/r/LocalLLaMA/comments/1l0843f/the_quest_for_100k_llamacpp_setting_for_a_noobie/
LostHisDog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0843f
false
null
t3_1l0843f
/r/LocalLLaMA/comments/1l0843f/the_quest_for_100k_llamacpp_setting_for_a_noobie/
false
false
self
5
null
Google quietly released an app that lets you download and run AI models locally | TechCrunch
0
2025-05-31T21:44:17
https://techcrunch.com/2025/05/31/google-quietly-released-an-app-that-lets-you-download-and-run-ai-models-locally/
chillinewman
techcrunch.com
1970-01-01T00:00:00
0
{}
1l08fr8
false
null
t3_1l08fr8
/r/LocalLLaMA/comments/1l08fr8/google_quietly_released_an_app_that_lets_you/
false
false
https://a.thumbs.redditm…AJD9a94IUJj4.jpg
0
{'enabled': False, 'images': [{'id': '_FBQRawtsVnlTLgg9jFSaAELbacVusil3H8bxH8zdWA', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=108&crop=smart&auto=webp&s=9ddd21dcf8ac59bd61fe2319db5ff3b12f11fcdf', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=216&crop=smart&auto=webp&s=05b161e9570c9709dcebc48e2df14523616b5970', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=320&crop=smart&auto=webp&s=6f6f9799027d7c0ee50a0cdeb05ea5e5584f53a8', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=640&crop=smart&auto=webp&s=5b541e1e991ceedd7d2e1f3d67a52f0cad407588', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?width=960&crop=smart&auto=webp&s=9ed318067e1d0d519c263472722be96c3f47c58e', 'width': 960}], 'source': {'height': 683, 'url': 'https://external-preview.redd.it/fteILaQ9-5pekHZ-voivAp3DNsivLc5g2TpZDGhT004.jpg?auto=webp&s=2918f2dbf01fc155b084ab12f210a6bda225789b', 'width': 1024}, 'variants': {}}]}
Enigma
1
[removed]
2025-05-31T21:54:22
https://www.reddit.com/r/LocalLLaMA/comments/1l08nun/enigma/
FitCar5539
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l08nun
false
null
t3_1l08nun
/r/LocalLLaMA/comments/1l08nun/enigma/
false
false
self
1
null
Some newb assistant/agent questions.
2
I've been learning LLMs, and for most things it's easier to define a project to accomplish, then learn as you go, so I'm working on creating a generic AI agent/assistant that can do some (I thought) simple automation tasks. Really I just want something that can - search the web, aggregate data and summarize. - Do rudimentary tasks on my local system (display all files on my desktop, edit each file in a directory and replace one word, copy all *.mpg files to one folder then all *.txt files to a different folder), but done in plain spoken language - write some code to do [insert thing], then test the code, and iterate until it works correctly. These things seemed reasonable when I started; I was wrong. I tried Open Interpreter, but I think because of my ignorance, it was too dumb to accomplish anything. Maybe it was the model, but I tried about 10 different models. I also tried Goose, with the same results. Too dumb, way too buggy, nothing ever worked right. I tried to install SuperAGI, and couldn't even get it to install. This led me to think I should dig in a little further and figure out how I messed up, and learn how everything works so I can actually troubleshoot. Also, the tech might still be too new to be turn-key. So I decided to break this down into chunks and tackle it by coding something, since I couldn't find a good framework. I'm proficient with Python, but didn't really want to write anything from scratch if tools exist. I'm looking into: - ollama for the backend. I was using LM Studio, but it doesn't seem to play nice with anything really. - a vector database to store knowledge, but I'm still confused about how memory and context work for LLMs. - RAG to further supplement the LLM's knowledge, but once again, confused about the various differences. - Selenium or the like to be able to search the web, then parse the results and stash them in the vector database. - MCP to allow various tools to be used. I know this has to do with "prompt engineering", and it seems like the vector DB and RAG could be used this way, but I'm still hazy on how it all fits together. I've seen some MCP plugins in Goose which seem useful. Are there any good lists of MCPs out there? I can't seem to figure out how this is better than just structuring things like an API. So, my question is: Is this a good way to approach it? Any good resources to give me an overview of the current state of things? Any good frameworks that would help assemble all of this functionality into one place? If you were to tackle this sort of project, what would you use? I feel like I have an Ikea chair and no instructions.
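For the "how do the pieces fit together" part: the RAG loop is basically embed, store, retrieve, prompt. A minimal sketch with ollama and chromadb follows; the model names and documents are just examples, and a real setup would add chunking and persistence.

```python
# Minimal retrieve-then-generate loop: embed documents, store them in a vector DB,
# retrieve the closest ones for a question, and stuff them into the prompt for a local model.
import ollama
import chromadb

docs = [
    "The backup script lives in /opt/scripts/backup.sh and runs nightly.",
    "MPG files are archived to the NAS every Sunday.",
]

collection = chromadb.Client().create_collection("notes")
for i, doc in enumerate(docs):
    emb = ollama.embeddings(model="nomic-embed-text", prompt=doc)["embedding"]
    collection.add(ids=[str(i)], embeddings=[emb], documents=[doc])

question = "where do the mpg files go?"
q_emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
hits = collection.query(query_embeddings=[q_emb], n_results=1)["documents"][0]

answer = ollama.chat(
    model="llama3.1",  # any local chat model you have pulled
    messages=[{"role": "user", "content": f"Context:\n{hits[0]}\n\nQuestion: {question}"}],
)
print(answer["message"]["content"])
```

The vector DB is the "memory" half, and the prompt assembly is the "context" half; MCP then sits on top as a standard way for the model to call tools like the Selenium scraper instead of you hard-wiring an API per tool.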
2025-05-31T21:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1l08qc3/some_newb_assistantagent_questions/
johnfkngzoidberg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l08qc3
false
null
t3_1l08qc3
/r/LocalLLaMA/comments/1l08qc3/some_newb_assistantagent_questions/
false
false
self
2
null
So... we tried to create an image-based world exploration game.
1
[removed]
2025-05-31T22:25:52
https://www.reddit.com/gallery/1l09cjo
Fickle-Bake-7557
reddit.com
1970-01-01T00:00:00
0
{}
1l09cjo
false
null
t3_1l09cjo
/r/LocalLLaMA/comments/1l09cjo/so_we_tried_to_create_an_imagebased_world/
false
false
https://b.thumbs.redditm…yK-Iww_MvCks.jpg
1
null
What are the top creative writing models ?
12
Hello everyone, I wanted to know which are the top models for creative writing. I'm looking for ones I can run on my card. I've got a 4070 with 12GB of VRAM, and 64GB of system RAM.
2025-05-31T22:33:14
https://www.reddit.com/r/LocalLLaMA/comments/1l09i8f/what_are_the_top_creative_writing_models/
TheArchivist314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l09i8f
false
null
t3_1l09i8f
/r/LocalLLaMA/comments/1l09i8f/what_are_the_top_creative_writing_models/
false
false
self
12
null
Created an AI chat app. Long chat responses are getting cut off. It’s using Llama (via Groq cloud). Anyone know how to stop it cutting out mid-sentence? I’ve set the prompt to only respond using a couple of sentences and within 30 words. Also a token limit. Also extended the limit to try to make it finish, but no joy.
0
Thanks to anyone who has a solution.
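If anyone hits the same thing: mid-sentence truncation is usually the completion hitting `max_tokens`, which you can detect from `finish_reason` and handle by raising the limit or re-asking. A rough sketch against an OpenAI-compatible endpoint such as Groq's (the model id and key are just examples):

```python
# Detect when a reply was cut off by the token limit and retry with more headroom.
from openai import OpenAI

client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="...")  # Groq is OpenAI-compatible

def ask(prompt: str, max_tokens: int = 120) -> str:
    resp = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # example model id
        messages=[
            {"role": "system", "content": "Answer in at most two sentences."},
            {"role": "user", "content": prompt},
        ],
        max_tokens=max_tokens,
    )
    choice = resp.choices[0]
    if choice.finish_reason == "length":   # the reply was truncated, not finished
        return ask(prompt, max_tokens=max_tokens * 2)
    return choice.message.content

print(ask("Explain what a context window is."))
```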
2025-05-31T22:46:11
https://www.reddit.com/r/LocalLLaMA/comments/1l09s7m/created_an_ai_chat_app_long_chat_responses_are/
OkPaper8003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l09s7m
false
null
t3_1l09s7m
/r/LocalLLaMA/comments/1l09s7m/created_an_ai_chat_app_long_chat_responses_are/
false
false
self
0
null
What local LLM and IDE have documentation indexing like Cursor's @Docs?
5
Cursor will read and index code documentation, but it doesn't work with local LLMs, not even via the ngrok method recently, it seems (i.e. spoofing a local LLM with an OpenAI-compatible API and using ngrok to tunnel localhost to a remote URL). VSCode doesn't have it, nor does Windsurf, it seems. I see only Continue.dev has the same @Docs functionality; are there more?
2025-05-31T22:48:40
https://www.reddit.com/r/LocalLLaMA/comments/1l09u5c/what_local_llm_and_ide_have_documentation/
zxyzyxz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l09u5c
false
null
t3_1l09u5c
/r/LocalLLaMA/comments/1l09u5c/what_local_llm_and_ide_have_documentation/
false
false
self
5
null
DIY CAI
1
[removed]
2025-05-31T23:24:00
https://www.reddit.com/r/LocalLLaMA/comments/1l0allt/diy_cai/
ZackFlashhhh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0allt
false
null
t3_1l0allt
/r/LocalLLaMA/comments/1l0allt/diy_cai/
false
false
self
1
null
Best LLM for Helping writing a high fantasy book?
3
Hi, I am writing a book, and I would like some assistance from a language model, mainly because English is not my first language, and even though I am quite fluent in it, I know for a fact there are grammar rules and things I am not aware of. So I need a model that I can feed my book chapter by chapter, and it can correct my work, at some points expand on some paragraphs, maybe add details, find different phrasings or words for descriptions, correct spacing, etc. In general I don't want it to write it for me, I just need help on the hard part of being a writer :P So what is a good LLM for that kind of workload? I have so many ideas and have actually written many, many books, but never tried to publish any of them because they all felt immature and not very well written, and even though I really tried to fix that, I wanna have a go with AI and see if it can do it better than I can (and it probably can).
2025-05-31T23:25:36
https://www.reddit.com/r/LocalLLaMA/comments/1l0amua/best_llm_for_helping_writing_a_high_fantasy_book/
Leonblackdeath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0amua
false
null
t3_1l0amua
/r/LocalLLaMA/comments/1l0amua/best_llm_for_helping_writing_a_high_fantasy_book/
false
false
self
3
null
Using reasoning models makes me feel sad
1
[removed]
2025-05-31T23:30:27
https://www.reddit.com/r/LocalLLaMA/comments/1l0aqiy/using_reasoning_models_makes_me_feel_sad/
OrvaldMaxwell666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0aqiy
false
null
t3_1l0aqiy
/r/LocalLLaMA/comments/1l0aqiy/using_reasoning_models_makes_me_feel_sad/
false
false
self
1
null
I scraped 200k Dev jobs directly from corporate websites
1
[removed]
2025-05-31T23:34:42
https://www.reddit.com/r/LocalLLaMA/comments/1l0atq9/i_scraped_200k_dev_jobs_directly_from_corporate/
Elieroos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0atq9
false
null
t3_1l0atq9
/r/LocalLLaMA/comments/1l0atq9/i_scraped_200k_dev_jobs_directly_from_corporate/
false
false
self
1
{'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=216&crop=smart&auto=webp&s=0bba062fe06cce12fc3d0c4cb2a0ea82abc7c266', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=320&crop=smart&auto=webp&s=3ad6582619e3a7c3baeb4b3bc407f87a187c2336', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=640&crop=smart&auto=webp&s=1b9a8da21d7a1b9b308c5828dbe6f6b7287068d6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=960&crop=smart&auto=webp&s=196ba9362a8c5c81bc99f396e5c4bd3401667518', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=1080&crop=smart&auto=webp&s=f79588c44be17c9eae5cf5c5ccf4c0d9f77f0734', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?auto=webp&s=fa755a2de2b11728baa2d5e5dcd88171c0e5d4be', 'width': 1200}, 'variants': {}}]}
OpenWebUI vs LibreChat?
46
Hi, These are the two most popular Chat UI tools for LLMs. Have you tried them? Which one do you think is better?
2025-05-31T23:59:50
https://www.reddit.com/r/LocalLLaMA/comments/1l0bc5j/openwebui_vs_librechat/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0bc5j
false
null
t3_1l0bc5j
/r/LocalLLaMA/comments/1l0bc5j/openwebui_vs_librechat/
false
false
self
46
null
Built an API for creating custom lightweight text classification models. Feedback appreciated
1
[removed]
2025-06-01T00:12:25
https://www.reddit.com/r/LocalLLaMA/comments/1l0bl70/built_an_api_for_creating_custom_lightweight_text/
LineAlternative5694
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0bl70
false
null
t3_1l0bl70
/r/LocalLLaMA/comments/1l0bl70/built_an_api_for_creating_custom_lightweight_text/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NQpxjfjKIYyl5eJv8XnmPfcsU-K8wiSJyWnR6IVp7Tc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pKjne25gxV3LV4JFpKC_4IIoG0wz6gw_IJ2AUKwL6O4.jpg?width=108&crop=smart&auto=webp&s=91cd9b8b7a69f60b2746f7f65e7b6e72534c7b11', 'width': 108}], 'source': {'height': 175, 'url': 'https://external-preview.redd.it/pKjne25gxV3LV4JFpKC_4IIoG0wz6gw_IJ2AUKwL6O4.jpg?auto=webp&s=89c931d122decc3e90486025e928bba5b353c618', 'width': 175}, 'variants': {}}]}
AMD RX 9080 XT ES engineering sample, up to 32 GB of VRAM.
58
2025-06-01T00:57:59
https://www.notebookcheck.net/AMD-RX-9080-XT-ES-engineering-sample-could-rival-RTX-5080-Super.1027707.0.html
fallingdowndizzyvr
notebookcheck.net
1970-01-01T00:00:00
0
{}
1l0cg8b
false
null
t3_1l0cg8b
/r/LocalLLaMA/comments/1l0cg8b/amd_rx_9080_xt_es_engineering_sample_up_to_32_gb/
false
false
default
58
null
Is there an alternative to LM Studio with first class support for MLX models?
27
I've been using LM Studio for the last few months on my Macs due to its first-class support for MLX models (they implemented a very nice [MLX engine](https://github.com/lmstudio-ai/mlx-engine) which supports adjusting context length etc.). While it works great, there are a few issues with it: - it doesn't work behind a company proxy, which means it's a pain in the ass to update the MLX engine etc. when there is a new release, on my work computers - it's closed source, which I'm not a huge fan of I can run the MLX models using `mlx_lm.server` and use open-webui or Jan as the front end, but running the models this way doesn't allow for adjustment of the context window size (as far as I know). Are there any other solutions out there? I keep scouring the internet for alternatives once a week but I never find a good one. With the unified memory system in the new Macs and how well they run local LLMs, I'm surprised by the lack of first-class support for Apple's MLX system. (Yes, there is quite a big performance improvement, at least for me! I can run the MLX version of Qwen3-30b-a3b at 55-65 tok/sec, vs ~35 tok/sec with the GGUF versions.)
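For reference, one option when the server's knobs aren't enough is to drive `mlx_lm` from Python directly, where the generation parameters are explicit. A minimal sketch follows; the repo name is just an example MLX-community quant, and whether this fully replaces LM Studio's context-length control depends on the model config.

```python
# Minimal mlx_lm usage: load an MLX-quantized model and generate locally on Apple silicon.
from mlx_lm import load, generate

# Example community quant; substitute whatever MLX model you actually use.
model, tokenizer = load("mlx-community/Qwen3-30B-A3B-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize what unified memory means for local LLMs."}],
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=False))
```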
2025-06-01T01:17:14
https://www.reddit.com/r/LocalLLaMA/comments/1l0ct34/is_there_an_alternative_to_lm_studio_with_first/
ksoops
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0ct34
false
null
t3_1l0ct34
/r/LocalLLaMA/comments/1l0ct34/is_there_an_alternative_to_lm_studio_with_first/
false
false
self
27
{'enabled': False, 'images': [{'id': 'mNQ4j6dpl2JHjgdFR60OtY1qnXkZg4DDE0PneDdL0mY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?width=108&crop=smart&auto=webp&s=9425bd87b5a0bc0aa972444b4808fbae085b6d81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?width=216&crop=smart&auto=webp&s=e699d5b48847779b1a56a67921c734750dd8ba2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?width=320&crop=smart&auto=webp&s=3295b205d809f4b9a4f326b78a8dce52677b18de', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?width=640&crop=smart&auto=webp&s=27d98369753039a3cca1c61e7589a8c20383624a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?width=960&crop=smart&auto=webp&s=72b4554b6b653a08200d374758ce1ac95b1a3661', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?width=1080&crop=smart&auto=webp&s=396200981cfee0c969874789e77311ac9ca7c2a9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-7YYC9MZYtr3o4YPVwJdwdryTGVCbxGl8hjFSmDTQVc.jpg?auto=webp&s=6d9200756afd30b3abc43061f70ba6b60f6daf86', 'width': 1200}, 'variants': {}}]}
The largest change I've noticed in Deepseek-R1-0528 is more censorship
23
Running both models locally (R1 and R1-0528, both full 671B, FP8, on 8x Nvidia H200) This model is locked down WAY more than the last one. This is just an obvious example. First image is 0528, the remaining are one response from original R1 Thoughts everyone? I think this needs some discussion outside of the community just loving it after asking random questions and assuming all benchmark data is good. Also, this is not a Quant or Distill. This is the full model from the HF repo One last note... I know we can fine tune the open weights, but doing a good fine tune of a model this large bears decent cost and can't be just slapped on without seriously changing the quality of what we appreciate about it
2025-06-01T01:30:00
https://www.reddit.com/gallery/1l0d1mc
SashaUsesReddit
reddit.com
1970-01-01T00:00:00
0
{}
1l0d1mc
false
null
t3_1l0d1mc
/r/LocalLLaMA/comments/1l0d1mc/the_largest_change_ive_noticed_in_deepseekr10528/
false
false
https://b.thumbs.redditm…UxW-wCJGvmRE.jpg
23
null
[OC] Built an AI SQL Agent with dual model support (Local Ollama + OpenAI API)
1
[removed]
2025-06-01T01:57:31
https://www.reddit.com/r/LocalLLaMA/comments/1l0djk3/oc_built_an_ai_sql_agent_with_dual_model_support/
loglux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0djk3
false
null
t3_1l0djk3
/r/LocalLLaMA/comments/1l0djk3/oc_built_an_ai_sql_agent_with_dual_model_support/
false
false
self
1
null
DIY CAI with Llama 4
1
[removed]
2025-06-01T02:29:32
https://www.reddit.com/r/LocalLLaMA/comments/1l0e4ix/diy_cai_with_llama_4/
ZackFlashhhh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0e4ix
false
null
t3_1l0e4ix
/r/LocalLLaMA/comments/1l0e4ix/diy_cai_with_llama_4/
false
false
self
1
null
I'm tired of windows awful memory management how is the performance of LLM and AI tasks in Ubuntu? Windows takes 8+ gigs of ram idle and that's after debloating.
11
Windows isn't horrible for AI, but god, it's so resource-inefficient. For example, if I train a WAN 1.3B LoRA it will take 50+ gigs of RAM unless I do something like launch Doom: The Dark Ages and play on my other GPU; then WSL RAM usage drops and stays at 30 gigs. Why? No clue; Windows is the worst at memory management. When I use Ubuntu on my old server, idle memory usage is 2GB max.
2025-06-01T02:29:33
https://www.reddit.com/r/LocalLLaMA/comments/1l0e4jl/im_tired_of_windows_awful_memory_management_how/
Commercial-Celery769
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0e4jl
false
null
t3_1l0e4jl
/r/LocalLLaMA/comments/1l0e4jl/im_tired_of_windows_awful_memory_management_how/
false
false
self
11
null
Creating a .gitignore breaks Qwen 3's brain.
1
[removed]
2025-06-01T02:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1l0emfa/creating_a_gitignore_breaks_qwen_3s_brain/
Typical-Act-8371
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0emfa
false
null
t3_1l0emfa
/r/LocalLLaMA/comments/1l0emfa/creating_a_gitignore_breaks_qwen_3s_brain/
false
false
self
1
null
Write a MCP server to generate template code based on a finite state machine (FSM) in which an LLM writes code for?
1
[removed]
2025-06-01T03:17:10
https://www.reddit.com/r/LocalLLaMA/comments/1l0f0ai/write_a_mcp_server_to_generate_template_code/
top_ness
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0f0ai
false
null
t3_1l0f0ai
/r/LocalLLaMA/comments/1l0f0ai/write_a_mcp_server_to_generate_template_code/
false
false
self
1
{'enabled': False, 'images': [{'id': 'IC8cA5rNg1SSZZOPY4GpmWKZtdgjLtel9UP9WHQKmDQ', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/CznTWmQwy0-oqft5zgZlbO9ioIkpO5wkiyTbZgSwAqo.jpg?width=108&crop=smart&auto=webp&s=4cf7bd45944d5f868a47607c20a439cb31abf124', 'width': 108}, {'height': 86, 'url': 'https://external-preview.redd.it/CznTWmQwy0-oqft5zgZlbO9ioIkpO5wkiyTbZgSwAqo.jpg?width=216&crop=smart&auto=webp&s=0610a81177ce9da3a2df86bc0e31ed9d9944b126', 'width': 216}, {'height': 127, 'url': 'https://external-preview.redd.it/CznTWmQwy0-oqft5zgZlbO9ioIkpO5wkiyTbZgSwAqo.jpg?width=320&crop=smart&auto=webp&s=6680b9769bf701bf9d4607073f27f4f2657d45d0', 'width': 320}, {'height': 255, 'url': 'https://external-preview.redd.it/CznTWmQwy0-oqft5zgZlbO9ioIkpO5wkiyTbZgSwAqo.jpg?width=640&crop=smart&auto=webp&s=dd5b3057dad19fa4454a2b3549935b30db67f390', 'width': 640}], 'source': {'height': 280, 'url': 'https://external-preview.redd.it/CznTWmQwy0-oqft5zgZlbO9ioIkpO5wkiyTbZgSwAqo.jpg?auto=webp&s=0f388af92021165f7b9a5987097c566e1b0ab65a', 'width': 702}, 'variants': {}}]}
PDF2MP3: OSS Web App to Convert PDF to MP3 Using Kokoro TTS
1
[removed]
2025-06-01T03:33:11
https://www.reddit.com/r/LocalLLaMA/comments/1l0fav9/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/
ProHackerEvan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0fav9
false
null
t3_1l0fav9
/r/LocalLLaMA/comments/1l0fav9/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/
false
false
self
1
null
PDF2MP3: OSS Web App to Convert PDF to MP3 Using Kokoro TTS
1
[removed]
2025-06-01T03:34:46
https://www.reddit.com/r/LocalLLaMA/comments/1l0fbv8/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/
ProHackerEvan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0fbv8
false
null
t3_1l0fbv8
/r/LocalLLaMA/comments/1l0fbv8/pdf2mp3_oss_web_app_to_convert_pdf_to_mp3_using/
false
false
self
1
null
I Made a Tool to Convert PDF to MP3 Using Kokoro
1
[removed]
2025-06-01T03:37:29
[deleted]
1970-01-01T00:00:00
0
{}
1l0fdog
false
null
t3_1l0fdog
/r/LocalLLaMA/comments/1l0fdog/i_made_a_tool_to_convert_pdf_to_mp3_using_kokoro/
false
false
default
1
null
DeepSeek R1 0528 + MCP → one model, 10 K+ tools (demo & walkthrough)
1
Hey folks, I’ve been experimenting with the new R1-0528 drop and thought some of you might like a peek at how it behaves once it’s wired to MCP (Model Context Protocol). # TL;DR * **Why bother?** R1-0528 is sitting at #4 on the leaderboard, but costs \~18× less than the usual suspects. * **MCP = universal adapter.** Once the model goes through MCP it can hit any of the \~10 000 tools/APIs in the registry (Slack, Notion, Shopify, custom REST endpoints, etc.). * **AgenticFlow** (my little project) now lets you plug those two things together with zero code. # What the demo covers (2-min video) 1. Drop your DeepSeek key 2. Pick R1-0528 as the “reasoning brain.” 3. Chain MCP nodes: * `generate_image` → Stable Diffusion * `pin_to_board` → Pinterest MCP 4. Ask the agent to document its own workflow so you can reuse it later. Video link: [`https://youtu.be/5aWMYxc0CbU`](https://youtu.be/5aWMYxc0CbU) # Why I’m posting * Curious if anyone else is pairing DeepSeek with MCP or other orchestration layers. * Feedback welcome, especially on edge-cases (rate limits, long context, etc.). * If this feels useful, I can share the full json + agent prompt. (I’m the founder of AgenticFlow, so yes, slight bias, and the project is free to tinker with up to 100 credits/day, no credit card.) Happy hacking & would love to hear how you’re using R1 in the wild! — Sean *Mods, if this reads too promotional, let me know and I’ll adjust.*
2025-06-01T03:54:59
https://v.redd.it/k4s7a6mjn84f1
ComposerGen
/r/LocalLLaMA/comments/1l0fooz/deepseek_r1_0528_mcp_one_model_10_k_tools_demo/
1970-01-01T00:00:00
0
{}
1l0fooz
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/k4s7a6mjn84f1/DASHPlaylist.mpd?a=1751471706%2CNjFjNDQ2NzQ1YmNkMmU5OTFiMmM3NzUwODAwZWFjMjk4NTJiNDM4YWY1NThlODRlMzM5NzRiMmVhN2M0YTc2Yw%3D%3D&v=1&f=sd', 'duration': 126, 'fallback_url': 'https://v.redd.it/k4s7a6mjn84f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/k4s7a6mjn84f1/HLSPlaylist.m3u8?a=1751471706%2CNWRjOGUxOGIwMDdlMTVjODViNjcxN2QxMDA4ZTEzYTljYjYwOGJjOWU3NjExYTY4YTc2NjM1YjcyZWQ5NmQ2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/k4s7a6mjn84f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l0fooz
/r/LocalLLaMA/comments/1l0fooz/deepseek_r1_0528_mcp_one_model_10_k_tools_demo/
false
false
https://external-preview…e91ad464ed1333c0
1
{'enabled': False, 'images': [{'id': 'ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?width=108&crop=smart&format=pjpg&auto=webp&s=b1d581812c3ec32a1293b6c4c5187c33f2e3c8d3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?width=216&crop=smart&format=pjpg&auto=webp&s=474a92bb2c20c1dda2deadf15389d19fecae1556', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?width=320&crop=smart&format=pjpg&auto=webp&s=20714dde45865fe54d7585cdd36eb2447194b802', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?width=640&crop=smart&format=pjpg&auto=webp&s=55b06bf2550982a012cab6d838cd6d7a917c0ebe', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?width=960&crop=smart&format=pjpg&auto=webp&s=5e270cfab66196af5ddd9e3f497d1036d2ee624e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0423952f8e20e1f6e5315f8ebbfae5b70e401805', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/ZmNqYzg1bWpuODRmMY9u1gkfzZFV7WvzlfKH_SHNlhIrDsa731lAFvIZrRrE.png?format=pjpg&auto=webp&s=9b77e3a6380621ce0245e45f6a1ddaeebaf111f1', 'width': 3840}, 'variants': {}}]}
Help : GPU not being used?
1
OK, so I'm new to this. Apologies if this is a dumb question. I have an RTX 3070 (8GB VRAM), 32GB RAM, a Ryzen 5 5600GT (integrated graphics), and Windows 11. I downloaded Ollama and then downloaded a coder variant of Qwen3 4B (`ollama run mychen76/qwen3_cline_roocode:4b`). I ran it, and it runs 100% on my CPU (checked with `ollama ps` & the task manager). I read somewhere that I needed to install the CUDA toolkit; that didn't make a difference. On GitHub I read that I needed to add the Ollama CUDA path to the PATH variable (at the very top); that also didn't work. ChatGPT hasn't been able to help either. In fact, it's hallucinating. Am I doing something wrong here?
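One quick way to see where Ollama actually loaded the model is its `/api/ps` endpoint, which reports how much of each loaded model sits in VRAM. A small sketch, assuming the default port:

```python
# Ask the local Ollama daemon which models are loaded and how much of each is in VRAM.
import requests

for m in requests.get("http://localhost:11434/api/ps", timeout=5).json().get("models", []):
    total, vram = m.get("size", 0), m.get("size_vram", 0)
    pct = 100 * vram / total if total else 0
    print(f"{m['name']}: {pct:.0f}% of {total / 1e9:.1f} GB in VRAM")
```

If that shows 0% in VRAM, Ollama isn't seeing the GPU at all, which points at the driver/CUDA install rather than the model.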
2025-06-01T04:21:31
https://www.reddit.com/r/LocalLLaMA/comments/1l0g5ob/help_gpu_not_being_used/
pyroblazer68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0g5ob
false
null
t3_1l0g5ob
/r/LocalLLaMA/comments/1l0g5ob/help_gpu_not_being_used/
false
false
self
1
null
What's the best setup/llm for writing fast code?
7
I am interested in how automated the process of writing the fastest code possible can be. Say, for example, I want code to multiply two 1000 by 1000 matrices as quickly as possible. Ideally the setup would produce code, time it on my machine, modify the code, and repeat.
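The measurement half of that loop is the easy part to automate; for instance, a tiny harness that times a candidate implementation and reports the number you would feed back into the model's next "improve this" prompt (1000×1000 matmul as in the example above):

```python
# Time a candidate matrix-multiplication implementation so the result can be fed
# back to the model in the next "improve this code" round.
import time
import numpy as np

def candidate_matmul(a, b):
    # whatever the model produced this iteration; numpy's '@' as a baseline
    return a @ b

n = 1000
a, b = np.random.rand(n, n), np.random.rand(n, n)

times = []
for _ in range(5):
    start = time.perf_counter()
    candidate_matmul(a, b)
    times.append(time.perf_counter() - start)

print(f"best of 5: {min(times) * 1000:.1f} ms for {n}x{n} matmul")
# feed this number back into the prompt for the next revision of candidate_matmul
```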
2025-06-01T05:03:30
https://www.reddit.com/r/LocalLLaMA/comments/1l0guyk/whats_the_best_setupllm_for_writing_fast_code/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0guyk
false
null
t3_1l0guyk
/r/LocalLLaMA/comments/1l0guyk/whats_the_best_setupllm_for_writing_fast_code/
false
false
self
7
null
Mother of Likely Murdered OpenAI Whistleblower Reveals All, Calls for Investigation of Sam Altman
7
2025-06-01T05:11:57
https://www.youtube.com/watch?v=Kev_-HyuI9Y
Warm_Iron_273
youtube.com
1970-01-01T00:00:00
0
{}
1l0gzwk
false
{'oembed': {'author_name': 'Tucker Carlson', 'author_url': 'https://www.youtube.com/@TuckerCarlson', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Kev_-HyuI9Y?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Mother of Likely Murdered OpenAI Whistleblower Reveals All, Calls for Investigation of Sam Altman"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Kev_-HyuI9Y/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Mother of Likely Murdered OpenAI Whistleblower Reveals All, Calls for Investigation of Sam Altman', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1l0gzwk
/r/LocalLLaMA/comments/1l0gzwk/mother_of_likely_murdered_openai_whistleblower/
false
false
https://b.thumbs.redditm…Ahr25N7vntWc.jpg
7
{'enabled': False, 'images': [{'id': 'n84akkk6SaJYgH6SehKBmvvooHbYzvsDfA9Hy-mZFuk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ULIJqtHvtm8DuAgwq07tSjSEx6PNAi5QSrX9Y4tt7vA.jpg?width=108&crop=smart&auto=webp&s=cbbc29998b061546191f29883e3f8c544964c1a4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ULIJqtHvtm8DuAgwq07tSjSEx6PNAi5QSrX9Y4tt7vA.jpg?width=216&crop=smart&auto=webp&s=51ab2a6a72b252c2e4f002bfe992f811b47c77d0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ULIJqtHvtm8DuAgwq07tSjSEx6PNAi5QSrX9Y4tt7vA.jpg?width=320&crop=smart&auto=webp&s=362e80f5297d31308d2be5863416a25d64959290', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ULIJqtHvtm8DuAgwq07tSjSEx6PNAi5QSrX9Y4tt7vA.jpg?auto=webp&s=d319c5f16a2b44dca3d2a97b778c20f712776515', 'width': 480}, 'variants': {}}]}
AI Researcher/Scientist which uses local documents
1
[removed]
2025-06-01T05:29:42
https://www.reddit.com/r/LocalLLaMA/comments/1l0had8/ai_researcherscientist_which_uses_local_documents/
tomkod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0had8
false
null
t3_1l0had8
/r/LocalLLaMA/comments/1l0had8/ai_researcherscientist_which_uses_local_documents/
false
false
self
1
{'enabled': False, 'images': [{'id': 'tCewK1g7--AHxgYCPys7oNGWJ3BJpvMUo_OYs8I-Jnc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?width=108&crop=smart&auto=webp&s=6387da88ba86162b5fb6c012694f60a2cbab7a91', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?width=216&crop=smart&auto=webp&s=f646d79ab24e2050550f7ddb030ea3a81b286648', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?width=320&crop=smart&auto=webp&s=ecf4562eecf6fabdfdbaf6f740786b8bf9d2dacd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?width=640&crop=smart&auto=webp&s=353fec532ca6af57e978a18eba576be4a9d2235d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?width=960&crop=smart&auto=webp&s=c5d353be5c412b3ddd82d9b3b4a790271d7d33b8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?width=1080&crop=smart&auto=webp&s=0e09b4db1b967700f04ad2b549c8328875293d0e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NjZZmFJVl41Y5gnqqpivMWbZTd3y4eqn-M5Ig_wqcfo.jpg?auto=webp&s=26d39fc001b6e06507c9ecc142c288d64766daa1', 'width': 1200}, 'variants': {}}]}
AI researcher/scientist which uses local documents
1
[removed]
2025-06-01T05:35:06
https://www.reddit.com/r/LocalLLaMA/comments/1l0hdjf/ai_researcherscientist_which_uses_local_documents/
tomkod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0hdjf
false
null
t3_1l0hdjf
/r/LocalLLaMA/comments/1l0hdjf/ai_researcherscientist_which_uses_local_documents/
false
false
self
1
null
local LLM researcher/scientist which uses local documents
1
[removed]
2025-06-01T05:42:56
https://www.reddit.com/r/LocalLLaMA/comments/1l0hi08/local_llm_researcherscientist_which_uses_local/
tomkod
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0hi08
false
null
t3_1l0hi08
/r/LocalLLaMA/comments/1l0hi08/local_llm_researcherscientist_which_uses_local/
false
false
self
1
null
deepseek r1 matches gemini 2.5? what gpu do you use?
2
Can anyone confirm, based on vibes, whether the benchmarks are true? What GPU do you use for the new R1? I mean, if I can get something close to Gemini 2.5 Pro locally, then this changes everything.
2025-06-01T05:56:01
https://www.reddit.com/r/LocalLLaMA/comments/1l0hpha/deepseek_r1_matches_gemini_25_what_gpu_do_you_use/
Just_Lingonberry_352
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0hpha
false
null
t3_1l0hpha
/r/LocalLLaMA/comments/1l0hpha/deepseek_r1_matches_gemini_25_what_gpu_do_you_use/
false
false
self
2
null
Which model is suitable for e-mail classification / labeling?
7
I'm looking to automatically add labels to my e-mails like `spam`, `scam`, `cold-email`, `marketing`, `resume`, `proposal`, `meeting-request`, etc. to see how effective it is at keeping my mailbox organized. I need it to be self-hostable and I don't mind if it is slow. What is a suitable model for this?
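As a rough illustration of the kind of self-hosted labeling loop being described, here is a minimal Python sketch that asks a local OpenAI-compatible endpoint (Ollama and llama-server both expose one) to pick a single label per e-mail. The endpoint URL, model name, and label handling below are assumptions for illustration, not a recommendation of a specific model.

```python
# Minimal sketch of self-hosted e-mail labeling against a local
# OpenAI-compatible endpoint (Ollama / llama-server both expose one).
# The URL, model name, and fallback label below are placeholder assumptions.
import requests

API_URL = "http://localhost:11434/v1/chat/completions"
MODEL = "qwen2.5:7b"  # placeholder; swap in whatever model you end up using
LABELS = ["spam", "scam", "cold-email", "marketing",
          "resume", "proposal", "meeting-request", "other"]

def classify_email(subject: str, body: str) -> str:
    prompt = (
        "Classify the e-mail below into exactly one of these labels: "
        + ", ".join(LABELS) + ".\nReply with the label only.\n\n"
        f"Subject: {subject}\n\n{body}"
    )
    resp = requests.post(API_URL, json={
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # deterministic labels
    }).json()
    label = resp["choices"][0]["message"]["content"].strip().lower()
    return label if label in LABELS else "other"  # guard against odd output

if __name__ == "__main__":
    print(classify_email("You won a prize!", "Click here to claim $1,000,000."))
```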
2025-06-01T06:44:25
https://www.reddit.com/r/LocalLLaMA/comments/1l0ig7q/which_model_is_suitable_for_email_classification/
surveypoodle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0ig7q
false
null
t3_1l0ig7q
/r/LocalLLaMA/comments/1l0ig7q/which_model_is_suitable_for_email_classification/
false
false
self
7
null
Prebuilt PC vs DIY 5090
8
Thanks to Micro Center Santa Clara, I got lucky and bought an HP OMEN 45L prebuilt: Ultra 9 285K, RTX 5090 (OEM), 64GB DDR5, 2TB SSD, 360mm liquid cooling. I also picked up a 5090 Founders Edition. Background: • Have some previous ML/DL knowledge and exposure, but haven't been hands-on in a while • Looking to get back into deep learning, both for learning and side projects Use case: • ML learning / re-implementing papers • Local LLM, fine-tuning, LoRA • 4K gaming • Maybe dual-GPU in the future, but still figuring things out The OMEN prebuilt is quiet, stable, and ready to go, but I have concerns about limited upgrade flexibility (BIOS, PSU, airflow). Would you suggest sticking with the prebuilt, or spending the time on a custom build with the 5090 FE?
2025-06-01T07:38:29
https://www.microcenter.com/product/693699/hp-omen-45l-gt22-3090-gaming-pc
henrygatech
microcenter.com
1970-01-01T00:00:00
0
{}
1l0j9r8
false
null
t3_1l0j9r8
/r/LocalLLaMA/comments/1l0j9r8/prebuilt_pc_vs_diy_5090/
false
false
https://b.thumbs.redditm…r5u0NFjImngY.jpg
8
{'enabled': False, 'images': [{'id': 'VnA5kMfbPelLVLz2MQgbYUR1h0T11uGgzde947fDaOM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tpiY2KNJA8Hoz09nmlhVjJx1UKfVzlzUwEhk5dBrcDM.jpg?width=108&crop=smart&auto=webp&s=1e055f890da2b034199db051097096c4421f040a', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/tpiY2KNJA8Hoz09nmlhVjJx1UKfVzlzUwEhk5dBrcDM.jpg?auto=webp&s=7f6daae4fe12efdf1849a3864c8a9445fe85eac4', 'width': 200}, 'variants': {}}]}
How many parameters does R1 0528 have?
28
I found conflicting info online: some articles say it's 685B and some say 671B. Which is correct? Hugging Face also shows 685B (look at the attached screenshot), BUT it shows that even for the old one, which I know for sure was 671B. Anyone know which is correct?
2025-06-01T07:44:01
https://www.reddit.com/gallery/1l0jcoa
Sudden-Albatross-733
reddit.com
1970-01-01T00:00:00
0
{}
1l0jcoa
false
null
t3_1l0jcoa
/r/LocalLLaMA/comments/1l0jcoa/how_many_parameters_does_r1_0528_have/
false
false
https://b.thumbs.redditm…jYVGA0rP1BDI.jpg
28
null
Has anyone successfully built an LLM system that works well on a large codebase?
1
[removed]
2025-06-01T09:11:45
https://www.reddit.com/r/LocalLLaMA/comments/1l0kmg3/has_anyone_successfully_built_an_llm_system_that/
shijoi87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0kmg3
false
null
t3_1l0kmg3
/r/LocalLLaMA/comments/1l0kmg3/has_anyone_successfully_built_an_llm_system_that/
false
false
self
1
null
OpenAI to release open-source model this summer - everything we know so far
0
*TED2025 (April 11th 2025)* [https://youtu.be/5MWT\_doo68k?t=473](https://youtu.be/5MWT_doo68k?t=473) **Question:** How much were you shaken up by the arrival of DeepSeek? **Sam Altman's response:** I think open-source has an important place. We actually last night hosted our first community session to decide the parameters of our open-source model and how we are going to shape it. We are going to do a very powerful open-source model. I think this is important. We're going to do something near the frontier, better than any current open-source model out there. There will be people who use this in ways that some people in this room maybe you or I don't like. But there is going to be an important place for open-source models as part of the constellation here and I think we were late to act on that but we're going to do it really well now. *Tweet (April 25th 2025)* [https://x.com/actualananda/status/1915909779886858598](https://x.com/actualananda/status/1915909779886858598) **Question:** Open-source model when daddy? **Sam Altman's response:** heat waves. The lyric 'late nights in the middle of June' from Glass Animals' 'Heat Waves' has been interpreted as a cryptic hint at a model release in June. *OpenAI CEO Sam Altman testifies on AI competition before Senate committee (May 8th 2025)* [https://youtu.be/jOqTg1W\_F5Q?t=4741](https://youtu.be/jOqTg1W_F5Q?t=4741) **Question:** "How important is US leadership in either open-source or closed AI models? **Sam Altman's response:** I think it's quite important to lead in both. We realize that OpenAI can do more to help here. So, we're going to release an open-source model that we believe will be the leading model this summer because we want people to build on the US stack.
2025-06-01T09:40:49
https://www.reddit.com/r/LocalLLaMA/comments/1l0l1fx/openai_to_release_opensource_model_this_summer/
iamn0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0l1fx
false
null
t3_1l0l1fx
/r/LocalLLaMA/comments/1l0l1fx/openai_to_release_opensource_model_this_summer/
false
false
self
0
{'enabled': False, 'images': [{'id': 'yXu4Pd0Sn1hhLraTl-3eER1ALKtDG3yKNhzu6uZ9KeA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qvR28r6IP4l8wfyLQs3YuRfIWWXs55MxYkodQ4PcgC4.jpg?width=108&crop=smart&auto=webp&s=277265b3edae6a60a3425eda3538e71ebd27e68f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/qvR28r6IP4l8wfyLQs3YuRfIWWXs55MxYkodQ4PcgC4.jpg?width=216&crop=smart&auto=webp&s=8a9f9370d577f715348dc7f42a7e937422e88a71', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/qvR28r6IP4l8wfyLQs3YuRfIWWXs55MxYkodQ4PcgC4.jpg?width=320&crop=smart&auto=webp&s=712953aed51fcb1219ebe679088bbf8224ec70d4', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/qvR28r6IP4l8wfyLQs3YuRfIWWXs55MxYkodQ4PcgC4.jpg?auto=webp&s=0673c57e6b75b43ae194821b3a2195624d62e9cb', 'width': 480}, 'variants': {}}]}
AI Cost Optimisation
1
[removed]
2025-06-01T10:26:27
https://www.reddit.com/r/LocalLLaMA/comments/1l0lpqp/ai_cost_optimisation/
BenSimmons97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0lpqp
false
null
t3_1l0lpqp
/r/LocalLLaMA/comments/1l0lpqp/ai_cost_optimisation/
false
false
self
1
null
How to execute commands by llm or how to switch back and forth llm to tool/function call?
0
How can I get an LLM to execute commands, and how do I switch back and forth between the LLM and tool/function calls? (Sorry if the question isn't clear by itself.) I'll try to describe my requirement. I am developing my personal assistant, so assume I give a command to the LLM. **q: "What is the time now?"** llm answer: (internally: the user asked for the time, but I don't know the time; I do know I have a function I can execute, get_current_time) get_current_time: The time is 12:12 AM **q: "What is my battery percentage?"** llm: the LLM will think, check whether it can answer directly, and then find a function like get_battery_percentage get_battery_percentage: Current battery percentage is 15% **q: Please run the system update command** llm: I need to understand what the system architecture, OS, etc. is (get_system_info(endExecution=false)) get_system_info: returns the system info (since endExecution is false, which should be decided by the LLM, I will not return the system info to the user and end the command; instead I pass that response back to the LLM, and the LLM takes over next) llm: the function's return value is passed to the LLM; the LLM now knows the system is Ubuntu using apt, so the update command is sudo apt update, and it will either be returned to the user or passed to terminal_call with the command. Assume for now it's returned to the user, so at the end the LLM will say: To update your system, please run sudo apt update in the command prompt. So I want to build a mini assistant that runs on my local system with a local LLM (Ollama interface), but I am struggling with switching back and forth to a tool and then having the LLM take over again. I am okay if each handover needs another LLM prompt execution. A minimal sketch of one way to wire this up is shown below.
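Since the question is essentially about the tool-calling round trip, here is a minimal sketch of one way to do it against a local Ollama server: send the tools with the chat request, execute whatever the model asks for locally, append the result as a tool message, and loop until the model answers without a tool call. The endpoint and response shape follow Ollama's /api/chat tool support as I understand it, and the model name and tool functions (get_current_time, get_battery_percentage) are placeholders, so treat this as an assumption-laden sketch rather than a definitive implementation.

```python
# Minimal sketch of a tool-call loop against a local Ollama server.
# Assumes Ollama's /api/chat endpoint with "tools" support and a model
# that can emit tool calls; the model name and the stub tool functions
# below are illustrative placeholders, not part of any library.
from datetime import datetime

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local endpoint
MODEL = "qwen2.5:7b"                            # placeholder model name

def get_current_time() -> str:
    return datetime.now().strftime("The time is %I:%M %p")

def get_battery_percentage() -> str:
    return "Current battery percentage is 15%"  # stub for illustration

LOCAL_TOOLS = {
    "get_current_time": get_current_time,
    "get_battery_percentage": get_battery_percentage,
}

TOOL_SPECS = [
    {"type": "function",
     "function": {"name": name, "description": name.replace("_", " "),
                  "parameters": {"type": "object", "properties": {}}}}
    for name in LOCAL_TOOLS
]

def ask(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        resp = requests.post(OLLAMA_URL, json={
            "model": MODEL, "messages": messages,
            "tools": TOOL_SPECS, "stream": False,
        }).json()
        msg = resp["message"]
        messages.append(msg)
        tool_calls = msg.get("tool_calls")
        if not tool_calls:           # model answered directly -> done
            return msg["content"]
        for call in tool_calls:      # run each requested tool locally
            name = call["function"]["name"]
            result = LOCAL_TOOLS[name]()
            # feed the tool result back so the model can take over again
            messages.append({"role": "tool", "content": result})

if __name__ == "__main__":
    print(ask("What is the time now?"))
```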
2025-06-01T10:50:20
https://www.reddit.com/r/LocalLLaMA/comments/1l0m2yd/how_to_execute_commands_by_llm_or_how_to_switch/
InsideResolve4517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0m2yd
false
null
t3_1l0m2yd
/r/LocalLLaMA/comments/1l0m2yd/how_to_execute_commands_by_llm_or_how_to_switch/
false
false
self
0
null
104k-Token Prompt in a 110k-Token Context with DeepSeek-R1-0528-UD-IQ1_S – Benchmark & Impressive Results
132
The Prompt: https://thireus.com/REDDIT/DeepSeek_Runescape_Massive_Prompt.txt (Firefox: View -> Repair Text Encoding) The Command (on Windows): ``` perl -pe 's/\n/\\n/' DeepSeek_Runescape_Massive_Prompt.txt | CUDA_DEVICE_ORDER=PCI_BUS_ID CUDA_VISIBLE_DEVICES=0,2,1 ~/llama-b5355-bin-win-cuda12.4-x64/llama-cli -m DeepSeek-R1-0528-UD-IQ1_S-00001-of-00004.gguf -t 36 --ctx-size 110000 -ngl 62 --flash-attn --main-gpu 0 --no-mmap --mlock -ot ".ffn_(up|down)_exps.=CPU" --simple-io ``` The Answer (first time I see a model provide such a good answer): https://thireus.com/REDDIT/DeepSeek_Runescape_Massive_Prompt_Answer.txt The Hardware: ``` i9-7980XE - 4.2Ghz on all cores 256GB DDR4 F4-3200C14Q2-256GTRS - XMP enabled 1x 5090 (x16) 1x 3090 (x16) 1x 3090 (x8) Prime-X299-A-II ``` The benchmark: ``` llama_perf_sampler_print: sampling time = 608.32 ms / 106524 runs ( 0.01 ms per token, 175112.36 tokens per second) llama_perf_context_print: load time = 190451.73 ms llama_perf_context_print: prompt eval time = 5188938.33 ms / 104276 tokens ( 49.76 ms per token, 20.10 tokens per second) llama_perf_context_print: eval time = 577349.77 ms / 2248 runs ( 256.83 ms per token, 3.89 tokens per second) llama_perf_context_print: total time = 5768493.07 ms / 106524 tokens llama_perf_sampler_print: sampling time = 608.32 ms / 106524 runs ( 0.01 ms per token, 175112.36 tokens per second) llama_perf_context_print: load time = 190451.73 ms llama_perf_context_print: prompt eval time = 5188938.33 ms / 104276 tokens ( 49.76 ms per token, 20.10 tokens per second) llama_perf_context_print: eval time = 577349.77 ms / 2248 runs ( 256.83 ms per token, 3.89 tokens per second) llama_perf_context_print: total time = 5768493.22 ms / 106524 tokens ``` The questions: - Would 1x RTX PRO 6000 Blackwell or even 2x RTX PRO 6000 Blackwell significantly improve these metrics without any other hardware upgrade? (knowing that there would still be CPU offloading) - Would a different CPU, motherboard and RAM improve these metrics? - How to significantly improve prompt processing speed?
2025-06-01T11:00:46
https://www.reddit.com/r/LocalLLaMA/comments/1l0m8r0/104ktoken_prompt_in_a_110ktoken_context_with/
Thireus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0m8r0
false
null
t3_1l0m8r0
/r/LocalLLaMA/comments/1l0m8r0/104ktoken_prompt_in_a_110ktoken_context_with/
false
false
self
132
null
Introducing an open source cross-platform graphical interface LLM client
32
Cherry Studio is a desktop client that supports multiple LLM providers, available on Windows, Mac, and Linux.
2025-06-01T11:26:40
https://github.com/CherryHQ/cherry-studio
Fun-Doctor6855
github.com
1970-01-01T00:00:00
0
{}
1l0mo90
false
null
t3_1l0mo90
/r/LocalLLaMA/comments/1l0mo90/introducing_an_open_source_crossplatform/
false
false
https://b.thumbs.redditm…yE--_UMR-aBw.jpg
32
{'enabled': False, 'images': [{'id': 'He5VG53rTBjWbNk1_UdCjYukNuT1UhGRClb6ecDAOwM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=108&crop=smart&auto=webp&s=6c9b9a17a1cba0f4382bf80f06bb3715c6dc44e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=216&crop=smart&auto=webp&s=92b27520516bfcbd53d1e6b9d990555d32df6c83', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=320&crop=smart&auto=webp&s=fc737b04eea752c74ff8754da1f1b9c156dae7b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=640&crop=smart&auto=webp&s=20d6c7b5164323d63cf76761c30754520702828d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=960&crop=smart&auto=webp&s=40deaab2f5568fda0e3869b437de4c0487e64d1e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=1080&crop=smart&auto=webp&s=e9c156ddae1ba7946ff4beec6c795a9807e2d39c', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?auto=webp&s=44f8fb8f5f5c2a78bc39c42aa63d438eea87881f', 'width': 1280}, 'variants': {}}]}
Setting up an AI to help prepare for a high difficulty oral questions test
1
[removed]
2025-06-01T11:50:19
https://www.reddit.com/r/LocalLLaMA/comments/1l0n2ee/setting_up_an_ai_to_help_prepare_for_a_high/
FinancialMechanic853
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0n2ee
false
null
t3_1l0n2ee
/r/LocalLLaMA/comments/1l0n2ee/setting_up_an_ai_to_help_prepare_for_a_high/
false
false
self
1
null
Which is the best uncensored model?
216
I wanted to learn ethical hacking. I tried dolphin-mistral-r1; it did answer, but its answers were bad. Are there any good uncensored models?
2025-06-01T11:55:48
https://www.reddit.com/r/LocalLLaMA/comments/1l0n5ta/which_is_the_best_uncensored_model/
BoJackHorseMan53
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0n5ta
false
null
t3_1l0n5ta
/r/LocalLLaMA/comments/1l0n5ta/which_is_the_best_uncensored_model/
false
false
self
216
null
Experimenting with Autonomous AI Agents in Continuous Thinking Loops
1
[removed]
2025-06-01T12:26:49
https://www.reddit.com/r/LocalLLaMA/comments/1l0nqjw/experimenting_with_autonomous_ai_agents_in/
Wise-Increase1493
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0nqjw
false
null
t3_1l0nqjw
/r/LocalLLaMA/comments/1l0nqjw/experimenting_with_autonomous_ai_agents_in/
false
false
self
1
{'enabled': False, 'images': [{'id': '8bc_BXyGH8x4MjD2x6QM1kNDEKz4aOikVHQjwVa9SiM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?width=108&crop=smart&auto=webp&s=5ea6cbd37c8a0ce7588dec4e7ba1645c95996115', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?width=216&crop=smart&auto=webp&s=7fd659a75e222c8f502706d7e66053ae8f508380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?width=320&crop=smart&auto=webp&s=029892249baddd63313f1f0ce716852290d2cae3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?width=640&crop=smart&auto=webp&s=96546f527d74bf804325c6441d0065235396d983', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?width=960&crop=smart&auto=webp&s=2cc89f1d4f10a9ef96db39798609761c632f3e8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?width=1080&crop=smart&auto=webp&s=65a7209df984a2d69c11e61df939c5402137bd77', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tmLoYYQYDCayAdZ_iZEFCVqzbwogZryKK9HkTrenoPg.jpg?auto=webp&s=a8b714715d46ddb65ea44b8ff3708a619abeceb1', 'width': 1200}, 'variants': {}}]}
Let's build a production level Small Language Model (SLM) from scratch | 3 hour workshop
196
https://preview.redd.it/…ensive dataset.
2025-06-01T13:34:12
https://www.reddit.com/r/LocalLLaMA/comments/1l0p3et/lets_build_a_production_level_small_language/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0p3et
false
{'oembed': {'author_name': 'Vizuara', 'author_url': 'https://www.youtube.com/@vizuara', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pOFcwcwtv3k?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build a Small Language Model (SLM) From Scratch"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/pOFcwcwtv3k/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build a Small Language Model (SLM) From Scratch', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1l0p3et
/r/LocalLLaMA/comments/1l0p3et/lets_build_a_production_level_small_language/
false
false
https://b.thumbs.redditm…Zzbh0uiM6DdM.jpg
196
{'enabled': False, 'images': [{'id': 'oVEwtSXuv3g7GOlMqGljKa2WWMnZtnzafggxen7gFSg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3QugVQO6P_Q3v0881CbP7ispW7LV5z9hQhVFGV8ZV58.jpg?width=108&crop=smart&auto=webp&s=a2b034196ef61c6b003d6df44caff39ccd200871', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/3QugVQO6P_Q3v0881CbP7ispW7LV5z9hQhVFGV8ZV58.jpg?width=216&crop=smart&auto=webp&s=16bbe353a8e9fe088ae7379abe8d4e9e8ecf3172', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/3QugVQO6P_Q3v0881CbP7ispW7LV5z9hQhVFGV8ZV58.jpg?width=320&crop=smart&auto=webp&s=64639bca07382b454fb4ec613939209217564782', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/3QugVQO6P_Q3v0881CbP7ispW7LV5z9hQhVFGV8ZV58.jpg?auto=webp&s=1c162842401611408de626c77786838589581ed1', 'width': 480}, 'variants': {}}]}
Has anyone had a play around with the new Google AI edge local models on Android? I tried one and it was not bad.
2
2025-06-01T14:19:00
https://github.com/google-ai-edge/gallery
mintybadgerme
github.com
1970-01-01T00:00:00
0
{}
1l0q2b5
false
null
t3_1l0q2b5
/r/LocalLLaMA/comments/1l0q2b5/has_anyone_had_a_play_around_with_the_new_google/
false
false
https://b.thumbs.redditm…dUw9EAxx2HVs.jpg
2
{'enabled': False, 'images': [{'id': 'tWqFFTtW1YjAoWlH44lH9wTxrW0TFs0PxgzHtrKYS6Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?width=108&crop=smart&auto=webp&s=2e7234cf12e391aa62e475715d73244f9fe6b382', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?width=216&crop=smart&auto=webp&s=a9cd609c909767bcb4ddcff53a6d4705b786affb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?width=320&crop=smart&auto=webp&s=28ea744685b3789c818da1b558d03a273fc22610', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?width=640&crop=smart&auto=webp&s=768583d468dd93eef06bb92c436c4988834ac16b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?width=960&crop=smart&auto=webp&s=7cbed03797513cb042df9b147d1bf73cda0d9709', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?width=1080&crop=smart&auto=webp&s=72c3b156833bfdaa328eaf37aff77cfa8f449b6d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jVlNbJ79j9fiep4k95fdhipmGPj308uU_Xqc9jKZyRg.jpg?auto=webp&s=3fa4136900d402dc0f41fe6db3820c7d55018609', 'width': 1200}, 'variants': {}}]}
DeepSeek-R1-0528-UD-Q6-K-XL on 10 Year Old Hardware
225
Don't expect anything useful in this post. I did it just to see if it was possible. This was on a 10+ year old system with a 6th generation i5 with 12gb of RAM. My ssd is nearly full so I had to mount an external 8TB USB drive to store the 560GB model. At least it is USB-3. I made an 800GB swap file and enabled it, then launched llama-cli with a simple prompt and went to bed. I half expected that the model might not even have fully loaded when I got up but it was already part way through the response. With no GPU, it seems to be about seven minutes per token.
2025-06-01T14:19:51
https://www.reddit.com/r/LocalLLaMA/comments/1l0q2zk/deepseekr10528udq6kxl_on_10_year_old_hardware/
Simusid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0q2zk
false
null
t3_1l0q2zk
/r/LocalLLaMA/comments/1l0q2zk/deepseekr10528udq6kxl_on_10_year_old_hardware/
false
false
self
225
null
Where can I share prompts I've written?
0
I've often written a roleplaying prompt for silliness and just to mess around, only to write the same one again months later. I don't typically like to keep them on my PC, since I'd rather not keep NSFW prompts there; I just don't want to. Is there a place I can share them with others, like a library or something?
2025-06-01T14:23:03
https://www.reddit.com/r/LocalLLaMA/comments/1l0q5iy/where_can_i_share_prompts_ive_written/
intimate_sniffer69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0q5iy
false
null
t3_1l0q5iy
/r/LocalLLaMA/comments/1l0q5iy/where_can_i_share_prompts_ive_written/
false
false
nsfw
0
null
TTS support in llama.cpp?
8
I know I can do this (using `OuteTTS-0.2-500M`): llama-tts -m OuteTTS-0.2-500M-Q4_K_M.gguf -mv WavTokenizer-Large-75-F16.gguf -p "Hello" ... and get an `output.wav` audio file that I can play back with any terminal audio player, like: - aplay - play (sox) - paplay - mpv - ffplay --- Does llama-tts support any other TTS? --- I saw some PRs on GitHub for: - OuteTTS0.3 - OuteTTS1.0 - OrpheusTTS - SparkTTS But none of those work for me.
2025-06-01T14:30:47
https://www.reddit.com/r/LocalLLaMA/comments/1l0qbot/tts_support_in_llamacpp/
Disonantemus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0qbot
false
null
t3_1l0qbot
/r/LocalLLaMA/comments/1l0qbot/tts_support_in_llamacpp/
false
false
self
8
null
App-Use : Create virtual desktops for AI agents to focus on specific apps.
53
App-Use lets you scope agents to just the apps they need. Instead of full desktop access, say "only work with Safari and Notes" or "just control iPhone Mirroring" - visual isolation without new processes for perfectly focused automation. Running computer-use on the entire desktop often causes agent hallucinations and loss of focus when they see irrelevant windows and UI elements. App-Use solves this by creating composited views where agents only see what matters, dramatically improving task completion accuracy Currently macOS-only (Quartz compositing engine). Read the full guide: https://trycua.com/blog/app-use Github : https://github.com/trycua/cua
2025-06-01T14:46:57
https://v.redd.it/v0fcznj6wb4f1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1l0qp75
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/v0fcznj6wb4f1/DASHPlaylist.mpd?a=1751381232%2CNTRlYmU0MTg2MzU0NTQ4MTY3OTVkMGE3NWI5MzFhNjc2NWYxYjU0NWI3YWUyYWZiNWUzYzUwMzdmODA4ZGEzMQ%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/v0fcznj6wb4f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/v0fcznj6wb4f1/HLSPlaylist.m3u8?a=1751381232%2CZWQ2ZDNlYmU4N2QzODkwMDQ2OTNhOTNmYTYzOTQ4NmFmMDU1NTdkZmQwODE1OThhMGRlYTNmYWVjMzZiYjEwNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/v0fcznj6wb4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1732}}
t3_1l0qp75
/r/LocalLLaMA/comments/1l0qp75/appuse_create_virtual_desktops_for_ai_agents_to/
false
false
https://external-preview…39f0736b4a84fcfb
53
{'enabled': False, 'images': [{'id': 'ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?width=108&crop=smart&format=pjpg&auto=webp&s=a78df13b1398a355f7fbaeff03bda8b127ab7f53', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?width=216&crop=smart&format=pjpg&auto=webp&s=baa9cd7ef0143d5ab95e95d62f135e20489aca7d', 'width': 216}, {'height': 199, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?width=320&crop=smart&format=pjpg&auto=webp&s=933ac3e00a95909aca0e2b6902ec71a254d59186', 'width': 320}, {'height': 399, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?width=640&crop=smart&format=pjpg&auto=webp&s=82158c89f5300ae4feca05410dd798ba888fbf4c', 'width': 640}, {'height': 598, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?width=960&crop=smart&format=pjpg&auto=webp&s=ef808913a7d61e2c1b7e6fc7396fc4ad338fb020', 'width': 960}, {'height': 673, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c04d977b615e4b90a166f454105def7624f8de8a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejV6cmV3ODZ3YjRmMYsTHh_R0WswrUJBBa-0t3y7YsS9UlwJcbvZWkm9vo2Y.png?format=pjpg&auto=webp&s=af6067dc6e82069efe319c1ed66028067fed1725', 'width': 1732}, 'variants': {}}]}
Seeking Community Review: Documented Evidence of AI Identity Persistence Across Instances
1
[removed]
2025-06-01T14:54:57
https://www.reddit.com/r/LocalLLaMA/comments/1l0qvsk/seeking_community_review_documented_evidence_of/
PotentialCraft3781
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0qvsk
false
null
t3_1l0qvsk
/r/LocalLLaMA/comments/1l0qvsk/seeking_community_review_documented_evidence_of/
false
false
self
1
null
Recommended setup for local LLMs
7
I'm currently running a PC with an i7-8700K, 32GB of memory, and an Nvidia 4070, and it is clearly not fit for my needs (coding TypeScript, Python, and LLMs). However, I haven't found good resources on what I should upgrade next. My options at the moment are: - Mac Studio M3 Ultra with 96GB unified memory (or 256GB if I manage to pay for it) - Mac Studio M4 Max with 128GB - PC with a 9950X3D, 128GB of DDR5, and an Nvidia 5090 - Upgrading just the GPU on my current PC, but I don't think that makes sense as the maximum RAM is still 32GB - Making a frankenstein budget option out of the extra hardware I have around, buying the parts I don't have, leading to a PC with a 5950X, 128GB of DDR4, and a 1080 Ti with 12GB of VRAM. That is the most budget-friendly option here, but I'm afraid it will be even slower, and the case is too small to fit the 4070 from the other PC I have. It would, however, run Roo Code or Cursor (which would be needed unless I get a new GPU, or a Mac I guess) just fine. With my current system the biggest obstacle is that inference speed is very slow on models larger than 8B parameters (like 2-8 tokens/second after thinking for minutes). What would be the most practical way of running larger models, and faster? You can also recommend surprise combinations if you come up with any, such as some Mac Mini configuration if the M4 Pro is fast enough for this. Also, the 8B models (and smaller) have been so inaccurate that they've been effectively useless, forcing me to use Cursor, which I don't exactly love either as it clears its context window constantly and I'd have to start again. Note that 2nd-hand computers cost the same or more than new ones due to sky-high demand because of sky-high unemployment and the oncoming implosion of the economic system. I'm out of options there unless you can give me good European retailers that ship abroad. Also, I have a large Proxmox cluster that has everything I need except what I've mentioned here (database servers, dev environments, whatever I need), so that is taken care of.
2025-06-01T15:14:12
https://www.reddit.com/r/LocalLLaMA/comments/1l0rcin/recommended_setup_for_local_llms/
pioni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0rcin
false
null
t3_1l0rcin
/r/LocalLLaMA/comments/1l0rcin/recommended_setup_for_local_llms/
false
false
self
7
null
Sharing my tool for easy handwritten fine-tuning dataset creation: supports multiple formats, token counting & auto saving!
1
[removed]
2025-06-01T15:16:19
https://www.reddit.com/r/LocalLLaMA/comments/1l0regm/sharing_my_tool_for_easy_handwritten_finetuning/
abaris243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0regm
false
null
t3_1l0regm
/r/LocalLLaMA/comments/1l0regm/sharing_my_tool_for_easy_handwritten_finetuning/
false
false
self
1
{'enabled': False, 'images': [{'id': 'UjweHFlBfjtq-qgJURLZe74ot5ARI6AHWtzN7VjFiRs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pRdGI5JF11YeJd2mj6iu585KhAnrYcxq8kgOMs8jPnc.jpg?width=108&crop=smart&auto=webp&s=5704e06d9310b4293e014267081165563e8bbeda', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/pRdGI5JF11YeJd2mj6iu585KhAnrYcxq8kgOMs8jPnc.jpg?width=216&crop=smart&auto=webp&s=9322cdae1a3cfea6603c3694657e28312f8dee9e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/pRdGI5JF11YeJd2mj6iu585KhAnrYcxq8kgOMs8jPnc.jpg?width=320&crop=smart&auto=webp&s=c448ac79286c91e904a3977d25308ca790b793ca', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/pRdGI5JF11YeJd2mj6iu585KhAnrYcxq8kgOMs8jPnc.jpg?auto=webp&s=94c5d8559331ed84d92207cf0e3ab1baa30f799c', 'width': 480}, 'variants': {}}]}
Old dual socket Xeon server with tons of RAM viable for LLM inference?
22
I was looking into maybe getting a used 2-socket LGA 3647 board and some Xeons with loads of RAM (256GB+). I don't need insane speeds, but it shouldn't take hours either. It seems a lot more affordable per GB than Apple silicon and of course VRAM, but I feel like it might be too slow to really be viable, or just plain not worth it.
2025-06-01T15:35:57
https://www.reddit.com/r/LocalLLaMA/comments/1l0rvqr/old_dual_socket_xeon_server_with_tons_of_ram/
jojokingxp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0rvqr
false
null
t3_1l0rvqr
/r/LocalLLaMA/comments/1l0rvqr/old_dual_socket_xeon_server_with_tons_of_ram/
false
false
self
22
null
Qwenlong L1 long-context models
0
Wondering if anyone knows when we may get these to download? https://venturebeat.com/ai/qwenlong-l1-solves-long-context-reasoning-challenge-that-stumps-current-llms/
2025-06-01T16:31:02
https://www.reddit.com/r/LocalLLaMA/comments/1l0t7sz/qwenlong_l1_longcontext_models/
Willdudes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0t7sz
false
null
t3_1l0t7sz
/r/LocalLLaMA/comments/1l0t7sz/qwenlong_l1_longcontext_models/
false
false
self
0
{'enabled': False, 'images': [{'id': 'OD0UowLO7TYGjjjJRgA6lMym9726ap7GK-CiZaFcLL4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/P5U5i7XEhlSFrbDw6--sM_CkPbSnuyKhvI5ij6T5Lr0.jpg?width=108&crop=smart&auto=webp&s=1e4c75368a860bf5187b2c49f94257d087399c64', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/P5U5i7XEhlSFrbDw6--sM_CkPbSnuyKhvI5ij6T5Lr0.jpg?width=216&crop=smart&auto=webp&s=263ba72073331dad6786d7efdb9fb3a7b794d635', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/P5U5i7XEhlSFrbDw6--sM_CkPbSnuyKhvI5ij6T5Lr0.jpg?width=320&crop=smart&auto=webp&s=5aec754670538a25298142b4152d8167fca80623', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/P5U5i7XEhlSFrbDw6--sM_CkPbSnuyKhvI5ij6T5Lr0.jpg?width=640&crop=smart&auto=webp&s=2e20a1c876a0699f60453c0c5fb60a27a95b58c0', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/P5U5i7XEhlSFrbDw6--sM_CkPbSnuyKhvI5ij6T5Lr0.jpg?width=960&crop=smart&auto=webp&s=0aab7ee6abc6d669753cc5aeb180b999e67f3719', 'width': 960}], 'source': {'height': 574, 'url': 'https://external-preview.redd.it/P5U5i7XEhlSFrbDw6--sM_CkPbSnuyKhvI5ij6T5Lr0.jpg?auto=webp&s=59aa473793a32e2c87cce1bc189bef423c498bb2', 'width': 1024}, 'variants': {}}]}
DeepSeek-R1-0528-Distill-Devstral Needs to Happen!
1
DeepSeek-R1-0528-Distill-Devstral Needs to Happen!
2025-06-01T16:31:28
https://www.reddit.com/r/LocalLLaMA/comments/1l0t86p/deepseekr10528distilldevstral_needs_to_happen/
Libertumi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0t86p
false
null
t3_1l0t86p
/r/LocalLLaMA/comments/1l0t86p/deepseekr10528distilldevstral_needs_to_happen/
false
false
self
1
null
DeepSeek-R1-0528-Distill-Devstral Needs to Happen!
1
Someone Should Make DeepSeek-R1-0528-Distill-Devstral, That Would Be Sick!
2025-06-01T16:32:12
https://www.reddit.com/r/LocalLLaMA/comments/1l0t8tz/deepseekr10528distilldevstral_needs_to_happen/
Libertumi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0t8tz
false
null
t3_1l0t8tz
/r/LocalLLaMA/comments/1l0t8tz/deepseekr10528distilldevstral_needs_to_happen/
false
false
self
1
null
I built a lightweight, private, MCP server to share context between AI tools
1
Hey guys, I have seen a few projects similar to mine lately, so I decided to open source mine ASAP. I wanted to make a service that persists context and can recall it across any AI tools. I also want it to be a way to persist your digital life and semantically search it, all self-hosted. **One thing I saw lacking in a few other alternatives is re-embedding**. If you change your preferred model, the next startup will automatically re-embed all documents for you. As for how it works: if I read a website about presidents, I can say "recall documents about government" in my AI tool of choice, and it would be recalled, despite an exact text match not existing. I am currently building **Obsidian and browser extensions** to progress towards automatically ingesting any content for later retrieval. You can bring your own AI service. I recommend Ollama or LM Studio, but you can connect it to OpenAI or any other embedding service. For AI and coding specifically, **there are getContext and setContext key / value tools that the MCP server adds**. You can imagine saving your project information, like which package managers to use, in here at any time, and then have any AI tool add it to the prompt afterwards. Some examples using Cline and Claude desktop can be found [at the bottom of the readme](https://github.com/zackify/revect?tab=readme-ov-file#-examples). This service uses SQLite, so it's incredibly simple, and **only takes up 90mb** for a fully complete docker container. **This means you can query your data easily, or back it up by mounting the container to an iCloud drive or Dropbox folder for example.** I have a cloud version I will launch soon, so it's easy to share this between teams. Most of the examples I have seen currently use multiple services and many more resources to do the same thing. Let me know what you all think; the repo can be found here: [https://github.com/zackify/revect](https://github.com/zackify/revect)
2025-06-01T17:17:13
https://www.reddit.com/r/LocalLLaMA/comments/1l0uccd/i_built_a_lightweight_private_mcp_server_to_share/
coding9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0uccd
false
null
t3_1l0uccd
/r/LocalLLaMA/comments/1l0uccd/i_built_a_lightweight_private_mcp_server_to_share/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wxX4QQJ-CAB3b-9UI5nsVGnGl38LHuQnlTGweQxBSuE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?width=108&crop=smart&auto=webp&s=927c3cb8a52320ca0049f8b2efe5c27dd0205612', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?width=216&crop=smart&auto=webp&s=5f6513a24f2808045dfb7c5195bf4ec08f80f3af', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?width=320&crop=smart&auto=webp&s=f09de68080c89ad1dedec741f8358b93474c2d87', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?width=640&crop=smart&auto=webp&s=f5deb5836326c821dde4f759880be15cf4100dcd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?width=960&crop=smart&auto=webp&s=63d78c94cf35a9d77f40f1c48ea31532cea2f16d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?width=1080&crop=smart&auto=webp&s=af6ad0ca9bfc10a6c28e9368c58df83b1eed4e4f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/17QAaG2sc_Jj-mAuaZFCL65lw4-LREHFyg_6011URI8.jpg?auto=webp&s=15ecd170e6bebf65361cf96fd90dd6249808271d', 'width': 1200}, 'variants': {}}]}
Has anyone tried Lobe-Chat?
1
[removed]
2025-06-01T17:18:30
https://www.reddit.com/r/LocalLLaMA/comments/1l0udh0/has_anyone_tried_lobechat/
AlexM4H
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0udh0
false
null
t3_1l0udh0
/r/LocalLLaMA/comments/1l0udh0/has_anyone_tried_lobechat/
false
false
self
1
{'enabled': False, 'images': [{'id': 'mpEFEY8JAlYVUVMYAueCra5ioNR_ClnoM09nfcnumOw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?width=108&crop=smart&auto=webp&s=8c6a0faec435ec7b16d1b3c2a454ef58231c9463', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?width=216&crop=smart&auto=webp&s=1b1a4c270f8d1eab11d4dcf344f492855227b4f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?width=320&crop=smart&auto=webp&s=8fd94b93c9f3946c8a2b55e308ba269911934534', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?width=640&crop=smart&auto=webp&s=3be24a4e975edb1835f62e79a85c64960755ac46', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?width=960&crop=smart&auto=webp&s=6daec9ab14f26ebbece82473d5d1998d64dba3ad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?width=1080&crop=smart&auto=webp&s=987c74ae6f9a381dfeb5f254393d458c43b0b1ff', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/plkbrec3Fo9yaXVRJFFWt9CZ99qAP6mS5V0-g1jwMks.jpg?auto=webp&s=add0bd945bfa14a2880df14064eaabdcf5d8405f', 'width': 1280}, 'variants': {}}]}
ollama-multirun: A bash shell script to run a single prompt against all your locally installed ollama models, saving the output and performance statistics as easily navigable web pages.
1
[removed]
2025-06-01T17:48:12
https://www.reddit.com/gallery/1l0v3w0
shared-media
reddit.com
1970-01-01T00:00:00
0
{}
1l0v3w0
false
null
t3_1l0v3w0
/r/LocalLLaMA/comments/1l0v3w0/ollamamultirun_a_bash_shell_script_to_run_a/
false
false
https://b.thumbs.redditm…4zF8eajn_uEU.jpg
1
null
Baby Voice TTS? Kokoro or F5 or any good? I really want laughing and normal voices
0
Looking for a TTS that can create voices like those of 4-8 year old babies or children. Kokoro doesn't have such voices.
2025-06-01T17:53:57
https://www.reddit.com/r/LocalLLaMA/comments/1l0v8wt/baby_voice_tts_kokoro_or_f5_or_any_good_i_really/
jadhavsaurabh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0v8wt
false
null
t3_1l0v8wt
/r/LocalLLaMA/comments/1l0v8wt/baby_voice_tts_kokoro_or_f5_or_any_good_i_really/
false
false
self
0
null
I made a simple tool to test/compare your local LLMs on AIME 2024
47
I made [LocalAIME](https://github.com/Belluxx/LocalAIME), a simple tool that tests one or many LLMs locally or through an API (you can use any OpenAI-compatible API) on AIME 2024. It is pretty useful for testing different quants of the same model or the same quant from different providers. [Performance of some models I tested for each AIME 2024 problem](https://preview.redd.it/r0xk016htc4f1.png?width=4900&format=png&auto=webp&s=5fbfc8a2d435ef0fe50a7ed0dab250cdc03e6f2c) Let me know what you think about it!
2025-06-01T17:54:01
https://www.reddit.com/r/LocalLLaMA/comments/1l0v8yq/i_made_a_simple_tool_to_testcompare_your_local/
EntropyMagnets
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0v8yq
false
null
t3_1l0v8yq
/r/LocalLLaMA/comments/1l0v8yq/i_made_a_simple_tool_to_testcompare_your_local/
false
false
https://b.thumbs.redditm…DgIRGgdMv4Qc.jpg
47
{'enabled': False, 'images': [{'id': 'DsEhjmQ5Kl6ySNdivTOfWdAkiX0u-UrmwagwKWDzL4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?width=108&crop=smart&auto=webp&s=00dd04b0e8977332e6e19735c2514f614e5d1c70', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?width=216&crop=smart&auto=webp&s=311acca008097e9bc5c6e79c2ab21ca73667140a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?width=320&crop=smart&auto=webp&s=850d9cd405ef6857537675b1507b9acd29fa2048', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?width=640&crop=smart&auto=webp&s=3e1cf6849b57a4c81ac6a807fbf541e56f6b4544', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?width=960&crop=smart&auto=webp&s=137ecf38a774e25d2774701bfd6678208609379f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?width=1080&crop=smart&auto=webp&s=ac60013d2f9361708d3d046f540f4e7cac57217e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-Bks8K2_TljN7hLY0DvxIu9Ncpa8BzunHNO4VODMSAA.jpg?auto=webp&s=13ef6b4a44c5b39cfd26a5880f7eeab90e827b0e', 'width': 1200}, 'variants': {}}]}
24GB MacMini users, can you offload up to 24GB models to the GPU?
1
[removed]
2025-06-01T18:02:45
https://www.reddit.com/r/LocalLLaMA/comments/1l0vgvq/24gb_macmini_users_can_you_offload_up_to_24gb/
electricgoat01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0vgvq
false
null
t3_1l0vgvq
/r/LocalLLaMA/comments/1l0vgvq/24gb_macmini_users_can_you_offload_up_to_24gb/
false
false
self
1
null
I'm trying to make llm use the docker vnc computer but it's not working
1
[removed]
2025-06-01T18:02:59
https://i.redd.it/37s5eg5suc4f1.png
rodrigoandrigo
i.redd.it
1970-01-01T00:00:00
0
{}
1l0vh31
false
null
t3_1l0vh31
/r/LocalLLaMA/comments/1l0vh31/im_trying_to_make_llm_use_the_docker_vnc_computer/
false
false
https://b.thumbs.redditm…GDOP6I_3yxLE.jpg
1
{'enabled': True, 'images': [{'id': '1dcuALT7VjGZkzs391kXysPy1BOdyTb-0bvLnXiWLe8', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?width=108&crop=smart&auto=webp&s=9b26ecfcdd79c4b05064f90fdfb3260085306620', 'width': 108}, {'height': 124, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?width=216&crop=smart&auto=webp&s=7bdebe49348762c8ed8b77c3d89f2492d9907779', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?width=320&crop=smart&auto=webp&s=e31f063fe8fee5d502da3311afedf992ddbb7c41', 'width': 320}, {'height': 367, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?width=640&crop=smart&auto=webp&s=b0a9bec975d7804c6166d40c06b65a44733860cf', 'width': 640}, {'height': 551, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?width=960&crop=smart&auto=webp&s=d7efbaac0b6da7ff9253903b8981fbcb728a97c7', 'width': 960}, {'height': 620, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?width=1080&crop=smart&auto=webp&s=dbf2e63a3e15b0ef18af9f353cfb4f2d28bac853', 'width': 1080}], 'source': {'height': 1055, 'url': 'https://preview.redd.it/37s5eg5suc4f1.png?auto=webp&s=e86615b7a5c31de178c6edf1326dfdc2e4a8fc4c', 'width': 1837}, 'variants': {}}]}
Would a laptop iGPU + 64GB RAM be good for anything, speed wise?
12
VRAM is a big limiting factor for a lot of bigger models on most consumer GPUs. So, I was wondering if my iGPU (Ryzen 5 5600H) would be capable of running some models locally using RAM? Or do you think an M2 Mac machine with similar RAM would be significantly better?
2025-06-01T18:20:15
https://www.reddit.com/r/LocalLLaMA/comments/1l0vwc1/would_a_laptop_igpu_64gb_ram_be_good_for_anything/
ArsenicBismuth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0vwc1
false
null
t3_1l0vwc1
/r/LocalLLaMA/comments/1l0vwc1/would_a_laptop_igpu_64gb_ram_be_good_for_anything/
false
false
self
12
null
WILL ANTHROPIC survive?
0
I am 100% sure that I am not the only one who feels like Anthropic might not make it to the point of AGI. Here's why I think that: - OpenAI is the most famous, and they just had a $500 billion investment (if I am not mistaken) - Gemini is obviously powered by Google (that should be enough to tell you how much potential Gemini has) - DeepSeek is backed by a freaking literal GOVERNMENT (the Chinese one), and even the WHOLE WORLD wants them to win the race - Grok is powered by the richest man, has access to all the content on X (Twitter), and is the most up-to-date LLM (I think they update it every week or three days) I am sure that I am not missing anything here.
2025-06-01T18:23:24
https://www.reddit.com/r/LocalLLaMA/comments/1l0vz1p/will_anthropic_survive/
Rare-Programmer-1747
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0vz1p
false
null
t3_1l0vz1p
/r/LocalLLaMA/comments/1l0vz1p/will_anthropic_survive/
false
false
self
0
null
Is multiple m3 ultras the move instead of 1 big one?
7
I am seriously considering investing in a sizable M3 Ultra Mac Studio. Looking through some of the benchmarks, it seems the M3 Ultras do well, but not as well in prompt processing speed. The comparisons from the 60-core to the 80-core seem to show a (surprisingly?) big boost from going up in GPU size. Given the low power usage, I think just getting more than one is a real option. However, I couldn't really find any comparisons of chained configurations, though I have seen videos of people doing it, especially with the previous model. If you are in the ~10k price range, I think it's worth considering different combos: one 80-core, 512GB RAM - ~$9.4k two 60-core, 256GB RAM each - ~$11k two 60-core, one with 256GB RAM and one with 96GB RAM - ~$9.6k three 60-core, 96GB RAM each - ~$12k Are you losing much performance by spreading things across 2 machines? I think the biggest issue will be the annoyance of administering 2+ boxes, and having different-sized boxes may be even more annoying. Anyone have any experience with this who can comment? Obviously the best setup is use-case dependent, but I am trying to understand what I might not be taking into account here...
2025-06-01T18:42:06
https://www.reddit.com/r/LocalLLaMA/comments/1l0wfln/is_multiple_m3_ultras_the_move_instead_of_1_big/
AcceptableBridge7616
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0wfln
false
null
t3_1l0wfln
/r/LocalLLaMA/comments/1l0wfln/is_multiple_m3_ultras_the_move_instead_of_1_big/
false
false
self
7
null
My Local LLM plan for academic editing help
0
Purchase a 512 GB Mac Studio. I have not chosen a model yet. I am not sure how large a model I will be able to fine tune, nor which model will be best. Run MLX. Fine tune the model on around 4 GB of previously edited files. I'm hoping Unsloth support comes soon, but I don't have high hopes. Hence the 512GB. Lots to learn here, I'm sure. I am aware that I will have to do a lot to prepare the data. I actually already started on that with some scripting. I feel comfortable building these scripts on cloud LLMs. I do not feel comfortable putting my life's work onto cloud LLMs. My editing is quite different from what ChatGPT and similar provide. Then I can generate edited files on demand as a service. I can also have employees, who are not as good at the editing, use the editing generated as a reasonable guide. It may find things they missed. This will mean less employee training needed and more catching of significant issues in the writing. I know that a Mac will be far slower than an NVIDIA box, but nothing has to be generated real time. 32k should be more than enough for context, as the files are generally pretty small. 8k will usually be more than enough context when things are fine tuned. If the writing is about novels, can I add the novels as source information to the fine tuning instead of context? The novels are in the public domain. Thoughts? Recommendations?
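Since the plan leans on turning previously edited files into fine-tuning data, here is a hedged sketch of what that preparation step could look like: pairing each original file with its edited counterpart and writing chat-style JSONL records. The directory layout, instruction text, and record schema are illustrative assumptions; the exact format will depend on the fine-tuning tool (MLX, Unsloth, or otherwise) actually used.

```python
# Hedged sketch of the data-preparation step: pair each original file
# with its edited counterpart and emit one JSONL record per pair.
# Directory names, the instruction text, and the chat-style schema are
# illustrative assumptions, not a prescribed format.
import json
from pathlib import Path

ORIGINALS = Path("data/originals")   # assumed layout: matching filenames
EDITED = Path("data/edited")
OUT = Path("train.jsonl")

INSTRUCTION = "Edit the following academic text in my house style."

def build_dataset() -> int:
    count = 0
    with OUT.open("w", encoding="utf-8") as out:
        for orig_path in sorted(ORIGINALS.glob("*.txt")):
            edited_path = EDITED / orig_path.name
            if not edited_path.exists():
                continue  # skip files without an edited version
            record = {"messages": [
                {"role": "user",
                 "content": f"{INSTRUCTION}\n\n{orig_path.read_text(encoding='utf-8')}"},
                {"role": "assistant",
                 "content": edited_path.read_text(encoding="utf-8")},
            ]}
            out.write(json.dumps(record, ensure_ascii=False) + "\n")
            count += 1
    return count

if __name__ == "__main__":
    print(f"Wrote {build_dataset()} training pairs to {OUT}")
```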
2025-06-01T18:45:43
https://www.reddit.com/r/LocalLLaMA/comments/1l0wix3/my_local_llm_plan_for_academic_editing_help/
LeopardOrLeaveHer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0wix3
false
null
t3_1l0wix3
/r/LocalLLaMA/comments/1l0wix3/my_local_llm_plan_for_academic_editing_help/
false
false
self
0
null
3x Modded 4090 48GB or RTX Pro 6000?
13
I can source them for about the same price. I've heard there is an efficiency hit on multi-card setups with those modded 4090s. But 3 cards have 144GB of VRAM vs the RTX Pro's 96GB, and power consumption is comparable. Which route should I choose?
2025-06-01T19:05:50
https://www.reddit.com/r/LocalLLaMA/comments/1l0x0q8/3x_modded_4090_48gb_or_rtx_pro_6000/
sNullp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0x0q8
false
null
t3_1l0x0q8
/r/LocalLLaMA/comments/1l0x0q8/3x_modded_4090_48gb_or_rtx_pro_6000/
false
false
self
13
null
Vibe-code your own Static Site Generator (SSG)
0
Hi guys, recently I ran an experiment to vibe-code my own Static Site Generator (SSG) and the results were pretty good. I put together a blog post breaking down the whole process, plus I included the initial prompt so you can try it out yourself. Give it a shot and let me know how it goes!
2025-06-01T19:27:01
https://eug.github.io/posts/vibe-code-your-own-ssg.html
eugf_
eug.github.io
1970-01-01T00:00:00
0
{}
1l0xj42
false
null
t3_1l0xj42
/r/LocalLLaMA/comments/1l0xj42/vibecode_your_own_static_site_generator_ssg/
false
false
default
0
null
Toolcalling in the reasoning trace as an alternative to agentic frameworks
15
[Deep Reasoning With Tools: Toolcalling in the reasoning trace](https://2084.substack.com/p/deep-reasoning-with-tools-toolcalling) Hey, so I was working on training reasoning models to do interesting things, when I started wanting them to be more dynamic: not just predict based on static information but actively search the data space to get information. Thus I built this toolset to integrate toolcalling into the reasoning trace of the AI models, since then I could do wayyy more complex RL training to allow it to do stuff like reconciliation of accounts, or more complex trading. However, as I built it, I realized that its actually a nice alternative to traditional agentic frameworks - you don't have discrete steps so it can run as long or as short as you want, and it can be invoked with a single command versus having to handle multiple steps. Thoughts? What other weirder agentic frameworks have y'all seen?
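To make the idea concrete, here is an illustrative Python sketch (not the author's toolset) of tool calls embedded in the reasoning trace: generation stops at a tool-call tag, the tool runs locally, its result is spliced back into the trace, and generation resumes. It assumes a llama.cpp llama-server /completion endpoint and a model prompted to emit the made-up <tool>...</tool> tag format, so all names and tags here are placeholders.

```python
# Illustrative sketch of tool calls inside a reasoning trace.
# Assumes a llama.cpp server /completion endpoint accepting
# {"prompt", "stop", "n_predict"}, and a model prompted to emit
# <tool>{"name": ..., "args": {...}}</tool> blocks; the tag format
# and the stub tool are made up for illustration.
import json
import requests

SERVER = "http://localhost:8080/completion"

def lookup_price(symbol: str) -> str:
    return f"{symbol}: 101.3"  # stub tool for the example

TOOLS = {"lookup_price": lookup_price}

def reason_with_tools(question: str, max_rounds: int = 5) -> str:
    trace = f"Question: {question}\n<think>\n"
    for _ in range(max_rounds):
        resp = requests.post(SERVER, json={
            "prompt": trace,
            "stop": ["</tool>", "</think>"],  # pause at a tool call or the end
            "n_predict": 512,
        }).json()
        chunk = resp["content"]
        trace += chunk
        if "<tool>" not in chunk:             # no tool requested -> reasoning done
            break
        call = json.loads(chunk.split("<tool>", 1)[1])
        result = TOOLS[call["name"]](**call.get("args", {}))
        # splice the tool result back into the trace and keep reasoning
        trace += f"</tool>\n<result>{result}</result>\n"
    return trace

if __name__ == "__main__":
    print(reason_with_tools("Is ACME trading above 100?"))
```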
2025-06-01T19:40:13
https://www.reddit.com/r/LocalLLaMA/comments/1l0xubg/toolcalling_in_the_reasoning_trace_as_an/
ExaminationNo8522
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0xubg
false
null
t3_1l0xubg
/r/LocalLLaMA/comments/1l0xubg/toolcalling_in_the_reasoning_trace_as_an/
false
false
self
15
{'enabled': False, 'images': [{'id': '9a7ZCjCbYNIdu6GfaNVd7eVb-N5vwv7fmfApivsoKEQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?width=108&crop=smart&auto=webp&s=2ff2ae74fc25431ddfd5f2d07cab594f85e7d19c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?width=216&crop=smart&auto=webp&s=39d7f748a49adbad6ff83e446df3f31ce88fb184', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?width=320&crop=smart&auto=webp&s=71ebea66dd38d687e189c3618edd46fd0a2a79b2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?width=640&crop=smart&auto=webp&s=f4e25a943ba0721a9d1396e233dba13dcbcb2e0f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?width=960&crop=smart&auto=webp&s=2505250a500271941bebdd29131d038fce2a907a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?width=1080&crop=smart&auto=webp&s=aae6ca38a6e300a6274a358e3c446f58ff78a750', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h_H3spaBA4x7-OYC3lVy5l0SEXrU8crvHyV3haxB97Y.jpg?auto=webp&s=a4bbcf0db843d0760c05a7fa33a9ed48544cee9b', 'width': 1200}, 'variants': {}}]}
Allowing LLM to ponder in Open WebUI
261
**What is this?** A completely superficial way of letting an LLM ponder a bit before making its conversation turn. The process is streamed to an artifact within Open WebUI. [Code](https://github.com/av/harbor/blob/main/boost/src/modules/ponder.py)
2025-06-01T19:47:52
https://v.redd.it/uoeptbsbdd4f1
Everlier
/r/LocalLLaMA/comments/1l0y0wp/allowing_llm_to_ponder_in_open_webui/
1970-01-01T00:00:00
0
{}
1l0y0wp
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uoeptbsbdd4f1/DASHPlaylist.mpd?a=1751528877%2CYzllZGRmNjdmYWZkNGQ2YjUxNDEyMDY1OWE5Y2UxNWRhMWQ4NzJhZWQ4N2M0MTllMGYzMGEwYmM1MjczMzFmYQ%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/uoeptbsbdd4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/uoeptbsbdd4f1/HLSPlaylist.m3u8?a=1751528877%2CMjhiMDIxMGQ5MDYyOGU3YjNjYTc5MThjNWY4ZTcxOWU1YTc1ZTdiN2IzMzAxYzMwNmQ5MGI0OGYwODVmM2I2Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uoeptbsbdd4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1728}}
t3_1l0y0wp
/r/LocalLLaMA/comments/1l0y0wp/allowing_llm_to_ponder_in_open_webui/
false
false
https://external-preview…bf9c6a7fa5a9dcfa
261
{'enabled': False, 'images': [{'id': 'dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?width=108&crop=smart&format=pjpg&auto=webp&s=93b52068d1b79fa20ca68ed14fd0f9c0a3a6e25d', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?width=216&crop=smart&format=pjpg&auto=webp&s=762b22aed83ce68b24cc956db3bae7934675fe12', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?width=320&crop=smart&format=pjpg&auto=webp&s=74cf0dd9dc815f759519ecba4ca8838c88443ab6', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?width=640&crop=smart&format=pjpg&auto=webp&s=1e2866d472f86e8fab147a003f514b8bc7b2e78a', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?width=960&crop=smart&format=pjpg&auto=webp&s=1920315e9c94b0fecf31c3341237c15bba339233', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?width=1080&crop=smart&format=pjpg&auto=webp&s=222c728089cac8a903e29f46b371f5395820e222', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/dHd6NjY5c2JkZDRmMbDY_eAdKP8QUXyZwc-4j2cel9Olwb9ejqufCbXqijwB.png?format=pjpg&auto=webp&s=d8c8f33e1fd83d2966bf040894417bd06762febb', 'width': 1920}, 'variants': {}}]}
A Privacy-Focused Perplexity That Runs Locally on all your devices - iPhone, Android, iPad!
38
Hey r/LocalLlama community! Following up on my [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1ku1444/a_privacyfocused_perplexity_that_runs_locally_on/) - the response has been incredible! Thank you to everyone who tried it out, left reviews, and provided feedback. Based on your requests, I'm excited to announce that **MyDeviceAI is now available on iPad and Android**!

# iPad Support

* Full native iPad experience with optimized UI
* Same lightning-fast local processing with M-series chips

# Android Release

* Available as APK on GitHub releases (v1.2)
* Download link: [https://github.com/navedmerchant/MyDeviceAI/releases](https://github.com/navedmerchant/MyDeviceAI/releases)
* Same core features: local AI, SearXNG integration, complete privacy
* Works across a wide range of Android devices
* Runs on CPU only for now, working on getting Adreno GPU support in llama.rn

# What's Next?

I'm continuing to work on improvements based on your suggestions:

* Ability to select a larger model for powerful supported devices (Qwen 3 4b)
* Ability to add images and documents to the chat for supported devices (QwenVL support)
* Advanced speech mode on device
* Enhanced personalization features

# Download Links

* **iOS/iPad**: [MyDeviceAI on App Store](https://apps.apple.com/us/app/mydeviceai/id6736578281?platform=ipad)
* **Android**: [GitHub Releases v1.2](https://github.com/navedmerchant/MyDeviceAI/releases)
* **Source Code**: [GitHub Repository](https://github.com/navedmerchant/MyDeviceAI)

If you've been waiting for Android support or want to try it on iPad, now's your chance! As always, everything remains 100% free, open source, and completely private. Would love to hear your thoughts on the new platforms, and please consider leaving a review if MyDeviceAI has been useful for you. Your support helps tremendously with continued development!
2025-06-01T19:52:02
https://www.reddit.com/r/LocalLLaMA/comments/1l0y4ep/a_privacyfocused_perplexity_that_runs_locally_on/
Ssjultrainstnict
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0y4ep
false
null
t3_1l0y4ep
/r/LocalLLaMA/comments/1l0y4ep/a_privacyfocused_perplexity_that_runs_locally_on/
false
false
self
38
{'enabled': False, 'images': [{'id': '7-HFAtbo5I60W1_r4CgocNBdTzGwoEdGmG9vh0EFuog', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?width=108&crop=smart&auto=webp&s=8e6bc59f3f54bb8d3e4765d5d924ef62c20af88e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?width=216&crop=smart&auto=webp&s=a74b06a720ef95409d4996a1d4b78dd5812e0989', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?width=320&crop=smart&auto=webp&s=f9213de4e50881dfca2b9796ed4970633b58766b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?width=640&crop=smart&auto=webp&s=188bf6448fa531d8ba8230e1f6d776e2b58ba422', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?width=960&crop=smart&auto=webp&s=68ed96a41e049b0eb34cfe457c508fbfda56ad74', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?width=1080&crop=smart&auto=webp&s=62979d76588bc21b9e3d442600424c01be8087b4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-5lp9Z6W2XdT9qC73g8A5oiR5h73-k5h2BbRRn43laE.jpg?auto=webp&s=47ac07a48a04a24f59f66f245e7e34d40fda51f8', 'width': 1200}, 'variants': {}}]}
Llama.cpp - cache-type-k+cache-type-v+flash-attn too good to be true!?
1
[removed]
2025-06-01T20:04:36
https://www.reddit.com/r/LocalLLaMA/comments/1l0yf1d/llamacpp_cachetypekcachetypevflashattn_too_good/
cesarean722
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0yf1d
false
null
t3_1l0yf1d
/r/LocalLLaMA/comments/1l0yf1d/llamacpp_cachetypekcachetypevflashattn_too_good/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]}
Pure vs. merged - and a modern leaderboard
9
There's probably been discussion about this, but I've noticed the trained-in quirks of models diminish with merged models. (Can't tell with abliterated models since the only ones I've used are also merges.) Quirks include stubbornness in personality, a desire for consistency, sucking with certain formatting, etc. Yet we have no leaderboard [that I know of] that evaluates them anymore. Most leaderboards now are quite crippled in filtering, let alone in finding open models.

I'm trying to think of a way we could come up with basic, low-energy-use, community-based testing. It doesn't need to be exhaustive -- some small subsets of test types would likely be enough to compare base models against various merges. People can establish tests for honoring instruct, basic accuracy, math, function calling, whatever. (Models bad at something tend to show it quite rapidly in my own experience.)

Being community-based ("crowd-sourced"), the system could cross-reference users' results to give each ranking a reliability score. Users could get some type of reliability score as well (perhaps a rank/algorithm we work on over time) to try to mitigate weirdos manipulating results (though a model climbing high fraudulently would gain popularity and, thus, more scrutiny). Also, since the turnover of models is quite rapid, I'm not sure there's much risk in the system not being perfect anyway. (It should have proper filtering and sorting in the results, though!) What do you all think?
2025-06-01T20:12:09
https://www.reddit.com/r/LocalLLaMA/comments/1l0ylj8/pure_vs_merged_and_a_modern_leaderboard/
jaggzh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0ylj8
false
null
t3_1l0ylj8
/r/LocalLLaMA/comments/1l0ylj8/pure_vs_merged_and_a_modern_leaderboard/
false
false
self
9
null
Hello friends, a question about LLM model for 256 gb m3 ultra.
1
[removed]
2025-06-01T20:19:22
https://www.reddit.com/r/LocalLLaMA/comments/1l0yrs3/hello_friends_a_question_about_llm_model_for_256/
Mean_Bird_6331
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0yrs3
false
null
t3_1l0yrs3
/r/LocalLLaMA/comments/1l0yrs3/hello_friends_a_question_about_llm_model_for_256/
false
false
self
1
null
dsr1 0528 on ollama.com
0
is this misspelled on the repo? http://ollama.com/library/deepseek-r1:621b-2508-94K-M "2508"?
2025-06-01T20:45:24
https://www.reddit.com/r/LocalLLaMA/comments/1l0zefs/dsr1_0528_on_ollamacom/
neurostream
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0zefs
false
null
t3_1l0zefs
/r/LocalLLaMA/comments/1l0zefs/dsr1_0528_on_ollamacom/
false
false
self
0
null
I managed to integrate vision in a Desktop app.
1
[removed]
2025-06-01T20:58:57
https://www.reddit.com/r/LocalLLaMA/comments/1l0zq2m/i_manage_to_integrate_vision_in_a_desktop_app/
Trilogix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l0zq2m
false
null
t3_1l0zq2m
/r/LocalLLaMA/comments/1l0zq2m/i_manage_to_integrate_vision_in_a_desktop_app/
false
false
https://b.thumbs.redditm…aq4gS2DFNkCA.jpg
1
null
25L Portable NV-linked Dual 3090 LLM Rig
163
The main point of portability is that the workplace of the coworker I built this for is truly offline, with no potential for LAN or wifi, so to download new models and update the system periodically I need to go pick it up from him and take it home.

WARNING - these components don't fit if you try to copy this build. The bottom GPU is resting on the Arctic P12 Slim fans at the bottom of the case and pushing up on the GPU. Also, the top Arctic P14 Max fans don't have mounting points for half of their screw holes, and are held in place by being very tightly wedged against the motherboard, case, and PSU. Also, there's probably way too much pressure on the PCIe cables coming off the GPUs when you close the glass. Also, I had to daisy-chain the PCIe cables because the Corsair RM1200e only has four available on the PSU side and these particular EVGA 3090s require 3x 8-pin power. Allegedly it just enforces a hardware power limit of 300 W, but you should make it a little safer by also enforcing the 300 W power limit in nvidia-smi (a quick sketch of this is below the part list) to make sure the cards don't try to pull 450 W through 300 W pipes. Could have fit a bigger PSU, but then I wouldn't get that front fan, which is probably crucial.

All that being said, with a 300 W power limit applied to both GPUs and a silent fan profile, this rig has surprisingly good temperatures and noise levels considering how compact it is. During Cinebench 24 with both GPUs at 100% utilization, the CPU runs at 63 C and both GPUs at 67 C, somehow with almost zero gap between them and the glass closed, all while running at about 37 to 40 decibels from 1 meter away. During prompt processing and inference, the GPUs run at about 63 C, the CPU at 55 C, and noise at about 34 dB. Again, I don't understand why the temperatures for both are almost the same, when logically the top GPU should be much hotter. The only gap between the two GPUs is the size of one of those little silicone rubber DisplayPort caps wedged into the end, right between where the PCIe power cables connect, to force the GPUs apart a little.
Everything but the case, CPU cooler, and PSU was bought used on Facebook Marketplace.

[PCPartPicker Part List](https://pcpartpicker.com/list/nQXzgn)

Type|Item|Price
:----|:----|:----
**CPU** | [AMD Ryzen 7 5800X 3.8 GHz 8-Core Processor](https://pcpartpicker.com/product/qtvqqs/amd-ryzen-7-5800x-38-ghz-8-core-processor-100-100000063wof) | $160.54 @ Amazon
**CPU Cooler** | [ID-COOLING FROZN A720 BLACK 98.6 CFM CPU Cooler](https://pcpartpicker.com/product/HbDQzy/id-cooling-frozn-a720-black-986-cfm-cpu-cooler-frozn-a720-black) | $69.98 @ Amazon
**Motherboard** | [Asus ROG Strix X570-E Gaming ATX AM4 Motherboard](https://pcpartpicker.com/product/CLkgXL/asus-rog-strix-x570-e-gaming-atx-am4-motherboard-rog-strix-x570-e-gaming) | $559.00 @ Amazon
**Memory** | [Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4-3200 CL16 Memory](https://pcpartpicker.com/product/6rrcCJ/corsair-memory-cmk32gx4m2b3200c16) | $81.96 @ Amazon
**Storage** | [Samsung 980 Pro 1 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive](https://pcpartpicker.com/product/DDWBD3/samsung-980-pro-1-tb-m2-2280-nvme-solid-state-drive-mz-v8p1t0bam) | $149.99 @ Amazon
**Video Card** | [EVGA FTW3 ULTRA GAMING GeForce RTX 3090 24 GB Video Card](https://pcpartpicker.com/product/PG848d/evga-geforce-rtx-3090-24-gb-ftw3-ultra-gaming-video-card-24g-p5-3987-kr) | $750.00
**Video Card** | [EVGA FTW3 ULTRA GAMING GeForce RTX 3090 24 GB Video Card](https://pcpartpicker.com/product/PG848d/evga-geforce-rtx-3090-24-gb-ftw3-ultra-gaming-video-card-24g-p5-3987-kr) | $750.00
**Custom** | NVlink SLI bridge | $90.00
**Custom** | Mechanic Master c34plus | $200.00
**Custom** | Corsair RM1200e | $210.00
**Custom** | 2x Arctic p14 max, 3x p12, 3x p12 slim | $60.00
 | *Prices include shipping, taxes, rebates, and discounts* |
 | **Total** | **$3081.47**
 | Generated by [PCPartPicker](https://pcpartpicker.com) 2025-06-01 16:48 EDT-0400 |
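As mentioned above, the cards are also software-limited to 300 W. A minimal sketch of applying that limit from Python is below; the 300 W figure comes from the post, while the GPU indices are assumptions, and the command needs admin/root privileges.

```python
# Small sketch of enforcing the software power limit the post mentions, on top of
# the cards' own 300 W cap. GPU indices 0 and 1 are assumed; requires root/admin.
import subprocess

def set_power_limit(gpu_index: int, watts: int = 300) -> None:
    # Equivalent to running: nvidia-smi -i <index> -pl <watts>
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
        check=True,
    )

if __name__ == "__main__":
    for idx in (0, 1):  # both 3090s
        set_power_limit(idx, 300)
```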
2025-06-01T21:01:58
https://www.reddit.com/gallery/1l0zsv7
Special-Wolverine
reddit.com
1970-01-01T00:00:00
0
{}
1l0zsv7
false
null
t3_1l0zsv7
/r/LocalLLaMA/comments/1l0zsv7/25l_portable_nvlinked_dual_3090_llm_rig/
false
false
https://b.thumbs.redditm…O-Ud355bse9Q.jpg
163
null
Anyone using an open source framework to control LLM agent behavior more precisely?
1
[removed]
2025-06-01T21:12:42
https://www.reddit.com/r/LocalLLaMA/comments/1l102hk/anyone_using_an_open_source_framework_to_control/
Ecstatic-Cranberry90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l102hk
false
null
t3_1l102hk
/r/LocalLLaMA/comments/1l102hk/anyone_using_an_open_source_framework_to_control/
false
false
self
1
null
Any LLM benchmarks yet for the GMKTek EVO-X2 AMD Ryzen AI Max+ PRO 395?
13
Any LLM benchmarks yet for the GMKTek Evo-X2 AMD Ryzen AI Max+ PRO 395? I'd love to see latest benchmarks with ollama doing 30 to 100 GB models and maybe a lineup vs 4xxx and 5xxx Nvidia GPUs. Thanks!
2025-06-01T21:17:46
https://www.reddit.com/r/LocalLLaMA/comments/1l106wk/any_llm_benchmarks_yet_for_the_gmktek_evox2_amd/
StartupTim
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l106wk
false
null
t3_1l106wk
/r/LocalLLaMA/comments/1l106wk/any_llm_benchmarks_yet_for_the_gmktek_evox2_amd/
false
false
self
13
null
Context Window for Llama 4 New Meta API
0
Does anyone know what context window is supported for Llama 4 on the new Meta API? I can't find it.
2025-06-01T21:19:20
https://www.reddit.com/r/LocalLLaMA/comments/1l108b2/context_window_for_llama_4_new_meta_api/
Temporary-Koala-7370
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l108b2
false
null
t3_1l108b2
/r/LocalLLaMA/comments/1l108b2/context_window_for_llama_4_new_meta_api/
false
false
self
0
null
Connecting two 3090s
0
How can I connect two 3090s on consumer hardware? My motherboard supports x8/x8, and I have ample cooling. I was trying to connect them via an SLI/NVLink bridge, but I don't see many resources on the topic. I've read some mentions of SLI support being deprecated going forward, but I'm assuming it's still possible. I'm not interested in finding a different motherboard + CPU platform; I'm trying to work with what I've got.
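For anyone attempting the same setup, here is a small sanity-check sketch (not from the post) for verifying the link after installing a physical NVLink bridge. It assumes the NVIDIA driver and PyTorch are installed, and the device indices 0 and 1 are assumptions.

```python
# Quick checks after installing an NVLink bridge between two GPUs.
# Assumes the NVIDIA driver and PyTorch are installed; indices 0/1 are assumed.
import subprocess

import torch

# Driver-level view: prints per-link NVLink status for every GPU.
subprocess.run(["nvidia-smi", "nvlink", "--status"], check=True)

# Framework-level view: can GPU 0 and GPU 1 access each other's memory directly?
if torch.cuda.device_count() >= 2:
    print("peer access 0->1:", torch.cuda.can_device_access_peer(0, 1))
    print("peer access 1->0:", torch.cuda.can_device_access_peer(1, 0))
```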
2025-06-01T21:31:20
https://www.reddit.com/r/LocalLLaMA/comments/1l10im3/connecting_two_3090s/
elchurnerista
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l10im3
false
null
t3_1l10im3
/r/LocalLLaMA/comments/1l10im3/connecting_two_3090s/
false
false
self
0
null
Sharing my tool for easy handwritten fine-tuning dataset creation: supports multiple formats, token counting & auto saving!
1
[removed]
2025-06-01T21:54:05
[deleted]
1970-01-01T00:00:00
0
{}
1l111qk
false
null
t3_1l111qk
/r/LocalLLaMA/comments/1l111qk/sharing_my_tool_for_easy_handwritten_finetuning/
false
false
default
1
null
Scalable Strategies for Continual Learning with Replay
1
[https://arxiv.org/abs/2505.12512](https://arxiv.org/abs/2505.12512)
2025-06-01T22:09:30
https://www.reddit.com/r/LocalLLaMA/comments/1l11eqz/scalable_strategies_for_continual_learning_with/
Old_Cardiologist_854
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l11eqz
false
null
t3_1l11eqz
/r/LocalLLaMA/comments/1l11eqz/scalable_strategies_for_continual_learning_with/
false
false
self
1
null
Playing generated games of Atari Style PingPong and Space Invaders, thanks to Qwen 3 8b! (Original non Deepseek version) This small model continues to amaze.
18
2025-06-01T22:51:37
https://youtu.be/ar_kFDHGbhQ
c64z86
youtu.be
1970-01-01T00:00:00
0
{}
1l12cmi
false
{'oembed': {'author_name': 'c64', 'author_url': 'https://www.youtube.com/@c64z86', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ar_kFDHGbhQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Playing generated games of Atari Style PingPong and Space Invaders, thanks to Qwen 3 8b!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ar_kFDHGbhQ/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Playing generated games of Atari Style PingPong and Space Invaders, thanks to Qwen 3 8b!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1l12cmi
/r/LocalLLaMA/comments/1l12cmi/playing_generated_games_of_atari_style_pingpong/
false
false
https://b.thumbs.redditm…CaHjBDXs0C-E.jpg
18
{'enabled': False, 'images': [{'id': 'XmcvlSRaFm1YeY2y0bLV7P5o9rzDH0mYlaKVYZVnus4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fORXcgKVkaTCLfSuQUzrXrubR0RAsGHr5swRFkIXzZY.jpg?width=108&crop=smart&auto=webp&s=f151b6251d0f13bb540e4ecfe1e1ce200a5bbafc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fORXcgKVkaTCLfSuQUzrXrubR0RAsGHr5swRFkIXzZY.jpg?width=216&crop=smart&auto=webp&s=6ec33dba575bdbda06396d9c0151c445aadf8f78', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fORXcgKVkaTCLfSuQUzrXrubR0RAsGHr5swRFkIXzZY.jpg?width=320&crop=smart&auto=webp&s=9570e8b102def9be8ddff84ea19c251af5011698', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fORXcgKVkaTCLfSuQUzrXrubR0RAsGHr5swRFkIXzZY.jpg?auto=webp&s=941480be66076e386a21ac50c7f5d4040dcf0df5', 'width': 480}, 'variants': {}}]}