title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to build a multi-model agent like this? | 1 | [removed] | 2025-05-26T23:24:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kw8oaw/how_to_build_a_multimodel_agent_like_this/ | Crafty_Read_6928 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw8oaw | false | null | t3_1kw8oaw | /r/LocalLLaMA/comments/1kw8oaw/how_to_build_a_multimodel_agent_like_this/ | false | false | 1 | null |
Peace-Through-Land-Auction | 0 | [removed] | 2025-05-26T23:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kw96k1/peacethroughlandauction/ | zero_moo-s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw96k1 | false | null | t3_1kw96k1 | /r/LocalLLaMA/comments/1kw96k1/peacethroughlandauction/ | false | false | self | 0 | null |
PC for local AI | 12 | Hey there! I use AI a lot. For the last 2 months I've been experimenting with Roo Code and MCP servers, but always using Gemini, Claude and Deepseek. I would like to try local models but I'm not sure what I need to get a good model running, like Devstral or Qwen 3.
My current PC is not that big: i5-13600KF, 32GB RAM, RTX 4070 Super.
Should I sell this GPU and buy a 4090 or 5090? Or can I add a second GPU to get more combined VRAM?
Thanks for your answers!! | 2025-05-26T23:59:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kw9ecd/pc_for_local_ai/ | amunocis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw9ecd | false | null | t3_1kw9ecd | /r/LocalLLaMA/comments/1kw9ecd/pc_for_local_ai/ | false | false | self | 12 | null |
Burned a lot on LLM calls — looking for an LLM gateway + observability tool. Landed on Keywords AI… anyone else? | 1 | [removed] | 2025-05-27T00:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kw9m80/burned_a_lot_on_llm_calls_looking_for_an_llm/ | Main-Fisherman-2075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw9m80 | false | null | t3_1kw9m80 | /r/LocalLLaMA/comments/1kw9m80/burned_a_lot_on_llm_calls_looking_for_an_llm/ | false | false | self | 1 | null |
Free Speech-to-Speech Audio Converter (web or Google Colab) | 1 | Hi. Can anyone please suggest some tools for speech-to-speech voice conversion on pre-recorded audio, so we can change the speaker's voice? Looking for something that is easy to run, consistent and fast. The audio length will be around 10-15 minutes. | 2025-05-27T00:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kwa11w/free_speech_to_speech_audio_convertor_web_or/ | Tarun302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwa11w | false | null | t3_1kwa11w | /r/LocalLLaMA/comments/1kwa11w/free_speech_to_speech_audio_convertor_web_or/ | false | false | self | 1 | null |
Downloading models on android inquiry | 0 | Just wondering how to install local models on android? I wanna try out the smaller Qwen and Gemini models but all the local downloads seem to be through vLLM and I believe that's only for PC? Could I just use termux or is there an alternative for Android?
Any help would be appreciated!
| 2025-05-27T01:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kwas79/downloading_models_on_android_inquiry/ | PhantasmHunter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwas79 | false | null | t3_1kwas79 | /r/LocalLLaMA/comments/1kwas79/downloading_models_on_android_inquiry/ | false | false | self | 0 | null |
I scraped 111,847 finance-related jobs directly from corporate websites. | 1 | [removed] | 2025-05-27T01:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kwbfvr/i_scraped_111847_financerelated_jobs_directly/ | Separate-Breath2267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwbfvr | false | null | t3_1kwbfvr | /r/LocalLLaMA/comments/1kwbfvr/i_scraped_111847_financerelated_jobs_directly/ | false | false | self | 1 | null |
Should a lower temperature be used now | 0 | It's been a while since I programmatically called an AI model. Is a lower temperature creative enough now? When I did it, I had temp at 0.80, top p at 0.95, and top alpha at 0.6. What generation parameters do you use with which models? | 2025-05-27T01:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kwbk7j/should_lower_temperature_be_used_now/ | diaperrunner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwbk7j | false | null | t3_1kwbk7j | /r/LocalLLaMA/comments/1kwbk7j/should_lower_temperature_be_used_now/ | false | false | self | 0 | null |
Teach and Help with Decision: Keep P40 VM vs M4 24GB vs Ryzen AI 9 365 vs Intel 125H | 0 | I currently have a modified Nvidia P40 with a GTX 1070 cooler added to it. It works great for dinking around, but in my home-lab it's taking up valuable space, and it's getting to the point that I'm wondering if it's heating up my HBAs too much. I've floated the idea of selling my modded P40 and instead switching to something smaller and "NUC'd". The problem I'm running into is that I don't know much about local LLMs beyond what I've dabbled in via my escapades within my home-lab. As the title says, I'm looking to grasp some basics, and then make a decision on my hardware.
First some questions:
1. I understand VRAM is useful/needed depending on model size, but why is LPDDR5X preferred over DDR5 SO-DIMMs if both are addressable via the GPU/NPU/CPU for allocation? Is this a memory bandwidth issue? A pipeline issue?
2. Are TOPS a tried and true metric of processing power and capability?
3. With the M4 Minis are you capable of limiting UI and other process access to the hardware to better utilize the hardware for LLM utilization?
4. Is IPEX and ROCM up to snuff compared to AMD support especially for the sake of these NPU chips? They are a new mainstay to me as I'm semi familiar since Google Coral, but short of a small calculation chip, not fully grasping their place in the processor hierarchy.
Second the competitors:
* Current: Nvidia Tesla P40 (Modified with a GTX 1070 cooler; keeps cool at 36C when idle, has done great but does get noisy. Heats up the inside of my dated homelab).
* M4 Mac Mini 24GB - Most expensive of the group, and least useful outside the Apple ecosystem; my daily driver is a MacBook but most of my infra is Linux.
* Ryzen AI 9 365 - Seems like it would be a good Swiss Army knife machine with a bit more power than...
* Intel 125h - Cheapest of the bunch, but upgradeable memory over the Ryzen AI 9. 96GB is possible...... | 2025-05-27T02:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kwcp6m/teach_and_help_with_decision_keep_p40_vm_vs_m4/ | s0n1cm0nk3y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwcp6m | false | null | t3_1kwcp6m | /r/LocalLLaMA/comments/1kwcp6m/teach_and_help_with_decision_keep_p40_vm_vs_m4/ | false | false | self | 0 | null |
Llama.cpp on Intel 185H iGPU possible on a machine with RTX dGPU? | 1 | Hello, is it possible to run ollama or llama.cpp inference on a laptop with an Ultra 185H and an RTX 4090 using only the Arc iGPU? I am trying to maximize the use of the machine, as I already have an Ollama instance making use of the RTX 4090 for inference, and I'm wondering if I can use the 185H iGPU for smaller model inference as well...
Many thanks in advance. | 2025-05-27T02:50:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kwcpqe/llamacpp_on_intel_185h_igpu_possible_on_a_machine/ | mlaihk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwcpqe | false | null | t3_1kwcpqe | /r/LocalLLaMA/comments/1kwcpqe/llamacpp_on_intel_185h_igpu_possible_on_a_machine/ | false | false | self | 1 | null |
You don't need to wait for AMD AI MAX 395+ | 1 | [removed] | 2025-05-27T02:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kwct3w/you_dont_need_to_wait_for_amd_ai_max_395/ | Specialist_You3410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwct3w | false | null | t3_1kwct3w | /r/LocalLLaMA/comments/1kwct3w/you_dont_need_to_wait_for_amd_ai_max_395/ | false | false | self | 1 | null |
Built an AI Study Assistant That Automatically Creates Notes + SRS | 1 | [removed] | 2025-05-27T03:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kwcy8g/built_an_ai_study_assistant_that_automatically/ | Hirojinho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwcy8g | false | null | t3_1kwcy8g | /r/LocalLLaMA/comments/1kwcy8g/built_an_ai_study_assistant_that_automatically/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E47A6SeusMi2E0TGdaF3F8xV3n3fk5JslT9Ws6Njvcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=108&crop=smart&auto=webp&s=6a7a278e6e1bcbc9de074da335a6ac30371bc147', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=216&crop=smart&auto=webp&s=7d280c6a4f03a55027d5ceeaab061ebfe1c55bd2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=320&crop=smart&auto=webp&s=dafd94a980a45b45201bb0d061e6d0ac36289361', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=640&crop=smart&auto=webp&s=524d969df5a75f65b073d29386bce0d248ef1842', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=960&crop=smart&auto=webp&s=926b6340a380e26009b461379ce8077ad131e89f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=1080&crop=smart&auto=webp&s=6898bae5cf37315cddf10bed4bd35fad04077ac3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?auto=webp&s=283aa9b3523f45a26168ed7de72be640566e2c48', 'width': 1200}, 'variants': {}}]} |
Automate Your Bill Splitting with CrewAI and Ollama | 1 | I’ve been wrestling with the chaos of splitting group bills for years—until I decided to let AI take the wheel. Meet my **Bill Splitting Automation Tool**, built with VisionParser, CrewAI, and ollama/mistral-nemo. Here’s what it does:
# 🔍 How It Works
1. **PDF Parsing → Markdown**
* Upload any bill PDF (restaurant, utilities, you name it).
* VisionParser converts it into human-friendly Markdown.
2. **AI-Powered Analysis**
* A smart agent reviews every line item.
* Automatically distinguishes between personal and shared purchases.
* Divides the cost fairly (taxes included!).
3. **Crystal-Clear Output**
* 🧾 Individual vs. Shared item tables
* 💸 Transparent tax breakdown
* 📖 Step-by-step explanation of every calculation
# ⚡ Why You’ll Love It
* **No More Math Drama:** Instant results—no calculators required.
* **Zero Disputes:** Fair splits, even for that $120 bottle of wine 🍷.
* **Totally Transparent:** Share the Markdown report with your group, and everyone sees exactly how costs were computed.
# 📂 Check It Out
👉 GitHub Repo: [https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/AIAgent-CrewAi/splitwise\_with\_llm](https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/AIAgent-CrewAi/splitwise_with_llm)
⭐ Don’t forget to drop a star if you find it useful!
🚀 P.S. This project was a ton of fun, and I'm itching for my next AI challenge! If you or your team are doing innovative work in **Computer Vision or LLMs** and are looking for a passionate dev, I'd love to chat.
* **My Email:** [[email protected]](https://www.google.com/url?sa=E&q=mailto%3Apavankunchalaofficial%40gmail.com)
* **My GitHub Profile (for more projects):** [https://github.com/Pavankunchala](https://github.com/Pavankunchala)
* **My Resume:** [https://drive.google.com/file/d/1ODtF3Q2uc0krJskE\_F12uNALoXdgLtgp/view](https://drive.google.com/file/d/1ODtF3Q2uc0krJskE_F12uNALoXdgLtgp/view) | 2025-05-27T03:09:25 | https://v.redd.it/9jat479yq83f1 | Solid_Woodpecker3635 | /r/LocalLLaMA/comments/1kwd2p1/automate_your_bill_splitting_with_crewai_and/ | 1970-01-01T00:00:00 | 0 | {} | 1kwd2p1 | false | null | t3_1kwd2p1 | /r/LocalLLaMA/comments/1kwd2p1/automate_your_bill_splitting_with_crewai_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a262e4c6f4f0c7addb55e961652e87b93993667', 'width': 108}, {'height': 73, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?width=216&crop=smart&format=pjpg&auto=webp&s=5f1a8947127ce4c30493f62d56b49784efbd28ac', 'width': 216}, {'height': 108, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?width=320&crop=smart&format=pjpg&auto=webp&s=ac12fc70e0dc027fe89df58d47faefe42ca4c86d', 'width': 320}, {'height': 216, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?width=640&crop=smart&format=pjpg&auto=webp&s=d88744b5a0a0c8e9df89d8fe60d1f2ce238d9f06', 'width': 640}, {'height': 324, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?width=960&crop=smart&format=pjpg&auto=webp&s=85be83557af98b0b2614701a7f04038cd90c6e04', 'width': 960}, {'height': 365, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2bae8c5788e164e8350019057f88ef2d172ca355', 'width': 1080}], 'source': {'height': 636, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?format=pjpg&auto=webp&s=cad8f26c574eabddd08d584fc52d0d9b39742a81', 'width': 1880}, 'variants': {}}]} |
Been diving into running local LLMs (GPT4All, LM Studio, etc.) and came across this Beelink mini PC — seems to be a sweet spot between price and power for local AI stuff. | 1 | [removed] | 2025-05-27T03:13:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kwd59w/been_diving_into_running_local_llms_gpt4all_lm/ | Hoodlum_Hero421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwd59w | false | null | t3_1kwd59w | /r/LocalLLaMA/comments/1kwd59w/been_diving_into_running_local_llms_gpt4all_lm/ | false | false | self | 1 | null |
I forked llama-swap to add an ollama compatible api, so it can be a drop in replacement | 45 | For anyone else who has been annoyed with:
- ollama
- client programs that only support ollama for local models
I present you with [llama-swappo](https://github.com/kooshi/llama-swappo), a bastardization of the simplicity of llama-swap which adds an ollama compatible api to it.
This was mostly a quick hack I added for my own interests, so I don't intend to support it long term. All credit and support should go towards the original, but I'll probably set up a github action at some point to try to auto-rebase this code on top of his.
I offered to merge it, but he, correctly, declined based on concerns of complexity and maintenance.
So, if anyone's interested, it's available, and if not, well at least it scratched my itch for the day. (Turns out Qwen3 isn't all that competent at driving the Github Copilot Agent, it gave it a good shot though) | 2025-05-27T03:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kwd7tg/i_forked_llamaswap_to_add_an_ollama_compatible/ | Kooshi_Govno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwd7tg | false | null | t3_1kwd7tg | /r/LocalLLaMA/comments/1kwd7tg/i_forked_llamaswap_to_add_an_ollama_compatible/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': 'g4EVeRAf4By-HP5yEiUCAOKaak_wBQnOJjcWGsFXFgY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?width=108&crop=smart&auto=webp&s=1ef38c2100499edb8bda219be4183c24c47d18e7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?width=216&crop=smart&auto=webp&s=738715ead3c846265bbdc268407c33efb7e4bb27', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?width=320&crop=smart&auto=webp&s=7bf36c2534eafa77335f00008b8701f117b03cb0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?width=640&crop=smart&auto=webp&s=782b75a410bb9db28b7e39efe9cef2f704452669', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?width=960&crop=smart&auto=webp&s=0731654350658cf5b4d22817eb546341bb9a81d7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?width=1080&crop=smart&auto=webp&s=47183d680662e3cdfbe7c89ce99e94f809413a65', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?auto=webp&s=29b8266d4fc69cc382bbea1c2f4256ea676de77a', 'width': 1200}, 'variants': {}}]} |
I compared Manus and Genspark, with 7 challenging tasks | 1 | [removed] | 2025-05-27T03:39:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kwdm8w/i_compared_manus_and_genspark_with_7_challenging/ | Fit-Rule8548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwdm8w | false | null | t3_1kwdm8w | /r/LocalLLaMA/comments/1kwdm8w/i_compared_manus_and_genspark_with_7_challenging/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YQ7u__e6YDY29nHTlKbkGti2diIMtKYGCHkrZiHBCyg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?width=108&crop=smart&auto=webp&s=07fe7f0dccc36a0d30087768a64418e67a5840d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?width=216&crop=smart&auto=webp&s=12068d4359abc9f160959cb786b238f5321c960c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?width=320&crop=smart&auto=webp&s=d98c1938cc4e75eb14b15c89f278e81b33a572f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?width=640&crop=smart&auto=webp&s=61ae4a5d7e529914ccfd5435589f2ed7c8e40fb9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?width=960&crop=smart&auto=webp&s=7cdfcab7366fdbe9a5cd662c0dc3d1536ce22cc6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?width=1080&crop=smart&auto=webp&s=e011259baf936499d68a0379c6d00bd2081b62df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?auto=webp&s=1672cdace77f30d672b3919f818fb0810886f950', 'width': 1200}, 'variants': {}}]} |
Best settings for running Qwen3-30B-A3B with llama.cpp (16GB VRAM and 64GB RAM) | 34 | In the past I used to mostly configure GPU layers to fit as closely as possible in the 16GB of VRAM. But lately there seem to be much better options to optimize the VRAM/RAM split, especially with MoE models. I'm currently running the Q4_K_M version (about 18.1 GB in size) with 38 layers and 8k context size because I was focusing on fitting as much of the model as possible in VRAM. That runs fairly well, but I want to know if there is a much better way to optimize for my configuration.
I would really like to see if I can run the Q8_0 version (32 GB, obviously) in a way that utilizes my VRAM and RAM as effectively as possible and is still usable. I would also love to at least use the full 40K context if possible in this setting.
Lastly, for anyone experimenting with the A22B version as well, I assume it's usable with 128GB RAM? In this scenario, I'm not sure how much the 16GB VRAM can actually help.
Thanks for any advice in advance! | 2025-05-27T03:44:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kwdpey/best_settings_for_running_qwen330ba3b_with/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwdpey | false | null | t3_1kwdpey | /r/LocalLLaMA/comments/1kwdpey/best_settings_for_running_qwen330ba3b_with/ | false | false | self | 34 | null |
Prompting for agentic workflows | 3 | Under the hood I have a project memory that's fed into each new conversation. I tell this to one of my agents at the start of a session and I pretty much have my next day (or sometimes week) planned out:
Break down this (plan.md) into steps that can each be completed within one hour. Publish each of these step plans into serialized markdown files with clear context and deliverables. If it's logical for a task to be completed in one step but would take more than an hour keep it together, just make note that it will take more than an hour in the markdown file.
I'm still iterating on the "completed within x" part. I've tried tokens, context, and complexity. The hour is pretty ambitious for a single agent to complete without any intervention but I don't think it will be that way much longer. I could probably cut out a few words to save tokens but I don't want there to be any chance of confusion.
What kind of prompts are you using to create plans that are suitable for llm agents? | 2025-05-27T04:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kwelai/prompting_for_agentic_workflows/ | ansmo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwelai | false | null | t3_1kwelai | /r/LocalLLaMA/comments/1kwelai/prompting_for_agentic_workflows/ | false | false | self | 3 | null |
LlamaFirewall: framework open source per rilevare e mitigare i rischi per la sicurezza incentrati sull'intelligenza artificiale - Help Net Security | 0 | 2025-05-27T04:39:16 | https://www.helpnetsecurity.com/2025/05/26/llamafirewall-open-source-framework-detect-mitigate-ai-centric-security-risks/ | Aiochedolor | helpnetsecurity.com | 1970-01-01T00:00:00 | 0 | {} | 1kwen4z | false | null | t3_1kwen4z | /r/LocalLLaMA/comments/1kwen4z/llamafirewall_framework_open_source_per_rilevare/ | false | false | 0 | {'enabled': False, 'images': [{'id': '_Xn7LhDNFIY7RcgTctjI8ok4tx05EAyY5sjGXbrxITA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?width=108&crop=smart&auto=webp&s=cae63ace79d487bafee1d338dcdc0a4d7fca7ca8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?width=216&crop=smart&auto=webp&s=54b8008ee2d8a5e80368c46908bedb123fc1da25', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?width=320&crop=smart&auto=webp&s=df79d0f3265e4ed1e33d415102a8c26909d00116', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?width=640&crop=smart&auto=webp&s=b2c2f8c87594f99a011fd4879e331bec04f33f4b', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?width=960&crop=smart&auto=webp&s=065067566c3dfb92bebda194d4cd1b2100bff910', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?width=1080&crop=smart&auto=webp&s=65db65075bcd07eb714824ed0736fc7ebfc61228', 'width': 1080}], 'source': {'height': 816, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?auto=webp&s=5a151c53e34501c4a3772c1f19cffbfd1c162c82', 'width': 1456}, 'variants': {}}]} |
Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration | 92 | Abstract
>Long-horizon video-audio reasoning and fine-grained pixel understanding impose conflicting requirements on omnimodal models: dense temporal coverage demands many low-resolution frames, whereas precise grounding calls for high-resolution inputs. We tackle this trade-off with a two-system architecture: a Global Reasoning System selects informative keyframes and rewrites the task at low spatial cost, while a Detail Understanding System performs pixel-level grounding on the selected high-resolution snippets. Because "optimal" keyframe selection and reformulation are ambiguous and hard to supervise, we formulate them as a reinforcement learning (RL) problem and present Omni-R1, an end-to-end RL framework built on Group Relative Policy Optimization. Omni-R1 trains the Global Reasoning System through hierarchical rewards obtained via online collaboration with the Detail Understanding System, requiring only one epoch of RL on small task splits.
Experiments on two challenging benchmarks, namely Referring Audio-Visual Segmentation (RefAVS) and Reasoning Video Object Segmentation (REVOS), show that Omni-R1 not only surpasses strong supervised baselines but also outperforms specialized state-of-the-art models, while substantially improving out-of-domain generalization and mitigating multimodal hallucination. Our results demonstrate the first successful application of RL to large-scale omnimodal reasoning and highlight a scalable path toward universally foundation models. | 2025-05-27T04:46:15 | https://huggingface.co/Haoz0206/Omni-R1 | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kwer9z | false | null | t3_1kwer9z | /r/LocalLLaMA/comments/1kwer9z/omnir1_reinforcement_learning_for_omnimodal/ | false | false | 92 | {'enabled': False, 'images': [{'id': 'GtcV9DcDdGlJyBtvfLSpnCruagztlOTW3rDgZ22y6gI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?width=108&crop=smart&auto=webp&s=2bce7761374b03ad2e325bf00303e9630f6174cf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?width=216&crop=smart&auto=webp&s=9eede7529edb2e8e52c063bc98bf03819d722dca', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?width=320&crop=smart&auto=webp&s=a9e989df77e5d5e51a1692fcdbc84ed876d37dd2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?width=640&crop=smart&auto=webp&s=2aefa740a7c49432c821d22fe05c260150bb95bc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?width=960&crop=smart&auto=webp&s=d13b693117bae2aaffbbd9de6529bdee1c739597', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?width=1080&crop=smart&auto=webp&s=e52addd724104c4c328e17ca8d0d79a98a72ee2c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?auto=webp&s=f58fc648cb94655f7ef76911c1f7f5140cccd5ee', 'width': 1200}, 'variants': {}}]} |
Claude sonnet 4 trouble | 1 | [removed] | 2025-05-27T05:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kwf2do/claude_sonnet_4_trouble/ | TheVerge_Trades | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwf2do | false | null | t3_1kwf2do | /r/LocalLLaMA/comments/1kwf2do/claude_sonnet_4_trouble/ | false | false | self | 1 | null |
I made an open-source synthetic text datasets generator for LLM projects | 1 | [removed] | 2025-05-27T05:06:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kwf3a2/i_made_an_opensource_synthetic_text_datasets/ | astro__pat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwf3a2 | false | null | t3_1kwf3a2 | /r/LocalLLaMA/comments/1kwf3a2/i_made_an_opensource_synthetic_text_datasets/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ol6l1w7L9PJwrnUQaW0T9pMEA5YQYURMZPun1reHqSA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?width=108&crop=smart&auto=webp&s=efd0d10102343cc497e618b49d245fd618e08f03', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?width=216&crop=smart&auto=webp&s=9cd72f6c4dacf98134333faa2213cee091188fad', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?width=320&crop=smart&auto=webp&s=677fb59c6e853df5ff4940301439dbf61f5d1d09', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?width=640&crop=smart&auto=webp&s=8189b5b8259b5885078b776e4497cf76abb51d9b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?width=960&crop=smart&auto=webp&s=756387ab57283f4deda190b029ad010921fe2d9c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?width=1080&crop=smart&auto=webp&s=908131e46acbf20a07fae23ed216ee298501af63', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?auto=webp&s=eb62ed1428e5d0ec4ba217751ac28c53079885e4', 'width': 1200}, 'variants': {}}]} |
Gemma 3 Performance: Tokens Per Second in LM Studio vs. Ollama on Mac Studio M3 Ultra | 1 | [removed] | 2025-05-27T05:09:12 | https://medium.com/p/7e1af75438e4 | Rif-SQL | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1kwf4ss | false | null | t3_1kwf4ss | /r/LocalLLaMA/comments/1kwf4ss/gemma_3_performance_tokens_per_second_in_lm/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'mHABBOeKKY7FDSBYGtJE5_Cel3QinZBFmTwhK_5YHhU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eUTE6dW5zCtU-O7BgfzurRyBcaNNU3hCzv9MFh1v0Gk.jpg?width=108&crop=smart&auto=webp&s=a65ec67693f26a771981bd257dd0bd8a7956e982', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/eUTE6dW5zCtU-O7BgfzurRyBcaNNU3hCzv9MFh1v0Gk.jpg?width=216&crop=smart&auto=webp&s=a99d4140381e67f1849dfd3c98c1de8b7e610879', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/eUTE6dW5zCtU-O7BgfzurRyBcaNNU3hCzv9MFh1v0Gk.jpg?width=320&crop=smart&auto=webp&s=878b718cf432099494d6e52df46b6d0f30cfd6b0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/eUTE6dW5zCtU-O7BgfzurRyBcaNNU3hCzv9MFh1v0Gk.jpg?width=640&crop=smart&auto=webp&s=8e3e07a8a57054374a8fcb4fcf8f99f394663448', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/eUTE6dW5zCtU-O7BgfzurRyBcaNNU3hCzv9MFh1v0Gk.jpg?width=960&crop=smart&auto=webp&s=d43c4c38697d31c6a61cd33cad6c9591cfb66714', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/eUTE6dW5zCtU-O7BgfzurRyBcaNNU3hCzv9MFh1v0Gk.jpg?auto=webp&s=82b7337154d68a775500a18fc4e73c62cf3977e9', 'width': 1024}, 'variants': {}}]} |
PFN Launches PLaMo Translate, an LLM made for translation tasks | 14 | Archive Link:
[https://www.preferred.jp/en/news/pr20250527/](https://www.preferred.jp/en/news/pr20250527/)
Web Translation Demo:
[https://translate-demo.plamo.preferredai.jp/](https://translate-demo.plamo.preferredai.jp/)
Model on Huggingface:
[https://huggingface.co/pfnet/plamo-2-translate](https://huggingface.co/pfnet/plamo-2-translate) | 2025-05-27T05:22:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kwfcw1/pfn_launches_plamo_translatea_llm_model_made_for/ | rikimtasu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwfcw1 | false | null | t3_1kwfcw1 | /r/LocalLLaMA/comments/1kwfcw1/pfn_launches_plamo_translatea_llm_model_made_for/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '1tJ3KIrQGSESVZ7jdAndFJ02nLpdnVJP9j7Z5OR4Ams', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/XxYJUy7IlUFtHfWDnJwC5IDY1dFFc6MnORWVycdjc0E.jpg?width=108&crop=smart&auto=webp&s=0e0bc42892eaa629111572d1cb30a235d46bf044', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/XxYJUy7IlUFtHfWDnJwC5IDY1dFFc6MnORWVycdjc0E.jpg?width=216&crop=smart&auto=webp&s=dffe8223a8d849bb9c155bf3e12f264531256096', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/XxYJUy7IlUFtHfWDnJwC5IDY1dFFc6MnORWVycdjc0E.jpg?width=320&crop=smart&auto=webp&s=6a574612aca6c8a52b52b10a5be720d15c44fd76', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/XxYJUy7IlUFtHfWDnJwC5IDY1dFFc6MnORWVycdjc0E.jpg?width=640&crop=smart&auto=webp&s=a6f2cc931d4d2771a5b122eb11a862851f9b25f8', 'width': 640}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/XxYJUy7IlUFtHfWDnJwC5IDY1dFFc6MnORWVycdjc0E.jpg?auto=webp&s=f890d3fe80b5be26ea4d22b4505492245f4f878a', 'width': 640}, 'variants': {}}]} |
AgentKit - Drop-in plugin system for AI agents and MCP servers | 12 | I got tired of rebuilding the same tools every time I started a new project, or ripping out server/agent implementation to switch solutions, so I built a lightweight plugin system that lets you drop Python files into a folder and generate requirements.txt for them, create a .env with all the relevant items, and dynamically load them into an MCP/Agent solution. It also has a CLI to check compatibility and conflicts.
Hope it's useful to someone else - feedback would be greatly appreciated.
I also converted some of my older tools into this format like a glossary lookup engine and a tool I use to send myself MacOS notifications.
[https://github.com/batteryshark/agentkit\_plugins](https://github.com/batteryshark/agentkit_plugins) | 2025-05-27T05:28:10 | https://github.com/batteryshark/agentkit | atrfx | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kwfftk | false | null | t3_1kwfftk | /r/LocalLLaMA/comments/1kwfftk/agentkit_dropin_plugin_system_for_ai_agents_and/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'taugWyvcmXWgAl3K8RmTWlF_lQYs7kar2WhZGnRlmHA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?width=108&crop=smart&auto=webp&s=baf65a7058044881b99a2e6a90932f3e08926847', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?width=216&crop=smart&auto=webp&s=6ade140cb3c98abfa623086d013d5c1afc69e87b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?width=320&crop=smart&auto=webp&s=ee4519681329fd9f2c0be2b77a3e2f9eba703bc5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?width=640&crop=smart&auto=webp&s=2d2e9c474ad2b76e2777114e47d96183c9dfe273', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?width=960&crop=smart&auto=webp&s=28185e1b8a5783098b8ac3c9fb15c49d69a6820f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?width=1080&crop=smart&auto=webp&s=6086aaaa1610d2c8aaac599e87a93275b9962a09', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?auto=webp&s=c127757bd351f601f8ef3e982a91c2f3313390cc', 'width': 1200}, 'variants': {}}]} |
Used A100 80GB Prices Don't Make Sense | 147 | Can someone explain what I'm missing? The median price of the A100 80GB PCIe on eBay is $18,502, while RTX 6000 Pro Blackwell cards can be purchased new for $8,500.
What am I missing here? Is there something about the A100s that justifies the price difference? The only thing I can think of is 200w less power consumption and NVlink. | 2025-05-27T05:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kwfp8v/used_a100_80_gb_prices_dont_make_sense/ | fakebizholdings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwfp8v | false | null | t3_1kwfp8v | /r/LocalLLaMA/comments/1kwfp8v/used_a100_80_gb_prices_dont_make_sense/ | false | false | self | 147 | null |
Fudan University (FDU) and Shanghai Academy of AI for Science(SAIS): AI for Science 2025 | 1 | Produced by Fudan University and Shanghai Academy of AI for Science with support from Nature Research Intelligence, this report explores how artificial intelligence is transforming scientific discovery. It covers significant advances across disciplines — such as mathematics, life sciences and physical sciences — while highlighting emerging paradigms and strategies shaping the future of science through intelligent innovation. | 2025-05-27T06:21:17 | https://www.nature.com/articles/d42473-025-00161-3 | Lynncc6 | nature.com | 1970-01-01T00:00:00 | 0 | {} | 1kwg9ni | false | null | t3_1kwg9ni | /r/LocalLLaMA/comments/1kwg9ni/fudan_university_fdu_and_shanghai_academy_of_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'WYgAMe89EDXcDqzlUukeeJmDzVGLFu7AQS7LfTCgm2c', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?width=108&crop=smart&auto=webp&s=0473c3ac375dbea706c85e96259c0c5b4cf4172f', 'width': 108}, {'height': 286, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?width=216&crop=smart&auto=webp&s=2957e1415ee5716a535917d8b519b55e4c598ebb', 'width': 216}, {'height': 424, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?width=320&crop=smart&auto=webp&s=cd9447e0915d22cf4afe3228d2b29ddaaa8e17d5', 'width': 320}, {'height': 849, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?width=640&crop=smart&auto=webp&s=0bc360299557fd5d72f5ee74de954656cbd33747', 'width': 640}, {'height': 1273, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?width=960&crop=smart&auto=webp&s=59716a8afa9c3f988b83dec9eb19e3a91b7139a7', 'width': 960}, {'height': 1432, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?width=1080&crop=smart&auto=webp&s=22ffdbb6fd3cdc3f81a08502fcfa798814c13f72', 'width': 1080}], 'source': {'height': 1592, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?auto=webp&s=0cc6ec2c8d87fb2063b25a64975835d0b11d2182', 'width': 1200}, 'variants': {}}]} |
Model Context Protocol With Local LLM | 1 | [removed] | 2025-05-27T06:46:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kwgmy1/model_context_protocol_with_local_llm/ | No_Finding2396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwgmy1 | false | null | t3_1kwgmy1 | /r/LocalLLaMA/comments/1kwgmy1/model_context_protocol_with_local_llm/ | false | false | self | 1 | null |
How Does Qwen3-32B Handle Multi-Language Coding? | 1 | [removed] | 2025-05-27T06:56:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kwgsgq/how_does_qwen332b_handle_multilanguage_coding/ | jameslee2295 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwgsgq | false | null | t3_1kwgsgq | /r/LocalLLaMA/comments/1kwgsgq/how_does_qwen332b_handle_multilanguage_coding/ | false | false | self | 1 | null |
What is the best tool/agents for daily desktop use you can't part with anymore | 1 | [removed] | 2025-05-27T07:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/1kwh5zq/what_is_the_best_toolagents_for_daily_desktop_use/ | Malfun_Eddie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwh5zq | false | null | t3_1kwh5zq | /r/LocalLLaMA/comments/1kwh5zq/what_is_the_best_toolagents_for_daily_desktop_use/ | false | false | self | 1 | null |
Llama 3.1 Nemotron Ultra 253B Q3_K_S on my machine (RTX 6000 + 256GB DDR5 RAM) | 1 | [removed] | 2025-05-27T07:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kwh8mv/llama_31_nemotron_ultra_253b_q3_k_s_on_my_machine/ | Wide_Food_2636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwh8mv | false | null | t3_1kwh8mv | /r/LocalLLaMA/comments/1kwh8mv/llama_31_nemotron_ultra_253b_q3_k_s_on_my_machine/ | false | false | self | 1 | null |
Best LLM for PGS to SRT (movies) | 1 | [removed] | 2025-05-27T07:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhe3j/best_llm_for_pgs_to_srt_movies/ | Im_The_Hollow_Man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhe3j | false | null | t3_1kwhe3j | /r/LocalLLaMA/comments/1kwhe3j/best_llm_for_pgs_to_srt_movies/ | false | false | self | 1 | null |
What are the best vision models at the moment? | 15 | I'm trying to create an app that extracts data from scanned documents and photos. I was using InternVL2.5-4B running with ollama, but I was wondering if there are better models out there.
What are your recommendations?
I wanted to try the 8B version of InternVL but there is no GGUF available at the moment.
Thank you :) | 2025-05-27T07:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhg3t/what_are_the_best_vision_models_at_the_moment/ | Wintlink- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhg3t | false | null | t3_1kwhg3t | /r/LocalLLaMA/comments/1kwhg3t/what_are_the_best_vision_models_at_the_moment/ | false | false | self | 15 | null |
3x AMD Instinct MI50 (48GB VRAM total): what can I do with it? | 3 | Hi everyone,
I've been running some smaller models locally on my laptop as a coding assistant, but I decided I wanted to run bigger models and maybe get answers a little bit faster.
Last weekend, I came across a set of 3 AMD MI50s on eBay, which I bought for 330 euro total. I picked up an old 3-way CrossFire motherboard with an Intel 7700K, 16GB of RAM, and a 1300W power supply for another ~200 euro locally, hoping to build myself an inference machine.
What can I reasonably expect to run on this hardware? What's the best software to use? So far I've mostly been using llama.cpp with the CUDA or Vulkan backends. | 2025-05-27T07:46:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhif5/3x_amd_instinct_mi50_48gb_vram_total_what_can_i/ | spaceman_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhif5 | false | null | t3_1kwhif5 | /r/LocalLLaMA/comments/1kwhif5/3x_amd_instinct_mi50_48gb_vram_total_what_can_i/ | false | false | self | 3 | null |
What are the best practices (sampling settings and prompting) for OCR, especially subtitles? | 1 | [removed] | 2025-05-27T08:04:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhr3a/what_are_the_best_practices_sampling_settings_and/ | nmkd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhr3a | false | null | t3_1kwhr3a | /r/LocalLLaMA/comments/1kwhr3a/what_are_the_best_practices_sampling_settings_and/ | false | false | self | 1 | null |
Why LLM Agents Still Hallucinate (Even with Tool Use and Prompt Chains) | 44 | You’d think calling external tools would “fix” hallucinations in LLM agents, but even with tools integrated (LangChain, ReAct, etc.), the bots still confidently invent or misuse tool outputs.
Part of the problem is that most pipelines treat the LLM like a black box between prompt → tool → response. There's no consistent *reasoning checkpoint* before the final output. So even if the tool gives the right data, the model might still mess up interpreting it or worse, hallucinate extra “context” to justify a bad answer.
What’s missing is a self-check step before the response is finalized. Like:
* Did this answer follow the intended logic?
* Did the tool result get used properly?
* Are we sticking to domain constraints?
Without that, you're just crossing your fingers and hoping the model doesn't go rogue. This matters a ton in customer support, healthcare, or anything regulated.
Also, tool use is only as good as your control over *when and how* tools are triggered. I’ve seen bots misfire APIs just because the prompt hinted at it vaguely. Unless you gate tool calls with precise logic, you get weird or premature tool usage that ruins the UX.
Curious what others are doing to get more reliable LLM behavior around tools + reasoning. Are you layering on more verification? Custom wrappers? | 2025-05-27T08:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhr56/why_llm_agents_still_hallucinate_even_with_tool/ | Mountain-Insect-2153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhr56 | false | null | t3_1kwhr56 | /r/LocalLLaMA/comments/1kwhr56/why_llm_agents_still_hallucinate_even_with_tool/ | false | false | self | 44 | null |
Cognito: Your AI Sidekick for Chrome. A MIT licensed very lightweight Web UI with multitools. | 92 | * **Easiest Setup: No python, no docker, no endless dev packages.** Just download it from [Chrome](https://chromewebstore.google.com/detail/pphjdjdoclkedgiaahmiahladgcpohca?utm_source=item-share-cb) or my [Github](https://github.com/3-ark/Cognito-AI_Sidekick) (Same with the store, just the latest release). You don't need an exe.
* **No privacy issue: you can check the code yourself.**
* **Seamless AI Integration:** Connect to a wide array of powerful AI models:
* **Local Models:** Ollama, LM Studio, etc.
* **Cloud Services: several**
* **Custom Connections:** all OpenAI compatible endpoints.
* **Intelligent Content Interaction:**
* **Instant Summaries:** Get the gist of any webpage in seconds.
* **Contextual Q&A:** Ask questions about the current page, PDFs, selected text in the notes or you can simply send the urls directly to the bot, the scrapper will give the bot context to use.
* **Smart Web Search with scrapper:** Conduct context-aware searches using Google, DuckDuckGo, and Wikipedia, with the ability to fetch and analyze content from search results.
* **Customizable Personas (system prompts):** Choose from 7 pre-built AI personalities (Researcher, Strategist, etc.) or create your own.
* **Text-to-Speech (TTS):** Hear AI responses read aloud (supports browser TTS and integration with external services like Piper).
* **Chat History:** You can search it (also planed to be used in RAG).
I don't know how to post images here; I tried links, markdown links, and direct upload, but all failed to display. Screenshot/GIF links below: [https://github.com/3-ark/Cognito-AI\_Sidekick/blob/main/docs/web.gif](https://github.com/3-ark/Cognito-AI_Sidekick/blob/main/docs/web.gif)
[https://github.com/3-ark/Cognito-AI\_Sidekick/blob/main/docs/local.gif](https://github.com/3-ark/Cognito-AI_Sidekick/blob/main/docs/local.gif) | 2025-05-27T08:13:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhw20/cognito_your_ai_sidekick_for_chrome_a_mit/ | Asleep-Ratio7535 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhw20 | false | null | t3_1kwhw20 | /r/LocalLLaMA/comments/1kwhw20/cognito_your_ai_sidekick_for_chrome_a_mit/ | false | false | self | 92 | {'enabled': False, 'images': [{'id': 'KQnoxsNXSt_ExJwiasklEOLgivpWk4hJKurpwqFTYpI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?width=108&crop=smart&auto=webp&s=9a8d1d978965d9ac2d5363effe7f8828084d0689', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?width=216&crop=smart&auto=webp&s=67ab8de71ebb4f87671bd2fbc54503f09f74a735', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?width=320&crop=smart&auto=webp&s=dc5e5e599ca20e9f69e3653ef9c4208d0c202126', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?width=640&crop=smart&auto=webp&s=477c4bd91b13661b3c26cf1b6fda3685e1133731', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?width=960&crop=smart&auto=webp&s=f55b6c3c5056f76609696bf3a1491c4f36270110', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?width=1080&crop=smart&auto=webp&s=5fece8df03ed0829e99c2ee01a71361e072e5130', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?auto=webp&s=eecad9cc26dd96fbccc4b326473907c381fc68ce', 'width': 1200}, 'variants': {}}]} |
Open Source iOS OLLAMA Client | 8 | As you all know, ollama is a program that allows you to install and use the latest LLMs on your computer. Once you install it, you don't have to pay a usage fee, and you can install and use various types of LLMs according to your hardware's capabilities.
https://preview.redd.it/wb9qvk3vaa3f1.png?width=1984&format=png&auto=webp&s=0dbebcfe065996625fa68a698b78a24ebf0eaac6
However, the company that makes ollama does not make a UI, so there are several ollama-specific clients on the market. Last year, I made an ollama iOS client with Flutter and open-sourced the code, but I didn't like the performance and UI, so I rebuilt it. I am releasing the source code at the link below; you can download the entire Swift source.
You can build it from the source, or you can download the app by going to the link.
[https://github.com/bipark/swift\_ios\_ollama\_client\_v3](https://github.com/bipark/swift_ios_ollama_client_v3) | 2025-05-27T08:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kwi08a/open_source_ios_ollama_client/ | billythepark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwi08a | false | null | t3_1kwi08a | /r/LocalLLaMA/comments/1kwi08a/open_source_ios_ollama_client/ | false | false | 8 | null |
SEED-GRPO: Pure RL with a 7B Model Sets New SOTA on AIME24 (56.7) | 1 | [removed] | 2025-05-27T08:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kwib3n/seedgrpo_pure_rl_with_a_7b_model_sets_new_sota_on/ | Competitive_Pilot_75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwib3n | false | null | t3_1kwib3n | /r/LocalLLaMA/comments/1kwib3n/seedgrpo_pure_rl_with_a_7b_model_sets_new_sota_on/ | false | false | self | 1 | null |
Is there any model/provider agnostic client/sdk which has support for MCP, tools, RAG, multimodality, streaming, etc.? | 0 | I'm currently looking for a model/provider-agnostic SDK or client that supports a wide range of modern LLM capabilities out of the box. Specifically, I'm interested in something that covers:
* Multi-provider compatibility (OpenAI, Ollama, Google, etc., and possibly even its own backend for local model files)
* Multi-modal input/output (text, image, audio, video)
* Tool usage / function calling / MCP, or any built-in multi-agent orchestration support
* RAG integration (not strictly necessary, I can build that myself)
* Streaming input/output support
Preferably looking for it in TypeScript. Currently I'm thinking of either going with the OpenAI SDK or Vercel's ai package. | 2025-05-27T08:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kwib3x/is_there_any_modelprovider_agnostic_clientsdk/ | GamerWael | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwib3x | false | null | t3_1kwib3x | /r/LocalLLaMA/comments/1kwib3x/is_there_any_modelprovider_agnostic_clientsdk/ | false | false | self | 0 | null |
Is it possible to run LLM entirely on decentralized nodes with no cloud backend? | 1 | [removed] | 2025-05-27T09:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kwikte/is_it_possible_to_run_llm_entirely_on/ | Maleficent_Apple_287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwikte | false | null | t3_1kwikte | /r/LocalLLaMA/comments/1kwikte/is_it_possible_to_run_llm_entirely_on/ | false | false | self | 1 | null |
Engineers who work in companies that have embraced AI coding, how has your worklife changed? | 82 | I've been working on my own since just before GPT 4, so I never experienced AI in the workplace. How has the job changed? How are sprints run? Is more of your time spent reviewing pull requests? Has the pace of releases increased? Do things break more often? | 2025-05-27T09:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kwile2/engineers_who_work_in_companies_that_have/ | thezachlandes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwile2 | false | null | t3_1kwile2 | /r/LocalLLaMA/comments/1kwile2/engineers_who_work_in_companies_that_have/ | false | false | self | 82 | null |
Unsloth Fine-tuning without Nvidia | 1 | [removed] | 2025-05-27T09:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kwiyxo/unsloth_finetuning_without_nvidia/ | Glad_Net8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwiyxo | false | null | t3_1kwiyxo | /r/LocalLLaMA/comments/1kwiyxo/unsloth_finetuning_without_nvidia/ | false | false | self | 1 | null |
The Aider LLM Leaderboards were updated with benchmark results for Claude 4, revealing that Claude 4 Sonnet didn't outperform Claude 3.7 Sonnet | 310 | 2025-05-27T09:37:08 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwj2p2 | false | null | t3_1kwj2p2 | /r/LocalLLaMA/comments/1kwj2p2/the_aider_llm_leaderboards_were_updated_with/ | false | false | 310 | {'enabled': True, 'images': [{'id': 'qUOEc2SfwtuE8pn7inuvXBcwGr7N8Gr78cogRiqTaJM', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?width=108&crop=smart&auto=webp&s=2d8a8e6c515161d0ef8ca37e9e9c5c655cd8eeb0', 'width': 108}, {'height': 262, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?width=216&crop=smart&auto=webp&s=cb8712e831aa262905273d190faba31fe15cf498', 'width': 216}, {'height': 389, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?width=320&crop=smart&auto=webp&s=3e461d9b9cfeefbe4fae51d69dee33dea5003d93', 'width': 320}, {'height': 778, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?width=640&crop=smart&auto=webp&s=e89933d9870d06458186daafb142b31f9c95830f', 'width': 640}, {'height': 1168, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?width=960&crop=smart&auto=webp&s=48081b98eb20ed185699fc478ba1783ceb78730e', 'width': 960}, {'height': 1314, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?width=1080&crop=smart&auto=webp&s=ed9fa22d18676f1eb26f490c89f0732cc50ed5ff', 'width': 1080}], 'source': {'height': 2692, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?auto=webp&s=722d231677a3eaad04aa8cf38669ccbecb975bfe', 'width': 2212}, 'variants': {}}]} |
I want to create a project of Text to Speech locally without any external api's | 1 | [removed] | 2025-05-27T09:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kwj35u/i_want_to_create_a_project_of_text_to_speech/ | atmanirbhar21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwj35u | false | null | t3_1kwj35u | /r/LocalLLaMA/comments/1kwj35u/i_want_to_create_a_project_of_text_to_speech/ | false | false | self | 1 | null |
Anyone tried DCPMM with LLMs? | 7 | I've been seeing 128GB DCPMM modules for ~70 USD each and am thinking of using them. What's the performance like? | 2025-05-27T09:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kwj4d7/anyone_tried_dcpmm_with_llms/ | sTrollZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwj4d7 | false | null | t3_1kwj4d7 | /r/LocalLLaMA/comments/1kwj4d7/anyone_tried_dcpmm_with_llms/ | false | false | self | 7 | null |
I want to create a project of Text to Speech locally without api | 1 | [removed] | 2025-05-27T09:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kwj4uy/i_want_to_create_a_project_of_text_to_speech/ | atmanirbhar21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwj4uy | false | null | t3_1kwj4uy | /r/LocalLLaMA/comments/1kwj4uy/i_want_to_create_a_project_of_text_to_speech/ | false | false | self | 1 | null |
I'm looking for Gemini 2.0 Flash alternative for ComfyUI (to run locally) 🙏 | 1 | [removed] | 2025-05-27T10:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kwjrp4/im_looking_for_gemini_20_flash_alternative_for/ | VirtualWishX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwjrp4 | false | null | t3_1kwjrp4 | /r/LocalLLaMA/comments/1kwjrp4/im_looking_for_gemini_20_flash_alternative_for/ | false | false | self | 1 | null |
Wife isn’t home, that means H200 in the living room ;D | 784 | Finally got our H200 System, until it’s going in the datacenter next week that means localLLaMa with some extra power :D | 2025-05-27T10:40:11 | https://www.reddit.com/gallery/1kwk1jm | Flintbeker | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwk1jm | false | null | t3_1kwk1jm | /r/LocalLLaMA/comments/1kwk1jm/wife_isnt_home_that_means_h200_in_the_living_room/ | false | false | 784 | null |
Is speculative decoding effective for handling multiple user queries concurrently, or is it better without SD? | 6 | Has anyone tried speculative decoding for handling multiple user queries concurrently?
how does it perform. | 2025-05-27T11:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kwkhrx/is_speculative_decoding_effective_for_handling/ | Remarkable-Law9287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwkhrx | false | null | t3_1kwkhrx | /r/LocalLLaMA/comments/1kwkhrx/is_speculative_decoding_effective_for_handling/ | false | false | self | 6 | null |
Seeking Guidance on Fine-Tuning for a Desktop Development Project | 1 | [removed] | 2025-05-27T11:16:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kwknjt/seeking_guidance_on_finetuning_for_a_desktop/ | YoussefTrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwknjt | false | null | t3_1kwknjt | /r/LocalLLaMA/comments/1kwknjt/seeking_guidance_on_finetuning_for_a_desktop/ | false | false | self | 1 | null |
newbie, versions mismatch hell with triton, vllm and unsloth | 0 | This is my first time training a model.
Trying to use Unsloth to fine-tune Qwen 0.6B (bnb), but I keep running into problems. At first I asked ChatGPT and it suggested downgrading from Python 3.13 to 3.11; I went there and now it's suggesting going to 3.10.
Reading the Unsloth, vLLM, and Triton repos, none of them mention having to use Python 3.10.
I keep getting errors like this:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.8.5.post1 requires torch==2.6.0, but you have torch 2.7.0 which is incompatible.
torch 2.7.0 requires triton==3.3.0; platform_system == "Linux" and platform_machine == "x86_64", but you have triton 3.2.0 which is incompatible.
Of course, when I go to Triton 3.3.0 other things break, and if I take the other route and go to PyTorch 2.6.0 even more things break.
here is the script i am using if its need https://github.com/StudentOnCrack/confighosting/blob/main/myscript | 2025-05-27T11:46:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kwl6lo/newbie_versions_mismatch_hell_with_tritonvllm_and/ | Excel_Document | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwl6lo | false | null | t3_1kwl6lo | /r/LocalLLaMA/comments/1kwl6lo/newbie_versions_mismatch_hell_with_tritonvllm_and/ | false | false | self | 0 | null |
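For reference, the resolver messages above point at one internally consistent pin set: vllm 0.8.5.post1 wants torch 2.6.0, and torch 2.6.0 pairs with triton 3.2.0. A minimal sketch of pinning that combination explicitly in a fresh venv, assuming (not guaranteeing) the three are mutually compatible:

    # sketch: pin the versions the resolver errors ask for, in a fresh venv
    pip install "torch==2.6.0" "triton==3.2.0" "vllm==0.8.5.post1"
    # then install unsloth; if it tries to upgrade torch, the pins above need revisiting
    pip install unsloth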
Are there any good small MoE models? Something like 8B or 6B or 4B with active 2B | 9 | Thanks | 2025-05-27T11:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kwl974/are_there_any_good_small_moe_models_something/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwl974 | false | null | t3_1kwl974 | /r/LocalLLaMA/comments/1kwl974/are_there_any_good_small_moe_models_something/ | false | false | self | 9 | null |
I created a ChatGPT-like UI for Local LLMs | 1 | [removed] | 2025-05-27T12:14:08 | https://www.reddit.com/gallery/1kwlpfe | BeautifulFlower7101 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwlpfe | false | null | t3_1kwlpfe | /r/LocalLLaMA/comments/1kwlpfe/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 1 | null |
|
Run qwen 30b-a3b on Android local with Alibaba MNN Chat | 66 | https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md#version-050
| 2025-05-27T12:26:03 | https://v.redd.it/aafvzgkhib3f1 | Juude89 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwlxvb | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/aafvzgkhib3f1/DASHPlaylist.mpd?a=1750940777%2COTIyYjZiYjFhMTc1YTdlMWNjNDUwNzZiNzc3YmUwNDc1MGYxYjEwOTIyYzkzYmVlNThkNDZjNmFiYTE2MTE1MQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/aafvzgkhib3f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/aafvzgkhib3f1/HLSPlaylist.m3u8?a=1750940777%2COGY4M2FiNmM1OWVhMmY0NzZiODE1ZmViM2RjN2QwZWE0ZmMwYTNiYWI3NjUzYmVhYzgxZTQ2ZjYwMTU1ZDc0MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/aafvzgkhib3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 582}} | t3_1kwlxvb | /r/LocalLLaMA/comments/1kwlxvb/run_qwen_30ba3b_on_android_local_with_alibaba_mnn/ | false | false | 66 | {'enabled': False, 'images': [{'id': 'aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC.png?width=108&crop=smart&format=pjpg&auto=webp&s=5e732ab1100613bbeaf51f16ad95465b67e46115', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC.png?width=216&crop=smart&format=pjpg&auto=webp&s=0154b5a2771370cd63dc7ab3ba1a537e23b5f276', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC.png?width=320&crop=smart&format=pjpg&auto=webp&s=c0de20c5712c5e2c888297298c5552e0cfb7602a', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC.png?width=640&crop=smart&format=pjpg&auto=webp&s=67fbce2911c67d1d885bb1921f4c25b31ce5bd86', 'width': 640}], 'source': {'height': 1656, 'url': 'https://external-preview.redd.it/aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC.png?format=pjpg&auto=webp&s=e431dd47ae0a5d2c14b656ccd78a581e91c36bab', 'width': 752}, 'variants': {}}]} |
|
Please help me choose a GPU for my Ollama setup | 0 | So, I'm dipping my feet into local LLMs. I first tried it with LM Studio on my desktop with a 3080 Ti and it runs nicely, but I want to run it on my home server, not my desktop.
At the moment I run it in a Debian VM on Proxmox. It has 12 CPU threads dedicated to it (out of the 12 threads / 6 cores my AMD Ryzen 3600 has) and 40 out of 48 GB of DDR4. There I run Ollama and Open WebUI and it works, but models are painfully slow to answer, even though I'm only trying the smallest model versions available. I'm wondering whether adding a GPU to the server and passing it through to the VM would make things run fast-ish. At the moment it takes several minutes to the first word, and then several seconds per word :)
My motherboard is an ASRock B450M Pro4; it has 1x PCIe 3.0 x16, 1x PCIe 2.0 x16, and 1x PCIe 2.0 x1.
I have access to a local used server parts retailer; here are the options they offer at the moment:
- NVIDIA RTX A4000 16GB PCI Express 4.0 x16 ~$900 USD
- NVIDIA QUADRO M4000 8GB PCI-E 3.0 x16 ~$200 USD
- NVIDIA TESLA M10 32GB PCI-E 3.0 x16 ~$150 USD
- NVIDIA TESLA M60 16GB PCI-E 3.0 x16 ~$140 USD
Are any of those good for their price, or am I better off looking for other options elsewhere? Take into account that everything new around here costs ~2x the US price.
PS: I'm also wondering if having models stored on an HDD has any effect on performance other than the time to load the model before use? | 2025-05-27T12:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kwlzez/please_help_to_choose_gpu_for_ollama_setup/ | bswan2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwlzez | false | null | t3_1kwlzez | /r/LocalLLaMA/comments/1kwlzez/please_help_to_choose_gpu_for_ollama_setup/ | false | false | self | 0 | null |
Finetuning or running the new gemma 3n models locally? | 2 | Has anyone had any luck running these new 3n models?
I noticed the safetensors aren't released yet, so if you are running it or fine-tuning it, how are you going about the process?
[https://huggingface.co/collections/google/gemma-3n-preview-682ca41097a31e5ac804d57b](https://huggingface.co/collections/google/gemma-3n-preview-682ca41097a31e5ac804d57b) | 2025-05-27T12:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kwm20i/finetuning_or_running_the_new_gemma_3n_models/ | jay2jp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwm20i | false | null | t3_1kwm20i | /r/LocalLLaMA/comments/1kwm20i/finetuning_or_running_the_new_gemma_3n_models/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'lMXsg923oKXNqAFcv091XpOzt0tS-VbvJyD1BGYthSo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=108&crop=smart&auto=webp&s=7d9d79bae8b5636ef4da12984fd0bbb5d013938c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=216&crop=smart&auto=webp&s=f7e109ac6322821f95556a423347fa4aa215c89a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=320&crop=smart&auto=webp&s=ac856560d7e5144ee9e70b315721e7f2b1d0aefd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=640&crop=smart&auto=webp&s=4092ee3492e35aa48ddc115bdbd7e2144d1d03c2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=960&crop=smart&auto=webp&s=e24fe3434779877705608854610506996af57828', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=1080&crop=smart&auto=webp&s=2f32479af3df3abb3c9d073993f7f76f6fa986c1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?auto=webp&s=343c3fe366d87c0c826e8293c823779a97b72152', 'width': 1200}, 'variants': {}}]} |
Any good way to use LM Studio API as a chat backend with anything besides OpenWebUI? Tired of ChatGPT model switching and want all local with damn web search. | 11 | Tried for hours with Open WebUI and it doesn't see a single model I have in LM Studio, even with one loaded. I low-key just want a local web UI with web search that I can use Qwen 30B with, and to stop dealing with ChatGPT's awful model switching, which gives me wrong answers to basic questions unless I manually switch it to o4-mini for EVERY query. | 2025-05-27T12:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kwm5z0/any_good_way_to_use_lm_studio_api_as_a_chat/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwm5z0 | false | null | t3_1kwm5z0 | /r/LocalLLaMA/comments/1kwm5z0/any_good_way_to_use_lm_studio_api_as_a_chat/ | false | false | self | 11 | null |
Just bought a used 3090. Should I keep my 3080 10GB? | 1 | [removed] | 2025-05-27T12:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kwmc3n/just_bought_a_used_3090_should_i_keep_my_3080_10gb/ | Turbulent_Jump_2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwmc3n | false | null | t3_1kwmc3n | /r/LocalLLaMA/comments/1kwmc3n/just_bought_a_used_3090_should_i_keep_my_3080_10gb/ | false | false | self | 1 | null |
mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) by ngxson · Pull Request #13784 · ggml-org/llama.cpp | 61 | 2025-05-27T12:58:06 | https://github.com/ggml-org/llama.cpp/pull/13784 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kwmlos | false | null | t3_1kwmlos | /r/LocalLLaMA/comments/1kwmlos/mtmd_support_qwen_25_omni_input_audiovision_no/ | false | false | default | 61 | null |
|
FairyR1 32B / 14B | 42 | 2025-05-27T13:19:11 | https://huggingface.co/collections/PKU-DS-LAB/fairy-r1-6834014fe8fd45bc211c6dd7 | AaronFeng47 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kwn27n | false | null | t3_1kwn27n | /r/LocalLLaMA/comments/1kwn27n/fairyr1_32b_14b/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'VV-f_T9dnmvGYd_O4osXjD47mjveffCENOVTP6PdJvw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?width=108&crop=smart&auto=webp&s=475c9047773d45c6e63407bdf89c2e914ea97493', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?width=216&crop=smart&auto=webp&s=dcbd27278fa041feca2ce85bb95e65ec70867600', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?width=320&crop=smart&auto=webp&s=07321926ad50ca5ecb5b60abd0bb7d10506888b6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?width=640&crop=smart&auto=webp&s=4b257239b7a4b96c72e2b478eb3665269afe6ea4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?width=960&crop=smart&auto=webp&s=29b750eafb1d23e63a2b3a0f8bc27453ee44a08d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?width=1080&crop=smart&auto=webp&s=5a8fe27239a2446755fb614340f09bf3430a0e8d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?auto=webp&s=04e8345aae18bf4809084272b6b4360a4ebf4c67', 'width': 1200}, 'variants': {}}]} |
||
No DeepSeek v3 0526 | 0 | Unfortunately, the link was a placeholder and the release didn't materialize. | 2025-05-27T13:19:15 | https://docs.unsloth.ai/basics/deepseek-v3-0526-rumors | Hanthunius | docs.unsloth.ai | 1970-01-01T00:00:00 | 0 | {} | 1kwn29v | false | null | t3_1kwn29v | /r/LocalLLaMA/comments/1kwn29v/no_deepseek_v3_0526/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]} |
|
Setup Recommendation for University (H200 vs RTX 6000 Pro) | 6 | My (small) university asked me to build a machine with GPUs that we're going to share between 2 PhD students and myself for a project (we got a grant for that).
The budget is 100k€. The machine will be used for training and data generation during the first year.
After that, we will turn it into an inference machine to serve the administration and professors (local chatbot + RAG). This will be used to serve sota open source models and remove all privacy concerns. I guess we can expect to run something around DeepSeek size in mid 2026 (or multiple instances of any large MoE).
We will have more budget in the future that's why we'll turn this machine for administrative/basic tasks.
We're currently weighing two main options:
1. 4x NVIDIA H200 GPUs (141 GB each)
2. 8x NVIDIA RTX 6000 Pro Blackwell (96 GB each)
What do you think? | 2025-05-27T13:25:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kwn7t4/setup_recommendation_for_university_h200_vs_rtx/ | tkon3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwn7t4 | false | null | t3_1kwn7t4 | /r/LocalLLaMA/comments/1kwn7t4/setup_recommendation_for_university_h200_vs_rtx/ | false | false | self | 6 | null |
Trying to fine tune llama 3.2 3B on a custom data set for a random college to see how it goes ....but results are not as expected....new trained model can't seem to answer based on the new data. | 1 | [removed] | 2025-05-27T13:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kwn959/trying_to_fine_tune_llama_32_3b_on_a_custom_data/ | Adorable-Device-2732 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwn959 | false | null | t3_1kwn959 | /r/LocalLLaMA/comments/1kwn959/trying_to_fine_tune_llama_32_3b_on_a_custom_data/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
Made an app for LLM/MCP/Agent experimentation | 9 | This is an app for experimenting with different AI models and MCP servers. It supports anything OpenAI-compatible - OpenAI, Google, Mistral, LM Studio, Ollama, llama.cpp.
It's an open-source desktop app in Go [https://github.com/unra73d/agent-smith](https://github.com/unra73d/agent-smith)
You can select any combination of AI model/tool/agent role and experiment for your PoC/demo or maybe that would be your daily assistant.
# Features
* Chat with an LLM model. You can change the model, role, and tools mid-conversation, which allows pretty neat scenarios
* Create customized agent roles via system prompts
* Use tools from MCP servers (both SSE and stdio)
* Builtin tool - Lua code execution when you need model to calculate something precisely
* Multiple chats in parallel
There is a bunch of predefined roles, but obviously you can configure them as you like. For example, the explain-to-me-like-I'm-5 agent:
https://preview.redd.it/njt76bb1tb3f1.png?width=1668&format=png&auto=webp&s=a522284551092d8142866fbf19969e3b89e3ce4e
And agent with the role of teacher would answer completely differently - it will see that app has built in Lua interpreter, will write an actual code to calculate stuff and answer you like this:
https://preview.redd.it/u5s9fzigtb3f1.png?width=1668&format=png&auto=webp&s=2345a5bc320144e99ee206af43ca8edb9095871e
Different models behave differently, and it is exactly one of the reasons I built this - to have a playground where I can freely combine different models, prompts and tools:
https://preview.redd.it/5ukfi7evtb3f1.png?width=1668&format=png&auto=webp&s=f5d1139fb858a452d31b1bc857937701d51231cc
Since this is a simple Go project, it is quite easy to run it:
`git clone` [`https://github.com/unra73d/agent-smith`](https://github.com/unra73d/agent-smith)
`cd agent-smith`
Then you can either run it with
`go run main.go`
or build an app that you can just double-click
`go build main.go`
| 2025-05-27T13:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kwndsy/made_app_for_llmmcpagent_experimenation/ | Gold_Ad_2201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwndsy | false | null | t3_1kwndsy | /r/LocalLLaMA/comments/1kwndsy/made_app_for_llmmcpagent_experimenation/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'ecA_GUF8_sowTBPFJO57MLXl9YjvGT4N6nOHP6HlaFk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?width=108&crop=smart&auto=webp&s=278737a0c99ce81afabd4f2823bda207ccda7d65', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?width=216&crop=smart&auto=webp&s=be2306aeb783288e0e1231b6864d81848c733a4b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?width=320&crop=smart&auto=webp&s=835046158b89c86d18fd45b696e4b2367281ac37', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?width=640&crop=smart&auto=webp&s=6399a2ecd25506ae1cf1af135c7fae535457b787', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?width=960&crop=smart&auto=webp&s=d0f1f9476bd8ef26f4fa52891eac349d1296b082', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?width=1080&crop=smart&auto=webp&s=0f1d4c9949834621a6d3cf5389f578c93de0beca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?auto=webp&s=7080bf2ae040cbb7d4ca8c1953d5f46f419b855c', 'width': 1200}, 'variants': {}}]} |
|
Any low resources but high-quality sound Cloning TTS? | 1 | [removed] | 2025-05-27T13:43:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kwnm07/any_low_resources_but_highquality_sound_cloning/ | Tombother | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwnm07 | false | null | t3_1kwnm07 | /r/LocalLLaMA/comments/1kwnm07/any_low_resources_but_highquality_sound_cloning/ | false | false | self | 1 | null |
Switched from a PC to Mac for LLM dev - One week Later | 79 |
[Broke down and bought a Mac Mini - my processes run 5x faster : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1ks5sh4/broke_down_and_bought_a_mac_mini_my_processes_run/)
Exactly a week ago I tromped to the Apple Store and bought a Mac Mini M4 Pro with 24GB memory - the model they usually stock in store. I really *didn't* want to move from Windows because I've used Windows since 3.0 and while it has its annoyances, I know the platform and didn't want to stall my development to go down a rabbit hole of new platform hassles - and I'm not a Windows, Mac or Linux 'fan' - they're tools to me - I've used them all - but always thought the MacOS was the least enjoyable to use.
Despite my reservations I bought the thing - and a week later - I'm glad I did - it's a keeper.
It took about 2 hours to set up my simple-as-possible free stack. Anaconda, Ollama, VScode. Download models, build model files, and maybe an hour of cursing to adjust the code for the Mac and I was up and running. I have a few python libraries that complain a bit but still run fine - no issues there.
The unified memory is a game-changer. It's not like having a gamer box with multiple slots having Nvidia cards, but it fits my use-case perfectly - I need to be able to travel with it in a backpack. I run a 13b model 5x faster than my CPU-constrained MiniPC did with an 8b model. I do need to use a free Mac utility to speed my fans up to full blast when running so I don't melt my circuit boards and void my warranty - but this box is the sweet-spot for me.
Still not a big lover of the MacOS but it works - and the hardware and unified memory architecture jams a lot into a small package.
I was hesitant to make the switch because I thought it would be a hassle - but it wasn't all that bad.
| 2025-05-27T13:54:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kwnv4o/switched_from_a_pc_to_mac_for_llm_dev_one_week/ | ETBiggs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwnv4o | false | null | t3_1kwnv4o | /r/LocalLLaMA/comments/1kwnv4o/switched_from_a_pc_to_mac_for_llm_dev_one_week/ | false | false | self | 79 | null |
Play with Meta's Byte Latent Transformer "tokenizer-free" patcher in a HF Space | 1 | [removed] | 2025-05-27T13:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kwnz5l/play_with_metas_byte_latent_transformer/ | lucalp__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwnz5l | false | null | t3_1kwnz5l | /r/LocalLLaMA/comments/1kwnz5l/play_with_metas_byte_latent_transformer/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?width=108&crop=smart&auto=webp&s=2cd1045517eda93c2aaafc19130bea85c7466318', 'width': 108}], 'source': {'height': 120, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?auto=webp&s=6d730f0aadb2da7eefca105ee16d8e99ecfca4a6', 'width': 124}, 'variants': {}}]} |
Why is my LLaMA running on CPU? | 0 | Sorry, I am obviously new to this.
I have Python 3.10.6 installed. I created a venv, installed the requirements from the file, and successfully ran the web UI locally, but when I ran my first prompt I noticed it's executing on the CPU.
I also couldn't find any documentation, am I that bad at this? ;) If you have any link or tips please help :) | 2025-05-27T14:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kwo41n/why_is_my_llama_running_on_cpu/ | ThinKingofWaves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwo41n | false | null | t3_1kwo41n | /r/LocalLLaMA/comments/1kwo41n/why_is_my_llama_running_on_cpu/ | false | false | self | 0 | null |
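A generic first check for this kind of problem (not specific to any particular web UI) is whether PyTorch inside the venv can see the GPU at all; if it can't, the usual culprit is a CPU-only torch wheel:

    # inside the activated venv
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
    # if this prints False, reinstall a CUDA build of torch, e.g. for CUDA 12.1:
    # pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121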
Invented a new AI reasoning framework called HDA2A and wrote a basic paper - Potential to be something massive - check it out | 0 | [removed] | 2025-05-27T14:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kwo5d3/invented_a_new_ai_reasoning_framework_called/ | Zizosk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwo5d3 | false | null | t3_1kwo5d3 | /r/LocalLLaMA/comments/1kwo5d3/invented_a_new_ai_reasoning_framework_called/ | false | false | self | 0 | null |
I created a ChatGPT-like UI for Local LLMs | 1 | [removed] | 2025-05-27T14:12:08 | https://www.reddit.com/gallery/1kwoacr | BeautifulFlower7101 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwoacr | false | null | t3_1kwoacr | /r/LocalLLaMA/comments/1kwoacr/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 1 | null |
|
I created a ChatGPT-like UI for Local LLMs | 1 | [removed] | 2025-05-27T14:14:44 | https://www.reddit.com/gallery/1kwocja | BeautifulFlower7101 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwocja | false | null | t3_1kwocja | /r/LocalLLaMA/comments/1kwocja/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 1 | null |
|
I created a ChatGPT-like UI for Local LLMs | 20 | Hey r/LocalLLaMA (and other AI enthusiasts!),
Wanted to share a project I've been pouring my evenings and weekends into: **AnyLM**.
I'm a big fan of local LLMs (Ollama, LMStudio, etc.) but always found myself wanting a cleaner, more integrated UI, something like ChatGPT, but for all my models, both local and API-based (OpenAI, Anthropic, Google). I wanted all my conversations in one spot.
So, I built AnyLM! It's a desktop app (Windows for now, macOS coming soon) that offers:
* A single interface for local models (Ollama/LMStudio) and API models.
* A clean, ChatGPT-style chat experience.
* Local data storage for privacy.
* File/image support & chat export.
It's currently available as a one-time purchase ($39.99 early bird price) with a 7-day free trial if you want to try it out.
Landing page & download: [https://anylm.app/](https://anylm.app/)
This has been a fun (and challenging!) project. I'd be super grateful for any feedback, suggestions, or if you just want to try it out and let me know what you think!
| 2025-05-27T14:22:41 | https://www.reddit.com/gallery/1kwoj76 | QuantumPancake422 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwoj76 | false | null | t3_1kwoj76 | /r/LocalLLaMA/comments/1kwoj76/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 20 | null |
|
Tool like llama-swap, but for very different runtimes too? | 1 | [removed] | 2025-05-27T14:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kwopd7/tool_like_llamaswap_but_for_very_different/ | yelling-at-clouds-40 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwopd7 | false | null | t3_1kwopd7 | /r/LocalLLaMA/comments/1kwopd7/tool_like_llamaswap_but_for_very_different/ | false | false | self | 1 | null |
error when trying to import a dataset in Google Collab | 1 | [removed] | 2025-05-27T14:35:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kwounh/error_when_trying_to_import_a_dataset_in_google/ | Glad_Net8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwounh | false | null | t3_1kwounh | /r/LocalLLaMA/comments/1kwounh/error_when_trying_to_import_a_dataset_in_google/ | false | false | self | 1 | null |
Local LLM server setup | 1 | [removed] | 2025-05-27T15:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kwq0sd/local_llm_server_setup/ | _infY_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwq0sd | false | null | t3_1kwq0sd | /r/LocalLLaMA/comments/1kwq0sd/local_llm_server_setup/ | false | false | self | 1 | null |
Local LLM server setup | 1 | [removed] | 2025-05-27T15:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqbra/local_llm_server_setup/ | _infY_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqbra | false | null | t3_1kwqbra | /r/LocalLLaMA/comments/1kwqbra/local_llm_server_setup/ | false | false | self | 1 | null |
Local Educational LLM | 1 | [removed] | 2025-05-27T15:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqe4k/local_educational_llm/ | _camera_up | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqe4k | false | null | t3_1kwqe4k | /r/LocalLLaMA/comments/1kwqe4k/local_educational_llm/ | false | false | self | 1 | null |
Best local/open-source coding models for 24GB VRAM? | 9 | Hey, so I recently got a 3090 for pretty cheap, and thus I'm not really memory-constrained anymore.
I wanted to ask for the best currently available models I could use for code on my machine.
That'd be for all sorts of projects, but mostly Python, C, C++, and Java. Not much web dev or niche languages. I'm looking for an accurate and knowledgeable model/fine-tune for those. It needs to handle a fairly big context (let's say 10k-20k at least) and provide good results if I manually give it the right parts of the code base. I don't really care about reasoning much unless it increases the output quality. Vision would be a plus but it's absolutely not necessary; I just focus on code quality first.
I currently know of Qwen 3 32B, GLM-4 32B, and Qwen 2.5 Coder 32B.
Qwen 3 results have been pretty hit-or-miss for me personally: sometimes it works, sometimes it doesn't. Strangely enough, it seems to provide better results with `no_think`, as it tends to overthink and go out of context (the weird thing is that in the think block I can see it attempting to do what I asked, and then it drifts into speculating about everything else for a long time).
GLM-4 has given me better results in the few attempts I've made so far, but it sometimes makes small mistakes that look right in logic and on paper but don't really compile. It looks pretty good though; perhaps I could combine it with a secondary model for cleanup. It lets me run at 20k context, unlike Qwen 3, which doesn't seem to work past 8-10k for me.
I've yet to give Qwen 2.5 Coder another shot; last time I used it, it was OK, but I used a smaller variant with fewer parameters and didn't test it extensively.
Speaking of which, can inference speed affect the final output quality? As in, for the same model and same size, will it be the same quality but much faster with my new card or is there a tradeoff? | 2025-05-27T15:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqq1p/best_localopensource_coding_models_for_24gb_vram/ | HRudy94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqq1p | false | null | t3_1kwqq1p | /r/LocalLLaMA/comments/1kwqq1p/best_localopensource_coding_models_for_24gb_vram/ | false | false | self | 9 | null |
[Research] AutoThink: Adaptive reasoning technique that improves local LLM performance by 43% on GPQA-Diamond | 162 | Hey r/LocalLLaMA!
I wanted to share a technique we've been working on called **AutoThink** that significantly improves reasoning performance on local models through adaptive resource allocation and steering vectors.
# What is AutoThink?
Instead of giving every query the same amount of "thinking time," AutoThink:
1. **Classifies query complexity** (HIGH/LOW) using an adaptive classifier
2. **Dynamically allocates thinking tokens** based on complexity (70-90% for hard problems, 20-40% for simple ones)
3. **Uses steering vectors** to guide reasoning patterns during generation
Think of it as making your local model "think harder" on complex problems and "think faster" on simple ones.
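For intuition, a toy sketch of the allocation idea (illustrative only - not the actual optillm code; the ratios just mirror the ranges above):

    def thinking_budget(complexity: str, max_tokens: int) -> int:
        # hard queries get most of the budget for <think> tokens,
        # easy ones only a small slice
        if complexity == "HIGH":
            return int(0.80 * max_tokens)  # ~70-90% for complex problems
        return int(0.30 * max_tokens)      # ~20-40% for simple ones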
# Performance Results
Tested on **DeepSeek-R1-Distill-Qwen-1.5B**:
* **GPQA-Diamond**: 31.06% vs 21.72% baseline (+9.34 points, 43% relative improvement)
* **MMLU-Pro**: 26.38% vs 25.58% baseline (+0.8 points)
* Uses **fewer tokens** than baseline approaches
# Technical Approach
**Steering Vectors**: We use Pivotal Token Search (PTS) - a technique from Microsoft's Phi-4 paper that we implemented and enhanced. These vectors modify activations to encourage specific reasoning patterns:
* `depth_and_thoroughness`
* `numerical_accuracy`
* `self_correction`
* `exploration`
* `organization`
**Classification**: Built on our adaptive classifier that can learn new complexity categories without retraining.
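Conceptually, applying a steering vector is just a forward hook that nudges the hidden states of one decoder layer. A simplified sketch (the real implementation selects and scales vectors per reasoning pattern):

    import torch

    def make_steering_hook(steering_vector: torch.Tensor, strength: float = 4.0):
        def hook(module, inputs, output):
            # decoder layers return a tuple; hidden states are the first element
            hidden = output[0] + strength * steering_vector.to(output[0].device, output[0].dtype)
            return (hidden,) + output[1:]
        return hook

    # attach to the target layer, e.g. layer 19 as in the example below
    # handle = model.model.layers[19].register_forward_hook(make_steering_hook(vec))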
# Model Compatibility
Works with any local reasoning model:
* DeepSeek-R1 variants
* Qwen models
* Llama models
* Any HuggingFace model you can load locally
# How to Try It
    # Install optillm
    pip install optillm
    # Basic usage
    from optillm.autothink import autothink_decode
    response = autothink_decode(
        model, tokenizer, messages,
        {
            "steering_dataset": "codelion/Qwen3-0.6B-pts-steering-vectors",
            "target_layer": 19,  # adjust based on your model
        },
    )
Full examples in the repo: [https://github.com/codelion/optillm/tree/main/optillm/autothink](https://github.com/codelion/optillm/tree/main/optillm/autothink)
# Research Links
* **Paper**: [https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=5253327](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5253327)
* **AutoThink Code**: [https://github.com/codelion/optillm/tree/main/optillm/autothink](https://github.com/codelion/optillm/tree/main/optillm/autothink)
* **PTS Implementation**: [https://github.com/codelion/pts](https://github.com/codelion/pts)
* **HuggingFace Blog**: [https://huggingface.co/blog/codelion/pts](https://huggingface.co/blog/codelion/pts)
* **Adaptive Classifier**: [https://github.com/codelion/adaptive-classifier](https://github.com/codelion/adaptive-classifier)
# Current Limitations
* Currently works with local inference only (not integrated with optillm proxy yet)
* Requires models that support thinking tokens (`<think>` and `</think>`)
* Need to tune `target_layer` parameter for different model architectures
* Steering vector datasets are model-specific (though we provide some pre-computed ones)
# What's Next
We're working on:
* Support for more model architectures
* Better automatic layer detection
* Community-driven steering vector datasets
# Discussion
Has anyone tried similar approaches with local models? I'm particularly interested in:
* How different model families respond to steering vectors
* Alternative ways to classify query complexity
* Ideas for extracting better steering vectors
Would love to hear your thoughts and results if you try it out!
**EDIT**: For those asking about computational overhead - the classification step adds minimal latency (\~10ms), and the adaptive token allocation actually reduces total computation for simple queries while improving performance on complex ones.
**EDIT 2**: Someone asked about memory usage - steering vectors are small (typically <1MB per pattern) and the hooks add negligible memory overhead. The main requirement is having enough VRAM for your base model. | 2025-05-27T15:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqt64/research_autothink_adaptive_reasoning_technique/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqt64 | false | null | t3_1kwqt64 | /r/LocalLLaMA/comments/1kwqt64/research_autothink_adaptive_reasoning_technique/ | false | false | self | 162 | {'enabled': False, 'images': [{'id': 'FDXdZKuGBEAtS9eO0aBLVTgnysK1vKKknVcyyajidhI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?width=108&crop=smart&auto=webp&s=432009460474e3d3b1cb12c99e65235fd8abcd13', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?width=216&crop=smart&auto=webp&s=ca2c76d419741e0f524c741cd0293479ea8dbb03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?width=320&crop=smart&auto=webp&s=5461a707c9e2fd27f984ecd8ef40dcefd68cc129', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?width=640&crop=smart&auto=webp&s=33f1a57516c261053db58c743f1c6cffc683ca03', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?width=960&crop=smart&auto=webp&s=06b00fa835e5c3bb348180cfacb860a269339643', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?width=1080&crop=smart&auto=webp&s=f774093bd4896afe3caa61a7f9b00499f05e6a21', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?auto=webp&s=ed936335e7dcf95e4c0a286499067b8bdbfb878d', 'width': 1200}, 'variants': {}}]} |
Models with very recent training data? | 4 | I'm looking for a local model that has very recent training data, like April or May of this year.
I want to use it with Ollama and connect it to Figma's new MCP server so that I can instruct the model to create directly in Figma.
Seeing as Figma MCP support was only released in the last few months, I figure I might have some issues trying to do this with a model that doesn't know the Figma MCP exists.
Does this matter? | 2025-05-27T16:08:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kwr7ya/models_with_very_recent_training_data/ | new_pr0spect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwr7ya | false | null | t3_1kwr7ya | /r/LocalLLaMA/comments/1kwr7ya/models_with_very_recent_training_data/ | false | false | self | 4 | null |
Why can't I reproduce benchmark scores from papers like Phi, Llama, or Qwen? Am I doing something wrong or is this normal? | 1 | [removed] | 2025-05-27T16:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kwrcut/why_cant_i_reproduce_benchmark_scores_from_papers/ | Loose-Touch6108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwrcut | false | null | t3_1kwrcut | /r/LocalLLaMA/comments/1kwrcut/why_cant_i_reproduce_benchmark_scores_from_papers/ | false | false | self | 1 | null |
Voice Assisted TODO Agent with Ollama | 1 | [removed] | 2025-05-27T16:15:29 | Wooden_Living_4553 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwrdzl | false | null | t3_1kwrdzl | /r/LocalLLaMA/comments/1kwrdzl/voice_assisted_todo_agent_with_ollama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'LisfHUiqtK9k1MtziN_r-RZlPG_tB6GEhiKt4WeyidE', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?width=108&crop=smart&auto=webp&s=70020610d2f3b21ce82f42454d59ffb465fd8777', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?width=216&crop=smart&auto=webp&s=5d9fa95e5108c2954ff9a185991e5cc1db42797c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?width=320&crop=smart&auto=webp&s=8b5bb54fedbadcba59b5f490583ecd9be2bb326e', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?width=640&crop=smart&auto=webp&s=734c49ca3696094f980183e8a8c0a53a31b23e90', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?width=960&crop=smart&auto=webp&s=b36f2190c6979f34d05e511226c1d03b4e925062', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?width=1080&crop=smart&auto=webp&s=246bce7bd445786098c63eaa5605f9993d531823', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?auto=webp&s=fcc4ebf68e1bc2a749540b6dbf08494e042dd1c8', 'width': 1920}, 'variants': {}}]} |
||
Why can't I reproduce benchmark scores from papers like Phi, Llama, or Qwen? Am I doing something wrong or is this normal? | 1 | [removed] | 2025-05-27T16:15:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kwre9q | false | null | t3_1kwre9q | /r/LocalLLaMA/comments/1kwre9q/why_cant_i_reproduce_benchmark_scores_from_papers/ | false | false | default | 1 | null |
||
[META] Too many apps! | 1 | [removed] | 2025-05-27T16:24:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kwrmsf/meta_too_many_apps/ | PANIC_EXCEPTION | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwrmsf | false | null | t3_1kwrmsf | /r/LocalLLaMA/comments/1kwrmsf/meta_too_many_apps/ | false | false | self | 1 | null |
Gemma3 fully OSS model alternative (context especially)? | 4 | Hey all. So I'm trying to move my workflow from cloud-based proprietary models to locally based FOSS models. I am using OLMO2 as my primary driver since it has good performance and a fully open dataset. However it's context is rather limited for large code files. Does anyone have a suggestion for a large context model that ALSO is FOSS? Currently I'm using Gemma but that's obviously proprietary dataset. | 2025-05-27T16:29:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kwrr55/gemma3_fully_oss_model_alternative_context/ | InvertedVantage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwrr55 | false | null | t3_1kwrr55 | /r/LocalLLaMA/comments/1kwrr55/gemma3_fully_oss_model_alternative_context/ | false | false | self | 4 | null |
Hunyuan releases HunyuanPortrait | 55 | 🎉 Introducing HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation
👉What's New?
1⃣Turn static images into living art! 🖼➡🎥
2⃣Unparalleled realism with Implicit Control + Stable Video Diffusion
3⃣SoTA temporal consistency & crystal-clear fidelity
This breakthrough method outperforms existing techniques, effectively disentangling appearance and motion under various image styles.
👉Why Matters?
With this method, animators can now create highly controllable and vivid animations by simply using a single portrait image and video clips as driving templates.
✅ One-click animation 🖱: Single image + video template = hyper-realistic results! 🎞
✅ Perfectly synced facial dynamics & head movements
✅ Identity consistency locked across all styles
👉A Game-changer for Fields like:
▶️Virtual Reality + AR experiences 👓
▶️Next-gen gaming Characters 🎮
▶️Human-AI interactions 🤖💬
📚Dive Deeper
Check out our paper to learn more about the magic behind HunyuanPortrait and how it’s setting a new standard for portrait animation!
🔗 Project Page: https://kkakkkka.github.io/HunyuanPortrait/
🔗 Research Paper: https://arxiv.org/abs/2503.18860
Demo: https://x.com/tencenthunyuan/status/1912109205525528673?s=46
🌟 Rewriting the rules of digital humans one frame at a time! | 2025-05-27T16:34:10 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwrv8g | false | null | t3_1kwrv8g | /r/LocalLLaMA/comments/1kwrv8g/hunyuan_releases_hunyuanportrait/ | false | false | 55 | {'enabled': True, 'images': [{'id': 'sCkyKbDfW0hRBxZRbIS9y9gU-tEDbnVzRPWkMRvm-hg', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?width=108&crop=smart&auto=webp&s=9d6206e12f4c728bb898ed432be73c840616b0ca', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?width=216&crop=smart&auto=webp&s=f360fc664d6fea49adaa3904e8e58c78fc6089e9', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?width=320&crop=smart&auto=webp&s=d3b7a07d2e71ba365721e1610eeffbc40498a611', 'width': 320}, {'height': 362, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?width=640&crop=smart&auto=webp&s=457d1dce333f1637875489b18ba0f1081aa38b7a', 'width': 640}, {'height': 544, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?width=960&crop=smart&auto=webp&s=5af9c60cc2a6f7834b98dce264e720dfd7531dc8', 'width': 960}, {'height': 612, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?width=1080&crop=smart&auto=webp&s=73b064f3286f05b0ad5e92ff9fb2904ccea146f0', 'width': 1080}], 'source': {'height': 1173, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?auto=webp&s=d94a1b778c8b8b6ec8c0c6f3195d18144764f292', 'width': 2069}, 'variants': {}}]} |
||
Why can't I reproduce results from papers like Phi, Llama, or Qwen? Am I doing something wrong or is this normal? | 1 | [removed] | 2025-05-27T16:37:15 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kwry21 | false | null | t3_1kwry21 | /r/LocalLLaMA/comments/1kwry21/why_cant_i_reproduce_results_from_papers_like_phi/ | false | false | default | 1 | null |
||
Is there a way to buy the NVIDIA RTX PRO 6000 Blackwell Server Edition right now? | 6 | I'm in the market for one due to the fact I've got a server infrastructure (with an A30 right now) in my homelab and everyone here is talking about the Workstation edition. I'm in the opposite boat, I need one of the cards without a fan and Nvidia hasn't emailed me anything indicating that the server cards are available yet. I guess I just wanted to make sure I'm not missing out and that the server version of the card isn't available yet. | 2025-05-27T16:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kws15n/is_there_a_way_to_buy_the_nvidia_rtx_pro_6000/ | Yorn2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kws15n | false | null | t3_1kws15n | /r/LocalLLaMA/comments/1kws15n/is_there_a_way_to_buy_the_nvidia_rtx_pro_6000/ | false | false | self | 6 | null |
Is there a local LLM that can give you a description or tags for videos similar to Gemini? | 1 | Say you want to automate creating descriptions or tags, or ask questions about videos. Can you do that locally? | 2025-05-27T16:45:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kws5wd/is_there_a_local_llm_that_can_give_you_a/ | GrayPsyche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kws5wd | false | null | t3_1kws5wd | /r/LocalLLaMA/comments/1kws5wd/is_there_a_local_llm_that_can_give_you_a/ | false | false | self | 1 | null |
What tools or upgrades do I need to run a local AI assistant (Jarvis) more efficiently? | 1 | [removed] | 2025-05-27T17:19:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kwt16j/what_tools_or_upgrades_do_i_need_to_run_a_local/ | Gold-Management5308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwt16j | false | null | t3_1kwt16j | /r/LocalLLaMA/comments/1kwt16j/what_tools_or_upgrades_do_i_need_to_run_a_local/ | false | false | self | 1 | null |
Asus Flow Z13 best Local LLM Tests. | 0 | [https://www.youtube.com/watch?v=AcTmeGpzhBk](https://www.youtube.com/watch?v=AcTmeGpzhBk) | 2025-05-27T17:23:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kwt5hl/asus_flow_z13_best_local_llm_tests/ | Strong_Sympathy9955 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwt5hl | false | null | t3_1kwt5hl | /r/LocalLLaMA/comments/1kwt5hl/asus_flow_z13_best_local_llm_tests/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'PvGJWOLZ0UHJZ9cCD4kLq86-roAGlSyCJB6i6hL288E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Et395fzyMNJzU5HdrbkFsr_Axhs6aZiYWEZvR0Ow-Lg.jpg?width=108&crop=smart&auto=webp&s=f052a53f4bbe57b49e463da91e02a57d76f03899', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Et395fzyMNJzU5HdrbkFsr_Axhs6aZiYWEZvR0Ow-Lg.jpg?width=216&crop=smart&auto=webp&s=54af0b445ca52191d99018ae94adc48f6cc0c39c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Et395fzyMNJzU5HdrbkFsr_Axhs6aZiYWEZvR0Ow-Lg.jpg?width=320&crop=smart&auto=webp&s=46da16aade4a24b56fc1d44b6a00a56db749c2b5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Et395fzyMNJzU5HdrbkFsr_Axhs6aZiYWEZvR0Ow-Lg.jpg?auto=webp&s=6496a43b3a124004ae5fea3450032b15612e6a1f', 'width': 480}, 'variants': {}}]} |
Best Model for OCR | 1 | [removed] | 2025-05-27T17:40:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kwtl6r/best_model_for_ocr/ | pgodgodzilla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwtl6r | false | null | t3_1kwtl6r | /r/LocalLLaMA/comments/1kwtl6r/best_model_for_ocr/ | false | false | self | 1 | null |
j1-nano & j1-micro: Absurdly Tiny RMs Competitive w/ Claude Opus, GPT-4o-mini, etc. | 1 | [removed] | 2025-05-27T17:47:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kwtrf7/j1nano_j1micro_absurdly_tiny_rms_competitive_w/ | leonardtang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwtrf7 | false | null | t3_1kwtrf7 | /r/LocalLLaMA/comments/1kwtrf7/j1nano_j1micro_absurdly_tiny_rms_competitive_w/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]} |
[Research] j1-nano & j1-micro: Absurdly Tiny RMs Competitive w/ Claude Opus, GPT-4o-mini, etc. | 1 | [removed] | 2025-05-27T17:49:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kwttjy/research_j1nano_j1micro_absurdly_tiny_rms/ | leonardtang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwttjy | false | null | t3_1kwttjy | /r/LocalLLaMA/comments/1kwttjy/research_j1nano_j1micro_absurdly_tiny_rms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]} |
[Research]: j1-nano & j1-micro: Absurdly Tiny RMs Competitive w/ Claude Opus, GPT-4o-mini, etc. | 1 | [removed] | 2025-05-27T17:51:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kwtv5l/research_j1nano_j1micro_absurdly_tiny_rms/ | leonardtang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwtv5l | false | null | t3_1kwtv5l | /r/LocalLLaMA/comments/1kwtv5l/research_j1nano_j1micro_absurdly_tiny_rms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]} |
Recommendations for a local/open source todo/productivity assistant? | 1 | any popular local/open source todo productivity assistant.
I seem to always go back to pen and paper with any software tool
maybe AI helps with this? | 2025-05-27T17:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kwtwk1/recommendations_for_a_localopen_source/ | bornfree4ever | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwtwk1 | false | null | t3_1kwtwk1 | /r/LocalLLaMA/comments/1kwtwk1/recommendations_for_a_localopen_source/ | false | false | self | 1 | null |
When are we getting the Proton Mail equivalent of AI Service? | 0 | Please point me to one if already available.
For a long time, Gmail, Yahoo and Outlook were the only good mainstream (free) personal email providers. We knew Google and Microsoft mined our data for ads, and some of us immediately switched to the likes of Proton Mail when it came out or became popular.
When do you think a capable platform like ChatGPT/Claude/Gemini is coming to also offer privacy on cloud like Protonmail does? Criteria obviously would be the promise of privacy (servers based on non US/Chineese/Russian soil), with solid reliability, and on-par models capabilities rivaling the mainstream ones. Will be paid subscription for sure, and work on multiple platforms like Windows, Mac, iOS, Android.
Like the "how your own models" crowd for email, we know it's not for everyone even in AI. To get a competitive, useful output from localLLMs you need the right hardware, time and know how to build/maintain over time. | 2025-05-27T18:08:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kwuap4/when_are_we_getting_the_proton_mail_equivalent_of/ | simracerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwuap4 | false | null | t3_1kwuap4 | /r/LocalLLaMA/comments/1kwuap4/when_are_we_getting_the_proton_mail_equivalent_of/ | false | false | self | 0 | null |
😞No hate but claude-4 is disappointing | 244 | I mean, how the heck is Qwen-3 literally better than claude-4 (the Claude that used to dog-walk everyone)?
this is just disappointing 🫠 | 2025-05-27T18:10:17 | Rare-Programmer-1747 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwucpn | false | null | t3_1kwucpn | /r/LocalLLaMA/comments/1kwucpn/no_hate_but_claude4_is_disappointing/ | false | false | 244 | {'enabled': True, 'images': [{'id': 'qrlpDENbI4w2up03jLNVduozwcau1MgbEZoT9MLuxjE', 'resolutions': [{'height': 214, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?width=108&crop=smart&auto=webp&s=fe77194d16dbde424a047b1d5e3f2c3b1dcf7d4e', 'width': 108}, {'height': 428, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?width=216&crop=smart&auto=webp&s=a9c2c64eb59e051eea34723fa1a6d6703121c70d', 'width': 216}, {'height': 634, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?width=320&crop=smart&auto=webp&s=e009e3ef1aa2f55ea2c960a8d8411d4cb8daab47', 'width': 320}, {'height': 1268, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?width=640&crop=smart&auto=webp&s=d89328b58759f0c926b5258c859b6fbfcf5a5b32', 'width': 640}, {'height': 1902, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?width=960&crop=smart&auto=webp&s=7e29b618e640d10b75e729198e9d81b983d64369', 'width': 960}, {'height': 2140, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?width=1080&crop=smart&auto=webp&s=164dfac3996915508d02dbea4c983425263c77a3', 'width': 1080}], 'source': {'height': 2140, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?auto=webp&s=b33372e8417d4e31702f3e21bbe06a9c068df398', 'width': 1080}, 'variants': {}}]} |
||
Time to make all models think 🧠 – the brand-new Mixture-of-Thoughts reasoning dataset is here | 1 | [removed] | 2025-05-27T19:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kwvr80/time_to_make_all_models_think_the_brandnew/ | Thatisverytrue54321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwvr80 | false | null | t3_1kwvr80 | /r/LocalLLaMA/comments/1kwvr80/time_to_make_all_models_think_the_brandnew/ | false | false | self | 1 | null |