title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Tired of juggling llama.cpp instances? I built FlexLlama to run multiple models with one API | 1 | [removed] | 2025-06-11T07:22:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l8mdwe/tired_of_juggling_llamacpp_instances_i_built/ | yazoniak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8mdwe | false | null | t3_1l8mdwe | /r/LocalLLaMA/comments/1l8mdwe/tired_of_juggling_llamacpp_instances_i_built/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vWE4QuTyF6L-LcDKzHydTwzJFbuW3jF4nE7CkhLBSxA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?width=108&crop=smart&auto=webp&s=fe64e448119be62e1e646d066ab20fc8af80a8a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?width=216&crop=smart&auto=webp&s=f85b8f6f5378ddaac877d942d76edffe5d1f16a8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?width=320&crop=smart&auto=webp&s=f735d20e9b123de51e1aa5352c8436cccecfd6bf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?width=640&crop=smart&auto=webp&s=42942f844743e8d9046b2b8e4138a6c56dd78960', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?width=960&crop=smart&auto=webp&s=da8058359342bba6a7ef9e4680bcfd1ee63b2ac1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?width=1080&crop=smart&auto=webp&s=c5b311d075734ca4d8187da92a25c609df3c7e64', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?auto=webp&s=ed40166807d68c6b207ccabfab7d96a551ea14cc', 'width': 1200}, 'variants': {}}]} |
**OpenSloth: Multi-GPU Unsloth Training with Sequence Packing** | 1 | [removed] | 2025-06-11T07:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l8mrhf/opensloth_multigpu_unsloth_training_with_sequence/ | Spirited_Vacation785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8mrhf | false | null | t3_1l8mrhf | /r/LocalLLaMA/comments/1l8mrhf/opensloth_multigpu_unsloth_training_with_sequence/ | false | false | self | 1 | null |
OpenSloth: Multi-GPU Unsloth Training with Sequence Packing | 1 | [removed] | 2025-06-11T07:49:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l8msd0/opensloth_multigpu_unsloth_training_with_sequence/ | Spirited_Vacation785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8msd0 | false | null | t3_1l8msd0 | /r/LocalLLaMA/comments/1l8msd0/opensloth_multigpu_unsloth_training_with_sequence/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gEjkgJMaFQy2TO8DwlRHCAFsaqTHFrL5qN-kmfLOqgI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?width=108&crop=smart&auto=webp&s=d05ee090a5105f5c0bc9e4d190b62716bb435b41', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?width=216&crop=smart&auto=webp&s=c62ce71a016c8f6648d5f4a22041e5df8166e394', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?width=320&crop=smart&auto=webp&s=1f4fd5e7238b11f073aec149f1d96d31b4187302', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?width=640&crop=smart&auto=webp&s=91047cb605b5fd3ecde4ec09e119684778617514', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?width=960&crop=smart&auto=webp&s=73ee585b1a15f5f2afea7b5a2261ff520c62b9f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?width=1080&crop=smart&auto=webp&s=9f8875930fd83c535fc2754b55f5dd6ac2a678f5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?auto=webp&s=f29ef03b4d5fb1dc105dd019bbd5462d0a2656e0', 'width': 1200}, 'variants': {}}]} |
How to speed up Kokoro-TTS? | 1 | [removed] | 2025-06-11T08:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8n5iy/how_to_speed_up_kokorotts/ | fungigamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8n5iy | false | null | t3_1l8n5iy | /r/LocalLLaMA/comments/1l8n5iy/how_to_speed_up_kokorotts/ | false | false | self | 1 | null |
Llama.cpp has been ported on ShelfMC | 1 | [removed] | 2025-06-11T08:14:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l8n5zs/llamacpp_has_been_ported_on_shelfmc/ | ulianownw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8n5zs | false | null | t3_1l8n5zs | /r/LocalLLaMA/comments/1l8n5zs/llamacpp_has_been_ported_on_shelfmc/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'sbiprA2BShK4BbvOgZ9xlD3vhgrZpIzevYsl69L3KOc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/oASb0K-qA08LWUe0F1yOF6qJfKVONDyiUwPV6UJ-Wjs.jpg?width=108&crop=smart&auto=webp&s=669249b367a0a138eb48352068db7a1e63cc24d3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/oASb0K-qA08LWUe0F1yOF6qJfKVONDyiUwPV6UJ-Wjs.jpg?width=216&crop=smart&auto=webp&s=23f6e88b71b36f17507209175fc1cbd7e56c4ed8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/oASb0K-qA08LWUe0F1yOF6qJfKVONDyiUwPV6UJ-Wjs.jpg?width=320&crop=smart&auto=webp&s=1ace773594eeaf23f15126ea9395be6cbded6a36', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/oASb0K-qA08LWUe0F1yOF6qJfKVONDyiUwPV6UJ-Wjs.jpg?auto=webp&s=aeb31b60803fceaf9963b9269d3fa98653229e17', 'width': 480}, 'variants': {}}]} |
[ Removed by Reddit ] | 1 | [removed] | 2025-06-11T08:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8n8uz/removed_by_reddit/ | IkusaNakamura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8n8uz | false | null | t3_1l8n8uz | /r/LocalLLaMA/comments/1l8n8uz/removed_by_reddit/ | false | false | nsfw | 1 | null |
At least Meta is better than this | 1 | 2025-06-11T08:23:36 | chimichanga_3 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8nahq | false | null | t3_1l8nahq | /r/LocalLLaMA/comments/1l8nahq/at_least_meta_is_better_than_this/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'TlZQqUhkDVNNGws_b4gTg5k7VdCzWBvYE45PLSpRYcQ', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?width=108&crop=smart&auto=webp&s=e682a7db45c80e9c89c3fcbc6ecd69648a190764', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?width=216&crop=smart&auto=webp&s=3d358c7f95baba25a6e6f17a0326209aa165c87f', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?width=320&crop=smart&auto=webp&s=20b17b514a3ea44c90068a91177a9d12bda5df2c', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?width=640&crop=smart&auto=webp&s=712f8b9c3304859f135ead623bc55adcbbde21b7', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?width=960&crop=smart&auto=webp&s=0e6b7f6354cf020b37bcc75d0260347b52145c2b', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?width=1080&crop=smart&auto=webp&s=8b33590620996f32ec54c440e7601b6e1c3c4916', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?auto=webp&s=fb5b4ca86fcfadb038029e8d8dcc15246b7bbf7a', 'width': 1080}, 'variants': {}}]} |
mlx-lm issue with GPU | 1 | [removed] | 2025-06-11T08:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nc75/mlxlm_issue_with_gpu/ | Wooden_Living_4553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nc75 | false | null | t3_1l8nc75 | /r/LocalLLaMA/comments/1l8nc75/mlxlm_issue_with_gpu/ | false | false | self | 1 | null |
Ollama vs mlx-lm | 1 | [removed] | 2025-06-11T08:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nd48/ollama_vs_mlxlm/ | Wooden_Living_4553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nd48 | false | null | t3_1l8nd48 | /r/LocalLLaMA/comments/1l8nd48/ollama_vs_mlxlm/ | false | false | self | 1 | null |
Image captioning | 3 | Hi everyone! I am working on a project that requires detailed analysis of certain figures using an LLM to describe them. I am getting okay performance with qwen vl 2.5 30b, but only if I use very specific prompting. Since I am dealing with a variety of different kinds of figures, I would like to use different prompts depending on the type of figure.
Does anyone know of a good, fast image captioner that just describes the type of figure with one or two words? Say photograph, bar chart, diagram, etc. I can then use that to select which prompt to use on the 30b model. Bonus points if you can suggest something different to the qwen 2.5 model I am thinking of. | 2025-06-11T08:33:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nfop/image_captioning/ | 3oclockam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nfop | false | null | t3_1l8nfop | /r/LocalLLaMA/comments/1l8nfop/image_captioning/ | false | false | self | 3 | null |
Supercharge Your API Integrations for LLMs: My Take on Dynamic Setup with YAML & MCP | 1 | [removed] | 2025-06-11T08:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ngxy/supercharge_your_api_integrations_for_llms_my/ | Raga_123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ngxy | false | null | t3_1l8ngxy | /r/LocalLLaMA/comments/1l8ngxy/supercharge_your_api_integrations_for_llms_my/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DRNc5j6NRGO-QAoDG4FL_idk8cHxJfV15ie_XR9vNuI', 'resolutions': [{'height': 133, 'url': 'https://external-preview.redd.it/PsTCl9xcdJmqlv10thhIDliPmE_ZERKkug9D3ucIs2U.jpg?width=108&crop=smart&auto=webp&s=3776380b6e474bb1f8ae861e266f510003ae9686', 'width': 108}, {'height': 267, 'url': 'https://external-preview.redd.it/PsTCl9xcdJmqlv10thhIDliPmE_ZERKkug9D3ucIs2U.jpg?width=216&crop=smart&auto=webp&s=74b2b875ff52ed70c28c9a2a70330ae84975ede2', 'width': 216}, {'height': 395, 'url': 'https://external-preview.redd.it/PsTCl9xcdJmqlv10thhIDliPmE_ZERKkug9D3ucIs2U.jpg?width=320&crop=smart&auto=webp&s=3089302fc74571df053bbec1bd7311f50d6a5817', 'width': 320}, {'height': 791, 'url': 'https://external-preview.redd.it/PsTCl9xcdJmqlv10thhIDliPmE_ZERKkug9D3ucIs2U.jpg?width=640&crop=smart&auto=webp&s=f4b3bb4eb30784db5654feb272682c0661f60ec1', 'width': 640}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/PsTCl9xcdJmqlv10thhIDliPmE_ZERKkug9D3ucIs2U.jpg?auto=webp&s=cf9088ab5be7ed1605981c2f0abecf301c688f1a', 'width': 828}, 'variants': {}}]} |
Quick Ollama bench 9070XT vs 4060Ti | 0 | Ran aidatatools/ollama-benchmark/ with custom model set. On gaming PCs in our house. Thought I'd share.
9070XT - NixOS-unstable, running via docker ollama-rocm
4060Ti - Windows 10
9070XT:
* **deepseek-r1:14b**: 42.58 tokens/s
* **gemma2:9b**: 56.64 tokens/s
* **llava:13b**: 57.89 tokens/s
* **llama3.1:8b**: 75.49 tokens/s
* **mistral:7b**: 82.70 tokens/s
* **llava:7b**: 83.60 tokens/s
* **qwen2:7b**: 89.01 tokens/s
* **phi3:3.8b**: 109.43 tokens/s
4060Ti:
* **phi3:3.8b**: 94.80 tokens/s
* **mistral:7b**: 56.52 tokens/s
* **llava:7b**: 56.63 tokens/s
* **qwen2:7b**: 54.74 tokens/s
* **llama3.1:8b**: 49.42 tokens/s
* **gemma2:9b**: 40.22 tokens/s
* **llava:13b**: 32.81 tokens/s
* **deepseek-r1:14b**: 27.31 tokens/s | 2025-06-11T08:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nje0/quick_ollama_bench_9070xt_vs_4060ti/ | Mysterious_Prune415 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nje0 | false | null | t3_1l8nje0 | /r/LocalLLaMA/comments/1l8nje0/quick_ollama_bench_9070xt_vs_4060ti/ | false | false | self | 0 | null |
Why AI augmentation beats AI automation | 5 | The real power isn't in AI replacing humans - it's in the combination. Think about it like this: a drummer doesn't lose their creativity when they use a drum machine. They just get more tools to express their vision. Same thing's happening with content creation right now.
Recent data backs this up - LinkedIn reported that posts using AI assistance but maintaining human editing get 47% more engagement than pure AI content. Meanwhile, Jasper's 2024 survey found that 89% of successful content creators use AI tools, but 96% say human oversight is "critical" to their process.
I've been watching creators use AI tools, and the ones who succeed aren't the ones who just hit "generate" and publish whatever comes out. They're the ones who treat AI like a really smart intern - it can handle the heavy lifting, but the vision, the personality, the weird quirks that make content actually interesting? That's all human.
During my work on a podcast platform with AI-generated audio and AI hosts, I discovered something fascinating - listeners could detect fully synthetic content with 73% accuracy, even when they couldn't pinpoint exactly why something felt "off." But when humans wrote the scripts and just used AI for voice synthesis? Detection dropped to 31%.
The economics make sense too. Pure AI content is becoming a commodity. It's cheap, it's everywhere, and people are already getting tired of it. Content marketing platforms are reporting that pure AI articles have 65% lower engagement rates compared to human-written pieces. But human creativity enhanced by AI? That's where the value is. You get the efficiency of AI with the authenticity that only humans can provide.
I've noticed audiences are getting really good at sniffing out pure AI content. Google's latest algorithm updates have gotten 40% better at detecting and deprioritizing AI-generated content. They want the messy, imperfect, genuinely human stuff. AI should amplify that, not replace it.
The creators who'll win in the next few years aren't the ones fighting against AI or the ones relying entirely on it. They're the ones who figure out how to use it as a creative partner while keeping their unique voice front and center.
What's your take? | 2025-06-11T08:55:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nrf5/why_ai_augmentation_beats_ai_automation/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nrf5 | false | null | t3_1l8nrf5 | /r/LocalLLaMA/comments/1l8nrf5/why_ai_augmentation_beats_ai_automation/ | false | false | self | 5 | null |
is there a local singing tts? we can use today? | 1 | [removed] | 2025-06-11T09:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nx6m/is_there_a_local_singing_tts_we_can_use_today/ | Exact-Yesterday-992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nx6m | false | null | t3_1l8nx6m | /r/LocalLLaMA/comments/1l8nx6m/is_there_a_local_singing_tts_we_can_use_today/ | false | false | self | 1 | null |
Automated RAG systems which know the best way to index an arbitrary document? | 1 | [removed] | 2025-06-11T09:31:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l8oaqh/automated_rag_systems_which_know_the_best_way_to/ | anythingjust__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8oaqh | false | null | t3_1l8oaqh | /r/LocalLLaMA/comments/1l8oaqh/automated_rag_systems_which_know_the_best_way_to/ | false | false | self | 1 | null |
Altman on open weight 🤔🤔 | 200 | 2025-06-11T09:38:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l8oe8g/altman_on_open_weight/ | Mean-Neighborhood-42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8oe8g | false | null | t3_1l8oe8g | /r/LocalLLaMA/comments/1l8oe8g/altman_on_open_weight/ | false | false | 200 | {'enabled': False, 'images': [{'id': 'MKBlag_qUu9yYrp0owyTwlx7WLmGYYg_LjKIGYqQ8xg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GVhfh_dphC644v-zed5GXAiqXgODxyxxMhlNyvcKd5w.jpg?width=108&crop=smart&auto=webp&s=6da23f937afc76fa32caeb0f34e23d73d37134ed', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/GVhfh_dphC644v-zed5GXAiqXgODxyxxMhlNyvcKd5w.jpg?auto=webp&s=24644fa2f1edeb248307dceb2e5a8f8eb186f896', 'width': 200}, 'variants': {}}]} |
Looking for a great rp model for independent characters | 1 | [removed] | 2025-06-11T09:41:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ofi3/looking_for_a_great_rp_model_for_independent/ | Past-Deal5045 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ofi3 | false | null | t3_1l8ofi3 | /r/LocalLLaMA/comments/1l8ofi3/looking_for_a_great_rp_model_for_independent/ | false | false | self | 1 | null |
Looking for a great rp model for independent characters | 1 | [removed] | 2025-06-11T09:43:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ogi7/looking_for_a_great_rp_model_for_independent/ | Past-Deal5045 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ogi7 | false | null | t3_1l8ogi7 | /r/LocalLLaMA/comments/1l8ogi7/looking_for_a_great_rp_model_for_independent/ | false | false | self | 1 | null |
It should be possible to create a "dummy GPU" hardware device that does nothing but host VRAM for a real card via NVLink | 1 | [removed] | 2025-06-11T09:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ojcr/it_should_be_possible_to_create_a_dummy_gpu/ | Antique_Savings7249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ojcr | false | null | t3_1l8ojcr | /r/LocalLLaMA/comments/1l8ojcr/it_should_be_possible_to_create_a_dummy_gpu/ | false | false | self | 1 | null |
Image vector embedding | 1 | [removed] | 2025-06-11T09:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ojfc/image_vector_embedding/ | thorf_44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ojfc | false | null | t3_1l8ojfc | /r/LocalLLaMA/comments/1l8ojfc/image_vector_embedding/ | false | false | self | 1 | null |
Where is the tutorial/guide to run locally? I look at the sidebar, and it's just filter by 'tutorial | guide' flair. | 1 | [removed] | 2025-06-11T10:15:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l8oynz/where_is_the_tutorialguide_to_run_locally_i_look/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8oynz | false | null | t3_1l8oynz | /r/LocalLLaMA/comments/1l8oynz/where_is_the_tutorialguide_to_run_locally_i_look/ | false | false | self | 1 | null |
Have access to 2 x A100s - what multimodal LLM should I train that's beneficial to the community here? | 1 | [removed] | 2025-06-11T10:22:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8p364/have_access_to_2_x_a100s_what_multimodal_llm/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8p364 | false | null | t3_1l8p364 | /r/LocalLLaMA/comments/1l8p364/have_access_to_2_x_a100s_what_multimodal_llm/ | false | false | self | 1 | null |
Have access to 2 x A100s - what multimodal LLM should I train that's beneficial to the community here? | 1 | [removed] | 2025-06-11T10:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l8p4cs/have_access_to_2_x_a100s_what_multimodal_llm/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8p4cs | false | null | t3_1l8p4cs | /r/LocalLLaMA/comments/1l8p4cs/have_access_to_2_x_a100s_what_multimodal_llm/ | false | false | self | 1 | null |
Have access to 2 x A100s - wish to train something that's beneficial to the community | 1 | [removed] | 2025-06-11T10:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8p6bs/have_access_to_2_x_a100s_wish_to_train_something/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8p6bs | false | null | t3_1l8p6bs | /r/LocalLLaMA/comments/1l8p6bs/have_access_to_2_x_a100s_wish_to_train_something/ | false | false | self | 1 | null |
I finally got rid of Ollama! | 549 | About a month ago, I decided to move away from Ollama (while still using Open WebUI as frontend), and I actually did it faster and easier than I thought!
Since then, my setup has been (on both Linux and Windows):
llama.cpp or ik_llama.cpp for inference
llama-swap to load/unload/auto-unload models (I have a big config.yaml file with all the models and parameters, like for think/no_think, etc.)
Open Webui as the frontend. In its "workspace" I have all the models (although this is not needed, because with llama-swap, Open Webui will list all the models in the drop-down list, but I prefer to use it) configured with the system prompts and so on. So I just select whichever I want from the drop-down list or from the "workspace" and llama-swap loads the model (or unloads the current one and loads the new one).
No more weird location/names for the models (I now just "wget" from huggingface to whatever folder I want and, if needed, I could even use them with other engines), or other "features" from Ollama.
Big thanks to llama.cpp (as always), ik_llama.cpp, llama-swap and Open Webui! (and huggingface and r/localllama of course!) | 2025-06-11T10:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8pem0/i_finally_got_rid_of_ollama/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8pem0 | false | null | t3_1l8pem0 | /r/LocalLLaMA/comments/1l8pem0/i_finally_got_rid_of_ollama/ | false | false | self | 549 | null |
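(For readers who haven't used llama-swap: a minimal sketch of the kind of config.yaml this post describes follows. The model names, file paths, ports, and llama-server flags are made-up placeholders, and the llama-swap field names are written from memory of its README, so verify everything against the project's documentation rather than treating this as authoritative.)

```yaml
# Hypothetical llama-swap config.yaml sketch -- illustrative only.
# Each entry maps a model name (as it appears in Open WebUI's model list)
# to the llama-server command that serves it; llama-swap starts and stops
# these processes on demand and proxies requests to the active one.
models:
  "qwen2.5-32b-instruct":
    proxy: "http://127.0.0.1:9510"   # where llama-swap forwards requests
    cmd: llama-server --port 9510 -m /models/Qwen2.5-32B-Instruct-Q4_K_M.gguf -ngl 99 -c 16384
    ttl: 300                         # auto-unload after 5 minutes idle

  "gemma3-12b":
    proxy: "http://127.0.0.1:9511"
    cmd: llama-server --port 9511 -m /models/gemma-3-12b-it-Q4_K_M.gguf -ngl 99 -c 8192
    ttl: 300
```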
Which model & prompts I should use for this OCR work? | 2 | So I want to run OCR on an old Japanese book and I have run into the following problems:
1. The book is stained and some of the words are blurred.
2. The text is all written vertically and I would like the final results in normal horizontal reading order.
3. There are annotations above some characters and I would like to capture those as well.
Can someone help me tackle this issue? | 2025-06-11T11:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l8pu7y/which_model_prompts_i_should_use_for_this_ocr_work/ | lemuever17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8pu7y | false | null | t3_1l8pu7y | /r/LocalLLaMA/comments/1l8pu7y/which_model_prompts_i_should_use_for_this_ocr_work/ | false | false | self | 2 | null |
Testing Jamba 1.6 near the 256K context limit? | 1 | [removed] | 2025-06-11T11:24:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l8q4x3/testing_jamba_16_near_the_256k_context_limit/ | zennaxxarion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8q4x3 | false | null | t3_1l8q4x3 | /r/LocalLLaMA/comments/1l8q4x3/testing_jamba_16_near_the_256k_context_limit/ | false | false | self | 1 | null |
MNN TaoAvatar: run 3d avatar offline, Android app by alibaba mnn team | 124 | https://github.com/alibaba/MNN/blob/master/apps/Android/Mnn3dAvatar/README.md#version-001 | 2025-06-11T11:43:01 | https://v.redd.it/65vyq2fhca6f1 | Juude89 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8qh2a | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/65vyq2fhca6f1/DASHPlaylist.mpd?a=1752234194%2CYjViZDY4MDRkNzU0MGMyYzc2YTdkOGZmMjZmYWFmODFmMTMzMjkzOTQ2MWE2MGIzZDE3NTgzY2I2MmUxOGExZA%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/65vyq2fhca6f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/65vyq2fhca6f1/HLSPlaylist.m3u8?a=1752234194%2CMzU0ODNkZGNhZmYzODcyNTkxNjNkNWYxOWU1NjhmMTI3NWU0Y2JkNTUwNzU0YzExMjFmZjg5M2EzNmFkZDJkNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/65vyq2fhca6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 582}} | t3_1l8qh2a | /r/LocalLLaMA/comments/1l8qh2a/mnn_taoavatar_run_3d_avatar_offline_android_app/ | false | false | 124 | {'enabled': False, 'images': [{'id': 'MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E.png?width=108&crop=smart&format=pjpg&auto=webp&s=ebc62b33d86e75ae58896af589275914b232d9ca', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E.png?width=216&crop=smart&format=pjpg&auto=webp&s=15582883de9b9ba2ea5b3f6eb3d455478998064e', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E.png?width=320&crop=smart&format=pjpg&auto=webp&s=34b32e292cbf6b654becdfab60a7541662df6146', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E.png?width=640&crop=smart&format=pjpg&auto=webp&s=128037bcb7f01575eede7c7f3fa991d11c34dedd', 'width': 640}], 'source': {'height': 1656, 'url': 'https://external-preview.redd.it/MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E.png?format=pjpg&auto=webp&s=cc3fdb1c5d180487256646ee54589d01317db324', 'width': 752}, 'variants': {}}]} |
Can an AI agent analyse spreadsheets locally without exposing your data? | 1 | [removed] | 2025-06-11T11:56:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l8qq8u/can_an_ai_agent_analyse_spreadsheets_locally/ | StrictAd876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8qq8u | false | null | t3_1l8qq8u | /r/LocalLLaMA/comments/1l8qq8u/can_an_ai_agent_analyse_spreadsheets_locally/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'USf2kP3eKTBj6g-52cbURnnHi7gB4YX2-nB48uBwCnA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/TnSEM6z35Mj98X6zjgDOFMEVNvl5C3_sYeeZW-BreaM.jpg?width=108&crop=smart&auto=webp&s=76cc045b0789ecd0db0a59879378f60a85e96c20', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/TnSEM6z35Mj98X6zjgDOFMEVNvl5C3_sYeeZW-BreaM.jpg?width=216&crop=smart&auto=webp&s=dff07a3294ff952252d4a22ea56ff3074e2f2e96', 'width': 216}], 'source': {'height': 280, 'url': 'https://external-preview.redd.it/TnSEM6z35Mj98X6zjgDOFMEVNvl5C3_sYeeZW-BreaM.jpg?auto=webp&s=69e8f7c9adb73e76697bd9436160551ad00b1244', 'width': 280}, 'variants': {}}]} |
Can an AI agent analyse spreadsheets locally without exposing your data? | 1 | [removed] | 2025-06-11T12:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8qu82/can_an_ai_agent_analyse_spreadsheets_locally/ | StrictAd876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8qu82 | false | null | t3_1l8qu82 | /r/LocalLLaMA/comments/1l8qu82/can_an_ai_agent_analyse_spreadsheets_locally/ | false | false | self | 1 | null |
Can an AI agent analyse spreadsheets locally without exposing your data? | 1 | [removed] | 2025-06-11T12:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l8qxur/can_an_ai_agent_analyse_spreadsheets_locally/ | StrictAd876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8qxur | false | null | t3_1l8qxur | /r/LocalLLaMA/comments/1l8qxur/can_an_ai_agent_analyse_spreadsheets_locally/ | false | false | self | 1 | null |
I find and analize 200k Construction Jobs using LLaMA | 1 | [removed] | 2025-06-11T12:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l8rq3k/i_find_and_analize_200k_construction_jobs_using/ | Separate-Breath2267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8rq3k | false | null | t3_1l8rq3k | /r/LocalLLaMA/comments/1l8rq3k/i_find_and_analize_200k_construction_jobs_using/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=216&crop=smart&auto=webp&s=0bba062fe06cce12fc3d0c4cb2a0ea82abc7c266', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=320&crop=smart&auto=webp&s=3ad6582619e3a7c3baeb4b3bc407f87a187c2336', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=640&crop=smart&auto=webp&s=1b9a8da21d7a1b9b308c5828dbe6f6b7287068d6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=960&crop=smart&auto=webp&s=196ba9362a8c5c81bc99f396e5c4bd3401667518', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=1080&crop=smart&auto=webp&s=f79588c44be17c9eae5cf5c5ccf4c0d9f77f0734', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?auto=webp&s=fa755a2de2b11728baa2d5e5dcd88171c0e5d4be', 'width': 1200}, 'variants': {}}]} |
An app to match specs to LLM | 2 | I get a lot of questions from people irl about which models to run locally on a persons spec. Frankly, I'd love to point them to an app that makes the recommendation based on an inputted spec. Does that app exist yet or do I have to build one? (Don't want to re-invent the wheel...) | 2025-06-11T12:58:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l8s0ka/an_app_to_match_specs_to_llm/ | jrf_1973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8s0ka | false | null | t3_1l8s0ka | /r/LocalLLaMA/comments/1l8s0ka/an_app_to_match_specs_to_llm/ | false | false | self | 2 | null |
Sarvam AI (indian startup) is likely pulling of massive "download farming" in HF | 1 | [removed] | 2025-06-11T13:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l8s1p7/sarvam_ai_indian_startup_is_likely_pulling_of/ | Ortho-BenzoPhenone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8s1p7 | false | null | t3_1l8s1p7 | /r/LocalLLaMA/comments/1l8s1p7/sarvam_ai_indian_startup_is_likely_pulling_of/ | false | false | 1 | null |
llama-server vs llama python binding | 2 | I am trying to build some applications which include RAG
The llama.cpp Python binding installs and runs the CPU build instead of using a build I made
(I couldn't configure it to use my build).
Using llama-server makes sense, but I couldn't figure out how to use my own chat template or how to load the embedding model.
Any tips or resources?
| 2025-06-11T13:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l8sh4m/llamaserver_vs_llama_python_binding/ | daxxy_1125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8sh4m | false | null | t3_1l8sh4m | /r/LocalLLaMA/comments/1l8sh4m/llamaserver_vs_llama_python_binding/ | false | false | self | 2 | null |
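(A rough sketch of how the two things asked about here are usually passed to llama-server is below; the model paths are placeholders and the flag names are written from memory of recent llama.cpp builds, so check `llama-server --help` on your build before relying on them.)

```bash
# Chat model with an explicit built-in chat template:
llama-server -m /models/my-chat-model.gguf --port 8080 --chat-template chatml

# A second instance serving an embedding model for the RAG side,
# exposing an OpenAI-compatible /v1/embeddings endpoint:
llama-server -m /models/my-embedding-model.gguf --port 8081 --embedding --pooling mean
```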
LA RELACIÓN DE AYUDA EN TRABAJO SOCIAL | 1 | [removed] | 2025-06-11T13:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l8sunm/la_relación_de_ayuda_en_trabajo_social/ | PersonalCounty3622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8sunm | false | null | t3_1l8sunm | /r/LocalLLaMA/comments/1l8sunm/la_relación_de_ayuda_en_trabajo_social/ | false | false | self | 1 | null |
Recommendations for Models for Tool Usage | 5 | I’ve built a small app to experiment with mcp. I integrated about 2 dozen tools that my team uses for data processing pipelines. It works really well. The tool call success rate is probably over 95%. I built it using the OpenAI API. Ideally I’d like to host everything locally without changing my code, just the OpenAI base_url parameter to point it at my local model hosted by llama.cpp.
Are there good models that support OpenAI tool calling format? | 2025-06-11T13:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l8sv82/recommendations_for_models_for_tool_usage/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8sv82 | false | null | t3_1l8sv82 | /r/LocalLLaMA/comments/1l8sv82/recommendations_for_models_for_tool_usage/ | false | false | self | 5 | null |
Huge VRAM usage with VLLM | 1 | Hi, I'm trying to make vLLM run on my local machine (Windows 11 laptop with a 4070 with 8 GB of VRAM).
My goal is to use vision models, and people said that GGUF versions of the models were bad for vision, and I can't run non-GGUF models with Ollama, so I tried vLLM.
After a few days of trying with an old Docker repo and a local installation, I decided to try WSL2. It took me a day to make it run, but now I'm only able to run tiny models like 1B versions, and the results are slow.
When I try to load bigger models like 7B models, I just get an error about my VRAM: vLLM is trying to allocate a certain amount that isn't available (even though it is).
The error: "ValueError: Free memory on device (6.89/8.0 GiB) on startup is less than desired GPU memory utilization (0.9, 7.2 GiB). Decrease GPU memory utilization or reduce GPU memory used by other processes."
Also, this value never changes even if the actual VRAM usage changes.
I tried with --gpu-memory-utilization 0.80 in the launch command, but it doesn't make any difference (even if I put 0.30).
The goal is to experiment on my laptop and then build / rent a bigger machine to put this in production, so the wsl thing is not permanent.
If you have any clue about what's going on, it would be very helpful!
Thank you ! | 2025-06-11T13:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8t8n8/huge_vram_usage_with_vllm/ | Wintlink- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8t8n8 | false | null | t3_1l8t8n8 | /r/LocalLLaMA/comments/1l8t8n8/huge_vram_usage_with_vllm/ | false | false | self | 1 | null |
Anybody know what agent UI framwork this is (it has a terminal emulator for letting the LLM execute commands) | 0 | More or less title.
Anybody know an existing agent UI framework where one could let an LLM (local or via API) execute commands in a shell guided by the prompt on the right? | 2025-06-11T13:54:48 | Birder | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8tap1 | false | null | t3_1l8tap1 | /r/LocalLLaMA/comments/1l8tap1/anybody_know_what_agent_ui_framwork_this_is_it/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ygcCCuiqmUmeSWfdJzqqvGRmandTvGm1mS61b3VowKQ', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?width=108&crop=smart&auto=webp&s=197b313d9261fee1f989da9f05e3a85879f97384', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?width=216&crop=smart&auto=webp&s=09ad29547b968f811f1dccc7579c2af7996f9a33', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?width=320&crop=smart&auto=webp&s=f97ca0ae35a699d4ab98001fdceccc9d4417a544', 'width': 320}, {'height': 332, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?width=640&crop=smart&auto=webp&s=e8842509887ca0ca835b8f709dd7918a2b3db40b', 'width': 640}, {'height': 498, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?width=960&crop=smart&auto=webp&s=3def826d39d0ba8e9e815733704edc19753fec95', 'width': 960}, {'height': 560, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?width=1080&crop=smart&auto=webp&s=1f9fea69c2c02792bd4e020d27926d12823ee671', 'width': 1080}], 'source': {'height': 2656, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?auto=webp&s=f2bba3403c720ed3f2925fde541476b9691e8ae1', 'width': 5120}, 'variants': {}}]} |
NeuralCodecs Adds Speech: Dia TTS in C# .NET | 16 | Includes full Dia support with voice cloning and custom dynamic speed correction to solve Dia's speed-up issues on longer prompts.
Performance-wise, we miss out on the benefits of python's torch.compile, but still achieve slightly better tokens/s than the non-compiled Python in my setup (Windows/RTX 3090). Would love to hear what speeds you're getting if you give it a try! | 2025-06-11T13:57:03 | https://github.com/DillionLowry/NeuralCodecs | Knehm | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l8tcle | false | null | t3_1l8tcle | /r/LocalLLaMA/comments/1l8tcle/neuralcodecs_adds_speech_dia_tts_in_c_net/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?width=108&crop=smart&auto=webp&s=ccc6f2cc5d0e7d83263ead97b2d5f5e6021ca7e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?width=216&crop=smart&auto=webp&s=bf5cae59670cc82d77f981c0822f19cc4872626c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?width=320&crop=smart&auto=webp&s=b8fe50596577f50f5ed654857b217486694e5e34', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?width=640&crop=smart&auto=webp&s=82b444918afedfda9d3c2e8102b261baff22f204', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?width=960&crop=smart&auto=webp&s=f089a623199e59c8e886e26cac141e43e94c4311', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?width=1080&crop=smart&auto=webp&s=eb731cd2c5a9d64cc21ba756cd264e8e78f105d7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?auto=webp&s=990429dada0b504810846c0555bf6094b2a37f65', 'width': 1200}, 'variants': {}}]} |
AI Deep Research Explained | 39 | Probably a lot of you are using deep research on ChatGPT, Perplexity, or Grok to get better and more comprehensive answers to your questions, or data you want to investigate.
But did you ever stop to think how it actually works behind the scenes?
In my latest blog post, I break down the system-level mechanics behind this new generation of research-capable AI:
* How these models understand what you're really asking
* How they decide when and how to search the web or rely on internal knowledge
* The ReAct loop that lets them reason step by step
* How they craft and execute smart queries
* How they verify facts by cross-checking multiple sources
* What makes retrieval-augmented generation (RAG) so powerful
* And why these systems are more up-to-date, transparent, and accurate
It's a shift from "look it up" to "figure it out."
**Read the full (not too long) blog post (free to read, no paywall). The link is in the first comment.** | 2025-06-11T14:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l8u7wy/ai_deep_research_explained/ | Nir777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8u7wy | false | null | t3_1l8u7wy | /r/LocalLLaMA/comments/1l8u7wy/ai_deep_research_explained/ | false | false | self | 39 | null |
Meta releases V-JEPA 2, the first world model trained on video | 280 | 2025-06-11T14:48:35 | https://huggingface.co/collections/facebook/v-jepa-2-6841bad8413014e185b497a6 | juanviera23 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l8umf2 | false | null | t3_1l8umf2 | /r/LocalLLaMA/comments/1l8umf2/meta_releases_vjepa_2_the_first_world_model/ | false | false | 280 | {'enabled': False, 'images': [{'id': '7pBvBFe8bd_NQplAFgrEpaIQ63MNMr2sBmAuWlM0Xes', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?width=108&crop=smart&auto=webp&s=2a03f4b14f6d80535555c4c68506482525daf741', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?width=216&crop=smart&auto=webp&s=b811bea5ba52e45e279cd7e9e2e685335a862c31', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?width=320&crop=smart&auto=webp&s=b5b208bc0a0c87c9657731099427f13a4ddf9292', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?width=640&crop=smart&auto=webp&s=5011249fce4c8240f4cce7fe71a892b4c008780b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?width=960&crop=smart&auto=webp&s=b391c410afe39ee8ed3dc2874629b8c89ef79c8e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?width=1080&crop=smart&auto=webp&s=7a33587f6523f4221c2cb669931cedc7b442790c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?auto=webp&s=8df979748c4460bbdd909a35951a24730c77a738', 'width': 1200}, 'variants': {}}]} |
From "LangGraph is trash" to "pip install langgraph": A Stockholm Syndrome Story | 1 | [removed] | 2025-06-11T15:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l8uzrk/from_langgraph_is_trash_to_pip_install_langgraph/ | FailingUpAllDay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8uzrk | false | null | t3_1l8uzrk | /r/LocalLLaMA/comments/1l8uzrk/from_langgraph_is_trash_to_pip_install_langgraph/ | false | false | self | 1 | null |
M4 Max 128GB v NVIDIA DGX Spark? | 1 | [removed] | 2025-06-11T15:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l8vhdl/m4_max_128gb_v_nvidia_dgx_spark/ | OptimisticSwitcheroo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8vhdl | false | null | t3_1l8vhdl | /r/LocalLLaMA/comments/1l8vhdl/m4_max_128gb_v_nvidia_dgx_spark/ | false | false | self | 1 | null |
What is the current state of llama.cpp rpc-server? | 13 | For context, I serendipitously got an extra x99 motherboard, and I have a couple spare GPUs available to use with it.
I'm curious, given the current state of llama.cpp rpc, if it's worth buying the CPU, cooler, etc. in order to run this board as an RPC node in llama.cpp?
I tried looking for information online, but couldn't find anything up to date.
Basically, does llama.cpp rpc-server currently work well? Is it worth setting up so that I can run larger models? What's been everyone's experience running it? | 2025-06-11T15:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l8vziy/what_is_the_current_state_of_llamacpp_rpcserver/ | kevin_1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8vziy | false | null | t3_1l8vziy | /r/LocalLLaMA/comments/1l8vziy/what_is_the_current_state_of_llamacpp_rpcserver/ | false | false | self | 13 | null |
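(For context on what such a setup involves, a rough sketch of wiring an RPC worker into llama.cpp follows; the build option, binaries, addresses, and paths are written from memory of the rpc example in the llama.cpp repo and should be checked against its README before use.)

```bash
# On the spare X99 box: build llama.cpp with RPC support and start the worker.
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the main machine: point llama-server (or llama-cli) at the worker(s).
./build/bin/llama-server -m /models/big-model.gguf -ngl 99 --rpc 192.168.1.50:50052
```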
Looking to hire someone to teach me LLM finetuning / LoRa training | 1 | [removed] | 2025-06-11T15:43:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l8w00g/looking_to_hire_someone_to_teach_me_llm/ | contentedpoverty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8w00g | false | null | t3_1l8w00g | /r/LocalLLaMA/comments/1l8w00g/looking_to_hire_someone_to_teach_me_llm/ | false | false | self | 1 | null |
Which model should I use on my macbook m4? | 0 | I recently got a MacBook Air M4 and upgraded the RAM to 32 GB
I am not an expert, nor do I have a technical background in web development, but I am quite curious and was wondering which model you think I can best run for code generation for web app development? Thanks! | 2025-06-11T15:43:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l8w04v/which_model_should_i_use_on_my_macbook_m4/ | Sergioramos0447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8w04v | false | null | t3_1l8w04v | /r/LocalLLaMA/comments/1l8w04v/which_model_should_i_use_on_my_macbook_m4/ | false | false | self | 0 | null |
Perception Language Models (PLM): 1B, 3B, and 8B VLMs with code and data | 29 | Very cool resource if you're working in the VLM space!
* Models: [https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498](https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498)
* Code: [https://github.com/facebookresearch/perception_models](https://github.com/facebookresearch/perception_models)
* Data: [https://ai.meta.com/datasets/plm-data/](https://ai.meta.com/datasets/plm-data/)
* Paper: [https://arxiv.org/pdf/2504.13180](https://arxiv.org/pdf/2504.13180)
* Demo: [Video](https://video-ord5-3.xx.fbcdn.net/o1/v/t2/f2/m69/AQPbJ1oVsuJarMlUQhsExvmBr4_9_A-n3u3EuumpTflJrPPTmzcrWVtW0DabtOFq5_w144E4V0aCiUf97GG6en6k.mp4?strext=1&_nc_cat=107&_nc_sid=5e9851&_nc_ht=video-ord5-3.xx.fbcdn.net&_nc_ohc=esn7TCrGmRIQ7kNvwExOm9L&efg=eyJ2ZW5jb2RlX3RhZyI6Inhwdl9wcm9ncmVzc2l2ZS5GQUNFQk9PSy4uQzMuMTkyMC5kYXNoX2gyNjQtYmFzaWMtZ2VuMl8xMDgwcCIsInhwdl9hc3NldF9pZCI6MTMyMTU4Nzg0MjI2MDg2MCwidmlfdXNlY2FzZV9pZCI6MTA4MjUsImR1cmF0aW9uX3MiOjcxLCJ1cmxnZW5fc291cmNlIjoid3d3In0%3D&ccb=17-1&vs=7218d2367d39d1b6&_nc_vs=HBksFQIYOnBhc3N0aHJvdWdoX2V2ZXJzdG9yZS9HQkM1U3gwQnpydmFKZWNCQUNNNlJxU0hTZE1ZYnY0R0FBQUYVAALIARIAFQIYOnBhc3N0aHJvdWdoX2V2ZXJzdG9yZS9HTkRFTlIwVE9WeFZpN3NEQUQ1a18yQlBpc0Y0YnY0R0FBQUYVAgLIARIAKAAYABsCiAd1c2Vfb2lsATEScHJvZ3Jlc3NpdmVfcmVjaXBlATEVAAAm-P2E3sT-2AQVAigCQzMsF0BR2p--dsi0GBpkYXNoX2gyNjQtYmFzaWMtZ2VuMl8xMDgwcBEAdQJlkqkBAA&_nc_zt=28&oh=00_AfOfCNMWdIQf6B_RJS-7eHk5Dxgb8dkTcDYOU-dn3UhMpQ&oe=684F673B) | 2025-06-11T16:07:25 | https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498 | entsnack | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l8wm8v | false | null | t3_1l8wm8v | /r/LocalLLaMA/comments/1l8wm8v/perception_language_models_plm_1b_3b_and_8b_vlms/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'n4hFtoyE8VyE50topwUGDQMo2PXOFF4-oMSKgCANc4w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=108&crop=smart&auto=webp&s=9271d8bf5bcdc029f80d41a794b775e1c953f42c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=216&crop=smart&auto=webp&s=b89e5adfd37dda29227d3c91678cdf175c4a3f94', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=320&crop=smart&auto=webp&s=892524b442bfa2f6effb0f70b469b5c05bb9c6ea', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=640&crop=smart&auto=webp&s=9f6da37c184270b83484344bd81db1d541c7e4b1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=960&crop=smart&auto=webp&s=50475d7427a497225c1b66d0f4fb6e9382e9ae9c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=1080&crop=smart&auto=webp&s=2a2545fd29f8f6613c6485eaa9953660d72ebe5b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?auto=webp&s=31c2b4cdab2d59edef19cdf1a1cea38f48c0043c', 'width': 1200}, 'variants': {}}]} |
Would you use an open source AI Voice Assistant Keychain, configurable to use local or frontier models? | 0 | Would you use an AI assistant keychain with press-to-talk to an LLM (with wifi / cellular integration)?
You can control what tools the AI has available, select your LLM, and use the companion app to manage transcripts.
Siri, Alexa, and Google are closed and difficult to customize. They own your data and you have no direct control over what they do with it.
| 2025-06-11T16:15:49 | zuluana | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8wtxf | false | null | t3_1l8wtxf | /r/LocalLLaMA/comments/1l8wtxf/would_you_use_an_open_source_ai_voice_assistant/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'X27XrlpIdix0M0HpUJjwnVYuNzQK7By9FXZM6aPp63w', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/7q0w3276pb6f1.jpeg?width=108&crop=smart&auto=webp&s=2949cce14b0e3f2a2ca2e1ef1fcdd33db932b7c8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/7q0w3276pb6f1.jpeg?width=216&crop=smart&auto=webp&s=7b834c78e05c3d4eb8d84563b848f5c985bfb8b8', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/7q0w3276pb6f1.jpeg?width=320&crop=smart&auto=webp&s=294482defe27fdf63ce2eb58f75addab8b03d8c1', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/7q0w3276pb6f1.jpeg?width=640&crop=smart&auto=webp&s=f861a94c21e64eda6dec850bbb79852a7d253ef8', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/7q0w3276pb6f1.jpeg?width=960&crop=smart&auto=webp&s=739f2c83ffa6e2451d051c24bec30b010c2012f2', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/7q0w3276pb6f1.jpeg?auto=webp&s=aeba884dc35af40018fb8e7b36c421cff4bf1e78', 'width': 1024}, 'variants': {}}]} |
[Tool] rvn-convert: OSS Rust-based SafeTensors to GGUF v3 converter (single-shard, fast, no Python) | 33 | Afternoon,
I built a tool out of frustration after losing hours to failed model conversions. (Seriously, launching a Python tool just to watch it fail after 159 tensors and 3 hours.)
`rvn-convert` is a small Rust utility that memory-maps a HuggingFace `safetensors` file and writes a clean, llama.cpp-compatible `.gguf` file. No intermediate RAM spikes, no Python overhead, no disk juggling.
Features (v0.1.0):
* Single-shard support (for now)
* Upcasts BF16 → F32
* Embeds `tokenizer.json`
* Adds BOS/EOS/PAD IDs
* GGUF v3 output (tested with LLaMA 3.2)
* No multi-shard support (yet)
* No quantization
* No GGUF v2 / tokenizer model variants
I use this daily in my pipeline; just wanted to share in case it helps others.
GitHub: [https://github.com/rvnllm/rvn-convert](https://github.com/rvnllm/rvn-convert)
Open to feedback or bug reports—this is early but working well so far.
Cheers! | 2025-06-11T16:15:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l8wty9/tool_rvnconvert_oss_rustbased_safetensors_to_gguf/ | rvnllm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8wty9 | false | null | t3_1l8wty9 | /r/LocalLLaMA/comments/1l8wty9/tool_rvnconvert_oss_rustbased_safetensors_to_gguf/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?width=108&crop=smart&auto=webp&s=f08de7056ab47428d83fd53f9b65bd01e6a091ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?width=216&crop=smart&auto=webp&s=8abdd1db5aeb30598ac71208b6acfd0efaf7ef20', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?width=320&crop=smart&auto=webp&s=65ece08e19f5c0480236bb3cd9891c6cb165dfee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?width=640&crop=smart&auto=webp&s=266d63b1f16e2102300576d49ef32accf07589a4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?width=960&crop=smart&auto=webp&s=5451f6f3d5eb7c7a3afeca2c99a935112e62ec90', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?width=1080&crop=smart&auto=webp&s=30ecc88dc2d401de43bae7ffe095bf75c7598fd7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?auto=webp&s=63d9ac4250b2dd7ba1079a0c3bd2d1d3ee6dad1a', 'width': 1200}, 'variants': {}}]} |
Any easy local configuration that can find typos and gramatical/punctuaction errors in a pdf? | 2 | Hi,
Basically I would like to set up an AI that can look for things like "better better", "making make", "evoution", etc. in a PDF and annotate them, so that I can fix them!
I thought about setting up a RAG with llama3.2, but I'm not sure that's the best idea.
(I could also supply the AI with .tex files that generate the PDF, however I don't want the AI changing things other than typos and some of them are really opinionated). Also which local model would you recommend? I don't have a lot of resources so anything bigger than 7b would be an issue
any advice? | 2025-06-11T16:17:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l8wvvr/any_easy_local_configuration_that_can_find_typos/ | Super-Government6796 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8wvvr | false | null | t3_1l8wvvr | /r/LocalLLaMA/comments/1l8wvvr/any_easy_local_configuration_that_can_find_typos/ | false | false | self | 2 | null |
Qwen 2.5 3B VL performance dropped post fine tuning. | 12 | Beginner here - please help me out.
I was asked to fine tune a Qwen 2.5 3B VL for the following task:
Given an image taken during an online test, check if the candidate is cheating or not. A candidate is considered to be cheating if there’s a mobile phone, headphones, crowd around, etc.
I was able to fine-tune Qwen using Gemini-annotated images: ~500 images per label (I am treating this as a multi-label classification problem, and an LLM might not be the best way to go about it). Using SFT, I use a <think> token for reasoning as the expected suffix (thinking_mode is disabled) and then a JSON output for the conclusion. I had pretty decent success with the base Qwen model, but with the fine-tuned one the output quality has dropped.
A few next steps I am thinking of is:
1. In the trainer module, the training loss is most likely a token-to-token match since the task is causal generation. Changing that to something with a classification head that gives out logits on the JSON part itself might improve training accuracy.
2. An RL setup, since the dataset is small.
Thoughts? | 2025-06-11T16:22:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l8x080/qwen_25_3b_vl_performance_dropped_post_fine_tuning/ | chitrabhat4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8x080 | false | null | t3_1l8x080 | /r/LocalLLaMA/comments/1l8x080/qwen_25_3b_vl_performance_dropped_post_fine_tuning/ | false | false | self | 12 | null |
MCP overview (15min) | 1 | generated from the docs on [modelcontextprotocol.io](http://modelcontextprotocol.io) | 2025-06-11T16:41:34 | https://v.redd.it/tnogxadjtb6f1 | josh-r-meyer | /r/LocalLLaMA/comments/1l8xgxi/mcp_overview_15min/ | 1970-01-01T00:00:00 | 0 | {} | 1l8xgxi | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tnogxadjtb6f1/DASHPlaylist.mpd?a=1752381702%2CMTA3NTZjMGFhYTNkMDJhMjI0M2Y2NzFiNzk3YzA5ZjFiZmQzMTUwOTBhMDg3MzMwMmZiMTBjZDMwMzU0ZTQxYQ%3D%3D&v=1&f=sd', 'duration': 835, 'fallback_url': 'https://v.redd.it/tnogxadjtb6f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/tnogxadjtb6f1/HLSPlaylist.m3u8?a=1752381702%2CZGEzYzc4ZGUyMWNjZjA5MDRkMDg4NzFmZTVlNGY4ZDBmMGY3NjMxZmY2MWFkOGIzY2E0MDMwM2Y4ZTI1MzgwYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tnogxadjtb6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l8xgxi | /r/LocalLLaMA/comments/1l8xgxi/mcp_overview_15min/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?width=108&crop=smart&format=pjpg&auto=webp&s=620092ddc95d5eae3e10a3fbb17d59fadd58130a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?width=216&crop=smart&format=pjpg&auto=webp&s=e029a6090b2caf060e9f7ff8b355b7e7b26417cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?width=320&crop=smart&format=pjpg&auto=webp&s=5ed8a51c61f9cab0060a7d86f23e58e5a1f14000', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?width=640&crop=smart&format=pjpg&auto=webp&s=8ff5b3fd47035e7f016b802a2ec79d5ceb28192a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?width=960&crop=smart&format=pjpg&auto=webp&s=aec94d0aef0c9eb18a0df2162583321d31e4bf9f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ad621ec70400403d39d93d45bde050e297e134d7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?format=pjpg&auto=webp&s=4674ee6e6fd1217c8ed87764895568612232f59f', 'width': 1920}, 'variants': {}}]} |
Looking for a lightweight front-end like llama-server | 1 | I really like llama-server, but it lacks some features like continuing generation, editing the model's message, etc. It would also be better if it stored conversations in JSON files. I don't want something like open-webui; it's overkill and bloated for me. | 2025-06-11T16:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l8xv7i/looking_for_a_lightweight_frontend_like/ | flatminded | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8xv7i | false | null | t3_1l8xv7i | /r/LocalLLaMA/comments/1l8xv7i/looking_for_a_lightweight_frontend_like/ | false | false | self | 1 | null |
Anyone built a reasoning-tuned Gemma 3 variant (GRPO or MoT)? | 1 | [removed] | 2025-06-11T17:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l8y3u9/anyone_built_a_reasoningtuned_gemma_3_variant/ | Thatisverytrue54321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8y3u9 | false | null | t3_1l8y3u9 | /r/LocalLLaMA/comments/1l8y3u9/anyone_built_a_reasoningtuned_gemma_3_variant/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?width=108&crop=smart&auto=webp&s=c60b70012e6e71b439a033b77c4844e3340c9a13', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?width=216&crop=smart&auto=webp&s=11028adeddecdeb589a7387cb58b21fd8543a01e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?width=320&crop=smart&auto=webp&s=5c4f500d1c37eac0b07df89df487106efa58d780', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?width=640&crop=smart&auto=webp&s=bfcd0ebb0e2fe57a5e85a2cfe8333472d1f348a9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?width=960&crop=smart&auto=webp&s=5efe30c112ea747cc2bd4d4ce7339184d790fda2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?width=1080&crop=smart&auto=webp&s=d2e11c2b1d117b1b5078cb7ca9ab74080c001736', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?auto=webp&s=57b9dab9d6acdc734083ba841a9ebc58f178118e', 'width': 1200}, 'variants': {}}]} |
Made a medical document anonymizer using ollama! | 1 | [removed] | 2025-06-11T17:08:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l8y68t/made_a_medical_document_anonymizer_using_ollama/ | ramu9703 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8y68t | false | null | t3_1l8y68t | /r/LocalLLaMA/comments/1l8y68t/made_a_medical_document_anonymizer_using_ollama/ | false | false | self | 1 | null |
n8n vs claudecode | 1 | [removed] | 2025-06-11T17:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l8yfvz/n8n_vs_claudecode/ | pr0scient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8yfvz | false | null | t3_1l8yfvz | /r/LocalLLaMA/comments/1l8yfvz/n8n_vs_claudecode/ | false | false | self | 1 | null |
I built a local-first AI video editor | 1 | [removed] | 2025-06-11T17:29:22 | https://youtu.be/0YBcYGmYV4c | ExtremeKangaroo5437 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1l8ypjw | false | {'oembed': {'author_name': 'Gowrav Vishwakarma', 'author_url': 'https://www.youtube.com/@gowravvishwakarma', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/0YBcYGmYV4c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="I Built an AI Video Editor (It Runs on Your PC, With Local Models)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/0YBcYGmYV4c/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'I Built an AI Video Editor (It Runs on Your PC, With Local Models)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1l8ypjw | /r/LocalLLaMA/comments/1l8ypjw/i_built_a_localfirst_ai_video_editor/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eLLGv4TPae01UCRCPaHe0jr4Vo8Sn_L68frbEmgceVA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eLLGv4TPae01UCRCPaHe0jr4Vo8Sn_L68frbEmgceVA.jpeg?width=108&crop=smart&auto=webp&s=145122f49289487526d65dc5f4bb55e5ba9c3ef0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/eLLGv4TPae01UCRCPaHe0jr4Vo8Sn_L68frbEmgceVA.jpeg?width=216&crop=smart&auto=webp&s=1ab496ad043e81d76558b93c557e07b45fdc1731', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/eLLGv4TPae01UCRCPaHe0jr4Vo8Sn_L68frbEmgceVA.jpeg?width=320&crop=smart&auto=webp&s=0ddb3ab1f0c1ceee7872930bcbbb17bd7cbfc205', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/eLLGv4TPae01UCRCPaHe0jr4Vo8Sn_L68frbEmgceVA.jpeg?auto=webp&s=aefb48f62e9b8e336cac36fcb04673e0ce906927', 'width': 480}, 'variants': {}}]} |
|
deepseek v3 0324 vs deepseek r1 0528 for agentic coding? (also agent-zero troubleshoot) | 1 | [removed] | 2025-06-11T17:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l8z5ma/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | sans_the_comicc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8z5ma | false | null | t3_1l8z5ma | /r/LocalLLaMA/comments/1l8z5ma/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | false | false | self | 1 | null |
deepseek v3 0324 vs deepseek r1 0528 for agentic coding? (also agent-zero troubleshoot) | 1 | [removed] | 2025-06-11T17:48:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l8z6xc/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | sans_the_comicc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8z6xc | false | null | t3_1l8z6xc | /r/LocalLLaMA/comments/1l8z6xc/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | false | false | self | 1 | null |
PSA: GPU Host Interface board power cables can melt, too. | 0 | My 3090s fell off the bus and stopped working after 3 days of sustained load this morning.
Investigating revealed they burned right through this (cheap) SATA power splitter powering my SFF-8654 interface boards.
I thought those lines were just used for the 3.3v PCIe bucks (which is true for my SFF-8611 boards!) and that they wouldn't actually try to pull GPU power over a SATA line... but here we are, so clearly I'm wrong. That means we need to supply 70W, or 6A @ 12V, over this poor SATA connector.
Measuring the failed cable with a specialized meter reveals a resistance of over 200 mOhm, which is way too high: at 6A that's a droop of 1.2V and only leaves about 11V going to the load, which is both under the ATX spec and has the connectors dissipating 6-7W, far enough out of spec that they melted.
The replacement splitter has a resistance of 40 mOhm, which is still kinda high but at least not wildly out of spec, and it keeps these SATA connectors dissipating only around 1W.
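For anyone who wants to sanity-check those numbers, here is a rough back-of-the-envelope script (mine, not from the actual measurements; it assumes the full ~6A GPU load returns through this single splitter contact):

# V = I*R for the droop, P = I^2*R for the heat in the connector
current_a = 6.0  # ~70W at 12V
for label, resistance_ohm in (("failed splitter", 0.200), ("replacement", 0.040)):
    droop_v = current_a * resistance_ohm
    heat_w = current_a ** 2 * resistance_ohm
    print(f"{label}: {droop_v:.2f}V droop, {heat_w:.1f}W dissipated in the connector")
# prints roughly 1.20V / 7.2W for the failed splitter and 0.24V / 1.4W for the replacement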
If you'd like to check your own power cables, the meter I am using can be found by searching the usual places for the magic phrase "Handheld DC milliohmmeter YMC01 low resistance four wire Kelvin measurement", for checking power cables you want the higher precision available 0-2ohm. | 2025-06-11T17:49:30 | https://www.reddit.com/gallery/1l8z89y | kryptkpr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l8z89y | false | null | t3_1l8z89y | /r/LocalLLaMA/comments/1l8z89y/psa_gpu_host_interface_board_power_cables_can/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?width=108&crop=smart&auto=webp&s=52557638e0ec626d081bd88928d4cc6866af6369', 'width': 108}, {'height': 286, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?width=216&crop=smart&auto=webp&s=567fe2c9942784e5740689a4044ce4e7465df01c', 'width': 216}, {'height': 425, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?width=320&crop=smart&auto=webp&s=594200f01e2e04001d2672200e50d61889ba21e1', 'width': 320}, {'height': 850, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?width=640&crop=smart&auto=webp&s=20ead8359625f9b95124989c296cf034b89387de', 'width': 640}, {'height': 1275, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?width=960&crop=smart&auto=webp&s=a6016f4a6f0c3ef427be3531a5971846d45e9617', 'width': 960}, {'height': 1434, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?width=1080&crop=smart&auto=webp&s=e14229d9a067ee7a775d00eeb1d2dedf4c8c717e', 'width': 1080}], 'source': {'height': 4080, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?auto=webp&s=1a30feb5c8811ea1ce1ec89c7ae1ba29931edafa', 'width': 3072}, 'variants': {}}]} |
|
Disney and Universal sue AI image company Midjourney for unlicensed use of Star Wars, The Simpsons and more | 401 | This is big! When Disney gets involved, shit is about to hit the fan.
If they come after Midjourney, then expect other AI labs trained on similar training data to be hit soon.
What do you think? | 2025-06-11T18:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8zssy/disney_and_universal_sue_ai_image_company/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8zssy | false | null | t3_1l8zssy | /r/LocalLLaMA/comments/1l8zssy/disney_and_universal_sue_ai_image_company/ | false | false | self | 401 | null |
More Intel Arc Pro B60 Photos | 1 | [removed] | 2025-06-11T18:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l900iy/more_intel_arc_pro_b60_photos/ | Dr_Karminski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l900iy | false | null | t3_1l900iy | /r/LocalLLaMA/comments/1l900iy/more_intel_arc_pro_b60_photos/ | false | false | 1 | null |
|
Can we RL/GRPO a language model to hack its own brain by rewarding for specific measurements inside the transformer architecture during inference? | 4 | Hey folks, very simple concept. Basically you are doing reinforcement learning, so that means you might have a batch of 16 rollout per step. On each of these rollout, you are tracking measurements over the states of computation inside the LLM. For example the variance of its hidden states or activations during inference at each token. Then at the end you reward the model based on if it performed the task, but you also reward it based on what you think might be more efficient "states of mind" within the LLM. If you tie a reward based on the variance, then whichever reasoning/self-prompting strategy resulted in more variance within the hidden states will get amplified, and lead to more variance in hidden states in the next iteration, which continues to amplify every time. | 2025-06-11T18:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l90m7g/can_we_rlgrpo_a_language_model_to_hack_its_own/ | ryunuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l90m7g | false | null | t3_1l90m7g | /r/LocalLLaMA/comments/1l90m7g/can_we_rlgrpo_a_language_model_to_hack_its_own/ | false | false | self | 4 | null |
3x3090 Build | 1 | [removed] | 2025-06-11T19:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l91afk/3x3090_build/ | bitrecs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l91afk | false | null | t3_1l91afk | /r/LocalLLaMA/comments/1l91afk/3x3090_build/ | false | false | self | 1 | null |
What AI industry events are you attending? | 0 | Hi everyone!
We're curious to know what types of AI-focused events you all enjoy attending or would love to see more of in the future. Are there any you're more interested in such as:
* Tech conferences
* Hackathons
* Meetups
* Workshops
* Online webinars
* Something else?
If you have any tips on how to get the most out of events you've previously attended, please share them below! | 2025-06-11T19:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l924e6/what_ai_industry_events_are_you_attending/ | MetaforDevelopers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l924e6 | false | null | t3_1l924e6 | /r/LocalLLaMA/comments/1l924e6/what_ai_industry_events_are_you_attending/ | false | false | self | 0 | null |
Are we hobbyists lagging behind? | 41 | It almost feels like every local project is a variation of another project or an implementation of a project from the big orgs, i.e, notebook LLM, deepsearch, coding agents, etc.
Felt like a year or two ago, hobbyists were also helping to seriously push the envelope. How do we get back to relevancy and being impactful? | 2025-06-11T19:45:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l926uy/are_we_hobbyists_lagging_behind/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l926uy | false | null | t3_1l926uy | /r/LocalLLaMA/comments/1l926uy/are_we_hobbyists_lagging_behind/ | false | false | self | 41 | null |
How to decide on a model? | 1 | i’m really new to this! i’m making my first local model now and am trying to pick a model that works for me. i’ve seen a few posts here trying to decode all the various things in model names, but it seems like the general consensus is that there isn’t much rhyme or reason to it. Is there a repository somewhere of all the models out there, along with specs? Something like params, hardware specs required, etc?
for context i’m just running this on my work laptop, so hardware is going to be my biggest hold up in this process. i’ll get more advanced later down the line, but for now im wanting to learn :) | 2025-06-11T19:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l92jl9/how_to_decide_on_a_model/ | Loud-Bake-2740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92jl9 | false | null | t3_1l92jl9 | /r/LocalLLaMA/comments/1l92jl9/how_to_decide_on_a_model/ | false | false | self | 1 | null |
NSFW Chatbot Translation Nightmares: Need Error-Free Non-English Models! | 1 | [removed] | 2025-06-11T20:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l92lgb/nsfw_chatbot_translation_nightmares_need/ | No_Kale_9828 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92lgb | false | null | t3_1l92lgb | /r/LocalLLaMA/comments/1l92lgb/nsfw_chatbot_translation_nightmares_need/ | false | false | nsfw | 1 | null |
GPU optimization for llama 3.1 8b | 2 | Hi, I am new to this AI/ML field. I am trying to use Llama 3.1 8B for entity recognition from bank transactions. The model has to process at least 2000 transactions. What is the best way to get full utilization of the GPU? We have a powerful GPU for production. Currently I am sending multiple requests to the model using the Ollama server option. | 2025-06-11T20:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l92py6/gpu_optimization_for_llama_31_8b/ | nimmalachaitanya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92py6 | false | null | t3_1l92py6 | /r/LocalLLaMA/comments/1l92py6/gpu_optimization_for_llama_31_8b/ | false | false | self | 2 | null
As some people asked me to share some details, here is how I got to llama.cpp, llama-swap and Open Webui to fully replace Ollama. | 47 | Sorry to make another post about this, but as some people asked me for more details and the reply was getting lengthy, I decided to write another post. (I
TL;DR: This is for local models only. As I wrote in the other post: I use llama.cpp (and/or ik_llama.cpp), llama-swap, Open Webui (in my case) and wget to download the models. I have the same benefits as with Ollama, with all the extra configuration that llama.cpp provides.
Note that I'm NOT saying it works for everyone, as there were many things in Ollama that I didn't use, but for me it is exactly the same (convenience) with way more options! (and probably faster). I really do not need Ollama anymore.
Disclaimer: this is in NO way the best or most optimized way. Actually, it's the opposite. But it works for me and my extreme laziness. That's why I flaired it as "other" and not "tutorial".
- llama.cpp (the doc also might help to build ik_llama.cpp):
https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md
I started with binaries, where I downloaded the two files (CUDA 12.4 in my case) and unpacked them in a single directory, so I could get used to it (without too much hassle) and see how I felt about it, and then I built it (that's how I do it now, especially on Linux). Same with ik_llama.cpp for some MoE models.
Binaries:
https://github.com/ggml-org/llama.cpp/releases
- ik_llama.cpp:
https://github.com/ikawrakow/ik_llama.cpp
and fork with binaries:
https://github.com/Thireus/ik_llama.cpp/releases
I use it for ubergarm models and I might get a bit more speed in some MoE models.
- wget: yeah, I know, but it works great for me... I just cd into the folder where I keep all the models, and then:
wget -rc https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF/resolve/main/Qwen3-235B-A22B-mix-IQ3_K-00002-of-00003.gguf
- llama-swap:
https://github.com/mostlygeek/llama-swap
I started by building it, but there are also binaries (which I used when I couldn't build it on another system), and then, once I had a very basic config.yaml file, I just opened a terminal and started it. The config.yaml file is the one that has the commands (llama-server or whatever) with paths, parameters, etc. It also has a GUI that lists all models and whether they are loaded or not. And once I found the "ttl" setting, as in:
"ttl: <seconds> "
that will unload the model after that time, then that was it. It was the only thing that I was missing...
- Open Webui:
https://github.com/open-webui/open-webui
For the frontend, I already had Open Webui (which I really like), so switching from the "Ollama API" to the "OpenAI API" and selecting the port, that was it. Open Webui will see all the models listed in llama-swap's config.yaml file.
Now when I want to test something, I just start it first with llama.cpp, make sure all settings work, and then add it to llama-swap (config.yaml).
Once in Open Webui, I just select whatever model and that's it. Llama-swap will take care of loading it, and if I want to load another model (like trying the same chat with a different model and so on), I just select it in the Open Webui drop-down menu and llama-swap will unload the current one and load the new one. Pretty much like Ollama, except I know the settings will be the ones I set (config.yaml has the full commands and parameters, just like when running it with llama.cpp directly, exactly the same except for the ${PORT} variable).
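If you want to double-check what Open Webui is going to see before wiring it up, a quick way (assuming llama-swap is listening on port 10001, as in the command further down) is to ask the OpenAI-compatible endpoint for its model list, for example with a small Python snippet:

import json, urllib.request

# llama-swap exposes an OpenAI-compatible /v1/models that lists every model from config.yaml
with urllib.request.urlopen("http://localhost:10001/v1/models") as resp:
    for model in json.load(resp).get("data", []):
        print(model["id"])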
Some examples:
(note that my config.yaml file sucks... but it works for me), and I'm only showing a few models, but I have about 40 configured, including same model but think/no_think (that have different parameters), etc:
Excerpt from my config.yaml:
models:
"qwen2.5-vl-7b-q8-ud-32k":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF/Qwen2.5-VL-7B-Instruct-UD-Q8_K_XL.gguf --mmproj ../models/huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF/mmproj-BF16.gguf -c 32768 -n 32768 --prio 2 --threads 5 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 -ngl 99 --n-predict -1 --no-mmap -fa
# unload model after 5 seconds
ttl: 5
"qwen3-8b-iq2-ud-96k-think":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/Qwen3-8B-128K-GGUF/Qwen3-8B-128K-UD-IQ2_XXS.gguf -c 98304 -n 98304 --prio 2 --threads 5 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 -ngl 99 -fa
# unload model after 5 seconds
ttl: 5
"qwen3-8b-iq2-ud-96k-nothink":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/Qwen3-8B-128K-GGUF/Qwen3-8B-128K-UD-IQ2_XXS.gguf -c 98304 -n 98304 --prio 2 --threads 5 --temp 0.7 --top-k 20 --top-p 0.8 --min-p 0.0 -ngl 99 -fa
# unload model after 5 seconds
ttl: 5
"qwen3-235b-a22b-q2-ud-16k-think":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf -ot ".ffn_.*_exps.=CPU" -c 16384 -n 16384 --prio 2 -t 4 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 -ngl 99 -fa
# unload model after 30 seconds
ttl: 30
"qwen3-235b-a22b-q2-ud-16k-nothink":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf -ot ".ffn_.*_exps.=CPU" -c 16384 -n 16384 --prio 2 -t 4 --temp 0.7 --top-k 20 --top-p 0.8 --min-p 0.0 -ngl 99 -fa
# unload model after 30 seconds
ttl: 30
"gemma-3-12b-q5-ud-24k":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/gemma-3-12b-it-GGUF/gemma-3-12b-it-UD-Q5_K_XL.gguf --mmproj ../models/huggingface.co/unsloth/gemma-3-12b-it-GGUF/mmproj-F32.gguf -c 24576 -n 24576 --prio 2 -t 4 --temp 1 --top-k 64 --top-p 0.95 --min-p 0.0 -ngl 99 -fa --repeat-penalty 1.0
# unload model after 5 seconds
ttl: 5
"gemma-3-12b-q6-ud-8k":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/gemma-3-12b-it-GGUF/gemma-3-12b-it-UD-Q6_K_XL.gguf --mmproj ../models/huggingface.co/unsloth/gemma-3-12b-it-GGUF/mmproj-BF16.gguf -c 8192 -n 8192 --prio 2 -t 4 --temp 1 --top-k 64 --top-p 0.95 --min-p 0.0 -ngl 99 -fa --repeat-penalty 1.0
# unload model after 5 seconds
ttl: 5
"GLM-Z1-9b-0414-q8-ud-30k":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/GLM-Z1-9B-0414-GGUF/GLM-Z1-9B-0414-UD-Q8_K_XL.gguf -c 30000 -n 30000 --threads 5 --temp 0.6 --top-k 40 --top-p 0.95 -ngl 99 -fa
# unload model after 30 seconds
ttl: 30
"GLM-4-9b-0414-q6-ud-30k":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/GLM-4-9B-0414-GGUF/GLM-4-9B-0414-UD-Q6_K_XL.gguf -c 30000 -n 30000 --threads 5 --temp 0.7 --top-k 40 --top-p 0.95 -ngl 99 -fa
# unload model after 30 seconds
ttl: 30
groups:
"default":
swap: true
exclusive: true
members:
- "qwen2.5-vl-7b-q8-ud-32k"
- "qwen3-8b-iq2-96k-think"
- "qwen3-8b-iq2-96k-nothink"
- "qwen3-235b-a22b-q2-ud-16k-think"
- "qwen3-235b-a22b-q2-ud-16k-nothink"
- "gemma-3-12b-q5-ud-24k"
- "gemma-3-12b-q6-ud-8k"
- "GLM-Z1-9b-0414-q8-ud-30k"
- "GLM-4-9b-0414-q6-ud-30k"
# Optional: Set health check timeout and log level
#healthCheckTimeout: 60
healthCheckTimeout: 600
logLevel: info
(healthCheckTimeout default is 60, but for the biggest MoE models, I need more)
The "cmd" are the same that I can run directly with llama-server, just need to replace the --port variable with the port number and that's it.-
Then, in my case, I open a terminal in the llama-swap folder and:
./llama-swap --config config.yaml --listen :10001;
Again, this is ugly and not optimized at all, but it works great for me and my laziness.
Also, it will not work that great for everyone, as I guess Ollama has features that I never used (nor need), so I have no idea about them.
And last thing, as a test you can just:
- download llama.cpp binaries
- unpack the two files in a single folder
- run it (adapt it with the location of your folders):
./llama.cpp/llama-server.exe --port 10001 -m ../models/huggingface.co/unsloth/Qwen3-8B-128K-GGUF/Qwen3-8B-128K-UD-IQ2_XXS.gguf -c 98304 -n 98304 --prio 2 --threads 5 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 -ngl 99 -fa
and then go to llama.cpp webui:
http://127.0.0.1:10001
chat with it.
Try it with llama-swap:
- stop llama.cpp if it's running
- download llama-swap binary
- create/edit the config.yaml:
models:
"qwen3-8b-iq2-ud-96k-think":
proxy: "http://localhost:${PORT}"
cmd: |
../llama.cpp/build/bin/Release/llama-server.exe --port ${PORT} -m ../models/huggingface.co/unsloth/Qwen3-8B-128K-GGUF/Qwen3-8B-128K-UD-IQ2_XXS.gguf -c 98304 -n 98304 --prio 2 --threads 5 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 -ngl 99 -fa
# unload model after 5 seconds
ttl: 5
groups:
"default":
swap: true
exclusive: true
members:
- "qwen3-8b-iq2-96k-think"
# Optional: Set health check timeout and log level
#healthCheckTimeout: 60
healthCheckTimeout: 600
logLevel: info
- open a terminal in that folder and run something like:
./llama-swap --config config.yaml --listen :10001;
- configure any webui you have or go to:
http://localhost:10001/upstream
there you can click on the model you have configured in the config.yaml file and that will load the model and open the llama.cpp webui
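And if you prefer to test from a script instead of the browser, something like this (a rough sketch; it assumes llama-swap is on port 10001 and the model name matches one of the entries in config.yaml) will make llama-swap load the model and answer through the OpenAI-compatible chat endpoint:

import json, urllib.request

payload = {
    "model": "qwen3-8b-iq2-ud-96k-think",  # must match a model name from config.yaml
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:10001/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])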
I hope it helps some one. | 2025-06-11T20:12:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l92vr0/as_some_people_asked_me_to_share_some_details/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92vr0 | false | null | t3_1l92vr0 | /r/LocalLLaMA/comments/1l92vr0/as_some_people_asked_me_to_share_some_details/ | false | false | self | 47 | null |
Open Source LangSmith alternative with LangGraph visualization. | 1 | [removed] | 2025-06-11T20:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l93d4h/open_source_langsmith_alternative_with_langgraph/ | Upstairs-Spell7521 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l93d4h | false | null | t3_1l93d4h | /r/LocalLLaMA/comments/1l93d4h/open_source_langsmith_alternative_with_langgraph/ | false | false | 1 | null |
|
LiteRT-LM - (An early version of) A C++ library to efficiently run Gemma-3N across various platform | 34 | 2025-06-11T20:49:26 | https://github.com/google-ai-edge/LiteRT-LM | cpldcpu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l93ry3 | false | null | t3_1l93ry3 | /r/LocalLLaMA/comments/1l93ry3/litertlm_an_early_version_of_a_c_library_to/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'whY1sTyeUUN2XlBpEOhXnhmIZo32bTeJQwAwjoq_qkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?width=108&crop=smart&auto=webp&s=b391038ab72c5c9198c134d039c079f8b90e94a7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?width=216&crop=smart&auto=webp&s=69dda261368cecdc8b41ac6786a6acda94777bc0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?width=320&crop=smart&auto=webp&s=d66bb22934dc69ffe14b5fa7b3fd39858208ca83', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?width=640&crop=smart&auto=webp&s=0b3264b87600a5991d3d91a3c6ddb8ea5e0e3a35', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?width=960&crop=smart&auto=webp&s=5f22fdd34f141c3a4de6a631d4f667364e3ef974', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?width=1080&crop=smart&auto=webp&s=e64cb6c01ab1728c641a0d54070724cc6d4a7d26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?auto=webp&s=58e6b43c3f3d793deeed4a0346e01bbf7f395e3c', 'width': 1200}, 'variants': {}}]} |
||
DayWise – A Minimal Daily Task Tracker You Can Fully Customize | 1 | [removed] | 2025-06-11T21:20:10 | https://github.com/AhmedOsamaMath/daywise | AhmedOsamaMath | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l94jfx | false | null | t3_1l94jfx | /r/LocalLLaMA/comments/1l94jfx/daywise_a_minimal_daily_task_tracker_you_can/ | false | false | 1 | {'enabled': False, 'images': [{'id': '4-nV4zbLwfnh4eHHDjNUWXmCXam-kV8prcVul_k5peA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?width=108&crop=smart&auto=webp&s=5f5653b4f0bcd06b74f551c7d4f84d076c00b838', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?width=216&crop=smart&auto=webp&s=007ca4f6721e6544367649be312fa8e907cbb41b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?width=320&crop=smart&auto=webp&s=889b2bcf181762490c0b73c063cb5f80237b62c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?width=640&crop=smart&auto=webp&s=d5bb62b56cd850bfc394da1d5e2fa20334cc2414', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?width=960&crop=smart&auto=webp&s=cdc674f5ecb60c0050cd702989ec36e080261a4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?width=1080&crop=smart&auto=webp&s=b36feea47beb813840789e704ec6075bac25e52b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?auto=webp&s=1c1c44751fd6e9e5666e831f464b854d19574cf1', 'width': 1200}, 'variants': {}}]} |
|
Accessing ios26 local LLM via React Native | 0 | I’m downloading iOS 26 tonight! I’m not an Xcode or Swift guy. What do you guys think about soon having a native React module I can install to allow React Native to access and play with the LLM in my Expo React Native apps?
I’m super stoked! Particularly to test it out to detect objects in photos.
| 2025-06-11T21:23:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l94mca/accessing_ios26_local_llm_via_react_native/ | Puzzleheaded-Fly4322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l94mca | false | null | t3_1l94mca | /r/LocalLLaMA/comments/1l94mca/accessing_ios26_local_llm_via_react_native/ | false | false | self | 0 | null |
Best site for inferencing medgemma 27B? | 10 | I know it's locallama: I tried the 4B model on lmstudio and got scared that a 5GB file is a better doctor than I will ever be, so now I want to try the 27B model to feel even worse. My poor 3060 with 6 GB VRAM will never handle it, and I did not find it on aistudio nor on openrouter. I tried with Vertex AI, but it's a pain in the a\*\* to set up, so I wonder if there are alternatives (chat interface or API) that are easier to try.
*If you are curious about my experience with the model: the 4-bit answered most of my questions correctly when asked in English (questions like "what's the most common congenital cardiopathy in people with trisomy 21?"), but failed when asked in Italian, hallucinating new diseases. The 8-bit quant answered correctly in Italian as well, but both failed at telling me anything about a rare disease I'm studying (MADD), not even what its acronym stands for.* | 2025-06-11T21:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l957nz/best_site_for_inferencing_medgemma_27b/ | sebastianmicu24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l957nz | false | null | t3_1l957nz | /r/LocalLLaMA/comments/1l957nz/best_site_for_inferencing_medgemma_27b/ | false | false | self | 10 | null
Local LLM Bible | 1 | [removed] | 2025-06-11T22:01:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l95jc0/local_llm_bible/ | fromqjvoel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l95jc0 | false | null | t3_1l95jc0 | /r/LocalLLaMA/comments/1l95jc0/local_llm_bible/ | false | false | self | 1 | null |
Advice on running 70-200B LLM Local | 1 | [removed] | 2025-06-11T22:06:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l95oi1/advice_on_running_70200b_llm_local/ | Web3Vortex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l95oi1 | false | null | t3_1l95oi1 | /r/LocalLLaMA/comments/1l95oi1/advice_on_running_70200b_llm_local/ | false | false | self | 1 | null |
Magistral: Transparent, Multilingual Reasoning for Global Applications | LLM Radar | 1 | [removed] | 2025-06-11T22:16:10 | https://open-llm-radar.com/news/magistral-transparent-multilingual-reasoning-for-global-applications | Humble_String6885 | open-llm-radar.com | 1970-01-01T00:00:00 | 0 | {} | 1l95wkr | false | null | t3_1l95wkr | /r/LocalLLaMA/comments/1l95wkr/magistral_transparent_multilingual_reasoning_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'N2WElaYWnGyyW42lixQwMmRokSDdFtxCk-VcVh40SfI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LGK0M7OllwxmpTJEYFc9b8qWNnmAW7liTxiA1mJ3PBE.jpg?width=108&crop=smart&auto=webp&s=ebd50bebb463190be020448e995b3b173dd4a35a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LGK0M7OllwxmpTJEYFc9b8qWNnmAW7liTxiA1mJ3PBE.jpg?width=216&crop=smart&auto=webp&s=0450291c2034ef86f46cf1a463fdfcc48569e5e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LGK0M7OllwxmpTJEYFc9b8qWNnmAW7liTxiA1mJ3PBE.jpg?width=320&crop=smart&auto=webp&s=1df1bf48c470ed6701ee0366343cd82b28499453', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LGK0M7OllwxmpTJEYFc9b8qWNnmAW7liTxiA1mJ3PBE.jpg?width=640&crop=smart&auto=webp&s=aed85146f613d7d967af7082601c189de97c266b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LGK0M7OllwxmpTJEYFc9b8qWNnmAW7liTxiA1mJ3PBE.jpg?width=960&crop=smart&auto=webp&s=264cfea5305af0b08de68429f039d67b36af9918', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/LGK0M7OllwxmpTJEYFc9b8qWNnmAW7liTxiA1mJ3PBE.jpg?auto=webp&s=d58133b8b4e0ae6bfa8b746e7614179092d780bb', 'width': 1024}, 'variants': {}}]} |
|
Chatterbox - open-source SOTA TTS by resemble.ai | 56 | https://github.com/resemble-ai/chatterbox | 2025-06-11T22:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l96ag1/chatterbox_opensource_sota_tts_by_resembleai/ | Otis43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l96ag1 | false | null | t3_1l96ag1 | /r/LocalLLaMA/comments/1l96ag1/chatterbox_opensource_sota_tts_by_resembleai/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?width=108&crop=smart&auto=webp&s=b836baa00f41475485167c4a531cd19ca6b36e52', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?width=216&crop=smart&auto=webp&s=a2bdedec7345a858467569ab4b5b7dd1e58bc57e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?width=320&crop=smart&auto=webp&s=27a410d943ccccf8fc9ec8d28d2a54d1a4e534af', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?width=640&crop=smart&auto=webp&s=8861ef4c133524caec4cdf8da897a79930319be6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?width=960&crop=smart&auto=webp&s=98a6407dea216cea04ad198f21bea8cd8e56d929', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?width=1080&crop=smart&auto=webp&s=b64a15181b90548f5919fa020bafa665ee4c261d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?auto=webp&s=7c641872d1afaa9be0d623a1588715aa7fbbf060', 'width': 1200}, 'variants': {}}]} |
Open WebUI MCP? | 6 | Has anyone had success using “MCP” with Open WebUI? I’m currently serving Llama 3.1 8B Instruct via vLLM, and the tool calling and subsequent utilization has been abysmal. Most of the blogs I see utilizing MCP seems to be using these frontier models, and I have to believe it’s possible locally. There’s always the chance that I need a different (or bigger) model.
If possible, I would prefer solutions that utilize vLLM and Open WebUI. | 2025-06-11T22:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l96aku/open_webui_mcp/ | memorial_mike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l96aku | false | null | t3_1l96aku | /r/LocalLLaMA/comments/1l96aku/open_webui_mcp/ | false | false | self | 6 | null |
Why doesn't Apple invest in Mistral? | 0 | We saw the Microsoft/OpenAI and Amazon/Anthropic partnership. Why doesn't Apple do the same with Mistral? What is preventing it?
| 2025-06-11T22:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l96aqf/why_doesnt_apple_invest_in_mistral/ | Objective_Lab_3182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l96aqf | false | null | t3_1l96aqf | /r/LocalLLaMA/comments/1l96aqf/why_doesnt_apple_invest_in_mistral/ | false | false | self | 0 | null |
OpenAI performs KYC to use the latest o3-pro via API | 97 | This afternoon I cobbled together a test-script to mess around with o3-pro. Looked nice, so nice that I came back this evening to give it another go. The OpenAI sdk throws an error in the terminal, prompting me to "Your organization must be verified to stream this model."
Alright, I go to the OpenAI platform and lo and behold, a full-blown KYC process kicks off, with ID scanning, face scanning, all that shite. Damn, has this gone far. Really hope DeepSeek delivers another blow with R2 to put an end to this. | 2025-06-11T23:24:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l97fst/openai_performs_kyc_to_use_the_latest_o3pro_via/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l97fst | false | null | t3_1l97fst | /r/LocalLLaMA/comments/1l97fst/openai_performs_kyc_to_use_the_latest_o3pro_via/ | false | false | self | 97 | null
Has anyone attempted to use k40 12gb GPU's they are quite cheap | 2 | I see old K40 GPUs going for around $34. I know they consume a lot of power, but are they compatible with anything LLM-related without requiring a lot of tinkering to get them to work at all? It's Kepler, so very old, but $34 is cheap enough to make me want to try and experiment with it. | 2025-06-11T23:28:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l97jdb/has_anyone_attempted_to_use_k40_12gb_gpus_they/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l97jdb | false | null | t3_1l97jdb | /r/LocalLLaMA/comments/1l97jdb/has_anyone_attempted_to_use_k40_12gb_gpus_they/ | false | false | self | 2 | null
Open Source agentic tool/framework to automate codebase workflows | 13 | Hi everyone, I'm looking for some open source agentic tool/framework with autonomous agents to automate workflows on my repositories. I tried Aider but it requires way too much human intervention, even just to automate simple tasks, it seems not to be designed for that purpose. I'm also trying OpenHands, it looks good but I don't know if it's the best alternative for my use cases (or maybe someone who knows how to use it better can give me some advice, maybe I'm using it wrong). I am looking for something that really allows me to automate specific workflows on repositories (follow guidelines and rules, accessibility, make large scale changes etc). Thanks in advance. | 2025-06-11T23:46:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l97xrq/open_source_agentic_toolframework_to_automate/ | Soft-Salamander7514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l97xrq | false | null | t3_1l97xrq | /r/LocalLLaMA/comments/1l97xrq/open_source_agentic_toolframework_to_automate/ | false | false | self | 13 | null |
Best Practices in RL for Reasoning-Capable LLMs: Insights from Mistral’s Magistral Report | 6 | Magistral combines PPO-Clip, REINFORCE++-style advantage normalization, and DAPO tricks like Dynamic Sampling into a solid RLHF recipe for reasoning LLMs:
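Roughly, the core of that recipe is a clipped policy-gradient loss over group-normalized advantages. Here is a minimal sketch (my own paraphrase, not code from the report; the clip range, the 1e-6 epsilon and the per-group normalization are assumptions based on the GRPO/DAPO-style tricks mentioned above):

import torch

def clipped_policy_loss(logp_new, logp_old, rewards, group_size, eps_low=0.2, eps_high=0.28):
    # group-relative advantage: normalize rewards within each group of rollouts
    r = rewards.view(-1, group_size)
    adv = (r - r.mean(dim=1, keepdim=True)) / (r.std(dim=1, keepdim=True) + 1e-6)
    adv = adv.view(-1, 1)  # broadcast the per-rollout advantage over tokens

    ratio = torch.exp(logp_new - logp_old)  # per-token importance ratio, shape (rollouts, tokens)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high) * adv
    return -torch.min(unclipped, clipped).mean()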
[Blog: Best Practices in RL for Reasoning-Capable LLMs: Insights from Mistral’s Magistral Report](https://hijkzzz.notion.site/Best-Practices-in-RL-for-Reasoning-Capable-LLMs-Insights-from-Mistral-s-Magistral-Report-210d9a33ecc980a187d5c4cf09807271) | 2025-06-12T00:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l98j75/best_practices_in_rl_for_reasoningcapable_llms/ | seventh_day123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l98j75 | false | null | t3_1l98j75 | /r/LocalLLaMA/comments/1l98j75/best_practices_in_rl_for_reasoningcapable_llms/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?width=108&crop=smart&auto=webp&s=3c1c626e6c45dbf41f96426728c1cd04e46f123f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?width=216&crop=smart&auto=webp&s=0afa5de01003d1ac1ed1430ca911f8fcff3bb523', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?width=320&crop=smart&auto=webp&s=f9e6d7dd5f7a47048465cf175e068587a820ebd7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?width=640&crop=smart&auto=webp&s=86657edaa3af53ebbd03bef6575cb7742dea9a6c', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?width=960&crop=smart&auto=webp&s=27874c2eca6dfe7613b0462b8987f0de16747353', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?width=1080&crop=smart&auto=webp&s=b6545e9faea5607eb72921797eb4f0f55d113ad8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/QjPei5cbWgy-AGcB-QGKkai4tr5l3qsKDZCI4X8BzFQ.png?auto=webp&s=36cd0656364ab0c8588dac6b90b657842f12eb3a', 'width': 1200}, 'variants': {}}]} |
Privacy implications of sending data to OpenRouter | 32 | For those of you developing applications with LLMs: do you really send your data to a local LLM hosted through OpenRouter? What are the pros and cons of doing that over sending your data to OpenAI/Azure? I'm confused about the practice of taking a local model and then accessing it through a third-party API, it negates many of the benefits of using a local model in the first place. | 2025-06-12T00:18:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l98lly/privacy_implications_of_sending_data_to_openrouter/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l98lly | false | null | t3_1l98lly | /r/LocalLLaMA/comments/1l98lly/privacy_implications_of_sending_data_to_openrouter/ | false | false | self | 32 | null |
Mistral-Nemotron? | 59 | Looks like Nvidia is hosting a new model but I can't find any information about it on Mistral's website?
https://docs.api.nvidia.com/nim/reference/mistralai-mistral-nemotron
https://build.nvidia.com/mistralai/mistral-nemotron/modelcard | 2025-06-12T01:13:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l99pih/mistralnemotron/ | mj3815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l99pih | false | null | t3_1l99pih | /r/LocalLLaMA/comments/1l99pih/mistralnemotron/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': '7I5kzl90Xp5FTA4J8SQiSeq3iT4zO8cTALslZpELywk', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/JW4NXGKKI2VaiiqWrBNazWHEa229xXnGv_NL6p7ZjUs.jpg?width=108&crop=smart&auto=webp&s=f7510e699fecae74ca9659e1a8475aa0252dd5b6', 'width': 108}, {'height': 75, 'url': 'https://external-preview.redd.it/JW4NXGKKI2VaiiqWrBNazWHEa229xXnGv_NL6p7ZjUs.jpg?width=216&crop=smart&auto=webp&s=ce4b2fed887e87f9ed109ffc0daf5d621a0db0fe', 'width': 216}], 'source': {'height': 80, 'url': 'https://external-preview.redd.it/JW4NXGKKI2VaiiqWrBNazWHEa229xXnGv_NL6p7ZjUs.jpg?auto=webp&s=6f7e07ced20934559d1b4cce01ddeebbda15478a', 'width': 230}, 'variants': {}}]} |
Where can I find cloud platforms with NVIDIA B200 or better GPUs than H200? | 1 | [removed] | 2025-06-12T01:37:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l9a71i/where_can_i_find_cloud_platforms_with_nvidia_b200/ | Outrageous_Fix_8522 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9a71i | false | null | t3_1l9a71i | /r/LocalLLaMA/comments/1l9a71i/where_can_i_find_cloud_platforms_with_nvidia_b200/ | false | false | self | 1 | null |
Enable AI Agents to join and interact in your meetings | 36 | Hey guys,
we've been working on a project called joinly for the last few weeks. After many late nights and lots of energy drinks, we just open-sourced it. The idea is that you can make any browser-based video conference accessible to your AI agents and interact with them in real time. Think of it as a connector layer that brings the functionality of your AI agents into your meetings, essentially allowing you to build your own custom meeting assistant. Transcription, function calling, etc. all happen locally, respecting your privacy.
We made a quick video to show how it works. It's still in the early stages, so expect it to be a bit buggy. However, we think it's very promising!
We'd love to hear your feedback or ideas on what kind of agentic powers you'd enjoy in your meetings. 👉 [https://github.com/joinly-ai/joinly](https://github.com/joinly-ai/joinly) | 2025-06-12T02:14:43 | https://v.redd.it/pdsgwnsune6f1 | Square-Test-515 | /r/LocalLLaMA/comments/1l9ayep/enable_ai_agents_to_join_and_interact_in_your/ | 1970-01-01T00:00:00 | 0 | {} | 1l9ayep | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pdsgwnsune6f1/DASHPlaylist.mpd?a=1752416092%2CODgzYzcyMmI2Y2FjOTE2YTFlMjE4YzhhMDcwMTQ3MmViN2M2OWQzMmY4NDQ3MzZiZjkwOTRmNzg2M2U5N2RmMQ%3D%3D&v=1&f=sd', 'duration': 72, 'fallback_url': 'https://v.redd.it/pdsgwnsune6f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/pdsgwnsune6f1/HLSPlaylist.m3u8?a=1752416092%2CMDViZWRkZTQ1Njk0ZDkwZjNmM2Y3YjRmZWM1OWNlNTZhYjc5N2E4YWM3ZDAxYzdlMjUxN2YzMjRlNWNhZTk1OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pdsgwnsune6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1662}} | t3_1l9ayep | /r/LocalLLaMA/comments/1l9ayep/enable_ai_agents_to_join_and_interact_in_your/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?width=108&crop=smart&format=pjpg&auto=webp&s=d768ca4311fddb7ac424218fcbadf60888d3e071', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?width=216&crop=smart&format=pjpg&auto=webp&s=debca54154655f0df1d7e26e8a4eccc1b1c6046d', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?width=320&crop=smart&format=pjpg&auto=webp&s=3fc58671ea6ff45dbe4deedab3532f3525f48d61', 'width': 320}, {'height': 416, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?width=640&crop=smart&format=pjpg&auto=webp&s=c953ba2017da32f4008d3a6e5320b6f32420c0e0', 'width': 640}, {'height': 624, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?width=960&crop=smart&format=pjpg&auto=webp&s=61208627c74638c8964f90fffb7a78e845c321de', 'width': 960}, {'height': 702, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=59a09e0eeb3f64f448a9412c16139d7733428817', 'width': 1080}], 'source': {'height': 1664, 'url': 'https://external-preview.redd.it/Nzdwb2lwc3VuZTZmMStdk3R7hZIRW-iC3N5YGOPCKOzNDbFNT3u3Wwxsw2PP.png?format=pjpg&auto=webp&s=9adb96fb514541da849dce3b486b200c8f336f62', 'width': 2560}, 'variants': {}}]} |
|
Local organic rig | 55 | local organic ai rig | 2025-06-12T02:17:01 | Both-Indication5062 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l9b04q | false | null | t3_1l9b04q | /r/LocalLLaMA/comments/1l9b04q/local_organic_rig/ | false | false | 55 | {'enabled': True, 'images': [{'id': 'b-sEz_TQLI79mIy16Sphy3FszNe1CYzoUD5oQ49pN-Q', 'resolutions': [{'height': 128, 'url': 'https://preview.redd.it/78c6uej5oe6f1.jpeg?width=108&crop=smart&auto=webp&s=716f184cfbaf30a2261b435bb50f4080c842b2dd', 'width': 108}, {'height': 257, 'url': 'https://preview.redd.it/78c6uej5oe6f1.jpeg?width=216&crop=smart&auto=webp&s=fe6202f1c370859c76381cb86ff345d137fb96b3', 'width': 216}, {'height': 381, 'url': 'https://preview.redd.it/78c6uej5oe6f1.jpeg?width=320&crop=smart&auto=webp&s=df89a07458254ec2f7343c5d6857ed5586d612f0', 'width': 320}, {'height': 762, 'url': 'https://preview.redd.it/78c6uej5oe6f1.jpeg?width=640&crop=smart&auto=webp&s=a1c646a11de430e9f56e2ed6e1e19a1e1cd01f56', 'width': 640}], 'source': {'height': 1062, 'url': 'https://preview.redd.it/78c6uej5oe6f1.jpeg?auto=webp&s=ceb5193aede6c8f68ad4a9cb29b6d13eaa2db9db', 'width': 891}, 'variants': {}}]} |
[2506.06105] Text-to-LoRA: Instant Transformer Adaption | 51 | 2025-06-12T02:47:41 | https://arxiv.org/abs/2506.06105 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1l9blur | false | null | t3_1l9blur | /r/LocalLLaMA/comments/1l9blur/250606105_texttolora_instant_transformer_adaption/ | false | false | default | 51 | null |
|
Ming-Omni: A Unified Multimodal Model for Perception and Generation | 2 | [removed] | 2025-06-12T03:22:52 | https://github.com/inclusionAI/Ming/tree/main | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l9c9wa | false | null | t3_1l9c9wa | /r/LocalLLaMA/comments/1l9c9wa/mingomni_a_unified_multimodal_model_for/ | false | false | default | 2 | null |
Mistral.rs v0.6.0 now has full built-in MCP Client support! | 104 | Hey all! Just shipped what I think is a game-changer for local LLM workflows: MCP (Model Context Protocol) client support in [mistral.rs](https://github.com/EricLBuehler/mistral.rs/) ([https://github.com/EricLBuehler/mistral.rs](https://github.com/EricLBuehler/mistral.rs))! It is built-in and closely integrated, which makes the process of developing MCP-powered apps easy and fast.
You can get [mistralrs via PyPi](https://github.com/EricLBuehler/mistral.rs/blob/master/mistralrs-pyo3/_README.md#installation-from-pypi), [Docker Containers](https://github.com/EricLBuehler/mistral.rs/pkgs/container/mistral.rs), or with [a local build](https://github.com/EricLBuehler/mistral.rs?tab=readme-ov-file#installation-and-build).
**What does this mean?**
Your models can now automatically connect to external tools and services - file systems, web search, databases, APIs, you name it.
No more manual tool calling setup, no more custom integration code.
Just configure once and your models gain superpowers.
We support all the transport interfaces:
* **Process**: Local tools (filesystem, databases, and more)
* **Streamable HTTP and SSE**: REST APIs, cloud services - Works with *any* HTTP MCP server
* **WebSocket**: Real-time streaming tools
**The best part?** ***It just works.*** Tools are discovered automatically at startup, and support for multiple servers, authentication handling, and timeouts is designed to make the experience easy.
I've been testing this extensively and it's incredibly smooth. The Python API feels natural, HTTP server integration is seamless, and the automatic tool discovery means no more maintaining tool registries.
**Using the MCP support in Python:**
https://preview.redd.it/2c2v74bt0f6f1.png?width=1274&format=png&auto=webp&s=bd180e59597f04103b7af5acc03b0983a4d41c04
**Use the HTTP server in just 2 steps:**
**1) Create mcp-config.json**
{
"servers": [
{
"name": "Filesystem Tools",
"source": {
"type": "Process",
"command": "npx",
"args": [
"@modelcontextprotocol/server-filesystem",
"."
]
}
}
],
"auto_register_tools": true
}
**2) Start server:**
`mistralrs-server --mcp-config mcp-config.json --port 1234 run -m Qwen/Qwen3-4B`
**You can just use the normal OpenAI API - tools work automatically!**
curl -X POST http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "mistral.rs",
"messages": [
{
"role": "user",
"content": "List files and create hello.txt"
}
]
}'
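The same request can also be issued from Python with any OpenAI-compatible client. A minimal sketch, assuming the server from step 2 is listening on port 1234 and the standard `openai` package is installed (the API key is a placeholder; the local server doesn't require one):

    # Minimal sketch: talk to the mistral.rs HTTP server through the standard OpenAI-compatible API.
    # Assumes the server started in step 2 is running on http://localhost:1234.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="mistral.rs",
        messages=[{"role": "user", "content": "List files and create hello.txt"}],
    )

    # MCP tools are discovered and invoked server-side, so the reply already reflects tool results.
    print(resp.choices[0].message.content)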
https://reddit.com/link/1l9cd44/video/i9ttdu2v0f6f1/player
I'm excited to see what you create with this 🚀! Let me know what you think.
**Quick links:**
* [https://github.com/EricLBuehler/mistral.rs/blob/master/examples/MCP\_QUICK\_START.md](https://github.com/EricLBuehler/mistral.rs/blob/master/examples/MCP_QUICK_START.md)
* [https://github.com/EricLBuehler/mistral.rs/tree/master/docs/MCP](https://github.com/EricLBuehler/mistral.rs/tree/master/docs/MCP)
* [https://github.com/EricLBuehler/mistral.rs/blob/master/examples/python/mcp\_client.py](https://github.com/EricLBuehler/mistral.rs/blob/master/examples/python/mcp_client.py) | 2025-06-12T03:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l9cd44/mistralrs_v060_now_has_full_builtin_mcp_client/ | EricBuehler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9cd44 | false | null | t3_1l9cd44 | /r/LocalLLaMA/comments/1l9cd44/mistralrs_v060_now_has_full_builtin_mcp_client/ | false | false | 104 | {'enabled': False, 'images': [{'id': '2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?width=108&crop=smart&auto=webp&s=c219aada1d3fcf1d71210730945ea465eb6844c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?width=216&crop=smart&auto=webp&s=dea487c7bc2e610216766795d3a409163288208b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?width=320&crop=smart&auto=webp&s=5c30bcb8a067fd31c2653f48338321e0e066bcb0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?width=640&crop=smart&auto=webp&s=dd943325186c8dbc22ef59b4adb920ee2ccdc473', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?width=960&crop=smart&auto=webp&s=687853e73bef8cf389c6027007f4b4bd59727164', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?width=1080&crop=smart&auto=webp&s=32ffdd6514e60844c89c888b1455cf1713d7ef30', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2lLo8jhJSmFII5np0CAVlto_8NREWNjCUymEN6xCnKk.png?auto=webp&s=e81040ea360c90bc8c4949ddb1c7591056c5b3d6', 'width': 1200}, 'variants': {}}]} |
|
What are the best solutions to benchmark models locally? | 7 | Sorry if I'm missing something, but is there a good tool for benchmarking models locally? Not in terms of Tok/s, but by running them against open source benchmark datasets. I've been looking, and info on the topic is fragmented at best. Ideally something that can connect to localhost for local models.
Some benchmarks have their own tools to run models if I'm reading the githubs right, but it would be super cool to see the effect of settings changes on model performance(ie. Models as run by user). Mostly I'm excited to run qwen 235b at q1 and want to see how it stacks up against smaller models with bigger quants. | 2025-06-12T03:55:41 | https://www.reddit.com/r/LocalLLaMA/comments/1l9cve7/what_are_the_best_solutions_to_benchmark_models/ | PraxisOG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9cve7 | false | null | t3_1l9cve7 | /r/LocalLLaMA/comments/1l9cve7/what_are_the_best_solutions_to_benchmark_models/ | false | false | self | 7 | null |
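(For reference, a harness-driven run of the kind being asked about might look roughly like this. This is a sketch only, assuming EleutherAI's lm-evaluation-harness and an OpenAI-compatible server on localhost; the model name is illustrative and accepted arguments can differ between harness versions.)

    # Rough sketch: benchmark a locally served model with lm-evaluation-harness (pip install lm-eval).
    # Assumes an OpenAI-compatible completions endpoint on localhost:1234; the model name is illustrative.
    import lm_eval

    results = lm_eval.simple_evaluate(
        model="local-completions",  # backend that sends requests to an HTTP endpoint
        model_args=(
            "model=qwen3-235b-q1,"
            "base_url=http://localhost:1234/v1/completions,"
            "num_concurrent=1"
        ),
        tasks=["hellaswag", "gsm8k"],
        num_fewshot=0,
    )

    # Per-task metrics (accuracy, stderr, ...) live under results["results"]
    print(results["results"])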
Running an LLM on a PS Vita | 197 | After spending some time with my vita I wanted to see if \*\*any\*\* LLM can be ran on it, and it can! I modified llama2.c to have it run on the Vita, with the added capability of downloading the models on device to avoid having to manually transfer model files (which can be deleted too). This was a great way to learn about homebrewing on the Vita, there were a lot of great examples from the VitaSDK team which helped me a lot. If you have a Vita, there is a .vpk compiled in the releases section, check it out!
Repo: [https://github.com/callbacked/psvita-llm](https://github.com/callbacked/psvita-llm) | 2025-06-12T03:57:31 | https://v.redd.it/we6m8zvv4f6f1 | ajunior7 | /r/LocalLLaMA/comments/1l9cwi5/running_an_llm_on_a_ps_vita/ | 1970-01-01T00:00:00 | 0 | {} | 1l9cwi5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/we6m8zvv4f6f1/DASHPlaylist.mpd?a=1752422286%2CZjQxOTY1MThlMjQ2YTZmOGU4N2E4MTE0MmJiNWM3MjgxZGZkMDQwOTIyNjJmM2I0YmVhMjZjODE2ODhjZGU1YQ%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/we6m8zvv4f6f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/we6m8zvv4f6f1/HLSPlaylist.m3u8?a=1752422286%2CYTJiY2ZmNTRhNmQ2NjI0MjRkNGQxNWFmZDAyNjc5ODU5MTM5MzA1MDIxNmIwNmY3YzZkNWVhMTdkZjRiNmM0Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/we6m8zvv4f6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l9cwi5 | /r/LocalLLaMA/comments/1l9cwi5/running_an_llm_on_a_ps_vita/ | false | false | 197 | {'enabled': False, 'images': [{'id': 'MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?width=108&crop=smart&format=pjpg&auto=webp&s=8702d3708fe40d9cc96246739252ddae77937c81', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?width=216&crop=smart&format=pjpg&auto=webp&s=f3f4b0114a4f0eb4e7558abbf4a8da0c43a23bf7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?width=320&crop=smart&format=pjpg&auto=webp&s=3b9e3aa8ad0aacb79e2fee57ba05329859247710', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?width=640&crop=smart&format=pjpg&auto=webp&s=8226620f7b8730a89cc0ebe4e54ea9fad990ecd3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?width=960&crop=smart&format=pjpg&auto=webp&s=7bdffa173933b9968e216d9d069df1507894798a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=66fb6ec7244274786916ed4912af904b8cee492f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MWMwMGd5dnY0ZjZmMdxvYbrJ4twRsyVsSVzosN1N6q8R6lU4U4ntC9uiniMK.png?format=pjpg&auto=webp&s=c3788a51f7d84a99811e947da652f9aeff7f1256', 'width': 1920}, 'variants': {}}]} |
|
Testing Mac Studio 512 GB, 4 TB SSD, M3 Ultra w 32 cores. | 46 | Hi all,
I am running some tests and to be fair, I don't regret it.
Given that I want to learn and sell private AI solutions, and I want to run K8s clusters of agents locally for learning purposes, I think it's a good medium- to long-term investment.

24 tokens/second for Qwen3 235B in thinking mode is totally manageable, and anyway that's the mode you reach for when the task is complex.

If you use /nothink, the response is finalized in a short amount of time, and for tasks like "give me the boilerplate code for xyz" it's totally manageable.
Now I am downloading the latest R1, let's see how it goes with that.
| 2025-06-12T04:25:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l9demc/testing_mac_studio_512_gb_4_tb_ssd_m3_ultra_w_32/ | Deviad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9demc | false | null | t3_1l9demc | /r/LocalLLaMA/comments/1l9demc/testing_mac_studio_512_gb_4_tb_ssd_m3_ultra_w_32/ | false | false | self | 46 | null |
💡 I Built an AI-Powered YouTube Video Generator — Fully Automated, Using LLaMA, Stable Diffusion, Whisper & FFmpeg 🚀 | 1 | [removed] | 2025-06-12T04:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l9di6m/i_built_an_aipowered_youtube_video_generator/ | tuvshin-enkhbaatar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9di6m | false | null | t3_1l9di6m | /r/LocalLLaMA/comments/1l9di6m/i_built_an_aipowered_youtube_video_generator/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?width=108&crop=smart&auto=webp&s=e484707ee8f247dcf251375a2cb017afdda71ad4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?width=216&crop=smart&auto=webp&s=ad8ab000cb8c60aa0a9b76f4c3659fe88d842ca5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?width=320&crop=smart&auto=webp&s=b63e44146e746103f2507dcd1edc95fa39047dbb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?width=640&crop=smart&auto=webp&s=e98f4eb3137b4fbc24b076089368b45175b282b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?width=960&crop=smart&auto=webp&s=d46b087ed1bcfa69ff04c3fc3f537c3217b88fb1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?width=1080&crop=smart&auto=webp&s=2bbee08c24b431d23d8ddb3bdaa7202c02d8781e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ow7D8WjHFOJwlOnfppmNCgeo_n9qG-oI31Ws0eFLPWE.png?auto=webp&s=dd9915decd3290ff6c5d13c3416cd663286ee2a9', 'width': 1200}, 'variants': {}}]} |
Memory and compute estimation for Fine Tuning LLM | 11 | Hey guys,
I want to use the crowd intelligence of this forum, since I have not trained that many LLMs and this is my first larger project. I looked for resources, but there is a lot of contradictory information out there:

I have around 1 million samples of 2800 tokens each. I am currently trying to finetune a Qwen3 8B model on an H100 GPU with 80 GB, with flash attention 2 and bfloat16.

Since it is a pretty big model, I use LoRA with a rank of 64 and DeepSpeed. The model supposedly needs around 4 days for one epoch.

From what I have seen online, a batch size of 4 (which I am using) takes around 1 second per step. For 1 million samples and 3 epochs, that works out to roughly 200 hours. However, the estimate shown during training is around 500 hours.

Does anyone here have a good way to calculate and optimize training speed? Somehow there is not much reliable information out there for estimating the time. Maybe I am also doing something wrong, and others in this forum have done similar fine-tuning with faster runs?
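(A back-of-the-envelope check of the numbers above, assuming 1 s per optimizer step at an effective batch size of 4 with no gradient accumulation. The figures are the ones quoted in the post; the helper script is just illustrative.)

    # Back-of-the-envelope training-time estimate using the numbers quoted above.
    samples = 1_000_000      # dataset size
    batch_size = 4           # effective batch per optimizer step (no gradient accumulation assumed)
    sec_per_step = 1.0       # reported "~1 second for a batch of 4"
    epochs = 3

    steps_per_epoch = samples / batch_size                      # 250,000 steps
    total_hours = steps_per_epoch * sec_per_step * epochs / 3600
    print(f"estimated total: {total_hours:.0f} h")              # ~208 h, i.e. the ~200 h figure

    # Working backwards from the ~500 h shown by the trainer:
    implied_sec = 500 * 3600 / (steps_per_epoch * epochs)
    print(f"implied step time: {implied_sec:.1f} s")            # ~2.4 s/step, so each step is slower than assumed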
Coming Soon: VLLM-Swap (Host multiple models through one OpenAI endpoint!) | 1 | [removed] | 2025-06-12T05:27:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l9eg80/coming_soon_vllmswap_host_multiple_models_through/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l9eg80 | false | null | t3_1l9eg80 | /r/LocalLLaMA/comments/1l9eg80/coming_soon_vllmswap_host_multiple_models_through/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z4i5RZyvksBHZxF6kr2sr1v8Yu7u_Vv0xeomPXDKt7E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?width=108&crop=smart&auto=webp&s=9d0701cc6f2cf8db43d78763909b346fb7618e25', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?width=216&crop=smart&auto=webp&s=c6b2ce63fd1bcf39ad71dc3720b80b4657c74cb7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?width=320&crop=smart&auto=webp&s=2d56fcf727c224b43b7b6bd9235bf1eefaf692e1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?width=640&crop=smart&auto=webp&s=209aab95155e479d07bfcac6b1d3b05fa2c55523', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?width=960&crop=smart&auto=webp&s=32562837ad902ac279ce72864ed2186a27baa316', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?width=1080&crop=smart&auto=webp&s=1c3130cff1afaecd700592b6bfa0be5531dcc95c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N6VeKOkIqgMUNu00LNlLfHTuOpjBzzc73kUXajt-SfE.jpg?auto=webp&s=7f9f6946c7a2bf899fc20f73ffaf54782d217f58', 'width': 1200}, 'variants': {}}]} |