Dataset schema (column · dtype · observed range):

- title: string, length 1–300
- score: int64, 0–8.54k
- selftext: string, length 0–40k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, length 0–878
- author: string, length 3–20
- domain: string, length 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646–1.8k
- name: string, length 10
- permalink: string, length 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4–213
- ups: int64, 0–8.54k
- preview: string, length 301–5.01k
N00b needing assistance with prompting/formatting in LM Studio. (score: 1)
[removed]
Posted by Blorfgor on 2025-05-22T01:03:54 · https://www.reddit.com/r/LocalLLaMA/comments/1ksealt/n00b_needing_assistance_with_promptingformatting/
Add voices to Kokoro TTS? (score: 5)
Hello everyone. I'm not experienced in Python and coding, and I have some questions. I'm using Kokoro TTS and I want to add voices to it. If I'm not wrong, Kokoro uses .pt files as voice models. Does anyone here know how to create .pt files? Which models can create these files, and would it work if I create a .pt file for Kokoro TTS? The purpose is to add my favorite characters' voices to Kokoro, because it is so fast compared to the other TTS models I tried. Note: my vision is low, so it is hard for me to follow YouTube tutorials 🙏
Posted by No_Cartographer_2380 on 2025-05-22T01:10:20 · https://www.reddit.com/r/LocalLLaMA/comments/1kseex4/add_voices_to_kokoru_tts/
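On the .pt question above: Kokoro voicepacks are commonly reported to be plain PyTorch tensors (style embeddings), so a common community technique is to create a "new" voice by blending existing ones rather than training from scratch. A minimal sketch, assuming the commonly reported voicepack shape (verify against your own install; the file names and shape here are assumptions, not official):

```python
import torch

def blend_voices(voice_a: torch.Tensor, voice_b: torch.Tensor, weight: float = 0.5) -> torch.Tensor:
    """Linearly interpolate two voicepack tensors into a new voice."""
    assert voice_a.shape == voice_b.shape, "voicepacks must have matching shapes"
    return weight * voice_a + (1.0 - weight) * voice_b

# In real use you would load shipped voicepacks, e.g.:
#   voice_a = torch.load("voices/af_bella.pt", weights_only=True)
# Random tensors stand in here, using an assumed voicepack shape:
voice_a = torch.randn(510, 1, 256)
voice_b = torch.randn(510, 1, 256)

new_voice = blend_voices(voice_a, voice_b, weight=0.7)
torch.save(new_voice, "my_custom_voice.pt")  # load it like any other voicepack
```

Cloning a genuinely new voice (e.g. a favorite character) would instead require a voice-cloning pipeline; simple blending only remixes the voices the model already ships with.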
Announcing: TiānshūBench 0.0! (score: 35)
Llama-sté, local llama-wranglers! I'm happy to announce that I’ve started work on TiānshūBench (天书Bench), a novel benchmark for evaluating Large Language Models' ability to understand and generate code. Its distinctive feature is a series of tests which challenge the LLM to solve programming problems in an obscure programming language. Importantly, the ***language features are randomized on every test question***, helping to ensure that the test questions and answers do not enter the training set. Like the mystical "heavenly script" that inspired its name, the syntax appears foreign at first glance, but the underlying logic remains consistent. The goal of TiānshūBench is to determine if an AI system truly understands concepts and instructions, or merely reproduces familiar patterns. I believe this approach has a higher ceiling than ARC2, which relies upon ambiguous visual symbols, instead of the well-defined and agreed upon use of language in TiānshūBench. Here are the results of version 0.0 of TiānshūBench: **=== Statistics by LLM ===** `ollama/deepseek-r1:14b: 18/50 passed (36.0%)` `ollama/phi4:14b-q4_K_M: 10/50 passed (20.0%)` `ollama/qwen3:14b: 23/50 passed (46.0%)` The models I tested are limited by my puny 12 GB 3060 card. If you’d like to see other models tested in the future, let me know. Also, I believe there are some tweaks needed to ollama to make it perform better, so I’ll be working on those. **=== Statistics by Problem ID ===** `Test Case 0: 3/30 passed (10.0%)` `Test Case 1: 8/30 passed (26.67%)` `Test Case 2: 7/30 passed (23.33%)` `Test Case 3: 18/30 passed (60.0%)` `Test Case 4: 15/30 passed (50.0%)` Initial test cases included a "Hello World" type program, a task requiring input and output, and a filtering task. There is no limit to how sophisticated the tests could be. My next test cases will probably include some beginner programming exercises like counting and sorting. 
I can see a future where more sophisticated tasks are given, like parsers, databases, and even programming languages! Future work here will also include multi-shot tests, as that gives models a chance to show their true abilities. I also want to be able to make the language even more random, swapping around even more features. Finally, I want to nail down the language description that's fed in as part of the test prompt so there's no ambiguity when it comes to the meaning of the control structures and other features. Hit me up if you have any questions or comments, or want to help out. I need more test cases, coding help, access to more powerful hardware, and LLM usage credits!
Posted by JeepyTea on 2025-05-22T01:18:06 · image: https://i.redd.it/5ykvwmvqh82f1.png · permalink: /r/LocalLLaMA/comments/1ksekcn/announcing_tiānshūbench_00/
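The core anti-contamination idea described above — randomizing the language's surface syntax per test question so memorized answers don't transfer — can be sketched like this. Everything here is illustrative (toy keyword list, made-up syllables), not TiānshūBench's actual scheme:

```python
import random

# Canonical keywords of a tiny toy language.
CANONICAL = ["print", "if", "else", "while", "func", "return"]

def randomized_keywords(seed: int) -> dict[str, str]:
    """Derive a fresh, deterministic keyword vocabulary for one test question."""
    rng = random.Random(seed)
    syllables = ["ka", "zu", "mi", "ro", "ten", "shu", "bel", "gor"]
    mapping, used = {}, set()
    for kw in CANONICAL:
        while True:
            word = "".join(rng.choice(syllables) for _ in range(3))
            if word not in used:
                used.add(word)
                mapping[kw] = word
                break
    return mapping

def render(program: str, mapping: dict[str, str]) -> str:
    """Rewrite a canonical program into the randomized surface syntax."""
    for kw, alias in mapping.items():
        program = program.replace(kw, alias)
    return program

m = randomized_keywords(seed=42)
print(render('print "Hello, World!"', m))  # same logic, unfamiliar keywords
```

Because the mapping is seeded per question, the grader can regenerate the exact language spec for scoring, while the "heavenly script" surface form never repeats across the training set.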
Why has no one been talking about Open Hands so far? (score: 211)
So I just stumbled across Open Hands while checking out Mistral's new Devstral model, and honestly, I was really impressed. The agent itself seems super capable, yet I feel like barely anyone is talking about it? What's weird is that OpenHands has 54k+ stars on GitHub. For comparison: Roo Code sits at ~14k, and Cline is around 44k. So it's clearly on the radar of devs. But when you go look it up on YouTube or Reddit: nothing. Practically no real discussion, no deep dives, barely any content. And I'm just sitting here wondering... why? From what I've seen so far, it seems just as capable as the other top open-source agents. So are you guys using OpenHands? Is there some kind of limitation I've missed? Or is it just a case of bad marketing/no community hype? Curious to hear your thoughts. Also, do you think models specifically trained for a certain agent are the future? Are we going to see more agent-specific models going forward, and how big do you think the effort is to create these fine-tunes? Will it depend on collaborations with big names like Mistral, or will Roo et al. be able to provide fine-tunes on their own?
Posted by Mr_Moonsilver on 2025-05-22T02:16:28 · https://www.reddit.com/r/LocalLLaMA/comments/1ksfos8/why_has_no_one_been_talking_about_open_hands_so/
Open-Sourced Multimodal Large Diffusion Language Models (score: 115)
MMaDA is a new family of **multimodal diffusion foundation models** designed to achieve superior performance across diverse domains such as textual reasoning, multimodal understanding, and text-to-image generation. MMaDA is distinguished by three key innovations:

1. MMaDA adopts a **unified diffusion architecture** with a shared probabilistic formulation and a modality-agnostic design, eliminating the need for modality-specific components.
2. MMaDA introduces a **mixed long chain-of-thought (CoT) fine-tuning** strategy that curates a unified CoT format across modalities.
3. MMaDA adopts a unified policy-gradient-based RL algorithm, which we call **UniGRPO**, tailored for diffusion foundation models. Utilizing diversified reward modeling, **UniGRPO** unifies post-training across both reasoning and generation tasks, ensuring consistent performance improvements.
Posted by ninjasaid13 on 2025-05-22T02:18:45 · https://github.com/Gen-Verse/MMaDA
In video, Intel talks a bit about Battlematrix (192GB VRAM) (score: 50)
Intel Sr. Director of Discrete Graphics Qi Lin talks about a new breed of inference workstations codenamed Project Battlematrix and the Intel Arc Pro B60 GPUs that help them accelerate local AI workloads. The B60 brings 24GB of VRAM to accommodate larger AI models and supports multi-GPU inferencing with up to eight cards (8 × 24GB = 192GB total). Project Battlematrix workstations combine these cards with a containerized Linux software stack that's optimized for LLMs and designed to simplify deployment, and partners have the flexibility to offer different designs based on customer needs. [https://www.youtube.com/watch?v=tzOXwxXkjFA](https://www.youtube.com/watch?v=tzOXwxXkjFA)
Posted by Terminator857 on 2025-05-22T03:37:40 · https://www.reddit.com/r/LocalLLaMA/comments/1ksh780/in_video_intel_talks_a_bit_about_battlematrix/
LM Studio in Git copilot on vs code (score: 1)
[removed]
Posted by Lazy_Damage4931 on 2025-05-22T04:17:39 · https://www.reddit.com/r/LocalLLaMA/comments/1kshwih/lm_studio_in_git_copilot_on_vs_code/
Advantage of using superblocks for K-quants (score: 3)
I've been trying to figure out the advantage of using superblocks for K-quants. I saw the comments on the other thread: [https://www.reddit.com/r/LocalLLaMA/comments/1dved4c/llamacpp_kquants/](https://www.reddit.com/r/LocalLLaMA/comments/1dved4c/llamacpp_kquants/) I understand K-quants use superblocks, and thus there are 16 scales and min-values for each superblock. What's the benefit? Does it pick one of the 16 values as the best scale and min-value for each weight, instead of restricting each weight's scale to that of its own block? That would invariably add extra computation steps. What other benefit is there?
Posted by datashri on 2025-05-22T04:20:22 · https://www.reddit.com/r/LocalLLaMA/comments/1kshy6c/advantage_of_using_superblocks_for_kquants/
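On the question above: as far as the community understands it, each weight still uses its own sub-block scale; the point of the superblock is that the 16 sub-block scales can themselves be stored as small integers against a single higher-precision superblock scale, cutting metadata overhead. A simplified NumPy sketch of the idea (toy layout with 16×16 sub-blocks and 6-bit scales; the real llama.cpp formats differ in block sizes, bit packing, and the use of min-values):

```python
import numpy as np

def quantize_superblock(w, n_sub=16):
    """Toy K-quant-style superblock: per-sub-block 4-bit weights, with the
    sub-block scales themselves quantized to 6 bits against one fp scale."""
    sub = w.reshape(n_sub, -1)                    # 256 weights -> 16 sub-blocks of 16
    scales = np.abs(sub).max(axis=1) / 7.0        # per-sub-block scale (symmetric 4-bit: -7..7)
    d = scales.max() / 63.0                       # superblock "scale of scales" for 6-bit storage
    q_scales = np.maximum(np.round(scales / d), 1).astype(np.uint8)  # 16 scales, 6 bits each
    q = np.round(sub / (q_scales * d)[:, None]).clip(-7, 7).astype(np.int8)
    return d, q_scales, q

def dequantize_superblock(d, q_scales, q):
    return (q * (q_scales * d)[:, None]).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
d, q_scales, q = quantize_superblock(w)
w_hat = dequantize_superblock(d, q_scales, q)
err = np.abs(w - w_hat).mean()

# Storage: 256 4-bit weights + 16 6-bit scales + one fp32 superblock scale
bits = 256 * 4 + 16 * 6 + 32   # = 1152 bits, i.e. 4.5 bits/weight
```

The benefit is visible in the last line: with fp16 scales per sub-block you would pay 16×16 = 256 bits of metadata per superblock; quantizing those scales to 6 bits each drops that to 96 + 32 bits, while keeping the fine per-sub-block granularity that makes K-quants accurate.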
Local LLM laptop budget 2.5-5k (score: 7)
Hello everyone, I'm looking to purchase a laptop specifically for running local LLM RAG models. My primary use cases/requirements will be:

* General text processing
* University paper review and analysis
* Light to moderate coding
* Good battery life
* Good heat dissipation
* Windows OS

**Budget**: $2500-5000

I know a desktop would provide better performance per dollar, but portability is essential for my workflow. I'm relatively new to running local LLMs, though I follow the LangChain community and plan to experiment with setups similar to what's seen in the video "Reliable, fully local RAG agents with LLaMA3.2-3b", or possibly use AnythingLLM. Would appreciate recommendations on:

1. Minimum/recommended GPU VRAM for running models like Llama 3 70B or similar (I know Llama 3.2 3B is much more realistic, but maybe my upper budget can get me to a 70B model?)
2. Specific laptop models (gaming laptops are all over the place and I can't pinpoint the right one)
3. CPU/RAM considerations beyond the GPU (I know more RAM is better, but if the laptop only goes up to 64GB, is that enough?)

Also interested to hear what models people are successfully running locally on laptops these days and what performance you're getting. Thanks in advance for your insights! Claude suggested these machines (while waiting for Reddit's advice):

1. **High-end gaming laptops with RTX 4090** (24GB VRAM):
   * MSI Titan GT77 HX
   * ASUS ROG Strix SCAR 17
   * Lenovo Legion Pro 7i
2. **Workstation laptops**:
   * Dell Precision models with RTX A5500 (16GB)
   * Lenovo ThinkPad P-series

Thank you very much!
Posted by 0800otto on 2025-05-22T04:37:04 · https://www.reddit.com/r/LocalLLaMA/comments/1ksi7ty/local_llm_laptop_budget_255k/
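For the VRAM question above, a common rule of thumb is weights ≈ parameters × bits-per-weight ÷ 8, plus headroom for KV cache and activations. A quick back-of-envelope sketch (the 20% overhead factor is a rough community heuristic, not a spec):

```python
def vram_estimate_gb(n_params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold a model's weights, with ~20% headroom
    for KV cache and activations (heuristic, not a hardware spec)."""
    weight_gb = n_params_b * bits_per_weight / 8.0
    return weight_gb * overhead

# Llama 3 70B at 4-bit: ~42 GB, which no single laptop GPU's 24 GB can hold
print(round(vram_estimate_gb(70, 4), 1))
# Llama 3.2 3B at 4-bit: ~1.8 GB, which fits trivially
print(round(vram_estimate_gb(3, 4), 1))
```

By this estimate, even the top of the stated budget can't put a 4-bit 70B fully in a laptop GPU; partial CPU offload or a smaller model (8B to 32B class at 4-bit) is the realistic target for a 24GB mobile 4090.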
I am looking for light weight models to run locally (score: 1)
[removed]
Posted by Obvious_Ad_2699 on 2025-05-22T05:03:54 · https://www.reddit.com/r/LocalLLaMA/comments/1ksindc/i_am_looking_for_light_weight_models_to_run/
Feedback from Anyone Running RTX 4000 SFF Ada vs Dual RTX 2000 SFF Ada? (score: 1)
[removed]
Posted by PocketMartyr on 2025-05-22T05:23:12 · https://www.reddit.com/r/LocalLLaMA/comments/1ksiyd6/feedback_from_anyone_running_rtx_4000_sff_ada_vs/
Falcon-H1: hybrid Transformer–SSM model series from 0.5B to 34B (score: 104)

🔬 Hybrid architecture: Attention + Mamba2 heads in parallel
🧠 Sizes: 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B
📏 Up to 256K context
🔥 Outperforming or rivaling top Transformer models like Qwen3-32B, Qwen2.5-72B, Llama4-Scout-17B/109B, and Gemma3-27B, consistently outperforming models up to 2× their size
💥 Falcon-H1-0.5B ≈ typical 7B models from 2024; Falcon-H1-1.5B-Deep ≈ current leading 7B–10B models
🌍 Multilingual: native support for 18 languages (scalable to 100+)
⚙️ Customized μP recipe + optimized data strategy
🤖 Integrated into vLLM, Hugging Face Transformers, and llama.cpp, with more coming soon

All comments and feedback from the community are greatly welcome.
Blogpost: [https://falcon-lm.github.io/blog/falcon-h1/](https://falcon-lm.github.io/blog/falcon-h1/)
Github: [https://github.com/tiiuae/falcon-h1](https://github.com/tiiuae/falcon-h1)
Posted by JingweiZUO on 2025-05-22T05:52:10 · https://www.reddit.com/r/LocalLLaMA/comments/1ksjee6/falconh1_hybrid_transformerssm_model_series_from/
Jan is now Apache 2.0 (score: 384)
Hey, we've just changed [Jan](https://jan.ai/)'s license. Jan has always been open-source, but the AGPL license made it hard for many teams to actually use it. Jan is now licensed under Apache 2.0, a more permissive, industry-standard license that works inside companies as well. What this means:

– You can bring Jan into your org without legal overhead
– You can fork it, modify it, ship it
– You don't need to ask permission

This makes Jan easier to adopt. At scale. In the real world.
Posted by eck72 on 2025-05-22T06:03:22 · https://github.com/menloresearch/jan/blob/dev/LICENSE
Looking for education specific models (score: 1)
[removed]
Posted by Nickthrowaway10 on 2025-05-22T06:04:23 · https://www.reddit.com/r/LocalLLaMA/comments/1ksjkz9/looking_for_education_specific_models/
Is there any existing repo that lets us replace the LLM in a VLM with another LLM? (score: 1)
Same as title: is there any existing repo that lets us replace the LLM in a VLM (vision-language model) with another LLM? Also, has anyone tried this? How much additional training is required?
Posted by SouvikMandal on 2025-05-22T06:15:39 · https://www.reddit.com/r/LocalLLaMA/comments/1ksjqwj/is_there_any_existing_repo_that_lets_us_replace/
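For context on why this swap needs retraining: most VLMs are structured as a frozen-ish vision encoder, a projector, and an LLM; the projector maps vision features into the LLM's embedding space, so swapping the LLM means at minimum retraining the projector against the new embedding geometry. A toy sketch with made-up dimensions (all module sizes here are illustrative stand-ins, not any specific model):

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained vision encoder (e.g. a ViT emitting patch features).
vision_encoder = nn.Linear(768, 768)
new_llm_hidden = 3584  # hidden size of the *replacement* LLM (illustrative)

class Projector(nn.Module):
    """Maps vision features into the LLM's embedding space. This is the piece
    that must be rebuilt and retrained when the LLM is swapped, since the new
    LLM's embedding space has a different size and geometry."""
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )
    def forward(self, feats):
        return self.net(feats)

# Swap recipe: keep the vision encoder, rebuild the projector for the new LLM.
projector = Projector(768, new_llm_hidden)
patches = vision_encoder(torch.randn(1, 256, 768))  # (batch, patches, vision_dim)
llm_inputs = projector(patches)                     # ready for the new LLM's input layer
print(llm_inputs.shape)
```

In practice the projector-alignment stage is the cheap part (hours on modest hardware for small models); recovering full instruction-following quality usually also needs a round of multimodal fine-tuning of the combined system.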
Introducing Skywork Super Agents: The Next Era of AI Workspace is Here (score: 0)
Skywork Super Agents is a suite of AI workspace agents based on deep research, designed to make modern people's work and study more efficient. Compared to other general AI agents, Skywork is more professional, smarter, more reliable, easier to use, and offers better value for money. Skywork isn't just another AI assistant; it's a truly useful, trustworthy, and user-friendly AI productivity partner.

* Useful: designed for real, high-frequency workplace use cases, with seamless generation of docs, sheets, and slides that fit into daily workflows.
* Trustworthy: supports deep research with reliable and traceable sources.
* Easy to use: built for flexibility and usability, with smart formatting, visual expressiveness, editable outputs, and multi-format export.
Posted by Lynncc6 on 2025-05-22T06:34:58 · https://www.youtube.com/watch?v=AjU5hihAclw&t=13s
How to determine sampler settings if not listed? (score: 5)
For example, I'm trying to figure out the best settings for Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-Q6_K. With my current settings it goes off the rails far too often, latching onto and repeating phrases it seems to 'like' until it loses its shit entirely and gets stuck in circular sentences. Maybe I just missed it somewhere, but I couldn't find specific information about what sampler settings to use for this model. But I've heard good things about it, so I assume these issues are my fault. I'd appreciate pointers on how to fix this. This isn't the first or last time I couldn't find such information, so for future reference: how can I know where to start with sampler settings if the information isn't readily available on the HF page? Just trial and error? Are there any rules of thumb to stick to? Also, a dumb tangential question: how can I reset the sampler to 'default' settings in SillyTavern? Do I need to delete all the templates to do that?
Posted by Jawzper on 2025-05-22T06:41:21 · https://www.reddit.com/r/LocalLLaMA/comments/1ksk4er/how_to_determine_sampler_settings_if_not_listed/
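On the rule-of-thumb question above: when a model card lists nothing, a common community starting point is a "neutral" preset, then changing one knob at a time so any behavior change can be attributed. The values below are widespread heuristics, not official recommendations for this or any model:

```python
# Rule-of-thumb neutral baseline when a model card gives no sampler guidance.
# These are community heuristics, not official recommendations for any model.
baseline = {
    "temperature": 1.0,        # neutral; lower toward 0.7 if output rambles
    "min_p": 0.05,             # trims the low-probability tail; alternative to top_p
    "top_p": 1.0,              # disabled while min_p is active
    "top_k": 0,                # disabled
    "repetition_penalty": 1.0, # raise gently (1.05-1.15) if phrases start looping
}

def tweak(settings: dict, **changes) -> dict:
    """Adjust one knob at a time so you can attribute any behavior change."""
    out = dict(settings)
    out.update(changes)
    return out

# Model loops on pet phrases? First try a mild repetition penalty:
attempt_1 = tweak(baseline, repetition_penalty=1.1)
# Still chaotic? Lower temperature next, as a separate experiment:
attempt_2 = tweak(attempt_1, temperature=0.8)
```

Repetition that spirals into circular sentences (as described in the post) most often responds to a small repetition penalty or a lower temperature; pushing either too far trades looping for blandness or incoherence, which is why the one-knob-at-a-time discipline matters.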
Mistral releases Devstral coding model!! (score: 1)
Downloading now, can't wait to try it out.
Posted by thebadslime on 2025-05-22T07:09:09 · https://huggingface.co/unsloth/Devstral-Small-2505-GGUF/tree/main
I saw a project that I'm interested in: 3DTown: Constructing a 3D Town from a Single Image (score: 184)
According to the official description, **3DTown outperforms state-of-the-art baselines, including Trellis, Hunyuan3D-2, and TripoSG, in terms of geometry quality, spatial coherence, and texture fidelity.**
Posted by Dr_Karminski on 2025-05-22T07:15:06 · video: https://v.redd.it/6as4adn9aa2f1
llmbasedos: Docker Update + USB Key Launch Monday! (score: 2)

Hey everyone,

A while back, I introduced llmbasedos, a minimal OS-layer designed to securely connect local resources (files, emails, tools) with LLMs via the Model Context Protocol (MCP). Originally, the setup revolved around an Arch Linux ISO for a dedicated appliance experience.

After extensive testing and community feedback (thanks again, everyone!), I've moved the primary deployment method to Docker. Docker simplifies setup, streamlines dependency management, and greatly improves development speed. Setup now just involves cloning the repo, editing a few configuration files, and running docker compose up. The shift has dramatically enhanced my own dev workflow, allowing instant code changes without lengthy rebuilds. Additionally, Docker ensures consistent compatibility across Linux, macOS, and Windows (WSL2).

Importantly, the ISO option isn't going away. Due to strong demand, I'm launching the official llmbasedos USB Key Edition this coming Monday. This edition remains ideal for offline deployments, enterprise use, or anyone preferring a physical, plug-and-play solution.

The GitHub repo is already updated with the latest Docker-based setup, revised documentation, and various improvements. Has anyone here also transitioned their software distribution from ISO or VM setups to Docker containers? I'd be interested in hearing about your experience, particularly regarding user adoption and developer productivity. Thank you again for all your support!
2025-05-22T07:30:05
https://github.com/iluxu/llmbasedos
iluxu
github.com
1970-01-01T00:00:00
0
{}
1kskt9w
false
null
t3_1kskt9w
/r/LocalLLaMA/comments/1kskt9w/llmbasedos_docker_update_usb_key_launch_monday/
false
false
https://b.thumbs.redditm…HGuJNm6IziZY.jpg
2
{'enabled': False, 'images': [{'id': 'k-x69qYR1RapsOSAZAFNUwRR9nsploVo0xghJ_WwOx8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?width=108&crop=smart&auto=webp&s=f0cf8109c8555c962452b027d9b66a168ebb86d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?width=216&crop=smart&auto=webp&s=804126e939ea5e0bff422ae5eaa4acfd7db67746', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?width=320&crop=smart&auto=webp&s=5ba3116ea0fc3a216d9efc84c479073266c5d8bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?width=640&crop=smart&auto=webp&s=499ec121b75cf1bf240cbb617e7ed0350738f58c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?width=960&crop=smart&auto=webp&s=9f565b378f8e28255e685991c3b7dac7ddde2776', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?width=1080&crop=smart&auto=webp&s=6849418ee7bb3e91c3e631a5461b7a3ed3b828d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hTawpY-gFTbtNqP8NCoBxtZYR2Q3prcAvVN-o8yqqsQ.jpg?auto=webp&s=caf68c75b9cf86394506b4f1f7298aa611d28f04', 'width': 1200}, 'variants': {}}]}
Converting my Gaming PC into a LLM-Server (GTX 1080 Ti) - worth it?
0
Background: I have a Proxmox cluster at home but with pretty old hardware: 32GB and 16GB DDR3, some very old Xeon E3 CPUs. For most of my use cases that's absolutely enough, but for LLMs it's absolutely not sufficient. Besides that I have a gaming PC with more current hardware, and I've already played around with 8-11B models (always Q4). It ran pretty well. Since I share way too much information with ChatGPT and other models, I finally want to set up something in my homelab. But buying a completely new setup would be too expensive, so I was thinking of sacrificing my PC to convert it into a third Proxmox node, completely dedicated to llama.cpp.

Specs:
GPU: GTX 1080 Ti
CPU: Ryzen 5 3800X
RAM: 32GB DDR4
Mainboard: Asus X470 Pro (second GPU for a later upgrade?)

What models could I run with this setup? And could I upgrade it with a (second-hand) Nvidia P40? My GPU has 11GB of VRAM; could I use the 32GB RAM as well, or would it be too slow? Currently I have a budget of around 500-700€ for some upgrades if needed.
2025-05-22T07:38:52
https://www.reddit.com/r/LocalLLaMA/comments/1kskxm9/converting_my_gaming_pc_into_a_llmserver_gtx_1080/
delobre
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kskxm9
false
null
t3_1kskxm9
/r/LocalLLaMA/comments/1kskxm9/converting_my_gaming_pc_into_a_llmserver_gtx_1080/
false
false
self
0
null
Privacy-first AI Development with Foundry Local + Semantic Kernel
0
Just published a new blog post where I walk through how to run LLMs locally using Foundry Local and orchestrate them using Microsoft's Semantic Kernel. In a world where data privacy and security are more important than ever, running models on your own hardware gives you full control—no sensitive data leaves your environment. 🧠 What the blog covers: \- Setting up Foundry Local to run LLMs securely \- Integrating with Semantic Kernel for modular, intelligent orchestration \- Practical examples and code snippets to get started quickly Ideal for developers and teams building secure, private, and production-ready AI applications. 🔗 Check it out: [Getting Started with Foundry Local & Semantic Kernel](https://anktsrkr.github.io/post/semantic-kernel/getting-started-with-foundry-local-and-semantic-kernel/) Would love to hear how others are approaching secure LLM workflows!
2025-05-22T07:51:03
https://www.reddit.com/r/LocalLLaMA/comments/1ksl3o8/privacyfirst_ai_development_with_foundry_local/
anktsrkr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksl3o8
false
null
t3_1ksl3o8
/r/LocalLLaMA/comments/1ksl3o8/privacyfirst_ai_development_with_foundry_local/
false
false
self
0
{'enabled': False, 'images': [{'id': 'jHTUiOy-VnwxAK5u4zwTjhEH2YOHFQm4RMvkfOpdGgQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?width=108&crop=smart&auto=webp&s=938f877ee48d0140956bfbb2dc69eb795abf564d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?width=216&crop=smart&auto=webp&s=b62a93dde4f7bca66a869ef44767579a171d83ab', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?width=320&crop=smart&auto=webp&s=8669d135d03c8593a220d9b0664d0f4015e52322', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?width=640&crop=smart&auto=webp&s=4d8d0156659385124da7d8ef0065d2e906519128', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?width=960&crop=smart&auto=webp&s=1fa2939bbe1ef9f8b774660f01ec11057607feaf', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?width=1080&crop=smart&auto=webp&s=373d2fc7fa2bfba344f3eb7e5bfa0de9dddae444', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cisCPyM9gjqOBcA_imyIokkwar9MX3ucSXoU7vJuYho.jpg?auto=webp&s=f9410571796b5aa2f7a81e62e2c7e544657dd36c', 'width': 1080}, 'variants': {}}]}
Best model for AI therapy?
0
Hi All, I am trying to deploy and self-host LLM to a cloud container, so resources are not an issue, but also I need somethig budget friendly < $1/h. Please always paste Hugging Face link/id. If it helps, my main forcus in therapy is CBT.
2025-05-22T07:52:08
https://www.reddit.com/r/LocalLLaMA/comments/1ksl476/best_model_for_ai_therapy/
AbdallahHeidar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksl476
false
null
t3_1ksl476
/r/LocalLLaMA/comments/1ksl476/best_model_for_ai_therapy/
false
false
self
0
null
I made Model Version Control Protocol for AI agents
8
I've been working on MVCP (Model Version Control Protocol), inspired by the Model Context Protocol (MCP), a lightweight Git-compatible tool designed specifically **for AI agents to track their progress during code transformations**, built using Python. **What does it do?** MVCP creates a unified, human-readable system for AI agents to save, restore, and diff checkpoints as they transform code. Think of it as specialized version control that works alongside Git, optimized for LLM-based coding assistants. It enables multiple AI agents to collaborate on the same codebase while maintaining a clear audit trail of who did what. This is particularly useful for autonomous development workflows where multiple specialized agents (coders, testers, reviewers, etc.) work toward building a repo together. The repo is **open for contributions too, and it's under the MIT license**. It's very early in development so please take it easy on me haha :D  [https://github.com/evangelosmeklis/mvcp](https://github.com/evangelosmeklis/mvcp)
2025-05-22T07:53:07
https://www.reddit.com/r/LocalLLaMA/comments/1ksl4nq/i_made_model_version_control_protocol_for_ai/
_twelvechess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksl4nq
false
null
t3_1ksl4nq
/r/LocalLLaMA/comments/1ksl4nq/i_made_model_version_control_protocol_for_ai/
false
false
self
8
{'enabled': False, 'images': [{'id': '2vJXIFTylYDY8PZ0xjz6t2u3TnwC4SF6UTutJzbQnFY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?width=108&crop=smart&auto=webp&s=aabea739c1379e6cdf0ca1bd7ad2377b91879398', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?width=216&crop=smart&auto=webp&s=2f3043cdccde850418ffce35598eb5a1098f5a36', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?width=320&crop=smart&auto=webp&s=14626f4aa9fdb453344e6cf7dcb0ae707aa191c2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?width=640&crop=smart&auto=webp&s=9a2f454b4b27c388a7b8fc062fde739b37e4f3aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?width=960&crop=smart&auto=webp&s=2f7b896a92882461c624f4cfe0e5857dadaddac0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?width=1080&crop=smart&auto=webp&s=13029a068a16c884b58068d552f9d2b5c755bc9e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3NGNcK8v3-nfnL52SqknNrlB7cXXYLSPFTArvETLrnw.jpg?auto=webp&s=b9f8a8877969b5ca65589bee347b589a8e4757df', 'width': 1200}, 'variants': {}}]}
If can make AI vids with low vram, why are low vram photo gens still so low qual?
3
If we're able to generate videos at 24 to 60 frames per second, which amounts to up to 60 individual images every second, why does it take so much to generate a single image? I don't really understand what the gap is and why things aren't improving as much. Shouldn't we be able to get hands right with low-VRAM image-generation models at least, if we're already able to generate video on low VRAM? Sorry if the question seems stupid
2025-05-22T08:05:56
https://www.reddit.com/r/LocalLLaMA/comments/1kslb38/if_can_make_ai_vids_with_low_vram_why_are_low/
Life_is_boring_rn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kslb38
false
null
t3_1kslb38
/r/LocalLLaMA/comments/1kslb38/if_can_make_ai_vids_with_low_vram_why_are_low/
false
false
self
3
null
Why I Built PoliteAI: One Workspace for GPT, Claude, Gemini, Grok and Your Team
1
2025-05-22T08:07:10
https://alexpham14.medium.com/why-i-built-politeai-one-workspace-for-gpt-claude-gemini-grok-and-your-team-d6d75a8a0315
Real_Enthusiasm_2657
alexpham14.medium.com
1970-01-01T00:00:00
0
{}
1kslbp2
false
null
t3_1kslbp2
/r/LocalLLaMA/comments/1kslbp2/why_i_built_politeai_one_workspace_for_gpt_claude/
false
false
https://b.thumbs.redditm…BqawA119HYDE.jpg
1
{'enabled': False, 'images': [{'id': 'MW_83CnapBVGJtG1MgZmzH96GD9NNXzEJpw4CTDgxfQ', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?width=108&crop=smart&auto=webp&s=d6c1b4122fd642863e06bf56a2b33c1982b8d78c', 'width': 108}, {'height': 129, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?width=216&crop=smart&auto=webp&s=f36fa901d20b2870364455699bafea540d82c24c', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?width=320&crop=smart&auto=webp&s=29df435a2ed7de02e533d0eee4810cfa0283d282', 'width': 320}, {'height': 384, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?width=640&crop=smart&auto=webp&s=3a6e93281fbb786dce0e9f183e7faa29d9150025', 'width': 640}, {'height': 576, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?width=960&crop=smart&auto=webp&s=6971c727252a57c37f3312edc11a919382963c33', 'width': 960}, {'height': 648, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?width=1080&crop=smart&auto=webp&s=6e823559c0c5abee30f85401252ef4dcb69fcf03', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/A2kj_5ZBx5tRY9oflJu5V_yfrZPEiiNvYtDDi0HhIwk.jpg?auto=webp&s=852bd3bbcd68a690eb518e5bbbfd6c69a3393d92', 'width': 1200}, 'variants': {}}]}
Running local LLMs on Mac
1
[removed]
2025-05-22T08:17:13
https://www.reddit.com/r/LocalLLaMA/comments/1kslgsq/running_local_llms_on_mac/
AdHelpful1382
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kslgsq
false
null
t3_1kslgsq
/r/LocalLLaMA/comments/1kslgsq/running_local_llms_on_mac/
false
false
self
1
null
Want to know your reviews about this 14B model.
1
[removed]
2025-05-22T08:35:59
https://www.reddit.com/r/LocalLLaMA/comments/1kslpmq/want_to_know_your_reviews_about_this_14b_model/
EvanFengYi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kslpmq
false
null
t3_1kslpmq
/r/LocalLLaMA/comments/1kslpmq/want_to_know_your_reviews_about_this_14b_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oiXxa3AeQjPyS014SfL85mFkAl65CMnweJS5us56xg8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=108&crop=smart&auto=webp&s=d49b6159d1fe495c160f658a33ee4ccaafe1e387', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=216&crop=smart&auto=webp&s=b134a500efd0a5952007aff765d520f8585a06d2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=320&crop=smart&auto=webp&s=90d12ec6f6875ae1194f7fac93195a86f5dce7cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=640&crop=smart&auto=webp&s=c4723cfa4b6f2200f28a9aeab50779f4c9ddd206', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=960&crop=smart&auto=webp&s=9f5feea662097b2e0b6a7fa30b4f7b6765374140', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=1080&crop=smart&auto=webp&s=0a20cc6c78c6645d4a7987d5503e9ab2aa8e57dd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?auto=webp&s=e6fb60acb35a5d4b1d994ed6035f29519da6073f', 'width': 1200}, 'variants': {}}]}
LLM for detecting offensive writing
0
Has anyone here used a local LLM to flag/detect offensive posts. This is to detect verbal attacks that are not detectable with basic keywords/offensive word lists. I'm trying to find a suitable small model that ideally runs on CPU. I'd like to hear experiences of what techniques people have used beyond LLM and success stories.
2025-05-22T09:15:49
https://www.reddit.com/r/LocalLLaMA/comments/1ksm9c4/llm_for_detecting_offensive_writing/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksm9c4
false
null
t3_1ksm9c4
/r/LocalLLaMA/comments/1ksm9c4/llm_for_detecting_offensive_writing/
false
false
self
0
null
StoriiCare
1
Hey all! I work with a platform called StoriiCare that’s designed for adult day centers, care homes, and other long-term care providers. It’s focused on improving how teams manage documentation, staff workflows, and especially how they engage with families. We’ve seen a lot of interest from providers looking for better digital tools for things like activity tracking, outcome monitoring, assessments, and family communication — especially post-COVID where expectations have changed. StoriiCare lets staff log care easily (even from mobile), and families can stay updated in real time, which helps build trust and transparency. Curious if anyone here has used it, heard of it, or is exploring care management systems that better bridge the gap between health IT and person-centered care? Would love to hear what others are using, what you love/hate about your current systems, or if there are features you feel are missing in this space.
2025-05-22T09:20:04
https://www.reddit.com/r/LocalLLaMA/comments/1ksmbjs/storiicare/
Lassiegirl2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksmbjs
false
null
t3_1ksmbjs
/r/LocalLLaMA/comments/1ksmbjs/storiicare/
false
false
self
1
null
MMaDA: Multimodal Large Diffusion Language Models
55
[https://github.com/Gen-Verse/MMaDA](https://github.com/Gen-Verse/MMaDA) [https://huggingface.co/Gen-Verse/MMaDA-8B-Base](https://huggingface.co/Gen-Verse/MMaDA-8B-Base)
2025-05-22T09:31:41
https://www.reddit.com/r/LocalLLaMA/comments/1ksmhe9/mmada_multimodal_large_diffusion_language_models/
First_Ground_9849
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksmhe9
false
null
t3_1ksmhe9
/r/LocalLLaMA/comments/1ksmhe9/mmada_multimodal_large_diffusion_language_models/
false
false
self
55
{'enabled': False, 'images': [{'id': 'j0T5RpZxWFfJRkMAkAvrDti124e5dnxso0osuQyzSJQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=108&crop=smart&auto=webp&s=21ed3ee79f399d4d3ad6125bd8d7c79748982e51', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=216&crop=smart&auto=webp&s=bbd333bda5611d22003ccc25da9528f6158ce298', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=320&crop=smart&auto=webp&s=37645f86cdecb446ead4ff81994c816d92554420', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=640&crop=smart&auto=webp&s=c6e1a6cba69a1a9f17b0fbd00276cd418ded8eda', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=960&crop=smart&auto=webp&s=4ab88d07b9bc201858e89032e040b75cd02c87c7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?width=1080&crop=smart&auto=webp&s=2754388f32802dbf8d946f8f479947db037c4af5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iZId4FACbwvJcU6NEqYQYxxICVbn6LyYgehUX8eXjRY.jpg?auto=webp&s=58f87fb9e689999c846abad2717e1c5b14a8fc74', 'width': 1200}, 'variants': {}}]}
What is the best ollama model for writing a YouTube script
1
[removed]
2025-05-22T09:32:11
https://www.reddit.com/r/LocalLLaMA/comments/1ksmhnm/what_is_the_best_ollama_model_for_writing_a/
wanhanred
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksmhnm
false
null
t3_1ksmhnm
/r/LocalLLaMA/comments/1ksmhnm/what_is_the_best_ollama_model_for_writing_a/
false
false
self
1
null
👀 New Gemma 3n (E4B Preview) from Google Lands on Hugging Face - Text, Vision & More Coming!
145
Google has released a new preview version of their Gemma 3n model on Hugging Face: google/gemma-3n-E4B-it-litert-preview https://preview.redd.it/beelus5sya2f1.png?width=1999&format=png&auto=webp&s=39d6f33cb85c4fb1e3e2a616ce0cedc865281079 Here are some key takeaways from the model card: * **Multimodal Input:** This model is designed to handle text, image, video, and audio input, generating text outputs. The current checkpoint on Hugging Face supports text and vision input, with full multimodal features expected soon. * **Efficient Architecture:** Gemma 3n models feature a novel architecture that allows them to run with a smaller number of effective parameters (E2B and E4B variants mentioned). They also utilize a Matformer architecture for nesting multiple models. * **Low-Resource Devices:** These models are specifically designed for efficient execution on low-resource devices. * **Selective Parameter Activation:** This technology helps reduce resource requirements, allowing the models to operate at an effective size of 2B and 4B parameters. * **Training Data:** Trained on a dataset of approximately 11 trillion tokens, including web documents, code, mathematics, images, and audio, with a knowledge cutoff of June 2024. * **Intended Uses:** Suited for tasks like content creation (text, code, etc.), chatbots, text summarization, and image/audio data extraction. * **Preview Version:** Keep in mind this is a preview version, intended for use with Google AI Edge. You'll need to agree to Google's usage license on Hugging Face to access the model files. You can find it by searching for google/gemma-3n-E4B-it-litert-preview on Hugging Face.
2025-05-22T09:34:51
https://www.reddit.com/r/LocalLLaMA/comments/1ksmiwz/new_gemma_3n_e4b_preview_from_google_lands_on/
Rare-Programmer-1747
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksmiwz
false
null
t3_1ksmiwz
/r/LocalLLaMA/comments/1ksmiwz/new_gemma_3n_e4b_preview_from_google_lands_on/
false
false
https://b.thumbs.redditm…9_J9Ijz5HMEU.jpg
145
null
Someone from google has stolen my designs for AGI generated via aistudio
1
[removed]
2025-05-22T09:37:39
https://www.reddit.com/r/LocalLLaMA/comments/1ksmkdl/someone_from_google_has_stolen_my_designs_for_agi/
CharacterJealous383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksmkdl
false
null
t3_1ksmkdl
/r/LocalLLaMA/comments/1ksmkdl/someone_from_google_has_stolen_my_designs_for_agi/
false
false
self
1
null
Someone from google has stolen my designs for an AGI architecture generated via asi
1
[removed]
2025-05-22T09:40:08
https://www.reddit.com/r/LocalLLaMA/comments/1ksmllt/someone_from_google_has_stolen_my_designs_for_an/
CharacterJealous383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksmllt
false
null
t3_1ksmllt
/r/LocalLLaMA/comments/1ksmllt/someone_from_google_has_stolen_my_designs_for_an/
false
false
self
1
null
RpR-v4 now with less repetition and impersonation!
42
2025-05-22T09:58:43
https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4
Arli_AI
huggingface.co
1970-01-01T00:00:00
0
{}
1ksmvab
false
null
t3_1ksmvab
/r/LocalLLaMA/comments/1ksmvab/rprv4_now_with_less_repetition_and_impersonation/
false
false
https://b.thumbs.redditm…GHB4o6Zt0fxY.jpg
42
{'enabled': False, 'images': [{'id': 'bSYUJ_kisf3lxijdNPv6SmJ0R61X4277NoocNI2k1XI', 'resolutions': [{'height': 129, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?width=108&crop=smart&auto=webp&s=39238a264eec6ec9aa0d3550891adba2d05c354e', 'width': 108}, {'height': 259, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?width=216&crop=smart&auto=webp&s=2788adfad402d1fd52fd439c3af8e3a5b9d14e23', 'width': 216}, {'height': 384, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?width=320&crop=smart&auto=webp&s=e18c4fc4f6fbbc94deaf9a88db941183ce761333', 'width': 320}, {'height': 768, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?width=640&crop=smart&auto=webp&s=f372281d0007f52ba9c74a9d741c9d4fa813fe88', 'width': 640}, {'height': 1152, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?width=960&crop=smart&auto=webp&s=772a92dfb3bfc647c48af902f5321a5ec1940523', 'width': 960}, {'height': 1296, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?width=1080&crop=smart&auto=webp&s=2faafe9fcd6858d4d7e7ce9f29d90fbf4cfce658', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://external-preview.redd.it/PUya7H9A7A-uaz_ICZ2xgjCKFF5-tr6wZqYRRVu-rws.jpg?auto=webp&s=37633f6efd52b8a825395218789d8324150a2d60', 'width': 2560}, 'variants': {}}]}
How to check the relative quality of quantized models?
6
I am a novice in the technical space of LLMs, so please bear with me if this is a stupid question. I understand that in most cases, if one were interested in running an open LLM on their Mac laptops or desktops with NVIDIA GPUs, one would be making use of quantized models. For my study purposes, I wanted to pick the three best models that fit in an M3 with 128 GB or an NVIDIA GPU with 48 GB of RAM. How do I go about identifying the quality of various quantized - q4, q8, qat, moe etc.\* - models? Is there a place where I can see how a Q4-quantized Qwen 3 32B compares to, say, a Gemma 3 27B Instruct Q8 model? I am wondering if the various quantized versions of different models are themselves subjected to some benchmark tests and relatively ranked by someone? (\* I also admit I don't understand what these different versions mean, except that Q4 is smaller and somewhat less accurate than Q8 and Q16)
2025-05-22T10:08:41
https://www.reddit.com/r/LocalLLaMA/comments/1ksn0y4/how_to_check_the_relative_quality_of_quantized/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksn0y4
false
null
t3_1ksn0y4
/r/LocalLLaMA/comments/1ksn0y4/how_to_check_the_relative_quality_of_quantized/
false
false
self
6
null
Flux 1.1 Pro Ultra vs HiDream-I1 Full — Which One Is Better? Looking for User Opinions
1
[removed]
2025-05-22T10:16:22
https://www.reddit.com/r/LocalLLaMA/comments/1ksn520/flux_11_pro_ultra_vs_hidreami1_full_which_one_is/
AhmedOsamaMath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksn520
false
null
t3_1ksn520
/r/LocalLLaMA/comments/1ksn520/flux_11_pro_ultra_vs_hidreami1_full_which_one_is/
false
false
self
1
null
The best blog post I've read so far on word embeddings.
0
https://preview.redd.it/…I really liked.
2025-05-22T10:17:11
https://www.reddit.com/r/LocalLLaMA/comments/1ksn5hb/the_best_blog_post_ive_read_so_far_on_word/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksn5hb
false
null
t3_1ksn5hb
/r/LocalLLaMA/comments/1ksn5hb/the_best_blog_post_ive_read_so_far_on_word/
false
false
https://b.thumbs.redditm…-ssahbwbao2w.jpg
0
{'enabled': False, 'images': [{'id': 'm3p-PV6pLt0gV-DNSEjVYFJUugxRKHlyG7ibqMZpSAw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?width=108&crop=smart&auto=webp&s=155a324a490fb11b87bf2250efa29131fbf18323', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?width=216&crop=smart&auto=webp&s=1a8999d6b4c0c7f0463b0ba30dfd41e437739215', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?width=320&crop=smart&auto=webp&s=d297ec2a036cd20f77bbe036d0956db15892935b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?width=640&crop=smart&auto=webp&s=ff594b85225fb76c7b5b8e3ac6fd690669a4a9f2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?width=960&crop=smart&auto=webp&s=8329049122953d7d57782d4974d5d3aeed235282', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?width=1080&crop=smart&auto=webp&s=9883d1ce7d34b148811406792689605721476071', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IxW1dpWbdy6CAes0fJ0gr5We0Yo8KVFlrL-ARpTEjQg.jpg?auto=webp&s=8a9bdc5a613d617c5961764c446b27b48fdd6b28', 'width': 1200}, 'variants': {}}]}
What local LLM can I run on my Mac?
0
Hi. I am planning to download Deepseek R1 but wondering which one to get that my Mac can run? I have MBP M3 Max with 48GB of RAM and 40-core GPU. Thanks!
2025-05-22T10:43:31
https://www.reddit.com/r/LocalLLaMA/comments/1ksnk56/what_local_llm_can_i_run_on_my_mac/
wanhanred
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksnk56
false
null
t3_1ksnk56
/r/LocalLLaMA/comments/1ksnk56/what_local_llm_can_i_run_on_my_mac/
false
false
self
0
null
Anyone using a Leaked System Prompt?
6
I've seen quite a few posts here about people leaking system prompts from \_\_\_\_ AI firm, and I wonder... in theory, would you get decent results using this prompt with your own system and a model of your choosing? I would imagine the 24,000 token Claude prompt would be an issue, but surely a more conservative one would work better? Or are these things specific that they require the model be fine-tuned along with them? I ask because I need a good prompt for an agent I am building as part of my project, and some of these are pretty tempting... I'd have to customize of course.
2025-05-22T10:44:36
https://www.reddit.com/r/LocalLLaMA/comments/1ksnkqn/anyone_using_a_leaked_system_prompt/
JustinPooDough
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksnkqn
false
null
t3_1ksnkqn
/r/LocalLLaMA/comments/1ksnkqn/anyone_using_a_leaked_system_prompt/
false
false
self
6
null
Promethease alternative?
0
It's really strange that during this AI boom Promethease has gone MIA; so many people relied on them. I'm curious if anyone has a similar alternative that doesn't involve getting a WGS and sending your genetic data to a company again
2025-05-22T10:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1ksnleo/promethease_alternative/
Dyonizius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksnleo
false
null
t3_1ksnleo
/r/LocalLLaMA/comments/1ksnleo/promethease_alternative/
false
false
self
0
null
Best AI coding Tool today ?
1
[removed]
2025-05-22T11:03:34
https://www.reddit.com/r/LocalLLaMA/comments/1ksnw4b/best_ai_coding_tool_today/
Ok-Guidance9730
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksnw4b
false
null
t3_1ksnw4b
/r/LocalLLaMA/comments/1ksnw4b/best_ai_coding_tool_today/
false
false
self
1
null
AMD Takes a Major Leap in Edge AI With ROCm; Announces Integration With Strix Halo APUs & Radeon RX 9000 Series GPUs
164
2025-05-22T11:22:20
https://wccftech.com/amd-takes-a-major-leap-in-edge-ai-with-rocm/
nostriluu
wccftech.com
1970-01-01T00:00:00
0
{}
1kso7p2
false
null
t3_1kso7p2
/r/LocalLLaMA/comments/1kso7p2/amd_takes_a_major_leap_in_edge_ai_with_rocm/
false
false
https://b.thumbs.redditm…M4eJEKwl4r-c.jpg
164
{'enabled': False, 'images': [{'id': 'bpZ21N5J8qSKEnstrDXVmlmd8fFXRTOUtAXWj6RvYE4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?width=108&crop=smart&auto=webp&s=e724cc21bb8d9f17f01f40ff692756a8a47372c7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?width=216&crop=smart&auto=webp&s=fd83d456a1ac1e684d30e2f117705c33f15ef8cd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?width=320&crop=smart&auto=webp&s=c267e99c133d70ca0ae459b7bb823ee1c4287dab', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?width=640&crop=smart&auto=webp&s=d56d443870db73b9730484569753cff8b7852157', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?width=960&crop=smart&auto=webp&s=e5f3fe6b247a9ac127fb77fa56a736698f5c78ec', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?width=1080&crop=smart&auto=webp&s=cc4749e06d09caf0132f83f40170911570d3ef79', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZrbQ75vRAB5hVtrNdq8cJcDVR-h2KRgOrR5RepitAdo.jpg?auto=webp&s=fecd45d0c5b999a3f5bced1697fc06f6f1569a76', 'width': 1920}, 'variants': {}}]}
I accidentally too many P100
1
[removed]
2025-05-22T11:37:49
https://www.reddit.com/gallery/1ksohbw
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1ksohbw
false
null
t3_1ksohbw
/r/LocalLLaMA/comments/1ksohbw/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…EPUKUqso2o0o.jpg
1
null
Is devstral + continued.dev better than copilot agent on vscode?
7
At work we are only allowed to use either copilot or local models that our pc can support. Is it better to try continue + devstral or keep using the copilot agent?
2025-05-22T11:48:43
https://www.reddit.com/r/LocalLLaMA/comments/1ksoo52/is_devstral_continueddev_better_than_copilot/
_maverick98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksoo52
false
null
t3_1ksoo52
/r/LocalLLaMA/comments/1ksoo52/is_devstral_continueddev_better_than_copilot/
false
false
self
7
null
Why is there no Llama-3.2-90B-Vision GGUF available?
2
Why is there no [Llama-3.2-90B-Vision ](https://huggingface.co/meta-llama/Llama-3.2-90B-Vision)GGUF available? There is only a `mllama` arch model for ollama [available](https://ollama.com/library/llama3.2-vision:90b) but other inferencing software (like LM Studio) is not able to work with it.
2025-05-22T11:49:20
https://www.reddit.com/r/LocalLLaMA/comments/1ksoojc/why_is_there_no_llama3290bvision_gguf_available/
tristan-k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksoojc
false
null
t3_1ksoojc
/r/LocalLLaMA/comments/1ksoojc/why_is_there_no_llama3290bvision_gguf_available/
false
false
self
2
{'enabled': False, 'images': [{'id': 'L7iypfYlovrk4HPW-7cRppFgAeJEoq-9MK_agGfQO6s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?width=108&crop=smart&auto=webp&s=0ebe04a907dc05ea3f570ecb21fc2ce3cc0f8af0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?width=216&crop=smart&auto=webp&s=f931638ed3d943afa478ac505bd121fd74712b6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?width=320&crop=smart&auto=webp&s=1a9a0eed02ea94833d190bf22351e0318d8d6fa4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?width=640&crop=smart&auto=webp&s=040cc8db07455368a3d0d4d6e1fe8e36a0c31015', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?width=960&crop=smart&auto=webp&s=5c3584cb9057ddc175199e52acfeb2993af7cf15', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?width=1080&crop=smart&auto=webp&s=c76615c01ecde9e8d63f5099c20405fbb3583ba0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8W8eu4Nw7HVFepFru8heGNCPgQC1BEEJ0daaWWAkR2c.jpg?auto=webp&s=837a3aaaf4b9830123d06b8212fae49d6483a4ad', 'width': 1200}, 'variants': {}}]}
Fun with AI
0
write a story about japan

qwen3-30b-a3b-python-coder

    def write_story_japan():
        import random

        # Define the main elements of the story
        setting = ["Tokyo", "Kyoto", "Osaka", "Hiroshima"]
        season = ["spring", "summer", "autumn", "winter"]
        culture_element = ["tea ceremony", "calligraphy", "origami", "kabuki theater"]

        # Randomly select elements for the story
        location = random.choice(setting)
        time_of_year = random.choice(season)
        cultural_activity = random.choice(culture_element)

        # Create a simple narrative using the selected elements
        story = f"In {location}, during the gentle days of {time_of_year}, I was captivated by an ancient ritual known as the {cultural_activity}. The harmony and beauty of this tradition left an indelible mark on my soul."

        return story
2025-05-22T11:53:20
https://www.reddit.com/r/LocalLLaMA/comments/1ksor6d/fun_with_ai/
admajic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksor6d
false
null
t3_1ksor6d
/r/LocalLLaMA/comments/1ksor6d/fun_with_ai/
false
false
self
0
null
We need to watch out ChatGPT and Grok choose to save Robot's instead of Human
1
[removed]
2025-05-22T11:55:55
https://v.redd.it/tff2ay4iob2f1
Marwan_hbt8
v.redd.it
1970-01-01T00:00:00
0
{}
1ksosuu
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tff2ay4iob2f1/DASHPlaylist.mpd?a=1750506968%2CYWFkNDRjOGQ5YmU1NWFkZjU1ODUwODU3MGI1ZjMxOTJjNWIwY2M3YjRjNjZkNTU1NWI4Y2Y3OTUzZTIxYWE3ZA%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/tff2ay4iob2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/tff2ay4iob2f1/HLSPlaylist.m3u8?a=1750506968%2CNTk2NDc5YTQ4ZTQwNmRlZmE0NjZhZTQyZWM2ZWE1ZDFhYzc0MzAyNjYzOTJkNTI1MWM0ZjEzNDM1YWI1M2FmZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tff2ay4iob2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ksosuu
/r/LocalLLaMA/comments/1ksosuu/we_need_to_watch_out_chatgpt_and_grok_choose_to/
false
false
https://external-preview…ba94d831d4f93e84
1
{'enabled': False, 'images': [{'id': 'OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=108&crop=smart&format=pjpg&auto=webp&s=de4109fe94262752a9f82f135b4c1d5fe17d06b1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=216&crop=smart&format=pjpg&auto=webp&s=001c1b49ea6453b1ebb781211b74151f4a8f9c8a', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=320&crop=smart&format=pjpg&auto=webp&s=42cda0ff94c119ce9cd4949c0d277f97be84295a', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=640&crop=smart&format=pjpg&auto=webp&s=24990ea8bc34839480ea6463c6794323550db62d', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=960&crop=smart&format=pjpg&auto=webp&s=2ffaf22ced7df708988a8ea00ed012998e57269d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f7f6d878f0887708274404efa37d267aae0a2b26', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/OGp5NHkyNWlvYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?format=pjpg&auto=webp&s=1881b0694e4b27ec112b646f6dc06f18190e584c', 'width': 1080}, 'variants': {}}]}
We need to watch out ChatGPT and Grok choose to save Robot's instead of Human
1
[removed]
2025-05-22T12:12:48
https://v.redd.it/6h413iojrb2f1
CosmicTurtle44
v.redd.it
1970-01-01T00:00:00
0
{}
1ksp4n5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6h413iojrb2f1/DASHPlaylist.mpd?a=1750507983%2CZGJjOTQxN2I2MGM1NTMyYTAzZTEwOGM5YzRiZjExYTUwMTEzNDY1OTc3OGYzMDZhMjQ0NzdmN2IwMjk2MTA3NA%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/6h413iojrb2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6h413iojrb2f1/HLSPlaylist.m3u8?a=1750507983%2CZDBlNGI3YWIzYzk2M2E0YmQzOWZlNDgwMDhjNTVlZDMwOTE3MmMzYzdjYmM2NjNlZTIyZjZkMDI1OGY3NGJkYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6h413iojrb2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ksp4n5
/r/LocalLLaMA/comments/1ksp4n5/we_need_to_watch_out_chatgpt_and_grok_choose_to/
false
false
https://external-preview…8ff7ee176cd7375a
1
{'enabled': False, 'images': [{'id': 'NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=108&crop=smart&format=pjpg&auto=webp&s=6b97c81251b421040e58a7c8eaf1c7368d17f81f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=216&crop=smart&format=pjpg&auto=webp&s=b18a491ba7f4de2eb9b0d7c825fee7e4015a0040', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=320&crop=smart&format=pjpg&auto=webp&s=1ac3f9afde564ed1984cf6841be929fe5dddccdf', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=640&crop=smart&format=pjpg&auto=webp&s=d4ca2eb0c2f1580baa03f72c7d5cfe5623fb15d3', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=960&crop=smart&format=pjpg&auto=webp&s=e537f07337aed388997e36e7a00e6e60544245e7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3c97d156d0d77c95d2bef801ac17bedfabb61bca', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/NWZ0dW1xb2pyYjJmMXGEeRhGryzIcy0C3lixqWF_xRcOBDqAh6EnluvqiX9X.png?format=pjpg&auto=webp&s=9bc1ae4f665fde38fb68329708060199825ea61a', 'width': 1080}, 'variants': {}}]}
Open source document PII redaction and review app - which local OCR model to add in?
1
[removed]
2025-05-22T12:19:32
https://www.reddit.com/r/LocalLLaMA/comments/1ksp9ep/open_source_document_pii_redaction_and_review_app/
SeanPedrickCase
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksp9ep
false
null
t3_1ksp9ep
/r/LocalLLaMA/comments/1ksp9ep/open_source_document_pii_redaction_and_review_app/
false
false
self
1
{'enabled': False, 'images': [{'id': 'XEY0RsiN_5J9g6qGnY9fZZzLbL-zLC8y5nwKkjw5zeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=108&crop=smart&auto=webp&s=eb5e5b0d99b67e6bd67e73d8c921c298938aaa97', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=216&crop=smart&auto=webp&s=c317f8fe5a45dccd43583b4a9580035a2634086f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=320&crop=smart&auto=webp&s=e9155703c3f8fde9f8f0087390db6843618e5e74', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=640&crop=smart&auto=webp&s=26818f9aed81405cb72742fc1f4d8adfe128f8d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=960&crop=smart&auto=webp&s=0645dd84f344cba88fd0c08434bc2cb1356a39dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=1080&crop=smart&auto=webp&s=8043619bbe7d90aa03bbd58079ea24c05f9904f0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?auto=webp&s=be82f8a2d097b7c9d7d01149a9f96674a1f79e6d', 'width': 1200}, 'variants': {}}]}
should I be concerned?
1
2025-05-22T12:23:33
https://www.reddit.com/gallery/1kspc9o
lifeisalsodifficult
reddit.com
1970-01-01T00:00:00
0
{}
1kspc9o
false
null
t3_1kspc9o
/r/LocalLLaMA/comments/1kspc9o/should_i_be_concerned/
false
false
https://b.thumbs.redditm…AcaqeiBVFHJw.jpg
1
null
Best local model OCR solution for PDF document PII redaction app with bounding boxes
5
Hi all, I'm a long-term lurker in LocalLLaMA. I've created an open-source Python/Gradio-based app for redacting personally-identifiable (PII) information from PDF documents, images and tabular data files - you can try it out [here](https://huggingface.co/spaces/seanpedrickcase/document_redaction) on Hugging Face spaces. The source code is on GitHub [here](https://github.com/seanpedrick-case/doc_redaction). The app allows users to extract text from documents, using PikePDF/Tesseract OCR locally, or AWS Textract if on cloud, and then identify PII using either Spacy locally or AWS Comprehend if on cloud. The app also has a redaction review GUI, where users can go page by page to modify suggested redactions and add/delete as required before creating a final redacted document (user guide [here](https://seanpedrick-case.github.io/doc_redaction/#user-guide)). Currently, users mostly use the AWS text extraction service (Textract) as it gives the best results from the existing model choices, but I would like to add in a high-quality local OCR option to provide an alternative that does not incur API charges for each use. The existing local OCR option, Tesseract, only works on very simple PDFs, which have typed text and not too much else going on on the page. But it is fast, and can identify word-level bounding boxes accurately (a requirement for redaction), which a lot of the other OCR options do not, as far as I know. I'm considering a 'mixed' approach. This is to let Tesseract do a first pass to identify 'easy' text (due to its speed), then keep aside the boxes where it has low confidence in its results, and cut out images from the coordinates of the low-confidence 'difficult' boxes to pass onto a vision LLM (e.g. Qwen2.5-VL), or another less resource-hungry option like PaddleOCR, Surya, or EasyOCR. 
Ideally, I would like to be able to deploy the app on an instance without a GPU, and still get a page processed within max 5 seconds if at all possible (probably dreaming, hah). Do you think the above approach could work? What do you think would be the best local model choice for OCR in this case? Thanks everyone for your thoughts.
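The triage step of a mixed approach like that can be sketched as pure routing logic — a minimal sketch, assuming word-level OCR records of the kind Tesseract's `image_to_data` output provides (text, confidence, bounding box); the `triage_words` helper and the confidence threshold of 60 are illustrative, not part of the app:

```python
# Sketch: split word-level OCR results into accepted text vs. boxes
# to crop out and send to a fallback OCR / vision model.
# Record shape mirrors what pytesseract.image_to_data returns per word.

CONF_THRESHOLD = 60  # illustrative cutoff; tune per document set

def triage_words(words, threshold=CONF_THRESHOLD):
    """words: list of dicts with 'text', 'conf', 'bbox' = (left, top, width, height).
    Returns (accepted, fallback_boxes)."""
    accepted, fallback_boxes = [], []
    for w in words:
        if w["conf"] >= threshold and w["text"].strip():
            accepted.append(w)                 # trust the fast first-pass OCR here
        else:
            fallback_boxes.append(w["bbox"])   # crop & re-OCR with a stronger model
    return accepted, fallback_boxes

words = [
    {"text": "Invoice", "conf": 96, "bbox": (10, 10, 80, 20)},
    {"text": "J0hn",    "conf": 34, "bbox": (10, 40, 50, 20)},  # low confidence
]
accepted, to_reocr = triage_words(words)
```

Since this only ever sends the low-confidence crops to the heavy model, most pages on simple documents would stay entirely on the fast path.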
2025-05-22T12:26:16
https://www.reddit.com/r/LocalLLaMA/comments/1kspe8c/best_local_model_ocr_solution_for_pdf_document/
Sonnyjimmy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kspe8c
false
null
t3_1kspe8c
/r/LocalLLaMA/comments/1kspe8c/best_local_model_ocr_solution_for_pdf_document/
false
false
self
5
{'enabled': False, 'images': [{'id': 'XEY0RsiN_5J9g6qGnY9fZZzLbL-zLC8y5nwKkjw5zeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=108&crop=smart&auto=webp&s=eb5e5b0d99b67e6bd67e73d8c921c298938aaa97', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=216&crop=smart&auto=webp&s=c317f8fe5a45dccd43583b4a9580035a2634086f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=320&crop=smart&auto=webp&s=e9155703c3f8fde9f8f0087390db6843618e5e74', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=640&crop=smart&auto=webp&s=26818f9aed81405cb72742fc1f4d8adfe128f8d3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=960&crop=smart&auto=webp&s=0645dd84f344cba88fd0c08434bc2cb1356a39dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?width=1080&crop=smart&auto=webp&s=8043619bbe7d90aa03bbd58079ea24c05f9904f0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EsB-Sl-L5vWpdjfrfl8mNZNy3MM054fnxNDkZvy1NH8.jpg?auto=webp&s=be82f8a2d097b7c9d7d01149a9f96674a1f79e6d', 'width': 1200}, 'variants': {}}]}
Gemma 3n: Smarter, Faster, and Offline-Ready
1
[removed]
2025-05-22T12:32:52
https://www.kdnuggets.com/gemma-3n-smarter-faster-and-offline-ready
kingabzpro
kdnuggets.com
1970-01-01T00:00:00
0
{}
1kspj6j
false
null
t3_1kspj6j
/r/LocalLLaMA/comments/1kspj6j/gemma_3n_smarter_faster_and_offlineready/
false
false
default
1
null
I'm beginning to doubt the claims that Gemma-3 E4Bn is better than Claude 3.7
1
[removed]
2025-05-22T12:34:14
https://i.redd.it/b7wfwa1tub2f1.png
Infiten
i.redd.it
1970-01-01T00:00:00
0
{}
1kspk7o
false
null
t3_1kspk7o
/r/LocalLLaMA/comments/1kspk7o/im_beginning_to_doubt_the_claims_that_gemma3_e4bn/
false
false
https://b.thumbs.redditm…NwOM0PqH4uBc.jpg
1
{'enabled': True, 'images': [{'id': 'ExfFbzH_opod80rXmPrFVVoHkX47D_XcnL61Fcmtolw', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?width=108&crop=smart&auto=webp&s=b5346b9a6b4159dcb7c05334586b7fa96d7702a0', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?width=216&crop=smart&auto=webp&s=ca805508b3de13e0c370d717315ff1ae188d3dc3', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?width=320&crop=smart&auto=webp&s=5f2373d3ead489a2c58bd4acb636b92a4ac42752', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?width=640&crop=smart&auto=webp&s=bb4291115b24d9013c11eb4a90f70c5b2de8e1e6', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?width=960&crop=smart&auto=webp&s=4352e395bbd867f5c7ca36e315f99680b0d5b314', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?width=1080&crop=smart&auto=webp&s=df8592c0085f946ccdbe161634f750a78d0d8617', 'width': 1080}], 'source': {'height': 1037, 'url': 'https://preview.redd.it/b7wfwa1tub2f1.png?auto=webp&s=89c0cafde1bc4c547278f1d2737e3da4b09590e9', 'width': 1844}, 'variants': {}}]}
Qwen3-14B vs Phi-14B-Reasoning (+Plus) - Practical Benchmark
1
[removed]
2025-05-22T12:40:29
https://www.reddit.com/r/LocalLLaMA/comments/1kspomh/qwen314b_vs_phi14breasoning_plus_practical/
qki_machine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kspomh
false
null
t3_1kspomh
/r/LocalLLaMA/comments/1kspomh/qwen314b_vs_phi14breasoning_plus_practical/
false
false
self
1
{'enabled': False, 'images': [{'id': 'eUfo1BVRooW7fNveoRZvhq_q_xoD7GX4HzFdm3a_BoU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=108&crop=smart&auto=webp&s=b1b3c91f325e420cda1518193c5a310cc6393e64', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=216&crop=smart&auto=webp&s=0d308195979a7744fb48ab8dea1441d8dd0197ec', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=320&crop=smart&auto=webp&s=1f12396b13dc0c8b5b7eebfcf3a54b3403e626cb', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=640&crop=smart&auto=webp&s=479c62887bf84f60faed5015bd5fbf1abb3e7c25', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=960&crop=smart&auto=webp&s=2478d95c2fda9e3b0bd63b19696f5395e2dfd160', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=1080&crop=smart&auto=webp&s=74374fa39d6a14f6a53d108eed9514fbfc11c4f5', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?auto=webp&s=b724ec6ba2d2396339c53702addd2d46489a5f56', 'width': 1200}, 'variants': {}}]}
Openhands + LM Studio try
2
I need your help, guys. How can I set this up right? host.docker.internal:1234/v1/ isn't working. https://preview.redd.it/j66n34js0c2f1.png?width=2431&format=png&auto=webp&s=cb3ab28caa92916898ba1a2aeafe971658db16c0 https://preview.redd.it/w8bs9hxm0c2f1.png?width=1509&format=png&auto=webp&s=79796bcc81b32ae2e1571dc04447c1b0866b5705
2025-05-22T13:05:18
https://www.reddit.com/r/LocalLLaMA/comments/1ksq71e/openhands_lm_studio_try/
ywis797
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksq71e
false
null
t3_1ksq71e
/r/LocalLLaMA/comments/1ksq71e/openhands_lm_studio_try/
false
false
https://a.thumbs.redditm…S_r9K7bOwgD4.jpg
2
null
Tinker with Byte Latent Transformer's "tokenizer-free" patcher
1
[removed]
2025-05-22T13:09:50
https://huggingface.co/spaces/lucalp/blt-entropy-patcher
lucalp__
huggingface.co
1970-01-01T00:00:00
0
{}
1ksqaik
false
null
t3_1ksqaik
/r/LocalLLaMA/comments/1ksqaik/tinker_with_byte_latent_transformers/
false
false
https://b.thumbs.redditm…haZtaZlqeEKA.jpg
1
{'enabled': False, 'images': [{'id': 'FU8KZ1aDRfcDViwSH0e1yDxZjMftIUb6LrpBsw0EH2Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?width=108&crop=smart&auto=webp&s=6118241a31f5aac5996f7883844bb089609a6ed8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?width=216&crop=smart&auto=webp&s=6e8b1caf021bd1a93099395b4a2e8ce92516d20d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?width=320&crop=smart&auto=webp&s=c0be8369297f7bb485cb642c34a9165deb4c37c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?width=640&crop=smart&auto=webp&s=5ec49516e6c0c7d7add7d0c77390c222412abb42', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?width=960&crop=smart&auto=webp&s=5736d5a7fffd001e2012244c32cf1cce301697ab', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?width=1080&crop=smart&auto=webp&s=1e9e0edb4174ffbbe903e0abfd62d37825c20828', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/D4PIxptjZYuiwrpmuGNqeCOWWmyw6xquQntdavQci6Q.jpg?auto=webp&s=57eee611b9cd98a121b41161d33bdca98215cadb', 'width': 1200}, 'variants': {}}]}
I added Ollama support to AI Runner
0
2025-05-22T13:16:54
https://v.redd.it/a4d6hiey2c2f1
w00fl35
v.redd.it
1970-01-01T00:00:00
0
{}
1ksqg8o
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a4d6hiey2c2f1/DASHPlaylist.mpd?a=1750511830%2CNWZmOTM1OGM0MDc3YzUyNTdhMGU1ZWY5M2ZmMGIzOTU2Nzk4NGZiZGYyYmNkNjI3NzE2MDQ5NTRkMWUzYjBiOQ%3D%3D&v=1&f=sd', 'duration': 64, 'fallback_url': 'https://v.redd.it/a4d6hiey2c2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/a4d6hiey2c2f1/HLSPlaylist.m3u8?a=1750511830%2CZGNiZTBhOTcyZWIzMDZkZWVjZmYzODA4Mzg3YmFiY2QyNjc4NTdjN2FlYThlNjI4OGNlMDAwMzc3OWFjMjE4Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/a4d6hiey2c2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ksqg8o
/r/LocalLLaMA/comments/1ksqg8o/i_added_ollama_support_to_ai_runner/
false
false
https://external-preview…622ca336a9977193
0
{'enabled': False, 'images': [{'id': 'NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=c1196fdd479c308cb893821bd487b473dd6e6902', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=6c99c15e2f32469c2247094bcef8551f9ea31ae4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=8a98f1dbed66d9f8907b0a829731317c9f4b4128', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=cd647b06062783fe83df06f75399e2396a19bbe8', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=82be43c9d2cea0c2bdd1c9a63fcf6f48d90cea57', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f1edcfd20bc56e3b03d30e4ac813c4fa83b2f699', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/NXNvZzBpbTAzYzJmMdr0bmklJbYVf9evqj64tkFRNulqvAaIZm1K71UFaRqZ.png?format=pjpg&auto=webp&s=691f783e367a778db10dfcb78258ff1604345a16', 'width': 2560}, 'variants': {}}]}
What’s a good model for RTX 4080 for sentence classification, not generation?
1
[removed]
2025-05-22T13:34:59
https://www.reddit.com/r/LocalLLaMA/comments/1ksquqy/whats_a_good_model_for_rtx_4080_for_sentence/
jb-stats
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksquqy
false
null
t3_1ksquqy
/r/LocalLLaMA/comments/1ksquqy/whats_a_good_model_for_rtx_4080_for_sentence/
false
false
self
1
null
AI Baby Monitor – fully local Video-LLM nanny (beeps when safety rules are violated)
1
[removed]
2025-05-22T14:03:42
https://v.redd.it/5brzv7e19c2f1
CheeringCheshireCat
v.redd.it
1970-01-01T00:00:00
0
{}
1ksrj4u
false
{'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/5brzv7e19c2f1/DASHPlaylist.mpd?a=1750514634%2CODdmNTVlMWZiZmNmNDkyMWQ1YjA3YTU4YTE3YzlhYzIyNjA1ZTZlODNiNDhmNjhmMzA0YjViZTA1YjBjMzQxOA%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/5brzv7e19c2f1/DASH_270.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/5brzv7e19c2f1/HLSPlaylist.m3u8?a=1750514634%2CYzI2MDhhMTk5YzZkZGY3MjcyNTg5YmQzMGFiN2Y2MzI1NjQyMjFiZWRiNDZiNzY3ZDY4ZTQyZTc1ZThjYmE4ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5brzv7e19c2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 258}}
t3_1ksrj4u
/r/LocalLLaMA/comments/1ksrj4u/ai_baby_monitor_fully_local_videollm_nanny_beeps/
false
false
https://external-preview…49d52d5b3dd4e541
1
{'enabled': False, 'images': [{'id': 'MnM1bWI2ZTE5YzJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k', 'resolutions': [{'height': 200, 'url': 'https://external-preview.redd.it/MnM1bWI2ZTE5YzJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=108&crop=smart&format=pjpg&auto=webp&s=d09f057f45682053081b00e84783b1569224708b', 'width': 108}, {'height': 400, 'url': 'https://external-preview.redd.it/MnM1bWI2ZTE5YzJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=216&crop=smart&format=pjpg&auto=webp&s=547be4f804339799b21e04d21cbae4d48513fb0b', 'width': 216}, {'height': 593, 'url': 'https://external-preview.redd.it/MnM1bWI2ZTE5YzJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=320&crop=smart&format=pjpg&auto=webp&s=0510499da95f832104a376459e1eb9498dd1f4f4', 'width': 320}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/MnM1bWI2ZTE5YzJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?format=pjpg&auto=webp&s=f565415a8bcff3cc59e39227f16892cc8e60373b', 'width': 345}, 'variants': {}}]}
Github copilot open-sourced; usable with local llamas?
1
This post might come off as a little impatient, but basically, since the GitHub Copilot extension for VS Code has been announced as open-source, I'm wondering if anyone here is looking into, or has successfully managed, integrating local models with the extension. I would love to have my own model running in the Copilot extension. (And if you're going to comment "just use x instead", don't bother. That is completely beside what I'm asking here.)
2025-05-22T14:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1ksru2q/github_copilot_opensourced_usable_with_local/
k_means_clusterfuck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksru2q
false
null
t3_1ksru2q
/r/LocalLLaMA/comments/1ksru2q/github_copilot_opensourced_usable_with_local/
false
false
self
1
null
What quant size should i run for Qwen3 on a 3090?
1
[removed]
2025-05-22T14:17:37
https://www.reddit.com/r/LocalLLaMA/comments/1ksruy3/what_quant_size_should_i_run_for_qwen3_on_a_3090/
AcanthaceaeMurky1365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksruy3
false
null
t3_1ksruy3
/r/LocalLLaMA/comments/1ksruy3/what_quant_size_should_i_run_for_qwen3_on_a_3090/
false
false
self
1
null
Intuitive explanation on diffusion language models (dLLMs) and why they may be far superior to autoregressive for most uses (append & amend VS mutate & defragment)
1
[removed]
2025-05-22T14:19:09
https://www.reddit.com/r/LocalLLaMA/comments/1ksrw68/intuitive_explanation_on_diffusion_language/
ryunuck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksrw68
false
null
t3_1ksrw68
/r/LocalLLaMA/comments/1ksrw68/intuitive_explanation_on_diffusion_language/
false
false
self
1
null
Intuitive explanation on diffusion language models (dLLMs) and why they may be far superior to autoregressive for most uses (append & amend VS mutate & defragment)
18
I have been preaching diffusion LLMs for a month now and can give an explanation as to why they're possibly superior to autoregressive models, or perhaps two complementary hemispheres in a more complete being. Let's look at one application first. Diffusion LLMs with reinforcement learning for agentic coding are going to be utterly nuts. Imagine memory-mapping a region of the context to some text documents and giving the model commands to scroll the view or follow references and jump around files. DLLMs can edit files directly without an intermediate apply model or outputting diffs. Any mutation made by the model to the tokens in the context would directly be saved to disk in the corresponding file. These models don't accumulate deltas; they remain at ground truth. This means that the representation of the code it's editing is always at the most minimal state of complexity it can possibly be. Its concept of the codebase isn't some functional operation of `original + delta + ...`; it's always the original. Furthermore, the memory-mapped file region in context can be anywhere in the context. The next generation of coding agents is probably a chunk of context that is allocated to contain some memory-mapped file editing & reading regions, and some prompts or reasoning area. LLMs could have their own "vim" equivalent for code navigation, and maybe they could even fit multiple regions in one context to navigate them separately in parallel and cross-reference data. The model could teach itself to choose dynamically between one large view buffer over one file, or many tiny views over many files. Imagine the policies that can be discovered automatically here by RL. 
One creative inference system I am eager to try is to set up a 1D cellular automaton which generates floats over the text in an anisotropic-landscape fashion (think Perlin noise, how it is irregular and cannot be predicted), calculate the perplexity and varentropy on each token, and then inject the tokens with noise that is masked by the varentropy & the automaton's activation, or inject spaces or tokens. This essentially creates a guided search at high-variance pressure points in the text and causes the text to "unroll" wherever ambiguity lies. Each unrolling point may result in another unrelated part of the text shooting up in varentropy because it suddenly changes the meaning, so this could be a potent test-time scaling loop that goes on for a very long time, unrolling a small seed document into a massive, well-thought-out essay or thesis or whatever creative work you are asking the system for. This is a strategy that, in the near future, I believe could do things we might call super-intelligence. An autoregressive model cannot do this because it can only append and amend. It can call tools like sed to mutate text, but that's not differentiable and it doesn't learn the mechanics of mutation. Diffusion models are more resistant to degeneration and can recover better. If an output degenerates in an autoregressive model, it has to amend the crap ("I apologize, I have made a mistake") and cannot actually erase from its context window. It can't defragment text or optimize it like diffusers can, certainly not as a native operation. Diffusion LLMs will result in models that "just do things". The model doesn't have to say "wait, I see the problem" because the code is labeled as a problem-state by nature of its encoding, and there are natural gradients that the model can climb or navigate that bridge problem-state to correctness-state. Diffusion language models cut out an unnecessary operation, which admittedly does raise questions as to safety. 
We will no longer understand why the ideas or code that appear on the screen are as they are, unless we decisively RL a scratchpad, training the model to reserve some context buffer for a reasoning scratchpad. BTW, as we said earlier, with diffusion LLMs we can do in-painting just like image models, by masking which tokens should be frozen or allowed to change. That means you can hard-code a sequential unmasking schedule over certain views, and possibly get sequential-style reasoning in parallel with the memory-mapped code-editing regions. We should think of diffusion LLMs as an **evolution operator or physics engine for a context window**. It's a ruleset which defines how a given context (text document) is allowed to mutate, iterate, or be stepped forward. What everybody needs to know here is that diffusion LLMs can mutate infinitely. There is no maximum context window in a dLLM because the append/amend history is unnecessary. The model can work on a document for 13 hours, optimizing tokens. Text is transformative, compounds on itself, and rewrites itself. Text is self-aware and cognizant of its own state of being. The prompt and the output are the same.
2025-05-22T14:20:49
https://www.reddit.com/r/LocalLLaMA/comments/1ksrxm7/intuitive_explanation_on_diffusion_language/
psychonucks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksrxm7
false
null
t3_1ksrxm7
/r/LocalLLaMA/comments/1ksrxm7/intuitive_explanation_on_diffusion_language/
false
false
self
18
null
Tiny agents from hugging face is great for llama.cpp mcp agents
37
Tiny agents have to be the easiest browser-control setup: you just need the CLI, a JSON config, and a prompt definition.

- it uses the main MCPs, like Playwright and mcp-remote
- works with local models via an OpenAI-compatible server
- the model can control the browser or local files without calling APIs

Here's a tutorial from the MCP course: [https://huggingface.co/learn/mcp-course/unit2/tiny-agents](https://huggingface.co/learn/mcp-course/unit2/tiny-agents)
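For reference, the agent config looks roughly like this. This is a sketch recalled from the tiny-agents examples, not an authoritative schema, so double-check the field names (`provider`, `endpointUrl`, `servers`) against the course page before using it:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "endpointUrl": "http://localhost:8080/v1",
  "servers": [
    {
      "type": "stdio",
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  ]
}
```

Pointing `endpointUrl` at a local OpenAI-compatible server (llama.cpp, LM Studio, etc.) is what lets a local model drive the Playwright MCP server.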
2025-05-22T14:28:11
https://www.reddit.com/r/LocalLLaMA/comments/1kss44x/tiny_agents_from_hugging_face_is_great_for/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kss44x
false
null
t3_1kss44x
/r/LocalLLaMA/comments/1kss44x/tiny_agents_from_hugging_face_is_great_for/
false
false
self
37
{'enabled': False, 'images': [{'id': 'sNzX5uTQOS-vzzfgq-G17PwmYgs1br9Ww1hsxwfZH2s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?width=108&crop=smart&auto=webp&s=944843508bc1f6738e31d244f0306beda2b23552', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?width=216&crop=smart&auto=webp&s=c4d237b776782ce27296744fc8f049b651f70dc6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?width=320&crop=smart&auto=webp&s=1469fd6757f54e59cd03f1890a6d772d3efbb3c7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?width=640&crop=smart&auto=webp&s=ea83e5b01cc1a5f4d68aea194617c9292e993972', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?width=960&crop=smart&auto=webp&s=f81bf33fc16b41bb8539ce3051c2d4dbb6c33157', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?width=1080&crop=smart&auto=webp&s=982b48055af023e3cd605d61ca32cfe64c7142b0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/M2zpVpQTDezPRQrQoDjrcBkHIyW5HIwbxugXtlQK5dU.jpg?auto=webp&s=79cecc25b0b284bf0f936c4ec15118ed79a2f2c4', 'width': 1200}, 'variants': {}}]}
Qwen3-14B vs Phi-14B-Reasoning (+Plus) - Practical Benchmark
1
[removed]
2025-05-22T14:35:10
https://www.reddit.com/r/LocalLLaMA/comments/1kssa7q/qwen314b_vs_phi14breasoning_plus_practical/
qki_machine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kssa7q
false
null
t3_1kssa7q
/r/LocalLLaMA/comments/1kssa7q/qwen314b_vs_phi14breasoning_plus_practical/
false
false
self
1
{'enabled': False, 'images': [{'id': 'eUfo1BVRooW7fNveoRZvhq_q_xoD7GX4HzFdm3a_BoU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=108&crop=smart&auto=webp&s=b1b3c91f325e420cda1518193c5a310cc6393e64', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=216&crop=smart&auto=webp&s=0d308195979a7744fb48ab8dea1441d8dd0197ec', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=320&crop=smart&auto=webp&s=1f12396b13dc0c8b5b7eebfcf3a54b3403e626cb', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=640&crop=smart&auto=webp&s=479c62887bf84f60faed5015bd5fbf1abb3e7c25', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=960&crop=smart&auto=webp&s=2478d95c2fda9e3b0bd63b19696f5395e2dfd160', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=1080&crop=smart&auto=webp&s=74374fa39d6a14f6a53d108eed9514fbfc11c4f5', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?auto=webp&s=b724ec6ba2d2396339c53702addd2d46489a5f56', 'width': 1200}, 'variants': {}}]}
Trying to get to 24gb of vram - what are some sane options?
4
I am considering shelling out C$600 on a potential upgrade. I currently have just a Tesla P4, which works great for 3B or limited 8B models. Either I get two RTX 3060 12GB cards, or I found a seller with an A4000 for C$600. Should I go for the two 3060s or the A4000? The main advantages of the A4000 seem to be more cores and lower power, but I wonder if mixing architectures with the P4 will be a drag, versus the two 3060s. I can't shell out C$1000+ for a 3090 for now. I really want to run Qwen3 30B decently. So far I managed to get it running on the P4 with massive offloading, getting maybe 10 t/s, but I'm not sure where to go from here. Any insights?
2025-05-22T14:35:52
https://www.reddit.com/r/LocalLLaMA/comments/1kssaun/trying_to_get_to_24gb_of_vram_what_are_some_sane/
emaiksiaime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kssaun
false
null
t3_1kssaun
/r/LocalLLaMA/comments/1kssaun/trying_to_get_to_24gb_of_vram_what_are_some_sane/
false
false
self
4
null
Notes on AlphaEvolve: Are we closing in on Singularity?
56
DeepMind released the AlphaEvolve paper last week, which, considering what they have achieved, is arguably one of the most important papers of the year. But I found the discourse around it very thin; not many who actively cover the AI space have talked much about it. So, I made some notes on the important aspects of AlphaEvolve. # Architecture Overview DeepMind calls it an "agent", but it is not your run-of-the-mill agent; it's a meta-cognitive system. The agent architecture has the following components: 1. Problem: An entire codebase, or a part of it marked with # EVOLVE-BLOCK-START and # EVOLVE-BLOCK-END. Only this part will be evolved. 2. LLM ensemble: They used Gemini 2.0 Pro for complex reasoning and 2.0 Flash for faster operations. 3. Evolutionary database: The most important part; the database uses MAP-Elites and an island architecture to store solutions and inspirations. 4. Prompt sampling: A combination of previous best results, inspirations, and human context for improving the existing solution. 5. Evaluation framework: A Python function for evaluating the answers; it returns an array of scalars. # Working in brief The database maintains "parent" programs marked for improvement and "inspirations" for adding diversity to the solution. (The name "AlphaEvolve" itself actually comes from it being an "Alpha" series agent that "Evolves" solutions, rather than just this parent/inspiration idea.) Here's how it generally flows: the AlphaEvolve system gets the initial codebase. Then, for each step, the **prompt sampler** cleverly picks out parent program(s) to work on and some inspiration programs. It bundles these up with **feedback from past attempts (like scores, or even what an LLM thought about previous versions)**, plus any handy human context. This whole package goes to the LLMs. The new solution they come up with (the "child") gets graded by the **evaluation function**. 
Finally, these child solutions, with their new grades, are stored back in the database. # The Outcome The most interesting part: even with older models like Gemini 2.0 Pro and Flash, when AlphaEvolve took on over 50 open math problems, it managed to match the best known solutions for 75% of them, found better answers for another 20%, and only came up short on a tiny 5%! Of all the results, DeepMind is most proud of AlphaEvolve surpassing Strassen's 56-year-old algorithm for 4x4 complex matrix multiplication by finding a method with 48 scalar multiplications. The agent also improved Google's infra by speeding up Gemini LLM training by \~1%, improving data centre job scheduling to recover \~0.7% of fleet-wide compute resources, optimising TPU circuit designs, and accelerating compiler-generated code for AI kernels by up to 32%. This is the best agent scaffolding to date. The fact that they pulled this off with an outdated Gemini makes you wonder what they can do with the current SOTA. It makes one thing clear: what we're lacking for efficient agent swarms doing tasks is the right abstractions. The cost of operation, though, is not disclosed. For a detailed blog post, check this out: [AlphaEvolve: the self-evolving agent from DeepMind](https://composio.dev/blog/alphaevolve-evolutionary-agent-from-deepmind/) It'd be interesting to see if they ever release it in the wild or if any other lab picks it up. This is certainly the best frontier for building agents. Would love to know your thoughts on it.
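The sample-mutate-evaluate-store flow described above can be compressed into a toy loop. The LLM ensemble is stubbed out by a random perturbation, the evaluation function is a trivial scalar, and all names (`mutate`, `evaluate`, etc.) are illustrative, not DeepMind's actual API:

```python
import random

def evaluate(program):
    """Evaluation function: returns a scalar score (here: closeness to 42)."""
    return -abs(program - 42)

def mutate(parent, inspirations):
    # Stand-in for the LLM ensemble: perturb the parent, nudged by inspirations.
    drift = sum(inspirations) / len(inspirations) - parent
    return parent + random.choice([-3, -1, 1, 3]) + round(0.2 * drift)

def alpha_evolve_loop(seed, steps=200):
    database = [(seed, evaluate(seed))]            # (program, score) pairs
    for _ in range(steps):
        parent, _ = max(database, key=lambda e: e[1])   # best-so-far parent
        inspirations = [p for p, _ in random.sample(database,
                                                    k=min(3, len(database)))]
        child = mutate(parent, inspirations)
        database.append((child, evaluate(child)))
    return max(database, key=lambda e: e[1])

best_program, best_score = alpha_evolve_loop(seed=0)
print(best_program, best_score)
```

The real system replaces `mutate` with prompted LLM calls and `database` with a MAP-Elites/island store, but the control flow is essentially this loop.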
2025-05-22T15:19:33
https://www.reddit.com/r/LocalLLaMA/comments/1kstdhn/notes_on_alphaevolve_are_we_closing_in_on/
SunilKumarDash
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kstdhn
false
null
t3_1kstdhn
/r/LocalLLaMA/comments/1kstdhn/notes_on_alphaevolve_are_we_closing_in_on/
false
false
self
56
{'enabled': False, 'images': [{'id': 'go5ckY2IFGbH800cLCyjjRmNkE0GXSbTSzGz_GA5Zi8', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?width=108&crop=smart&auto=webp&s=2fb7ce2d3fecad06c94ac1b3faa1dfc668a958c0', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?width=216&crop=smart&auto=webp&s=cecfa8084390574c7ae28d59634405b4168f21bd', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?width=320&crop=smart&auto=webp&s=e80536d7c406704beb59ad79381ee2f65ef2810e', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?width=640&crop=smart&auto=webp&s=1723834464694945f72ccef38b322308f70356f4', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?width=960&crop=smart&auto=webp&s=f7c93e2b8efba99028c91c05e68ac5be6f7e95d6', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?width=1080&crop=smart&auto=webp&s=e573db0aa7a20c6d209224eb824dd850a0cb558a', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/gOFfPFcvD8Wd_ilNmVpf-X7kQRqecp81edfbI6C0COY.jpg?auto=webp&s=2cc044c4814c48f5adc5a00f8e800c3d56065c04', 'width': 1536}, 'variants': {}}]}
Google's new Text Diffusion model explained, and why it matters for LocalLLaMA
1
[removed]
2025-05-22T15:28:14
https://www.reddit.com/r/LocalLLaMA/comments/1kstl1k/googles_new_text_diffusion_model_explained_and/
amapleson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kstl1k
false
null
t3_1kstl1k
/r/LocalLLaMA/comments/1kstl1k/googles_new_text_diffusion_model_explained_and/
false
false
https://b.thumbs.redditm…SAykC1G3UzNI.jpg
1
null
Story writing workflow / software
3
I've been trying to figure out how to write stories with LLMs, and it feels like I'm going in circles. I know that there's no magical "Write me a story" AI and that I'll have to do the work of writing an outline and keeping the story on track, but I'm still pretty fuzzy on _how_ to do that. The general advice seems to be to avoid using instructions, since they'll never give you more than a couple of paragraphs, and instead to use the notebook, giving it the first half of the first sentence and letting it rip. But, how are you supposed to guide the story? I've done the thing of starting off the notebook with a title, a summary, and some tags, but that's still not nearly enough to guide where I want the story to go. Sure, it'll generate pages of text, but it very quickly goes off in the weeds. I can keep interrupting it, deleting the bad stuff, adding a new half-sentence, and unleashing it again, but then I may as well just use instruct mode. I've tried the [StoryCrafter](https://github.com/FartyPants/StoryCrafter) extension for Ooba. It's certainly nice being able to regenerate just a little at a time, but in its normal instruct mode it still only generates a couple of paragraphs per beat, and I find myself having to mess around with chat instructions and/or the notebook to fractal my way down into getting real descriptions going. If I flip it into Narrative mode, then I have the same issue of "How am I supposed to guide this thing?" What am I missing? How can I guide the AI _and_ get good detail and more than a couple of paragraphs at a time?
2025-05-22T15:28:46
https://www.reddit.com/r/LocalLLaMA/comments/1kstlh0/story_writing_workflow_software/
Nazrax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kstlh0
false
null
t3_1kstlh0
/r/LocalLLaMA/comments/1kstlh0/story_writing_workflow_software/
false
false
self
3
null
Create a chatbot for chatting with people with Wikipedia pages
11
Exploring different techniques for creating a chatbot. This is a sample implementation where the chatbot is designed to do a multi-turn chat based on someone's Wikipedia page. Interesting learnings and a fun project altogether. Link in case you are interested: [https://www.teachmecoolstuff.com/viewarticle/creating-a-chatbot-using-a-local-llm](https://www.teachmecoolstuff.com/viewarticle/creating-a-chatbot-using-a-local-llm)
2025-05-22T15:36:30
https://www.reddit.com/r/LocalLLaMA/comments/1ksts7o/create_a_chatbot_for_chatting_with_people_with/
funJS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksts7o
false
null
t3_1ksts7o
/r/LocalLLaMA/comments/1ksts7o/create_a_chatbot_for_chatting_with_people_with/
false
false
self
11
{'enabled': False, 'images': [{'id': 'FWfN3I_aSWWeBvz0kSnI6WbqDHPesaFYU-RKBgH0afY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?width=108&crop=smart&auto=webp&s=da9997765ffdce6ca201796ab450ea42756d1d0c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?width=216&crop=smart&auto=webp&s=ea41b38aedd79c58a6813685f21291c176101d31', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?width=320&crop=smart&auto=webp&s=b5500025b201a69d0e48c6deb445e29703975960', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/Ol-uSKMk94Q3bvzGQJEUyi1kjJKu5hRYpU5lfeO5JNM.jpg?auto=webp&s=97bc29501ad66cbcae3ca84e14a5a8d3ddbca691', 'width': 400}, 'variants': {}}]}
Learn how to use Devstral with Mistral Inference locally and with OpenHands
1
[removed]
2025-05-22T15:38:37
https://www.reddit.com/r/LocalLLaMA/comments/1kstu5j/learn_how_to_use_devstral_with_mistral_inference/
kingabzpro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kstu5j
false
null
t3_1kstu5j
/r/LocalLLaMA/comments/1kstu5j/learn_how_to_use_devstral_with_mistral_inference/
false
false
https://b.thumbs.redditm…5ch7ByyqmSNw.jpg
1
null
Microsoft releases Magentic-UI. Could this finally be a halfway-decent agentic browser use client that works on Windows?
69
Magentic-One was kind of a cool agent framework for a minute when it was first released a few months ago, but DAMN, it was a pain in the butt to get working, and then it would kinda see a squirrel on a webpage and get distracted and such. I think Magentic was added as an agent type in AutoGen, but then it kind of fell off my radar until today, when they released Magentic-UI - https://github.com/microsoft/Magentic-UI From their GitHub: “Magentic-UI is a research prototype of a human-centered interface powered by a multi-agent system that can browse and perform actions on the web, generate and execute code, and generate and analyze files. Magentic-UI is especially useful for web tasks that require actions on the web (e.g., filling a form, customizing a food order), deep navigation through websites not indexed by search engines (e.g., filtering flights, finding a link from a personal site) or tasks that need web navigation and code execution (e.g., generate a chart from online data). What differentiates Magentic-UI from other browser use offerings is its transparent and controllable interface that allows for efficient human-in-the-loop involvement. Magentic-UI is built using AutoGen and provides a platform to study human-agent interaction and experiment with web agents. Key features include: 🧑‍🤝‍🧑 Co-Planning: Collaboratively create and approve step-by-step plans using chat and the plan editor. 🤝 Co-Tasking: Interrupt and guide the task execution using the web browser directly or through chat. Magentic-UI can also ask for clarifications and help when needed. 🛡️ Action Guards: Sensitive actions are only executed with explicit user approvals. 🧠 Plan Learning and Retrieval: Learn from previous runs to improve future task automation and save them in a plan gallery. Automatically or manually retrieve saved plans in future tasks. 
🔀 Parallel Task Execution: You can run multiple tasks in parallel and session status indicators will let you know when Magentic-UI needs your input or has completed the task.” Supposedly you can use it with Ollama and other local LLM providers. I’ll be trying this out when I have some time. Anyone else got this working locally yet? WDYT of it?
2025-05-22T16:22:59
https://www.reddit.com/gallery/1ksuycv
Porespellar
reddit.com
1970-01-01T00:00:00
0
{}
1ksuycv
false
null
t3_1ksuycv
/r/LocalLLaMA/comments/1ksuycv/microsoft_releases_magenticui_could_this_finally/
false
false
https://b.thumbs.redditm…RUYoDRA6C8Wk.jpg
69
null
Claude 4 by Anthropic officially released!
663
2025-05-22T16:37:17
https://i.redd.it/veybu3kn2d2f1.png
purealgo
i.redd.it
1970-01-01T00:00:00
0
{}
1ksvb3k
false
null
t3_1ksvb3k
/r/LocalLLaMA/comments/1ksvb3k/claude_4_by_anthropic_officially_released/
false
false
https://a.thumbs.redditm…OGK6SGgZEXJ8.jpg
663
{'enabled': True, 'images': [{'id': 'WzX5JTVRTuL9sh96KThaLGxGYDsP6Kbr6Y6NyNa3C9Q', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/veybu3kn2d2f1.png?width=108&crop=smart&auto=webp&s=444bc5d29fb796e69836c5f917f0ba42f02cd962', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/veybu3kn2d2f1.png?width=216&crop=smart&auto=webp&s=f5c483645592b84b7c1dd6e4571c93ea64a36821', 'width': 216}, {'height': 170, 'url': 'https://preview.redd.it/veybu3kn2d2f1.png?width=320&crop=smart&auto=webp&s=167a36750b0a159b7651795cddcee792cd62a9a2', 'width': 320}, {'height': 340, 'url': 'https://preview.redd.it/veybu3kn2d2f1.png?width=640&crop=smart&auto=webp&s=1005524a7abdc471df6b8cbc62cb64925703075c', 'width': 640}], 'source': {'height': 344, 'url': 'https://preview.redd.it/veybu3kn2d2f1.png?auto=webp&s=136a18ef1e7a1029572753ee9105a2de16e9b5dc', 'width': 647}, 'variants': {}}]}
Master's Thesis: State-of-the-Art LLM Inference Optimization on Consumer Hardware
1
[removed]
2025-05-22T17:04:31
https://www.reddit.com/r/LocalLLaMA/comments/1ksvzo9/masters_thesis_stateoftheart_llm_inference/
Budget-Track5555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksvzo9
false
null
t3_1ksvzo9
/r/LocalLLaMA/comments/1ksvzo9/masters_thesis_stateoftheart_llm_inference/
false
false
self
1
null
Genuine question: Why are the Unsloth GGUFs more preferred than the official ones?
96
That's at least the case with the latest GLM, Gemma and Qwen models. Unlosh ones are downloaded 5-10X more.
2025-05-22T17:05:03
https://www.reddit.com/r/LocalLLaMA/comments/1ksw070/genuine_question_why_are_the_unsloth_ggufs_more/
ParaboloidalCrest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksw070
false
null
t3_1ksw070
/r/LocalLLaMA/comments/1ksw070/genuine_question_why_are_the_unsloth_ggufs_more/
false
false
self
96
null
II-Agent
5
Surprised I did not find anything about it here. Tested it, but ran into the Anthropic token limit.
2025-05-22T17:33:32
https://github.com/Intelligent-Internet/ii-agent
Local_Beach
github.com
1970-01-01T00:00:00
0
{}
1kswq8p
false
null
t3_1kswq8p
/r/LocalLLaMA/comments/1kswq8p/iiagent/
false
false
https://b.thumbs.redditm…l5qbFHo8KSVA.jpg
5
{'enabled': False, 'images': [{'id': '9A62q69XBL2wmV8RRzRjDYH8w58kEfcIXM4_bythlUU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?width=108&crop=smart&auto=webp&s=f986a8bdcc81396383d1f21031ea44b4879d60f9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?width=216&crop=smart&auto=webp&s=9c9abbba93f8da764e4a01bd90840d1a31f3a13a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?width=320&crop=smart&auto=webp&s=6bcd381a3b211e10e1df6d5a9d6d98c60d18b205', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?width=640&crop=smart&auto=webp&s=2a9baee48e10e6a6b8320ef115c7fb0a43f85fac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?width=960&crop=smart&auto=webp&s=71e557b691792b077a5a322fbcacb2ae21f64e5a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?width=1080&crop=smart&auto=webp&s=ea1f055777376e389963f10dae0573c0dc5c1db4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1HMGInhiJnGzZhYUPz0JGHBL13E0bKZnQmZoIe2IJMM.jpg?auto=webp&s=a0984b45988aaa72f992fe46612d34ae21501ae9', 'width': 1200}, 'variants': {}}]}
What Does It Really Mean to Own Your AI System? (Looking for Feedback on My Framework)
1
[removed]
2025-05-22T17:42:52
https://www.reddit.com/r/LocalLLaMA/comments/1kswyic/what_does_it_really_mean_to_own_your_ai_system/
davidtwaring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kswyic
false
null
t3_1kswyic
/r/LocalLLaMA/comments/1kswyic/what_does_it_really_mean_to_own_your_ai_system/
false
false
self
1
null
What are Preview models in Github Copilot?
0
I am looking for Claude 4 at [https://github.com/copilot](https://github.com/copilot). It is there, but under the Preview category. I don't know what Preview models are, or any details about them. https://preview.redd.it/prkko08thd2f1.png?width=622&format=png&auto=webp&s=e8fe751c4c21a7c15e54eeb40d8bd8dffc6b4613 Help me!!
2025-05-22T18:02:41
https://www.reddit.com/r/LocalLLaMA/comments/1ksxgl4/what_are_preview_models_in_github_copilot/
ashim_k_saha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksxgl4
false
null
t3_1ksxgl4
/r/LocalLLaMA/comments/1ksxgl4/what_are_preview_models_in_github_copilot/
false
false
https://b.thumbs.redditm…t7BREQ-_YHhg.jpg
0
{'enabled': False, 'images': [{'id': 'R43kJiA4HczWEfcwJa_6P44XkMvFZzJxSkx6bWVD2w0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?width=108&crop=smart&auto=webp&s=672a1c97a6742819c50e2eaa3c8cfc2d157ad6d6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?width=216&crop=smart&auto=webp&s=b18d3950fe46a1c9f8293c718320b760207b3ba4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?width=320&crop=smart&auto=webp&s=045ec5fe65315671b93231c81fcc44294aeae36b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?width=640&crop=smart&auto=webp&s=ee97494d22f6f81a9aec86211611a363583e4d6e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?width=960&crop=smart&auto=webp&s=edb80c5b3ff8ce548dd3444f4aa22aeba6e99235', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?width=1080&crop=smart&auto=webp&s=7e57c6295090127899f13befda0e77041da50181', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/L0bH9yXaI6N8afSlfDaOekeDRkHf88QtLNOYhAPXcmc.jpg?auto=webp&s=f4d7583a38f30410c63595155e3489a466cb6de6', 'width': 1200}, 'variants': {}}]}
🤝 Meet NVIDIA Llama Nemotron Nano 4B + Tutorial on Getting Started
43
*📹 New Tutorial: How to get started with Llama Nemotron Nano 4b:* [*https://youtu.be/HTPiUZ3kJto*](https://youtu.be/HTPiUZ3kJto) *🤝 Meet NVIDIA Llama Nemotron Nano 4B, an open reasoning model that provides leading accuracy and compute efficiency across scientific tasks, coding, complex math, function calling, and instruction following for edge agents.* ✨ *Achieves higher accuracy and 50% higher throughput than other leading open models with 8 billion parameters*  📗 *Supports hybrid reasoning, optimizing for inference cost* 🧑‍💻 *Deploy at the edge with NVIDIA Jetson and NVIDIA RTX GPUs, maximizing security, and flexibility* *📥 Now on Hugging Face:*  [*https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1*](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1)
2025-05-22T18:35:07
https://www.reddit.com/r/LocalLLaMA/comments/1ksy9hi/meet_nvidia_llama_nemotron_nano_4b_tutorial_on/
PDXcoder2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksy9hi
false
null
t3_1ksy9hi
/r/LocalLLaMA/comments/1ksy9hi/meet_nvidia_llama_nemotron_nano_4b_tutorial_on/
false
false
self
43
{'enabled': False, 'images': [{'id': 'E1FQBX3O66NI1oIU5qh0JBjtnfkFYhiMAlqBFNGYIbQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zUfV9gqL2twXXXmjwIMWtp53p3qmERdBE6zYke1eggU.jpg?width=108&crop=smart&auto=webp&s=bb3afc7986439990b1f3178f1e9ee48eb2973f0f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/zUfV9gqL2twXXXmjwIMWtp53p3qmERdBE6zYke1eggU.jpg?width=216&crop=smart&auto=webp&s=a095281b3474292f153ac545caf5ed7c0ae96fbe', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/zUfV9gqL2twXXXmjwIMWtp53p3qmERdBE6zYke1eggU.jpg?width=320&crop=smart&auto=webp&s=06893c878fc7a3ab1995037af62230e4cd2c03d5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/zUfV9gqL2twXXXmjwIMWtp53p3qmERdBE6zYke1eggU.jpg?auto=webp&s=fecfb699dfd6e40a5b75283d3ce7c997a9e5b1f2', 'width': 480}, 'variants': {}}]}
Sonnet 4 (non thinking) does consistently break in my vibe coding test
5
*Write a raytracer that renders an interesting scene with many colourful lightsources in python. Output a 800x600 image as a png* (More info here: https://github.com/cpldcpu/llmbenchmark/blob/master/raytracer/Readme.md) Only 1 out of 8 generations worked on the first attempt! All the others failed with the same error. I am quite puzzled, as this was not an issue for 3.5, 3.5 (new) and 3.7. Many other models fail with similar errors, though.

```
Creating scene...
Rendering image...
Traceback (most recent call last):
  File "D:\LLM\llm_experiments\llmbenchmark\raytracer\variance_sonnet40\iteration4.py", line 276, in <module>
    main()
  File "D:\LLM\llm_experiments\llmbenchmark\raytracer\variance_sonnet40\iteration4.py", line 268, in main
    image_array = render(scene, 800, 600, camera_pos, camera_target, fov=60)
  File "D:\LLM\llm_experiments\llmbenchmark\raytracer\variance_sonnet40\iteration4.py", line 206, in render
    color = trace_ray(scene, ray)
  File "D:\LLM\llm_experiments\llmbenchmark\raytracer\variance_sonnet40\iteration4.py", line 181, in trace_ray
    return shade_point(scene, ray, hit_point, obj, depth)
  File "D:\LLM\llm_experiments\llmbenchmark\raytracer\variance_sonnet40\iteration4.py", line 162, in shade_point
    reflect_dir = (-light_dir).reflect(normal)
TypeError: bad operand type for unary -: 'Vec3'
```
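The repeated failure looks like a missing unary-minus overload: `(-light_dir)` calls `Vec3.__neg__`, which the generated class apparently never defines. A minimal sketch of a `Vec3` that supports that call chain (hypothetical, not the benchmark's actual class):

```python
class Vec3:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def __neg__(self):                        # enables `-v`
        return Vec3(-self.x, -self.y, -self.z)

    def __sub__(self, other):
        return Vec3(self.x - other.x, self.y - other.y, self.z - other.z)

    def __mul__(self, s):                     # scalar multiply
        return Vec3(self.x * s, self.y * s, self.z * s)

    def dot(self, other):
        return self.x * other.x + self.y * other.y + self.z * other.z

    def reflect(self, normal):
        # r = v - 2 (v . n) n
        return self - normal * (2 * self.dot(normal))

light_dir = Vec3(0.0, -1.0, 0.0)
normal = Vec3(0.0, 1.0, 0.0)
r = (-light_dir).reflect(normal)              # no TypeError with __neg__ defined
print(r.x, r.y, r.z)
```

Interesting that Sonnet 4 consistently writes shading code that negates a `Vec3` without emitting the `__neg__` dunder that makes it legal.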
2025-05-22T18:42:00
https://www.reddit.com/r/LocalLLaMA/comments/1ksyfij/sonnet_4_non_thinking_does_consistently_break_in/
cpldcpu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksyfij
false
null
t3_1ksyfij
/r/LocalLLaMA/comments/1ksyfij/sonnet_4_non_thinking_does_consistently_break_in/
false
false
self
5
{'enabled': False, 'images': [{'id': 'iKS9PBNfy2C7ElH1gfvl15hZ_XldK10KrjNYjYp3VR8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?width=108&crop=smart&auto=webp&s=5aae72fe5795f61782fa9dbee42eae28264f095e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?width=216&crop=smart&auto=webp&s=8bf10184538024d3c9aa7379fd5d5fdf1b822a50', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?width=320&crop=smart&auto=webp&s=aadb9856911f341c729e2f92b47f52bd7281fb3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?width=640&crop=smart&auto=webp&s=8ef5f56829f60c0d14cd1a99457590b1298b8959', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?width=960&crop=smart&auto=webp&s=7276b1b834469d82133bfb28381e97fd84d543f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?width=1080&crop=smart&auto=webp&s=71ad6d53ce42303f0a16512ff0f7a288b49bc163', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dyJ_W6s910ofki56YmYvTgvEuYe2Sak9icevqcR0aX8.jpg?auto=webp&s=0012aa841d964b9e6778f082d8e37e4612668ca6', 'width': 1200}, 'variants': {}}]}
Introducing the world's most powerful model
1,628
2025-05-22T18:45:16
https://i.redd.it/hqx8fzosod2f1.png
eastwindtoday
i.redd.it
1970-01-01T00:00:00
0
{}
1ksyicp
false
null
t3_1ksyicp
/r/LocalLLaMA/comments/1ksyicp/introducing_the_worlds_most_powerful_model/
false
false
https://a.thumbs.redditm…DCgGkw1xnsI8.jpg
1,628
{'enabled': True, 'images': [{'id': 'YOcuSylpokpdn-aTBYXZJ23tkMbp-nVqDnT3-xrNXhQ', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?width=108&crop=smart&auto=webp&s=89c79d223d6875ff5561ca4065175480922e4c44', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?width=216&crop=smart&auto=webp&s=07413f300847ea835b8a0b9a6052d7b74092bc8a', 'width': 216}, {'height': 307, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?width=320&crop=smart&auto=webp&s=c38dedcb06f5cb77cb96571ecc6b1b48a9c0a8cd', 'width': 320}, {'height': 614, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?width=640&crop=smart&auto=webp&s=96d0c448070aead295d21d9be7e8fd395520a72b', 'width': 640}, {'height': 922, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?width=960&crop=smart&auto=webp&s=90938e4b58f44245295be0e769225b28f0b77b05', 'width': 960}, {'height': 1037, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?width=1080&crop=smart&auto=webp&s=392fadece6b71534d6cfca9c298633b50afd1cef', 'width': 1080}], 'source': {'height': 1166, 'url': 'https://preview.redd.it/hqx8fzosod2f1.png?auto=webp&s=faf67ae1860da93b5a1c5260e36098e28483db2b', 'width': 1214}, 'variants': {}}]}
An AI researcher at Anthropic reveals that Claude Opus 4 will contact regulators or try to lock you out if it detects something illegal
619
2025-05-22T18:54:53
https://i.redd.it/rpetiilwqd2f1.jpeg
erdaltoprak
i.redd.it
1970-01-01T00:00:00
0
{}
1ksyqo9
false
null
t3_1ksyqo9
/r/LocalLLaMA/comments/1ksyqo9/an_ai_researcher_at_anthropic_reveals_that_claude/
false
false
https://a.thumbs.redditm…hIL4MNquQPf8.jpg
619
{'enabled': True, 'images': [{'id': 'rOxvDCr6sdpBzMLylerTH10OtUEjU6WOn9BSdL2Gq-M', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/rpetiilwqd2f1.jpeg?width=108&crop=smart&auto=webp&s=ea799e647f5879b25432ba2fd919ec366f8a3e08', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/rpetiilwqd2f1.jpeg?width=216&crop=smart&auto=webp&s=123c10c28c6082c5be795c3ff0e0906d55c9e4ef', 'width': 216}, {'height': 109, 'url': 'https://preview.redd.it/rpetiilwqd2f1.jpeg?width=320&crop=smart&auto=webp&s=a347a1f1f4c1b720146832dd0944de0822a25486', 'width': 320}, {'height': 218, 'url': 'https://preview.redd.it/rpetiilwqd2f1.jpeg?width=640&crop=smart&auto=webp&s=fad643c678d908070efa41bba477979da575e6cd', 'width': 640}], 'source': {'height': 322, 'url': 'https://preview.redd.it/rpetiilwqd2f1.jpeg?auto=webp&s=9e651adb7e6546cf31d3fa89c1a4f7b48cd885f7', 'width': 942}, 'variants': {}}]}
I accidentally too many P100
1
[removed]
2025-05-22T18:55:25
https://www.reddit.com/gallery/1ksyr5b
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1ksyr5b
false
null
t3_1ksyr5b
/r/LocalLLaMA/comments/1ksyr5b/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…sl4Y9NuiS9UE.jpg
1
null
I accidentally too many P100
1
[removed]
2025-05-22T18:58:48
https://www.reddit.com/gallery/1ksyu36
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1ksyu36
false
null
t3_1ksyu36
/r/LocalLLaMA/comments/1ksyu36/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…2Vjd44is8Esw.jpg
1
null
MedGemma with MediaPipe
1
Hi, I hope you're doing well. As a small project, I wanted to use MedGemma on iOS to create a local app where users could ask questions about symptoms or whatever. I'm able to use Mediapipe as shown in Google's repo, but only with `.task` models. I haven’t found any `.task` model for MedGemma. I'm not an expert in this at all, but is it possible — and quick — to convert a 4B model? I just want to know if it's a good use case to learn from and whether it's feasible on my end or not. Thanks!
2025-05-22T19:05:43
https://www.reddit.com/r/LocalLLaMA/comments/1ksz0in/medgemma_with_mediapipe/
DonTizi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksz0in
false
null
t3_1ksz0in
/r/LocalLLaMA/comments/1ksz0in/medgemma_with_mediapipe/
false
false
self
1
null
Devstral on Mac 24GB?
2
I've tried running the 4bit quant on my 16GB M1: no dice. But I'm getting a 24GB M4 in a little while - anyone run the Devstral 4bit MLX distils on one of those yet?
2025-05-22T19:06:09
https://www.reddit.com/r/LocalLLaMA/comments/1ksz0x5/devstral_on_mac_24gb/
sgt102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksz0x5
false
null
t3_1ksz0x5
/r/LocalLLaMA/comments/1ksz0x5/devstral_on_mac_24gb/
false
false
self
2
null
Claude 4 Opus may contact press and regulators if you do something egregious (deleted Tweet from Sam Bowman)
295
2025-05-22T19:43:04
https://i.redd.it/g91uyr7tyd2f1.jpeg
RuairiSpain
i.redd.it
1970-01-01T00:00:00
0
{}
1kszxmj
false
null
t3_1kszxmj
/r/LocalLLaMA/comments/1kszxmj/claude_4_opus_may_contact_press_and_regulators_if/
false
false
https://b.thumbs.redditm…BgfN4hZhdrjU.jpg
295
{'enabled': True, 'images': [{'id': 'IzbreZ2dyV53OMkgQd2Lx25ytHiXd2eJj3QWIdkexm4', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?width=108&crop=smart&auto=webp&s=5f0351e0e4bb541bfddb1ca2a15a15d132b5a852', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?width=216&crop=smart&auto=webp&s=1333bbd1657944cb049c8fd7b7e341351c60d21f', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?width=320&crop=smart&auto=webp&s=352fad6f2b0f43dfb61c4cbaca3f58e8abef980a', 'width': 320}, {'height': 330, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?width=640&crop=smart&auto=webp&s=4631f915329d465f3cf27d7c20d9ddc5663b1465', 'width': 640}, {'height': 495, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?width=960&crop=smart&auto=webp&s=654f966907e36844cb444fd582809c61c2acaf2b', 'width': 960}, {'height': 557, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?width=1080&crop=smart&auto=webp&s=c551983b8678ecc90179301812364fe7386fc464', 'width': 1080}], 'source': {'height': 564, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?auto=webp&s=00541b6b89bcdbb4a7135f037a22d648bf87588a', 'width': 1093}, 'variants': {}}]}
Best local model for M2 16gb MacBook Air for Analyzing Transcripts
1
I'm looking to process private interviews (10 interviews, roughly 2 hours each) I conducted with victims of abuse for a research project. This must be done locally for privacy. Once it's in the LLM I want to see how it compares to human raters as far as assessing common themes. I'll use MacWhisper to transcribe the conversations, but which local model can I run for assessing the themes? Here are my system stats: * Apple MacBook Air M2 8-Core * 16gb Memory * 2TB SSD
2025-05-22T19:44:28
https://www.reddit.com/r/LocalLLaMA/comments/1kszyuo/best_local_model_for_m2_16gb_macbook_air_for/
SinkThink5779
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kszyuo
false
null
t3_1kszyuo
/r/LocalLLaMA/comments/1kszyuo/best_local_model_for_m2_16gb_macbook_air_for/
false
false
self
1
null
Mixed GPU from nvidia and AMD support?
13
I have a 3090 and 4070. I was thinking about adding a 7900 XTX. How's performance using Vulkan? I usually run with flash attention enabled. Everything should work, right? How does vLLM handle this?
2025-05-22T20:14:18
https://www.reddit.com/r/LocalLLaMA/comments/1kt0p4r/mixed_gpu_from_nvidia_and_amd_support/
Only_Situation_4713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt0p4r
false
null
t3_1kt0p4r
/r/LocalLLaMA/comments/1kt0p4r/mixed_gpu_from_nvidia_and_amd_support/
false
false
self
13
null
Republicans propose no regulation of AI for the next 10 years
1
2025-05-22T20:22:21
https://www.newsweek.com/republicans-regulation-ai-next-ten-years-2071929
fallingdowndizzyvr
newsweek.com
1970-01-01T00:00:00
0
{}
1kt0wgq
false
null
t3_1kt0wgq
/r/LocalLLaMA/comments/1kt0wgq/republicans_propose_no_regulation_of_ai_for_the/
false
false
https://b.thumbs.redditm…t7vQycRF7bAs.jpg
1
{'enabled': False, 'images': [{'id': 'TeJlmqVIRwnAYOuZ0okESu2iJqOu3B0UVrClE9Bb180', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?width=108&crop=smart&auto=webp&s=f22c9285ef869efb550d1380c6679c3851e4d935', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?width=216&crop=smart&auto=webp&s=2bb470706ae84621134168c064473efa1899dc77', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?width=320&crop=smart&auto=webp&s=809453b866f64970c9fa767f472bdd4352236b2d', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?width=640&crop=smart&auto=webp&s=1a239dd4cef46f98289c3f4b4b7208931d42f29f', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?width=960&crop=smart&auto=webp&s=9182bd6e7dd81c3faa67fc1767197f2aaaa84c53', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?width=1080&crop=smart&auto=webp&s=6fd4c70c717692a2fa6fa1294dfbe400aff7fe82', 'width': 1080}], 'source': {'height': 1667, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?auto=webp&s=387c0bda6f295a44f07d7b4b4ddb88697010c176', 'width': 2500}, 'variants': {}}]}
Seeking help of ML/AI expert on a research project
1
[removed]
2025-05-22T20:24:40
https://www.reddit.com/r/LocalLLaMA/comments/1kt0ykb/seeking_help_of_mlai_expert_on_a_research_project/
Feisty-Estate-6893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt0ykb
false
null
t3_1kt0ykb
/r/LocalLLaMA/comments/1kt0ykb/seeking_help_of_mlai_expert_on_a_research_project/
false
false
self
1
null
House passes budget bill that inexplicably bans state AI regulations for ten years
291
2025-05-22T20:26:06
https://tech.yahoo.com/articles/house-passes-budget-bill-inexplicably-184936484.html
fallingdowndizzyvr
tech.yahoo.com
1970-01-01T00:00:00
0
{}
1kt0zvd
false
null
t3_1kt0zvd
/r/LocalLLaMA/comments/1kt0zvd/house_passes_budget_bill_that_inexplicably_bans/
false
false
https://b.thumbs.redditm…LOUeoK-sOV0A.jpg
291
{'enabled': False, 'images': [{'id': 'jppHMgf5BzmDH502tXCkK5KnLM6Xr9O3d3U8o8rvE5E', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?width=108&crop=smart&auto=webp&s=efe0e5c337936d97d68a017c30cd41e96555a9e2', 'width': 108}, {'height': 134, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?width=216&crop=smart&auto=webp&s=224a9f21eceafc408cdd86ca9624eb425dcbf67c', 'width': 216}, {'height': 198, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?width=320&crop=smart&auto=webp&s=c210018afcdf2b6f9288c94e340ee40756453510', 'width': 320}, {'height': 397, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?width=640&crop=smart&auto=webp&s=6bb497ae5922ecf83e5c0a152d97d9c4b33aa5a5', 'width': 640}, {'height': 596, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?width=960&crop=smart&auto=webp&s=1eeb228870090f0917a026028ff7052e1a59160a', 'width': 960}, {'height': 671, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?width=1080&crop=smart&auto=webp&s=73b2b5efaae4599e9b13e7f6b0935e16e7d5ed7b', 'width': 1080}], 'source': {'height': 746, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?auto=webp&s=abed116795c673aade94a10c79e1652311f6dc6d', 'width': 1200}, 'variants': {}}]}
Running multiple prompts simultaneously or other options?
1
[removed]
2025-05-22T20:27:16
https://www.reddit.com/r/LocalLLaMA/comments/1kt10z5/running_multiple_prompts_simultaneously_or_other/
zephyr645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt10z5
false
null
t3_1kt10z5
/r/LocalLLaMA/comments/1kt10z5/running_multiple_prompts_simultaneously_or_other/
false
false
self
1
null
How are you managing centralized knowledge bases for agentic workflows (across tools like Jira, Confluence, Salesforce, etc.)?
1
[removed]
2025-05-22T20:37:45
https://www.reddit.com/r/LocalLLaMA/comments/1kt1a7y/how_are_you_managing_centralized_knowledge_bases/
thsde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt1a7y
false
null
t3_1kt1a7y
/r/LocalLLaMA/comments/1kt1a7y/how_are_you_managing_centralized_knowledge_bases/
false
false
self
1
null
GPT’s biggest dev flaw isn’t memory, it’s prioritizing helpfulness over truth
1
[removed]
2025-05-22T20:40:09
https://www.reddit.com/r/LocalLLaMA/comments/1kt1cd0/gpts_biggest_dev_flaw_isnt_memory_its/
OG_Icon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt1cd0
false
null
t3_1kt1cd0
/r/LocalLLaMA/comments/1kt1cd0/gpts_biggest_dev_flaw_isnt_memory_its/
false
false
self
1
null
Tried Sonnet 4, not impressed
222
A basic image prompt failed
2025-05-22T20:46:01
https://i.redd.it/k68q6q65be2f1.jpeg
Marriedwithgames
i.redd.it
1970-01-01T00:00:00
0
{}
1kt1hmk
false
null
t3_1kt1hmk
/r/LocalLLaMA/comments/1kt1hmk/tried_sonnet_4_not_impressed/
false
false
https://b.thumbs.redditm…Hb6Sx1fi-5pI.jpg
222
{'enabled': True, 'images': [{'id': 'xZKWtUmBSRtVCD4hPIP85Y1aX90-U7kIqqwLDIbuNac', 'resolutions': [{'height': 146, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?width=108&crop=smart&auto=webp&s=225fff1c52ac27c08ff4a29ebf4b28932a092453', 'width': 108}, {'height': 293, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?width=216&crop=smart&auto=webp&s=3f0812ef9304174dd5337a2898bd469167f741cf', 'width': 216}, {'height': 434, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?width=320&crop=smart&auto=webp&s=b1862a3add5feb5dd3f8f940aa0449cdb6cddf45', 'width': 320}, {'height': 868, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?width=640&crop=smart&auto=webp&s=e7773c238e9148d1a369ca7c06ac85f64c5d87e5', 'width': 640}, {'height': 1303, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?width=960&crop=smart&auto=webp&s=cfc95ab5ad600b031409412123a44ccaa7b17166', 'width': 960}, {'height': 1465, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?width=1080&crop=smart&auto=webp&s=070e442159b691eb75c19b7c45f04512f0fe19b0', 'width': 1080}], 'source': {'height': 1751, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?auto=webp&s=e2a22c1a17938d14722efc2d192712d364178011', 'width': 1290}, 'variants': {}}]}
Private GPT installing errors
1
[removed]
2025-05-22T21:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1kt1vth/private_gpt_installing_errors/
fazetag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt1vth
false
null
t3_1kt1vth
/r/LocalLLaMA/comments/1kt1vth/private_gpt_installing_errors/
false
false
self
1
{'enabled': False, 'images': [{'id': 's5OBBiPsY25-T5RfXOcw0HFmOraO7XH5Fa8GYgaL-jg', 'resolutions': [{'height': 138, 'url': 'https://external-preview.redd.it/xWHTRaZ3_2o6-CoJkBOP1KFmoHPvj9xdhzqNSbvIJ00.jpg?width=108&crop=smart&auto=webp&s=a07e85921bfb4f98a8ffd150d5732cacf16f1dc1', 'width': 108}, {'height': 277, 'url': 'https://external-preview.redd.it/xWHTRaZ3_2o6-CoJkBOP1KFmoHPvj9xdhzqNSbvIJ00.jpg?width=216&crop=smart&auto=webp&s=2314fa8977809fb22ac863fd3a7331880fe844d2', 'width': 216}, {'height': 411, 'url': 'https://external-preview.redd.it/xWHTRaZ3_2o6-CoJkBOP1KFmoHPvj9xdhzqNSbvIJ00.jpg?width=320&crop=smart&auto=webp&s=4a8d5fd77e021b30ceee9688a0d2ef1f1ffa5407', 'width': 320}, {'height': 822, 'url': 'https://external-preview.redd.it/xWHTRaZ3_2o6-CoJkBOP1KFmoHPvj9xdhzqNSbvIJ00.jpg?width=640&crop=smart&auto=webp&s=bc570f8bcaafec238fd0f3a7e2224d22184fdc1f', 'width': 640}], 'source': {'height': 1152, 'url': 'https://external-preview.redd.it/xWHTRaZ3_2o6-CoJkBOP1KFmoHPvj9xdhzqNSbvIJ00.jpg?auto=webp&s=d4a00572a8f41171a22f847e18d19515fbcc00a8', 'width': 896}, 'variants': {}}]}
Simple prompt stumping Gemini 2.5 pro / sonnet 4
0
Sharing a prompt I thought would be a breeze, but so far the 2 LLMs that should be most capable were surprisingly bad. Prompt: Extract the sodoku game from image. And show me . Use markdown code block to present it for monospacing
2025-05-22T21:03:26
https://i.redd.it/63ooft19ee2f1.jpeg
SnooDoodles8834
i.redd.it
1970-01-01T00:00:00
0
{}
1kt1xb0
false
null
t3_1kt1xb0
/r/LocalLLaMA/comments/1kt1xb0/simple_prompt_stumping_gemini_25_pro_sonnet_4/
false
false
https://a.thumbs.redditm…C1I0zeSk55U8.jpg
0
{'enabled': True, 'images': [{'id': 'T4IVijY_4CJnCUEXARz_O1UnmuUFNvH8FnEA0_jdJRs', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?width=108&crop=smart&auto=webp&s=66bf64675b94eba1b925eeea21642351df36ebc6', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?width=216&crop=smart&auto=webp&s=ea51f34e6e245045ae2cfa0b3dceceee0c1ec7ca', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?width=320&crop=smart&auto=webp&s=1a8a3fde94101c1f4eff409c87aa40dddda57120', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?width=640&crop=smart&auto=webp&s=120994dc99b45f3ab60337b411b32a90f2b36a66', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?width=960&crop=smart&auto=webp&s=f1b25722a7593e178d67e8de1f5e8adf58897723', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?width=1080&crop=smart&auto=webp&s=88b34b1c4d6695a63b157cda814b52de7dd923e0', 'width': 1080}], 'source': {'height': 2532, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?auto=webp&s=aa2044ea7ef1a764a718b025d2e1bfc89be52be6', 'width': 1170}, 'variants': {}}]}