| column | dtype | min | max |
|---|---|---|---|
| title | string (lengths) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (lengths) | 0 | 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | string (lengths) | 0 | 878 |
| author | string (lengths) | 3 | 20 |
| domain | string (lengths) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (lengths) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (lengths) | 646 | 1.8k |
| name | string (lengths) | 10 | 10 |
| permalink | string (lengths) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (lengths) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (lengths) | 301 | 5.01k |
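A minimal sketch of how a dataset with this schema could be loaded and filtered with the `datasets` library; the repository id below is a placeholder, not this dataset's actual name.

```python
# Hypothetical loader for a dataset with the schema above.
# The repo id is a placeholder; substitute the actual dataset path.
from datasets import load_dataset

ds = load_dataset("user/localllama-posts", split="train")  # placeholder repo id

# Keep only posts whose body survived moderation and that got some traction.
kept = ds.filter(lambda row: row["selftext"] not in ("", "[removed]") and row["score"] > 0)

for row in kept.select(range(3)):
    print(row["created"], row["score"], row["title"])
```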
Personalized AI Tutor built on top of Gemini
1
[removed]
2025-06-19T08:55:02
https://www.reddit.com/r/LocalLLaMA/comments/1lf6059/personalized_ai_tutor_built_on_top_of_gemini/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf6059
false
null
t3_1lf6059
/r/LocalLLaMA/comments/1lf6059/personalized_ai_tutor_built_on_top_of_gemini/
false
false
self
1
null
Anyone have experience with Refact.ai tool?
0
I recently found [refact.ai](http://refact.ai) on SWE-bench, where it's the highest scorer on the Lite version. It is also an open-source tool, but I can't find a lot of information about it or the group behind it. Does anyone have experience with it? Care to share?
2025-06-19T09:05:29
https://www.reddit.com/r/LocalLLaMA/comments/1lf65ts/anyone_have_experience_with_refactai_tool/
EternalOptimister
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf65ts
false
null
t3_1lf65ts
/r/LocalLLaMA/comments/1lf65ts/anyone_have_experience_with_refactai_tool/
false
false
self
0
{'enabled': False, 'images': [{'id': '8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?width=108&crop=smart&auto=webp&s=47f9c989917007a59d33a69890214a2974cb771e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?width=216&crop=smart&auto=webp&s=f4ebbf29e53988ecb74a279c44ab37caa0ed591c', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?width=320&crop=smart&auto=webp&s=c67f71e075921e3605e9bf9dff4f25be948262a2', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?width=640&crop=smart&auto=webp&s=41ce22218d3891a5b12766c23674b5c26ff1b317', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?width=960&crop=smart&auto=webp&s=9caf9e20ac7d0c40e4bf1ad509fb6206cc5155ac', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?width=1080&crop=smart&auto=webp&s=d8305a55753c059cfe3915669b977483c9fdb1a6', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?auto=webp&s=2e8c3a8e4456915637f35b3df72ccf6d22fa6e12', 'width': 1200}, 'variants': {}}]}
Few-Shot Examples: Overfitting / Leakage
0
# TL;DR How do I get a model to avoid leaking/overfitting its system prompt examples into the outputs? # Context I'm working with **qwen3 32b Q4_K_L**, in both thinking and non-thinking modes, with a 7900XTX on Vulkan, for a structured output pipeline with the recommended sampling parameters except min_p = 0.01. # Issue I'm finding that, for both modes, the frankly rather large examples I have are consistently leaking into my general outputs. Say I have... --- ## System Prompt Body... This has guidance to specifically only generalise from the examples in here. ## Example ### Input This contains {{X}} ### good output This contains {{X}} ### bad output This contains {{X}} ## User Content This contains {{Y, Z}} ## Output This contains {{Y,Z,X}} --- I don't quite know how to get it to avoid putting the example in the output area. This example definitely improves outputs when it's there, but it contaminates the content too often, roughly 10-15% of the time. I want to use this to curate a dataset, and while I can remove the examples and failures for a QLoRA system prompt/output set, I would much prefer to reduce the issue before then so it's easier to clean the data, more effective now, and isn't making minor errors I don't notice as much. Any suggestions?
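Not from the post: a minimal sketch of one common mitigation, moving the worked example out of the system body into its own clearly delimited user/assistant turns, assuming a llama.cpp-style OpenAI-compatible server that accepts `min_p`. Only min_p = 0.01 is from the post; the other sampling values and the model name are assumed.

```python
# Sketch of the prompt layout described above, with the few-shot example kept in
# separate, labelled turns instead of the system body. Assumes a llama.cpp-style
# OpenAI-compatible endpoint at localhost:8080 that passes min_p through.
import requests

system = (
    "You are a structured-output assistant. Generalise from the worked example "
    "but never copy its content ({{X}}) into your answer."
)

payload = {
    "model": "qwen3-32b",                      # whatever name the server exposes
    "messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": "EXAMPLE ONLY:\nThis contains {{X}}"},
        {"role": "assistant", "content": "This contains {{X}}"},
        {"role": "user", "content": "This contains {{Y, Z}}"},
    ],
    "temperature": 0.6,   # assumed "recommended" values; only min_p is from the post
    "top_p": 0.95,
    "min_p": 0.01,
}
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```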
2025-06-19T09:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1lf69bk/fewshot_examples_overfitting_leakage/
ROS_SDN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf69bk
false
null
t3_1lf69bk
/r/LocalLLaMA/comments/1lf69bk/fewshot_examples_overfitting_leakage/
false
false
self
0
null
Less than 2GB models Hallucinate on the first prompt itself in LM studio
0
I have tried 5 models that are less than 2 GB, and they keep repeating 4-5 lines again and again. I have an RTX 2060 with 6GB VRAM, 16GB RAM, and an 8-core/16-thread Ryzen. Models greater than 2GB in size run fine. I have tried changing temperature and model import settings, but nothing has worked so far.
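A hedged sketch of the usual knob for this failure mode, shown with llama-cpp-python rather than the LM Studio UI; the model path is a placeholder.

```python
# Minimal sketch (llama-cpp-python, not the LM Studio UI) of loading a small GGUF
# and nudging it away from looping the same lines with repeat_penalty.
from llama_cpp import Llama

llm = Llama(model_path="qwen2.5-1.5b-instruct-q4_k_m.gguf",  # placeholder path
            n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what VRAM is in two sentences."}],
    temperature=0.7,
    repeat_penalty=1.1,   # >1.0 discourages verbatim repetition
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```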
2025-06-19T09:20:35
https://v.redd.it/hkl35nfjpu7f1
HareMayor
v.redd.it
1970-01-01T00:00:00
0
{}
1lf6e1t
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hkl35nfjpu7f1/DASHPlaylist.mpd?a=1752916849%2CYzg3YjkwMTBiNmEyZWViMDNmZTk1YWFhNDgxN2JhMWEyYWRlODA5ODYwYjg3YmYxMjllODdlNDFjZjU3NDRjYw%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/hkl35nfjpu7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/hkl35nfjpu7f1/HLSPlaylist.m3u8?a=1752916849%2CMmMwMmJhYjM4MDJhYjg3ZmQ5NjA1OTBlM2FlNDFkMTdjYTdkM2FiZGJhOTA2YzA0ZDhiMzE3NThkMTYxMDE3MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hkl35nfjpu7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1lf6e1t
/r/LocalLLaMA/comments/1lf6e1t/less_than_2gb_models_hallucinate_on_the_first/
false
false
https://external-preview…bf2e51e0e79a74e7
0
{'enabled': False, 'images': [{'id': 'bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?width=108&crop=smart&format=pjpg&auto=webp&s=ba571f0e412edc79661b4e59c3e351a6cb21b414', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?width=216&crop=smart&format=pjpg&auto=webp&s=20484fba39d906beb51f3c4879708f8ca2c630a3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?width=320&crop=smart&format=pjpg&auto=webp&s=b16809e688a01a8326ce4580f4b5f07a8ec12708', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?width=640&crop=smart&format=pjpg&auto=webp&s=1f5d64da9c260714b4daa3754d4a89ba1f1a2323', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?width=960&crop=smart&format=pjpg&auto=webp&s=eaa6af8f24cdde8f1136e694a7779f3c618c12c7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ec99f1900bff0d50113127ec1458f2d2e228fd0d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?format=pjpg&auto=webp&s=f8255fc5b39957750ea26368e98c440114141583', 'width': 1920}, 'variants': {}}]}
Which Open-source VectorDB for storing ColPali/ColQwen embeddings?
4
Hi everyone, this is my first post in this subreddit, and I'm wondering if this is the best sub to ask this. I'm currently doing a research project that involves using ColPali embedding/retrieval modules for RAG. However, from my research, I found out that most vector databases are highly incompatible with the embeddings produced by ColPali, since ColPali produces multi-vectors and most vector dbs are more optimized for single-vector operations. I am still very inexperienced in RAG, and some of my findings may be incorrect, so please take my statements above about ColPali embeddings and VectorDBs with a grain of salt. I hope you could suggest a few free, open source vector databases that are compatible with ColPali embeddings along with some posts/links that describes the workflow. Thanks for reading my post, and I hope you all have a good day.
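For illustration only: ColPali-style retrieval scores a query against a page with late interaction (MaxSim) over many vectors per item, which is the part most single-vector stores don't natively handle. The shapes below are made up.

```python
# Illustrative late-interaction (MaxSim) scoring over ColPali-style multi-vector
# embeddings: one vector per query token, one per page patch.
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """query_vecs: (n_query_tokens, d); doc_vecs: (n_doc_patches, d); both L2-normalised."""
    sims = query_vecs @ doc_vecs.T            # (n_query_tokens, n_doc_patches)
    return float(sims.max(axis=1).sum())      # best patch per query token, summed

rng = np.random.default_rng(0)
def normed(shape):
    v = rng.normal(size=shape)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

query = normed((12, 128))                        # e.g. 12 query-token vectors
pages = [normed((1030, 128)) for _ in range(3)]  # e.g. 3 pages of patch vectors
scores = [maxsim_score(query, p) for p in pages]
print("best page:", int(np.argmax(scores)), scores)
```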
2025-06-19T09:36:20
https://www.reddit.com/r/LocalLLaMA/comments/1lf6m5i/which_opensource_vectordb_for_storing/
dafroggoboi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf6m5i
false
null
t3_1lf6m5i
/r/LocalLLaMA/comments/1lf6m5i/which_opensource_vectordb_for_storing/
false
false
self
4
null
Mixture Of Adversaries.
6
# Mixture of Adversaries (MoA) ## Intro I wanted to think of a system that would address the major issues preventing "mission critical" use of LLMs: **1. Hallucinations** * No internal "Devil's advocate" or consensus mechanism to call itself out with **2. Outputs tend to represent a "regression to the mean"** * Overly safe and bland outputs * Trends towards the most average answer, which doesn't work as well when a complex problem has multiple mutually-incompatible "correct" answers **3. Lack of cognitive dissonance in reasoning** * Currently, reasoning tokens look more like neurotic self-doubt when they should be more dialectic. * Not effective at reconciling 2 conflicting but strong ideas. * Leads to "both-sides'ing" and middling I came up with an idea for a model architecture that attempts to make up for these. I shared it a week ago on the OpenAI Discord, but the channel just moved on to kids whining about free tier limits, so I wanted to see what people thought about it (mainly so I can understand these concepts better). It's kinda like an asymmetrical MoE with phased inference strategies. ## **Adversaries and Arbitration** I predict the next major level up for LLMs will be something like MoE, but it'll be an MoA: a Mixture of Adversaries that are only trained on their ability to defeat other adversaries in the model's group. At run time the adversaries will round-robin their arguments (or perhaps make initial arguments in parallel) and will also vote, but they aren't voting for a winner; they are voting to eliminate an adversary. This repeats for several rounds until, at some predefined ratio of eliminated adversaries, another specialized expert (the arbitrator) steps in and focuses on consensus building between the stronger (remaining) adversaries. The adversaries still do what they do best, but there are no longer any eliminations; instead the arbitrator focuses on taking the strong (surviving) arguments and building a consensus until the token budget is hit for this weird negotiation on an answer. ## **The Speaker** The arbitrator "expert" will hand over the answer to the "Speaker", who is specialized for the sole task of interpreting the model's weird internal communication into natural language -> that's your output. The "speaker" is actually very important because the adversaries (and to a lesser degree the arbitrator) don't speak in natural language; it would be some internal language that is more like draft tokens and would emerge on its own from the training, not a pre-constructed language. This is done to reduce the explosion of tokens that would come from turning the model into a small government lol. The speaker could have a new, separate temperature parameter that controls how much liberty it takes with interpreting the "ruling". We could call it wet/dry. This is actually very necessary to ensure the answer checks all the subjective boxes a human might be looking for in a response (emotional intelligence and the like). ## **Challenges** Training will be difficult and may involve changing the MoE layout to temporarily have more arbitrators and speakers to maintain positive control over the adversaries, who would be at risk of misalignment if not carefully scrutinized. Also, sufficiently advanced adversaries might start to engage in *strategic voting*, where they aren't eliminating the weakest argument but are instead voting in a way that is aware of how others vote, to ensure the maximum amount of their take is part of the consensus.
- Perhaps they could be kept blind to certain aspects of the process to prevent perverse incentives. - Or, if we are building a slow "costs-be-damned" model, perhaps don't have them vote at all, and leave the voting up to the arbitrator or a "jury" of mini arbitrators. ## Conclusion Currently reasoning models just do this weird self-doubt thing, when what we really need is bona-fide cognitive dissonance, which doesn't have to be doubt-based; it can be adversarial between 2 or more strong (high-probability) but logically "incompatible-with-each-other" predictions. The major benefit of this approach is that it has the potential to generate high-quality answers that don't just represent a regression to the mean (bland and safe). This could actually be done as a multi-model agent, but we'd need the SOTA club to muster enough courage to make deliberately biased models.
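A toy control-flow sketch of the loop described above (round-robin arguments, elimination voting, arbitration, then a speaker); `argue`/`vote`/`arbitrate`/`speak` are placeholders, with random voting standing in for learned behaviour.

```python
# Toy control-flow sketch of the MoA idea: round-robin arguments, vote to
# eliminate, hand the survivors to an arbitrator once enough are gone, then let
# a "speaker" verbalise the result. Everything model-like is a stand-in.
import random

def run_moa(adversaries, prompt, elimination_ratio=0.5, rounds_budget=8):
    active = list(adversaries)
    stop_at = max(2, int(len(adversaries) * (1 - elimination_ratio)))
    arguments = {a: f"{a}: initial take on {prompt!r}" for a in active}   # argue()

    for _ in range(rounds_budget):
        if len(active) <= stop_at:
            break
        # Each adversary votes to eliminate one rival (random stand-in for vote()).
        votes = {a: 0 for a in active}
        for voter in active:
            target = random.choice([a for a in active if a != voter])
            votes[target] += 1
        eliminated = max(votes, key=votes.get)
        active.remove(eliminated)
        arguments.pop(eliminated)

    consensus = " | ".join(arguments[a] for a in active)                  # arbitrate()
    return f"[speaker] natural-language rendering of: {consensus}"        # speak()

print(run_moa(["adv_a", "adv_b", "adv_c", "adv_d"], "a mission-critical question"))
```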
2025-06-19T09:39:42
https://www.reddit.com/r/LocalLLaMA/comments/1lf6nvw/mixture_of_adversaries/
teleprax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf6nvw
false
null
t3_1lf6nvw
/r/LocalLLaMA/comments/1lf6nvw/mixture_of_adversaries/
false
false
self
6
null
Need help with finetuning
1
I need to finetune an open-source model to summarise and analyze very large context data (around 50000 tokens; I cannot decompose it into chunks). I need to do both SFT and reinforcement learning. Does anyone have experience with ORPO or DPO on very large contexts? ORPO, though it claims to use less memory because there is no reference model, still concatenates the chosen and rejected prompts and responses, using 4 times the memory. I have a single A100 GPU with 80 GB VRAM and cannot fit a single sequence for finetuning with ORPO (all batch sizes set to 1).
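Rough, assumption-heavy arithmetic for why a 50k-token preference pair is so much heavier than plain SFT on the same context; the per-token activation factor and model dimensions below are guesses, not measurements.

```python
# Very rough activation-memory arithmetic for a 50k-token preference pair.
# Real numbers depend on the model, attention kernel, and checkpointing.
def act_gib(tokens, hidden=8192, layers=80, bytes_per=2, factor=12):
    # ~factor * hidden bytes of saved activations per token per layer (assumed)
    return tokens * layers * hidden * bytes_per * factor / 1024**3

prompt = 50_000
single = act_gib(prompt)               # plain SFT on one sequence
pair = act_gib(prompt + 2_000) * 2     # chosen + rejected, each prompt + response

print(f"single 50k sequence : ~{single:.0f} GiB of activations")
print(f"ORPO/DPO pair       : ~{pair:.0f} GiB (before gradient checkpointing)")
```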
2025-06-19T10:47:11
https://www.reddit.com/r/LocalLLaMA/comments/1lf7ppq/need_help_with_finetuning/
Elemental_Ray
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf7ppq
false
null
t3_1lf7ppq
/r/LocalLLaMA/comments/1lf7ppq/need_help_with_finetuning/
false
false
self
1
null
"Cheap" 24GB GPU options for fine-tuning?
3
I'm currently weighing up options for a GPU to fine-tune larger LLMs (DeepSeek 70B), as well as give me reasonable performance in inference. I'm willing to compromise speed for card capacity. I was initially considering a 3090, but after some digging there seem to be a lot more NVIDIA cards that have potential (P40, etc.), and I'm a little overwhelmed.
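Back-of-envelope weight-memory arithmetic (assumptions only) for why a single 24 GB card cannot hold a 70B model even at 4-bit, before any optimizer state, adapters, or activations are counted.

```python
# Rough weight-only VRAM arithmetic for a 70B-parameter model at common widths.
def weights_gib(params_b: float, bits: float) -> float:
    return params_b * 1e9 * bits / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"70B @ {bits:>2}-bit: ~{weights_gib(70, bits):.0f} GiB of weights")
# -> ~130, ~65, ~33 GiB: even 4-bit exceeds a single 24 GB card, which pushes
#    70B fine-tuning toward multi-GPU setups or heavy offloading.
```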
2025-06-19T10:55:55
https://www.reddit.com/r/LocalLLaMA/comments/1lf7ux8/cheap_24gb_gpu_options_for_finetuning/
deus119
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf7ux8
false
null
t3_1lf7ux8
/r/LocalLLaMA/comments/1lf7ux8/cheap_24gb_gpu_options_for_finetuning/
false
false
self
3
null
🧠 Lost in the Mix: How Well Do LLMs Understand Code-Switched Text?
3
A new preprint takes a deep dive into the blind spot of multilingual LLMs: **code-switching**—where two or more languages are mixed within the same sentence or discourse. 📄 ["Lost in the Mix: Evaluating LLM Understanding of Code-Switched Text"](https://arxiv.org/abs/2506.14012v1) Key insights: * ⚠️ Embedding *non-English* words into English sentences consistently degrades LLM performance—even with linguistically valid switches. * ✅ Embedding *English* into non-English sentences often improves performance. * 🧪 Fine-tuning on code-switched data mitigates performance drops more reliably than prompting. * 🧬 Code-switching complexity (more languages, mixed scripts) doesn't linearly correlate with worse results. Benchmarks used include **Belebele**, **MMLU**, and **XNLI**, with code-switched versions constructed using theoretical constraints. 🔗 Full preprint: [2506.14012](https://arxiv.org/abs/2506.14012) 💾 Code & data: [GitHub repo](https://github.com/amr-mohamedd/Lost-in-the-Mix) If you're working on multilingual LLMs, robustness, or sociolinguistic NLP, this is worth a read.
2025-06-19T11:00:03
https://www.reddit.com/r/LocalLLaMA/comments/1lf7xdm/lost_in_the_mix_how_well_do_llms_understand/
Ok-Cut-3551
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf7xdm
false
null
t3_1lf7xdm
/r/LocalLLaMA/comments/1lf7xdm/lost_in_the_mix_how_well_do_llms_understand/
false
false
self
3
null
I have an HP workstation running a Xeon E5-2699 v4. I would like to add 4 P40s and would like to know if this is possible.
0
It is a Z440. Here is a picture of the motherboard. What adapters and such would I need to get 4 P40s to work? I could run two power supplies if that would help. https://preview.redd.it/bycisoz59v7f1.jpg?width=4000&format=pjpg&auto=webp&s=46a9b06fa0090ed3720d24b588e3ebce8fcd3aaa
2025-06-19T11:06:52
https://www.reddit.com/r/LocalLLaMA/comments/1lf81mp/i_have_a_hp_workstation_running_a_xeon_e5_2699v4/
tbandtg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf81mp
false
null
t3_1lf81mp
/r/LocalLLaMA/comments/1lf81mp/i_have_a_hp_workstation_running_a_xeon_e5_2699v4/
false
false
https://a.thumbs.redditm…3GnzinySPt14.jpg
0
null
Kyutai Speech-To-Text (STT 1B and STT 2.6B)
1
Kyutai STT: A speech-to-text model optimized for real-time usage: [https://kyutai.org/next/stt](https://kyutai.org/next/stt) kyutai/stt-1b-en\_fr: [https://huggingface.co/kyutai/stt-1b-en\_fr](https://huggingface.co/kyutai/stt-1b-en_fr) kyutai/stt-2.6b-en: [https://huggingface.co/kyutai/stt-2.6b-en](https://huggingface.co/kyutai/stt-2.6b-en) GitHub repository: [https://github.com/kyutai-labs/delayed-streams-modeling/](https://github.com/kyutai-labs/delayed-streams-modeling/) From kyutai on 𝕏: [https://x.com/kyutai\_labs/status/1935652243119788111](https://x.com/kyutai_labs/status/1935652243119788111)
2025-06-19T11:31:39
https://v.redd.it/mxcthq2wbv7f1
Nunki08
v.redd.it
1970-01-01T00:00:00
0
{}
1lf8h8f
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mxcthq2wbv7f1/DASHPlaylist.mpd?a=1752924716%2CNDYyNmU1ZGQyNzVjZmEyMDM2MGY5NGZkYjk3NzQ3OTJhZGQ1ZmU0NmExOWE4OTExZDUzMjNlY2M5NGE2ZTE3Nw%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/mxcthq2wbv7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/mxcthq2wbv7f1/HLSPlaylist.m3u8?a=1752924716%2COGE1MmY1NTNhNDQ0NzM0YTFmY2Y2NjQzYThjODc1MjY4MGYyNmRhZmYyYTI3YzlkYzc4Yzc4Mzg0MmNhNTYzZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mxcthq2wbv7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1lf8h8f
/r/LocalLLaMA/comments/1lf8h8f/kyutai_speechtotext_stt_1b_and_stt_26b/
false
false
https://external-preview…8d982cd0db78b8c0
1
{'enabled': False, 'images': [{'id': 'NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=108&crop=smart&format=pjpg&auto=webp&s=b720638c1baec415bb29802edca13f28ed9c95cd', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=216&crop=smart&format=pjpg&auto=webp&s=14a921834beab6a7f204703b16e09a62abf31543', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=320&crop=smart&format=pjpg&auto=webp&s=bde691a210dd6dc08f81dac5b166f32ff3e0a639', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=640&crop=smart&format=pjpg&auto=webp&s=7427a12690c2884b7a6951ba07828738ae377584', 'width': 640}, {'height': 1707, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=960&crop=smart&format=pjpg&auto=webp&s=c6e747042ef60693ec7e07b3483504e0803de58c', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=05793ca59dd49e6928dd4be31744312629b29573', 'width': 1080}], 'source': {'height': 2276, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?format=pjpg&auto=webp&s=95a8f9abaeef5151183a0fb85c943fe3b65a3651', 'width': 1280}, 'variants': {}}]}
Kyutai Speech-To-Text (STT 1B and STT 2.6B)
1
Kyutai STT: A speech-to-text model optimized for real-time usage: [https://kyutai.org/next/stt](https://kyutai.org/next/stt) kyutai/stt-1b-en\_fr: [https://huggingface.co/kyutai/stt-1b-en\_fr](https://huggingface.co/kyutai/stt-1b-en_fr) kyutai/stt-2.6b-en: [https://huggingface.co/kyutai/stt-2.6b-en](https://huggingface.co/kyutai/stt-2.6b-en) GitHub repository: [https://github.com/kyutai-labs/delayed-streams-modeling/](https://github.com/kyutai-labs/delayed-streams-modeling/)
2025-06-19T11:39:05
https://v.redd.it/qvkso7b6ev7f1
Nunki08
v.redd.it
1970-01-01T00:00:00
0
{}
1lf8m3j
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qvkso7b6ev7f1/DASHPlaylist.mpd?a=1752925160%2CYjc1NGQ4MzllODkyY2RhMzlhZDU2MGYwNWMzYTI5ZjcyNDZmMjk1MDlhMWQ2ODY0NDAxMGMyYzM3NWI4N2QyNQ%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/qvkso7b6ev7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/qvkso7b6ev7f1/HLSPlaylist.m3u8?a=1752925160%2CZmRlMjVjNmRmODMwYWEzY2M5MjE5ZGY4NjljODZjMjk5OWZhNGJlMWVkNmEyYTBiZGE1ZWM5YTMwY2UzYWFjYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qvkso7b6ev7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1lf8m3j
/r/LocalLLaMA/comments/1lf8m3j/kyutai_speechtotext_stt_1b_and_stt_26b/
false
false
https://external-preview…7bd3b41c4130ee95
1
{'enabled': False, 'images': [{'id': 'NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=108&crop=smart&format=pjpg&auto=webp&s=e26c6a65a553eca662a60e66d3808c5370a28f68', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=216&crop=smart&format=pjpg&auto=webp&s=fdcc84987a23ee17392c18fe9a82695a6cba90a3', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=320&crop=smart&format=pjpg&auto=webp&s=1ca83549ab0a5118d0adca2894393df7f05b4173', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=640&crop=smart&format=pjpg&auto=webp&s=549e528fc0c6d8b3c1c6bd794aeebde473767406', 'width': 640}, {'height': 1707, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=960&crop=smart&format=pjpg&auto=webp&s=0d257784799dce430ed4c6c65554cf35fd20f177', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c879b14e4bf58babeebcd372b60f4ae4a8a15158', 'width': 1080}], 'source': {'height': 2276, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?format=pjpg&auto=webp&s=7b619a34a79eb46d3715da9bcf59e7d35e88d7d3', 'width': 1280}, 'variants': {}}]}
Kyutai Speech-To-Text (STT 1B and STT 2.6B)
1
[removed]
2025-06-19T11:48:16
https://i.redd.it/2ytns4pmgv7f1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1lf8s02
false
null
t3_1lf8s02
/r/LocalLLaMA/comments/1lf8s02/kyutai_speechtotext_stt_1b_and_stt_26b/
false
false
https://external-preview…1b0286a0af20c0b9
1
{'enabled': True, 'images': [{'id': 'AY9vrwbgWgnFsbCkEPMPj5n4vk9It8_LAjPFdOZF2Ms', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/2ytns4pmgv7f1.jpeg?width=108&crop=smart&auto=webp&s=322c6be20be877bd949756520db41d4e188fad28', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/2ytns4pmgv7f1.jpeg?width=216&crop=smart&auto=webp&s=c5474c98d2790319ca5cd2e1b8d1ae6ffe54771a', 'width': 216}, {'height': 358, 'url': 'https://preview.redd.it/2ytns4pmgv7f1.jpeg?width=320&crop=smart&auto=webp&s=3426608b5dab34440881581a7555dd02708d0ddd', 'width': 320}, {'height': 717, 'url': 'https://preview.redd.it/2ytns4pmgv7f1.jpeg?width=640&crop=smart&auto=webp&s=3a4e3fa102fade3363724a4e370f6974e4e03506', 'width': 640}], 'source': {'height': 811, 'url': 'https://preview.redd.it/2ytns4pmgv7f1.jpeg?auto=webp&s=626139d433125e68b200e66ceda5584e1cb0b956', 'width': 723}, 'variants': {}}]}
Kyutai new Speech-To-Text models (STT 1B and STT 2.6B)
1
[removed]
2025-06-19T11:50:37
https://i.redd.it/7n7oe4l0hv7f1.jpeg
Nunki08
i.redd.it
1970-01-01T00:00:00
0
{}
1lf8tjs
false
null
t3_1lf8tjs
/r/LocalLLaMA/comments/1lf8tjs/kyutai_new_speechtotext_models_stt_1b_and_stt_26b/
false
false
default
1
{'enabled': True, 'images': [{'id': '7n7oe4l0hv7f1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/7n7oe4l0hv7f1.jpeg?width=108&crop=smart&auto=webp&s=62369d70b00ce933afbc36bf159dbdb859ab3a36', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/7n7oe4l0hv7f1.jpeg?width=216&crop=smart&auto=webp&s=dfc111fe02277fdafa9eca6f25a11a407245a018', 'width': 216}, {'height': 358, 'url': 'https://preview.redd.it/7n7oe4l0hv7f1.jpeg?width=320&crop=smart&auto=webp&s=2acc2c59bf60490d3887012ade020585e9196007', 'width': 320}, {'height': 717, 'url': 'https://preview.redd.it/7n7oe4l0hv7f1.jpeg?width=640&crop=smart&auto=webp&s=5da2c9d44ed4dd072afa9e5017ee4e5f37888eca', 'width': 640}], 'source': {'height': 811, 'url': 'https://preview.redd.it/7n7oe4l0hv7f1.jpeg?auto=webp&s=a2a6f9a5cb0d17c209cfbb4b063c66dd72d28a8f', 'width': 723}, 'variants': {}}]}
Help me pick a PDF to Markdown/JSON converter pleaseeee
0
I’m trying to pick an OCR or document parsing tool, but the market’s noisy and hard to compare (everyone's benchmark says they're the best). Also LLMs are expensive. If you’ve worked with any, would love your input. What’s your primary use case or workflow involving document parsing or understanding? Which tools or services are you currently using or have evaluated for document parsing or OCR? What challenges or limitations have you run into with your current or past approach? Why did you decide not to move forward with tools you’ve tried (if any)? What are the top 2–3 things that matter most to you when choosing a tool like this? What’s your typical monthly budget (or budget range) for document processing infrastructure?
2025-06-19T12:09:24
https://www.reddit.com/r/LocalLLaMA/comments/1lf96ez/help_me_pick_a_pdf_to_markdownjson_converter/
Ordinary_Quantity_68
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf96ez
false
null
t3_1lf96ez
/r/LocalLLaMA/comments/1lf96ez/help_me_pick_a_pdf_to_markdownjson_converter/
false
false
self
0
null
I just shipped an AI Voice Agent that replaced the entire cold calling team
0
https://preview.redd.it/ap25f767nv7f1.png?width=1375&format=png&auto=webp&s=30462d11f4685b74033a9c2bc34abe0c122ca001 Most automated-call setups are glorified IVRs: * No real outbound calls * Freeze at objections * Can't lock meetings or send follow-ups by email * Definitely can't close deals or trigger payments So I built a smarter one with **a NO CODE voice agent** with 6 plugins. Rolled it out last week for a mid-size healthcare clinic, and here's what it handles for them now: * **24/7 inbound:** every call answered, zero hold music. * **Smart triage:** checks doctor availability, books the slot, sends a calendar invite, then emails + messages the patient the details. * **Post-visit feedback:** calls back after the appointment, grabs NPS in under a minute. Under the hood it's the same multi-agent stack I use for outbound SDR work: **Superu AI** grabs form data, scrapes public info, writes context-aware scripts on the fly, branches when the caller changes topic, and logs everything to the CRM. My role? Building an agent that talks is just a few minutes of work. Shaping the agent to handle the queries, random questions, and detailed info on the topic is all done through prompting, which took me 3 days of trial and error to make it talk like this. Of course it can be done better; just spend more time refining your prompt. **Week-one stats:** zero missed calls, 72% booking rate, receptionist finally free to help walk-ins. I can see a lot of business opportunities for folks like us; even dealing with local businesses can make us good bucks.
2025-06-19T12:24:58
https://www.reddit.com/r/LocalLLaMA/comments/1lf9hq8/i_just_shipped_an_ai_voice_agent_that_replaced/
Agile_Baseball8351
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf9hq8
false
null
t3_1lf9hq8
/r/LocalLLaMA/comments/1lf9hq8/i_just_shipped_an_ai_voice_agent_that_replaced/
false
false
https://b.thumbs.redditm…AjvqxzeynkZM.jpg
0
null
Explain AI and MCP to a 5 year old in the 90s
116
2025-06-19T12:45:32
https://www.reddit.com/gallery/1lf9wof
cov_id19
reddit.com
1970-01-01T00:00:00
0
{}
1lf9wof
false
null
t3_1lf9wof
/r/LocalLLaMA/comments/1lf9wof/explain_ai_and_mcp_to_a_5_year_old_in_the_90s/
false
false
https://external-preview…4a1f54db2087dd61
116
{'enabled': True, 'images': [{'id': '64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8.png?width=108&crop=smart&auto=webp&s=f36552420d32e54898af3ff5799e89deef32d887', 'width': 108}, {'height': 289, 'url': 'https://external-preview.redd.it/64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8.png?width=216&crop=smart&auto=webp&s=65148baa4cdbdc3bf85284864a748e46254473df', 'width': 216}, {'height': 428, 'url': 'https://external-preview.redd.it/64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8.png?width=320&crop=smart&auto=webp&s=ccd178322d6dc088b037b9ed96923c9b1b375622', 'width': 320}, {'height': 857, 'url': 'https://external-preview.redd.it/64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8.png?width=640&crop=smart&auto=webp&s=edf2b1569a37ddebb64bb52fbe85643cb9a0b4ec', 'width': 640}, {'height': 1285, 'url': 'https://external-preview.redd.it/64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8.png?width=960&crop=smart&auto=webp&s=fdbbedea359c7e4c555daa7c9a0a914e1db836d6', 'width': 960}], 'source': {'height': 1310, 'url': 'https://external-preview.redd.it/64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8.png?auto=webp&s=b1f1d4ea0999af201f055268d51f58184c550dea', 'width': 978}, 'variants': {}}]}
Chatbox AI Delisted from iOS App Store. Any good alternatives?
1
Not sure why it got delisted.. https://chatboxai.app/en What do you use to connect back to Llamacpp/Kobold/LM Studio? Most of the apps require a ton of permissions.
2025-06-19T12:49:29
https://www.reddit.com/r/LocalLLaMA/comments/1lf9zph/chatbox_ai_delisted_from_ios_app_store_any_good/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf9zph
false
null
t3_1lf9zph
/r/LocalLLaMA/comments/1lf9zph/chatbox_ai_delisted_from_ios_app_store_any_good/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=108&crop=smart&auto=webp&s=279a09b67459be926a08944e6c9ea50312a63a5f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=216&crop=smart&auto=webp&s=71854addc096fa99604724a66b8d210353b93453', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=320&crop=smart&auto=webp&s=bb617badf513ed94cfbeb3e2cbe71000e2592028', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=640&crop=smart&auto=webp&s=b440db62ce02b358f27186c315e179af0a46a940', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=960&crop=smart&auto=webp&s=f3fbbeab38b3b11b83f247b980db93408afdf989', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=1080&crop=smart&auto=webp&s=a139fe4e602dba248adc60f4bda3146ef969fd06', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?auto=webp&s=292ee07d261c9960edd9e6bc5216b120e3ca8c70', 'width': 1200}, 'variants': {}}]}
Hallucination?
0
Can someone help me out? I'm using Msty, and no matter which local model I use, it generates incorrect responses. I've tried reinstalling too, but it doesn't work.
2025-06-19T13:06:03
https://i.redd.it/2hn0v2r9uv7f1.png
Sussymannnn
i.redd.it
1970-01-01T00:00:00
0
{}
1lfacep
false
null
t3_1lfacep
/r/LocalLLaMA/comments/1lfacep/hallucination/
false
false
https://external-preview…59c9c5dd4210fb92
0
{'enabled': True, 'images': [{'id': 'q_YaUOnuuBrX5ZRPSRfHrtrjtGvzbh1mafhvkkEcuP4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?width=108&crop=smart&auto=webp&s=1b2816651367eedfab6f88151cde3cd10da3294f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?width=216&crop=smart&auto=webp&s=f16cf2629a13c3e9f1c7ebd0d26cf0898b69a885', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?width=320&crop=smart&auto=webp&s=201494cd0c30020f3093ad55f0c0cbe40ecc7a01', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?width=640&crop=smart&auto=webp&s=935ead8d9530b56cc46ddde786717df2ce26282a', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?width=960&crop=smart&auto=webp&s=3dde4138b26acb256d1e1afe53a155de1be30474', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?width=1080&crop=smart&auto=webp&s=7c02cfc2067f22ed05265649e423fe85bec14189', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?auto=webp&s=1176b333b628e15b2a81d876279943ff7f60e310', 'width': 1920}, 'variants': {}}]}
Local AI setup 1x5090, 5x3090
32
**What I’ve been building lately: a local multi-model AI stack that’s getting kind of wild (in a good way)** Been heads-down working on a local AI stack that’s all about fast iteration and strong reasoning, fully running on consumer GPUs. It’s still evolving, but here’s what the current setup looks like: # 🧑‍💻 Coding Assistant **Model:** Devstral Q6 on LMStudio **Specs:** Q4 KV cache, 128K context, running on a 5090 Getting \~72 tokens/sec and still have 4GB VRAM free. Might try upping the quant if quality holds, or keep it as-is to push for a 40K token context experiment later. # 🧠 Reasoning Engine **Model:** Magistral Q4 on LMStudio **Specs:** Q8 KV cache, 128K context, running on a single 3090 Tuned more for heavy-duty reasoning tasks. Performs effectively up to 40K context. # 🧪 Eval + Experimentation Using local Arize Phoenix for evals, tracing, and tweaking. Super useful to visualize what’s actually happening under the hood. # 📁 Codebase Indexing **Using:** Roo Code * Qwen3 8B embedding model, FP16, 40K context, 4096D embeddings * Running on a dedicated 3090 * Talking to Qdrant (GPU mode), though having a minor issue where embedding vectors aren’t passing through cleanly—might just need to dig into what’s getting sent/received. * Would love a way to dedicate *part* of a GPU just to embedding workloads. Anyone done that? ✅ Indexing status: green # 🔜 What’s next * Testing **Kimi-Dev 72B** (EXL3 quant @ 5bpw, layer split) across 3x3090s—two for layers, one for the context window—via TextGenWebUI or vLLM on WSL2 * Also experimenting with an **8B reranker model** on a single 3090 to improve retrieval quality, still playing around with where it best fits in the workflow This stack is definitely becoming a bit of a GPU jungle, but the speed and flexibility it gives are worth it. If you're working on similar local inference workflows—or know a good way to do smart GPU assignment in multi-model setups—I’m super interested in this one challenge: **When a smaller model fails (say, after 3 tries), auto-escalate to a larger model with the same context, and save the larger model’s response as a reference for the smaller one in the future.** Would be awesome to see something like that integrated into Roo Code.
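A sketch of the escalate-after-failures idea from the post, assuming two OpenAI-compatible local endpoints; the URLs, model names, and the `is_good_enough` check are placeholders.

```python
# Sketch of "try the small model, escalate after 3 failures, keep the big
# model's answer as a future reference". Endpoints and models are placeholders.
import requests

SMALL = ("http://localhost:1234/v1/chat/completions", "devstral-q6")
LARGE = ("http://localhost:1235/v1/chat/completions", "kimi-dev-72b")
reference_store: dict[str, str] = {}   # prompt -> good answer from the large model

def ask(endpoint, model, prompt):
    r = requests.post(endpoint,
                      json={"model": model,
                            "messages": [{"role": "user", "content": prompt}]},
                      timeout=300)
    return r.json()["choices"][0]["message"]["content"]

def is_good_enough(answer: str) -> bool:
    return len(answer.strip()) > 0            # stand-in for a real validator

def answer_with_escalation(prompt: str, retries: int = 3) -> str:
    hint = reference_store.get(prompt)
    small_prompt = f"{prompt}\n\nReference from a stronger model:\n{hint}" if hint else prompt
    for _ in range(retries):
        out = ask(*SMALL, small_prompt)
        if is_good_enough(out):
            return out
    out = ask(*LARGE, prompt)                 # escalate with the same context
    reference_store[prompt] = out             # save as a reference for next time
    return out
```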
2025-06-19T14:08:38
https://www.reddit.com/r/LocalLLaMA/comments/1lfbqgw/local_ai_setup_1x5090_5x3090/
Emergency_Fuel_2988
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfbqgw
false
null
t3_1lfbqgw
/r/LocalLLaMA/comments/1lfbqgw/local_ai_setup_1x5090_5x3090/
false
false
self
32
null
Choosing the best cloud LLM provider
3
Between Google Colab and other cloud providers for open-source LLMs, do you think it is the best option? I'd also like your opinions on other cheap but good options.
2025-06-19T14:25:07
https://www.reddit.com/r/LocalLLaMA/comments/1lfc49l/choosing_the_best_cloud_llm_provider/
Glad_Net8882
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfc49l
false
null
t3_1lfc49l
/r/LocalLLaMA/comments/1lfc49l/choosing_the_best_cloud_llm_provider/
false
false
self
3
null
OpenAI Post - Toward understanding and preventing misalignment generalization
0
They are saying that training a single/narrow 'misaligned persona' can generalize to cause the model at large to be unethical. I'm curious if this may be related to when you train such a persona (a previous Meta paper suggested that the initial training, up to 3-ish bits per parameter, is memorization before it goes more into generalization). Secondly, can you simply train a bad mechanic instead of abliteration?
2025-06-19T14:27:17
https://openai.com/index/emergent-misalignment/
noage
openai.com
1970-01-01T00:00:00
0
{}
1lfc64h
false
null
t3_1lfc64h
/r/LocalLLaMA/comments/1lfc64h/openai_post_toward_understanding_and_preventing/
false
false
default
0
null
First External Deployment Live — Cold Starts Solved Without Keeping GPUs Always On
4
Thanks to this community for all the feedback in earlier threads . we just completed our first real-world pilot of our snapshot-based LLM runtime. The goal was to eliminate idle GPU burn without sacrificing cold start performance. In this setup: •Model loading happens in under 2 seconds •Snapshot-based orchestration avoids full reloads •Deployment worked out of the box with no partner infra changes •Running on CUDA 12.5.1 across containerized GPUs The pilot is now serving inference in a production-like environment, with sub-second latency post-load and no persistent GPU allocation. We’ll share more details soon (possibly an open benchmark), but just wanted to thank everyone who pushed us to refine it here. if anyone is experimenting with snapshotting or alternate loading strategies beyond vLLM/LLMCache, would love to discuss. Always learning from this group.
2025-06-19T14:59:17
https://i.redd.it/n8xdwq2tew7f1.jpeg
pmv143
i.redd.it
1970-01-01T00:00:00
0
{}
1lfcycb
false
null
t3_1lfcycb
/r/LocalLLaMA/comments/1lfcycb/first_external_deployment_live_cold_starts_solved/
false
false
default
4
{'enabled': True, 'images': [{'id': 'n8xdwq2tew7f1', 'resolutions': [{'height': 167, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=108&crop=smart&auto=webp&s=dd0fcf67698276cc7d87e0f17d48faab3784b962', 'width': 108}, {'height': 335, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=216&crop=smart&auto=webp&s=774ccab2c4ef592c59f585c44c1d9ff2a0bd8594', 'width': 216}, {'height': 496, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=320&crop=smart&auto=webp&s=025166d31033ea59fd07597eef062a2b57113a12', 'width': 320}, {'height': 993, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=640&crop=smart&auto=webp&s=f8201d98c8900d95770a461ff73d42d90f036e8d', 'width': 640}, {'height': 1489, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=960&crop=smart&auto=webp&s=b910a30f8498aa226ed6ef3aceb80b71e10a2b67', 'width': 960}, {'height': 1676, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=1080&crop=smart&auto=webp&s=ae7a60d655152998802aefc3e0aa5e73fa6ac0fa', 'width': 1080}], 'source': {'height': 1746, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?auto=webp&s=189e7fd69237704b18b16158b14532d33bb50d2b', 'width': 1125}, 'variants': {}}]}
Has anyone tried the new ICONN-1 (an Apache licensed model)
20
A post was made by the creators on the Huggingface subreddit. I haven’t had a chance to use it yet. Has anyone else? It isn’t clear at a quick glance if this is a dense model or MoE. The description mentions MoE so I assume it is, but no discussion on the expert size. Supposedly this is a new base model, but I wonder if it’s a ‘MoE’ made of existing Mistral models. The creator mentioned spending 50k on training it in the huggingface subreddit post.
2025-06-19T15:09:13
https://huggingface.co/ICONNAI/ICONN-1
silenceimpaired
huggingface.co
1970-01-01T00:00:00
0
{}
1lfd7e2
false
null
t3_1lfd7e2
/r/LocalLLaMA/comments/1lfd7e2/has_anyone_tried_the_new_iconn1_an_apache/
false
false
default
20
{'enabled': False, 'images': [{'id': 'SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=108&crop=smart&auto=webp&s=5b2c6b95c12457e1084b5bb7a75f8669279c2f8e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=216&crop=smart&auto=webp&s=f9eb0d414c078e8aa85961a695b2e8d082c434f9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=320&crop=smart&auto=webp&s=75f16b47dd01180b406088035888cb6023824bf2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=640&crop=smart&auto=webp&s=ba21b522cf50bc1f2cabea8c087c1617d7ee467b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=960&crop=smart&auto=webp&s=5388c5b5fd2ec854b3bfb79579a31afb5b5a6792', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=1080&crop=smart&auto=webp&s=8bd16f77e8ed7b51ab93807a096efdf4cab7a7de', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?auto=webp&s=786aff36073291161494d80dedfb9c85099d3442', 'width': 1200}, 'variants': {}}]}
Is there a way that I can have an LLM or some kind of vision model identify different types of animals on a low-power device like a Pi?
7
At my job there's an issue of one kind of animal eating all the food meant for another kind of animal. For instance, there will be a deer feeder but the goats will find it and live by the feeder. I want the feeder to identify the type of animal before activating. I can do this with a PC, but some of these feeders are in remote areas without hundreds of watts of power. If I can do it with a pi, even if it takes a minute to process, it would save a bunch of money from being wasted on making goats fat.
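A hedged sketch of the low-power path: a small image classifier via `tflite-runtime` on the Pi, no LLM needed. The model file, label list, and input dtype are placeholders for whatever classifier or detector ends up being used.

```python
# Sketch of on-device classification with tflite-runtime on a Pi.
# "animal_classifier.tflite" and LABELS are placeholders.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

LABELS = ["deer", "goat", "other"]            # placeholder labels

interpreter = Interpreter(model_path="animal_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, h, w, _ = inp["shape"]

def classify(path: str) -> str:
    img = Image.open(path).convert("RGB").resize((w, h))
    x = np.expand_dims(np.asarray(img, dtype=np.uint8), 0)   # assumes a uint8 model
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))]

if classify("trailcam_frame.jpg") == "deer":
    print("activate feeder")
```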
2025-06-19T15:09:29
https://www.reddit.com/r/LocalLLaMA/comments/1lfd7m6/is_there_a_way_that_i_can_have_a_llm_or_some_kind/
Red_Redditor_Reddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfd7m6
false
null
t3_1lfd7m6
/r/LocalLLaMA/comments/1lfd7m6/is_there_a_way_that_i_can_have_a_llm_or_some_kind/
false
false
self
7
null
Best offline image processor model?
2
I want to be able to set up an image processor that can distinguish what car is what.. make and model
2025-06-19T15:16:30
https://www.reddit.com/r/LocalLLaMA/comments/1lfddq1/best_offline_image_processor_model/
chiknugcontinuum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfddq1
false
null
t3_1lfddq1
/r/LocalLLaMA/comments/1lfddq1/best_offline_image_processor_model/
false
false
self
2
null
Browser-based tool to record, transcribe, and summarise your audio notes/meetings — all locally, no uploads
0
Built a website to capture meetings, transcribe and summarise them. Record multiple audio clips into a single session. Transcribe directly in the browser using Whisper. Summarise the full session using Ollama or LM Studio. Customised system prompts to suit your summarisation needs. Cloud based options for transcription and summarisation (bring your own key or use site credits). Optional cloud based recordings sync between logged in devices. Record on your phone, transcribe and summarise on your workhorse GPU beast. Still a work in progress. Let me know what you think. [https://summarise.pro](https://summarise.pro)
2025-06-19T15:28:23
https://www.reddit.com/r/LocalLLaMA/comments/1lfdo8q/browserbased_tool_to_record_transcribe_and/
schawla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfdo8q
false
null
t3_1lfdo8q
/r/LocalLLaMA/comments/1lfdo8q/browserbased_tool_to_record_transcribe_and/
false
false
self
0
null
low cost egpu HW setup (DIY build from random parts config or otherwise) options / questions / suggestions?
1
1: Simplest question -- if one has a modern LINUX(!) system with USB3.x ports without possible thunderbolt / PCIE tunneling, is there a technically reasonable option to connect egpus for inference over a USB 3.x 5 / 10 / 20 Gbps port? I assume there are things like USB based PCIE root complex controller ICs which could be used just like USB3 to NVME controllers but I've never heard of this being used for an eGPU or whether the drivers / chipsets are so bad / limited or the bandwidth so bad that it wouldn't be worthwhile. The simplest configurations I've heard of use PCIE over TB which obviously is more straightforward. So are all these DIY frankenstein DIY multi-GPU cages I see people build using naked eGPU "boards" connecting those over thunderbolt / PCIE or do usefully good ones instead / also take USB3? What should I look for for adapter board models / chipsets / cables / whatever to work with modern LINUX 6.5 kernel or whatever? 2: I have also never seen common after-market TB/USB4 controller cards that go into PCIE x4/x8 slots so I assume it's expensive / impossible / uncommon to try to go that route to get attachment to a TB/USB4 in to PCIE x4/x8/x16 output egpu "board"? 3: So whenever I've looked in the past dedicated off the shelf eGPU chassis enclosures were expensive / limited etc. Has it changed now and there are generic / commodity / inexpensive eGPU enclosures which one would sanely put a P40 / 3090 / 4090 / 5090 GPU in without worries about fit / thermals / ventillation / short circuits / fire etc.? 4: So what's the story with off the shelf enclosures or "DIY kits" for eGPUs -- I've got no problems picking out a PC ATX PSU I'd trust to run a DGPU, corsair, evga, whatever. So are there enclosure options besides just DIYing an empty ATX case + ATX PSU to house one or more EGPUs while using a standard "bring your own" ATX PSU? Or is a good / inexpensive approach to just use an ATX chassis / PSU for housing a DIY EGPU expansion? 5: Is there any good reason I should look at ready made eGPU enclosures which are integrating fans / PSU etc. for housing one or more DGPUs like say 3090 class or are they all going to be more expensive / less trustworthy (power, thermal, electric) than DIY based on ATX parts (assuming appearance / size / portability is no concern)? What would even be the most worthwhile "made to be an egpu chassis" product to look at from what sources if that's even relevant vs. full DIY? 6: If I have a desktop with a free x4/x8 PCIE slot obviously there are other alternatives like oculink and I think a couple others for connecting PCIE out of an ATX chassis from a PCIE slot over a 0.3-1m cable to an external chassis. What technologies / parts / board models / cable models / suppliers should I look at here? Is there any useful "flexible" configuration where the GPU side enclosure can accept multiple options e.g. EITHER USB3 / USB4 / TB / oculink / whatever else so one can connect any desktop / laptop easily? Or is that just uncommon / expensive / needless etc.?
2025-06-19T15:43:27
https://www.reddit.com/r/LocalLLaMA/comments/1lfe1jt/low_cost_egpu_hw_setup_diy_build_from_random/
Calcidiol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfe1jt
false
null
t3_1lfe1jt
/r/LocalLLaMA/comments/1lfe1jt/low_cost_egpu_hw_setup_diy_build_from_random/
false
false
self
1
null
Skywork/Skywork-SWE-32B · Hugging Face
1
***Skywork-SWE-32B*** is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks.
2025-06-19T15:43:45
https://huggingface.co/Skywork/Skywork-SWE-32B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1lfe1sm
false
null
t3_1lfe1sm
/r/LocalLLaMA/comments/1lfe1sm/skyworkskyworkswe32b_hugging_face/
false
false
default
1
null
How do you size hardware
1
(My background: 25 years in tech, software engineer with lots of hardware/sysadmin experience.) I'm working with a tech-for-good startup and have created a chatbot app for them, which has some small specific tools (data validation and posting to an API). I've had a lot of success with gemma3:12b-it-qat (but haven't started the agent work yet); I'm running Ollama locally with 32GB + RTX 2070 (we don't judge)... I'm going to try larger models as soon as I get an extra 32GB of RAM installed properly! We'd like to self-host our MVP LLM, because money is really tight (current budget of £5k) and during this phase users are only signing up and doing some personalisation, all via the chatbot; it's more of a demo than a usable product at this point but is important to collect feedback and gain traction. I'd like to know what sort of hardware we'd need to self-host. I'm expecting 300-1000 users who are quite inactive. An Nvidia DGX Spark says it can handle up to 200B parameters, although everyone seems to think they will be quite slow, and it's also not due until July... however, the good thing is two can be linked together, so an easy upgrade. We obviously don't want to waste our money, so we are looking for something with some scale potential. My questions are: 1. What can we afford (£5k) that would run our current model for 5-10 daily active users? 2. Same as above but going up to a 27B model. 3. What should we be buying (i.e. if our budget was up to £15k)? 4. Does anyone know what sort of cost this would be in a cloud environment? AWS g4dn.xlarge starts at $2700/pa, but I've no idea how it would perform. 5. Any insight on how to calculate this myself would be really appreciated. Many thanks
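One way to do the calculation asked for in question 5: weights plus per-user KV cache. Every constant below is a rough assumption, not a benchmark, and real serving frameworks add overhead on top.

```python
# Rough sizing arithmetic (assumptions only) for serving a ~12B model to a
# handful of concurrent users: weights + KV cache, ignoring activations.
def gib(x): return x / 1024**3

params = 12e9
weight_bytes = params * 0.55          # ~4.5 bits/param for a Q4-ish quant (assumed)

# KV cache per token ~= 2 (K and V) * layers * kv_heads * head_dim * bytes
layers, kv_heads, head_dim, kv_bytes = 48, 8, 128, 2      # assumed architecture
kv_per_token = 2 * layers * kv_heads * head_dim * kv_bytes

for users, ctx in [(5, 8192), (10, 8192), (10, 16384)]:
    total = weight_bytes + users * ctx * kv_per_token
    print(f"{users:>2} users @ {ctx} ctx: ~{gib(total):.1f} GiB")
```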
2025-06-19T15:44:24
https://www.reddit.com/r/LocalLLaMA/comments/1lfe2do/how_do_you_size_hardware/
GroundbreakingMain93
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfe2do
false
null
t3_1lfe2do
/r/LocalLLaMA/comments/1lfe2do/how_do_you_size_hardware/
false
false
self
1
null
Skywork-SWE-32B
80
[https://huggingface.co/Skywork/Skywork-SWE-32B](https://huggingface.co/Skywork/Skywork-SWE-32B) ***Skywork-SWE-32B*** is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks. It demonstrates strong performance across several key metrics: * Skywork-SWE-32B attains 38.0% pass@1 accuracy on the [SWE-bench Verified](https://www.swebench.com) benchmark, outperforming previous open-source SoTA [Qwen2.5-Coder-32B-based](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) LLMs built on the [OpenHands](https://github.com/All-Hands-AI/OpenHands) agent framework. * When incorporated with test-time scaling techniques, the performance further improves to 47.0% accuracy, surpassing the previous SoTA results for sub-32B parameter models. * We clearly demonstrate the data scaling law phenomenon for software engineering capabilities in LLMs, with no signs of saturation at 8209 collected training trajectories.
2025-06-19T15:45:12
https://www.reddit.com/r/LocalLLaMA/comments/1lfe33m/skyworkswe32b/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfe33m
false
null
t3_1lfe33m
/r/LocalLLaMA/comments/1lfe33m/skyworkswe32b/
false
false
self
80
{'enabled': False, 'images': [{'id': 'Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?width=108&crop=smart&auto=webp&s=a2ca8271d9a1351e61b08293b19378467e2a8b75', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?width=216&crop=smart&auto=webp&s=4aa76c2f21b15a555e57c09b865f7e3349cf3658', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?width=320&crop=smart&auto=webp&s=82229bb2c542be2d08bf573350cda9c2be25130c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?width=640&crop=smart&auto=webp&s=a116ca6c250616d32d03c27c281c7f80153d2138', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?width=960&crop=smart&auto=webp&s=857c20ae19c9c31585c39f62a5c35c0f87ffa883', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?width=1080&crop=smart&auto=webp&s=d595bfc9111468e7de5c4315ab79da1cc8c40862', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?auto=webp&s=71bc1a5c22fada86843b23e367e63ae0a8e47164', 'width': 1200}, 'variants': {}}]}
1-Bit LLM vs 1.58-Bit LLM
0
A 1.58-bit LLM uses ternary coding (-1, 0, +1) for the coefficients, whereas 1-bit models use binary coding (-1, +1). In practice the ternary 1.58-bit coding is stored using 2 bits of information. The problem with 1-bit coefficients is that it is not possible to represent a zero, whereas in ternary coding it is possible to represent a zero value precisely. However, it is possible to represent a value of zero using 1-bit coefficients with coding values (-1, +1) and get the benefits of ternary representation: the original ternary coefficient of -1, 0, +1 can be represented by using two 1-bit operations. Let's assume that we want to multiply a number A using a ternary multiplier with values of (-1, 0, +1). We can achieve this by using two 1-bit operations: 1. (+1 \* A) + (+1 \* A) = +2A 2. (-1 \* A) + (-1 \* A) = -2A 3. (+1 \* A) + (-1 \* A) = 0 4. (-1 \* A) + (+1 \* A) = 0. This approach essentially decomposes each ternary weight into two binary operations that can represent the same three states: \+1: Use (+1, +1) → 2A → A (after scaling) \-1: Use (-1, -1) → -2A → -A (after scaling) 0: Use (+1, -1) or (-1, +1) → 0 The key advantages of this decomposition are: * True 1-bit storage: each binary coefficient only needs 1 bit, so two coefficients need 2 bits total - the same as storing one ternary value, but without wasting bit combinations. * Hardware efficiency: binary multiplications are much simpler than ternary operations in hardware; multiplying by -1 or +1 is just sign flipping or pass-through. * Maintains expressiveness: preserves the key benefit of ternary (precise zero representation) while using only binary operations. Would this approach provide practical advantages over the existing 1.58-bit or 1-bit LLM implementations in terms of computing power and/or memory usage? What do you think?
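A quick check that the decomposition described above reproduces ternary multiplication; the pair table is taken directly from the four cases listed in the post.

```python
# Verify the decomposition: a ternary weight w in {-1, 0, +1} becomes two binary
# coefficients (b1, b2) in {-1, +1} with b1*A + b2*A == 2*w*A.
PAIR = {+1: (+1, +1), -1: (-1, -1), 0: (+1, -1)}   # 0 could equally be (-1, +1)

def ternary_via_binary(w: int, a: float) -> float:
    b1, b2 = PAIR[w]
    return (b1 * a + b2 * a) / 2       # divide by 2 to undo the doubling

for w in (-1, 0, +1):
    for a in (3.0, -0.5):
        assert ternary_via_binary(w, a) == w * a
print("decomposition reproduces ternary multiplication")
```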
2025-06-19T15:53:38
https://www.reddit.com/r/LocalLLaMA/comments/1lfeam0/1bit_llm_vs_158bit_llm/
Vegetable_End_8935
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfeam0
false
null
t3_1lfeam0
/r/LocalLLaMA/comments/1lfeam0/1bit_llm_vs_158bit_llm/
false
false
self
0
null
From GPT-2 to DeepSeek: A 15M-Parameter Model for Children’s Stories
1
[removed]
2025-06-19T15:55:49
https://www.reddit.com/r/LocalLLaMA/comments/1lfecjh/from_gpt2_to_deepseek_a_15mparameter_model_for/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfecjh
false
null
t3_1lfecjh
/r/LocalLLaMA/comments/1lfecjh/from_gpt2_to_deepseek_a_15mparameter_model_for/
false
false
https://b.thumbs.redditm…i-31jc5SrAWY.jpg
1
null
[Project] DeepSeek-Based 15M-Parameter Model for Children’s Stories (Open Source)
21
https://preview.redd.it/…ek-based setups.
2025-06-19T15:58:06
https://www.reddit.com/r/LocalLLaMA/comments/1lfeein/project_deepseekbased_15mparameter_model_for/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfeein
false
null
t3_1lfeein
/r/LocalLLaMA/comments/1lfeein/project_deepseekbased_15mparameter_model_for/
false
false
https://a.thumbs.redditm…ItnDLCyt8W30.jpg
21
{'enabled': False, 'images': [{'id': 'USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?width=108&crop=smart&auto=webp&s=c17c2869b5051648a79720b9a6713d7d3b76d7b5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?width=216&crop=smart&auto=webp&s=184eb29fa0792f0943a98371d4464dbdba44aa0d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?width=320&crop=smart&auto=webp&s=bfa3f2a0ce98efb24610b21c044e7d1905fafa74', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?width=640&crop=smart&auto=webp&s=78593e09dc3ea14e60985fc4600b4c2f05f69690', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?width=960&crop=smart&auto=webp&s=6799e0dcd7149212fd578009c6088ebbdbda243c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?width=1080&crop=smart&auto=webp&s=3cf48e6010d01552517e3206e59720c9850338ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?auto=webp&s=2ffeada0edd5d15776165131875327e303bbdb23', 'width': 1200}, 'variants': {}}]}
Computer-Use on Windows Sandbox
49
Windows Sandbox support - run computer-use agents on Windows business apps without VMs or cloud costs. Your enterprise software runs on Windows, but testing agents required expensive cloud instances. Windows Sandbox changes this - it's Microsoft's built-in lightweight virtualization sitting on every Windows 10/11 machine, ready for instant agent development. Enterprise customers kept asking for AutoCAD automation, SAP integration, and legacy Windows software support. Traditional VM testing was slow and resource-heavy. Windows Sandbox solves this with disposable, seconds-to-boot Windows environments for safe agent testing. What you can build: AutoCAD drawing automation, SAP workflow processing, Bloomberg terminal trading bots, manufacturing execution system integration, or any Windows-only enterprise software automation - all tested safely in disposable sandbox environments. Free with Windows 10/11, boots in seconds, completely disposable. Perfect for development and testing before deploying to Windows cloud instances (coming later this month). Check out the github here : https://github.com/trycua/cua Blog : https://www.trycua.com/blog/windows-sandbox
2025-06-19T16:14:44
https://v.redd.it/2xrdz059sw7f1
Impressive_Half_2819
v.redd.it
1970-01-01T00:00:00
0
{}
1lfetix
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2xrdz059sw7f1/DASHPlaylist.mpd?a=1752941697%2COGZlMmE4NjlmZjQwNmNkMmM0ZTU0ZmE0MmZjZDNhNGIyNzNhNGRhZTVlODk3MmU3MzhiZTdmZDYzOGEzZTU4NQ%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/2xrdz059sw7f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/2xrdz059sw7f1/HLSPlaylist.m3u8?a=1752941697%2CYjAyNTQ3ODlkM2ZlODJlNzEyMjI2ZGU3MjhjZjhlZjRhYjRmY2MwNTY1NjcxOGViNjBkZDg1ZTA3ZTg1MDg0ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2xrdz059sw7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1lfetix
/r/LocalLLaMA/comments/1lfetix/computeruse_on_windows_sandbox/
false
false
https://external-preview…60dfd7d5bbd362b3
49
{'enabled': False, 'images': [{'id': 'MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?width=108&crop=smart&format=pjpg&auto=webp&s=eec08747952a4b84b9524ef0b8c461703eb988c9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?width=216&crop=smart&format=pjpg&auto=webp&s=030f82ee14784d99511fa5dfcab61623dfad3669', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?width=320&crop=smart&format=pjpg&auto=webp&s=be45d8831d032460cf8db576f6e1db8642951283', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?width=640&crop=smart&format=pjpg&auto=webp&s=e115edcbb179064b1b1f537cc1c7b49d7a9cf36f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?width=960&crop=smart&format=pjpg&auto=webp&s=f86421858dca84e002754bb246c027001a26db4b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?width=1080&crop=smart&format=pjpg&auto=webp&s=81f1f0e6d5eefdfe5e2cdb62dcb8ffe478084866', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?format=pjpg&auto=webp&s=c9655e135e9bb2db0518e57059c2a0fa8e87ef81', 'width': 1280}, 'variants': {}}]}
5090 benchmarks - where are they?
10
As much as I love my hybrid 28GB setup, I would love a few more tokens. Qwen3 32b Q4KL gives me around 16 tps initially @ 32k context. What are you 5090 owners getting? Does anyone even have a 5090? 3090 all the way?
2025-06-19T16:27:11
https://www.reddit.com/r/LocalLLaMA/comments/1lff4ni/5090_benchmarks_where_are_they/
Secure_Reflection409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lff4ni
false
null
t3_1lff4ni
/r/LocalLLaMA/comments/1lff4ni/5090_benchmarks_where_are_they/
false
false
self
10
null
AMD Lemonade Server Update: Ubuntu, llama.cpp, Vulkan, webapp, and more!
91
Hi r/localllama, it’s been a bit since my [post](https://www.reddit.com/r/LocalLLaMA/comments/1jujc9p/introducing_lemonade_server_npuaccelerated_local/) introducing [Lemonade Server](https://lemonade-server.ai), AMD’s open-source local LLM server that prioritizes NPU and GPU acceleration. GitHub: [https://github.com/lemonade-sdk/lemonade](https://github.com/lemonade-sdk/lemonade) I want to sincerely thank the community here for all the feedback on that post! It’s time for an update, and I hope you’ll agree we took the feedback to heart and did our best to deliver. The biggest changes since the last post are: 1. 🦙Added llama.cpp, GGUF, and Vulkan support as an additional backend alongside ONNX. This adds support for: A) GPU acceleration on Ryzen™ AI 7000/8000/300, Radeon™ 7000/9000, and many other device families. B) Tons of new models, including VLMs. 2. 🐧Ubuntu is now a fully supported operating system for llama.cpp+GGUF+Vulkan (GPU)+CPU, as well as ONNX+CPU. >ONNX+NPU support in Linux, as well as NPU support in llama.cpp, are a work in progress. 3. 💻Added a web app for model management (list/install/delete models) and basic LLM chat. Open it by pointing your browser at [http://localhost:8000](http://localhost:8000) while the server is running. 4. 🤖Added support for streaming tool calling (all backends) and demonstrated it in our [MCP + tiny-agents blog post](https://www.amd.com/en/developer/resources/technical-articles/2025/local-tiny-agents--mcp-agents-on-ryzen-ai-with-lemonade-server.html). 5. ✨Polished overall look and feel: new getting started website at [https://lemonade-server.ai](https://lemonade-server.ai), install in under 2 minutes, and server launches in under 2 seconds. With the added support for Ubuntu and llama.cpp, Lemonade Server should give great performance on many more PCs than it did 2 months ago. The team here at AMD would be very grateful if y'all could try it out with your favorite apps (I like Open WebUI) and give us another round of feedback. Cheers!
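For anyone who wants to poke at the server from a script rather than through Open WebUI, any OpenAI-compatible client should work. The snippet below is only a sketch: the `/api/v1` base path and the model id are assumptions on my part, so check the Lemonade docs or the web app at http://localhost:8000 for the exact values on your install.

```python
from openai import OpenAI

# Assumed base URL and model name - adjust to whatever the Lemonade web app
# or CLI reports as installed on your machine.
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

resp = client.chat.completions.create(
    model="Qwen2.5-0.5B-Instruct-CPU",  # hypothetical model id, pick one you installed
    messages=[{"role": "user", "content": "Say hello from Lemonade Server."}],
)
print(resp.choices[0].message.content)
```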
2025-06-19T17:18:57
https://www.reddit.com/gallery/1lfgfu5
jfowers_amd
reddit.com
1970-01-01T00:00:00
0
{}
1lfgfu5
false
null
t3_1lfgfu5
/r/LocalLLaMA/comments/1lfgfu5/amd_lemonade_server_update_ubuntu_llamacpp_vulkan/
false
false
https://external-preview…6cad42cc3f76bab2
91
{'enabled': True, 'images': [{'id': 'snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg.png?width=108&crop=smart&auto=webp&s=4a2f1112d7055199e7fba9720febcfec1ac3aabf', 'width': 108}, {'height': 204, 'url': 'https://external-preview.redd.it/snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg.png?width=216&crop=smart&auto=webp&s=f1b524d729e9bbf202740e5e7ea15dab174e8a0f', 'width': 216}, {'height': 302, 'url': 'https://external-preview.redd.it/snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg.png?width=320&crop=smart&auto=webp&s=82277f350e13d45370bf5dbbfb5c0c7772d64305', 'width': 320}, {'height': 605, 'url': 'https://external-preview.redd.it/snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg.png?width=640&crop=smart&auto=webp&s=854af80e0d39d84623d14e840f4c075c5351100c', 'width': 640}, {'height': 908, 'url': 'https://external-preview.redd.it/snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg.png?width=960&crop=smart&auto=webp&s=9a91629184746db97d332d091cad4f22cea9ff03', 'width': 960}], 'source': {'height': 932, 'url': 'https://external-preview.redd.it/snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg.png?auto=webp&s=54a5740acd2bcdb17ba5982a007cf8e3a510eb67', 'width': 985}, 'variants': {}}]}
Run Deepseek locally on a 24g GPU: Quantizing on our Giga Computing 6980P Xeon
48
2025-06-19T17:28:59
https://www.youtube.com/watch?v=KQDpE2SLzbA
atape_1
youtube.com
1970-01-01T00:00:00
0
{}
1lfgp3i
false
{'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/KQDpE2SLzbA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Run Deepseek locally on a 24g GPU: Quantizing on our Giga Computing 6980P Xeon"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/KQDpE2SLzbA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Run Deepseek locally on a 24g GPU: Quantizing on our Giga Computing 6980P Xeon', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1lfgp3i
/r/LocalLLaMA/comments/1lfgp3i/run_deepseek_locally_on_a_24g_gpu_quantizing_on/
false
false
default
48
{'enabled': False, 'images': [{'id': 'hH0pP3ONlv9RFU_tt26eUVTIN9Qz11vaCtIPHTz4lhc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hH0pP3ONlv9RFU_tt26eUVTIN9Qz11vaCtIPHTz4lhc.jpeg?width=108&crop=smart&auto=webp&s=1660c57ee933ab1644847b55d63a801d7dee0ab7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hH0pP3ONlv9RFU_tt26eUVTIN9Qz11vaCtIPHTz4lhc.jpeg?width=216&crop=smart&auto=webp&s=484359d5270a821cceea41d10c203eb1d5d265f9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hH0pP3ONlv9RFU_tt26eUVTIN9Qz11vaCtIPHTz4lhc.jpeg?width=320&crop=smart&auto=webp&s=aabe4d7154b0327e0fc0818f68c0eae9ed8d9581', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hH0pP3ONlv9RFU_tt26eUVTIN9Qz11vaCtIPHTz4lhc.jpeg?auto=webp&s=865e1171ec88bbd5d881d79c71df420f23eecadf', 'width': 480}, 'variants': {}}]}
Sam Altman says Meta offered OpenAI staff $100 million bonuses, as Mark Zuckerberg ramps up AI poaching efforts
191
"Meta Platforms tried to poach OpenAI employees by offering signing bonuses as high as $100 million, with even larger annual compensation packages, OpenAI chief executive Sam Altman said." [https://www.cnbc.com/2025/06/18/sam-altman-says-meta-tried-to-poach-openai-staff-with-100-million-bonuses-mark-zuckerberg.html](https://www.cnbc.com/2025/06/18/sam-altman-says-meta-tried-to-poach-openai-staff-with-100-million-bonuses-mark-zuckerberg.html)
2025-06-19T17:30:37
https://i.redd.it/niqpo23p5x7f1.jpeg
choose_a_guest
i.redd.it
1970-01-01T00:00:00
0
{}
1lfgqkd
false
null
t3_1lfgqkd
/r/LocalLLaMA/comments/1lfgqkd/sam_altman_says_meta_offered_openai_staff_100/
false
false
default
191
{'enabled': True, 'images': [{'id': 'niqpo23p5x7f1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=108&crop=smart&auto=webp&s=7b72b5caa6732a946994182ff1bc5b7b83345b22', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=216&crop=smart&auto=webp&s=78a177bb7cf77fd953d86944e3ce97dda3815031', 'width': 216}, {'height': 235, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=320&crop=smart&auto=webp&s=78c0bfedb251a8333196d358a8660f371978f288', 'width': 320}, {'height': 471, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=640&crop=smart&auto=webp&s=22e9ad07139fcbaf4f1d83ce46f5c89ca3c94565', 'width': 640}, {'height': 707, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=960&crop=smart&auto=webp&s=3824c9945319445a5e5c9475afd8f806e5993b17', 'width': 960}, {'height': 795, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=1080&crop=smart&auto=webp&s=2e3db414b661f6133d442d576d804579ca859177', 'width': 1080}], 'source': {'height': 3302, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?auto=webp&s=039f3e72ce66df241fe80444a87e40c18c5c4dd8', 'width': 4483}, 'variants': {}}]}
cheapest computer to install an rtx 3090 for inference ?
2
Hello, I need a second rig to run Magistral Q6 with an RTX 3090 (I already have the 3090). I am currently running Magistral on an AMD 7950X, 128GB RAM, ProArt X870E, RTX 3090, and I get 30 tokens/s. Now I need a second rig for a second person with the same performance. I know the CPU should not matter much because the model runs fully on the GPU. I am looking to buy something used (I have a spare 850W PSU). How low do you think I can go? Regards, Vincent
2025-06-19T17:42:51
https://www.reddit.com/r/LocalLLaMA/comments/1lfh1s0/cheapest_computer_to_install_an_rtx_3090_for/
vdiallonort
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfh1s0
false
null
t3_1lfh1s0
/r/LocalLLaMA/comments/1lfh1s0/cheapest_computer_to_install_an_rtx_3090_for/
false
false
self
2
null
Is DDR4 and PCIe 3.0 holding back my inference speed?
3
I'm running llama.cpp on two RX 6800s (~512GB/s memory bandwidth), each one getting 8 PCIe lanes. I have a Ryzen 9 3950X paired with this and 64GB of 2900MHz DDR4 in dual-channel. I'm extremely pleased with inference speeds for models that fit on one GPU, but I hit a weird cap of ~40 tokens/second that I can't seem to surpass when using models that require both GPUs (example: smaller quants of Qwen3-30B-A3B). In addition to this, startup time (whether on CPU, one GPU, or two GPUs) is quite slow. My system seems healthy, benching the bandwidth of the individual cards looks fine, and I've tried any/all combinations of settings and ROCm versions to no avail. The last thing I can think of is that my platform is relatively old. Do you think upgrading to a DDR5 platform with PCIe 4/5 lanes would provide a noticeable benefit?
2025-06-19T17:44:51
https://www.reddit.com/r/LocalLLaMA/comments/1lfh3lc/is_ddr4_and_pcie_30_holding_back_my_inference/
ForsookComparison
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfh3lc
false
null
t3_1lfh3lc
/r/LocalLLaMA/comments/1lfh3lc/is_ddr4_and_pcie_30_holding_back_my_inference/
false
false
self
3
null
[Setup discussion] AMD RX 7900 XTX workstation for local LLMs — Linux or Windows as host OS?
6
Hey everyone, I’m a software developer and currently building a workstation to run local LLMs. I want to experiment with agents, text-to-speech, image generation, multi-user interfaces, etc. The goal is broad: from hobby projects to a shared AI assistant for my family. Specs: • GPU: RX 7900 XTX 24GB • CPU: i7-14700K • RAM: 96 GB DDR5 6000 • Use case: Always-on (24/7), multi-user, remotely accessible What the machine will be used for: • Running LLMs locally (accessed via web UI by multiple users) • Experiments with agents / memory / TTS / image generation • Docker containers for local network services • GitHub self-hosted runner (needs to stay active) • VPN server for remote access • Remote .NET development (Visual Studio on Windows) • Remote gaming (Steam + Parsec/Moonlight) ⸻ The challenge: Linux is clearly the better platform for LLM workloads (ROCm support, better tooling, Docker compatibility). But for gaming and .NET development, Windows is more practical. Dual-boot is highly undesirable, and possibly even unworkable: This machine needs to stay online 24/7 (for remote access, GitHub runner, VPN, etc.), so rebooting into a second OS isn’t a good option. ⸻ My questions: 1. Is Windows with ROCm support a viable base for running LLMs on the RX 7900 XTX? Or are there still major limitations and instability? 2. Can AMD GPUs be accessed properly in Docker on Windows (either native or via WSL2)? Or is full GPU access only reliable under a Linux host? 3. Would it be smarter to run Linux as the host and Windows in a VM (for dev/gaming)? Has anyone gotten that working with AMD GPU passthrough? 4. What’s a good starting point for running LLMs on AMD hardware? I’m new to tools like LM Studio and Open WebUI — which do you recommend? 5. Are there any benchmarks or comparisons specifically for AMD GPUs and LLM inference? 6. What’s a solid multi-user frontend for local LLMs? Ideally something that supports different users with their own chat history/context. ⸻ Any insights, tips, links, or examples of working setups are very welcome 🙏 Thanks in advance!
2025-06-19T17:55:43
https://www.reddit.com/r/LocalLLaMA/comments/1lfhdnb/setup_discussion_amd_rx_7900_xtx_workstation_for/
ElkanRoelen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfhdnb
false
null
t3_1lfhdnb
/r/LocalLLaMA/comments/1lfhdnb/setup_discussion_amd_rx_7900_xtx_workstation_for/
false
false
self
6
null
New Finnish models (Poro 2) based on Llama 3.1 8B and 70B
26
Poro 2 models are based on Llama 3.1 for both 8B and 70B versions. They've been continually pre-trained on 165B tokens using a carefully balanced mix of Finnish, English, code, and math data. In my opinion they perform better than Gemma 3 at least when it comes to Finnish. Gemma 3 is probably still smarter but won't work as well for Finnish. It's also much better at Finnish when comparing to Llama 3.1. Especially the 8B model is a huge difference. Other new models generally suck at Finnish besides DeepSeekV3/R1, so this is a pretty good release for GPU poor people. Poro 2 Collection: [https://huggingface.co/collections/LumiOpen/poro-2-6835bec8186e98712b061f02](https://huggingface.co/collections/LumiOpen/poro-2-6835bec8186e98712b061f02) GGUFs (only for Instruct): [https://huggingface.co/mradermacher/Llama-Poro-2-70B-Instruct-GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-70B-Instruct-GGUF) [https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF)
2025-06-19T18:02:04
https://www.reddit.com/r/LocalLLaMA/comments/1lfhjja/new_finnish_models_poro_2_based_on_llama_31_8b/
mpasila
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfhjja
false
null
t3_1lfhjja
/r/LocalLLaMA/comments/1lfhjja/new_finnish_models_poro_2_based_on_llama_31_8b/
false
false
self
26
{'enabled': False, 'images': [{'id': 'JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?width=108&crop=smart&auto=webp&s=810717e51c264afc8ab7106884e570bbdb855c2a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?width=216&crop=smart&auto=webp&s=636f811e132bcd698bae822cd9618fab7650b8dc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?width=320&crop=smart&auto=webp&s=a9f433ebd09e84c6e1526938778e77543ef6f404', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?width=640&crop=smart&auto=webp&s=f4b0aeb0c1d5bc589ff69fd591f2b5994b1e0163', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?width=960&crop=smart&auto=webp&s=6ca8380dac7302e73fd3f822c606ad57167781a5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?width=1080&crop=smart&auto=webp&s=51ecab5a6c916f598199542e5a1bcf4d83c62029', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?auto=webp&s=2b0483c97d39b181195973ab11d9171e4c30637e', 'width': 1200}, 'variants': {}}]}
I have a dual xeon e5-2680v2 with 64gb of ram, what is the best local llm I can run?
0
As the title says: I have a dual Xeon E5-2680 v2 with 64GB of RAM. What is the best local LLM I can run?
2025-06-19T18:04:49
https://www.reddit.com/r/LocalLLaMA/comments/1lfhm4m/i_have_an_dual_xeon_e52680v2_with_64gb_of_ram/
eightbitgamefan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfhm4m
false
null
t3_1lfhm4m
/r/LocalLLaMA/comments/1lfhm4m/i_have_an_dual_xeon_e52680v2_with_64gb_of_ram/
false
false
self
0
null
Kyutai's STT with semantic VAD now opensource
132
Kyutai published their latest tech demo, unmute.sh, a few weeks ago. It is an impressive voice-to-voice assistant using a 3rd-party text-to-text LLM (Gemma), while retaining the low conversation latency of Moshi. They are currently opensourcing the various components for it. The first component they opensourced is their STT, available at [https://github.com/kyutai-labs/delayed-streams-modeling](https://github.com/kyutai-labs/delayed-streams-modeling) The best feature of that STT is Semantic VAD. In a local assistant, the VAD is the component that determines when to stop listening to a request. Most local VADs are sadly not very sophisticated and won't allow you to pause or think in the middle of your sentence. The Semantic VAD in Kyutai's STT will allow local assistants to be much more comfortable to use. Hopefully we'll also get the streaming LLM integration and TTS from them soon, to be able to have our own low-latency local voice-to-voice assistant 🤞
2025-06-19T18:33:58
https://www.reddit.com/r/LocalLLaMA/comments/1lficpj/kyutais_stt_with_semantic_vad_now_opensource/
phhusson
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lficpj
false
null
t3_1lficpj
/r/LocalLLaMA/comments/1lficpj/kyutais_stt_with_semantic_vad_now_opensource/
false
false
self
132
{'enabled': False, 'images': [{'id': 'Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?width=108&crop=smart&auto=webp&s=9afbd00eebd442d93bbbcfa4b9ac60c1f5862891', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?width=216&crop=smart&auto=webp&s=6c5254ea33542045e7785e8562642041a3d29ad0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?width=320&crop=smart&auto=webp&s=1ae95bf693a39ad852b1807fa1fee102e74644fe', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?width=640&crop=smart&auto=webp&s=095826385d384f938a134eb8bb24aa91aabd28ba', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?width=960&crop=smart&auto=webp&s=948480d0eff4d25c715a7616c2094b7346e63d05', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?width=1080&crop=smart&auto=webp&s=6deb031f4844b70cb118d0d79688f6711712502d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?auto=webp&s=af25ccf307dda8fd28335c1054e01b58a0bc82ac', 'width': 1200}, 'variants': {}}]}
Help with Ollama & Open WebUI – Best Practices for Staff Knowledge Base
1
[removed]
2025-06-19T18:43:58
https://www.reddit.com/r/LocalLLaMA/comments/1lfilpl/help_with_ollama_open_webui_best_practices_for/
Numerous-Ideal-7665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfilpl
false
null
t3_1lfilpl
/r/LocalLLaMA/comments/1lfilpl/help_with_ollama_open_webui_best_practices_for/
false
false
self
1
null
How to install Sesame TTS locally in Windows
1
Hi everyone, I'm puzzled right now. No matter how much I try, I just can't seem to install Sesame locally on my PC. Even after following the detailed tutorials from their GitHub page, I just cannot get it to work. Do I need to do anything other than follow the instructions from the GitHub page? In the end, I want a Gradio web UI layout.
2025-06-19T18:55:15
https://www.reddit.com/r/LocalLLaMA/comments/1lfivt4/how_to_intsall_sesame_tts_locall_in_win/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfivt4
false
null
t3_1lfivt4
/r/LocalLLaMA/comments/1lfivt4/how_to_intsall_sesame_tts_locall_in_win/
false
false
self
1
null
Any reason to go true local vs cloud?
17
**Is there any value in investing in a GPU when weighing price against functionality?** My own use case and conundrum: I have access to some powerful enterprise-level compute and environments at work (through Azure AI Foundry and the enterprise stack). I'm a hobbyist dev and tinkerer for LLMs, building a much-needed upgrade to my personal setup. I don't game too much on PC, so really a GPU for my own tower would just be for local models (LLM and media generation). My current solution is paying for distributed platforms or even reserved hardware like RunPod. I just can't make the math work for true local hardware. If it added value somehow, I could justify it. But it seems like I'm either dropping ~$2k for a card in the 32GB ballpark that is going to have bandwidth issues, OR $8k or more for a workstation-level card that will be outpaced in a couple of years anyway. Cost only starts to be justified when looking at 24/7 uptime, but then we're getting into API and web service territory where cloud hosting is a much better fit. **Short of just the satisfaction of direct ownership of the machine, with the loose benefits of a totally local environment, is there a good reason to buy hardware solely to run truly locally in 2025?**
2025-06-19T19:09:06
https://www.reddit.com/r/LocalLLaMA/comments/1lfj8hf/any_reason_to_go_true_local_vs_cloud/
ghost202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfj8hf
false
null
t3_1lfj8hf
/r/LocalLLaMA/comments/1lfj8hf/any_reason_to_go_true_local_vs_cloud/
false
false
self
17
null
How to create synthetic datasets for multimodal models like vision and audio?
0
Just like we have the Meta synthetic datasets kit to create high-quality synthetic datasets for text-based models, how can we apply a similar approach to multimodal models like vision and audio models?
2025-06-19T19:09:07
https://www.reddit.com/r/LocalLLaMA/comments/1lfj8i4/how_to_create_synthetic_datasets_for_multimodal/
SelectionCalm70
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfj8i4
false
null
t3_1lfj8i4
/r/LocalLLaMA/comments/1lfj8i4/how_to_create_synthetic_datasets_for_multimodal/
false
false
self
0
null
that's 500 IQ move
67
2025-06-19T19:21:47
https://i.redd.it/duqrjaumpx7f1.png
BoringAd6806
i.redd.it
1970-01-01T00:00:00
0
{}
1lfjjxh
false
null
t3_1lfjjxh
/r/LocalLLaMA/comments/1lfjjxh/thats_500_iq_move/
false
false
default
67
{'enabled': True, 'images': [{'id': 'duqrjaumpx7f1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=108&crop=smart&auto=webp&s=21e00044eaee0cf1b49d6e2a89a97accc47f8645', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=216&crop=smart&auto=webp&s=04d296a37412371c53298051fb358aa456cf2a23', 'width': 216}, {'height': 270, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=320&crop=smart&auto=webp&s=4bfcb4149137d332730d5a2e65c431f4031f12df', 'width': 320}, {'height': 541, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=640&crop=smart&auto=webp&s=320383a08e8da36e1748b9c18f4152434799bc85', 'width': 640}, {'height': 811, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=960&crop=smart&auto=webp&s=5c260a0b0826f5f8a3e5b2c3c3b9ba0551a300fb', 'width': 960}, {'height': 913, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=1080&crop=smart&auto=webp&s=7efbbcfc7a31d7383a46d1f35dd6d8ca4f7e043c', 'width': 1080}], 'source': {'height': 913, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?auto=webp&s=a20d06a3fae35b8b88e0217b1aebe3179a51e6e4', 'width': 1080}, 'variants': {}}]}
We Tested Apple's On-Device Model for RAG Task
79
Hey r/LocalLLaMA, We ran Apple's on-device model through samples of our RAG evaluation framework (1000 questions). # TL;DR **The Good:** * **8.5/10 factual accuracy** on questions it decides to answer (on par with best small models like Qwen3 4B and IBM Granite 3.3 2B) * **\~30 tokens/second** on M3 MacBook Air (16GB) * Strong context adherence (doesn't hallucinate much) **The Concerning:** * **45% incorrect rejection rate** (refuses to answer when it actually has the info) * **90% rejection rate** if you add "Answer the question based on search result" to system prompt * Won't elaborate or ask clarifying questions **The Weird:** * Guardrails flag questions as "unsafe" (22/1000, mostly medical topics) * Adopts the vocabulary/tone from your query in its responses # The Test We tested Apple's model as a **summarizer in a RAG system**. The setup: model receives a user query plus 2-5 search result chunks (512 tokens max each) and must synthesize them into an accurate answer. We used our [RED-flow evaluation framework](https://github.com/aizip/Rag-Eval-flow) designed for testing small language models in RAG tasks. 1000 questions from policy documents, technical manuals, and other domains, testing: * Can it find and synthesize answers from the chunks? * Does it recognize when chunks lack sufficient info? * Will it ask helpful clarification questions? The same evaluation runs in our [SLM RAG Arena](https://huggingface.co/spaces/aizip-dev/SLM-RAG-Arena) where you can see how different models handle these challenges on the leaderboard. # Detailed Observations # 1. It's REALLY Sensitive to Instructions The system prompt dramatically affects performance: * No system prompt (default): 45% incorrect rejections * Add "Answer the question based on search result": 90% incorrect rejections This suggests Apple's model is actually quite capable but becomes overly cautious with certain instructions. # 2. Guardrails Are... Interesting The model has guardrails that operate above the base model: * 22/1000 queries flagged as "unsafe" with zero explanation * Most rejected topics are medical-related * Inconsistent between runs (same question might pass or fail) # 3. Minimal Responses, No Discussion Unlike other SLMs or the reference model DeepSeek v3, this model: * Won't elaborate even when prompted * Never asks clarifying questions * Just outputs "I cannot answer that" instead of explaining why Example with insufficient context: Query: How do you know if the stuff you bought is the one they're talking about? [recall context] Apple Model: I cannot answer that. Reference (DeepSeek v3): To determine if the product you bought is the one being recalled, you should check the following details provided in the recall notice: the product name, intended consumer population, colors and sizes, model numbers, date codes, SKUs... Even when it has the information, responses are minimal: Query: How is the urban core defined for statistical purposes? Apple Model: The urban core is defined as a geographic area with a population of 50,000 or more. Reference (DeepSeek v3): The urban core for statistical purposes is defined as the central urban area within a Metropolitan Statistical Area (MSA) that has a population of 50,000 or more. This core urban area is the basis for delineating the MSA, which includes the counties containing the core urban area and any adjacent counties with a high degree of social and economic integration (measured by commuting to work) with the urban core. Correct but minimal. # 4. 
Surprisingly Good at Long Context No degradation with longer contexts: * "Hard" questions (requiring multiple chunks) score just as well * No "context spiral" or coherence loss * Suggests good attention mechanisms for a 3B model # 5. Personality Mirroring The model adopts your vocabulary and tone. If you use informal language in your query, it responds with the same style: Query: Who do you tell if you see something sketchy with a work vehicle? Apple Model: If you see something sketchy with a work vehicle, you should notify the Office of Inspector General (OIG) and the DOT Fleet Manager. Reference (DeepSeek v3): If you observe apparent unofficial use or misuse of a Government motor vehicle, you should notify the Office of the Inspector General (OIG) and the DOT Fleet Manager. The OIG will then coordinate with the DOT Fleet Manager to conduct an internal investigation to determine whether a violation has occurred. Notice how Apple's model picks up and reuses "sketchy" instead of using more formal language like "suspicious" or "apparent unofficial use". This happens consistently across different types of informal or domain-specific vocabulary. # What This Means Apple appears to be running a \~3B parameter model with: * Strong factual accuracy when it works * Overly conservative rejection behavior * Hard guardrails that sometimes misfire * Design choices favoring brevity over helpfulness For a local, on-device model, it's impressively capable. But the high rejection rate and minimal responses might frustrate users expecting ChatGPT-style interactions. **Theory:** Apple optimized for "never be wrong" over "always be helpful" - which makes sense for an on-device assistant but creates a different user experience than cloud LLMs. Anyone else tested this? Curious if you're seeing similar patterns.
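For context on what the model actually sees in this kind of test, here is a rough illustration of how a query plus 2-5 retrieved chunks get assembled into a single summarization prompt, including the optional system line that pushed rejections to ~90%. This is not the RED-flow code itself; the prompt wording is my own assumption about the described setup.

```python
# Generic sketch of the RAG-summarizer input described above; the exact prompt
# wording used by the RED-flow framework is an assumption on my part.
def build_rag_prompt(query: str, chunks: list[str], strict_system: bool = False) -> list[dict]:
    context = "\n\n".join(f"[Search result {i + 1}]\n{c}" for i, c in enumerate(chunks))
    messages = []
    if strict_system:
        # The single extra instruction that raised incorrect rejections from ~45% to ~90%.
        messages.append({"role": "system", "content": "Answer the question based on search result."})
    messages.append({"role": "user", "content": f"{context}\n\nQuestion: {query}"})
    return messages

messages = build_rag_prompt(
    "How is the urban core defined for statistical purposes?",
    ["The urban core is a geographic area with a population of 50,000 or more...",
     "An MSA includes the counties containing the core urban area..."],
)
print(messages[-1]["content"])
```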
2025-06-19T19:25:13
https://www.reddit.com/r/LocalLLaMA/comments/1lfjmx4/we_tested_apples_ondevice_model_for_rag_task/
No_Salamander1882
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfjmx4
false
null
t3_1lfjmx4
/r/LocalLLaMA/comments/1lfjmx4/we_tested_apples_ondevice_model_for_rag_task/
false
false
self
79
{'enabled': False, 'images': [{'id': 'wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?width=108&crop=smart&auto=webp&s=070d5dc10bbb27decd7458d197b3348cf9d147e4', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?width=216&crop=smart&auto=webp&s=17ba147060dfcf20e8539daad2c75459ebbf0fe8', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?width=320&crop=smart&auto=webp&s=7424e0c706e59139964d53a9f17265613bb2bfe5', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?width=640&crop=smart&auto=webp&s=d86354883e94ba574bf783760043edeffb4db638', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?width=960&crop=smart&auto=webp&s=055cc42526296bac832fab76a241f598c3f5ddf6', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?width=1080&crop=smart&auto=webp&s=75250188d9a026652efb5a62c18c2f761183946c', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?auto=webp&s=1635b6ee9fb2007670815c4c93d64daa72cc4d13', 'width': 1536}, 'variants': {}}]}
Preparing for the Intelligence Explosion
0
Abstract: AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments grand challenges. These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making. We argue that these challenges cannot always be delegated to future AI systems, and suggest things we can do today to meaningfully improve our prospects. AGI preparedness is therefore not just about ensuring that advanced AI systems are aligned: we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring. [https://arxiv.org/pdf/2506.14863](https://arxiv.org/pdf/2506.14863)
2025-06-19T19:52:06
https://www.reddit.com/r/LocalLLaMA/comments/1lfka3j/preparing_for_the_intelligence_explosion/
jackdareel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfka3j
false
null
t3_1lfka3j
/r/LocalLLaMA/comments/1lfka3j/preparing_for_the_intelligence_explosion/
false
false
self
0
null
Is the 3060 12GB the best performance/cost for entry level local hosted?
1
Hi, I was wondering if the 3060 would be a good buy for someone wanting to start out with locally hosted LLMs. I plan to look for something I can put in my small Proxmox home server/NAS to play around with things like a voice home assistant via small LLMs and just to learn more, so a bit of LLM, a bit of Stable Diffusion. Is it worth picking up a used one for £200, or spending a bit more on another card, or is there anything else worth considering that's coming soon?
2025-06-19T20:06:59
https://www.reddit.com/r/LocalLLaMA/comments/1lfkn72/is_the_3060_12gb_the_best_performancecost_for/
SKX007J1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfkn72
false
null
t3_1lfkn72
/r/LocalLLaMA/comments/1lfkn72/is_the_3060_12gb_the_best_performancecost_for/
false
false
self
1
null
Tool for creating datasets from unstructured data.
0
Since creating datasets from unstructured data like text is cumbersome, I thought, given that I'm a software engineer, I'd make a tool for it. I'm not aware of any good and convenient solutions. Most of the time it's using ChatGPT and doing it manually, or having to set up a solution locally. (Let me know if there's a better way I don't know of.) I've created a very basic version of what I'm thinking: [http://app.easyjsonl.com](http://app.easyjsonl.com) It's very basic, but please let me know what you think. Also feel free to use it (until my API credit depletes). It's basically calling the OpenAI API in the background, but using its client so I can force a given response format. For a start I've added prompt-input-output, but I want to add Q&A and more formats.
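For anyone who would rather script this themselves, the core of what the tool does can be approximated in a few lines. This is a sketch under my own assumptions (the JSON schema and prompts are made up, and the tool's internals may differ), using the OpenAI client's JSON mode to turn a raw text passage into prompt/output pairs and append them to a JSONL file:

```python
import json
from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment

def text_to_jsonl(raw_text: str, out_path: str = "dataset.jsonl") -> None:
    # JSON mode forces the model to return valid JSON; the schema
    # ({"pairs": [{"prompt": ..., "output": ...}]}) is my own convention.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'Extract training pairs from the text. Return JSON like '
                        '{"pairs": [{"prompt": "...", "output": "..."}]}'},
            {"role": "user", "content": raw_text},
        ],
    )
    pairs = json.loads(resp.choices[0].message.content)["pairs"]
    with open(out_path, "a", encoding="utf-8") as f:
        for p in pairs:
            f.write(json.dumps(p, ensure_ascii=False) + "\n")

text_to_jsonl("Paste any unstructured notes or documentation here.")
```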
2025-06-19T20:07:37
https://www.reddit.com/r/LocalLLaMA/comments/1lfkns2/tool_for_creating_datasets_from_unstructured_data/
WanderSprocket
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfkns2
false
null
t3_1lfkns2
/r/LocalLLaMA/comments/1lfkns2/tool_for_creating_datasets_from_unstructured_data/
false
false
self
0
null
Question: Multimodal LLM (text + image) with very long context (200k tokens)
0
Hi everyone, I’m looking for an LLM that can handle both text and images with a very long context window, up to 200k tokens. I saw that GPT-4-o (o3-mini) can handle 200k tokens but doesn’t process images. Current multimodal models usually support around 30k to 100k tokens max. Two questions: 1. **Does a multimodal text + image model with a 200k token context window exist?** 2. **If not, is it better to use two separate models (one for images, one for text) and combine their outputs?** What models or approaches would you recommend for this use case? Thanks!
2025-06-19T21:03:46
https://www.reddit.com/r/LocalLLaMA/comments/1lfm0dl/question_multimodal_llm_text_image_with_very_long/
Mobile_Estate_9160
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfm0dl
false
null
t3_1lfm0dl
/r/LocalLLaMA/comments/1lfm0dl/question_multimodal_llm_text_image_with_very_long/
false
false
self
0
null
We just added LlamaIndex support to AG-UI — bring a frontend to your agent
1
[removed]
2025-06-19T21:06:06
https://www.reddit.com/r/LocalLLaMA/comments/1lfm2cm/we_just_added_llamaindex_support_to_agui_bring_a/
nate4t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfm2cm
false
null
t3_1lfm2cm
/r/LocalLLaMA/comments/1lfm2cm/we_just_added_llamaindex_support_to_agui_bring_a/
false
false
self
1
{'enabled': False, 'images': [{'id': 'WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?width=108&crop=smart&auto=webp&s=17f996362507f3c5a48f84f5f3186aae06e01f35', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?width=216&crop=smart&auto=webp&s=dc82ea997a758bf30e8affd6523b8409473c66e8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?width=320&crop=smart&auto=webp&s=afd14df4c1fd230ce647ed79a01792d8afde0370', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?width=640&crop=smart&auto=webp&s=5fb25f1c170ac12378d62c35efc538bc16fa8283', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?width=960&crop=smart&auto=webp&s=24dc709afa001c23219f3bda7b99578921e91d88', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?width=1080&crop=smart&auto=webp&s=df9650dd33edc4b4f3456cc718c7a29b79414f7a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?auto=webp&s=79efc85d488800d7e60e9d861d14a4592308cf2f', 'width': 1200}, 'variants': {}}]}
We just added LlamaIndex support to AG-UI — bring a frontend to your agent
15
Hey all, I'm on the team behind AG-UI, a lightweight standard that brings agents into the UI as dynamic, stateful, real-time collaborators. I'm seriously excited to share that **AG-UI now supports LlamaIndex** out of the box. You can wire up a LlamaIndex agent to a modern UI in seconds. # AG-UI features: * Real-time agentic chat with streaming * Bi-directional state synchronization * Generative UI and structured messages * Real-time context enrichment * Frontend tool integration * Human-in-the-loop collaboration # Already adopted by: * LangChain * CrewAI * Mastra * AG2 * Agno * **LlamaIndex** (just landed!) And **incoming**: AWS, A2A, AgentOps, AI-SDK, and more. Build your first AG-UI app by following the guide below, written by the team at LlamaIndex Would love feedback from anyone using LlamaIndex (or any other framework) with a frontend. If you've been hacking together agent interfaces, I'd love to chat with you.
2025-06-19T21:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1lfm412/we_just_added_llamaindex_support_to_agui_bring_a/
nate4t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfm412
false
null
t3_1lfm412
/r/LocalLLaMA/comments/1lfm412/we_just_added_llamaindex_support_to_agui_bring_a/
false
false
self
15
null
iOS shortcut for private voice, text, and photo questions via Ollama API.
1
I've seen Gemini and OpenAI shortcuts, but I wanted something more private and locally hosted. So, I built this! You can ask your locally hosted AI questions via voice and text, and even with photos if you host a vision-capable model like Qwen2.5VL. Assigning it to your action button makes for fast and easy access. This shortcut requires an Ollama server, but you can likely adapt it to work with almost any AI API. To secure Ollama, I used [this proxy](https://github.com/ParisNeo/ollama_proxy_server) with bearer token authentication. Enter your user:key pair near the top of the shortcut to enable it. https://www.icloud.com/shortcuts/ace530e6c8304038b54c6b574475f2af
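If you want to test the same endpoint from a script before wiring up the shortcut, the request it sends boils down to something like the sketch below. The host, port, model name, and the user:key bearer format are assumptions based on the proxy linked above, so adjust them to your setup:

```python
import requests

# Assumed values - replace with your server address and proxy credentials.
OLLAMA_URL = "http://my-server:8080/api/chat"   # Ollama's chat endpoint behind the proxy
HEADERS = {"Authorization": "Bearer myuser:mykey"}

payload = {
    "model": "qwen2.5vl",          # any installed model; vision models also accept images
    "messages": [{"role": "user", "content": "What is on my shopping list?"}],
    "stream": False,
}

resp = requests.post(OLLAMA_URL, json=payload, headers=HEADERS, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```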
2025-06-19T21:17:25
https://www.reddit.com/r/LocalLLaMA/comments/1lfmc4b/ios_shortcut_for_private_voice_text_and_photo/
FreemanDave
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfmc4b
false
null
t3_1lfmc4b
/r/LocalLLaMA/comments/1lfmc4b/ios_shortcut_for_private_voice_text_and_photo/
false
false
self
1
{'enabled': False, 'images': [{'id': 'e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?width=108&crop=smart&auto=webp&s=cccf196695128b046c6efeb7637fce727ba1dbf3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?width=216&crop=smart&auto=webp&s=5b3be7d7bcaa242991d530bffe5951960225cd3a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?width=320&crop=smart&auto=webp&s=46dabd6a3653b4dc98a2eabfd25c9f5947eee5bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?width=640&crop=smart&auto=webp&s=fdaa03b3c0412c6f5d72acc848849471bbca8ac4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?width=960&crop=smart&auto=webp&s=a655cc2b3532a1973b53e1691aa1babee037a84d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?width=1080&crop=smart&auto=webp&s=46f8bad974d716c4e1d5032a3347885aa410dce9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?auto=webp&s=2ed40d7d6196095ca7aff696c3c1dbb2f533f730', 'width': 1200}, 'variants': {}}]}
llama3.2:1b
0
Added this to test that Ollama was working with my 5070 Ti, and I am seriously impressed. Near-instant, accurate responses, beating 13B fine-tuned medical LLMs.
2025-06-19T21:30:20
https://www.reddit.com/r/LocalLLaMA/comments/1lfmmyd/llama321b/
Glittering-Koala-750
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfmmyd
false
null
t3_1lfmmyd
/r/LocalLLaMA/comments/1lfmmyd/llama321b/
false
false
self
0
null
ICONN 1 is now out!
270
Hello to r/LocalLLaMA , Today is a huge day for us, and we're thrilled to finally share something we've poured an incredible amount of time and resources into: **ICONN-1**. This isn't another fine-tune; we built this model from the ground up, a project that involved a significant investment of **$50,000 to train from scratch.** Our goal was ambitious: to create the most advanced and human-like open-source AI model under 100 billion parameters. And we believe ICONN-1 delivers. **What makes ICONN-1 special?** * **Mixture-of-Experts (MoE) Architecture:** Built on a custom Mixtral framework, ICONN-1 uses dynamic routing through specialized expert pathways. This means incredible computational efficiency alongside enhanced performance. * **88 Billion Parameters, 22 Billion Active:** We've managed to achieve highly nuanced and contextually accurate responses while maintaining scalability benefits through sparse activation. * **Designed for Human-like Interaction:** ICONN-1 (this specific version) is optimized for natural, emotionally resonant, and genuinely conversational interactions. We've even benchmarked it against human responses on 500 questions for emotion and common sense, and it consistently shows the highest human-thinking benchmark scores. Specialized Variant for Reasoning: We're also releasing **ICONN-e1** ***in beta***, a variant specifically fine-tuned for advanced reasoning, critical analysis, and complex problem-solving. This dual release represents a significant leap forward in versatile AI systems. * **Open-Source Commitment:** Our dedication to openness and accessibility is at the core of ICONN-1. We believe this is how we push AI forward together. **Ready to try it out?** **Go to** [ICONNAI/ICONN-1 · Hugging Face](https://huggingface.co/ICONNAI/ICONN-1) **Don't forget to LIKE our model - It helps get us up to Trending!**
2025-06-19T21:44:34
https://www.reddit.com/r/LocalLLaMA/comments/1lfmyy3/iconn_1_is_now_out/
Enderchef
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfmyy3
false
null
t3_1lfmyy3
/r/LocalLLaMA/comments/1lfmyy3/iconn_1_is_now_out/
false
false
self
270
{'enabled': False, 'images': [{'id': 'SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=108&crop=smart&auto=webp&s=5b2c6b95c12457e1084b5bb7a75f8669279c2f8e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=216&crop=smart&auto=webp&s=f9eb0d414c078e8aa85961a695b2e8d082c434f9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=320&crop=smart&auto=webp&s=75f16b47dd01180b406088035888cb6023824bf2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=640&crop=smart&auto=webp&s=ba21b522cf50bc1f2cabea8c087c1617d7ee467b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=960&crop=smart&auto=webp&s=5388c5b5fd2ec854b3bfb79579a31afb5b5a6792', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=1080&crop=smart&auto=webp&s=8bd16f77e8ed7b51ab93807a096efdf4cab7a7de', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?auto=webp&s=786aff36073291161494d80dedfb9c85099d3442', 'width': 1200}, 'variants': {}}]}
Prompt engineering tip: Use bulleted lists
0
I was asking Gemini for a plan for an MVP. My prompt was messy. The output from Gemini was good. I then asked DeepSeek the same. I liked how DeepSeek structured the output: more robotic, less prose. I then asked Gemini again in the style of DeepSeek and wow, what a difference. The output was so clean and tidy, less prose, more bullets and checklists. If you've been in the LLM world for a while you know this is expected: the LLM tries to adopt your style of writing. The specific bulleted list I used had one item per tech-stack component. Here is the better prompt:

<...retracted...>

MVP Plan with Kotlin Multiplatform

Technology Stack:

\* Frontend: Compose Multiplatform (Android, iOS, Web, desktop)
\* Backend: Kotlin using Ktor
\* Firebase
\* Dependency Injection: [https://github.com/evant/kotlin-inject](https://github.com/evant/kotlin-inject)

<... retracted feature discussion ...> . These features don't have to be in the MVP.

package <...snip...>
2025-06-19T21:47:47
https://www.reddit.com/r/LocalLLaMA/comments/1lfn1l3/prompt_engineering_tip_use_bulleted_lists/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfn1l3
false
null
t3_1lfn1l3
/r/LocalLLaMA/comments/1lfn1l3/prompt_engineering_tip_use_bulleted_lists/
false
false
self
0
{'enabled': False, 'images': [{'id': 'RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?width=108&crop=smart&auto=webp&s=52e9c9359c50c5ce8fad8838f7c261d6848e95c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?width=216&crop=smart&auto=webp&s=2a35930c47a765c9d9b97fc1bbe9680ac613f010', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?width=320&crop=smart&auto=webp&s=7e410d7a8b8f6922a66b9c9de49cb27fb14424de', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?width=640&crop=smart&auto=webp&s=1a251175ded64744c45ce001c3cb425cdf888a7d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?width=960&crop=smart&auto=webp&s=af6c78b12eccf0fb289b02da7518f3667e3fa7eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?width=1080&crop=smart&auto=webp&s=6a3bb3f2b909f21180df426e984f8b982e97c8c6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?auto=webp&s=4c205faa0638f39fff669d686f950bcdd6e749d2', 'width': 1200}, 'variants': {}}]}
Optimized Chatterbox TTS (Up to 2-4x non-batched speedup)
45
Over the past few weeks I've been experimenting for speed, and finally it's stable - a version that easily triples the original inference speed on my Windows machine with an Nvidia 3090. I've also streamlined the torch dtype mismatch, so it does not require torch.autocast, and thus using half precision is faster, lowering the VRAM requirements (I *roughly* see 2.5GB usage).

Here's the updated inference code: [https://github.com/rsxdalv/chatterbox/tree/fast](https://github.com/rsxdalv/chatterbox/tree/fast)

In order to unlock the speed you need to torch.compile the generation step like so:

```python
model.t3._step_compilation_target = torch.compile(
    model.t3._step_compilation_target, fullgraph=True, backend="cudagraphs"
)
```

And use bfloat16 for t3 to reduce the memory bandwidth bottleneck:

```python
def t3_to(model: "ChatterboxTTS", dtype):
    model.t3.to(dtype=dtype)
    model.conds.t3.to(dtype=dtype)
    return model
```

Even without that you should see faster speeds due to the removal of CUDA synchronization and more aggressive caching, but in my case the CPU/Windows Python is too slow to fully saturate the GPU without compilation. I targeted cudagraphs to hopefully avoid all *painful requirements* like triton and MSVC.

The UI code that incorporates the compilation, memory usage check, half/full precision selection and more is in TTS WebUI (as an extension): [https://github.com/rsxdalv/TTS-WebUI](https://github.com/rsxdalv/TTS-WebUI) (The code of the extension: [https://github.com/rsxdalv/extension\_chatterbox](https://github.com/rsxdalv/extension_chatterbox) )

Note - in the UI, compilation can only be done at the start (as the first generation) due to multithreading vs PyTorch: [https://github.com/pytorch/pytorch/issues/123177](https://github.com/pytorch/pytorch/issues/123177)

Even more details: after torch compilation is applied, the main bottleneck becomes memory speed. Thus, to further gain speed we can reduce the memory traffic. Changes done: prevent runtime checks in loops, cache all static embeddings, fix dtype mismatches preventing fp16, prevent cuda synchronizations, switch to StaticCache for compilation, use buffer for generated_ids in repetition_penalty_processor, check for EOS periodically, remove sliced streaming. This also required copying the modeling\_llama from Transformers to remove optimization roadblocks.

Numbers - these are system dependent!
Thanks to user "a red pen" on TTS WebUI discord (with a 5060 Ti 16GB):

|Precision|Without compilation|With compilation|
|:-|:-|:-|
|Float32|57 it/s|46 it/s|
|Bfloat16|47 it/s|**81 it/s**|

On my Windows PC with 3090 (all runs with an estimated token count of 70, three runs each):

|Configuration|Run 1|Run 2|Run 3|
|:-|:-|:-|:-|
|Float32|38.26 it/s|39.57 it/s|40.80 it/s|
|Float32 compiled|37.87 it/s|41.21 it/s|41.07 it/s|
|Float32 compiled, max_cache_len 600|54.43 it/s|59.87 it/s|59.69 it/s|
|Bfloat16|30.56 it/s|35.69 it/s|36.31 it/s|
|Bfloat16 compiled|66.01 it/s|78.61 it/s|78.64 it/s|
|Bfloat16 compiled, max_cache_len 600|84.08 it/s|101.48 it/s|101.41 it/s|
|Bfloat16 compiled, max_cache_len 500|78.85 it/s|104.57 it/s|104.84 it/s|

My best result is when running via the API, where it goes up to 108 it/s at a cache len of 560:

```
Using chatterbox streaming with params: {'audio_prompt_path': 'voices/chatterbox/Infinity.wav', 'chunked': True, 'desired_length': 80, 'max_length': 200, 'halve_first_chunk': False, 'exaggeration': 0.8, 'cfg_weight': 0.6, 'temperature': 0.9, 'device': 'auto', 'dtype': 'bfloat16', 'cpu_offload': False, 'cache_voice': False, 'tokens_per_slice': None, 'remove_milliseconds': None, 'remove_milliseconds_start': None, 'chunk_overlap_method': 'undefined', 'seed': -1, 'use_compilation': True, 'max_new_tokens': 340, 'max_cache_len': 560}
Using device: cuda
Using cached model 'Chatterbox on cuda with torch.bfloat16' in namespace 'chatterbox'.
Generating chunk: Alright, imagine you have a plant that lives in the desert where there isn't a lot of water.
Estimated token count: 114
Sampling:  29% | 100/340 [00:00<00:02, 102.48it/s]
Generating chunk: This plant, called a cactus, has a special body that can store water so it can survive without rain for a long time.
Estimated token count: 152
Sampling:  47% | 160/340 [00:01<00:01, 108.20it/s]
Generating chunk: So while other plants might need watering every day, a cactus can go for weeks without any water.
Estimated token count: 118
Sampling:  41% | 140/340 [00:01<00:01, 108.76it/s]
Generating chunk: It's kind of like a squirrel storing nuts for winter, but the cactus stores water to survive hot, dry days.
Estimated token count: 152
Sampling:  41% | 140/340 [00:01<00:01, 108.89it/s]
```
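If you want to try the two tweaks together outside the WebUI, here is a minimal sketch. It assumes the fast fork keeps upstream Chatterbox's `ChatterboxTTS.from_pretrained` / `generate` interface; only the `t3` cast and the `torch.compile` call are taken from the changes described above:

```
import torch
from chatterbox.tts import ChatterboxTTS

# Load once; the model and its voice conditionals stay resident between calls.
model = ChatterboxTTS.from_pretrained(device="cuda")

# Cast the T3 transformer (and its cached conditionals) to bfloat16
# to ease the memory bandwidth bottleneck.
model.t3.to(dtype=torch.bfloat16)
model.conds.t3.to(dtype=torch.bfloat16)

# Compile only the per-token generation step; the cudagraphs backend
# avoids the triton/MSVC toolchain that inductor would need on Windows.
model.t3._step_compilation_target = torch.compile(
    model.t3._step_compilation_target, fullgraph=True, backend="cudagraphs"
)

# The first call pays the compilation warm-up; later calls run at full speed.
wav = model.generate(
    "Hello from the optimized build.",
    audio_prompt_path="voices/chatterbox/Infinity.wav",  # any reference voice clip
)
```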
2025-06-19T22:14:13
https://www.reddit.com/r/LocalLLaMA/comments/1lfnn7b/optimized_chatterbox_tts_up_to_24x_nonbatched/
RSXLV
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfnn7b
false
null
t3_1lfnn7b
/r/LocalLLaMA/comments/1lfnn7b/optimized_chatterbox_tts_up_to_24x_nonbatched/
false
false
self
45
null
As a storyteller, how can I have this?
0
I am jealous of vibe coding. They get to create a lot and learn, but there's no such thing yet for storytelling. I want to create short stories using AI. Image creation is not for me. Has anyone figured out anything for short films? I want to spend my days tinkering with shots, frames and movements.
2025-06-19T22:14:37
https://i.redd.it/vt4ji7xgky7f1.jpeg
Original-Party-2759
i.redd.it
1970-01-01T00:00:00
0
{}
1lfnnif
false
null
t3_1lfnnif
/r/LocalLLaMA/comments/1lfnnif/as_a_storyteller_how_can_i_have_this/
false
false
default
0
{'enabled': True, 'images': [{'id': 'vt4ji7xgky7f1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?width=108&crop=smart&auto=webp&s=67a4a42f09c957d1800d8c07ac19a35a6b1b1bbc', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?width=216&crop=smart&auto=webp&s=b87f7e00c5ad8d690f585281adca6953b57fb125', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?width=320&crop=smart&auto=webp&s=ad02531aab8242b274c396e9e419bfe249d95f76', 'width': 320}, {'height': 715, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?width=640&crop=smart&auto=webp&s=28f3942b2ddee8ec60d74eda418f51c9fa52205b', 'width': 640}, {'height': 1072, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?width=960&crop=smart&auto=webp&s=6f6b4371c4c902cabbd437ee22a683d03e51930d', 'width': 960}], 'source': {'height': 1206, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?auto=webp&s=fc388b14c0f4df943deb083416c7acf065435cb9', 'width': 1079}, 'variants': {}}]}
Is there any frontend which supports OpenAI features like web search or Scheduled Tasks?
2
I’m currently using OpenWebUI… and they are not good at implementing basic features of ChatGPT Plus that have been around for a long time. For example, web search. OpenWebUI web search sucks when using o3 or gpt-4.1. You have to configure a google/bing/etc API key, and then it takes 5+ minutes to do a simple query! Meanwhile, if you use ChatGPT Plus, the web search with o3 (or even if you use gpt-4o-search-preview in OpenWebUI) works perfectly. It **quickly** grabs a few webpages from Google, filters the information, and **quickly** outputs a result, with references/links to the pages. For example, o3 handles the prompt “what are 24GB GPUs for under $1000 on the used market?” perfectly.

Is there any software other than OpenWebUI that can use OpenAI's built-in web search?

Also, other ChatGPT features are missing, such as Scheduled Tasks. Is there any other frontend that supports Scheduled Tasks?
2025-06-19T22:39:42
https://www.reddit.com/r/LocalLLaMA/comments/1lfo7p0/is_there_any_frontend_which_supports_openai/
DepthHour1669
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfo7p0
false
null
t3_1lfo7p0
/r/LocalLLaMA/comments/1lfo7p0/is_there_any_frontend_which_supports_openai/
false
false
self
2
null
Why We Need Truth-Seeking AI: Announcing $1M in Grants
0
Anyone into philosophy and building an AI? https://youtu.be/HKFqZozACos Links in the comment section of the video. [I am not involved with the project, I just follow Johnathan on YouTube and thought that someone here might be interested in it.]
2025-06-19T22:52:53
https://www.reddit.com/r/LocalLLaMA/comments/1lfoi0v/why_we_need_truthseeking_ai_announcing_1m_in/
Cane_P
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfoi0v
false
null
t3_1lfoi0v
/r/LocalLLaMA/comments/1lfoi0v/why_we_need_truthseeking_ai_announcing_1m_in/
false
false
self
0
{'enabled': False, 'images': [{'id': 'n90JS4PebXxypqdeoc2kOLYNKWwWx_Q5MWaUcVMeixU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/n90JS4PebXxypqdeoc2kOLYNKWwWx_Q5MWaUcVMeixU.jpeg?width=108&crop=smart&auto=webp&s=f120f538b97b7197407376032bf36aa4c0177a27', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/n90JS4PebXxypqdeoc2kOLYNKWwWx_Q5MWaUcVMeixU.jpeg?width=216&crop=smart&auto=webp&s=5921907d083b4030244d14f4a27a6abc85fe06a8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/n90JS4PebXxypqdeoc2kOLYNKWwWx_Q5MWaUcVMeixU.jpeg?width=320&crop=smart&auto=webp&s=8d07814fe5594744f06dc8945ad3f496ea9baa86', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/n90JS4PebXxypqdeoc2kOLYNKWwWx_Q5MWaUcVMeixU.jpeg?auto=webp&s=d5181b6313f6bf4914687cca266971b37f8a285b', 'width': 480}, 'variants': {}}]}
Dual RTX 6000, Blackwell + Ada Lovelace, with thermal imagery
1
This rig is more for training than local inference (though there is a lot of the latter with Qwen), but I thought it might be helpful to see how the new Blackwell cards dissipate heat compared to the older blower style that has been prominent on Quadros since Ampere. There are two IR color ramps - a standard heat map and a rainbow palette that’s better at showing steep thresholds.

You can see the majority of the heat is present at the two inner-facing triangles to the upper side center of the Blackwell card, with exhaust moving up and outward to the side. Underneath, you can see how effective the lower two fans are at moving heat, though the Ada Lovelace card’s fan input is a fair bit cooler. But the downside of the latter’s design is that the heat ramps up linearly through the card. The geometric heatmap of the Blackwell shows how superior its engineering is - it is overall comparatively cooler in surface area despite using double the wattage.

A note on the setup - I have all system fans with exhaust facing inward to push air out the open side of the case. It seems like this shouldn’t work, but the Blackwell seems to stay much cooler this way than with the standard front fans as intake and back fans as exhaust. The CPU is liquid cooled, and completely unaffected by proximity to the Blackwell card.
2025-06-19T23:16:02
https://www.reddit.com/gallery/1lfp0ch
Thalesian
reddit.com
1970-01-01T00:00:00
0
{}
1lfp0ch
false
null
t3_1lfp0ch
/r/LocalLLaMA/comments/1lfp0ch/dual_rtx_6000_blackwell_ada_lovelace_with_thermal/
false
false
https://external-preview…8c4eac6bbf933f61
1
{'enabled': True, 'images': [{'id': '8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?width=108&crop=smart&auto=webp&s=fad8bd00d6ff520849f46654d69c69e63901c5f2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?width=216&crop=smart&auto=webp&s=cf57d5753c3f342ff3956e650009ff2e4f52e402', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?width=320&crop=smart&auto=webp&s=d3787ca61deec41e7bdfcbbabb2c68356d1e676f', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?width=640&crop=smart&auto=webp&s=0a2e76c42f965ab02071be32552d80b3f98da414', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?width=960&crop=smart&auto=webp&s=29fb3c8baf82f1b82dc7a6759c0c09308d7aca59', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?width=1080&crop=smart&auto=webp&s=97f05b1be06f97845d7c07dec0d3d8430f213c0c', 'width': 1080}], 'source': {'height': 4284, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?auto=webp&s=dcc5c1d160a43412779e7e8f98c2ff10112efce5', 'width': 5712}, 'variants': {}}]}
Dual RTX 6000, Blackwell and Ada Lovelace, with thermal imagery
58
This rig is more for training than local inference (though there is a lot of the latter with Qwen), but I thought it might be helpful to see how the new Blackwell cards dissipate heat compared to the older blower style that has been prominent on Quadros since Ampere. There are two IR color ramps - a standard heat map and a rainbow palette that’s better at showing steep thresholds.

You can see the majority of the heat is present at the two inner-facing triangles to the upper side center of the Blackwell card (84 C), with exhaust moving up and outward to the side. Underneath, you can see how effective the lower two fans are at moving heat in the flow-through design, though the Ada Lovelace card’s fan input is a fair bit cooler. But the downside of the latter’s design is that the heat ramps up linearly through the card. The geometric heatmap of the Blackwell shows how superior its engineering is - it is overall comparatively cooler in surface area despite using double the wattage.

A note on the setup - I have all system fans with exhaust facing inward to push air out the open side of the case. It seems like this shouldn’t work, but the Blackwell seems to stay much cooler this way than with the standard front fans as intake and back fans as exhaust. The coolest part of the rig by feel is between the two cards. The CPU is liquid cooled, and completely unaffected by proximity to the Blackwell card.
2025-06-19T23:23:38
https://www.reddit.com/gallery/1lfp66e
Thalesian
reddit.com
1970-01-01T00:00:00
0
{}
1lfp66e
false
null
t3_1lfp66e
/r/LocalLLaMA/comments/1lfp66e/dual_rtx_6000_blackwell_and_ada_lovelace_with/
false
false
https://external-preview…c6332eb18865dfe6
58
{'enabled': True, 'images': [{'id': 'dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?width=108&crop=smart&auto=webp&s=4ee16e93b04183bd2df8a98a5a90d52b535fd63c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?width=216&crop=smart&auto=webp&s=4cfeed01df2b13880ffe69f83d7b2aa9379255c3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?width=320&crop=smart&auto=webp&s=d8c408973de40a266cd7efce7dcf07aaa7e89fdd', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?width=640&crop=smart&auto=webp&s=69de3d9bf9c09f76fc51ddb8950c1601d76f26c9', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?width=960&crop=smart&auto=webp&s=8e2b98f010dabd6f0e650e193be589373b7aeafe', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?width=1080&crop=smart&auto=webp&s=c000f2d95f66b3f4928f0c1388cd2e92a56f71eb', 'width': 1080}], 'source': {'height': 1206, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?auto=webp&s=60525c9e6d9f4144b130037b640e48fac4996b4b', 'width': 1608}, 'variants': {}}]}
Anyone else tracking datacenter GPU prices on eBay?
58
I've been in the habit of checking eBay for AMD Instinct prices for a few years now, and noticed just today that MI210 prices seem to be dropping pretty quickly (though still priced out of my budget!) and there is a used MI300X for sale there for the first time, for *only* $35K /s I watch MI60 and MI100 prices too, but MI210 is the most interesting to me for a few reasons: * It's the last Instinct model to use a PCIe interface (later models use OAM or SH5), which I could conceivably use in servers I actually have, * It's the last Instinct model that runs at an even halfway-sane power draw (300W), * Fabrication processes don't improve significantly in later models until the MI350. In my own mind, my MI60 is mostly for learning how to make these Instinct GPUs work and not burst into flame, and it has indeed been a learning experience. When I invest "seriously" in LLM hardware, it will probably be eBay MI210s, but not until they have come down in price quite a bit more, and not until I have well-functioning training/fine-tuning software based on llama.cpp which works on the MI60. None of that exists yet, though it's progressing. Most people are probably more interested in Nvidia datacenter GPUs. I'm not in the habit of checking for that, but do see now that eBay has 40GB A100 for about $2500, and 80GB A100 for about $8800 (US dollars). Am I the only one, or are other people waiting with bated breath for second-hand datacenter GPUs to become affordable too?
2025-06-19T23:35:16
https://www.reddit.com/r/LocalLLaMA/comments/1lfpewd/anyone_else_tracking_datacenter_gpu_prices_on_ebay/
ttkciar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfpewd
false
null
t3_1lfpewd
/r/LocalLLaMA/comments/1lfpewd/anyone_else_tracking_datacenter_gpu_prices_on_ebay/
false
false
self
58
null
Qwen3 for Apple Neural Engine
120
We just dropped ANEMLL 0.3.3 alpha with Qwen3 support for Apple's Neural Engine https://github.com/Anemll/Anemll Start to support open source! Cheers, Anemll 🤖
2025-06-19T23:43:28
https://www.reddit.com/r/LocalLLaMA/comments/1lfpkyv/qwen3_for_apple_neural_engine/
Competitive-Bake4602
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfpkyv
false
null
t3_1lfpkyv
/r/LocalLLaMA/comments/1lfpkyv/qwen3_for_apple_neural_engine/
false
false
self
120
{'enabled': False, 'images': [{'id': 'nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?width=108&crop=smart&auto=webp&s=954a1218e83cc3cae269b5c08148218bcb74581a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?width=216&crop=smart&auto=webp&s=13e3003e29c3aca37e7b7c6c66154f460bfa1c2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?width=320&crop=smart&auto=webp&s=6bdc77f2aab69be94adf10ff874a6e366982281f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?width=640&crop=smart&auto=webp&s=d8179eb9af5fd34463fc395df019479fe436c491', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?width=960&crop=smart&auto=webp&s=8005d93525666c30be391467c835eb4b196d2518', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?width=1080&crop=smart&auto=webp&s=6512a6ff493879883419d3d4adc77c27ccfb22fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?auto=webp&s=f5cc64d0d057747f36419dc3ac96c665d85dbbaf', 'width': 1200}, 'variants': {}}]}
Current best uncensored model?
274
This is probably one of the biggest advantages of local LLMs, yet there is no universally accepted answer to what the best model is as of June 2025. **So share your BEST uncensored model!** *By 'best uncensored model' I mean the least censored model (the one that helped you get a nuclear bomb in your kitchen), but also the most intelligent one.*
2025-06-19T23:51:12
https://www.reddit.com/r/LocalLLaMA/comments/1lfpqs6/current_best_uncensored_model/
Accomplished-Feed568
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfpqs6
false
null
t3_1lfpqs6
/r/LocalLLaMA/comments/1lfpqs6/current_best_uncensored_model/
false
false
self
274
null
How to set temperature RIGHT
0
In Google AI Studio, I've noticed that lots of people think the models aren't that great, and that when coding they can behave almost erratically and make bad, silly mistakes. The main culprit is that the default temperature of all their models is weirdly set to 1. The temperature range, at least in AI Studio, goes from 0 (completely predictable) to 2 (considerably random), and the default of 1 still isn't great. The temperature of the model is actually PRETTY IMPORTANT, so here's a guide on how to set temperatures for LLMs in general and why reasoning matters too.

Lower temperatures make a model consistently less intelligent because it copies from its training data more, while higher ones raise the upper end of its intelligence range: less consistent, but there's a sweet spot where the LLM can generate more complex things. It's less stable there, though, so there will still be bugs. An answer to this is to keep a lower temp but allow the model to reason, and you'll notice it consistently returns valid code that only gets more accurate the longer it thinks.

We'll start with an example: set low, around 0.4, the model can generate mathematical shapes that are represented semi-well in the data, while higher values like 0.8 make the LLM a bit more "artistic" with the shape's design but then ultimately fail at the actual rendering, since it's very intricate and needs to be 100% correct.

A more complex prompt - a rotating tesseract with bouncing balls inside - doesn't have much training data behind it, so setting the temperature higher would seem like the right option. In practice this leads to inconsistent and sometimes buggy outputs; some are great while some fail, and as you increase the temp further (and make the task harder in general), the percentage it gets right plummets. A better approach is to keep it low and just let the model reason, and 2.5 Flash is a hybrid so this is easy to do. This almost has the effect of "sampling" from the higher-temperature results and pulling out consistently more accurate code. A couple of points higher, at 0.6, even lets the model exercise aesthetics more, but here's where it falters: when the temperature becomes too high (1), the reasoning takes it to a point of instability where the mathematics underpinning the renderer starts to break down, because the model now over-generalizes the tight concepts.
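The same temperature knob exists on any OpenAI-compatible endpoint, which is probably more relevant here than AI Studio itself. A minimal sketch (the base URL, model name, and prompt are placeholders, not something from AI Studio):

```
# Low temperature for consistent code generation; pair it with a
# reasoning-capable model to get the "sampling" effect described above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Render a rotating cube on a single HTML canvas."}],
    temperature=0.4,   # 0 = nearly deterministic, higher = more "artistic" but less stable
    max_tokens=2048,
)
print(resp.choices[0].message.content)
```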
2025-06-20T00:03:53
https://v.redd.it/gslo28iwyy7f1
Longjumping_Spot5843
v.redd.it
1970-01-01T00:00:00
0
{}
1lfq0bk
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gslo28iwyy7f1/DASHPlaylist.mpd?a=1752969846%2CNDUyMTljNDFlZWFhMzFmZjYwNTJkZTlkYzU5Mzk1Yjk3MTZjOTRjYzUzZjQwMTA3N2YxYzBhNGI1ZTEzOTFiZQ%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/gslo28iwyy7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/gslo28iwyy7f1/HLSPlaylist.m3u8?a=1752969846%2CNjYyYTFmODQ1Y2EzYTAyNDQxNzFmOTcwMjljNDlkZDUwYTA2ZGE3YWY2OTBiZWM3NDNhMGZkZDZlNmZlMzRkNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gslo28iwyy7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1lfq0bk
/r/LocalLLaMA/comments/1lfq0bk/how_to_set_temperature_right/
false
false
https://external-preview…bbbd6187677f314a
0
{'enabled': False, 'images': [{'id': 'bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?width=108&crop=smart&format=pjpg&auto=webp&s=509010eefd4fd1d0dfa10b70ce2eb0f0b0fa0077', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?width=216&crop=smart&format=pjpg&auto=webp&s=9e0181f9176015fa90a65e4a42cb0e5b46906dd5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?width=320&crop=smart&format=pjpg&auto=webp&s=9804256a31850dd8105edd8212e2de9893b96f85', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?width=640&crop=smart&format=pjpg&auto=webp&s=896ceadfdb06518b5917e4684b28a08ffb4bd3ed', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?width=960&crop=smart&format=pjpg&auto=webp&s=aa74a81160c8f6dada2da601cb982b8805430529', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?width=1080&crop=smart&format=pjpg&auto=webp&s=64148be7f1f8ebe7e198200723d436b1b8a0f05e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?format=pjpg&auto=webp&s=2897ed7c675845b9a2aa4de7595bf42a215d5d60', 'width': 1920}, 'variants': {}}]}
Running DeepSeek locally using ONNX Runtime
0
Just wanted to drop this here for anyone interested in running models locally using ONNX Runtime. The focus here is on using the NPU in Snapdragon X Elite, but can be extended to other systems as well!
2025-06-20T00:42:28
https://www.youtube.com/live/VRDB_ob7ulA?si=sR3Pes-BGUlPPJxh
DangerousGood4561
youtube.com
1970-01-01T00:00:00
0
{}
1lfqsps
false
null
t3_1lfqsps
/r/LocalLLaMA/comments/1lfqsps/running_deepseek_locally_using_onnx_runtime/
false
false
default
0
null
Performance scaling from 400W to 600W on 2 5090s (MSI, Inno) and 2 4090s (ASUS, Gigabyte) from compute-bound task (SDXL).
8
Hi there guys, hoping you are having a good day/night!

Continuing a bit from this post: [https://www.reddit.com/r/nvidia/comments/1ld3f9n/small_comparison_of_2_5090s_1_voltage_efficient_1/](https://www.reddit.com/r/nvidia/comments/1ld3f9n/small_comparison_of_2_5090s_1_voltage_efficient_1/)

This time, someone gave me the idea to compare how the performance of each GPU scales as you give it more power. From the past post:

* My most efficient 5090: MSI Vanguard SOC
* My least efficient 5090: Inno3D X3
* My most efficient 4090: ASUS TUF
* My least efficient 4090: Gigabyte Gaming OC

TL;DR: the 5090 Inno has a worse bin than the 5090 MSI, needing a lot more power to reach the same performance (and it doesn't reach it at 600W either). On the 4090s, the TUF performs better as the more efficient GPU vs the Gigabyte.

The benchmark is an SDXL task with these settings:

* Batch count 2
* Batch size 2
* 896x1088
* Hires fix at 1.5x, to 1344x1632
* 4xBHI_realplksr_dysample_multi upscaler
* 25 normal steps with DPM++ SDE Sampler
* 10 hi-res steps with Restart Sampler
* reForge webui (I may continue dev soon?)

SDXL is a txt2img generator, and at these low batch sizes performance is limited by compute rather than by bandwidth.

Other hardware/software config:

* AMD Ryzen 7 7800X3D
* 192GB RAM DDR5 6000MHz CL30
* MSI Carbon X670E
* Fedora 41 (Linux), Kernel 6.19
* Torch 2.7.1+cu128

Also, both 4090s have the GALAX 666W VBIOS (this VBIOS gives more performance per clock) and both 5090s have the Gigabyte Aorus Master VBIOS (same effect as the GALAX one, but to a much smaller degree).

Instead of using the 4090 TUF as the baseline (it is the most efficient card), I compare each GPU against its own 400W result. With this, we can see how poorly the 4090 scales with power.

Here are the results!

# RTX 4090 TUF (non-OC)

|Power|Time (s)|Performance|Power Increase|Performance Gain|Efficiency Ratio|
|:-|:-|:-|:-|:-|:-|
|400W|45.4|100%|-|-|1|
|475W|44.8|101.3%|+18.8%|+1.3%|0.07|
|530W|44.2|102.7%|+32.5%|+2.7%|0.08|

Spoiler, but maybe not surprising: this is the worst-scaling GPU, even if it's the most efficient. It hits a voltage limit very early, so even if you give it more power, it is hard to make use of it (+32.5% power for only +2.7% performance). Basically I can't make it use more than 530W effectively (without touching voltage, at least).

# RTX 4090 Gigabyte Gaming OC

|Power|Time (s)|Performance|Power Increase|Performance Gain|Efficiency Ratio|
|:-|:-|:-|:-|:-|:-|
|400W|46.0|100%|-|-|1|
|475W|44.2|104.1%|+18.8%|+4.1%|0.22|
|530W|43.3|106.2%|+32.5%|+6.2%|0.19|
|560W|42.9|107.2%|+40.0%|+7.2%|0.18|

This card scales a bit more with power. At 475W it's already +19% power for +4% performance, and at 560W you get 7.2% more performance by using +40% power. I also have a hard time making it use more than 560W effectively (it hits the voltage limit before the power limit). This is why the 4090s are so famous for being able to undervolt and/or power limit heavily without losing much performance.

# RTX 5090 Inno3D X3 OC

|Power|Time (s)|Performance|Power Increase|Performance Gain|Efficiency Ratio|
|:-|:-|:-|:-|:-|:-|
|400W|42.0|100%|-|-|1|
|475W|38.1|110.2%|+18.8%|+10.2%|0.54|
|600W|34.9|120.3%|+50.0%|+20.3%|0.41|

This GPU, and 5090s in general, have the opposite problem to the 4090. It is really hard to make it reach the voltage limit even at 600W, so it is constantly power limited. Even at 600W, clocks will drop as it reaches the power limit and, in consequence, voltage drops.

It scales way better with power, but it is still less efficient than the baseline. At 600W it uses 50% more power for 20.3% more performance; or, put differently, this card at 400W performs ~83% as fast as at 600W. Despite being a worse bin than the MSI, it arguably scales better with power, as we will see next.

# RTX 5090 MSI Vanguard SOC Launch Edition

|Power|Time (s)|Performance|Power Increase|Performance Gain|Efficiency Ratio|
|:-|:-|:-|:-|:-|:-|
|400W|39.4|100%|-|-|1|
|475W|36.1|109.1%|+18.8%|+9.1%|0.48|
|545W|34.8|113.2%|+36.3%|+13.2%|0.36|
|565W|34.4|114.5%|+41.3%|+14.5%|0.35|
|600W|34.0|115.9%|+50%|+15.9%|0.32|

This card performs the best at any given power point, but at the same time it scales worse than the Inno3D as power increases. Even so, it stays ahead, so in theory this is a better bin than the Inno, as it needs less power for the same performance.

Just as a reference, the RTX 5090 MSI scores about ~16500 on Steel Nomad at 600W ([https://www.3dmark.com/sn/5412987](https://www.3dmark.com/sn/5412987)), while the Inno3D does about ~15700 (didn't save the score, sorry!). So with both at 600W, in that particular case, the MSI is 5% faster.

As a TL;DR: 4090s scale very poorly with more power as they reach the voltage limit earlier (that's why they're famous for keeping their performance when undervolting and/or power limiting), while the 5090s have the opposite problem: they are heavily power limited, and because of that, voltage drops to stay within the desired power limit.

How does this apply to LLMs? Probably not much, except for smaller models or for prompt processing speed, as generation speed is limited by bandwidth instead (and I can do a benchmark of that if someone wants, as there the 5090 is a good amount faster than the 4090).
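If you want to reproduce a sweep like this programmatically, the power limits can also be set from Python via the NVML bindings (nvidia-ml-py) instead of `nvidia-smi -pl`. A rough sketch; changing the limit needs admin/root rights, and 400W is just the lowest point tested above:

```
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# NVML reports power in milliwatts.
current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
print(f"limit: {current_mw / 1000:.0f} W (allowed {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

# Clamp to 400 W before running the benchmark at that data point.
pynvml.nvmlDeviceSetPowerManagementLimit(handle, 400 * 1000)

pynvml.nvmlShutdown()
```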
2025-06-20T01:23:51
https://www.reddit.com/r/LocalLLaMA/comments/1lfrmj6/performance_scaling_from_400w_to_600w_on_2_5090s/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfrmj6
false
null
t3_1lfrmj6
/r/LocalLLaMA/comments/1lfrmj6/performance_scaling_from_400w_to_600w_on_2_5090s/
false
false
self
8
{'enabled': False, 'images': [{'id': 'EdaQxJeXXbDAR7IH6sBO_A4JGYRzpN5CoV5gk49NIGo', 'resolutions': [], 'source': {'height': 100, 'url': 'https://external-preview.redd.it/EdaQxJeXXbDAR7IH6sBO_A4JGYRzpN5CoV5gk49NIGo.jpeg?auto=webp&s=f3790c84b91d68186c7f69a223b13d9924f446bc', 'width': 99}, 'variants': {}}]}
Simulating top-down thinking in LLMs through prompting - a path to AGI like output?
0
The theory behind this is that since LLMs are essentially just coherency engines that use text probability to produce output that best fits whatever narrative is in the context window, if you take a problem, give the LLM enough context and constraints, and then ask it to solve it, you will have created a high-probability path to the solution. I've been testing this out and it seems to generate much stronger ideas than any other prompting method I've used before. I'm sure you guys could get even more out of it; there's a lot of room for improvement. Below is a full description of the method. If it was implemented directly into LLMs so that it was entirely automated, I think it has the potential to revolutionize LLMs in the same way that chain-of-thought prompting was used to create reasoning models.

A Proposed Methodology for LLM Idea Generation by Simulating Top-Down Thinking

**Introduction:**

Current methods for generating ideas with Large Language Models (LLMs) often involve direct, open-ended prompts (e.g., "Invent a new X"). This approach typically yields superficial, generic, or factually incorrect outputs, as the model lacks the deep, structured context required for genuine innovation. The model essentially performs a "bottom-up" pattern match from its training data. This document outlines a structured, multi-phase methodology designed to simulate a more effective "top-down" human thinking process. The goal is to compel the LLM to first build a comprehensive and constrained model of the problem space before attempting to generate solutions within it.

**Methodology: Simulating Top-Down Thinking**

The process is divided into three distinct phases, designed to be executed sequentially in a single context window. It requires an LLM with tool use capabilities (specifically, web search) for optimal performance.

**Phase 1: Knowledge Base Construction and Constraint Definition**

The objective of this phase is to build a factually grounded and verifiable foundation for the problem. The LLM is tasked with acting as a research analyst, using web search to populate the knowledge base and citing sources for all key data points.

1. **Systematic Knowledge Acquisition:** The LLM is prompted to gather and structure information on a given topic, including:
   * Fundamental principles (e.g., relevant physics, chemistry).
   * Current state-of-the-art technologies and their performance metrics.
   * Summaries of landmark research papers.
   * Key commercial or academic entities in the field.
2. **Constraint Identification:** The LLM is then directed to explicitly research the problem's limitations:
   * **Historical Failures:** Documented reasons for failed or discontinued projects.
   * **Theoretical/Physical Limits:** Sourced information on known scientific or engineering constraints.
   * **Economic Barriers:** Data on cost, scalability, and market viability challenges.
3. **Success Criteria Definition:** The LLM researches and defines quantitative metrics that would constitute a breakthrough, based on expert consensus found in industry or academic reports.

At the end of this phase, the context window contains a detailed, sourced, and constrained model of the problem, shifting the task from unconstrained invention to targeted problem-solving.

**Phase 2: Iterative Ideation and Falsification**

This phase introduces a dialectical loop between generative and critical processes.

1. **Hypothesis Generation (Ideation):** The LLM is prompted to generate a set of potential solutions.
Critically, this prompt instructs the model to base its ideas exclusively on the information gathered in Phase 1. This encourages synthesis of the provided data rather than defaulting to generic concepts from its training. 2. **Hypothesis Testing (Falsification):** The LLM is given a new role as a skeptic and tasked with attempting to falsify each of its own generated ideas. This is a crucial step that leverages web access: * **Identify Core Assumption:** The model first articulates the most critical, untested assumption underlying each idea. * **Search for Contradictory Evidence:** It then formulates and executes web searches designed to find data that directly refutes the core assumption. * **Check for Prior Art:** It searches for patents, failed projects, or papers that indicate the idea has already been tried and found unworkable. * **Verdict:** The model provides a final judgment on each idea (e.g., "Plausible," "Questionable," "Falsified"), citing the evidence found. This iterative loop refines the pool of ideas, filtering out weak concepts and identifying the most robust ones. **Phase 3: Synthesis and Solution Outlining** In the final phase, the LLM is prompted to perform a higher-order synthesis of the entire conversation. 1. **Holistic Review:** The prompt instructs the LLM to adopt a persona focused on synthesis and integration. It is told to re-read and connect all the preceding information: the foundational knowledge, the identified constraints, the initial ideas, and the results of the falsification process. 2. **Integrated Solution Generation:** The model is then tasked with generating a final set of refined, integrated solutions. The prompt requires that these solutions must: * Adhere to the principles from Phase 1. * Directly address the bottlenecks from Phase 1. * Incorporate strengths or survive the criticisms from Phase 2. 3. **Development Outline:** For each final solution, the model is asked to produce a high-level, step-by-step plan for potential research and development, grounding the abstract idea in a plausible process. **Discussion and Potential Implications:** This methodology contrasts with Chain-of-Thought (CoT) prompting. While CoT structures an LLM's internal reasoning to solve a defined problem, this "top-down" approach structures the LLM's external information gathering and self-critique to approach an undefined or complex problem. If this methodology proves effective, the next logical step would be to incorporate it into the LLM training process itself via instruction fine-tuning. Training a model on millions of examples of this workflow could embed it as an autonomous behavior. An LLM trained in this manner could potentially: * Automate complex research-and-synthesis tasks from a single high-level user prompt. * Increase the reliability and verifiability of outputs by making evidence-gathering and self-critique an intrinsic part of its generation process. * Function as a more capable partner in complex domains such as scientific research, engineering design, and strategic analysis. Further testing is required to validate the robustness of this methodology across various problem types and LLM architectures.
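As a rough illustration of how the three phases can be chained in one context window, here is a sketch against an OpenAI-compatible endpoint. The prompts are abbreviated placeholders and the web-search tool calls described in Phases 1 and 2 are left out, so treat it as scaffolding for the methodology rather than a full implementation:

```
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
history = []

def ask(prompt: str) -> str:
    # Keep every phase in the same conversation so later phases can
    # synthesize from everything gathered earlier.
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="local-model", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

topic = "low-cost grid-scale energy storage"  # placeholder problem
ask(f"Phase 1: build a sourced knowledge base, constraints, and success criteria for {topic}.")
ask("Phase 2: generate candidate solutions strictly from the material above, then try to falsify each one.")
print(ask("Phase 3: synthesize the surviving ideas into integrated solutions with an R&D outline."))
```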
2025-06-20T01:48:25
https://www.reddit.com/r/LocalLLaMA/comments/1lfs36u/simulating_topdown_thinking_in_llms_through/
edspert
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfs36u
false
null
t3_1lfs36u
/r/LocalLLaMA/comments/1lfs36u/simulating_topdown_thinking_in_llms_through/
false
false
self
0
null
help with Condaerror
3
I'm very new to AI and I'm really confused about all this. I'm trying to use AllTalk, but I'm hitting an error that says “CondaError: Run 'conda init' before 'conda activate'.” I searched the internet and it's really hard for me to understand, so I'm asking here to see if someone could explain it to me in a more... uhh... simple way, without my peanut-sized brain turning into peanut butter. P.S.: if you know what "No module named 'whisper'" means, give me a hand with it please.
2025-06-20T02:17:57
https://www.reddit.com/r/LocalLLaMA/comments/1lfsntm/help_with_condaerror/
miorex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfsntm
false
null
t3_1lfsntm
/r/LocalLLaMA/comments/1lfsntm/help_with_condaerror/
false
false
self
3
null
Running Local LLMs (“AI”) on Old Unsupported AMD GPUs and Laptop iGPUs (Arch Linux Guide)
0
2025-06-20T02:34:36
https://ahenriksson.com/posts/running-llm-on-old-amd-gpus/
Kallocain
ahenriksson.com
1970-01-01T00:00:00
0
{}
1lfsz42
false
null
t3_1lfsz42
/r/LocalLLaMA/comments/1lfsz42/running_local_llms_ai_on_old_unsupported_amd_gpus/
false
false
default
0
null
If an omni-modal AI exists that can extract any sort of information from any given modality/ies (text, audio, video, GUI, etc), which task would you use it for ?
0
One common example is intelligent document processing. But I imagine we could also apply it to random YouTube videos to cross-check for NSFW or gruesome content or audio and describe, in mild text, what sort of content was there for large-scale analysis. I see that not much research exists for information extraction these days, at least research that actually makes sense (beyond simple NER or RE that not many care about). Opening up a post here for discussion!
2025-06-20T02:40:32
https://www.reddit.com/r/LocalLLaMA/comments/1lft30z/if_an_omnimodal_ai_exists_that_can_extract_any/
Marionberry6886
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lft30z
false
null
t3_1lft30z
/r/LocalLLaMA/comments/1lft30z/if_an_omnimodal_ai_exists_that_can_extract_any/
false
false
self
0
null
Running Local LLMs (“AI”) on Old Unsupported AMD GPUs and Laptop iGPUs using llama.cpp with Vulkan (Arch Linux Guide)
19
2025-06-20T02:51:23
https://ahenriksson.com/posts/running-llm-on-old-amd-gpus/
Kallocain
ahenriksson.com
1970-01-01T00:00:00
0
{}
1lftaep
false
null
t3_1lftaep
/r/LocalLLaMA/comments/1lftaep/running_local_llms_ai_on_old_unsupported_amd_gpus/
false
false
default
19
null
[DEAL] On-demand B200 GPUs for $1.49/hr at DeepInfra (promo ends June 30)
0
* No commitments
* Any configuration (1x, 2x and so on)
* Minute-level billing
* Cheapest in the market 👌
2025-06-20T03:00:40
https://www.reddit.com/r/LocalLLaMA/comments/1lftglj/deal_ondemand_b200_gpus_for_149hr_at_deepinfra/
temirulan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lftglj
false
null
t3_1lftglj
/r/LocalLLaMA/comments/1lftglj/deal_ondemand_b200_gpus_for_149hr_at_deepinfra/
false
false
self
0
null
Open Discussion: Improving HTML-to-Markdown Extraction Using Local LLMs (7B/8B, llama.cpp) – Seeking Feedback on My Approach!
15
Hey Reddit, I'm working on a smarter way to convert HTML web pages to high-quality Markdown using **local LLMs** (Qwen2.5-7B/8B, llama.cpp) running on consumer GPUs. My goal: outperform traditional tools like Readability or html2text on tricky websites (e.g. modern SPAs, tech blogs, and noisy sites) — and do it all *fully offline*, without sending data to cloud APIs. # Project Outline **Core features:** * **Website type detection:** My script first analyzes if a site is text-focused or media-centric (e.g. video/image/social), with structural and domain heuristics. * **HTML structure analysis:** Uses BeautifulSoup to extract candidate content areas, main titles, headings, and framework fingerprints (React, Vue, WordPress, etc). * **AI-powered extraction planning:** Local LLM generates JSON-formatted extraction strategies (selectors, noise filters, special rules) for each page, not just using static rules. * **AI quality scoring:** After Markdown extraction, the LLM scores content for completeness, readability, info value, and offers improvement advice. Low scores auto-trigger domain-specific extraction rule generation for next time. * **Everything is local:** I use llama-cpp-python with quantized GGUF models, so it runs on a 4070/4080/4090 or even a 7B model on a MacBook. # What works well? * On standard article/news/blog pages, quality is usually “good” or “excellent” (AI assessment scores 7-9/10). * On tricky/modern sites (dynamic content, noisy layout, SPAs), the LLM can suggest better selectors or filters than hard-coded rules. * All quality metrics, extraction strategies, and improvement rules are saved as JSON/Markdown reports for review or reuse. # Issues & Open Questions * For *media-heavy* or JavaScript-only sites, even the LLM struggles without browser rendering. Anyone have robust approaches for these? * The overall speed is decent (one page ≈ 10–20 sec on 4070 8G, q4\_K\_M), but batch processing hundreds of pages could be faster. Any tips for optimizing llama.cpp in this workflow? * Are there other open-source local LLM tools you’d recommend for this use case? * Would you find such a tool useful for personal archiving, knowledge bases, or note-taking? * Any recommended datasets or benchmarks for evaluating web-to-Markdown extraction quality (beyond manual review)? # Source and Demo This is still a work-in-progress, but happy to share some code snippets or experiment results if anyone is interested. Would love to hear your feedback, suggestions, or experiences building similar tools! *TL;DR: Building a fully local, AI-enhanced HTML-to-Markdown extractor that learns from its mistakes. Looking for advice, criticism, or fellow hackers to discuss!*
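To make the "AI-powered extraction planning" step concrete, here's a minimal sketch of how that stage can look with llama-cpp-python and BeautifulSoup. The model path, prompt wording, and JSON schema are illustrative choices, not fixed parts of the project:

```
import json
from bs4 import BeautifulSoup
from llama_cpp import Llama

# Quantized GGUF model kept resident between pages.
llm = Llama(model_path="qwen2.5-7b-instruct-q4_k_m.gguf", n_ctx=8192, n_gpu_layers=-1)

def plan_extraction(html: str) -> dict:
    # Give the model a compact skeleton of the page instead of raw HTML.
    soup = BeautifulSoup(html, "html.parser")
    skeleton = "\n".join(
        tag.name + (f"#{tag.get('id')}" if tag.get("id") else "")
        for tag in soup.find_all(True)[:200]
    )
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": 'Return only JSON: {"content_selector": str, "noise_selectors": [str]}'},
            {"role": "user", "content": skeleton},
        ],
        response_format={"type": "json_object"},
        temperature=0.2,
    )
    return json.loads(out["choices"][0]["message"]["content"])
```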
2025-06-20T03:28:14
https://www.reddit.com/r/LocalLLaMA/comments/1lftz5s/open_discussion_improving_htmltomarkdown/
coolmenu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lftz5s
false
null
t3_1lftz5s
/r/LocalLLaMA/comments/1lftz5s/open_discussion_improving_htmltomarkdown/
false
false
self
15
null
I did a thing...
1
[removed]
2025-06-20T04:01:15
https://www.reddit.com/r/LocalLLaMA/comments/1lfukb1/i_did_a_thing/
Ok-Mud9471
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfukb1
false
null
t3_1lfukb1
/r/LocalLLaMA/comments/1lfukb1/i_did_a_thing/
false
false
self
1
null
New 24B finetune: Impish_Magic_24B
61
It's the **20th of June, 2025**—The world is getting more and more chaotic, but let's look at the bright side: **Mistral** released a new model at a **very** good size of **24B**, no more "sign here" or "accept this weird EULA" there, a proper **Apache 2.0 License**, nice! 👍🏻 This model is based on **mistralai/Magistral-Small-2506** so naturally I named it **Impish\_Magic**. Truly excellent size, I tested it on my laptop (**16GB gpu**) and it works quite well (**4090m**). Strong in productivity & in fun. Good for creative writing, and writer style emulation. New unique data, see details in the model card: [https://huggingface.co/SicariusSicariiStuff/Impish\_Magic\_24B](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) The model would be on **Horde** at **very high availability** for the next few hours, so give it a try!
2025-06-20T04:21:57
https://www.reddit.com/r/LocalLLaMA/comments/1lfuxn1/new_24b_finetune_impish_magic_24b/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfuxn1
false
null
t3_1lfuxn1
/r/LocalLLaMA/comments/1lfuxn1/new_24b_finetune_impish_magic_24b/
false
false
self
61
{'enabled': False, 'images': [{'id': '51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=108&crop=smart&auto=webp&s=5bb85cf25fd314ab613856c46b8fce17d683ab63', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=216&crop=smart&auto=webp&s=1139b2a2a9b2aca93674032073a0a745354bdc6f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=320&crop=smart&auto=webp&s=5beeecb09dd3baf931e42193ae8a8ba8b1a32cb2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=640&crop=smart&auto=webp&s=d00031fb9987a926924c39a7e75aceb9ab016129', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=960&crop=smart&auto=webp&s=7caccf6df93da3e73af148cc0c8a8e32ec8e4acb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=1080&crop=smart&auto=webp&s=62751df1e2485bebac0ce4aa2ffc578ff36e5992', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?auto=webp&s=67181fba9287946e5731ed0520a39b1ecf0a95db', 'width': 1200}, 'variants': {}}]}
96GB VRAM plus 256GB/512GB Fast RAM
12
I'm thinking of combining the 96GB of VRAM (1800GB/s) from the RTX 6000 PRO (I already have this) with 256GB or 512GB of RAM (410GB/s) in the upcoming Threadripper. Do you all think this could run any largish versions of DeepSeek with useful throughput?
2025-06-20T04:42:49
https://www.reddit.com/r/LocalLLaMA/comments/1lfvaos/96gb_vram_plus_256gb512gb_fast_ram/
SteveRD1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfvaos
false
null
t3_1lfvaos
/r/LocalLLaMA/comments/1lfvaos/96gb_vram_plus_256gb512gb_fast_ram/
false
false
self
12
null
testing ai realism without crossing the line using stabilityai and domoai
0
not tryin to post nsfw, just wanted to test the boundaries of realism and style. [stabilityai](http://stability.ai) with some custom models gave pretty decent freedom. then touched everything up in [domoai](https://www.domoai.app/home?via=081621AUG) using a soft-glow filter. the line between “art” and “too much” is super thin so yeah… proceed wisely.
2025-06-20T04:44:25
https://www.reddit.com/r/LocalLLaMA/comments/1lfvbqg/testing_ai_realism_without_crossing_the_line/
Own_View3337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfvbqg
false
null
t3_1lfvbqg
/r/LocalLLaMA/comments/1lfvbqg/testing_ai_realism_without_crossing_the_line/
false
false
self
0
null
Semantic kernel chatcompletion . Send help
1
Hey guys, sorry for the dumb question, but I've been stuck for a while and I can't seem to find an answer anywhere. I am using chat completion with auto-invoke kernel functions. It's calling my plugin, and I can see that a tool message is being returned as well as the model response, sometimes in two separate messages and sometimes as one. But the model does not return the tool response (JSON) as-is; it always rephrases it, no matter how many top-level prompts I add.

Is it normal practice to manually invoke a function if I need its raw output as the model response? Or is the model supposed to return that by default? Not sure if I am making sense. From what I can see, the model never seems to respond directly to what's returned in the tool message or have any understanding of it, even if I explicitly tell it to.

I was watching a tutorial on chat completion and the guy invoked the function manually, even when using chat completion, in order to return the function response as the model response.

I can't even ask AI models about the above because they keep agreeing with anything I say, even if it's wrong. Driving me insane.
2025-06-20T04:51:33
https://www.reddit.com/r/LocalLLaMA/comments/1lfvg1q/semantic_kernel_chatcompletion_send_help/
Huntersolomon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfvg1q
false
null
t3_1lfvg1q
/r/LocalLLaMA/comments/1lfvg1q/semantic_kernel_chatcompletion_send_help/
false
false
self
1
null
Need help building real-time Avatar API — audio-to-video inference on backend (HPC server)
1
Hi all, I’m developing a real-time API for avatar generation using **MuseTalk**, and I could use some help optimizing the audio-to-video inference process under live conditions. The backend runs on a high-performance computing (HPC) server, and I want to keep the system responsive for real-time use.

# Project Overview

I’m building an **API** where a user speaks through a frontend interface (browser/mic), and the backend generates a lip-synced video avatar using MuseTalk. The API should:

* Accept real-time audio from users.
* Continuously split incoming audio into short chunks (e.g., 2 seconds).
* Pass these chunks to MuseTalk for inference.
* Return or stream the generated video frames to the frontend.

The inference is handled server-side on a GPU-enabled HPC machine. Audio processing, segmentation, and file handling are already in place — I now need MuseTalk to run in a loop or long-running service, continuously processing new audio files and generating corresponding video clips.

# Project Context: What is MuseTalk?

[MuseTalk](https://github.com/Winfredy/MuseTalk) is a real-time talking-head generation framework. It works by taking an input audio waveform and generating a photorealistic video of a given face (avatar) lip-syncing to that audio. It combines a diffusion model with a UNet-based generator and a VAE for video decoding. The key modules include:

* **Audio Encoder (Whisper)**: Extracts features from the input audio.
* **Face Encoder / Landmarks Module**: Extracts facial structure and landmark features from a static avatar image or video.
* **UNet + Diffusion Pipeline**: Generates motion frames based on audio + visual features.
* **VAE Decoder**: Reconstructs the generated features into full video frames.

MuseTalk supports real-time usage by keeping the diffusion and rendering lightweight enough to run frame-by-frame while processing short clips of audio.

# My Goal

To make MuseTalk continuously monitor a folder or a stream of audio (split into small clips, e.g., 2 seconds long), run inference for each clip in real time, and stream the output video frames to the web frontend. I have already handled audio segmentation, saving clips, and joining the final video output. The remaining piece is modifying MuseTalk's `realtime_inference.py` so that it continuously listens for new audio clips, processes them, and outputs corresponding video segments in a loop.

# Key Technical Challenges

1. **Maintaining a Real-Time Inference Loop**
   * I want to keep the process running continuously, waiting for new audio chunks and generating avatar video without restarting the inference pipeline for each clip.
2. **Latency and Sync**
   * There’s a small but significant lag between audio input and avatar response due to model processing and file I/O. I want to minimize this.
3. **Resource Usage**
   * In long sessions, GPU memory spikes or accumulates over time, possibly due to model reloading or tensor retention.

# Questions

* Has anyone modified MuseTalk to support streaming or a long-lived inference loop?
* What is the best way to keep Whisper and the MuseTalk pipeline loaded in memory and reuse them for multiple consecutive clips?
* How can I improve the sync between the end of one video segment and the start of the next?
* Are there any known bottlenecks in `realtime_inference.py` or frame generation that could be optimized?

# What I’ve Already Done

* Created a frontend + backend setup for audio capture and segmentation.
* Automatically save 2-second audio clips to a folder.
* Trigger MuseTalk on new files using file polling. * Join the resulting video outputs into a continuous video. * Edited `realtime_inference.py` to run in a loop, but facing issues with lingering memory and lag. If anyone has experience extending MuseTalk for streaming use, or has insights into efficient frame-by-frame inference or audio synchronization strategies, I’d appreciate any advice, suggestions, or reference projects. Thank you.
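For the "continuously listens for new audio clips" piece, the skeleton can be as simple as the polling loop below. `run_musetalk_inference` is a placeholder for whatever `realtime_inference.py` ends up exposing once the models are loaded (it is not a real MuseTalk API); the point is only that model loading happens once, outside the loop:

```
import time
from pathlib import Path

AUDIO_DIR = Path("incoming_audio")    # 2-second clips land here
OUT_DIR = Path("generated_video")
processed: set[Path] = set()

def run_musetalk_inference(clip: Path, out_dir: Path) -> None:
    """Placeholder: call the preloaded Whisper + MuseTalk pipeline on one clip."""
    ...

while True:
    for clip in sorted(AUDIO_DIR.glob("*.wav")):
        if clip in processed:
            continue
        run_musetalk_inference(clip, OUT_DIR)   # models stay loaded between calls
        processed.add(clip)
    time.sleep(0.1)   # polling interval; an inotify/watchdog watcher would cut latency further
```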
2025-06-20T06:12:28
https://www.reddit.com/r/LocalLLaMA/comments/1lfwqlk/need_help_building_realtime_avatar_api/
timehascomeagainn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfwqlk
false
null
t3_1lfwqlk
/r/LocalLLaMA/comments/1lfwqlk/need_help_building_realtime_avatar_api/
false
false
self
1
null
Trying to understand
0
Hello Im a second year student of Informatics and have just finished my course of mathematical modelling (linear-non linear systems, differential equations etc) can someone suggest me a book that explains the math behind LLM (Like DeepSeek?) i know that there is some kind of matrix-multiplication done in the background to select tokens but i dont understand what this really means. If this is not the correct place to ask sorry in advance
2025-06-20T07:38:46
https://www.reddit.com/r/LocalLLaMA/comments/1lfy28p/trying_to_understand/
Remarkable_Fold_4202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfy28p
false
null
t3_1lfy28p
/r/LocalLLaMA/comments/1lfy28p/trying_to_understand/
false
false
self
0
null
RTX 6000 PRO Blackwell Max Q? Non Max Q?
6
Hello everyone, I’m looking for some advice on upgrading my personal GPU server for research purposes. I’m considering the **RTX 6000 PRO Blackwell**, but I’m currently debating between the **Max-Q** and **non-Max-Q** versions. From what I understand, the Max-Q version operates at roughly **half the power** and delivers about **12% lower performance** compared to the full-power version. My question is this: If I manually **limit the power** of the **non-Max-Q version to the same level** as the Max-Q, would the performance be **similar**, or could it be **better than the Max-Q by more than 12%**? My reasoning is that the **non-Max-Q version might be more efficient** at lower power levels due to better thermal and power delivery design, even when underclocked. Has anyone tested this or seen benchmarks comparing the two under the same power limits? Thanks in advance!
2025-06-20T07:45:11
https://www.reddit.com/r/LocalLLaMA/comments/1lfy5sy/rtx_6000_pro_blackwell_max_q_non_max_q/
Opening_Progress6820
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfy5sy
false
null
t3_1lfy5sy
/r/LocalLLaMA/comments/1lfy5sy/rtx_6000_pro_blackwell_max_q_non_max_q/
false
false
self
6
null
Who's the voice Narrator in this video??
0
I've realized that you guys are very knowledgeable in almost every domain. I know someone must know the voice over in this video. [https://www.youtube.com/watch?v=miQjNZtohWw](https://www.youtube.com/watch?v=miQjNZtohWw) Tell me. I want to use it my project https://preview.redd.it/dxscjweih18f1.png?width=1366&format=png&auto=webp&s=a3f2ad43729f30ce380e7f750acb28616290c2d8
2025-06-20T08:03:58
https://www.reddit.com/r/LocalLLaMA/comments/1lfyftp/whos_the_voice_narrator_in_this_video/
mikemaina
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfyftp
false
{'oembed': {'author_name': 'Travpedia', 'author_url': 'https://www.youtube.com/@travpedia', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/miQjNZtohWw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="UNREAL PLACES | The Most Jaw-Dropping Locations on Planet Earth"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/miQjNZtohWw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'UNREAL PLACES | The Most Jaw-Dropping Locations on Planet Earth', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1lfyftp
/r/LocalLLaMA/comments/1lfyftp/whos_the_voice_narrator_in_this_video/
false
false
https://external-preview…6eb4bc99742258cd
0
{'enabled': False, 'images': [{'id': '1u50t5PGupF5QtsTDf98eCFe_M9tfx1SJ4uI9eJen6U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1u50t5PGupF5QtsTDf98eCFe_M9tfx1SJ4uI9eJen6U.jpeg?width=108&crop=smart&auto=webp&s=106237c73a7df8a31bb62eee409b69addb6c827f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1u50t5PGupF5QtsTDf98eCFe_M9tfx1SJ4uI9eJen6U.jpeg?width=216&crop=smart&auto=webp&s=43a243e2d9ef606312af221717d52e6f9e48c222', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1u50t5PGupF5QtsTDf98eCFe_M9tfx1SJ4uI9eJen6U.jpeg?width=320&crop=smart&auto=webp&s=99921e522744a27b8943b4bd6b7cd90ed1b060df', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1u50t5PGupF5QtsTDf98eCFe_M9tfx1SJ4uI9eJen6U.jpeg?auto=webp&s=fb807020b7562dc1f5efda5b35b44aa03060d263', 'width': 480}, 'variants': {}}]}
Best model for a RX 6950xt?
3
Hello everyone, I'm currently using an Gigabyte RX 6950xt 16gb gddr6 from AMD in my main gaming rig, but i'm looking to upgrade it and i was wondering if it could be repurposed for using local AI. What model would you suggest to try? Thanks :)
2025-06-20T08:17:55
https://www.reddit.com/r/LocalLLaMA/comments/1lfyna1/best_model_for_a_rx_6950xt/
InvestitoreConfuso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfyna1
false
null
t3_1lfyna1
/r/LocalLLaMA/comments/1lfyna1/best_model_for_a_rx_6950xt/
false
false
self
3
null
AMD Radeon AI PRO R9700 GPU Offers 4x More TOPS & 2x More AI Performance Than Radeon PRO W7800
42
2025-06-20T08:21:25
https://wccftech.com/amd-radeon-ai-pro-r9700-gpu-4x-more-tops-2x-ai-performance-vs-radeon-pro-w7800/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1lfyp3g
false
null
t3_1lfyp3g
/r/LocalLLaMA/comments/1lfyp3g/amd_radeon_ai_pro_r9700_gpu_offers_4x_more_tops/
false
false
default
42
{'enabled': False, 'images': [{'id': 'EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?width=108&crop=smart&auto=webp&s=b9bf734907a603457ffb72691d1b99209b887347', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?width=216&crop=smart&auto=webp&s=f80c11aed3dc3c6106bdd2e15c51bc052160a5de', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?width=320&crop=smart&auto=webp&s=41a7cb037395d23a3eef716b9edc92a3e370b065', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?width=640&crop=smart&auto=webp&s=0632c2173f66d6b249a8bebef7acd8143977e85f', 'width': 640}, {'height': 515, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?width=960&crop=smart&auto=webp&s=5fc9d0d4093a6df186e084ae2e9d5c35752b1dc2', 'width': 960}, {'height': 579, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?width=1080&crop=smart&auto=webp&s=c74329dd08dfbfb29e6e9b3102c841a0aa932775', 'width': 1080}], 'source': {'height': 1306, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?auto=webp&s=61b20ba5d2082021ee891aaf32eeb4b5dda7d99a', 'width': 2432}, 'variants': {}}]}
The guide to MCP I never had
2
MCP has been going viral, but if you are overwhelmed by the jargon, you are not alone. I felt the same way, so I took some time to learn about MCP and created a free guide that explains it all in a simple way. It covers the following topics in detail: 1. The problems with existing AI tools. 2. An introduction to MCP and its core components. 3. How MCP works under the hood. 4. The problem MCP solves and why it even matters. 5. The three layers of MCP (and how I finally understood them). 6. The easiest way to connect 100+ managed MCP servers with built-in auth. 7. Six practical examples with demos. 8. Some limitations of MCP.
2025-06-20T08:39:57
https://levelup.gitconnected.com/the-guide-to-mcp-i-never-had-f79091cf99f8?sk=8c94f37d7c87b2e147366de13888388b
anmolbaranwal
levelup.gitconnected.com
1970-01-01T00:00:00
0
{}
1lfyyu4
false
null
t3_1lfyyu4
/r/LocalLLaMA/comments/1lfyyu4/the_guide_to_mcp_i_never_had/
false
false
https://external-preview…c021aa59e03ba892
2
{'enabled': False, 'images': [{'id': 'kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?width=108&crop=smart&auto=webp&s=582853b88840f195a019d50be35950e84eeb3879', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?width=216&crop=smart&auto=webp&s=c88157dd92611a4f20cf516eb6a03496d0b212a5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?width=320&crop=smart&auto=webp&s=36309bfc45b7ee440cf504e6bd4cb512a84b7ca6', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?width=640&crop=smart&auto=webp&s=88289c0caff5d5b6133153fbabbcbfaeb2c6a195', 'width': 640}, {'height': 542, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?width=960&crop=smart&auto=webp&s=8c903bb8b120c01fec5bdebaae85b330553da973', 'width': 960}, {'height': 610, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?width=1080&crop=smart&auto=webp&s=f37afa971bc9bbf065010428819d1133c7543eb8', 'width': 1080}], 'source': {'height': 678, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?auto=webp&s=e12aee72a18e5c3d70fca6f2386b45bba5521667', 'width': 1200}, 'variants': {}}]}
Built a local-first RAG system using SQLite
1
[removed]
2025-06-20T08:53:23
https://www.reddit.com/r/LocalLLaMA/comments/1lfz5rw/built_a_localfirst_rag_system_using_sqlite/
gogozad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfz5rw
false
null
t3_1lfz5rw
/r/LocalLLaMA/comments/1lfz5rw/built_a_localfirst_rag_system_using_sqlite/
false
false
self
1
null
Built a local-first RAG system using SQLite
1
[removed]
2025-06-20T09:00:03
https://www.reddit.com/r/LocalLLaMA/comments/1lfz999/built_a_localfirst_rag_system_using_sqlite/
gogozad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfz999
false
null
t3_1lfz999
/r/LocalLLaMA/comments/1lfz999/built_a_localfirst_rag_system_using_sqlite/
false
false
self
1
null
Repurposing 800 x RX 580s for LLM inference - 4 months later - learnings
159
Back in March I asked this sub if RX 580s could be used for anything useful in the LLM space and asked for help on how to implement inference: [https://www.reddit.com/r/LocalLLaMA/comments/1j1mpuf/repurposing_old_rx_580_gpus_need_advice/](https://www.reddit.com/r/LocalLLaMA/comments/1j1mpuf/repurposing_old_rx_580_gpus_need_advice/)

Four months later, we've built a fully functioning inference cluster using around 800 RX 580s across 132 rigs. I want to come back and share what worked and what didn't, so that others can learn from our experience.

# what worked

**Vulkan with llama.cpp**

* The Vulkan backend worked on all RX 580s
* Required compiling Shaderc manually to get `glslc`
* llama.cpp built with custom flags for Vulkan support and no AVX instructions (the CPUs on our builds are very old Celerons). We tried countless build attempts and this is the best we could do:

    CXXFLAGS="-march=core2 -mtune=generic" cmake .. \
      -DLLAMA_BUILD_SERVER=ON \
      -DGGML_VULKAN=ON \
      -DGGML_NATIVE=OFF \
      -DGGML_AVX=OFF -DGGML_AVX2=OFF \
      -DGGML_AVX512=OFF -DGGML_AVX_VNNI=OFF \
      -DGGML_FMA=OFF -DGGML_F16C=OFF \
      -DGGML_AMX_TILE=OFF -DGGML_AMX_INT8=OFF -DGGML_AMX_BF16=OFF \
      -DGGML_SSE42=ON

**Per-rig multi-GPU scaling**

* Each rig runs 6 GPUs and can split small models across multiple Kubernetes containers (at minimum one GPU per container - we couldn't split a single GPU's VRAM between two containers)
* Used `--ngl 999`, `--sm none` for 6 containers across the 6 GPUs
* For bigger contexts we could extend a small model's limits and use more than one GPU's VRAM
* For bigger models (Qwen3-30B_Q8_0) we used `--ngl 999`, `--sm layer` and built a recent llama.cpp revision with reasoning management, so thinking mode can be turned off with `--reasoning-budget 0`

**Load balancing setup**

* Built a FastAPI load-balancer backend that assigns each user to an available Kubernetes pod (a rough sketch of the idea is at the end of this post)
* Redis tracks current pod load and handles session stickiness
* The load balancer also does prompt cache retention and restoration. The biggest challenge was getting the llama.cpp servers to accept old prompt caches that weren't 100% in the processed eval format and would otherwise get dropped and re-evaluated from the beginning. We found that `--cache-reuse 32` allows a margin of error big enough for all the conversation caches to be evaluated instantly
* Models respond via streaming SSE in an OpenAI-compatible format

# what didn't work

**ROCm HIP / PyTorch / TensorFlow inference**

* ROCm technically works and tools like `rocminfo` and `rocm-smi` run fine, but we couldn't get a working llama.cpp HIP build
* There's no functional PyTorch backend for Polaris-class gfx803 cards, so PyTorch didn't work
* Couldn't get TensorFlow to work on these cards either

We're also putting part of our cluster through some live testing. If you want to throw some prompts at it, you can hit it here: [https://www.masterchaincorp.com](https://www.masterchaincorp.com)

It's running Qwen-30B and the frontend is just a basic llama.cpp server webui. Nothing fancy, so feel free to poke around and help test the setup. Feedback welcome!
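For anyone curious about the load-balancer piece, here's a rough sketch of the pod assignment and Redis session-stickiness idea. It is not our exact code: the pod URLs, Redis key names and the session header are illustrative, and SSE streaming is left out for brevity.

    # Minimal sketch: sticky sessions + least-loaded pod selection via Redis.
    import redis
    import httpx
    from fastapi import FastAPI, Request

    app = FastAPI()
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    PODS = ["http://pod-0:8080", "http://pod-1:8080"]  # illustrative pod URLs

    def pick_pod(session_id: str) -> str:
        # Sticky: reuse the pod this session already talked to, so its prompt cache is warm.
        pod = r.get(f"session:{session_id}")
        if pod is None:
            # Otherwise pick the pod with the fewest in-flight requests.
            pod = min(PODS, key=lambda p: int(r.get(f"load:{p}") or 0))
            r.set(f"session:{session_id}", pod, ex=3600)
        return pod

    @app.post("/v1/chat/completions")
    async def proxy(request: Request):
        body = await request.json()
        session_id = request.headers.get("x-session-id", "anon")  # illustrative header
        pod = pick_pod(session_id)
        r.incr(f"load:{pod}")
        try:
            async with httpx.AsyncClient(timeout=None) as client:
                resp = await client.post(f"{pod}/v1/chat/completions", json=body)
                return resp.json()
        finally:
            r.decr(f"load:{pod}")

The real setup additionally restores prompt caches on the target pod before forwarding, which is where `--cache-reuse 32` comes in.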
2025-06-20T09:14:15
https://www.reddit.com/r/LocalLLaMA/comments/1lfzh05/repurposing_800_x_rx_580s_for_llm_inference_4/
rasbid420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfzh05
false
null
t3_1lfzh05
/r/LocalLLaMA/comments/1lfzh05/repurposing_800_x_rx_580s_for_llm_inference_4/
false
false
self
159
null
Any tools that help you build simple interactive projects from an idea?
4
I get random ideas sometimes, like a mini-game, typing test, or a little music toy, and I’d love to turn them into something playable without starting from scratch. Is there any tool that lets you describe what you want and helps build it out, even just a rough version? Not looking for anything super advanced, just fun stuff I can play around with or share.
2025-06-20T09:26:09
https://www.reddit.com/r/LocalLLaMA/comments/1lfznhz/any_tools_that_help_you_build_simple_interactive/
Fun_Construction_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfznhz
false
null
t3_1lfznhz
/r/LocalLLaMA/comments/1lfznhz/any_tools_that_help_you_build_simple_interactive/
false
false
self
4
null
What is a super lightweight model for checking grammar?
11
I have been looking for something that can check grammar. Nothing too serious, just something to look for obvious mistakes in a git commit message. After not finding a lightweight application, I'm wondering if there's an LLM that's super light to run on a CPU that can do this.
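For context, this is roughly what I'm hoping to wire up once I find a suitable model - a sketch assuming a local OpenAI-compatible endpoint such as the one llama.cpp's server exposes (the port and model name are placeholders):

    # Sketch of a commit-message grammar check against a local OpenAI-compatible server.
    # Usage: python check_commit.py < .git/COMMIT_EDITMSG
    import sys
    import json
    import urllib.request

    def check_grammar(message: str) -> str:
        payload = {
            "model": "local",  # placeholder model name
            "messages": [
                {"role": "system",
                 "content": "Fix grammar and spelling in this git commit message. "
                            "Reply with the corrected message only."},
                {"role": "user", "content": message},
            ],
            "temperature": 0,
        }
        req = urllib.request.Request(
            "http://localhost:8080/v1/chat/completions",  # placeholder port
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            out = json.load(resp)
        return out["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(check_grammar(sys.stdin.read()))

So really the question is just which small model is good enough for this while staying fast on CPU.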
2025-06-20T09:28:14
https://www.reddit.com/r/LocalLLaMA/comments/1lfzon7/what_is_a_super_lightweight_model_for_checking/
kudikarasavasa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lfzon7
false
null
t3_1lfzon7
/r/LocalLLaMA/comments/1lfzon7/what_is_a_super_lightweight_model_for_checking/
false
false
self
11
null
How to run open source?
0
Yeah, so I'm new to AI and I'm just wondering one thing. If I get an open-source model, how can I run it? I find it very hard and can't seem to do it.
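For reference, the sort of minimal setup I keep seeing suggested looks something like this (llama-cpp-python with a downloaded GGUF file; the model path is a placeholder) - is this the right track?

    # Minimal example: load a local GGUF model and ask it one question.
    # Assumes `pip install llama-cpp-python` and a GGUF file on disk.
    from llama_cpp import Llama

    llm = Llama(model_path="model.gguf", n_ctx=4096)  # placeholder path
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        max_tokens=64,
    )
    print(out["choices"][0]["message"]["content"])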
2025-06-20T10:01:22
https://www.reddit.com/r/LocalLLaMA/comments/1lg068y/how_run_open_source/
Easy_Marsupial_5833
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lg068y
false
null
t3_1lg068y
/r/LocalLLaMA/comments/1lg068y/how_run_open_source/
false
false
self
0
null
Good stable voice cloning and TTS without an overly complicated installation?
3
I wanted a good voice cloning and TTS tool, so I was reading some reviews and opinions. I decided to try XTTS v2 via its Hugging Face Space demo and found its voice cloning low quality. Then I tried Spark TTS, and its voice cloning isn't up to the mark either. Then I tried Chatterbox. It's far better than those two: not perfect, but not low quality like them. I'm confused why people say XTTS v2 and Spark TTS have amazing cloning capability. Then I tried Sesame TTS after seeing the hype but sadly couldn't install it on Windows. Then I tried StyleTTS 2 and couldn't get it to install on Windows either. Some TTS installations are so complicated. :/ Can anyone recommend a good, stable voice cloning and TTS tool without such a complicated installation for Windows? I have a 12 GB Nvidia RTX 3060, 16 GB RAM, and an i5 12th gen.
2025-06-20T10:04:27
https://www.reddit.com/r/LocalLLaMA/comments/1lg084k/good_stable_voice_cloning_and_tts_with_not_much/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lg084k
false
null
t3_1lg084k
/r/LocalLLaMA/comments/1lg084k/good_stable_voice_cloning_and_tts_with_not_much/
false
false
self
3
null
Performance expectations question (Devstral)
2
I started playing around last weekend with some local models (Devstral Small Q4) on my dev laptop, and while I got some useful results, it took hours. For the given task of refactoring some Vue components from the Options to the Composition API this was fine, as I just left it to get on with it while I did other things. However, if it's to be more generally useful I'm going to need at least a 10x performance boost, 50-100x ideally. I'm 90% sure the performance is limited by hardware, but before spending $$$$ on something better I wanted to check the problem doesn't reside between keyboard and chair ;) The laptop is powerful but wasn't built with AI in mind: Kubuntu running on an Intel i7-10870H, 64 GB RAM, Nvidia 3070 with 8 GB VRAM. Initial runs on CPU only got 1.85 TPS, and when I updated the GPU drivers and got 16 layers offloaded to the GPU it went up to 2.25 TPS (this very small increase is what's making me wonder if I'm missing something else in the software setup, as I'd have expected a 40% GPU offload to give a bigger boost). The model is Devstral Small Q4 with 16k context and 1k batch size. I followed a few tuning guides but they didn't make much difference. The question then is: am I getting the performance you'd expect out of my hardware, or have I done something wrong? (A rough sanity-check script is sketched at the end of this post.) As a follow-up: what would be a cost-effective build for running local models and getting a reasonable TPS rate with a single user? I'm thinking of a couple of options at the moment. One is to sling a 5090 into my gaming rig and use that for AI as well (this was built for performance but is from the 1080 era, so it's likely too old and would need more than the card upgrading). The second option is to build a new machine with a decent spec and room to grow: a motherboard (suggestions?) which can support 2-4 cards without being hyper expensive, and perhaps a second-hand 3090 to start. Am I best going with an AMD or Intel processor? The initial budget would be about the cost of a 5090, so £2-3k. Is it realistic to get a system that'll do ~50 TPS on Devstral for that?
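In case it helps answer the first question, here's a minimal sketch of how one could measure raw generation TPS in isolation with llama-cpp-python, assuming the same settings as above. The model path is a placeholder and this isn't necessarily the stack I've been testing with.

    # Rough TPS sanity check with llama-cpp-python.
    import time
    from llama_cpp import Llama

    llm = Llama(
        model_path="devstral-small-q4_k_m.gguf",  # placeholder filename
        n_gpu_layers=16,   # layers offloaded to the 8 GB 3070
        n_ctx=16384,       # 16k context
        n_batch=1024,      # 1k batch size
        verbose=False,
    )

    start = time.perf_counter()
    out = llm("Refactor this Vue component from the Options API to the Composition API:\n",
              max_tokens=256)
    elapsed = time.perf_counter() - start
    generated = out["usage"]["completion_tokens"]
    print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.2f} TPS")

Varying n_gpu_layers and re-running would at least show whether the offload is doing anything at all.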
2025-06-20T10:30:14
https://www.reddit.com/r/LocalLLaMA/comments/1lg0mqq/performance_expectations_question_devstral/
_-Carnage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lg0mqq
false
null
t3_1lg0mqq
/r/LocalLLaMA/comments/1lg0mqq/performance_expectations_question_devstral/
false
false
self
2
null