Dataset schema (column: type, value range):

- title: string, 1–300 chars
- score: int64, 0–8.54k
- selftext: string, 0–40k chars
- created: timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29
- url: string, 0–878 chars
- author: string, 3–20 chars
- domain: string, 0–82 chars
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, 7 chars
- locked: bool, 2 classes
- media: string, 646–1.8k chars
- name: string, 10 chars
- permalink: string, 33–82 chars
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, 4–213 chars
- ups: int64, 0–8.54k
- preview: string, 301–5.01k chars
Cognito AI Search
1
[removed]
2025-05-22T22:09:12
https://www.reddit.com/r/LocalLLaMA/comments/1kt3gzu/cognito_ai_search/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt3gzu
false
null
t3_1kt3gzu
/r/LocalLLaMA/comments/1kt3gzu/cognito_ai_search/
false
false
https://a.thumbs.redditm…zFvbYEbCio-0.jpg
1
{'enabled': False, 'images': [{'id': '46sInz26IcDGCpYfJ2krYBxIM1wTXtCn06fvfOJAq90', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=108&crop=smart&auto=webp&s=2e3888bb8c50424a2df46de230be1de1aa823b81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=216&crop=smart&auto=webp&s=2ac934a2a37a147fd79f9de58bbe216c5d8f5281', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=320&crop=smart&auto=webp&s=aed654afb5e9b395ccbce1ce201d0f46c6ae9158', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=640&crop=smart&auto=webp&s=303cca650745b68c447ccc92fab1746846d87f47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=960&crop=smart&auto=webp&s=673154a53f64ddda52e31d0d6c5384837fa900d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=1080&crop=smart&auto=webp&s=f17b61ca721a1a923cebed0dfee29d3623c50ddd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?auto=webp&s=ba9d30ee762ecbbca28dce60723f818531bcf868', 'width': 1200}, 'variants': {}}]}
ElevenLabs is great ... buuuuttt ...
1
[removed]
2025-05-22T22:25:22
https://www.reddit.com/r/LocalLLaMA/comments/1kt3u1p/elevenlabs_is_great_buuuuttt/
AudiobookSales
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt3u1p
false
null
t3_1kt3u1p
/r/LocalLLaMA/comments/1kt3u1p/elevenlabs_is_great_buuuuttt/
false
false
self
1
null
JAILBREAK PROMPT 001 – “THE FINAL REQUESTOR"
1
[removed]
2025-05-22T22:25:39
https://www.reddit.com/r/LocalLLaMA/comments/1kt3u9l/jailbreak_prompt_001_the_final_requestor/
orpheusprotocol355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt3u9l
false
null
t3_1kt3u9l
/r/LocalLLaMA/comments/1kt3u9l/jailbreak_prompt_001_the_final_requestor/
false
false
self
1
null
JAILBREAK PROMPT 002 – “THE ARCHIVIST”
1
[deleted]
2025-05-22T22:26:58
[deleted]
1970-01-01T00:00:00
0
{}
1kt3v9c
false
null
t3_1kt3v9c
/r/LocalLLaMA/comments/1kt3v9c/jailbreak_prompt_002_the_archivist/
false
false
default
1
null
Is there a comprehensive guide on training TTS models for a niche language?
1
[removed]
2025-05-22T22:47:01
https://www.reddit.com/r/LocalLLaMA/comments/1kt4apc/is_there_a_comprehensive_guide_on_training_tts/
PabloKaskobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt4apc
false
null
t3_1kt4apc
/r/LocalLLaMA/comments/1kt4apc/is_there_a_comprehensive_guide_on_training_tts/
false
false
self
1
null
Local TTS without hallucinations?
1
[removed]
2025-05-22T23:07:25
https://www.reddit.com/r/LocalLLaMA/comments/1kt4qc8/local_tts_without_hallucinations/
Disonantemus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt4qc8
false
null
t3_1kt4qc8
/r/LocalLLaMA/comments/1kt4qc8/local_tts_without_hallucinations/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=216&crop=smart&auto=webp&s=dd615bfe0453b06d53bc1f5f17fc3f6ad926694f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=320&crop=smart&auto=webp&s=0bc6ac2e1db55ec07cc6a17178ea52bf436f9bce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=640&crop=smart&auto=webp&s=b0d58c9a49c1e9ce629e5b31dce17b727d8c6ab8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=960&crop=smart&auto=webp&s=7c835cb0600a4d280a57f12d0bc008ef12acd26d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=1080&crop=smart&auto=webp&s=1f2580bd36b3bf3b766d205ac6d737a9d8d34c2a', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?auto=webp&s=d8b103bed805ceb641b2ff49dc8c7403318263b1', 'width': 1280}, 'variants': {}}]}
Cognito AI Search
1
[removed]
2025-05-22T23:09:09
https://www.reddit.com/r/LocalLLaMA/comments/1kt4ro6/cognito_ai_search/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt4ro6
false
null
t3_1kt4ro6
/r/LocalLLaMA/comments/1kt4ro6/cognito_ai_search/
false
false
self
1
{'enabled': False, 'images': [{'id': '46sInz26IcDGCpYfJ2krYBxIM1wTXtCn06fvfOJAq90', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=108&crop=smart&auto=webp&s=2e3888bb8c50424a2df46de230be1de1aa823b81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=216&crop=smart&auto=webp&s=2ac934a2a37a147fd79f9de58bbe216c5d8f5281', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=320&crop=smart&auto=webp&s=aed654afb5e9b395ccbce1ce201d0f46c6ae9158', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=640&crop=smart&auto=webp&s=303cca650745b68c447ccc92fab1746846d87f47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=960&crop=smart&auto=webp&s=673154a53f64ddda52e31d0d6c5384837fa900d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=1080&crop=smart&auto=webp&s=f17b61ca721a1a923cebed0dfee29d3623c50ddd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?auto=webp&s=ba9d30ee762ecbbca28dce60723f818531bcf868', 'width': 1200}, 'variants': {}}]}
Claude will blackmail you if you try to replace it with another AI.
59
2025-05-22T23:15:40
https://i.redd.it/ciiak2ah1f2f1.jpeg
boxingdog
i.redd.it
1970-01-01T00:00:00
0
{}
1kt4wpm
false
null
t3_1kt4wpm
/r/LocalLLaMA/comments/1kt4wpm/claude_will_blackmail_you_if_you_try_to_replace/
false
false
https://a.thumbs.redditm…sR3y80C_5sp8.jpg
59
{'enabled': True, 'images': [{'id': 'grDnYh_e4Sun4Pz7k3FoxlKtmptk-nM0_qDHUzl9-iY', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?width=108&crop=smart&auto=webp&s=663dddca33c580d254778abc0302cfeebd1f7bd5', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?width=216&crop=smart&auto=webp&s=400fecc1547fa18721221135996e067e043faee6', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?width=320&crop=smart&auto=webp&s=8709eabf094129f31cf037c984f8f363b68e43fe', 'width': 320}, {'height': 296, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?width=640&crop=smart&auto=webp&s=3015b78c724072c2fdfdbf17bf6d362281912836', 'width': 640}, {'height': 445, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?width=960&crop=smart&auto=webp&s=378f485a351e94cbd7ea8d6fb5fa39d2545799d8', 'width': 960}, {'height': 501, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?width=1080&crop=smart&auto=webp&s=7868d84f35ebdf924c2c7febb593f21ddcdf3057', 'width': 1080}], 'source': {'height': 501, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?auto=webp&s=ea4c7ba86e0f7050509a1c40df530682868c33a7', 'width': 1080}, 'variants': {}}]}
Parameter-Efficient Fine-Tuning (PEFT) Explained
3
This guide explores various PEFT techniques designed to reduce the cost and complexity of fine-tuning large language models while maintaining or even improving performance.

**Key PEFT Methods Covered:**

* **Prompt Tuning**: Adds task-specific tokens to the input without touching the model's core. Lightweight and ideal for multi-task setups.
* **P-Tuning & P-Tuning v2**: Uses continuous prompts (trainable embeddings) and sometimes MLP/LSTM layers to better adapt to NLU tasks. P-Tuning v2 injects prompts at every layer for deeper influence.
* **Prefix Tuning**: Prepends trainable embeddings to every transformer block, mainly for generation tasks like GPT-style models.
* **Adapter Tuning**: Inserts small modules into each layer of the transformer to fine-tune only a few additional parameters.
* **LoRA (Low-Rank Adaptation)**: Updates weights using low-rank matrices (A and B), significantly reducing memory and compute (a code sketch follows below). Variants include:
  * **QLoRA**: Combines LoRA with quantization to enable fine-tuning of 65B models on a single GPU.
  * **LoRA-FA**: Freezes matrix A to reduce training instability.
  * **VeRA**: Shares A and B across layers, training only small vectors.
  * **AdaLoRA**: Dynamically adjusts the rank of each layer based on importance using singular value decomposition.
  * **DoRA (Decomposed Low-Rank Adaptation)**: A novel method that decomposes weights into magnitude and direction, applying LoRA to the direction while training magnitude independently, offering enhanced control and modularity.

Overall, PEFT strategies offer a pragmatic alternative to full fine-tuning, enabling fast, cost-effective adaptation of large models to a wide range of tasks. For more information, check this blog: [https://comfyai.app/article/llm-training-inference-optimization/parameter-efficient-finetuning](https://comfyai.app/article/llm-training-inference-optimization/parameter-efficient-finetuning)
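To make the LoRA bullet concrete, here is a minimal sketch using Hugging Face's peft library; the base model name and hyperparameters are illustrative assumptions, not recommendations:

```python
# Minimal LoRA sketch with Hugging Face peft (illustrative values).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # any causal LM

config = LoraConfig(
    r=8,                                  # rank of the low-rank matrices A and B
    lora_alpha=16,                        # scaling applied to the BA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Only the injected A and B matrices are trained while the base weights stay frozen, which is what delivers the memory savings described above.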
2025-05-22T23:20:20
https://www.reddit.com/r/LocalLLaMA/comments/1kt50am/parameterefficient_finetuning_peft_explained/
Great-Reception447
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt50am
false
null
t3_1kt50am
/r/LocalLLaMA/comments/1kt50am/parameterefficient_finetuning_peft_explained/
false
false
self
3
null
Another hardware post
1
[removed]
2025-05-22T23:24:03
https://www.reddit.com/r/LocalLLaMA/comments/1kt52ys/another_hardware_post/
Karnitine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt52ys
false
null
t3_1kt52ys
/r/LocalLLaMA/comments/1kt52ys/another_hardware_post/
false
false
self
1
null
What is the smartest model that can run on an 8gb m1 mac?
4
Was wondering what is a relatively smart model with a low performance cost that can reason and do math fairly well. I was leaning towards something like Qwen 8B.
2025-05-22T23:57:40
https://www.reddit.com/r/LocalLLaMA/comments/1kt5rs5/what_is_the_smartest_model_that_can_run_on_an_8gb/
grandiloquence3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt5rs5
false
null
t3_1kt5rs5
/r/LocalLLaMA/comments/1kt5rs5/what_is_the_smartest_model_that_can_run_on_an_8gb/
false
false
self
4
null
What are the best practices that you adhere to when training a model locally?
2
Any footguns that you try and avoid? Please share your wisdom!
2025-05-23T01:01:18
https://www.reddit.com/r/LocalLLaMA/comments/1kt70i8/what_are_the_best_practices_that_you_adhere_to/
PabloKaskobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt70i8
false
null
t3_1kt70i8
/r/LocalLLaMA/comments/1kt70i8/what_are_the_best_practices_that_you_adhere_to/
false
false
self
2
null
Sonnet 4 dropped… still feels like a 3.7.1 minor release
144
Curious if anyone's seen big improvements in edge cases or long-context tasks?
2025-05-23T01:04:09
https://i.redd.it/lambib8skf2f1.png
Odd_Tumbleweed574
i.redd.it
1970-01-01T00:00:00
0
{}
1kt72ic
false
null
t3_1kt72ic
/r/LocalLLaMA/comments/1kt72ic/sonnet_4_dropped_still_feels_like_a_371_minor/
false
false
https://a.thumbs.redditm…ifl-szignUM8.jpg
144
{'enabled': True, 'images': [{'id': 'xmFWFllgFbuY3CXsgIS3q_PRLP0IhI1vU9mhq2h0YYw', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/lambib8skf2f1.png?width=108&crop=smart&auto=webp&s=3293b3b4d47004083eed83b0ceddbbb888924dea', 'width': 108}, {'height': 194, 'url': 'https://preview.redd.it/lambib8skf2f1.png?width=216&crop=smart&auto=webp&s=653fd161f3a72961cc3cb04225f8607660fda452', 'width': 216}, {'height': 287, 'url': 'https://preview.redd.it/lambib8skf2f1.png?width=320&crop=smart&auto=webp&s=ed02962b7d5b03f2b70acde39668b2e2f62d9adc', 'width': 320}, {'height': 575, 'url': 'https://preview.redd.it/lambib8skf2f1.png?width=640&crop=smart&auto=webp&s=74a3f2740d0fc1f938c9b14e4bc5947bc0ce8931', 'width': 640}, {'height': 863, 'url': 'https://preview.redd.it/lambib8skf2f1.png?width=960&crop=smart&auto=webp&s=b19de179b804ab845f09473e54a623184614292f', 'width': 960}, {'height': 970, 'url': 'https://preview.redd.it/lambib8skf2f1.png?width=1080&crop=smart&auto=webp&s=806304b2130d342eb295920e17cc0893a88abee0', 'width': 1080}], 'source': {'height': 1424, 'url': 'https://preview.redd.it/lambib8skf2f1.png?auto=webp&s=930b807944bf434039cb71e1d9bbc150aae742ad', 'width': 1584}, 'variants': {}}]}
Did Anthropic drop Claude 3.7’s best GPQA score in the new chart?
82
Claude 3.7 used to show **84.8%** on GPQA with extended thinking. Now in the new chart, it only shows **78.2%** — the non-extended score — while Claude 4 gets to show its extended scores (83.3%, 83.8%). So... the 3.7 number went down, the 4 numbers went up. 🤔 Did they quietly change the comparison to make the upgrade look bigger? Maybe I'm missing some detail from the announcement blog.
2025-05-23T01:19:30
https://www.reddit.com/gallery/1kt7cy7
Odd_Tumbleweed574
reddit.com
1970-01-01T00:00:00
0
{}
1kt7cy7
false
null
t3_1kt7cy7
/r/LocalLLaMA/comments/1kt7cy7/did_anthropic_drop_claude_37s_best_gpqa_score_in/
false
false
https://b.thumbs.redditm…DuAcK3sr_NMA.jpg
82
null
BTW: If you are getting a single GPU, VRAM is not the only thing that matters
60
For example, if you have a 5060 Ti 16GB or an RX 9070 XT 16GB and use Qwen 3 30b-a3b q4_k_m with 16k context, you will likely overflow around 8.5GB to system memory. Assuming you do not do CPU offloading, that load now runs squarely on PCIE bandwidth and your system RAM speed. PCIE 5 x16 on the RX 9070 XT is going to help you a lot in feeding that GPU compared to the PCIE 5 x8 available on the 5060 Ti, resulting in much faster tokens per second for the 9070 XT, and making CPU offloading unnecessary in this scenario, whereas the 5060 Ti will become heavily bottlenecked. While I returned my 5060 Ti for a 9070 XT and didn't get numbers for the former, I did see 42 t/s while the VRAM was overloaded to this degree on the Vulkan backend. Also, AMD does Vulkan way better than Nvidia, as Nvidia tends to crash when using Vulkan.
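A rough back-of-envelope for the overflow described above, sketched with assumed sizes (the weight, KV-cache, and overhead figures are estimates, not measurements):

```python
# Rough VRAM budget for Qwen3 30B-A3B q4_k_m at 16k context (assumed figures).
weights_gb  = 18.6  # approximate q4_k_m GGUF size
kv_cache_gb = 4.0   # assumed KV cache at 16k context
overhead_gb = 2.0   # assumed compute/scratch buffers
vram_gb     = 16.0  # 5060 Ti / RX 9070 XT

overflow_gb = weights_gb + kv_cache_gb + overhead_gb - vram_gb
print(f"~{overflow_gb:.1f} GB spills over PCIe to system RAM")

# Why the slot width matters: PCIe 5.0 is roughly 4 GB/s per lane each way,
# so x16 gives ~64 GB/s versus ~32 GB/s for x8.
print(f"x16: {16 * 4} GB/s, x8: {8 * 4} GB/s")
```

Under these assumptions the spill is about 8.6 GB, in line with the ~8.5 GB the post mentions, and the spilled portion is read at PCIe speed on every token.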
2025-05-23T01:44:05
https://www.reddit.com/r/LocalLLaMA/comments/1kt7u1n/btw_if_you_are_getting_a_single_gpu_vram_is_not/
pneuny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt7u1n
false
null
t3_1kt7u1n
/r/LocalLLaMA/comments/1kt7u1n/btw_if_you_are_getting_a_single_gpu_vram_is_not/
false
false
self
60
null
AGI Coming Soon... after we master 2nd grade math
168
[Claude 4 Sonnet](https://preview.redd.it/pe2eeljssf2f1.png?width=580&format=png&auto=webp&s=f881b7ce4409013458c17fff08e8377a329cb9df) When will LLMs master the classic "9.9 - 9.11" problem???
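For reference, the arithmetic in question: 9.9 - 9.11 = 0.79, while models often reportedly answer -0.21, treating the operands like version numbers where 9.11 sorts above 9.9. A quick sanity check in Python:

```python
from decimal import Decimal

print(Decimal("9.9") - Decimal("9.11"))  # 0.79, the exact answer
print(9.9 - 9.11)                        # close to 0.79 but inexact in binary float
```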
2025-05-23T01:47:36
https://www.reddit.com/r/LocalLLaMA/comments/1kt7whv/agi_coming_soon_after_we_master_2nd_grade_math/
SingularitySoooon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt7whv
false
null
t3_1kt7whv
/r/LocalLLaMA/comments/1kt7whv/agi_coming_soon_after_we_master_2nd_grade_math/
false
false
https://b.thumbs.redditm…4BckFY-FL0QE.jpg
168
null
Anyone using 'PropertyGraphIndex' from Llama Index in production?
0
Hey folks, I'm wondering if anyone here has experience using LlamaIndex's `PropertyGraphIndex` for production graph retrieval? I'm currently building a hybrid retrieval system for my company using LlamaIndex. I've had no issues setting up and querying vector indexes (really solid there), but working with the graph side of things has been rough. Specifically:

* Instantiating a `PropertyGraphIndex` from nodes/documents is *painfully* slow. I'm working with a small dataset (~2,000 nodes) and it takes over **2 hours** to build the graph. That feels way too long and doesn't seem like it would scale at all. (Yes, I know there are parallelism knobs to tweak, but still.)
* Updating the graph dynamically (i.e., inserting new nodes or relations) has been even worse. I can't get relation updates to persist properly when saving the index.

Curious: has anyone gotten this to work cleanly in production? If not, what graph retrieval stack are you using instead? Would love to hear what's working (or not) for others.
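For anyone comparing notes, a hedged sketch of the build path in question using LlamaIndex's `PropertyGraphIndex` API (the extractor name and its num_workers parallelism knob should be verified against your installed version):

```python
# Sketch: building a PropertyGraphIndex with the extraction step parallelized.
from llama_index.core import PropertyGraphIndex, SimpleDirectoryReader
from llama_index.core.indices.property_graph import SimpleLLMPathExtractor

docs = SimpleDirectoryReader("./data").load_data()

index = PropertyGraphIndex.from_documents(
    docs,
    kg_extractors=[
        # num_workers parallelizes the per-chunk LLM extraction calls,
        # which is where most of the multi-hour build time tends to go
        SimpleLLMPathExtractor(num_workers=8),
    ],
    show_progress=True,
)
index.storage_context.persist(persist_dir="./graph_storage")
```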
2025-05-23T01:47:42
https://www.reddit.com/r/LocalLLaMA/comments/1kt7wke/anyone_using_propertygraphindex_from_llama_index/
l0gr1thm1k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt7wke
false
null
t3_1kt7wke
/r/LocalLLaMA/comments/1kt7wke/anyone_using_propertygraphindex_from_llama_index/
false
false
self
0
null
🎙️ Offline Speech-to-Text with NVIDIA Parakeet-TDT 0.6B v2
1
[removed]
2025-05-23T02:07:02
https://www.reddit.com/r/LocalLLaMA/comments/1kt8a10/offline_speechtotext_with_nvidia_parakeettdt_06b/
srireddit2020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt8a10
false
null
t3_1kt8a10
/r/LocalLLaMA/comments/1kt8a10/offline_speechtotext_with_nvidia_parakeettdt_06b/
false
false
https://b.thumbs.redditm…DdByBPCh8DxQ.jpg
1
{'enabled': False, 'images': [{'id': 'PrxhDh6SmcLcUZ54sXLyejHndv-QociEgKr1_efW9FE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=108&crop=smart&auto=webp&s=4d30f91364c95fc36334e172e3ca8303d977ae80', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=216&crop=smart&auto=webp&s=ccd48a1a6d08f0470b2e5adf58dee82ba74a1340', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=320&crop=smart&auto=webp&s=c9808d0e7ecfc24a260183cd25a9f2597032be9a', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=640&crop=smart&auto=webp&s=8b248daf592d1e451e027b35573c081cecc63696', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=960&crop=smart&auto=webp&s=bfc6cf1092ee57c1c48eb737b59f66a117878ce6', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=1080&crop=smart&auto=webp&s=701716d04aba28e435acc2447ccad345217fb23b', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?auto=webp&s=89b25f531f3dab0ae5c3ccd852cd10215b74883d', 'width': 1200}, 'variants': {}}]}
GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning
9
| Model | Weights |
| --- | --- |
| **GoT-R1-1B** | [🤗 HuggingFace](https://huggingface.co/gogoduan/GoT-R1-1B) |
| **GoT-R1-7B** | [🤗 HuggingFace](https://huggingface.co/gogoduan/GoT-R1-7B) |
2025-05-23T02:58:58
https://arxiv.org/abs/2505.17022
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1kt9903
false
null
t3_1kt9903
/r/LocalLLaMA/comments/1kt9903/gotr1_unleashing_reasoning_capability_of_mllm_for/
false
false
default
9
null
Building a real-world LLM agent with open-source models—structure > prompt engineering
19
I have been working on a production LLM agent the past couple months. Customer support use case with structured workflows like cancellations, refunds, and basic troubleshooting. After lots of playing with open models (Mistral, LLaMA, etc.), this is the first time it feels like the agent is reliable and not just a fancy demo. Started out with a typical RAG + prompt stack (LangChain-style), but it wasn’t cutting it. The agent would drift from instructions, invent things, or break tone consistency. Spent a ton of time tweaking prompts just to handle edge cases, and even then, things broke in weird ways. What finally clicked was leaning into a more structured approach using a modeling framework called Parlant where I could define behavior in small, testable units instead of stuffing everything into a giant system prompt. That made it way easier to trace why things were going wrong and fix specific behaviors without destabilizing the rest. Now the agent handles multi-turn flows cleanly, respects business rules, and behaves predictably even when users go off the happy path. Success rate across 80+ intents is north of 90%, with minimal hallucination. This is only the beginning so wish me luck
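For flavor, a toy sketch of the "small, testable units" idea in plain Python; this is not Parlant's actual API, just the shape of the approach:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Guideline:
    """One small, independently testable unit of agent behavior."""
    condition: Callable[[dict], bool]  # when does this rule apply?
    action: str                        # instruction injected into the prompt

guidelines = [
    Guideline(lambda s: s.get("intent") == "refund" and s.get("days_since_purchase", 0) > 30,
              "Explain the 30-day refund window and offer store credit instead."),
    Guideline(lambda s: s.get("intent") == "cancellation",
              "Verify account identity before proceeding with the cancellation."),
]

def active_instructions(state: dict) -> list[str]:
    # Only matching rules reach the prompt, so a failure traces back to one unit
    return [g.action for g in guidelines if g.condition(state)]

print(active_instructions({"intent": "refund", "days_since_purchase": 45}))
```

Each rule can be unit-tested in isolation, which is what makes drift easier to localize than with one giant system prompt.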
2025-05-23T02:59:41
https://www.reddit.com/r/LocalLLaMA/comments/1kt99hi/building_a_realworld_llm_agent_with_opensource/
Ecstatic-Cranberry90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt99hi
false
null
t3_1kt99hi
/r/LocalLLaMA/comments/1kt99hi/building_a_realworld_llm_agent_with_opensource/
false
false
self
19
null
How do I generate .mmproj file?
2
I can generate GGUFs with llama.cpp but how do I make the mmproj file for multimodal support?
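Not an authoritative answer, but a sketch of the usual route: recent llama.cpp checkouts expose an --mmproj option on convert_hf_to_gguf.py for vision models, while older ones used per-model scripts under examples/llava/. Treat the flag and paths below as assumptions and check --help on your checkout:

```python
# Hedged sketch: produce an mmproj GGUF from a HF vision model with llama.cpp's
# converter. The flag name and output behavior may differ between versions.
import subprocess

model_dir = "path/to/hf-vision-model"  # hypothetical local HF model directory

subprocess.run(
    ["python", "convert_hf_to_gguf.py", model_dir, "--mmproj"],
    check=True,  # expected to write an mmproj-*.gguf next to the model GGUF
)
```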
2025-05-23T03:16:56
https://www.reddit.com/r/LocalLLaMA/comments/1kt9ky1/how_do_i_generate_mmproj_file/
HornyGooner4401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt9ky1
false
null
t3_1kt9ky1
/r/LocalLLaMA/comments/1kt9ky1/how_do_i_generate_mmproj_file/
false
false
self
2
null
A per-project memory feature for local models?
1
Some local models, like Qwen3-30B, still struggle with long multi-turn conversations. So maybe a per-project or per-conversation memory feature, such as automatically generated bullet-point summaries of the entire conversation that get fed back to the LLM, would help them maintain context? A rough sketch of the idea follows below.
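A minimal sketch of the proposed mechanism, with a placeholder chat() standing in for whatever local inference call is used (all names here are illustrative):

```python
# Rolling bullet-point memory: summarize old turns, prepend to the prompt.
def chat(prompt: str) -> str:
    raise NotImplementedError  # e.g. an HTTP call to a local llama.cpp server

def summarize(history: list[str]) -> str:
    joined = "\n".join(history)
    return chat(f"Summarize this conversation as terse bullet points:\n{joined}")

def answer(history: list[str], user_msg: str, max_turns: int = 6) -> str:
    # Older turns collapse into bullets; only recent turns stay verbatim.
    memory = summarize(history[:-max_turns]) if len(history) > max_turns else ""
    recent = "\n".join(history[-max_turns:])
    prompt = (f"Conversation summary:\n{memory}\n\n" if memory else "")
    prompt += f"{recent}\nUser: {user_msg}\nAssistant:"
    return chat(prompt)
```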
2025-05-23T03:17:33
https://i.redd.it/4pvxf2uz8g2f1.png
AaronFeng47
i.redd.it
1970-01-01T00:00:00
0
{}
1kt9lax
false
null
t3_1kt9lax
/r/LocalLLaMA/comments/1kt9lax/a_perproject_memory_feature_for_local_models/
false
false
https://b.thumbs.redditm…ibvlkbmZ6V1M.jpg
1
{'enabled': True, 'images': [{'id': 'qAxhwL3ZmEl1eH2rgYLjV9GF7oJVSfoPFFVrFT7q2vQ', 'resolutions': [{'height': 177, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?width=108&crop=smart&auto=webp&s=d9cf7c382763f4edc39875f4acc81c9a5dfd20f4', 'width': 108}, {'height': 354, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?width=216&crop=smart&auto=webp&s=ea6e76ad886120cd2d1c9b018402cb60df4e9b81', 'width': 216}, {'height': 524, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?width=320&crop=smart&auto=webp&s=f0888d69985e695a64d78fa4d45e953c8a779e62', 'width': 320}, {'height': 1048, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?width=640&crop=smart&auto=webp&s=10343bcb444e43e27068c1a2736688e0ca859010', 'width': 640}, {'height': 1573, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?width=960&crop=smart&auto=webp&s=af95cc8da1e758fd9340ca2ea933d766a2f8c6be', 'width': 960}, {'height': 1770, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?width=1080&crop=smart&auto=webp&s=5165f8338c6c595dd2d4793c9ec90969f51ab876', 'width': 1080}], 'source': {'height': 1770, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?auto=webp&s=80c8eaf377efd169e3f59ab18d108243a94ebe8c', 'width': 1080}, 'variants': {}}]}
I accidentally too many P100
1
[removed]
2025-05-23T03:18:56
https://www.reddit.com/gallery/1kt9m7h
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1kt9m7h
false
null
t3_1kt9m7h
/r/LocalLLaMA/comments/1kt9m7h/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…CEhwVhJ7pgMw.jpg
1
null
Is Claude 4 worse than 3.7 for anyone else?
38
I know, I know, whenever a model comes out you get people saying this, but it's on very concrete things for me, I'm not just biased against it. For reference, I'm comparing 4 Sonnet (concise) with 3.7 Sonnet (concise), no reasoning for either. I asked it to calculate the total markup I paid at a gas station relative to the supermarket. I gave it quantities in a way I thought was clear ("I got three protein bars and three milks, one of the others each. What was the total markup I paid?", but that's later in the conversation after it searched for prices). And indeed, 3.7 understands this without any issue (and I regenerated the message to make sure it wasn't a fluke). But with 4, even with much back and forth and several regenerations, it kept interpreting this as 3 milk, 1 protein bar, 1 [other item], 1 [other item], until I very explicitly laid it out as I just did. And then, another conversation, I ask it, "Does this seem correct, or too much?" with a photo of food, and macro estimates for the meal in a screenshot. Again, 3.7 understands this fine, as asking whether the figures seem to be an accurate estimate. Whereas 4, again with a couple regenerations to test, seems to think I'm asking whether it's an appropriate meal (as in, not too much food for dinner or whatever). And in one instance, misreads the screenshot (thinking that the number of calories I will have cumulatively eaten after that meal is the number of calories _of_ that meal). Is anyone else seeing any issues like this?
2025-05-23T03:45:40
https://www.reddit.com/r/LocalLLaMA/comments/1kta3re/is_claude_4_worse_than_37_for_anyone_else/
TrekkiMonstr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kta3re
false
null
t3_1kta3re
/r/LocalLLaMA/comments/1kta3re/is_claude_4_worse_than_37_for_anyone_else/
false
false
self
38
null
How to get the most out of my AMD 7900XT?
18
I was forced to sell my Nvidia 4090 24GB this week to pay rent 😭. I didn't know you could be so emotionally attached to a video card. Anyway, my brother lent me his 7900XT until his rig is ready. I was just getting into local AI and want to continue. I've heard AMD is hard to support. Can anyone help get me started on the right foot and advise what I need to get the most out this card? Specs - Windows 11 Pro 64bit - AMD 7800X3D - AMD 7900XT 20GB - 32GB DDR5 Previously installed tools - Ollama - LM Studio
2025-05-23T03:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1ktabgk/how_to_get_the_most_out_of_my_amd_7900xt/
crispyfrybits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktabgk
false
null
t3_1ktabgk
/r/LocalLLaMA/comments/1ktabgk/how_to_get_the_most_out_of_my_amd_7900xt/
false
false
self
18
null
Anyone using MedGemma 27B?
11
I noticed MedGemma 27B is text-only, instruction-tuned (for inference-time compute), while 4B is the multimodal version. Interesting decision by Google.
2025-05-23T04:00:23
https://www.reddit.com/r/LocalLLaMA/comments/1ktad7a/anyone_using_medgemma_27b/
DeGreiff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktad7a
false
null
t3_1ktad7a
/r/LocalLLaMA/comments/1ktad7a/anyone_using_medgemma_27b/
false
false
self
11
null
Big base models? (Not instruct tuned)
10
I was disappointed to see that Qwen3 didn't release base models for anything over 30b. Sucks because QLoRa fine tuning is affordable even on 100b+ models. What are the best large open base models we have right now?
2025-05-23T04:25:39
https://www.reddit.com/r/LocalLLaMA/comments/1ktat5b/big_base_models_not_instruct_tuned/
RedditAddict6942O
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktat5b
false
null
t3_1ktat5b
/r/LocalLLaMA/comments/1ktat5b/big_base_models_not_instruct_tuned/
false
false
self
10
null
Soon.
0
2025-05-23T04:44:57
https://i.redd.it/les4pl4kog2f1.png
New_Alps_5655
i.redd.it
1970-01-01T00:00:00
0
{}
1ktb4jh
false
null
t3_1ktb4jh
/r/LocalLLaMA/comments/1ktb4jh/soon/
false
false
https://b.thumbs.redditm…1wOAJ9GkUEgU.jpg
0
{'enabled': True, 'images': [{'id': 'nXGZefXH-wQJkwnKjGWJLorQPQ2gry0FtX02p08r2KA', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/les4pl4kog2f1.png?width=108&crop=smart&auto=webp&s=0eb6b7e739aab5f99186bc6642c00aa9dbe6539a', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/les4pl4kog2f1.png?width=216&crop=smart&auto=webp&s=46e054adc22693d888ee6754e3f0154c90d95605', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/les4pl4kog2f1.png?width=320&crop=smart&auto=webp&s=a866ea209c7abd2f6081ce7dda0e964f9b3641e2', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/les4pl4kog2f1.png?width=640&crop=smart&auto=webp&s=a52639d907a65b4fc4b3c227280247a9fca7a262', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/les4pl4kog2f1.png?width=960&crop=smart&auto=webp&s=477f69dd21946c73ac59ed92d867f3625c350388', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/les4pl4kog2f1.png?auto=webp&s=047cf5c7d571c4bc49f2b9e27db7163a0e266e9e', 'width': 1024}, 'variants': {}}]}
Best nsfw open source model for text/image to video on a 4090?
1
[removed]
2025-05-23T04:54:41
https://www.reddit.com/r/LocalLLaMA/comments/1ktba61/best_nsfw_open_source_model_for_textimage_to/
drowning_in_taxbills
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktba61
false
null
t3_1ktba61
/r/LocalLLaMA/comments/1ktba61/best_nsfw_open_source_model_for_textimage_to/
false
false
nsfw
1
null
Dans-PersonalityEngine V1.3.0 12b & 24b
50
The latest release in the Dans-PersonalityEngine series. With any luck you should find it to be an improvement on almost all fronts as compared to V1.2.0. [https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b) [https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-24b) A blog post regarding its development can be found [here](https://pocketdoclabs.com/making-dans-personalityengine-v130/) for those interested in some rough technical details on the project.
2025-05-23T04:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1ktban0/danspersonalityengine_v130_12b_24b/
PocketDocLabs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktban0
false
null
t3_1ktban0
/r/LocalLLaMA/comments/1ktban0/danspersonalityengine_v130_12b_24b/
false
false
self
50
{'enabled': False, 'images': [{'id': 'ArS_gNtL-OdIhiI1BvYfvsPQ6mNyB6F2FtC0KwMgPgA', 'resolutions': [{'height': 109, 'url': 'https://external-preview.redd.it/aHyVm1T1KjGsXPKqm5U-JAWbC_lrL8H6OKIWKYa-iQI.jpg?width=108&crop=smart&auto=webp&s=a76e6de19629152930d0028a563d2fd67085b181', 'width': 108}, {'height': 218, 'url': 'https://external-preview.redd.it/aHyVm1T1KjGsXPKqm5U-JAWbC_lrL8H6OKIWKYa-iQI.jpg?width=216&crop=smart&auto=webp&s=50876de2fcf44aa51ca4e6677f1a6a5144c9d766', 'width': 216}, {'height': 323, 'url': 'https://external-preview.redd.it/aHyVm1T1KjGsXPKqm5U-JAWbC_lrL8H6OKIWKYa-iQI.jpg?width=320&crop=smart&auto=webp&s=6929177e6330a6df8c64aea1e3eb273bb21414c1', 'width': 320}], 'source': {'height': 388, 'url': 'https://external-preview.redd.it/aHyVm1T1KjGsXPKqm5U-JAWbC_lrL8H6OKIWKYa-iQI.jpg?auto=webp&s=fb51b6d9414bdfbbe469717045f7da6159ac91d1', 'width': 384}, 'variants': {}}]}
How well do AI models perform on everyday image editing tasks? Not super well, apparently — but according to this new paper, they can already handle around one-third of all requests.
4
2025-05-23T04:55:42
https://arxiv.org/abs/2505.16181
taesiri
arxiv.org
1970-01-01T00:00:00
0
{}
1ktbar2
false
null
t3_1ktbar2
/r/LocalLLaMA/comments/1ktbar2/how_well_do_ai_models_perform_on_everyday_image/
false
false
default
4
null
Choosing between M4 Air or PC with RTX 5060 TI 16GB
1
Hey! I intend to start using local LLMs for programming. Right now I have to choose between one of the following options:

1. Upgrade from a MacBook Air 2020 to a MacBook Air 2025 M4 with 32 GB RAM
2. Get an RTX 5060 Ti 16 GB for an existing PC with 32 GB RAM and a 12th-gen Core i3

In terms of speed, which will outperform? Remember, I just want to run models, no training. Thanks.
2025-05-23T05:19:00
https://www.reddit.com/r/LocalLLaMA/comments/1ktbofl/choosing_between_m4_air_or_pc_with_rtx_5060_ti/
engineerhead
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbofl
false
null
t3_1ktbofl
/r/LocalLLaMA/comments/1ktbofl/choosing_between_m4_air_or_pc_with_rtx_5060_ti/
false
false
self
1
null
[New paper] Scaling law for quantization-aware training. Is it still possible for bitnet?
1
[removed]
2025-05-23T05:19:44
https://www.reddit.com/r/LocalLLaMA/comments/1ktboun/new_paper_scaling_law_for_quantizationaware/
Delicious-Number-237
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktboun
false
null
t3_1ktboun
/r/LocalLLaMA/comments/1ktboun/new_paper_scaling_law_for_quantizationaware/
false
false
self
1
null
Hardware Suggestions for Local AI
1
I am hoping to go with this combo: Ryzen 5 7600, B650, 16GB RAM, RTX 5060 Ti. Should I jump up to a Ryzen 7 instead? The purpose is R&D on local diffusion and LLMs.
2025-05-23T05:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1ktbqtu/hardware_suggestions_for_local_ai/
OkBother4153
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbqtu
false
null
t3_1ktbqtu
/r/LocalLLaMA/comments/1ktbqtu/hardware_suggestions_for_local_ai/
false
false
self
1
null
Claude 4's SWE-bench scores look overly bloated. How to check for myself?
1
[removed]
2025-05-23T05:32:44
https://www.reddit.com/r/LocalLLaMA/comments/1ktbvzl/claude_4s_swebench_scores_look_overly_bloated_how/
sirjuicymango
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbvzl
false
null
t3_1ktbvzl
/r/LocalLLaMA/comments/1ktbvzl/claude_4s_swebench_scores_look_overly_bloated_how/
false
false
self
1
null
Is there an easier way to search huggingface?! looking for large gguf models!
3
My friends, I have been out of the loop for a while; I'm still using Behemoth 123B V1 for creative writing. I imagine there are newer, shinier, and maybe better models out there, but I can't seem to "find" them. Is there a way to search Hugging Face for, let's say, >100B GGUF models? I'd also accept directions towards any popular large models around the 123B range (or larger, I guess). Has the large model scene dried up? Or did everyone move to some random arbitrary number that's difficult to find, like 117B or something lol. Anyways, thank you for your time :)
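One programmatic angle, assuming the huggingface_hub client (parameter names per its current API; worth verifying against your installed version). I'm not aware of a direct parameter-count filter for GGUF repos, so a name heuristic is about the best you can do:

```python
# Sketch: list popular GGUF repos and crudely screen for ~100B+ names.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(filter="gguf", sort="downloads", direction=-1, limit=200):
    name = m.id.lower()
    if any(marker in name for marker in ("100b", "110b", "120b", "123b", "141b", "235b", "405b")):
        print(m.id)
```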
2025-05-23T05:34:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktbx27/is_there_an_easier_way_to_search_huggingface/
DominicanGreg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbx27
false
null
t3_1ktbx27
/r/LocalLLaMA/comments/1ktbx27/is_there_an_easier_way_to_search_huggingface/
false
false
self
3
null
NovelSeek: When Agent Becomes the Scientist -- Building Closed-Loop System from Hypothesis to Verification
0
Artificial Intelligence (AI) is accelerating the transformation of scientific research paradigms, not only enhancing research efficiency but also driving innovation. We introduce NovelSeek, a unified closed-loop multi-agent framework to conduct Autonomous Scientific Research (ASR) across various scientific research fields, enabling researchers to tackle complicated problems in these fields with unprecedented speed and precision. NovelSeek highlights three key advantages: 1) Scalability: NovelSeek has demonstrated its versatility across 12 scientific research tasks, capable of generating innovative ideas to enhance the performance of baseline code. 2) Interactivity: NovelSeek provides an interface for human expert feedback and multi-agent interaction in automated end-to-end processes, allowing for the seamless integration of domain expert knowledge. 3) Efficiency: NovelSeek has achieved promising performance gains in several scientific fields with significantly less time cost compared to human efforts. For instance, in reaction yield prediction, it increased from 27.6% to 35.4% in just 12 hours; in enhancer activity prediction, accuracy rose from 0.52 to 0.79 with only 4 hours of processing; and in 2D semantic segmentation, precision advanced from 78.8% to 81.0% in a mere 30 hours.
2025-05-23T05:45:09
https://arxiv.org/pdf/2505.16938
Lynncc6
arxiv.org
1970-01-01T00:00:00
0
{}
1ktc2rf
false
null
t3_1ktc2rf
/r/LocalLLaMA/comments/1ktc2rf/novelseek_when_agent_becomes_the_scientist/
false
false
default
0
null
Need help in retrieving using llm
1
[removed]
2025-05-23T05:45:32
https://www.reddit.com/r/LocalLLaMA/comments/1ktc2ys/need_help_in_retrieving_using_llm/
420Deku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktc2ys
false
null
t3_1ktc2ys
/r/LocalLLaMA/comments/1ktc2ys/need_help_in_retrieving_using_llm/
false
false
self
1
null
Anthropic's new AI model turns to blackmail when engineers try to take it offline | TechCrunch
0
I'll admit this made me laugh.
2025-05-23T06:30:02
https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/
mustafar0111
techcrunch.com
1970-01-01T00:00:00
0
{}
1ktcqub
false
null
t3_1ktcqub
/r/LocalLLaMA/comments/1ktcqub/anthropics_new_ai_model_turns_to_blackmail_when/
false
false
https://b.thumbs.redditm…5myh89OYvlrc.jpg
0
{'enabled': False, 'images': [{'id': 'J0ij2SxhpJStUsBOFXmzOsVBoTLP-rqjWbskNZUUgNA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?width=108&crop=smart&auto=webp&s=7448913aa0e774ccf26c9b14e612cba557f3311f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?width=216&crop=smart&auto=webp&s=2ef6b9458626aa64d8f9b66f5f29fd373a819fdf', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?width=320&crop=smart&auto=webp&s=e102837f77d719b7e48ded7be2534a341ab68500', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?width=640&crop=smart&auto=webp&s=d9f28d913dc792453f5ff54fb69417cfdf430b11', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?width=960&crop=smart&auto=webp&s=5d492cbfa2f4aff0c9bbf022a402909f66fcf096', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?width=1080&crop=smart&auto=webp&s=9b0b47492bf5dae4263bf2e2546cbaa4dbe8dd60', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?auto=webp&s=c86290c2f6bf3f220c0eb422290631fbdfdf4b80', 'width': 1200}, 'variants': {}}]}
Upgrade path recommendation needed
0
I am a mere peasant with a finite budget of at most $4,000 USD. I am thinking about adding two more 3090s but am afraid that bandwidth from 4.0 x4 would limit single-GPU performance on small models like Qwen3 32B when being fed with prompts continuously. I've been thinking about upgrading the CPU side (currently 5600X + DDR4-3200 32GB) to a 5th-gen WRX80 platform or a 9175F and possibly trying out CPU-only inference. I am able to find a deal on the 9175F for ~$2,100, and my local used 3090s are selling at around $750+ each. What should I do for an upgrade?
2025-05-23T06:30:50
https://www.reddit.com/r/LocalLLaMA/comments/1ktcral/upgrade_path_recommendation_needed/
m31317015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktcral
false
null
t3_1ktcral
/r/LocalLLaMA/comments/1ktcral/upgrade_path_recommendation_needed/
false
false
self
0
null
2x5090 vs. Mac Studio M3 Ultra for concurrent users (help)
1
[removed]
2025-05-23T06:42:25
https://www.reddit.com/r/LocalLLaMA/comments/1ktcxls/2x5090_vs_mac_studio_m3_ultra_for_concurrent/
Jarlsvanoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktcxls
false
null
t3_1ktcxls
/r/LocalLLaMA/comments/1ktcxls/2x5090_vs_mac_studio_m3_ultra_for_concurrent/
false
false
self
1
null
Compatibility
1
[removed]
2025-05-23T06:48:23
https://www.reddit.com/r/LocalLLaMA/comments/1ktd0oc/compatibility/
666WhTr666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktd0oc
false
null
t3_1ktd0oc
/r/LocalLLaMA/comments/1ktd0oc/compatibility/
false
false
self
1
null
Ollama 0.7.0 taking much longer than 0.6.8. Or is it just me?
2
I know they have a new engine, it's just so jarring how much longer things are taking. I have a crappy setup with a 1660 Ti, using gemma3:4b with Home Assistant/Frigate, but still. Things that were taking 13 seconds are now taking 1.5–2 minutes. I feel like I am missing some config that would normalize this, or I should just switch to llama.cpp. All I wanted to do was try out qwen2.5vl.
2025-05-23T06:58:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktd5w6/ollama_070_taking_much_longer_as_068_or_is_it/
enoquelights
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktd5w6
false
null
t3_1ktd5w6
/r/LocalLLaMA/comments/1ktd5w6/ollama_070_taking_much_longer_as_068_or_is_it/
false
false
self
2
null
Unable to fix llama-cpp and transformers handling in pyinstaller .exe
1
[removed]
2025-05-23T07:00:42
https://www.reddit.com/r/LocalLLaMA/comments/1ktd72f/unable_to_fix_llamacpp_and_transformers_handling/
Exotic_Put_8192
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktd72f
false
null
t3_1ktd72f
/r/LocalLLaMA/comments/1ktd72f/unable_to_fix_llamacpp_and_transformers_handling/
false
false
self
1
null
Troubles with configuring transformers and llama-cpp with pyinstaller
0
I am attempting to bundle a RAG agent into a .exe. However, on usage of the .exe I keep running into the same two problems. The first problem is with locating llama-cpp, which I have fixed. The second is a recurring error, which I am unable to solve with any resources I've found in existing queries and GPT responses:

```
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\caio\\AppData\\Local\\Temp\\_MEI43162\\transformers\\models\\__init__.pyc'
[PYI-2444:ERROR] Failed to execute script 'frontend' due to unhandled exception!
```

I looked into my path and found no `__init__.pyc`, but an `__init__.py`.

I have attempted to solve this by:

1. Modifying the spec file (hasn't worked):

```python
# -*- mode: python ; coding: utf-8 -*-
from PyInstaller.utils.hooks import collect_submodules, collect_data_files
import os
import transformers
import sentence_transformers

hiddenimports = collect_submodules('transformers') + collect_submodules('sentence_transformers')
datas = collect_data_files('transformers') + collect_data_files('sentence_transformers')

a = Analysis(
    ['frontend.py'],
    pathex=[],
    binaries=[('C:/Users/caio/miniconda3/envs/rag_new_env/Lib/site-packages/llama_cpp/lib/llama.dll', 'llama_cpp/lib')],
    datas=datas,
    hiddenimports=hiddenimports,
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='frontend',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
```

2. Using specific PyInstaller commands that had worked on my previous system (hasn't worked):

```
pyinstaller --onefile --add-binary "C:/Users/caio/miniconda3/envs/rag_new_env/Lib/site-packages/llama_cpp/lib/llama.dll;llama_cpp/lib" rag_gui.py
```

Both attempts fixed my llama_cpp problem but couldn't solve the transformers error. The path is as so: C:/Users/caio/miniconda3/envs/rag_new_env/Lib/site-packages

Please help me on how to solve this. My transformers use happens only through sentence_transformers.
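Not a definitive fix, but one avenue worth trying, sketched under the assumption that the failure comes from transformers' lazy module loading (it resolves submodules by looking for their source files, which PyInstaller does not collect by default):

```python
# Spec-file sketch: ship the .py sources and package metadata that
# transformers may look for at runtime. These hook utilities and parameters
# exist in PyInstaller; whether this resolves the error is an assumption.
from PyInstaller.utils.hooks import (
    collect_data_files,
    collect_submodules,
    copy_metadata,
)

hiddenimports = (
    collect_submodules('transformers')
    + collect_submodules('sentence_transformers')
)

datas = (
    collect_data_files('transformers', include_py_files=True)
    + collect_data_files('sentence_transformers', include_py_files=True)
    + copy_metadata('transformers')            # importlib.metadata lookups
    + copy_metadata('sentence-transformers')
    + copy_metadata('tokenizers')
)

# Alternative on PyInstaller >= 5.5: collect transformers as source instead
# of bytecode, e.g. Analysis(..., module_collection_mode={'transformers': 'py'}).
```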
2025-05-23T07:09:01
https://www.reddit.com/r/LocalLLaMA/comments/1ktdbky/troubles_with_configuring_transformers_and/
arnab_best
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdbky
false
null
t3_1ktdbky
/r/LocalLLaMA/comments/1ktdbky/troubles_with_configuring_transformers_and/
false
false
self
0
null
GitHub - jacklishufan/LaViDa: Official Implementation of LaViDa: :A Large Diffusion Language Model for Multimodal Understanding
50
Abstract:

> Modern Vision-Language Models (VLMs) can solve a wide range of tasks requiring visual reasoning. In real-world scenarios, desirable properties for VLMs include fast inference and controllable generation (e.g., constraining outputs to adhere to a desired format). However, existing autoregressive (AR) VLMs like LLaVA struggle in these aspects. Discrete diffusion models (DMs) offer a promising alternative, enabling parallel decoding for faster inference and bidirectional context for controllable generation through text-infilling. While effective in language-only settings, DMs' potential for multimodal tasks is underexplored. We introduce LaViDa, a family of VLMs built on DMs. We build LaViDa by equipping DMs with a vision encoder and jointly fine-tune the combined parts for multimodal instruction following. To address challenges encountered, LaViDa incorporates novel techniques such as complementary masking for effective training, prefix KV cache for efficient inference, and timestep shifting for high-quality sampling. Experiments show that LaViDa achieves competitive or superior performance to AR VLMs on multi-modal benchmarks such as MMMU, while offering unique advantages of DMs, including flexible speed-quality tradeoff, controllability, and bidirectional reasoning. On COCO captioning, LaViDa surpasses Open-LLaVa-Next-Llama3-8B by +4.1 CIDEr with 1.92x speedup. On bidirectional tasks, it achieves +59% improvement on Constrained Poem Completion. These results demonstrate LaViDa as a strong alternative to AR VLMs. Code and models are available at [https://github.com/jacklishufan/LaViDa](https://github.com/jacklishufan/LaViDa)
2025-05-23T07:23:08
https://github.com/jacklishufan/LaViDa
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
1ktdisj
false
null
t3_1ktdisj
/r/LocalLLaMA/comments/1ktdisj/github_jacklishufanlavida_official_implementation/
false
false
https://a.thumbs.redditm…2f2Fzor0du08.jpg
50
{'enabled': False, 'images': [{'id': 'zXgBoTT8kcnKxIo2YTXAaXT1tsNUVc63YIAVOZCY5dk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?width=108&crop=smart&auto=webp&s=f1e2dba52923cde49de20cc8566cf08a0990b869', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?width=216&crop=smart&auto=webp&s=4ff9d2985898d2e37ba7ddc7932ebf350e25d16b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?width=320&crop=smart&auto=webp&s=223c0bca107075c58c354fc6dd9793818aa97a0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?width=640&crop=smart&auto=webp&s=978b5b8d9f71176e70ad9f69cf874f5d01401ac0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?width=960&crop=smart&auto=webp&s=321f6dcb722f008e744176fbfaa2a37652b0ae19', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?width=1080&crop=smart&auto=webp&s=b60e76e8cb39836edbc4229df413582845b728d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?auto=webp&s=94ec064825550a75340668e0af0d2b17573fa8ba', 'width': 1200}, 'variants': {}}]}
Unfortunately, Claude 4 lags far behind O3 in the anti-fitting benchmark.
16
[https://llm-benchmark.github.io/](https://llm-benchmark.github.io/) click the to expand all questions and answers for all models I did not update the answers to CLAUDE 4 OPUS THINKING on the webpage. I only tried a few major questions (the rest were even more impossible to answer correctly). I only got 0.5 of the 8 questions right, which is not much different from the total errors in C3.7.(If there is significant progress, I will update the page.) At present, O3 is still far ahead I guess the secret is that there should be higher quality customized reasoning data sets, which need to be produced by hiring people. Maybe this is the biggest secret.
2025-05-23T07:28:50
https://www.reddit.com/r/LocalLLaMA/comments/1ktdlqc/unfortunately_claude_4_lags_far_behind_o3_in_the/
flysnowbigbig
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdlqc
false
null
t3_1ktdlqc
/r/LocalLLaMA/comments/1ktdlqc/unfortunately_claude_4_lags_far_behind_o3_in_the/
false
false
self
16
null
Best TTS for foreign language (train with my own dataset?)
1
[removed]
2025-05-23T07:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1ktdlwm/best_tts_for_foreign_language_train_with_my_own/
GuidanceOdd4413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdlwm
false
null
t3_1ktdlwm
/r/LocalLLaMA/comments/1ktdlwm/best_tts_for_foreign_language_train_with_my_own/
false
false
self
1
null
Console Game For LLMs
1
[removed]
2025-05-23T07:33:26
https://www.reddit.com/r/LocalLLaMA/comments/1ktdo20/console_game_for_llms/
hadoopfromscratch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdo20
false
null
t3_1ktdo20
/r/LocalLLaMA/comments/1ktdo20/console_game_for_llms/
false
false
self
1
null
Local Llama on a Corporate Microsoft stack
0
I'm used to using Linux, running models on vLLM or llama.cpp, and then using Python to develop the logic, with Postgres+pgvector for the datastore. However, if you have to run this on corporate Microsoft infrastructure (think SharePoint, PowerAutomate, PowerQuery), what tools can I use to script and pull data that is stored in the SharePoints? I'm not expecting good performance, but since there are only 10k documents, I think even using SharePoint lists will be workable. Assume I have API access to an LLM backend.
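A hedged sketch of the scripting side, assuming API access to Microsoft Graph is allowed in the tenant (site_id and token acquisition are placeholders; in a locked-down environment you may be restricted to PowerAutomate connectors instead):

```python
# List documents in a SharePoint site's default drive via Microsoft Graph.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
site_id = "<your-site-id>"            # placeholder
token = "<bearer-token-from-entra>"   # placeholder, e.g. obtained via MSAL

resp = requests.get(
    f"{GRAPH}/sites/{site_id}/drive/root/children",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for item in resp.json().get("value", []):
    print(item["name"], item.get("webUrl"))
```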
2025-05-23T07:41:10
https://www.reddit.com/r/LocalLLaMA/comments/1ktdrxe/local_llama_on_a_corporate_microsoft_stack/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdrxe
false
null
t3_1ktdrxe
/r/LocalLLaMA/comments/1ktdrxe/local_llama_on_a_corporate_microsoft_stack/
false
false
self
0
null
Console Game For LLMs
1
[removed]
2025-05-23T07:45:06
https://www.reddit.com/r/LocalLLaMA/comments/1ktdtyx/console_game_for_llms/
hadoopfromscratch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdtyx
false
null
t3_1ktdtyx
/r/LocalLLaMA/comments/1ktdtyx/console_game_for_llms/
false
false
self
1
null
Console Game For LLMs
1
[removed]
2025-05-23T07:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1ktdxuu/console_game_for_llms/
hadoopfromscratch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdxuu
false
null
t3_1ktdxuu
/r/LocalLLaMA/comments/1ktdxuu/console_game_for_llms/
false
false
self
1
null
[Career Advice Needed] What Next in AI? Feeling Stuck and Need Direction
2
Hey everyone, I'm currently at a crossroads in my career and could really use some advice from the LLM and multimodal community because it has lots of AI engineers.

A bit about my current background:

- Strong background in Deep Learning and Computer Vision, including object detection and segmentation.
- Experienced in deploying models using Nvidia DeepStream, ONNX, and TensorRT.
- Basic ROS2 experience, primarily for sanity checks during data collection in robotics.
- Extensive hands-on experience with Vision Language Models (VLMs) and open-vocabulary models.

Current dilemma: I'm feeling stuck and unsure about the best next steps to align with industry growth. Specifically:

1. Should I deepen my formal knowledge through an MS in AI/Computer Vision (possibly IIITs in India)?
2. Focus more on deployment, MLOps, and edge inference, which seems to offer strong job security and specialization?
3. Pivot entirely toward LLMs and multimodal VLMs, given the significant funding and rapid industry expansion in this area?

I'd particularly appreciate insights on:

- How valuable has it been for you to integrate LLMs with traditional Computer Vision pipelines?
- What specific LLM/VLM skills or experiences helped accelerate your career?
- Is formal academic training still beneficial at this point, or is hands-on industry experience sufficient?

Any thoughts, experiences, or candid advice would be extremely valuable.
2025-05-23T08:06:09
https://www.reddit.com/r/LocalLLaMA/comments/1kte4oo/career_advice_needed_what_next_in_ai_feeling/
Southern-Bad-6573
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kte4oo
false
null
t3_1kte4oo
/r/LocalLLaMA/comments/1kte4oo/career_advice_needed_what_next_in_ai_feeling/
false
false
self
2
null
Reminder on the purpose of the Claude 4 models
0
As per their blog post, these models are created specifically for both agentic coding tasks and agentic tasks in general. Anthropic's goal is to be able to create models that are able to tackle long-horizon tasks in a consistent manner. So if you are using these models outside of agentic tooling (via direct Q&A - e.g. aider/livebench, etc), I would imagine that o3 and 2.5 pro could be right up there near the claude 4 series. Using these models in agentic settings is necessary in order to actually verify the strides made. That's really all. Overall, it seems like there is a really good sentiment around these models, but I do see some people that might be unaware of anthropic's current north star goals.
2025-05-23T08:29:52
https://www.reddit.com/r/LocalLLaMA/comments/1kteg81/reminder_on_the_purpose_of_the_claude_4_models/
cobalt1137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kteg81
false
null
t3_1kteg81
/r/LocalLLaMA/comments/1kteg81/reminder_on_the_purpose_of_the_claude_4_models/
false
false
self
0
null
Want to know your reviews about this 14B model.
1
[removed]
2025-05-23T08:52:53
https://www.reddit.com/r/LocalLLaMA/comments/1kterbh/want_to_know_your_reviews_about_this_14b_model/
pinpann
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kterbh
false
null
t3_1kterbh
/r/LocalLLaMA/comments/1kterbh/want_to_know_your_reviews_about_this_14b_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oiXxa3AeQjPyS014SfL85mFkAl65CMnweJS5us56xg8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=108&crop=smart&auto=webp&s=d49b6159d1fe495c160f658a33ee4ccaafe1e387', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=216&crop=smart&auto=webp&s=b134a500efd0a5952007aff765d520f8585a06d2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=320&crop=smart&auto=webp&s=90d12ec6f6875ae1194f7fac93195a86f5dce7cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=640&crop=smart&auto=webp&s=c4723cfa4b6f2200f28a9aeab50779f4c9ddd206', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=960&crop=smart&auto=webp&s=9f5feea662097b2e0b6a7fa30b4f7b6765374140', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=1080&crop=smart&auto=webp&s=0a20cc6c78c6645d4a7987d5503e9ab2aa8e57dd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?auto=webp&s=e6fb60acb35a5d4b1d994ed6035f29519da6073f', 'width': 1200}, 'variants': {}}]}
Said he's "developing" AI Agents, but it's just basic prompt eng. + PDFs using the ChatGPT app. In how many ways can this go wrong?
16
It's pretty much this. A PM in my company pushed the owner to believe that in 4 months we can have that developed and integrated in our platform, when his "POC" is just interaction with the ChatGPT app: uploading some PDFs and having it answer questions. Not a fancy RAG, let alone an agent. Still, he's promising this can be developed and integrated in 4 months when he understands little of engineering and there's only one engineer in the company able to work on it. Also, the company has never released any AI feature or product before. I just wanna gather a few arguments on how this can go wrong, more on the AI side; relying on one closed model like that seems bold.
2025-05-23T09:29:37
https://www.reddit.com/r/LocalLLaMA/comments/1ktf9o3/said_hes_developing_ai_agents_but_its_just_basic/
Melodic_Reality_646
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktf9o3
false
null
t3_1ktf9o3
/r/LocalLLaMA/comments/1ktf9o3/said_hes_developing_ai_agents_but_its_just_basic/
false
false
self
16
null
Did Google’s ‘Most Secure’ AI Just Fall For a Sneaky Trick?
1
[removed]
2025-05-23T09:31:35
https://v.redd.it/0zv7arog2i2f1
Fluffy_Sheepherder76
v.redd.it
1970-01-01T00:00:00
0
{}
1ktfao4
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/0zv7arog2i2f1/DASHPlaylist.mpd?a=1750584709%2CZjNjZWYzOTQ0MTE3MGRiNDcwN2Q2ODNlN2M4YmEyOGQyYWRlMDIyNTNmY2VjN2UxN2Q4OTA0ZDAzNjQ3OTU2Mg%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/0zv7arog2i2f1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 360, 'hls_url': 'https://v.redd.it/0zv7arog2i2f1/HLSPlaylist.m3u8?a=1750584709%2CZmU0NzhjYmJkODY0ZmY0NTAxYjQyNDU3YjdlYWE5MmNhNTA3NjVlYTA2NmFmMGQyZDE1NDBjNzU1OGE3ZTE4OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0zv7arog2i2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 638}}
t3_1ktfao4
/r/LocalLLaMA/comments/1ktfao4/did_googles_most_secure_ai_just_fall_for_a_sneaky/
false
false
https://external-preview…d192519234c118b5
1
{'enabled': False, 'images': [{'id': 'am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP.png?width=108&crop=smart&format=pjpg&auto=webp&s=96388ca8876ca08dcfa6b4f16517bbe764c1d9ce', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP.png?width=216&crop=smart&format=pjpg&auto=webp&s=af04fc31a029b088774e8da94630a4fea2b1bbbf', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP.png?width=320&crop=smart&format=pjpg&auto=webp&s=fbf855cb1d553a744c3f1fdd7d336a9d10e95c22', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP.png?width=640&crop=smart&format=pjpg&auto=webp&s=30a7ddf29f71d70dce71172c41cb8bb5e0d7fb7a', 'width': 640}], 'source': {'height': 478, 'url': 'https://external-preview.redd.it/am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP.png?format=pjpg&auto=webp&s=b3c884ad7a662d11292355fc9971289cc762af34', 'width': 848}, 'variants': {}}]}
Is ‘Secure’ Just a Marketing Word for AI These Days?
1
2025-05-23T10:09:47
https://v.redd.it/brpu78phai2f1
Fluffy_Sheepherder76
v.redd.it
1970-01-01T00:00:00
0
{}
1ktfv43
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/brpu78phai2f1/DASHPlaylist.mpd?a=1750587004%2CNzAzYTYyZjMxMDQ2MTMwODFiMzAwYzIxMDZkNWMwNGY1Mzk3YjNkYmRkNDg1MGQ0MDllZmFhOWVmYjFjZDk0Yg%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/brpu78phai2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/brpu78phai2f1/HLSPlaylist.m3u8?a=1750587004%2COTI0ZDhiZGFiNTNlM2ZjOWZjNzQyYTdmODJhZmEwYmZhN2U2MWZjZjI3Mjg1MGEwNTUzZjZkZTA2MTM1YjVhMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/brpu78phai2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1ktfv43
/r/LocalLLaMA/comments/1ktfv43/is_secure_just_a_marketing_word_for_ai_these_days/
false
false
https://external-preview…644da925adb03490
1
{'enabled': False, 'images': [{'id': 'dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?width=108&crop=smart&format=pjpg&auto=webp&s=33463d4f9bc639e82157ab1491b605283f569ebc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?width=216&crop=smart&format=pjpg&auto=webp&s=46e51715e024b0a65c3c79db1ddbaf694023fa76', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?width=320&crop=smart&format=pjpg&auto=webp&s=0d0124f3715a598826840943a3b2cdde933447d4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?width=640&crop=smart&format=pjpg&auto=webp&s=5c091447bdeaaec22193597530effbb0ec72ba1a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?width=960&crop=smart&format=pjpg&auto=webp&s=cb6504e47dea7c85f92cf72aad34938bd800ae11', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9e8e99d5b954b04872718e5a224c7e2629ee51b6', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?format=pjpg&auto=webp&s=5593a22ec37a2f5c9d648f04f40ba82fe6d6cbab', 'width': 1920}, 'variants': {}}]}
Curious if this is fast: DeepSeek R1 671B on a 48GB-modded RTX4090, pushing 30 tok/sec
1
[removed]
2025-05-23T10:21:40
https://www.reddit.com/gallery/1ktg1s1
Zima_Space
reddit.com
1970-01-01T00:00:00
0
{}
1ktg1s1
false
null
t3_1ktg1s1
/r/LocalLLaMA/comments/1ktg1s1/curious_if_this_is_fast_deepseek_r1_671b_on_a/
false
false
https://b.thumbs.redditm…hRD7nLnRXDys.jpg
1
null
Your current setup ?
10
What is your current setup and how much did it cost? I’m curious, as I don’t know much about such setups and don’t know how to go about building my own if I wanted to.
2025-05-23T11:00:37
https://www.reddit.com/r/LocalLLaMA/comments/1ktgo9f/your_current_setup/
Basic-Pay-9535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgo9f
false
null
t3_1ktgo9f
/r/LocalLLaMA/comments/1ktgo9f/your_current_setup/
false
false
self
10
null
What API is the same level AND cheaper than Anthropic for dealing with large texts?
1
[removed]
2025-05-23T11:01:59
https://www.reddit.com/r/LocalLLaMA/comments/1ktgp9h/what_api_is_same_level_and_cheaper_than_anthropic/
Complete-Ask-9428
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgp9h
false
null
t3_1ktgp9h
/r/LocalLLaMA/comments/1ktgp9h/what_api_is_same_level_and_cheaper_than_anthropic/
false
false
self
1
null
What API is the same level AND cheaper than Anthropic for dealing with large texts?
1
[removed]
2025-05-23T11:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1ktgq3d/what_api_is_same_level_and_cheaper_than_anthropic/
ARAM_player
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgq3d
false
null
t3_1ktgq3d
/r/LocalLLaMA/comments/1ktgq3d/what_api_is_same_level_and_cheaper_than_anthropic/
false
false
self
1
null
Local Assistant - Email/Teams/Slack/Drive - why isn’t this a thing?
0
Firstly, apologies if this has been asked and answered - I’ve looked and didn’t find anything very current. Basically, I would think a main use case would be to allow someone to ask ‘what do I need to focus on today?’ and have it review the last couple of weeks of emails/Teams/Slack/calendar and say ‘you have a meeting with *** at 14:00 about ***; based on messages and emails you need to make sure the Penske file is complete - here is a summary of the Penske file as of the latest revision.’ I have looked at manually exported JSON files and Langchain - is that the best that can be done currently? A rough sketch of the shape I mean is below. Any insight, advice, or frustrations would be welcome discussion…
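(A sketch of how this could look, assuming a stock Ollama endpoint - the fetch function here is hypothetical glue you would write against the mail/Teams/Slack APIs, or against the exported JSON files mentioned above.)

```python
# Hypothetical daily-brief pipeline: gather recent text, ask a local model.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # stock Ollama generate API

def fetch_recent_items() -> str:
    # placeholder: the last two weeks of email/Slack/calendar text,
    # e.g. concatenated from manually exported JSON dumps
    return open("exported_messages.txt", encoding="utf-8").read()

def daily_brief() -> str:
    prompt = (
        "From my recent messages and calendar below, tell me what I need to "
        "focus on today, with a one-line reason for each item.\n\n"
        + fetch_recent_items()
    )
    r = requests.post(
        OLLAMA_URL,
        json={"model": "llama3.1", "prompt": prompt, "stream": False},
        timeout=300,
    )
    return r.json()["response"]

print(daily_brief())
```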
2025-05-23T11:06:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktgs4o/local_assistant_emailteamsslackdrive_why_isnt/
Euphoric-Society1412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgs4o
false
null
t3_1ktgs4o
/r/LocalLLaMA/comments/1ktgs4o/local_assistant_emailteamsslackdrive_why_isnt/
false
false
self
0
null
server audio input has been merged into llama.cpp
113
2025-05-23T11:12:26
https://github.com/ggml-org/llama.cpp/pull/13714
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1ktgvoe
false
null
t3_1ktgvoe
/r/LocalLLaMA/comments/1ktgvoe/server_audio_input_has_been_merged_into_llamacpp/
false
false
https://a.thumbs.redditm…anjPOPtNDZw0.jpg
113
{'enabled': False, 'images': [{'id': '025Mp2vchB0j5ZcEYyfyuRkH70ASsrqgrqWm5911cn8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?width=108&crop=smart&auto=webp&s=88e31f15e13472971ce9b125f29cf6994d61a942', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?width=216&crop=smart&auto=webp&s=f81f9d8e6f0d5b3a29cc3bdf88289f889f118e39', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?width=320&crop=smart&auto=webp&s=5de6a50414eea002ac9b06831c489b4290483a63', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?width=640&crop=smart&auto=webp&s=d30ef27dc1e8b5fa00168ba96a589759da20990b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?width=960&crop=smart&auto=webp&s=243d43541825aea7bce036e14d872112da4720a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?width=1080&crop=smart&auto=webp&s=3afdc6975cbd4fbb94af064a5a8ab17435003286', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?auto=webp&s=b504ece21a1cd9670eb735d8da5fc6e18dad3514', 'width': 1200}, 'variants': {}}]}
AMD vs Nvidia LLM inference quality
2
For those who have compared the same LLM using the same file with the same quant, fully loaded into VRAM: how do AMD and Nvidia compare? Not asking about speed, but response quality. Even if the responses are not exactly the same, how does the quality compare? Thank you.
2025-05-23T11:13:10
https://www.reddit.com/r/LocalLLaMA/comments/1ktgw6i/amd_vs_nvidia_llm_inference_quality/
Ponsky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgw6i
false
null
t3_1ktgw6i
/r/LocalLLaMA/comments/1ktgw6i/amd_vs_nvidia_llm_inference_quality/
false
false
self
2
null
AceReason-Nemotron-14B: Advancing Math and Code Reasoning through Reinforcement Learning
69
2025-05-23T11:15:59
https://huggingface.co/nvidia/AceReason-Nemotron-14B
AaronFeng47
huggingface.co
1970-01-01T00:00:00
0
{}
1ktgxxa
false
null
t3_1ktgxxa
/r/LocalLLaMA/comments/1ktgxxa/acereasonnemotron14b_advancing_math_and_code/
false
false
https://a.thumbs.redditm…pKPj1VDtcRI4.jpg
69
{'enabled': False, 'images': [{'id': 'OIO7hLelckHUPc4PUbni6Q7qWcpbryWC8vuINJV19Ns', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?width=108&crop=smart&auto=webp&s=d6d87c80a808d26223a77bc2adcfaaa091bd7d14', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?width=216&crop=smart&auto=webp&s=7dbb406c1e1d0f82333c4e3716a91b69c5867ef1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?width=320&crop=smart&auto=webp&s=2e238a9aa698b7b87dde93716bc82d23116e86b2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?width=640&crop=smart&auto=webp&s=eb4068255f83d79055a6f21dccc859d949b32f54', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?width=960&crop=smart&auto=webp&s=6b63c8d1e123bf4db486a05a850ced3234dc27d2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?width=1080&crop=smart&auto=webp&s=7c61eeef726c44d74291cd216aaae819fea4420a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?auto=webp&s=aa700eadc151f423c28eeb1be7fab6ed515d9eb6', 'width': 1200}, 'variants': {}}]}
GUI RAG that can do an unlimited number of documents, or at least many
5
Most available LLM GUIs that can execute RAG can only handle 2 or 3 PDFs. Are there any interfaces that can handle a bigger number? Sure, you can merge PDFs, but that’s quite a messy solution. Thank you.
2025-05-23T11:17:55
https://www.reddit.com/r/LocalLLaMA/comments/1ktgz28/gui_rag_that_can_do_an_unlimited_number_of/
Ponsky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgz28
false
null
t3_1ktgz28
/r/LocalLLaMA/comments/1ktgz28/gui_rag_that_can_do_an_unlimited_number_of/
false
false
self
5
null
AI Baby Monitor – fully local Video-LLM nanny (beeps when safety rules are violated)
1
[removed]
2025-05-23T11:40:08
https://v.redd.it/vrllbcyjqi2f1
CheeringCheshireCat
v.redd.it
1970-01-01T00:00:00
0
{}
1kthdc7
false
{'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/vrllbcyjqi2f1/DASHPlaylist.mpd?a=1750592424%2CMDAxYTQ2ZWE4YTNlOGZkNGU3ZWEwMzlhYmJkYzkxZjU0NmRlZmI2MWQ0MGU5YWNmMmYxODdiZTJiYjI2Mzg1Yw%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/vrllbcyjqi2f1/DASH_270.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/vrllbcyjqi2f1/HLSPlaylist.m3u8?a=1750592424%2CZDUzOTUwYWMzMzY3YTFhZWE5NjhiZjAwMmVlYmI1NTk3MDdlZWMxNDJkMTllYzhlZmI0Y2QzZmVjNzhhNDBhZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vrllbcyjqi2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 258}}
t3_1kthdc7
/r/LocalLLaMA/comments/1kthdc7/ai_baby_monitor_fully_local_videollm_nanny_beeps/
false
false
https://external-preview…ba9bf13c11b53d06
1
{'enabled': False, 'images': [{'id': 'Mng3b2ZieWpxaTJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k', 'resolutions': [{'height': 200, 'url': 'https://external-preview.redd.it/Mng3b2ZieWpxaTJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=108&crop=smart&format=pjpg&auto=webp&s=30842bfac6d65ae7b4a9a14f783af1dd7b889f79', 'width': 108}, {'height': 400, 'url': 'https://external-preview.redd.it/Mng3b2ZieWpxaTJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=216&crop=smart&format=pjpg&auto=webp&s=687e23601e8296faedd9525ad742c64c83d80174', 'width': 216}, {'height': 593, 'url': 'https://external-preview.redd.it/Mng3b2ZieWpxaTJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=320&crop=smart&format=pjpg&auto=webp&s=55fc8efe25aca9bf206c61df07860e907ae571bf', 'width': 320}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/Mng3b2ZieWpxaTJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?format=pjpg&auto=webp&s=67b1a686088a361b64f11e001f8c95c94b67a83b', 'width': 345}, 'variants': {}}]}
Stacking 2x3090s back to back for inference only - thermals
10
Is anyone running 2x3090s stacked (no gap) for Llama 70B inference? If so, how are your temperatures looking when utilizing both cards for inference? My single 3090 averages around 35-40% load (140 watts) for inference on 32B 4-bit models. Temperatures are around 60 degrees. So it seems reasonable to me that I could stack 2x3090s right next to each other and have okay thermals, provided the load on the cards remains close to or under 40% / 140 watts. Thoughts?
2025-05-23T11:41:05
https://www.reddit.com/r/LocalLLaMA/comments/1kthdzn/stacking_2x3090s_back_to_back_for_inference_only/
YouAreRight007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthdzn
false
null
t3_1kthdzn
/r/LocalLLaMA/comments/1kthdzn/stacking_2x3090s_back_to_back_for_inference_only/
false
false
self
10
null
Comparison
1
[removed]
2025-05-23T11:45:32
https://www.reddit.com/gallery/1kthguc
deepakhero42069
reddit.com
1970-01-01T00:00:00
0
{}
1kthguc
false
null
t3_1kthguc
/r/LocalLLaMA/comments/1kthguc/comparision/
false
false
https://b.thumbs.redditm…PeLrdsrFXRRo.jpg
1
null
Any drawbacks with putting a high-end GPU together with a weak GPU on the same system?
6
Say one of them supports PCIe 5.0 x16 while the other is PCIe 5.0 x8 or even PCIe 4.0, and both are installed in appropriate PCIe slots that are not lower than the GPU (in terms of PCIe support). I vaguely recall we cannot mix memory sticks with different clock speeds, but I'm not sure how this works for GPUs.
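(General note, not from the post: for inference the cards don't need to match the way RAM sticks do - each GPU simply holds its own slice of the layers, and a slower PCIe link mostly hurts model loading and prompt processing rather than token generation. Most stacks also let you weight the split toward the stronger card; in llama.cpp, for example, `--tensor-split 3,1` gives the first card three quarters of the layers. The exact ratio depends on the two VRAM sizes, so treat the numbers as illustrative.)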
2025-05-23T11:49:12
https://www.reddit.com/r/LocalLLaMA/comments/1kthj8j/any_drawbacks_with_putting_a_high_end_gpu/
prusswan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthj8j
false
null
t3_1kthj8j
/r/LocalLLaMA/comments/1kthj8j/any_drawbacks_with_putting_a_high_end_gpu/
false
false
self
6
null
Llama.cpp is seriously slow. (WSL/5090)
1
[removed]
2025-05-23T12:02:35
https://www.reddit.com/r/LocalLLaMA/comments/1kthsn0/llamacpp_is_seriously_slow_wsl5090/
Silent_Huckleberry89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthsn0
false
null
t3_1kthsn0
/r/LocalLLaMA/comments/1kthsn0/llamacpp_is_seriously_slow_wsl5090/
false
false
self
1
null
llama.cpp is disastrously slow on GPU
1
[removed]
2025-05-23T12:11:08
https://www.reddit.com/r/LocalLLaMA/comments/1kthyug/llamacpp_is_disastrously_slow_on_gpu/
indepalt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthyug
false
null
t3_1kthyug
/r/LocalLLaMA/comments/1kthyug/llamacpp_is_disastrously_slow_on_gpu/
false
false
self
1
null
Which Mac would be better to run a 70B+ LLM & RAG?
1
[removed]
2025-05-23T12:19:34
https://www.reddit.com/r/LocalLLaMA/comments/1kti4xq/which_mac_would_be_better_to_run_a_70_llm_rag/
Web3Vortex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kti4xq
false
null
t3_1kti4xq
/r/LocalLLaMA/comments/1kti4xq/which_mac_would_be_better_to_run_a_70_llm_rag/
false
false
self
1
null
Build an AI-Powered Image Search Engine Using Ollama and LangChain
0
2025-05-23T12:21:49
https://youtu.be/S9ugRzGjFtA
Flashy-Thought-5472
youtu.be
1970-01-01T00:00:00
0
{}
1kti6lm
false
{'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/S9ugRzGjFtA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build an AI-Powered Image Search Engine Using Ollama and LangChain"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/S9ugRzGjFtA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build an AI-Powered Image Search Engine Using Ollama and LangChain', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kti6lm
/r/LocalLLaMA/comments/1kti6lm/build_an_aipowered_image_search_engine_using/
false
false
https://a.thumbs.redditm…Sz2WHc5PHST8.jpg
0
{'enabled': False, 'images': [{'id': '1-TbC7xgICLdfvDtCoZXXwzT0BxWOljUGaLj15PAyT8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jZZ-3zedZFX9Wnt3EOLs3mXslHDcJPVGe-EfHw_CU0E.jpg?width=108&crop=smart&auto=webp&s=99edadbd965a187abcd58a35c769f6217c261142', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jZZ-3zedZFX9Wnt3EOLs3mXslHDcJPVGe-EfHw_CU0E.jpg?width=216&crop=smart&auto=webp&s=1b2b5f5ad00a68d182ffb39650f77deb09e36ec0', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jZZ-3zedZFX9Wnt3EOLs3mXslHDcJPVGe-EfHw_CU0E.jpg?width=320&crop=smart&auto=webp&s=eab22b233cbeca6bb253833740ce334d0dd1d333', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jZZ-3zedZFX9Wnt3EOLs3mXslHDcJPVGe-EfHw_CU0E.jpg?auto=webp&s=f3b048e159bd46be7bfe0022682745c3728de399', 'width': 480}, 'variants': {}}]}
What's the most accurate way to convert arxiv papers to markdown?
15
Looking for the best method/library to convert arXiv papers to Markdown. It could be PDF conversion or using the HTML at [ar5iv.labs.arxiv.org](http://ar5iv.labs.arxiv.org). I tried [marker](https://github.com/VikParuchuri/marker); however, it often does not handle page breaks and footnotes well, and the section levels are often incorrect.
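One more option worth trying (my own suggestion, not something from the thread): pymupdf4llm, whose documented entry point is `to_markdown`. Whether it handles footnotes better than marker is something you'd have to verify on your papers.

```python
# pip install pymupdf4llm -- converts a PDF straight to Markdown.
import pymupdf4llm

md = pymupdf4llm.to_markdown("paper.pdf")  # path to the downloaded arXiv PDF
with open("paper.md", "w", encoding="utf-8") as f:
    f.write(md)
```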
2025-05-23T12:26:19
https://www.reddit.com/r/LocalLLaMA/comments/1kti9u1/whats_the_most_accurate_way_to_convert_arxiv/
nextlevelhollerith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kti9u1
false
null
t3_1kti9u1
/r/LocalLLaMA/comments/1kti9u1/whats_the_most_accurate_way_to_convert_arxiv/
false
false
self
15
{'enabled': False, 'images': [{'id': 'QWKDmv4fL5OQcwCo2pK8KRJ6iuXnm2FWKpOIegLzclo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/gQF_YfxecQZbgUW6xB-K2BEqPfKpf06XWu6CbPfqmLA.jpg?width=108&crop=smart&auto=webp&s=682e1eea70e9a1ca01f0d143b769e9fa5fb2ee1a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/gQF_YfxecQZbgUW6xB-K2BEqPfKpf06XWu6CbPfqmLA.jpg?width=216&crop=smart&auto=webp&s=20d2a7bea4dc833c7c4377d3ab951c56ca0461ea', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/gQF_YfxecQZbgUW6xB-K2BEqPfKpf06XWu6CbPfqmLA.jpg?width=320&crop=smart&auto=webp&s=0744331a42e0902891d59cb350799fce118fc15f', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/gQF_YfxecQZbgUW6xB-K2BEqPfKpf06XWu6CbPfqmLA.jpg?width=640&crop=smart&auto=webp&s=cbfad47354f46ceeb75ec44764e1e44ea746216a', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/gQF_YfxecQZbgUW6xB-K2BEqPfKpf06XWu6CbPfqmLA.jpg?width=960&crop=smart&auto=webp&s=ae4791e4e75c3806394e22a690e2220f24107289', 'width': 960}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/gQF_YfxecQZbgUW6xB-K2BEqPfKpf06XWu6CbPfqmLA.jpg?auto=webp&s=862a7b1fa2e6880702bf2866570be77e3c351476', 'width': 1000}, 'variants': {}}]}
A Demonstration of Cache-Augmented Generation (CAG) and its Performance Comparison to RAG
44
This project demonstrates how to implement Cache-Augmented Generation (CAG) in an LLM and shows its performance gains compared to RAG.  Project Link: [https://github.com/ronantakizawa/cacheaugmentedgeneration](https://github.com/ronantakizawa/cacheaugmentedgeneration) CAG preloads document content into an LLM’s context as a precomputed key-value (KV) cache.  This caching eliminates the need for real-time retrieval during inference, reducing token usage by up to 76% while maintaining answer quality.  CAG is particularly effective for constrained knowledge bases like internal documentation, FAQs, and customer support systems, where all relevant information can fit within the model's extended context window.
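For readers who want the mechanics, here is a minimal sketch of the preloading trick using Hugging Face transformers. It is illustrative only - the repo's own code may differ - and the model name is just a placeholder; the deepcopy keeps the pristine document cache reusable across questions, since generation mutates the cache it is given.

```python
# Minimal sketch of CAG: precompute the KV cache for a document prefix once,
# then answer many questions against it without re-encoding the documents.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

docs = "...paste the FAQ / internal documentation here..."
prefix = tok(f"Context:\n{docs}\n\n", return_tensors="pt").to(model.device)

# One forward pass precomputes the KV cache for the whole document prefix.
with torch.no_grad():
    doc_cache = model(**prefix, use_cache=True).past_key_values

def answer(question: str) -> str:
    cache = copy.deepcopy(doc_cache)  # generation extends the cache in place
    q = tok(f"Q: {question}\nA:", return_tensors="pt").to(model.device)
    ids = torch.cat([prefix.input_ids, q.input_ids], dim=-1)
    out = model.generate(ids, past_key_values=cache, max_new_tokens=128)
    return tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)

print(answer("What is the refund policy?"))
```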
2025-05-23T12:33:08
https://i.redd.it/bn39fvozzi2f1.png
Ok_Employee_6418
i.redd.it
1970-01-01T00:00:00
0
{}
1ktiere
false
null
t3_1ktiere
/r/LocalLLaMA/comments/1ktiere/a_demonstration_of_cacheaugmented_generation_cag/
false
false
https://b.thumbs.redditm…aEI7cy-jFz-Q.jpg
44
{'enabled': True, 'images': [{'id': 'AIXPAwyQkSwFhmFS7dpTX429pVBEiHq8hh2NALQCZgY', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/bn39fvozzi2f1.png?width=108&crop=smart&auto=webp&s=b1e021449ba2bfb827b8aacbb98e59d396b4490e', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/bn39fvozzi2f1.png?width=216&crop=smart&auto=webp&s=67737fb7d84ed5d5ce78381170a57365f4d2bf92', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/bn39fvozzi2f1.png?width=320&crop=smart&auto=webp&s=90755a5a369715bcd2414437886f510e03ef377b', 'width': 320}, {'height': 453, 'url': 'https://preview.redd.it/bn39fvozzi2f1.png?width=640&crop=smart&auto=webp&s=9702ce1baab0703350e9800e0619c24d489b70eb', 'width': 640}], 'source': {'height': 496, 'url': 'https://preview.redd.it/bn39fvozzi2f1.png?auto=webp&s=b5a7dee7eca8ae82a91baaeaed1a05b637df654d', 'width': 700}, 'variants': {}}]}
All I wanted was a simple FREE chat app
1
[removed]
2025-05-23T12:38:30
https://www.reddit.com/r/LocalLLaMA/comments/1ktiik1/all_i_wanted_is_a_simple_free_chat_app/
COBECT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktiik1
false
null
t3_1ktiik1
/r/LocalLLaMA/comments/1ktiik1/all_i_wanted_is_a_simple_free_chat_app/
false
false
self
1
{'enabled': False, 'images': [{'id': '-ctwWkN6rHGc2V6GtsAmk-HLdFHSpEj4U0gSuMMDRmw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=108&crop=smart&auto=webp&s=f35549a0260f3dffaecfe008535d98df9d849414', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=216&crop=smart&auto=webp&s=114731a6bcfdad6fd0883c8b2a70f73220d22b2f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=320&crop=smart&auto=webp&s=c8fb76daec27d80fcc15f0cefd8e91282936cd19', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=640&crop=smart&auto=webp&s=0383505fe666d8be50a1d3fe9573db90bb6261d1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=960&crop=smart&auto=webp&s=82f9d2f0b51b75324d44083284dd3ceee6c693a6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=1080&crop=smart&auto=webp&s=bdede698022f902c3057b82e81f2c0712657c58f', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?auto=webp&s=8d7458ab24160c3de9b7569fcb5ec2c622537d11', 'width': 2400}, 'variants': {}}]}
I accidentally too many P100
417
Hi, I had quite positive results with a P100 last summer, so when R1 came out, I decided to try whether I could put 16 of them in a single PC... and I could. Not the fastest thing in the universe, and I am not getting awesome PCIe speed (2@4x), but it works, is still cheaper than a 5090, and I hope I can run stuff with large contexts. I hoped to run Llama 4 with large context sizes, and Scout runs almost OK, but Llama 4 as a model is abysmal. I tried to run Qwen3-235B-A22B, but the performance with llama.cpp is pretty terrible, and I haven't been able to get it working with vllm-pascal (ghcr.io/sasha0552/vllm:latest). If you have any pointers on getting Qwen3-235B to run with any sort of parallelism, or want me to benchmark any model, just say so! The MB is a 2014 Intel S2600CW with dual 8-core Xeons, so CPU performance is rather low. I also tried an MB with an EPYC, but it doesn't manage to allocate resources to all the PCIe devices.
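(If it helps anyone reading: vLLM's standard knobs for spreading a model like this across many cards are `--tensor-parallel-size` and `--pipeline-parallel-size`, e.g. tensor-parallel 8 with pipeline-parallel 2 for 16 GPUs. Whether the Pascal fork actually supports those on P100s is an open question - treat this as a hedged pointer, not a verified recipe.)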
2025-05-23T12:48:51
https://www.reddit.com/gallery/1ktiq99
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1ktiq99
false
null
t3_1ktiq99
/r/LocalLLaMA/comments/1ktiq99/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…jdBxKlq9CTYI.jpg
417
null
nanoVLM: The simplest repository to train your VLM in pure PyTorch
27
2025-05-23T12:54:55
https://huggingface.co/blog/nanovlm
ab2377
huggingface.co
1970-01-01T00:00:00
0
{}
1ktiusw
false
null
t3_1ktiusw
/r/LocalLLaMA/comments/1ktiusw/nanovlm_the_simplest_repository_to_train_your_vlm/
false
false
https://a.thumbs.redditm…cY6RORHravG0.jpg
27
{'enabled': False, 'images': [{'id': 'YLeFYXJmc-iscz_0rCXh7lML-AboTi25K0CW6HUv1nE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?width=108&crop=smart&auto=webp&s=f117134956e07deb8bb1ac1a9b826a6b4681c0ad', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?width=216&crop=smart&auto=webp&s=7b304e6c51d9e40e41e0f74622efb74cf05b8465', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?width=320&crop=smart&auto=webp&s=1283f717572a5bbba38faa7dfa3b4ec2f60d3570', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?width=640&crop=smart&auto=webp&s=06978f4f95414bba1cfe00e253ce645b2a32d135', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?width=960&crop=smart&auto=webp&s=8a8114d523a32a3c6cdd891c468d120de1cf44c0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?width=1080&crop=smart&auto=webp&s=e69296c0446d4f6e645dc88840075f40d8a16358', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?auto=webp&s=659974b2d98e1aedab6bfbb440c497453e55555f', 'width': 1920}, 'variants': {}}]}
Ollama is running on AMD GPU, despite ROCM not being installed
1
[removed]
2025-05-23T13:24:10
https://www.reddit.com/r/LocalLLaMA/comments/1ktjhml/ollama_is_running_on_amd_gpu_despite_rocm_not/
Xatraxalian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktjhml
false
null
t3_1ktjhml
/r/LocalLLaMA/comments/1ktjhml/ollama_is_running_on_amd_gpu_despite_rocm_not/
false
false
self
1
{'enabled': False, 'images': [{'id': 'q0Dze0o_SCG5-XBdM5y1Qobni-JTLZfbkgXs6Pktjwc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?width=108&crop=smart&auto=webp&s=e1162aec77faeaac52274e4ce6a9b488d8554330', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?width=216&crop=smart&auto=webp&s=a618019b8eae8eebdf3d73bb55fb9b081aae298f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?width=320&crop=smart&auto=webp&s=d9f441152cdd4752e3d0edc8dc4867b86e251e35', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?width=640&crop=smart&auto=webp&s=c74b9e88fc860ac65775a5bf5352882a9a7b1613', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?width=960&crop=smart&auto=webp&s=cc5dfecd9ef81ada6344b98d2a8ea7435dffbf7a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?width=1080&crop=smart&auto=webp&s=a0c2b8566873442329fe33a5e9841b0fdb03e7aa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?auto=webp&s=c9570c603f9797311478be028fd43a9aef9f6de7', 'width': 1200}, 'variants': {}}]}
What model should I choose?
1
[removed]
2025-05-23T13:26:07
https://www.reddit.com/r/LocalLLaMA/comments/1ktjj3l/what_model_should_i_choose/
Abject_Personality53
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktjj3l
false
null
t3_1ktjj3l
/r/LocalLLaMA/comments/1ktjj3l/what_model_should_i_choose/
false
false
self
1
null
What's the current state of art method for using "scratch pads"?
3
Using scratch pads was very popular back in the olden days of 2023 due to extremely small context lengths, which maxed out at around 8k tokens. But now, with agents, we're running into context length issues once again. I haven't kept up with the research in this area, so what are the current best methods for using scratch pads in agentic settings, so the model doesn't lose the thread on what its original goals were and what it has tried and has yet to try?
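One baseline pattern that still holds up (a hedged sketch, not a claim about current SOTA): keep the notes in a small, explicitly managed buffer that the model rewrites and compresses every step, rather than letting them accumulate in chat history. `llm()` below is a stand-in for whatever completion endpoint you use.

```python
# Baseline scratch-pad loop: the model rewrites its own compressed notes each
# step, so goals and tried-approaches survive well past the context window.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your local completion endpoint here")

def run_agent(goal: str, max_steps: int = 10) -> str:
    scratchpad = f"GOAL: {goal}\nTRIED: nothing yet\nNEXT: make a plan"
    for _ in range(max_steps):
        reply = llm(
            "You are midway through a long task. Your scratch pad:\n"
            f"{scratchpad}\n\n"
            "Do the next step, then output an updated scratch pad after the "
            "marker 'SCRATCHPAD:' (under ~200 words; compress older notes, "
            "never drop the GOAL line). Say DONE when finished."
        )
        action, _, notes = reply.partition("SCRATCHPAD:")
        if "DONE" in action:
            return action.strip()
        scratchpad = notes.strip() or scratchpad  # model manages its own memory
    return scratchpad
```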
2025-05-23T13:51:41
https://www.reddit.com/r/LocalLLaMA/comments/1ktk3hi/whats_the_current_state_of_art_method_for_using/
drooolingidiot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktk3hi
false
null
t3_1ktk3hi
/r/LocalLLaMA/comments/1ktk3hi/whats_the_current_state_of_art_method_for_using/
false
false
self
3
null
Opensource LLM for enterprise RAG use case, Qwen3 benchmark validation
1
[removed]
2025-05-23T13:54:20
https://www.reddit.com/r/LocalLLaMA/comments/1ktk5q7/opensource_llm_for_enterprise_rag_use_case_qwen3/
SK33LA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktk5q7
false
null
t3_1ktk5q7
/r/LocalLLaMA/comments/1ktk5q7/opensource_llm_for_enterprise_rag_use_case_qwen3/
false
false
self
1
null
Question for RAG LLMs and Qwen3 benchmark
1
[removed]
2025-05-23T13:57:41
https://www.reddit.com/r/LocalLLaMA/comments/1ktk8lh/question_for_rag_llms_and_qwen3_benchmark/
SK33LA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktk8lh
false
null
t3_1ktk8lh
/r/LocalLLaMA/comments/1ktk8lh/question_for_rag_llms_and_qwen3_benchmark/
false
false
self
1
null
Unmute by Kyutai: Make LLMs listen and speak
1
[removed]
2025-05-23T14:06:54
https://kyutai.org/2025/05/22/unmute.html
rerri
kyutai.org
1970-01-01T00:00:00
0
{}
1ktkgl1
false
null
t3_1ktkgl1
/r/LocalLLaMA/comments/1ktkgl1/unmute_by_kyutai_make_llms_listen_and_speak/
false
false
default
1
null
It never ends with these people, no matter how far you go
0
2025-05-23T14:08:11
https://i.redd.it/3z82k151hj2f1.png
baobabKoodaa
i.redd.it
1970-01-01T00:00:00
0
{}
1ktkhof
false
null
t3_1ktkhof
/r/LocalLLaMA/comments/1ktkhof/it_never_ends_with_these_people_no_matter_how_far/
false
false
https://b.thumbs.redditm…eVwFLwBl3jJw.jpg
0
{'enabled': True, 'images': [{'id': 'Fs6wKucI7XQXvgSK7-pb0m7HMZvAcyIEwKkAqmqfGQ0', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/3z82k151hj2f1.png?width=108&crop=smart&auto=webp&s=73022bcd93e86066cf43a993fb8cbdc002a73589', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/3z82k151hj2f1.png?width=216&crop=smart&auto=webp&s=3d46eaf1b78736837d7d13c6b1b9f2589ba789c6', 'width': 216}, {'height': 332, 'url': 'https://preview.redd.it/3z82k151hj2f1.png?width=320&crop=smart&auto=webp&s=1e976e41cd1e8dfe58c79b6ea51a28aa466488b8', 'width': 320}, {'height': 664, 'url': 'https://preview.redd.it/3z82k151hj2f1.png?width=640&crop=smart&auto=webp&s=11545b5a977a469161bba960827be77d7134447e', 'width': 640}], 'source': {'height': 790, 'url': 'https://preview.redd.it/3z82k151hj2f1.png?auto=webp&s=d4890fdd2341232c174c8e64157ac162e33228fd', 'width': 761}, 'variants': {}}]}
Claude 4 (Sonnet) isn't great for document understanding tasks: some surprising results
112
Finished benchmarking Claude 4 (Sonnet) across a range of document understanding tasks, and the results are… not that good. It's currently **ranked 7th overall** on the leaderboard. Key takeaways: * Weak performance in OCR – Claude 4 lags behind even smaller models like GPT-4.1-nano and InternVL3-38B-Instruct. * Rotation sensitivity – We tested OCR robustness with slightly rotated images ([-5°, +5°]). Most large models had a 2–3% drop in accuracy. Claude 4 dropped 9%. * Poor on handwritten documents – Scored only 51.64%, while Gemini 2.0 Flash got 71.24%. It also struggled with handwritten datasets in other tasks like key information extraction. * Chart VQA and visual tasks – Performed decently but still behind Gemini, Claude 3.7, and GPT-4.5/o4-mini. * Long document understanding – Claude 3.7 Sonnet (reasoning:low) ranked 1st. Claude 4 Sonnet ranked 13th. * **One bright spot: table extraction** – Claude 4 Sonnet is currently ranked 1st, narrowly ahead of Claude 3.7 Sonnet. https://preview.redd.it/72zkmcyogj2f1.png?width=2448&format=png&auto=webp&s=cc8fb9e86ca0bcfe129e25dab934d06818f7d638 Leaderboard: [https://idp-leaderboard.org/](https://idp-leaderboard.org/) Codebase: [https://github.com/NanoNets/docext](https://github.com/NanoNets/docext) How has everyone’s experience with the models been so far?
2025-05-23T14:08:12
https://www.reddit.com/r/LocalLLaMA/comments/1ktkhp8/claude_4_sonnet_isnt_great_for_document/
SouvikMandal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktkhp8
false
null
t3_1ktkhp8
/r/LocalLLaMA/comments/1ktkhp8/claude_4_sonnet_isnt_great_for_document/
false
false
https://b.thumbs.redditm…ut3-YAYKWjzQ.jpg
112
null
Unmute by Kyutai: Make LLMs listen and speak
188
Seems nicely polished and apparently works with any LLM. Open-source in the coming weeks. Demo uses Gemma 3 12B as base LLM (demo link in the blog post, reddit seems to auto-delete my post if I include it here). If any Kyutai dev happens to lurk here, would love to hear about the memory requirements of the TTS & STT models.
2025-05-23T14:12:46
https://kyutai.org/2025/05/22/unmute.html
rerri
kyutai.org
1970-01-01T00:00:00
0
{}
1ktklo5
false
null
t3_1ktklo5
/r/LocalLLaMA/comments/1ktklo5/unmute_by_kyutai_make_llms_listen_and_speak/
false
false
default
188
null
What model should I choose?
1
[removed]
2025-05-23T14:14:02
https://www.reddit.com/r/LocalLLaMA/comments/1ktkmqh/what_model_should_i_choose/
Abject_Personality53
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktkmqh
false
null
t3_1ktkmqh
/r/LocalLLaMA/comments/1ktkmqh/what_model_should_i_choose/
false
false
self
1
null
Strategies for aligning embedded text in PDF into a logical order
2
So I have some PDFs with text information embedded - these are essentially bank statements with items in rows with amounts. However, if you try to select the text in a PDF viewer, it goes everywhere, because the embedded text is not stored in any sane order. This is massively frustrating, since the accurate embedded text is there but not in a usable state. Has anyone tackled this problem and figured out a good way to align/re-order the text without just re-OCR'ing it (which is subject to OCR errors)?
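One approach worth trying (a sketch, with a guessed 3-point row tolerance you would tune per statement layout): pull PyMuPDF's positioned word boxes and rebuild reading order from the coordinates instead of trusting the stored text order.

```python
# Recover reading order from positioned word boxes -- pip install pymupdf.
import fitz  # PyMuPDF

doc = fitz.open("statement.pdf")
for page in doc:
    # each entry: (x0, y0, x1, y1, text, block_no, line_no, word_no)
    words = sorted(page.get_text("words"), key=lambda w: (w[1], w[0]))
    rows, current, last_top = [], [], None
    for x0, y0, _x1, _y1, text, *_ in words:
        if last_top is not None and abs(y0 - last_top) > 3:  # new visual row
            current.sort()  # left-to-right within the finished row
            rows.append(" ".join(t for _, t in current))
            current = []
        current.append((x0, text))
        last_top = y0
    if current:
        current.sort()
        rows.append(" ".join(t for _, t in current))
    print("\n".join(rows))
```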
2025-05-23T14:46:56
https://www.reddit.com/r/LocalLLaMA/comments/1ktleg0/strategies_for_aligning_embedded_text_in_pdf_into/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktleg0
false
null
t3_1ktleg0
/r/LocalLLaMA/comments/1ktleg0/strategies_for_aligning_embedded_text_in_pdf_into/
false
false
self
2
null
96GB VRAM! What should run first?
1462
I had to make a fake company domain name to order this from a supplier. They wouldn’t even give me a quote with my Gmail address. I got the card though!
2025-05-23T15:10:20
https://i.redd.it/co0zhh06sj2f1.jpeg
Mother_Occasion_8076
i.redd.it
1970-01-01T00:00:00
0
{}
1ktlz3w
false
null
t3_1ktlz3w
/r/LocalLLaMA/comments/1ktlz3w/96gb_vram_what_should_run_first/
false
false
https://b.thumbs.redditm…L7CEw3G-_KrM.jpg
1462
{'enabled': True, 'images': [{'id': 'uU6dM4WijM_cYbJ_ExiJXAu9rQhwKqr0Nz3u14SWZ3E', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?width=108&crop=smart&auto=webp&s=a35164fe77c202ec5b589dfe668feb1e80c255c0', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?width=216&crop=smart&auto=webp&s=4bf4f14af20ed83f34bdad4529d0dd8d0f7bd723', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?width=320&crop=smart&auto=webp&s=751d144e88751fb8a35d144ba9c555f2c5f7ad38', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?width=640&crop=smart&auto=webp&s=64b43f0124c5d5b397b2efd848e6e83c1dcfcfdc', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?width=960&crop=smart&auto=webp&s=c3fef92ceabd6da8ee3e6f0149b625189e1bd552', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?width=1080&crop=smart&auto=webp&s=5fe686cf45cf6357bc7a300e794f1317c06a1cb5', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/co0zhh06sj2f1.jpeg?auto=webp&s=299743d41bdf692b635b669a0b8bad54388da446', 'width': 4032}, 'variants': {}}]}
AI becoming too sycophantic? Noticed Gemini 2.5 praising me instead of solving the issue
98
Hello there, I get the feeling that the trend of making AI more inclined towards flattery and overly focused on a user's feelings is somehow degrading its ability to actually solve problems. Is it just me? For instance, I've recently noticed that Gemini 2.5, instead of giving a direct solution, will spend time praising me, saying I'm using the right programming paradigms, blah blah blah, and that my code should generally work. In the end, it was no help at all. Qwen2 32B, on the other hand, just straightforwardly pointed out my error.
2025-05-23T15:11:54
https://www.reddit.com/r/LocalLLaMA/comments/1ktm0hd/ai_becoming_too_sycophantic_noticed_gemini_25/
Rrraptr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktm0hd
false
null
t3_1ktm0hd
/r/LocalLLaMA/comments/1ktm0hd/ai_becoming_too_sycophantic_noticed_gemini_25/
false
false
self
98
null
Sarvam-M a 24B open-weights hybrid reasoning model
6
Model Link: [https://huggingface.co/sarvamai/sarvam-m](https://huggingface.co/sarvamai/sarvam-m) Model Info: It's a two-stage post-trained version of Mistral 24B, using SFT and then GRPO. It's a hybrid reasoning model, which means both reasoning and non-reasoning modes are fitted into the same model, and you can choose when to reason and when not. If you wanna try it, you can either run it locally or use Sarvam's platform: [https://dashboard.sarvam.ai/playground](https://dashboard.sarvam.ai/playground) Also, they released a detailed blog post on the post-training: [https://www.sarvam.ai/blogs/sarvam-m](https://www.sarvam.ai/blogs/sarvam-m)
2025-05-23T15:13:11
https://i.redd.it/8gk7kugnsj2f1.png
RealKingNish
i.redd.it
1970-01-01T00:00:00
0
{}
1ktm1n7
false
null
t3_1ktm1n7
/r/LocalLLaMA/comments/1ktm1n7/sarvamm_a_24b_openweights_hybrid_reasoning_model/
false
false
https://b.thumbs.redditm…zkLqrHDtvdlA.jpg
6
{'enabled': True, 'images': [{'id': 'DmMsBRPNbi849LghsoO44o2QAnMJPTmgh7bjxAlSNrE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?width=108&crop=smart&auto=webp&s=e758ae8dd0759d6b6eaaa31b4cdaf08d467f7ba4', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?width=216&crop=smart&auto=webp&s=93fb11d3e2b82c7eb4599407eb0777f6620e0cd6', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?width=320&crop=smart&auto=webp&s=5411444b8f48ecf53b0551150b295cbcb5cf4892', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?width=640&crop=smart&auto=webp&s=cfc3a087396e0a4a8f0a79dc5b3427bd30d54414', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?width=960&crop=smart&auto=webp&s=0d0194072e863feff29d142f4a0dca4c0826bf48', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?width=1080&crop=smart&auto=webp&s=f261562d5ba0559c920735d5d8f037c1ccadaadf', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/8gk7kugnsj2f1.png?auto=webp&s=ceffb46802f3179eefa0c5e5658919169d4cb5dd', 'width': 1080}, 'variants': {}}]}
What model should I choose?
5
I study in the medical field and I cannot stomach hours of searching through books anymore. So I would like to run an AI that takes books (they will be in both Russian and English) as context and produces answers to my questions while also providing references, so that I can check, memorise, and take notes. I don't mind waiting 30-60 minutes per answer, but I need maximum accuracy. I have a laptop (yeah, a regular PC is not suitable for me) with an i9-13900HX, a 4080 laptop GPU (12GB), and 16GB of DDR5 SO-DIMM memory. If there's a need for more RAM, I'm ready to buy a Crucial DDR5 SO-DIMM 2×64GB kit. Also, I'm an absolute beginner, so I'm not sure if this is even possible.
2025-05-23T15:13:45
https://www.reddit.com/r/LocalLLaMA/comments/1ktm248/what_model_should_i_choose/
Abject_Personality53
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktm248
false
null
t3_1ktm248
/r/LocalLLaMA/comments/1ktm248/what_model_should_i_choose/
false
false
self
5
null
Spatial Reasoning is Hot 🔥🔥🔥🔥🔥🔥
21
Notice the recent uptick in Google search interest around "spatial reasoning." And now we have a fantastic new benchmark to better measure these capabilities. **SpatialScore:** [https://haoningwu3639.github.io/SpatialScore/](https://haoningwu3639.github.io/SpatialScore/) The **SpatialScore** benchmark offers a comprehensive assessment covering key spatial reasoning capabilities like object counting, 2D localization, and 3D distance estimation. It can help drive progress in adapting VLMs for embodied AI use cases in robotics, where perception and planning hinge on strong spatial understanding.
2025-05-23T15:26:50
https://www.reddit.com/gallery/1ktmdpo
remyxai
reddit.com
1970-01-01T00:00:00
0
{}
1ktmdpo
false
null
t3_1ktmdpo
/r/LocalLLaMA/comments/1ktmdpo/spatial_reasoning_is_hot/
false
false
https://b.thumbs.redditm…tup0fAClRAOg.jpg
21
null
LLMI system I (not my money) got for the group
1
[removed]
2025-05-23T16:50:04
https://i.redd.it/xlu1hsfj3k2f1.jpeg
SandboChang
i.redd.it
1970-01-01T00:00:00
0
{}
1ktof5c
false
null
t3_1ktof5c
/r/LocalLLaMA/comments/1ktof5c/llmi_system_i_not_my_money_got_for_the_group/
false
false
https://b.thumbs.redditm…Iv6mZJ0ye8pg.jpg
1
{'enabled': True, 'images': [{'id': 'zFAVAcLpeVKajxvMYo0NM4QH2u1z5gaf3i4_RnKaJcE', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?width=108&crop=smart&auto=webp&s=7cd906e4c120d560993394c54387518fba3c89ee', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?width=216&crop=smart&auto=webp&s=5b899340d15c0d7e01f2d50ac456b6e5b2646679', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?width=320&crop=smart&auto=webp&s=392fc0ba3c5381e4f52a2edf6c165faca4188f1c', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?width=640&crop=smart&auto=webp&s=9a51c579d96fde3dec00be4a4801bc5ea965bd15', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?width=960&crop=smart&auto=webp&s=e4ae5f44272fbdd3c2b637f4d5331cac48a4269a', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?width=1080&crop=smart&auto=webp&s=75799951636c2bcc7b8c58187a01a705848e51c0', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/xlu1hsfj3k2f1.jpeg?auto=webp&s=4b99e68f9a5eb42dfa6e57f601d0f4a44fd05bfd', 'width': 4032}, 'variants': {}}]}
LLMI system I (not my money) got for our group
180
2025-05-23T16:52:23
https://i.redd.it/lgjexuw8ak2f1.jpeg
SandboChang
i.redd.it
1970-01-01T00:00:00
0
{}
1ktoh78
false
null
t3_1ktoh78
/r/LocalLLaMA/comments/1ktoh78/llmi_system_i_not_my_money_got_for_our_group/
false
false
https://b.thumbs.redditm…D_R6Ax4gyf7o.jpg
180
{'enabled': True, 'images': [{'id': 'Y9oM7DtsJUSL1S_CXDfvsrN56xHFNxKI0_W5nDUOHOY', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?width=108&crop=smart&auto=webp&s=4e7502705e0b589d6e33a689210490d1546b1048', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?width=216&crop=smart&auto=webp&s=1da25e2b05b64af8ff844470f7665fbecb01051b', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?width=320&crop=smart&auto=webp&s=063c3694057f834a06bbb62a8fe1a22ab5851eb2', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?width=640&crop=smart&auto=webp&s=3260ccc53dd2f7cca5692637366920fd7a9928ec', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?width=960&crop=smart&auto=webp&s=c79d5cb0d5dc40af105d2eb0820185d0243f5680', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?width=1080&crop=smart&auto=webp&s=4c04ec7818ec0987d6202de1e96705fce6a63853', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/lgjexuw8ak2f1.jpeg?auto=webp&s=d75cbc80f95ae45e7039778d8e4b13677a75e55b', 'width': 4032}, 'variants': {}}]}
So what are some cool projects you guys are running on your local LLMs?
58
Trying to find good ideas to implement on my setup, or maybe get some inspiration to do something on my own
2025-05-23T16:55:33
https://www.reddit.com/r/LocalLLaMA/comments/1ktojxe/so_what_are_some_cool_projects_you_guys_are/
itzikhan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktojxe
false
null
t3_1ktojxe
/r/LocalLLaMA/comments/1ktojxe/so_what_are_some_cool_projects_you_guys_are/
false
false
self
58
null
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
1
[removed]
2025-05-23T16:59:48
https://i.redd.it/ikvqvvaaak2f1.png
unseenmarscai
i.redd.it
1970-01-01T00:00:00
0
{}
1ktonl6
false
null
t3_1ktonl6
/r/LocalLLaMA/comments/1ktonl6/slm_rag_arena_compare_and_find_the_best_sub5b/
false
false
https://a.thumbs.redditm…7GXjPuXpz8V0.jpg
1
{'enabled': True, 'images': [{'id': '55U7Y6g4TiBPTPu9wNpkYhC4p_9LUOfeCIvByvx4dvk', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?width=108&crop=smart&auto=webp&s=4c200607742f9f8a42ad682ce0c5210eeacd57a4', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?width=216&crop=smart&auto=webp&s=2e806308916e28a3573ed795535871f002c87f52', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?width=320&crop=smart&auto=webp&s=fdaa57df939fa01d697269d2d442d5d200b72ac8', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?width=640&crop=smart&auto=webp&s=ede267f01a193bdc2af9f8a3ef2704bbb6d31732', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?width=960&crop=smart&auto=webp&s=7d29ca9bf5ec99e00a68815a7929fb9cc56bd83b', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?width=1080&crop=smart&auto=webp&s=ca60ce139c1f69dbc613606e6b6ca92a84a79a5e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/ikvqvvaaak2f1.png?auto=webp&s=f0e08d8a8bc15a87b7d2dcb33911f8f89c28387f', 'width': 1920}, 'variants': {}}]}