Dataset columns (type and observed range):
title: string (length 1–300)
score: int64 (0–8.54k)
selftext: string (length 0–40k)
created: timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29)
url: string (length 0–878)
author: string (length 3–20)
domain: string (length 0–82)
edited: timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18)
gilded: int64 (0–2)
gildings: string (7 classes)
id: string (length 7)
locked: bool (2 classes)
media: string (length 646–1.8k)
name: string (length 10)
permalink: string (length 33–82)
spoiler: bool (2 classes)
stickied: bool (2 classes)
thumbnail: string (length 4–213)
ups: int64 (0–8.54k)
preview: string (length 301–5.01k)
May 2025 Model IQ and Mac Benchmarks
0
| Model | IQ (5-shot MMLU) | Mem (4-bit) | M3 Max 64 GB | M4 Pro 24 GB | M4 Max 64 / 128 GB |
|---|---|---|---|---|---|
| LLaMA 2 7B | 45.8 % | 3.5 GB | ~ 90 t/s | ~ 54 t/s | ~ 110 t/s |
| LLaMA 2 13B | 55.4 % | 6.5 GB | ~ 25 t/s | ~ 15 t/s | ~ 30 t/s |
| Mistral 7B | 62.5 % | 3.0 GB | ~ 60 t/s | ~ 36 t/s | ~ 65 t/s |
| Mixtral 8×7B (MoE) | 71.7 % | 22.5 GB | ~ 60 t/s | ~ 36 t/s | ~ 72 t/s |
| Mixtral 8×22B (MoE) | 77.8 % | 88 GB | ~ 65 t/s | n/a | ~ 78 t/s |
| Qwen 2.5 14B | 68.6 % | 7.0 GB | ~ 45 t/s | ~ 27 t/s | ~ 55 t/s |
| Qwen 2.5 32B | 74.4 % | 16 GB | ~ 20 t/s | ~ 12 t/s | ~ 24 t/s |
| Qwen 2.5 72B | 77.4 % | 36 GB | ~ 10 t/s | n/a | ~ 12 t/s |
| LLaMA 3 70B (instr.) | 79.3 % | 35 GB | ~ 10 t/s | n/a | ~ 7 t/s |
| GPT-3.5 Turbo (API) | 70.0 % | n/a | 109 t/s | 109 t/s | 109 t/s |
| GPT-4 (API) | 86.4 % | n/a | 12.5 t/s | 12.5 t/s | 12.5 t/s |
| GPT-4o (API) | 88.7 % | n/a | 138 t/s | 138 t/s | 138 t/s |
2025-05-14T02:14:31
https://www.reddit.com/r/LocalLLaMA/comments/1km426d/may_2025_model_iq_and_mac_benchmarks/
FroyoCommercial627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km426d
false
null
t3_1km426d
/r/LocalLLaMA/comments/1km426d/may_2025_model_iq_and_mac_benchmarks/
false
false
self
0
null
How to tell Aider to use Qwen3 with the /nothink option?
3
I understand that I can start aider and tell it to use models hosted locally by Ollama. `Ex. aider --model ollama/llama3` That being said, I'm not sure how to tell aider to use the /nothink (or /no\_think) option. Any suggestions?
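One possible workaround (a sketch, not a built-in aider option): since Qwen3's /no_think is a soft switch that lives in the prompt, you can bake it into a derived Ollama model and point aider at that model with `aider --model ollama/qwen3-nothink`. The base tag and the assumption that /no_think in the system prompt disables thinking should be verified locally.

```python
# Sketch: create a derived Ollama model whose system prompt carries /no_think,
# then use it from aider. Base model tag and the /no_think placement are
# assumptions to verify against Qwen3's and Ollama's docs.
import subprocess
from pathlib import Path

modelfile = (
    "FROM qwen3:14b\n"
    'SYSTEM """You are a helpful coding assistant. /no_think"""\n'
)
Path("Modelfile.qwen3-nothink").write_text(modelfile)

# Build the derived model; afterwards run: aider --model ollama/qwen3-nothink
subprocess.run(
    ["ollama", "create", "qwen3-nothink", "-f", "Modelfile.qwen3-nothink"],
    check=True,
)
```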
2025-05-14T02:15:05
https://www.reddit.com/r/LocalLLaMA/comments/1km42k8/how_to_tell_aider_to_use_qwen3_with_the_nothink/
jpummill2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km42k8
false
null
t3_1km42k8
/r/LocalLLaMA/comments/1km42k8/how_to_tell_aider_to_use_qwen3_with_the_nothink/
false
false
self
3
null
How much data should I include to retain the reasoning capability of the Qwen3 model?
1
[removed]
2025-05-14T02:24:33
https://www.reddit.com/r/LocalLLaMA/comments/1km490t/how_many_data_should_i_include_to_retain/
LectureBig9815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km490t
false
null
t3_1km490t
/r/LocalLLaMA/comments/1km490t/how_many_data_should_i_include_to_retain/
false
false
https://external-preview…d3454a1568ba426b
1
{'enabled': False, 'images': [{'id': 'learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A.png?width=108&crop=smart&auto=webp&s=49a7725b34238fa2193c55d3faac5a238cd09caf', 'width': 108}, {'height': 94, 'url': 'https://external-preview.redd.it/learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A.png?width=216&crop=smart&auto=webp&s=7bb958b5158360cf9653c0810ed22883fc2988d7', 'width': 216}, {'height': 139, 'url': 'https://external-preview.redd.it/learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A.png?width=320&crop=smart&auto=webp&s=a591e23a62605d385e21b692174da8b03869e0a6', 'width': 320}, {'height': 278, 'url': 'https://external-preview.redd.it/learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A.png?width=640&crop=smart&auto=webp&s=3c9d2901f176402aad2f728440d13899a42e231c', 'width': 640}], 'source': {'height': 340, 'url': 'https://external-preview.redd.it/learGapv71y-mEsO5d1s6yaakzOuHLJMeLWtqxQ5I0A.png?auto=webp&s=abb0c1254d884e24ae9f0bddd946210284856e32', 'width': 781}, 'variants': {}}]}
Aya Vision: Advancing the Frontier of Multilingual Multimodality
46
Abstract >Building multimodal language models is fundamentally challenging: it requires aligning vision and language modalities, curating high-quality instruction data, and avoiding the degradation of existing text-only capabilities once vision is introduced. These difficulties are further magnified in the multilingual setting, where the need for multimodal data in different languages exacerbates existing data scarcity, machine translation often distorts meaning, and catastrophic forgetting is more pronounced. To address the aforementioned challenges, we introduce novel techniques spanning both data and modeling. First, we develop a synthetic annotation framework that curates high-quality, diverse multilingual multimodal instruction data, enabling Aya Vision models to produce natural, human-preferred responses to multimodal inputs across many languages. Complementing this, we propose a cross-modal model merging technique that mitigates catastrophic forgetting, effectively preserving text-only capabilities while simultaneously enhancing multimodal generative performance. Aya-Vision-8B achieves best-in-class performance compared to strong multimodal models such as Qwen-2.5-VL-7B, Pixtral-12B, and even much larger Llama-3.2-90B-Vision. We further scale this approach with Aya-Vision-32B, which outperforms models more than twice its size, such as Molmo-72B and LLaMA-3.2-90B-Vision. Our work advances multilingual progress on the multi-modal frontier, and provides insights into techniques that effectively bend the need for compute while delivering extremely high performance. Aya-Vision-8B: [https://huggingface.co/CohereLabs/aya-vision-8B](https://huggingface.co/CohereLabs/aya-vision-8B) Aya-Vision-32B: [https://huggingface.co/CohereLabs/aya-vision-32B](https://huggingface.co/CohereLabs/aya-vision-32B) AyaVisionBench: [https://huggingface.co/datasets/CohereLabs/AyaVisionBench](https://huggingface.co/datasets/CohereLabs/AyaVisionBench)
2025-05-14T03:41:24
https://arxiv.org/pdf/2505.08751
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1km5p7a
false
null
t3_1km5p7a
/r/LocalLLaMA/comments/1km5p7a/aya_vision_advancing_the_frontier_of_multilingual/
false
false
default
46
null
Anyone here building an AI product in German?
1
[removed]
2025-05-14T04:36:41
https://www.reddit.com/r/LocalLLaMA/comments/1km6npk/anyone_here_building_an_ai_product_in_german/
MikeTheSolist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km6npk
false
null
t3_1km6npk
/r/LocalLLaMA/comments/1km6npk/anyone_here_building_an_ai_product_in_german/
false
false
self
1
null
KoboldCpp Smart Launcher: GUI & CLI tools for tensor offload auto-tuning (developed with AI)
34
# KoboldCpp Smart Launcher: Optimize your LLM performance with tensor offloading [https://github.com/Viceman256/KoboldCpp-Smart-Launcher-v1.0.0](https://github.com/Viceman256/KoboldCpp-Smart-Launcher-v1.0.0) **TL;DR:** I created a launcher (GUI + CLI) for KoboldCpp that helps you automatically find the best tensor offload strategy for your model, potentially doubling generation speed without needing more VRAM. # What is this? Inspired by [this post about tensor offloading](https://www.reddit.com/r/LocalLLaMA/comments/1ki3sze/running_qwen3_235b_on_a_single_3060_12gb_6_ts/), I've created a launcher that makes it easy to apply and fine-tune tensor offload strategies. # Performance examples on consumer hardware: **Qwen3-32B on RTX 3060 (12GB)**: * Traditional layer offloading: **3.98 tokens/sec** * With smart tensor offloading: **10.61 tokens/sec** (166% faster!) **Gemma3-27B on RTX 2060 (6GB)**: * Traditional layer offloading: **6.86 tokens/sec** * With smart tensor offloading: **10.4 tokens/sec** (52% faster!) # Key Features * **Auto-Tuning**: Automatically finds the best tensor offload strategy for your model * **GUI & CLI versions**: Use whichever interface you prefer * **Launch History**: Remembers successful configurations for each model * **Multiple Launch Options**: Direct launch, best remembered config, or auto-tuning * **VRAM Monitoring**: Real-time display of GPU memory usage * **Process Management**: Easily stop and start KoboldCpp instances # Screenshots [https://github.com/Viceman256/KoboldCpp-Smart-Launcher-v1.0.0/blob/main/screenshots/GUI1.png](https://github.com/Viceman256/KoboldCpp-Smart-Launcher-v1.0.0/blob/main/screenshots/GUI1.png) # How to use 1. Clone/download the repository 2. Install dependencies (`pip install -r requirements.txt`) 3. Run the launcher: * GUI: `python koboldcpp_launcher_gui.py` * CLI: `python koboldcpp_launcher.py` 4. Select your GGUF model 5. Choose "Start Auto-Tune" to find the optimal settings 6. Follow the prompts to test different configurations 7. Save successful configurations for future use # What is tensor offloading? Traditional layer offloading moves entire transformer layers between CPU and GPU. Tensor offloading is more granular, it selectively keeps specific tensors (like FFN up/down/gate) on CPU while offloading computation-intensive tensors to GPU. This approach makes better use of your available hardware and can dramatically improve performance. # Full disclosure: Developed with AI assistance I want to be completely transparent that this project was developed with significant AI assistance (primarily Claude & Gemini). While I designed the concept and functionality, much of the code was generated and refined through AI assistance. **This means:** * The code works but may benefit from review by more experienced developers * There might be better ways to implement some features * I'm open to suggestions and improvements from the community I believe in the potential of this approach, and I hope this tool helps more people run LLMs efficiently on their hardware. # Installation Check the [GitHub repository]() for full installation instructions. Basic setup: # Install dependencies pip install -r requirements.txt # Run the launcher python koboldcpp_launcher_gui.py # or koboldcpp_launcher.py for CLI # Feedback Welcomed! 
I'd really appreciate: * Your experiences using the launcher * Performance improvements you've achieved * Suggestions for improvements * Code reviews/optimization suggestions * Reports of any bugs you encounter Thank you for checking out this project!
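For context, here is a rough sketch of the kind of launch the auto-tuner searches over: keep the FFN weight tensors on CPU while everything else goes to GPU. The `--overridetensors` flag name and the regex are assumptions based on this post; check `python koboldcpp.py --help` on your build before relying on them.

```python
# Sketch only: launch KoboldCpp with a tensor-override regex. Flag name and
# regex are assumptions; model file name is hypothetical.
import subprocess

model = "Qwen3-32B-Q4_K_M.gguf"                  # hypothetical model file
override = r"\.ffn_(up|down|gate)\.weight=CPU"   # illustrative: keep FFN tensors on CPU

subprocess.run([
    "python", "koboldcpp.py",
    "--model", model,
    "--gpulayers", "99",               # try to put all layers on GPU...
    "--overridetensors", override,     # ...except tensors matched by the regex
    "--contextsize", "8192",
])
```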
2025-05-14T05:08:05
https://www.reddit.com/r/LocalLLaMA/comments/1km75qq/koboldcpp_smart_launcher_gui_cli_tools_for_tensor/
viceman256
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km75qq
false
null
t3_1km75qq
/r/LocalLLaMA/comments/1km75qq/koboldcpp_smart_launcher_gui_cli_tools_for_tensor/
false
false
self
34
null
US issues worldwide restriction on using Huawei AI chips
206
2025-05-14T05:17:14
https://asia.nikkei.com/Spotlight/Huawei-crackdown/US-issues-worldwide-restriction-on-using-Huawei-AI-chips
fallingdowndizzyvr
asia.nikkei.com
1970-01-01T00:00:00
0
{}
1km7azf
false
null
t3_1km7azf
/r/LocalLLaMA/comments/1km7azf/us_issues_worldwide_restriction_on_using_huawei/
false
false
https://external-preview…69603c83b05a0718
206
{'enabled': False, 'images': [{'id': 'soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?width=108&crop=smart&auto=webp&s=35f7607b8e88bc9fc8f74e56e9cea7680ac318a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?width=216&crop=smart&auto=webp&s=d5a3c0696cbd5e520f0137bbc443371b6ec4676b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?width=320&crop=smart&auto=webp&s=7ae2fe0c7c49e995e33960ac2a6b9d0121911fa9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?width=640&crop=smart&auto=webp&s=84f437471c1c3dd9791d2e3dd486dbfff8b54094', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?width=960&crop=smart&auto=webp&s=e59ddadedb074fe99ed2c91f11bbfda26c4ac303', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?width=1080&crop=smart&auto=webp&s=0a199b4642d903743852ef9503b58caec4aa7adf', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/soYDsx1CxZzYVuQCW5jcyDs7LrLivdc870--Rv91s1Y.jpeg?auto=webp&s=12340e1cc311eb31700a558ab8b92bd087ed90df', 'width': 2520}, 'variants': {}}]}
Getting low similarity scores on Gemini and OpenAI embedding models compared to Open Source Models
1
[removed]
2025-05-14T05:22:58
https://www.reddit.com/r/LocalLLaMA/comments/1km7eaf/getting_low_similarity_scores_on_gemini_and/
Inevitable-Ad-2562
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km7eaf
false
null
t3_1km7eaf
/r/LocalLLaMA/comments/1km7eaf/getting_low_similarity_scores_on_gemini_and/
false
false
self
1
null
Getting low similarity scores on Gemini and OpenAI embedding models compared to Open Source Models
4
I was running multilingual-e5-large-instruct on my local using Ollama for embedding. For most of the relevant queries the embedding was returning higher similarity scores (>0.75). But I embedded the chunks and the query again with text-embedding-004 and text-embedding-3-large both of them return much lesser similarity scores (\~0.6) and also less relevant chunks. Why is this the case? I want to switch to a model which can be accessed via APIs or cheaper to host on my own Here's an example with Gemini: query: "In pubg how much time a round takes" similarity: 0.631454 chunk: 'PUBG Corporation has run several small tournaments and introduced in-game tools to help with broadcasting the game to spectators, as they wish for it to become a popular esport. It has sold over 75 million copies on personal computers and game consoles, is the best-selling game on PC and on Xbox One, and is the fifth best-selling video game of all time. Until Q3 2022, the game has accumulated $13 billion in worldwide revenue, including from the more successful mobile version of the game, and it is considered to be one of the highest-grossing video games of all time.GameplayPUBG is' Here's an example with multilingual-e5-large-instruct: query: in pubg how much time a round takes? similarity: 0.795082, chunk: 'red and bombed, posing a threat to players who remain in that area.\[5\] In both cases, players are warned a few minutes before these events, giving them time to relocate to safety.\[6\] A plane will fly over various parts of the playable map occasionally at random, or wherever a player uses a flare gun, and drop a loot package, containing items which are typically unobtainable during normal gameplay. These packages emit highly visible red smoke, drawing interested players near it and creating further confrontations.\[1\]\[7\] On average, a full round takes no more than 30 minutes.\[6\]At the completion of each round,' },
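One thing worth double-checking: multilingual-e5-large-instruct expects an instruction-prefixed query while the chunks are embedded as plain text, so the two APIs are not being compared under identical conditions unless the prompt formats match. Below is a minimal sketch of reproducing the local comparison with sentence-transformers; the "Instruct:/Query:" format follows the e5-instruct model card as I recall it, so treat it as an assumption.

```python
# Minimal sketch: embed a query (with the e5-instruct prefix) and some chunks,
# then compare cosine similarities. Prefix format is an assumption to verify
# against the model card.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

task = "Given a web search query, retrieve relevant passages that answer the query"
query = f"Instruct: {task}\nQuery: in pubg how much time a round takes?"
chunks = [
    "On average, a full round takes no more than 30 minutes.",
    "PUBG Corporation has run several small tournaments ...",
]

q_emb = model.encode(query, normalize_embeddings=True)
c_emb = model.encode(chunks, normalize_embeddings=True)
print(util.cos_sim(q_emb, c_emb))  # higher = more relevant
```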
2025-05-14T05:25:15
https://www.reddit.com/r/LocalLLaMA/comments/1km7fm3/getting_low_similarity_scores_on_gemini_and/
Ok_Jacket3710
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km7fm3
false
null
t3_1km7fm3
/r/LocalLLaMA/comments/1km7fm3/getting_low_similarity_scores_on_gemini_and/
false
false
self
4
null
Is there such a thing as a RTX 4070 Ti Super with modded RAM?
1
[removed]
2025-05-14T05:47:59
https://www.reddit.com/r/LocalLLaMA/comments/1km7sga/is_there_such_a_thing_as_a_rtx_4070_ti_super_with/
jair_r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km7sga
false
null
t3_1km7sga
/r/LocalLLaMA/comments/1km7sga/is_there_such_a_thing_as_a_rtx_4070_ti_super_with/
false
false
self
1
null
LM studio on remote
1
[removed]
2025-05-14T06:02:15
https://www.reddit.com/r/LocalLLaMA/comments/1km805w/lm_studio_on_remote/
HappyFaithlessness70
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km805w
false
null
t3_1km805w
/r/LocalLLaMA/comments/1km805w/lm_studio_on_remote/
false
false
self
1
null
Embrace the jank (2x5090)
126
I just got a second 5090 to add to my 4x3090 setup, as they have come down in price and are available in my country now. Only to notice the Gigabyte model is way too long for this mining rig. ROPs are good luckily; these seem like later batches. Cable temps look good, but I have the 5090 power limited to 400 W and the 3090s to 250 W.
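For reference, the power limits mentioned can be applied with nvidia-smi; a minimal sketch wrapping the calls is below (GPU indices are illustrative and the command needs admin rights).

```python
# Sketch: set per-GPU power limits as described above. Indices are examples;
# map them with `nvidia-smi -L` first. Requires root/admin privileges.
import subprocess

limits = {0: 400, 1: 400, 2: 250, 3: 250, 4: 250, 5: 250}  # index -> watts
for idx, watts in limits.items():
    subprocess.run(["nvidia-smi", "-i", str(idx), "-pl", str(watts)], check=True)
```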
2025-05-14T06:04:35
https://www.reddit.com/gallery/1km81fb
bullerwins
reddit.com
1970-01-01T00:00:00
0
{}
1km81fb
false
null
t3_1km81fb
/r/LocalLLaMA/comments/1km81fb/embrace_the_jank_2x5090/
false
false
https://external-preview…c31ff6b88f349c0c
126
{'enabled': True, 'images': [{'id': 'E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?width=108&crop=smart&auto=webp&s=69b9488f65adf52b6694a6fa1da67ea19a56bbec', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?width=216&crop=smart&auto=webp&s=aa8ffb7cfc2bdb636dfd40b0999bc1df022daa49', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?width=320&crop=smart&auto=webp&s=8ae2d03cf4d14f9c30352dfb14ed0bc22f3c849f', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?width=640&crop=smart&auto=webp&s=3550e05f24088bc37f358744f2b8d324c6207068', 'width': 640}, {'height': 1280, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?width=960&crop=smart&auto=webp&s=bac28eed4e1dc22f894e10d177ba779ea644adcd', 'width': 960}, {'height': 1440, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?width=1080&crop=smart&auto=webp&s=c871347ded63b5191d161c904c06804178e71a73', 'width': 1080}], 'source': {'height': 5712, 'url': 'https://external-preview.redd.it/E_uF8bYPAY2RyGg_EbX05IxfyM8iqcKYDZnPrNcsqUo.jpeg?auto=webp&s=67be9402b83d4ca83ec498ffaf3efdb4e548116b', 'width': 4284}, 'variants': {}}]}
On-Device AgentCPM-GUI is Now Open-Source
1
[removed]
2025-05-14T06:13:54
https://www.reddit.com/r/LocalLLaMA/comments/1km86at/ondevice_agentcpmgui_is_now_opensource/
Lynncc6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km86at
false
null
t3_1km86at
/r/LocalLLaMA/comments/1km86at/ondevice_agentcpmgui_is_now_opensource/
false
false
self
1
null
On-Device AgentCPM-GUI is Now Open-Source
68
Key Features:
- 1st open-source GUI agent fine-tuned for Chinese apps
- RFT-enhanced reasoning abilities
- Compact action-space design
- High-quality GUI grounding
2025-05-14T06:17:31
https://v.redd.it/9k8szctowo0f1
Lynncc6
v.redd.it
1970-01-01T00:00:00
0
{}
1km889x
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9k8szctowo0f1/DASHPlaylist.mpd?a=1749795468%2CYWEwZDg2OGI2ODBhMTY3N2NiNWIzNzI3ZGRjOTc4NzdkMTQ5MDAzMzkyMzZjNDE0ZjNmZWYzZGMyOGM1ODY1OQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/9k8szctowo0f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/9k8szctowo0f1/HLSPlaylist.m3u8?a=1749795468%2CNWZiZjY5MmE1ZjgwY2Y2MjhiYzVhYWFiYjNhMDNkZGRjMTE1ZGYxZjhmZmQ5MTM4MGYzNTQxNWQyOWE1YThmZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9k8szctowo0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1338}}
t3_1km889x
/r/LocalLLaMA/comments/1km889x/ondevice_agentcpmgui_is_now_opensource/
false
false
https://external-preview…5a23a5b714dbf92e
68
{'enabled': False, 'images': [{'id': 'aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?width=108&crop=smart&format=pjpg&auto=webp&s=8baa98b2bcf5cc22f268ccac852429234b10df87', 'width': 108}, {'height': 174, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?width=216&crop=smart&format=pjpg&auto=webp&s=0484f69993cd91d556a528f88828c49915af74b8', 'width': 216}, {'height': 258, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?width=320&crop=smart&format=pjpg&auto=webp&s=9317b69b6b19d706fc51e78b985e3161adf79e85', 'width': 320}, {'height': 516, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?width=640&crop=smart&format=pjpg&auto=webp&s=6235b3736d763a7edbdded7cfbc9eb136685b392', 'width': 640}, {'height': 774, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?width=960&crop=smart&format=pjpg&auto=webp&s=d796cf53de69d8d0c758b9386d2d3d636183e94d', 'width': 960}, {'height': 871, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=295e30cf77c5c7304d9f294869b9742f550db2e5', 'width': 1080}], 'source': {'height': 1622, 'url': 'https://external-preview.redd.it/aDMycXVkdG93bzBmMdd4vZsHqnodJB44bgTX0N7YjbnpSNGmYM_uAYq-hEK7.png?format=pjpg&auto=webp&s=a199c3a443e90dce8fbb635008d0d4f7896fd372', 'width': 2010}, 'variants': {}}]}
Best open source transcription and diarization models out there for German?
1
[removed]
2025-05-14T06:43:01
https://www.reddit.com/r/LocalLLaMA/comments/1km8lnp/best_open_source_transcription_and_diarization/
OstrichSerious6755
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km8lnp
false
null
t3_1km8lnp
/r/LocalLLaMA/comments/1km8lnp/best_open_source_transcription_and_diarization/
false
false
self
1
null
What are some good models I should check out on my MBP with M3 Pro (18GB mem)?
0
I have 18 GB of memory. I've been running Mistral's 7B model. It hallucinates pretty badly, to the point that it becomes unusable. What are some models that you have found run really well on your M3 Pro chip? With so many new models launching, I find it really hard to keep up.
2025-05-14T06:55:34
https://www.reddit.com/r/LocalLLaMA/comments/1km8s9f/what_are_some_good_models_i_should_check_out_on/
Professional_Field79
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km8s9f
false
null
t3_1km8s9f
/r/LocalLLaMA/comments/1km8s9f/what_are_some_good_models_i_should_check_out_on/
false
false
self
0
null
Zenbook S16 or alternative with more Ram
3
Hey there! I'm currently testing and fiddling a lot with local LLMs. I need a new laptop that can also handle AV1 encode in hardware, and I want to test more with local LLMs, mainly using Continue in VS Code. The catch I seem to run into is that there are no laptops with the Ryzen AI series that have affordable or upgradeable RAM. I've looked into the Zenbook S16 with 32 GB of RAM for a while now, and I like the overall specs besides the RAM. Any tips on an alternative? Or am I overthinking it? Willing to spend around 2k.
2025-05-14T07:00:22
https://www.reddit.com/r/LocalLLaMA/comments/1km8uuk/zenbook_s16_or_alternative_with_more_ram/
PresentationSolid643
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km8uuk
false
null
t3_1km8uuk
/r/LocalLLaMA/comments/1km8uuk/zenbook_s16_or_alternative_with_more_ram/
false
false
self
3
null
LLM response doesn't end on mobile app
0
I am using an open-source app called PocketPal to use small LLMs locally on my phone. The first one I tried is Qwen3-4B-Q4_K_M. Because I want to include the thinking, I increased the output length; otherwise it cuts off halfway. Now it works well, at quite a usable speed, given it's running locally on a mobile phone. The issue is that when it has given the final answer, it keeps on generating parts of the final answer and breaks down after that, constantly repeating. I haven't changed any other settings. Is there something obvious I missed or can change to fix this? Thanks!
2025-05-14T08:14:11
https://www.reddit.com/r/LocalLLaMA/comments/1km9vra/llm_response_doesnt_end_on_mobile_app/
__ThrowAway__123___
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km9vra
false
null
t3_1km9vra
/r/LocalLLaMA/comments/1km9vra/llm_response_doesnt_end_on_mobile_app/
false
false
self
0
null
llama.cpp for idiots. An easy way to get models?
0
Persuaded by the number of people saying we should use llama.cpp instead of ollama, I gave it a go. First I had to download it. I am on a CPU-only machine, so I went to [https://github.com/ggml-org/llama.cpp/releases](https://github.com/ggml-org/llama.cpp/releases) and downloaded and unzipped [https://github.com/ggml-org/llama.cpp/releases/download/b5372/llama-b5372-bin-ubuntu-x64.zip](https://github.com/ggml-org/llama.cpp/releases/download/b5372/llama-b5372-bin-ubuntu-x64.zip). This comes with no README, but I went into the build directory and ran ./llama-cli -h. This makes it clear I need a local GGUF file. My immediate goal is to run a good version of qwen3:14b. Is there an easy tool to find models that will fit into my RAM/hardware? For ollama I would just look at [https://www.ollama.com/library](https://www.ollama.com/library).
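One low-friction route (a sketch, with repo/file names as examples rather than recommendations) is to pull a GGUF straight from Hugging Face with huggingface_hub and point llama-cli at it. Pick a quant that fits your RAM; a Q4_K_M of a 14B model is roughly 9 GB.

```python
# Sketch: fetch a Qwen3-14B GGUF from Hugging Face, then run it with the
# llama-cli binary from the release zip. Repo id and file name are assumptions
# to confirm on the Hub before use.
from huggingface_hub import hf_hub_download
import subprocess

path = hf_hub_download(
    repo_id="Qwen/Qwen3-14B-GGUF",        # assumed repo id
    filename="Qwen3-14B-Q4_K_M.gguf",     # assumed file name within the repo
)
subprocess.run(["./llama-cli", "-m", path, "-p", "Hello"], check=True)
```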
2025-05-14T08:18:58
https://www.reddit.com/r/LocalLLaMA/comments/1km9y2h/llamacpp_for_idiots_an_easy_way_to_get_models/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1km9y2h
false
null
t3_1km9y2h
/r/LocalLLaMA/comments/1km9y2h/llamacpp_for_idiots_an_easy_way_to_get_models/
false
false
self
0
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=216&crop=smart&auto=webp&s=a4159f87f341337a34069632ee0d5b75fa4e7042', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=320&crop=smart&auto=webp&s=b105a2c86f91fee19ce34c791a1b984348b68452', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=640&crop=smart&auto=webp&s=ae5173c455a88bb40bed1198799c0db65ff470d0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=960&crop=smart&auto=webp&s=d014791efbd4c8d05fd305a8b7842b029f22d83e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=1080&crop=smart&auto=webp&s=9addd19259612948921416b6f5bf04bd5191f933', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?auto=webp&s=db9ea157807723165a59f5f8694d9a5016d60d0f', 'width': 1280}, 'variants': {}}]}
Created a chat UI for Local Use Cases, Need feedback
1
[removed]
2025-05-14T08:25:47
https://v.redd.it/27q63q5pjp0f1
Desperate_Rub_1352
v.redd.it
1970-01-01T00:00:00
0
{}
1kma1ex
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/27q63q5pjp0f1/DASHPlaylist.mpd?a=1749803163%2CZWY5NjQwZGU4YTJmYTAwMjFmYmRkNzU2Zjc0ZWExMzQ2NTFkMzI0YTc1YzI5ZGRjNzM0MWQ0ZDFlMzQzOGIwMA%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/27q63q5pjp0f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/27q63q5pjp0f1/HLSPlaylist.m3u8?a=1749803163%2CZjhiY2QyMTk3NzU5NDJjNzFiYTYwMzA3ZTEyNzIwYTc3NWExZGQ3ZmM3ZjI1ZWNmNzJhNzc0Nzk4NWYxMGY2MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/27q63q5pjp0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1906}}
t3_1kma1ex
/r/LocalLLaMA/comments/1kma1ex/created_a_chat_ui_for_local_use_cases_need/
false
false
https://external-preview…b4f014b4f3c687bb
1
{'enabled': False, 'images': [{'id': 'MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?width=108&crop=smart&format=pjpg&auto=webp&s=55a20a5fab386283ca78a8b9edc2290f2352cda0', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?width=216&crop=smart&format=pjpg&auto=webp&s=456657886aa4834189762f01f63eb7b2403a610a', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?width=320&crop=smart&format=pjpg&auto=webp&s=8116acfe00f50e10d3a88268b5f157d08c6245d4', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?width=640&crop=smart&format=pjpg&auto=webp&s=79c6ae876aa518a1eb1fa6b5479632cfa174ce87', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?width=960&crop=smart&format=pjpg&auto=webp&s=60ae3dd04eb58c1bdd994efbdd41da38b37bc8b5', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=15d62a8af315b5fe55c7812ab5dedcfe6532ac21', 'width': 1080}], 'source': {'height': 1624, 'url': 'https://external-preview.redd.it/MDV5dG5yNXBqcDBmMTOH5J5AQpLIolLNsYtUN9ML-s1U5-NfaBzMV_NGuHL7.png?format=pjpg&auto=webp&s=29003fd0956e86fbb5c42015f56cb6ab84125f09', 'width': 2866}, 'variants': {}}]}
[D] How `thinking_budget` effect in Qwen3?
1
[removed]
2025-05-14T08:32:02
https://www.reddit.com/r/LocalLLaMA/comments/1kma4l6/d_how_thinking_budget_effect_in_qwen3/
Logical_Divide_3595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kma4l6
false
null
t3_1kma4l6
/r/LocalLLaMA/comments/1kma4l6/d_how_thinking_budget_effect_in_qwen3/
false
false
self
1
null
[D] How `thinking_budget` effect in Qwen3?
2
After we set `thinking_budget`, will Qwen3 try to consume `thinking_budget` thinking tokens, or is it just a maximum limit? `thinking_budget` only exists in Qwen's official API documentation; it does not exist in open-source inference libraries. Below is the text from the Qwen3 technical report. > Thinking Control: This involves the integration of two distinct modes, namely the “non-thinking” and “thinking” modes, providing users with the flexibility to choose whether the model should engage in reasoning or not, and to control the depth of thinking by specifying a token budget for the thinking process.
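As far as I can tell from the wording above, the budget acts as a cap rather than a target: thinking is cut off once the budget is reached, not padded out to it. A hedged sketch of passing it through an OpenAI-compatible client follows; the endpoint, model id, and the exact `extra_body` keys are assumptions to verify against Qwen's official docs.

```python
# Hedged sketch: parameter names follow Qwen's API documentation as quoted in
# this post (thinking_budget) plus the commonly documented enable_thinking
# switch; endpoint and model id are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)
resp = client.chat.completions.create(
    model="qwen3-32b",  # assumed model id
    messages=[{"role": "user", "content": "Briefly explain KV caching."}],
    extra_body={"enable_thinking": True, "thinking_budget": 512},  # budget acts as a cap
)
print(resp.choices[0].message.content)
```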
2025-05-14T08:33:19
https://www.reddit.com/r/LocalLLaMA/comments/1kma57b/d_how_thinking_budget_effect_in_qwen3/
Logical_Divide_3595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kma57b
false
null
t3_1kma57b
/r/LocalLLaMA/comments/1kma57b/d_how_thinking_budget_effect_in_qwen3/
false
false
self
2
null
Found a pretty good cline-compatible Qwen3 MoE for Apple Silicon
22
I regularly test new models appearing in ollama's directory for use on my Mac M2 Ultra. Sparse models generate tokens faster on Apple Silicon, so MoEs are the models I target. [mychen76/qwen3_cline_roocode:30b](https://www.ollama.com/mychen76/qwen3_cline_roocode:30b) is a MoE of Qwen3 and so far it has performed very well. The same user has also produced a 128k-context-window version (non-MoE), but this does not (yet) load on ollama. Just FYI, since I often use stuff from here and often forget to give feedback.
2025-05-14T08:36:49
https://www.reddit.com/r/LocalLLaMA/comments/1kma6tm/found_a_pretty_good_clinecompatible_qwen3_moe_for/
FluffyGoatNerder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kma6tm
false
null
t3_1kma6tm
/r/LocalLLaMA/comments/1kma6tm/found_a_pretty_good_clinecompatible_qwen3_moe_for/
false
false
self
22
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]}
What local model and strategies should I use to generate reports?
1
Hello, I have been looking for solutions for generating reports for finished projects at work. By this I mean that I have a couple dozen PDFs (actually a lot of PowerPoints, but I can convert them), and I want to create a report (<20 pages) following a clear structure for which I can provide an example or template. I have been looking at RAG and whatnot (webui, kotaemon...), but it seems more suited for Q&A than other tasks? Maybe I have to use stuff like GROBID, or maybe Apache Tika followed by some LLM via llama.cpp for the local semantic search and later injecting into a loose template? Frankly, this type of application seems very logical for LLMs, plus being very marketable to business, but I haven't found anything specific. Thanks in advance.
2025-05-14T08:40:27
https://www.reddit.com/r/LocalLLaMA/comments/1kma8id/what_local_model_and_strategies_should_i_use_to/
Ok_Appeal8653
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kma8id
false
null
t3_1kma8id
/r/LocalLLaMA/comments/1kma8id/what_local_model_and_strategies_should_i_use_to/
false
false
self
1
null
All Qwen3 benchmarks from the technical report in two tables (thinking and non-thinking)
1
[removed]
2025-05-14T08:46:59
https://www.reddit.com/r/LocalLLaMA/comments/1kmabk8/all_qwen3_benchmarks_from_the_technical_report_in/
BigPoppaK78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmabk8
false
null
t3_1kmabk8
/r/LocalLLaMA/comments/1kmabk8/all_qwen3_benchmarks_from_the_technical_report_in/
false
false
self
1
{'enabled': False, 'images': [{'id': '7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=108&crop=smart&auto=webp&s=07bfd7cb9b109d6e68bdda4375a51ec55b717092', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=216&crop=smart&auto=webp&s=7ade7ddd0a17d9d196c4c2e311fbfe6751aba5e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=320&crop=smart&auto=webp&s=534b6f0d1b498f6c12225e42dba71819ea1c0d7c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=640&crop=smart&auto=webp&s=2d3aacdbf17fe95c34c81a6161c4e6731a787cb6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=960&crop=smart&auto=webp&s=460d55bdeb5720874ef35110950adc96f6cf5351', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=1080&crop=smart&auto=webp&s=9f287564911de1d53e4acc24513cd002461a82d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?auto=webp&s=a76868006904ad57ca8c533b33cae9eec085d2fd', 'width': 1200}, 'variants': {}}]}
Seeking Guidance: Integrating RealtimeTTS with dia-1.6B or OrpheusTTS for Arabic Conversational AI
1
[removed]
2025-05-14T08:53:07
https://www.reddit.com/r/LocalLLaMA/comments/1kmaekb/seeking_guidance_integrating_realtimetts_with/
No-Reindeer-9968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmaekb
false
null
t3_1kmaekb
/r/LocalLLaMA/comments/1kmaekb/seeking_guidance_integrating_realtimetts_with/
false
false
self
1
null
Benchmarking models with a custom QA dataset - what's the best workflow?
2
There are plenty of models available, and even for a single model, there are quite a few different settings to tinker with. I’d like to evaluate and benchmark them using my own question-and-answer dataset. My example use case is to test different quantized versions of a vision model with specific questions about a small set of images and compare the answers to the expected ones. I believe this process could be automated. Is there any tool or framework that allows working with a custom set of questions or tasks for each model and setting, and then compares how well each specific model or configuration performs? Please share what you're using and what works best for you.
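In case a quick DIY harness is enough, here is a minimal sketch that loops a custom QA set over several OpenAI-compatible local endpoints and records the answers for side-by-side comparison. Endpoint URLs and model names are placeholders; scoring (exact match vs. LLM-as-judge) and image inputs for vision models are left out.

```python
# Minimal DIY benchmark harness sketch: same QA set, several local
# OpenAI-compatible servers (llama.cpp, vLLM, Ollama, ...). URLs and model
# names are placeholders.
import json
from openai import OpenAI

QA = [{"q": "What colour is the sky on a clear day?", "expected": "blue"}]
CONFIGS = [
    {"name": "model-q4", "base_url": "http://localhost:8080/v1", "model": "model-q4"},
    {"name": "model-q8", "base_url": "http://localhost:8081/v1", "model": "model-q8"},
]

results = []
for cfg in CONFIGS:
    client = OpenAI(base_url=cfg["base_url"], api_key="none")
    for item in QA:
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": item["q"]}],
            temperature=0,  # keep runs comparable
        )
        results.append({
            "config": cfg["name"],
            "q": item["q"],
            "expected": item["expected"],
            "answer": resp.choices[0].message.content,
        })

print(json.dumps(results, indent=2))
```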
2025-05-14T09:26:09
https://www.reddit.com/r/LocalLLaMA/comments/1kmauvs/benchmarking_models_with_a_custom_qa_dataset/
DevilaN82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmauvs
false
null
t3_1kmauvs
/r/LocalLLaMA/comments/1kmauvs/benchmarking_models_with_a_custom_qa_dataset/
false
false
self
2
null
Announcing MAESTRO: A Local-First AI Research App! (Plus some benchmarks)
183
Hey r/LocalLLaMA! I'm excited to introduce **MAESTRO** (Multi-Agent Execution System & Tool-driven Research Orchestrator), an AI-powered research application designed for deep research tasks, with a strong focus on local control and capabilities. You can set it up locally to conduct comprehensive research using your own document collections and your choice of local or API-based LLMs. **GitHub:** [MAESTRO on GitHub](https://github.com/murtaza-nasir/maestro) MAESTRO offers a modular framework with document ingestion, a powerful Retrieval-Augmented Generation (RAG) pipeline, and a multi-agent system (Planning, Research, Reflection, Writing) to tackle complex research questions. You can interact with it via a Streamlit Web UI or a command-line interface. # Key Highlights: * **Local Deep Research:** Run it on your own machine. * **Your LLMs:** Configure and use local LLM providers. * **Powerful RAG:** Ingest your PDFs into a local, queryable knowledge base with hybrid search. * **Multi-Agent System:** Let AI agents collaborate on planning, information gathering, analysis, and report synthesis. * **Batch Processing:** Create batch jobs with multiple research questions. * **Transparency:** Track costs and resource usage. # LLM Performance & Benchmarks: We've put a lot of effort into evaluating LLMs to ensure MAESTRO produces high-quality, factual reports. We used a panel of "verifier" LLMs to assess the performance of various models (including popular local options) in key research and writing tasks. These benchmarks helped us identify strong candidates for different agent roles within MAESTRO, balancing performance on tasks like note generation and writing synthesis. While our evaluations included a mix of API-based and self-hostable models, we've provided specific recommendations and considerations for local setups in our documentation. You can find all the details on our evaluation methodology, the full benchmark results (including performance heatmaps), and our model recommendations in the `VERIFIER_AND_MODEL_FINDINGS.md` file within the repository. For the future, we plan to improve the UI to move away from streamlit and create better documentation, in addition to improvements and additions in the agentic research framework itself. We'd love for you to check out the [project on GitHub](https://github.com/murtaza-nasir/maestro), try it out, and share your feedback! We're especially interested in hearing from the LocalLLaMA community on how we can make it even better for local setups.
2025-05-14T09:35:43
https://www.reddit.com/gallery/1kmaztr
hedonihilistic
reddit.com
1970-01-01T00:00:00
0
{}
1kmaztr
false
null
t3_1kmaztr
/r/LocalLLaMA/comments/1kmaztr/announcing_maestro_a_localfirst_ai_research_app/
false
false
https://external-preview…aa2f5d47e2a6c03a
183
{'enabled': True, 'images': [{'id': 'bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ', 'resolutions': [{'height': 162, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?width=108&crop=smart&auto=webp&s=9b3fd385cff2b5b74808ada5502a353a8bc963a7', 'width': 108}, {'height': 324, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?width=216&crop=smart&auto=webp&s=e32d7632d93f09f6472606ccc467c5637e0d7823', 'width': 216}, {'height': 480, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?width=320&crop=smart&auto=webp&s=fab6e20d4a3f6bc4f76cd4680eddc3be0a9fe5f4', 'width': 320}, {'height': 960, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?width=640&crop=smart&auto=webp&s=665b9b97c2e20b7fb7b2b64d0b32539fb52568c4', 'width': 640}, {'height': 1440, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?width=960&crop=smart&auto=webp&s=29437c3ab113f97883285336aa728bff08a1c28d', 'width': 960}, {'height': 1620, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?width=1080&crop=smart&auto=webp&s=45d14f6c834dc4b37b83ee55cad313254c0471d1', 'width': 1080}], 'source': {'height': 5400, 'url': 'https://external-preview.redd.it/bCy94p-09BFETAgyXqeKanaFoQ80U0YpxsXUwomgXJQ.png?auto=webp&s=0e75310fff0db4c9cbf03e84204b3745c10f8a2a', 'width': 3600}, 'variants': {}}]}
Qwen3 benchmarks from the technical report in two tables (thinking and non-thinking) for easier comparison
1
[removed]
2025-05-14T09:47:25
https://www.reddit.com/r/LocalLLaMA/comments/1kmb5mc/qwen3_benchmarks_from_the_technical_report_in_two/
BigPoppaK78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmb5mc
false
null
t3_1kmb5mc
/r/LocalLLaMA/comments/1kmb5mc/qwen3_benchmarks_from_the_technical_report_in_two/
false
false
self
1
{'enabled': False, 'images': [{'id': '7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=108&crop=smart&auto=webp&s=07bfd7cb9b109d6e68bdda4375a51ec55b717092', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=216&crop=smart&auto=webp&s=7ade7ddd0a17d9d196c4c2e311fbfe6751aba5e9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=320&crop=smart&auto=webp&s=534b6f0d1b498f6c12225e42dba71819ea1c0d7c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=640&crop=smart&auto=webp&s=2d3aacdbf17fe95c34c81a6161c4e6731a787cb6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=960&crop=smart&auto=webp&s=460d55bdeb5720874ef35110950adc96f6cf5351', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?width=1080&crop=smart&auto=webp&s=9f287564911de1d53e4acc24513cd002461a82d4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7m8sE13XGXdIKB1usccJuLxIFDmfJxOx45-ZixmxCyo.png?auto=webp&s=a76868006904ad57ca8c533b33cae9eec085d2fd', 'width': 1200}, 'variants': {}}]}
Multi-Instance GPU (MIG) for tensor parallel possible
2
I have an idea that might be very stupid; I wonder if it is possible at all. I have 5x 3090/4090. I wonder if I can add one RTX 6000 Pro to the setup, then use Nvidia MIG to split the RTX 6000 Pro into 3 instances of 24 GB for 8-GPU tensor parallel. I understand that splitting a GPU into 3 doesn't magically make it 3x. However, tensor parallel with an engine such as vLLM makes the setup run at the speed of the weakest GPU. Given that PCIe 5 and the RTX 6000 Pro's VRAM bandwidth are double those of PCIe 4 and the 3090, could this idea work at all? Most models only do tensor parallel with 4 or 8 GPUs, hence being able to hit 8 GPUs would potentially bring a lot of benefit to my setup.
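For what it's worth, the MIG partitioning itself is driven by nvidia-smi; a hedged sketch of the sequence is below. The profile IDs are placeholders (list the real ones with `nvidia-smi mig -lgip`), and whether vLLM tensor parallel will accept a mix of MIG slices and full GPUs is a separate question this does not answer.

```python
# Hedged sketch of the MIG side only. Profile IDs vary by GPU, so "9,9,9" is a
# placeholder; confirm supported profiles with `nvidia-smi mig -lgip`.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvidia-smi", "-i", "0", "-mig", "1"])        # enable MIG mode on GPU 0 (may need a reset)
run(["nvidia-smi", "mig", "-lgip"])                # list available GPU instance profiles
run(["nvidia-smi", "mig", "-cgi", "9,9,9", "-C"])  # create three instances (placeholder profile id)
```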
2025-05-14T10:35:39
https://www.reddit.com/r/LocalLLaMA/comments/1kmbvfd/multiinstance_gpu_mig_for_tensor_parallel_possible/
Such_Advantage_6949
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmbvfd
false
null
t3_1kmbvfd
/r/LocalLLaMA/comments/1kmbvfd/multiinstance_gpu_mig_for_tensor_parallel_possible/
false
false
self
2
null
LLM - better chunking method
15
Problems with using an LLM to chunk: 1. Time/latency -> it takes time for the LLM to output all the chunks. 2. Hitting the output context window cap -> since you're essentially re-creating entire documents but in chunks, you'll often hit the token capacity of the output window. 3. Cost -> since you're essentially outputting entire documents again, your costs go up. The method below helps with all 3. Method: Step 1: assign an identification number to each and every sentence or paragraph in your document. a) Use a standard Python library to parse the document into chunks of paragraphs or sentences. b) Assign an identification number to each and every sentence. Example sentence: Red Riding Hood went to the shops. She did not like the food that they had there. Example output: <1> Red Riding Hood went to the shops.</1><2>She did not like the food that they had there.</2> Note: this can easily be done with very standard Python libraries that identify sentences. It's very fast. You now have a way to refer to each sentence by a short numeric ID. The LLM will now take advantage of this. Step 2: a) Send the entire document WITH the identification numbers attached to each sentence. b) Tell the LLM “how” you would like it to chunk the material, i.e.: “please keep semantically similar content together”. c) Tell the LLM that you have provided an ID number for each sentence and that you want it to output only the ID numbers, e.g.: chunk 1: 1,2,3 chunk 2: 4,5,6,7,8,9 chunk 3: 10,11,12,13 etc. Step 3: Reconstruct your chunks locally based on the LLM response. The LLM will provide you with the chunks and the sentence IDs that go into each chunk. All you need to do in your script is re-construct them locally. Notes: 1. I did this a couple of years ago using the ORIGINAL Haiku. It never messed up the chunking, so it will definitely work for newer models. 2. Although I only provide 2 sentences in my example, in reality I used this with many, many chunks; for example, I chunked large court cases using this method. 3. It's actually a massive time and token saver. Suddenly a 50-token sentence becomes “1” token... 4. If someone else has already identified this method then please ignore this post :)
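A minimal sketch of steps 1-3 (NLTK is just one sentence splitter option; the LLM reply is stubbed with a placeholder string standing in for whatever client you use):

```python
# Sketch of the ID-based chunking: tag sentences, ask the LLM for groups of
# IDs only, rebuild chunks locally.
import re
import nltk

nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)  # needed on newer NLTK versions

def tag_sentences(text):
    sents = nltk.sent_tokenize(text)
    tagged = "".join(f"<{i}>{s}</{i}>" for i, s in enumerate(sents, 1))
    return sents, tagged

def parse_groups(llm_output):
    # Expect lines like "chunk 1: 1,2,3"; keep only the ID lists after the colon.
    return [[int(n) for n in re.findall(r"\d+", line.split(":", 1)[1])]
            for line in llm_output.splitlines() if ":" in line]

def rebuild(sents, groups):
    return [" ".join(sents[i - 1] for i in ids) for ids in groups]

sents, tagged = tag_sentences(
    "Red Riding Hood went to the shops. She did not like the food that they had there."
)
prompt = ("Group the numbered sentences into semantically coherent chunks. "
          "Answer only with lines like 'chunk 1: 1,2'.\n" + tagged)
llm_output = "chunk 1: 1,2"   # placeholder for the model's reply to `prompt`
print(rebuild(sents, parse_groups(llm_output)))
```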
2025-05-14T11:07:03
https://www.reddit.com/r/LocalLLaMA/comments/1kmcdyt/llm_better_chunking_method/
Phoenix2990
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmcdyt
false
null
t3_1kmcdyt
/r/LocalLLaMA/comments/1kmcdyt/llm_better_chunking_method/
false
false
self
15
null
Local AI automation pipelines
2
Just wondering what you use for AI automation pipelines that run locally? Something like [make.com](http://make.com) or vectorshift.ai? I want to run a few routine tasks with an LLM, but do not want to run them in a public cloud.
2025-05-14T11:53:51
https://www.reddit.com/r/LocalLLaMA/comments/1kmd7o5/local_ai_automation_pipelines/
mancubus77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmd7o5
false
null
t3_1kmd7o5
/r/LocalLLaMA/comments/1kmd7o5/local_ai_automation_pipelines/
false
false
self
2
null
Searching for the most generous (in limits) fully managed Retrieval-Augmented Generation (RAG) service provider
3
I need projects like SciPhi's R2R ([https://github.com/SciPhi-AI/R2R](https://github.com/SciPhi-AI/R2R)), but the cloud limits are too tight for what I need. Are there any other options or projects out there that do similar things without those limits? I would really appreciate any suggestions or tips! Thanks!
2025-05-14T11:54:51
https://www.reddit.com/r/LocalLLaMA/comments/1kmd8ca/searching_a_most_generousin_limits_fully_managed/
dagm10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmd8ca
false
null
t3_1kmd8ca
/r/LocalLLaMA/comments/1kmd8ca/searching_a_most_generousin_limits_fully_managed/
false
false
self
3
{'enabled': False, 'images': [{'id': 'UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?width=108&crop=smart&auto=webp&s=35fd60419df55a4cd8ea2ae0fd63c949e37bc2ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?width=216&crop=smart&auto=webp&s=f5a9d0bc7bc2fbc0a58167a47862354d0c5ccdd7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?width=320&crop=smart&auto=webp&s=a45db123469d61f7eb812b7df5241923e72e784a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?width=640&crop=smart&auto=webp&s=63167ad2ba7240e3644d97a3f69d52192608e74f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?width=960&crop=smart&auto=webp&s=56f37e4334eb17ca827c796905bf3baa4cd66494', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?width=1080&crop=smart&auto=webp&s=5d4fe06223f079f1d9863ac005609bcc9966595e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/UOhnBWfT4I_RPpQ213KDUagE9P3TINcyu_J2b-Hif5k.png?auto=webp&s=fe2e3a38ded4f5595f3bd70c1d3860baa00e2997', 'width': 1200}, 'variants': {}}]}
Browser with tor for onion sites
1
[removed]
2025-05-14T11:55:11
https://www.reddit.com/r/LocalLLaMA/comments/1kmd8jv/browser_with_tor_for_onion_sites/
chillax9041
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmd8jv
false
null
t3_1kmd8jv
/r/LocalLLaMA/comments/1kmd8jv/browser_with_tor_for_onion_sites/
false
false
self
1
null
What does llama.cpp's http server's file-upload button do?
1
Does it simply concatenate the file and my direct prompt, treating the concatenation as the prompt? I'm using Llama 3.2 3B Q4_K_S, but in case my suspicion above is true, that does not matter, as no model would yield reliable results. What I want to do is ask questions about a file's contents. In my 15 experiments, sometimes the question about the file's contents is answered correctly, but sometimes it interprets the contents of the file instead of answering my query. (Bonus: I would like the results to be reproducible, i.e. when I open a new conversation and give it the same prompts, I would like to get the same answers.)
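On the reproducibility part: a hedged sketch of hitting llama-server's native /completion endpoint with temperature 0 and a fixed seed, which in a single-slot setup tends to make runs repeatable. Field names follow llama.cpp's server docs as I remember them, so double-check against your build's README.

```python
# Sketch for reproducible runs against llama-server's /completion endpoint.
# Port, prompt, and field names are assumptions to verify for your build.
import requests

payload = {
    "prompt": "Summarise the attached notes in one sentence:\n<file contents here>\n",
    "temperature": 0,
    "seed": 42,
    "n_predict": 128,
}
r = requests.post("http://localhost:8080/completion", json=payload, timeout=120)
print(r.json()["content"])
```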
2025-05-14T11:56:31
https://www.reddit.com/r/LocalLLaMA/comments/1kmd9f9/what_does_llamacpps_http_servers_fileupload/
kdjfskdf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmd9f9
false
null
t3_1kmd9f9
/r/LocalLLaMA/comments/1kmd9f9/what_does_llamacpps_http_servers_fileupload/
false
false
self
1
null
chat.qwen.ai & chat.z.ai has the same UI
1
Both Qwen's and Z's chat interfaces have the same layout and the same menu settings, but they don't seem to mention each other? Or are they using some chat template that others are using as well?
2025-05-14T12:08:10
https://www.reddit.com/r/LocalLLaMA/comments/1kmdhq3/chatqwenai_chatzai_has_the_same_ui/
Sheeple9001
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmdhq3
false
null
t3_1kmdhq3
/r/LocalLLaMA/comments/1kmdhq3/chatqwenai_chatzai_has_the_same_ui/
false
false
self
1
null
recommendations for tools/templates to create MCP hosts, clients and servers
2
MCP servers are perhaps the best-served category, but there's currently so much out there of variable quality that I wanted to check in to see what you have found and which are recommended. Python language preferred.
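For the server side in Python, the official `mcp` SDK's FastMCP helper is probably the shortest path; a minimal sketch is below (the tool itself is just an illustration).

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper
# (`pip install mcp`). The add() tool is illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so an MCP host can spawn it
```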
2025-05-14T12:12:26
https://www.reddit.com/r/LocalLLaMA/comments/1kmdksm/recommendations_for_toolstemplates_to_create_mcp/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmdksm
false
null
t3_1kmdksm
/r/LocalLLaMA/comments/1kmdksm/recommendations_for_toolstemplates_to_create_mcp/
false
false
self
2
null
4 hours to go!
1
[removed]
2025-05-14T12:16:17
https://i.redd.it/00opoqowoq0f1.jpeg
MihirBarve
i.redd.it
1970-01-01T00:00:00
0
{}
1kmdnlt
false
null
t3_1kmdnlt
/r/LocalLLaMA/comments/1kmdnlt/4_hours_to_go/
false
false
default
1
{'enabled': True, 'images': [{'id': '00opoqowoq0f1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=108&crop=smart&auto=webp&s=233124aee75ed9e773334ea9b15f44935b0b18d6', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=216&crop=smart&auto=webp&s=eb11b6266a9eaf0db60bd0e1d8ab9621e28b3d12', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=320&crop=smart&auto=webp&s=c1ff8d3e09e61eb978bfb9becd6418d524e1ebbd', 'width': 320}, {'height': 800, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=640&crop=smart&auto=webp&s=8700ccb1e1b6d5db09fec077d33d6d63668f52ca', 'width': 640}, {'height': 1200, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=960&crop=smart&auto=webp&s=b0d432967c07f1ea771f0361adc38845e770fdf9', 'width': 960}, {'height': 1350, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?width=1080&crop=smart&auto=webp&s=647f202719944e9c8e61bf6c2ee2ac094886d20b', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://preview.redd.it/00opoqowoq0f1.jpeg?auto=webp&s=df70f51a10a7cc8d4a9ec206a6bba1787df80a11', 'width': 1080}, 'variants': {}}]}
best small language model? around 2-10b parameters
52
whats the best small language model for chatting in english only, no need for any type of coding, math or multilingual capabilities, i've seen gemma and the smaller qwen models but are there any better alternatives that focus just on chatting/emotional intelligence? sorry if my question seems stupid i'm still new to this :P
2025-05-14T12:33:08
https://www.reddit.com/r/LocalLLaMA/comments/1kmdzv0/best_small_language_model_around_210b_parameters/
ThatIsNotIllegal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmdzv0
false
null
t3_1kmdzv0
/r/LocalLLaMA/comments/1kmdzv0/best_small_language_model_around_210b_parameters/
false
false
self
52
null
Build DeepSeek architecture from scratch | 20 high quality video lectures
113
[A few notes I made as part of this playlist](https://i.redd.it/of6lxo00sq0f1.gif) Here are the 20 lectures covering everything from Multi-Head Latent Attention to Mixture of Experts. It took me 2 months to finish recording these lectures. One of the most challenging (and also rewarding) thing I have done this year. Until now, we have uploaded 20 lectures in this playlist: (1) DeepSeek series introduction: [https://youtu.be/QWNxQIq0hMo](https://youtu.be/QWNxQIq0hMo) (2) DeepSeek basics: [https://youtu.be/WjhDDeZ7DvM](https://youtu.be/WjhDDeZ7DvM) (3) Journey of a token into the LLM architecture: [https://youtu.be/rkEYwH4UGa4](https://youtu.be/rkEYwH4UGa4) (4) Attention mechanism explained in 1 hour: [https://youtu.be/K45ze9Yd5UE](https://youtu.be/K45ze9Yd5UE) (5) Self Attention Mechanism - Handwritten from scratch: [https://youtu.be/s8mskq-nzec](https://youtu.be/s8mskq-nzec) (6) Causal Attention Explained: Don't Peek into the Future: [https://youtu.be/c6Kkj6iLeBg](https://youtu.be/c6Kkj6iLeBg) (7) Multi-Head Attention Visually Explained: [https://youtu.be/qbN4ulK-bZA](https://youtu.be/qbN4ulK-bZA) (8) Multi-Head Attention Handwritten from Scratch: [https://youtu.be/rvsEW-EsD-Y](https://youtu.be/rvsEW-EsD-Y) (9) Key Value Cache from Scratch: [https://youtu.be/IDwTiS4\_bKo](https://youtu.be/IDwTiS4_bKo) (10) Multi-Query Attention Explained: [https://youtu.be/Z6B51Odtn-Y](https://youtu.be/Z6B51Odtn-Y) (11) Understand Grouped Query Attention (GQA): [https://youtu.be/kx3rETIxo4Q](https://youtu.be/kx3rETIxo4Q) (12) Multi-Head Latent Attention From Scratch: [https://youtu.be/NlDQUj1olXM](https://youtu.be/NlDQUj1olXM) (13) Multi-Head Latent Attention Coded from Scratch in Python: [https://youtu.be/mIaWmJVrMpc](https://youtu.be/mIaWmJVrMpc) (14) Integer and Binary Positional Encodings: [https://youtu.be/rP0CoTxe5gU](https://youtu.be/rP0CoTxe5gU) (15) All about Sinusoidal Positional Encodings: [https://youtu.be/bQCQ7VO-TWU](https://youtu.be/bQCQ7VO-TWU) (16) Rotary Positional Encodings: [https://youtu.be/a17DlNxkv2k](https://youtu.be/a17DlNxkv2k) (17) How DeepSeek exactly implemented Latent Attention | MLA + RoPE: [https://youtu.be/m1x8vA\_Tscc](https://youtu.be/m1x8vA_Tscc) (18) Mixture of Experts (MoE) Introduction: [https://youtu.be/v7U21meXd6Y](https://youtu.be/v7U21meXd6Y) (19) Mixture of Experts Hands on Demonstration: [https://youtu.be/yw6fpYPJ7PI](https://youtu.be/yw6fpYPJ7PI) (20) Mixture of Experts Balancing Techniques: [https://youtu.be/nRadcspta\_8](https://youtu.be/nRadcspta_8) Next up: Multi-Token Prediction (MTP) and Fine-grained quantization.
2025-05-14T12:36:36
https://www.reddit.com/r/LocalLLaMA/comments/1kme2c4/build_deepseek_architecture_from_scratch_20_high/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kme2c4
false
{'oembed': {'author_name': 'Vizuara', 'author_url': 'https://www.youtube.com/@vizuara', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/QWNxQIq0hMo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build DeepSeek from Scratch: Series Introduction"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/QWNxQIq0hMo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build DeepSeek from Scratch: Series Introduction', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kme2c4
/r/LocalLLaMA/comments/1kme2c4/build_deepseek_architecture_from_scratch_20_high/
false
false
https://external-preview…e24996df612e8582
113
{'enabled': False, 'images': [{'id': 'KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s.jpeg?width=108&crop=smart&auto=webp&s=4afe035c944e7e0f78c85fdddb0139a5d0e81848', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s.jpeg?width=216&crop=smart&auto=webp&s=c959ccb27b72355271de06ab5dcbca72c03d0bc4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s.jpeg?width=320&crop=smart&auto=webp&s=66d99eb54310442088ed7f01364f74ed3363b88f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/KAbXE4K5sDdk4MosCKTIZy94mD_n03QyKwLpBwLHH7s.jpeg?auto=webp&s=8338a9d9510bc3bbe984d34b108ff67f13d6c7df', 'width': 480}, 'variants': {}}]}
GitHub - ByteDance-Seed/Seed1.5-VL: Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving state-of-the-art performance on 38 out of 60 public benchmarks.
49
Let's wait for the weights.
2025-05-14T13:12:54
https://github.com/ByteDance-Seed/Seed1.5-VL
foldl-li
github.com
1970-01-01T00:00:00
0
{}
1kmetlw
false
null
t3_1kmetlw
/r/LocalLLaMA/comments/1kmetlw/github_bytedanceseedseed15vl_seed15vl_a/
false
false
https://external-preview…f42e11119f667773
49
{'enabled': False, 'images': [{'id': '0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?width=108&crop=smart&auto=webp&s=3dc722529f18365a867476614805992d6ccf3cb6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?width=216&crop=smart&auto=webp&s=7c6b1bbc763a8c81101e3ef3fb151f7fb8ea8cb7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?width=320&crop=smart&auto=webp&s=fa4f2d40dabce1bb24eb86d291cee155165b3eda', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?width=640&crop=smart&auto=webp&s=08030f75958c411f48f1551b1ab776c4bb0ca72a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?width=960&crop=smart&auto=webp&s=4f35fcc114c47919e02d126ba33db2648010ab62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?width=1080&crop=smart&auto=webp&s=420f421937b34896fc3b520864c42ae6ed569704', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0Gwi4j4952nP4TJd3fepu6BYEfG11JFAepo3FpZAd4E.png?auto=webp&s=67c0f8a226200dbdeec5306a2be040bca28937e3', 'width': 1200}, 'variants': {}}]}
Turn any toolkit into an MCP server in 3 easy steps 🐫🔧
1
[removed]
2025-05-14T13:17:46
https://www.reddit.com/r/LocalLLaMA/comments/1kmexee/turn_any_toolkit_into_an_mcp_server_in_3_easy/
Fluffy_Sheepherder76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmexee
false
null
t3_1kmexee
/r/LocalLLaMA/comments/1kmexee/turn_any_toolkit_into_an_mcp_server_in_3_easy/
false
false
self
1
null
Turn any toolkit into an MCP server
0
If you’ve ever wanted to expose your own toolkit (like an ArXiv search tool, a Wikipedia fetcher, or any custom Python utility) as a lightweight service for CAMEL agents to call remotely, MCP (Model Context Protocol) makes it trivial. Here’s how you can get started in just three steps:

# 1. Wrap & expose your toolkit

* Import your toolkit class (e.g. `ArxivToolkit`)
* Parse `--mode` (stdio│sse│streamable-http) and `--timeout` flags
* Call `run_mcp_server(mode, timeout)` to serve its methods over MCP

# 2. Configure your server launch

* Create a simple JSON config (e.g. `mcp_servers_config.json`)
* Define the command (`python`) and args (`[your_server_script, --mode, stdio, --timeout, 30]`)
* This tells MCPToolkit how to start your server

# 3. Connect, list tools & call them

* In your client code, initialize `MCPToolkit(config_path)`
* `await mcp.connect()`, pick a server, then `list_mcp_tools()`
* Invoke a tool (e.g. `search_papers`) with its params and print the results

That’s it: no heavy HTTP setup, no extra dependencies. Running in **stdio** mode keeps things local and debuggable, and you can swap to SSE or HTTP when you’re ready to scale. A rough sketch of steps 2 and 3 follows below.

**Detailed guide:** [https://www.camel-ai.org/blogs/camel-mcp-servers-model-context-protocol-ai-agents](https://www.camel-ai.org/blogs/camel-mcp-servers-model-context-protocol-ai-agents)
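The sketch below fleshes out steps 2 and 3. The names `run_mcp_server`, `MCPToolkit`, `connect`, `list_mcp_tools`, and `search_papers` come from the description above; the config key, import path, and the `servers` / `call_tool` accessors are my assumptions, so treat the linked guide as the authoritative reference.

```python
# Sketch only: import path, config keys, and some method names are assumptions,
# not verified against the CAMEL source. See the detailed guide linked above.
import asyncio
import json

# --- Step 2: config that tells MCPToolkit how to launch the toolkit server over stdio ---
CONFIG = {
    "mcpServers": {  # key name is an assumption
        "arxiv": {
            "command": "python",
            "args": ["arxiv_toolkit_server.py", "--mode", "stdio", "--timeout", "30"],
        }
    }
}

with open("mcp_servers_config.json", "w") as f:
    json.dump(CONFIG, f, indent=2)

# --- Step 3: connect, list tools, call one ---
async def main():
    from camel.toolkits import MCPToolkit  # import path is an assumption

    mcp = MCPToolkit(config_path="mcp_servers_config.json")
    await mcp.connect()

    server = mcp.servers[0]                 # accessor name is an assumption
    tools = server.list_mcp_tools()         # may be a coroutine in the real SDK
    print("available tools:", tools)

    result = await server.call_tool(        # method name is an assumption
        "search_papers", {"query": "mixture of experts", "max_results": 3}
    )
    print(result)

asyncio.run(main())
```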
2025-05-14T13:19:03
https://www.reddit.com/r/LocalLLaMA/comments/1kmeyfk/turn_any_toolkit_into_an_mcp_server/
Fluffy_Sheepherder76
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmeyfk
false
null
t3_1kmeyfk
/r/LocalLLaMA/comments/1kmeyfk/turn_any_toolkit_into_an_mcp_server/
false
false
self
0
{'enabled': False, 'images': [{'id': 'gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=108&crop=smart&auto=webp&s=10ed01a1382f33933099b924e2555416a77c4890', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=216&crop=smart&auto=webp&s=38fbab4a3f7e85f54877f905d16ffc77305dacab', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=320&crop=smart&auto=webp&s=1144020000058ffffb3c000c55def6c34e72521c', 'width': 320}, {'height': 363, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=640&crop=smart&auto=webp&s=d424646dd7f8d3e0b51c10e733726b409d648da4', 'width': 640}, {'height': 545, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=960&crop=smart&auto=webp&s=514544e8810acc8ccf5b396b048a6c80c4d6d3b6', 'width': 960}, {'height': 613, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?width=1080&crop=smart&auto=webp&s=916dd6eed77694ff02e6a45be07da3736b659f84', 'width': 1080}], 'source': {'height': 1382, 'url': 'https://external-preview.redd.it/gShp0kr2f9lJefeQHWbMJbLOujgZj8czbghkqgDJVow.jpeg?auto=webp&s=8f6654f22cccb5110bdfe16644a0e4470c97a94d', 'width': 2432}, 'variants': {}}]}
Is there a benchmark that shows "prompt processing speed"?
3
I've been checking Artificial Analysis and others, and while they are very adamant about output speed, I've yet to see "input speed". When working with large codebases, I think prompt ingestion speed is VERY important. Are there any benchmarks working on this? Something like "long input, short output".
2025-05-14T13:24:33
https://www.reddit.com/r/LocalLLaMA/comments/1kmf2w9/is_there_a_benchmark_that_shows_prompt_processing/
OmarBessa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmf2w9
false
null
t3_1kmf2w9
/r/LocalLLaMA/comments/1kmf2w9/is_there_a_benchmark_that_shows_prompt_processing/
false
false
self
3
null
best model i can run on mac air m3 16gb ram?
1
[removed]
2025-05-14T13:36:28
https://www.reddit.com/r/LocalLLaMA/comments/1kmfcq5/best_model_i_can_run_on_mac_air_m3_16gb_ram/
Acrobatic-Ad-4211
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmfcq5
false
null
t3_1kmfcq5
/r/LocalLLaMA/comments/1kmfcq5/best_model_i_can_run_on_mac_air_m3_16gb_ram/
false
false
self
1
null
Testing LLMs in prod feels way harder than it should
1
[removed]
2025-05-14T13:50:42
https://www.reddit.com/r/LocalLLaMA/comments/1kmfop9/testing_llms_in_prod_feels_way_harder_than_it/
Aggravating_Job2019
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmfop9
false
null
t3_1kmfop9
/r/LocalLLaMA/comments/1kmfop9/testing_llms_in_prod_feels_way_harder_than_it/
false
false
self
1
null
Wan-AI/Wan2.1-VACE-14B · Hugging Face (Apache-2.0)
152
**Wan2.1** [VACE](https://github.com/ali-vilab/VACE), an all-in-one model for video creation and editing
2025-05-14T14:06:06
https://huggingface.co/Wan-AI/Wan2.1-VACE-14B
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1kmg1ht
false
null
t3_1kmg1ht
/r/LocalLLaMA/comments/1kmg1ht/wanaiwan21vace14b_hugging_face_apache20/
false
false
https://external-preview…104d291ee6a1beea
152
{'enabled': False, 'images': [{'id': 'TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?width=108&crop=smart&auto=webp&s=426d66cf7de922a930e072de2b3577218f43a8fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?width=216&crop=smart&auto=webp&s=fe4bda28b93b3f10b8972efc38032dfcf8b85cd9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?width=320&crop=smart&auto=webp&s=07280463c63ef6c26f7ce088c8d692635b9b5693', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?width=640&crop=smart&auto=webp&s=4c439fad5f75361eaa3eb176a04dfd9733c3e274', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?width=960&crop=smart&auto=webp&s=d709bf81c9798ab10ceeb328a3b50e06024c826f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?width=1080&crop=smart&auto=webp&s=8380f8ec367a4258966d3bd17255c117901054f0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TmzIWNNChRov_gA4HjoE6PO2tsdMf2f2ESAHN00wPOY.png?auto=webp&s=db0eeeece16ac5028e37c3f03b73cbe6c65ce377', 'width': 1200}, 'variants': {}}]}
May 2025 Model Benchmarks - Mac vs. 5080
0
| Model | MMLU (%) | 4-bit RAM | M3 Max 64 GB (t/s, TTFT) | M4 Max 64 GB (t/s, TTFT) | M4 Max 128 GB (t/s, TTFT) | RTX 5080 16 GB (t/s, TTFT) |
|---|---|---|---|---|---|---|
| GPT-4.5 (Preview API) | 89.5 | n/a | 77, 1 s | 77, 1 s | 77, 1 s | 77, 1 s |
| GPT-4o (API) | 88.7 | n/a | 138, 0.5 s | 138, 0.5 s | 138, 0.5 s | 138, 0.5 s |
| GPT-4 (API) | 86.4 | n/a | 12.5, 1 s | 12.5, 1 s | 12.5, 1 s | 12.5, 1 s |
| Mistral Large 2 123B | 84.0 | ≈60 GB | — (OOM) | — (OOM) | 6.6, >3 s | — (OOM) |
| Mistral Small 3 24B | 81.0 | 12 GB | 18, ~1 s | ~20, ~1 s | ~25, ~1 s | — |
| LLaMA 3 70B (instr.) | 79.5 | 35 GB | 10, 3–5 s | — (OOM) | 9.7, 3–5 s | ~6, ~0.5 s |
| Qwen 3 30B (MoE) | 79.0 | 15 GB | ~45, ~1 s | ~60, ~1 s | ~100, ~1 s | — |
| Mixtral 8×22B (MoE) | 77.8 | 88 GB | — (OOM) | — (OOM) | 19, <60 s | — (OOM) |
| Qwen 2.5 72B | 77.4 | 36 GB | 10, 1–2 s | 10, 1–2 s | 11, 1–2 s | — |
| Qwen 2.5 32B | 74.4 | 16 GB | 20, 1 s | 20, 1 s | 24, 1 s | — |
| Mixtral 8×7B (MoE) | 71.7 | 22.5 GB | 60, 1 s | 44, 1 s | 46, 1 s | — (OOM) |
| GPT-3.5 Turbo (API) | 70.0 | n/a | 109, 0.3 s | 109, 0.3 s | 109, 0.3 s | 109, 0.3 s |
| Qwen 2.5 14B | 68.6 | 7 GB | 45, 0.8 s | 45, 0.8 s | 50, 0.8 s | — |
| Gemma 3 27B (IT) | 67.5 | 13.5 GB | ~35, 1 s | ~42, 1 s | ~52, 1 s | — |
| LLaMA 3 8B (instr.) | 66.6 | 3.8 GB | 40, 0.5 s | 55, 0.5 s | 60, 0.5 s | — |
| Mistral 7B | 62.5 | 3 GB | 60, 0.6 s | 50, 0.6 s | 65, 0.6 s | ~150, 0.35 s |
| LLaMA 2 13B | 55.4 | 6.5 GB | 25, 1 s | 25, 1 s | 30, 1 s | — |
| LLaMA 2 7B | 45.8 | 3.5 GB | 90, 0.5 s | 80, 0.5 s | 83, 0.5 s | 84, 0.49 s |

- All local numbers, single-batch streaming, 4-bit Q4 (or closest) unless noted.
- t/s, TTFT: streaming tokens/sec & 10–100 token short-prompt time-to-first-token.
- “~” = best community estimate; plain numbers are repeatable logs.
- “— (OOM)” = will not load in that memory budget.
- “—” = no credible bench yet.
- OpenAI API speeds are network-bound, so they’re identical across devices.
- Estimates from OpenAI o3
2025-05-14T14:32:14
https://www.reddit.com/r/LocalLLaMA/comments/1kmgoj9/may_2025_model_benchmarks_mac_vs_5080/
FroyoCommercial627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmgoj9
false
null
t3_1kmgoj9
/r/LocalLLaMA/comments/1kmgoj9/may_2025_model_benchmarks_mac_vs_5080/
false
false
self
0
null
SWE-rebench: A continuously updated benchmark for SWE LLMs
28
Hi! We present [SWE-rebench](https://swe-rebench.com/) — a new benchmark for evaluating agentic LLMs on a continuously updated and decontaminated set of real-world software engineering tasks, mined from active GitHub repositories. SWE-rebench combines the methodologies of SWE-bench and LiveCodeBench: we collect new issues from a wide range of repositories and evaluate how agents powered by different models solve them. The leaderboard will be continuously updated with new issues and models! Let us know which models you'd like us to evaluate. Stay tuned! https://preview.redd.it/gc3mvvuzfr0f1.png?width=2250&format=png&auto=webp&s=f465a5b37ef1a75db08982762c37f4c19ddfe33d
2025-05-14T14:57:47
https://www.reddit.com/r/LocalLLaMA/comments/1kmhb0c/swerebench_a_continuously_updated_benchmark_for/
Fabulous_Pollution10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmhb0c
false
null
t3_1kmhb0c
/r/LocalLLaMA/comments/1kmhb0c/swerebench_a_continuously_updated_benchmark_for/
false
false
https://external-preview…81a1f5aa444efd73
28
{'enabled': False, 'images': [{'id': 'qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho', 'resolutions': [{'height': 101, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?width=108&crop=smart&auto=webp&s=149340fa6585600bd89eae2febd845a1cbc5b03b', 'width': 108}, {'height': 203, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?width=216&crop=smart&auto=webp&s=e1a3ba4c05f0673d1a69bad1ba78a3e325e9fdc0', 'width': 216}, {'height': 301, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?width=320&crop=smart&auto=webp&s=910ca50aadb09ae22e7560df4354ff7593f7de70', 'width': 320}, {'height': 602, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?width=640&crop=smart&auto=webp&s=93143daeda204e9289a0f20f090a660db25d1840', 'width': 640}, {'height': 903, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?width=960&crop=smart&auto=webp&s=8daf7ca9daa15926296aa1d890f8b0eb5b289245', 'width': 960}, {'height': 1016, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?width=1080&crop=smart&auto=webp&s=b62ba35d48b739803afa333305962af968ad78b5', 'width': 1080}], 'source': {'height': 2118, 'url': 'https://external-preview.redd.it/qiTXPrfyonyQjbl3SeR1ri8_ePNvw_vqoI1O6pcB2Ho.png?auto=webp&s=3063aeb6e4a586c3f1b993026bcc1ff4b076d0ac', 'width': 2250}, 'variants': {}}]}
Seeking VRAM Backend Recommendations & Performance Comparisons for Multi-GPU AMD Setup (7900xtx x2 + 7800xt) - Gemma, Qwen Models
0
Hi everyone, I'm looking for advice on the best way to maximize output speed/throughput when running large language models on my setup. I'm primarily interested in running Gemma3:27b and Qwen3 32B models, and I'm trying to determine the most efficient VRAM backend to use.

**My hardware is:**

* **GPUs (64GB total):** 2x AMD Radeon RX 7900 XTX + 1x Radeon RX 7800 XT
* **VRAM:** effectively 24GB + 24GB + 16GB (total 64GB)
* **RAM:** 128GB 4200MHz (32x4 configuration)
* **CPU:** Ryzen 7 7700X

Currently, I'm considering **VLLM** and **llama.cpp**. I've previously experimented with these backends on older models and observed performance differences of only around 1-2 tokens per second, which was inconclusive, so I'm hoping to get more targeted data with the newer, larger models. I also got better speed with Vulkan and llama.cpp: about 110 tokens/s for Qwen3 30B MoE and around 14 tokens/s for Qwen3:235B_Q2_K from Unsloth.

I'm particularly interested in hearing from other users with **similar AMD GPU setups (specifically multi-GPU)** who have experience running LLMs. I would greatly appreciate it if you could share:

* **What backend(s) have you found to be the most performant with AMD GPUs?** (VLLM, llama.cpp, others?)
* **What quantization methods (e.g., GPTQ, AWQ, GGUF) are you using,** and at what bit depth (e.g., 4-bit, 8-bit)?
* **Do you use all available GPUs, or only a subset?** What strategies do you find work best for splitting the model across multiple GPUs? (e.g., layer offloading, tensor parallelism)
* **What inference frameworks (e.g., transformers, ExLlamaV2) are you using in conjunction with the backend?**
* **Any specific configurations or settings you recommend for optimal performance with AMD GPUs?** (e.g., ROCm version, driver versions)

I'm primarily focused on maximizing output speed/throughput for inference, so any insights related to that would be particularly helpful, and I'm open to suggestions on any and all optimization strategies. Thanks in advance for your time and expertise!
2025-05-14T15:14:16
https://www.reddit.com/r/LocalLLaMA/comments/1kmhq4s/seeking_vram_backend_recommendations_performance/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmhq4s
false
null
t3_1kmhq4s
/r/LocalLLaMA/comments/1kmhq4s/seeking_vram_backend_recommendations_performance/
false
false
self
0
null
"I Just Think They're Neat" - Marge Simpson
0
2025-05-14T15:14:24
https://i.redd.it/s5r8gzhdkr0f1.png
Accomplished_Mode170
i.redd.it
1970-01-01T00:00:00
0
{}
1kmhq92
false
null
t3_1kmhq92
/r/LocalLLaMA/comments/1kmhq92/i_just_think_theyre_neat_marge_simpson/
false
false
https://external-preview…dcd84f467c392797
0
{'enabled': True, 'images': [{'id': 'Rfg_TjB8lvgfXcS6q-cjV585IfwodPYnNvhSWEuZjTY', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.png?width=108&crop=smart&auto=webp&s=cea9ab4bb9089a8bbe9c01f0fb6273b539fe2499', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.png?width=216&crop=smart&auto=webp&s=e50462fc09667e1b0f44c24a1c17c10a31710514', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.png?width=320&crop=smart&auto=webp&s=7912c99508e207ae420df5b07b4bfb8fc0c007be', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.png?width=640&crop=smart&auto=webp&s=ab0e6299594cc743ad761246183c042b779150e8', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.png?width=960&crop=smart&auto=webp&s=d5bb73d610ae9aba6da17ec13dd523d6458e53d5', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/s5r8gzhdkr0f1.png?auto=webp&s=3b915fafd9e6ae5fc966cfcd98f0bd4081856f7e', 'width': 1024}, 'variants': {}}]}
Drummer's Snowpiercer 15B v1 - Trudge through the winter with a finetune of Nemotron 15B Thinker!
85
2025-05-14T15:15:31
https://huggingface.co/TheDrummer/Snowpiercer-15B-v1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1kmhr87
false
null
t3_1kmhr87
/r/LocalLLaMA/comments/1kmhr87/drummers_snowpiercer_15b_v1_trudge_through_the/
false
false
https://external-preview…a5fa52406abf1cb2
85
{'enabled': False, 'images': [{'id': 'vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?width=108&crop=smart&auto=webp&s=87d6ba94090314f595b424d9a173806dbfd0cb5e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?width=216&crop=smart&auto=webp&s=c3584c4d5429a9af6bbdedc22e033c0eb497acae', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?width=320&crop=smart&auto=webp&s=5b842659c7e96a1ddf3a201c26196caa0d11268a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?width=640&crop=smart&auto=webp&s=059aaa73054b57a14497284f4a9f9a7d64c69435', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?width=960&crop=smart&auto=webp&s=605802bdc3891e17a90e383ef23911812450d134', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?width=1080&crop=smart&auto=webp&s=13abb72160be6409e7378ece0aaa6858defd814c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vaSJWfDvrVhyb2X2lFu4a2nMHg68l5zMzNqYLj2vNZ8.png?auto=webp&s=b7dacbf97599c46dd316f0dbd7d001592e234aff', 'width': 1200}, 'variants': {}}]}
Alpakafarm Team :D
1
[removed]
2025-05-14T15:19:54
https://www.reddit.com/r/LocalLLaMA/comments/1kmhv72/alpakafarm_team_d/
hashashinsophia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmhv72
false
null
t3_1kmhv72
/r/LocalLLaMA/comments/1kmhv72/alpakafarm_team_d/
false
false
self
1
null
Open source robust LLM extractor for HTML/Markdown in Typescript
7
While working with LLMs for structured web data extraction, I kept running into issues with invalid JSON and broken links in the output. This led me to build a library focused on robust extraction and enrichment:

* **Clean HTML conversion**: transforms HTML into LLM-friendly markdown, with an option to extract just the main content
* **LLM structured output**: uses Gemini 2.5 Flash or GPT-4o mini to balance accuracy and cost; can also use a custom prompt
* **JSON sanitization**: if the LLM structured output fails or doesn't fully match your schema, a sanitization process attempts to recover and fix the data, which is especially useful for deeply nested objects and arrays
* **URL validation**: all extracted URLs are validated - handling relative URLs, removing invalid ones, and repairing markdown-escaped links

Github: [https://github.com/lightfeed/lightfeed-extract](https://github.com/lightfeed/lightfeed-extract)

I'd love to hear if anyone else has experimented with LLMs for data extraction or if you have any questions about this approach!
2025-05-14T15:21:05
https://www.reddit.com/r/LocalLLaMA/comments/1kmhwah/open_source_robust_llm_extractor_for_htmlmarkdown/
Visual-Librarian6601
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmhwah
false
null
t3_1kmhwah
/r/LocalLLaMA/comments/1kmhwah/open_source_robust_llm_extractor_for_htmlmarkdown/
false
false
self
7
{'enabled': False, 'images': [{'id': 'MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?width=108&crop=smart&auto=webp&s=a6802fb385755f9fcc3fc7eaa69d26bee4e224c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?width=216&crop=smart&auto=webp&s=3be40692c9586d28dc9f283ab5eadda90132ac59', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?width=320&crop=smart&auto=webp&s=045091819fcd89dc0bccdc9e2dbc9978ac548cc1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?width=640&crop=smart&auto=webp&s=7f52c55fcd3f38eef4eb122127fed09880d6f082', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?width=960&crop=smart&auto=webp&s=717513770c632c0a28dbd59e9dbbc1bd51e34ac4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?width=1080&crop=smart&auto=webp&s=ae6247a30e6df95dd57f55daf45d7bbfa90843fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MKZABIE7lqL2cSBoQwluQ1gDj1TBaTloTpSt-av-tAU.png?auto=webp&s=996a7ba946e6cd7a16941d9cd89f82ad8d5207b6', 'width': 1200}, 'variants': {}}]}
Stable Audio Open Small - a new fast audio generation model
1
[removed]
2025-05-14T15:25:27
https://www.reddit.com/r/LocalLLaMA/comments/1kmi02i/stable_audio_open_small_a_new_fast_audio/
iGermanProd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmi02i
false
null
t3_1kmi02i
/r/LocalLLaMA/comments/1kmi02i/stable_audio_open_small_a_new_fast_audio/
false
false
self
1
{'enabled': False, 'images': [{'id': '39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=108&crop=smart&auto=webp&s=49e48660beb5870fa7e2d7eb5025f203aee72565', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=216&crop=smart&auto=webp&s=227c6c1f105aa2f2b0ba1798f7f3cae8f1063706', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=320&crop=smart&auto=webp&s=2b429059c50d9b5754b83f36359b2ff8b59b75d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=640&crop=smart&auto=webp&s=5b42f9384f7a05d9c107ad06debbfe66a06a0f98', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=960&crop=smart&auto=webp&s=b5bfe5372a59be90726f5e6c5749d83424a76b2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=1080&crop=smart&auto=webp&s=c17b5e70a6b0bce70164d42950c0a5350c7fa012', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?auto=webp&s=241a7820b4d9f16366ed3ddc313421a8a1155c1f', 'width': 1200}, 'variants': {}}]}
Stable Audio Open Small - a new fast audio generation model
1
[removed]
2025-05-14T15:29:26
https://www.reddit.com/r/LocalLLaMA/comments/1kmi3il/stable_audio_open_small_a_new_fast_audio/
iGermanProd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmi3il
false
null
t3_1kmi3il
/r/LocalLLaMA/comments/1kmi3il/stable_audio_open_small_a_new_fast_audio/
false
false
self
1
null
AMD Strix Halo (Ryzen AI Max+ 395) GPU LLM Performance
192
I've been doing some (ongoing) testing on a Strix Halo system recently and with a bunch of desktop systems coming out, and very few advanced/serious GPU-based LLM performance reviews out there, I figured it might be worth sharing a few notes I've made on the current performance and state of software. This post will primarily focus on LLM inference with the Strix Halo GPU on Linux (but the llama.cpp testing should be pretty relevant for Windows as well). This post gets rejected with too many links so I'll just leave a single link for those that want to dive deeper: https://llm-tracker.info/_TOORG/Strix-Halo # Raw Performance In terms of raw compute specs, the Ryzen AI Max 395's Radeon 8060S has 40 RDNA3.5 CUs. At a max clock of 2.9GHz this should have a peak of **59.4 FP16/BF16 TFLOPS**: ``` 512 ops/clock/CU * 40 CU * 2.9e9 clock / 1e12 = 59.392 FP16 TFLOPS ``` This peak value requires either WMMA or wave32 VOPD otherwise the max is halved. Using mamf-finder to test, without hipBLASLt, it takes about 35 hours to test and only gets to **5.1 BF16 TFLOPS** (**<9%** max theoretical). However, when run with hipBLASLt, this goes up to **36.9 TFLOPS** (**>60%** max theoretical) which is comparable to MI300X efficiency numbers. On the memory bandwidth (MBW) front, `rocm_bandwidth_test` gives about *212 GB/s* peak bandwidth (DDR5-8000 on a 256-bit bus gives a theoretical peak MBW of *256 GB/*s). This is roughly in line with the max MBW tested by ThePhawx, jack stone, and others on various Strix Halo systems. One thing `rocm_bandwidth_test` gives you is also CPU to GPU speed, which is *~84 GB/s*. The system I am using is set to almost all of its memory dedicated to GPU - 8GB GART and 110 GB GTT and has a very high (>100W TDP). # llama.cpp What most people probably want to know is how these chips perform with llama.cpp for bs=1 inference. First I'll test with the standard TheBloke/Llama-2-7B-GGUF Q4_0 so you can easily compare to other tests like my previous compute and memory bandwidth efficiency tests across architectures or the official llama.cpp Apple Silicon M-series performance thread. I ran with a number of different backends, and the results were actually pretty surprising: |Run|pp512 (t/s)|tg128 (t/s)|Max Mem (MiB)| |:-|:-|:-|:-| |CPU|294.64 ± 0.58|28.94 ± 0.04|| |CPU + FA|294.36 ± 3.13|29.42 ± 0.03|| |HIP|348.96 ± 0.31|48.72 ± 0.01|4219| |HIP + FA|331.96 ± 0.41|45.78 ± 0.02|4245| |HIP + WMMA|322.63 ± 1.34|48.40 ± 0.02|4218| |HIP + WMMA + FA|343.91 ± 0.60|50.88 ± 0.01|4218| |Vulkan|881.71 ± 1.71|52.22 ± 0.05|**3923**| |Vulkan + FA|**884.20 ± 6.23**|**52.73 ± 0.07**|**3923**| The HIP version performs **far** below what you'd expect in terms of tok/TFLOP efficiency for prompt processing even vs other RDNA3 architectures: - `gfx1103` Radeon 780M iGPU gets 14.51 tok/TFLOP. At that efficiency you'd expect the about 850 tok/s that the Vulkan backend delivers. - `gfx1100` Radeon 7900 XTX gets 25.12 tok/TFLOP. At that efficiency you'd expect almost 1500 tok/s, almost double what the Vulkan backend delivers, and >4X what the current HIP backend delivers. - HIP pp512 barely beats out CPU backend numbers. I don't have an explanation for this. - Just for a reference of how bad the HIP performance is, an 18CU M3 Pro has \~12.8 FP16 TFLOPS (4.6X less compute than Strix Halo) and delivers about the same pp512. 
Lunar Lake Arc 140V has 32 FP16 TFLOPS (almost 1/2 Strix Halo) and has a pp512 of 657 tok/s (1.9X faster) - With the Vulkan backend pp512 is about the same as an M4 Max and tg128 is about equivalent to an M4 Pro Testing a similar system with Linux 6.14 vs 6.15 showed a 15% performance difference so it's possible future driver/platform updates will improve/fix Strix Halo's ROCm/HIP compute efficiency problems. So that's a bit grim, but I did want to point out one silver lining. With the recent fixes for Flash Attention with the llama.cpp Vulkan backend, I did some higher context testing, and here, the HIP + rocWMMA backend actually shows some strength. It has basically **no decrease in either pp or tg performance at 8K context** and uses the least memory to boot: |Run|pp8192 (t/s)|tg8192 (t/s)|Max Mem (MiB)| |:-|:-|:-|:-| |HIP|245.59 ± 0.10|12.43 ± 0.00|6+10591| |HIP + FA|190.86 ± 0.49|30.01 ± 0.00|7+8089| |HIP + WMMA|230.10 ± 0.70|12.37 ± 0.00|6+10590| |HIP + WMMA + FA|368.77 ± 1.22|**50.97 ± 0.00**|**7+8062**| |Vulkan|487.69 ± 0.83|7.54 ± 0.02|7761+1180| |Vulkan + FA|**490.18 ± 4.89**|32.03 ± 0.01|7767+1180| - You need to have `rocmwmma` installed - many distros have packages but you need gfx1151 support is very new (#PR 538) from last week) so you will probably need to build your own rocWMMA from source - You should then rebuild llama.cpp with `-DGGML_HIP_ROCWMMA_FATTN=ON` If you mostly do 1-shot inference, then the Vulkan + FA backend is actually probably the best and is the most cross-platform/easy option. If you frequently have longer conversations then HIP + WMMA + FA is probalby the way to go, even if prompt processing is much slower than it should be right now. I also ran some tests with Qwen3-30B-A3B UD-Q4_K_XL. Larger MoEs is where these large unified memory APUs really shine. Here are Vulkan results. One thing worth noting, and this is particular to the Qwen3 MoE and Vulkan backend, but using `-b 256` significantly improves the pp512 performance: |Run|pp512 (t/s)|tg128 (t/s)| |:-|:-|:-| |Vulkan|70.03 ± 0.18|75.32 ± 0.08| |Vulkan b256|118.78 ± 0.64|74.76 ± 0.07| While the pp512 is slow, tg128 is as speedy as you'd expect for 3B activations. This is still only a 16.5 GB model though, so let's go bigger. Llama 4 Scout is 109B parameters and 17B activations and the UD-Q4_K_XL is 57.93 GiB. |Run|pp512 (t/s)|tg128 (t/s)| |:-|:-|:-| |Vulkan|102.61 ± 1.02|20.23 ± 0.01| |HIP|GPU Hang|GPU Hang| While Llama 4 has had a rocky launch, this is a model that performs about as well as Llama 3.3 70B, but tg is 4X faster, and has SOTA vision as well, so having this speed for tg is a real win. I've also been able to successfully RPC llama.cpp to test some truly massive (Llama 4 Maverick, Qwen 235B-A22B models, but I'll leave that for a future followup). Besides romWMMA, I was able to build a ROCm 6.4 image for Strix Halo (gfx1151) using u/scottt's dockerfiles. These docker images have hipBLASLt built with gfx1151 support. I was also able to build AOTriton without too much hassle (it takes about 1h wall time on Strix Halo if you restrict to just the gfx1151 GPU_TARGET). Composable Kernel (CK) has gfx1151 support now as well and builds in about 15 minutes. PyTorch was a huge PITA to build, but with a fair amount of elbow grease, I was able to get HEAD (2.8.0a0) compiling, however it still has problems with Flash Attention not working even with `TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL` set. There's a lot of active work ongoing for PyTorch. 
For those interested, I'd recommend checking out my linked docs. I won't bother testing training or batch inference engines until at least PyTorch FA is sorted. Current testing shows fwd/bwd pass to be in the **~1 TFLOPS** ballpark (very bad)... This testing obviously isn't very comprehensive, but since there's very little out there, I figure I'd at least share some of the results, especially with the various Chinese Strix Halo mini PCs beginning to ship and with Computex around the corner.
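As a footnote to the raw-performance section: the peak figures and the tok/TFLOP efficiency metric used throughout the post come from simple arithmetic, reproduced here as a small Python helper. The function names are mine; the constants are the ones quoted above.

```python
# Reproduces the back-of-the-envelope numbers from the post; helper names are my own.

def peak_fp16_tflops(cus: int, clock_ghz: float, ops_per_clock_per_cu: int = 512) -> float:
    """Peak dense FP16/BF16 throughput, assuming WMMA / wave32 VOPD dual-issue."""
    return ops_per_clock_per_cu * cus * clock_ghz * 1e9 / 1e12

def peak_mem_bw_gbs(mt_per_s: int, bus_bits: int) -> float:
    """Theoretical memory bandwidth in GB/s for a DDR interface."""
    return mt_per_s * (bus_bits / 8) / 1000

def tok_per_tflop(pp512_tok_s: float, tflops: float) -> float:
    """Prompt-processing efficiency metric used in the post."""
    return pp512_tok_s / tflops

strix_tflops = peak_fp16_tflops(cus=40, clock_ghz=2.9)    # ~59.4 TFLOPS
strix_bw = peak_mem_bw_gbs(mt_per_s=8000, bus_bits=256)   # 256 GB/s theoretical
print(f"peak FP16: {strix_tflops:.1f} TFLOPS, peak MBW: {strix_bw:.0f} GB/s")

# Vulkan backend efficiency on Llama-2-7B Q4_0 (pp512 ≈ 884 t/s from the table above)
print(f"Vulkan pp512 efficiency: {tok_per_tflop(884.2, strix_tflops):.1f} tok/TFLOP")
```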
2025-05-14T15:29:44
https://www.reddit.com/r/LocalLLaMA/comments/1kmi3ra/amd_strix_halo_ryzen_ai_max_395_gpu_llm/
randomfoo2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmi3ra
false
null
t3_1kmi3ra
/r/LocalLLaMA/comments/1kmi3ra/amd_strix_halo_ryzen_ai_max_395_gpu_llm/
false
false
self
192
{'enabled': False, 'images': [{'id': 'LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?width=108&crop=smart&auto=webp&s=3c6c0b68f1819ae018c774a2adecbe294e588c55', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?width=216&crop=smart&auto=webp&s=36a64b8797ebe09f7b5b0cd9495d84f249f127ec', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?width=320&crop=smart&auto=webp&s=3dcdd25f0544d27424e83687cb5edc3ba5f29467', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?width=640&crop=smart&auto=webp&s=01bf4a944cf3c8883de954fb44d1e1e0d77ad514', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?width=960&crop=smart&auto=webp&s=22b48667f314aa26660868ff0034f9444237e376', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?width=1080&crop=smart&auto=webp&s=36db542a3da01c6f461af60e991887a4955c28b3', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/LVI9k0fh_RZkRc--0M6US_gjgAtTjLGNcNKdvCGt54E.png?auto=webp&s=138bd372588239ac9f4acccb87cb74d9f15ce728', 'width': 1200}, 'variants': {}}]}
Stable Audio Open Small - new fast audio generation model
60
**Weights**: [https://huggingface.co/stabilityai/stable-audio-open-small](https://huggingface.co/stabilityai/stable-audio-open-small)

**Paper**: [https://arxiv.org/abs/2505.08175](https://arxiv.org/abs/2505.08175)

**Arm learning path**: [https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/run-stable-audio-open-small-with-lite-rt](https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/run-stable-audio-open-small-with-lite-rt)

The last link has some demos; they claim 30% faster than realtime!
2025-05-14T15:31:25
https://www.reddit.com/r/LocalLLaMA/comments/1kmi59x/stable_audio_open_small_new_fast_audio_generation/
iGermanProd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmi59x
false
null
t3_1kmi59x
/r/LocalLLaMA/comments/1kmi59x/stable_audio_open_small_new_fast_audio_generation/
false
false
self
60
{'enabled': False, 'images': [{'id': '39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=108&crop=smart&auto=webp&s=49e48660beb5870fa7e2d7eb5025f203aee72565', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=216&crop=smart&auto=webp&s=227c6c1f105aa2f2b0ba1798f7f3cae8f1063706', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=320&crop=smart&auto=webp&s=2b429059c50d9b5754b83f36359b2ff8b59b75d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=640&crop=smart&auto=webp&s=5b42f9384f7a05d9c107ad06debbfe66a06a0f98', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=960&crop=smart&auto=webp&s=b5bfe5372a59be90726f5e6c5749d83424a76b2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?width=1080&crop=smart&auto=webp&s=c17b5e70a6b0bce70164d42950c0a5350c7fa012', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/39Xr2upspnm9bcbQYM_ldKOrrTfhM4Vwfy1L3mW6S7U.png?auto=webp&s=241a7820b4d9f16366ed3ddc313421a8a1155c1f', 'width': 1200}, 'variants': {}}]}
I updated the SmolVLM llama.cpp webcam demo to run locally in-browser on WebGPU.
418
Inspired by [https://www.reddit.com/r/LocalLLaMA/comments/1klx9q2/realtime\_webcam\_demo\_with\_smolvlm\_using\_llamacpp/](https://www.reddit.com/r/LocalLLaMA/comments/1klx9q2/realtime_webcam_demo_with_smolvlm_using_llamacpp/), I decided to update the llama.cpp server demo so that it runs 100% locally in-browser on WebGPU, using Transformers.js. This means you can simply visit the link and run the demo, without needing to install anything locally. I hope you like it! [https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu](https://huggingface.co/spaces/webml-community/smolvlm-realtime-webgpu) PS: The source code is a single index.html file you can find in the "Files" section on the demo page.
2025-05-14T15:33:15
https://v.redd.it/or5b3ks8nr0f1
xenovatech
v.redd.it
1970-01-01T00:00:00
0
{}
1kmi6vl
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/or5b3ks8nr0f1/DASHPlaylist.mpd?a=1749828809%2CYjE2ZTA2ZGZmNjQwZmRmNzYzZWJiNTNmZjAxOWFlYmEzZTYxOThmNmVmZGYwYjNlYWJmMmNhNzk0MGUwZjI4NQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/or5b3ks8nr0f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/or5b3ks8nr0f1/HLSPlaylist.m3u8?a=1749828809%2CYzY1ZDAyYTIwMjZlOTM4ZDkwMTg1N2YzMGU5MWI2NThiYmY1YTkwN2IzYmU4OGU5MzcyYjYxNGY4YWJlOGJiNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/or5b3ks8nr0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_1kmi6vl
/r/LocalLLaMA/comments/1kmi6vl/i_updated_the_smolvlm_llamacpp_webcam_demo_to_run/
false
false
https://external-preview…bac258abd8f7d25a
418
{'enabled': False, 'images': [{'id': 'Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?width=108&crop=smart&format=pjpg&auto=webp&s=c624c3accf3427a26457b7b0b9e23d62726b3b99', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?width=216&crop=smart&format=pjpg&auto=webp&s=725fe2a7baf5268c430a20739d634353ecd32616', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?width=320&crop=smart&format=pjpg&auto=webp&s=8b5d1ead16ffcc28fab300f6fc85af2177509797', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?width=640&crop=smart&format=pjpg&auto=webp&s=6a5ed70fc177fe9fee410343061721ffaf9bc54b', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?width=960&crop=smart&format=pjpg&auto=webp&s=d46be20b312b694b405360a11cdf744d0cc933ab', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?width=1080&crop=smart&format=pjpg&auto=webp&s=677beff36f38102cd9c50d72a87ee143debb8c1b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Z3l2NXpmczhucjBmMUwcvEt1gWTYtmZHqUwsIc9aRH3JKfTLJ5UHo4J1H4An.png?format=pjpg&auto=webp&s=f0f9a14d9f538f797492ca0e031d414f2f5b742f', 'width': 1080}, 'variants': {}}]}
[image processing failed]
1
[deleted]
2025-05-14T15:42:45
[deleted]
1970-01-01T00:00:00
0
{}
1kmif8e
false
null
t3_1kmif8e
/r/LocalLLaMA/comments/1kmif8e/image_processing_failed/
false
false
default
1
null
[image processing failed]
1
[deleted]
2025-05-14T15:42:57
[deleted]
1970-01-01T00:00:00
0
{}
1kmiffg
false
null
t3_1kmiffg
/r/LocalLLaMA/comments/1kmiffg/image_processing_failed/
false
false
default
1
null
[image processing failed]
1
[deleted]
2025-05-14T15:43:12
[deleted]
1970-01-01T00:00:00
0
{}
1kmifn5
false
null
t3_1kmifn5
/r/LocalLLaMA/comments/1kmifn5/image_processing_failed/
false
false
default
1
null
Roadmap for frontier models summer 2025
3
1. grok 3.5
2. o3 pro / o4 full
3. gemini ultra
4. claude 4 (neptune)
5. deepseek r2
6. r2 operator

[https://x.com/iruletheworldmo/status/1922413637496344818](https://x.com/iruletheworldmo/status/1922413637496344818)
2025-05-14T16:09:10
https://www.reddit.com/r/LocalLLaMA/comments/1kmj3gl/roadmap_for_frontier_models_summer_2025/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmj3gl
false
null
t3_1kmj3gl
/r/LocalLLaMA/comments/1kmj3gl/roadmap_for_frontier_models_summer_2025/
false
false
self
3
{'enabled': False, 'images': [{'id': 'ztSy_MabctLaC-zAjVunO33lufNbxoJXJs5r6phqO7g', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fpWIsY_XH0hB_jTvQqTBczYhToRDGpL7hcAo67ksQBI.jpg?width=108&crop=smart&auto=webp&s=ce762d95475584036c857d08e7062d5ad2dfa2f9', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/fpWIsY_XH0hB_jTvQqTBczYhToRDGpL7hcAo67ksQBI.jpg?auto=webp&s=fc31fda66a4ee684952cd4a8cf52310e767f45b5', 'width': 200}, 'variants': {}}]}
Personal notes: Agentic Loop from OpenAI's GPT-4.1 Prompting Guide
3
Finally got around to the bookmark I had saved a while ago: OpenAI's prompting guide: [https://cookbook.openai.com/examples/gpt4-1\_prompting\_guide](https://cookbook.openai.com/examples/gpt4-1_prompting_guide) I have to say I really like it! I am still working through it. I usually scribble my notes in Excalidraw. I just wrote this for myself and am sharing it here in case it helps others. I think much of the guide is relevant in general to build useful agents (or simple deterministic workflows). Note: I am still working through it, so this might change. I will add more here as I go through the guide. It's quite dense, and I am still making sense of it. So will change the sketch.
2025-05-14T16:09:45
https://i.redd.it/27dndr0qtr0f1.png
phoneixAdi
i.redd.it
1970-01-01T00:00:00
0
{}
1kmj3zn
false
null
t3_1kmj3zn
/r/LocalLLaMA/comments/1kmj3zn/personal_notes_agentic_loop_from_openais_gpt41/
false
false
default
3
{'enabled': True, 'images': [{'id': '27dndr0qtr0f1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=108&crop=smart&auto=webp&s=664d145bc70f28ae617dab9a1ee30f24a59ac398', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=216&crop=smart&auto=webp&s=78336b750b50c5b54775d4e8b2cc53c4fd061807', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=320&crop=smart&auto=webp&s=fe2d3c0787f138f27630bdb392db4063293b5024', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=640&crop=smart&auto=webp&s=15922ba752a7ef4a4a0ac8386c7de6bd81b2dc6f', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=960&crop=smart&auto=webp&s=d4c1f130a31b4a6ac68294d33d270096b17db0b2', 'width': 960}, {'height': 1620, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?width=1080&crop=smart&auto=webp&s=3bb10730a477e3e52e50885b84a8d4560dcf9808', 'width': 1080}], 'source': {'height': 1792, 'url': 'https://preview.redd.it/27dndr0qtr0f1.png?auto=webp&s=7d52e5bfbb0b1962f8e341c17d40e4d5a6107d2a', 'width': 1194}, 'variants': {}}]}
now you can create a mobile app by prompting "just code me the best mobile app bro"
0
2025-05-14T16:27:13
https://i.redd.it/v77u2kdmxr0f1.png
sickleRunner
i.redd.it
1970-01-01T00:00:00
0
{}
1kmjjnu
false
null
t3_1kmjjnu
/r/LocalLLaMA/comments/1kmjjnu/now_you_can_create_a_mobile_app_by_prompting_just/
false
false
default
0
{'enabled': True, 'images': [{'id': 'v77u2kdmxr0f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=108&crop=smart&auto=webp&s=cb7e858080fcb02082a8e179d15accb67c7ecc7f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=216&crop=smart&auto=webp&s=2e871afc28ac47dc35997b949d2e9c3d4a34f26f', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=320&crop=smart&auto=webp&s=406d042aef9ec57042778ff169e62b657f57370a', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=640&crop=smart&auto=webp&s=280b1f962fed836717c731a83f7558e3d0a35e93', 'width': 640}, {'height': 961, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=960&crop=smart&auto=webp&s=196202c128f32b7daca49006839aa28b0a92eb42', 'width': 960}, {'height': 1081, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?width=1080&crop=smart&auto=webp&s=18eadcc94ff6057b6d8b06d86c857a3d374816e4', 'width': 1080}], 'source': {'height': 1384, 'url': 'https://preview.redd.it/v77u2kdmxr0f1.png?auto=webp&s=0aaac96f69d437fb71696fadeae23a2040b35387', 'width': 1382}, 'variants': {}}]}
NimbleEdge AI – Fully On-Device Llama 3.1 1B Assistant with Text & Voice, No Cloud Needed
1
[removed]
2025-05-14T16:31:37
https://www.reddit.com/r/LocalLLaMA/comments/1kmjnpa/nimbleedge_ai_fully_ondevice_llama_31_1b/
voidmemoriesmusic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmjnpa
false
null
t3_1kmjnpa
/r/LocalLLaMA/comments/1kmjnpa/nimbleedge_ai_fully_ondevice_llama_31_1b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pck51c9IH1xwIu945SU8FDzS7FZ6iXCnZkvILeAKGVQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/pck51c9IH1xwIu945SU8FDzS7FZ6iXCnZkvILeAKGVQ.jpeg?width=108&crop=smart&auto=webp&s=a1c342885a1dbfbf7ae6fa969c63e35999633f3d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/pck51c9IH1xwIu945SU8FDzS7FZ6iXCnZkvILeAKGVQ.jpeg?width=216&crop=smart&auto=webp&s=59314b0c5c4a37b2b62d6a54d0e4d606dcc47ab4', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/pck51c9IH1xwIu945SU8FDzS7FZ6iXCnZkvILeAKGVQ.jpeg?width=320&crop=smart&auto=webp&s=73054b21bfc5b6efe310d53282465e6146264c88', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/pck51c9IH1xwIu945SU8FDzS7FZ6iXCnZkvILeAKGVQ.jpeg?auto=webp&s=34d030898c4e42305436de3476bef910a238de6b', 'width': 480}, 'variants': {}}]}
gemini pays less attention to system messages by default?
0
Exploring models for an application that will have to frequently inject custom instructions to guide the model in its next response, I noticed that Gemini, compared to GPT, requires a lot more prompting to follow system messages and weights user messages much higher by default. I wonder if this is just a result of different training between the models, or if there's a better way to run inference on Gemini with custom instructions other than system messages. I can get it to pay more attention with more explicit instructions, but it's not quite the same as with GPT, which just follows the instruction, and only the instruction, reliably.
2025-05-14T16:31:37
https://i.redd.it/r143ofbwwr0f1.png
Goericke
i.redd.it
1970-01-01T00:00:00
0
{}
1kmjnpg
false
null
t3_1kmjnpg
/r/LocalLLaMA/comments/1kmjnpg/gemini_pays_less_attention_to_system_messages_by/
false
false
https://external-preview…18d15096fbbdf117
0
{'enabled': True, 'images': [{'id': 'aKhf1ZF_jCa2xzdu2dl6RCjBAGa9CPjV5MZOR__DGi0', 'resolutions': [{'height': 138, 'url': 'https://preview.redd.it/r143ofbwwr0f1.png?width=108&crop=smart&auto=webp&s=a84fcc7cd84db936d9b9f4c9029b4d03c6b2039a', 'width': 108}, {'height': 277, 'url': 'https://preview.redd.it/r143ofbwwr0f1.png?width=216&crop=smart&auto=webp&s=b3adb61634a0bbe0df5c6e5e35188a93a780c30d', 'width': 216}, {'height': 411, 'url': 'https://preview.redd.it/r143ofbwwr0f1.png?width=320&crop=smart&auto=webp&s=efa6c30d63e746e3088d795ddb1e3c04a603058c', 'width': 320}, {'height': 822, 'url': 'https://preview.redd.it/r143ofbwwr0f1.png?width=640&crop=smart&auto=webp&s=2b452e64e0d76a1f3d4a11b04d115473574d5ce9', 'width': 640}], 'source': {'height': 1045, 'url': 'https://preview.redd.it/r143ofbwwr0f1.png?auto=webp&s=c6820228d8616c488286cc04799badcfefed6461', 'width': 813}, 'variants': {}}]}
[FREE SAMPLE] Clean GPT Fine-Tuning Dataset – 15,000 Curated Text Blocks from Public Domain Books (Non-Fiction)
1
[removed]
2025-05-14T16:32:32
https://www.reddit.com/r/LocalLLaMA/comments/1kmjoht/free_sample_clean_gpt_finetuning_dataset_15000/
Patient-Tooth7354
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmjoht
false
null
t3_1kmjoht
/r/LocalLLaMA/comments/1kmjoht/free_sample_clean_gpt_finetuning_dataset_15000/
false
false
self
1
{'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?width=108&crop=smart&auto=webp&s=d86627c87d9d144c16c153653adb9156be4935a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?width=216&crop=smart&auto=webp&s=aaf13450e84c9e1f27e2080455eefb565a93ee98', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?width=320&crop=smart&auto=webp&s=9f69320eccf005cf98274db64d39f1910e205ae2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?width=640&crop=smart&auto=webp&s=977c2f8c4a830d4dfa796179c0fa4c66dd3fa492', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?width=960&crop=smart&auto=webp&s=0fe8d226c17b2534ef266e037ed2964e149617cb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?width=1080&crop=smart&auto=webp&s=9cd95b9a0bd050025268960365fa1e7e86c8309e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU.png?auto=webp&s=43f171957caa9988de973025c40512017f12ebfd', 'width': 1200}, 'variants': {}}]}
NimbleEdge AI – Fully On-Device Llama 3.2 1B Assistant with Text & Voice, No Cloud Needed
28
Hi everyone! We’re excited to share **NimbleEdge AI**, a fully on-device conversational assistant built around **Llama 3.2 1B**, **Whisper Tiny or Google ASR**, and **Kokoro TTS** – all running directly on your mobile device. The best part? It works **offline**, and **nothing ever leaves your device**—no data is sent to the cloud, no queries to external LLM providers. We use ONNX-quantized models and a Python script to orchestrate the entire workflow, which gets executed on-device leveraging the NimbleEdge SDK built on C++ for optimal performance on-device. Sign up for early access [here](https://www.nimbleedge.com/nimbleedge-ai-early-access-sign-up) (Currently - only available on Android) And we are open-sourcing the [Python workflow script](https://github.com/NimbleEdge/kokoro/blob/main/on_device_workflow.py) and [extensions to Kokoro TTS](https://github.com/NimbleEdge/kokoro) for on-device execution with the entire on-device SDK to be open sourced soon after. Happy to answer technical questions about our model setup, on-device SDK, or the Python workflow script. Would love feedback from the local Llama community!
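To make the architecture concrete, here is a rough sketch of the kind of ASR → LLM → TTS turn loop described above. Every function below is a hypothetical placeholder rather than the actual NimbleEdge SDK API; the real workflow script is in the linked repo.

```python
# Hypothetical sketch of the on-device assistant loop; none of these names are the
# real NimbleEdge SDK API. Placeholders stand in for the ONNX-quantized ASR,
# Llama 3.2 1B, and Kokoro TTS models mentioned in the post.

def transcribe(audio_chunk: bytes) -> str:
    # placeholder: on device this would run Whisper Tiny or Google ASR locally
    return "what's the weather like today?"

def generate(user_text: str, history: list) -> str:
    # placeholder: on device this would run the quantized Llama 3.2 1B
    return f"(local reply to: {user_text})"

def synthesize(text: str) -> bytes:
    # placeholder: on device this would run Kokoro TTS and return PCM audio
    return b"\x00" * 16000

def assistant_turn(audio_chunk: bytes, history: list) -> tuple[bytes, list]:
    """One voice turn: speech in, spoken reply out, nothing leaves the device."""
    user_text = transcribe(audio_chunk)
    history.append({"role": "user", "content": user_text})
    reply = generate(user_text, history)
    history.append({"role": "assistant", "content": reply})
    return synthesize(reply), history

if __name__ == "__main__":
    audio_out, history = assistant_turn(b"\x00" * 32000, [])
    print(history)
```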
2025-05-14T16:35:16
https://www.reddit.com/r/LocalLLaMA/comments/1kmjr1n/nimbleedge_ai_fully_ondevice_llama_32_1b/
voidmemoriesmusic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmjr1n
false
null
t3_1kmjr1n
/r/LocalLLaMA/comments/1kmjr1n/nimbleedge_ai_fully_ondevice_llama_32_1b/
false
false
self
28
null
Don't know how to proceed with qwen 2.5 vl series models to get the correct bounding boxes around the words in the document
1
[removed]
2025-05-14T16:45:38
https://www.reddit.com/r/LocalLLaMA/comments/1kmk0nk/dont_know_how_to_proceed_with_qwen_25_vl_series/
GurEmbarrassed2584
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmk0nk
false
null
t3_1kmk0nk
/r/LocalLLaMA/comments/1kmk0nk/dont_know_how_to_proceed_with_qwen_25_vl_series/
false
false
self
1
null
[R] Open Dataset – 15,000 Clean Text Blocks for GPT Fine-Tuning (Public Domain, Non-Fiction)
1
[removed]
2025-05-14T16:47:56
https://www.reddit.com/r/LocalLLaMA/comments/1kmk2ur/r_open_dataset_15000_clean_text_blocks_for_gpt/
Patient-Tooth7354
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmk2ur
false
null
t3_1kmk2ur
/r/LocalLLaMA/comments/1kmk2ur/r_open_dataset_15000_clean_text_blocks_for_gpt/
false
false
self
1
null
Fun little AI quiz
1
2025-05-14T16:51:27
https://i.redd.it/747n02rz1s0f1.jpeg
workbyatlas
i.redd.it
1970-01-01T00:00:00
0
{}
1kmk62a
false
null
t3_1kmk62a
/r/LocalLLaMA/comments/1kmk62a/fun_little_ai_quiz/
false
false
default
1
{'enabled': True, 'images': [{'id': '747n02rz1s0f1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/747n02rz1s0f1.jpeg?width=108&crop=smart&auto=webp&s=e510b4ca95c768e6b26444604bdd0c526702ae56', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/747n02rz1s0f1.jpeg?width=216&crop=smart&auto=webp&s=b483d80fc850a5bf3cecc6a26fc87bc5307c02c1', 'width': 216}, {'height': 377, 'url': 'https://preview.redd.it/747n02rz1s0f1.jpeg?width=320&crop=smart&auto=webp&s=10cd786e7c3e2713365c24ccfa6a50f5781e22f8', 'width': 320}, {'height': 755, 'url': 'https://preview.redd.it/747n02rz1s0f1.jpeg?width=640&crop=smart&auto=webp&s=0005631ad706214e178c70af2e7bbfcdc8fcc0d1', 'width': 640}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/747n02rz1s0f1.jpeg?auto=webp&s=8a2648a8dd7e544f1ddea87c7f5a5edbb7a4f5d9', 'width': 868}, 'variants': {}}]}
How we built our AI code review tool for IDEs
1
2025-05-14T17:08:22
https://www.coderabbit.ai/blog/how-we-built-our-ai-code-review-tool-for-ides
thewritingwallah
coderabbit.ai
1970-01-01T00:00:00
0
{}
1kmklm3
false
null
t3_1kmklm3
/r/LocalLLaMA/comments/1kmklm3/how_we_built_our_ai_code_review_tool_for_ides/
false
false
https://external-preview…e66e52ce3ede9d20
1
{'enabled': False, 'images': [{'id': 'QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?width=108&crop=smart&auto=webp&s=f9c4e206c230b04e4a38f0cb72f2955c17a2ed90', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?width=216&crop=smart&auto=webp&s=c8a7e9fa1ff46d51edc56114ea3db37616b54885', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?width=320&crop=smart&auto=webp&s=bd26556eb60b45040e05344c763076214a38a9e8', 'width': 320}, {'height': 363, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?width=640&crop=smart&auto=webp&s=de0af5ce121f1630501ae87322a0d10d70d18a85', 'width': 640}, {'height': 545, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?width=960&crop=smart&auto=webp&s=bd81953ae9930c64e19ffd7de8b0c9b063bd1c5b', 'width': 960}, {'height': 613, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?width=1080&crop=smart&auto=webp&s=a5abcb34a8449de114162bc99dbfef688eef258e', 'width': 1080}], 'source': {'height': 1890, 'url': 'https://external-preview.redd.it/QJfLls6H_xPiNk0Vx5HLs7aYZdW0Mol8WKxlf1QcKFw.png?auto=webp&s=36a40f8e42e4d9abb0311b01ce0f8ceb569bf84b', 'width': 3327}, 'variants': {}}]}
Xeon 6 6900, 12mrdimm 8800, amx.. worth it?
1
Intel's latest Xeon 6 6900 (formerly Granite Rapids): 12 MRDIMM slots at up to 8800 MT/s, AMX support. I can find a CPU for under 5k, but no way to find an available motherboard (except the one on AliExpress for 2k). All I can really find is a complete system on itcreations (USA) with 12 RDIMMs at 6400 for around 13k IIRC. What is your opinion on that system? Do you know where to find a motherboard? (I'm in Europe.)
2025-05-14T17:16:27
https://www.reddit.com/r/LocalLLaMA/comments/1kmksr5/xeon_6_6900_12mrdimm_8800_amx_worth_it/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmksr5
false
null
t3_1kmksr5
/r/LocalLLaMA/comments/1kmksr5/xeon_6_6900_12mrdimm_8800_amx_worth_it/
false
false
self
1
null
Building My Local LLM Chat UI: Progress So Far + Future Roadmap
1
Hello everyone, my first reddit post ever! I’ve been building a fully local, offline LLM chat interface designed around actual daily use, fast performance, and a focus on clean, customizable design. It started as a personal challenge and has grown into something I use constantly and plan to evolve much further. Here’s what I’ve implemented so far: * Complete markdown renderer for clean message formatting * Chat minimization to keep long conversations tidy * In-chat search to quickly find messages by keyword * Text-to-speech (TTS) support for LLM responses * User message editing and forking * Switching between different versions of user and LLM messages * Experimental quoting system for LLM outputs (early stage) * Polished front-end with custom theme and color tuning * Multiple theme switching for different moods and use cases * Beautifully crafted UI with attention to user experience * Glassmorphism effects for a modern, layered visual look * Initial memory feature to help the LLM retain context across interactions, in future I will make it global and local memory as well The current version feels fast, snappy, and very enjoyable to use. But I’m only at the start. The next phase will focus on expanding real functionality: integrating task-oriented agents, adding deep document research and knowledge exploration, enabling thinking UIs and visual canvases, providing code analysis and explanations, introducing full voice-driven control with fallback to text, and even allowing generation of audio summaries or podcast-like outputs from chats and documents. The aim is to turn this into a complete local research, thinking, and workflow assistant. I built this for myself, but if people show interest, I’ll consider releasing it. I genuinely want feedback: what am I missing, what could be better, and which features would you prioritize if you were using something like this?
2025-05-14T17:53:12
https://v.redd.it/0g70v9f2cs0f1
Desperate_Rub_1352
/r/LocalLLaMA/comments/1kmlqtr/building_my_local_llm_chat_ui_progress_so_far/
1970-01-01T00:00:00
0
{}
1kmlqtr
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0g70v9f2cs0f1/DASHPlaylist.mpd?a=1749966800%2CYjhhYzIzNGY0MTc3NTBiZjY3NTNmYTM2ZTE1OThhZWZjMDU4YjI5ODA1YjM3YzVjZGMxNGQ2OTMwNzY3YTdlZQ%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/0g70v9f2cs0f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/0g70v9f2cs0f1/HLSPlaylist.m3u8?a=1749966800%2CNWY4NTQ3MGVkZGMyMjAxZDBlZmI1ZmM1M2JhZjcyNTMxZWQzMzVhNWUwZWUxNGRhZGI3MGU0YjcyNzA4ODhiNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0g70v9f2cs0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1906}}
t3_1kmlqtr
/r/LocalLLaMA/comments/1kmlqtr/building_my_local_llm_chat_ui_progress_so_far/
false
false
https://external-preview…9266b654c6eb5bf0
1
{'enabled': False, 'images': [{'id': 'Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=108&crop=smart&format=pjpg&auto=webp&s=d990e0f1435d82e6f09ab2dc3c0f563db410c79a', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=216&crop=smart&format=pjpg&auto=webp&s=db1efb1ba88bc83632a555ade9ba972ce64e1673', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=320&crop=smart&format=pjpg&auto=webp&s=8ae56740cd1fb3abacd5ee08b0409e11d3247dc6', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=640&crop=smart&format=pjpg&auto=webp&s=25125953ade14289eab6adafa0bdd1c1607fec47', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=960&crop=smart&format=pjpg&auto=webp&s=7193fd0d85daa822580b3d7ac5c48cc9139becf4', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e0629c5eb85490cf52118b7703fe003fb830e3eb', 'width': 1080}], 'source': {'height': 1624, 'url': 'https://external-preview.redd.it/Nzd2Y205ZjJjczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?format=pjpg&auto=webp&s=cf5b471a69dc464bf3b295b422ede008e0f12e3e', 'width': 2866}, 'variants': {}}]}
Qwen3-30B-A6B-16-Extreme is fantastic
420
[https://huggingface.co/DavidAU/Qwen3-30B-A6B-16-Extreme](https://huggingface.co/DavidAU/Qwen3-30B-A6B-16-Extreme) Quants: [https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF) Someone recently mentioned this model here on r/LocalLLaMA and I gave it a try. For me it is the best model I can run locally with my 36GB CPU only setup. In my view it is a lot smarter than the original A3B model. It uses 16 experts instead of 8 and when watching it thinking I can see that it thinks a step further/deeper than the original model. Speed is still great. I wonder if anyone else has tried it. A 128k context version is also available.
2025-05-14T17:57:00
https://www.reddit.com/r/LocalLLaMA/comments/1kmlu2y/qwen330ba6b16extreme_is_fantastic/
DocWolle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmlu2y
false
null
t3_1kmlu2y
/r/LocalLLaMA/comments/1kmlu2y/qwen330ba6b16extreme_is_fantastic/
false
false
self
420
{'enabled': False, 'images': [{'id': 'SJ3pgQQCKG9CpSkqEjPCMkNkO03Y1_NTrzn_Asqv48M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?width=108&crop=smart&auto=webp&s=61b6b2ee8a5f4eadfdd153f565d2f56eaf70dbd0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?width=216&crop=smart&auto=webp&s=67d354eb1aef704a2c70e7d9bf3ce69d3c201eb2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?width=320&crop=smart&auto=webp&s=0deb13294e1ca10442ac9a4dfe6c433d595e0573', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?width=640&crop=smart&auto=webp&s=68e28483c5a85b4444670f71e917d6483981c502', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?width=960&crop=smart&auto=webp&s=ad6e4950640feaa53f1e13a864d7de0ace532d24', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?width=1080&crop=smart&auto=webp&s=cd8c4c30d2370ca0033dc98ec1144d3a5122d046', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IExLOKklShrLgpO_T6JUZcKO6wohCBUcfkgMfXMtZpQ.jpg?auto=webp&s=fbf2e73644991b497f6e70380e8b81883845fcc9', 'width': 1200}, 'variants': {}}]}
How to get started with LLM (highschool senior)?
0
2025-05-14T18:10:33
https://i.redd.it/jo9lhkt1gs0f1.jpeg
Most-Tea840
i.redd.it
1970-01-01T00:00:00
0
{}
1kmm6p2
false
null
t3_1kmm6p2
/r/LocalLLaMA/comments/1kmm6p2/how_to_get_started_with_llm_highschool_senior/
false
false
https://b.thumbs.redditm…69FEHMsFqcoM.jpg
0
{'enabled': True, 'images': [{'id': '0TUaCbB-w38608XTXWLq3y3YFq966Lr7gYXuVoKfEZU', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?width=108&crop=smart&auto=webp&s=b70533131f02e2005b9fda43eac2c75412bcc97a', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?width=216&crop=smart&auto=webp&s=855731bddfe9a2d2984a1afa3ec19a14cfff61f4', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?width=320&crop=smart&auto=webp&s=603219a1d917707a1ff2d5ddf8078e9055fe6c8b', 'width': 320}, {'height': 280, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?width=640&crop=smart&auto=webp&s=2166733a20e7b7da9fde475ec09f6419e61f42f4', 'width': 640}, {'height': 420, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?width=960&crop=smart&auto=webp&s=1c3fdd49148fe8a2a9cc82156f2f400441f9f760', 'width': 960}, {'height': 472, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?width=1080&crop=smart&auto=webp&s=2ff3757a15b8cef708eb5a45e048588ca44e0862', 'width': 1080}], 'source': {'height': 700, 'url': 'https://preview.redd.it/jo9lhkt1gs0f1.jpeg?auto=webp&s=0f7361021d16a84a4b346d5e5c75ce8048bf2875', 'width': 1600}, 'variants': {}}]}
How to get started with LLM (highschool senior)?
0
I am a beginner starting out with LLMs. Can you provide me a roadmap to get started? For context: I am a high-school senior. I have a basic understanding of Python. What do I need to learn to work on LLMs from the ground up? I can spend 7h+ for 2 months.
2025-05-14T18:12:30
https://i.redd.it/1zjivv8ags0f1.jpeg
Most-Tea840
i.redd.it
1970-01-01T00:00:00
0
{}
1kmm8gs
false
null
t3_1kmm8gs
/r/LocalLLaMA/comments/1kmm8gs/how_to_get_started_with_llm_highschool_senior/
false
false
https://b.thumbs.redditm…yF-Dj--DlkUM.jpg
0
{'enabled': True, 'images': [{'id': 'eQmcXfueDu9Qc0eiZrTizoqDkZ3t8SusgktSl4JND6s', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?width=108&crop=smart&auto=webp&s=64c9eab036f509ab85ffba62b6e406a47b812f1f', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?width=216&crop=smart&auto=webp&s=394b708043ad1a6afc3ab73e9b681faa8ad2a35c', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?width=320&crop=smart&auto=webp&s=299eb08917d80839d024b9a7e0a10174e3ce2aa5', 'width': 320}, {'height': 280, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?width=640&crop=smart&auto=webp&s=69e9a656417e3cca3520811fc99413f6b91cd41b', 'width': 640}, {'height': 420, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?width=960&crop=smart&auto=webp&s=9dfe899be9e880bdbc4ea019a138da5a50b80e50', 'width': 960}, {'height': 472, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?width=1080&crop=smart&auto=webp&s=10e4626f03094f13cced629379b8bab266c01d06', 'width': 1080}], 'source': {'height': 700, 'url': 'https://preview.redd.it/1zjivv8ags0f1.jpeg?auto=webp&s=e311a41c6718076b6b66fa21a158a9f76b91f8b3', 'width': 1600}, 'variants': {}}]}
My Local LLM Chat Interface: Current Progress and Vision
80
Hello everyone, my first reddit post ever! I’ve been building a fully local, offline LLM chat interface designed around actual daily use, fast performance, and a focus on clean, customizable design. It started as a personal challenge and has grown into something I use constantly and plan to evolve much further. Here’s what I’ve implemented so far: * Complete markdown renderer for clean message formatting * Chat minimization to keep long conversations tidy * In-chat search to quickly find messages by keyword * Text-to-speech (TTS) support for LLM responses * User message editing and forking * Switching between different versions of user and LLM messages * Experimental quoting system for LLM outputs (early stage) * Polished front-end with custom theme and color tuning * Multiple theme switching for different moods and use cases * Beautifully crafted UI with attention to user experience * Glassmorphism effects for a modern, layered visual look * Initial memory feature to help the LLM retain context across interactions, in future I will make it global and local memory as well The current version feels fast, snappy, and very enjoyable to use. But I’m only at the start. The next phase will focus on expanding real functionality: integrating task-oriented agents, adding deep document research and knowledge exploration, enabling thinking UIs and visual canvases, providing code analysis and explanations, introducing full voice-driven control with fallback to text, and even allowing generation of audio summaries or podcast-like outputs from chats and documents. The aim is to turn this into a complete local research, thinking, and workflow assistant. I built this for myself, but if people show interest, I’ll consider releasing it. I genuinely want feedback: what am I missing, what could be better, and which features would you prioritize if you were using something like this?
2025-05-14T18:18:12
https://v.redd.it/0az6hifchs0f1
Desperate_Rub_1352
v.redd.it
1970-01-01T00:00:00
0
{}
1kmmdm9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0az6hifchs0f1/DASHPlaylist.mpd?a=1749838705%2CZGNlNjFlMTMyNmJjNWEwM2E1YWMyY2NjYTMzODYyYWI0OTIwN2E3MTg4YjA3NjIzMTk1MjU1ZTUxNGRiMDA1MQ%3D%3D&v=1&f=sd', 'duration': 48, 'fallback_url': 'https://v.redd.it/0az6hifchs0f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/0az6hifchs0f1/HLSPlaylist.m3u8?a=1749838705%2CODVkODE4MWRmZGE2NDY4MmI0ZDNhYWIyOGM2NjcxMjBjNzg5NThhMzJmZDUwMGQ4ZmIwM2E4OGRhOWQ5NWRkYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0az6hifchs0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1906}}
t3_1kmmdm9
/r/LocalLLaMA/comments/1kmmdm9/my_local_llm_chat_interface_current_progress_and/
false
false
https://external-preview…dd3e0a0210b8a9e1
80
{'enabled': False, 'images': [{'id': 'bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=108&crop=smart&format=pjpg&auto=webp&s=2787f09a4cfec9c8d89a45e369a36e7f27a7ea4f', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=216&crop=smart&format=pjpg&auto=webp&s=dced03249c163d82111351346fac70fce3b5cbdf', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=320&crop=smart&format=pjpg&auto=webp&s=a564ea5fbb844d7ae03668f7fbd57f0b76b4ab62', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=640&crop=smart&format=pjpg&auto=webp&s=ef940ced3c30819943841c44aa7c02916df9f77e', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=960&crop=smart&format=pjpg&auto=webp&s=f8c42ca3cc27592df97fe4389dea9c8bcf380a46', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?width=1080&crop=smart&format=pjpg&auto=webp&s=305eb07446398c55f83f3555d65a056c316b1345', 'width': 1080}], 'source': {'height': 1624, 'url': 'https://external-preview.redd.it/bHduMTFnZmNoczBmMQXdlzCcXrRSF6QNtR-5LXsr8naKnLiD8pPE0dCTLYFs.png?format=pjpg&auto=webp&s=e4d5cd383bec70a46f92b7abd217ac032b170b03', 'width': 2866}, 'variants': {}}]}
Why do so many people not recommend LLM Studio?
17
Curious why so many people do not like this application or prefer an alternative? What's wrong with it?
2025-05-14T18:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1kmmo48/why_do_so_many_people_not_recommend_llm_studio/
intimate_sniffer69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmmo48
false
null
t3_1kmmo48
/r/LocalLLaMA/comments/1kmmo48/why_do_so_many_people_not_recommend_llm_studio/
false
false
self
17
null
Base Models That Can Still Complete Text in an Entertaining Way
82
Back during the LLaMa-1 to Mistral-7B era, it used to be a lot of fun to just download a base model, give it a ridiculous prompt, and let it autocomplete. The results were often less dry and more entertaining than asking the corresponding instruct models to do it. But today's models, even the base ones, seem to be heavily trained on synthetic, dry, reasoning-heavy data, and that approach just doesn't work anymore. Do you know of any current models (or maybe fine-tunes) that still work well for this purpose?
2025-05-14T18:32:10
https://www.reddit.com/r/LocalLLaMA/comments/1kmmq6d/base_models_that_can_still_complete_text_in_an/
Soft-Ad4690
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmmq6d
false
null
t3_1kmmq6d
/r/LocalLLaMA/comments/1kmmq6d/base_models_that_can_still_complete_text_in_an/
false
false
self
82
null
AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
135
Today, Google announced AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas. AlphaEvolve enhanced the efficiency of Google's data centers, chip design and AI training processes — including training the large language models underlying **AlphaEvolve itself**. It has also helped design faster matrix multiplication algorithms and find new solutions to open mathematical problems, showing incredible promise for application across many areas. Blog post: https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/ Paper: https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/AlphaEvolve.pdf
2025-05-14T19:14:49
https://i.redd.it/pj1r83skrs0f1.jpeg
NewtMurky
i.redd.it
1970-01-01T00:00:00
0
{}
1kmnsol
false
null
t3_1kmnsol
/r/LocalLLaMA/comments/1kmnsol/alphaevolve_a_geminipowered_coding_agent_for/
false
false
https://a.thumbs.redditm…L_Axn1vqPz38.jpg
135
{'enabled': True, 'images': [{'id': '54-i1F4Xk7efV144Ff_fU_TjAd4MXRQTzSSE2rxjLUY', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jpeg?width=108&crop=smart&auto=webp&s=b72d12a476de535c70badaa22b582f76fb6ac87f', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jpeg?width=216&crop=smart&auto=webp&s=b74cc8a20b7f7468196cef518bf716e9af93bfd4', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jpeg?width=320&crop=smart&auto=webp&s=c4bdeb4a40f87c0a7d231fc770be8f8dc576d907', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jpeg?width=640&crop=smart&auto=webp&s=7100be7b0bc1cceb9f30d390f47de6dcbfbabcec', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jpeg?width=960&crop=smart&auto=webp&s=f6deb2b74dcf8c6202903fe46844a4218337882e', 'width': 960}], 'source': {'height': 660, 'url': 'https://preview.redd.it/pj1r83skrs0f1.jpeg?auto=webp&s=5646a277cff85a63713df58d656cc0db69bfc3df', 'width': 1056}, 'variants': {}}]}
JoyCaption Beta One: an image captioning Visual Language Model (VLM) built from the ground up as a free, open, and uncensored model for the community to use in training Diffusion models
1
[removed]
2025-05-14T19:29:05
https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava
noobitom
huggingface.co
1970-01-01T00:00:00
0
{}
1kmo5ei
false
null
t3_1kmo5ei
/r/LocalLLaMA/comments/1kmo5ei/joycaption_beta_one_an_image_captioning_visual/
false
false
https://b.thumbs.redditm…B1QNmVuDq6UY.jpg
1
{'enabled': False, 'images': [{'id': 'TItd7P6JybMVSD_-iwYroQbp11rVVr7fbZLrwLuYaVo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?width=108&crop=smart&auto=webp&s=14c7526bc8e2dd16123f224037d628959e477eb0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?width=216&crop=smart&auto=webp&s=363454f2a70320b2f91c79ea201832e7946dc09a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?width=320&crop=smart&auto=webp&s=18c419fb055c5b452659c7b522cfd387371867c0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?width=640&crop=smart&auto=webp&s=8c4c2f97909b8859f55af0f2943668dd70600a43', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?width=960&crop=smart&auto=webp&s=ebd6c31605dcad04b3c96151faa3383c6544b353', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?width=1080&crop=smart&auto=webp&s=10eb89a57a3f2bf4a9d87e7ad958b46a4446481d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2SJ8hUsEO9g2sLHXwu09e-Mxm2KWEgn2ihEMz1ARvrM.jpg?auto=webp&s=c69bfcca132c057d1d4d980ad4ad78a98756c57b', 'width': 1200}, 'variants': {}}]}
Anyone running a 5000 series GPU in a Linux VM for LLM/SD with a Linux host (e.g. Proxmox)? Does shutting down your VM crash your host?
0
I have a 5070 Ti that is passed through into a Fedora Server 42 VM. I want to run some LLMs and maybe ComfyUI in it. I had to install the open-source Nvidia driver because the older proprietary one doesn't support the newer GPUs anymore. Anyway, I followed Fedora's driver install guide and installed the driver successfully. However, when I shut down the VM, the GPU does not seem to reset properly and it freezes the VM host. I have to reboot the host to recover the GPU. Does anyone with a 5000 series GPU have this problem as well? If not, could you share your setup/configuration?
2025-05-14T19:43:25
https://www.reddit.com/r/LocalLLaMA/comments/1kmoia9/anyone_running_a_5000_series_gpu_in_a_linux_vm/
regunakyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmoia9
false
null
t3_1kmoia9
/r/LocalLLaMA/comments/1kmoia9/anyone_running_a_5000_series_gpu_in_a_linux_vm/
false
false
self
0
null
TRAIL: New Benchmark Showing how LLMs are Challenged at Debugging/Analyzing Agent Traces + Percival: Patronus AI's Companion for Debugging Agentic Traces that outdoes baselines on TRAIL
0
Hi everyone! We're builders and researchers at [Patronus AI](https://www.patronus.ai/) and we've just released both a challenging eval benchmark and research named TRAIL for LLM-driven agentic trace analysis + debugging AND our very own specialized solution called [Percival](https://www.linkedin.com/posts/anandnkannappan_today-im-super-excited-to-introduce-ugcPost-7328482072438714368-k3Sd) that's an AI companion to debug agent traces and outdoes the baselines on TRAIL ## 📊 TRAIL Benchmark Our paper "[TRAIL: Trace Reasoning and Agentic Issue Localization](https://arxiv.org/abs/2505.08638)" (now on arXiv) introduces a new taxonomy + rich human-annotated dataset for LLM-based observability and debugging of agentic traces: * 148 human-annotated traces from GAIA & SWE-Bench with 800+ unique errors (each trace requiring ~110-120 minutes of expert annotation) * A comprehensive taxonomy spanning reasoning, execution, and planning failures * First benchmark designed to test LLMs' ability to provide observability for agent systems that has extensive human-annotated instances from an ecologically valid setting [GAIA/SWEBench + open telemetry traces] ### Technical Challenges: * TRAIL traces demand substantial context window capacity: * TRAIL (GAIA) traces average 286K tokens (max 7.5M tokens) * TRAIL (SWE-Bench) traces average 616K tokens (max 2.05M tokens) * Even with 1M token context windows, many models cannot process all traces * Typical output generation requires ~1.2K tokens on average (max 5.4K) * Both Llama-4 models are challenged by the benchmark too, performing very poorly at localizing errors in spite of the very long context window (10M) * Even leading LLMs are challenged by the task: * Best performer (Gemini-2.5-Pro) achieves only 18.3% joint accuracy on TRAIL (GAIA) * Claude-3.7-Sonnet manages just 4.7% joint accuracy * Performance strongly correlated with reasoning capability * Models show complex category-specific strengths (e.g., Gemini-2.5-Pro excels at detecting Goal Deviation (70% F1) and Poor Information Retrieval (50% F1)) ## ♞ Percival: AI Companion for Agent Debugging Following this research, we've developed [Percival](https://docs.patronus.ai/docs/percival/percival), an AI companion for every AI team that needs to debug and optimize their AI outputs: * Outperforms all the baselines from TRAIL on agent trace analysis (Mean Joint accuracy goes up from 0.11 using vanilla Gemini-2.5-Pro to 0.17 with Percival) * Has a specialized approach to ingest and process traces * Employs both episodic and semantic memory components for persistent debugging * Identifies critical issues like resource abuse, context handling failures, and planning bugs thanks to its rich taxonomy * Since Percival is OpenTelemetry + OpenInference compatible, it supports Smolagents, Pydantic AI, OpenAI Agent SDK, Langchain, CrewAI, Custom OpenAI and Custom Anthropic client frameworks out of the box! Percival has also been covered by [VentureBeat](https://venturebeat.com/ai/patronus-ai-debuts-percival-to-help-enterprises-monitor-failing-ai-agents-at-scale/) among other sources in the past few hours ### Why This Matters: As LLMs increasingly operate as tool-driven, multi-turn agents, visibility into their execution becomes critical. TRAIL demonstrates the significant gap between current capabilities and the needs of practical agent debugging, while providing a valuable dataset for advancing LLM-based observability research.
The benchmark is fully open-source (MIT Licensed) - check out our [GitHub repo](https://github.com/patronus-ai/trail-benchmark), [HuggingFace dataset](https://huggingface.co/datasets/PatronusAI/TRAIL), [leaderboard](https://huggingface.co/spaces/PatronusAI/TRAIL), and [arXiv paper](https://arxiv.org/abs/2505.08638). We're excited to hear what LLM-driven approaches emerge to improve on TRAIL, and how future LLMs with longer context and stronger reasoning perform on it. We're also actively looking for developers and builders working with agentic systems to try out [Percival](https://docs.patronus.ai/docs/percival/percival) and share feedback, including all the vivacious LocalLLaMA LLM/AI engineers, researchers and enthusiasts here!!
2025-05-14T19:44:08
https://www.reddit.com/r/LocalLLaMA/comments/1kmoixa/trail_new_benchmark_showing_how_llms_are/
Ganglion_Varicose
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmoixa
false
null
t3_1kmoixa
/r/LocalLLaMA/comments/1kmoixa/trail_new_benchmark_showing_how_llms_are/
false
false
self
0
{'enabled': False, 'images': [{'id': 'hNx_56-i8Rv0ws9R7PldQYV1dU6UXpI4OH8ZzXyfPVY', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?width=108&crop=smart&auto=webp&s=9c98e9f0f73626384e3126f6fc08c13b9a548bdb', 'width': 108}, {'height': 161, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?width=216&crop=smart&auto=webp&s=fad47ff6495efe55f551149fcf56317a5791f943', 'width': 216}, {'height': 239, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?width=320&crop=smart&auto=webp&s=895549241449210dbd48719094b16d5b052b49f8', 'width': 320}, {'height': 478, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?width=640&crop=smart&auto=webp&s=11a42afcef9e0c6858bbe0bd74dfca1327dfefa8', 'width': 640}, {'height': 717, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?width=960&crop=smart&auto=webp&s=f7d41ef65c89e1a0b89b713304c50df00f5267e4', 'width': 960}, {'height': 807, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?width=1080&crop=smart&auto=webp&s=d6c220dca9e5a9d8dfc69e4ebde8033f830eb4b6', 'width': 1080}], 'source': {'height': 1614, 'url': 'https://external-preview.redd.it/_Clg7oeqP2GJrCrJNlGAgwzBPI96CgDuEgDXIkNEQDo.jpg?auto=webp&s=56614f7a0bcb042e2d953091bed09c9dd046210c', 'width': 2160}, 'variants': {}}]}
Dual AMD Mi50 Inference and Benchmarks
1
[removed]
2025-05-14T19:55:59
https://www.reddit.com/r/LocalLLaMA/comments/1kmot8u/dual_amd_mi50_inference_and_benchmarks/
0seba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmot8u
false
null
t3_1kmot8u
/r/LocalLLaMA/comments/1kmot8u/dual_amd_mi50_inference_and_benchmarks/
false
false
self
1
null
LLaDA-8B-Tools: A diffusion language model fine-tuned for tool use
1
[removed]
2025-05-14T20:27:00
https://v.redd.it/m9kszt3g4t0f1
ProximileLLC
v.redd.it
1970-01-01T00:00:00
0
{}
1kmpkyv
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/m9kszt3g4t0f1/DASHPlaylist.mpd?a=1749846435%2CNWZjNTQxOWMwMzQ3YjE4ZGUwYzI0YjY4ZjRjMzQ5MjBmNjIxNGE2NGVhZmE0OTE5MzU3MDRhNDlhN2Q1M2IzZA%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/m9kszt3g4t0f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/m9kszt3g4t0f1/HLSPlaylist.m3u8?a=1749846435%2CZmQ4ZTkzZTllODQ4NTE0YjY2ZjJkMzQ2ZDAyOGQzZGJlNDBjNDQxYTE2MGY4NTVjM2M2Mzg3YWFmMTc0MDEzZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m9kszt3g4t0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1146}}
t3_1kmpkyv
/r/LocalLLaMA/comments/1kmpkyv/llada8btools_a_diffusion_language_model_finetuned/
false
false
https://external-preview…fdb0fefe652b46d9
1
{'enabled': False, 'images': [{'id': 'Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?width=108&crop=smart&format=pjpg&auto=webp&s=a3be9f6035989c21036e16d4c67b3f3f31871174', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?width=216&crop=smart&format=pjpg&auto=webp&s=8eff4fcdbd2fccffce1cb3b685a4ca7f9c261314', 'width': 216}, {'height': 201, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?width=320&crop=smart&format=pjpg&auto=webp&s=975ddcae139b41384c1c85feddba0796b7b03b6d', 'width': 320}, {'height': 402, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?width=640&crop=smart&format=pjpg&auto=webp&s=60e3c5d18035fb6c40b7722048bb747d4a3a31b4', 'width': 640}, {'height': 603, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?width=960&crop=smart&format=pjpg&auto=webp&s=519ec4784a822efba8e920aee10a355160b95e8b', 'width': 960}, {'height': 678, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ec34186394fcba51cdfa1d7af16619fe46827602', 'width': 1080}], 'source': {'height': 954, 'url': 'https://external-preview.redd.it/Z2w1bm16M2c0dDBmMXRO5O1bc_ZKL3B3043MSrRh2JI_XHyuWSuizLL5YitN.png?format=pjpg&auto=webp&s=f358b5e51f458476cb8beaaf415f538b971712c5', 'width': 1518}, 'variants': {}}]}
Is it possible to tell aider just to use the LLM currently loaded in Ollama?
0
I have an LLM (Qwen3) running in Ollama. Is there a way to tell aider to just use the LLM that's already loaded?
2025-05-14T20:27:46
https://www.reddit.com/r/LocalLLaMA/comments/1kmplmq/is_it_possible_to_tell_aider_just_to_use_the_llm/
jpummill2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmplmq
false
null
t3_1kmplmq
/r/LocalLLaMA/comments/1kmplmq/is_it_possible_to_tell_aider_just_to_use_the_llm/
false
false
self
0
null
We need llama-4-maverick-03-26-experimental.
28
Hey everyone, I've been spending a lot of time looking into the differences between the Llama-4 Maverick we got and the `llama-4-maverick-03-26-experimental` version, and honestly, I'm starting to feel like we seriously missed out. From my own personal testing with the `03-26-experimental`, the emotional intelligence is genuinely striking. It feels more nuanced, more understanding, and less like it is just pattern-matching empathy. It's a qualitative difference that really stands out. And it's not just my anecdotal experience. This post ([https://www.reddit.com/r/LocalLLaMA/comments/1ju9s1c/the_experimental_version_of_llama4_maverick_on/](https://www.reddit.com/r/LocalLLaMA/comments/1ju9s1c/the_experimental_version_of_llama4_maverick_on/)) highlights how the LMArena version is significantly more creative and a better coder than the model that eventually got the official release. Now, I know the counter-argument: "Oh, it was just better at 'glazing' or producing overly long, agreeable responses." But I don't think that tells the whole story. If you look at the LMSys blog post on sentiment control ([https://blog.lmarena.ai/blog/2025/sentiment-control/](https://blog.lmarena.ai/blog/2025/sentiment-control/)), it's pretty clear. When they account for the verbosity and "glazing," the `llama-4-maverick-03-26-experimental` model *still* significantly outperforms the released version. In their charts, the experimental model is shown as being above Gemma 3 27B, while the released version actually dips *below* it. That's a difference in underlying capability, not just surface-level agreeableness. And then there's the infamous "ball in the heptagon" test. The released Llama-4 Maverick was a complete trainwreck on this, as painfully detailed here: ([https://www.reddit.com/r/LocalLLaMA/comments/1jsl37d/im_incredibly_disappointed_with_llama4/](https://www.reddit.com/r/LocalLLaMA/comments/1jsl37d/im_incredibly_disappointed_with_llama4/)). It was a real letdown for many. But the `03-26-experimental` version? It actually handles the heptagon test surprisingly well, demonstrating a level of coding the released version just doesn't seem to have. [Sorry if it seems slow at the start. That isn't in the actual thing, it's just the webm -> gif conversion.](https://i.redd.it/o0p3m0kj8t0f1.gif) So, what gives? It feels like the `llama-4-maverick-03-26-experimental` was a more aligned model that actually possessed superior core capabilities in several key areas. While the released version might be more polished in some respects, it seems to have worse actual intelligence and usefulness for more complex tasks. I really hope there's a chance we can see this experimental version released, or at least get more insight into why such a capable version was seemingly left behind. It feels like the community is missing out on a much better model. What are your thoughts? Has anyone else tested or seen results from `llama-4-maverick-03-26-experimental` that align with this? (It's still up on LMArena for direct chat.) **TL;DR:** The `llama-4-maverick-03-26-experimental` version seems demonstrably better in emotional intelligence, creativity, coding, and even raw benchmark performance (once "glazing" is accounted for) and reasoning (heptagon test) than the released Llama-4 Maverick. We want access to *that* model!
2025-05-14T20:52:37
https://www.reddit.com/r/LocalLLaMA/comments/1kmq7gx/we_need_llama4maverick0326experimental/
PuppyGirlEfina
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmq7gx
false
null
t3_1kmq7gx
/r/LocalLLaMA/comments/1kmq7gx/we_need_llama4maverick0326experimental/
false
false
https://b.thumbs.redditm…gGa4xZQnPkAY.jpg
28
null
Anyone running a 192GB DDR5 build?
1
[removed]
2025-05-14T21:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1kmqe0q/anyone_running_a_192gb_ddr5_build/
lukinhasb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmqe0q
false
null
t3_1kmqe0q
/r/LocalLLaMA/comments/1kmqe0q/anyone_running_a_192gb_ddr5_build/
false
false
self
1
null
Nous Psyche, distributed training of a new 40B base model
62
2025-05-14T21:09:52
https://psyche.network/runs/consilience-40b-1/0
discr
psyche.network
1970-01-01T00:00:00
0
{}
1kmqmr8
false
null
t3_1kmqmr8
/r/LocalLLaMA/comments/1kmqmr8/nous_psyche_distributed_training_of_a_new_40b/
false
false
default
62
null
I finally got the hardware together what model should I run ?
1
[removed]
2025-05-14T21:21:52
https://www.reddit.com/r/LocalLLaMA/comments/1kmqxk8/i_finally_got_the_hardware_together_what_model/
Loose-Bet9409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmqxk8
false
null
t3_1kmqxk8
/r/LocalLLaMA/comments/1kmqxk8/i_finally_got_the_hardware_together_what_model/
false
false
self
1
null
[Tool] FlexAudioPrint: local audio transcription + dialogue formatting using Whisper + gemma3:12b via Ollama
8
Hey everyone! I’ve just released an update to [**FlexAudioPrint**](https://github.com/loglux/FlexAudioPrint), a local-first audio transcription app that now includes **formatted dialogue output** using a local model via Ollama (currently `gemma3:12b`). # 🔧 Features: * 🎙️ Transcribes audio files using OpenAI Whisper (all model sizes supported) * 💬 **New**: Formats raw transcripts into **readable, labelled dialogue scripts** – Adds speaker labels (e.g., Peter, Sarah) – Fixes punctuation & line breaks – *Italicises non-verbal cues (like \[laughter\])* * 📄 Generates `.srt` subtitles * 🧠 Powered by `gemma3:12b` through **Ollama** — no cloud, no OpenAI API needed * 🖼️ Simple Gradio interface + CLI support * 🆓 100% local, open source, no accounts or tracking # 🔗 GitHub: 👉 [https://github.com/loglux/FlexAudioPrint](https://github.com/loglux/FlexAudioPrint) Let me know what you think, and feel free to contribute!
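For anyone curious what a Whisper-plus-Ollama pipeline like this looks like in practice, here is a minimal sketch of the same idea. This is not FlexAudioPrint's actual code; it assumes the `openai-whisper` and `ollama` Python packages and a local Ollama server with `gemma3:12b` already pulled, and the file name is just an example.

```python
# Minimal sketch of the Whisper + Ollama idea (not FlexAudioPrint's actual code).
# Assumes `pip install openai-whisper ollama` and a local Ollama server with
# gemma3:12b pulled; the audio file name below is hypothetical.
import whisper
import ollama

def transcribe_and_format(audio_path: str, whisper_size: str = "base") -> str:
    asr = whisper.load_model(whisper_size)          # local Whisper ASR model
    raw_text = asr.transcribe(audio_path)["text"]   # plain, unformatted transcript

    prompt = (
        "Rewrite this raw transcript as a readable dialogue script. "
        "Add speaker labels, fix punctuation, and italicise non-verbal cues.\n\n"
        + raw_text
    )
    reply = ollama.chat(
        model="gemma3:12b",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["message"]["content"]

if __name__ == "__main__":
    print(transcribe_and_format("interview.mp3"))
```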
2025-05-14T21:27:20
https://www.reddit.com/r/LocalLLaMA/comments/1kmr28a/tool_flexaudioprint_local_audio_transcription/
loglux
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmr28a
false
null
t3_1kmr28a
/r/LocalLLaMA/comments/1kmr28a/tool_flexaudioprint_local_audio_transcription/
false
false
self
8
{'enabled': False, 'images': [{'id': 'EyVuRfQJaszN47mAze4zqmq5aDylOea5IL5Hg96we9U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?width=108&crop=smart&auto=webp&s=3ee4caa7f7a999df673f854e4d20b4cc56f09b81', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?width=216&crop=smart&auto=webp&s=2d60e948d9fbbb5c44bc0c356b6fd299cb887bb6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?width=320&crop=smart&auto=webp&s=d6314c93778f1a828a5be6d636b6046dd28ca859', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?width=640&crop=smart&auto=webp&s=dde157aac765f6b6b57ff45165ec53a4de019110', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?width=960&crop=smart&auto=webp&s=3d48e357b5e04aecd68741f4a917115e617617f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?width=1080&crop=smart&auto=webp&s=1ee68f945733fa4db4485dd8caa21489d006deed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1qZZ-yQwkfHsQEwVkdaMCdM3hnZChQfvubhpGvC08JM.jpg?auto=webp&s=e129394f6a2c868b115b5a3eac298f38c67ad303', 'width': 1200}, 'variants': {}}]}
Are you using AI Gateway in your GenAI stack? Either for personal use or at work?
1
Curious to hear your thoughts — have you felt the need for an AI Gateway layer while building GenAI applications? Model switching has been a real pain point for me lately, but I’m still unsure if investing in a Gateway makes sense. It obviously comes with a broader set of features, but I’m trying to gauge how useful that actually is in practice. Would love to know if your team is using something similar and finding it valuable. I’m currently evaluating a few options — LiteLLM, Portkey, and TrueFoundry — but also debating whether it’s worth building something in-house instead.
2025-05-14T21:36:51
https://www.reddit.com/r/LocalLLaMA/comments/1kmragz/are_you_using_ai_gateway_in_your_genai_stack/
Difficult_Ad_3903
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmragz
false
null
t3_1kmragz
/r/LocalLLaMA/comments/1kmragz/are_you_using_ai_gateway_in_your_genai_stack/
false
false
self
1
null
MLA optimization with FlashAttention for llama.cpp: MLA + FA now only uses K-cache - 47% saving on KV-cache size
134
[MLA + FA now only uses K-cache - 47% saving on KV-cache size (only for use with #13435 for now) by jukofyork · Pull Request #13529 · ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/pull/13529) `llama_kv_cache_unified: kv_size = 163840, type_k = 'f16', type_v = 'f16', n_layer = 61, can_shift = 0, padding = 256` `llama_kv_cache_unified: CUDA0 KV buffer size = 10980.00 MiB` `llama_kv_cache_unified: KV self size = 10980.00 MiB, K (f16): 10980.00 MiB, V (f16): 0.00 MiB` The full context of 160k tokens now takes up less than 11GB without kquants
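As a hedged back-of-the-envelope sketch, the log above can be sanity-checked by hand: assuming DeepSeek-V3/R1-style MLA dimensions (61 layers, a 512-wide compressed KV latent plus a 64-wide decoupled RoPE key part, f16 cache), the K-only cache size reproduces the 10980 MiB figure exactly. Those dimensions are my assumptions, not something stated in the PR.

```python
# Hypothetical sanity check of the K-only MLA cache size from the log above.
# All model dimensions are assumed (DeepSeek-V3/R1-style MLA), not taken from the PR.
kv_size   = 163840   # context length reported in the log
n_layer   = 61       # assumed layer count
kv_lora   = 512      # assumed compressed KV latent width
rope_dim  = 64       # assumed decoupled RoPE key width
bytes_f16 = 2        # type_k = 'f16'

k_cache_mib = kv_size * n_layer * (kv_lora + rope_dim) * bytes_f16 / 1024**2
print(f"{k_cache_mib:.2f} MiB")  # -> 10980.00 MiB, matching the log (V cache: 0 MiB)
```

Under the same assumptions, the quoted ~47% saving is consistent with dropping a 512-wide V cache from a previous 576 + 512 layout, since 512 / 1088 ≈ 47%.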
2025-05-14T21:42:55
https://www.reddit.com/r/LocalLLaMA/comments/1kmrfoo/mla_optimization_with_flashattention_for/
shing3232
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmrfoo
false
null
t3_1kmrfoo
/r/LocalLLaMA/comments/1kmrfoo/mla_optimization_with_flashattention_for/
false
false
self
134
null
taken an LLM and built layered determinism into it
1
[removed]
2025-05-14T21:57:58
https://www.reddit.com/gallery/1kmrsg2
Sad_Perception_1685
reddit.com
1970-01-01T00:00:00
0
{}
1kmrsg2
false
null
t3_1kmrsg2
/r/LocalLLaMA/comments/1kmrsg2/taken_an_llm_and_built_layered_determinism_into/
false
false
https://b.thumbs.redditm…bjS-FhVCQ4DA.jpg
1
null
The Psyche Network Decentralized Infrastructure Architecture - Nous Research
4
TL;DR from the site: "Psyche is an open infrastructure that democratizes AI development by decentralizing training across underutilized hardware. Building on DisTrO and its predecessor DeMo, Psyche reduces data transfer by several orders of magnitude, making distributed training practical. Coordination happens on the Solana blockchain, ensuring a fault-tolerant and censorship-resistant network." [GitHub](https://github.com/PsycheFoundation/psyche)
2025-05-14T21:58:03
https://nousresearch.com/nous-psyche/
Junior_Ad315
nousresearch.com
1970-01-01T00:00:00
0
{}
1kmrsic
false
null
t3_1kmrsic
/r/LocalLLaMA/comments/1kmrsic/the_psyche_network_decentralized_infrastructure/
false
false
https://b.thumbs.redditm…Nw92yCi9RsOM.jpg
4
{'enabled': False, 'images': [{'id': 'CETQXRVwjAaaWNDaD_s_G3tYC0oWbtRHtb3lsStlPaU', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?width=108&crop=smart&auto=webp&s=391e49f194587f64b1a4812cb414a40129352f46', 'width': 108}, {'height': 204, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?width=216&crop=smart&auto=webp&s=213bf3c1d4204965ac74f3305dcd3c01d13d9c39', 'width': 216}, {'height': 302, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?width=320&crop=smart&auto=webp&s=d9847ae8fafeb5a82f3f18a2cd445217a4b08a5d', 'width': 320}, {'height': 605, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?width=640&crop=smart&auto=webp&s=91faeba38b1862d6ed7e87f5f8d67af4aa3c104a', 'width': 640}, {'height': 908, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?width=960&crop=smart&auto=webp&s=58c7c99215561501813116bb36d1aba285fa40dd', 'width': 960}, {'height': 1022, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?width=1080&crop=smart&auto=webp&s=4f4acb51cf8708abf3d47548c65c25fd6b7f7130', 'width': 1080}], 'source': {'height': 2423, 'url': 'https://external-preview.redd.it/KnAjTVWkcbhaPMehdLgpMEdIsyR_K5uWQ5jNBOrdb6E.jpg?auto=webp&s=50f99dba462bd539d0768b1a5fee7e9ba28a6b12', 'width': 2560}, 'variants': {}}]}
Visual Studio/Cursor type experience using local llm?
4
Has anyone been able to use a local LLM that works like Cursor/ VS copilot? I tried connecting an ollama instance to Zed and Cline and the results haven’t been that great, esp multiple file edits. Any tips?
2025-05-14T22:24:12
https://www.reddit.com/r/LocalLLaMA/comments/1kmsdtz/visual_studiocursor_type_experience_using_local/
CSlov23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmsdtz
false
null
t3_1kmsdtz
/r/LocalLLaMA/comments/1kmsdtz/visual_studiocursor_type_experience_using_local/
false
false
self
4
null
HELP PLS Can anyone tell me if the new AI is my software?? I'm still on the GPT phone app with 4o, sometimes used GPT 4o mini high on PC. Did I get baited by a hallucinating AI?
1
[removed]
2025-05-14T22:37:47
https://www.reddit.com/r/LocalLLaMA/comments/1kmsoe5/help_pls_can_anyone_tell_me_if_the_new_ki_is_my/
Bac4rdi1997
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kmsoe5
false
null
t3_1kmsoe5
/r/LocalLLaMA/comments/1kmsoe5/help_pls_can_anyone_tell_me_if_the_new_ki_is_my/
false
false
self
1
null
The era of local Computer-Use AI Agents is here.
3
Meet UI-TARS-1.5-7B-6bit, now running natively on Apple Silicon via MLX. The video shows UI-TARS-1.5-7B-6bit completing the prompt "draw a line from the red circle to the green circle, then open reddit in a new tab" running entirely on a MacBook. The video is just a replay; during actual usage it took between 15s and 50s per turn with 720p screenshots (on average ~30s per turn), and this was with many apps open so it had to fight for memory at times. Built using C/ua: [**https://github.com/trycua/cua**](https://github.com/trycua/cua) This is just the 7-billion-parameter model. Expect much more with the 72 billion. The future is indeed here.
2025-05-14T22:48:36
https://v.redd.it/nnjtc8uctt0f1
sandropuppo
v.redd.it
1970-01-01T00:00:00
0
{}
1kmswrp
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/nnjtc8uctt0f1/DASHPlaylist.mpd?a=1749854928%2CY2FiMjE4NTM2NmZjNTNlZjkyMTJlZTQwYTI5YTQ1NjM1NmFhMDc4NjA1NWY5ZWU4NWIwNzQ3YTUzNDcyYjk0OQ%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/nnjtc8uctt0f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/nnjtc8uctt0f1/HLSPlaylist.m3u8?a=1749854928%2COTViYjc4MWEwMjM5M2ZiNmM3OTE3YTRjMWNmOWM2MzI2NDlmNTcwMmIwMDhiNWI2MmYwYjUxMTYzMWI4MmVkYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nnjtc8uctt0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1152}}
t3_1kmswrp
/r/LocalLLaMA/comments/1kmswrp/the_era_of_local_computeruse_ai_agents_is_here/
false
false
https://external-preview…7c83206e33d4860c
3
{'enabled': False, 'images': [{'id': 'NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?width=108&crop=smart&format=pjpg&auto=webp&s=c9d471530a88a4e96f9e65e6e84c73d61e2caf9e', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?width=216&crop=smart&format=pjpg&auto=webp&s=cee89ed8d9e723af62bb6f7972d7775448b2575a', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?width=320&crop=smart&format=pjpg&auto=webp&s=997e018f608d3b90fa7a805e920b9f4aa45fb922', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?width=640&crop=smart&format=pjpg&auto=webp&s=2167c6a947df11774bbb15af6ac946262048404c', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?width=960&crop=smart&format=pjpg&auto=webp&s=a187c14d5d1ff30144501bc4fd643d8b81b802e5', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=19a6e851bd9b107e73ca6ce6019c75aa482db13b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/NWYyN3JidWN0dDBmMc1_Uw_Blv-bDsLMvMjjAPFg5-FuU5GLXJGCr90pzTxm.png?format=pjpg&auto=webp&s=7e06d6de4a52134b5a1721b6295f6e2fe9d0e720', 'width': 1152}, 'variants': {}}]}