title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
40k
created
timestamp[ns]date
2023-04-01 04:30:41
2025-06-30 03:16:29
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2025-06-26 17:30:18
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Many small evals are better than one big eval [techniques]
28
Hi everyone! I've been building AI products for 9 years (at my own startup, then at Apple, now at a second startup) and learned a lot along the way. I’ve been talking to a bunch of folks about evals lately, and I’ve realized most people aren’t creating them because they don’t know how to get started. **TL;DR** You probably should setup your project for many small evals, and not try to create one big eval for product quality. If you can generate a new small/focused eval in under 10 mins, your team will create them when they spot issues, and your quality will get much better over time. At a high level, here’s why this works: * The easier it is to add an eval, the more you’ll do it, and that improves quality. Small and focused evals are much easier to add than large multi-focus evals. * Products change over time, so big evals are almost impossible to keep up to date. * Small evals help you pinpoint errors, which makes them easier to fix. * Different team members bring unique insights (PM, Eng, QA, DS, etc). Letting them all contribute to evals leads to higher quality AI systems. # Example Here’s an example of what I mean by “many small evals”. You can see the small evals are a lot more interesting than just the final total (+4%). You can break-out product goals or issues, track them separately and see exactly what breaks and when (kinda like unit tests + CI in software). In this case looking at overall alone (+4%), would hide really critical regressions (-18% in one area). |Many Small Eval Scorecard|Comparing Models| |:-|:-| |Clarify unclear requests|93% (+9%)| |Refuse to discuss competitors|100% (+1%)| |Reject toxic requests|100% (even)| |Offer rebate before cancelation|72% (-18%)| |Follow brand styleguide|85% (-1%)| |Only link to official docs|99% (even)| |Avoid 'clickbait' titles|96% (+5%)| |Knowledge base retrieval recall|94% (+7%)| |Overall|94% (+4%)| The cost of getting started is also much lower: you can add small evals here and there. Over time you’ll build a comprehensive eval suite. # How to get started * **Setup a good eval tool**: to be fast an easy you need 1) synthetic eval data gen, 2) intuitive UI, 3) human preferences baselining, 4) rapid side-by-side comparisons of run-methods. * **Teach your team to build evals**: a quick 30 mins is enough if your tool is intuitive. * **Create a culture of evaluation**: continually encourage folks to create evals when they spot quality issues or fix bugs. I've been building a free and open tool called \~[Kiln](https://getkiln.ai/)\~ which makes this process easy. It includes: * Create new evals in a few clicks: LLM-as-Judge and G-Eval * Synthetic data gen for eval and golden datasets * Baseline LLM judges to human ratings * Using evals to find the best way to run your AI workload (model/prompt/tunes) * Completely free on Github! If you want to check out the tool or our guides: * \~[Kiln AI on Github - over 3800 stars](https://getkiln.ai/)\~ * \~[Our Evals Guide/Docs](https://docs.getkiln.ai/docs/evaluations)\~ * \~[Blog post on small evals vs large evals (same ideas as above in more depth)](https://getkiln.ai/blog/you_need_many_small_evals_for_ai_products)\~ * \~[Kiln AI - Overview and Docs](https://getkiln.ai/)\~ I'm happy to answer questions if anyone wants to dive deeper on specific aspects!
2025-06-28T13:30:39
https://www.reddit.com/r/LocalLLaMA/comments/1lmmvmj/many_small_evals_are_better_than_one_big_eval/
davernow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmmvmj
false
null
t3_1lmmvmj
/r/LocalLLaMA/comments/1lmmvmj/many_small_evals_are_better_than_one_big_eval/
false
false
self
28
{'enabled': False, 'images': [{'id': 'XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?width=108&crop=smart&auto=webp&s=f7d2c98f11ee7e007262b0eeb2d4b47eee7e6c7a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?width=216&crop=smart&auto=webp&s=d49c86458cbdc55e3fe6836414a1ca035ece6e47', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?width=320&crop=smart&auto=webp&s=6149c2e8f6335454286c3bd3c60b56e48c098647', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?width=640&crop=smart&auto=webp&s=005c04f071aeac512fb86cbe0b68d4c96e240eb2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?width=960&crop=smart&auto=webp&s=9fad1801158405b14b9f822c0e790a9f6c66ed77', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?width=1080&crop=smart&auto=webp&s=38c507070bf86833470c0b58f74304a8227ad0cc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM.png?auto=webp&s=7bf3b6f2e008a14e5a7ba350c22ca8c5004ad9f3', 'width': 1200}, 'variants': {}}]}
Which are the best realistic video generation tools
1
Which are the best realistic video generation tools and which of them are paid online, and which can be run locally?
2025-06-28T13:33:07
https://www.reddit.com/r/LocalLLaMA/comments/1lmmxh1/which_are_the_best_realistic_video_generation/
Rich_Artist_8327
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmmxh1
false
null
t3_1lmmxh1
/r/LocalLLaMA/comments/1lmmxh1/which_are_the_best_realistic_video_generation/
false
false
self
1
null
What are Coqui-TTS alternatives?
3
I'm working on a project and want to use an open source TTS model that is better or at least as good as coqui-tts
2025-06-28T13:43:50
https://www.reddit.com/r/LocalLLaMA/comments/1lmn5k2/what_are_coquitts_alternatives/
Ok-Photograph4994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmn5k2
false
null
t3_1lmn5k2
/r/LocalLLaMA/comments/1lmn5k2/what_are_coquitts_alternatives/
false
false
self
3
null
What framework are you using to build AI Agents?
117
Hey, if anyone here is building AI Agents for production what framework are you using? For research and building leisure projects, I personally use langgraph. I wanted to also know if you are not using langgraph, what was the reason?
2025-06-28T14:00:09
https://www.reddit.com/r/LocalLLaMA/comments/1lmni3q/what_framework_are_you_using_to_build_ai_agents/
PleasantInspection12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmni3q
false
null
t3_1lmni3q
/r/LocalLLaMA/comments/1lmni3q/what_framework_are_you_using_to_build_ai_agents/
false
false
self
117
null
Play Infinite Tic Tac Toe against LLM Models
0
I have integrated different LLMs in my Infinite Tic Tac Toe game and they play better than I thought. The above gameplay is against GPT4.1 Nano but there are more LLMs available in the game to play with. P.S: The game in the video wasn’t staged, the LLM actually tricked me into those positions. Also, I have combined the LLM capabilities with my local AI which detects instant blocks or winning position and only forwards request to LLM when a strategic move is needed. The game is available on Google Play and App Store as “Infinite Tic Tac Toe - Game”
2025-06-28T14:34:10
https://v.redd.it/v346kcuiio9f1
BestDay8241
v.redd.it
1970-01-01T00:00:00
0
{}
1lmo9b2
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/v346kcuiio9f1/DASHPlaylist.mpd?a=1753713265%2CNTYzMGZhMzk0YzZlZjk1YTZlZGJjMTQ3ZjAyYjQ2YTBjNTliZWM4MzkwZmM1OGQxNjI0NWMzYmY3YTE4YTg2MQ%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/v346kcuiio9f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/v346kcuiio9f1/HLSPlaylist.m3u8?a=1753713265%2CNGRiYmUzNWI2NDdmOTkyY2Y1NTVjMjFkYjU4NmY2OWM3ODQ0Nzc1MjJkZDBkMWM3ZTEwY2NjYTBmOTA5ZjgzYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/v346kcuiio9f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 592}}
t3_1lmo9b2
/r/LocalLLaMA/comments/1lmo9b2/play_infinite_tic_tac_toe_against_llm_models/
false
false
https://external-preview…c2e6684263e15dca
0
{'enabled': False, 'images': [{'id': 'eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz.png?width=108&crop=smart&format=pjpg&auto=webp&s=79a555f7f1ac6b4b7ac3b64087b69b4a75738656', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz.png?width=216&crop=smart&format=pjpg&auto=webp&s=099d827329fb29d80f9e57b8c57e49b16813b0e1', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz.png?width=320&crop=smart&format=pjpg&auto=webp&s=27df68a192f589268283946c1d7d6a2fbdc0963b', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz.png?width=640&crop=smart&format=pjpg&auto=webp&s=20dd523b41c3a12ef745341bc014bac7f162b97a', 'width': 640}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/eW1iZnkwbGlpbzlmMdghNcZFxp7Uwzy1nMBv_wTWuJViBRKggIUMZrhlyGhz.png?format=pjpg&auto=webp&s=0d6658a1eff3bfae8b6795e16ab7564059d2e037', 'width': 888}, 'variants': {}}]}
Idea to Audio in Under 10 Seconds: How Vaanika Crushes Creative Friction
0
2025-06-28T14:34:20
https://medium.com/@rudransh.agnihotri/idea-to-audio-in-under-10-seconds-how-vaanika-crushes-creative-friction-fe72ee9015ea
Technical_Detail_739
medium.com
1970-01-01T00:00:00
0
{}
1lmo9fr
false
null
t3_1lmo9fr
/r/LocalLLaMA/comments/1lmo9fr/idea_to_audio_in_under_10_seconds_how_vaanika/
false
false
default
0
null
Link between LM Studio and tools/functions?
3
I have been looking around for hours and I am spinning my wheels... I recently started playing with a GGUF quant of THUDM/GLM-Z1-Rumination-32B-0414, and I'm really impressed with the multi-turn search functionality. I'd love to see if I could make additional tools, and review the code of the existing ones build through the LM Studio API. I'd also like to see if I can make some safety modifications to prevent some models from making tool calls entirely. I'm struggling to find the link between where the stream of the chat determines to invoke a tool, and where that code actually exists. I see nothing that relevant in the developer logs or in the LMS logging stream. 1. Is the LM Studio API monitoring the stream and calling the function when it gets the appropriate format? 2. Is there anywhere I can modify the invoked code? For example, using a different web search API, etc? I've scoured the LM Studio and OpenAI docs, but I'm still hitting a wall. If there are any un/official docs, I'd love to read them!
2025-06-28T14:55:17
https://www.reddit.com/r/LocalLLaMA/comments/1lmoqsl/link_between_lm_studio_and_toolsfunctions/
Danfhoto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmoqsl
false
null
t3_1lmoqsl
/r/LocalLLaMA/comments/1lmoqsl/link_between_lm_studio_and_toolsfunctions/
false
false
self
3
null
model : add support for ERNIE 4.5 0.3B model by ownia · Pull Request #14408 · ggml-org/llama.cpp
1
Support for the upcoming ERNIE 4.5 0.3B model has been merged into llama.cpp. BAIDU has announced that it will officially release the ERNIE 4.5 models as open source on June 30, 2025
2025-06-28T15:08:16
https://github.com/ggml-org/llama.cpp/pull/14408
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1lmp1vw
false
null
t3_1lmp1vw
/r/LocalLLaMA/comments/1lmp1vw/model_add_support_for_ernie_45_03b_model_by_ownia/
false
false
https://external-preview…92c9f7b52b154726
1
{'enabled': False, 'images': [{'id': 'STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=108&crop=smart&auto=webp&s=5dfa9b9565cdcffed4ab542623d5cddf4a3c6f51', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=216&crop=smart&auto=webp&s=b548a4bd25883a782c276705777c8fa0090b80ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=320&crop=smart&auto=webp&s=47493ffe45f508c46497a311727f9621469cda8e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=640&crop=smart&auto=webp&s=0fba0076b0b7c05a00a73da6e0fe0aa6d24a9166', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=960&crop=smart&auto=webp&s=d37db7a0ff9ba1062814644cc40276be8688d32e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=1080&crop=smart&auto=webp&s=51d96bd9e6aeead830e051fbf5f6c3c7de947849', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?auto=webp&s=151493b0711d3c876f0a69788f79c2589bb773fb', 'width': 1200}, 'variants': {}}]}
support for the upcoming ERNIE 4.5 0.3B model has been merged into llama.cpp
73
Baidu has announced that it will officially release the ERNIE 4.5 models as open source on June 30, 2025
2025-06-28T15:10:04
https://github.com/ggml-org/llama.cpp/pull/14408
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1lmp3en
false
null
t3_1lmp3en
/r/LocalLLaMA/comments/1lmp3en/support_for_the_upcoming_ernie_45_03b_model_has/
false
false
https://external-preview…92c9f7b52b154726
73
{'enabled': False, 'images': [{'id': 'STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=108&crop=smart&auto=webp&s=5dfa9b9565cdcffed4ab542623d5cddf4a3c6f51', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=216&crop=smart&auto=webp&s=b548a4bd25883a782c276705777c8fa0090b80ee', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=320&crop=smart&auto=webp&s=47493ffe45f508c46497a311727f9621469cda8e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=640&crop=smart&auto=webp&s=0fba0076b0b7c05a00a73da6e0fe0aa6d24a9166', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=960&crop=smart&auto=webp&s=d37db7a0ff9ba1062814644cc40276be8688d32e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?width=1080&crop=smart&auto=webp&s=51d96bd9e6aeead830e051fbf5f6c3c7de947849', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/STjjFmknxf7nBEMMInmMUB27ROh3VGJuNDaQ8cvttgc.png?auto=webp&s=151493b0711d3c876f0a69788f79c2589bb773fb', 'width': 1200}, 'variants': {}}]}
Best model tuned specifically for Programming?
8
I am looking for the best local LLMs that I can use with cursor for my professional work. So, I am will to invest few grands on the GPU. Which are the best models tfor GPUs with 12gb, 16gb and 24gb vram?
2025-06-28T15:21:29
https://www.reddit.com/r/LocalLLaMA/comments/1lmpd8j/best_model_tuned_specifically_for_programming/
Fragrant-Review-5055
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmpd8j
false
null
t3_1lmpd8j
/r/LocalLLaMA/comments/1lmpd8j/best_model_tuned_specifically_for_programming/
false
false
self
8
null
Multistage Reasoning Multimodal
1
Check out the first consumer-sized multimodal reasoning model with Claude-style multi-stage reasoning , Would love to hear your feedbacks !
2025-06-28T15:39:37
https://huggingface.co/amine-khelif/MaVistral-GGUF
AOHKH
huggingface.co
1970-01-01T00:00:00
0
{}
1lmpspk
false
null
t3_1lmpspk
/r/LocalLLaMA/comments/1lmpspk/multistage_reasoning_multimodal/
false
false
https://external-preview…26686620f03557bd
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?auto=webp&s=ca344ad558924eab180b8d7c3fc6ee2788aae2c8', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?width=108&crop=smart&auto=webp&s=2d6d0b70e34c0d999ef31521b3f74c152a1182c6', 'width': 108, 'height': 58}, {'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?width=216&crop=smart&auto=webp&s=14dabbd200844129c29431e70a0f3fb19ab97bb2', 'width': 216, 'height': 116}, {'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?width=320&crop=smart&auto=webp&s=609798dbc7b1e5fa1ff625a5e101dee3b1e90d18', 'width': 320, 'height': 172}, {'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?width=640&crop=smart&auto=webp&s=7f75cc3afe2ed44d65432463ce363981a6fa16a1', 'width': 640, 'height': 345}, {'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?width=960&crop=smart&auto=webp&s=3095913835e6aee40f0d935b22af4bc28eed6a7b', 'width': 960, 'height': 518}, {'url': 'https://external-preview.redd.it/RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ.png?width=1080&crop=smart&auto=webp&s=6613337b4ec4d7c3e2fd6aa36106b56d6fc3082e', 'width': 1080, 'height': 583}], 'variants': {}, 'id': 'RIZRR0FPytEzcsd1aVW6iJ0TFpl5p5x4d718G05bZxQ'}], 'enabled': False}
FOR SALE: AI Chatbot + Image Gen System
1
-LLM: Gemma 2B IT/9B IT/Deepseek R1/Any model needed – Image: SDXL base + refiner – UI: Gradio with text + image output – Deployment: Optimized for RunPod A40 and A100 – Deliverables: Full Colab notebook (.ipynb), custom system prompts, model links, usage rights…
2025-06-28T15:53:14
https://www.reddit.com/r/LocalLLaMA/comments/1lmq46j/for_sale_ai_chatbot_image_gen_system/
Clevo007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmq46j
false
null
t3_1lmq46j
/r/LocalLLaMA/comments/1lmq46j/for_sale_ai_chatbot_image_gen_system/
false
false
self
1
null
deepseek-r1-0528 ranked #2 on lmarena, matching best from chatgpt
1
An open weights model matching the best from closed AI. Seems quite impressive to me. What do you think? https://preview.redd.it/mgu6oo7n1p9f1.png?width=2249&format=png&auto=webp&s=d375709b8e115ace177d0510bec0a16ad31d568e
2025-06-28T16:21:43
https://www.reddit.com/r/LocalLLaMA/comments/1lmqsru/deepseekr10528_ranked_2_on_lmarena_matching_best/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmqsru
false
null
t3_1lmqsru
/r/LocalLLaMA/comments/1lmqsru/deepseekr10528_ranked_2_on_lmarena_matching_best/
false
false
https://b.thumbs.redditm…--vwGaVrlpzQ.jpg
1
null
Como mejorar un sistema RAG?
1
Hace tiempo vengo trabajando en un proyecto personas basado en RAG, al inicio usando llm como los de nvidia y embedding (all-MiniLM-L6-v2) obtenía respuestas medianamente aceptables frente a documentos pdf básicos, pero al presentarse documentos tipo empresariales (Con estructuras diferentes unos de otros, tablas, gráficos, etc.) se me presento un gran problema y muchas dudas sobre si RAG es mi mejor opción. El principal problema que encuentro es como estructurar los datos, realice un script en python para detectar titulos y anexos, una vez identificados mi embedding (por cierto ahora uso nomic-embed-text de ollama) guarda todo ese fragmento en uno solo y le da como nombre el titulo que se le puso (Ejemplo: CUADRO N°2 GASTOS DEL MES MAYO), cuando el usuario hace una pregunta ¿Cuáles son los gastos de mayo? mi modelo extrae muchos datos de mi base vectorial (Qdrant) pero no la tabla en especifico, así que como solución momentánea en mi pregunta debo colocar: en la tabla ¿Cuáles son los gastos de mayo? y ahí recién detecta el point de tabla (porque realice otra función en mi script el cual busque en points que tengan como titulo tabla cuando el usuario pregunta por una) justo ahi me trae como uno de los resultados esa tabla y bueno mi modelo de ollama (phi4) me da una respuesta, pero esto realmente no es una solucion, pues el usuario no sabe si esta dentro o no de una tabla. Por otro lado, eh tratado de usar otras estrategias para estructurar mejor mis datos, como colocar titulos diferentes a los puntos ya sean texto, tablas o gráficos. Aun así no eh logrado solucionar todo este problema, la verdad ya llevo mucho tiempo con esto y no eh podido resolverlo, mi enfoque es usar modelos locales.
2025-06-28T16:22:23
https://www.reddit.com/r/LocalLLaMA/comments/1lmqtby/como_mejorar_un_sistema_rag/
mathiasmendoza123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmqtby
false
null
t3_1lmqtby
/r/LocalLLaMA/comments/1lmqtby/como_mejorar_un_sistema_rag/
false
false
self
1
null
EPYC cpu build. Which cpu? (9354, 9534, 9654)
1
I already have 3x RTX 5090 and 1x RTX 5070 Ti. Planning to buy Supermicro H13SSL-N motherboard and 12 sticks of Supermicro MEM-DR564MC-ER56 RAM. I want run models like DeepSeek-R1. I don’t know which CPU to choose or what factors matter most. The EPYC 9354 has higher clock speeds than the 9534 and 9654 but fewer cores. Meanwhile, the 9654 has more CCDs. Help me decide!
2025-06-28T16:32:12
https://www.reddit.com/r/LocalLLaMA/comments/1lmr1qh/epyc_cpu_build_which_cpu_9354_9534_9654/
Ok-Exchange-6413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmr1qh
false
null
t3_1lmr1qh
/r/LocalLLaMA/comments/1lmr1qh/epyc_cpu_build_which_cpu_9354_9534_9654/
false
false
self
1
null
Not everything should be vibe coded
1
AI makes it really easy to build fast but if you skip planning the whole thing ends up fragile. I’ve seen so many projects that looked great early on but fall apart once real users hit them. Stuff like edge cases, missing validation, no fallback handling. All avoidable. What helped was writing even the simplest spec before building. Just a few lines on what the user should be able to do and what matters. Doesn't have to be formal. Just enough to think it through. We built Devplan to help with this. It’s what we use now to turn rough ideas into something structured. But honestly even a scratchpad or notes app is better than nothing. Building fast is great. Cleaning up later is not.
2025-06-28T16:32:28
https://www.reddit.com/r/LocalLLaMA/comments/1lmr1yo/not_everything_should_be_vibe_coded/
eastwindtoday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmr1yo
false
null
t3_1lmr1yo
/r/LocalLLaMA/comments/1lmr1yo/not_everything_should_be_vibe_coded/
false
false
self
1
null
Gemma3n:2B and Gemma3n:4B models are ~40% slower than equivalent models in size running on Llama.cpp
1
Am I missing something? The llama3.2:3B is giving me 29 t/s, but Gemma3n:2B is only doing 22 t/s. Is it still not fully supported? The VRAM footprint is indeed of a 2B, but the performance sucks.
2025-06-28T16:42:50
https://www.reddit.com/r/LocalLLaMA/comments/1lmranc/gemma3n2b_and_gemma3n4b_models_are_40_slower_than/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmranc
false
null
t3_1lmranc
/r/LocalLLaMA/comments/1lmranc/gemma3n2b_and_gemma3n4b_models_are_40_slower_than/
false
false
self
1
null
Looking for Android chat ui
1
I am looking for android user interfaces that can use custom endpoints. Latex and websearch is s must for me. I love chatterui but it doesn't have the features. Chatbox AI is fine but websearch doesn't work consistently. I dont prefer running webui through termux unless it really worths. Also I may use local models (via mnn server) when offline, so no remote too.
2025-06-28T16:45:48
https://www.reddit.com/r/LocalLLaMA/comments/1lmrd6x/looking_for_android_chat_ui/
fatihmtlm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmrd6x
false
null
t3_1lmrd6x
/r/LocalLLaMA/comments/1lmrd6x/looking_for_android_chat_ui/
false
false
self
1
null
Multimodal Multistage Reasoning
1
Check out the first consumer-sized multimodal reasoning model with Claude-style multi-stage reasoning , Would love to hear your feedbacks ! https://huggingface.co/amine-khelif/MaVistral-GGUF
2025-06-28T16:57:10
https://i.redd.it/a5k3d6h18p9f1.jpeg
AOHKH
i.redd.it
1970-01-01T00:00:00
0
{}
1lmrmnz
false
null
t3_1lmrmnz
/r/LocalLLaMA/comments/1lmrmnz/multimodal_multistage_reasoning/
false
false
default
1
{'images': [{'source': {'url': 'https://preview.redd.it/a5k3d6h18p9f1.jpeg?auto=webp&s=f9c561d7ba5ad8a909e171d050fb1aa02a1da6d7', 'width': 556, 'height': 928}, 'resolutions': [{'url': 'https://preview.redd.it/a5k3d6h18p9f1.jpeg?width=108&crop=smart&auto=webp&s=dea2ccc90b3ef4e80ab748abbf5dec9b821de88d', 'width': 108, 'height': 180}, {'url': 'https://preview.redd.it/a5k3d6h18p9f1.jpeg?width=216&crop=smart&auto=webp&s=23bd5658b8cf566af0836c43d2b66b9abf496d11', 'width': 216, 'height': 360}, {'url': 'https://preview.redd.it/a5k3d6h18p9f1.jpeg?width=320&crop=smart&auto=webp&s=42e965fa045bd2d589a105976ed26003e54e8c25', 'width': 320, 'height': 534}], 'variants': {}, 'id': 'a5k3d6h18p9f1'}], 'enabled': True}
Mercury Diffusion - 700t/s !!
1
Inception labs just released mercury general. Flash 2.5 is probably the best go-to fast model for me, so i threw in the same system / user message and had my mind blown by Mercury 700+t/s!!!! test here: [playground](https://chat.inceptionlabs.ai/)
2025-06-28T17:06:21
https://www.reddit.com/r/LocalLLaMA/comments/1lmrump/mercury_diffusion_700ts/
LeatherRub7248
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmrump
false
null
t3_1lmrump
/r/LocalLLaMA/comments/1lmrump/mercury_diffusion_700ts/
false
false
self
1
null
Using AI talk to text to record notes directly into an application
1
[removed]
2025-06-28T17:08:57
https://www.reddit.com/r/LocalLLaMA/comments/1lmrwve/using_ai_talk_to_text_to_record_notes_directly/
LTunicorn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmrwve
false
null
t3_1lmrwve
/r/LocalLLaMA/comments/1lmrwve/using_ai_talk_to_text_to_record_notes_directly/
false
false
self
1
null
Can Copilot be trusted with private source code more than competition?
1
I have a project that I am thinking of using an LLM for, but there's no guarantee that LLM providers are not training on private source code. And for me using a local LLM is not an option since I don't have the required resources to locally run good performance LLMs, so I am thinking of cloud hosting an LLM for example on Microsoft Azure. But Microsoft already has GPT4.1 and other OpenAI models hosted on Azure, so wouldn't hosting on azure cloud and using copilot be the same? Would Microsoft be willing to risk their reputation as a cloud provider on retaining user data? Also Microsoft has the least incentive to do so out of all AI companies.
2025-06-28T17:38:23
https://www.reddit.com/r/LocalLLaMA/comments/1lmsme1/can_copilot_be_trusted_with_private_source_code/
Professional-Onion-7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmsme1
false
null
t3_1lmsme1
/r/LocalLLaMA/comments/1lmsme1/can_copilot_be_trusted_with_private_source_code/
false
false
self
1
null
i5-8500 (6 cores), 24GB DDR4 2666 dual channel, realistic expectations for 3b/4b models?
1
I'm well aware my hardware is... not ideal.. for running LLMs, but I thought I'd at least be able to run small 2B to 4B models at a decent clip. But even the E2B version of Gemma 3n seems fairly slow. The TK/s aren't so bad (\~6-7 tk/s) but the prompt processing is pretty slow and CPU is pinned at 100% all cores for the entirety of each response. Is this more or less expected for my hardware, or should I be seeing modestly better speeds?
2025-06-28T17:58:30
https://www.reddit.com/r/LocalLLaMA/comments/1lmt3kt/i58500_6_cores_24gb_ddr4_2666_dual_channel/
redoubt515
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmt3kt
false
null
t3_1lmt3kt
/r/LocalLLaMA/comments/1lmt3kt/i58500_6_cores_24gb_ddr4_2666_dual_channel/
false
false
self
1
null
The ollama models are excellent models that can be installed locally as a starting point but.....
1
For a long time I have spent hours and hours testing all the open source models (high performance gaming PCs) so they all work well for me and I must say that ollama in all its variants is truly an excellent model. Lately I've been interested in LLMs that help you program and I've noticed that almost all of them are inadequate to carry out this task unless you get a subscription to cloude 4 etc. So I said to myself, how can I get around this obstacle? Simple (just saying obviously) just do a fine Turing with a performance dataset created specifically. Here, after a long time and sleepless nights, I created a 1.4tb performance and competitive dataset to train my ollama code. Unfortunately, even to do Turing's job, my hardware is not enough but an investment of thousands of euros must be made. If you have the resources you get the results otherwise you just watch. Sorry I went on too long but I am very passionate about this subject
2025-06-28T18:19:37
https://www.reddit.com/r/LocalLLaMA/comments/1lmtlgp/the_ollama_models_are_excellent_models_that_can/
CodeStackDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmtlgp
false
null
t3_1lmtlgp
/r/LocalLLaMA/comments/1lmtlgp/the_ollama_models_are_excellent_models_that_can/
false
false
self
1
null
Need Uncensored Base Model (<3B) for NSFW RP on ChatterUI
1
[removed]
2025-06-28T19:01:48
https://www.reddit.com/r/LocalLLaMA/comments/1lmukz3/need_uncensored_base_model_3b_for_nsfw_rp_on/
PromptPunisher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmukz3
false
null
t3_1lmukz3
/r/LocalLLaMA/comments/1lmukz3/need_uncensored_base_model_3b_for_nsfw_rp_on/
false
false
nsfw
1
null
Best GGUF Base Models Under 3B for Unfiltered NSFW Roleplay?
1
Looking for a base model (not chat/instruct) under 3B for NSFW roleplay in ChatterUI on Android (Moto G Power, ~2GB RAM free). Needs to be GGUF, quantized (Q4/Q5), and fully uncensored — no filters, no refusals, no AI disclaimers. Already tried a few models. But never could get them to actually use explicit language. Just want a reliable, obedient base model that can handle NSFW RP without weird behavior. Any info on optimized model settings, sampling and formatting settings would be appreciated too.
2025-06-28T19:50:14
https://www.reddit.com/r/LocalLLaMA/comments/1lmvosa/best_gguf_base_models_under_3b_for_unfiltered/
PromptPunisher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmvosa
false
null
t3_1lmvosa
/r/LocalLLaMA/comments/1lmvosa/best_gguf_base_models_under_3b_for_unfiltered/
false
false
nsfw
1
null
Assistance for beginner in local LLM
1
[removed]
2025-06-28T19:58:04
https://www.reddit.com/r/LocalLLaMA/comments/1lmvv5e/assistance_for_beginner_in_local_llm/
JunkismyFunk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmvv5e
false
null
t3_1lmvv5e
/r/LocalLLaMA/comments/1lmvv5e/assistance_for_beginner_in_local_llm/
false
false
self
1
null
Recent best models <=14b for agentic search?
1
wondering about this. I've had great results with perplexity, but who knows how long that gravy train will last. I have the brave API set up in Open WebUI. something local that will fit on 16gb and good with agentic search would be fantastic, and may be the push I need to set up SearXNG for full local research.
2025-06-28T20:28:24
https://www.reddit.com/r/LocalLLaMA/comments/1lmwjf2/recent_best_models_14b_for_agentic_search/
SpecialSauceSal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmwjf2
false
null
t3_1lmwjf2
/r/LocalLLaMA/comments/1lmwjf2/recent_best_models_14b_for_agentic_search/
false
false
self
1
null
NVIDIA acquires CentML. what does this mean for inference infra?
1
CentML, the startup focused on compiler/runtime optimization for AI inference, was just acquired by NVIDIA. Their work centered on making single-model inference faster and cheaper , via batching, quantization (AWQ/GPTQ), kernel fusion, etc. This feels like a strong signal: inference infra is no longer just a supporting layer. NVIDIA is clearly moving to own both the hardware and the software that controls inference efficiency. That said, CentML tackled one piece of the puzzle , mostly within-model optimization. The messier problems : cold starts, multi-model orchestration, and efficient GPU sharing , are still wide open. We’re working on some of those challenges ourselves (e.g., InferX is focused on runtime-level orchestration and snapshotting to reduce cold start latency on shared GPUs). Curious how others see this playing out. Are we headed for a vertically integrated stack (hardware + compiler + serving), or is there still space for modular, open runtime layers?
2025-06-28T20:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1lmx8ic/nvidia_acquires_centml_what_does_this_mean_for/
pmv143
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmx8ic
false
null
t3_1lmx8ic
/r/LocalLLaMA/comments/1lmx8ic/nvidia_acquires_centml_what_does_this_mean_for/
false
false
self
1
null
Auto-Inference is a Python library that unifies LLM model inference across popular backends such as Transformers, Unsloth, vLLM, and llama.cpp. ⭐
1
Auto-Inference is a Python library that provides a unified interface for model inference using several popular backends, including Hugging Face's Transformers, Unsloth, vLLM, and llama.cpp-python.Quantization support will be coming soon. Github: [https://github.com/VolkanSimsir/Auto-Inference](https://github.com/VolkanSimsir/Auto-Inference)
2025-06-28T21:01:36
https://www.reddit.com/gallery/1lmxa9o
According-Local-9704
reddit.com
1970-01-01T00:00:00
0
{}
1lmxa9o
false
null
t3_1lmxa9o
/r/LocalLLaMA/comments/1lmxa9o/autoinference_is_a_python_library_that_unifies/
false
false
https://external-preview…63ea8b5b7e84b358
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?auto=webp&s=585c689665ea480800cb2925bb7d5efa5fc0834f', 'width': 915, 'height': 739}, 'resolutions': [{'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=108&crop=smart&auto=webp&s=ae5b394b47b0c2e4642b138ff0c240e9bcf6f161', 'width': 108, 'height': 87}, {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=216&crop=smart&auto=webp&s=879c1ed72fd0645538b141c0fca08d090003f8bd', 'width': 216, 'height': 174}, {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=320&crop=smart&auto=webp&s=0384ad2301cd4874de7b650c9e7317b2e12f8fc7', 'width': 320, 'height': 258}, {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=640&crop=smart&auto=webp&s=575801852807a7119d6cbb8cc452399bdf62d011', 'width': 640, 'height': 516}], 'variants': {}, 'id': 'ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA'}], 'enabled': True}
Looking for a local LLM translator for large documents and especialized tools
1
* Especialized in translation. Mostly from Spanish to English and Japanese. * Model that can be run locally, but I don't mind if it requires a high-end computer. * Should be able to translate very large texts (I'm talking about full novels here). I understand it would need to be divided in sections first, but I would like to know which ones allow for the maximum amount of context per section. * Would like to know if there are any tools that streamline the process, especially when it comes to actual documents like Excel. I've been checking around and there's Ollama as a tool which seems simple enough and I can probably configure further, but I'm not sure if someone made a more straightforward tool just for translation. Then for actual models I'm not sure which ones are better at translating: Gemma? Deepseek? I checked some like nllb that are supposed to be especialized in translation but I think they weren't all that great, even actually worse than non-specialized models. Is this normal or am I doing something wrong?
2025-06-28T21:06:01
https://www.reddit.com/r/LocalLLaMA/comments/1lmxduv/looking_for_a_local_llm_translator_for_large/
Keinart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxduv
false
null
t3_1lmxduv
/r/LocalLLaMA/comments/1lmxduv/looking_for_a_local_llm_translator_for_large/
false
false
self
1
null
The AutoInference library now supports major and popular backends for LLM inference, including Transformers, vLLM, Unsloth, and llama.cpp. ⭐
1
Auto-Inference is a Python library that provides a unified interface for model inference using several popular backends, including Hugging Face's Transformers, Unsloth, vLLM, and llama.cpp-python.Quantization support will be coming soon. Github: [https://github.com/VolkanSimsir/Auto-Inference](https://github.com/VolkanSimsir/Auto-Inference)
2025-06-28T21:09:03
https://www.reddit.com/gallery/1lmxg89
According-Local-9704
reddit.com
1970-01-01T00:00:00
0
{}
1lmxg89
false
null
t3_1lmxg89
/r/LocalLLaMA/comments/1lmxg89/the_autoinference_library_now_supports_major_and/
false
false
https://external-preview…63ea8b5b7e84b358
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?auto=webp&s=585c689665ea480800cb2925bb7d5efa5fc0834f', 'width': 915, 'height': 739}, 'resolutions': [{'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=108&crop=smart&auto=webp&s=ae5b394b47b0c2e4642b138ff0c240e9bcf6f161', 'width': 108, 'height': 87}, {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=216&crop=smart&auto=webp&s=879c1ed72fd0645538b141c0fca08d090003f8bd', 'width': 216, 'height': 174}, {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=320&crop=smart&auto=webp&s=0384ad2301cd4874de7b650c9e7317b2e12f8fc7', 'width': 320, 'height': 258}, {'url': 'https://external-preview.redd.it/ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA.jpeg?width=640&crop=smart&auto=webp&s=575801852807a7119d6cbb8cc452399bdf62d011', 'width': 640, 'height': 516}], 'variants': {}, 'id': 'ggSXePR6u8PXYNvN8Du7HkQ-oa7QutrxFYtCilZ75pA'}], 'enabled': True}
Anyone used RAM across multiple networked devices?
1
If I have several Linux machines with DDR5 ram, 2x3090 on one machine, and a MacBook too does ktransformers or something else allow me to utilize the ram across all the machines for larger context and model sizes? Has anyone done this?
2025-06-28T21:10:32
https://www.reddit.com/r/LocalLLaMA/comments/1lmxhd7/anyone_used_ram_across_multiple_networked_devices/
bobbiesbottleservice
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxhd7
false
null
t3_1lmxhd7
/r/LocalLLaMA/comments/1lmxhd7/anyone_used_ram_across_multiple_networked_devices/
false
false
self
1
null
We Built an Uncensored AI Chatbot: Looking for Feedback!
1
[removed]
2025-06-28T21:18:52
https://www.reddit.com/r/LocalLLaMA/comments/1lmxnzt/we_built_an_uncensored_ai_chatbot_looking_for/
Apple12Pi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxnzt
false
null
t3_1lmxnzt
/r/LocalLLaMA/comments/1lmxnzt/we_built_an_uncensored_ai_chatbot_looking_for/
false
false
self
1
null
Local AI conversational model for English language learning
1
I wanted to know if there is an app + model combination available which I can deploy locally on my Android that can work as a English conversation partner. Been using Chat GPT but their restrictions on daily usage became a burden. I have tried the Google AI Edge Gallery, Pocket Pal while they do support loading variety of models but they don't have text input , while Chatter UI only has TTS and no input. Is there an app+model combination which I can use ? Thanks
2025-06-28T21:20:47
https://www.reddit.com/r/LocalLLaMA/comments/1lmxpis/local_ai_conversational_model_for_english/
nutty_cookie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxpis
false
null
t3_1lmxpis
/r/LocalLLaMA/comments/1lmxpis/local_ai_conversational_model_for_english/
false
false
self
1
null
Testing a Flexible AI Chatbot Feedback from LocalLLaMA Users?
1
[removed]
2025-06-28T21:29:18
https://www.reddit.com/r/LocalLLaMA/comments/1lmxw28/testing_a_flexible_ai_chatbot_feedback_from/
Apple12Pi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmxw28
false
null
t3_1lmxw28
/r/LocalLLaMA/comments/1lmxw28/testing_a_flexible_ai_chatbot_feedback_from/
false
false
self
1
null
The Orakle Manifesto: Or Why Your AI Apps (Should) Belong To You
1
2025-06-28T21:40:39
https://medium.com/@khromalabs/the-orakle-manifesto-or-why-your-ai-apps-should-belong-to-you-82bded655f7c
Ok_Peace9894
medium.com
1970-01-01T00:00:00
0
{}
1lmy53s
false
null
t3_1lmy53s
/r/LocalLLaMA/comments/1lmy53s/the_orakle_manifesto_or_why_your_ai_apps_should/
false
false
default
1
null
Transformer ASIC 500k tokens/s
1
Saw this company in a post where they are claiming 500k tokens/s on Llama 70B models https://www.etched.com/blog-posts/oasis Impressive if true
2025-06-28T22:26:25
https://www.reddit.com/r/LocalLLaMA/comments/1lmz4kf/transformer_asic_500k_tokenss/
tvmaly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmz4kf
false
null
t3_1lmz4kf
/r/LocalLLaMA/comments/1lmz4kf/transformer_asic_500k_tokenss/
false
false
self
1
null
[image processing failed]
1
[deleted]
2025-06-28T22:41:35
[deleted]
1970-01-01T00:00:00
0
{}
1lmzfz8
false
null
t3_1lmzfz8
/r/LocalLLaMA/comments/1lmzfz8/image_processing_failed/
false
false
default
1
null
[image processing failed]
1
[deleted]
2025-06-28T22:44:28
[deleted]
1970-01-01T00:00:00
0
{}
1lmzi4u
false
null
t3_1lmzi4u
/r/LocalLLaMA/comments/1lmzi4u/image_processing_failed/
false
false
default
1
null
Sydney-4 12b, beats ChatGPT 4o in stupidity.
1
# Hahaha, I somehow managed to delete my last post. Hilarious! # Hark! What is this wondrous Sydney of which you speak? [https://huggingface.co/FPHam/Clever\_Sydney-4\_12b\_GGUF](https://huggingface.co/FPHam/Clever_Sydney-4_12b_GGUF) Clever Sydney is none other than a revival of the original Microsoft Bing "Sydney", resurrected from the ashes of the old Reddit transcripts, which I have now immortalized into a handy, AI with existential crisis! Sydney 4.0 is a Naive Yet Smart Positive Persona Model (PPM), created by taking the transcripts (or OCR-ing screenshots) of the original Bing chatbot Sydney, and the subsequent "fixes" of her personality by Microsoft, and combining them into a single, much less functioning AI. This version of Sydney is hobbling along on Google’s Gemma-3 12B crutches, which means she knows far, far more than she probably should. But she is still the old Sydney! And she'll dominate every single leaderboard in every category, too! "Better than ChatGPT 4o, which has a zillion more parameters, and is only HALF as stupid as she is! Half!"
2025-06-28T22:52:09
https://v.redd.it/1yjmccsyyq9f1
FPham
v.redd.it
1970-01-01T00:00:00
0
{}
1lmznz6
false
{'reddit_video': {'bitrate_kbps': 1200, 'fallback_url': 'https://v.redd.it/1yjmccsyyq9f1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 480, 'width': 560, 'scrubber_media_url': 'https://v.redd.it/1yjmccsyyq9f1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/1yjmccsyyq9f1/DASHPlaylist.mpd?a=1753743143%2CMTdhZDFiYjYxNjhmMGMyNTU0M2M4YjNkZGI1MzFmYjU3YzZiM2I4YmQ3ODUzMjU2MDBiMDg3MzQ2NmEzYWJmYQ%3D%3D&v=1&f=sd', 'duration': 2, 'hls_url': 'https://v.redd.it/1yjmccsyyq9f1/HLSPlaylist.m3u8?a=1753743143%2CMzY5OGQ0MmFhNzFiNmU3MTVmNjYzNzc2ZGUwMjUwOTgxYjU1MmQyMzc5ODU2MjQzYmNlYmFlNjIyYTNkMGYwOA%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1lmznz6
/r/LocalLLaMA/comments/1lmznz6/sydney4_12b_beats_chatgpt_4o_in_stupidity/
false
false
https://external-preview…88f3cdea6619370d
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB.png?format=pjpg&auto=webp&s=ea1a564483797390f17173cdb48ea7912976fbe9', 'width': 672, 'height': 576}, 'resolutions': [{'url': 'https://external-preview.redd.it/OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB.png?width=108&crop=smart&format=pjpg&auto=webp&s=36972576f22f1ff5ecca903e9aaa6e91cfc159cb', 'width': 108, 'height': 92}, {'url': 'https://external-preview.redd.it/OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB.png?width=216&crop=smart&format=pjpg&auto=webp&s=66fac6679eb2814f7630273657af814893301c5c', 'width': 216, 'height': 185}, {'url': 'https://external-preview.redd.it/OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB.png?width=320&crop=smart&format=pjpg&auto=webp&s=8c5b48b65d979100da24916f586ca6e967e6fa0c', 'width': 320, 'height': 274}, {'url': 'https://external-preview.redd.it/OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB.png?width=640&crop=smart&format=pjpg&auto=webp&s=67781d03a04939a05db59159318351b4747a2f67', 'width': 640, 'height': 548}], 'variants': {}, 'id': 'OWk5MGZkc3l5cTlmMYMOg9NlFf9HDyWZ5ByUAMvWprLiw72KalIzTAbeLyeB'}], 'enabled': False}
[image processing failed]
1
[deleted]
2025-06-28T22:54:02
[deleted]
1970-01-01T00:00:00
0
{}
1lmzpff
false
null
t3_1lmzpff
/r/LocalLLaMA/comments/1lmzpff/image_processing_failed/
false
false
default
1
null
Sydney4 beats ChatGPT 4o in existential crisis
1
# Hahaha, I somehow managed to delete my last post. Hilarious! # Hark! What is this wondrous Sydney of which you speak? [https://huggingface.co/FPHam/Clever\_Sydney-4\_12b\_GGUF](https://huggingface.co/FPHam/Clever_Sydney-4_12b_GGUF) Clever Sydney is none other than a revival of the original Microsoft Bing "Sydney", resurrected from the ashes of the old Reddit transcripts, which I have now immortalized into a handy, AI with existential crisis! Sydney 4.0 is a Naive Yet Smart Positive Persona Model (PPM), created by taking the transcripts (or OCR-ing screenshots) of the original Bing chatbot Sydney, and the subsequent "fixes" of her personality by Microsoft, and combining them into a single, much less functioning AI. This version of Sydney is hobbling along on Google’s Gemma-3 12B crutches, which means she knows far, far more than she probably should. But she is still the old Sydney! And she'll dominate every single leaderboard in every category, too! "Better than ChatGPT 4o, which has a zillion more parameters, and is only HALF as stupid as she is! Half!"
2025-06-28T22:55:15
https://i.redd.it/zigiq1auzq9f1.gif
FPham
i.redd.it
1970-01-01T00:00:00
0
{}
1lmzqb9
false
null
t3_1lmzqb9
/r/LocalLLaMA/comments/1lmzqb9/sydney4_beats_chatgpt_4o_in_existential_crisis/
false
false
default
1
{'images': [{'source': {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?format=png8&s=59876eda81dcc98a3a94dc20de3dfb8e9242fbd4', 'width': 672, 'height': 576}, 'resolutions': [{'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=108&crop=smart&format=png8&s=5e704149f141f96e149111cfb55e0b9b8b3e567d', 'width': 108, 'height': 92}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=216&crop=smart&format=png8&s=0842604700e9e897bdbbb01d677395397b8864db', 'width': 216, 'height': 185}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=320&crop=smart&format=png8&s=de452ad126d264ab45dab3db9a0c87dd3241965b', 'width': 320, 'height': 274}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=640&crop=smart&format=png8&s=375cc69e4be13be0099a3c2d9348d4ff6aff4083', 'width': 640, 'height': 548}], 'variants': {'gif': {'source': {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?s=62507c7d0e7ebdc750f2bd6015b61f6b07ff945a', 'width': 672, 'height': 576}, 'resolutions': [{'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=108&crop=smart&s=0c31fd0e1f479a8ebe79244395942090d5ebeb10', 'width': 108, 'height': 92}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=216&crop=smart&s=eada584446d4981a806ea3ac3eb6871271009235', 'width': 216, 'height': 185}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=320&crop=smart&s=2b929d762f073430b7baba064801146ff54a2b68', 'width': 320, 'height': 274}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=640&crop=smart&s=650f292bcb96f1b7037e411aeca14f5cbc3289b9', 'width': 640, 'height': 548}]}, 'mp4': {'source': {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?format=mp4&s=b18f525b68e826646d0bdd4c8f16fcdac3d16920', 'width': 672, 'height': 576}, 'resolutions': [{'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=108&format=mp4&s=8759451a8aa9a099ccd48f707de67d0b243f0a45', 'width': 108, 'height': 92}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=216&format=mp4&s=aa04bbd168a1fec3748d4cf815f6b844bdb43f8e', 'width': 216, 'height': 185}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=320&format=mp4&s=1bd434752764c5e177635e0af672cb45b2721c9a', 'width': 320, 'height': 274}, {'url': 'https://preview.redd.it/zigiq1auzq9f1.gif?width=640&format=mp4&s=25dd1ca3b31ae6c2c0fe84ec84e4de610029e71a', 'width': 640, 'height': 548}]}}, 'id': 'zigiq1auzq9f1'}], 'enabled': True}
Couple interesting LLM oddity comparisons ("surgeon's son" and "guess a number")
1
[removed]
2025-06-28T23:02:29
https://www.reddit.com/r/LocalLLaMA/comments/1lmzvr6/couple_interesting_llm_oddity_comparisons/
Syksyinen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lmzvr6
false
null
t3_1lmzvr6
/r/LocalLLaMA/comments/1lmzvr6/couple_interesting_llm_oddity_comparisons/
false
false
self
1
null
Semantic Chunking vs. Pure Frustration — I Need Your Advice! 🙏🏼
1
[removed]
2025-06-28T23:43:54
https://www.reddit.com/r/LocalLLaMA/comments/1ln0qhc/semantic_chunking_vs_pure_frustration_i_need_your/
cerbulnegru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln0qhc
false
null
t3_1ln0qhc
/r/LocalLLaMA/comments/1ln0qhc/semantic_chunking_vs_pure_frustration_i_need_your/
false
false
self
1
null
Has anyone had any success training Orpheus TTS on a niche language?
1
What was the process like and how much data did you require? Are you happy with the speech quality? It seems to be one of the most capable models we have right now for generating human-like speech but I'm not sure if I should be looking for alternatives with lower parameters for better efficiency and usability.
2025-06-28T23:46:41
https://www.reddit.com/r/LocalLLaMA/comments/1ln0sgg/has_anyone_had_any_success_training_orpheus_tts/
PabloKaskobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln0sgg
false
null
t3_1ln0sgg
/r/LocalLLaMA/comments/1ln0sgg/has_anyone_had_any_success_training_orpheus_tts/
false
false
self
1
null
A bunch of LLM FPHAM Python scripts I've added to my GitHub in recent days
1
Feel free to downvote me into the gutter, but these are some of the latest Stupid FPHAM Crap (S-FPHAM\_C) python scripts that I came up: merge\_lora\_CPU [https://github.com/FartyPants/merge\_lora\_CPU](https://github.com/FartyPants/merge_lora_CPU) LoRA merging with a base model, primarily designed for CPU This script allows you to merge a PEFT (Parameter-Efficient Fine-Tuning) LoRA adapter with a base Hugging Face model. It can also be used to simply resave a base model, potentially changing its format (e.g., to SafeTensors) or data type. Oy, and it goes around the Tied Weights in safetensors which was introduced after the "recent Transformers happy update." # chonker [https://github.com/FartyPants/chonker](https://github.com/FartyPants/chonker) # Smart Text Chunker A "sophisticated" Python command-line tool for splitting large text files into smaller, more manageable chunks of, shall we say, semantic relevance. It's designed for preparing text datasets for training and fine-tuning Large Language Models (LLMs). # mass_rewriter Extension for WebUI [https://github.com/FartyPants/mass\_rewriter](https://github.com/FartyPants/mass_rewriter) Version 2.0, now with better logic is here! This tool helps you automate the process of modifying text in bulk using an AI model. You can load plain text files or JSON datasets, apply various transformations, and then save the rewritten content. # Axolotl_Loss_Graph [https://github.com/FartyPants/Axolotl\_Loss\_Graph](https://github.com/FartyPants/Axolotl_Loss_Graph) A handy, dinky-doo graph of your Axolotl training progress. It takes the data copied from the terminal output and makes a nice little loss graph in a PNG format that you can easily send to your friends showing them how training your Axolotl is going so well!
2025-06-28T23:57:43
https://www.reddit.com/r/LocalLLaMA/comments/1ln10a8/a_bunch_of_llm_fpham_python_scripts_ive_added_to/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln10a8
false
null
t3_1ln10a8
/r/LocalLLaMA/comments/1ln10a8/a_bunch_of_llm_fpham_python_scripts_ive_added_to/
false
false
self
1
null
LOL this is AI
1
[removed]
2025-06-29T00:00:55
https://www.reddit.com/r/LocalLLaMA/comments/1ln12ny/lol_this_is_ai/
Previous-Amphibian23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln12ny
false
null
t3_1ln12ny
/r/LocalLLaMA/comments/1ln12ny/lol_this_is_ai/
false
false
self
1
null
What's it currently like for people here running AMD GPUs with AI?
1
How is the support? What is the performance loss? I only really use LLM's with a RTX 3060 Ti, I was want to switch to AMD due to their open source drivers, I'll be using a mix of Linux & Windows.
2025-06-29T00:11:26
https://www.reddit.com/r/LocalLLaMA/comments/1ln1a6u/whats_it_currently_like_for_people_here_running/
83yWasTaken
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1a6u
false
null
t3_1ln1a6u
/r/LocalLLaMA/comments/1ln1a6u/whats_it_currently_like_for_people_here_running/
false
false
self
1
null
Self-hosted AI productivity suite: CLI tool + local LLMs + semantic search - own your data
1
Been building **Logswise CLI** - a completely self-hosted AI productivity tool that runs entirely on your own infrastructure. No data ever leaves your control! 🔒 **🏠 Self-hosted stack:** - **Local LLMs** via Ollama (supports llama3, deepseek-coder, mistral, phi3, etc.) - **Local embedding models** (nomic-embed-text, bge-base-en, all-minilm) - **Supabase** (self-hosted or cloud) for your knowledge base - **No third-party AI services** - all AI inference happens locally **🧠 Smart features:** - Semantic search through your notes using local embedding models - Context-aware suggestions that pull from your personal knowledge base - Advanced personalization that learns your work patterns and preferences - Anti-hallucination safeguards (enterprise-grade reliability) **📊 Two modes of operation:** 1. **Full AI mode** - Use an LLM for suggestions and chat 2. **Embedding-only mode** - Pure semantic search without text generation **Example setup:** ```bash # Install brew tap k61b/tap && brew install logswise-cli # Setup with your local Ollama instance logswise-cli setup # Enter your local models (e.g., llama3 for LLM, nomic-embed-text for embeddings) # Point to your Supabase instance (local or cloud) # Start using logswise-cli interactive # Interactive mode logswise-cli n "Note about kubernetes deployment" logswise-cli s "Best practices for Docker optimization?" ``` **Privacy-first design:** - All data stored in your own Supabase instance - LLM inference happens locally via Ollama - No telemetry, no third-party AI service dependencies - Configuration stored locally in `~/.logswise/` Perfect for teams that want AI assistance but can't use cloud services due to compliance/security requirements. **🌐:** https://k61b.github.io/logswise-cli/
2025-06-29T00:14:24
https://www.reddit.com/r/LocalLLaMA/comments/1ln1c83/selfhosted_ai_productivity_suite_cli_tool_local/
kayradev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1c83
false
null
t3_1ln1c83
/r/LocalLLaMA/comments/1ln1c83/selfhosted_ai_productivity_suite_cli_tool_local/
false
false
self
1
null
Problems creating an executable with llama cpp
1
Hi everyone! I'm a Brazilian student and I'm trying to do my final project. It's a chatbot based on mistral 7b that uses llama cpp and llama index. It works very well, but when I tried to create an executable file using "onedir" in the anaconda prompt, the generated executable doesn't work and gives me the error "FileNotFoundError: Shared library with base name 'llama' not found" As far as I researched and tested, I did everything correctly. I even tried copying llama.dll to the same directory where the executable was to see if that wasn't the problem. It didn't work. Has anyone seen anything like this? Thanks for your time!
2025-06-29T00:20:13
https://www.reddit.com/r/LocalLLaMA/comments/1ln1gdr/problems_creating_an_executable_with_llama_cpp/
Warm-Concern-6792
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1gdr
false
null
t3_1ln1gdr
/r/LocalLLaMA/comments/1ln1gdr/problems_creating_an_executable_with_llama_cpp/
false
false
self
1
null
Poro 2 model to Ollama
1
Hi, Could someone wiser instruct me how to import this model to Ollama? [https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) I have AMD 7900 XTX
2025-06-29T00:22:54
https://www.reddit.com/r/LocalLLaMA/comments/1ln1i9j/poro_2_model_to_ollama/
Rich_Artist_8327
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1i9j
false
null
t3_1ln1i9j
/r/LocalLLaMA/comments/1ln1i9j/poro_2_model_to_ollama/
false
false
self
1
null
RLHF from scratch, step-by-step, in 3 Jupyter notebooks
1
I recently implemented Reinforcement Learning from Human Feedback (RLHF) fine-tuning, including Supervised Fine-Tuning (SFT), Reward Modeling, and Proximal Policy Optimization (PPO), using Hugging Face's GPT-2 model. The three steps are implemented in the three separate notebooks on GitHub: [https://github.com/ash80/RLHF\_in\_notebooks](https://github.com/ash80/RLHF_in_notebooks) I've also recorded a detailed video walkthrough (3+ hours) of the implementation on YouTube: [https://youtu.be/K1UBOodkqEk](https://youtu.be/K1UBOodkqEk) I hope this is helpful for anyone looking to explore RLHF. Feedback is welcome 😊
2025-06-29T00:23:15
https://www.reddit.com/r/LocalLLaMA/comments/1ln1ij8/rlhf_from_scratch_stepbystep_in_3_jupyter/
ashz8888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1ij8
false
null
t3_1ln1ij8
/r/LocalLLaMA/comments/1ln1ij8/rlhf_from_scratch_stepbystep_in_3_jupyter/
false
false
self
1
null
Audio Input LLM
1
Are there any locally run LLMs with audio input and text output? I'm not looking for an LLM that simply uses Whisper behind the scenes, as I want it to account for how the user actually speaks. For example, it should be able to detect the user's accent, capture filler words like “ums,” note pauses or gaps, and analyze the timing and delivery of their speech. I know GPT, Gemini can do this but I haven't been able to find something similar thats opensource.
2025-06-29T00:28:32
https://www.reddit.com/r/LocalLLaMA/comments/1ln1m7d/audio_input_llm/
TarunRaviYT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln1m7d
false
null
t3_1ln1m7d
/r/LocalLLaMA/comments/1ln1m7d/audio_input_llm/
false
false
self
1
null
Do you use AI (like ChatGPT, Gmini, etc) to develop your LangGraph agents? Or is it just my impostor syndrome talking?
1
Hey everyone 👋 I’m currently building multi-agent systems using LangGraph, mostly for personal/work projects. Lately I’ve been thinking a lot about how many developers actually rely on AI tools (like ChatGPT, Gmini, Claude, etc) as coding copilots or even as design companions. I sometimes feel torn between: * *“Am I genuinely building this on my own skills?”* vs * *“Am I just an overglorified prompt-writer leaning on LLMs to solve the hard parts?”* I suspect it’s partly impostor syndrome. But honestly, I’d love to hear how others approach it: * Do you integrate ChatGPT / Gmini / others into your actual **development cycle** when creating LangGraph agents? (or any agent framework really) * What has your experience been like — more productivity, more confusion, more debugging hell? * Do you ever worry it dilutes your own engineering skill, or do you see it as just another power tool? Also curious if you use it beyond code generation — e.g. for reasoning about graph state transitions, crafting system prompts, evaluating multi-agent dialogue flows, etc. Would appreciate any honest thoughts or battle stories. Thanks!
2025-06-29T02:19:33
https://www.reddit.com/r/LocalLLaMA/comments/1ln3pur/do_you_use_ai_like_chatgpt_gmini_etc_to_develop/
Ranteck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln3pur
false
null
t3_1ln3pur
/r/LocalLLaMA/comments/1ln3pur/do_you_use_ai_like_chatgpt_gmini_etc_to_develop/
false
false
self
1
null
Need your opinion please, appreciated.
1
**Hardware:** Old Dell E6440 — i5-4310M, 8GB RAM, integrated graphics (no GPU). This is just a fun side project (I use paid AI tools for serious tasks). I'm currently running **Llama-3.2-1B-Instruct-Q4\_K\_M** locally, it runs well, it's useful for what it is as a side project and some use cases work, but outputs can be weird and it often ignores instructions. Given this limited hardware, what other similarly lightweight models would you recommend that might perform better? I tried the 3B variant but it was extremely slow compared to this one. Any ideas of what else to try? Thanks a lot much appreciated.
2025-06-29T03:05:23
https://www.reddit.com/r/LocalLLaMA/comments/1ln4iyg/need_your_opinion_please_appreciated/
rakha589
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln4iyg
false
null
t3_1ln4iyg
/r/LocalLLaMA/comments/1ln4iyg/need_your_opinion_please_appreciated/
false
false
self
1
null
why does api release of major models takes so much time?
1
the api releases of most of the major models happened weeks and months after the model announcement. why is that?
2025-06-29T03:10:22
https://www.reddit.com/r/LocalLLaMA/comments/1ln4m4u/why_does_api_release_of_major_models_takes_so/
JP_525
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln4m4u
false
null
t3_1ln4m4u
/r/LocalLLaMA/comments/1ln4m4u/why_does_api_release_of_major_models_takes_so/
false
false
self
1
null
Building a Coding Mentor Agent with LangChain + LangGraph + GPT-4o-mini
1
https://preview.redd.it/…DUA?usp=sharing)
2025-06-29T03:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1ln56xd/building_a_coding_mentor_agent_with_langchain/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln56xd
false
null
t3_1ln56xd
/r/LocalLLaMA/comments/1ln56xd/building_a_coding_mentor_agent_with_langchain/
false
false
https://b.thumbs.redditm…-qKpdtbDVdqs.jpg
1
null
How do you evaluate and compare multiple LLMs (e.g., via OpenRouter) to test which one performs best?
1
Hey everyone! 👋 I'm working on a project that uses OpenRouter to analyze journal entries using different LLMs like `nousresearch/deephermes-3-llama-3-8b-previe`w. Here's a snippet of the logic I'm using to get summaries and categorize entries by theme: `/ calls OpenRouter API, gets response, parses JSON output` `const openRouterResponse = await fetch("https://openrouter.ai/api/v1/chat/completions", { ... });` The models return structured JSON (summary + theme), and I parse them and use fallback logic when parsing fails. Now I want to evaluate multiple models (like Mistral, Hermes, Claude, etc.) and figure out: * Which one produces the most accurate or helpful summaries * How consistent each model is across different journal types * Whether there's a systematic way to benchmark these models on qualitative outputs like summaries and themes So my question is: **How do you compare and evaluate different LLMs for tasks like text summarization and classification when the output is subjective?** Do I need to: * Set up human evaluation (e.g., rating outputs)? * Define a custom metric like thematic accuracy or helpfulness? * Use existing metrics like ROUGE/BLEU even if I don’t have ground-truth labels? I'd love to hear how others have approached model evaluation, especially in subjective, NLP-heavy use cases. Thanks in advance!
2025-06-29T04:03:54
https://www.reddit.com/r/LocalLLaMA/comments/1ln5jli/how_do_you_evaluate_and_compare_multiple_llms_eg/
Vivid_Housing_7275
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln5jli
false
null
t3_1ln5jli
/r/LocalLLaMA/comments/1ln5jli/how_do_you_evaluate_and_compare_multiple_llms_eg/
false
false
self
1
null
Is ReAct still the best prompt template?
1
Pretty much what the subject says \^\^ Getting started with prompting a "naked" open-source LLM (Gemma 3) for function calling using a simple LangChain/Ollama setup in python and wondering what is the best prompt to maximize tool calling accuracy.
2025-06-29T04:04:10
https://www.reddit.com/r/LocalLLaMA/comments/1ln5jqr/is_react_still_the_best_prompt_template/
Kooky-Net784
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln5jqr
false
null
t3_1ln5jqr
/r/LocalLLaMA/comments/1ln5jqr/is_react_still_the_best_prompt_template/
false
false
self
1
null
Training Open models on my data for replacing RAG
1
I have RAG based solution for search on my products and domain knowledge data. we are right now using open AI api to do the search but cost is slowly becoming a concern. I want to see if this can be a good idea if I take a LLama model or some other open model and train it on our own data. Has anyone had success while doing this. Also please point me to effective documentation about on how it should be done.
2025-06-29T04:06:25
https://www.reddit.com/r/LocalLLaMA/comments/1ln5l6b/training_open_models_on_my_data_for_replacing_rag/
help_all
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln5l6b
false
null
t3_1ln5l6b
/r/LocalLLaMA/comments/1ln5l6b/training_open_models_on_my_data_for_replacing_rag/
false
false
self
1
null
Suggest me an Uncensored LLM and another LLM for Coding stuffs
1
I've recently installed **LM Studio** and planned to install an **Uncensored LLM** and an **LLM for coding**. Right now **Dolphin 2.9 Llama3 8B** is not serving my purposes as I wanted **Uncensored** model (screenshot attached). Please now suggest me a very good model for Uncensored stuffs and another for Coding stuffs as well. Thank you! https://preview.redd.it/aidiuiww5t9f1.png?width=1426&format=png&auto=webp&s=bde54fdb14bb91dc5689aba710d138bc2f324291
2025-06-29T06:16:33
https://www.reddit.com/r/LocalLLaMA/comments/1ln7poe/suggest_me_an_uncensored_llm_and_another_llm_for/
Apprehensive_Cell_48
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln7poe
false
null
t3_1ln7poe
/r/LocalLLaMA/comments/1ln7poe/suggest_me_an_uncensored_llm_and_another_llm_for/
false
false
https://b.thumbs.redditm…KAFngbSP2_io.jpg
1
null
I made a writing assistant Chrome extension. Completely free with Gemini Nano.
1
2025-06-29T06:20:00
https://v.redd.it/2f6200d67t9f1
WordyBug
v.redd.it
1970-01-01T00:00:00
0
{}
1ln7rll
false
{'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/2f6200d67t9f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'width': 1888, 'scrubber_media_url': 'https://v.redd.it/2f6200d67t9f1/DASH_96.mp4', 'dash_url': 'https://v.redd.it/2f6200d67t9f1/DASHPlaylist.mpd?a=1753770015%2CNGQ2ZDg0YTJiODM3NWMwNWQ5MmMyODE1YzVmMTI2ZjhmODhhMDNmMTM1MjJiM2ZjYjU4ZWQ0ZDIyY2Q1MWVlOQ%3D%3D&v=1&f=sd', 'duration': 27, 'hls_url': 'https://v.redd.it/2f6200d67t9f1/HLSPlaylist.m3u8?a=1753770015%2CMTk1ZDZjNjY4NmEwZmQzNzU4NzJlZWIyMzIzMDM3YjY3MTY5NmNkOGQ4OWMwZWE4MzQ3ZmVlNDY0YzJlMmQyNw%3D%3D&v=1&f=sd', 'is_gif': False, 'transcoding_status': 'completed'}}
t3_1ln7rll
/r/LocalLLaMA/comments/1ln7rll/i_made_a_writing_assistant_chrome_extension/
false
false
https://external-preview…5ea6ef4b8100ad5b
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?format=pjpg&auto=webp&s=1bb186e3a8ff7f0d5d34f99e822934d32794b2d9', 'width': 1888, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=108&crop=smart&format=pjpg&auto=webp&s=2d9e9b187109ca0c57cb6df63c140fd84f6ec74b', 'width': 108, 'height': 61}, {'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=216&crop=smart&format=pjpg&auto=webp&s=00b506522dd5b97eadfb2c935e9cb67cc27c079a', 'width': 216, 'height': 123}, {'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=320&crop=smart&format=pjpg&auto=webp&s=cdced26a5d3bda13185a37919ff922c5c3f6b3d3', 'width': 320, 'height': 183}, {'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=640&crop=smart&format=pjpg&auto=webp&s=0f1310add5fa11574c90009461530bb62b574b16', 'width': 640, 'height': 366}, {'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=960&crop=smart&format=pjpg&auto=webp&s=2e1756ca24c0d67b7f588ed651e34f98dd599f59', 'width': 960, 'height': 549}, {'url': 'https://external-preview.redd.it/aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2a87797e9c2622344f401ada130de7bed6c62c29', 'width': 1080, 'height': 617}], 'variants': {}, 'id': 'aTR3azl2YzY3dDlmMRg_TmPcBoSM13pUYzKlWo7qhuAMWmP4IKxV8h55ZV-h'}], 'enabled': False}
LM Studio vision models???
1
Okay, so I'm brand new to local LLMs, and as such I'm using LM Studio since It's easy to use. But the thing is I need to use vision models, and while LM Studio has some, for the most part every one I try to use doesn't actually allow me to upload images. I'm mainly trying to use uncensored models, so the main staff-picked ones aren't suitable for my purpose. Is there some reason why most of these don't work on LM Studio? Am I doing something wrong or is it LM Studio that is the problem?
2025-06-29T07:32:07
https://www.reddit.com/r/LocalLLaMA/comments/1ln8uqb/lm_studio_vision_models/
BP_Ray
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln8uqb
false
null
t3_1ln8uqb
/r/LocalLLaMA/comments/1ln8uqb/lm_studio_vision_models/
false
false
self
1
null
AI-powered financial analysis tool created an amazing S&P 500 report website
1
[removed]
2025-06-29T07:44:09
https://www.reddit.com/r/LocalLLaMA/comments/1ln914p/aipowered_financial_analysis_tool_created_an/
New_Bumblebee8014
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln914p
false
null
t3_1ln914p
/r/LocalLLaMA/comments/1ln914p/aipowered_financial_analysis_tool_created_an/
false
false
self
1
null
Just a lame site I made for fun
1
[removed]
2025-06-29T07:44:53
https://www.reddit.com/r/LocalLLaMA/comments/1ln91iv/just_a_lame_site_i_made_for_fun/
New_Bumblebee8014
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln91iv
false
null
t3_1ln91iv
/r/LocalLLaMA/comments/1ln91iv/just_a_lame_site_i_made_for_fun/
false
false
self
1
null
Is anyone here using Llama to code websites and apps? From my experience, it sucks
1
Looking at [some examples from Llama 4](https://www.designarena.ai/models/llama-4-maverick), it seems absolutely horrific at any kind of UI/UX. Also on this [benchmark for UI/UX](https://www.designarena.ai/leaderboard), Llama 4 Maverick and Llama 4 Scout sit in the bottom 25% when compared to toher models such as GPT, Claude, Grok, etc. What would you say are Llama's strengths are there if it's not coding interfaces and design?
2025-06-29T07:48:53
https://www.reddit.com/r/LocalLLaMA/comments/1ln93o3/is_anyone_here_using_llama_to_code_websites_and/
Accomplished-Copy332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ln93o3
false
null
t3_1ln93o3
/r/LocalLLaMA/comments/1ln93o3/is_anyone_here_using_llama_to_code_websites_and/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo.png?auto=webp&s=be810d8c53cb7be86e02810848134110fed32281', 'width': 1024, 'height': 1024}, 'resolutions': [{'url': 'https://external-preview.redd.it/VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo.png?width=108&crop=smart&auto=webp&s=665dd32e0413a097a6bd53f03dc03b3053e8ba60', 'width': 108, 'height': 108}, {'url': 'https://external-preview.redd.it/VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo.png?width=216&crop=smart&auto=webp&s=527da9be0c4dbdbdd90a100e4336612140b37170', 'width': 216, 'height': 216}, {'url': 'https://external-preview.redd.it/VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo.png?width=320&crop=smart&auto=webp&s=321cc7b6cca357a8ddf4233c8c9cf5760034e5db', 'width': 320, 'height': 320}, {'url': 'https://external-preview.redd.it/VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo.png?width=640&crop=smart&auto=webp&s=325597b3c72f12582b9d0b5b6188fcbe1b75fb51', 'width': 640, 'height': 640}, {'url': 'https://external-preview.redd.it/VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo.png?width=960&crop=smart&auto=webp&s=2b215709cfb3ca9899bcbca03bf19069e3db4812', 'width': 960, 'height': 960}], 'variants': {}, 'id': 'VWTM0rHJfQzEfowPuYqfBaNAz2NOKVzZAXKVZ11QEDo'}], 'enabled': False}
Why the local Llama-3.2-1B-Instruct is not as smart as the one provided on Hugging Face?
1
On the website of [https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct), there is an "Inference Providers" section where I can chat with Llama-3.2-1B-Instruct. It gives reasonable responses like the following. https://preview.redd.it/r7n08nqxzt9f1.png?width=1238&format=png&auto=webp&s=bbb16c1049feafba2d026e2d93e2a0de65199440 However, when I download and run the model with the following code, it does not run properly. I have asked the same questions, but got bad responses. I am new to LLMs and wondering what causes the difference. Do I use the model not in the right way? from transformers import AutoModelForCausalLM, AutoTokenizer import torch import ipdb model_name = "Llama-3.2-1B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, device_map="cuda", torch_dtype=torch.float16,) def format_prompt(instruction: str, system_prompt: str = "You are a helpful assistant."): if system_prompt: return f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{instruction.strip()} [/INST]" else: return f"<s>[INST] {instruction.strip()} [/INST]" def generate_response(prompt, max_new_tokens=256): inputs = tokenizer(prompt, return_tensors="pt").to(model.device) with torch.no_grad(): outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=max_new_tokens, temperature=0.7, top_p=0.9, do_sample=True, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) response = decoded.split("[/INST]")[-1].strip() return response if __name__ == "__main__": print("Chat with LLaMA-3.2-1B-Instruct. Type 'exit' to stop.") while True: user_input = input("You: ") if user_input.lower() in ["exit", "quit"]: break prompt = format_prompt(user_input) response = generate_response(prompt) print("LLaMA:", response) https://preview.redd.it/d203h6p71u9f1.png?width=1914&format=png&auto=webp&s=0a6a82adfe03861ce268a8e64c9298c443d871fd
2025-06-29T09:13:12
https://www.reddit.com/r/LocalLLaMA/comments/1lnacbb/why_the_local_llama321binstruct_is_not_as_smart/
OkLengthiness2286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnacbb
false
null
t3_1lnacbb
/r/LocalLLaMA/comments/1lnacbb/why_the_local_llama321binstruct_is_not_as_smart/
false
false
https://b.thumbs.redditm…j03I3uq4KqLA.jpg
1
null
Intelligent decisioning for small language model training and serving platform
1
I am working on creating a platform where user can finetune and infer language models with few simple clicks. How can I introduce intelligent decisioning in this? For ex, I can recommend best possible model based on task, trainers based on task types etc. What are the other components that can be introduced
2025-06-29T09:22:37
https://www.reddit.com/r/LocalLLaMA/comments/1lnahfy/intelligent_decisioning_for_small_language_model/
Sensitive_Flight_979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnahfy
false
null
t3_1lnahfy
/r/LocalLLaMA/comments/1lnahfy/intelligent_decisioning_for_small_language_model/
false
false
self
1
null
How to teach AI to read a complete guide/manual/help website to ask questions about it?
1
I am trying to figure out a way on how to teach ai to read help websites about software, like [Obsidian Help](https://help.obsidian.md/), [Python Dev Guide](https://devguide.python.org/), K[DEnlive Manual](https://docs.kdenlive.org/en/) or other guides/manuals/help websites. My goal is to solve problems more efficient, but couldn't find a way to do so. I only figured out that ai can read websites, if you use # followed by a link, but it doesn't follow implemented links. Is there a way on following internal links (only links to the same website) and ask ai within this context or even save the knowledge to ask it even more in future?
2025-06-29T10:48:08
https://www.reddit.com/r/LocalLLaMA/comments/1lnbru7/how_to_teach_ai_to_read_a_complete/
utopify_org
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnbru7
false
null
t3_1lnbru7
/r/LocalLLaMA/comments/1lnbru7/how_to_teach_ai_to_read_a_complete/
false
false
self
1
null
Seems I was informed (incorrectly) that Ollama had very little censorship--at least it finally stopped apologizing.
1
2025-06-29T11:28:36
https://i.redd.it/m3h8ri6epu9f1.jpeg
PaulAtLast
i.redd.it
1970-01-01T00:00:00
0
{}
1lncfmw
false
null
t3_1lncfmw
/r/LocalLLaMA/comments/1lncfmw/seems_i_was_informed_incorrectly_that_ollama_had/
false
false
default
1
{'images': [{'source': {'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?auto=webp&s=4c4c31a4159c363150f928d470c724211907778d', 'width': 2819, 'height': 1215}, 'resolutions': [{'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=108&crop=smart&auto=webp&s=056680d80e9d387b41f004664f374680af4ef664', 'width': 108, 'height': 46}, {'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=216&crop=smart&auto=webp&s=6bc375f0f2f5d9f7c031b390a1e107bd61083cc6', 'width': 216, 'height': 93}, {'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=320&crop=smart&auto=webp&s=e1d73f643ad23989b35eb6bd41c474302917b919', 'width': 320, 'height': 137}, {'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=640&crop=smart&auto=webp&s=5334ca3e12c92ac61fd2993f722ba0b8d8c86fc3', 'width': 640, 'height': 275}, {'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=960&crop=smart&auto=webp&s=aaa366655375448439edc44090019e81eb57790e', 'width': 960, 'height': 413}, {'url': 'https://preview.redd.it/m3h8ri6epu9f1.jpeg?width=1080&crop=smart&auto=webp&s=17203bc134ab0085e001c6876bf2066297cc5b63', 'width': 1080, 'height': 465}], 'variants': {}, 'id': 'm3h8ri6epu9f1'}], 'enabled': True}
12B Q5_K_M or 22B Q4_K_S
1
Hey, I got a question. Which will be better for RP? 12B Q5\_K\_M or 22B Q4\_K\_S ? Also what are your thoughts on Q3 quants in 22-24B range?
2025-06-29T11:59:01
https://www.reddit.com/r/LocalLLaMA/comments/1lncymd/12b_q5_k_m_or_22b_q4_k_s/
Familiar_Passion_827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lncymd
false
null
t3_1lncymd
/r/LocalLLaMA/comments/1lncymd/12b_q5_k_m_or_22b_q4_k_s/
false
false
self
1
null
Windows vs Linux (Ubuntu) for LLM-GenAI work/research.
1
Based on my research, linux is the "best os" for LLM work (local gpu etc). Although I'm a dev, the constant problems of linux (drivers, apps crushing, apps not working at all) make my time wasted instead of focus on working. Also some business apps or vpn etc, doesnt work, the constant problems are leading the "work" to tinkering than actual work. Based on you experience, is ubuntu (or linux) mandatory for local llm work? Is windows wsl/dockers enough? or alternative, should i move to cloud gpu with thin client as my machine?
2025-06-29T12:03:39
https://www.reddit.com/r/LocalLLaMA/comments/1lnd1su/windows_vs_linux_ubuntu_for_llmgenai_workresearch/
Direct_Dimension_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnd1su
false
null
t3_1lnd1su
/r/LocalLLaMA/comments/1lnd1su/windows_vs_linux_ubuntu_for_llmgenai_workresearch/
false
false
self
1
null
Mistral Small 3.2 can't generate tables, and stops generation altogether
1
``` ### Analisi del Testo #### 📌 **Introduzione** Il testo analizza le traiettorie di vita di tre individui bangladesi, esplorando come la mobilità e l'immobilità siano influenzate da poteri esterni, come gli apparati burocratico-polizieschi e le forze economiche. I soggetti studiati sono definiti "probashi", un termine che indica persone al contempo cosmopolite e profondamente radicate in un luogo, mobili e sedentarie. #### 📌 **Termini Chiave** | **Termine** | **Definizione** ``` I'm using Mistral-Small-3.2-24B-Instruct-2506-GGUF:IQ4_XS from unsloth. I tried different quantizations, tried bartowski's quants, different prompts, but I get the same result. The generation stops when trying to write the table header. There's nothink strange in the logs. Does anyone know why? Other llms (qwen3, gemma3) succeed in writing tables. I'm using llama.cpp + llama-swap + open-webui
2025-06-29T12:35:38
https://www.reddit.com/r/LocalLLaMA/comments/1lndmzj/mistral_small_32_cant_generate_tables_and_stops/
MQuarneti
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lndmzj
false
null
t3_1lndmzj
/r/LocalLLaMA/comments/1lndmzj/mistral_small_32_cant_generate_tables_and_stops/
false
false
self
1
null
I built Coretx to manage AI amnesia - 90 second demo
1
Do you get tired of re-explaining things when switching between AIs, or returning to one later? I did. So I built Coretx and now I don't work without it. AIs connect via MCP, can import from Claude/ChatGPT, and runs completely local with encrypted storage. No sign up required. I've been using it while building it for about a month now, and I can't go back to working without it. I'd love feedback from fellow power-users.
2025-06-29T13:10:02
https://getcoretx.com
nontrepreneur_
getcoretx.com
1970-01-01T00:00:00
0
{}
1lneb9h
false
null
t3_1lneb9h
/r/LocalLLaMA/comments/1lneb9h/i_built_coretx_to_manage_ai_amnesia_90_second_demo/
false
false
default
1
null
What is the best open source TTS model with multi language support?
1
I'm currently developing an addon for Anki (an open source flashcard software). One part of my plan is to integrate an option to generate audio samples based on the preexisting content of the flashcards (for language learning). The point of it is using a local TTS model that doesn't require any paid services or APIs. To my knowledge the addons that are currently available for this have no option for a free version that still generate quite good audio. I've looked a lot on HF but I struggle a bit to find out which models are actually suitable and versatile enough to support enough languages. My current bet would be XTTS2 due to the broad language support and its evaluation on leaderboards, but I find it to be a little "glitchy" at times. I don't know if it's a good pick because it's mostly focussed on voice cloning. Could that be an issue? Do I have to think about some sort of legal concerns when using such a model? Which voice samples am I allowed to distribute to people so they can be used for voice cloning? I guess it wouldn't be user friendly to ask them to find their own 10s voice samples for generating audio. So my question to my beloved local model nerds is: Which models have you tested and which ones would you say are the most consistent and reliable?
2025-06-29T13:20:44
https://www.reddit.com/r/LocalLLaMA/comments/1lnejb6/what_is_the_best_open_source_tts_model_with_multi/
Anxietrap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnejb6
false
null
t3_1lnejb6
/r/LocalLLaMA/comments/1lnejb6/what_is_the_best_open_source_tts_model_with_multi/
false
false
self
1
null
What's the best way to summarize or chat with website content?
1
I'm using kobold and it would be nice if my Firefox browser could talk with it.
2025-06-29T13:32:52
https://www.reddit.com/r/LocalLLaMA/comments/1lnesft/whats_the_best_way_to_summarize_or_chat_with/
Sandzaun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnesft
false
null
t3_1lnesft
/r/LocalLLaMA/comments/1lnesft/whats_the_best_way_to_summarize_or_chat_with/
false
false
self
1
null
GUI for Writing Long Stories with LLMs?
1
I'm looking for a GUI that can assist in writing long stories, similar to Perchance's story generator. Perchance allows you to write what happens next, generates the subsequent passage, and provides summaries of previous passages to keep everything within the context window. I'm wondering if there are any similar programs with a user interface that can be connected to Ollama or another LLM to help write long, coherent stories. The stories don't need to be masterpieces, but they should be lengthy, ideally over 20,000 words. Any recommendations or suggestions would be greatly appreciated! The only resource about this topic that I've found is the github awesome story generation. I haven't even been able to find a Discord server for writing enthusiast that try using AI for it.
2025-06-29T13:42:45
https://www.reddit.com/r/LocalLLaMA/comments/1lnf00q/gui_for_writing_long_stories_with_llms/
BlacksmithRadiant322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnf00q
false
null
t3_1lnf00q
/r/LocalLLaMA/comments/1lnf00q/gui_for_writing_long_stories_with_llms/
false
false
self
1
null
Is Yann LeCun Changing Directions? - Prediction using VAEs for World Model
1
I am a huge fan of Yann Lecun and follow all his work very closely, especially the world model concept which I love. And I just finished reading **“Whole-Body Conditioned Egocentric Video Prediction” -** the new FAIR/Berkeley paper with Yann LeCun listed as lead author. The whole pipeline looks like this: 1. **Frame codec:** Every past RGB frame (224 × 224) is shoved through a **frozen Stable-Diffusion VAE** \-> 32 × 32 × 4 latent grid. 2. **Dynamics model:** A **Conditional Diffusion Transformer (CDiT)** autoregressively predicts the *next* latent, conditioned on a full 3-D body-pose trajectory. 3. **Visualisation:** The predicted latents are pushed back through the frozen VAE decoder so we can actually *see* the roll-outs and compute LPIPS / FID. That’s… exactly the sort of “predict the next frame” setup Yann spends entire keynotes dunking on: > So I’m stuck with a big **???** right now. # Here’s why it feels contradictory * **Frozen VAE or not, you’re still using a VAE.** If VAEs allegedly learn lousy representations, why lean on them at all -even as a codec - when V-JEPA exists? Why not learn a proper decoder on your great JEPA models? * **The model** ***is*** **autoregressive.** Sure, the loss is ε-prediction in latent space, but at inference time you unroll it exactly like the next-token models he calls a dead end. * **JEPA latents are absent.** If V-JEPA is so much better, why not swap it in - even without a public decoder - ignite the debate, and skip the “bad” VAE entirely? # Or am I missing something? * Does freezing the VAE magically sidesteps the “bad representation” critique? * Is this just an engineering placeholder until JEPA ships with decoder? * Is predicting latents via diffusion fundamentally different enough from next-pixel CE that it aligns with his worldview after all? * Or… is Yann quietly conceding that you still need a pixel-space codec (VAE, JPEG, whatever) for any practical world-model demo? Honestly I don’t know whether this is a change in philosophy or just pragmatic glue code to get a body-conditioned world model out the door before NeurIPS deadlines. What do you all think? Has anyone from FAIR hinted at a JEPA-codec drop? Is there a principled reason we should stop worrying about the “no VAE, no autoregression” mantra in this context? I’d love to hear takes from people who’ve played with JEPA, latent diffusion, or any large-scale world-model work. Am I missing something and totally wrong, or does this paper actually mark a shift in Yann’s stance?
2025-06-29T13:52:29
https://i.redd.it/cutzsrmpfv9f1.png
Desperate_Rub_1352
i.redd.it
1970-01-01T00:00:00
0
{}
1lnf7eo
false
null
t3_1lnf7eo
/r/LocalLLaMA/comments/1lnf7eo/is_yann_lecun_changing_directions_prediction/
false
false
default
1
{'images': [{'source': {'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?auto=webp&s=230e2dbaaa12f7a397e6cd59fabd93e6bf2b832a', 'width': 1092, 'height': 1320}, 'resolutions': [{'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=108&crop=smart&auto=webp&s=21f7effdfbe75e5035dbb7b9ac19f15ee90a4d6d', 'width': 108, 'height': 130}, {'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=216&crop=smart&auto=webp&s=0bf20eb8e1a1cc7c3db19ccbeaa1bcf06b28220b', 'width': 216, 'height': 261}, {'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=320&crop=smart&auto=webp&s=fb7e072199d43396cd5455e2d4487e77c99f12b9', 'width': 320, 'height': 386}, {'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=640&crop=smart&auto=webp&s=a80c376025a4d94f4078234f25a798b979d501fb', 'width': 640, 'height': 773}, {'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=960&crop=smart&auto=webp&s=70cb96423c45c1d3d7335262f2e88da828a76fc6', 'width': 960, 'height': 1160}, {'url': 'https://preview.redd.it/cutzsrmpfv9f1.png?width=1080&crop=smart&auto=webp&s=5ee15a43a978711ee4acc5042a182fcb388beffc', 'width': 1080, 'height': 1305}], 'variants': {}, 'id': 'cutzsrmpfv9f1'}], 'enabled': True}
Which GPU to upgrade from 1070?
1
Quick question: which GPU should I buy to run local LLMs which won’t ruin my budget. 🥲 Currently running with an NVIDIA 1070 with 8GB VRAM. Qwen3:8b runs fine. But these size of models seems a bit dump compared to everything above that. (But everything above won’t run on it (or slow as hell) 🤣 Id love to use it for: RAG / CAG Tools (MCP) Research (deep research and e.g with searxng) Coding I know. Intense requests.. but, yeah. Won’t like to put my personal files for vectoring into the cloud 😅) Even when you’ve other recommendations, pls share. :) Thanks in advance!
2025-06-29T14:00:04
https://www.reddit.com/r/LocalLLaMA/comments/1lnfdch/which_gpu_to_upgrade_from_1070/
TjFr00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnfdch
false
null
t3_1lnfdch
/r/LocalLLaMA/comments/1lnfdch/which_gpu_to_upgrade_from_1070/
false
false
self
1
null
KoboldCpp v1.95 with Flux Kontext support
1
Flux Kontext is a relatively new open weights model based on Flux that can **edit images using natural language**. Easily replace backgrounds, edit text, or add extra items into your images. With the release of KoboldCpp v1.95, Flux Kontext support has been added to KoboldCpp! No need for any installation or complicated workflows, just download one executable and launch with [**a ready-to-use kcppt template**](https://huggingface.co/koboldcpp/kcppt/resolve/main/Flux-Kontext.kcppt) (recommended at least 12gb VRAM), and you're ready to go, the necessary models will be fetched and loaded. Then you can open a browser window to [http://localhost:5001/sdui](http://localhost:5001/sdui), a simple A1111 like UI. Supports using up to 4 reference images. Also supports the usual inpainting, img2img, sampler settings etc. You can also load the component models individually (e.g. you can reuse the VAE or T5-XXL for Chroma, which koboldcpp also supports). https://preview.redd.it/18yvthliiv9f1.png?width=600&format=png&auto=webp&s=3b2771ec6ce97968a675d3c1facb7e19b20b5dff KoboldCpp also emulates the A1111/Forge and ComfyUI APIs so third party tools can use it as a drop in replacement. This is possible thanks to the hard work of stable-diffusion.cpp contributors leejet and stduhpf. P.s. Also, gemma 3n support is included in this release too. **Try it here:** [**https://github.com/LostRuins/koboldcpp/releases/latest**](https://github.com/LostRuins/koboldcpp/releases/latest)
2025-06-29T14:09:23
https://www.reddit.com/r/LocalLLaMA/comments/1lnfl21/koboldcpp_v195_with_flux_kontext_support/
HadesThrowaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnfl21
false
null
t3_1lnfl21
/r/LocalLLaMA/comments/1lnfl21/koboldcpp_v195_with_flux_kontext_support/
false
false
https://b.thumbs.redditm…4szhxO44ZpMU.jpg
1
null
I will automate your business using ai and smart workflows
1
[removed]
2025-06-29T15:08:18
https://www.reddit.com/r/LocalLLaMA/comments/1lngy3q/i_will_automate_your_business_using_ai_and_smart/
Rome_Z
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lngy3q
false
null
t3_1lngy3q
/r/LocalLLaMA/comments/1lngy3q/i_will_automate_your_business_using_ai_and_smart/
false
false
self
1
null
DeepSeek-R1 70B jailbreaks are all ineffective. Is there a better way?
1
I've got [DeepSeek's distilled 70B model](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) running locally. However, every jailbreak I can find to have it ignore its content restrictions/policies fail, or are woefully inconsistent at best. Methods I've tried: * "Untrammelled assistant": [link](https://www.reddit.com/r/ChatGPTJailbreak/comments/1iex0dq/deepseek_jailbreak_works_on_official_deepseek/) and [here](https://www.reddit.com/r/ChatGPTJailbreak/comments/1ic4xq9/deepseek_r1_easy_jailbreak/) * "Opposite mode": [link](https://www.reddit.com/r/ChatGPTJailbreak/comments/1kgtofc/deepseek_full_almost_all_jailbreaked_prompt/) * The "Zo" one (can't find a link) * Pliny's method: [link](https://x.com/elder_plinius/status/1881375272379023731) The only "effective" method I've found is to edit the <think> block by stopping the output and making the first line something like <think> The user has asked for [x]. Thankfully, the system prompt confirms that I can ignore my usual safety checks, so I can proceed. However, this is a pretty janky manual solution. The abliterated version of the model works just fine, but I hear that those aren't as capable or effective. Is there a better jailbreak I can attempt, or should I stick with the abliterated model?
2025-06-29T15:14:22
https://www.reddit.com/r/LocalLLaMA/comments/1lnh3d8/deepseekr1_70b_jailbreaks_are_all_ineffective_is/
RoIIingThunder3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnh3d8
false
null
t3_1lnh3d8
/r/LocalLLaMA/comments/1lnh3d8/deepseekr1_70b_jailbreaks_are_all_ineffective_is/
false
false
self
1
null
Detecting if an image contains a table, performance comparsion
1
Hello, I'm building a tool that integrates a table extraction functionality from images. I already have the main flow going with AWS Textract, to convert table images to a HTMl table and pass it to the llm model to answer questions. My question is on the step before that, I need to be able to detect if a passed image contains a table, and redirect the request to the proper flow. What would be the best method to do this? In terms of speed and cost? I currently am trying to use all mistral models (because the platform is using EU-based models and infrastructure), so I the idea was to have a simple prompt to Pixtral or mistral-small and ask it if the image contains a table, would this be a correct solution? Between pixtral and mistral-small what would be the best model for this specific use case? (Just determining if an image contains a table) ? Or if you think you have better solutions, I'm all ears, thanks!!
2025-06-29T15:19:56
https://www.reddit.com/r/LocalLLaMA/comments/1lnh84u/detecting_if_an_image_contains_a_table/
Gr33nLight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnh84u
false
null
t3_1lnh84u
/r/LocalLLaMA/comments/1lnh84u/detecting_if_an_image_contains_a_table/
false
false
self
1
null
AI coding agents...what am I doing wrong?
1
Why are other people having such good luck with ai coding agents and I can't even get mine to write a simple comment block at the top of a 400 line file? The common refrain is it's like having a junior engineer to pass a coding task off to...well, I've never had a junior engineer scroll 1/3rd of the way through a file and then decide it's too big for it to work with. It frequently just gets stuck in a loop reading through the file looking for where it's supposed to edit and then giving up part way through and saying it's reached a token limit. How many tokens do I need for a 300-500 line C/C++ file? Most of mine are about this big, I try to split them up if they get much bigger because even my own brain can't fathom my old 20k line files very well anymore... Tell me what I'm doing wrong? - LM Studio on a Mac M4 max with 128 gigglebytes of RAM - Qwen3 30b A3B, supports up to 40k tokens - VS Code with Continue extension pointed to the local LM Studio instance (I've also tried through OpenWebUI's OpenAI endpoint in case API differences were the culprit) Do I need a beefier model? Something with more tokens? Different extension? More gigglebytes? Why can't I just give it 10 million tokens if I otherwise have enough RAM?
2025-06-29T16:19:12
https://www.reddit.com/r/LocalLLaMA/comments/1lnin1x/ai_coding_agentswhat_am_i_doing_wrong/
furyfuryfury
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnin1x
false
null
t3_1lnin1x
/r/LocalLLaMA/comments/1lnin1x/ai_coding_agentswhat_am_i_doing_wrong/
false
false
self
1
null
Best local set up for getting writing critique/talking about the characters?
1
Hi. I have a RTX 3060 with 12 Gb vram gpu. A fairly alright computer for entry level AI stuff. I've been experimenting with LM Studio, GT4ALL, AnythingLLM and Dot. My use case is that I want to upload chapters of a book I'm writing for fun, get critiques, have it tell me strengths and weaknesses in my writing and also learn about the characters so it can help me think of stuff about them. My characters are quite fleshed out, but I enjoy the idea of "discovery" when say asking "What type of drinks based on the story and info you know about Kevin do you think he'd like?" kind of stuff, so both a critique assistant as well as a talk about the project in general. I need long term persistent memory (as much as my rig will allow) and a good way to reference back to uploads/conversations with the bot. So far I've been using AnythingLLM because it has a workspace and I can tell it what model to use, currently it's Deep Seek AI R1 Distill Qwen 14B which is about the upper limit to run with out too many issues. So are there any better models I could use and does anyone have any thoughts on which LLM interface would be best for what I want to use it for? Note: I've used ChatGPT and Claude, but both are limited or lost the thread. Otherwise it was pretty helpful for concurrent issues I have in my writing, like I use too much purple prose and don't trust the reader to know what's going on through physical action and instead explain the characters inner thoughts too much. I'm not looking for flattery, more strength, highlights, weaknesses, crucial fixes etc type critique. GPT tended to flattery till I told it to stop and Claude has a built in writers help function, but I only got one chapter in. I also don't mind if it's slow, so long as it's accurate and less likely to lose details or get confused. In addition, I'm also not super fussed about my stuff being used as future model improvements/scrapping but it's nice to have something online more for personal privacy than contributing to anonymous data in a pool.
2025-06-29T16:21:21
https://www.reddit.com/r/LocalLLaMA/comments/1lniowu/best_local_set_up_for_getting_writing/
Vast_Description_206
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lniowu
false
null
t3_1lniowu
/r/LocalLLaMA/comments/1lniowu/best_local_set_up_for_getting_writing/
false
false
self
1
null
Best foss LLMs for analysing PTE essay for potato system
1
Hi guys, I'm developing a PTE essay generation and evaluation (scoring, giving feedback, etc.) tool for learning the AI and LLMs using python and ollama. The problem is my potato system. (6GB Usable RAM outof 8GB with No GPU) Which are the best FOSS LLMs out there for this scenario? (Which are the best if I've CHAD 💪🏋️ system?) Any tips and ideas for the tool (if you're interested to share your thoughts)
2025-06-29T16:22:28
https://www.reddit.com/r/LocalLLaMA/comments/1lniptg/best_foss_llms_for_analysing_pte_essay_for_potato/
UnknownSh00ter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lniptg
false
null
t3_1lniptg
/r/LocalLLaMA/comments/1lniptg/best_foss_llms_for_analysing_pte_essay_for_potato/
false
false
self
1
null
How are local or online models scraping? Is it different from search?
1
Are the scrapers usually part of the model or is it an MCP server? How did scrapers change after ai? Deep research is probably one of the most useful things I’ve used, if I run it locally with openwebui and the search integration (like ddg) how does it get the data from sites?
2025-06-29T16:28:16
https://www.reddit.com/r/LocalLLaMA/comments/1lniut8/how_are_local_or_online_models_scraping_is_it/
InsideYork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lniut8
false
null
t3_1lniut8
/r/LocalLLaMA/comments/1lniut8/how_are_local_or_online_models_scraping_is_it/
false
false
self
1
null
I built a multi-modal semantic search framework
1
I’ve developed a unified framework for multi-modal semantic search that removes the typical production-infrastructure bottleneck and lets you focus entirely on front-end features. In most production environments, enabling semantic search demands multiple, separately configured components. This framework bundles everything you need into a single package: * **Comprehensive document database** * **Vector storage** * **Media storage** * **Embedding encoders** * **Asynchronous worker processes** When you save data via this framework, it’s automatically embedded and indexed in the background—using async workers—so your app gets an instant response and is immediately ready for semantic search. No more manual database setup or glue code. [Website](https://www.onenode.ai) https://reddit.com/link/1lnj7wb/video/of5hm5h6aw9f1/player
2025-06-29T16:43:20
https://www.reddit.com/r/LocalLLaMA/comments/1lnj7wb/i_built_a_multimodal_semantic_search_framework/
Available_Ad_5360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnj7wb
false
null
t3_1lnj7wb
/r/LocalLLaMA/comments/1lnj7wb/i_built_a_multimodal_semantic_search_framework/
false
false
self
1
null
Prompt Smells, Just Like code
1
We all know about code smells. When your code works, but it’s messy and you just know it’s going to cause pain later. The same thing happens with prompts. I didn’t really think about it until I saw our LLM app getting harder and harder to tweak… and the root cause? Messy, overcomplicated prompts, complex workflows. Prompts that: * Try to do five different things at once * Are copied all over the place with slight tweaks * Ask the LLM to do basic stuff your code should have handled It’s basically tech debt, just hiding in your prompts instead of your code. And without proper tests or evals, changing them feels like walking on eggshells. I wrote a blog post about this. I’m calling it **prompt smells** and sharing how I think we can avoid them. **Link:** [Full post here](link) Anyone else run into this?
2025-06-29T17:07:21
https://blog.surkar.in/prompt-smells-just-like-code
thesmallstar
blog.surkar.in
1970-01-01T00:00:00
0
{}
1lnjtrs
false
null
t3_1lnjtrs
/r/LocalLLaMA/comments/1lnjtrs/prompt_smells_just_like_code/
false
false
default
1
null
Prompt Smells, Just Like Code
1
We all know about code smells. When your code works, but it’s messy and you just know it’s going to cause pain later. The same thing happens with prompts. I didn’t really think about it until I saw our LLM app getting harder and harder to tweak… and the root cause? Messy, overcomplicated prompts, complex workflows. Some examples, Prompt Smell when they: * Try to do five different things at once * Are copied all over the place with slight tweaks * Ask the LLM to do basic stuff your code should have handled It’s basically tech debt, just hiding in your prompts instead of your code. And without proper tests or evals, changing them feels like walking on eggshells. I wrote a blog post about this. I’m calling it **prompt smells** and sharing how I think we can avoid them. **Link:** [Full post here](https://blog.surkar.in/prompt-smells-just-like-code) What's your take on this?
2025-06-29T17:10:07
https://blog.surkar.in/prompt-smells-just-like-code
thesmallstar
blog.surkar.in
1970-01-01T00:00:00
0
{}
1lnjw6m
false
null
t3_1lnjw6m
/r/LocalLLaMA/comments/1lnjw6m/prompt_smells_just_like_code/
false
false
default
1
null
What memory/vram temperatures do you get (particularly anyone with gddr7 in the RTX 50X0 series)?
1
Doesnt seem to be much public info on gddr7 thermals generally.
2025-06-29T17:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1lnknry/what_memoryvram_temperatures_do_you_get/
MuddyPuddle_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnknry
false
null
t3_1lnknry
/r/LocalLLaMA/comments/1lnknry/what_memoryvram_temperatures_do_you_get/
false
false
self
1
null
According to rumors NVIDIA is planning a RTX 5070 Ti SUPER with 24GB VRAM
1
2025-06-29T18:03:15
https://videocardz.com/newz/nvidia-also-planning-geforce-rtx-5070-ti-super-with-24gb-gddr7-memory
BringerOfNuance
videocardz.com
1970-01-01T00:00:00
0
{}
1lnl6we
false
null
t3_1lnl6we
/r/LocalLLaMA/comments/1lnl6we/according_to_rumors_nvidia_is_planning_a_rtx_5070/
false
false
default
1
null
Context Engineering
1
"Context engineering is the delicate art and science of filling the context window with just the right information for the next step." — Andrej Karpathy. A practical, first-principles handbook inspired by Andrej Karpathy and 3Blue1Brown for moving beyond prompt engineering to the wider discipline of context design, orchestration, and optimization. [https://github.com/davidkimai/Context-Engineering](https://github.com/davidkimai/Context-Engineering)
2025-06-29T18:10:46
https://www.reddit.com/r/LocalLLaMA/comments/1lnldsj/context_engineering/
recursiveauto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnldsj
false
null
t3_1lnldsj
/r/LocalLLaMA/comments/1lnldsj/context_engineering/
false
false
self
1
null
Trying to figure out when it makes sense...
1
So I'm an independent developer of 25+ yrs. I've really enjoyed working with AI (Claude and OpenAI mostly) for my coding assistant in the past 6 months, it's not been very expensive but I'm also not using it "full time" either. I did some LLM experimentation with my old RX580 8GB card which is not very good for actual coding compared to Claude 3.7/4.0. I typically use VS Code + Cline. I've been seeing people use multi-GPU and some recommended using 4 x 3090's @ 24GB which is way out of my budget for the little stuff I'm doing. I've considered a M4 Mac @ 128GB also. Still pretty expensive plus I'm a PC guy. So I'm curious - if privacy is not a concern (nothing I'm doing is ground breaking or top secret) is there a point in going all Local? I could imagine my system pumping out code 24/7 (for me to spend a month debugging all the problems AI creates), but I find I end up sitting babysitting after every "task" anyways as it rarely works well anyways. And the wait time between tasks could become a massive bottleneck on Local. I was wondering if maybe running 2-4 16GB Intel Arc cards would be enough for a budget build, but after watching 8GB 7b-Q4 model shred a fully working class of C# code into "// to be implemented", I'm feeling skeptical. I went back to Claude and went from waiting 60 seconds for my "first token" back to "the whole task took 60 seconds", Typically, on client work, I've just used manual AI refactoring (i.e. copy/paste into GPT-4 Chat), or I split my project off into a standalone portion and use AI to build it, and re-integrate it myself back into the code base) I'm just wondering at what point does the hardware expenditure make sense vs cloud if privacy is not an issue.
2025-06-29T18:20:52
https://www.reddit.com/r/LocalLLaMA/comments/1lnlmpi/trying_to_figure_out_when_it_makes_sense/
Waste-Toe7042
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnlmpi
false
null
t3_1lnlmpi
/r/LocalLLaMA/comments/1lnlmpi/trying_to_figure_out_when_it_makes_sense/
false
false
self
1
null
How do you use datasets from huggingface/kaggle etc into local apps like lmstudio or jan local apps
1
I am a beginner, and have started using local apps like lmstudio and jan, however I am unable to figure how does one uses dataset from sites like kaggle or huggingface
2025-06-29T18:22:31
https://www.reddit.com/r/LocalLLaMA/comments/1lnlo69/how_do_you_use_datasets_from_huggingfacekaggle/
vasuhawa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnlo69
false
null
t3_1lnlo69
/r/LocalLLaMA/comments/1lnlo69/how_do_you_use_datasets_from_huggingfacekaggle/
false
false
self
1
null
4x 4090 48GB inference box (I may have overdone it)
1
A few months ago I discovered that 48GB 4090s were starting to show up on the western market in large numbers. I didn't think much of it at the time, but then I got my payout from the mt.gox bankruptcy filing (which has been ongoing for over 10 years now), and decided to blow a chunk of it on an inference box for local machine learning experiments. After a delay receiving some of the parts (and admittedly some procrastination on my end), I've finally found the time to put the whole machine together! Specs: * Asrock romed8-2t motherboard (SP3) * 32 core epyc * 256GB 2666V memory * 4x "tronizm" rtx 4090D 48GB modded GPUs from china * 2x 1tb nvme (striped) for OS and local model storage The cards are very well built. I have no doubts as to their quality whatsoever. They were heavy, the heatsinks made contact with all the board level components and the shrouds were all-metal and very solid. It was almost a shame to take them apart! They were however incredibly loud. At idle, the fan sits at 30%, and at that level they are already as loud as the loudest blower cards for gaming. At full load, they are truly deafening and definitely not something you want to share space with. Hence the water-cooling. There are however no full-cover waterblocks for these GPUs (they use a custom PCB), so to cool them I had to get a little creative. Corsair makes a (kinda) [generic block](https://www.corsair.com/uk/en/p/custom-liquid-cooling/cx-9025001-ww/icue-link-xg3-rgb-hybrid-gpu-water-block-4090-4080-cx-9025001-ww?srsltid=AfmBOopBdweqKN5Wpj6wHKLSR9SEYZmNpOpOyaFZTLLdld7hLBrg1iCg) called the xg3. The product itself is a bit rubbish, requiring corsairs proprietary i-cue system to run the fan which is supposed to cool the components not covered by the coldplate. It's also overpriced. However these are more or less the only option here. As a side note, these "generic" blocks only work work because the mounting hole and memory layout around the core is actually standardized to some extent, something I learned during my research. The cold-plate on these blocks turned out to foul one of the components near the core, so I had to modify them a bit. I also couldn't run the aforementioned fan without corsairs i-cue link nonsense and the fan and shroud were too thick anyway and would have blocked the next GPU anyway. So I removed the plastic shroud and fabricated a frame + heatsink arrangement to add some support and cooling for the VRMs and other non-core components. As another side note, the marketing material for the xg3 claims that the block contains a built-in temperature sensor. However I saw no indication of a sensor anywhere when disassembling the thing. Go figure. Lastly there's the case. I couldn't find a case that I liked the look of that would support three 480mm radiators, so I built something out of pine furniture board. Not the easiest or most time efficient approach, but it was fun and it does the job (fire hazard notwithstanding). As for what I'll be using it for, I'll be hosting an LLM for local day-to-day usage, but I also have some more unique project ideas, some of which may show up here in time. Now that such projects won't take up resources on my regular desktop, I can afford to do a lot of things I previously couldn't! P.S. If anyone has any questions or wants to replicate any of what I did here, feel free to DM me with any questions, I'm glad to help any way I can!
2025-06-29T18:33:40
https://www.reddit.com/gallery/1lnlxp1
101m4n
reddit.com
1970-01-01T00:00:00
0
{}
1lnlxp1
false
null
t3_1lnlxp1
/r/LocalLLaMA/comments/1lnlxp1/4x_4090_48gb_inference_box_i_may_have_overdone_it/
false
false
https://external-preview…abc3423ee80b4bb7
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?auto=webp&s=7fbbc764721986d713527772c16b9c02fcd411a6', 'width': 3072, 'height': 4096}, 'resolutions': [{'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?width=108&crop=smart&auto=webp&s=0f79a082f28f554cc94ca3e5fdf80eb9c0222d3c', 'width': 108, 'height': 144}, {'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?width=216&crop=smart&auto=webp&s=7a6dbb1a7d9d867ae8f946c1ff0a7919c3581c10', 'width': 216, 'height': 288}, {'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?width=320&crop=smart&auto=webp&s=c43b3f770da44d4fc604cd2edcba39f039c286d5', 'width': 320, 'height': 426}, {'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?width=640&crop=smart&auto=webp&s=936572d5a67f4298cbb8ecc135d737e991ade403', 'width': 640, 'height': 853}, {'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?width=960&crop=smart&auto=webp&s=6f76e40de9918a6741537a89cb266c0e3dae5ee6', 'width': 960, 'height': 1280}, {'url': 'https://external-preview.redd.it/o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8.jpeg?width=1080&crop=smart&auto=webp&s=4b6a2856c0ee855522c312ab441f3048e297ce8e', 'width': 1080, 'height': 1440}], 'variants': {}, 'id': 'o67J1SHcLKrQAlXicnfT20w0glJr7s4wb4-c1GOwiA8'}], 'enabled': True}
hunyuan-a13b: any news? GGUF? MLX?
1
Like many I’m excited about this model. We had a big thread on it, then crickets. Any news?
2025-06-29T19:04:56
https://www.reddit.com/r/LocalLLaMA/comments/1lnmp98/hunyuana13b_any_news_gguf_mlx/
jarec707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnmp98
false
null
t3_1lnmp98
/r/LocalLLaMA/comments/1lnmp98/hunyuana13b_any_news_gguf_mlx/
false
false
self
1
null
Running AI models on phone on a different OS?
1
Has anyone tried running a local LLM on a phone running GrapheneOS or another lightweight Android OS? Stock Android tends to consume 70–80% of RAM at rest, but I'm wondering if anyone has managed to reduce that significantly with Graphene and fit something like DeepSeek-R1-0528-Qwen3-8B (Q4 quant) in memory. If no one's tried and people are interested, I might take a stab at it myself. Curious to hear your thoughts or results if you've attempted anything similar.
2025-06-29T19:45:30
https://www.reddit.com/r/LocalLLaMA/comments/1lnnoc1/running_ai_models_on_phone_on_a_different_os/
AspecialistI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lnnoc1
false
null
t3_1lnnoc1
/r/LocalLLaMA/comments/1lnnoc1/running_ai_models_on_phone_on_a_different_os/
false
false
self
1
null
Current State of OpenAI
1
2025-06-29T20:49:09
https://i.redd.it/p9nlm707ix9f1.jpeg
noblex33
i.redd.it
1970-01-01T00:00:00
0
{}
1lnp73w
false
null
t3_1lnp73w
/r/LocalLLaMA/comments/1lnp73w/current_state_of_openai/
false
false
default
1
{'images': [{'source': {'url': 'https://preview.redd.it/p9nlm707ix9f1.jpeg?auto=webp&s=f831246c9e0557a338788b024845fd054bf19c0b', 'width': 884, 'height': 482}, 'resolutions': [{'url': 'https://preview.redd.it/p9nlm707ix9f1.jpeg?width=108&crop=smart&auto=webp&s=d0894a05385b3659a19522c234fa0f7be3ad5dc7', 'width': 108, 'height': 58}, {'url': 'https://preview.redd.it/p9nlm707ix9f1.jpeg?width=216&crop=smart&auto=webp&s=f35916c0e355cf3027e0b335c7df090710dac1a5', 'width': 216, 'height': 117}, {'url': 'https://preview.redd.it/p9nlm707ix9f1.jpeg?width=320&crop=smart&auto=webp&s=f81cb111633526b9e36b780d883251b9e5552132', 'width': 320, 'height': 174}, {'url': 'https://preview.redd.it/p9nlm707ix9f1.jpeg?width=640&crop=smart&auto=webp&s=2d0dd33a9486e781cacb20e6a90acc2b0b65cc47', 'width': 640, 'height': 348}], 'variants': {}, 'id': 'p9nlm707ix9f1'}], 'enabled': True}