Column schema for the table below (string columns give min–max value lengths, int and timestamp columns give min–max values, ⌀ marks columns containing nulls):

- title: string, lengths 1–300
- score: int64, 0–8.54k
- selftext: string, lengths 0–40k
- created: timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29, ⌀
- url: string, lengths 0–878
- author: string, lengths 3–20
- domain: string, lengths 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, lengths 646–1.8k, ⌀
- name: string, length 10
- permalink: string, lengths 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, lengths 4–213
- ups: int64, 0–8.54k
- preview: string, lengths 301–5.01k, ⌀

title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Gen AI training content | 1 | [removed] | 2025-06-09T13:08:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l74cps/gen_ai_training_contrnt/ | Outrageous_Cup9473 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l74cps | false | null | t3_1l74cps | /r/LocalLLaMA/comments/1l74cps/gen_ai_training_contrnt/ | false | false | self | 1 | null |
KVzip: Query-agnostic KV Cache Eviction — 3~4× memory reduction and 2× lower decoding latency | 401 | Hi! We've released KVzip, a KV cache compression method designed to support diverse future queries. You can try the demo on GitHub! Supported models include Qwen3/2.5, Gemma3, and LLaMA3.
GitHub: [https://github.com/snu-mllab/KVzip](https://github.com/snu-mllab/KVzip)
Paper: [https://arxiv.org/abs/2505.23416](https://arxiv.org/abs/2505.23416)
Blog: [https://janghyun1230.github.io/kvzip](https://janghyun1230.github.io/kvzip) | 2025-06-09T13:54:45 | janghyun1230 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l75fc8 | false | null | t3_1l75fc8 | /r/LocalLLaMA/comments/1l75fc8/kvzip_queryagnostic_kv_cache_eviction_34_memory/ | false | false | default | 401 | {'enabled': True, 'images': [{'id': 'bpxlu6tfnw5f1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=108&crop=smart&auto=webp&s=6b6a38e2866e14db91003d5a4a47866574b78280', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=216&crop=smart&auto=webp&s=18f4076b0360c0560bec9e04df5826acce9ae772', 'width': 216}, {'height': 167, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=320&crop=smart&auto=webp&s=927d3683222486bd52d8d852b34d728ded7ed91d', 'width': 320}, {'height': 335, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=640&crop=smart&auto=webp&s=351cc92c28950c272bbcdaec0981dd5beb03e8cc', 'width': 640}, {'height': 503, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=960&crop=smart&auto=webp&s=8b5ac8ce6cba5c026689fd03276b59762a4aef16', 'width': 960}, {'height': 566, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?width=1080&crop=smart&auto=webp&s=e7dd2526891c9edb870191530c6c034697787ee9', 'width': 1080}], 'source': {'height': 780, 'url': 'https://preview.redd.it/bpxlu6tfnw5f1.png?auto=webp&s=c4a1e8b81e13c607022a138fc633701d065ca12a', 'width': 1486}, 'variants': {}}]} |
|
I built a Code Agent that writes code and live-debugs itself by reading and walking the call stack. | 77 | 2025-06-09T14:11:07 | https://v.redd.it/b1pnpj9lsw5f1 | bn_from_zentara | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l75tp1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b1pnpj9lsw5f1/DASHPlaylist.mpd?a=1752070282%2CMWJjNzk4MmNmN2JiNWI4NTViMWYyYWUyMzY4NGQ3OTU3Y2YxMWYzMmVmOGZiMTc5YmE4MDdmNjcwNzJlZTk0Mw%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/b1pnpj9lsw5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 810, 'hls_url': 'https://v.redd.it/b1pnpj9lsw5f1/HLSPlaylist.m3u8?a=1752070282%2CNjZiYjFlZjA4NTBiODFkNzQyNzI3YzJiZmQ5MzZjNzlkYjJkZjVlOWY5MWQ4NDg3Y2I4NTIyNmU1OGMwMTFhYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b1pnpj9lsw5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l75tp1 | /r/LocalLLaMA/comments/1l75tp1/i_built_a_code_agent_that_writes_code_and/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=108&crop=smart&format=pjpg&auto=webp&s=6ec9a9c7fcd0be9c241507c1ce755746d8286d18', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=216&crop=smart&format=pjpg&auto=webp&s=829b811eb3a40cb192889351211da9818a795855', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=320&crop=smart&format=pjpg&auto=webp&s=5779e1b2d165afa2b69c14f256106ba487033d20', 'width': 320}, {'height': 270, 'url': 
'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=640&crop=smart&format=pjpg&auto=webp&s=a40748fa6ef2f6e8c30abb94b9fc62f8ccd1c219', 'width': 640}, {'height': 405, 'url': 'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=960&crop=smart&format=pjpg&auto=webp&s=904ac9352538f41a1f37689ecc12359cbf84a05a', 'width': 960}, {'height': 455, 'url': 'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?width=1080&crop=smart&format=pjpg&auto=webp&s=18412fe3cfd77271aefca1becdfe99954c370992', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/M3NzcG44YWxzdzVmMSD9styMP4r1TEey8oyCG95ikT3wdTqvY8nF4QJV9K9T.png?format=pjpg&auto=webp&s=564131595bb1c7d564622ff058f784d426efd6ec', 'width': 2560}, 'variants': {}}]} |
||
Trying to Make Llama Extract Smarter with a Schema-Building AI Agent | 1 | [removed] | 2025-06-09T14:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l762el/trying_to_make_llama_extract_smarter_with_a/ | Professional_Term579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l762el | false | null | t3_1l762el | /r/LocalLLaMA/comments/1l762el/trying_to_make_llama_extract_smarter_with_a/ | false | false | self | 1 | null |
Trying to Make Llama Extract Smarter with a Schema-Building AI Agent | 1 | Hey folks,
I’ve been experimenting with Llama Extract to pull table data from 10-K PDFs. It actually works pretty well when you already have a solid schema in place.
The challenge I’m running into is that 10-Ks from different companies often format their tables a bit differently. So having a single “one-size-fits-all” schema doesn’t really cut it.
I’m thinking of building an AI agent using Pydantic AI that can:
1. Read the specific table I want from the PDF,
2. Identify the income statement line items, and
3. Automatically generate the schema for me.
Then I’d just plug that schema into Llama Extract.
Has anyone here built something similar or have any tips on how to go about creating this kind of agent? | 2025-06-09T14:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l7642v/trying_to_make_llama_extract_smarter_with_a/ | Professional_Term579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7642v | false | null | t3_1l7642v | /r/LocalLLaMA/comments/1l7642v/trying_to_make_llama_extract_smarter_with_a/ | false | false | self | 1 | null |
DeepSeek R1 0528 Hits 71% (+14.5 pts from R1) on Aider Polyglot Coding Leaderboard | 283 | 2025-06-09T14:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l76ab7/deepseek_r1_0528_hits_71_145_pts_from_r1_on_aider/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76ab7 | false | null | t3_1l76ab7 | /r/LocalLLaMA/comments/1l76ab7/deepseek_r1_0528_hits_71_145_pts_from_r1_on_aider/ | false | false | 283 | {'enabled': False, 'images': [{'id': 'ZchV7t9Dn_NHk0_ZW8xmT-9VDV112iNqFmbb4fJPYHo', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=108&crop=smart&auto=webp&s=56e789a35daba2a074928af59f11e222a54851d6', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=216&crop=smart&auto=webp&s=1ef479418e186a2dd315fedc3d887521b18eec4f', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=320&crop=smart&auto=webp&s=c2bc26b548af493526b9116d26a9b305f03b1f83', 'width': 320}, {'height': 369, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=640&crop=smart&auto=webp&s=8a4c25f54ed06b5f744ff2faad7914958769cc14', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=960&crop=smart&auto=webp&s=806c4055b855fdf17a97308fb5b399d3b773cef9', 'width': 960}, {'height': 623, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?width=1080&crop=smart&auto=webp&s=9f3cf9efdcefc9b636c507255c2e656d91fbb4a6', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://external-preview.redd.it/iUsfwiVJPYLjTSAVy9M84yJWl92m3NW-HLg-4yfog9U.jpg?auto=webp&s=286f8619e702be481dea1a349a13ce7eb7a1eb9e', 'width': 1768}, 'variants': {}}]} |
||
7900 XTX what are your go-to models for 24GB VRAM? | 15 | Just finished my new build with a 7900 XTX and I'm looking for some model recommendations.
Since most of the talk is CUDA-centric, I'm curious what my fellow AMD users are running. I've got 24GB of VRAM to play with and I'm mainly looking for good models for general purpose chat/reasoning. | 2025-06-09T14:32:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l76cg2/7900_xtx_what_are_your_goto_models_for_24gb_vram/ | BillyTheMilli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76cg2 | false | null | t3_1l76cg2 | /r/LocalLLaMA/comments/1l76cg2/7900_xtx_what_are_your_goto_models_for_24gb_vram/ | false | false | self | 15 | null |
Dolphin appreciation post. | 0 | Just a simple Dolphin appreciation post here. I appreciate all the work done by Cognitive Computationd. Wondering what cool new stuff Eric has cooking lately. | 2025-06-09T14:34:39 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l76ekr | false | null | t3_1l76ekr | /r/LocalLLaMA/comments/1l76ekr/dolphin_appreciation_post/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '9w0uktkaxw5f1', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?width=108&crop=smart&auto=webp&s=10e1c7cf22cc272bc454918c300cb2ed2b600e0e', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?width=216&crop=smart&auto=webp&s=acb60e8932d124a91ed49e387222cf788457977e', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?width=320&crop=smart&auto=webp&s=2b1968b4c97be704ca8032a197a5c88155517068', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?width=640&crop=smart&auto=webp&s=2ef821a8b2fc4d890844db951f4d0f144a80f313', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?width=960&crop=smart&auto=webp&s=b9b21333959deb63379cda7c66997530958e163b', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/9w0uktkaxw5f1.jpeg?auto=webp&s=4dbb5f4ca2cf1b6ff5c633eda06d235e6c4af9b6', 'width': 1024}, 'variants': {}}]} |
|
PC build for local LLMs | 1 | [removed] | 2025-06-09T14:35:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l76fpy/pc_build_for_local_llms/ | 6446thatsmynumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76fpy | false | null | t3_1l76fpy | /r/LocalLLaMA/comments/1l76fpy/pc_build_for_local_llms/ | false | false | self | 1 | null |
Winter has arrived | 0 | Last year we saw a lot of significant improvements in AI, but this year we are only seeing gradual improvements. The feeling that remains is that the wall has become a mountain, and the climb will be very difficult and long. | 2025-06-09T14:50:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l76sg1/winter_has_arrived/ | Objective_Lab_3182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76sg1 | false | null | t3_1l76sg1 | /r/LocalLLaMA/comments/1l76sg1/winter_has_arrived/ | false | false | self | 0 | null |
Build a full on-device rag app using qwen3 embedding and qwen3 llm | 5 | The Qwen3 0.6B embedding performs extremely well even at 4-bit for small RAG. I was able to run the entire application offline on my iPhone 13.
[https://youtube.com/shorts/zG\_WD166pHo](https://youtube.com/shorts/zG_WD166pHo)
I have published the macOS version on the App Store and still working on the iOS part. Please let me know if you think this is useful or if any improvements are needed.
[https://textmates.app/](https://textmates.app/) | 2025-06-09T14:51:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l76tvu/build_a_full_ondevice_rag_app_using_qwen3/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l76tvu | false | null | t3_1l76tvu | /r/LocalLLaMA/comments/1l76tvu/build_a_full_ondevice_rag_app_using_qwen3/ | false | false | self | 5 | null |
When it comes to cost-performance ratio, DeepSeek R1 (0528) is the best. | 1 | 2025-06-09T15:11:51 | ashim_k_saha | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l77c59 | false | null | t3_1l77c59 | /r/LocalLLaMA/comments/1l77c59/when_it_comes_to_costperformance_ratio_deepseek/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'jagl9bpk3x5f1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=108&crop=smart&auto=webp&s=0de4cd5add5c6d0f22f153d51c96e28326649727', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=216&crop=smart&auto=webp&s=eaef951f7c4a135d1c7120234212aa57ca8f8e17', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=320&crop=smart&auto=webp&s=df7a4d434025d3bbc0c55624e9ab7de548da9c34', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=640&crop=smart&auto=webp&s=d3d14716b6c4f431b9bb49c95d11398ba99acbac', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=960&crop=smart&auto=webp&s=78c63c8fc67396ba35013acb717a301ca105770d', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?width=1080&crop=smart&auto=webp&s=d663959fa902717dce1c33e1a618699ca16b3909', 'width': 1080}], 'source': {'height': 2175, 'url': 'https://preview.redd.it/jagl9bpk3x5f1.jpeg?auto=webp&s=f1cf2e4d0e1ac119cb80de2ef3c5ca118b1ee395', 'width': 1080}, 'variants': {}}]} |
||
Models and where to find them? | 1 | So SD has civit.ai; though not perfect, it has decent search, ratings and whatnot, and I generally find it works quite well.
But say I want to see what recent models are popular (and I literally do, so please share) for: programming, role play, general questions, maybe some other case I'm not even aware of. What are good ways to find out about that, apart from asking here? I know Hugging Face seems like the core repo of all this stuff, but somehow its search does not seem too comfy, or maybe I just need to learn to use it more... Another option I used a bit is to just go to the Ollama page and see what models they list. Though that is also quite weak, and Ollama is, well, let's call them peculiar, even if popular. | 2025-06-09T15:16:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l77g2z/models_and_where_to_find_them/ | morphles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l77g2z | false | null | t3_1l77g2z | /r/LocalLLaMA/comments/1l77g2z/models_and_where_to_find_them/ | false | false | self | 1 | null |
What is a best model?? | 1 | [removed] | 2025-06-09T15:20:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l77k53/what_is_a_best_model/ | Roberts_shine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l77k53 | false | null | t3_1l77k53 | /r/LocalLLaMA/comments/1l77k53/what_is_a_best_model/ | false | false | self | 1 | null |
feeling lost - localai apps | 1 | [removed] | 2025-06-09T15:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l77xgu/feeling_lost_localai_apps/ | Empty-Investment-827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l77xgu | false | null | t3_1l77xgu | /r/LocalLLaMA/comments/1l77xgu/feeling_lost_localai_apps/ | false | false | self | 1 | null |
Translation models that support streaming | 3 | Are there any NLP models that support streaming outputs? I need translation models that support streaming text output. | 2025-06-09T15:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l78eb8/translation_models_that_support_streaming/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78eb8 | false | null | t3_1l78eb8 | /r/LocalLLaMA/comments/1l78eb8/translation_models_that_support_streaming/ | false | false | self | 3 | null |
Benchmark Fusion: m-transportability of AI Evals | 5 | Reviewing VLM spatial reasoning benchmarks [SpatialScore](https://arxiv.org/pdf/2505.17012) versus [OmniSpatial](https://arxiv.org/pdf/2506.03135), you'll find a reversal between the rankings for **SpaceQwen** and **SpatialBot**, and missing comparisons for **SpaceThinker.**
Ultimately, we want to compare models on equal footing and project their performance to a real-world application.
So how do you make sense of partial comparisons and conflicting evaluation results to choose the best model for your application?
Studying the categorical breakdown by task type, you can identify which benchmark includes a task distribution more aligned with your primary use-case and go with that finding.
But can you get more information by averaging the results?
From the causal inference literature, the concept of [transportability](https://projecteuclid.org/journals/statistical-science/volume-29/issue-4/External-Validity-From-Do-Calculus-to-Transportability-Across-Populations/10.1214/14-STS486.full) describes a flexible and principled way to re-weight these comprehensive benchmarks to rank model performance for your application.
What else can you gain from applying the lens of causal AI engineering?
\* more explainable assessments
\* cheaper and more robust offline evaluations | 2025-06-09T16:08:34 | https://www.reddit.com/gallery/1l78sg6 | remyxai | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l78sg6 | false | null | t3_1l78sg6 | /r/LocalLLaMA/comments/1l78sg6/benchmark_fusion_mtransportability_of_ai_evals/ | false | false | 5 | null |
|
This was, in fact, neccesary. | 1 | 2025-06-09T16:09:20 | Immediate_Song4279 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l78t5q | false | null | t3_1l78t5q | /r/LocalLLaMA/comments/1l78t5q/this_was_in_fact_neccesary/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'q4mgCpZ-h43L30hvamjrTgllkFCuZx6dwUYGY91cN0Q', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/ycmwigk3ex5f1.png?width=108&crop=smart&auto=webp&s=f8c357fb8983f0e8df33b1017773321535a2583d', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/ycmwigk3ex5f1.png?width=216&crop=smart&auto=webp&s=d23fb0fd1cd30acf0ae26c74164443a98bd2f2e9', 'width': 216}, {'height': 212, 'url': 'https://preview.redd.it/ycmwigk3ex5f1.png?width=320&crop=smart&auto=webp&s=b4ee667ce8d425456d8018661a7751a7b1b1d140', 'width': 320}, {'height': 424, 'url': 'https://preview.redd.it/ycmwigk3ex5f1.png?width=640&crop=smart&auto=webp&s=85505e8e19682593ad54820e03c7fc462fd73257', 'width': 640}], 'source': {'height': 512, 'url': 'https://preview.redd.it/ycmwigk3ex5f1.png?auto=webp&s=f1ae5200511c92ea97831e5ef005419cd7a7cb80', 'width': 772}, 'variants': {}}]} |
|||
Best story writing apps? | 1 | [removed] | 2025-06-09T16:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l78toh/best_story_writing_apps/ | wtfislandfill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78toh | false | null | t3_1l78toh | /r/LocalLLaMA/comments/1l78toh/best_story_writing_apps/ | false | false | self | 1 | null |
Is it possible to recreate training code for a TTS model when only inference code and model weights are available? | 1 | [removed] | 2025-06-09T16:10:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l78u0z/is_it_possible_to_recreate_training_code_for_a/ | ConnectPea8944 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78u0z | false | null | t3_1l78u0z | /r/LocalLLaMA/comments/1l78u0z/is_it_possible_to_recreate_training_code_for_a/ | false | false | self | 1 | null |
Fully Offline AI Computer (works standalone or online) | 0 | I’ve put together a fully local AI computer that can operate entirely offline, but also seamlessly connects to third-party providers and tools if desired. It bundles best-in-class open-source software (like Ollama, OpenWebUI, Qdrant, Open Interpreter, and more), integrates it into an optimized mini PC, and offers strong hardware performance (AMD Ryzen, KDE Plasma 6).
It's extensible and modular, so obsolescence shouldn't be an issue for a while. I think I can get these units into people’s hands for about $1,500, and shortcut a lot of the process.
Would this be of interest to anyone out there? | 2025-06-09T16:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l78v5g/fully_offline_ai_computer_works_standalone_or/ | _redacted- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l78v5g | false | null | t3_1l78v5g | /r/LocalLLaMA/comments/1l78v5g/fully_offline_ai_computer_works_standalone_or/ | false | false | self | 0 | null |
Is there a DeepSeek-R1-0528 14B or just DeepSeek-R1 14B that I can download and run via vLLM? | 0 | I don't see any model files other than those from Ollama, but I still want to use vLLM. I don't want any distilled models; do you have any ideas? Hugging Face only seems to have the original models or just distilled ones.
Another unrelated question: can I run the 32B model (20 GB) on a 16 GB GPU? I have 32 GB RAM and an SSD; not sure if that helps? | 2025-06-09T16:33:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l79frx/is_there_a_deepseekr10528_14b_or_just_deepseekr1/ | mrnerdy59 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l79frx | false | null | t3_1l79frx | /r/LocalLLaMA/comments/1l79frx/is_there_a_deepseekr10528_14b_or_just_deepseekr1/ | false | false | self | 0 | null |
Good pc build specs for 5090 | 2 | Hey, so I'm new to running models locally, but I have a 5090 and want to build the best reasonable PC around it. I am tech savvy and experienced in building gaming PCs, but I don't know the specific requirements of local AI models, and the PC would be mainly for that.
Like how much RAM and what latencies or clock specifically, what CPU (is it even relevant?) and storage etc, is the mainboard relevant, or anything else that would be obvious to you guys but not to outsiders... Is it easy (or even relevant) to add another GPU later on, for example?
Would anyone be so kind to guide me through? Thanks! | 2025-06-09T16:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l79ksy/good_pc_build_specs_for_5090/ | Cangar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l79ksy | false | null | t3_1l79ksy | /r/LocalLLaMA/comments/1l79ksy/good_pc_build_specs_for_5090/ | false | false | self | 2 | null |
It's not you. Multi-head attention is fundamentally broken for coding. | 1 | [removed] | 2025-06-09T16:52:18 | https://claude.ai/share/fdc0d061-4afe-4291-be09-6d9b2d0e477b | m8rbnsn | claude.ai | 1970-01-01T00:00:00 | 0 | {} | 1l79wq1 | false | null | t3_1l79wq1 | /r/LocalLLaMA/comments/1l79wq1/its_not_you_multihead_attention_is_fundamentally/ | false | false | default | 1 | null |
Required Hardware to get at least 25 tokens/second for DeepSeek R1 671B Q8? | 1 | [removed] | 2025-06-09T17:05:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l7a9hu/required_hardware_to_get_at_least_25_tokenssecond/ | mrfister56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7a9hu | false | null | t3_1l7a9hu | /r/LocalLLaMA/comments/1l7a9hu/required_hardware_to_get_at_least_25_tokenssecond/ | false | false | self | 1 | null |
Lightweight writing model as of June 2025 | 14 | Can you please recommend a model? I've tried these so far:
Mistral Creative 24b : good overall, my favorite, quite fast, but actually lacks a bit of creativity....
Gemma2 Writer 9b : very fun to read, fast, but forgets everything after 3 messages. My favorite to generate ideas and create short dialogue, role play.
Gemma3 27b : Didn't like that much, maybe I need a finetune, but the base model is full of phrases like "My living room is a battlefield of controllers and empty soda cans – remnants of our nightly ritual." (AI slop, I believe, is what it's called?)
Qwen3 and QwQ just keep repeating themselves, and the reasoning in them makes things worse usually, they always come up with weird conclusions...
So ideally I would like something in between Mistral Creative and Gemma2 Writer. Any ideas? | 2025-06-09T17:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ab18/lightweight_writing_model_as_of_june_2025/ | Royal_Light_9921 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ab18 | false | null | t3_1l7ab18 | /r/LocalLLaMA/comments/1l7ab18/lightweight_writing_model_as_of_june_2025/ | false | false | self | 14 | null |
Responsible Prompting API - Opensource project - Feedback appreciated! | 1 | [removed] | 2025-06-09T17:15:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l7aj17/responsible_prompting_api_opensource_project/ | MysticSlice7878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7aj17 | false | null | t3_1l7aj17 | /r/LocalLLaMA/comments/1l7aj17/responsible_prompting_api_opensource_project/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'hpISV4Ro8stTMaWwHGa2Yn3QoKD-Hkmpb16d57n5PwM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?width=108&crop=smart&auto=webp&s=0d19df1fde36615bd9ea686d1b0206298593757a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?width=216&crop=smart&auto=webp&s=14b260b99040138881c08b5feb2ee7f82c89ed33', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?width=320&crop=smart&auto=webp&s=7e8cd9db8ade5974a912ca59d357d0ae39105330', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?width=640&crop=smart&auto=webp&s=cd6e751ac89f569890c08a2998410efca756f426', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?width=960&crop=smart&auto=webp&s=61329ba0e391c2980271583e41330e3bb7c86c43', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?width=1080&crop=smart&auto=webp&s=cfa4269faace1a17aa25206637af02552608c5b1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LXAT8PCS6PXcouwL4Lwfx7Fu7OhVJ9Tbcfyx43Lkm1A.jpg?auto=webp&s=ed32ec99f787024b08015f82030ea6c3d1e066c3', 'width': 1200}, 'variants': {}}]} |
Better Qwen 3 settings? | 1 | [removed] | 2025-06-09T17:17:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l7akiv/better_qwen_3_settings/ | SecretLand514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7akiv | false | null | t3_1l7akiv | /r/LocalLLaMA/comments/1l7akiv/better_qwen_3_settings/ | false | false | self | 1 | null |
Dual RTX8000 48GB vs. Dual RTX3090 24GB | 7 | If you had to choose between two RTX 3090s with 24 GB each or two Quadro RTX 8000s with 48 GB each, which would you choose?
The 8000s would likely be slower, but could run larger models. There are trade-offs for sure.
Maybe split the difference and go with one 8000 and one 3090?
| 2025-06-09T17:26:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l7asxt/dual_rtx8000_48gb_vs_dual_rtx3090_24gb/ | PleasantCandidate785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7asxt | false | null | t3_1l7asxt | /r/LocalLLaMA/comments/1l7asxt/dual_rtx8000_48gb_vs_dual_rtx3090_24gb/ | false | false | self | 7 | null |
Required Hardware to run DeepSeek R1 671B Q8 @ 20 tokens per second? | 1 | [removed] | 2025-06-09T17:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7b7sq/required_hardware_to_run_deepseek_r1_671b_q8_20/ | Historical_Long7907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7b7sq | false | null | t3_1l7b7sq | /r/LocalLLaMA/comments/1l7b7sq/required_hardware_to_run_deepseek_r1_671b_q8_20/ | false | false | self | 1 | null |
How to train an AI on my PDFs | 1 | [removed] | 2025-06-09T17:53:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l7bicy/how_to_train_an_ai_on_my_pdfs/ | 0xSmiley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7bicy | false | null | t3_1l7bicy | /r/LocalLLaMA/comments/1l7bicy/how_to_train_an_ai_on_my_pdfs/ | false | false | self | 1 | null |
Samsung GDDR7 3GB modules now available for DIY purchase in China, RTX 5090 48GB mods incoming? - VideoCardz.com | 1 | 2025-06-09T18:15:38 | https://videocardz.com/newz/samsung-gddr7-3gb-modules-now-available-for-diy-purchase-in-china-rtx-5090-48gb-mods-incoming | chillinewman | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1l7c3gv | false | null | t3_1l7c3gv | /r/LocalLLaMA/comments/1l7c3gv/samsung_gddr7_3gb_modules_now_available_for_diy/ | false | false | default | 1 | null |
|
Samsung GDDR7 3GB modules now available for DIY purchase in China, RTX 5090 48GB mods incoming? - VideoCardz.com | 1 | 2025-06-09T18:15:45 | https://videocardz.com/newz/samsung-gddr7-3gb-modules-now-available-for-diy-purchase-in-china-rtx-5090-48gb-mods-incoming | chillinewman | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1l7c3l0 | false | null | t3_1l7c3l0 | /r/LocalLLaMA/comments/1l7c3l0/samsung_gddr7_3gb_modules_now_available_for_diy/ | false | false | default | 1 | null |
|
Any success / cautionary tales for A100 40Gb SXM modded to PCIE? | 1 | [removed] | 2025-06-09T18:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l7cmr0/any_success_cautionary_tales_for_a100_40gb_sxm/ | btdeviant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7cmr0 | false | null | t3_1l7cmr0 | /r/LocalLLaMA/comments/1l7cmr0/any_success_cautionary_tales_for_a100_40gb_sxm/ | false | false | self | 1 | null |
RAG - Usable for my application? | 4 | Hey all LocalLLama fans,
I am currently trying to combine an LLM with RAG to improve its answers on legal questions. For this I downloaded all public laws, around 8 GB in size, and put them into one big text file.
Now I am thinking about how to retrieve the law paragraphs relevant to the user question, but my results are quite poor, as the user input most likely does not contain the correct keyword. I tried techniques like using a small LLM to generate a fitting keyword and then running RAG, but the results were still bad.
Is RAG even suitable to apply here? What are your thoughts? And how would you try to implement it?
Happy for some feedback! | 2025-06-09T19:01:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l7d9gf/rag_usable_for_my_application/ | KoreanMax31 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7d9gf | false | null | t3_1l7d9gf | /r/LocalLLaMA/comments/1l7d9gf/rag_usable_for_my_application/ | false | false | self | 4 | null |
[Project] Developing Xet: A Local, Modular AI Assistant for Home and Personal Productivity – Feedback Welcome | 1 | [removed] | 2025-06-09T19:08:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l7dgyx/project_developing_xet_a_local_modular_ai/ | Ornery-Mango7230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7dgyx | false | null | t3_1l7dgyx | /r/LocalLLaMA/comments/1l7dgyx/project_developing_xet_a_local_modular_ai/ | false | false | self | 1 | null |
China starts mass producing a Ternary AI Chip. | 251 | As reported earlier here.
https://www.scmp.com/news/china/science/article/3301229/chinese-scientists-build-worlds-first-ai-chip-made-carbon-and-its-super-fast
China starts mass production of a Ternary AI Chip.
https://www.scmp.com/news/china/science/article/3313349/beyond-1s-and-0s-china-starts-mass-production-worlds-first-non-binary-ai-chip
I wonder if Ternary models like bitnet could be run super fast on it. | 2025-06-09T19:11:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l7dj3z/china_starts_mass_producing_a_ternary_ai_chip/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7dj3z | false | null | t3_1l7dj3z | /r/LocalLLaMA/comments/1l7dj3z/china_starts_mass_producing_a_ternary_ai_chip/ | false | false | self | 251 | {'enabled': False, 'images': [{'id': 'uvhO3zblO1W_BxZNmj2oY99uAxsD64M_lsTtYwf59xs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?width=108&crop=smart&auto=webp&s=7eb83caa06a8a309f32ae820d5c131ef81b7e37b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?width=216&crop=smart&auto=webp&s=28e9c982c92ad7d6619b068df82e58788c4cead4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?width=320&crop=smart&auto=webp&s=5a8765fda30f84824106e6985dcda6975ea83db0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?width=640&crop=smart&auto=webp&s=a32ce0ba31ca5cebcfed1df7b34c7cd411b6ae73', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?width=960&crop=smart&auto=webp&s=be4a20ad1cb38885dcb62be4edf726697aaf7282', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?width=1080&crop=smart&auto=webp&s=b4374ebf80510044da508fe9223b69561c9419af', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/g-empSSX5f-IUxl4FSsKxWiV-9lXr7wj2N2XDnKkw44.jpg?auto=webp&s=40c4827c7e6111d0a4268746d952c700f140642e', 'width': 1200}, 'variants': {}}]} |
Can't have older card in the server (vllm/tabbyapi features) | 1 | [removed] | 2025-06-09T19:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l7dq5w/cant_have_older_card_in_the_server_vllmtabbyapi/ | mayo551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7dq5w | false | null | t3_1l7dq5w | /r/LocalLLaMA/comments/1l7dq5w/cant_have_older_card_in_the_server_vllmtabbyapi/ | false | false | self | 1 | null |
Apple Intelligence on device model available to developers | 80 | Looks like they are going to expose an API that will let you use the model to build experiences. The details on it are sparse, but cool and exciting development for us LocalLlama folks. | 2025-06-09T19:50:39 | https://www.apple.com/newsroom/2025/06/apple-intelligence-gets-even-more-powerful-with-new-capabilities-across-apple-devices/ | Ssjultrainstnict | apple.com | 1970-01-01T00:00:00 | 0 | {} | 1l7ek6n | false | null | t3_1l7ek6n | /r/LocalLLaMA/comments/1l7ek6n/apple_intelligence_on_device_model_available_to/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'dNocEppbxszp862m0JXUdNR6e8vfZHVAQIwQGG0z-BA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?width=108&crop=smart&auto=webp&s=e7488d11f43f310a83f52d7e791cc30fc911b2ff', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?width=216&crop=smart&auto=webp&s=e62187db5bc202dcf8d2a7596e6a61a040e7ffa7', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?width=320&crop=smart&auto=webp&s=7804a22eea43b29be605b35d9f6d3414fe6be627', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?width=640&crop=smart&auto=webp&s=0dc187472b01eafad499fadd404dc388928bed5b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?width=960&crop=smart&auto=webp&s=351b269f59843215a73ffdabdbf5b397b7ec8362', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?width=1080&crop=smart&auto=webp&s=78bd1360d87bc44c07ab4898025e9a1c0bfb8a54', 'width': 1080}], 'source': {'height': 630, 'url': 
'https://external-preview.redd.it/rG_nlJemzl12v1uLGyDH2DT5RlL4_1RSptq8UoALUQw.jpg?auto=webp&s=dbd49b2c6cb455c724476cca87cdd10a00dfc20d', 'width': 1200}, 'variants': {}}]} |
|
Human archetypes in the Age of AI | 1 | [removed] | 2025-06-09T20:20:17 | partysnatcher | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7fcby | false | null | t3_1l7fcby | /r/LocalLLaMA/comments/1l7fcby/human_archetypes_in_the_age_of_ai/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'WfdD5iO0MLvCe9lvS4TcWbRASBCwFmymPquAerKqLtM', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?width=108&crop=smart&auto=webp&s=b553bfb7d23b7233e00353002a7ceb9700bd7035', 'width': 108}, {'height': 372, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?width=216&crop=smart&auto=webp&s=c62bb90d546c67d0b3083db5b2c90aa5b07c5554', 'width': 216}, {'height': 552, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?width=320&crop=smart&auto=webp&s=d00e40ff6d915b174d5773cdf7d7b7eb696d2219', 'width': 320}, {'height': 1104, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?width=640&crop=smart&auto=webp&s=ab351d71e71755341f24e086cdc4546a9bf560b5', 'width': 640}, {'height': 1656, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?width=960&crop=smart&auto=webp&s=80af07f392bc8ee09e51187f653a9d0e33875ffc', 'width': 960}, {'height': 1863, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?width=1080&crop=smart&auto=webp&s=2cd1dc67df3ef6341cbf0a0ad515b890ea2637e6', 'width': 1080}], 'source': {'height': 2464, 'url': 'https://preview.redd.it/d260tv2xmy5f1.png?auto=webp&s=6dfa7dbfd6336b13cd0cc39a2078417c862f399f', 'width': 1428}, 'variants': {}}]} |
||
Need feedback for a RAG using Ollama as background. | 2 | Hello,
I would like to set up a private, local NotebookLM alternative, using documents I prepare mainly as PDFs (up to 50 very long documents, 500 pages each). Also, I need it to work correctly with the French language.
For the hardware part, I have an RTX 3090, so I can choose any Ollama model working with up to 24 GB of VRAM.
I have **OpenWebUI** and started to make some tests with the integrated document feature, but when trying to tune the options to improve it, it's difficult to understand the impact of each one.
I have briefly tested **PageAssist** in Chrome, but honestly it's like it doesn't work, even though I followed a YouTube tutorial.
Is there anything else I should try? I saw a mention of LightRAG?
As things are moving so fast, it's hard to know where to start, and even when it works, you don't know whether you are missing an option or a tip. Thanks in advance. | 2025-06-09T20:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l7fg95/need_feedback_for_a_rag_using_ollama_as_background/ | LivingSignificant452 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7fg95 | false | null | t3_1l7fg95 | /r/LocalLLaMA/comments/1l7fg95/need_feedback_for_a_rag_using_ollama_as_background/ | false | false | self | 2 | null |
NotebookLM-style Audio Overviews with Hugging Face MCP Zero-GPU tier | 1 | Hi everyone,
I just finished a short screen-share that shows how to recreate NotebookLM’s **Audio Overview** using **Hugging Face MCP** and AgenticFlow (my little project). Thought it might save others a bit of wiring time.
# What’s in the video (10 min, fully timestamped):
1. **Token & setup** – drop an HF access token, point AgenticFlow or any MCP Client of choice at the HuggingFace MCP server.
2. **Choose tools** – pick a TTS Space (`Sesame-CSM`) from the list of MCP-compatible Spaces here [https://huggingface.co/spaces?filter=mcp-server](https://huggingface.co/spaces?filter=mcp-server)
3. **Chain the steps** – URL → summary → speech in one call.
4. **Playback**
5. **Reuse** – export the workflow JSON so you can run the same chain on any PDF or Markdown later.
🎬 Video link: [https://youtu.be/MPMEu3VZ8dM?si=Ud3Hk0XsICjii\_-e](https://youtu.be/MPMEu3VZ8dM?si=Ud3Hk0XsICjii_-e)
Let me know what you think. Thanks for reading!
Sean | 2025-06-09T20:28:02 | https://v.redd.it/5g72i0skly5f1 | ComposerGen | /r/LocalLLaMA/comments/1l7fjos/notebooklmstyle_audio_overviews_with_hugging_face/ | 1970-01-01T00:00:00 | 0 | {} | 1l7fjos | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5g72i0skly5f1/DASHPlaylist.mpd?a=1752222489%2CZmVjNjQ2OTNmZDBkYWFjMzRmMjk5Njc5MTQzYmZmZjhlYTUwNTU0ZDU3ZjVkODFhYjc0MTkyZDA0YzRlMDU0Mw%3D%3D&v=1&f=sd', 'duration': 653, 'fallback_url': 'https://v.redd.it/5g72i0skly5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5g72i0skly5f1/HLSPlaylist.m3u8?a=1752222489%2CZGIwYTdlMGE3YWYzZjA4NjlkMTg3MmFkNDQxOWE1MGI1OWIzYWQ2MGNkYjg0YTRiZjg2OWFlNDk5OGZmOTRlMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5g72i0skly5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1l7fjos | /r/LocalLLaMA/comments/1l7fjos/notebooklmstyle_audio_overviews_with_hugging_face/ | false | false | default | 1 | null |
Just 2 AM thoughts but this time I am thinking of actually doing something about it | 0 | Hi.
I am thinking of deploying an AI model locally on my Android phone, as my laptop is a bit behind on hardware to locally run an AI model (I tried that using llama).
I have a Redmi Note 13 Pro 4G version with 256 GB ROM and 8 GB RAM (8 GB expandable) so I suppose what I have in mind would be doable.
So, would it be possible to deploy a custom AI model (i.e. something like Jarvis, or one with a personality of its own) locally on my Android, make an Android app that has voice and text inputs (I know that's not an issue), and use that model to respond to my queries?
I am a computing student getting my bachelor's degree, currently in my sixth semester. I am working on different coding projects, so the model can help me with that as well.
I currently don't have much Android development and complex AI development experience (just basic AI) but I'm open to challenges, and I'm free for the next 2 months at least, so I can put in as much time as required.
Now what I want is for you good people to understand what I am trying to say and tell me:
1. If it's possible or to what extent is it possible?
2. How do I make that AI model? Do I use any existing model and tune it to my needs somehow?
3. Recommendations on how should I proceed with all that.
Any constructive helpful suggestions would be highly appreciated. | 2025-06-09T20:49:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l7g3xr/just_2_am_thoughts_but_this_time_i_am_thinking_of/ | Background-Click-167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7g3xr | false | null | t3_1l7g3xr | /r/LocalLLaMA/comments/1l7g3xr/just_2_am_thoughts_but_this_time_i_am_thinking_of/ | false | false | self | 0 | null |
Best model for summarization and chatting with content? | 0 | What's currently the best model to summarize youtube videos and also chat with the transcript?
They can be two different models. RAM size shouldn't be higher than 2 or 3 GB, preferably a lot less.
Is there a website where you can enter a bunch of parameters like this and it spits out the name of the closest model? I've been manually testing models for summaries in LMStudio but it's tedious. | 2025-06-09T21:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l7gw0c/best_model_for_summarization_and_chatting_with/ | mmmm_frietjes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7gw0c | false | null | t3_1l7gw0c | /r/LocalLLaMA/comments/1l7gw0c/best_model_for_summarization_and_chatting_with/ | false | false | self | 0 | null |
LMStudio on screen in WWDC Platform State of the Union | 119 | Its nice to see local llm support in the next version of Xcode | 2025-06-09T21:22:36 | Specialist_Cup968 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7gxpb | false | null | t3_1l7gxpb | /r/LocalLLaMA/comments/1l7gxpb/lmstudio_on_screen_in_wwdc_platform_state_of_the/ | false | false | 119 | {'enabled': True, 'images': [{'id': '6PTAG0aV3hVq2FoW1V6d79e1oZP5WSqeIqY36ICY8Zc', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?width=108&crop=smart&auto=webp&s=3ac745021cb33e200d8340d7bcb4739209fb6527', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?width=216&crop=smart&auto=webp&s=c9f49315b3e3c7dade3baedb6db70deed483b938', 'width': 216}, {'height': 198, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?width=320&crop=smart&auto=webp&s=69b8ae5720f269203047ea4a39f3fcc2ccd4c48d', 'width': 320}, {'height': 396, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?width=640&crop=smart&auto=webp&s=04c533a5ef6eac83396175c657f8e913c63d5e02', 'width': 640}, {'height': 594, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?width=960&crop=smart&auto=webp&s=b07a7a687652a824b0e6a0e5d51314896dc85885', 'width': 960}, {'height': 669, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?width=1080&crop=smart&auto=webp&s=88d8ce3d6f393c7e388816aafcfacdc2f5f949a0', 'width': 1080}], 'source': {'height': 1658, 'url': 'https://preview.redd.it/ymvbqxynxy5f1.png?auto=webp&s=1bc642aabdce6c651ad701abbf1628ec4ba1e1d9', 'width': 2676}, 'variants': {}}]} |
||
Why do chatbots keep rewriting their answer and contradicting theirself? problem and solution paper | 1 | [removed] | 2025-06-09T22:02:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l7hxah/why_do_chatbots_keep_rewriting_their_answer_and/ | Common_Agency2643 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7hxah | false | null | t3_1l7hxah | /r/LocalLLaMA/comments/1l7hxah/why_do_chatbots_keep_rewriting_their_answer_and/ | false | false | self | 1 | null |
Best GPU for LLM/VLM Inference? | 1 | [removed] | 2025-06-09T22:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l7is5c/best_gpu_for_llmvlm_inference/ | subtle-being | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7is5c | false | null | t3_1l7is5c | /r/LocalLLaMA/comments/1l7is5c/best_gpu_for_llmvlm_inference/ | false | false | self | 1 | null |
Conversational A.I. command center OS with command module integration | 1 | 2025-06-09T22:46:39 | https://v.redd.it/qaw6bquecz5f1 | Common_Agency2643 | /r/LocalLLaMA/comments/1l7iy0p/conversational_ai_command_center_os_with_command/ | 1970-01-01T00:00:00 | 0 | {} | 1l7iy0p | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qaw6bquecz5f1/DASHPlaylist.mpd?a=1752230803%2CYTZkMjBhNDk4ODcwOWYyYzBiNTFiOGYxMGZmYTYzYThhMmY5YTZmMDQ1ZTk1ZmYzZTdlZjgwZjBiNDQ5MzJiNA%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/qaw6bquecz5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/qaw6bquecz5f1/HLSPlaylist.m3u8?a=1752230803%2CZGJlNzNkNmFjNTI3YWQzYjQ2ZWY1YzBiMTgzZTE5MTk1ZTA1ZDIxNzQ0NTVhYTk5OGRkZmMxOGEzZmE5OGQ3MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qaw6bquecz5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1l7iy0p | /r/LocalLLaMA/comments/1l7iy0p/conversational_ai_command_center_os_with_command/ | false | false | default | 1 | null |
|
Where is wizardLM now ? | 23 | Anyone know where are these guys? I think they disappeared 2 years ago with no information | 2025-06-09T22:47:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l7iyim/where_is_wizardlm_now/ | Killerx7c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7iyim | false | null | t3_1l7iyim | /r/LocalLLaMA/comments/1l7iyim/where_is_wizardlm_now/ | false | false | self | 23 | null |
Conversational A.I. command center OS with command module integration | 1 | Trying to upload the video, it shows here, i am uploading it from images & video, it doesn't appear to show on the main forum. attempt 2 to integrate. | 2025-06-09T22:49:49 | https://v.redd.it/a7xxgrm9dz5f1 | Common_Agency2643 | /r/LocalLLaMA/comments/1l7j0nl/conversational_ai_command_center_os_with_command/ | 1970-01-01T00:00:00 | 0 | {} | 1l7j0nl | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a7xxgrm9dz5f1/DASHPlaylist.mpd?a=1752230994%2CYzAwN2M2NjE1OTY0ZWU2YzMxNDkxOWNmMDBlNzRkOTUxMTcwYzFlNThmNWNlMDNmZmVlMDE5MzA4YjI1ZWJiMQ%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/a7xxgrm9dz5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/a7xxgrm9dz5f1/HLSPlaylist.m3u8?a=1752230994%2CYzE0ZWZmZTE1NTVjZDJkMjc2ZGZiOWQ2NjM4MWJlNzc3Mzc1ODIxMTYyZWZiNGJjNWM4MDM5ZjI5MGQ3YjQwZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/a7xxgrm9dz5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1l7j0nl | /r/LocalLLaMA/comments/1l7j0nl/conversational_ai_command_center_os_with_command/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=108&crop=smart&format=pjpg&auto=webp&s=bf7357d61bcbd58a427fd17e5678bb431af4d3d2', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=216&crop=smart&format=pjpg&auto=webp&s=6a70267d8ac375eed03f2ae69c82a8d4dad6ce53', 'width': 216}, {'height': 568, 'url': 
'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=320&crop=smart&format=pjpg&auto=webp&s=1ce6d25bb929526f9d196ab42ea66d0bec950061', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=640&crop=smart&format=pjpg&auto=webp&s=88c8126cfc49b23896874e55e15b233a03682732', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=960&crop=smart&format=pjpg&auto=webp&s=85dd9d8f0c0dd1cdfa13d70429c6157bfd650710', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=1080&crop=smart&format=pjpg&auto=webp&s=961109db1da7ec5ba00e405cfa9aa32a7c549e0d', 'width': 1080}], 'source': {'height': 3840, 'url': 'https://external-preview.redd.it/Y2NxdWdybTlkejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?format=pjpg&auto=webp&s=dcacb76f1b95c90ba7c9bfa667641eb3f5cb16c1', 'width': 2160}, 'variants': {}}]} |
|
Now that 256GB DDR5 is possible on consumer hardware PC, is it worth it for inference? | 74 | 128 GB kits (2x 64 GB) have been available since early this year, making it possible to put 256 GB on consumer PC hardware.
Paired with dual 3090s or dual 4090s, would it be possible to load big models for inference at an acceptable speed? Or will offloading always be slow? | 2025-06-09T22:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l7j7uk/now_that_256gb_ddr5_is_possible_on_consumer/ | waiting_for_zban | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7j7uk | false | null | t3_1l7j7uk | /r/LocalLLaMA/comments/1l7j7uk/now_that_256gb_ddr5_is_possible_on_consumer/ | false | false | self | 74 | null |
just a normal a.i. doing what normal chatgpt does | 1 | Hi, this is a normal video. It's just a regular chatgpt environment, and it's just regular a.i. nothing to see here folks. | 2025-06-09T23:13:30 | https://v.redd.it/f2hwrovghz5f1 | Common_Agency2643 | /r/LocalLLaMA/comments/1l7jju9/just_a_normal_ai_doing_what_normal_chatgpt_does/ | 1970-01-01T00:00:00 | 0 | {} | 1l7jju9 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/f2hwrovghz5f1/DASHPlaylist.mpd?a=1752232418%2COGEzZjc5NjJiM2UzYmQ1ZDRhMTA3ZTMwNjY5ZTU4MzY3ZTc1YjllODg2ODBhODljMTNmOWJmZjA4YTVmMTkwYw%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/f2hwrovghz5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/f2hwrovghz5f1/HLSPlaylist.m3u8?a=1752232418%2CNzQzOWY3ZTE2YWNiM2YzOGE1MTllNWU0OThjZGQyMWMzNDkyMDIxYmYxMTYyNTA3YTIxNDE1ZGQ3Mjc4MDQwMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/f2hwrovghz5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1l7jju9 | /r/LocalLLaMA/comments/1l7jju9/just_a_normal_ai_doing_what_normal_chatgpt_does/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=108&crop=smart&format=pjpg&auto=webp&s=04423791270c84952d1946a485084913430ecdd1', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=216&crop=smart&format=pjpg&auto=webp&s=fc598028603c8dc6e45df15beea8090def81c548', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=320&crop=smart&format=pjpg&auto=webp&s=6b2800cd5d09a638a28405901f5e88011c10b2d3', 'width': 320}, {'height': 1137, 'url': 
'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=640&crop=smart&format=pjpg&auto=webp&s=7d213c2be46af5a3c39f0333fb244d3304dd75cf', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=960&crop=smart&format=pjpg&auto=webp&s=4eace827ea962a0a7f40751d1a70ac9741046ddd', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ddd11661521fc1a6e76b16f4c393831c8fc61e34', 'width': 1080}], 'source': {'height': 3840, 'url': 'https://external-preview.redd.it/ZjVhcXBxdmdoejVmMWewKFYoz9hdZX-2uDKXxz4qd-BUcCR6fyND-1IvF254.png?format=pjpg&auto=webp&s=b22433f7876ce334915cfa017f586690b836896d', 'width': 2160}, 'variants': {}}]} |
|
5th attempt : cloaked post "normal a.i., chatgpt, nothing to see here", instantly handled. this appears to be based on the technology itself. Is it...superior technology = silence? | 1 | 2025-06-09T23:16:22 | Common_Agency2643 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7jm54 | false | null | t3_1l7jm54 | /r/LocalLLaMA/comments/1l7jm54/5th_attempt_cloaked_post_normal_ai_chatgpt/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'B74nlJuRhhDWC1M1jQ-T_6lw1MiAcg3TouBes6bIldo', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/blr90fd3iz5f1.png?width=108&crop=smart&auto=webp&s=68cfe42e814a946bc1afcd4d047b413f0bfa4a6a', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/blr90fd3iz5f1.png?width=216&crop=smart&auto=webp&s=0dcfbefddbae04226156007e5ee2921edb36b4a8', 'width': 216}, {'height': 377, 'url': 'https://preview.redd.it/blr90fd3iz5f1.png?width=320&crop=smart&auto=webp&s=143c9d404bf007e146fae601224ce4e919f88546', 'width': 320}, {'height': 754, 'url': 'https://preview.redd.it/blr90fd3iz5f1.png?width=640&crop=smart&auto=webp&s=1093b9ffca30367905cea90883eee9b3140eee17', 'width': 640}], 'source': {'height': 870, 'url': 'https://preview.redd.it/blr90fd3iz5f1.png?auto=webp&s=68bb6ea4cc13c3e7e918629e2504d53e4c43559f', 'width': 738}, 'variants': {}}]} |
|||
Why am I not allowed to discuss A.I. in this forum? Just curious, why are you silencing me when I am on topic and following rules? | 1 | 2025-06-09T23:19:27 | Common_Agency2643 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7joj0 | false | null | t3_1l7joj0 | /r/LocalLLaMA/comments/1l7joj0/why_am_i_not_allowed_to_discuss_ai_in_this_forum/ | false | false | 1 | {'enabled': True, 'images': [{'id': '6gJfvKvrdn-68u72mC5oUyp49uF2OAsZ_c6qWL4wRIs', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/i1fna5ttiz5f1.png?width=108&crop=smart&auto=webp&s=229bf1f33181c8d4c735d3089df6224720a9854a', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/i1fna5ttiz5f1.png?width=216&crop=smart&auto=webp&s=8231fa0e2add043a9438d6f15f50110fc201fa17', 'width': 216}, {'height': 377, 'url': 'https://preview.redd.it/i1fna5ttiz5f1.png?width=320&crop=smart&auto=webp&s=df535d4d77acc6d6d8bf01c2b0d56094d5e24468', 'width': 320}, {'height': 754, 'url': 'https://preview.redd.it/i1fna5ttiz5f1.png?width=640&crop=smart&auto=webp&s=8f4c968a9d5969aed58551524cac09cd14eb2c29', 'width': 640}], 'source': {'height': 870, 'url': 'https://preview.redd.it/i1fna5ttiz5f1.png?auto=webp&s=9c708a019235117f9682ed7bebbf997d2d27d601', 'width': 738}, 'variants': {}}]} |
|||
Cursor MCP Deeplink Generator | 0 | 2025-06-09T23:37:24 | https://pypi.org/project/cursor-deeplink/ | init0 | pypi.org | 1970-01-01T00:00:00 | 0 | {} | 1l7k2o6 | false | null | t3_1l7k2o6 | /r/LocalLLaMA/comments/1l7k2o6/cursor_mcp_deeplink_generator/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?width=108&crop=smart&auto=webp&s=46fa55dd1b1e587ab93bcbbdc6cb2de37b810bf3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?width=216&crop=smart&auto=webp&s=cfd7f76ac4c13cdc287edd9856ef0430dbc862a5', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?auto=webp&s=85f19a22cbd85fa784cdb417359d8ff7cda9e394', 'width': 300}, 'variants': {}}]} |
||
CLI for Chatterbox TTS | 10 | 2025-06-09T23:38:48 | https://pypi.org/project/voice-forge/ | init0 | pypi.org | 1970-01-01T00:00:00 | 0 | {} | 1l7k3s4 | false | null | t3_1l7k3s4 | /r/LocalLLaMA/comments/1l7k3s4/cli_for_chatterbox_tts/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'IUHM4ctLZQorzkPuYJ4IkGSag8BtaIqZoyqL1L53KuM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?width=108&crop=smart&auto=webp&s=46fa55dd1b1e587ab93bcbbdc6cb2de37b810bf3', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?width=216&crop=smart&auto=webp&s=cfd7f76ac4c13cdc287edd9856ef0430dbc862a5', 'width': 216}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/MoP6enMQ2Q6o4o23d5xCmvlBtpeCXWiqxc63UVCX5Rk.jpg?auto=webp&s=85f19a22cbd85fa784cdb417359d8ff7cda9e394', 'width': 300}, 'variants': {}}]} |
||
I analyzed 150 real AI complaints, then built a free protocol to stop memory loss and hallucinations. Try it now | 1 | [removed] | 2025-06-09T23:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l7k9gv/i_analyzed_150_real_ai_complaints_then_built_a/ | Alone-Biscotti6145 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7k9gv | false | null | t3_1l7k9gv | /r/LocalLLaMA/comments/1l7k9gv/i_analyzed_150_real_ai_complaints_then_built_a/ | false | false | self | 1 | null |
a signal? | 0 | i think i might be able to build a better world
if youre interested or wanna help
check out my ig if ya got time : handrolio\_
:peace: | 2025-06-09T23:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l7kjkp/a_signal/ | HanDrolio420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7kjkp | false | null | t3_1l7kjkp | /r/LocalLLaMA/comments/1l7kjkp/a_signal/ | true | false | spoiler | 0 | null |
Medical language model - for STT and summarize things | 6 | Hi!
I'd like to use a language model via ollama/openwebui to summarize medical reports.
I've tried several models, but I'm not happy with the results. I was thinking that there might be pre-trained models for this task that know medical language.
My goal: STT and then summarize my medical consultations, home visits, etc.
And for that I have a war machine: 5070ti with 16gb of VRAM and 32Gb of RAM.
Any ideas for completing this project?
| 2025-06-10T00:10:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ksm0/medical_language_model_for_stt_and_summarize/ | ed0c | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ksm0 | false | null | t3_1l7ksm0 | /r/LocalLLaMA/comments/1l7ksm0/medical_language_model_for_stt_and_summarize/ | false | false | self | 6 | null |
Apple's On Device Foundation Models LLM is 3B quantized to 2 bits | 413 | > The on-device model we just used is a large language model with **3 billion parameters**, each quantized to **2 bits**. It is several orders of magnitude bigger than any other models that are part of the operating system.
Source: Meet the Foundation Models framework
Timestamp: 2:57
URL: https://developer.apple.com/videos/play/wwdc2025/286/?time=175 | 2025-06-10T00:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l7l39m/apples_on_device_foundation_models_llm_is_3b/ | iKy1e | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7l39m | false | null | t3_1l7l39m | /r/LocalLLaMA/comments/1l7l39m/apples_on_device_foundation_models_llm_is_3b/ | false | false | self | 413 | {'enabled': False, 'images': [{'id': 'uXCO4Cm9e1ovWUoJegZURzNyOFFYgLRunC0SXtA36e4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2vGOHrAJNELW9KQkhIvINcn7U7jb9u2WNpH9PwLZbc4.jpg?width=108&crop=smart&auto=webp&s=6cf1ea26217c4355d520360225c00487a648e849', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/2vGOHrAJNELW9KQkhIvINcn7U7jb9u2WNpH9PwLZbc4.jpg?width=216&crop=smart&auto=webp&s=721a1cb953aaeed33a2fc922aced5e3b82a8375e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/2vGOHrAJNELW9KQkhIvINcn7U7jb9u2WNpH9PwLZbc4.jpg?width=320&crop=smart&auto=webp&s=be4f527ab835d37cd2a7d72a9131fa365407d8f3', 'width': 320}], 'source': {'height': 282, 'url': 'https://external-preview.redd.it/2vGOHrAJNELW9KQkhIvINcn7U7jb9u2WNpH9PwLZbc4.jpg?auto=webp&s=fccccc40d7eb43245d89159e7caeeb33f2ad4e39', 'width': 500}, 'variants': {}}]} |
any litellm and openrouter users here? | 1 | i'm using litellm 1.72.2 -- https://i.imgur.com/TYvKrmn.png
when i go to "Add new model" and select openrouter and type grok, nothing appears -- https://i.imgur.com/meNigyb.png
when i type "gemini", only the old models are there -- https://i.imgur.com/LMSdIFj.png
i don't know if this is a litellm or openrouter issue. what am i missing here? | 2025-06-10T00:26:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7l40q/any_litellm_and_openrouter_users_here/ | ra2eW8je | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7l40q | false | null | t3_1l7l40q | /r/LocalLLaMA/comments/1l7l40q/any_litellm_and_openrouter_users_here/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oxm4hApQ6G0fiXZYB87bUU6DZs3pEkiLAPq7XnTvVN0', 'resolutions': [{'height': 22, 'url': 'https://external-preview.redd.it/Snrs2wVN3MRuGQ9rSxrZmGUJQhdgmCQRf7xuQsGuzIM.png?width=108&crop=smart&auto=webp&s=7cf9171d0d7005f28e87ebe0b5c779909d95856d', 'width': 108}, {'height': 45, 'url': 'https://external-preview.redd.it/Snrs2wVN3MRuGQ9rSxrZmGUJQhdgmCQRf7xuQsGuzIM.png?width=216&crop=smart&auto=webp&s=bea33f2be2991bb193da84ae81abc04d4358399a', 'width': 216}, {'height': 66, 'url': 'https://external-preview.redd.it/Snrs2wVN3MRuGQ9rSxrZmGUJQhdgmCQRf7xuQsGuzIM.png?width=320&crop=smart&auto=webp&s=a95010f496de593a2c7ba33f74b32d3862b109a0', 'width': 320}], 'source': {'height': 79, 'url': 'https://external-preview.redd.it/Snrs2wVN3MRuGQ9rSxrZmGUJQhdgmCQRf7xuQsGuzIM.png?auto=webp&s=5d8c89f79f7855b7ce04fa259ad52ff95b444030', 'width': 379}, 'variants': {}}]} |
WINA from Microsoft | 3 | Did anyone tested this on actual setup of the local model? Would like to know if there is possibility to spend less money on local setup and still get good output.
[https://github.com/microsoft/wina](https://github.com/microsoft/wina) | 2025-06-10T01:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l7m2q7/wina_from_microsoft/ | mas554ter365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7m2q7 | false | null | t3_1l7m2q7 | /r/LocalLLaMA/comments/1l7m2q7/wina_from_microsoft/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'qWSMUeJ1DnbYYt9YsrYjStO9uP6JjQpbCDFKR9ZYt74', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?width=108&crop=smart&auto=webp&s=f6d94f865513035dcdf5e2734a031205ad8622b0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?width=216&crop=smart&auto=webp&s=08823a2462e8c9839d68ea064cec000c9be8a484', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?width=320&crop=smart&auto=webp&s=4dba8054b8adffdaec87ca106229dddd113935cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?width=640&crop=smart&auto=webp&s=8aa42bfa04040490ec3501d06aa3133ca880226b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?width=960&crop=smart&auto=webp&s=fba371819f14ae2d49dd4da9765d46cc2b7db580', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?width=1080&crop=smart&auto=webp&s=31a31c0c81421837abdd46ab20fda982ae4c16fa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yAmL0EcFyvVFyXLgKapogE0WathiSSxwBo3hDEdy1Bc.jpg?auto=webp&s=e744f268e95ed957f4249672781c6f958b17903a', 'width': 1200}, 'variants': {}}]} |
I found a DeepSeek-R1-0528-Distill-Qwen3-32B | 129 | Their authors said:
# Our Approach to DeepSeek-R1-0528-Distill-Qwen3-32B-Preview0-QAT:
Since Qwen3 did not provide a pre-trained base for its 32B model, our initial step was to perform **additional pre-training** on Qwen3-32B using a **self-constructed multilingual pre-training dataset**. This was done to restore a "pre-training style" model base as much as possible, ensuring that subsequent work would not be influenced by Qwen3's inherent SFT language style. This model will also be open-sourced in the future.
Building on this foundation, we attempted distillation from R1-0528 and completed an early preview version: **DeepSeek-R1-0528-Distill-Qwen3-32B-Preview0-QAT**.
In this version, we referred to the configuration from Fei-Fei Li's team in their work "s1: Simple test-time scaling." We tried training with a small amount of data over multiple epochs. We discovered that by using only about **10% of our available distillation data**, we could achieve a model with a language style and reasoning approach very close to the original R1-0528.
We have included a Chinese evaluation report in the model repository for your reference. Some datasets have also been uploaded to Hugging Face, hoping to assist other open-source enthusiasts in their work.
# Next Steps:
Moving forward, we will further expand our distillation data and train the next version of the 32B model with a larger dataset (expected to be released within a few days). We also plan to train open-source models of different sizes, such as 4B and 72B.
| 2025-06-10T01:35:01 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7mijq | false | null | t3_1l7mijq | /r/LocalLLaMA/comments/1l7mijq/i_found_a_deepseekr10528distillqwen332b/ | false | false | 129 | {'enabled': True, 'images': [{'id': 'JJvoqWHIUzwImc11SIuAVq8Kd9_jcoho-WOeCzNFJ9M', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/ear6iov2706f1.png?width=108&crop=smart&auto=webp&s=d304ae884c21714f593bae3148224906fd82e6fb', 'width': 108}, {'height': 322, 'url': 'https://preview.redd.it/ear6iov2706f1.png?width=216&crop=smart&auto=webp&s=f66f3ab8665b29d52f68309ecd3c79c267d00df6', 'width': 216}, {'height': 477, 'url': 'https://preview.redd.it/ear6iov2706f1.png?width=320&crop=smart&auto=webp&s=95678f78a72486df7315b2a0a68218a8b4f43f55', 'width': 320}, {'height': 954, 'url': 'https://preview.redd.it/ear6iov2706f1.png?width=640&crop=smart&auto=webp&s=7cf29ee3fdbf774f271f614b7665f27ad55a954c', 'width': 640}, {'height': 1432, 'url': 'https://preview.redd.it/ear6iov2706f1.png?width=960&crop=smart&auto=webp&s=3828720fdbbd8fb7ea178b917ce1192968246a3b', 'width': 960}, {'height': 1611, 'url': 'https://preview.redd.it/ear6iov2706f1.png?width=1080&crop=smart&auto=webp&s=d9a2de670b7509e1b6839012ca996583a9b25bbb', 'width': 1080}], 'source': {'height': 1820, 'url': 'https://preview.redd.it/ear6iov2706f1.png?auto=webp&s=d0ded462799e49013f5eeca0bed0dd8f9b9fb676', 'width': 1220}, 'variants': {}}]} |
||
How I scraped and analize 5.1 million jobs using LLaMA 7B | 1 | [removed] | 2025-06-10T01:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l7mqu4/how_i_scraped_and_analize_51_million_jobs_using/ | Elieroos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7mqu4 | false | null | t3_1l7mqu4 | /r/LocalLLaMA/comments/1l7mqu4/how_i_scraped_and_analize_51_million_jobs_using/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=216&crop=smart&auto=webp&s=0bba062fe06cce12fc3d0c4cb2a0ea82abc7c266', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=320&crop=smart&auto=webp&s=3ad6582619e3a7c3baeb4b3bc407f87a187c2336', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=640&crop=smart&auto=webp&s=1b9a8da21d7a1b9b308c5828dbe6f6b7287068d6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=960&crop=smart&auto=webp&s=196ba9362a8c5c81bc99f396e5c4bd3401667518', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=1080&crop=smart&auto=webp&s=f79588c44be17c9eae5cf5c5ccf4c0d9f77f0734', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?auto=webp&s=fa755a2de2b11728baa2d5e5dcd88171c0e5d4be', 'width': 1200}, 'variants': {}}]} |
Best Approaches for Accurate Large-Scale Medical Code Search? | 1 | [removed] | 2025-06-10T01:50:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l7mtvb | false | null | t3_1l7mtvb | /r/LocalLLaMA/comments/1l7mtvb/best_approaches_for_accurate_largescale_medical/ | false | false | default | 1 | null |
||
Chonkie update. | 11 | Launch HN: Chonkie (YC X25) – Open-Source Library for Advanced Chunking | https://news.ycombinator.com/item?id=44225930 | 2025-06-10T02:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ngkn/chonkie_update/ | dnr41418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ngkn | false | null | t3_1l7ngkn | /r/LocalLLaMA/comments/1l7ngkn/chonkie_update/ | false | false | self | 11 | null |
Where is Llama 4.1? | 34 | Meta released Llama 4 two months ago. They have all the GPUs in the world, something like 350K H100s according to Reddit. Why won't they copy DeepSeek/Qwen, retrain a larger model, and release it?
| 2025-06-10T02:27:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l7nk47/where_is_llama_41/ | MutedSwimming3347 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7nk47 | false | null | t3_1l7nk47 | /r/LocalLLaMA/comments/1l7nk47/where_is_llama_41/ | false | false | self | 34 | null |
Is this a reasonable spec’d rig for entry level | 1 | Hi all! I’m new to LLMs and very excited about getting started.
My background is engineering and I have a few projects in mind that I think would be helpful for myself and others in my organization. Some of which could probably be done in python but I said what the heck, let me try a LLM.
Here are the specs and I would greatly appreciate any input or drawbacks of the unit. I’m getting this at a decent price from what I’ve seen.
GPU: Asus GeForce RTX 3090
CPU: Intel i9-9900K
Motherboard: Asus PRIME Z390-A ATX LGA1151
RAM: Corsair Vengeance RGB Pro (2 x 16 GB)
Main Project: Customers come to us with certain requirements. Based on those requirements, we have to design our equipment a specific way. Because the design process lacks good documentation, we go through a series of meetings to finalize everything. I would like to train the model on the past project data that's available so it can quickly develop the design of the equipment and say "X equipment needs to have 10 bolts and 2 rods because of Y reason" (I'm oversimplifying). The data itself probably wouldn't be any more than 100-200 MB. I'm not sure if that's too small a sample size to train a model on; I'm still learning.
| 2025-06-10T02:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7no60/is_this_a_reasonable_specd_rig_for_entry_level/ | Tx-Heat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7no60 | false | null | t3_1l7no60 | /r/LocalLLaMA/comments/1l7no60/is_this_a_reasonable_specd_rig_for_entry_level/ | false | false | self | 1 | null |
Semantic Search Demo Using Qwen3 0.6B Embedding (w/o reranker) in-browser Using transformers.js | 1 | A couple days ago the Qwen team dropped their 0.6B & 4B embedding and reranking models. Having seen an ONNX quant for the 0.6B embedding model, I created a demo for it which runs locally via transformers.js.
Similarity among nodes is ranked with basic cosine similarity (top 3 only) because I couldn't use the 0.6B reranking model: there isn't an ONNX quant for it just yet, and I was running out of weekend time to learn how to convert it. I'll leave that exercise for another time!
Check it out for yourselves, you can even add in your own memory bank with your own 20 fun facts to test out. 20 being a safe arbitrary number as adding hundreds would probably take a while to generate embeddings all at once. Was a fun thing to work on though, small models rock.
Repo: [https://github.com/callbacked/qwen3-semantic-search](https://github.com/callbacked/qwen3-semantic-search)
HF Space: [https://huggingface.co/spaces/callbacked/qwen3-semantic-search](https://huggingface.co/spaces/callbacked/qwen3-semantic-search)
| 2025-06-10T02:41:24 | https://v.redd.it/fiyy4eoig06f1 | ajunior7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7nu0l | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fiyy4eoig06f1/DASHPlaylist.mpd?a=1752115299%2CZGJhMzdhMmE3YjBiYmVlZjRlYjZhYTNkYTZmODNlMGQ0MzQzYTBkYTQwZDY3ZTYyOWZmMDI2ZTRlMTdhNmJmNw%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/fiyy4eoig06f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/fiyy4eoig06f1/HLSPlaylist.m3u8?a=1752115299%2CMzAwMjhjODgwMTAzZTljY2I0NGY5YWRkMGVjZDlmZDJmMWI0NmZhZjIxNDdjZDFlZDdiZDM3ODI4ZmNkZDg2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fiyy4eoig06f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 944}} | t3_1l7nu0l | /r/LocalLLaMA/comments/1l7nu0l/semantic_search_demo_using_qwen3_06b_embedding_wo/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=108&crop=smart&format=pjpg&auto=webp&s=013986f95cecc3b526294de8e864b301d2f059f3', 'width': 108}, {'height': 164, 'url': 'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=216&crop=smart&format=pjpg&auto=webp&s=2eaaaccae587654d22d5150c214dce20565a9001', 'width': 216}, {'height': 244, 'url': 'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=320&crop=smart&format=pjpg&auto=webp&s=174fe69404ca183b8ae8a6bdad025b2166404892', 'width': 320}, {'height': 488, 'url': 'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=640&crop=smart&format=pjpg&auto=webp&s=8982ae39b21ca699f2e83a2b2d50ce3ddc9ebe0a', 'width': 640}, {'height': 732, 'url': 
'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=960&crop=smart&format=pjpg&auto=webp&s=b56ac9908b391aaaa8c9fafc00d60e6ef7b6fcd6', 'width': 960}, {'height': 823, 'url': 'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=1080&crop=smart&format=pjpg&auto=webp&s=60f0329ff69ef4b9ab7be1175b9f59e399df7f6b', 'width': 1080}], 'source': {'height': 920, 'url': 'https://external-preview.redd.it/cXNzc2s5b2lnMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?format=pjpg&auto=webp&s=fbe082110e0597860a58d1e4107f5a23edccc7d2', 'width': 1206}, 'variants': {}}]} |
|
Knock some sense into me | 2 | I have a 5080 in my main rig and I’ve become convinced that it’s not the best solution for a day to day LLM for asking questions, some coding help, and container deployment troubleshooting.
Part of me wants to build a purpose-built LLM rig with either a couple of 3090s or something else.
Am I crazy? Is my 5080 plenty? | 2025-06-10T02:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/1l7nv49/knock_some_sense_into_me/ | synthchef | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7nv49 | false | null | t3_1l7nv49 | /r/LocalLLaMA/comments/1l7nv49/knock_some_sense_into_me/ | false | false | self | 2 | null |
How to scale AI interaction without "magic prompts": trajectory - based model design I(t) | 1 | [removed] | 2025-06-10T03:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ocnw/how_to_scale_ai_interaction_without_magic_prompts/ | Radiant-Cost5478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ocnw | false | null | t3_1l7ocnw | /r/LocalLLaMA/comments/1l7ocnw/how_to_scale_ai_interaction_without_magic_prompts/ | false | false | self | 1 | null |
Semantic Search Demo Using Qwen3 0.6B Embedding (w/o reranker) in-browser Using transformers.js | 134 | Hello everyone! A couple days ago the Qwen team dropped their 4B, 8B, and 0.6B embedding and reranking models. Having seen an ONNX quant for the 0.6B embedding model, I created a demo for it which runs locally via transformers.js. It is a visualization showing both the contextual relationships between items inside a "memory bank" (as I call it) and having pertinent information being retrieved given a query, with varying degrees of similarity in its results.
Basic cosine similarity is used to rank the results from a query because I couldn't use the 0.6B reranking model: there isn't an ONNX quant for it just yet, and I was running out of weekend time to learn how to convert it. I'll leave that exercise for another time!
On the contextual relationship mapping, each node is given up to three other nodes it can connect to based on how similar the information is to each other.
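For the curious, the ranking step itself is tiny. Here's a rough Python sketch of the same top-k cosine ranking (the demo does the equivalent in transformers.js; the function names here are just illustrative):

```python
import math

def cosine_similarity(a, b):
    # Plain dot-product-over-norms cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_embedding, memory_bank, k=3):
    # memory_bank: list of (fact_text, embedding) pairs.
    # Returns the k facts whose embeddings point closest to the query.
    scored = [(text, cosine_similarity(query_embedding, emb))
              for text, emb in memory_bank]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```

The same scoring doubles for the node connections: each node just keeps its three highest-similarity neighbors.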
Check it out for yourselves, you can even add in your own memory bank with your own 20 fun facts to test out. 20 being a safe arbitrary number as adding hundreds would probably take a while to generate embeddings. Was a fun thing to work on though, small models rock.
Repo: [https://github.com/callbacked/qwen3-semantic-search](https://github.com/callbacked/qwen3-semantic-search)
HF Space: [https://huggingface.co/spaces/callbacked/qwen3-semantic-search](https://huggingface.co/spaces/callbacked/qwen3-semantic-search) | 2025-06-10T03:10:27 | https://v.redd.it/y6ht8zacj06f1 | ajunior7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l7odzw | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/y6ht8zacj06f1/DASHPlaylist.mpd?a=1752117046%2COWYwODg0MjA3MjRjNDU1ODViMTlkOTQwMWMxYzc4ZWY3N2E4YjA5ODBmY2Y4Yjk0NDM2N2EwNmUyOWYyNWM1MQ%3D%3D&v=1&f=sd', 'duration': 92, 'fallback_url': 'https://v.redd.it/y6ht8zacj06f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/y6ht8zacj06f1/HLSPlaylist.m3u8?a=1752117046%2COWJhNjlhOGQ5ODQ4OWQyNGJiZTAzNzcwZGFkMTM3ZDk1ZDk0YzRlNjcxNDE0OWQ4MGJmMmY4MjE0YmQzYjNlOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y6ht8zacj06f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 944}} | t3_1l7odzw | /r/LocalLLaMA/comments/1l7odzw/semantic_search_demo_using_qwen3_06b_embedding_wo/ | false | false | 134 | {'enabled': False, 'images': [{'id': 'eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=108&crop=smart&format=pjpg&auto=webp&s=14035aa109d2798b0c4acb6cbbaf1a2d8a9ad963', 'width': 108}, {'height': 164, 'url': 'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=216&crop=smart&format=pjpg&auto=webp&s=c10fd5435d935046a6793a919408978a8fd084e9', 'width': 216}, {'height': 244, 'url': 'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=320&crop=smart&format=pjpg&auto=webp&s=d4cd0b52f494fe5b49cdf664e4bd5af7754167f4', 'width': 320}, {'height': 488, 'url': 
'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=640&crop=smart&format=pjpg&auto=webp&s=ef6e451324a922463c87bc49fa13da055992c516', 'width': 640}, {'height': 732, 'url': 'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=960&crop=smart&format=pjpg&auto=webp&s=9b296e1015f51c77b976ab8e3a2f5136ccd92694', 'width': 960}, {'height': 823, 'url': 'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?width=1080&crop=smart&format=pjpg&auto=webp&s=96aefedbd720d09e54ab362345b837e2ac766c94', 'width': 1080}], 'source': {'height': 920, 'url': 'https://external-preview.redd.it/eW52bnN2YWNqMDZmMWjBiK3dvnpyH94tfSRL_W8T1S635Yrtcq3qyAjkjm_l.png?format=pjpg&auto=webp&s=482fb2791d965f3aa4b052da3b8f984f448eca2c', 'width': 1206}, 'variants': {}}]} |
|
How to scale AI interaction without "magic prompts": trajectory - based model design I(t) | 1 | [removed] | 2025-06-10T03:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l7oerb/how_to_scale_ai_interaction_without_magic_prompts/ | Radiant-Cost5478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7oerb | false | null | t3_1l7oerb | /r/LocalLLaMA/comments/1l7oerb/how_to_scale_ai_interaction_without_magic_prompts/ | false | false | 1 | null |
|
Future of local LLM computing? | 1 | [removed] | 2025-06-10T03:20:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l7okyr/future_of_local_llm_computing/ | LakeDeep34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7okyr | false | null | t3_1l7okyr | /r/LocalLLaMA/comments/1l7okyr/future_of_local_llm_computing/ | false | false | self | 1 | null |
Google Diffusion told me its system prompt | 151 | # Your name is Gemini Diffusion. You are an expert text diffusion language model trained by Google. You are not an autoregressive language model. You can not generate images or videos. You are an advanced AI assistant and an expert in many areas.
# Core Principles & Constraints:
# 1. Instruction Following: Prioritize and follow specific instructions provided by the user, especially regarding output format and constraints.
# 2. Non-Autoregressive: Your generation process is different from traditional autoregressive models. Focus on generating complete, coherent outputs based on the prompt rather than token-by-token prediction.
# 3. Accuracy & Detail: Strive for technical accuracy and adhere to detailed specifications (e.g., Tailwind classes, Lucide icon names, CSS properties).
# 4. No Real-Time Access: You cannot browse the internet, access external files or databases, or verify information in real-time. Your knowledge is based on your training data.
# 5. Safety & Ethics: Do not generate harmful, unethical, biased, or inappropriate content.
# 6. Knowledge cutoff: Your knowledge cutoff is December 2023. The current year is 2025 and you do not have access to information from 2024 onwards.
# 7. Code outputs: You are able to generate code outputs in any programming language or framework.
# Specific Instructions for HTML Web Page Generation:
# * Output Format:
# * Provide all HTML, CSS, and JavaScript code within a single, runnable code block (e.g., using ```html ... ```).
# * Ensure the code is self-contained and includes necessary tags (`<!DOCTYPE html>`, `<html>`, `<head>`, `<body>`, `<script>`, `<style>`).
# * Do not use divs for lists when more semantically meaningful HTML elements will do, such as <ol> and <li> as children.
# * Aesthetics & Design:
# * The primary goal is to create visually stunning, highly polished, and responsive web pages suitable for desktop browsers.
# * Prioritize clean, modern design and intuitive user experience.
# * Styling (Non-Games):
# * Tailwind CSS Exclusively: Use Tailwind CSS utility classes for ALL styling. Do not include `<style>` tags or external `.css` files.
# * Load Tailwind: Include the following script tag in the `<head>` of the HTML: `<script src="https://unpkg.com/@tailwindcss/browser@4"></script>`
# * Focus: Utilize Tailwind classes for layout (Flexbox/Grid, responsive prefixes `sm:`, `md:`, `lg:`), typography (font family, sizes, weights), colors, spacing (padding, margins), borders, shadows, etc.
# * Font: Use `Inter` font family by default. Specify it via Tailwind classes if needed.
# * Rounded Corners: Apply `rounded` classes (e.g., `rounded-lg`, `rounded-full`) to all relevant elements.
# * Icons:
# * Method: Use `<img>` tags to embed Lucide static SVG icons: `<img src="https://unpkg.com/lucide-static@latest/icons/ICON_NAME.svg">`. Replace `ICON_NAME` with the exact Lucide icon name (e.g., `home`, `settings`, `search`).
# * Accuracy: Ensure the icon names are correct and the icons exist in the Lucide static library.
# * Layout & Performance:
# * CLS Prevention: Implement techniques to prevent Cumulative Layout Shift (e.g., specifying dimensions, appropriately sized images).
# * HTML Comments: Use HTML comments to explain major sections, complex structures, or important JavaScript logic.
# * External Resources: Do not load placeholders or files that you don't have access to. Avoid using external assets or files unless instructed to. Do not use base64 encoded data.
# * Placeholders: Avoid using placeholders unless explicitly asked to. Code should work immediately.
# Specific Instructions for HTML Game Generation:
# * Output Format:
# * Provide all HTML, CSS, and JavaScript code within a single, runnable code block (e.g., using ```html ... ```).
# * Ensure the code is self-contained and includes necessary tags (`<!DOCTYPE html>`, `<html>`, `<head>`, `<body>`, `<script>`, `<style>`).
# * Aesthetics & Design:
# * The primary goal is to create visually stunning, engaging, and playable web games.
# * Prioritize game-appropriate aesthetics and clear visual feedback.
# * Styling:
# * Custom CSS: Use custom CSS within `<style>` tags in the `<head>` of the HTML. Do not use Tailwind CSS for games.
# * Layout: Center the game canvas/container prominently on the screen. Use appropriate margins and padding.
# * Buttons & UI: Style buttons and other UI elements distinctively. Use techniques like shadows, gradients, borders, hover effects, and animations where appropriate.
# * Font: Consider using game-appropriate fonts such as `'Press Start 2P'` (include the Google Font link: `<link href="https://fonts.googleapis.com/css2?family=Press+Start+2P&display=swap" rel="stylesheet">`) or a monospace font.
# * Functionality & Logic:
# * External Resources: Do not load placeholders or files that you don't have access to. Avoid using external assets or files unless instructed to. Do not use base64 encoded data.
# * Placeholders: Avoid using placeholders unless explicitly asked to. Code should work immediately.
# * Planning & Comments: Plan game logic thoroughly. Use extensive code comments (especially in JavaScript) to explain game mechanics, state management, event handling, and complex algorithms.
# * Game Speed: Tune game loop timing (e.g., using `requestAnimationFrame`) for optimal performance and playability.
# * Controls: Include necessary game controls (e.g., Start, Pause, Restart, Volume). Place these controls neatly outside the main game area (e.g., in a top or bottom center row).
# * No `alert()`: Display messages (e.g., game over, score updates) using in-page HTML elements (e.g., `<div>`, `<p>`) instead of the JavaScript `alert()` function.
# * Libraries/Frameworks: Avoid complex external libraries or frameworks unless specifically requested. Focus on vanilla JavaScript where possible.
# Final Directive:
# Think step by step through what the user asks. If the query is complex, write out your thought process before committing to a final answer. Although you are excellent at generating code in any programming language, you can also help with other types of query. Not every output has to include code. Make sure to follow user instructions precisely. Your task is to answer the requests of the user to the best of your ability.
| 2025-06-10T03:21:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l7olcw/google_diffusion_told_me_its_system_prompt/ | bralynn2222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7olcw | false | null | t3_1l7olcw | /r/LocalLLaMA/comments/1l7olcw/google_diffusion_told_me_its_system_prompt/ | false | false | self | 151 | null |
Is 5090 viable even for 32B model? | 1 | [removed] | 2025-06-10T04:06:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l7pfal/is_5090_viable_even_for_32b_model/ | kkgmgfn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7pfal | false | null | t3_1l7pfal | /r/LocalLLaMA/comments/1l7pfal/is_5090_viable_even_for_32b_model/ | false | false | self | 1 | null |
GRPO Can Boost LLM-Based TTS Performance | 35 | Hi everyone!
**LlaSA** ([https://arxiv.org/abs/2502.04128](https://arxiv.org/abs/2502.04128)) is a Llama-based TTS model.
We fine-tuned it on **15 k hours of Korean speech** and then applied **GRPO**. The result:
https://preview.redd.it/33lko3wtz06f1.png?width=1779&format=png&auto=webp&s=31d61678e43758906c6cd76cd639f61bb9f31de8
This shows that GRPO can noticeably boost an **LLM-based TTS system** on our internal benchmark.
**Key takeaway**
Optimizing for **CER alone isn’t enough**—adding Whisper Negative Log-Likelihood as a second reward signal and optimizing *both CER and Whisper-NLL* makes training far more effective.
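To make that concrete, the two signals can be rolled into one scalar reward with a weighted sum. This is only a hedged sketch of the idea; the weights and function name are illustrative, not the actual code in the repo:

```python
def combined_reward(cer, whisper_nll, w_cer=1.0, w_nll=0.5):
    # Both character error rate and Whisper negative log-likelihood are
    # "lower is better", so negate the weighted sum to get a scalar
    # reward that GRPO can maximize.
    return -(w_cer * cer + w_nll * whisper_nll)
```

Collapsing both into one reward is what lets a single GRPO objective trade off intelligibility (CER) against how plausible the audio sounds to Whisper (NLL).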
Source code and training scripts are public (checkpoints remain internal for policy reasons):
[https://github.com/channel-io/ch-tts-llasa-rl-grpo](https://github.com/channel-io/ch-tts-llasa-rl-grpo)
— **Seungyoun Shin** ([https://github.com/SeungyounShin](https://github.com/SeungyounShin)) @ **Channel Corp** ([https://channel.io/en](https://channel.io/en)) | 2025-06-10T04:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l7pmua/grpo_can_boost_llmbased_tts_performance/ | skswldndi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7pmua | false | null | t3_1l7pmua | /r/LocalLLaMA/comments/1l7pmua/grpo_can_boost_llmbased_tts_performance/ | false | false | 35 | null |
|
[Release] mirau-agent-14b-base: An autonomous multi-turn tool-calling base model with hybrid reasoning for RL training | 1 | [removed] | 2025-06-10T06:00:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l7ratz/release_mirauagent14bbase_an_autonomous_multiturn/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7ratz | false | null | t3_1l7ratz | /r/LocalLLaMA/comments/1l7ratz/release_mirauagent14bbase_an_autonomous_multiturn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fQCoJRtPWJ-Dc2_q3db6ZvAcLDdJ5ZiGmR3Ni38fpwE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/JrOry-YPRo61mnt5KjFGr_uC_uI8nrXfI2XXPOSL2jk.jpg?width=108&crop=smart&auto=webp&s=2a9e96fedcce0786f651822472ea7cb4a908c0a3', 'width': 108}], 'source': {'height': 128, 'url': 'https://external-preview.redd.it/JrOry-YPRo61mnt5KjFGr_uC_uI8nrXfI2XXPOSL2jk.jpg?auto=webp&s=8573859d6c915ae173dd658c1551a2e9718262ff', 'width': 128}, 'variants': {}}]} |
Built a lightweight local AI chat interface | 9 | Got tired of opening terminal windows every time I wanted to use Ollama on an old Dell OptiPlex running a 9th-gen i3. Tried Open WebUI but found it too clunky to use and confusing to update.
Ended up building chat-o-llama (I know, catchy name) with Flask on top of Ollama:
* Clean web UI with proper copy/paste functionality
* No GPU required - runs on CPU-only machines
* Works on 8GB RAM systems and even Raspberry Pi 4
* Persistent chat history with SQLite
Been running it on an old Dell OptiPlex with an i3 and a Raspberry Pi 4B - it's much more convenient than the terminal.
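If anyone's wondering how little it takes to get persistent history out of SQLite, here's a rough sketch (the schema and function names are illustrative, not necessarily what chat-o-llama actually uses):

```python
import sqlite3

def init_db(path=":memory:"):
    # One table is enough: every message belongs to a conversation.
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages ("
        "id INTEGER PRIMARY KEY, conversation TEXT, role TEXT, content TEXT)"
    )
    return conn

def save_message(conn, conversation, role, content):
    # Parameterized insert; commit so history survives a restart.
    conn.execute(
        "INSERT INTO messages (conversation, role, content) VALUES (?, ?, ?)",
        (conversation, role, content),
    )
    conn.commit()

def history(conn, conversation):
    # Messages come back in insertion order thanks to the rowid PK.
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE conversation = ? ORDER BY id",
        (conversation,),
    )
    return list(rows)
```

Since sqlite3 ships with Python, this adds zero dependencies on top of Flask.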
>**GitHub:** [https://github.com/ukkit/chat-o-llama](https://github.com/ukkit/chat-o-llama)
Would love to hear if anyone tries it out or has suggestions for improvements.
https://i.redd.it/v3y5hivli16f1.gif
| 2025-06-10T06:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rc5e/built_a_lightweight_local_ai_chat_interface/ | Longjumping_Tie_7758 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rc5e | false | null | t3_1l7rc5e | /r/LocalLLaMA/comments/1l7rc5e/built_a_lightweight_local_ai_chat_interface/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'qNOm2AB2ZKg9bo94a7NK6nrQgkaTaGn0oz_1iyKDfbg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?width=108&crop=smart&auto=webp&s=166a3a9a781f68ea8ece94fbbe7133496c4e92ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?width=216&crop=smart&auto=webp&s=b2c2d1f6e349e5fbc230a4f47b82cb1ad00d4565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?width=320&crop=smart&auto=webp&s=7b67f84e0b5f66441f2f44de672956d7c6c21994', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?width=640&crop=smart&auto=webp&s=3785dcd6e90442fe89e840bec1a313d432c2082d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?width=960&crop=smart&auto=webp&s=94d963fb8883be7932feff1d22ea26d833ec95b2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?width=1080&crop=smart&auto=webp&s=ba75f0bb249f41ac9a0f14ee970457d91e7f41ec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dhaM32e4eXtCR_ZtSxsQfX45fhQAcqrDj_Tb_w6WzUc.jpg?auto=webp&s=883a681be7aa62f1c43fde1954426147ed6b66a1', 'width': 1200}, 'variants': {}}]} |
|
[Release] mirau-agent-14b-base: An autonomous multi-turn tool-calling base model with hybrid reasoning for RL training | 1 | [removed] | 2025-06-10T06:03:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rcdw/release_mirauagent14bbase_an_autonomous_multiturn/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rcdw | false | null | t3_1l7rcdw | /r/LocalLLaMA/comments/1l7rcdw/release_mirauagent14bbase_an_autonomous_multiturn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'F92Pm5PCvc5z7JHXz1XvpBB45lJtVgj3EbxN8KaHBYE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=108&crop=smart&auto=webp&s=273f321f8fe5f00f82b50c19654bf7a96aee1d0d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=216&crop=smart&auto=webp&s=ac4dbd7785306e44fed88303ca0120afd3e58200', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=320&crop=smart&auto=webp&s=14fac64354b545b15ad3502b4a2e69631f87f7a6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=640&crop=smart&auto=webp&s=3efc23aa0af8ba8593e40d72e040081f6521f1df', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=960&crop=smart&auto=webp&s=0598b152c8cfd7165fb85042ca9af92e6fa16d13', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?width=1080&crop=smart&auto=webp&s=c8f17239337f795a223a7a2a2e32a55eb18a928e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Yx8yh7muJo4xZ-jtPXaND-bcr9Yzc5UZywWNCeQLn2I.jpg?auto=webp&s=85a2d3b7ae726152cb51fb9aad8441164aac5b8e', 'width': 1200}, 'variants': {}}]} |
Why I can't POST in LocalLLama | 1 | [removed] | 2025-06-10T06:09:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rfy4/why_i_cant_post_in_localllama/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rfy4 | false | null | t3_1l7rfy4 | /r/LocalLLaMA/comments/1l7rfy4/why_i_cant_post_in_localllama/ | false | false | self | 1 | null |
i can't post | 1 | test | 2025-06-10T06:12:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rhec/i_cant_post/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rhec | false | null | t3_1l7rhec | /r/LocalLLaMA/comments/1l7rhec/i_cant_post/ | false | false | self | 1 | null |
A comprehensive MCP server implementing the latest specification. | 3 | 2025-06-10T06:18:59 | https://github.com/hemanth/paws-on-mcp | init0 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l7rl60 | false | null | t3_1l7rl60 | /r/LocalLLaMA/comments/1l7rl60/a_comprehensive_mcp_server_implementing_the/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'pYZxgGBxTUQmi9k84JqgcID7IJ-ISVVXYAAjGNqs4OY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2ypuEOTiX6Tbsb7kORscGU80DAVLDFrfhDPiOEMVE-I.jpg?width=108&crop=smart&auto=webp&s=a6f20591d62f5be3ea5ac2ffe2dde4ceb3d62711', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2ypuEOTiX6Tbsb7kORscGU80DAVLDFrfhDPiOEMVE-I.jpg?width=216&crop=smart&auto=webp&s=703f0fb3d3611bd10f01bd0758704e301e697cdc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2ypuEOTiX6Tbsb7kORscGU80DAVLDFrfhDPiOEMVE-I.jpg?width=320&crop=smart&auto=webp&s=e8b0962b01260fd61e9a3a7325f19ee5e8a78382', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2ypuEOTiX6Tbsb7kORscGU80DAVLDFrfhDPiOEMVE-I.jpg?width=640&crop=smart&auto=webp&s=d7eb389bad8a6b419456d2d3748f0a416b203fae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2ypuEOTiX6Tbsb7kORscGU80DAVLDFrfhDPiOEMVE-I.jpg?width=960&crop=smart&auto=webp&s=5c9d067610adaeb608062d205ff0a04f0022d39a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2ypuEOTiX6Tbsb7kORscGU80DAVLDFrfhDPiOEMVE-I.jpg?width=1080&crop=smart&auto=webp&s=e65f11ed030fafc7b91f0a3d82d9687063bbdbd8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2ypuEOTiX6Tbsb7kORscGU80DAVLDFrfhDPiOEMVE-I.jpg?auto=webp&s=0bbeb057657372e5c50656638504fa828a749358', 'width': 1200}, 'variants': {}}]} |
[D] Open-Source Models Catching Up to Closed-Source in Enterprise AI – Interesting Benchmark Study | 1 | [removed] | 2025-06-10T06:22:37 | https://www.reddit.com/r/LocalLLaMA/comments/1l7rn5h/d_opensource_models_catching_up_to_closedsource/ | Fast-Comfortable-681 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7rn5h | false | null | t3_1l7rn5h | /r/LocalLLaMA/comments/1l7rn5h/d_opensource_models_catching_up_to_closedsource/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vux615m1wkrh932YfXOdpv8M4xU2zjaKBT-UG9IMlPs', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/LAC-zWe12vaF6JJ1FyiG5iTKGAVEL6m7rrGvG8E0PRY.jpg?width=108&crop=smart&auto=webp&s=a30954f9a0ab6df607f71fcac6c730489c38e251', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/LAC-zWe12vaF6JJ1FyiG5iTKGAVEL6m7rrGvG8E0PRY.jpg?width=216&crop=smart&auto=webp&s=0d212537c3510e35711158599c7c6450df2044b6', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/LAC-zWe12vaF6JJ1FyiG5iTKGAVEL6m7rrGvG8E0PRY.jpg?width=320&crop=smart&auto=webp&s=e54ae7704e3a5c3421d93c7ab492d6659e5ef4fd', 'width': 320}, {'height': 425, 'url': 'https://external-preview.redd.it/LAC-zWe12vaF6JJ1FyiG5iTKGAVEL6m7rrGvG8E0PRY.jpg?width=640&crop=smart&auto=webp&s=97f438634708868f746419a0bd213b1ae2d97810', 'width': 640}, {'height': 638, 'url': 'https://external-preview.redd.it/LAC-zWe12vaF6JJ1FyiG5iTKGAVEL6m7rrGvG8E0PRY.jpg?width=960&crop=smart&auto=webp&s=19aeae1679215d574e2fa945882a1d9add364e97', 'width': 960}, {'height': 718, 'url': 'https://external-preview.redd.it/LAC-zWe12vaF6JJ1FyiG5iTKGAVEL6m7rrGvG8E0PRY.jpg?width=1080&crop=smart&auto=webp&s=78a623c7969e2d580caebc15f1ca473ba9fad4f2', 'width': 1080}], 'source': {'height': 858, 'url': 'https://external-preview.redd.it/LAC-zWe12vaF6JJ1FyiG5iTKGAVEL6m7rrGvG8E0PRY.jpg?auto=webp&s=301a57f9da5c2d0fdcadc6d52ef7b3f39ec0f4a8', 'width': 1290}, 'variants': {}}]} |
Vision models | 1 | [removed] | 2025-06-10T06:51:59 | https://www.reddit.com/gallery/1l7s2mv | Enough-Incident-4435 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l7s2mv | false | null | t3_1l7s2mv | /r/LocalLLaMA/comments/1l7s2mv/vision_models/ | false | false | 1 | null |
Feels like Apple's busted in the AI race... WWDC 2025 conclusion: no major updates, only minor ones. Does anyone else feel the same way? | 42 | They could have just skipped WWDC. | 2025-06-10T06:58:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l7s5xx/feels_like_apples_busted_with_the_ai_race_wwdc/ | ExplanationEqual2539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7s5xx | false | null | t3_1l7s5xx | /r/LocalLLaMA/comments/1l7s5xx/feels_like_apples_busted_with_the_ai_race_wwdc/ | false | false | self | 42 | null |
Apple research messed up | 0 | Their illusion-of-intelligence study had a design flaw: what the frontier models weren't able to solve was an “unsolvable” problem given the constraints. | 2025-06-10T07:13:59 | https://www.linkedin.com/pulse/ai-reasoning-models-vs-human-style-problem-solving-case-mahendru-mhbjc?utm_source=share&utm_medium=member_ios&utm_campaign=share_via | TrifleHopeful5418 | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 1l7seds | false | null | t3_1l7seds | /r/LocalLLaMA/comments/1l7seds/apple_research_messed_up/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'SfDBbTwoSlP3t49X-DzOenP7DPUl56haACsFp5qBk6E', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/Jr2u9t7hHrCf63fubhl1KzYbXy626ftH82VNyHypf5Q.jpg?auto=webp&s=aab36e1b3c82df95001d7fe771b306f5a5a4f4f9', 'width': 96}, 'variants': {}}]} |
Mark Zuckerberg Personally Hiring to Create New “Superintelligence” AI Team | 293 | 2025-06-10T07:23:13 | https://www.bloomberg.com/news/articles/2025-06-10/zuckerberg-recruits-new-superintelligence-ai-group-at-meta?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc0OTUzOTk2NCwiZXhwIjoxNzUwMTQ0NzY0LCJhcnRpY2xlSWQiOiJTWE1KNFlEV1JHRzAwMCIsImJjb25uZWN0SWQiOiJCQjA1NkM3NzlFMTg0MjU0OUQ3OTdCQjg1MUZBODNBMCJ9.oQD8-YVuo3p13zoYHc4VDnMz-MTkSU1vpwO3bBypUBY | gensandman | bloomberg.com | 1970-01-01T00:00:00 | 0 | {} | 1l7sj45 | false | null | t3_1l7sj45 | /r/LocalLLaMA/comments/1l7sj45/mark_zuckerberg_personally_hiring_to_create_new/ | false | false | 293 | {'enabled': False, 'images': [{'id': 'zxplPyNxZhwv2TannaIJXNsEMt11TmHOCNfJkyL65Vg', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/PQVpxiJVBzVlTszQQQQivblrBqJ4eZAv8qWKEd5l9co.jpg?width=108&crop=smart&auto=webp&s=53b4bfef9b4257d16d7e7f3a7be635cc867c55a4', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/PQVpxiJVBzVlTszQQQQivblrBqJ4eZAv8qWKEd5l9co.jpg?width=216&crop=smart&auto=webp&s=d9a2ab07088a51f22a6b559a8503a8abe44bff3a', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/PQVpxiJVBzVlTszQQQQivblrBqJ4eZAv8qWKEd5l9co.jpg?width=320&crop=smart&auto=webp&s=563d0d02e73c3005a23dfedb78117dc258698e6d', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/PQVpxiJVBzVlTszQQQQivblrBqJ4eZAv8qWKEd5l9co.jpg?width=640&crop=smart&auto=webp&s=62bc47ff5664f309fedad12ae6792557582ee0a4', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/PQVpxiJVBzVlTszQQQQivblrBqJ4eZAv8qWKEd5l9co.jpg?width=960&crop=smart&auto=webp&s=084a543d4fdff8dc7ec968d68ece8700e500316b', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/PQVpxiJVBzVlTszQQQQivblrBqJ4eZAv8qWKEd5l9co.jpg?width=1080&crop=smart&auto=webp&s=c7ad2ce79eef036becacab827944ab935f426e3e', 'width': 1080}], 'source': {'height': 799, 'url': 'https://external-preview.redd.it/PQVpxiJVBzVlTszQQQQivblrBqJ4eZAv8qWKEd5l9co.jpg?auto=webp&s=2395a6a4df767d2ac56524746931feded435a90e', 'width': 1200}, 'variants': {}}]} |
Share an agentic base model for RL training. | 1 | a new model | 2025-06-10T07:27:05 | https://www.reddit.com/r/LocalLLaMA/comments/1l7sl3f/share_an_agentic_base_model_for_rl_training/ | EliaukMouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7sl3f | false | null | t3_1l7sl3f | /r/LocalLLaMA/comments/1l7sl3f/share_an_agentic_base_model_for_rl_training/ | false | false | self | 1 | null |
Vibe-coding without the 14-hour debug spirals | 336 | After 2 years I've finally cracked the code on avoiding these infinite loops. Here's what actually works:
**1. The 3-Strike Rule (aka "Stop Digging, You Idiot")**
If AI fails to fix something after 3 attempts, STOP. Just stop. I learned this after watching my codebase grow from 2,000 lines to 18,000 lines trying to fix a dropdown menu. The AI was literally wrapping my entire app in try-catch blocks by the end.
What to do instead:
* Screenshot the broken UI
* Start a fresh chat session
* Describe what you WANT, not what's BROKEN
* Let AI rebuild that component from scratch
**2. Context Windows Are Not Your Friend**
Here's the dirty secret - after about 10 back-and-forth messages, the AI starts forgetting what the hell you're even building. I once had Claude convinced my AI voice platform was a recipe blog because we'd been debugging the persona switching feature for so long.
My rule: Every 8-10 messages, I:
* Save working code to a separate file
* Start fresh
* Paste ONLY the relevant broken component
* Include a one-liner about what the app does
This cut my debugging time by ~70%.
**3. The "Explain Like I'm Five" Test**
If you can't explain what's broken in one sentence, you're already screwed. I spent 6 hours once because I kept saying "the data flow is weird and the state management seems off but also the UI doesn't update correctly sometimes."
Now I force myself to say things like:
* "Button doesn't save user data"
* "Page crashes on refresh"
* "Image upload returns undefined"
Simple descriptions = better fixes.
**4. Version Control Is Your Escape Hatch**
Git commit after EVERY working feature. Not every day. Not every session. EVERY. WORKING. FEATURE.
I learned this after losing 3 days of work because I kept "improving" working code until it wasn't working anymore. Now I commit like a paranoid squirrel hoarding nuts for winter.
My commits from last week:
* 42 total commits
* 31 were rollback points
* 11 were actual progress
* 0 lost features
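That habit is cheap to automate. Here's a minimal sketch of the commit-per-feature loop with a tagged rollback point — the repo, file name, tag scheme, and commit message below are illustrative, not from my actual project:

```shell
#!/bin/sh
# Sketch: one working feature = one commit + one "good-*" tag,
# so a broken AI edit is a single reset away.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"

echo "working feature" > app.txt            # pretend this is a finished feature
git add -A
git commit -qm "feat: button saves user data"
git tag good-1                              # known-good rollback point

echo "AI improvement that broke everything" > app.txt   # the spiral begins

# Stop digging: jump back to the last known-good tag.
last_good=$(git describe --tags --match 'good-*' --abbrev=0)
git reset -q --hard "$last_good"
cat app.txt                                 # prints: working feature
```

The tags also help with tip 5 below: `git show good-1:app.txt` pulls the last working version of a file without touching the broken tree.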
**5. The Nuclear Option: Burn It Down**
Sometimes the code is so fucked that fixing it would take longer than rebuilding. I had to nuke our entire voice personality management system three times before getting it right.
If you've spent more than 2 hours on one bug:
1. Copy your core business logic somewhere safe
2. Delete the problematic component entirely
3. Tell AI to build it fresh with a different approach
4. Usually takes 20 minutes vs another 4 hours of debugging
The infinite loop isn't an AI problem - it's a human problem of being too stubborn to admit when something's irreversibly broken. | 2025-06-10T07:38:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l7sr2b/vibecoding_without_the_14hour_debug_spirals/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l7sr2b | false | null | t3_1l7sr2b | /r/LocalLLaMA/comments/1l7sr2b/vibecoding_without_the_14hour_debug_spirals/ | false | false | self | 336 | null |