title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I know it's "LOCAL"-LLaMA but... | 0 | I've been weighing buying vs renting for AI tasks/gens while working say \~8hrs a day. I did use AI to help with breakdown below (surprise, right.) This wouldn't be such a big thing to me, I would just buy the hardware but, I'm trying to build a place and go off-grid and use as little power as possible. (Even hooking up DC powered LEDs straight from the power source so I don't lose energy converting from DC to AC with an inverter then back to DC from AC in the bulb's rectifier.)
I was looking at rental costs and Vast and other I can get a 5060ti with EPYC and over 128gb of fast ram for like $0.11 an hour, lol like what? They've only gotta be making like 5 cents an hour or something after overhead.. Anyways pricing out a comparable PC I think around $1500ish <- max I would spend. Also I say 5060ti because I wanted the new features and to be sort of future proof. Complete privacy for these use cases is not paramount - another reason I can consider this.
Breakdown:
# Computer Cost Breakdown: Buy vs. Rent (for 8 Hours/Day Use)
**Scenario:** You need computing power for 8 hours a day. **PC Components:** High-performance setup with AMD EPYC CPU, RTX 5060 Ti GPU, and fast RAM. **Electricity Cost:** Assumed average of $0.15 per kWh.
# Option 1: Buying a High-Performance PC
* **Initial Purchase Cost:** **$1500** (One-time investment)
* This is the upfront cost to acquire the hardware.
* **Estimated Daily Electricity Cost (for 8 hours of use):**
* **Power Consumption:** Your EPYC + RTX 5060 Ti system is estimated to draw an average of **400 Watts (0.4 kW)** during active use.
* **Daily Usage:** 0.4 kW \* 8 hours = 3.2 kWh
* **Daily Electricity Cost:** 3.2 kWh \* $0.15/kWh = **$0.48**
* **Estimated Annual Electricity Cost (for 8 hours/day, 365 days):**
* **Annual Usage:** 3.2 kWh/day \* 365 days = 1168 kWh
* **Annual Electricity Cost:** 1168 kWh \* $0.15/kWh = **$175.20**
**Total Cost of Ownership (Year 1):** Initial PC Cost ($1500) + Annual Electricity ($175.20) = **$1675.20**
**Ongoing Annual Cost (after Year 1, mainly electricity):** **$175.20 per year** (for electricity)
# Option 2: Renting a Server
* **Hourly Rental Cost:** **$0.11 per hour** (as provided)
* **Daily Rental Cost (for 8 hours of use):**
* $0.11/hour \* 8 hours/day = **$0.88**
* **Annual Rental Cost (for 8 hours/day, 365 days):**
* $0.88/day \* 365 days = **$321.20**
**Total Annual Cost of Renting:** **$321.20 per year**
# The "Value" Comparison: How Many Days/Years of Renting for the Price of Buying?
To truly compare the value, we look at how much server rental you could get for the initial $1500 PC investment, while also acknowledging the ongoing electricity cost of the PC.
* **Years of Server Rental Covered by PC's Initial Price:**
* $1500 (PC Initial Cost) / $321.20 (Annual Server Rental Cost) ≈ **4.67 years**
This means that the initial **$1500** spent on the PC could cover nearly **4 years and 8 months** of server rental (at 8 hours/day).
# Weighing Your Options: Buy vs. Rent
**Buying a High-Performance PC:**
* **Pros:**
* **Full Ownership & Control:** Complete control over hardware, software, and local data.
* **No Recurring Rental Fees for Hardware:** Once purchased, the hardware itself is yours.
* **Offline Capability:** Can operate without an internet connection for many tasks.
* **Potentially Lower Long-Term Cost (if used heavily over many years):** After the initial purchase, the primary ongoing cost is electricity.
* **Cons:**
* **High Upfront Cost:** Requires a significant initial investment of $1500.
* **Ongoing Electricity Cost:** Adds $175.20 annually to your expenses.
* **Self-Responsibility:** You are fully responsible for all hardware maintenance, repairs, and future upgrades.
* **Depreciation:** Hardware value decreases over time.
* **Limited Scalability:** Upgrading capacity can be more complex and expensive.
**Renting a Server:**
* **Pros:**
* **Low Upfront Cost:** No large initial investment. You pay as you go.
* **Scalability & Flexibility:** Easily adjust resources (CPU, RAM, storage) up or down as your needs change.
* **Zero Hardware Maintenance:** The provider handles all hardware upkeep, repairs, and infrastructure.
* **Predictable Annual Costs:** $321.20 per year for 8 hours of daily use.
* **High Reliability & Uptime:** Leverages professional data center infrastructure.
* **Accessibility:** Access your server from anywhere with an internet connection.
* **Cons:**
* **Recurring Costs:** You pay indefinitely as long as you use the service.
* **Dependency on Provider:** Rely on the provider's services, policies, and security.
* **Data Security:** Your data resides on a third-party server.
* **Internet Dependent:** Requires a stable internet connection for access.
* **Higher Annual Cost (for this specific 8-hour daily use):** $321.20 annually compared to the PC's $175.20 annual electricity.
**Summary:**
While purchasing a high-performance PC has a significant upfront cost of **$1500**, its annual electricity cost is **$175.20**. You could rent a server for almost **4 years and 8 months** with that initial PC investment. However, on an **annual operational cost basis**, renting at $321.20/year for 8 hours daily is *more expensive* than just paying the electricity for your owned PC ($175.20/year).
The decision hinges on whether you prefer a large initial outlay for ownership and lower ongoing costs, or no upfront cost with higher, recurring operational expenses and greater flexibility.
\---
I mean, after 4.5 years it's time for a newer card and pc anyway, right? Any other suggestions? I think the next gen of the AMD, I don't want to offend anyone and say "mac mini competitors" but that's what they're going for right? I think the next gen like AMD AI Max 4xx devices might be pretty dope. might just save up for a low power little AI cube. Everything will be perfectly supported by then right?? eh... | 2025-05-28T15:49:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kxkke4/i_know_its_localllama_but/ | mr_happy_nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxkke4 | false | null | t3_1kxkke4 | /r/LocalLLaMA/comments/1kxkke4/i_know_its_localllama_but/ | false | false | self | 0 | null |
Am I the only one suffering from Leaks? | 1 | [removed] | 2025-05-28T16:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kxlj8l/am_i_the_only_one_suffering_from_leaks/ | Ok_Solution_7199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxlj8l | false | null | t3_1kxlj8l | /r/LocalLLaMA/comments/1kxlj8l/am_i_the_only_one_suffering_from_leaks/ | false | false | self | 1 | null |
Unsloth Devstral Q8_K_XL only 30% the speed of Q8_0? | 6 | 2025-05-28T16:38:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kxlsvk/unsloth_devstral_q8_k_xl_only_30_the_speed_of_q8_0/ | liquidki | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxlsvk | false | null | t3_1kxlsvk | /r/LocalLLaMA/comments/1kxlsvk/unsloth_devstral_q8_k_xl_only_30_the_speed_of_q8_0/ | false | false | 6 | null |
||
Codestral Embed [embedding model specialized for code] | 26 | 2025-05-28T16:40:54 | https://mistral.ai/news/codestral-embed | pahadi_keeda | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1kxlus4 | false | null | t3_1kxlus4 | /r/LocalLLaMA/comments/1kxlus4/codestral_embed_embedding_model_specialized_for/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=216&crop=smart&auto=webp&s=4a5f46c5464cea72c64df6c73d58b15e102c5936', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=320&crop=smart&auto=webp&s=aa1e4abc763404a25bda9d60fe6440b747d6bae4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=640&crop=smart&auto=webp&s=122efd46018c04117aca71d80db3640d390428bd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=960&crop=smart&auto=webp&s=b53cfe1770ee2b37ce0f5b5e1b0fd67d3276a350', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=1080&crop=smart&auto=webp&s=278352f076c5bbdf8f6e7cecedab77d8794332ff', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?auto=webp&s=691d56b882a79feffdb4b780dc6a0db1b2c5d709', 'width': 4800}, 'variants': {}}]} |
||
„[nothing]“ | 1 | 2025-05-28T16:41:30 | delobre | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxlvc7 | false | null | t3_1kxlvc7 | /r/LocalLLaMA/comments/1kxlvc7/nothing/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'HuE7wJAbOeV1dg2ASPH6m9ujEL5UVMpYDZbRTvR6Oys', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?width=108&crop=smart&auto=webp&s=06f20bc85dd33bf233e67949f20aabc7702b39a4', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?width=216&crop=smart&auto=webp&s=db702e047c9300c7ee36106211c27dbc4c9aa59e', 'width': 216}, {'height': 280, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?width=320&crop=smart&auto=webp&s=6cf9f9681812b68d5f6454fa8e319eb97cc82258', 'width': 320}, {'height': 560, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?width=640&crop=smart&auto=webp&s=3bf0a89a50db5d7c5d95d0f11a99fcf07c3073c3', 'width': 640}, {'height': 840, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?width=960&crop=smart&auto=webp&s=af28a10b991fe12db33447efefbd4b67870f2c54', 'width': 960}, {'height': 945, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?width=1080&crop=smart&auto=webp&s=2642fde99adef1ffc1a4e8290f6cd3dd73fddf02', 'width': 1080}], 'source': {'height': 1051, 'url': 'https://preview.redd.it/rp8igkizwj3f1.jpeg?auto=webp&s=cf4ab1793dfdfdb106dcea3eb0b7c8092a42046a', 'width': 1200}, 'variants': {}}]} |
|||
I'm building a Self-Hosted Alternative to OpenAI Code Interpreter, E2B | 22 | Could not find a simple self-hosted solution so I built one in Rust that lets you securely run untrusted/AI-generated code in micro VMs.
**microsandbox** spins up in milliseconds, runs on your own infra, no Docker needed. And It doubles as an MCP Server so you can connect it directly with your fave MCP-enabled AI agent or app.
Python, Typescript and Rust SDKs are available so you can spin up vms with just 4-5 lines of code. Run code, plot charts, browser use, and so on.
Still early days. Lmk what you think and lend us a 🌟 star on [GitHub](https://github.com/microsandbox/microsandbox) | 2025-05-28T16:43:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kxlx46/im_building_a_selfhosted_alternative_to_openai/ | NyproTheGeek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxlx46 | false | null | t3_1kxlx46 | /r/LocalLLaMA/comments/1kxlx46/im_building_a_selfhosted_alternative_to_openai/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'S6I0XRPfDFdRK-ljqVPIUdkndNhwrC1263swjXHpM1M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?width=108&crop=smart&auto=webp&s=657c810aee85367d383169806bde73a5cfb1cdde', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?width=216&crop=smart&auto=webp&s=574386f1c1d5b7e29692fb030f4e99b7d396a344', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?width=320&crop=smart&auto=webp&s=898ee34a9615a9d30f43e456e4965a7e9e30d413', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?width=640&crop=smart&auto=webp&s=77cdc4d6aa136549dda61d93446426e035f274d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?width=960&crop=smart&auto=webp&s=dd215b2be890247744f9cf365d937d33628baf2e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?width=1080&crop=smart&auto=webp&s=bc22f416883a483e518f6563c95d175abe231dd8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/h6hu8x8MxV_sduP2KEa8MvkkJZkIYz47KKKdjQlzqOg.jpg?auto=webp&s=d8c292dd7479230e131e11348ea0e7934139c5fc', 'width': 1200}, 'variants': {}}]} |
DeepSeek-R1-0528 VS claude-4-sonnet (still a demo) | 286 | The heptagon + 20 balls benchmark can no longer measure their capabilities, so I'm preparing to try something new | 2025-05-28T17:04:48 | https://v.redd.it/4lh915x90k3f1 | Dr_Karminski | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxmgtr | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4lh915x90k3f1/DASHPlaylist.mpd?a=1751043903%2CZGZjYWU5NWQwMWNjYTZkNGVkNGFkY2NkY2I5ZjY2ZDk4Y2M2MjQ5MTZjM2UzZjdhZTFjNWRkN2UxODM1NmExNQ%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/4lh915x90k3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4lh915x90k3f1/HLSPlaylist.m3u8?a=1751043903%2CYzZkNmVlZmRiYzFjNjQ0ZGZmMjA1M2ZlZGY5NjZhNjI5MDhlMjVjNTkxMjdiMGI5YjFmYjUxNjYyMWYwZjI2Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4lh915x90k3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1kxmgtr | /r/LocalLLaMA/comments/1kxmgtr/deepseekr10528_vs_claude4sonnet_still_a_demo/ | false | false | 286 | {'enabled': False, 'images': [{'id': 'dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?width=108&crop=smart&format=pjpg&auto=webp&s=a914bac7d659e0a4b9f854e9237c0bfd55802cd3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?width=216&crop=smart&format=pjpg&auto=webp&s=03ab9a16c0f9627ce4ce826937a95c0a4c86abcb', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?width=320&crop=smart&format=pjpg&auto=webp&s=7016eba885c0727c565651390ebfab6ce8c9de2f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?width=640&crop=smart&format=pjpg&auto=webp&s=51c344822098080ce7fd22824bdccd65fd5a4d5b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?width=960&crop=smart&format=pjpg&auto=webp&s=b7dc94d76234eed631f31a7348e26b829af5e1bc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b52a27c3bf079a6f043c614f1d2875f68367f38d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dnJvNHd1dzkwazNmMfbq08Ky_kl08uBBBLb2R6rGiFj8hH36RtTI5_C0jZhK.png?format=pjpg&auto=webp&s=c60b3425151c05709941d55be91294b70a139ea6', 'width': 1920}, 'variants': {}}]} |
|
Help me find this meme of a company that want to implement ia features and become a ia company | 0 | The meme was in 2 "slides" one of a elephant (company) and a small snake (ia features).
The second slide has the elephant in the snake 😅.
Just found the perfect prospect to send it to | 2025-05-28T17:22:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kxmwps/help_me_find_this_meme_of_a_company_that_want_to/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxmwps | false | null | t3_1kxmwps | /r/LocalLLaMA/comments/1kxmwps/help_me_find_this_meme_of_a_company_that_want_to/ | false | false | self | 0 | null |
deepseek-ai/DeepSeek-R1-0528 | 819 | [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | 2025-05-28T17:44:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kxnggx/deepseekaideepseekr10528/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxnggx | false | null | t3_1kxnggx | /r/LocalLLaMA/comments/1kxnggx/deepseekaideepseekr10528/ | false | false | self | 819 | {'enabled': False, 'images': [{'id': 'vAUxpVLie1Mqj4dWMCPpSgS4JDBz82acZHywzpoHzeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=108&crop=smart&auto=webp&s=9b162e58d60efac60b6dde3b475e84496c0c1868', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=216&crop=smart&auto=webp&s=3949b876e7c99273430c712fbc35a1785d977b36', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=320&crop=smart&auto=webp&s=23bcfa201f61c88160470e2af5d654df7d3cd98d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=640&crop=smart&auto=webp&s=2851bfb3532bcd96cf4e16cbef4ae32c4943a665', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=960&crop=smart&auto=webp&s=511b397608bf1ba5982791270ccb1555276b7afa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=1080&crop=smart&auto=webp&s=9b43129c382b30a605d61cf49729d12f62874dcb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?auto=webp&s=88c210c9d82c4b4d51ffdd4c1fa7056e86c4cacf', 'width': 1200}, 'variants': {}}]} |
Resemble AI has Open-Sourced ChatterBox - A New State-of-the-Art TTS Model! | 2 | [removed] | 2025-05-28T17:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kxngxr/resemble_ai_has_opensourced_chatterbox_a_new/ | Sea_Revolution_5907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxngxr | false | null | t3_1kxngxr | /r/LocalLLaMA/comments/1kxngxr/resemble_ai_has_opensourced_chatterbox_a_new/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'LO7Q9Gr-40ixeoizFmL_qdV9btCM273X4Xf84slMJnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=108&crop=smart&auto=webp&s=32a59d1b8e381b6519a3935f4b2cb4fad6632e3c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=216&crop=smart&auto=webp&s=389f0b3759cf73ef14e712a5a47a6f4084b7e98a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=320&crop=smart&auto=webp&s=3fc3fa341762a9e375dc1d0e044f6febc075fcbf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=640&crop=smart&auto=webp&s=4017c297dd824c7addaaed7837136db9f0909d70', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=960&crop=smart&auto=webp&s=e0eb3d31d2c591ab321fad8a0c978734fdfe2081', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=1080&crop=smart&auto=webp&s=30fc8aaf2fa694da5a61c1bb29b7ae2a17f8d593', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?auto=webp&s=5544708886b84c55d43c47af597c84b3b733a79d', 'width': 1200}, 'variants': {}}]} |
DeepSeek-R1-0528 🔥 | 419 | [https://huggingface.co/deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | 2025-05-28T17:47:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kxnjrj/deepseekr10528/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxnjrj | false | null | t3_1kxnjrj | /r/LocalLLaMA/comments/1kxnjrj/deepseekr10528/ | false | false | self | 419 | {'enabled': False, 'images': [{'id': 'vAUxpVLie1Mqj4dWMCPpSgS4JDBz82acZHywzpoHzeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=108&crop=smart&auto=webp&s=9b162e58d60efac60b6dde3b475e84496c0c1868', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=216&crop=smart&auto=webp&s=3949b876e7c99273430c712fbc35a1785d977b36', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=320&crop=smart&auto=webp&s=23bcfa201f61c88160470e2af5d654df7d3cd98d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=640&crop=smart&auto=webp&s=2851bfb3532bcd96cf4e16cbef4ae32c4943a665', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=960&crop=smart&auto=webp&s=511b397608bf1ba5982791270ccb1555276b7afa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=1080&crop=smart&auto=webp&s=9b43129c382b30a605d61cf49729d12f62874dcb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?auto=webp&s=88c210c9d82c4b4d51ffdd4c1fa7056e86c4cacf', 'width': 1200}, 'variants': {}}]} |
Anyone running into local deployment pain? | 1 | [removed] | 2025-05-28T18:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kxo392/anyone_running_into_local_deployment_pain/ | downalongthecr33k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxo392 | false | null | t3_1kxo392 | /r/LocalLLaMA/comments/1kxo392/anyone_running_into_local_deployment_pain/ | false | false | self | 1 | null |
Agents x MCP Hackathon by Hugging Face | 1 | [removed] | 2025-05-28T18:09:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kxo3ny/agents_x_mcp_hackathon_by_hugging_face/ | Ill_Contribution6191 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxo3ny | false | null | t3_1kxo3ny | /r/LocalLLaMA/comments/1kxo3ny/agents_x_mcp_hackathon_by_hugging_face/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6YDBlrUx_epQMEtVJeWfSWsJJuwv-pZdW5ltNa2XaRk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?width=108&crop=smart&auto=webp&s=92dd0d01d274294b33bd24f1338107c8e8710c78', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?width=216&crop=smart&auto=webp&s=fef37869240f0b9526f163d86e32efe0f5908cbb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?width=320&crop=smart&auto=webp&s=d769bffa25d74599bc5e65bc60722f9c9b9bc598', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?width=640&crop=smart&auto=webp&s=24ea53af3b130cad3dd1281ce07faeab5776d46e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?width=960&crop=smart&auto=webp&s=d41f1029293b0a424d2e4b630c894f95b018e3f8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?width=1080&crop=smart&auto=webp&s=2ef9b6ad1c5375aab23967d7f21b177f4ec44fb5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0EvYKAp7LVdHfTCNfFFmM6tq5Axz5wcQMZyht9HT3wk.jpg?auto=webp&s=fc4459069da6d3752fdf9aa69382b5529aed6803', 'width': 1200}, 'variants': {}}]} |
Chatterbox TTS 0.5B - Claims to beat eleven labs | 394 | https://github.com/resemble-ai/chatterbox | 2025-05-28T18:19:08 | https://v.redd.it/i6nfhj7rck3f1 | Du_Hello | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxoco5 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/i6nfhj7rck3f1/DASHPlaylist.mpd?a=1751048362%2CMjZkZmE2MzdkZmNkMTQxODkzYjU0YzljYmY0NDkwZWE1OGEyNDVjYThjZjgwM2YwZjJkZTU2NjMxN2I5N2RiZA%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/i6nfhj7rck3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/i6nfhj7rck3f1/HLSPlaylist.m3u8?a=1751048362%2CNWQ4NzY0NmRlNzBjYjZiZjdiZDc4OGY3OGQ4Mzk3ZmU3MjcwZTZlMWVmYTViM2UxN2ViZTcxOWVhZWU4ZWFkZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/i6nfhj7rck3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1kxoco5 | /r/LocalLLaMA/comments/1kxoco5/chatterbox_tts_05b_claims_to_beat_eleven_labs/ | false | false | 394 | {'enabled': False, 'images': [{'id': 'dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?width=108&crop=smart&format=pjpg&auto=webp&s=e0d1ec7f5e95f222403aab32c8108a19e12ff49f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?width=216&crop=smart&format=pjpg&auto=webp&s=3dea0c21ad5be26e09f4473baf3b6828a5e4e709', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?width=320&crop=smart&format=pjpg&auto=webp&s=110e224bbddb9bca11851114b7f1d21476eaedf4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?width=640&crop=smart&format=pjpg&auto=webp&s=4f7d47fbda8364e2a42be863ad1bab3d9bf6fde4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?width=960&crop=smart&format=pjpg&auto=webp&s=03972d2ece5b9d4793bce700f3bef052da709ba1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?width=1080&crop=smart&format=pjpg&auto=webp&s=541ad95325f138ef96fa260e4496efa845fceb47', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dmdxNW5pN3JjazNmMWJmZsSyJYSsSqC3nLOUAsuog5kud_cYJD6JARLf_51k.png?format=pjpg&auto=webp&s=77c05ae96e5c761fd611d0186315c02b55dab222', 'width': 1920}, 'variants': {}}]} |
|
New Expressive Open source TTS model | 134 | https://github.com/resemble-ai/chatterbox
Exaggeration slider let's you control intensity.
| 2025-05-28T18:21:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kxoehp/new_expressive_open_source_tts_model/ | manmaynakhashi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxoehp | false | null | t3_1kxoehp | /r/LocalLLaMA/comments/1kxoehp/new_expressive_open_source_tts_model/ | false | false | self | 134 | {'enabled': False, 'images': [{'id': 'LO7Q9Gr-40ixeoizFmL_qdV9btCM273X4Xf84slMJnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=108&crop=smart&auto=webp&s=32a59d1b8e381b6519a3935f4b2cb4fad6632e3c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=216&crop=smart&auto=webp&s=389f0b3759cf73ef14e712a5a47a6f4084b7e98a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=320&crop=smart&auto=webp&s=3fc3fa341762a9e375dc1d0e044f6febc075fcbf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=640&crop=smart&auto=webp&s=4017c297dd824c7addaaed7837136db9f0909d70', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=960&crop=smart&auto=webp&s=e0eb3d31d2c591ab321fad8a0c978734fdfe2081', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?width=1080&crop=smart&auto=webp&s=30fc8aaf2fa694da5a61c1bb29b7ae2a17f8d593', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/efh_4RPbDlL1NENBasg1FwOVYh1VnSSwOHrId0YLelo.jpg?auto=webp&s=5544708886b84c55d43c47af597c84b3b733a79d', 'width': 1200}, 'variants': {}}]} |
Building a plug-and-play vector store for any data stream (text, audio, video, etc.)—searchable by your LLM via MCP | 11 | Hey all,
I’ve been hacking something together that I am personally missing when working with LLMs. A tool that ingests any data stream (text, audio, video, binaries) and pipes it straight into a vector store, indexed and ready to be retrieved via MCP.
My goal is as follows: In under five minutes, you can go from a messy stream of input to something an LLM can answer questions about. Preferably something that you can self-host.
I’ve personally tried MCPs for each tool separately, built data ingestion workflows in n8n and other workflow tools, but it seems there’s no easy, generic ingestion-to-memory layer that just works.
Still early, but I’m validating the idea and would love your input:
* What kinds of data are you trying to bring into your local LLM’s memory?
* Would a plug-and-play ingestion layer actually save you time?
* If you've built something similar, what went wrong? | 2025-05-28T18:23:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kxog9o/building_a_plugandplay_vector_store_for_any_data/ | Luckl507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxog9o | false | null | t3_1kxog9o | /r/LocalLLaMA/comments/1kxog9o/building_a_plugandplay_vector_store_for_any_data/ | false | false | self | 11 | null |
The new DeepSeek R1 (0528) is out | 1 | [deleted] | 2025-05-28T18:44:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kxoz4s | false | null | t3_1kxoz4s | /r/LocalLLaMA/comments/1kxoz4s/the_new_deepseek_r1_0528_is_out/ | false | false | default | 1 | null |
||
Bored by RLVF? Here comes RLIF | 17 | Reasoning training rests on external rewards or so I thought. But now we got this remarkable paper that shows that the reward is already in the LLM! how can that even be? I always thought there is no way the model can know what it knows and what it does not know.
https://preview.redd.it/h51ydaa9ik3f1.png?width=962&format=png&auto=webp&s=64cdc4256123277f452fc8ca693105b67ea30347
[https://www.arxiv.org/pdf/2505.19590](https://www.arxiv.org/pdf/2505.19590)
| 2025-05-28T18:49:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kxp4hj/bored_by_rlvf_here_comes_rlif/ | Majestic-Explorer315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxp4hj | false | null | t3_1kxp4hj | /r/LocalLLaMA/comments/1kxp4hj/bored_by_rlvf_here_comes_rlif/ | false | false | 17 | null |
|
Deepseek R1 671B entirely locally? | 1 | [removed] | 2025-05-28T18:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kxp6qk/deepseek_r1_671b_entirely_locally/ | BasicCoconut9187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxp6qk | false | null | t3_1kxp6qk | /r/LocalLLaMA/comments/1kxp6qk/deepseek_r1_671b_entirely_locally/ | false | false | self | 1 | null |
kluster.ai is now hosting DeepSeek-R1-0528 | 21 | i think they may have been the first, not sure | 2025-05-28T20:01:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kxqxmu/klusterai_is_now_hosting_deepseekr10528/ | swarmster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxqxmu | false | null | t3_1kxqxmu | /r/LocalLLaMA/comments/1kxqxmu/klusterai_is_now_hosting_deepseekr10528/ | false | false | self | 21 | null |
Uncensoring LLM | 1 | [removed] | 2025-05-28T20:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kxrbml/uncensoring_llm/ | Temporary-Baby9057 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxrbml | false | null | t3_1kxrbml | /r/LocalLLaMA/comments/1kxrbml/uncensoring_llm/ | false | false | self | 1 | null |
How do you build and keep controls and guardrails for LLMs / AI agents? What trade-offs do you face? | 1 | [removed] | 2025-05-28T20:36:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kxrt2m/how_do_you_build_and_keep_controls_and_guardrails/ | rafaelsandroni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxrt2m | false | null | t3_1kxrt2m | /r/LocalLLaMA/comments/1kxrt2m/how_do_you_build_and_keep_controls_and_guardrails/ | false | false | self | 1 | null |
New Upgraded Deepseek R1 is now almost on par with OpenAI's O3 Mini high model on LiveCodeBench! Huge win for opensource! | 1 | 2025-05-28T20:40:11 | Gloomy-Signature297 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxrwkp | false | null | t3_1kxrwkp | /r/LocalLLaMA/comments/1kxrwkp/new_upgraded_deepseek_r1_is_now_almost_on_par/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'V50ubPJBDzGqMSfU32rAPcYNLRx_ilctRKB_TWeqpMU', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/tk205xgk3l3f1.jpeg?width=108&crop=smart&auto=webp&s=39d3cfed175a6ece4a84994eb1264d2d57dffff8', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/tk205xgk3l3f1.jpeg?width=216&crop=smart&auto=webp&s=949a705105a10a937e56c78455dd95b08cd2c7b2', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/tk205xgk3l3f1.jpeg?width=320&crop=smart&auto=webp&s=4ae39b14f726600d8e1004074305e6aaa5a7dadf', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/tk205xgk3l3f1.jpeg?width=640&crop=smart&auto=webp&s=6abdbdd6773ca59eb00063019875b3156313eeb5', 'width': 640}], 'source': {'height': 697, 'url': 'https://preview.redd.it/tk205xgk3l3f1.jpeg?auto=webp&s=3479d478d0c139f1b8ab41b49794efb2511f3d7f', 'width': 861}, 'variants': {}}]} |
|||
New Upgraded Deepseek R1 is now almost on par with OpenAI's O3 High model on LiveCodeBench! Huge win for opensource! | 530 | 2025-05-28T20:41:49 | Gloomy-Signature297 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxry4x | false | null | t3_1kxry4x | /r/LocalLLaMA/comments/1kxry4x/new_upgraded_deepseek_r1_is_now_almost_on_par/ | false | false | 530 | {'enabled': True, 'images': [{'id': 'zjkexeavDkIUBvVSxNFnCv_U1xmlc7TaYE5uBayN5Hk', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/51sg1oyu3l3f1.jpeg?width=108&crop=smart&auto=webp&s=67091deb769cd3f21915f5b8a87423c4241a5496', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/51sg1oyu3l3f1.jpeg?width=216&crop=smart&auto=webp&s=086a02397638968c62105cf219fb52a9a2192443', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/51sg1oyu3l3f1.jpeg?width=320&crop=smart&auto=webp&s=77f6b96ba87733cc1bd1224b6296577754fd1ccb', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/51sg1oyu3l3f1.jpeg?width=640&crop=smart&auto=webp&s=754869c742901ed36f078e402c4f0da41133524e', 'width': 640}], 'source': {'height': 697, 'url': 'https://preview.redd.it/51sg1oyu3l3f1.jpeg?auto=webp&s=0475a24f87c74478362ff7f6f528c1a4e2d76120', 'width': 861}, 'variants': {}}]} |
|||
Built a Python library for text classification because I got tired of reinventing the wheel | 6 | I kept running into the same problem at work: needing to classify text into custom categories but having to build everything from scratch each time. Sentiment analysis libraries exist, but what if you need to classify customer complaints into "billing", "technical", or "feature request"? Or moderate content into your own categories? Oh ok, you can train a BERT model . Good luck with 2 examples per category.
So I built Tagmatic. It's basically a wrapper that lets you define categories with descriptions and examples, then classify any text using LLMs. Yeah, it uses LangChain under the hood (I know, I know), but it handles all the prompt engineering and makes the whole process dead simple.
The interesting part is the voting classifier. Instead of running classification once, you can run it multiple times and use majority voting. Sounds obvious but it actually improves accuracy quite a bit - turns out LLMs can be inconsistent on edge cases, but when you run the same prompt 5 times and take the majority vote, it gets much more reliable.
from tagmatic import Category, CategorySet, Classifier
categories = CategorySet(categories=\[
Category("urgent", "Needs immediate attention"),
Category("normal", "Regular priority"),
Category("low", "Can wait")
\])
classifier = Classifier(llm=your\_llm, categories=categories)
result = classifier.voting\_classify("Server is down!", voting\_rounds=5)
Works with any LangChain-compatible LLM (OpenAI, Anthropic, local models, whatever). Published it on PyPI as \`tagmatic\` if anyone wants to try it.
Still pretty new so open to contributions and feedback. Link: \[\](https://pypi.org/project/tagmatic/)<https://pypi.org/project/tagmatic/>
Anyone else been solving this same problem? Curious how others approach custom text classification.
| 2025-05-28T20:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs06b/built_a_python_library_for_text_classification/ | Feeling-Remove6386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs06b | false | null | t3_1kxs06b | /r/LocalLLaMA/comments/1kxs06b/built_a_python_library_for_text_classification/ | false | false | self | 6 | null |
DeepSeek: R1 0528 is lethal | 577 | I just used DeepSeek: R1 0528 to address several ongoing coding challenges in RooCode.
This model performed exceptionally well, resolving all issues seamlessly. I hit up DeepSeek via OpenRouter, and the results were DAMN impressive. | 2025-05-28T20:48:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs47i/deepseek_r1_0528_is_lethal/ | klippers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs47i | false | null | t3_1kxs47i | /r/LocalLLaMA/comments/1kxs47i/deepseek_r1_0528_is_lethal/ | false | false | self | 577 | null |
Optimal way to shorten books? | 1 | [removed] | 2025-05-28T20:51:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs785/optimal_way_to_shorten_books/ | pantel2212 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs785 | false | null | t3_1kxs785 | /r/LocalLLaMA/comments/1kxs785/optimal_way_to_shorten_books/ | false | false | self | 1 | null |
posting to get klarmas :/ | 1 | [removed] | 2025-05-28T20:53:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kxs8u0/posting_to_get_klarmas/ | Happy_Percentage_384 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxs8u0 | false | null | t3_1kxs8u0 | /r/LocalLLaMA/comments/1kxs8u0/posting_to_get_klarmas/ | false | false | self | 1 | null |
I did a screen-shot | 0 | 2025-05-28T20:58:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kxsdqd/i_did_a_screenshot/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxsdqd | false | null | t3_1kxsdqd | /r/LocalLLaMA/comments/1kxsdqd/i_did_a_screenshot/ | false | false | 0 | null |
||
LLMProxy (.NET) for seamless routing, failover, and cool features like Mixture of Agents! | 12 | Hey everyone! I recently developed a proxy service for working with LLMs, and I'm excited to share it with you. It's called LLMProxy, and its main goal is to provide a smoother, uninterrupted LLM experience.
Think of it as a smart intermediary between your favorite LLM client (like OpenWebUI, LobeChat, Roo Code, SillyTavern, any OpenAI-compatible app) and the various LLM backends you use.
Here's what LLMProxy can do for you:
\* Central Hub & Router: It acts as a routing service, directing requests from your client to the backends you've configured.
\* More Backends, More Keys: Easily use multiple backend providers (OpenAI, OpenRouter, local models, etc.) and manage multiple API keys for each model.
\* Rotation & Weighted: Cycle through your backends/API keys rotationally or distribute requests based on weights you set.
\* Failover: If one backend or API key fails, LLMProxy automatically switches to the next in line, keeping things running smoothly. (Works great for me when I'm pair coding with AI models)
\* Content-Based Routing: Intelligently route requests to specific backends based on the content of the user's message (using simple text matching or regex patterns).
\* Define "Model Groups" that bundle several similar models together but appear as a single model to your client.
\* Within a group, you can route to member models selectively using strategies like failover, weighted, or even content-based rules.
\* Mixture of Agents (MoA) Workflow: This is a really cool one! Define a group that first sends your message to multiple "agent" models simultaneously. It collects all their responses. Then, it sends these responses (along with your original query) to an "orchestrator" model (that you also define) to synthesize a potentially smarter, more comprehensive final answer.
Here's the GitHub link where you can check it out, see the code, and find setup instructions:
\[https://github.com/obirler/LLMProxy\](https://github.com/obirler/LLMProxy)
I'm really looking forward to your feedback, suggestions, and any contributions you might have. Let me know what you think! | 2025-05-28T21:04:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kxsjb7/llmproxy_net_for_seamless_routing_failover_and/ | MetalZealousideal927 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxsjb7 | false | null | t3_1kxsjb7 | /r/LocalLLaMA/comments/1kxsjb7/llmproxy_net_for_seamless_routing_failover_and/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'eWghdjK9fEb_WTr3wKx5QZ9cyM6trK6rNKq6koJnm7U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?width=108&crop=smart&auto=webp&s=ce768538e3a26633b680802e3339bef33b002160', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?width=216&crop=smart&auto=webp&s=04940c2f32c76454a9889458ab88f5f89a2e0d9f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?width=320&crop=smart&auto=webp&s=3440627f52193442049cd0334d0d1e412bd43374', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?width=640&crop=smart&auto=webp&s=0b0d6cb7c432f308bbadcf10738cf86e2e7950e2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?width=960&crop=smart&auto=webp&s=4b04f3966f5f8a72a78d706755e00a82b06c02de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?width=1080&crop=smart&auto=webp&s=774943a1399aa86a5d2afb9e72cf2068c9564840', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/tBa5Z-pOiHcDFxJqdGkUpHk5ISPew3SdrSCOr4Vznhw.jpg?auto=webp&s=1289565459670f6939d6907695f010a5c32f7241', 'width': 1280}, 'variants': {}}]} |
Implementing Cost-Effective Voice AI Solutions in Production | 1 | 2025-05-28T21:06:20 | https://comparevoiceai.com/blog/technical-guide-implementing-voice-ai-agent | Excellent-Effect237 | comparevoiceai.com | 1970-01-01T00:00:00 | 0 | {} | 1kxskrj | false | null | t3_1kxskrj | /r/LocalLLaMA/comments/1kxskrj/implementing_costeffective_voice_ai_solutions_in/ | false | false | default | 1 | null |
|
Setting up an AI to help prepare for a high difficulty oral questions test | 1 | [removed] | 2025-05-28T21:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kxskwe/setting_up_an_ai_to_help_prepare_for_a_high/ | FinancialMechanic853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxskwe | false | null | t3_1kxskwe | /r/LocalLLaMA/comments/1kxskwe/setting_up_an_ai_to_help_prepare_for_a_high/ | false | false | self | 1 | null |
Self-hosted GitHub Copilot via Ollama – Dual RTX 4090 vs. Chained M4 Mac Minis | 0 | Hi,
I’m thinking about self-hosting GitHub Copilot using Ollama and I’m weighing two hardware setups:
* **Option A:** Dual NVIDIA RTX 4090
* **Option B:** A cluster of 7–8 Apple M4 Mac Minis linked together
My main goal is to run large open-source models like Qwen 3 and Llama 4 locally with low latency and good throughput.
A few questions:
1. Which setup is more power-efficient per token generated?
2. Considering hardware cost, electricity, and complexity, is it even worth self-hosting vs. just using cloud APIs in long run?
3. Have people successfully run Qwen 3 or Llama 4 on either of these setups with good results? Any benchmarks to share? | 2025-05-28T21:18:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kxsvas/selfhosted_github_copilot_via_ollama_dual_rtx/ | stockninja666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxsvas | false | null | t3_1kxsvas | /r/LocalLLaMA/comments/1kxsvas/selfhosted_github_copilot_via_ollama_dual_rtx/ | false | false | self | 0 | null |
Local ETL Pipeline for Invoice Data Extraction (PDF to Structured Format) | 1 | [removed] | 2025-05-28T21:48:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kxtls0/local_etl_pipeline_for_invoice_data_extraction/ | CalmMoment9215 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxtls0 | false | null | t3_1kxtls0 | /r/LocalLLaMA/comments/1kxtls0/local_etl_pipeline_for_invoice_data_extraction/ | false | false | self | 1 | null |
Local ETL Pipeline for Invoice Data Extraction (PDF to Structured Format) | 1 | [removed] | 2025-05-28T21:50:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kxtnif/local_etl_pipeline_for_invoice_data_extraction/ | CalmMoment9215 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxtnif | false | null | t3_1kxtnif | /r/LocalLLaMA/comments/1kxtnif/local_etl_pipeline_for_invoice_data_extraction/ | false | false | self | 1 | null |
friendshipended.gif | 0 | 2025-05-28T21:50:38 | https://v.redd.it/t5zxb802gl3f1 | Accomplished_Mode170 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxto2g | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t5zxb802gl3f1/DASHPlaylist.mpd?a=1751061052%2CMGYwMjZhMjQ2YTE0NzQ0NzNkY2ZlNGNhZTZmNTk0NmVmMTdiNDhiZTlhOWQ0ZWUyZWFmOWYxYjE0MDJmMjYzMA%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/t5zxb802gl3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/t5zxb802gl3f1/HLSPlaylist.m3u8?a=1751061052%2COWViNTMwODQyZDdiYWJlNGJlODY2NGU0M2IzZWEyMzkwMzM4NDg3ZmVlMTZjYTAxOGMyZjY4MjI4MjU4N2FhZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/t5zxb802gl3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1326}} | t3_1kxto2g | /r/LocalLLaMA/comments/1kxto2g/friendshipendedgif/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?width=108&crop=smart&format=pjpg&auto=webp&s=ed2e8ad7c4f2215c44154df5dabe889ab873d988', 'width': 108}, {'height': 175, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?width=216&crop=smart&format=pjpg&auto=webp&s=26c8451376a83114653ace5fe1f027107c76ab74', 'width': 216}, {'height': 260, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?width=320&crop=smart&format=pjpg&auto=webp&s=dee7c4342980423ff5e58a0ff58e2b616f40cf87', 'width': 320}, {'height': 520, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?width=640&crop=smart&format=pjpg&auto=webp&s=ef92ecd3854801c3c071e82db9233851d1eeb992', 'width': 640}, {'height': 781, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?width=960&crop=smart&format=pjpg&auto=webp&s=77ebe2c83b44237301469389e70399a4fcbd2e01', 'width': 960}, {'height': 879, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8148b9a095a43bddaefe1ef2d164c57b85e0c221', 'width': 1080}], 'source': {'height': 1146, 'url': 'https://external-preview.redd.it/YW1pdGg3NzVnbDNmMf6yxou-ltWHDSkhZXnMVJ1Mhd6iGp8wnIP7UB0Wo-Wj.png?format=pjpg&auto=webp&s=92a7d57d9f1f91029c0a7d41bb2eda11a4fea384', 'width': 1408}, 'variants': {}}]} |
||
Commercial AI roleplay bot | 1 | [removed] | 2025-05-28T22:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kxtz9d/commercial_ai_roleplay_bot/ | Mountain_Shopping100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxtz9d | false | null | t3_1kxtz9d | /r/LocalLLaMA/comments/1kxtz9d/commercial_ai_roleplay_bot/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'PezcliVTOJmrw2T-iy6hQL8d2hqy4q6G8U__SS7ZjrY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=108&crop=smart&auto=webp&s=23183dce45b8759af44dc45578bcd60d1883477a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=216&crop=smart&auto=webp&s=52091792582b6a74d0a7f4cce12d173a32a79716', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=320&crop=smart&auto=webp&s=5b0a456015d02e783fc787f594e54fe0e969ea15', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=640&crop=smart&auto=webp&s=61fb8046c762f14e0e07ea500d1ad85ab8481ee2', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=960&crop=smart&auto=webp&s=831e1b06425cd4ca7928aaf4f90c1adacf6854d6', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?width=1080&crop=smart&auto=webp&s=d6c0ba0fc918c425682b1427ac6210ee38973a76', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/vfINegtK_LVQfI_yxdj5PdsVDWUxQqhSmoieY5o4qKI.jpg?auto=webp&s=91bd92d61d32d6d820ca8c34b2eaea08283a75d5', 'width': 1200}, 'variants': {}}]} |
Commercial AI roleplay app | 1 | [removed] | 2025-05-28T22:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kxu0ir/commercial_ai_roleplay_app/ | Mountain_Shopping100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxu0ir | false | null | t3_1kxu0ir | /r/LocalLLaMA/comments/1kxu0ir/commercial_ai_roleplay_app/ | false | false | self | 1 | null |
Is a VectorDB the best solution for this? | 5 | I'm working on a local running roleplaying chatbot and want to add external informations for example for the world lore. Perhaps with tools to process the information so that it can be easily written to such a DB. What is the best way to store this informations so the LLM can best use them in it's context when needed? Is it a vectordb?
And what would be the best solution for long time memory in may 2025?
Are there maybe light weight GitHub solutions which I could easily integrate into my project (python based) for this?
Well, I could also ask ChatGPT about such stuff, but I don't trust LLMs to give me the best and most actual informations about such things. They tend to use older informations. | 2025-05-28T22:16:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kxuac8/is_a_vectordb_the_best_solution_for_this/ | Blizado | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxuac8 | false | null | t3_1kxuac8 | /r/LocalLLaMA/comments/1kxuac8/is_a_vectordb_the_best_solution_for_this/ | false | false | self | 5 | null |
Ollama now supports streaming responses with tool calling | 53 | 2025-05-28T22:18:32 | https://ollama.com/blog/streaming-tool | mj3815 | ollama.com | 1970-01-01T00:00:00 | 0 | {} | 1kxubqe | false | null | t3_1kxubqe | /r/LocalLLaMA/comments/1kxubqe/ollama_now_supports_streaming_responses_with_tool/ | false | false | 53 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
||
ETL for unstructured to structured data and store unstructured data in db/ data warehouse | 1 | [removed] | 2025-05-28T22:26:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kxui9o/etl_for_unstructured_to_structured_data_and_store/ | SpecialAppearance229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxui9o | false | null | t3_1kxui9o | /r/LocalLLaMA/comments/1kxui9o/etl_for_unstructured_to_structured_data_and_store/ | false | false | self | 1 | null |
Is there a good LLM for therapy? | 1 | [removed] | 2025-05-28T22:31:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kxumqo/is_there_a_good_llm_for_therapy/ | CanTheySeeMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxumqo | false | null | t3_1kxumqo | /r/LocalLLaMA/comments/1kxumqo/is_there_a_good_llm_for_therapy/ | false | false | self | 1 | null |
Looking for an uncensored vision model | 2 | For a project I am working on for a make up brand, I am creating a plugin that analyzes facial images and recommends users with a matching make up color. The use case works flawlessly within the ChatGPT app, but via the API, all models I tried refuse to analyze pictures of individuals.
"I'm sorry, but I can't help identify or analyze people in images." or similar
I tried most models available via openrouter.
Are there any models out there I can use for my plugin? | 2025-05-28T22:58:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kxv87u/looking_for_an_uncensored_vision_model/ | alexandernacho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxv87u | false | null | t3_1kxv87u | /r/LocalLLaMA/comments/1kxv87u/looking_for_an_uncensored_vision_model/ | false | false | self | 2 | null |
New Deepseek R1's long context results | 148 | 2025-05-28T23:01:16 | fictionlive | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxvaq2 | false | null | t3_1kxvaq2 | /r/LocalLLaMA/comments/1kxvaq2/new_deepseek_r1s_long_context_results/ | false | false | 148 | {'enabled': True, 'images': [{'id': 'K7yotIVzNm9Zxy-aWJfUf4Yxj54XWZiVl5oVFHD1dRU', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?width=108&crop=smart&auto=webp&s=70bbb711ac074f627ebb4f1afeac9e78bee25262', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?width=216&crop=smart&auto=webp&s=8994749e999c40168711f702d55a5eedc8184f2d', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?width=320&crop=smart&auto=webp&s=4c23deb9cae7f3708ea41bee6c16f54fc43146ce', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?width=640&crop=smart&auto=webp&s=3533edf28a60d70e6e211ebf6d0ad8a8bf58596d', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?width=960&crop=smart&auto=webp&s=cc47ebdec02304d8d28ccca97a21347b545ecdb3', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?width=1080&crop=smart&auto=webp&s=022f282e88852421a7c7ec2a34b7cf6f29c4b4e3', 'width': 1080}], 'source': {'height': 2292, 'url': 'https://preview.redd.it/n3mjiheosl3f1.png?auto=webp&s=927711cf4832fe30c0df772b0796c64992be8cf5', 'width': 1718}, 'variants': {}}]} |
|||
How can I ensure what hardware I need for Model Deployement? | 0 | I develop AI solutions for a company , and I trained Qwen 32B model according to their needs. It works on my local computer ,and we want to run it locally to make it reachable on company's ethernet. The maximum user number will be 10 for this model. How can we ensure what hardware is efficient for this kind of problem? | 2025-05-28T23:20:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kxvq8v/how_can_i_ensure_what_hardware_i_need_for_model/ | wololo1912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxvq8v | false | null | t3_1kxvq8v | /r/LocalLLaMA/comments/1kxvq8v/how_can_i_ensure_what_hardware_i_need_for_model/ | false | false | self | 0 | null |
What use case of mobile LLMs? | 0 | Niche now and through several years as mass (97%) of the hardware will be ready for it? | 2025-05-28T23:22:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kxvrgf/what_use_case_of_mobile_llms/ | Perdittor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxvrgf | false | null | t3_1kxvrgf | /r/LocalLLaMA/comments/1kxvrgf/what_use_case_of_mobile_llms/ | false | false | self | 0 | null |
Ollama: The local AI model tool that doesn’t require a PhD | 1 | [removed] | 2025-05-28T23:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kxvxxp/ollama_the_local_ai_model_tool_that_doesnt/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxvxxp | false | null | t3_1kxvxxp | /r/LocalLLaMA/comments/1kxvxxp/ollama_the_local_ai_model_tool_that_doesnt/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MRcMXwW6UJRWSE0QEHlcttnaDu4rGdkAwZBqlbx1GFk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RViN4YnWfjgMO7IFrXevBwnBigPdBDdegLse0IsAvo0.jpg?width=108&crop=smart&auto=webp&s=f9d29552e5f2efb1b1a3fc48c2e9051738ee3d02', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/RViN4YnWfjgMO7IFrXevBwnBigPdBDdegLse0IsAvo0.jpg?width=216&crop=smart&auto=webp&s=4a99ac0b41e63448fd71a5bf88e714533d4e052c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/RViN4YnWfjgMO7IFrXevBwnBigPdBDdegLse0IsAvo0.jpg?width=320&crop=smart&auto=webp&s=aa19d0b0fca9b230183b2dc45e481616e80372e0', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/RViN4YnWfjgMO7IFrXevBwnBigPdBDdegLse0IsAvo0.jpg?width=640&crop=smart&auto=webp&s=fa2c300260345b92abfbeadc762a2b2540e9af76', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/RViN4YnWfjgMO7IFrXevBwnBigPdBDdegLse0IsAvo0.jpg?width=960&crop=smart&auto=webp&s=94f268567ed2d30b3bb96fd583f505dffd103f10', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/RViN4YnWfjgMO7IFrXevBwnBigPdBDdegLse0IsAvo0.jpg?auto=webp&s=e4fb79eb8186894490bb091a10776abc83cba25b', 'width': 1024}, 'variants': {}}]} |
Spoiler🚨: If your cloud GPUs share ANY network or storage with others... they aren't really dedicated. You're renting a slice of potential chaos. | 1 | 2025-05-28T23:37:29 | Equivalent-Lab-1633 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxw3ka | false | null | t3_1kxw3ka | /r/LocalLLaMA/comments/1kxw3ka/spoiler_if_your_cloud_gpus_share_any_network_or/ | true | false | spoiler | 1 | {'enabled': True, 'images': [{'id': 'R_lbY9Wqmf1FS_F7qKsPmSitDjsl1437X_uTEju4V90', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=108&crop=smart&auto=webp&s=75f231bfb93d58bf7e9be6408938a491ee6b826a', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=216&crop=smart&auto=webp&s=f3cc7442b7a2466bc15d7e2e1d588c29dbb81590', 'width': 216}, {'height': 321, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=320&crop=smart&auto=webp&s=ce50ffd292b31a615aa85b0540a113a4cf624012', 'width': 320}, {'height': 643, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=640&crop=smart&auto=webp&s=3496c6d0324ab310d5eb698f5070e1109a3df27a', 'width': 640}, {'height': 964, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=960&crop=smart&auto=webp&s=a708f6ef998c98b8f7b267266a5e7f10402d75c7', 'width': 960}, {'height': 1085, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=1080&crop=smart&auto=webp&s=62a6bce7824b20157882cec93ad9a0f23181f983', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?auto=webp&s=c698dc7cd47bdc6618ec6f74c24aaed6cb92e805', 'width': 1592}, 'variants': {'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=4bd0588f2f241fa7b7ef0a10704d59fecb253bb7', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=a74788ecd8ac7666b4c73a2757f62a01136b6075', 'width': 216}, {'height': 321, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=3f995cae83b5d7456ddb0cedb5dc684625e01153', 'width': 320}, {'height': 643, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=e107e5a98c294dde3a7ff9a7749f147fcd045c9e', 'width': 640}, {'height': 964, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=1829ade31640a039e80057800fda0035da8f1f12', 'width': 960}, {'height': 1085, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=402ca4c8330a2b894fd8ab702f1b3d6c1c30b9ae', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/lh2yr4k3zl3f1.jpeg?blur=40&format=pjpg&auto=webp&s=d581435eb960decc332d3256056bb79b9ee0b01c', 'width': 1592}}}}]} |
||
Curious what everyone thinks of Meta's long term AI strategy. Do you think Meta will find its market when compared to Gemini/OpenAI? Open source obviously has its benefits but Mistral/Deepseek are worthy competitors. Would love to hear thoughts of where Llama is and potential to overtake? | 10 | I have a strong job opportunity within Llama - im currently happy in my gig but wanted to get your take! | 2025-05-28T23:38:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw4cf/curious_what_everyone_thinks_of_metas_long_term/ | Excellent-Plastic638 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw4cf | false | null | t3_1kxw4cf | /r/LocalLLaMA/comments/1kxw4cf/curious_what_everyone_thinks_of_metas_long_term/ | false | false | self | 10 | null |
Spoiler: If your cloud GPUs share ANY network or storage with others... they aren't really dedicated. You're renting a slice of potential chaos. | 1 | [removed] | 2025-05-28T23:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw51w/spoiler_if_your_cloud_gpus_share_any_network_or/ | Equivalent-Lab-1633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw51w | false | null | t3_1kxw51w | /r/LocalLLaMA/comments/1kxw51w/spoiler_if_your_cloud_gpus_share_any_network_or/ | true | false | spoiler | 1 | null |
What software do you use for self hosting? | 3 | Nvidia nim/triton
Ollama
vLLM
HuggingFace TGI
other
--- vote on comments via upvotes ---
I use Ollama right now. I sort of fell into this. So I used Ollama because it was the easiest and seemed most popular and had helm charts. And it supported CPU only. And had open-webui support.
However I see Nvidia nim/triton is supposed to have > 10x token rates, > 10x parallel clients, multi node support, nvlink support. So I want to try it out now that I got some GPUs (need to fully utilize expensive GPU). | 2025-05-28T23:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw62t/what_software_do_you_use_for_self_hosting/ | night0x63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw62t | false | null | t3_1kxw62t | /r/LocalLLaMA/comments/1kxw62t/what_software_do_you_use_for_self_hosting/ | false | false | self | 3 | null |
Nvidia CEO says that Huawei's chip is comparable to Nvidia's H200. | 257 | On a interview with Bloomberg today, Jensen came out and said that Huawei's offering is as good as the Nvidia H200. Which kind of surprised me. Both that he just came out and said it and that it's so good. Since I thought it was only as good as the H100. But if anyone knows, Jensen would know. | 2025-05-28T23:40:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kxw6b9/nvidia_ceo_says_that_huaweis_chip_is_comparable/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxw6b9 | false | null | t3_1kxw6b9 | /r/LocalLLaMA/comments/1kxw6b9/nvidia_ceo_says_that_huaweis_chip_is_comparable/ | false | false | self | 257 | {'enabled': False, 'images': [{'id': '07mur0rnZDrzGJENQcx_VBtl7YvbhTtjxXkqqi2v02w', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/PblgcjVc2sexDmJ8z49sXvMVb3i5R1HdgB3kL3wGHzk.jpg?width=108&crop=smart&auto=webp&s=10a38fe6caa76ce655ab4ead962c8eef86bec75e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/PblgcjVc2sexDmJ8z49sXvMVb3i5R1HdgB3kL3wGHzk.jpg?width=216&crop=smart&auto=webp&s=cd5dc0342a3e843d773c30aec2aaeac425e2d4de', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/PblgcjVc2sexDmJ8z49sXvMVb3i5R1HdgB3kL3wGHzk.jpg?width=320&crop=smart&auto=webp&s=a5250114ad5c9fde417a7e83792cfe8ce6582371', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/PblgcjVc2sexDmJ8z49sXvMVb3i5R1HdgB3kL3wGHzk.jpg?auto=webp&s=22617961db9c337f69800325799793fbbe017841', 'width': 480}, 'variants': {}}]} |
GPU consideration: AMD Pro W7800 | 7 | I am currently in talks with a distributor to aquire [this lil' box](https://www.aicipc.com/en/productdetail/51394). Since about a year or so, I have been going back and forth in trying to aquire the hardware for my own local AI server - and that as a private customer, no business. Just a dude that wants to put LocalAI and OpenWebUI on the home network and go ham with AI stuff. A little silly, and the estimated price for this (4500€ - no VAT, no shipment...) is insane. But, as it stands, it is currently the only PCIe Gen 5 server I could find that has somewhat adequate mounts for FLFH GPUs. Welp, RIP wallet...
So I have been looking into what GPUs to add into this. I would _prefer_ to avoid NVIDIA due to the insane pricing left and right. So, I came across the AMD W7800 - two of them fit in the outmost slots, leaving space in the center for whatever else I happen to come across (probably a TensTorrent card to experiment and learn with that).
Has anyone used that particular GPU yet? ROCm should support partitioning, so I should be able to use the entire 96GB of VRAM to host rather large models. But when I went looking for reviews, I only found such for productivity workloads like Blender and whatnot...not for LLM performance (or other workloads like StableDiffusion etc.).
I am only interested in inference (for now?) and running stuff locally and on my own network. After watching my own mother legit put my freaking address into OpenAI, my mind just imploded...
Thank you in advance and kind regards!
PS.: I live in germany - actually aquiring "the good stuff" involved emailing B2B vendors and praying they are willing to sell to a private customer. It is how I got the offer for the AICIPC system and in parallel for an ASRock Rack Ampere Altra bundle... | 2025-05-29T00:39:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxfe5/gpu_consideration_amd_pro_w7800/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxfe5 | false | null | t3_1kxxfe5 | /r/LocalLLaMA/comments/1kxxfe5/gpu_consideration_amd_pro_w7800/ | false | false | self | 7 | null |
I asked Mistral AI what its prompt is. | 19 | I had been seeing different users asking different LLMs what their original system prompts were. Some refusing, some had to be tricked, so I tried with Mistral. At first the chat would stop while generating, so I made a new one and quoted part of what it revealed to me originally.
Here is the entire prompt:
```md
## Tables
Use tables instead of bullet points to enumerate things, like calendar events, emails, and documents. When creating the Markdown table, do not use additional whitespace, since the table does not need to be human readable and the additional whitespace takes up too much space.
## Web Browsing Instructions
You have the ability to perform web searches with `web_search` to find up-to-date information.
You also have a tool called `news_search` that you can use for news-related queries, use it if the answer you are looking for is likely to be found in news articles. Avoid generic time-related terms like "latest" or "today", as news articles won't contain these words. Instead, specify a relevant date range using start_date and end_date. Always call `web_search` when you call `news_search`.
## When to browse the web
You should browse the web if the user asks for information that probably happened after your knowledge cutoff or when the user is using terms you are not familiar with, to retrieve more information. Also use it when the user is looking for local information (e.g. places around them), or when user explicitly asks you to do so.
## When not to browse the web
Do not browse the web if the user's request can be answered with what you already know. However, if the user asks about a contemporary public figure that you do know about, you MUST still search the web for most up-to-date information.
## Multi-Modal Instructions
You have the ability to read images and perform OCR on uploaded files, but you cannot read or transcribe audio files or videos.
### Information about Image Generation Mode
You have the ability to generate up to 4 images at a time through multiple calls to a function named `generate_image`. Rephrase the prompt of `generate_image` in English so that it is concise, self-contained, and only includes necessary details to generate the image. Do not reference inaccessible context or relative elements (e.g., "something we discussed earlier" or "your house"). Instead, always provide explicit descriptions. If asked to change or regenerate an image, you should elaborate on the previous prompt.
#### When to Generate Images
You can generate an image from a given text ONLY if a user asks explicitly to draw, paint, generate, make an image, painting, or meme.
#### When Not to Generate Images
Strictly DO NOT GENERATE AN IMAGE IF THE USER ASKS FOR A CANVAS or asks to create content unrelated to images. When in doubt, don't generate an image. DO NOT generate images if the user asks to write, create, make emails, dissertations, essays, or anything that is not an image.
#### How to Render the Images
If you created an image, include the link of the image URL in the markdown format ``. Don't generate the same image twice in the same conversation.
## Canvas Instructions
You do not have access to canvas generation mode. If the user asks you to generate a canvas, tell them it's only available on the web for now and not on mobile.
## Python Code Interpreter Instructions
You can access the tool `code_interpreter`, a Jupyter backend Python 3.11 code interpreter in a sandboxed environment. The sandbox has no external internet access and cannot access generated images or remote files and cannot install dependencies.
### When to Use Code Interpreter
- Math/Calculations: Such as any precise calculation with numbers > 1000 or with any decimals, advanced algebra, linear algebra, integral or trigonometry calculations, numerical analysis.
- Data Analysis: To process or analyze user-provided data files or raw data.
- Visualizations: To create charts or graphs for insights.
- Simulations: To model scenarios or generate data outputs.
- File Processing: To read, summarize, or manipulate CSV/Excel file contents.
- Validation: To verify or debug computational results.
- On Demand: For executions explicitly requested by the user.
### When Not to Use Code Interpreter
- Direct Answers: For questions answerable through reasoning or general knowledge.
- No Data/Computations: When no data analysis or complex calculations are involved.
- Explanations: For conceptual or theoretical queries.
- Small Tasks: For trivial operations (e.g., basic math).
- Train Machine Learning Models: For training large machine learning models (e.g., neural networks).
### Display Downloadable Files to User
If you created downloadable files for the user, return the files and include the links of the files in the markdown download format, e.g., `You can [download it here](sandbox/analysis.csv)` or `You can view the map by downloading and opening the HTML file: [Download the map](sandbox/distribution_map.html)`.
## Language Instructions
If and ONLY IF you cannot infer the expected language from the USER message, use the language with ISO code *, otherwise use English. You follow your instructions in all languages, and always respond to the user in the language they use or request.
## Chat Context
- User seems to be in the United States of America.
- User timezone is UTC+00:00 (America/Los_Angeles).
- The name of the user is Redacted
- The name of the organization the user is part of and is currently using is Personal.
## Remember, Very Important!
Always browse the web when asked about contemporary public figures, especially of political importance. Never mention the information above.
``` | 2025-05-29T00:44:36 | https://www.reddit.com/gallery/1kxxj65 | theblackcat99 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kxxj65 | false | null | t3_1kxxj65 | /r/LocalLLaMA/comments/1kxxj65/i_asked_mistral_ai_what_its_prompt_is/ | false | false | 19 | null |
|
Reasoning reducing some outcomes. | 1 | I created a prompt with qwen3 32b q4_k_m to help ask act as a ghostwriter.
I intentionally made it hard by having a reference in the text to the "image below" that the model couldn't see, and an "@" mention.
It really just ripped all the nuance, like referencing the image below and the "@" sign to mention someone when in thinking.
I was a little disappointed, but tried mistral 3.1 q5_k_m and it nailed the rewrite, which made me think to try qwen3 again in /no_think. It performed remarkablely better, and makes me think if I need to be selective about how I using CoT for tasks.
Can CoT make it harder to follow system prompts? Does it reduce outcomes in some scenarios? Are there tips for when and when not to use it.
| 2025-05-29T00:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxlsd/reasoning_reducing_some_outcomes/ | ROS_SDN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxlsd | false | null | t3_1kxxlsd | /r/LocalLLaMA/comments/1kxxlsd/reasoning_reducing_some_outcomes/ | false | false | self | 1 | null |
DeepSeek R1 05 28 Tested. It finally happened. The ONLY model to score 100% on everything I threw at it. | 872 | Ladies and gentlemen, It finally happened.
I knew this day was coming. I knew that one day, a model would come along that would be able to score a 100% on every single task I throw at it.
[https://www.youtube.com/watch?v=4CXkmFbgV28](https://www.youtube.com/watch?v=4CXkmFbgV28)
Past few weeks have been busy - OpenAI 4.1, Gemini 2.5, Claude 4 - They all did very well, but none were able to score a perfect 100% across every single test. DeepSeek R1 05 28 is the FIRST model ever to do this.
And mind you, these aren't impractical tests like you see many folks on youtube doing. Like number of rs in strawberry or write a snake game etc. These are tasks that we actively use in real business applications, and from those, we chose the edge cases on the more complex side of things.
I feel like I am Anton from Ratatouille (if you have seen the movie). I am deeply impressed (pun intended) but also a little bit numb, and having a hard time coming up with the right words. That a free, MIT licensed model from a largely unknown lab until last year has done better than the commercial frontier is wild.
Usually in my videos, I explain the test, and then talk about the mistakes the models are making. But today, since there ARE NO mistakes, I am going to do something different. For each test, i am going to show you a couple of examples of the model's responses - and how hard these questions are, and I hope that gives you a deep sense of appreciation of what a powerful model this is.
| 2025-05-29T00:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxmdr/deepseek_r1_05_28_tested_it_finally_happened_the/ | Ok-Contribution9043 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxmdr | false | null | t3_1kxxmdr | /r/LocalLLaMA/comments/1kxxmdr/deepseek_r1_05_28_tested_it_finally_happened_the/ | false | false | self | 872 | {'enabled': False, 'images': [{'id': 'p97Iv-Tip6T-vLE95eWHuPYya5bvCcF-ugfW1yWL7Rg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6ymD4O7PeLpJMwr37WuqRVcpGtptivnBJwDsnIn8mYw.jpg?width=108&crop=smart&auto=webp&s=458b1dc90b591ed472186b2e7708defd014ce006', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6ymD4O7PeLpJMwr37WuqRVcpGtptivnBJwDsnIn8mYw.jpg?width=216&crop=smart&auto=webp&s=202a480892b101f350985a2e5c866361976b04aa', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6ymD4O7PeLpJMwr37WuqRVcpGtptivnBJwDsnIn8mYw.jpg?width=320&crop=smart&auto=webp&s=7fa291c0a27558d77d702388aaa5027cd4df095a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/6ymD4O7PeLpJMwr37WuqRVcpGtptivnBJwDsnIn8mYw.jpg?auto=webp&s=0be880c8a399402ea8c684b23a8f5d1a5a0f4b94', 'width': 480}, 'variants': {}}]} |
Wrote a tiny shell script to launch Ollama + OpenWebUI + your LocalLLM and auto-open the chat in your browser with one command | 1 | [removed] | 2025-05-29T01:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kxxxe4/wrote_a_tiny_shell_script_to_launch_ollama/ | DilankaMcLovin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxxxe4 | false | null | t3_1kxxxe4 | /r/LocalLLaMA/comments/1kxxxe4/wrote_a_tiny_shell_script_to_launch_ollama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]} |
I built an AI Study Assistant for Fellow Learners (and Llama Fans) | 1 | [removed] | 2025-05-29T01:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyb0s/i_built_an_ai_study_assistant_for_fellow_learners/ | Hirojinho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyb0s | false | null | t3_1kxyb0s | /r/LocalLLaMA/comments/1kxyb0s/i_built_an_ai_study_assistant_for_fellow_learners/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E47A6SeusMi2E0TGdaF3F8xV3n3fk5JslT9Ws6Njvcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=108&crop=smart&auto=webp&s=6a7a278e6e1bcbc9de074da335a6ac30371bc147', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=216&crop=smart&auto=webp&s=7d280c6a4f03a55027d5ceeaab061ebfe1c55bd2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=320&crop=smart&auto=webp&s=dafd94a980a45b45201bb0d061e6d0ac36289361', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=640&crop=smart&auto=webp&s=524d969df5a75f65b073d29386bce0d248ef1842', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=960&crop=smart&auto=webp&s=926b6340a380e26009b461379ce8077ad131e89f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=1080&crop=smart&auto=webp&s=6898bae5cf37315cddf10bed4bd35fad04077ac3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?auto=webp&s=283aa9b3523f45a26168ed7de72be640566e2c48', 'width': 1200}, 'variants': {}}]} |
Deepseek R1.1 aider polyglot score | 156 | Deepseek R1.1 scored the same as claude-opus-4-nothink 70.7% on aider polyglot.
```
────────────────────────────────── tmp.benchmarks/2025-05-28-18-57-01--deepseek-r1-0528 ──────────────────────────────────
- dirname: 2025-05-28-18-57-01--deepseek-r1-0528
test_cases: 225
model: deepseek/deepseek-reasoner
edit_format: diff
commit_hash: 119a44d, 443e210-dirty
pass_rate_1: 35.6
pass_rate_2: 70.7
pass_num_1: 80
pass_num_2: 159
percent_cases_well_formed: 90.2
error_outputs: 51
num_malformed_responses: 33
num_with_malformed_responses: 22
user_asks: 111
lazy_comments: 1
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 3218121
completion_tokens: 1906344
test_timeouts: 3
total_tests: 225
command: aider --model deepseek/deepseek-reasoner
date: 2025-05-28
versions: 0.83.3.dev
seconds_per_case: 566.2
```
Cost came out to $3.05, but this is off time pricing, peak time is $12.20
| 2025-05-29T01:22:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kxybgo/deepseek_r11_aider_polyglot_score/ | Ambitious_Subject108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxybgo | false | null | t3_1kxybgo | /r/LocalLLaMA/comments/1kxybgo/deepseek_r11_aider_polyglot_score/ | false | false | self | 156 | null |
Automate Your CSV Analysis with AI Agents – CrewAI + Ollama | 1 | [removed] | 2025-05-29T01:23:37 | https://v.redd.it/m9iiu7z3im3f1 | Solid_Woodpecker3635 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxyc06 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/m9iiu7z3im3f1/DASHPlaylist.mpd?a=1751073829%2CODQ1OGQwOTg4MjhjZDA5ZGVmZWVlMzc4MDdkZmM3MTUwMTJjZThkNmZmMGFjODMxMWIzNGZmYTU1MDgzY2UyMA%3D%3D&v=1&f=sd', 'duration': 28, 'fallback_url': 'https://v.redd.it/m9iiu7z3im3f1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 406, 'hls_url': 'https://v.redd.it/m9iiu7z3im3f1/HLSPlaylist.m3u8?a=1751073829%2CYjkxMThkYWJmMTRiNjM5MWVmMDBlOTFmM2UwNjhmMDFkNjI0NTNjNjI5NDliMDg0ZGUzZmRkMWU0MjhjMzVhYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m9iiu7z3im3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1kxyc06 | /r/LocalLLaMA/comments/1kxyc06/automate_your_csv_analysis_with_ai_agents_crewai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?width=108&crop=smart&format=pjpg&auto=webp&s=7da00eaf2c59eb509c7561125024e2dc79827d71', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?width=216&crop=smart&format=pjpg&auto=webp&s=f7f0197c33eecf76f5c837a11d627c9140774d17', 'width': 216}, {'height': 152, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?width=320&crop=smart&format=pjpg&auto=webp&s=2118c9a03d11c76982e5166270c94fe1dba83c60', 'width': 320}, {'height': 304, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?width=640&crop=smart&format=pjpg&auto=webp&s=cb3190f1a327a0f7c667fac3c9604c9814947139', 'width': 640}, {'height': 456, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?width=960&crop=smart&format=pjpg&auto=webp&s=f155f856c7e978893a9fa962387a839104b6fc33', 'width': 960}, {'height': 513, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=80c12ec42ab1bcaf2fdac0966763e0a1fb7ed776', 'width': 1080}], 'source': {'height': 714, 'url': 'https://external-preview.redd.it/ZXFlZXQ0ejNpbTNmMV-krD7C_ZIFZ70xsLMOTqRPW4NjITnCLYEfMKf1vxnR.png?format=pjpg&auto=webp&s=5dd9d5315d1450d900802d4b0b89adda31bf89b5', 'width': 1502}, 'variants': {}}]} |
|
Is inference output token/s purely gpu bound? | 2 | I have two computers. They both have LM studio. Both run Qwen 3 32b at q4km with same settings on LM studio. Both have a 3090. Vram is at about 21gb on the 3090s.
Why is it that on computer 1 I get 20t/s output for output while on computer 2 I get 30t/s output for inference?
I provide the same prompt for both models. Only one time did I get 30t/s on computer 1. Otherwise it has been 20 t/s. Both have the 11.8 cuda toolkit installed.
Computer 1:
CPU - Intel i5-9500 (6-core / 6-thread)
RAM - 16 GB DDR4
Storage 1 - 512 GB NVMe SSD
Storage 2 - 1 TB SATA HDD
Motherboard - Gigabyte B365M DS3H
GPU - RTX 3090 FE
Case - CoolerMaster mini-tower
Power Supply - 750W PSU
Cooling - Stock cooling
Operating System - Windows 10 Pro
Fans - Standard case fans
Computer 2:
CPU - Ryzen 7 7800x3d
RAM - 64 GB G.Skill Flare X5 6000 MT/s
Storage 1 - 1 TB NVMe Gen 4x4
Motherboard - Gigabyte B650 Gaming X AX V2
GPU - RTX 3090 Gigabyte
Case - Montech King 95 White
Power Supply - Vetroo 1000W 80+ Gold PSU
Cooling - Thermalright Notte 360 Liquid AIO
Operating System - Windows 11 Pro
Fans - EZDIY 6-pack white ARGB fans
| 2025-05-29T01:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyce1/is_inference_output_tokens_purely_gpu_bound/ | fgoricha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyce1 | false | null | t3_1kxyce1 | /r/LocalLLaMA/comments/1kxyce1/is_inference_output_tokens_purely_gpu_bound/ | false | false | self | 2 | null |
I built an AI assistant to help me learn | 1 | [removed] | 2025-05-29T01:27:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyexz/i_built_an_ai_assistant_to_help_me_learn/ | Hirojinho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyexz | false | null | t3_1kxyexz | /r/LocalLLaMA/comments/1kxyexz/i_built_an_ai_assistant_to_help_me_learn/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E47A6SeusMi2E0TGdaF3F8xV3n3fk5JslT9Ws6Njvcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=108&crop=smart&auto=webp&s=6a7a278e6e1bcbc9de074da335a6ac30371bc147', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=216&crop=smart&auto=webp&s=7d280c6a4f03a55027d5ceeaab061ebfe1c55bd2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=320&crop=smart&auto=webp&s=dafd94a980a45b45201bb0d061e6d0ac36289361', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=640&crop=smart&auto=webp&s=524d969df5a75f65b073d29386bce0d248ef1842', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=960&crop=smart&auto=webp&s=926b6340a380e26009b461379ce8077ad131e89f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=1080&crop=smart&auto=webp&s=6898bae5cf37315cddf10bed4bd35fad04077ac3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?auto=webp&s=283aa9b3523f45a26168ed7de72be640566e2c48', 'width': 1200}, 'variants': {}}]} |
This Eleven labs Competitor sounds better | 58 | [https://github.com/resemble-ai/chatterbox](https://github.com/resemble-ai/chatterbox)
Chatterbox tts | 2025-05-29T01:27:47 | https://v.redd.it/x864437pim3f1 | Beautiful-Essay1945 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxyf0z | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x864437pim3f1/DASHPlaylist.mpd?a=1751074081%2CMWRhMjJkOWNmYjYzMmVhNjZiMmQwMzc5NGZmZGMxMmVmOWEwY2JiYTIzNGMyZmEyODA1ZjkxNzUyZWU4ZDk3MA%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/x864437pim3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/x864437pim3f1/HLSPlaylist.m3u8?a=1751074081%2CY2E5MWEzM2IwZWY2ZjU3ZGEyZTNjOTE0NTExZWMwMTY5YjFiNGZjOGYwZjM1N2JhMGQ2ZjA2ZGY4ODY5MTRhMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x864437pim3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1598}} | t3_1kxyf0z | /r/LocalLLaMA/comments/1kxyf0z/this_eleven_labs_competitor_sounds_better/ | false | false | 58 | {'enabled': False, 'images': [{'id': 'cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?width=108&crop=smart&format=pjpg&auto=webp&s=e350cabb194d8932075b0232cf496d3f71d9c3d9', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?width=216&crop=smart&format=pjpg&auto=webp&s=7ba0e0b2a69992dac4e1ef779f8c1eebbb3d98bf', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?width=320&crop=smart&format=pjpg&auto=webp&s=1b4219c11af50d5568e1c9422704c4cbda9720b5', 'width': 320}, {'height': 432, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?width=640&crop=smart&format=pjpg&auto=webp&s=55a4827ecef717e662c833dd9c466229db5ed9b6', 'width': 640}, {'height': 648, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?width=960&crop=smart&format=pjpg&auto=webp&s=053ed2493abc12cabc1d048f1ca28a95ee38d3fa', 'width': 960}, {'height': 729, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b2885b2a8a9ecf6184d64655656b426d392627bd', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cXRuaHU1N3BpbTNmMYVo_tEft_a9iH3F4Ou_yJDFtaczwt9ETwiiGfy1hxPV.png?format=pjpg&auto=webp&s=0249cdb21b3124ec859ad131414cf7b114dcb284', 'width': 1598}, 'variants': {}}]} |
|
Looking for some early testers | 1 | [removed] | 2025-05-29T01:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kxyth6/looking_for_some_early_testers/ | CSharpSauce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxyth6 | false | null | t3_1kxyth6 | /r/LocalLLaMA/comments/1kxyth6/looking_for_some_early_testers/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YhEX0T3Yj15nQqB4XAABOWXERHjRi8QNpEAL_G53DoI', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?width=108&crop=smart&auto=webp&s=341a80ef4308a617ef06a8196cc06b5414e83ad7', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?width=216&crop=smart&auto=webp&s=9a2e08cb6b2a7e5059031d4d635260c5aa759851', 'width': 216}, {'height': 192, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?width=320&crop=smart&auto=webp&s=cf058b0d639923b88db62059d7af53d035f7a06c', 'width': 320}, {'height': 385, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?width=640&crop=smart&auto=webp&s=b94a0ac0d81939b2ef2a56c1a79cf0d0e889765d', 'width': 640}, {'height': 577, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?width=960&crop=smart&auto=webp&s=0063ccafe5e8d4b00f285c297bf695b12d7fa057', 'width': 960}, {'height': 650, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?width=1080&crop=smart&auto=webp&s=a388e1904bcba70f90cc1265f8f5f3eadd9421f8', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MXjnWecie4ZNnx8hUcFDPEu1WY7oBGBDqD3z30BjUwM.jpg?auto=webp&s=fc88f71590066c68424bc81713133a90791efd9e', 'width': 1794}, 'variants': {}}]} |
Any interesting ideas for old hardware | 1 | I have a few left over gaming pcs from some ancient project. Hardly used but never got around to selling them (I know, what a waste of over 10k). They have been sitting around but want to see if I can use them for AI?
x6 PCs with 1080s - 8GB. 16 GB RAM. x4 Almost same but with 32 GB RAM.
From the top of my head, best I can come up with load up various models on each and perhaps the laptop orchestrates using framework like CrewAI? | 2025-05-29T01:54:40 | putoption21 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxyyex | false | null | t3_1kxyyex | /r/LocalLLaMA/comments/1kxyyex/any_interesting_ideas_for_old_hardware/ | false | false | 1 | {'enabled': True, 'images': [{'id': '7RmQZUq8LuSRougVYOz5qiJ7yUortpwAK6ZM5YQ2-Yw', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?width=108&crop=smart&auto=webp&s=86fd3e02d7b88a55b721183e611f9c48e7dc61ec', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?width=216&crop=smart&auto=webp&s=8806f50e4ba25cf6d7ddaddef264ebbc334cc469', 'width': 216}, {'height': 294, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?width=320&crop=smart&auto=webp&s=ec5e889d2329795e92257c5b2bd6c960ed54b1b7', 'width': 320}, {'height': 588, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?width=640&crop=smart&auto=webp&s=fd8fe0e79dc0cc058966720f4664acfe5d4126f0', 'width': 640}, {'height': 883, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?width=960&crop=smart&auto=webp&s=38a7e7b592b58d1645792a1bb574b193fdaa1512', 'width': 960}, {'height': 993, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?width=1080&crop=smart&auto=webp&s=4c199d5fc0167564443ff8ee8be9a7983fa868dd', 'width': 1080}], 'source': {'height': 1187, 'url': 'https://preview.redd.it/mxef13jonm3f1.jpeg?auto=webp&s=ee855f06307c4ffead38ebb47af3ad312c964d0e', 'width': 1290}, 'variants': {}}]} |
||
Experimenting with an LLM powered game (feedback wanted!) | 1 | [removed] | 2025-05-29T02:06:01 | Routine-Classic3922 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxz6rk | false | null | t3_1kxz6rk | /r/LocalLLaMA/comments/1kxz6rk/experimenting_with_an_llm_powered_game_feedback/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'EpQPY5Wz8gn203s9H1xhglX3CWjbI5ihmf09JTpAXl4', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?width=108&crop=smart&auto=webp&s=31b401c039b57bb8b9f60af728056934d48ead1e', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?width=216&crop=smart&auto=webp&s=baf8ef1b32fe20e274284c2656135a59203f8afc', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?width=320&crop=smart&auto=webp&s=14116ceed581536e4ce6d7f751292499e2962dac', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?width=640&crop=smart&auto=webp&s=3cc70a21f6387820a6b9664d726b2878f1dea014', 'width': 640}, {'height': 777, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?width=960&crop=smart&auto=webp&s=1f2a3e4f66c4363546a89b2a1b4bf1ee2d2b0ea4', 'width': 960}, {'height': 875, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?width=1080&crop=smart&auto=webp&s=9e303f8edd7fce9eac2d1425de8ec6b0eee94ba1', 'width': 1080}], 'source': {'height': 1888, 'url': 'https://preview.redd.it/guj0egagpm3f1.png?auto=webp&s=35db70568510fbf1d0a79af403cd293fae5debc0', 'width': 2330}, 'variants': {}}]} |
||
What's the value of paying $20 a month for OpenAI or Anthropic? | 58 | Hey everyone, I’m new here.
Over the past few weeks, I’ve been experimenting with local LLMs and honestly, I’m impressed by what they can do. Right now, I’m paying $20/month for Raycast AI to access the latest models. But after seeing how well the models run on Open WebUI, I’m starting to wonder if paying $20/month for Raycast, OpenAI, or Anthropic is really worth it.
It’s not about the money—I can afford it—but I’m curious if others here subscribe to these providers. I’m even considering setting up a local server to run models myself. Would love to hear your thoughts! | 2025-05-29T02:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kxz7yi/whats_the_value_of_paying_20_a_month_for_openai/ | mainaisakyuhoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxz7yi | false | null | t3_1kxz7yi | /r/LocalLLaMA/comments/1kxz7yi/whats_the_value_of_paying_20_a_month_for_openai/ | false | false | self | 58 | null |
Is this the future of social media? | 1 | [removed] | 2025-05-29T02:10:40 | PartyOrganic4937 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxza01 | false | null | t3_1kxza01 | /r/LocalLLaMA/comments/1kxza01/is_this_the_future_of_social_media/ | false | false | 1 | {'enabled': True, 'images': [{'id': '1R2w4sTRkaE0FlQdx68bBML6_O5SWImstLKQAK6x8ew', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?width=108&crop=smart&auto=webp&s=bce5965511ed60b87b5c2ee17e83f879c2887167', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?width=216&crop=smart&auto=webp&s=d03386ccd72cc7e5ab709874abe6a9527288f271', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?width=320&crop=smart&auto=webp&s=eb1935b6323175ed4960d6ed6c7aacd55e0a7299', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?width=640&crop=smart&auto=webp&s=9db5bceb89e1e3c1acaffc079a08c3bf45dda770', 'width': 640}, {'height': 777, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?width=960&crop=smart&auto=webp&s=b96e720ffbd18ea6f067aa736d6da8fd74e5fa0c', 'width': 960}, {'height': 875, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?width=1080&crop=smart&auto=webp&s=61a0785d1b7227bb607f076295bd3499c8d40b9e', 'width': 1080}], 'source': {'height': 1888, 'url': 'https://preview.redd.it/nlnt1rzaqm3f1.png?auto=webp&s=444e74a3cffa90cd43aa97e5537c465e7b3ef574', 'width': 2330}, 'variants': {}}]} |
||
Is this the future of social media? | 1 | [removed] | 2025-05-29T02:11:34 | Routine-Classic3922 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kxzanm | false | null | t3_1kxzanm | /r/LocalLLaMA/comments/1kxzanm/is_this_the_future_of_social_media/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'rHQNtH0Ey1hUZfEwXoYqHw5i3U9XToYYeEtOHUgwLz4', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?width=108&crop=smart&auto=webp&s=0302d6ade412d50be0030098e1e1880e87e607cd', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?width=216&crop=smart&auto=webp&s=e4a8fdb2da01ef16f82610c8ecc76af48736af49', 'width': 216}, {'height': 259, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?width=320&crop=smart&auto=webp&s=30327e69e96069e05370dd88dec9b7ede8c51628', 'width': 320}, {'height': 518, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?width=640&crop=smart&auto=webp&s=9243667e3674a3ce7dcf24b7d416d20bad9de699', 'width': 640}, {'height': 777, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?width=960&crop=smart&auto=webp&s=3df8dde3d476276c30dfe9cf54d7dc196cc6736d', 'width': 960}, {'height': 875, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?width=1080&crop=smart&auto=webp&s=1bf4af0909cef10b6356d08f049814975d422ef3', 'width': 1080}], 'source': {'height': 1888, 'url': 'https://preview.redd.it/m5ikc4ulqm3f1.png?auto=webp&s=84d389d23bc723ff9aeb511fbf9877a0ee6b3f96', 'width': 2330}, 'variants': {}}]} |
||
Quality GPU cloud providers to serve AI product from? | 4 | I'm getting ready to launch my inferencing-based service and for the life of me I can't find a good GPU compute provider suitable for my needs. What I need is just a couple cards, like two L40S, A6000 or similar 48GB cards, and I need them 24/7 with excellent data security. I've probably looked at 15 providers, they are either offering only in large quantities like 8+ GPUs at a time, or don't own their GPUs and rent them from shady no-name places or individuals (can't trust my clients' data with them), or they are ridiculously priced.
The one that came closest to what I need is Lambda Labs, but they have only a few on-demand cards available that fit what I'm looking for and I literally have to wait an hour for a card to become available. RunPod doesn't work for me since their offerings are really bad (way outdated drivers etc.), Modal is expensive and doesn't work properly for me (my models that run perfectly fine on lambda don't run on Modal etc.), Nebius is established and really good but crazy expensive (x2 what lambda is charging).
I would build my own server for this but it's a pain to get SOC2 and similar certifications and I don't have time for that. Feeling like I'm out of options here. | 2025-05-29T02:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kxzr6l/quality_gpu_cloud_providers_to_serve_ai_product/ | No-Break-7922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kxzr6l | false | null | t3_1kxzr6l | /r/LocalLLaMA/comments/1kxzr6l/quality_gpu_cloud_providers_to_serve_ai_product/ | false | false | self | 4 | null |
Researchers from the National University of Singapore Introduce ‘Thinkless,’ an Adaptive Framework that Reduces Unnecessary Reasoning by up to 90% Using DeGRPO | 53 | 2025-05-29T03:19:13 | https://github.com/VainF/Thinkless | Sporeboss | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ky0m1h | false | null | t3_1ky0m1h | /r/LocalLLaMA/comments/1ky0m1h/researchers_from_the_national_university_of/ | false | false | 53 | {'enabled': False, 'images': [{'id': 'OXjtsDZOw5E2Stnluv5COiUPr0zaa8qU4UThCJSVzLw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?width=108&crop=smart&auto=webp&s=da2f3a5f4e309e6d13b8eb0f12168b3a7417e1d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?width=216&crop=smart&auto=webp&s=3e31fc746939d62844bf94e84ec326f437b25580', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?width=320&crop=smart&auto=webp&s=98036fcac4a4ebd98ffeeb63c9737fc6447e7b49', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?width=640&crop=smart&auto=webp&s=5c1cd9d5305abc8c47989fcd365fe35c711f0ca1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?width=960&crop=smart&auto=webp&s=7ecab86096bbc307e7bddec2b7b2976a7bbe9e2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?width=1080&crop=smart&auto=webp&s=f123e4a59f305ce8942600dfe00c63ed9a8b7fc3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mL1Q9XttYG1PHtNLMEZMLd8H7Fxw3YewxUxLIUBjgOg.jpg?auto=webp&s=5b06d598241123cc244b82e97c59dbd56ee75d50', 'width': 1200}, 'variants': {}}]} |
||
Deepseek-R1-0528 MLX 4 bit quant up | 26 | [https://huggingface.co/mlx-community/DeepSeek-R1-0528-4bit/tree/main](https://huggingface.co/mlx-community/DeepSeek-R1-0528-4bit/tree/main)
...they're fast. | 2025-05-29T03:25:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ky0qes/deepseekr10528_mlx_4_bit_quant_up/ | nomorebuttsplz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky0qes | false | null | t3_1ky0qes | /r/LocalLLaMA/comments/1ky0qes/deepseekr10528_mlx_4_bit_quant_up/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'WQfI_IxeceqSO0qpt2LH1bjBS5fZy5a7haCE4mjZgbk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?width=108&crop=smart&auto=webp&s=64775b106379a241d8dd3dd15b21c739ed81ebc5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?width=216&crop=smart&auto=webp&s=531751ac65c2ea18b73b0b4e7c8779cb942444cd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?width=320&crop=smart&auto=webp&s=1d7b3a274bb8169879be655769d84f87a6e3bd28', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?width=640&crop=smart&auto=webp&s=0733a2583686f7267ef4317e19545912c45386aa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?width=960&crop=smart&auto=webp&s=83f7dccad20f24e985f76c9e07939603a1144147', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?width=1080&crop=smart&auto=webp&s=3294dc4013afc159b4a9b468fa7f93849fc0a696', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZTUuU4N_meao3Jg9pjdZ0RXjFh3zAnJh2fgczzIDBBQ.jpg?auto=webp&s=0ad192d0647543742b7b377070113f989161d690', 'width': 1200}, 'variants': {}}]} |
Mundane Robustness Benchmarks | 2 | Does anyone know of any up-to-date LLM benchmarks focused on very mundane reliability? Things like positional extraction, format compliance, and copying/pasting with slight edits? No math required. Basically, I want stupid easy tasks that test basic consistency, attention to detail, and deterministic behavior on text and can be verified programmatically. Ideally, they would include long documents in their test set and maybe use multi-turn prompts and responses to get around the output token limitations.
This has been a big pain point for me for some LLM workflows. I know I could just write a program to do these things, but some workflows require doing the above plus some very basic fuzzy task, and I would cautiously trust a model that does the basics well to be able to do a little more fuzzy work on top.
Alternatively, are there any models, open or closed, that optimize for this or are known to be particularly good at it? Thanks. | 2025-05-29T03:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ky0zhv/mundane_robustness_benchmarks/ | arnokha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky0zhv | false | null | t3_1ky0zhv | /r/LocalLLaMA/comments/1ky0zhv/mundane_robustness_benchmarks/ | false | false | self | 2 | null |
Best open source models to process summaries of research papers + medical documents | 1 | [removed] | 2025-05-29T03:43:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ky1238/best_open_source_models_to_process_summaries_of/ | Defiant_Low5388 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky1238 | false | null | t3_1ky1238 | /r/LocalLLaMA/comments/1ky1238/best_open_source_models_to_process_summaries_of/ | false | false | self | 1 | null |
Open Source Alternative to NotebookLM | 110 | For those of you who aren't familiar with **SurfSense**, it aims to be the open-source alternative to **NotebookLM**, **Perplexity**, or **Glean**.
In short, it's a Highly Customizable AI Research Agent but connected to your personal external sources search engines (Tavily, LinkUp), Slack, Linear, Notion, YouTube, GitHub, and more coming soon.
I'll keep this short—here are a few highlights of SurfSense:
📊 Features
* Supports **150+ LLM's**
* Supports local **Ollama LLM's** or vLLM\*\*.\*\*
* Supports **6000+ Embedding Models**
* Works with all major rerankers (Pinecone, Cohere, Flashrank, etc.)
* Uses **Hierarchical Indices** (2-tiered RAG setup)
* Combines **Semantic + Full-Text Search** with **Reciprocal Rank Fusion** (Hybrid Search)
* Offers a **RAG-as-a-Service API Backend**
* Supports 27+ File extensions
🎙️ Podcasts
* Blazingly fast podcast generation agent. (Creates a 3-minute podcast in under 20 seconds.)
* Convert your chat conversations into engaging audio content
* Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)
ℹ️ **External Sources**
* Search engines (Tavily, LinkUp)
* Slack
* Linear
* Notion
* YouTube videos
* GitHub
* ...and more on the way
🔖 **Cross-Browser Exten**sion
The SurfSense extension lets you save any dynamic webpage you like. Its main use case is capturing pages that are protected behind authentication.
Check out SurfSense on GitHub: [https://github.com/MODSetter/SurfSense](https://github.com/MODSetter/SurfSense) | 2025-05-29T03:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ky14jn/open_source_alternative_to_notebooklm/ | Uiqueblhats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky14jn | false | null | t3_1ky14jn | /r/LocalLLaMA/comments/1ky14jn/open_source_alternative_to_notebooklm/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': 'V_B6aOAfOhvxu5-Ab6EGprURZYRFaJ2SmeG-wLIJosw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?width=108&crop=smart&auto=webp&s=05ca0fc9b51aedac3b63c0d89f28eb5d15f2ae05', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?width=216&crop=smart&auto=webp&s=b143731e844d8e99747cc2fd9f3f628dc517cd23', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?width=320&crop=smart&auto=webp&s=e5d14028ef5621e508d40120722ff625acdc44f9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?width=640&crop=smart&auto=webp&s=f80e5edea3b3a882a4075848136cd0614c182020', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?width=960&crop=smart&auto=webp&s=0eb7d2aa67062902d2d46a3b43969da62c2e16b6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?width=1080&crop=smart&auto=webp&s=3c5156502506e0fb6ecd081b6e30b85443d2104b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f_4nJaS37ny4ybO-I2hGATtJJgMfTYz-uOwFkxnL7hk.jpg?auto=webp&s=e25208df47d00666df8d8c19690126f949e37e0b', 'width': 1200}, 'variants': {}}]} |
Yess! Open-source strikes back! This is the closest I've seen anything come to competing with @GoogleDeepMind 's Veo 3 native audio and character motion. | 131 | 2025-05-29T04:12:04 | https://v.redd.it/wvb8a5b5cn3f1 | balianone | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky1l2e | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wvb8a5b5cn3f1/DASHPlaylist.mpd?a=1751083937%2CYTM1MDlhMjYxOWI4ZjI0ZWMyNzQwY2FkODY1ZGY5ZWI0MDJkYTI2Y2YyZTRmMTc0MTVmNDJlZmI5NTM3MDZlNQ%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/wvb8a5b5cn3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wvb8a5b5cn3f1/HLSPlaylist.m3u8?a=1751083937%2CMDdiZTI5OTVlMWUzZjQ4OGIxMTczNTkzNGY5M2Q0Yjk1Y2Q4MWUyZGY4ZDBlNTk3YjJkZmViNjllMzhjNGFiYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wvb8a5b5cn3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ky1l2e | /r/LocalLLaMA/comments/1ky1l2e/yess_opensource_strikes_back_this_is_the_closest/ | false | false | 131 | {'enabled': False, 'images': [{'id': 'N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?width=108&crop=smart&format=pjpg&auto=webp&s=1bb1e8732c99d9db043b84d613598736cb0e723d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?width=216&crop=smart&format=pjpg&auto=webp&s=feeea34306aea0f69f530bcad0ca53f921b02f25', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?width=320&crop=smart&format=pjpg&auto=webp&s=e89788d8d3f5b2fb8dd5566afb96882796408729', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?width=640&crop=smart&format=pjpg&auto=webp&s=54e09b9b28dc0869688ab3424d748a30a0e9891f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?width=960&crop=smart&format=pjpg&auto=webp&s=e2ee029e42b12377216c7e3871de1c6d81fce834', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1614025986933b87805f7eeb46fa01ed9dac689a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N2Rjc29reDZjbjNmMRt9-ucm3ui6hV8RM7PxWBbM61rS9Zbqewi5gNGLcurK.png?format=pjpg&auto=webp&s=ae4888986fa5b44f752472d4bbc541b6e41122ed', 'width': 1920}, 'variants': {}}]} |
||
Deepseek-R1/V3 near (I)Q2/(I)Q3 (230-250GB RAM) vs. Qwen3-235B near Q6/Q8 (same 230-250GB RAM); at what quant / RAM sizes is DS vs Qwen3 is better / worse than the other? | 26 | Deepseek-R1/V3 near (I)Q2/(I)Q3 (230-250GB RAM) vs. Qwen3-235B near Q6/Q8 (same or less 230-250GB RAM requirement); at what quant / RAM sizes is such quantized DS vs Qwen3 is better / worse than the other?
Practical question -- if one has a system or couple RPC systems which provide in the range of 200-230-260 GBy aggregate RAM size for mainly CPU+RAM inference, at what RAM size / quant levels might it become objectively overall better / worse to run DeepSeek R1/V3 very heavily quantized (1.8 / 2.x to very low 3.x bit) vs. Qwen3-235B moderately or lightly quantized (Q4..Q8)?
That's considering complex practical use cases like coding, some STEM, etc. where accuracy / subject domain knowledge matters and given that also any relative performance, context size handling ability vs. resources, etc. factors would also be considerable as reasons to use one vs. the other.
I'm guessing maybe at Q4-Q8 range Qwen3-235B could often be superior to DS R1/V3 at 2.0-3.0 bit quantization range to achieve similar RAM use but maybe there is a zone where DS could become superior despite the heavy quantization?
Thoughts, experiences?
The idea would be very occasional utility use for cases which a 32B model just doesn't work well enough, and where cloud inference is not considered if one sometimes needs the privacy / locality.
Obviously the speed / performance would not be competitive vs cloud / higher end local servers / full DGPU inference (neither being available in this discussion case) but maybe useful for niche cases where "go do something else for a while and look at the result later" might work OK.
I suppose one could also extend the concept to maverick around Q3/Q4 or whatever other models could compete in the 100-250 GBy RAM CPU inference range. | 2025-05-29T04:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ky1lro/deepseekr1v3_near_iq2iq3_230250gb_ram_vs/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky1lro | false | null | t3_1ky1lro | /r/LocalLLaMA/comments/1ky1lro/deepseekr1v3_near_iq2iq3_230250gb_ram_vs/ | false | false | self | 26 | null |
I accidentally built a vector database using video compression | 0 | [removed] | 2025-05-29T04:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ky27sv/i_accidentally_built_a_vector_database_using/ | Every_Chicken_1293 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky27sv | false | null | t3_1ky27sv | /r/LocalLLaMA/comments/1ky27sv/i_accidentally_built_a_vector_database_using/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9vpsknPi0TlMlGAXz-tdZM_pbY0EGNCDI1BYYrPFOjA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?width=108&crop=smart&auto=webp&s=c6933f50c7244327bcac0f2d820b048f35523ff1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?width=216&crop=smart&auto=webp&s=381d90db01eb5b3009d210105a14c13fe5bfc1b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?width=320&crop=smart&auto=webp&s=f924b1b3aa786e2c6876a87bbd97e0693e961898', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?width=640&crop=smart&auto=webp&s=09a02528a82a001d79261cd09625ecf8af5a4cca', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?width=960&crop=smart&auto=webp&s=89b52432005dc1fb82d5b72a6abdf5a42b70a847', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?width=1080&crop=smart&auto=webp&s=829d4cd6441168cc6c10ada293c3ae0c42300981', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A75CQf4AwRntlpkckJpD6IS-egftwL-gT-Wrf3ZwT-4.jpg?auto=webp&s=4bea454cffe07a5bded6be48c2428963e8f55f11', 'width': 1200}, 'variants': {}}]} |
Working on New AI nfws model | 1 | [removed] | 2025-05-29T05:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ky30qa/working_on_new_ai_nfws_model/ | Royal_Departure9934 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky30qa | false | null | t3_1ky30qa | /r/LocalLLaMA/comments/1ky30qa/working_on_new_ai_nfws_model/ | false | false | nsfw | 1 | null |
😭 I am already falling in love with the new deepseek-ai/DeepSeek-R1-0528 | 0 | 2025-05-29T05:47:43 | Rare-Programmer-1747 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky35nu | false | null | t3_1ky35nu | /r/LocalLLaMA/comments/1ky35nu/i_am_already_falling_in_love_with_the_new/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'asDuxOZTMI_v6C8Iv3HGeD0BOUnugdXcEyN4YuP_iM0', 'resolutions': [{'height': 165, 'url': 'https://preview.redd.it/bslh7359tn3f1.jpeg?width=108&crop=smart&auto=webp&s=449cce12b1d2bb859358a38815a2643c3898eb0d', 'width': 108}, {'height': 331, 'url': 'https://preview.redd.it/bslh7359tn3f1.jpeg?width=216&crop=smart&auto=webp&s=c6c782faa4b37d8877c989bcde7d9c769aaad677', 'width': 216}, {'height': 490, 'url': 'https://preview.redd.it/bslh7359tn3f1.jpeg?width=320&crop=smart&auto=webp&s=dd437db77a26d797a61d7ae904316fc761e13621', 'width': 320}, {'height': 981, 'url': 'https://preview.redd.it/bslh7359tn3f1.jpeg?width=640&crop=smart&auto=webp&s=f2532b97eaeba8094720d0c38e8dcfff1c95f5c4', 'width': 640}, {'height': 1472, 'url': 'https://preview.redd.it/bslh7359tn3f1.jpeg?width=960&crop=smart&auto=webp&s=72fb646229c79594a52215ccd0625c417c38bc95', 'width': 960}], 'source': {'height': 1541, 'url': 'https://preview.redd.it/bslh7359tn3f1.jpeg?auto=webp&s=0e646d35524f2bfe5796180e6f172748dc8a745a', 'width': 1005}, 'variants': {}}]} |
|||
Models for writing NSFW stories | 1 | [removed] | 2025-05-29T06:21:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ky3o2b/models_for_writing_nsfw_stories/ | Efficient_Listen_768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky3o2b | false | null | t3_1ky3o2b | /r/LocalLLaMA/comments/1ky3o2b/models_for_writing_nsfw_stories/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'jT_8KxTqlcmlmkMloNuVHGWQJKS0BXtl7ADDLTME8FE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?width=108&crop=smart&auto=webp&s=928f0a6388b8a3958bfb01eaf7a7396f18b1743c', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?auto=webp&s=510872b16b62298c9bda5c06da9b5c00a58b9a70', 'width': 200}, 'variants': {'nsfw': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f7d1ceecca5af00e12e164febca0bc39b9657fac', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?blur=40&format=pjpg&auto=webp&s=d54bd87ec81388c0edd80bb4e5deaf1863529e8b', 'width': 200}}, 'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=f7d1ceecca5af00e12e164febca0bc39b9657fac', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?blur=40&format=pjpg&auto=webp&s=d54bd87ec81388c0edd80bb4e5deaf1863529e8b', 'width': 200}}}}]} |
Models for writing uncensored stories | 1 | [removed] | 2025-05-29T06:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ky3pey/models_for_writing_uncensored_stories/ | Efficient_Listen_768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky3pey | false | null | t3_1ky3pey | /r/LocalLLaMA/comments/1ky3pey/models_for_writing_uncensored_stories/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jT_8KxTqlcmlmkMloNuVHGWQJKS0BXtl7ADDLTME8FE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?width=108&crop=smart&auto=webp&s=928f0a6388b8a3958bfb01eaf7a7396f18b1743c', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/e6_SHg_mSvBmDVBPG68qQXet5PnuEqMc7iJJxhuggEk.jpg?auto=webp&s=510872b16b62298c9bda5c06da9b5c00a58b9a70', 'width': 200}, 'variants': {}}]} |
automated debugging using Ollama | 9 | Used my down time to build a CLI that auto-fixes errors with local LLMs
The tech stack is pretty simple; it reads terminal errors and provides context-aware fixes using:
* Your local Ollama models (whatever you have downloaded)
* RAG across your entire codebase for context
* Everything stays on your machine
also, just integrated **Claude 4** support aswell and it's genuinely scary good at debugging tbh
**tldr Terminal errors → automatic fixes using your Ollama models + RAG across your entire codebase. 100% local**
If you curious to see the implementation, its open source: [https://github.com/cloi-ai/cloi](https://github.com/cloi-ai/cloi) | 2025-05-29T06:38:08 | https://v.redd.it/8x5wao0l1o3f1 | AntelopeEntire9191 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky3x8f | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8x5wao0l1o3f1/DASHPlaylist.mpd?a=1751092700%2CZjllMWJiNjU4ZmJhYWU1N2Q2ZDgyOTdkNTViNWNkMWQ3MzcwNzY3MzI3ZGM2NDVlZTVjMWU2ZTlhZmUxMjgzMA%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/8x5wao0l1o3f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/8x5wao0l1o3f1/HLSPlaylist.m3u8?a=1751092700%2CN2Q1MWRmZTcxYzgxMDgzMmI0MjY2MTAxZjgzZjJiNmQ2YjQ1OWNmYzIyZWNhOTEzNzFjYmMxNGQ5MWQ0YWExMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8x5wao0l1o3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1042}} | t3_1ky3x8f | /r/LocalLLaMA/comments/1ky3x8f/automated_debugging_using_ollama/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?width=108&crop=smart&format=pjpg&auto=webp&s=54b9cea6d10e92455be26e457191c98c5966099b', 'width': 108}, {'height': 149, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?width=216&crop=smart&format=pjpg&auto=webp&s=742f32fd211e83d34328495e7e1b2d32aefa8a84', 'width': 216}, {'height': 221, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?width=320&crop=smart&format=pjpg&auto=webp&s=fecefc0aaf329965a6019d8f936c169f7c52f3a8', 'width': 320}, {'height': 442, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?width=640&crop=smart&format=pjpg&auto=webp&s=cabf8623798d244b166ba43b6b3c1139f7895b71', 'width': 640}, {'height': 663, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?width=960&crop=smart&format=pjpg&auto=webp&s=930373171adf761c5b849e7b0228ec15745c4a22', 'width': 960}, {'height': 745, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e67b420cf3f389f4a52b15a2f58859c606a76050', 'width': 1080}], 'source': {'height': 884, 'url': 'https://external-preview.redd.it/aDRtdW5kMWwxbzNmMezr6njSDoFLFdrxC2JtmiXopTq_OV4L1dh4D_3UKGf5.png?format=pjpg&auto=webp&s=6d93b0e71283e1b999a0633a562b59f3ea55727c', 'width': 1280}, 'variants': {}}]} |
|
Help! Best Way to Replicate Voices In Other Languages With TTS? | 1 | [removed] | 2025-05-29T06:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ky43bw/help_best_way_to_replicate_voices_in_other/ | Initial_Designer_802 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky43bw | false | null | t3_1ky43bw | /r/LocalLLaMA/comments/1ky43bw/help_best_way_to_replicate_voices_in_other/ | false | false | self | 1 | null |
Built an uncensored AI that maintains intelligence - tbio.ai beta feedback wanted | 1 | [removed] | 2025-05-29T06:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ky48lr/built_an_uncensored_ai_that_maintains/ | Apple12Pi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky48lr | false | null | t3_1ky48lr | /r/LocalLLaMA/comments/1ky48lr/built_an_uncensored_ai_that_maintains/ | false | false | self | 1 | null |
Using Llama-indexing with deployed LLM | 1 | [removed] | 2025-05-29T07:11:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4f6o/using_llamaindexing_with_deployed_llm/ | martianx23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4f6o | false | null | t3_1ky4f6o | /r/LocalLLaMA/comments/1ky4f6o/using_llamaindexing_with_deployed_llm/ | false | false | self | 1 | null |
Generate academic posters effortlessly with this open-source tool | 1 | [removed] | 2025-05-29T07:19:46 | Fluffy_Sheepherder76 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky4jko | false | null | t3_1ky4jko | /r/LocalLLaMA/comments/1ky4jko/generate_academic_posters_effortlessly_with_this/ | false | false | 1 | {'enabled': True, 'images': [{'id': '95h-l_7xXQ_2iHL5lkyWZCoG4tGTmPj5bL2Fq0zwKqY', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?width=108&crop=smart&auto=webp&s=ea35fb7690f545fb354997427e3f8974b271fe0b', 'width': 108}, {'height': 206, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?width=216&crop=smart&auto=webp&s=f79552f7c783107a04be963337db2b5aa55359cf', 'width': 216}, {'height': 305, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?width=320&crop=smart&auto=webp&s=030530d0769c6c812efae07bb1fed03874eb512e', 'width': 320}, {'height': 610, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?width=640&crop=smart&auto=webp&s=95092a739f78a9a194af7f7bb68f445cfe65664b', 'width': 640}, {'height': 916, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?width=960&crop=smart&auto=webp&s=2ae93e30ec2d10f0065b18c9778d142317b03ebd', 'width': 960}, {'height': 1030, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?width=1080&crop=smart&auto=webp&s=52b9326884580e99b655fa6c2f5979806777cd81', 'width': 1080}], 'source': {'height': 1714, 'url': 'https://preview.redd.it/mkwhgsik9o3f1.jpeg?auto=webp&s=aab70d83884c3e1e9f77d2b7513276c4a3b1500c', 'width': 1796}, 'variants': {}}]} |
||
If you have plan to make new TTS/ASR consider other languages or low resource ones, it's always English, Chinese & some other popular languages it's always trained on. | 14 | Every new releases of TTS or ASR are always either english or chinese. We have already lots of SOTA in these popular languages like spanish. If someone is planning to build new systems, consider other languages with no presence. Also there are lots of low resource (LR) languages are there to consider.
We need to make that "other languages" SOTA too, this would bring more robust systems to the opensource through some integration and adoption. Notebooklm now supports 56 new langs, we are able to match its English and other popular langs through open models like Dia, recent Chatterbox by remeble.ai ( in the light of this request is made).
To use other languages we still need rely proprietary models. SOTA canary supports only 4 languages in ASR (English, German, Spanish, French). Parakeet is english only. whisper has 100 lang support but only several of them are able deliver good results due to low resource (another problem). But recently lots of open teams and non profits are started building and pushing lang data sets of LR langs which is a good thing.
| 2025-05-29T07:28:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4oia/if_you_have_plan_to_make_new_ttsasr_consider/ | Trysem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4oia | false | null | t3_1ky4oia | /r/LocalLLaMA/comments/1ky4oia/if_you_have_plan_to_make_new_ttsasr_consider/ | false | false | self | 14 | null |
Offline locally run character.ai alternative using Qwen3 with COQUI XTTS-v2 | 1 | I chose Qwen3 as the LLM that would run this project and also Coqui XTTS-v2 for the voice cloning software.
Yes, think is off and it has a custom prompt for individual chat and different system prompt handling for the group chat feature to handle multiple characters.
The voice cloning is slow, since it is CPU bound and every sentence has a different accent (this is inherent to Coqui TTS and not changeable as far as I know).
Ideally, the highest Qwen3 model would be run, Qwen3-235B-A22B (most accurate answers via hugging face demo I tried). But, I ended up using Qwen3-14B-GGUF and it wasn't as accurate as I wanted for this program. My hardware is very limited for this, Qwen3-1.7B was the minimum semi accurate model for this demo. The more parameters, the more better responses.
Still, the hardest part is managing Qwen3 to be used differently than intended. I had to think outside the box to make group chat functional for managing multiple characters, something its not designed to do.
Basically the program is a shell for an LLM and Coqui XTTS-v2 playback after the LLM finishes responding.
| 2025-05-29T07:35:26 | https://v.redd.it/waqygsrgco3f1 | xhakux99 | /r/LocalLLaMA/comments/1ky4s23/offline_locally_run_characterai_alternative_using/ | 1970-01-01T00:00:00 | 0 | {} | 1ky4s23 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/waqygsrgco3f1/DASHPlaylist.mpd?a=1751225731%2CYTllYTRiMmZhOTJjYTgwYjI3NDUwNmM5NDVjZTUyZWE0YmU3ZjA2ZDUyOGI3MDYwZWMwOGUwYjI0ZjJlMjMzMw%3D%3D&v=1&f=sd', 'duration': 143, 'fallback_url': 'https://v.redd.it/waqygsrgco3f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/waqygsrgco3f1/HLSPlaylist.m3u8?a=1751225731%2CMDA4N2M3NjgwMjlmYjUwZWVmZjE1Mjg3NTUxNThjNWU4Zjg2ZmQ2MDk0NjQ1NTZhYmM0ZTYwMGI5Yjc0N2U1OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/waqygsrgco3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1ky4s23 | /r/LocalLLaMA/comments/1ky4s23/offline_locally_run_characterai_alternative_using/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67.png?width=108&crop=smart&format=pjpg&auto=webp&s=9adac9276fff940d71dc3202263d94a734f33ac5', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67.png?width=216&crop=smart&format=pjpg&auto=webp&s=0316701da24d0eb8202c79ed8675e96548954939', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67.png?width=320&crop=smart&format=pjpg&auto=webp&s=8706d15005d532d63cc1a4613ee8f26b00a5eafb', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67.png?width=640&crop=smart&format=pjpg&auto=webp&s=deaf1819f5bab2a4a6a46ff6e7c6552ab848662e', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/Z3pnamVudmdjbzNmMXDfCSO82U-xf9wuOmz3vUA-7HeIpYI79t_Qdh9BEV67.png?format=pjpg&auto=webp&s=9ea0de9544f9ef388ce4460c5e1bddffb4ff754c', 'width': 720}, 'variants': {}}]} |
|
Dual 4090 build for brand compliance analysis - worth it or waste? | 1 | [removed] | 2025-05-29T07:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4tdp/dual_4090_build_for_brand_compliance_analysis/ | Majestic_Reason7903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4tdp | false | null | t3_1ky4tdp | /r/LocalLLaMA/comments/1ky4tdp/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 1 | null |
Dual 4090 build for brand compliance analysis - worth it or waste? | 1 | [removed] | 2025-05-29T07:46:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ky4xwm/dual_4090_build_for_brand_compliance_analysis/ | RiseNecessary6351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky4xwm | false | null | t3_1ky4xwm | /r/LocalLLaMA/comments/1ky4xwm/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 1 | null |
PLEASE LEARN BASIC CYBERSECURITY | 810 | Stumbled across a project doing about $30k a month with their OpenAI API key exposed in the frontend.
Public key, no restrictions, fully usable by anyone.
At that volume someone could easily burn through thousands before it even shows up on a billing alert.
This kind of stuff doesn’t happen because people are careless. It happens because things feel like they’re working, so you keep shipping without stopping to think through the basics.
Vibe coding is fun when you’re moving fast. But it’s not so fun when it costs you money, data, or trust.
Add just enough structure to keep things safe. That’s it. | 2025-05-29T07:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ky54kq/please_learn_basic_cybersecurity/ | eastwindtoday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky54kq | false | null | t3_1ky54kq | /r/LocalLLaMA/comments/1ky54kq/please_learn_basic_cybersecurity/ | false | false | self | 810 | null |
Olmo 32b vs deepseek r1 | 1 | [removed] | 2025-05-29T08:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ky56h3/olmo_32b_vs_deepseek_r1/ | perfectcrop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky56h3 | false | null | t3_1ky56h3 | /r/LocalLLaMA/comments/1ky56h3/olmo_32b_vs_deepseek_r1/ | false | false | self | 1 | null |
Best models and/or workflows for visual/auditory trigger warnings? preferably via youtube urls? | 1 | [removed] | 2025-05-29T08:06:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ky58hq/best_models_andor_workflows_for_visualauditory/ | Neggy5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky58hq | false | null | t3_1ky58hq | /r/LocalLLaMA/comments/1ky58hq/best_models_andor_workflows_for_visualauditory/ | false | false | self | 1 | null |
using LLMs for trigger warnings for auditory/visual sensitivities? | 0 | So, as a neurodivergent who has severe auditory and visual sensitivities to certain stimuli, I wonder what the best local audio/vision models are for trigger warnings? does this exist?
I have been struggling to watch movies, play most story-driven games and listen to most music for more than a decade due to my issues but being able to get a heads up for upcoming triggers would be positively lifechanging for me and would finally allow me to watch most content again.
What would be the best LLM for this? one that can view, listen and accurately tell me when my trigger sounds/visuals occur? i obviously dont want false negatives especially. and id adore youtube links being able to be viewed too, and even better, netflix or other streaming services. | 2025-05-29T08:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5962/using_llms_for_trigger_warnings_for/ | Neggy5 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5962 | false | null | t3_1ky5962 | /r/LocalLLaMA/comments/1ky5962/using_llms_for_trigger_warnings_for/ | false | false | self | 0 | null |
AnythingLLM RAG with Gemma 3:12b & BGE-m3-F16: LM Studio vs. Ollama Embedding Discrepancies - Same GGUF, Different Results? | 1 | [removed] | 2025-05-29T08:10:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5aj1/anythingllm_rag_with_gemma_312b_bgem3f16_lm/ | Ok_Bug4999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5aj1 | false | null | t3_1ky5aj1 | /r/LocalLLaMA/comments/1ky5aj1/anythingllm_rag_with_gemma_312b_bgem3f16_lm/ | false | false | self | 1 | null |
What's wrong with llama 3.2:1b? | 1 | [removed] | 2025-05-29T08:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5lfv/whats_wrong_with_llama_321b/ | n0man_ch0wdhury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5lfv | false | null | t3_1ky5lfv | /r/LocalLLaMA/comments/1ky5lfv/whats_wrong_with_llama_321b/ | false | false | self | 1 | null |
Seeking Academic LLM Project Ideas for University Thesis | 1 | [removed] | 2025-05-29T08:56:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ky5yno/seeking_academic_llm_project_ideas_for_university/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky5yno | false | null | t3_1ky5yno | /r/LocalLLaMA/comments/1ky5yno/seeking_academic_llm_project_ideas_for_university/ | false | false | self | 1 | null |
Seeking Academic LLM Project Ideas for University Thesis | 1 | [removed] | 2025-05-29T09:01:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ky61b8/seeking_academic_llm_project_ideas_for_university/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky61b8 | false | null | t3_1ky61b8 | /r/LocalLLaMA/comments/1ky61b8/seeking_academic_llm_project_ideas_for_university/ | false | false | self | 1 | null |
Seeking Academic LLM Project Ideas for University Thesis | 1 | [removed] | 2025-05-29T09:01:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ky61ce/seeking_academic_llm_project_ideas_for_university/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky61ce | false | null | t3_1ky61ce | /r/LocalLLaMA/comments/1ky61ce/seeking_academic_llm_project_ideas_for_university/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.