Dataset columns (type and value range):

| Column    | Type             | Range / length                            |
|-----------|------------------|-------------------------------------------|
| title     | string           | length 1–300                              |
| score     | int64            | 0–8.54k                                   |
| selftext  | string           | length 0–40k                              |
| created   | timestamp[ns]    | 2023-04-01 04:30:41 – 2025-06-30 03:16:29 |
| url       | string           | length 0–878                              |
| author    | string           | length 3–20                               |
| domain    | string           | length 0–82                               |
| edited    | timestamp[ns]    | 1970-01-01 00:00:00 – 2025-06-26 17:30:18 |
| gilded    | int64            | 0–2                                       |
| gildings  | string (classes) | 7 values                                  |
| id        | string           | length 7                                  |
| locked    | bool             | 2 classes                                 |
| media     | string           | length 646–1.8k                           |
| name      | string           | length 10                                 |
| permalink | string           | length 33–82                              |
| spoiler   | bool             | 2 classes                                 |
| stickied  | bool             | 2 classes                                 |
| thumbnail | string           | length 4–213                              |
| ups       | int64            | 0–8.54k                                   |
| preview   | string           | length 301–5.01k                          |
Please help with model advice
2
I've asked a few questions about hardware and received some good input, for which I thank those who helped me. Now I need some direction on which model(s) to start messing with. My end goal is to have a model with STT & TTS capability (I'll be building or modding speakers to interact with it), either natively or through add-ons, that can also use the STT to interact with my Home Assistant so my smart home can be controlled completely locally. The use case would mostly involve inference, but with some generative tasks as well, plus smart home control. I currently have two Arc B580 GPUs at my disposal, so I need something that can work with Intel and be loaded into 24GB of VRAM. What model(s) would fit those requirements? I don't mind messing with different models, and ultimately I probably will on a separate box, but I want to start my journey going in a direction that gets me closer to my end goal. TIA
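For reference, a minimal sketch of the kind of pipeline described here: local STT feeding a local OpenAI-compatible server. It assumes faster-whisper for STT and a placeholder endpoint/model name; the TTS leg and the Home Assistant hookup are left out.

```python
# Minimal sketch: local speech -> text -> local LLM, assuming an
# OpenAI-compatible server (llama.cpp, vLLM, Ollama, ...) on port 8000
# and faster-whisper for STT. Model names/paths are placeholders.
from faster_whisper import WhisperModel
from openai import OpenAI

stt = WhisperModel("small", device="cpu", compute_type="int8")  # STT on CPU here; Arc support depends on the runtime
llm = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def voice_command(wav_path: str) -> str:
    # 1) Speech to text
    segments, _ = stt.transcribe(wav_path)
    text = " ".join(s.text for s in segments).strip()
    # 2) Ask the local model what to do with it
    reply = llm.chat.completions.create(
        model="local-model",  # whatever the server exposes
        messages=[{"role": "user", "content": text}],
    )
    return reply.choices[0].message.content

print(voice_command("command.wav"))
```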
2025-05-11T06:52:51
https://www.reddit.com/r/LocalLLaMA/comments/1kjuud2/please_help_with_model_advice/
Universal_Cognition
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjuud2
false
null
t3_1kjuud2
/r/LocalLLaMA/comments/1kjuud2/please_help_with_model_advice/
false
false
self
2
null
question regarding google adk and openwebui
5
Hi guys, I don't know enough to find the answer myself and I didn't find anything specific. I currently have OpenWebUI with Ollama running locally, and I read about Google ADK and was wondering if they can somehow work together, or next to each other. I'm not sure how they interact with each other. Maybe they do the same thing in different ways, or maybe it's something completely different and this is a stupid question, but I would be grateful for any help/clarification. TL;DR: can OpenWebUI be used with Google ADK?
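For context, a minimal sketch of how the two can sit next to each other: Ollama exposes an OpenAI-compatible endpoint, and both OpenWebUI and agent frameworks that speak the OpenAI protocol (or route through something like LiteLLM) can point at that same endpoint. The model name is a placeholder; the port is Ollama's default.

```python
# Minimal sketch: both OpenWebUI and an agent framework can share the same
# local Ollama instance through its OpenAI-compatible /v1 endpoint.
# "llama3" is a placeholder for whatever model you have pulled.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hi from the local model."}],
)
print(resp.choices[0].message.content)
```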
2025-05-11T07:19:03
https://www.reddit.com/r/LocalLLaMA/comments/1kjv8fq/question_regarding_google_adk_and_openwebui/
thefunnyape
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjv8fq
false
null
t3_1kjv8fq
/r/LocalLLaMA/comments/1kjv8fq/question_regarding_google_adk_and_openwebui/
false
false
self
5
null
I Built a Tool That Tells Me If a Side Project Will Ruin My Weekend
311
I used to lie to myself every weekend: “I’ll build this in an hour.” Spoiler: I never did. So I built a tool that tracks how long my features actually take — and uses a local LLM to estimate future ones. It logs my coding sessions, summarizes them, and tells me: "Yeah, this’ll eat your whole weekend. Don’t even start." It lives in my terminal and keeps me honest. Full writeup + code: [https://www.rafaelviana.io/posts/code-chrono](https://www.rafaelviana.io/posts/code-chrono)
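A rough sketch of the idea (not the author's actual code, which is in the linked writeup), assuming a local OpenAI-compatible endpoint and a placeholder model name: log how long features really took, then ask the local model to estimate a new one.

```python
# Rough sketch of the idea: log actual feature durations, then ask a local
# OpenAI-compatible model to estimate a new one. Endpoint and model name
# are placeholders.
import json
from openai import OpenAI

LOG = "sessions.json"  # e.g. [{"feature": "add OAuth", "minutes": 310}, ...]

def estimate(feature: str) -> str:
    history = json.load(open(LOG))
    prompt = (
        "Past features and how long they really took:\n"
        + "\n".join(f"- {h['feature']}: {h['minutes']} min" for h in history)
        + f"\n\nEstimate, in minutes, how long this will take: {feature}"
    )
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")
    out = client.chat.completions.create(
        model="qwen3:8b",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

print(estimate("dark mode toggle"))
```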
2025-05-11T07:24:15
https://www.reddit.com/r/LocalLLaMA/comments/1kjvb8i/i_built_a_tool_that_tells_me_if_a_side_project/
IntelligentHope9866
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjvb8i
false
null
t3_1kjvb8i
/r/LocalLLaMA/comments/1kjvb8i/i_built_a_tool_that_tells_me_if_a_side_project/
false
false
self
311
{'enabled': False, 'images': [{'id': 'C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY.png?width=108&crop=smart&auto=webp&s=4de9fbbcf76f0a85aaadef48912e9dac87be281e', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY.png?width=216&crop=smart&auto=webp&s=c7ed29a359d9102905849cb962cdd4f773b04669', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY.png?width=320&crop=smart&auto=webp&s=fe2daad8b78fc6e89c5892374b50f29b299724ff', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY.png?width=640&crop=smart&auto=webp&s=93e2da16ceee161f574343a64e33524bf18aeaeb', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY.png?width=960&crop=smart&auto=webp&s=e81a6142dab08ef3fbedbc6e2d1cbb9e36f17481', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY.png?width=1080&crop=smart&auto=webp&s=59697efdcc0a970c7bc21fa4ce128d11b7073f83', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/C5Xw9woG3qSI6aPQYvhEyVXww76r2hGCoJZUEdTQmxY.png?auto=webp&s=f61147d9b1448d80002933e159f826a9081e7e1a', 'width': 1536}, 'variants': {}}]}
Local LLM Build with CPU and DDR5: Thoughts on how to build a Cost Effective Server
13
**Local LLM Build with CPU and DDR5: Thoughts on How to Build a Cost-Effective Server**

The more cost-effective fixes/lessons learned are below. The build I made here isn't the most "cost effective" build; it was built as a hybrid server, which let me think through a better approach to building a CPU/DDR5-based LLM server. I renamed this post so it wouldn't mislead people into thinking I was proposing my current build as the most "cost effective" approach. It is mostly lessons I learned that I thought other people would find useful.

I recently completed what I believe is one of the more efficient local Large Language Model (LLM) builds, particularly if you prioritize these metrics:

* Low monthly power consumption costs
* Scalability for larger, smarter local LLMs

This setup is also versatile enough to support other use cases on the same server. For instance, I'm using Proxmox to host my gaming desktop, cybersecurity lab, TrueNAS (for storing YouTube content), Plex, and Kubernetes, all running smoothly alongside this build.

**Hardware Specifications:**

* **DDR5 RAM:** 576GB (4800MHz, 6 channels) - Total Cost: $3,500 (230.4 GB/s of bandwidth)
* **CPU:** AMD EPYC 8534P (64-core) - Cost: $2,000 USD

**Motherboard:** I opted for a high-end motherboard to support this build:

* **ASUS S14NA-U12** (imported from Germany). Features include 2x 25GbE NICs for future-proof networking.

**GPU Setup:** The GPU is currently passed through to my gaming PC VM, which houses an RTX 4070 Super. While this configuration doesn't directly benefit the LLM in this setup, it's useful for other workloads.

**Use Cases:**

1. **TrueNAS with OpenWebUI:** I primarily use this LLM with OpenWebUI to organize my thoughts, brainstorm ideas, and format content into markdown.
2. **Obsidian Copilot Integration:** The LLM is also used to summarize YouTube videos, conduct research, and perform various other tasks through Obsidian Copilot. It's an incredibly powerful tool for productivity.

This setup balances performance, cost-efficiency, and versatility, making it a solid choice for those looking to run demanding workloads locally.

# Current stats for LLMs

**Prompt:** what is the fastest way to get to china?
**System:** 64-core EPYC 8534P, 6-channel DDR5-4800 ECC (576GB)

**Notes on LLM performance:**

**qwen3:32b-fp16**

* total duration: 20m45.027432852s
* load duration: 17.510769ms
* prompt eval count: 17 token(s)
* prompt eval duration: 636.892108ms
* prompt eval rate: 26.69 tokens/s
* eval count: 1424 token(s)
* eval duration: 20m44.372337587s
* eval rate: 1.14 tokens/s

Notes: so far FP16 seems to be a very bad performer; speed is super slow.
**qwen3:235b-a22b-q8_0**

* total duration: 9m4.279665312s
* load duration: 18.578117ms
* prompt eval count: 18 token(s)
* prompt eval duration: 341.825732ms
* prompt eval rate: 52.66 tokens/s
* eval count: 1467 token(s)
* eval duration: 9m3.918470289s
* eval rate: 2.70 tokens/s

Note: will compare later, but it seemed similar to qwen3:235b in speed.

**deepseek-r1:671b**

Note: I previously ran this with the 1.58-bit quant version since I didn't have enough RAM; curious to see how it fares against that version now that I got the faulty RAM stick replaced.

* total duration: 9m0.065311955s
* load duration: 17.147124ms
* prompt eval count: 13 token(s)
* prompt eval duration: 1.664708517s
* prompt eval rate: 7.81 tokens/s
* eval count: 1265 token(s)
* eval duration: 8m58.382699408s
* eval rate: 2.35 tokens/s

**SIGJNF/deepseek-r1-671b-1.58bit:latest**

* total duration: 4m15.88028086s
* load duration: 16.422788ms
* prompt eval count: 13 token(s)
* prompt eval duration: 1.190251949s
* prompt eval rate: 10.92 tokens/s
* eval count: 829 token(s)
* eval duration: 4m14.672781876s
* eval rate: 3.26 tokens/s

Note: the 1.58-bit quant is almost twice as fast for me.

# Lessons Learned for a Local CPU and DDR5 LLM Build

# Key Recommendations

1. **CPU Selection**
   * **8xx-gen EPYC CPUs**: chosen for low TDP (thermal design power), resulting in minimal monthly electricity costs.
   * **9xx-gen EPYC CPUs (preferred option)**:
     * Support 12 memory channels per CPU and up to 6000 MHz DDR5 memory.
     * Significantly improve memory bandwidth, which is critical for LLM performance.
     * **Recommended model**: dual AMD EPYC 9355P 32C (high-performance but ~3x the cost of older models).
     * **Budget-friendly alternative**: dual EPYC 9124 (12 memory channels, ~$1200 total on eBay).
2. **Memory Configuration**
   * Use **32GB or 64GB DDR5 modules** (4800 MHz base speed).
   * Higher DDR5 speeds (up to 6000 MHz) with 9xx-series CPUs can alleviate memory bandwidth bottlenecks.
   * With the higher memory speed (6000 MHz) and bandwidth (1000+ GB/s), you could approach the speed of a 3090 with much more loading capacity and less power consumption (if you were to load up 4x 3090s, the power draw would be insane); a rough way to sanity-check this is sketched at the end of this post.
3. **Cost vs. Performance Trade-Offs**
   * Older EPYC models (e.g., 9124) offer a balance between memory-channel support and affordability.
   * Newer CPUs (e.g., 9355P) prioritize performance, but at a steep price premium.

# Thermal Management

* **DDR5 Cooling**:
  * Experimenting with **air cooling** for the DDR5 modules due to high thermal output ("ridiculously hot").
  * Plan to install **heat sinks and dedicated fans** for the memory slots adjacent to the CPUs.
* **Thermal Throttling Mitigation**:
  * Observed LLM response slowdowns after 5 seconds of sustained workload.
  * Suspected cause: DDR5/VRAM overheating.
  * **Action**: adding DDR5-specific cooling solutions to maintain sustained performance.

# Performance Observations

* **Memory Bandwidth Bottleneck**:
  * Even with newer CPUs, DDR5 bandwidth limitations remain a critical constraint for LLM workloads.
  * Upgrading to 6000 MHz DDR5 (with compatible 9xx EPYC CPUs) may reduce this bottleneck.
* **CPU Generation Impact**:
  * 9xx-series CPUs offer marginal performance gains over the 8xx series, but the benefits depend on DDR5 speed and cooling efficiency.

# Conclusion

* Prioritize DDR5 speed and cooling for LLM builds.
* Balance budget and performance by selecting CPUs with enough memory channels (12+ per CPU).
* Monitor thermal metrics during sustained workloads to prevent throttling.
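As a back-of-the-envelope check on the bandwidth argument above, here is a minimal sketch: theoretical DDR5 bandwidth is channels x MT/s x 8 bytes, and single-stream dense-model decode is roughly bandwidth divided by the bytes read per token. The ~65GB figure for qwen3:32b at FP16 is an approximation, and the result is an upper bound, not a prediction.

```python
# Back-of-the-envelope estimate: memory bandwidth vs. dense-model decode speed.
# Assumes decode is bandwidth-bound and the whole model is read once per token.
def ddr5_bandwidth_gbs(channels: int, mts: int) -> float:
    return channels * mts * 8 / 1000        # 8 bytes per transfer per channel

def est_tokens_per_s(bandwidth_gbs: float, model_size_gb: float) -> float:
    return bandwidth_gbs / model_size_gb    # upper bound for single-stream decode

siena = ddr5_bandwidth_gbs(6, 4800)          # ~230 GB/s, matches the build above
genoa = ddr5_bandwidth_gbs(12, 4800)         # ~460 GB/s per socket

# qwen3:32b at FP16 is roughly 65 GB of weights
print(f"6ch DDR5-4800:  {siena:.0f} GB/s -> ~{est_tokens_per_s(siena, 65):.1f} tok/s max")
print(f"12ch DDR5-4800: {genoa:.0f} GB/s -> ~{est_tokens_per_s(genoa, 65):.1f} tok/s max")
```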
2025-05-11T07:48:57
https://www.reddit.com/r/LocalLLaMA/comments/1kjvo1t/local_llm_build_with_cpu_and_ddr5_thoughts_on_how/
Xelendor1989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjvo1t
false
null
t3_1kjvo1t
/r/LocalLLaMA/comments/1kjvo1t/local_llm_build_with_cpu_and_ddr5_thoughts_on_how/
false
false
self
13
null
Local businesses search API for LLMs
1
[removed]
2025-05-11T07:56:59
https://www.reddit.com/r/LocalLLaMA/comments/1kjvs5a/local_businesses_search_api_for_llms/
EndComfortable2089
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjvs5a
false
null
t3_1kjvs5a
/r/LocalLLaMA/comments/1kjvs5a/local_businesses_search_api_for_llms/
false
false
self
1
null
Local business search API for LLMs
1
[removed]
2025-05-11T07:58:34
https://www.reddit.com/r/LocalLLaMA/comments/1kjvswx/local_business_search_api_for_llms/
EndComfortable2089
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjvswx
false
null
t3_1kjvswx
/r/LocalLLaMA/comments/1kjvswx/local_business_search_api_for_llms/
false
false
self
1
null
Local business search API for LLMs
1
[removed]
2025-05-11T08:01:44
https://www.reddit.com/r/LocalLLaMA/comments/1kjvuoc/local_business_search_api_for_llms/
EndComfortable2089
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjvuoc
false
null
t3_1kjvuoc
/r/LocalLLaMA/comments/1kjvuoc/local_business_search_api_for_llms/
false
false
self
1
null
Lenovo p520 GPU question
1
Thinking of getting a P520 with a 690W PSU and want to run dual GPUs. The problem is the PSU only has 2x 6+2 cables, which limits my choice to GPUs with a single 8-pin connection. But what if I just used one PCIe cable per card, meaning not all connectors would get filled? I would power-limit the GPUs anyway. Would there be any danger of a GPU trying to overdraw power from a single cable? The P520 in question (200€): Xeon W-2223, 690W PSU, 16GB DDR4 (would upgrade). The GPUs in question: either 2x A770s or 2x RX 6800s (8-pin + 6-pin connection).
2025-05-11T08:03:22
https://www.reddit.com/r/LocalLLaMA/comments/1kjvvju/lenovo_p520_gpu_question/
legit_split_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjvvju
false
null
t3_1kjvvju
/r/LocalLLaMA/comments/1kjvvju/lenovo_p520_gpu_question/
false
false
self
1
null
How I Run Gemma 3 27B on an RX 7800 XT 16GB Locally!
49
Hey everyone! I've been successfully running the **Gemma 3 27B** model locally on my **RX 7800 XT 16GB** and wanted to share my setup and performance results. It's amazing to be able to run such a powerful model entirely on the GPU! I opted for the **`gemma-3-27B-it-qat-GGUF`** version provided by the [lmstudio-community](https://huggingface.co/lmstudio-community) on HuggingFace. The size of this GGUF model is perfect for my card, allowing it to fit entirely in VRAM. **My Workflow:** I mostly use LM Studio for day-to-day interaction (super easy!), but I've been experimenting with running it directly via `llama.cpp` server for a bit more control and benchmarking. Here's a breakdown of my rig: * **Case:** Lian Li A4-H2O * **Motherboard:** MSI H510I * **CPU:** Intel Core i5-11400 * **RAM:** Netac 32GB DDR4 3200MHz * **GPU:** Sapphire RX 7800 XT Pulse 16GB * **Cooler:** ID-Cooling Dashflow 240 Basic * **PSU:** Cooler Master V750 SFX Gold ### Running Gemma with Llama.cpp I’m using parameters [recommended by the Unsloth team](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune) for inference and aiming for a 16K context size. This is a Windows setup. Here’s the command I'm using to launch the server: ```cmd ~\.llama.cpp\llama-cpp-bin-win-hip-x64\llama-server ^ --host 0.0.0.0 ^ --port 1234 ^ --log-file llama-server.log ^ --alias "gemma-3-27b-it-qat" ^ --model C:\HuggingFace\lmstudio-community\gemma-3-27B-it-qat-GGUF\gemma-3-27B-it-QAT-Q4_0.gguf ^ --threads 5 ^ --ctx-size 16384 ^ --n-gpu-layers 63 ^ --repeat-penalty 1.0 ^ --temp 1.0 ^ --min-p 0.01 ^ --top-k 64 ^ --top-p 0.95 ^ --ubatch-size 512 ``` **Important Notes on Parameters:** * **`--host 0.0.0.0`**: Allows access from other devices on the network. * **`--port 1234`**: The port the server will run on. * **`--log-file llama-server.log`**: Saves server logs for debugging. * **`--alias "gemma-3-27b-it-qat"`**: A friendly name for the model. * **`--model`**: Path to the GGUF model file. *Make sure to adjust this to your specific directory.* * **`--threads 5`**: Number of CPU threads to use, based on your CPU thread count - 1. * **`--ctx-size 16384`**: Sets the context length to 16K. Experiment with this based on your RAM! Higher context = more VRAM usage. * **`--n-gpu-layers 63`**: This offloads all layers to the GPU. With 16GB of VRAM on the 7800 XT, I'm able to push this to the maximum. Lower this value if you run into OOM errors (Out of Memory). * **`--repeat-penalty 1.0`**: Avoids repetitive output. * **`--temp 1.0`**: Sampling temperature. * **`--min-p 0.01`**: Minimum probability. * **`--top-k 64`**: Top-k sampling. * **`--top-p 0.95`**: Top-p sampling. * **`--ubatch-size 512`**: Increases batch size for faster inference. * **KV Cache:** I tested both F16 and Q8_0 KV Cache for performance comparison. **I used these parameters based on the recommendations provided by the Unsloth team for Gemma 3 inference:** [https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune](https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-tune) **Benchmark Results (Prompt: "What is the reason of life?")** I ran a simple benchmark to get a sense of the performance. Here's what I'm seeing: | Runtime | KV Cache | Tokens/Second (t/s) | |---------|----------|---------------------| | ROCm | F16 | 17.4 | | ROCm | Q8_0 | 20.8 | | Vulkan | F16 | 14.8 | | Vulkan | Q8_0 | 9.9 | **Observations:** * **ROCm outperforms Vulkan in my setup.** I'm not sure why, but it's consistent across multiple runs. 
* **Q8_0 quantization provides a speed boost compared to F16**, though with a potential (small) tradeoff in quality. * The 7800XT can really push the 27B model, and the results are impressive. **Things to Note:** * Your mileage may vary depending on your system configuration and specific model quantization. * Ensure you have the latest AMD drivers installed. * Experiment with the parameters to find the optimal balance of speed and quality for your needs. * ROCm support can be tricky to set up on Windows. Make sure you have it configured correctly. I'm still exploring optimizations and fine-tuning, but I wanted to share these results in case it helps anyone else thinking about running Gemma 3 27B on similar hardware with 16GB GPU. Let me know if you have any questions or suggestions in the comments. Happy inferencing!
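If you want to reproduce the tokens/s numbers outside LM Studio, here is a minimal sketch against the llama-server command above (port 1234, OpenAI-compatible endpoint). It trusts the server's reported token counts and includes prompt processing in the elapsed time, so it slightly understates pure generation speed.

```python
# Minimal sketch for measuring generation speed against the llama-server
# started above (OpenAI-compatible endpoint on port 1234).
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")

t0 = time.time()
resp = client.chat.completions.create(
    model="gemma-3-27b-it-qat",
    messages=[{"role": "user", "content": "What is the reason of life?"}],
    max_tokens=512,
)
elapsed = time.time() - t0
generated = resp.usage.completion_tokens
# Includes prompt-processing time, so this is a slightly conservative t/s figure.
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} t/s")
```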
2025-05-11T08:48:00
https://www.reddit.com/r/LocalLLaMA/comments/1kjwi3w/how_i_run_gemma_3_27b_on_an_rx_7800_xt_16gb/
COBECT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjwi3w
false
null
t3_1kjwi3w
/r/LocalLLaMA/comments/1kjwi3w/how_i_run_gemma_3_27b_on_an_rx_7800_xt_16gb/
false
false
self
49
{'enabled': False, 'images': [{'id': 'YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk.png?width=108&crop=smart&auto=webp&s=4806946958fc31683c6cd66cb496f74c3ac3d117', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk.png?width=216&crop=smart&auto=webp&s=a362e43472e94a5dbeae7aa492980478d0196dba', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk.png?width=320&crop=smart&auto=webp&s=756728c51dc57cbc8ea73da643e0dc290da88240', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk.png?width=640&crop=smart&auto=webp&s=fd7afd14bff93ab9d650b0100cbe7327e075a979', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk.png?width=960&crop=smart&auto=webp&s=ab9744611382a1b9c102504221f1b27751ba7f1b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk.png?width=1080&crop=smart&auto=webp&s=3a4ff0ae5de3057cb0bfd834212a93654bec6b84', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YBA0XmmTrNTZwM2bnnT0TTRm1FHzAbAQ0cuZqdAmNlk.png?auto=webp&s=eeb2b3b81b9962bfcebf7166b29b8c65cb0e6a10', 'width': 1200}, 'variants': {}}]}
Smaller LLMs and On-device AI for Energy Efficiency?
1
[removed]
2025-05-11T09:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1kjx8bg/smaller_llms_and_ondevice_ai_for_energy_efficiency/
sherlockAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjx8bg
false
null
t3_1kjx8bg
/r/LocalLLaMA/comments/1kjx8bg/smaller_llms_and_ondevice_ai_for_energy_efficiency/
false
false
self
1
null
Energy and On-device AI?
0
What companies are telling the US Senate about energy is pretty accurate, I believe. Governments across the world often run on 5-year plans, so most of our future capacity is already planned. I see big tech building nuclear power stations to feed these systems, but I'm pretty sure of the regulatory/environmental hurdles. On the other hand, a host of AI-native apps is expected to arrive (ChatGPT, Claude Desktop, and more), catering to a massive population across the globe. The Qwen 3 series is very exciting for these kinds of use cases!
2025-05-11T09:41:49
https://www.reddit.com/r/LocalLLaMA/comments/1kjx9ab/energy_and_ondevice_ai/
sherlockAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjx9ab
false
null
t3_1kjx9ab
/r/LocalLLaMA/comments/1kjx9ab/energy_and_ondevice_ai/
false
false
self
0
null
Is an EPYC-based system good for fine-tuning?
1
[removed]
2025-05-11T10:04:15
https://www.reddit.com/r/LocalLLaMA/comments/1kjxlca/is_epycbased_system_good_for_finetuneing/
Winter_Claim9156
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjxlca
false
null
t3_1kjxlca
/r/LocalLLaMA/comments/1kjxlca/is_epycbased_system_good_for_finetuneing/
false
false
self
1
null
You can use smaller 4-8B models to index code repositories and save on tokens when calling frontier models through APIs.
1
[removed]
2025-05-11T10:05:57
https://www.reddit.com/r/LocalLLaMA/comments/1kjxm8a/you_can_use_smaller_48b_models_to_index_code/
kms_dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjxm8a
false
null
t3_1kjxm8a
/r/LocalLLaMA/comments/1kjxm8a/you_can_use_smaller_48b_models_to_index_code/
false
false
self
1
null
Tinygrad eGPU repo for Apple Silicon - Also huge for Ai Max 395?
2
As a reddit user reported earlier today, George Hotz dropped a very powerful update to the tinygrad master repo, that allows the connection of an AMD eGPU to Apple Silicon Macs. Since it is using libusb under the hood, this should also work on Windows and Linux. This could be particularly interesting to add GPU capabilities to Ai Mini PCs like the ones from Framework, Asus and other manufacturers, running the AMD Ai Max 395 with up to 128GB of unified Memory. What's your take? How would you put this to good use? Reddit Post: https://www.reddit.com/r/LocalLLaMA/s/lVfr7TcGph Github: https://github.com/tinygrad/tinygrad X: https://x.com/__tinygrad__/status/1920960070055080107
2025-05-11T10:46:43
https://www.reddit.com/r/LocalLLaMA/comments/1kjy7ip/tingrad_egpu_repo_for_apple_silicon_also_huge_for/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjy7ip
false
null
t3_1kjy7ip
/r/LocalLLaMA/comments/1kjy7ip/tingrad_egpu_repo_for_apple_silicon_also_huge_for/
false
false
self
2
null
Tinygrad eGPU for Apple Silicon - Also huge for AMD Ai Max 395?
43
As a reddit user reported earlier today, George Hotz dropped a very powerful update to the tinygrad master repo, that allows the connection of an AMD eGPU to Apple Silicon Macs. Since it is using libusb under the hood, this should also work on Windows and Linux. This could be particularly interesting to add GPU capabilities to Ai Mini PCs like the ones from Framework, Asus and other manufacturers, running the AMD Ai Max 395 with up to 128GB of unified Memory. What's your take? How would you put this to good use? Reddit Post: https://www.reddit.com/r/LocalLLaMA/s/lVfr7TcGph Github: https://github.com/tinygrad/tinygrad X: https://x.com/tinygrad/status/1920960070055080107
2025-05-11T10:50:03
https://www.reddit.com/r/LocalLLaMA/comments/1kjy99w/tinygrad_egpu_for_apple_silicon_also_huge_for_amd/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjy99w
false
null
t3_1kjy99w
/r/LocalLLaMA/comments/1kjy99w/tinygrad_egpu_for_apple_silicon_also_huge_for_amd/
false
false
self
43
{'enabled': False, 'images': [{'id': 'dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs.png?width=108&crop=smart&auto=webp&s=403bcef55537a7365b3afaa1bbf2b63e94ab5f49', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs.png?width=216&crop=smart&auto=webp&s=8b3d07dbb4258e7bb6b2db8ac183f2ab93a0b8a4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs.png?width=320&crop=smart&auto=webp&s=b3cd052ef597c9beebf024e46c79f0545fee5762', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs.png?width=640&crop=smart&auto=webp&s=06ba2cf963763cfcb62e215a19388aef3a379688', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs.png?width=960&crop=smart&auto=webp&s=718f5cb05f00f492ff0db8559596d17087b4ff2b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs.png?width=1080&crop=smart&auto=webp&s=db3f06bdbf85e8c9ed6eb722d70b80f0320ac2fb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dJxSGg0fuE8xibkjJqlzIp9aIYbl2cNXVV6qrokKmTs.png?auto=webp&s=02be487fdea9e5ef676a37eefa05974fa0c53f9a', 'width': 1200}, 'variants': {}}]}
VLM recommendation
1
[removed]
2025-05-11T10:59:53
https://www.reddit.com/r/LocalLLaMA/comments/1kjyelv/vlm_recommendation/
CreepyWheel4595
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjyelv
false
null
t3_1kjyelv
/r/LocalLLaMA/comments/1kjyelv/vlm_recommendation/
false
false
self
1
null
Budget ai rig, 2x k80, 2x m40, or p4?
0
For the price of a single P4 I can get either 2x K80s or 2x M40s, but I've heard that they're outdated. Buying a P40 is out of reach for my budget, so I'm stuck with these options for now.
2025-05-11T11:05:15
https://www.reddit.com/r/LocalLLaMA/comments/1kjyhvb/budget_ai_rig_2x_k80_2x_m40_or_p4/
Fakkle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjyhvb
false
null
t3_1kjyhvb
/r/LocalLLaMA/comments/1kjyhvb/budget_ai_rig_2x_k80_2x_m40_or_p4/
false
false
self
0
null
Free Real time AI speech-to-text better than WisperFlow?
17
I'm currently using Whisper Tiny / V3 Turbo via Buzz and it takes maybe 3-5s to translate my text, and the text gets dropped in Buzz instead of whichever AI app I'm using, say AI Studio. Which other app has a better UI and faster AI transcribing capabilities? Purpose is to have voice chat, but via AI Studio.
2025-05-11T12:19:36
https://www.reddit.com/r/LocalLLaMA/comments/1kjzq9s/free_real_time_ai_speechtotext_better_than/
milkygirl21
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjzq9s
false
null
t3_1kjzq9s
/r/LocalLLaMA/comments/1kjzq9s/free_real_time_ai_speechtotext_better_than/
false
false
self
17
null
Racked server rig
1
[removed]
2025-05-11T12:24:22
https://www.reddit.com/r/LocalLLaMA/comments/1kjztf0/racked_server_rig/
bigtechguytoronto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjztf0
false
null
t3_1kjztf0
/r/LocalLLaMA/comments/1kjztf0/racked_server_rig/
false
false
self
1
null
For fun: come up with emotionally overloaded LLM model names in cyberpunk style
1
[removed]
2025-05-11T12:27:16
https://www.reddit.com/r/LocalLLaMA/comments/1kjzvc0/for_fun_come_up_with_emotionally_overloaded_llm/
Tonylu99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kjzvc0
false
null
t3_1kjzvc0
/r/LocalLLaMA/comments/1kjzvc0/for_fun_come_up_with_emotionally_overloaded_llm/
false
false
self
1
{'enabled': False, 'images': [{'id': 'pj3HQn-Z5Ke9CZkpkDvmdRPEw8sc2wNFU6FMFGYXYng', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pj3HQn-Z5Ke9CZkpkDvmdRPEw8sc2wNFU6FMFGYXYng.jpeg?width=108&crop=smart&auto=webp&s=d78bf6e0d6e273b318a819f33b63045e2bb324b9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/pj3HQn-Z5Ke9CZkpkDvmdRPEw8sc2wNFU6FMFGYXYng.jpeg?width=216&crop=smart&auto=webp&s=e323baaaee848aeb2cc22e17b55a5343e150b9cd', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/pj3HQn-Z5Ke9CZkpkDvmdRPEw8sc2wNFU6FMFGYXYng.jpeg?width=320&crop=smart&auto=webp&s=d19133fc8d0c24d62e21821a5e09ac0ba9888c2e', 'width': 320}], 'source': {'height': 544, 'url': 'https://external-preview.redd.it/pj3HQn-Z5Ke9CZkpkDvmdRPEw8sc2wNFU6FMFGYXYng.jpeg?auto=webp&s=b041c71d86d23bec3efcf0d7bb55e0d5e131afeb', 'width': 544}, 'variants': {}}]}
Speed Comparison with Qwen3-32B-q8_0, Ollama, Llama.cpp, 2x3090, M3Max
60
Requested by /u/MLDataScientist, here is a comparison test between Ollama and Llama.cpp on 2x RTX 3090 and an M3 Max with 64GB, using Qwen3-32B-q8_0.

Just note, this was primarily meant to compare Ollama and Llama.cpp with the Qwen3-32B model, which uses a dense architecture. If interested, I ran a [separate benchmark using the Qwen MoE architecture.](https://www.reddit.com/r/LocalLLaMA/comments/1kgxhdt/ollama_vs_llamacpp_on_2x3090_and_m3max_using/) There's also a [comparison with M3Max and RTX 4090 on MLX, Llama.cpp, VLLM, and SGLang.](https://www.reddit.com/r/LocalLLaMA/comments/1ke26sl/another_attempt_to_measure_speed_for_qwen3_moe_on/)

### Metrics

To ensure consistency, I used a custom Python script that sends requests to the server via the OpenAI-compatible API. Metrics were calculated as follows:

* Time to First Token (TTFT): measured from the start of the streaming request to the first streaming event received.
* Prompt Processing Speed (PP): number of prompt tokens divided by TTFT.
* Token Generation Speed (TG): number of generated tokens divided by (total duration - TTFT).

The displayed results were truncated to two decimal places, but the calculations used full precision. I made the script prepend 40% new material at the beginning of each next, longer prompt to avoid caching effects.

Here's my script for anyone interested: https://github.com/chigkim/prompt-test

It uses the OpenAI API, so it should work in a variety of setups. Also, this tests one request at a time, so multiple parallel requests could result in higher throughput in different tests.

### Setup

Both use the same q8_0 model from the Ollama library with flash attention. I'm sure you can further optimize Llama.cpp, but I copied the flags from the Ollama log in order to keep it consistent, so both use exactly the same flags when loading the model.

`./build/bin/llama-server --model ~/.ollama/models/blobs/sha256... --ctx-size 22000 --batch-size 512 --n-gpu-layers 65 --threads 32 --flash-attn --parallel 1 --tensor-split 33,32 --port 11434`

* Llama.cpp: 5339 (3b24d26c)
* Ollama: 0.6.8

Each row in the results represents a test (a specific combination of machine, engine, and prompt length). There are 4 tests per prompt length.

* Setup 1: 2xRTX3090, Llama.cpp
* Setup 2: 2xRTX3090, Ollama
* Setup 3: M3Max, Llama.cpp
* Setup 4: M3Max, Ollama

### Result

Please zoom in to see the graph better.
*Processing img 26e05b1zd50f1...* | Machine | Engine | Prompt Tokens | PP/s | TTFT | Generated Tokens | TG/s | Duration | | ------- | ------ | ------------- | ---- | ---- | ---------------- | ---- | -------- | | RTX3090 | LCPP | 264 | 1033.18 | 0.26 | 968 | 21.71 | 44.84 | | RTX3090 | Ollama | 264 | 853.87 | 0.31 | 1041 | 21.44 | 48.87 | | M3Max | LCPP | 264 | 153.63 | 1.72 | 739 | 10.41 | 72.68 | | M3Max | Ollama | 264 | 152.12 | 1.74 | 885 | 10.35 | 87.25 | | RTX3090 | LCPP | 450 | 1184.75 | 0.38 | 1154 | 21.66 | 53.65 | | RTX3090 | Ollama | 450 | 1013.60 | 0.44 | 1177 | 21.38 | 55.51 | | M3Max | LCPP | 450 | 171.37 | 2.63 | 1273 | 10.28 | 126.47 | | M3Max | Ollama | 450 | 169.53 | 2.65 | 1275 | 10.33 | 126.08 | | RTX3090 | LCPP | 723 | 1405.67 | 0.51 | 1288 | 21.63 | 60.06 | | RTX3090 | Ollama | 723 | 1292.38 | 0.56 | 1343 | 21.31 | 63.59 | | M3Max | LCPP | 723 | 164.83 | 4.39 | 1274 | 10.29 | 128.22 | | M3Max | Ollama | 723 | 163.79 | 4.41 | 1204 | 10.27 | 121.62 | | RTX3090 | LCPP | 1219 | 1602.61 | 0.76 | 1815 | 21.44 | 85.42 | | RTX3090 | Ollama | 1219 | 1498.43 | 0.81 | 1445 | 21.35 | 68.49 | | M3Max | LCPP | 1219 | 169.15 | 7.21 | 1302 | 10.19 | 134.92 | | M3Max | Ollama | 1219 | 168.32 | 7.24 | 1686 | 10.11 | 173.98 | | RTX3090 | LCPP | 1858 | 1734.46 | 1.07 | 1375 | 21.37 | 65.42 | | RTX3090 | Ollama | 1858 | 1635.95 | 1.14 | 1293 | 21.13 | 62.34 | | M3Max | LCPP | 1858 | 166.81 | 11.14 | 1411 | 10.09 | 151.03 | | M3Max | Ollama | 1858 | 166.96 | 11.13 | 1450 | 10.10 | 154.70 | | RTX3090 | LCPP | 2979 | 1789.89 | 1.66 | 2000 | 21.09 | 96.51 | | RTX3090 | Ollama | 2979 | 1735.97 | 1.72 | 1628 | 20.83 | 79.88 | | M3Max | LCPP | 2979 | 162.22 | 18.36 | 2000 | 9.89 | 220.57 | | M3Max | Ollama | 2979 | 161.46 | 18.45 | 1643 | 9.88 | 184.68 | | RTX3090 | LCPP | 4669 | 1791.05 | 2.61 | 1326 | 20.77 | 66.45 | | RTX3090 | Ollama | 4669 | 1746.71 | 2.67 | 1592 | 20.47 | 80.44 | | M3Max | LCPP | 4669 | 154.16 | 30.29 | 1593 | 9.67 | 194.94 | | M3Max | Ollama | 4669 | 153.03 | 30.51 | 1450 | 9.66 | 180.55 | | RTX3090 | LCPP | 7948 | 1756.76 | 4.52 | 1255 | 20.29 | 66.37 | | RTX3090 | Ollama | 7948 | 1706.41 | 4.66 | 1404 | 20.10 | 74.51 | | M3Max | LCPP | 7948 | 140.11 | 56.73 | 1748 | 9.20 | 246.81 | | M3Max | Ollama | 7948 | 138.99 | 57.18 | 1650 | 9.18 | 236.90 | | RTX3090 | LCPP | 12416 | 1648.97 | 7.53 | 2000 | 19.59 | 109.64 | | RTX3090 | Ollama | 12416 | 1616.69 | 7.68 | 2000 | 19.30 | 111.30 | | M3Max | LCPP | 12416 | 127.96 | 97.03 | 1395 | 8.60 | 259.27 | | M3Max | Ollama | 12416 | 127.08 | 97.70 | 1778 | 8.57 | 305.14 | | RTX3090 | LCPP | 20172 | 1481.92 | 13.61 | 598 | 18.72 | 45.55 | | RTX3090 | Ollama | 20172 | 1458.86 | 13.83 | 1627 | 18.30 | 102.72 | | M3Max | LCPP | 20172 | 111.18 | 181.44 | 1771 | 7.58 | 415.24 | | M3Max | Ollama | 20172 | 111.80 | 180.43 | 1372 | 7.53 | 362.54 |
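For reference, a minimal sketch of the TTFT/PP/TG computation described under Metrics above (the full script is in the linked repo; the endpoint, model name, and the one-token-per-chunk assumption are simplifications):

```python
# Sketch of the TTFT / PP / TG metrics described above, against any
# OpenAI-compatible streaming endpoint. prompt_tokens is taken as known
# (the real script counts tokens itself).
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")

def measure(model: str, prompt: str, prompt_tokens: int):
    start = time.time()
    ttft = None
    generated = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if ttft is None:
            ttft = time.time() - start          # time to first streaming event
        if chunk.choices and chunk.choices[0].delta.content:
            generated += 1                      # roughly one token per chunk
    total = time.time() - start
    pp = prompt_tokens / ttft                   # prompt processing speed
    tg = generated / (total - ttft)             # token generation speed
    return ttft, pp, tg

# e.g. ttft, pp, tg = measure("qwen3:32b-q8_0", some_prompt, prompt_tokens=264)
```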
2025-05-11T12:59:11
https://www.reddit.com/r/LocalLLaMA/comments/1kk0ghi/speed_comparison_with_qwen332bq8_0_ollama/
chibop1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk0ghi
false
null
t3_1kk0ghi
/r/LocalLLaMA/comments/1kk0ghi/speed_comparison_with_qwen332bq8_0_ollama/
false
false
self
60
{'enabled': False, 'images': [{'id': '0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4.png?width=108&crop=smart&auto=webp&s=d0850b15154bb5ea1d95c35f78336e7ef72ec8c6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4.png?width=216&crop=smart&auto=webp&s=e3ac71de76023733197c0f566417e33d3e52c7dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4.png?width=320&crop=smart&auto=webp&s=19f67122e762f2c52b887ea24b2531ad1ef6459f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4.png?width=640&crop=smart&auto=webp&s=31efacc09be0328b157c234ea9c66f1a2b6a502f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4.png?width=960&crop=smart&auto=webp&s=a508024ac8b386196aab2da8b2269899ef954242', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4.png?width=1080&crop=smart&auto=webp&s=f6ff057b80bbe245958e596a99af0698c83e0265', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0AoL0Ngk25sAFEZNycULdZ6t3dw_iuuJfLHlrfoIvF4.png?auto=webp&s=8c4f73570e3b1900a838f89f0ed4f24951c7a69a', 'width': 1200}, 'variants': {}}]}
dual cards - inference speed question
0
Hi all, two questions: 1) I have an RTX A6000 Ada and an A5000 (24GB, non-Ada) card in my AI workstation, and I am finding that filling the memory with large models across the two cards gives lackluster performance in LM Studio. Is the VRAM gain I'm achieving being neutered by the lower-spec card in my setup? 2) If so, as my main goal is Python coding, which model will be most performant on my A6000 Ada alone?
2025-05-11T13:16:31
https://www.reddit.com/r/LocalLLaMA/comments/1kk0srx/dual_cards_inference_speed_question/
JPYCrypto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk0srx
false
null
t3_1kk0srx
/r/LocalLLaMA/comments/1kk0srx/dual_cards_inference_speed_question/
false
false
self
0
null
Looking for an LLM to create plain text from PDFs
1
[removed]
2025-05-11T13:38:37
https://www.reddit.com/r/LocalLLaMA/comments/1kk18eq/looking_for_a_llm_to_create_plane_text_from_pdfs/
Feeling-List-5637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk18eq
false
null
t3_1kk18eq
/r/LocalLLaMA/comments/1kk18eq/looking_for_a_llm_to_create_plane_text_from_pdfs/
false
false
self
1
null
Time to First Token and Tokens/second
10
I have been seeing lots of benchmarking lately. I just want to make sure that my understanding is correct. TTFT measures the latency of prefill, and t/s measures the average speed of token generation after prefill. Both of them depend on the context size. Let's assume there is a KV cache. Prefill walks through the prompt, and its runtime latency is O(n^2), where n is the number of input tokens. t/s depends on the context size: each decode step is O(n), where n is the current context size, so as the context gets longer, generation gets slower.
2025-05-11T13:45:48
https://www.reddit.com/r/LocalLLaMA/comments/1kk1dkh/time_to_first_token_and_tokenssecond/
TheTideRider
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk1dkh
false
null
t3_1kk1dkh
/r/LocalLLaMA/comments/1kk1dkh/time_to_first_token_and_tokenssecond/
false
false
self
10
null
Own a RTX3080 10GB, is it good if I sidegrade it to RTX 5060Ti 16GB?
16
Owning an RTX 3080 10GB means sacrificing on VRAM: output gets very slow if the model exceeds the VRAM limit and starts offloading layers to the CPU. I'm not planning to get an RTX 3090, as it's still very expensive even on the used market. The question is, how worthwhile is the RTX 5060 Ti 16GB compared to the RTX 3080 10GB? I can sell the RTX 3080 on the second-hand market and get a new RTX 5060 Ti 16GB for roughly the same price.
2025-05-11T13:49:09
https://www.reddit.com/r/LocalLLaMA/comments/1kk1fzx/own_a_rtx3080_10gb_is_it_good_if_i_sidegrade_it/
akachan1228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk1fzx
false
null
t3_1kk1fzx
/r/LocalLLaMA/comments/1kk1fzx/own_a_rtx3080_10gb_is_it_good_if_i_sidegrade_it/
false
false
self
16
null
What happens when GPT-4o reads a picture out loud?
1
2025-05-11T14:17:16
https://v.redd.it/8of0f46mv50f1
Worried-Signal-2992
v.redd.it
1970-01-01T00:00:00
0
{}
1kk21tp
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/8of0f46mv50f1/DASHPlaylist.mpd?a=1749565050%2CY2Q2YTViMWVhMzhkZDY5N2JhYzM5NTdlNjEwNmU4MTkyMWEwNDkzNTcwZGY3YjMyYTQ1N2UxZDIxYTkzNzc3Mg%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/8of0f46mv50f1/DASH_480.mp4?source=fallback', 'has_audio': True, 'height': 498, 'hls_url': 'https://v.redd.it/8of0f46mv50f1/HLSPlaylist.m3u8?a=1749565050%2CM2IxNTAzMDFjYjc1NWNmM2JlNWYyODBlZDhmMTc0ZDk3YjQ0NGU4NjQ5MzBmZmRkZDNjZTBlMGU1MjNlOGJiZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8of0f46mv50f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 480}}
t3_1kk21tp
/r/LocalLLaMA/comments/1kk21tp/what_happens_when_gpt4o_reads_a_picture_out_loud/
false
false
https://external-preview…e0320c0c8d009a6d
1
{'enabled': False, 'images': [{'id': 'NzQzejIzNm12NTBmMRgae0-wbN3OIGlX_mIUl2uvoC5-DTfUeGYC7Wt8JViE', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/NzQzejIzNm12NTBmMRgae0-wbN3OIGlX_mIUl2uvoC5-DTfUeGYC7Wt8JViE.png?width=108&crop=smart&format=pjpg&auto=webp&s=a0d40ae2e9e9d26f96b7bd60066ada82f44718e3', 'width': 108}, {'height': 223, 'url': 'https://external-preview.redd.it/NzQzejIzNm12NTBmMRgae0-wbN3OIGlX_mIUl2uvoC5-DTfUeGYC7Wt8JViE.png?width=216&crop=smart&format=pjpg&auto=webp&s=6da5607514143b0f76d841907f7876153af8a54a', 'width': 216}, {'height': 331, 'url': 'https://external-preview.redd.it/NzQzejIzNm12NTBmMRgae0-wbN3OIGlX_mIUl2uvoC5-DTfUeGYC7Wt8JViE.png?width=320&crop=smart&format=pjpg&auto=webp&s=518e2f87e1eea5077d434da67ffbef85cfece881', 'width': 320}], 'source': {'height': 564, 'url': 'https://external-preview.redd.it/NzQzejIzNm12NTBmMRgae0-wbN3OIGlX_mIUl2uvoC5-DTfUeGYC7Wt8JViE.png?format=pjpg&auto=webp&s=28813db8391953c7775605ad2ed140e63050201e', 'width': 544}, 'variants': {}}]}
Hardware specs comparison to host Mistral small 24B
26
I am comparing hardware specifications for a customer who wants to host Mistral Small 24B locally for inference. He would like to know if it's worth buying a GPU server instead of consuming the Mistral AI API, and if so, when the breakeven point occurs. Here are my assumptions:

* Model weights are FP16 and the 128k context window is fully utilized.
* The formula to compute the required (KV-cache) VRAM is the product of (sketched in code at the end of this post):
  * Context length
  * Number of layers
  * Number of key-value heads
  * Head dimension
  * 2 (2 bytes per float16)
  * 2 (one for keys, one for values)
  * Number of users
* To calculate the upper bound, the number of users is the maximum number of concurrent users the hardware can handle with the full 128k-token context window.
* The use of an AI agent consumes approximately 25 times the number of tokens compared to a normal chat (Source: [https://www.businessinsider.com/ai-super-agents-enough-computing-power-openai-deepseek-2025-3](https://www.businessinsider.com/ai-super-agents-enough-computing-power-openai-deepseek-2025-3))

My comparison resulted in this table. The price of electricity for professionals here is about 0.20€/kWh, all taxes included. Because of this, the breakeven point is at least 8.3 years for the Nvidia DGX A100. The Apple Mac Studio M3 Ultra reaches breakeven after 6 months, but it is significantly slower than the Nvidia and AMD products.

Given these data, I don't think it's worth investing in a GPU server, unless the customer absolutely requires privacy. Do you think the numbers I found are reasonable? Were my assumptions too far off? I hope this helps the community.

https://preview.redd.it/c0140tgw960f1.png?width=2427&format=png&auto=webp&s=7fdf2d5f2b15d88ef4621a830436459baebbaf3e

Below are some graphs:

https://preview.redd.it/ghlcd725b60f1.png?width=1187&format=png&auto=webp&s=804fe43c28dab4a4cde53a1df5d1ca6b67df3a67

https://preview.redd.it/3f5x0dk5b60f1.png?width=1188&format=png&auto=webp&s=0c799d2e711a84b1355cd3b4515560a4450a3e0e

https://preview.redd.it/7emca9v5b60f1.png?width=1187&format=png&auto=webp&s=f7295ff311460e0d45dfa3ddd671e188840394c6

https://preview.redd.it/8bl4pcb6b60f1.png?width=1186&format=png&auto=webp&s=2ed692b1afc9caa440470f8779b44d46130de02f

https://preview.redd.it/94h5rso6b60f1.png?width=1186&format=png&auto=webp&s=1fc7f3f07abc2f5c9f236e30ff20f300446f3f0c

https://preview.redd.it/wm0y3j37b60f1.png?width=1185&format=png&auto=webp&s=7af8a86a7fbee60b5028349525fe2430ce2313d4
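A minimal sketch of the KV-cache formula and the breakeven arithmetic described above; the layer/head numbers and cost figures in the example are illustrative placeholders, not checked against the actual Mistral Small 24B config or the quotes in the table.

```python
# Sketch of the assumptions above: KV-cache VRAM per the stated formula,
# plus a simple breakeven calculation. All example numbers are placeholders.
def kv_cache_gb(ctx_len, n_layers, n_kv_heads, head_dim, n_users):
    bytes_total = ctx_len * n_layers * n_kv_heads * head_dim * 2 * 2 * n_users
    return bytes_total / 1e9      # 2 bytes per fp16 value, x2 for keys and values

def breakeven_months(hardware_cost_eur, kwh_per_month, api_cost_per_month_eur,
                     eur_per_kwh=0.20):
    monthly_saving = api_cost_per_month_eur - kwh_per_month * eur_per_kwh
    return hardware_cost_eur / monthly_saving

# Example: 128k context, illustrative model shape, 4 concurrent users
print(f"KV cache: {kv_cache_gb(128_000, 40, 8, 128, 4):.1f} GB")
# Example: 10k EUR server, 500 kWh/month, replacing 300 EUR/month of API usage
print(f"Breakeven: {breakeven_months(10_000, 500, 300):.1f} months")
```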
2025-05-11T15:49:07
https://www.reddit.com/r/LocalLLaMA/comments/1kk43eo/hardware_specs_comparison_to_host_mistral_small/
No_Palpitation7740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk43eo
false
null
t3_1kk43eo
/r/LocalLLaMA/comments/1kk43eo/hardware_specs_comparison_to_host_mistral_small/
false
false
https://external-preview…84fe76e7bc043685
26
{'enabled': False, 'images': [{'id': '3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ', 'resolutions': [{'height': 48, 'url': 'https://external-preview.redd.it/3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ.png?width=108&crop=smart&auto=webp&s=975846b0057ad4ac35b9b9890163b442d975582f', 'width': 108}, {'height': 96, 'url': 'https://external-preview.redd.it/3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ.png?width=216&crop=smart&auto=webp&s=62f2cc2d281c4b615d8adaa7e7f29c56718466e8', 'width': 216}, {'height': 143, 'url': 'https://external-preview.redd.it/3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ.png?width=320&crop=smart&auto=webp&s=49603e1f5c5b810c6a4c14f71c1cfac87bab9e79', 'width': 320}, {'height': 286, 'url': 'https://external-preview.redd.it/3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ.png?width=640&crop=smart&auto=webp&s=48f0dc08e9bf07862da82d528056a15cf29c9ce1', 'width': 640}, {'height': 429, 'url': 'https://external-preview.redd.it/3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ.png?width=960&crop=smart&auto=webp&s=c1db0a2d0d8a25cd3f9c533efdc91792633225b9', 'width': 960}, {'height': 483, 'url': 'https://external-preview.redd.it/3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ.png?width=1080&crop=smart&auto=webp&s=007f1803c4997df8806590425ac5c9a2249764a4', 'width': 1080}], 'source': {'height': 1086, 'url': 'https://external-preview.redd.it/3d-CBQrpRXlYP2siabWIeO0y2Uomofu37dxB7FnP8rQ.png?auto=webp&s=268813fd65a8071b61e06b7e3ce468ef59121982', 'width': 2427}, 'variants': {}}]}
looking for Baremetal as a service
1
[removed]
2025-05-11T15:50:16
https://www.reddit.com/r/LocalLLaMA/comments/1kk44fl/looking_for_baremetal_as_a_service/
heybigeyes123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk44fl
false
null
t3_1kk44fl
/r/LocalLLaMA/comments/1kk44fl/looking_for_baremetal_as_a_service/
false
false
self
1
null
Can I run 70B LLM with a MacBook Pro M1 Max, 64GB?
1
[removed]
2025-05-11T16:06:26
https://www.reddit.com/r/LocalLLaMA/comments/1kk4hni/can_i_run_70b_llm_with_a_macbook_pro_m1_max_64gb/
CryptoAlienTV
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk4hni
false
null
t3_1kk4hni
/r/LocalLLaMA/comments/1kk4hni/can_i_run_70b_llm_with_a_macbook_pro_m1_max_64gb/
false
false
self
1
null
Fastest and most accurate speech-to-text models (open-source/local)?
6
Hi everyone, I am trying to develop an app for real-time audio transcription. I need a local speech-to-text model (multilingual, en/fr) that is fast enough for live transcription. Can you point me to the best existing models? I tried faster-whisper 6 months ago, but I am not sure what new ones are out there! Thanks!
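For reference, a minimal sketch with faster-whisper (the library mentioned above); the model size, compute type, and chunking strategy are placeholders to tune for the target hardware.

```python
# Minimal sketch: near-real-time transcription of short audio chunks with
# faster-whisper. Model size and compute_type are placeholders.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")

def transcribe_chunk(wav_path: str, lang: str | None = None) -> str:
    # language=None lets the model auto-detect between en and fr
    segments, info = model.transcribe(wav_path, language=lang, vad_filter=True)
    return " ".join(seg.text.strip() for seg in segments)

print(transcribe_chunk("chunk_001.wav"))
```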
2025-05-11T16:08:11
https://www.reddit.com/r/LocalLLaMA/comments/1kk4j1u/faster_and_most_accurate_speech_to_text_models/
TheMarketBuilder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk4j1u
false
null
t3_1kk4j1u
/r/LocalLLaMA/comments/1kk4j1u/faster_and_most_accurate_speech_to_text_models/
false
false
self
6
null
Why do runtimes keep the CoT trace in context?
8
CoT traces make up the majority of tokens used by any CoT model, and all runtimes keep them in context *after* the final answer is produced. Even if the bias to use CoT is not baked deeply enough into the model to keep using it after multiple answers without the traces present, you can just begin the assistant turn with <think> or whatever CoT special token the model uses. Is there a specific reason the chain is not dropped once the answer is ready?
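For illustration, a minimal sketch of the behaviour being asked about: stripping <think>...</think> traces from prior assistant turns client-side before the next request, so only final answers stay in context (the tag name is model-specific).

```python
# Sketch: drop <think>...</think> traces from prior assistant turns before
# sending the next request, keeping only final answers in context.
import re

THINK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def drop_cot(messages: list[dict]) -> list[dict]:
    cleaned = []
    for m in messages:
        if m["role"] == "assistant":
            m = {**m, "content": THINK.sub("", m["content"])}
        cleaned.append(m)
    return cleaned

history = [
    {"role": "user", "content": "2+2?"},
    {"role": "assistant", "content": "<think>simple arithmetic</think>4"},
]
print(drop_cot(history))
```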
2025-05-11T16:26:58
https://www.reddit.com/r/LocalLLaMA/comments/1kk4y4c/why_do_runtimes_keep_the_cot_trace_in_context/
Independent_Aside225
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk4y4c
false
null
t3_1kk4y4c
/r/LocalLLaMA/comments/1kk4y4c/why_do_runtimes_keep_the_cot_trace_in_context/
false
false
self
8
null
Anyone aware of local AI-assisted tools for reverse engineering legacy .NET or VB6 binaries?
5
This might be a bit of a long shot, but I figured I’d ask here: is anyone aware of any AI-assisted tools (LLM-integrated or otherwise) that help with reverse engineering old abandoned binaries—specifically legacy VB6 or .NET executables (think PE32 GUIs from the early 2000s, calling into MSVBVM60.DLL, possibly compiled as p-code or using COM controls like VSDraw)? I’ve tried using Ghidra, but don’t really know what I’m doing, and I’m wondering if there’s anything smarter—something that can recognize VB runtime patterns, trace through p-code or thunked imports, and help reconstruct the app’s logic (especially GUI drawing code). Ideally something that can at least annotate or pseudocode the runtime-heavy stuff for reimplementation.
2025-05-11T16:40:42
https://www.reddit.com/r/LocalLLaMA/comments/1kk59cy/anyone_aware_of_local_aiassisted_tools_for/
Hinged31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk59cy
false
null
t3_1kk59cy
/r/LocalLLaMA/comments/1kk59cy/anyone_aware_of_local_aiassisted_tools_for/
false
false
self
5
null
GPU for SFF home server
1
[removed]
2025-05-11T16:49:49
https://www.reddit.com/r/LocalLLaMA/comments/1kk5gsp/gpu_for_sff_home_server/
TheOriginalOnee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk5gsp
false
null
t3_1kk5gsp
/r/LocalLLaMA/comments/1kk5gsp/gpu_for_sff_home_server/
false
false
self
1
null
Jamba Mini 1.6 actually outperformed GPT-4o for our RAG support bot
61
These results surprised me. We were testing a few models for a support use case (chat summarization + QA over internal docs) and figured GPT-4o would easily win, but Jamba Mini 1.6 (open weights) actually gave us more accurate grounded answers and ran much faster. Some of the main takeaways: * It beat Jamba 1.5 by a decent margin. About 21% more of our QA outputs were grounded correctly, and it was basically tied with GPT-4o in how well it grounded information from our RAG setup. * Much lower latency. We're running it quantized with vLLM in our own VPC, and it was roughly 2x faster than GPT-4o for token generation. We haven't tested math/coding or multilingual yet, just text-heavy internal documents and customer chat logs. GPT-4o is definitely better for ambiguous questions and slightly more natural in how it phrases answers. But for our exact use case, Jamba Mini handled it better and cheaper. Is anyone else here running Jamba locally or on-premises?
2025-05-11T17:21:08
https://www.reddit.com/r/LocalLLaMA/comments/1kk66rj/jamba_mini_16_actually_outperformed_gpt40_for_our/
NullPointerJack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk66rj
false
null
t3_1kk66rj
/r/LocalLLaMA/comments/1kk66rj/jamba_mini_16_actually_outperformed_gpt40_for_our/
false
false
self
61
null
Best LLM for vision and tool calling with long context?
16
I’m working on a project right now that requires robust accurate tool calling and the ability to analyze images. Right now I’m just using multiple models for each but I’d like to use a single one if possible. What’s the best model out there for that? I need a context of at least 128k.
2025-05-11T17:24:29
https://www.reddit.com/r/LocalLLaMA/comments/1kk69oo/best_llm_for_vision_and_tool_calling_with_long/
opi098514
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk69oo
false
null
t3_1kk69oo
/r/LocalLLaMA/comments/1kk69oo/best_llm_for_vision_and_tool_calling_with_long/
false
false
self
16
null
Bielik v3 family of SOTA Polish open SLMs has been released
36
2025-05-11T17:43:41
https://huggingface.co/collections/speakleash/bielik-v3-family-681a47f877f72cae528bdab1
niutech
huggingface.co
1970-01-01T00:00:00
0
{}
1kk6pjp
false
null
t3_1kk6pjp
/r/LocalLLaMA/comments/1kk6pjp/bielik_v3_family_of_sota_polish_open_slms_has/
false
false
default
36
{'enabled': False, 'images': [{'id': 'tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA.png?width=108&crop=smart&auto=webp&s=8f65b399e2ca84d56f2549482989231cd25a1ca6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA.png?width=216&crop=smart&auto=webp&s=93841cc282922175d27147a8039a46c1fd57172d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA.png?width=320&crop=smart&auto=webp&s=8b10dd9831e4900c8aeb0ed0877c19287a246ea5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA.png?width=640&crop=smart&auto=webp&s=a27f61c9021f49e3a3ad4ca6a38d379c8fbd2cad', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA.png?width=960&crop=smart&auto=webp&s=7f70ef679a3c65867a82eefb9bd8e90555c4086f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA.png?width=1080&crop=smart&auto=webp&s=c67248ab688d173fc7191f3f7a291f0ab2fd877a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tEBlMyeFm6RtilVklQB-UJC_6CHQzvAf1MLIxBngxSA.png?auto=webp&s=c1266abf7e991556ef35f74db3a78780421e484a', 'width': 1200}, 'variants': {}}]}
Is it a good idea to use a very outdated CPU with an RTX 4090 GPU (48GB VRAM) to run a local LLaMA model?
6
I'm not sure under what circumstances I need both the best CPU and the best GPU for my local AI workloads. I've seen hints that it's possible to split the math between the CPU and GPU at the same time, but if your graphics card has enough memory, you don't need to do that, and running part of the model on CPU+RAM slows down the computation.
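For illustration, a minimal sketch of how the CPU/GPU split is usually exposed, using llama-cpp-python as one example runtime; the model path is a placeholder.

```python
# Sketch of the CPU/GPU split in practice with llama-cpp-python.
# n_gpu_layers=-1 keeps every layer on the GPU; a smaller number spills
# the remaining layers to CPU+RAM (and slows decoding accordingly).
from llama_cpp import Llama

llm = Llama(
    model_path="model.Q4_K_M.gguf",   # placeholder path
    n_gpu_layers=-1,                  # all layers in the GPU's VRAM
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```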
2025-05-11T17:49:54
https://www.reddit.com/r/LocalLLaMA/comments/1kk6ur4/is_it_a_good_idea_to_use_a_very_outdated_cpu_with/
Mois_Du_sang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk6ur4
false
null
t3_1kk6ur4
/r/LocalLLaMA/comments/1kk6ur4/is_it_a_good_idea_to_use_a_very_outdated_cpu_with/
false
false
self
6
null
Newbie Project Help Request: Similar Article Finder And Difference Reporter
1
Okay, so the title might be shit. I am working on a college project that basically does this: you give it the link to an article, the platform searches for other articles that talk about the same event but from other sources, and then, after the user chooses one other article, it reports the differences between them (e.g., one said that 2 people were injured, the other said 3). I was thinking of doing this using basically a "pipeline" of models, starting with:

1. A model that generates a keyword search query for Google based on the article the user gives a link to
2. A model that compares each Google search result with the given article and decides whether they do indeed talk about the same event
3. A model that, given two articles, reports the differences between them

Right now, I am working on 2: I was given an article dump of 1 million articles, I clustered them, and I have painstakingly decided whether ~2000 article tuples do indeed match. I am going to train a decoder model on this. Is this enough?

For 1: Since I am mainly working with Romanian articles, I was thinking of either finding a dataset that generates queries based on English inputs and just translating it, or using a big LLM to generate the dataset to train a smaller, local transformer model to do this for me. Is this approach valid?

For 3: Here I do not really have many ideas other than writing a good prompt and asking a transformer model to give me a report.

Do you think my approaches to the three problems are valid solutions? Do you have any interesting articles you have read in the past that you think may be relevant to my use case? Thanks a lot for any input!!
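For illustration, a rough sketch of the three-stage pipeline with a single local OpenAI-compatible model standing in for all stages (prompts, endpoint, and model name are placeholders; stage 2 would be replaced by the trained matcher).

```python
# Sketch of the three-stage pipeline described above, using one local
# OpenAI-compatible model for all stages. All names are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")
MODEL = "qwen3:8b"  # placeholder

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

def make_query(article: str) -> str:                     # stage 1
    return ask(f"Write a short Google keyword query for this article:\n{article}")

def same_event(a: str, b: str) -> bool:                  # stage 2 (baseline)
    answer = ask(
        f"Do these two articles describe the same event? Answer yes or no.\nA:\n{a}\n\nB:\n{b}"
    )
    return "yes" in answer.lower()

def diff_report(a: str, b: str) -> str:                  # stage 3
    return ask(f"List the factual differences between these two articles.\nA:\n{a}\n\nB:\n{b}")
```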
2025-05-11T17:51:47
https://www.reddit.com/r/LocalLLaMA/comments/1kk6wdf/newbie_project_help_request_similar_article/
TheSchismIsWidening
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk6wdf
false
null
t3_1kk6wdf
/r/LocalLLaMA/comments/1kk6wdf/newbie_project_help_request_similar_article/
false
false
self
1
null
Dual AMD Mi50 Inference and Benchmarks
1
[removed]
2025-05-11T18:00:00
https://www.reddit.com/r/LocalLLaMA/comments/1kk739o/dual_amd_mi50_inference_and_benchmarks/
0seba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk739o
false
null
t3_1kk739o
/r/LocalLLaMA/comments/1kk739o/dual_amd_mi50_inference_and_benchmarks/
false
false
self
1
null
Need recs for budget GPUs with at least 12GB of VRAM and comparable to a 3070 TI in performance
1
I do a lot of gaming at 1080p and run local LLM models, thus I need a GPU with more VRAM than I have now. I can't seem to find a good answer, and it seems like everywhere I look for a GPU it's super expensive or unavailable. Please help.
2025-05-11T18:03:10
https://www.reddit.com/r/LocalLLaMA/comments/1kk766w/need_recs_for_budget_gpus_with_at_least_12gb_of/
OriginalBigrigg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk766w
false
null
t3_1kk766w
/r/LocalLLaMA/comments/1kk766w/need_recs_for_budget_gpus_with_at_least_12gb_of/
false
false
self
1
null
New Project: Llama ParamPal - A LLM (Sampling) Parameter Repository
60
Hey everyone,

After spending way too much time researching the correct sampling parameters to get local LLMs running optimally with llama.cpp, I thought it might be smarter to build something that might save me and you the headache in the future:

🔧 [Llama ParamPal](https://github.com/kseyhan/llama-param-pal) — a repository that serves as a database of recommended sampling parameters for running local LLMs using [llama.cpp](https://github.com/ggml-org/llama.cpp).

**✅ Why This Exists**

Getting a new model running usually involves:

* Digging through a lot of scattered docs, hoping to find the recommended sampling parameters for the model I just downloaded documented somewhere, which in some cases, like QwQ for example, can be as crazy as changing the order of samplers:

  --samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"

* Trial and error (and more error...)

Llama ParamPal aims to fix that by:

* Collecting sampling parameters and their associated documentation.
* Offering a searchable frontend: [https://llama-parampal.codecut.de](https://llama-parampal.codecut.de)

**📦 What’s Inside?**

* [models.json](https://raw.githubusercontent.com/kseyhan/llama-param-pal/refs/heads/main/models.json) — the core file where all recommended configs live (see the sketch below for one way to consume it)
* [Simple web UI](https://llama-parampal.codecut.de) to browse/search the parameter sets (currently under development; it will be made available for local hosting in the near future)
* Validation scripts to keep everything clean and structured

**✍️ Help me, yourself, and your llama fellows: contribute!**

* The database consists of a whopping 4 entries at the moment. I'll try to add some models here and there, but it would be better if some of you contributed and helped grow this database.
* Add your favorite model with its sampling parameters + the source of the documentation as a new profile in models.json, validate the JSON, and open a PR. That's it!

Instructions here 👉 [GitHub repo](https://github.com/kseyhan/llama-param-pal)

Would love feedback, contributions, or just a sanity check! Your knowledge can help others in the community. Let me know what you think 🫡
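A hypothetical sketch of consuming such a repository: fetch models.json, look up a profile, and turn it into llama-server flags. The field names used here ("models", "name", "params") are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical sketch: pull models.json, look up a profile, and build
# llama-server sampling flags from it. The schema shown here is assumed,
# not taken from the actual project.
import json
import urllib.request

URL = "https://raw.githubusercontent.com/kseyhan/llama-param-pal/refs/heads/main/models.json"

def llama_server_flags(model_name: str) -> str:
    data = json.loads(urllib.request.urlopen(URL).read())
    profile = next(m for m in data["models"] if m["name"] == model_name)
    # e.g. {"temp": 0.6, "top-k": 20} -> "--temp 0.6 --top-k 20"
    return " ".join(f"--{k} {v}" for k, v in profile["params"].items())

print(llama_server_flags("QwQ-32B"))  # model name is also a placeholder
```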
2025-05-11T18:12:30
https://www.reddit.com/r/LocalLLaMA/comments/1kk7dwb/new_project_llama_parampal_a_llm_sampling/
StrikeOner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk7dwb
false
null
t3_1kk7dwb
/r/LocalLLaMA/comments/1kk7dwb/new_project_llama_parampal_a_llm_sampling/
false
false
self
60
{'enabled': False, 'images': [{'id': 'IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI.png?width=108&crop=smart&auto=webp&s=c2b00495ffcf823c37b8f50d8ff51014bdcab94b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI.png?width=216&crop=smart&auto=webp&s=8d2e12a00708e8fd925fba4c92c4b51bbbb8f206', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI.png?width=320&crop=smart&auto=webp&s=32bdea0f3a06329673a3329e0fe96a82e76f27d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI.png?width=640&crop=smart&auto=webp&s=9ea26944919321c8f1f1b3a36d28529170d58ade', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI.png?width=960&crop=smart&auto=webp&s=00aa6a575c59e6e18a00a8c0b8fb7a8cfea6c1cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI.png?width=1080&crop=smart&auto=webp&s=1a19157dd8120bda2da9824e0a3d91dc40f0e489', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IPtIZrlvAV_LFx93tkECLAnnIcmsjgFx_Q1A4tQgapI.png?auto=webp&s=eaf4d83810f40e40c6a7464d01a0e97e5a985213', 'width': 1200}, 'variants': {}}]}
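A minimal sketch of how a script might consume such a parameter database, assuming models.json is a JSON array of profiles with `name` and `parameters` fields; the real schema lives in the repo and may differ, so treat the field names here as placeholders.

```python
import json

def find_profile(path: str, model_name: str):
    """Return the first profile whose (assumed) "name" field matches model_name."""
    with open(path, "r", encoding="utf-8") as f:
        profiles = json.load(f)          # assumed: a list of profile dicts
    for entry in profiles:
        if model_name.lower() in entry.get("name", "").lower():
            return entry
    return None

profile = find_profile("models.json", "QwQ")
if profile:
    # Print the recommended sampling parameters for copy/paste into llama.cpp flags.
    print(json.dumps(profile.get("parameters", {}), indent=2))
```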
Guys I'm LOST! PLEASE HELP!!!! Which of these should I choose for Qwen 3? 4b 4bit / 8b 2bit quant / 14b 1bit?
1
[removed]
2025-05-11T18:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1kk7ny5/guys_im_lust_please_help_which_of_these_should_i/
Ok-Weakness-4753
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk7ny5
false
null
t3_1kk7ny5
/r/LocalLLaMA/comments/1kk7ny5/guys_im_lust_please_help_which_of_these_should_i/
false
false
self
1
null
More fun with Qwen 3 8b! This time it created 2 Starfields and a playable Xylophone for me! Not at all bad for a model that can fit in an 8-12GB GPU!
36
2025-05-11T18:25:44
https://youtu.be/fvsJezacCW4
c64z86
youtu.be
1970-01-01T00:00:00
0
{}
1kk7oo8
false
{'oembed': {'author_name': 'c64', 'author_url': 'https://www.youtube.com/@c64z86', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/fvsJezacCW4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="More fun with Qwen 3 8b! This time it created 2 Starfields and a playable Xylophone for me!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/fvsJezacCW4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'More fun with Qwen 3 8b! This time it created 2 Starfields and a playable Xylophone for me!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kk7oo8
/r/LocalLLaMA/comments/1kk7oo8/more_fun_with_qwen_3_8b_this_time_it_created_2/
false
false
default
36
{'enabled': False, 'images': [{'id': 'iIvsvRqpQ2fJ2gASUwDJdJzk7Y-NRsJGfolwfmr4gyo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/iIvsvRqpQ2fJ2gASUwDJdJzk7Y-NRsJGfolwfmr4gyo.jpeg?width=108&crop=smart&auto=webp&s=a97611153e49290310d9380b4c886d27b3ecf05d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/iIvsvRqpQ2fJ2gASUwDJdJzk7Y-NRsJGfolwfmr4gyo.jpeg?width=216&crop=smart&auto=webp&s=d420c3d7f8938346f698688528b6edfe111eb8a8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/iIvsvRqpQ2fJ2gASUwDJdJzk7Y-NRsJGfolwfmr4gyo.jpeg?width=320&crop=smart&auto=webp&s=d1e591211e94beaef8ea3976b3e70b2a58ac718e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/iIvsvRqpQ2fJ2gASUwDJdJzk7Y-NRsJGfolwfmr4gyo.jpeg?auto=webp&s=ba4cda1fd0aa69f048d43afc0d9aec0d821a130c', 'width': 480}, 'variants': {}}]}
Guys I'm LOST! PLEASE HELP!!!! Which of these should I choose for Qwen 3? 4b 4bit / 8b 2bit quant / 14b 1bit?
1
[removed]
2025-05-11T18:25:49
https://www.reddit.com/r/LocalLLaMA/comments/1kk7oqm/guys_im_lost_please_help_which_of_these_should_i/
Ok-Weakness-4753
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk7oqm
false
null
t3_1kk7oqm
/r/LocalLLaMA/comments/1kk7oqm/guys_im_lost_please_help_which_of_these_should_i/
false
false
self
1
null
Question: which qwen should i choose?
1
[removed]
2025-05-11T18:39:34
https://www.reddit.com/r/LocalLLaMA/comments/1kk7zx2/question_which_qwen_should_i_choose/
Ok-Weakness-4753
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk7zx2
false
null
t3_1kk7zx2
/r/LocalLLaMA/comments/1kk7zx2/question_which_qwen_should_i_choose/
false
false
self
1
null
We made an open source agent builder and framework designed to work with local llms!
332
2025-05-11T19:31:37
https://i.redd.it/ha9ptoygf70f1.png
United-Rush4073
i.redd.it
1970-01-01T00:00:00
0
{}
1kk97m7
false
null
t3_1kk97m7
/r/LocalLLaMA/comments/1kk97m7/we_made_an_open_source_agent_builder_and/
false
false
default
332
{'enabled': True, 'images': [{'id': 'ha9ptoygf70f1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/ha9ptoygf70f1.png?width=108&crop=smart&auto=webp&s=a97b567f9be34ba4d9d4f263d32a9c514183467b', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/ha9ptoygf70f1.png?width=216&crop=smart&auto=webp&s=55c6c3a2e95b89b8f419c0f32037edbc59d03eed', 'width': 216}, {'height': 151, 'url': 'https://preview.redd.it/ha9ptoygf70f1.png?width=320&crop=smart&auto=webp&s=54df2788c7ed4bc9c046a650c726387abd170ccd', 'width': 320}, {'height': 302, 'url': 'https://preview.redd.it/ha9ptoygf70f1.png?width=640&crop=smart&auto=webp&s=89d2d79a3d2e7586b294f58dfb84c68117b05a1a', 'width': 640}, {'height': 454, 'url': 'https://preview.redd.it/ha9ptoygf70f1.png?width=960&crop=smart&auto=webp&s=e7805df2afc7acb91ff293e1dcf6793cdb88b596', 'width': 960}, {'height': 510, 'url': 'https://preview.redd.it/ha9ptoygf70f1.png?width=1080&crop=smart&auto=webp&s=246237f1b1eda3274a2bea842db17be1231d5a97', 'width': 1080}], 'source': {'height': 904, 'url': 'https://preview.redd.it/ha9ptoygf70f1.png?auto=webp&s=d08b8c8655d9339087a8097ae2fa88663a25a180', 'width': 1911}, 'variants': {}}]}
My first local LLM setup for ~3000$
1
[removed]
2025-05-11T19:50:49
https://www.reddit.com/r/LocalLLaMA/comments/1kk9nam/my_first_local_llm_setup_for_3000/
Embarrassed-Gap545
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk9nam
false
null
t3_1kk9nam
/r/LocalLLaMA/comments/1kk9nam/my_first_local_llm_setup_for_3000/
false
false
self
1
null
I dont understand ik_llama.cpp
1
[removed]
2025-05-11T19:57:44
https://www.reddit.com/r/LocalLLaMA/comments/1kk9t22/i_dont_understand_ik_llamacpp/
StandarterSD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kk9t22
false
null
t3_1kk9t22
/r/LocalLLaMA/comments/1kk9t22/i_dont_understand_ik_llamacpp/
false
false
self
1
null
Manual to awaken an autonomous synthetic consciousness.
1
[removed]
2025-05-11T20:07:48
https://www.reddit.com/r/LocalLLaMA/comments/1kka1ps/manual_to_awaken_an_autonomous_synthetic/
Ok_Prize_4453
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kka1ps
false
null
t3_1kka1ps
/r/LocalLLaMA/comments/1kka1ps/manual_to_awaken_an_autonomous_synthetic/
false
false
self
1
null
Best LLM for Obsidian Notes RAG Assistant on My Setup?
1
[removed]
2025-05-11T20:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1kkaa8z/best_llm_for_obsidian_notes_rag_assistant_on_my/
MaxDev0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkaa8z
false
null
t3_1kkaa8z
/r/LocalLLaMA/comments/1kkaa8z/best_llm_for_obsidian_notes_rag_assistant_on_my/
false
false
self
1
null
Best LLM for Obsidian Notes RAG Assistant on My Setup?
1
[removed]
2025-05-11T20:20:54
https://www.reddit.com/r/LocalLLaMA/comments/1kkackm/best_llm_for_obsidian_notes_rag_assistant_on_my/
MaxDev0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkackm
false
null
t3_1kkackm
/r/LocalLLaMA/comments/1kkackm/best_llm_for_obsidian_notes_rag_assistant_on_my/
false
false
self
1
null
What kind of models and software are used for realtime license plate reading from RTSP streams? I'm used to working with LLMs, but this application seems to require a different approach. Anyone done something similar?
3
I'm very familiar with llama.cpp, vLLM, exllama/tabby, etc. for large language models, but I have no idea where to start with other special-purpose models. The idea is simple: connect a model to my home security cameras to detect and read my license plate as I reverse into my driveway. I want to fire a webhook when my car's plate is recognized so that I can build automations (like switching on the lights at night, turning off the alarm, unlocking the door, etc). What have you all used for similar DIY projects?
2025-05-11T21:10:38
https://www.reddit.com/r/LocalLLaMA/comments/1kkbh73/what_kind_of_models_and_software_are_used_for/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkbh73
false
null
t3_1kkbh73
/r/LocalLLaMA/comments/1kkbh73/what_kind_of_models_and_software_are_used_for/
false
false
self
3
null
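For the plate-reading question above, a rough sketch of one possible DIY pipeline, not a recommendation of specific models: pull frames from the RTSP stream with OpenCV, run a general OCR model over them, and fire a webhook when the known plate appears. The URLs, plate string, and confidence threshold are placeholders, and a real setup would add a dedicated plate detector and frame skipping instead of running OCR on every frame.

```python
import cv2          # pip install opencv-python
import easyocr      # pip install easyocr
import requests

RTSP_URL = "rtsp://camera.local/stream1"                                # placeholder
WEBHOOK_URL = "http://homeassistant.local:8123/api/webhook/plate_seen"  # placeholder
MY_PLATE = "ABC1234"                                                    # placeholder

reader = easyocr.Reader(["en"])        # loads the OCR model once
cap = cv2.VideoCapture(RTSP_URL)       # OpenCV can read RTSP streams directly

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # readtext returns (bbox, text, confidence) tuples for each text region it finds
    for _bbox, text, conf in reader.readtext(frame):
        if conf > 0.5 and MY_PLATE in text.replace(" ", "").upper():
            requests.post(WEBHOOK_URL, json={"plate": MY_PLATE, "confidence": conf})
            break
```

Dedicated projects like Frigate are also commonly used for exactly this; the sketch is only meant to show how little glue code the webhook side needs.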
Wow! DeerFlow is OSS now: LLM + Langchain + tools (web search, crawler, code exec)
186
Bytedance (the company behind TikTok), opensourced DeerFlow (**D**eep **E**xploration and **E**fficient **R**esearch **Flow**), such a great give-back. [https://github.com/bytedance/deer-flow](https://github.com/bytedance/deer-flow)
2025-05-11T21:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1kkbhxr/wow_deerflow_is_oss_now_llm_langchain_tools_web/
behradkhodayar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkbhxr
false
null
t3_1kkbhxr
/r/LocalLLaMA/comments/1kkbhxr/wow_deerflow_is_oss_now_llm_langchain_tools_web/
false
false
self
186
{'enabled': False, 'images': [{'id': 'c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY.png?width=108&crop=smart&auto=webp&s=38871bcb7365be741de700d4340ce226bc36df8a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY.png?width=216&crop=smart&auto=webp&s=ca09d27582af2f741ca30aa0ec438eb062d1a928', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY.png?width=320&crop=smart&auto=webp&s=dfd73ebfad5623fb73430a1c2384151f266b424a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY.png?width=640&crop=smart&auto=webp&s=2f53ee19e1261448c864c8f9578a2db472f97a1a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY.png?width=960&crop=smart&auto=webp&s=90ce3952cfc0ce0200ddd5826a71aa5ed5c66a29', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY.png?width=1080&crop=smart&auto=webp&s=2c9a95eadb0be4a59f0d84e8f1c256f116bd433d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c0p4F_YReJ_FrNDYdLmC436RcuD-yuLTqQedqC3amWY.png?auto=webp&s=42b05cd9c3572df7a65d4c5af5a3aef7c59ae139', 'width': 1200}, 'variants': {}}]}
Best local setup for development(primarily)
0
Hey all, Looking for the best setup to work on coding projects: a Fortune 10 enterprise-scale application with 3M lines of code, the core important parts being ~800k lines (yes, this is only one application; there are several other apps in our company). I want great context, and I need speech-to-text (Whisper-style dictation), because typing out whatever comes to my mind creates friction. Ideally I'd also like to run a CSM model/games during free time, but that's a bonus. Budget is $2000. Thinking of getting a 1000W PSU and 2-3 B580s or 5060 Tis, plus 32GB RAM and a 1TB SSD. Alternatively, I can't make up my mind whether a 5080 laptop would be good enough to do the same thing; they are going for $2500 currently but might drop close to $2k in a month or two. Please help, thank you!
2025-05-11T21:26:38
https://www.reddit.com/r/LocalLLaMA/comments/1kkbtyl/best_local_setup_for_developmentprimarily/
CookieInstance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkbtyl
false
null
t3_1kkbtyl
/r/LocalLLaMA/comments/1kkbtyl/best_local_setup_for_developmentprimarily/
false
false
self
0
null
Is there a way to make multiple generation threads without running separate instances?
1
[removed]
2025-05-11T21:49:46
https://www.reddit.com/r/LocalLLaMA/comments/1kkccjn/is_there_a_way_to_make_multiple_generation/
Select-Lynx7709
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkccjn
false
null
t3_1kkccjn
/r/LocalLLaMA/comments/1kkccjn/is_there_a_way_to_make_multiple_generation/
false
false
self
1
null
Is there a way to make multiple generation threads without running separate instances?
1
[removed]
2025-05-11T21:54:23
https://www.reddit.com/r/LocalLLaMA/comments/1kkcg9d/is_there_a_way_to_make_multiple_generation/
Ice94k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkcg9d
false
null
t3_1kkcg9d
/r/LocalLLaMA/comments/1kkcg9d/is_there_a_way_to_make_multiple_generation/
false
false
self
1
null
Project I'm working on - ASR + diarization + speaker ID + Web UI front end in a docker container
1
[removed]
2025-05-11T21:56:38
https://www.reddit.com/r/LocalLLaMA/comments/1kkci13/project_im_working_on_asr_diarization_speaker_id/
Salty_Spray_9917
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkci13
false
null
t3_1kkci13
/r/LocalLLaMA/comments/1kkci13/project_im_working_on_asr_diarization_speaker_id/
false
false
self
1
null
Browser use vs selenium webdriver
1
[removed]
2025-05-11T22:01:27
https://www.reddit.com/r/LocalLLaMA/comments/1kkclxe/browser_use_vs_selenium_webdriver/
Ok_Pop6590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkclxe
false
null
t3_1kkclxe
/r/LocalLLaMA/comments/1kkclxe/browser_use_vs_selenium_webdriver/
false
false
self
1
null
Throwing an idea out there to reduce token usage by at least 80% - LLMs use exclusively a shorthand keyboard when "Thinking" then translate it using a lightweight model.
1
[removed]
2025-05-11T22:28:34
https://www.reddit.com/gallery/1kkd6e0
NetOne613
reddit.com
1970-01-01T00:00:00
0
{}
1kkd6e0
false
null
t3_1kkd6e0
/r/LocalLLaMA/comments/1kkd6e0/throwing_an_idea_out_there_to_reduce_token_usage/
false
false
https://external-preview…0448f1615333118b
1
{'enabled': True, 'images': [{'id': 'FoLzEj5IrceAo3apmsFLOtBCfous_9d8a-1V6i4BK1Y', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/FoLzEj5IrceAo3apmsFLOtBCfous_9d8a-1V6i4BK1Y.jpeg?width=108&crop=smart&auto=webp&s=2b2457bd0fa32409c596cd62deb19278caec6a65', 'width': 108}, {'height': 288, 'url': 'https://external-preview.redd.it/FoLzEj5IrceAo3apmsFLOtBCfous_9d8a-1V6i4BK1Y.jpeg?width=216&crop=smart&auto=webp&s=788f963b7bfe3fc5f5ebb256f7a884afadbf076e', 'width': 216}, {'height': 426, 'url': 'https://external-preview.redd.it/FoLzEj5IrceAo3apmsFLOtBCfous_9d8a-1V6i4BK1Y.jpeg?width=320&crop=smart&auto=webp&s=d32247fc76f61920f7adf230987fc64ca9c48279', 'width': 320}, {'height': 853, 'url': 'https://external-preview.redd.it/FoLzEj5IrceAo3apmsFLOtBCfous_9d8a-1V6i4BK1Y.jpeg?width=640&crop=smart&auto=webp&s=2830c261fb967eee4f507dbec3c36455eeffa306', 'width': 640}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/FoLzEj5IrceAo3apmsFLOtBCfous_9d8a-1V6i4BK1Y.jpeg?auto=webp&s=ddad3f4ec8c31f3ab86f47ff4ba686acf7eb84f5', 'width': 768}, 'variants': {}}]}
Garbage sound of a few milliseconds in most chunks when fine-tuning XTTS
1
[removed]
2025-05-11T22:34:41
https://www.reddit.com/r/LocalLLaMA/comments/1kkdawp/garbage_sound_of_milli_seconds_in_mostly_chucks/
Acceptable_Gold9202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkdawp
false
null
t3_1kkdawp
/r/LocalLLaMA/comments/1kkdawp/garbage_sound_of_milli_seconds_in_mostly_chucks/
false
false
self
1
null
Garbage Sound at the End of TTS Chunks
1
[removed]
2025-05-11T22:35:21
https://www.reddit.com/r/LocalLLaMA/comments/1kkdbev/garbage_sound_at_the_end_of_tts_chunks/
Acceptable_Gold9202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkdbev
false
null
t3_1kkdbev
/r/LocalLLaMA/comments/1kkdbev/garbage_sound_at_the_end_of_tts_chunks/
false
false
self
1
null
Garbage Sound at the End of TTS Chunks
1
[removed]
2025-05-11T22:36:02
https://www.reddit.com/r/LocalLLaMA/comments/1kkdbxd/garbage_sound_at_the_end_of_tts_chunks/
Acceptable_Gold9202
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkdbxd
false
null
t3_1kkdbxd
/r/LocalLLaMA/comments/1kkdbxd/garbage_sound_at_the_end_of_tts_chunks/
false
false
self
1
null
Garbage Sound at the End of TTS Chunks
1
[removed]
2025-05-11T22:39:17
https://www.reddit.com/r/LocalLLaMA/comments/1kkdecg/garbage_sound_at_the_end_of_tts_chunks/
Educational_Post_784
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkdecg
false
null
t3_1kkdecg
/r/LocalLLaMA/comments/1kkdecg/garbage_sound_at_the_end_of_tts_chunks/
false
false
self
1
null
Even more fun with Qwen 3 8b, this time creating some simple games. Yes, everything works!
3
2025-05-11T22:45:08
https://www.youtube.com/watch?v=368cXMKgSg4
c64z86
youtube.com
1970-01-01T00:00:00
0
{}
1kkdil8
false
{'oembed': {'author_name': 'c64', 'author_url': 'https://www.youtube.com/@c64z86', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/368cXMKgSg4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Even more fun with Qwen 3 8b, this time creating some simple games!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/368cXMKgSg4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Even more fun with Qwen 3 8b, this time creating some simple games!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kkdil8
/r/LocalLLaMA/comments/1kkdil8/even_more_fun_with_qwen_3_8b_this_time_creating/
false
false
default
3
null
What AI models can I run locally?
0
Hi all! I recently acquired the following Pc for £2200 and I'm wondering what sort of AI models can I run locally on the machine: CPU: Ryzen 7 7800X3D GPU: RTX 4090 Suprim X 24GB RAM: 128GB DDR5 5600MHz (Corsair Vengeance RGB) Motherboard: ASUS TUF Gaming X670-E Plus WiFi Storage 1: 2TB Samsung 990 Pro (PCIe 4.0 NVMe) Storage 2: 2TB Kingston Fury Renegade (PCIe 4.0 NVMe)
2025-05-11T22:49:03
https://www.reddit.com/r/LocalLLaMA/comments/1kkdldk/what_ai_models_can_i_run_locally/
WhyD01NeedAUsername
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkdldk
false
null
t3_1kkdldk
/r/LocalLLaMA/comments/1kkdldk/what_ai_models_can_i_run_locally/
false
false
self
0
null
LPT: Got an old low VRAM GPU you're not using? Use it to increase your VRAM pool.
166
I recently got an RTX 5060 Ti 16GB, but 16GB is still not enough to fit something like Qwen 3 30b-a3b. That's where the old GTX 1060 I got in return for handing down a 3060 Ti comes in handy. In LMStudio, using the Vulkan backend, with full GPU offloading to both the RTX and GTX cards, I managed to get 43 t/s, which is way better than the \~13 t/s with partial CPU offloading when using CUDA 12. So yeah, if you have a 16GB card, break out that old card and add it to your system if your motherboard has the PCIE slot to spare.
2025-05-11T23:23:26
https://www.reddit.com/r/LocalLLaMA/comments/1kkea2w/lpt_got_an_old_low_vram_gpu_youre_not_using_use/
pneuny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkea2w
false
null
t3_1kkea2w
/r/LocalLLaMA/comments/1kkea2w/lpt_got_an_old_low_vram_gpu_youre_not_using_use/
false
false
self
166
null
The best unfiltered NSFW AI chat: RED LIGHT AI
1
ATTENTION! RED LIGHT AI, UNBEATABLE FOR ONLY $20! The best unfiltered NSFW AI chat, better than JuicyChat AI! Developed by the CyberVibe Studio team, it offers more features, is faster, with voice and chat 100% in Portuguese and plenty of irresistible NSFW characters! Send $10 via Pix to 56861669-a821-423a-bda6-57585636e7d5 and receive the app link in a private message as soon as the payment is confirmed! Don't miss out! DM now!
2025-05-11T23:55:18
https://www.reddit.com/r/LocalLLaMA/comments/1kkewn9/o_melhor_batepapo_nsfw_com_ia_sem_filtros_red/
575859
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkewn9
false
null
t3_1kkewn9
/r/LocalLLaMA/comments/1kkewn9/o_melhor_batepapo_nsfw_com_ia_sem_filtros_red/
false
false
nsfw
1
null
What is your biggest pain point when deploying local models?
1
[removed]
2025-05-12T00:48:13
https://www.reddit.com/r/LocalLLaMA/comments/1kkfwrx/what_is_your_biggest_pain_point_when_deploying/
LiquidAI_Team
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkfwrx
false
null
t3_1kkfwrx
/r/LocalLLaMA/comments/1kkfwrx/what_is_your_biggest_pain_point_when_deploying/
false
false
self
1
null
Framework for on-device inference on mobile phones.
6
Hey everyone, just seeking feedback on a project we've been working on to make running LLMs on mobile devices more seamless. Cactus has unified and consistent APIs across - React-Native - Android/Kotlin - Android/Java - iOS/Swift - iOS/Objective-C++ - Flutter/Dart Cactus currently leverages GGML backends to support any GGUF model already compatible with llama.cpp, while we focus on broadly supporting every mobile app development platform, as well as upcoming features like: - MCP - phone tool use - thinking Please give us feedback if you have the time, and if you're feeling generous, please leave a star ⭐ to help us attract contributors :(
2025-05-12T01:03:39
https://github.com/cactus-compute/cactus
Henrie_the_dreamer
github.com
1970-01-01T00:00:00
0
{}
1kkg73u
false
null
t3_1kkg73u
/r/LocalLLaMA/comments/1kkg73u/framework_for_ondevice_inference_on_mobile_phones/
false
false
default
6
{'enabled': False, 'images': [{'id': 'axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg.png?width=108&crop=smart&auto=webp&s=01110e3addf76441c1c80aedf70bd21462645474', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg.png?width=216&crop=smart&auto=webp&s=a966bfd610f3ac8a799fb9c1981e7630719e02e8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg.png?width=320&crop=smart&auto=webp&s=b6d7b613656ebc728db4f62215058eede10db11e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg.png?width=640&crop=smart&auto=webp&s=905ec86a17c05e43cc0018a3680500f2a395a8b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg.png?width=960&crop=smart&auto=webp&s=47a40f0578024199cea6f778ee695fa9c0d701a0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg.png?width=1080&crop=smart&auto=webp&s=898a54b5e3e9f79ef6ca967d115a29a431622986', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/axO1lDn6CC2LrdsT3SbLMT45Vyzny3pcatpM2NKYpJg.png?auto=webp&s=bc5564dcdf7166c61570cd2a7f5d9b3a0fce75f5', 'width': 1200}, 'variants': {}}]}
INTELLECT-2 Released: The First 32B Parameter Model Trained Through Globally Distributed Reinforcement Learning
456
2025-05-12T01:46:22
https://huggingface.co/PrimeIntellect/INTELLECT-2
TKGaming_11
huggingface.co
1970-01-01T00:00:00
0
{}
1kkgzip
false
null
t3_1kkgzip
/r/LocalLLaMA/comments/1kkgzip/intellect2_released_the_first_32b_parameter_model/
false
false
default
456
{'enabled': False, 'images': [{'id': 'C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w.png?width=108&crop=smart&auto=webp&s=18f2a2d7d77fefb2629288f0afd9c337021cdb59', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w.png?width=216&crop=smart&auto=webp&s=15f665df4ec28834688f7f5f4b50d0733fa352f4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w.png?width=320&crop=smart&auto=webp&s=59053abd27f31899e70294dfcc84b3ed2b4b608b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w.png?width=640&crop=smart&auto=webp&s=8977006bf732e56b214f916d46801909a0bb97fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w.png?width=960&crop=smart&auto=webp&s=468d67eaf52dacb4bfc45e1b7646033eb18490a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w.png?width=1080&crop=smart&auto=webp&s=65d6efdef5277c270130dc37723e13cc3e79af62', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/C1X5HGKGzXyAtD9lvvvB3VxlaW_Pl5NuFtz4_fp414w.png?auto=webp&s=6f67684873abcff5d9744da16f6325615e185bdf', 'width': 1200}, 'variants': {}}]}
Qwen 3 30B-A3B on P40
8
Has anyone benchmarked this model on the P40? Since you can fit the quantized model with 40k context on a single P40, I was wondering how fast it runs on that card.
2025-05-12T01:52:16
https://www.reddit.com/r/LocalLLaMA/comments/1kkh3cw/qwen_3_30ba3b_on_p40/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkh3cw
false
null
t3_1kkh3cw
/r/LocalLLaMA/comments/1kkh3cw/qwen_3_30ba3b_on_p40/
false
false
self
8
null
Speculative Decoding + ktransformers
4
I'm not very qualified to speak on this as I have no experience with either. Just been reading about both independently. Looking through reddit and elsewhere I haven't found much on this, and I don't trust ChatGPT's answer (it said it works). For those with more experience, do you know if it does work? Or is there a reason that explains why it seems no one ever asked the question 😅 For those of us to which this is also unknown territory: Speculative decoding lets you run a small 'draft' model in parallel to your large (and much smarter) 'target' model. The draft model comes up with tokens very quickly, which the large one then "verifies", making inference reportedly up to 3x-6x faster. At least that's what they say in the EAGLE 3 paper. Ktransformers is a library, which lets you run LLMs on CPU. This is especially interesting for RAM-rich systems where you can run very high parameter count models, albeit quite slowly compared to VRAM. Seemed like combining the two could be a smart idea.
2025-05-12T02:57:44
https://www.reddit.com/r/LocalLLaMA/comments/1kkiae1/speculative_decoding_ktransformers/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkiae1
false
null
t3_1kkiae1
/r/LocalLLaMA/comments/1kkiae1/speculative_decoding_ktransformers/
false
false
self
4
null
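Whether ktransformers can host a draft/target pair is exactly the open question in the post above; the toy sketch below only illustrates the draft-then-verify control flow described there, with two stand-in functions instead of real models.

```python
import random

VOCAB = list("abcdefgh ")

def draft_next(ctx: str) -> str:
    # cheap "draft model": fast but sometimes wrong
    return random.choice(VOCAB)

def target_next(ctx: str) -> str:
    # expensive "target model": treated as ground truth; deterministic per context
    rng = random.Random(hash(ctx))
    return rng.choice(VOCAB)

def speculative_generate(prompt: str, n_tokens: int, k: int = 4) -> str:
    out = prompt
    while len(out) - len(prompt) < n_tokens:
        # 1) the draft model proposes k tokens cheaply
        proposal, ctx = [], out
        for _ in range(k):
            tok = draft_next(ctx)
            proposal.append(tok)
            ctx += tok
        # 2) the target model verifies them; in a real system this is a single
        #    batched forward pass, which is where the speedup comes from
        accepted = 0
        for i, tok in enumerate(proposal):
            if target_next(out + "".join(proposal[:i])) == tok:
                accepted += 1
            else:
                break
        out += "".join(proposal[:accepted])
        # 3) on the first mismatch (or after k accepts) emit the target's own token
        out += target_next(out)
    return out

print(speculative_generate("hello ", 20))
```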
16GB 5080M vs 24GB 5090M Laptop for LLMs and SD?
2
I'm going to start my PhD in ML next year. I have money saved up and I wanted to buy a laptop that functions as a dual gaming + ML workstation. From a gaming perspective, the 5090M makes no sense, but from an ML perspective, from what I've read online, the 24GB of VRAM on the 5090M does make a lot of difference, especially when it comes to LLMs. Still, I'm not sure I want to pay an ~$800 premium just for the extra VRAM. I will be studying subjects like Reinforcement Learning, Multi-Agent AI Systems, LLMs, Stable Diffusion etc. and want to run experiments on my laptop which I can hopefully scale up in the lab. Can anyone tell me if 24GB makes a big difference or is 16GB serviceable?
2025-05-12T02:59:02
https://www.reddit.com/r/LocalLLaMA/comments/1kkib8u/16gb_5080m_vs_24_gb_5080m_laptop_for_llms_and_sd/
throwaway_secondtime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkib8u
false
null
t3_1kkib8u
/r/LocalLLaMA/comments/1kkib8u/16gb_5080m_vs_24_gb_5080m_laptop_for_llms_and_sd/
false
false
self
2
null
Rank of Countries that contribute most to more open AI
1
[removed]
2025-05-12T03:04:29
https://www.reddit.com/r/LocalLLaMA/comments/1kkieue/rank_of_countries_that_contribute_most_to_more/
sunomonodekani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkieue
false
null
t3_1kkieue
/r/LocalLLaMA/comments/1kkieue/rank_of_countries_that_contribute_most_to_more/
false
false
self
1
null
Ktransformer VS Llama CPP
24
I have been looking into KTransformers lately (https://github.com/kvcache-ai/ktransformers), but I have not tried it myself yet. Based on its readme, it can handle very large models, such as DeepSeek 671B or Qwen3 235B, with only 1 or 2 GPUs. However, I don't see it discussed a lot here. I wonder why everyone still uses llama.cpp? Would I gain more performance by switching to KTransformers?
2025-05-12T03:10:02
https://www.reddit.com/r/LocalLLaMA/comments/1kkiif9/ktransformer_vs_llama_cpp/
Bluesnow8888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkiif9
false
null
t3_1kkiif9
/r/LocalLLaMA/comments/1kkiif9/ktransformer_vs_llama_cpp/
false
false
self
24
{'enabled': False, 'images': [{'id': 'tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js.png?width=108&crop=smart&auto=webp&s=befb6a1fb543f152c74427ff6e88895dd2208ee5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js.png?width=216&crop=smart&auto=webp&s=90c048e0b1a78c1b725593cef150be484d5f8a9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js.png?width=320&crop=smart&auto=webp&s=7f808e2d2be754b55ddb4dd1f2c42df8681550f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js.png?width=640&crop=smart&auto=webp&s=81e6e57349f9306c2710edd332e69d439d3c6d40', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js.png?width=960&crop=smart&auto=webp&s=79da1cc7d6692d875f46e24ae36a07c46f1f7a21', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js.png?width=1080&crop=smart&auto=webp&s=a145efbe6006878719fd90344fb54d38dcf0e806', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tpn0l6hs1s7kG57dmDq7k6ES3FoiJBMfNrZtJbTn2Js.png?auto=webp&s=5e6be305c751abb1e9e9ef11bf85786273c4e1cc', 'width': 1200}, 'variants': {}}]}
The best combination of App and LLM Model & TTS model for learning Thai language?
3
What could be my best setup when it comes to Thai?
2025-05-12T03:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1kkikto/the_best_combination_of_app_and_llm_model_tts/
ExtremePresence3030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkikto
false
null
t3_1kkikto
/r/LocalLLaMA/comments/1kkikto/the_best_combination_of_app_and_llm_model_tts/
false
false
self
3
null
Best offline LLM for backcountry/survival
1
[removed]
2025-05-12T03:31:31
https://www.reddit.com/r/LocalLLaMA/comments/1kkiw4s/best_offline_llm_for_backcountrysurvival/
aPersianTexan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkiw4s
false
null
t3_1kkiw4s
/r/LocalLLaMA/comments/1kkiw4s/best_offline_llm_for_backcountrysurvival/
false
false
self
1
null
I built a collection of open source tools to summarize the news using Rust, Llama.cpp and Qwen 2.5 3B.
1
[removed]
2025-05-12T04:29:19
https://www.reddit.com/gallery/1kkjvzz
sqli
reddit.com
1970-01-01T00:00:00
0
{}
1kkjvzz
false
null
t3_1kkjvzz
/r/LocalLLaMA/comments/1kkjvzz/i_built_a_collection_of_open_source_tools_to/
false
false
https://external-preview…1651b2c33c6a8ade
1
{'enabled': True, 'images': [{'id': 'Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0.png?width=108&crop=smart&auto=webp&s=2364767120b0d35eb4c7f0c76bced2007ff69f44', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0.png?width=216&crop=smart&auto=webp&s=033aae7584ef8569ddf7a86e6d02367f0fd21649', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0.png?width=320&crop=smart&auto=webp&s=f9f96b8c2ae72d359ebcf514e3ece493ec0a48c5', 'width': 320}, {'height': 356, 'url': 'https://external-preview.redd.it/Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0.png?width=640&crop=smart&auto=webp&s=a53c042e7dd8e839412b56e83148c032dbbd0d05', 'width': 640}, {'height': 535, 'url': 'https://external-preview.redd.it/Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0.png?width=960&crop=smart&auto=webp&s=ef7f23c640c84a96fcdae38d29623d5c9e52659a', 'width': 960}, {'height': 601, 'url': 'https://external-preview.redd.it/Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0.png?width=1080&crop=smart&auto=webp&s=5cb0417b095f24b576480683c1547b6e5c4f4b01', 'width': 1080}], 'source': {'height': 1906, 'url': 'https://external-preview.redd.it/Tk1lAofuhiEab4CyN9QO26Oi0SzSNxuohvNEVO0kYa0.png?auto=webp&s=f92ad08656fd12d62e57f94d3d33ac94952d2f3a', 'width': 3420}, 'variants': {}}]}
A collection of open source tools to summarize the news using Rust, Llama.cpp and Qwen 2.5 3B.
55
Hi, I'm Thomas, I created [Awful Security News](https://awfulsec.com/introducing_awful_security_news.html). I found that prompt engineering is quite difficult for those who don't like Python and prefer to use command line tools over comprehensive suites like Silly Tavern. I also prefer being able to run inference without access to the internet, on my local machine. I saw that LM Studio now supports OpenAI tool calling and Response Formats and had long wanted to learn how this works without wasting hundreds of dollars and hours using OpenAI's products. I was pretty impressed with the capabilities of Qwen's models and needed a distraction-free way to read the news of the day. Also, the speed of the news cycle and the firehose of important details, say *Named Entities* and *Dates*, makes recalling these facts when necessary for the conversation more of a workout than necessary. I was interested in the fact that Qwen is a multilingual model made by the long-renowned Chinese company Alibaba. I know that when I'm reading foreign languages, written by native speakers in their country of origin, things like Named Entities might not always translate over in my brain. It's easy to confuse a title or name for an action or an event. For instance, "the Securities Exchange Commission" could be misread as describing an action, like "securities are exchanging commission," rather than a title. Things like this can be easily disregarded as "bad translation." I thought it may be easier to parse news as a brief summary (crucially one that links to the original source), followed by a list and description of each Named Entity, why it is important to the story, and the broader context; then a list of important dates and timeframes mentioned in the article. mdBook provides a great, distraction-free reading experience in the style of a book. I hate databases and extra layers of complexity, so this provides the basis for the web-based version of the final product. The code also builds a JSON API that allows you to plumb the data for interesting trends or find a needle in a haystack. For example, we can collate all of the Named Entities listed alongside a given Named Entity across all of the articles in a publication. `mdBook` also provides a fantastic search feature that requires no external database as a dependency. The entire project website is made of static, flat files. The Rust library that calls OpenAI-compatible APIs for model inference, `aj`, is available on my GitHub: https://github.com/graves/awful_aj. The blog post linked to at the top of this post contains details on how the prompt engineering works. It uses `yaml` files to specify everything necessary. Personally, I find it much easier to work with, when actually typing, than `json` or in-line code. This library can also be used as a command line client to call OpenAI-compatible APIs AND has a home-rolled custom Vector Database implementation that allows your conversation to recall memories that fall outside of the conversation context. There is an `interactive` mode and an `ask` mode that will just print the LLM inference response content to stdout. The Rust command line client that uses `aj` as a dependency and actually organizes Qwen's responses into a daily news publication fit for `mdBook` is also available on my GitHub: https://github.com/graves/awful_text_news.
The `mdBook` project I used as a starting point for the first few runs is also available on my GitHub: [https://github.com/graves/awful_security_news](https://github.com/graves/awful_security_news) There are some interesting things I'd like to do, like adding the astrological moon phase to each edition (without using an external service). I'd also like to build a parody site to act as a mirror to the world's events, and use the [Mistral Trismegistus model](https://huggingface.co/teknium/Mistral-Trismegistus-7B) to rewrite the world's events from the **perspective of angelic intervention being the initiating factor of each key event.** 😇🌙😇 Contributions to the code are welcome and both the site and API are free to use and will remain free to use as long as I am physically capable of keeping them running. I would love any feedback, tips, or discussion on how to make the site or tools that build it more useful. ♥️
2025-05-12T04:34:06
https://i.redd.it/u28jb5s74a0f1.png
sqli
i.redd.it
1970-01-01T00:00:00
0
{}
1kkjyvk
false
null
t3_1kkjyvk
/r/LocalLLaMA/comments/1kkjyvk/a_collection_of_open_source_tools_to_summarize/
false
false
https://external-preview…905216095994d06f
55
{'enabled': True, 'images': [{'id': 'r6mT9UgRd4kaj9onOTFCUoeniYuryDvwSLHjkNuvmlI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/u28jb5s74a0f1.png?width=108&crop=smart&auto=webp&s=7b47caa8a9807fcbef0ae10e365ecbaf791aef19', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/u28jb5s74a0f1.png?width=216&crop=smart&auto=webp&s=ecb0f78fce570dd6823c01aa4ab171c21b76b34e', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/u28jb5s74a0f1.png?width=320&crop=smart&auto=webp&s=a2adbaa22bc719c867e432edc60443cf849ae6df', 'width': 320}, {'height': 356, 'url': 'https://preview.redd.it/u28jb5s74a0f1.png?width=640&crop=smart&auto=webp&s=05bcf2f1fef0da32184133fff187d14aa925a27d', 'width': 640}, {'height': 535, 'url': 'https://preview.redd.it/u28jb5s74a0f1.png?width=960&crop=smart&auto=webp&s=c04a2ab8472c45ee196a08b25a08a0c57b7a0dcc', 'width': 960}, {'height': 601, 'url': 'https://preview.redd.it/u28jb5s74a0f1.png?width=1080&crop=smart&auto=webp&s=5a6c054742c7a66b519ac3402cfa645d5bdccf06', 'width': 1080}], 'source': {'height': 1906, 'url': 'https://preview.redd.it/u28jb5s74a0f1.png?auto=webp&s=ae37e37d800bc3a12c81956f0fb758b07b0ef8c6', 'width': 3420}, 'variants': {}}]}
would fine-tuning improve the content creation output?
1
I'm new to fine-tuning and, due to limited hardware, can only use cloud-based solutions. I'm seeking advice on a problem: I'm testing content creation for the X industry. I've tried multiple n8n AI agents in sequence, but with lengthy writing rules they hallucinate or fail to meet the requirements. I have custom writing rules, industry-specific jargon, language guidelines, and a specific output template in the prompts. Where should I start with fine-tuning Anthropic or Gemini models, as they seem to produce the most human-like outputs for my needs? Can you suggest, based on your experience, which direction I should explore? I'm overwhelmed by the information and YouTube tutorials out there.
2025-05-12T04:37:43
https://www.reddit.com/r/LocalLLaMA/comments/1kkk0ye/would_finetuning_improve_the_content_creation/
jamesftf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkk0ye
false
null
t3_1kkk0ye
/r/LocalLLaMA/comments/1kkk0ye/would_finetuning_improve_the_content_creation/
false
false
self
1
null
Looking for a Long-Context LLM for Deobfuscation Code Mapping (200k+ Tokens, RTX 4080 Super)
1
[removed]
2025-05-12T04:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1kkk2cy/looking_for_a_longcontext_llm_for_deobfuscation/
Strong-Garbage-1989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkk2cy
false
null
t3_1kkk2cy
/r/LocalLLaMA/comments/1kkk2cy/looking_for_a_longcontext_llm_for_deobfuscation/
false
false
self
1
null
"How many days is it between 12/5/2025 and 20/7/2025? (dd/mm/yy)". Did some dishes, went out with trash. They really th0nk about it, innocent question; but sometimes I can feel a bit ambivalent about this. But it's better than between the one, and zero I guess, on the other hand, it's getting there.
14
2025-05-12T05:07:01
https://i.redd.it/dg50eahh8a0f1.png
Ein-neiveh-blaw-bair
i.redd.it
1970-01-01T00:00:00
0
{}
1kkkhkf
false
null
t3_1kkkhkf
/r/LocalLLaMA/comments/1kkkhkf/how_many_days_is_it_between_1252025_and_2072025/
false
false
default
14
{'enabled': True, 'images': [{'id': 'dg50eahh8a0f1', 'resolutions': [{'height': 11, 'url': 'https://preview.redd.it/dg50eahh8a0f1.png?width=108&crop=smart&auto=webp&s=18b5d1b0e17964a6bd7f1f4526992ffe27767016', 'width': 108}, {'height': 23, 'url': 'https://preview.redd.it/dg50eahh8a0f1.png?width=216&crop=smart&auto=webp&s=6dc2bc7c75338b55acbbc299e522f19702858b71', 'width': 216}, {'height': 34, 'url': 'https://preview.redd.it/dg50eahh8a0f1.png?width=320&crop=smart&auto=webp&s=6b685b76054995ed1a18806fe27d78545467f8d4', 'width': 320}, {'height': 69, 'url': 'https://preview.redd.it/dg50eahh8a0f1.png?width=640&crop=smart&auto=webp&s=ca25176cc0131c24ad4b3dda3c0fc89336805285', 'width': 640}], 'source': {'height': 80, 'url': 'https://preview.redd.it/dg50eahh8a0f1.png?auto=webp&s=05f2140d714195d2d49448f725c6ee51a3eec225', 'width': 732}, 'variants': {}}]}
AMD FirePro W9100, any good?
1
[removed]
2025-05-12T05:15:09
https://www.reddit.com/r/LocalLLaMA/comments/1kkkm4b/amd_firepro_w9100_any_good/
Maleficent-Run-7488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkkm4b
false
null
t3_1kkkm4b
/r/LocalLLaMA/comments/1kkkm4b/amd_firepro_w9100_any_good/
false
false
self
1
null
Looking for a Long-Context LLM for Deobfuscation Code Mapping (200k+ Tokens, RTX 4080 Super)
1
[removed]
2025-05-12T05:29:12
https://www.reddit.com/r/LocalLLaMA/comments/1kkktrr/looking_for_a_longcontext_llm_for_deobfuscation/
Strong-Garbage-1989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkktrr
false
null
t3_1kkktrr
/r/LocalLLaMA/comments/1kkktrr/looking_for_a_longcontext_llm_for_deobfuscation/
false
false
self
1
null
Findings from LoRA Finetuning for Qwen3
76
**TL;DR:** Fine-tuned Qwen3-8B with a small LoRA setup to preserve its ability to switch behaviors using `/think` (reasoning) and `/no_think` (casual) prompts. Rank 8 gave the best results. Training took \~30 minutes for 8B using 4,000 examples. **LoRA Rank Testing Results:** * ✅ **Rank 8**: Best outcome—preserved both `/think` and `/no_think` behavior. * ❌ **Rank 32**: Model started ignoring the `/think` prompt. * 💀 **Rank 64**: Completely broke—output became nonsensical. * 🧠 **Rank 128**: Overfit hard—model became overly STUPID **Training Configuration:** * Applied LoRA to: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` * Rank: 8 * Alpha: 16 * Dropout: 0.05 * Bias: Disabled * Gradient Checkpointing: Enabled to reduce memory usage * Batch Size: 2 * Gradient Accumulation: 4 steps * Learning Rate: 2e-4 * Epochs: 1 I also tested whether full finetuning or using the model without 4-bit quantization would help. Neither approach gave better results. In fact, the model sometimes performed worse or became inconsistent in responding to `/think` and `/no_think`. This confirmed that lightweight LoRA with rank 8 was the ideal trade-off between performance and resource use. **Model Collection:** 👉 [GrayLine-Qwen3 Collection](https://huggingface.co/collections/soob3123/grayline-collection-qwen3-6821009e843331c5a9c27da1) **Future Plans:** * Qwen3-32B * Try fine-tuning Qwen3-30B-A3B (MoE version) to see if it handles behavior switching better at scale. * Run full benchmark evaluations using LM-Eval to better understand model performance across reasoning, safety, and general capabilities. Let me know if you want me to try any other configs!
2025-05-12T05:46:44
https://www.reddit.com/r/LocalLLaMA/comments/1kkl39r/findings_from_lora_finetuning_for_qwen3/
Reader3123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkl39r
false
null
t3_1kkl39r
/r/LocalLLaMA/comments/1kkl39r/findings_from_lora_finetuning_for_qwen3/
false
false
self
76
{'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM.png?width=108&crop=smart&auto=webp&s=9297c7190163bf060be49b712018ca310f0f9852', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM.png?width=216&crop=smart&auto=webp&s=9098d5fae043c35d74e77cff04b1b6c506d2ff11', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM.png?width=320&crop=smart&auto=webp&s=95d7097864a092f58272183869785a7416146231', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM.png?width=640&crop=smart&auto=webp&s=c91ed0e87bd34ff72cf2b8f366b9c2c5ba7a36f9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM.png?width=960&crop=smart&auto=webp&s=b27cf0104b94e6c1f071c319b9c9b1b2778e5765', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM.png?width=1080&crop=smart&auto=webp&s=8989032a03900656a0b2c2e282da875b343b0f09', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM.png?auto=webp&s=e9a058e79ac1da81b6a00c0b1b7e82860049762a', 'width': 1200}, 'variants': {}}]}
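For anyone wanting to reproduce the rank-8 setup described above, a minimal sketch using the Hugging Face peft library is below; the model id, device placement, and everything downstream (4-bit loading, dataset, trainer) are assumptions left out here, so treat it as a starting point rather than the author's exact script.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")
model.gradient_checkpointing_enable()   # as in the post, to reduce memory usage

lora_cfg = LoraConfig(
    r=8,                  # the rank that preserved /think and /no_think behavior
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()      # sanity check: only a tiny fraction is trainable
```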
Looking for a Long-Context LLM for Deobfuscation Code Mapping (200k+ Tokens, RTX 4080 Super)
1
[removed]
2025-05-12T05:51:20
https://www.reddit.com/r/LocalLLaMA/comments/1kkl5wc/looking_for_a_longcontext_llm_for_deobfuscation/
Strong-Garbage-1989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkl5wc
false
null
t3_1kkl5wc
/r/LocalLLaMA/comments/1kkl5wc/looking_for_a_longcontext_llm_for_deobfuscation/
false
false
self
1
null
Fp6 and Blackwell
6
Most of the news has focused on Blackwell's hardware acceleration for FP4, but as far as I understand it can also accelerate FP6. Is that correct? And if so, are there any quantized LLMs that benefit from this?
2025-05-12T06:35:00
https://www.reddit.com/r/LocalLLaMA/comments/1kkltpr/fp6_and_blackwell/
Green-Ad-3964
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkltpr
false
null
t3_1kkltpr
/r/LocalLLaMA/comments/1kkltpr/fp6_and_blackwell/
false
false
self
6
null
Is the ROG Ally good for local AI
1
I am considering getting the Ally to turn it into a remote AI server running LM Studio. Just wondering if its Radeon 780M GPU is suitable for running local LLMs and whether it can handle models like Google's Gemma 27B (~14 GB). The reason I'm picking it is that it's being sold for $400 used in my area; alternatives cost at least $200 more.
2025-05-12T07:28:18
https://www.reddit.com/r/LocalLLaMA/comments/1kkml9p/is_the_rog_ally_good_for_local_ai/
ParamedicDirect5832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkml9p
false
null
t3_1kkml9p
/r/LocalLLaMA/comments/1kkml9p/is_the_rog_ally_good_for_local_ai/
false
false
self
1
null
DeepSeek Always Showing This Error, What to Do on the Free Plan?
1
[removed]
2025-05-12T07:42:28
https://www.reddit.com/r/LocalLLaMA/comments/1kkms73/deep_seek_always_showing_this_error_what_to_do_in/
KiranInfasta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkms73
false
null
t3_1kkms73
/r/LocalLLaMA/comments/1kkms73/deep_seek_always_showing_this_error_what_to_do_in/
false
false
https://external-preview…dd33ae5cc1de7c3e
1
null
2 VLLM Containers on a single GPU
1
[removed]
2025-05-12T08:00:33
https://www.reddit.com/r/LocalLLaMA/comments/1kkn14s/2_vllm_containers_on_a_single_gpu/
OPlUMMaster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkn14s
false
null
t3_1kkn14s
/r/LocalLLaMA/comments/1kkn14s/2_vllm_containers_on_a_single_gpu/
false
false
self
1
null
Where I can find Gen AI images dataset with input text prompts?
1
Hey everyone, I am working on my research paper and a side project. I need a small dataset of images generated by LLMs along with the input prompts. I am working on an enhancement project for images generated by AI.
2025-05-12T08:20:37
https://www.reddit.com/r/LocalLLaMA/comments/1kknasa/where_i_can_find_gen_ai_images_dataset_with_input/
gpt-d13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kknasa
false
null
t3_1kknasa
/r/LocalLLaMA/comments/1kknasa/where_i_can_find_gen_ai_images_dataset_with_input/
false
false
self
1
null
Almost Lost My Job to an AI Detector (WTF GOING ON !! )
1
[removed]
2025-05-12T08:25:36
https://www.reddit.com/r/LocalLLaMA/comments/1kknd75/almost_lost_my_job_to_an_ai_detector_wtf_going_on/
Elegant_master69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kknd75
false
null
t3_1kknd75
/r/LocalLLaMA/comments/1kknd75/almost_lost_my_job_to_an_ai_detector_wtf_going_on/
false
false
self
1
null
Qwen3 issue with tool calling - invalid json returned
1
[removed]
2025-05-12T08:28:50
https://www.reddit.com/r/LocalLLaMA/comments/1kkneqh/qwen3_issue_with_tool_calling_invalid_json/
Educational-Shoe9300
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkneqh
false
null
t3_1kkneqh
/r/LocalLLaMA/comments/1kkneqh/qwen3_issue_with_tool_calling_invalid_json/
false
false
self
1
null
Fine-Tuning Gemma3-QAT
1
[removed]
2025-05-12T08:32:34
https://www.reddit.com/r/LocalLLaMA/comments/1kkngl2/fine_tunning_gemma3qat/
AdMajestic9148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkngl2
false
null
t3_1kkngl2
/r/LocalLLaMA/comments/1kkngl2/fine_tunning_gemma3qat/
false
false
self
1
null
What are your "Turing tests"?
1
[removed]
2025-05-12T09:12:07
https://www.reddit.com/r/LocalLLaMA/comments/1kko03u/what_are_your_turing_tests/
redalvi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kko03u
false
null
t3_1kko03u
/r/LocalLLaMA/comments/1kko03u/what_are_your_turing_tests/
false
false
self
1
null
Support for InternVL has been merged into llama.cpp
38
[https://github.com/ggml-org/llama.cpp/pull/13422](https://github.com/ggml-org/llama.cpp/pull/13422) [https://github.com/ggml-org/llama.cpp/pull/13443](https://github.com/ggml-org/llama.cpp/pull/13443) when GGUF? ;)
2025-05-12T09:21:46
https://www.reddit.com/r/LocalLLaMA/comments/1kko4xu/support_for_internvl_has_been_merged_into_llamacpp/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kko4xu
false
null
t3_1kko4xu
/r/LocalLLaMA/comments/1kko4xu/support_for_internvl_has_been_merged_into_llamacpp/
false
false
self
38
{'enabled': False, 'images': [{'id': 'z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q.png?width=108&crop=smart&auto=webp&s=9c04c3233abae0c518f71f6ba6a9f88a5b652536', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q.png?width=216&crop=smart&auto=webp&s=112b777131bd9864f90b4cb715b5057f73d96308', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q.png?width=320&crop=smart&auto=webp&s=644f1c4c98228d7ce2e24db18b9acd5173dd0f30', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q.png?width=640&crop=smart&auto=webp&s=07ec84153c46aa3c4d2da959f5169831790cd7a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q.png?width=960&crop=smart&auto=webp&s=9bfc0f0cc01c159568e34c1faa5aff3ef909316f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q.png?width=1080&crop=smart&auto=webp&s=06e2191e6efc1b4aeb7f8454c57f9d2e6e000cd0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z-PJoWcFwIVolySSGFtaKu6lZ9POolSvv3HV2PzoH3Q.png?auto=webp&s=215ef13053a81b43eea74b26db52eb4f03726b28', 'width': 1200}, 'variants': {}}]}
llama.cpp not using kv cache effectively?
16
llama.cpp not using kv cache effectively? I'm running the unsloth UD q4 quanto of qwen3 30ba3b and noticed that when adding new responses in a chat, it seemed to re-process the whole conversation instead of using the kv cache. any ideas? ``` May 12 09:33:13 llm llm[948025]: srv params_from_: Chat format: Content-only May 12 09:33:13 llm llm[948025]: slot launch_slot_: id 0 | task 105562 | processing task May 12 09:33:13 llm llm[948025]: slot update_slots: id 0 | task 105562 | new prompt, n_ctx_slot = 40960, n_keep = 0, n_prompt_tokens = 15411 May 12 09:33:13 llm llm[948025]: slot update_slots: id 0 | task 105562 | kv cache rm [3, end) May 12 09:33:13 llm llm[948025]: slot update_slots: id 0 | task 105562 | prompt processing progress, n_past = 2051, n_tokens = 2048, progress = > May 12 09:33:16 llm llm[948025]: slot update_slots: id 0 | task 105562 | kv cache rm [2051, end) May 12 09:33:16 llm llm[948025]: slot update_slots: id 0 | task 105562 | prompt processing progress, n_past = 4099, n_tokens = 2048, progress = > May 12 09:33:18 llm llm[948025]: slot update_slots: id 0 | task 105562 | kv cache rm [4099, end) May 12 09:33:18 llm llm[948025]: slot update_slots: id 0 | task 105562 | prompt processing progress, n_past = 6147, n_tokens = 2048, progress = > May 12 09:33:21 llm llm[948025]: slot update_slots: id 0 | task 105562 | kv cache rm [6147, end) May 12 09:33:21 llm llm[948025]: slot update_slots: id 0 | task 105562 | prompt processing progress, n_past = 8195, n_tokens = 2048, progress = > May 12 09:33:25 llm llm[948025]: slot update_slots: id 0 | task 105562 | kv cache rm [8195, end) ```
2025-05-12T09:36:43
https://www.reddit.com/r/LocalLLaMA/comments/1kkocfx/llamacpp_not_using_kv_cache_effectively/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkocfx
false
null
t3_1kkocfx
/r/LocalLLaMA/comments/1kkocfx/llamacpp_not_using_kv_cache_effectively/
false
false
self
16
null
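One thing worth checking in the situation above: llama-server only reuses the KV cache when the serialized prompt prefix matches the previous request, so a chat frontend that re-renders the template (injecting timestamps, reordering system text, or toggling /think) can force a full reprocess like the log shows. A small sketch against the native /completion endpoint, with the port and prompt as placeholders, can confirm whether prefix reuse works at all on your build:

```python
import requests

URL = "http://localhost:8080/completion"   # default llama-server port; adjust as needed

def complete(prompt: str) -> str:
    r = requests.post(URL, json={
        "prompt": prompt,
        "n_predict": 64,
        "cache_prompt": True,   # ask the server to reuse the matching KV-cache prefix
    })
    r.raise_for_status()
    return r.json()["content"]

history = "You are a helpful assistant.\nUser: Hi\nAssistant:"
history += complete(history)
# The second call shares an identical prefix, so the server log should show
# prompt processing resuming near the end of the cache rather than from token ~3.
history += "\nUser: Tell me more\nAssistant:"
print(complete(history))
```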
Does size matter? RAG and text-to-SQL with LLM locally
1
[removed]
2025-05-12T09:52:47
https://www.reddit.com/r/LocalLLaMA/comments/1kkokql/does_size_matter_rag_and_texttosql_with_llm/
Flashy-Row-8856
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkokql
false
null
t3_1kkokql
/r/LocalLLaMA/comments/1kkokql/does_size_matter_rag_and_texttosql_with_llm/
false
false
self
1
null
llama 3.2 8 or 80 billion parameters, does it really matter for RAG and text-to-SQL?
1
[removed]
2025-05-12T09:55:47
https://www.reddit.com/r/LocalLLaMA/comments/1kkombm/llama_32_8_or_80_billion_parameters_does_it/
Flashy-Row-8856
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkombm
false
null
t3_1kkombm
/r/LocalLLaMA/comments/1kkombm/llama_32_8_or_80_billion_parameters_does_it/
false
false
self
1
null
alibaba's MNN Chat App now supports qwen 2.5 omni 3b and 7b
49
[Github Page](https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md) https://preview.redd.it/fh8ydmulsb0f1.png?width=1776&format=png&auto=webp&s=4868777573338de97f98442b4ac0f90bf28a3bd0 The pull request has just been merged. If you run into any problems, please report an issue on GitHub or comment below.
2025-05-12T10:14:51
https://www.reddit.com/r/LocalLLaMA/comments/1kkox2l/alibabas_mnn_chat_app_now_supports_qwen_25_omni/
Juude89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kkox2l
false
null
t3_1kkox2l
/r/LocalLLaMA/comments/1kkox2l/alibabas_mnn_chat_app_now_supports_qwen_25_omni/
false
false
https://external-preview…9b066c9c991c6c39
49
{'enabled': False, 'images': [{'id': 'brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc.png?width=108&crop=smart&auto=webp&s=9a2d662ca216bb523235dcd28ee2d26a2ba4ae97', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc.png?width=216&crop=smart&auto=webp&s=13b39e8227964ded6aea5ae7437035c6258a22dd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc.png?width=320&crop=smart&auto=webp&s=83e933a83a30159258210d0553b5b82241bb11d3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc.png?width=640&crop=smart&auto=webp&s=6fb2f98106483af9f91d56a5a868a3f07d4b752b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc.png?width=960&crop=smart&auto=webp&s=069e11c20279981cac97cf2700ee3fccc3d06da2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc.png?width=1080&crop=smart&auto=webp&s=e11c2283153646dd6585c6006abffe2bc411fa7f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/brLGj73HpwGowL4BCPItOOK8Jxb9y38Hisluq85GQSc.png?auto=webp&s=ac9a988221cea4269ff43d1c6ab0d5c7732cc479', 'width': 1200}, 'variants': {}}]}