Dataset schema (column stats from the viewer header):

| column | dtype | min | max |
|---|---|---|---|
| title | string (lengths) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (lengths) | 0 | 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | string (lengths) | 0 | 878 |
| author | string (lengths) | 3 | 20 |
| domain | string (lengths) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (lengths) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (lengths) | 646 | 1.8k |
| name | string (lengths) | 10 | 10 |
| permalink | string (lengths) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (lengths) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (lengths) | 301 | 5.01k |
What do you think of Arcee's Virtuoso Large and Coder Large?
3
I'm testing them through OpenRouter and they look pretty good. Anyone using them?
2025-05-18T17:54:39
https://www.reddit.com/r/LocalLLaMA/comments/1kpq099/what_do_you_think_of_arcees_virtuoso_large_and/
Sky_Linx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpq099
false
null
t3_1kpq099
/r/LocalLLaMA/comments/1kpq099/what_do_you_think_of_arcees_virtuoso_large_and/
false
false
self
3
null
What AI is best for Chinese to English translation currently?
2
[removed]
2025-05-18T18:03:53
https://www.reddit.com/r/LocalLLaMA/comments/1kpq8e4/what_ai_is_best_for_chinese_to_english/
Civil_Candidate_824
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpq8e4
false
null
t3_1kpq8e4
/r/LocalLLaMA/comments/1kpq8e4/what_ai_is_best_for_chinese_to_english/
false
false
self
2
null
Optimizing llama-server for RTX 5090 + RTX 4090
1
[removed]
2025-05-18T18:11:20
https://www.reddit.com/r/LocalLLaMA/comments/1kpqell/optimizing_llamaserver_for_rtx_509_rtx_4090/
Lumpy-Flamingo6802
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpqell
false
null
t3_1kpqell
/r/LocalLLaMA/comments/1kpqell/optimizing_llamaserver_for_rtx_509_rtx_4090/
false
false
self
1
null
Best ultra low budget GPU for 70B and best LLM for my purpose
1
[removed]
2025-05-18T18:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1kpqj7e/best_ultra_low_budget_gpu_for_70b_and_best_llm/
ExtensionAd182
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpqj7e
false
null
t3_1kpqj7e
/r/LocalLLaMA/comments/1kpqj7e/best_ultra_low_budget_gpu_for_70b_and_best_llm/
false
false
self
1
null
Orange Pi AI Studio Pro is now available: 192 GB for ~$2,900. Does anyone know how it performs and what can be done with it?
55
There was some speculation about it some months ago in this thread: https://www.reddit.com/r/LocalLLaMA/comments/1im141p/orange_pi_ai_studio_pro_mini_pc_with_408gbs/ It seems it can now be ordered on AliExpress (96 GB for ~$2,600, 192 GB for ~$2,900), but I couldn't find any English reviews or more info on it than what was speculated early this year. It's not even listed on orangepi.org. Maybe someone who speaks Chinese can find more info on it on the Chinese web? AFAIK it's not a full mini computer but some USB 4.0 add-on. Software support is likely going to be the biggest issue, but I would really love to hear about some real-world experiences with this thing.
2025-05-18T18:17:45
https://www.reddit.com/r/LocalLLaMA/comments/1kpqk4c/orange_pi_ai_studio_pro_is_now_available_192gb/
MarinatedPickachu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpqk4c
false
null
t3_1kpqk4c
/r/LocalLLaMA/comments/1kpqk4c/orange_pi_ai_studio_pro_is_now_available_192gb/
false
false
self
55
null
MLX vs. UD GGUF
13
Not sure if this is useful to anyone else, but I benchmarked Unsloth's Qwen3-30B-A3B Dynamic 2.0 GGUF against the MLX version. Both models are the 8-bit quantization, and both are running on LM Studio with the recommended Qwen 3 settings for samplers and temperature.

Results from the same thinking prompt:

- MLX: 3,516 tokens generated, 1.0 s to first token, 70.6 tokens/second
- UD GGUF: 3,321 tokens generated, 0.12 s to first token, 23.41 tokens/second

This is on a MacBook M4 Max with 128 GB of RAM, all layers offloaded to the GPU.
2025-05-18T18:27:09
https://www.reddit.com/r/LocalLLaMA/comments/1kpqrzz/mlx_vs_ud_gguf/
cspenn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpqrzz
false
null
t3_1kpqrzz
/r/LocalLLaMA/comments/1kpqrzz/mlx_vs_ud_gguf/
false
false
self
13
null
Geekbench equivalent for local LLM perf
1
[removed]
2025-05-18T18:32:26
https://www.reddit.com/r/LocalLLaMA/comments/1kpqwh8/geekbench_equivalent_for_local_llm_perf/
Friendly_Writer_8549
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpqwh8
false
null
t3_1kpqwh8
/r/LocalLLaMA/comments/1kpqwh8/geekbench_equivalent_for_local_llm_perf/
false
false
self
1
null
Minimum parameter model needed for RAG? Can I do it without Llama?
1
[removed]
2025-05-18T18:52:19
https://www.reddit.com/r/LocalLLaMA/comments/1kprdez/minimum_parameter_model_needed_for_rag_can_i_do/
ExtremeAcceptable289
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kprdez
false
null
t3_1kprdez
/r/LocalLLaMA/comments/1kprdez/minimum_parameter_model_needed_for_rag_can_i_do/
false
false
self
1
null
Skeptical about the increased focus on STEM and CoT
78
With the release of Qwen3, I've been growing increasingly skeptical about the direction many labs are taking with CoT and STEM-focused LLMs. With Qwen3, every model in the lineup follows a hybrid CoT approach and has a heavy emphasis on STEM tasks. This seems to be part of why the models feel "overcooked". I have seen from other people that fine-tuning these models has been a challenge, especially with the reasoning baked in. This can be seen when applying instruction training data to the supposed base model that Qwen released: the training loss is surprisingly low, which suggests that it has already been instruction-primed to some extent, likely to better support CoT. This is not a new thing, as we have seen censorship and refusals from "base" models before.

Now, if the instruction-tuned checkpoints were always strong, maybe that would be acceptable. But I have seen a bunch of reports that these models tend to become overly repetitive in long multi-turn conversations. That's actually what pushed some people to train their own base models for Qwen3. One possible explanation is that a large portion of the training seems focused on single-shot QA tasks for math and code.

This heavy emphasis on STEM capabilities has brought about an even bigger issue than fine-tuning difficulty: signs of knowledge degradation, or what's called catastrophic forgetting. Newer models, even some of the largest, are not making much headway on frontier knowledge benchmarks like Humanity's Last Exam. This leads to hilarious results where Llama 2 7B beats out GPT-4.5 on that benchmark. While some might argue that raw knowledge isn't a measure of intelligence, for LLMs, robust world knowledge is still critical for answering general questions or even coding for more niche applications. I don't want LLMs to start relying on search tools for answering knowledge questions.

Going back to CoT, it's also not a one-size-fits-all solution. It has inherent latency, since the model has to "think out loud" by generating thinking tokens before answering, and it often explores multiple unnecessary branches. While this can make models like R1 surprisingly charming in their human-like thoughts, responses can take too long, especially for more basic questions. While there have been some improvements in token efficiency, it's still a bottleneck, especially when running local LLMs, where hardware is a real limiting factor. It's what made me not that interested in running local CoT models, as I have limited hardware.

More importantly, CoT doesn't actually help with every task. In creative writing, for example, there's no single correct answer to reason toward. Reasoning might help with coherence, but in my own testing, it usually results in less focused paragraphs. And at the end of the day, it's still unclear whether these models are truly reasoning or just remembering patterns from training. CoT models continue to struggle with genuinely novel problems, and we've seen that even without generating CoT tokens, some CoT-trained models can still perform impressively compared to similarly sized non-CoT models. I sometimes wonder if these models actually reason or just remember the steps to a memorized answer.

So yeah, I'm not fully sold on the CoT- and STEM-heavy trajectory the field is on right now, especially when it comes at the cost of broad general capability and world knowledge. It feels like the field is optimizing for a narrow slice of tasks (math, code) while losing sight of what makes these models useful more broadly. This can already be seen with the May release of Gemini 2.5 Pro, where the only marketed improvement was in coding, while everything else seems to be a downgrade from the March release of Gemini 2.5 Pro.
2025-05-18T19:10:54
https://www.reddit.com/r/LocalLLaMA/comments/1kprsun/skeptical_about_the_increased_focus_on_stem_and/
Quazar386
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kprsun
false
null
t3_1kprsun
/r/LocalLLaMA/comments/1kprsun/skeptical_about_the_increased_focus_on_stem_and/
false
false
self
78
null
How to choose STT model for your Voice agent
0
2025-05-18T19:21:38
https://comparevoiceai.com/blog/how-to-choose-stt-voice-ai-model
Excellent-Effect237
comparevoiceai.com
1970-01-01T00:00:00
0
{}
1kps1z4
false
null
t3_1kps1z4
/r/LocalLLaMA/comments/1kps1z4/how_to_choose_stt_model_for_your_voice_agent/
false
false
default
0
null
A doubt
1
[removed]
2025-05-18T19:35:18
https://www.reddit.com/r/LocalLLaMA/comments/1kpsdd8/a_doubt/
Relative_Ability_220
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpsdd8
false
null
t3_1kpsdd8
/r/LocalLLaMA/comments/1kpsdd8/a_doubt/
false
false
self
1
null
Can I pool VRAM of the new Nvidia workstation GPUs for local models?
1
[removed]
2025-05-18T19:41:42
https://www.reddit.com/r/LocalLLaMA/comments/1kpsiv8/can_i_pool_vram_of_the_new_nvidia_workstation/
Careless-Wrongdoer82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpsiv8
false
null
t3_1kpsiv8
/r/LocalLLaMA/comments/1kpsiv8/can_i_pool_vram_of_the_new_nvidia_workstation/
false
false
self
1
null
Unsloth phi4 reasoning plus Q6 has big problems with thinking compared to QWQ3. Should I use unsloth PHI4 Reasoning Q6?
1
[removed]
2025-05-18T19:48:46
https://www.reddit.com/r/LocalLLaMA/comments/1kpsoqi/unsloth_phi4_reasoning_plus_q6_has_big_problems/
Hot_Watercress5440
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpsoqi
false
null
t3_1kpsoqi
/r/LocalLLaMA/comments/1kpsoqi/unsloth_phi4_reasoning_plus_q6_has_big_problems/
false
false
self
1
null
Can I pool VRAM of the new Nvidia workstation GPUs for local models?
1
[removed]
2025-05-18T19:59:15
https://www.reddit.com/r/LocalLLaMA/comments/1kpsxdh/can_i_pool_vram_of_the_new_nvidia_workstation/
tyflips
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpsxdh
false
null
t3_1kpsxdh
/r/LocalLLaMA/comments/1kpsxdh/can_i_pool_vram_of_the_new_nvidia_workstation/
false
false
self
1
null
Serve 1 LLM with different prompts for Visual Studio Code?
1
How do you guys tackle this scenario? I'd like to have VS Code run Continue or Copilot or something else with both "Chat" and "Autocomplete/Fill-in-the-middle", but instead of running two models, simply run the same instruct model with different system prompts or the like.

I'm not very experienced with Ollama and LM Studio (llama.cpp) and have never touched vLLM before, but I believe Ollama just loads the same model twice into VRAM, which is super wasteful, and the same happens with LM Studio, which I tried just now. For example, on my 24 GB GPU I want a 32B model for both autocomplete and chat; GLM-4 handles large context admirably. Or perhaps a 14B Qwen 3 with very long context that maxes out the 24 GB. A large instruct model can be smart enough to follow the system prompt and possibly do much better than a 1B model that does just basic autocomplete.

Have you guys done this before? Obviously, the inference engine will use more resources to handle more than one session, but I don't want it to just duplicate the same model in VRAM. Perhaps this has been a stupid question and vLLM is geared more toward this, but I'm not really experienced around this topic. Thank you in advance... May the AI gods be kind upon us.
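A minimal sketch of the single-model, two-roles idea, assuming an OpenAI-compatible local endpoint (LM Studio defaults to port 1234) and the `openai` Python package; the base URL, model name, and both system prompts below are placeholders, not a known-good setup. The same loaded weights serve both roles, with only the system prompt changing per request (llama.cpp's llama-server can additionally handle concurrent requests against one loaded model via its `--parallel` slots):

```python
# One locally served instruct model, two behaviors via system prompts only.
# Assumes an OpenAI-compatible server is already running with a model loaded.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

CHAT_SYSTEM = "You are a helpful coding assistant. Answer conversationally."
FIM_SYSTEM = (
    "You are a code completion engine. Output only the code that belongs "
    "between <PRE> and <SUF>. No prose, no markdown fences."
)

def complete(system_prompt: str, user_content: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",  # placeholder; use whatever name your server exposes
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return resp.choices[0].message.content

# Same weights in VRAM, two roles:
print(complete(CHAT_SYSTEM, "Explain Python's GIL in two sentences."))
print(complete(FIM_SYSTEM, "<PRE>def add(a, b):\n    return <SUF>"))
```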
2025-05-18T20:28:57
https://www.reddit.com/r/LocalLLaMA/comments/1kptmbk/serve_1_llm_with_different_prompts_for_visual/
windozeFanboi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kptmbk
false
null
t3_1kptmbk
/r/LocalLLaMA/comments/1kptmbk/serve_1_llm_with_different_prompts_for_visual/
false
false
self
1
null
How does num_gpu work in Ollama? Why Ollama keeps using the GPU after the model stopped?
0
Hello guys, I'm confused; I hope you can help me.

If I run **Qwen3 30B A3B** with num_gpu **maxed out**, I get **2-3 T/s** with **90% GPU** usage and **20% CPU** usage. If I run it at **default**, I get **12-17 T/s** with **60% GPU** usage and **50% CPU** usage.

Meanwhile, if I run **Gemma 3 12B QAT** with num_gpu **maxed out**, I get **60-65 T/s** with **95% GPU** usage and **15% CPU** usage. If I run it at **default**, I get **12-13 T/s** with **45% GPU** usage and **70% CPU** usage.

Also, after the response is generated, the GPU usage skyrockets to 95-100% when using Qwen3, but this does not happen with Gemma 3. What the hell is happening?

Specs: RTX 3080 Ti, 32 GB of RAM, and a 12900K.
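For context, `num_gpu` in Ollama is the number of layers offloaded to the GPU, and it can be set per request through the REST API's `options` field, which makes comparisons like the one above easy to script. A sketch (the model tag and prompt are placeholders; assumes Ollama on its default port 11434):

```python
# Compare tokens/second at different num_gpu (GPU layer offload) settings via
# Ollama's REST API. eval_count and eval_duration come back in the
# non-streaming response; eval_duration is in nanoseconds.
import requests

def tokens_per_second(model: str, prompt: str, num_gpu: int) -> float:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {"num_gpu": num_gpu},
        },
        timeout=600,
    )
    r.raise_for_status()
    data = r.json()
    return data["eval_count"] / (data["eval_duration"] / 1e9)

for layers in (0, 16, 32, 99):  # 99 is effectively "offload everything"
    tps = tokens_per_second("qwen3:30b-a3b", "Explain KV cache briefly.", layers)
    print(f"num_gpu={layers}: {tps:.1f} tok/s")
```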
2025-05-18T21:24:08
https://www.reddit.com/r/LocalLLaMA/comments/1kpuvnn/how_does_num_gpu_work_in_ollama_why_ollama_keeps/
S4lVin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpuvnn
false
null
t3_1kpuvnn
/r/LocalLLaMA/comments/1kpuvnn/how_does_num_gpu_work_in_ollama_why_ollama_keeps/
false
false
self
0
null
How to choose a TTS model for your voice agent
0
https://comparevoiceai.com/blog/how-to-choose-tts-voice-ai-model
2025-05-18T21:30:14
https://www.reddit.com/r/LocalLLaMA/comments/1kpv0ga/how_to_choose_a_tts_model_for_your_voice_agent/
Excellent-Effect237
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpv0ga
false
null
t3_1kpv0ga
/r/LocalLLaMA/comments/1kpv0ga/how_to_choose_a_tts_model_for_your_voice_agent/
false
false
self
0
null
Best local llm to run on a 16gb MacBook Pro M4
1
[removed]
2025-05-18T21:49:56
https://www.reddit.com/r/LocalLLaMA/comments/1kpvgam/best_local_llm_to_run_on_a_16gb_macbook_pro_m4/
combo-user
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpvgam
false
null
t3_1kpvgam
/r/LocalLLaMA/comments/1kpvgam/best_local_llm_to_run_on_a_16gb_macbook_pro_m4/
false
false
self
1
null
Best local LLaMA model for coding + fine-tuning on M2 Max (64 GB) & Zed Editor?
2
Hey everyone, I'm experimenting with running a LLaMA-style model 100% locally on my MacBook Pro M2 Max (64 GB RAM), and I have a few questions before I dive in:

1. **Which model for coding?** I work mainly in Astro, React, and modern JS/TS stacks, and we all know how these stacks update every week. I'm torn between smaller/lighter models (7B/13B) vs. larger ones (34B/70B), but I don't want to hit swap or kill performance. Anyone using Code Llama, StarCoder, PolyCoder, etc. locally? Which gave you the best dev-assistant experience? Currently I'm using Cursor with Gemini 2.5 Pro and it works well for me, but I want to switch to Zed since it's lightweight and also lets us use our own local models.

2. **Quantization & memory footprint.** I've heard about 8-bit / 4-bit quantization to squeeze a big model into limited RAM, but I'm not sure exactly how it works. Any pitfalls on macOS? Roughly, which quantized sizes actually fit (e.g., 13B-int8 vs. 34B-int4)? I don't understand too much about quantization yet, but I'd research it more if it's indeed a viable solution (a rough sizing sketch follows this post).

3. **Training / fine-tuning for my stack.** I'd love the model to know Astro components, ShadCN patterns, React hooks, Tailwind conventions, etc. What's the easiest workflow: LoRA/QLoRA on a small dataset, in-context examples only, or a full fine-tune? And down the road, as Astro/React evolve, is it better to append new data to my LoRA or just switch to an updated model checkpoint?

4. **Zed Editor integration.** I plan to use the model as my AI pair-programmer inside Zed Editor (it supports llama.cpp backends). Are there any special flags or setup tips to get low-latency autocomplete working smoothly?

TL;DR: best local LLM for code (size vs. performance on M2 Max)? How to quantize (8-bit / 4-bit) and fit in 64 GB? Fine-tuning strategy for Astro/React and ongoing updates? Zed Editor best practices for a snappy dev assistant?

Thanks in advance for any pointers 😊
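On question 2, the sizing arithmetic is simple enough to sanity-check in a few lines: weight memory is roughly parameter count times bits per weight divided by 8, and you still need headroom for the KV cache, the framework, and macOS itself. A rough sketch (the bits-per-weight figures are ballpark values; real GGUF files vary with mixed-precision layers and metadata):

```python
# Back-of-envelope weight sizing for a 64 GB machine. Not exact GGUF numbers:
# real quants mix precisions, and KV cache comes on top of this.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

for params, bits, label in [
    (7, 8.0, "7B @ int8"),
    (13, 8.0, "13B @ int8"),
    (34, 4.5, "34B @ ~4-bit"),
    (70, 4.5, "70B @ ~4-bit"),
]:
    print(f"{label:>12}: ~{weight_gb(params, bits):.1f} GB of weights")
```

By this rough math, even a 70B at ~4-bit (~37 GB of weights) fits in 64 GB with room for context, while 34B-int4 and 13B-int8 are comfortable.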
2025-05-18T21:54:19
https://www.reddit.com/r/LocalLLaMA/comments/1kpvju2/best_local_llama_model_for_coding_finetuning_on/
webmero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpvju2
false
null
t3_1kpvju2
/r/LocalLLaMA/comments/1kpvju2/best_local_llama_model_for_coding_finetuning_on/
false
false
self
2
null
Looking for lightweight open-source LLM for Egyptian Arabic real estate assistant (on Colab)
1
[removed]
2025-05-18T22:17:22
https://www.reddit.com/r/LocalLLaMA/comments/1kpw2k2/looking_for_lightweight_opensource_llm_for/
Ok-Watercress-451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpw2k2
false
null
t3_1kpw2k2
/r/LocalLLaMA/comments/1kpw2k2/looking_for_lightweight_opensource_llm_for/
false
false
self
1
null
Unlimited text-to-speech using Kokoro-JS, 100% local, 100% open source
177
2025-05-18T22:26:10
https://streaming-kokoro.glitch.me/
paranoidray
streaming-kokoro.glitch.me
1970-01-01T00:00:00
0
{}
1kpw9nw
false
null
t3_1kpw9nw
/r/LocalLLaMA/comments/1kpw9nw/unlimited_texttospeech_using_kokorojs_100_local/
false
false
default
177
null
Unlock Qwen3's Full Power: cot_proxy for Easy Mode Switching, Parameter Control & Clean Outputs!
39
Hey AI Devs & Qwen3 Users! 👋

Struggling to effectively use Qwen3 models with their hybrid reasoning (`/think`) and normal (`/no_think`) modes? It can be a real challenge when each mode needs different sampling parameters, and tools like Cline or RooCode don't offer that fine-grained control.

That's where `cot_proxy` comes in! 🚀 `cot_proxy` is a lightweight, Dockerized reverse proxy that sits between your application and your LLM, giving you powerful control over the request lifecycle. It's particularly game-changing for models like Qwen3.

**How `cot_proxy` makes your life easier:**

* 🧠 **Master Qwen3's Hybrid Nature:**
  * **Automatic Mode Commands:** Configure `cot_proxy` to automatically append `/think` or `/no_think` to your prompts based on the "pseudo-model" you call.
  * **Optimized Sampling Per Mode:** Define different sampling parameters (temperature, top_p, etc.) for your "thinking" and "non-thinking" Qwen3 configurations.
* 🔧 **Advanced Request Manipulation:**
  * **Model-Specific Configurations:** Create "pseudo-models" in your `.env` file (e.g., `Qwen3-32B-Creative-Thinking` vs. `Qwen3-32B-Factual-Concise`). `cot_proxy` then applies the specific parameters, prompt additions, and upstream model mapping you've defined.
  * **Clean Outputs:** Automatically strip out `<think>...</think>` tags from responses, delivering only the final, clean answer, even with streaming!
* 💡 **Easy Integration:**
  * **Turnkey Qwen3 Examples:** Our [`.env.example`](https://github.com/bold84/cot_proxy/blob/main/.env.example) file provides working configurations to get you started with Qwen3 immediately.
  * **Use with Any Client:** Seamlessly integrate Qwen3 (and other complex models) into applications that don't natively support advanced parameter or prompt adjustments.

Essentially, `cot_proxy` lets you abstract away the complexities of managing sophisticated models, allowing your client applications to remain simple while still leveraging the full power of models like Qwen3.

🔗 **Check it out, star it, and simplify your LLM workflows!**

**GitHub Repository:** [https://github.com/bold84/cot_proxy](https://github.com/bold84/cot_proxy)

We'd love to hear your feedback and see how you use it!
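The "clean outputs" feature is easy to picture with a toy version (a conceptual sketch only, not cot_proxy's actual implementation): strip any `<think>...</think>` span from the response, tolerating a block left unterminated mid-stream.

```python
# Toy illustration of think-tag stripping -- NOT cot_proxy's real code.
import re

# Match a <think> block, tolerating a missing closing tag at end of text.
THINK_RE = re.compile(r"<think>.*?(?:</think>|\Z)", re.DOTALL)

def strip_think(text: str) -> str:
    return THINK_RE.sub("", text).strip()

raw = "<think>Let me reason step by step...</think>The answer is 42."
assert strip_think(raw) == "The answer is 42."
```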
2025-05-18T22:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1kpwgjy/unlock_qwen3s_full_power_cot_proxy_for_easy_mode/
ben1984th
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpwgjy
false
null
t3_1kpwgjy
/r/LocalLLaMA/comments/1kpwgjy/unlock_qwen3s_full_power_cot_proxy_for_easy_mode/
false
false
self
39
{'enabled': False, 'images': [{'id': 'Ypsh0bt2sx_9RWDd-holqz-jR_IsFaeSixPwaViweRs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?width=108&crop=smart&auto=webp&s=7009670e65ffcee32f51561d60cb583610807f53', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?width=216&crop=smart&auto=webp&s=5a30311e597c6c5c49bc01228a6a9120e2e3a4cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?width=320&crop=smart&auto=webp&s=a676dbb9fe56f0e3be3b52490367b4fbeb2c70c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?width=640&crop=smart&auto=webp&s=150a301d86d8799a745c87a8b4117e27cd0323b4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?width=960&crop=smart&auto=webp&s=ae8addc8ac083f214f9b04e7cf1ff248e639e233', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?width=1080&crop=smart&auto=webp&s=9f1b4cd58a0d03f13c6a5031cb9b92930fc4c03d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?auto=webp&s=7ec4d0e0c3c0fd28be264d1bef112ce40f9b3168', 'width': 1200}, 'variants': {}}]}
"After constant IPTV issues in Canada, this one finally delivered"
1
[removed]
2025-05-18T22:58:41
https://www.reddit.com/r/LocalLLaMA/comments/1kpwxq4/after_constant_iptv_issues_in_canada_this_one/
Any-Passion625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpwxq4
false
null
t3_1kpwxq4
/r/LocalLLaMA/comments/1kpwxq4/after_constant_iptv_issues_in_canada_this_one/
false
false
self
1
null
Qwen 3 14B gguf "chat"?
1
[removed]
2025-05-18T23:10:06
https://www.reddit.com/r/LocalLLaMA/comments/1kpx67j/qwen_3_14b_gguf_chat/
Effective_Owl7362
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpx67j
false
null
t3_1kpx67j
/r/LocalLLaMA/comments/1kpx67j/qwen_3_14b_gguf_chat/
false
false
self
1
null
Can I pool VRAM of the new nvidia workstation GPU's for local models?
1
[removed]
2025-05-18T23:15:52
https://www.reddit.com/r/LocalLLaMA/comments/1kpxab3/can_i_pool_vram_of_the_new_nvidia_workstation/
tyflips
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpxab3
false
null
t3_1kpxab3
/r/LocalLLaMA/comments/1kpxab3/can_i_pool_vram_of_the_new_nvidia_workstation/
false
false
self
1
null
I built a tool to profile LLM energy usage on Macs programmatically (down to the line of code)
11
If you want to measure LLM energy consumption on Macs, you have options like powermetrics (a CLI tool that periodically prints energy usage to your terminal) or Activity Monitor. These work fine if you just want a high-level glance at your LLM's energy usage, but if you want more precise measurement (like seeing **energy used over specific lines of code**, or **energy cost per token generated**, etc.), there's not really a super straightforward way.

That's why I built "zeus-apple-silicon" ([github](https://github.com/ml-energy/zeus-apple-silicon)), a really tiny/lightweight library that lets you profile energy on Apple silicon programmatically, starting/stopping measurement at exactly the lines you want in your code.

**As a bonus**, it provides more detailed metrics than powermetrics or similar tools: whereas powermetrics only gives you aggregates for CPU, GPU, and ANE, this library will also break down energy metrics per efficiency/performance core, DRAM, and so on.

The library is available as a package in **Python**, but also as a header-only include in **C++** (in case you're interfacing with, say, llama.cpp directly).

Check out a more detailed blog post about it (with examples) here: [https://ml.energy/blog/energy/measurement/profiling-llm-energy-consumption-on-macs/](https://ml.energy/blog/energy/measurement/profiling-llm-energy-consumption-on-macs/)
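A usage sketch of the start/stop idea described above; the class and method names here are guesses for illustration, not the library's documented API (see the repo and blog post for the real interface):

```python
# HYPOTHETICAL API -- the names below are illustrative, not from the docs.
# The shape of the idea: open a measurement window right before the code you
# care about, close it right after, and read back a per-component breakdown.
from zeus_apple_silicon import AppleEnergyMonitor  # hypothetical import

monitor = AppleEnergyMonitor()

monitor.begin_window("generation")               # hypothetical method
total = sum(i * i for i in range(10**7))         # stand-in for llm.generate(...)
measurement = monitor.end_window("generation")   # hypothetical method

# Per the post, the result breaks energy down by CPU (per E/P core), GPU,
# ANE, and DRAM -- e.g., divide by tokens generated for joules per token.
print(measurement)
```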
2025-05-18T23:19:55
https://www.reddit.com/r/LocalLLaMA/comments/1kpxd7t/i_built_a_tool_to_profile_llm_energy_usage_on/
cachehit_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpxd7t
false
null
t3_1kpxd7t
/r/LocalLLaMA/comments/1kpxd7t/i_built_a_tool_to_profile_llm_energy_usage_on/
false
false
self
11
{'enabled': False, 'images': [{'id': '-YYyL_j0bV3W5TR_FNmATZI7qBS4xDSm4VOnHHmYJXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?width=108&crop=smart&auto=webp&s=d81ec25fcb4923413147f6ac1234cf1a0c0cb375', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?width=216&crop=smart&auto=webp&s=8744531d6f33c38e98481a2b04bb124d466faef9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?width=320&crop=smart&auto=webp&s=2bba09a684bc5a9255d2a5ab0db4d796d4be0977', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?width=640&crop=smart&auto=webp&s=60dd32aaa8d306cddf469739975a694c707e4a42', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?width=960&crop=smart&auto=webp&s=ffbb70cd3d6a0edbf56f68f9e00d008bd6969b62', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?width=1080&crop=smart&auto=webp&s=5bb6cfe2e1c1ead4e4d72cfb1c553be77bf53126', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?auto=webp&s=6a21d18c4d04261bacf65c597492a429c8525645', 'width': 1200}, 'variants': {}}]}
Riffusion AI music generator: AI voices and spoken word. Shakespeare's "All the World's a Stage", Abraham Lincoln ordering pizza, and German, Russian, and Spanish singing/spoken word. I clone these Riffusion AI voices of emotion and use them in Zonos to create various types of male and female voices.
5
2025-05-18T23:29:35
https://v.redd.it/zmpy3wuajm1f1
Extension-Fee-8480
/r/LocalLLaMA/comments/1kpxk18/riffusion_ai_music_generator_ai_voices_spoken/
1970-01-01T00:00:00
0
{}
1kpxk18
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/zmpy3wuajm1f1/DASHPlaylist.mpd?a=1750332580%2CMzU3MWVlZDM5MjJiNDhiNTcwOTExM2E1YjJlZjU2ZTBlNDBlMzliYjViZGYwMWRkY2Y3NDRjNmM1OWFjMzFiOA%3D%3D&v=1&f=sd', 'duration': 600, 'fallback_url': 'https://v.redd.it/zmpy3wuajm1f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/zmpy3wuajm1f1/HLSPlaylist.m3u8?a=1750332580%2CNTQyY2ExYmI1MzgwYmRiYzViMWJmZTEzYjY5ZmFkZjUxMjE3ZjhiZjY0NTllMzA0ZDRiYzQ3ZjgwMTY5ZWM4MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zmpy3wuajm1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1kpxk18
/r/LocalLLaMA/comments/1kpxk18/riffusion_ai_music_generator_ai_voices_spoken/
false
false
https://external-preview…8c8cdad1e278eae5
5
{'enabled': False, 'images': [{'id': 'MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=7caf8eb74603f95f408dd7ba2c3eb7c56941eab6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?width=216&crop=smart&format=pjpg&auto=webp&s=1ae113cdb1b2d69b6adff79e94b136b1b3172e25', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?width=320&crop=smart&format=pjpg&auto=webp&s=85c1a15030d3d39b01a1b715325ab7311ff90eda', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?width=640&crop=smart&format=pjpg&auto=webp&s=7a271497e2927a91b025c6817364e996608780d0', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?width=960&crop=smart&format=pjpg&auto=webp&s=de06a74186866b2de126c486292ce19d9c44e047', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f126d89702160df3f60c6c8c3efde2f5e164f603', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?format=pjpg&auto=webp&s=3091b545eb06c24d9aa79e29b0e0ff68b086297f', 'width': 1280}, 'variants': {}}]}
Qwen released new paper and model: ParScale, ParScale-1.8B-(P1-P8)
468
The original text says, 'We theoretically and empirically establish that scaling with P parallel streams is comparable to scaling the number of parameters by O(log P).' Does this mean that a 30B model can achieve the effect of a 45B model?
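One rough way to read the quoted claim, with the caveat that the constant is not pinned down here and this is not the paper's exact fitted law: if running $P$ parallel streams behaves like multiplying the parameter count by $O(\log P)$, then whether 30B reaches 45B-equivalent depends entirely on that hidden constant.

```latex
% Hedged back-of-envelope, NOT the paper's fitted law: suppose
%   N_eff ~ N (1 + a log P)   for some empirical constant a > 0.
% Then a 30B model matches a 45B dense model when
%   1 + a log P >= 45/30 = 1.5,  i.e.  P >= e^{0.5/a}.
\[
  N_{\mathrm{eff}} \approx N\,(1 + \alpha \log P), \qquad
  \frac{N_{\mathrm{eff}}}{N} \ge 1.5 \iff P \ge e^{0.5/\alpha}.
\]
```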
2025-05-19T00:24:28
https://i.redd.it/7q0xsc86um1f1.png
Dr_Karminski
i.redd.it
1970-01-01T00:00:00
0
{}
1kpyn8g
false
null
t3_1kpyn8g
/r/LocalLLaMA/comments/1kpyn8g/qwen_released_new_paper_and_model_parscale/
false
false
https://a.thumbs.redditm…N4Vhab3vxH88.jpg
468
{'enabled': True, 'images': [{'id': 'PyGaUo1WVJJTNJTihagMB-1J-8iG1Q6G3HjZt2t5Foc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/7q0xsc86um1f1.png?width=108&crop=smart&auto=webp&s=1e8068b081f67db09e13530f196c6274a5008fca', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/7q0xsc86um1f1.png?width=216&crop=smart&auto=webp&s=3f4de50ed0a70e70982e384042194fd56bc73cae', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/7q0xsc86um1f1.png?width=320&crop=smart&auto=webp&s=be0b9c6fab8341bed800c9702e02fba99642e1f9', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/7q0xsc86um1f1.png?width=640&crop=smart&auto=webp&s=952df9feb0cce10d5227340e9e367e9fc6939abe', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/7q0xsc86um1f1.png?width=960&crop=smart&auto=webp&s=5b74ef89cadef0fb22541d05b7c1b08b65646d34', 'width': 960}], 'source': {'height': 2101, 'url': 'https://preview.redd.it/7q0xsc86um1f1.png?auto=webp&s=97ad54d20a4c8d0ae1cf5836ebcdf0360bb020db', 'width': 972}, 'variants': {}}]}
How can I improve this subtitle translator prompt?
5
Hello, I've been trying to use AI models on OpenRouter to translate subtitles. My script breaks the subtitle file into chunks and feeds them to the LLM one by one. After a bit of testing I found DeepSeek V3 0324 to yield the best results. However, it still takes multiple tries for it to translate properly: a lot of the time it does not translate the entire thing, or just starts saying random stuff. Before I start adjusting things like temperature, I'd really appreciate it if someone could look at my prompts to see if any improvements could be made.

    SYSTEM_PROMPT = (
        "You are a professional subtitle translator. "
        "Respond only with the content, translated into the target language. "
        "Do not add explanations, comments, or any extra text. "
        "Maintain subtitle numbering, timestamps, and formatting exactly as in the original .srt file. "
        "For sentences spanning multiple blocks: translate the complete sentence, then re-distribute it "
        "across the original blocks. Crucially, if the original sentence was split at a particular "
        "conceptual point, try to mirror this split point in the translated sentence when re-chunking, "
        "as long as it sounds natural in the target language. Timestamps and IDs must remain unchanged. "
        "Your response must begin directly with the first subtitle block's ID number. No pleasantries "
        "such as 'Here is the translation:' or 'Okay, here's the SRT:'. "
        "Your response should have the same number of subtitle blocks as the input."
    )

    USER_PROMPT_TEMPLATE = (
        "Region/Country of the text: {region}\n"
        "Translate the following .srt content into {target_language}, preserving the original meaning, "
        "timing, and structure. "
        "Ensure each subtitle block is readable and respects the original display durations. "
        "Output only a valid .srt file with the translated text.\n\n"
        "{srt_text}"
    )
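One cheap guard against the "didn't translate the entire thing" failure mode, independent of prompt wording, is to structurally validate each translated chunk before accepting it and retry on mismatch. An illustrative helper (not from the original script):

```python
# Accept a translated SRT chunk only if its block IDs and timestamps match
# the source chunk exactly; the subtitle text itself is ignored.
import re

BLOCK_RE = re.compile(
    r"(\d+)\s*\n(\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3})"
)

def srt_skeleton(srt_text: str) -> list[tuple[str, str]]:
    """Return the (id, timestamp) pairs, ignoring the subtitle text."""
    return BLOCK_RE.findall(srt_text)

def looks_complete(source_chunk: str, translated_chunk: str) -> bool:
    return srt_skeleton(source_chunk) == srt_skeleton(translated_chunk)

# usage: if not looks_complete(chunk, reply): retry (or lower temperature)
```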
2025-05-19T00:31:01
https://www.reddit.com/r/LocalLLaMA/comments/1kpyrrs/how_can_i_improve_this_subtitle_translator_prompt/
OneSteelTank
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpyrrs
false
null
t3_1kpyrrs
/r/LocalLLaMA/comments/1kpyrrs/how_can_i_improve_this_subtitle_translator_prompt/
false
false
self
5
null
Can I split my GPU VRAM?
1
[removed]
2025-05-19T00:53:16
https://www.reddit.com/r/LocalLLaMA/comments/1kpz6u4/can_i_split_my_gpu_vram/
Sufficient_Bit_3312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpz6u4
false
null
t3_1kpz6u4
/r/LocalLLaMA/comments/1kpz6u4/can_i_split_my_gpu_vram/
false
false
self
1
null
To think or to no_think with Qwen3
17
Lately I got a 5090 and have been experimenting with Qwen3-32B at Q5 (Unsloth). With flash attention and KV cache quantization at Q8, I am able to get up to a 32k token window while fully occupying the GPU memory (30-31 GB). It gives a generation speed of 50 t/s, which is very impressive. I am using that with Roo Code via Visual Studio Code, served from LM Studio (on Windows 11).

However, with thinking turned on, even though I followed the settings recommended by Alibaba, it almost never gave me good results. For a simple request like a small modification to a snake game, it can overthink all the way to filling up the 32k token window over a couple of minutes and do nothing useful at all. Compared to that, the no_think option works a lot better for me. While it may not one-shot a request, it is very fast, and with a couple of corrections it can usually get the job done.

How is your experience so far? Did I miss anything when trying the thinking version of Qwen3? One problem could be that with Cline/Roo Code I could not really set top_p/min_p/top_k, and they could be affecting my results.
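For clients that won't expose samplers, one workaround is to toggle the mode and set the samplers on the calling side, against the OpenAI-compatible server. A sketch assuming LM Studio's default port, with the sampler values commonly circulated for Qwen3's two modes (verify against the current model card):

```python
# Toggle Qwen3 thinking per request and apply mode-specific samplers.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

MODES = {
    "think":    {"suffix": "/think",    "temperature": 0.6, "top_p": 0.95},
    "no_think": {"suffix": "/no_think", "temperature": 0.7, "top_p": 0.8},
}

def ask(prompt: str, mode: str = "no_think") -> str:
    cfg = MODES[mode]
    resp = client.chat.completions.create(
        model="qwen3-32b",  # placeholder; whatever name your server exposes
        messages=[{"role": "user", "content": f"{prompt} {cfg['suffix']}"}],
        temperature=cfg["temperature"],
        top_p=cfg["top_p"],
    )
    return resp.choices[0].message.content

print(ask("Make the snake wrap around the screen edges.", mode="no_think"))
```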
2025-05-19T01:00:55
https://www.reddit.com/r/LocalLLaMA/comments/1kpzbvl/to_think_or_to_no_think_with_qwen3/
SandboChang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpzbvl
false
null
t3_1kpzbvl
/r/LocalLLaMA/comments/1kpzbvl/to_think_or_to_no_think_with_qwen3/
false
false
self
17
null
I built an open-source AI-powered library for web testing with Llama/Mistral
1
[removed]
2025-05-19T01:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1kpzdjx/i_built_an_opensource_aipowered_library_for_web/
p0deje
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpzdjx
false
null
t3_1kpzdjx
/r/LocalLLaMA/comments/1kpzdjx/i_built_an_opensource_aipowered_library_for_web/
false
false
self
1
null
Are there any models that I can run locally with only 2 GB of RAM?
0
Hello, this may be a very dumb question, but are there any LLMs that I can run locally on my potato PC? Or are they all RAM-hogging, with the only way to run them being an expensive cloud computing service?
2025-05-19T01:34:14
https://www.reddit.com/r/LocalLLaMA/comments/1kpzy8g/are_there_any_models_that_i_can_run_locally_with/
LaidBackDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kpzy8g
false
null
t3_1kpzy8g
/r/LocalLLaMA/comments/1kpzy8g/are_there_any_models_that_i_can_run_locally_with/
false
false
self
0
null
Is Qwen 2.5 Coder Instruct still the best option for local coding with 24GB VRAM?
47
Is Qwen 2.5 Coder Instruct still the best option for local coding with 24 GB VRAM, or has that changed since Qwen 3 came out? I haven't noticed a coding model for it, but it's possible other models have come and gone that I've missed that handle Python better than Qwen 2.5.
2025-05-19T01:40:12
https://www.reddit.com/r/LocalLLaMA/comments/1kq029v/is_qwen_25_coder_instruct_still_the_best_option/
MrWeirdoFace
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq029v
false
null
t3_1kq029v
/r/LocalLLaMA/comments/1kq029v/is_qwen_25_coder_instruct_still_the_best_option/
false
false
self
47
null
Where can I find this AO3 dataset for creative writing LLM
1
[removed]
2025-05-19T01:54:13
https://huggingface.co/datasets/nyuuzyou/archiveofourown
EastPanic647
huggingface.co
1970-01-01T00:00:00
0
{}
1kq0bai
false
null
t3_1kq0bai
/r/LocalLLaMA/comments/1kq0bai/where_can_i_find_this_ao3_dataset_for_creative/
false
false
https://b.thumbs.redditm…gGbmhsfTWlrQ.jpg
1
{'enabled': False, 'images': [{'id': '6pBxabgD7OhKKdtNMAWcuXn2kajwWkXmJ38aOm5Jx8M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=108&crop=smart&auto=webp&s=82c14b58a1ea066ceb2285ad939d1ae35ddce74c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=216&crop=smart&auto=webp&s=26b7b1ce7f359857b50bc01a7d8874f98701d613', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=320&crop=smart&auto=webp&s=7a282db4699d868a8acb7beffa8e24b52d841e8e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=640&crop=smart&auto=webp&s=9d85edf262a4331b7febc5f7398fc8b2d049d384', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=960&crop=smart&auto=webp&s=01962bbafcb705c2bbb0df39724de6ca76374f6c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=1080&crop=smart&auto=webp&s=d90fd2a254985357ef18c5bc3a2d8a6b52658764', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?auto=webp&s=305c3fe0901971daf47b5212a0d183e60f678073', 'width': 1200}, 'variants': {}}]}
Where can I find this AO3 dataset for creative writing LLM
1
[removed]
2025-05-19T01:55:54
https://huggingface.co/datasets/nyuuzyou/archiveofourown
EastPanic647
huggingface.co
1970-01-01T00:00:00
0
{}
1kq0cfx
false
null
t3_1kq0cfx
/r/LocalLLaMA/comments/1kq0cfx/where_can_i_find_this_ao3_dataset_for_creative/
false
false
https://b.thumbs.redditm…gGbmhsfTWlrQ.jpg
1
{'enabled': False, 'images': [{'id': '6pBxabgD7OhKKdtNMAWcuXn2kajwWkXmJ38aOm5Jx8M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=108&crop=smart&auto=webp&s=82c14b58a1ea066ceb2285ad939d1ae35ddce74c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=216&crop=smart&auto=webp&s=26b7b1ce7f359857b50bc01a7d8874f98701d613', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=320&crop=smart&auto=webp&s=7a282db4699d868a8acb7beffa8e24b52d841e8e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=640&crop=smart&auto=webp&s=9d85edf262a4331b7febc5f7398fc8b2d049d384', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=960&crop=smart&auto=webp&s=01962bbafcb705c2bbb0df39724de6ca76374f6c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=1080&crop=smart&auto=webp&s=d90fd2a254985357ef18c5bc3a2d8a6b52658764', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?auto=webp&s=305c3fe0901971daf47b5212a0d183e60f678073', 'width': 1200}, 'variants': {}}]}
The first author of the ParScale paper discusses how they turned ParScale from an idea into reality
74
Because many friends have given feedback that Zhihu cannot be accessed without registration, I am simply using a translation plugin to translate posts from Zhihu into English and taking screenshots. The original author is keytoyze, who holds all rights to the article. The original address is: [www.zhihu.com/question/1907422978985169131/answer/1907565157103694086](http://www.zhihu.com/question/1907422978985169131/answer/1907565157103694086)

https://preview.redd.it/coxrzxd6ln1f1.png?width=869&format=png&auto=webp&s=55637a7888ae9396e88a09ea0ed134bd153e7dcb

https://preview.redd.it/hudkuuf7ln1f1.png?width=862&format=png&auto=webp&s=9c9af9f77370961a07bdc6876c6be9e84c3ff2de

https://preview.redd.it/xebnsy18ln1f1.png?width=877&format=png&auto=webp&s=b8c78a0d42bead0e4838d2f6f24da84d5a706b3a

https://preview.redd.it/3yuzdfp8ln1f1.png?width=866&format=png&auto=webp&s=a03790528375bd05619f79e335c08cafa9659595

https://preview.redd.it/z07wi6f9ln1f1.png?width=855&format=png&auto=webp&s=230c6c9bba3ae8d72838c06d5ae6c0f7fdab16d3

https://preview.redd.it/bs6cecy9ln1f1.png?width=856&format=png&auto=webp&s=b948927ff6a3edeea98ddc37377eac53e5a968fd
2025-05-19T02:55:28
https://www.reddit.com/r/LocalLLaMA/comments/1kq1g7s/the_first_author_of_the_parscale_paper_discusses/
Dr_Karminski
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq1g7s
false
null
t3_1kq1g7s
/r/LocalLLaMA/comments/1kq1g7s/the_first_author_of_the_parscale_paper_discusses/
false
false
https://b.thumbs.redditm…U6xEN99bLNKc.jpg
74
null
Qwen Web Dev just got even better! One click to deploy!
1
[removed]
2025-05-19T03:15:12
https://www.reddit.com/r/LocalLLaMA/comments/1kq1ssl/qwen_web_dev_just_got_even_better_one_click_to/
No_Banana_5663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq1ssl
false
null
t3_1kq1ssl
/r/LocalLLaMA/comments/1kq1ssl/qwen_web_dev_just_got_even_better_one_click_to/
false
false
self
1
null
Who wants to buy to run a local LLM? Please contact me.
0
https://preview.redd.it/…are selling this
2025-05-19T03:39:08
https://www.reddit.com/r/LocalLLaMA/comments/1kq27u1/who_wants_to_buy_to_run_a_local_llm_please/
Reasonable-Climate66
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq27u1
false
null
t3_1kq27u1
/r/LocalLLaMA/comments/1kq27u1/who_wants_to_buy_to_run_a_local_llm_please/
false
false
https://b.thumbs.redditm…NqE_I_Gu32bY.jpg
0
null
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases
1
[removed]
2025-05-19T03:57:43
https://www.reddit.com/r/LocalLLaMA/comments/1kq2jf2/challenges_in_finetuning_llms_on_large/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq2jf2
false
null
t3_1kq2jf2
/r/LocalLLaMA/comments/1kq2jf2/challenges_in_finetuning_llms_on_large/
false
false
self
1
null
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases
1
[removed]
2025-05-19T04:00:00
https://www.reddit.com/r/LocalLLaMA/comments/1kq2kuo/challenges_in_finetuning_llms_on_large/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq2kuo
false
null
t3_1kq2kuo
/r/LocalLLaMA/comments/1kq2kuo/challenges_in_finetuning_llms_on_large/
false
false
self
1
null
SAGA: Semantic And Graph-enhanced Authoring
1
[removed]
2025-05-19T04:16:38
https://github.com/Lanerra/saga
MariusNocturnum
github.com
1970-01-01T00:00:00
0
{}
1kq2vdl
false
null
t3_1kq2vdl
/r/LocalLLaMA/comments/1kq2vdl/saga_semantic_and_graphenhanced_authoring/
false
false
https://b.thumbs.redditm…P-qiYtJ4AXKg.jpg
1
{'enabled': False, 'images': [{'id': '0V29OFL9XjKhu7pB82_qXO3IeOsVcMk5AhsL2AK4HqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=108&crop=smart&auto=webp&s=04407b3d983c12adada102ccea1df4032cfb5857', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=216&crop=smart&auto=webp&s=cebef6ba1a59943887b3457aa1ecdd95b94932d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=320&crop=smart&auto=webp&s=97f371b6ffedd11f8f887f040be0b2b0f2747be5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=640&crop=smart&auto=webp&s=41ac10dd58a4bd36016e1c240c75953067202aa3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=960&crop=smart&auto=webp&s=77ebface5274c7e58da46d0adc61f7409dbccf08', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=1080&crop=smart&auto=webp&s=ba3656550cca30a4b762d1a07bc3334f501c23d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?auto=webp&s=72f0f9d513678d40b39004aba3e787b53c5c2a6a', 'width': 1200}, 'variants': {}}]}
I made a tool to efficiently find optimal parameters
49
TLDR: https://github.com/kooshi/TaguchiBench

Taguchi lets you change multiple variables at once to test a bunch of stuff quickly, and I made a tool to do it for AI and other stuff.

---

I've been waking up inspired often recently; with the multiplying effect of Claude and Gemini, I can explore ideas as fast as I come up with them. One seemed particularly compelling, partially because I've been looking for an excuse to use Orthogonal Arrays ever since I saw [NightHawkInLight's video](https://youtu.be/5oULEuOoRd0) about them.

I wanted a way to test local LLM sampler parameters to see which were really the best, and as it takes so long to run benchmarks, Orthogonal Arrays popped into my head as a way to test them efficiently. I had no idea how much statistical math went into analyzing these things, but I just kept learning and coding. I'm sure it's nowhere near perfect, but it seems to be working pretty well, and I mostly cleaned things up enough to allow the scrutiny of the public eye.

At some point I realized it could be generalized to run _any_ command line tool and optimize those arguments as well, so I ended up completely refactoring it to break it into two components.

So here's what I have: https://github.com/kooshi/TaguchiBench

Two tools:

- LiveBenchRunner: just sets up and executes a LiveBench run with llama-server as the backend, which is useful by itself or with:
- TaguchiBench.Engine, which:
  - takes a set of parameters and values
  - attempts to fit them into a Taguchi (Orthogonal) array (harder than you'd think)
  - runs the tool an efficient number of times with the different values for the parameters
  - does a bunch of statistical analysis on the scores returned by the tool
  - makes some nice reports out of them

It can also recover from an interrupted experiment, which is nice considering how long runs can take. (In the future I may take advantage of LiveBench's recovery ability as well.)

I haven't actually found any useful optimization data yet, as I've just been focused on development, but now that it's pretty solid, I'm curious to validate [Qwen3's recent recommendation to enable presence penalty](https://www.reddit.com/r/LocalLLaMA/comments/1kkuq7m/qwen_suggests_adding_presence_penalty_when_using/).

What I'm really hoping, though, is that someone else finds a use for this in their own work, since it can help optimize any process you can run from a command line. I looked around, and I didn't see any open source tool like it. I did find https://pypi.org/project/taguchi/, and shoutout to another NightHawkInLight fan, but it doesn't appear to do any analysis of returned values, and is generally pretty simple. Granted, mine's probably massively *over*engineered, but so it goes.

Anyway, I hope you all like it, and have some uses for it, AI related or not!
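The core trick is easy to see in miniature. A toy example (not TaguchiBench code): an L4 orthogonal array covers three two-level factors in 4 runs instead of the full 2^3 = 8, and because every level of each factor appears equally often against the others, simple mean differences estimate each factor's main effect:

```python
# L4 orthogonal array: 3 two-level factors, 4 runs. Each column is balanced,
# so (mean score at level 1) - (mean score at level 0) isolates a main effect.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effects(scores: list[float]) -> list[float]:
    """scores[i] is the benchmark result for run L4[i]."""
    effects = []
    for f in range(3):
        hi = sum(s for row, s in zip(L4, scores) if row[f] == 1) / 2
        lo = sum(s for row, s in zip(L4, scores) if row[f] == 0) / 2
        effects.append(hi - lo)
    return effects

# e.g. factors = (temperature 0.6 vs 0.8, top_p 0.9 vs 0.95, presence penalty off/on)
scores = [61.2, 63.5, 60.1, 64.0]  # made-up benchmark scores
print(main_effects(scores))        # positive value => level 1 helps that factor
```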
2025-05-19T04:19:00
https://www.reddit.com/r/LocalLLaMA/comments/1kq2wr0/i_made_a_tool_to_efficiently_find_optimal/
Kooshi_Govno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq2wr0
false
null
t3_1kq2wr0
/r/LocalLLaMA/comments/1kq2wr0/i_made_a_tool_to_efficiently_find_optimal/
false
false
self
49
{'enabled': False, 'images': [{'id': 'pqOdkXftXSgOnRklCwdQYtWnW6Aq7pS-skqM5PzrA-0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?width=108&crop=smart&auto=webp&s=1f76682be88866ec2a0714eb6ccb052f89af2058', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?width=216&crop=smart&auto=webp&s=ef7d18f876342d07f4110838f84dd052c2262dfb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?width=320&crop=smart&auto=webp&s=d5df073cf3d8d03010ad13ac8aec8af278bc13f9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?width=640&crop=smart&auto=webp&s=dccd6f94dff9c82fd287c43b1228a6ba935b15fb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?width=960&crop=smart&auto=webp&s=0a7c0ddbffdda6ba138635216f94f2b184c2994e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?width=1080&crop=smart&auto=webp&s=05e855891d54f71246579fdafd8451a875ca971c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?auto=webp&s=355f8f5e52d2a48ee4783e52db91215b71a1c981', 'width': 1200}, 'variants': {}}]}
SAGA: Semantic And Graph-enhanced Authoring
1
[removed]
2025-05-19T04:19:16
https://www.reddit.com/r/LocalLLaMA/comments/1kq2wx6/saga_semantic_and_graphenhanced_authoring/
MariusNocturnum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq2wx6
false
null
t3_1kq2wx6
/r/LocalLLaMA/comments/1kq2wx6/saga_semantic_and_graphenhanced_authoring/
false
false
self
1
{'enabled': False, 'images': [{'id': '0V29OFL9XjKhu7pB82_qXO3IeOsVcMk5AhsL2AK4HqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=108&crop=smart&auto=webp&s=04407b3d983c12adada102ccea1df4032cfb5857', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=216&crop=smart&auto=webp&s=cebef6ba1a59943887b3457aa1ecdd95b94932d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=320&crop=smart&auto=webp&s=97f371b6ffedd11f8f887f040be0b2b0f2747be5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=640&crop=smart&auto=webp&s=41ac10dd58a4bd36016e1c240c75953067202aa3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=960&crop=smart&auto=webp&s=77ebface5274c7e58da46d0adc61f7409dbccf08', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=1080&crop=smart&auto=webp&s=ba3656550cca30a4b762d1a07bc3334f501c23d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?auto=webp&s=72f0f9d513678d40b39004aba3e787b53c5c2a6a', 'width': 1200}, 'variants': {}}]}
SAGA - Semantic And Graph-enhanced Authoring
21
I'd like to share a little project I've been actively working on for the last couple of weeks called SAGA. It is still very much under development, so I'd love to know your thoughts about it!

SAGA (Semantic And Graph-enhanced Authoring) is a sophisticated AI-powered creative writing system designed to generate full-length novels with consistent characters, coherent world-building, and compelling narratives. Unlike simple prompt-based writing tools, SAGA employs a multi-stage pipeline that mirrors professional writing processes: planning, drafting, evaluation, and revision.

🌟 Key Features

- **Multi-Stage Writing Pipeline**: Separate planning, drafting, evaluation, and revision phases with specialized LLM prompts
- **Hybrid Knowledge Management**: Combines JSON-based character/world profiles with a knowledge graph for factual consistency
- **Intelligent Context Generation**: Uses semantic similarity and reliable knowledge facts to provide relevant context for each chapter
- **Comprehensive Quality Control**: Evaluates consistency, plot alignment, thematic coherence, and narrative depth
- **Agentic Planning**: Detailed scene-by-scene planning with focus elements for narrative depth
- **Provisional Data Tracking**: Marks data quality based on source reliability to maintain canon integrity
- **Adaptive Revision**: Targeted revision strategies based on specific evaluation feedback

The system will:

- Generate or load a plot outline
- Create initial world-building
- Pre-populate the knowledge graph
- Begin writing chapters iteratively
- Resume from the last chapter it left off on

[https://github.com/Lanerra/saga](https://github.com/Lanerra/saga)
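The pipeline's control flow is the interesting part; here is an illustrative miniature of the plan/draft/evaluate/revise loop with provisional tracking (the function and class names are invented for illustration, not SAGA's actual API):

```python
# Illustrative shape of the chapter loop only -- names/stubs are invented,
# not SAGA's API. Plan, draft, evaluate, revise up to a cap, then ingest the
# result into the knowledge store, flagged provisional if checks never passed.
from dataclasses import dataclass, field

@dataclass
class Report:
    passes: bool
    feedback: str = ""

@dataclass
class KnowledgeStore:
    facts: list = field(default_factory=list)
    def ingest(self, text: str, provisional: bool) -> None:
        self.facts.append((text, provisional))

def plan_scenes(outline: str, n: int) -> str: return f"plan for ch.{n} of {outline}"
def draft_chapter(plan: str) -> str: return f"draft from [{plan}]"
def evaluate(draft: str) -> Report: return Report("revised" in draft, "add depth")
def revise(draft: str, feedback: str) -> str: return f"{draft} (revised: {feedback})"

def write_chapter(n: int, outline: str, kg: KnowledgeStore, max_revisions: int = 2) -> str:
    draft = draft_chapter(plan_scenes(outline, n))
    report = evaluate(draft)
    for _ in range(max_revisions):
        if report.passes:
            break
        draft = revise(draft, report.feedback)
        report = evaluate(draft)
    kg.ingest(draft, provisional=not report.passes)
    return draft

kg = KnowledgeStore()
print(write_chapter(1, "a heist novel", kg))
```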
2025-05-19T04:23:40
https://www.reddit.com/r/LocalLLaMA/comments/1kq2zgg/saga_semantic_and_graphenhanced_authoring/
MariusNocturnum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq2zgg
false
null
t3_1kq2zgg
/r/LocalLLaMA/comments/1kq2zgg/saga_semantic_and_graphenhanced_authoring/
false
false
self
21
{'enabled': False, 'images': [{'id': '0V29OFL9XjKhu7pB82_qXO3IeOsVcMk5AhsL2AK4HqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=108&crop=smart&auto=webp&s=04407b3d983c12adada102ccea1df4032cfb5857', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=216&crop=smart&auto=webp&s=cebef6ba1a59943887b3457aa1ecdd95b94932d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=320&crop=smart&auto=webp&s=97f371b6ffedd11f8f887f040be0b2b0f2747be5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=640&crop=smart&auto=webp&s=41ac10dd58a4bd36016e1c240c75953067202aa3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=960&crop=smart&auto=webp&s=77ebface5274c7e58da46d0adc61f7409dbccf08', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=1080&crop=smart&auto=webp&s=ba3656550cca30a4b762d1a07bc3334f501c23d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?auto=webp&s=72f0f9d513678d40b39004aba3e787b53c5c2a6a', 'width': 1200}, 'variants': {}}]}
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases
1
[removed]
2025-05-19T04:35:30
https://www.reddit.com/r/LocalLLaMA/comments/1kq36b5/challenges_in_finetuning_llms_on_large/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq36b5
false
null
t3_1kq36b5
/r/LocalLLaMA/comments/1kq36b5/challenges_in_finetuning_llms_on_large/
false
false
self
1
null
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases
1
[removed]
2025-05-19T04:37:59
https://www.reddit.com/r/LocalLLaMA/comments/1kq37pu/challenges_in_finetuning_llms_on_large/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq37pu
false
null
t3_1kq37pu
/r/LocalLLaMA/comments/1kq37pu/challenges_in_finetuning_llms_on_large/
false
false
self
1
null
I created a program that creates personalized playlists from a large playlist using an LLM (hope it helps you organize your chaotic playlist)
1
[removed]
2025-05-19T04:51:22
https://www.reddit.com/r/LocalLLaMA/comments/1kq3fhm/i_created_a_program_that_create_personalized/
MoodOdd9657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3fhm
false
null
t3_1kq3fhm
/r/LocalLLaMA/comments/1kq3fhm/i_created_a_program_that_create_personalized/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OfgZLP4gv_k8jMG04ZbCW0i7qucIwr7BuybdnlYYlaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=108&crop=smart&auto=webp&s=ae90ffa3a9a53e2d7d87797b9966ee3578b0c628', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=216&crop=smart&auto=webp&s=8541be3ad08d5cb8ca7ee95b724696c6c961caf3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=320&crop=smart&auto=webp&s=291ad376e88b0d593fce5949d539f5641217a037', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=640&crop=smart&auto=webp&s=d7243f6fbc796d4fe0c8eb80e617ad4166a3d02c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=960&crop=smart&auto=webp&s=8a586b3d4fee6e6816efa4f07149101cbe53e5d4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=1080&crop=smart&auto=webp&s=3fdf115fd45ca95f49dd2e8a3bb561d8520449fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?auto=webp&s=def1ef20cdbf75ee2672181cc9642d1c2cc05e86', 'width': 1200}, 'variants': {}}]}
My post about fine-tuning has been removed by the filter.
1
[removed]
2025-05-19T04:52:54
https://www.reddit.com/r/LocalLLaMA/comments/1kq3gcm/my_post_about_finetuning_has_been_removed_by_the/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3gcm
false
null
t3_1kq3gcm
/r/LocalLLaMA/comments/1kq3gcm/my_post_about_finetuning_has_been_removed_by_the/
false
false
self
1
null
I created a program that creates personalized playlists from a large playlist using an LLM
1
[removed]
2025-05-19T04:54:06
https://www.reddit.com/r/LocalLLaMA/comments/1kq3h1b/i_created_a_program_that_create_personalized/
MoodOdd9657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3h1b
false
null
t3_1kq3h1b
/r/LocalLLaMA/comments/1kq3h1b/i_created_a_program_that_create_personalized/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OfgZLP4gv_k8jMG04ZbCW0i7qucIwr7BuybdnlYYlaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=108&crop=smart&auto=webp&s=ae90ffa3a9a53e2d7d87797b9966ee3578b0c628', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=216&crop=smart&auto=webp&s=8541be3ad08d5cb8ca7ee95b724696c6c961caf3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=320&crop=smart&auto=webp&s=291ad376e88b0d593fce5949d539f5641217a037', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=640&crop=smart&auto=webp&s=d7243f6fbc796d4fe0c8eb80e617ad4166a3d02c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=960&crop=smart&auto=webp&s=8a586b3d4fee6e6816efa4f07149101cbe53e5d4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=1080&crop=smart&auto=webp&s=3fdf115fd45ca95f49dd2e8a3bb561d8520449fe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?auto=webp&s=def1ef20cdbf75ee2672181cc9642d1c2cc05e86', 'width': 1200}, 'variants': {}}]}
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases
1
[removed]
2025-05-19T05:04:41
https://www.reddit.com/r/LocalLLaMA/comments/1kq3n5d/challenges_in_finetuning_llms_on_large/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3n5d
false
null
t3_1kq3n5d
/r/LocalLLaMA/comments/1kq3n5d/challenges_in_finetuning_llms_on_large/
false
false
self
1
null
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases
1
[removed]
2025-05-19T05:06:08
https://www.reddit.com/r/LocalLLaMA/comments/1kq3nwv/challenges_in_finetuning_llms_on_large/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3nwv
false
null
t3_1kq3nwv
/r/LocalLLaMA/comments/1kq3nwv/challenges_in_finetuning_llms_on_large/
false
false
self
1
null
Challenges in Fine-Tuning LLMs on Large Codebases
1
[removed]
2025-05-19T05:09:19
https://www.reddit.com/r/LocalLLaMA/comments/1kq3pm9/challenges_in_finetuning_llms_on_large_codebases/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3pm9
false
null
t3_1kq3pm9
/r/LocalLLaMA/comments/1kq3pm9/challenges_in_finetuning_llms_on_large_codebases/
false
false
self
1
null
[OSS] Containerized llama.cpp + Ollama backend runner for RunPod serverless (easy LLM deployment)
6
I'm sharing an open-source project I built called `runpod-llm` - a containerized setup for running LLMs on RunPod, with minimal config and full support for both llama.cpp and Ollama backends.

# ⚙️ What It Does

* Lets you spin up an LLM container on RunPod (e.g., serverless GPU) with a few env vars
* Supports both `llama.cpp` (GGUF models) and `Ollama` (for models like Mistral, LLaMA 3, etc.)
* Handles downloading, mounting, and exposing a chat-completion-style API out of the box
* Designed to be flexible for devs building custom endpoints or chaining to other infra

# ✅ Features

* Backend toggle via `LLM_BACKEND` env var (`llama.cpp` or `ollama`)
* GPU & CPU config for `llama.cpp` (`GPU_LAYERS`, `CPU_THREADS`, etc.)
* Pulls models dynamically via URL
* Can run as a RunPod serverless or pod endpoint

# 📦 Repo

**GitHub:** [https://github.com/zeeb0tt/runpod-llm](https://github.com/zeeb0tt/runpod-llm)
**Docker:** [zeeb0t/runpod-llm](https://hub.docker.com/r/zeeb0t/runpod-llm)

# 🧠 Example Use Case

I've used this with Qwen3-30B-A3B (Q8_0) in RunPod serverless, exposing a `/v1/chat/completions`-style interface compatible with OpenAI clients. You can try that build out right away, as I have uploaded it to my Docker repository. If there are specific models and quants you'd like uploaded and you can't figure out how, let me know and I'll build one for you. Happy to answer questions or help people get it wired up; PRs welcome too.
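To show what "compatible with OpenAI clients" means in practice, here is a minimal client sketch. The base URL, API key, and model name are placeholders you would swap for your own deployment; the exact URL routing depends on how you expose the container on RunPod.

```python
# Minimal client sketch for the OpenAI-compatible endpoint the container
# exposes. Base URL, API key, and model name are placeholders — substitute
# your own RunPod endpoint ID and whichever model you deployed.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.runpod.ai/v2/<your-endpoint-id>/openai/v1",  # placeholder
    api_key="<your-runpod-api-key>",  # placeholder
)

resp = client.chat.completions.create(
    model="qwen3-30b-a3b",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```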
2025-05-19T05:19:19
https://www.reddit.com/r/LocalLLaMA/comments/1kq3v0u/oss_containerized_llamacpp_ollama_backend_runner/
zeeb0t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3v0u
false
null
t3_1kq3v0u
/r/LocalLLaMA/comments/1kq3v0u/oss_containerized_llamacpp_ollama_backend_runner/
false
false
self
6
{'enabled': False, 'images': [{'id': 'Cwyggr1WckApp_4o7_KbiFTNs608MlRMpv0_7qmC4DA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?width=108&crop=smart&auto=webp&s=2d9f764a1f630a1418fd6620bb5937a95ca75c9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?width=216&crop=smart&auto=webp&s=3d133470dc003643e01e1660bab47c7ec83c1da9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?width=320&crop=smart&auto=webp&s=6a41043d768057a3906fe8995dc93bb8bd27a3a0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?width=640&crop=smart&auto=webp&s=7f73ced2287ab44f972071e6def2321a224334e1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?width=960&crop=smart&auto=webp&s=196c2b8803fec4337db54812790c2b56aa83066d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?width=1080&crop=smart&auto=webp&s=2a1f0413d47fbf83204084b4714073dfcccb86d9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?auto=webp&s=4695b5c356676086d30351bada675a8fdafaf4d5', 'width': 1200}, 'variants': {}}]}
Is it possible to use Qwen2.5-VL's vision encoder to generate pure image embeddings like CLIP or ViT?
1
[removed]
2025-05-19T05:28:16
https://www.reddit.com/r/LocalLLaMA/comments/1kq3zve/is_it_possible_to_use_qwen25vls_vision_encoder_to/
MysteriousAlps608
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq3zve
false
null
t3_1kq3zve
/r/LocalLLaMA/comments/1kq3zve/is_it_possible_to_use_qwen25vls_vision_encoder_to/
false
false
self
1
null
Is a Q&A dataset absolutely necessary when fine-tuning an LLM?
1
[removed]
2025-05-19T05:53:50
https://www.reddit.com/r/LocalLLaMA/comments/1kq4dlb/is_a_qa_dataset_absolutely_necessary_when/
Cyp9715
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq4dlb
false
null
t3_1kq4dlb
/r/LocalLLaMA/comments/1kq4dlb/is_a_qa_dataset_absolutely_necessary_when/
false
false
self
1
null
NVIDIA says DGX Spark releasing in July
61
DGX Spark should be available in July. The 128 GB unified memory amount is nice, but there have been discussions about whether the bandwidth will be too slow to be practical. It will be interesting to see what independent benchmarks show; I don't think it's had any outsider reviews yet. I couldn't find a price yet, and that of course will be quite important too. [https://nvidianews.nvidia.com/news/nvidia-launches-ai-first-dgx-personal-computing-systems-with-global-computer-makers](https://nvidianews.nvidia.com/news/nvidia-launches-ai-first-dgx-personal-computing-systems-with-global-computer-makers)

| Spec | Value |
|---|---|
| System Memory | 128 GB LPDDR5x, unified system memory |
| Memory Bandwidth | 273 GB/s |
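On the bandwidth question: single-stream token generation is usually memory-bandwidth-bound, so a rough upper bound on decode speed is bandwidth divided by model size. A back-of-envelope sketch (model sizes are illustrative assumptions; real throughput will be lower):

```python
# Back-of-envelope decode speed estimate for a memory-bandwidth-bound LLM.
# Decoding one token streams roughly all model weights once, so
# tokens/s ≈ bandwidth / weights size. Real-world numbers come in lower.

BANDWIDTH_GBS = 273  # DGX Spark, GB/s

def est_tokens_per_sec(params_b: float, bytes_per_param: float) -> float:
    model_gb = params_b * bytes_per_param  # weights size in GB
    return BANDWIDTH_GBS / model_gb

for name, params_b, bpp in [
    ("8B @ Q4 (~4.5 bits/param)", 8, 0.56),
    ("70B @ Q4", 70, 0.56),
    ("70B @ FP8", 70, 1.0),
]:
    print(f"{name}: ~{est_tokens_per_sec(params_b, bpp):.1f} tok/s upper bound")
```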
2025-05-19T05:56:22
https://www.reddit.com/r/LocalLLaMA/comments/1kq4ey4/nvidia_says_dgx_spark_releasing_in_july/
Aplakka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq4ey4
false
null
t3_1kq4ey4
/r/LocalLLaMA/comments/1kq4ey4/nvidia_says_dgx_spark_releasing_in_july/
false
false
self
61
{'enabled': False, 'images': [{'id': 'Kkp9zaJa0nIObH7G6Qz8lvwwNpFIqHm-PW5o6mlo3Dk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?width=108&crop=smart&auto=webp&s=0cdc4a0fc0bb5ef1e75c3757bef9404477d1d883', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?width=216&crop=smart&auto=webp&s=9c96c2b7b53a4453ee5e09c7ad5de3f722548d98', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?width=320&crop=smart&auto=webp&s=f13b58573a95d3ca1fcbeba9e56611f18c2d76f3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?width=640&crop=smart&auto=webp&s=059d1da04d74d9745f051dcd940a017f47d52cb9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?width=960&crop=smart&auto=webp&s=53c3e9481de03bb6c7a21de56e7027bbf46ae9f8', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?width=1080&crop=smart&auto=webp&s=2d7e80bb06a3bf219c6262ae628fffddb80d73f7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?auto=webp&s=408fcf71c09b528c8a4f0e82f8de6178c541ab94', 'width': 1920}, 'variants': {}}]}
OuteTTS v1.0 now supported by chatllm.cpp
28
After Orpheus-TTS was implemented in [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), now here comes [OuteTTS v1.0](https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B).
2025-05-19T06:27:47
https://v.redd.it/cpcocy4jmo1f1
foldl-li
v.redd.it
1970-01-01T00:00:00
0
{}
1kq4vrv
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cpcocy4jmo1f1/DASHPlaylist.mpd?a=1750228079%2CYTU1MzYxMjJiM2M1NzI1ODI5M2U0YjAwYTcwNTJiY2Y4YzU3NTcxODdkNTc5OWE4ZjdiZDVmYWY3ODE4ZWYwYQ%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/cpcocy4jmo1f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/cpcocy4jmo1f1/HLSPlaylist.m3u8?a=1750228079%2CYzJjZDYwOTk0ZGYxZDI4NzEzMThmY2U3NjEyN2NjN2I5MWRjNzUzYmNiNDM5N2I0MDg5MzViZDVlYjRmODA4ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cpcocy4jmo1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kq4vrv
/r/LocalLLaMA/comments/1kq4vrv/outetts_v10_now_supported_by_chatllmcpp/
false
false
https://external-preview…954bfd92ee3a34ef
28
{'enabled': False, 'images': [{'id': 'bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?width=108&crop=smart&format=pjpg&auto=webp&s=7c65f8f44d36dfc70d800a53650d8478f97bfb17', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?width=216&crop=smart&format=pjpg&auto=webp&s=678ae1f2aa9e0ece5cfa741c3607144653a6f032', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?width=320&crop=smart&format=pjpg&auto=webp&s=db73f0bf1bbd3cb97cd42b7c479f7a994bceea6e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?width=640&crop=smart&format=pjpg&auto=webp&s=e0f51586c44d8b9d2f55ac31389ae99f540bf21e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?width=960&crop=smart&format=pjpg&auto=webp&s=65b67ba95475104da91a854a0d42c2f766cbe797', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?width=1080&crop=smart&format=pjpg&auto=webp&s=ad041b7b9225be3a5dcb5b492c37c2a73c571010', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?format=pjpg&auto=webp&s=8796fab98799b053588daf1f8fba9e47069044a9', 'width': 1920}, 'variants': {}}]}
#Lab Leak vs. Natural Origin: In 2020, a pandemic changed the world, but the origin of the virus remains a mystery to this day. Recently, Nicolas Hulscher, an epidemiologist and administrator at the McCullough Foundation in the US, published a paper, "Study Finds Multiple SARS-CoV-2 (Coronavirus) Lab Leaks at UNC Chapel Hill's BSL-3 Facility," arguing that "substantial evidence indicates the SARS-CoV-2 virus was man-made." The paper states that between June 2020 and January 2021, UNC sequenced 7 SARS-CoV-2 "laboratory-acquired infections," all suspected to originate from synthetic-virus research conducted at the university's top coronavirus laboratories, including the Baric lab. More critically, viruses synthesized in UNC's labs carry a unique "genetic watermark": T15102
1
2025-05-19T06:31:25
https://i.redd.it/or3t8khrno1f1.jpeg
NorthAgency4433
i.redd.it
1970-01-01T00:00:00
0
{}
1kq4xot
false
null
t3_1kq4xot
/r/LocalLLaMA/comments/1kq4xot/lab_leak_vsnatural_origin/
false
false
default
1
{'enabled': True, 'images': [{'id': 'or3t8khrno1f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=108&crop=smart&auto=webp&s=a41152c754b30dc24e7dedfff98da7c01748c04f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=216&crop=smart&auto=webp&s=63e3ad24c7b0b7f5dce9062cb373fcaf1cd963d5', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=320&crop=smart&auto=webp&s=e3ccd38edac14ea2a843af782beed1d064a9fa3a', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=640&crop=smart&auto=webp&s=8529d0e0cfd66e1e1e055bef951f5d5f1af31c60', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=960&crop=smart&auto=webp&s=5f7abc4d5b7941ff64793c79767a8f512238d335', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=1080&crop=smart&auto=webp&s=522d4f6663056121533b58e9ad0989302f1de777', 'width': 1080}], 'source': {'height': 2844, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?auto=webp&s=c88d0adf02f8ddf1bd3b06f74d0171d4d1fdef09', 'width': 2844}, 'variants': {}}]}
Clara — A fully offline, Modular AI workspace (LLMs + Agents + Automation + Image Gen)
590
So I’ve been working on this for the past few months and finally feel good enough to share it.

It’s called **Clara** — and the idea is simple:

🧩 **Imagine building your own workspace for AI** — with local tools, agents, automations, and image generation.

Note: I created this because I hated jumping between chat UIs for everything. I want everything in one place, and it's completely open-source under the MIT License.

Clara lets you do exactly that — fully offline, fully modular.

You can:

* 🧱 Drop everything as widgets on a dashboard — rearrange, resize, and make it yours with all the stuff mentioned below
* 💬 Chat with local LLMs with RAG, images, documents, and code execution like ChatGPT - supports both Ollama and any OpenAI-like API
* ⚙️ Create agents with built-in logic & memory
* 🔁 Run automations via native n8n integration (1000+ free templates in the ClaraVerse store)
* 🎨 Generate images locally using Stable Diffusion (ComfyUI) - native build without ComfyUI coming soon

Clara has an app for everything - Mac, Windows, Linux.

It’s like… instead of opening a bunch of apps, you build your own AI control room. And it all runs on your machine. No cloud. No API keys. No bs.

Would love to hear what y’all think — ideas, bugs, roast me if needed 😄 If you're into local-first tooling, this might actually be useful. Peace ✌️

**Note:** I built Clara because honestly... I was sick of bouncing between 10 different chat UIs just to get basic stuff done. I wanted one place — where I could run LLMs, trigger workflows, write code, generate images — without switching tabs or tools. So I made it. And yeah — it’s fully open-source, MIT licensed, no gatekeeping. Use it, break it, fork it, whatever you want.
2025-05-19T06:53:01
https://i.redd.it/u6niruxjqo1f1.png
BadBoy17Ge
i.redd.it
1970-01-01T00:00:00
0
{}
1kq590b
false
null
t3_1kq590b
/r/LocalLLaMA/comments/1kq590b/clara_a_fully_offline_modular_ai_workspace_llms/
false
false
https://b.thumbs.redditm…_TIV9OBwwTxk.jpg
590
{'enabled': True, 'images': [{'id': 'mpzTqJBqnQqcN7JUUt3vhpTSsrB2Q1Yt81APefO73sg', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?width=108&crop=smart&auto=webp&s=eccc34dfe2ed11aeac1431f9d2435d5623b3c5c0', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?width=216&crop=smart&auto=webp&s=a3db6c114a27318d15bf58067b257180d4d1c007', 'width': 216}, {'height': 199, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?width=320&crop=smart&auto=webp&s=1eb88d57c6a7db761a96586e1865c0c0f89bc5f2', 'width': 320}, {'height': 398, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?width=640&crop=smart&auto=webp&s=92c4cac8e33b1fe68fdc0af3f66d45dcdcf1c55a', 'width': 640}, {'height': 597, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?width=960&crop=smart&auto=webp&s=5156b5f154bb3c911708a52eaac0fb933d617215', 'width': 960}, {'height': 672, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?width=1080&crop=smart&auto=webp&s=b552548e7fbe9ee984d1181baf04c57303a61d7e', 'width': 1080}], 'source': {'height': 2228, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?auto=webp&s=a3cb727ed6acaf221fb7180bdd1816840d903538', 'width': 3578}, 'variants': {}}]}
Looking for GPU recommendations for local LLM (on Linux)
1
[removed]
2025-05-19T07:10:06
https://www.reddit.com/r/LocalLLaMA/comments/1kq5hxv/looking_for_gpu_recommendations_for_local_llm_on/
Southern-Shift-736
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq5hxv
false
null
t3_1kq5hxv
/r/LocalLLaMA/comments/1kq5hxv/looking_for_gpu_recommendations_for_local_llm_on/
false
false
self
1
null
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases
1
[removed]
2025-05-19T07:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1kq5num/challenges_in_finetuning_llms_on_large/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq5num
false
null
t3_1kq5num
/r/LocalLLaMA/comments/1kq5num/challenges_in_finetuning_llms_on_large/
false
false
self
1
null
How do you know which tool to run your model with?
1
I was watching a few videos from Bijan Bowen, and he often says he has to launch the model from vLLM or specifically from LM Studio, etc. Is there a reason why models need to be run using specific tools, and how do you know where to run the LLM?
2025-05-19T07:40:00
https://www.reddit.com/r/LocalLLaMA/comments/1kq5x9w/how_do_you_know_which_tool_to_run_your_model_with/
crispyfrybits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq5x9w
false
null
t3_1kq5x9w
/r/LocalLLaMA/comments/1kq5x9w/how_do_you_know_which_tool_to_run_your_model_with/
false
false
self
1
null
LM Studio: Setting `trust_remote_code=True`
1
[removed]
2025-05-19T07:47:26
https://www.reddit.com/r/LocalLLaMA/comments/1kq6105/lm_studio_setting_trust_remote_codetrue/
NiceLinden97
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq6105
false
null
t3_1kq6105
/r/LocalLLaMA/comments/1kq6105/lm_studio_setting_trust_remote_codetrue/
false
false
self
1
null
Very mixed results with llama3.2 - the 3b version
1
Hello, I'm working on a "simple" sentiment check. The strings/text are usually a few words long and should be checked by a system (n8n, sentiment analysis node) and afterwards categorized (positive, neutral, negative). If I test this on an OpenAI account - or maybe even a local qwen3:4b - it seems to work quite reliably. For testing and demo purposes, I'd like to run this locally. qwen3:4b takes quite a while on my "GPU-free" laptop. llama3.2 3b is faster, but I don't really understand why it has mixed results. I've got a set of ca. 8 sentences. One run of the sentiment analysis loop works; another run doesn't. People suggested that a 3B model on Ollama often won't work reliably. [https://community.n8n.io/t/sentiment-analysis-mostly-works-sometimes-not-with-local-ollama/116728](https://community.n8n.io/t/sentiment-analysis-mostly-works-sometimes-not-with-local-ollama/116728) And for other models, I assume I'd need different hardware? 16 × AMD Ryzen 7 PRO 6850U with Radeon Graphics - 32 GB RAM
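One thing that often helps small models on a task like this is pinning temperature to 0 and constraining the answer to a single label, then validating it. A minimal sketch against Ollama's local REST API (the prompt wording and fallback policy are just examples, not a tested recipe):

```python
# Minimal sentiment-labeling sketch against a local Ollama server.
# Assumes Ollama runs on the default port with llama3.2:3b pulled.
import requests

LABELS = {"positive", "neutral", "negative"}

def classify(text: str) -> str:
    prompt = (
        "Classify the sentiment of the following text. "
        "Answer with exactly one word: positive, neutral, or negative.\n\n"
        f"Text: {text}"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2:3b",
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": 0},  # deterministic sampling
        },
        timeout=120,
    )
    answer = r.json()["response"].strip().lower()
    return answer if answer in LABELS else "neutral"  # fall back on malformed output

print(classify("The delivery was late and the box was damaged."))
```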
2025-05-19T07:53:32
https://www.reddit.com/r/LocalLLaMA/comments/1kq63vz/very_mixed_results_with_llama32_the_3b_version/
Chris8080
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq63vz
false
null
t3_1kq63vz
/r/LocalLLaMA/comments/1kq63vz/very_mixed_results_with_llama32_the_3b_version/
false
false
self
1
{'enabled': False, 'images': [{'id': 'AoIRwsKr-WBO6KSvFraVVBUgYYrHU7YCZVpWdBqjCnY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?width=108&crop=smart&auto=webp&s=90a202c41e658f5623626f92d99d6b86507ba3c0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?width=216&crop=smart&auto=webp&s=5aa18846e8ec75e552f52e0fc4c599b268e66e6f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?width=320&crop=smart&auto=webp&s=a49fb00b4145c9582ecb3c8b1ff1033156f04df7', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?width=640&crop=smart&auto=webp&s=fb7a520ebda192acf0d7bb0a9aac1305369c912f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?width=960&crop=smart&auto=webp&s=5cc27828d425295c9c8e6d1b8413e34e6165d6c8', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?width=1080&crop=smart&auto=webp&s=dedb21bdd66f77a29ac570d2436f2b6b9a49d619', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?auto=webp&s=7085ddd66355d67a5ac5b01d8ba9cb98def022f6', 'width': 1200}, 'variants': {}}]}
NVIDIA Intros RTX PRO Servers For Enterprise, Equipped With RTX PRO 6000 "Blackwell" Server GPUs
4
2025-05-19T08:11:31
https://wccftech.com/nvidia-rtx-pro-servers-enterprise-equipped-rtx-pro-6000-blackwell-server-gpus/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1kq6cn1
false
null
t3_1kq6cn1
/r/LocalLLaMA/comments/1kq6cn1/nvidia_intros_rtx_pro_servers_for_enterprise/
false
false
https://b.thumbs.redditm…fApvYcp7EnjU.jpg
4
{'enabled': False, 'images': [{'id': 'HCa-4VpwQmGL_cOq69TsjRRM6M7wSgNqzxyYXesaFBY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?width=108&crop=smart&auto=webp&s=3b3db277afd3e8f102d2521822e406ac49635363', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?width=216&crop=smart&auto=webp&s=dc68f0ee10e8c34384b5f773680ff5f078648c80', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?width=320&crop=smart&auto=webp&s=fa6de7c0b8634902acf6fea123e5d35c7efceb25', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?width=640&crop=smart&auto=webp&s=0b5abda4a70d9234c26ac483dc31c6311d1b7f6a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?width=960&crop=smart&auto=webp&s=3b21e7e69edd09cdcf299eefaf37146ecdba9d6e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?width=1080&crop=smart&auto=webp&s=63f001e52a962ad81fc96597888fc2522e2a510f', 'width': 1080}], 'source': {'height': 1441, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?auto=webp&s=e9f02c8508cc8154276ae652727f7445efefe824', 'width': 2560}, 'variants': {}}]}
Best Way to Serve LLaMA 4 Scout or DeepSeek V3 with 10 Concurrent Users @ 30 t/s?
1
[removed]
2025-05-19T08:11:36
https://www.reddit.com/r/LocalLLaMA/comments/1kq6cok/best_way_to_serve_llama_4_scout_or_deepseek_v3/
HereForAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq6cok
false
null
t3_1kq6cok
/r/LocalLLaMA/comments/1kq6cok/best_way_to_serve_llama_4_scout_or_deepseek_v3/
false
false
self
1
null
NVIDIA Launches GB10-Powered DGX Spark & GB300-Powered DGX Station AI Systems, Blackwell Ultra With 20 PFLOPs Compute
14
2025-05-19T08:12:32
https://wccftech.com/nvidia-gb10-powered-dgx-spark-gb300-powered-dgx-station-ai-systems/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1kq6d4u
false
null
t3_1kq6d4u
/r/LocalLLaMA/comments/1kq6d4u/nvidia_launches_gb10powered_dgx_spark/
false
false
https://b.thumbs.redditm…qo0ai5xCwrjE.jpg
14
{'enabled': False, 'images': [{'id': 'PD8rnvifNMtTH2QZfbt1ABecTzvsQu7n786xD74W-RU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?width=108&crop=smart&auto=webp&s=3583a7b92227eedbd32ca5e7e391e998ee1a6b40', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?width=216&crop=smart&auto=webp&s=edd8218e5a9378497e66e51b06573be624bb8a9a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?width=320&crop=smart&auto=webp&s=d8c502fdf944a850b7ecb86040699439a4758fdf', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?width=640&crop=smart&auto=webp&s=cc58fdb063571a703f73fdcc3db050aede808c75', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?width=960&crop=smart&auto=webp&s=9fa7cd64672b2c8c84b52fbe8b8a0a6c98e0a6e5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?width=1080&crop=smart&auto=webp&s=d77182713b88b506fe3bc5419f362b5066b514e1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?auto=webp&s=60ba344e90b33241b4424038e63000358232e14d', 'width': 1920}, 'variants': {}}]}
Creating your own avatar
1
I saw in the news today that UBS is creating AI avatars of its analysts to make presentations, see: https://www.ft.com/content/0916d635-755b-4cdc-b722-e32d94ae334d (paywalled). I was curious about doing the same thing for myself, but running it locally so I have full control over my avatar. Has anyone done something like this already? What tools do you use? The standing-and-presenting case should be easier to generate than arbitrary video. There are many TTS options available with voice cloning too. I'm not sure whether it would make sense to do TTS and then generate video based on that, or to jointly generate video and audio based on a script.
2025-05-19T08:14:39
https://www.reddit.com/r/LocalLLaMA/comments/1kq6e5v/creating_your_own_avatar/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq6e5v
false
null
t3_1kq6e5v
/r/LocalLLaMA/comments/1kq6e5v/creating_your_own_avatar/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ljdpzyn2d8htEgCmohLfQbgKl1NNRFeXoNkEhxnWqSw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?width=108&crop=smart&auto=webp&s=8237593dc599d5728240f7fa80130c7f969ca1a7', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?width=216&crop=smart&auto=webp&s=03d95d48577e3014506ccb1fdcee74d9e6543c82', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?width=320&crop=smart&auto=webp&s=5b2fdb14b4183dc58069eea9086e5b48518e857d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?width=640&crop=smart&auto=webp&s=b645846e357172fca3d65e655ea3a4c83a29b08e', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?width=960&crop=smart&auto=webp&s=f6110d31a5968106a28673adb1a446b110f93c3c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?width=1080&crop=smart&auto=webp&s=237005ad0b63f7dd1d467a9ca612557d767e9817', 'width': 1080}], 'source': {'height': 1288, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?auto=webp&s=a8360ec0aec3017c03e5462b439b6da11050bc3c', 'width': 2289}, 'variants': {}}]}
Real time voice to voice AI
1
Hello everyone,

I’m building a website that allows users to practice interviews with a virtual examiner. This means I need a real-time, voice-to-voice solution with low latency and reasonable cost. The business model is as follows: for example, a customer pays $10 for a 20-minute mock interview. The interview script will be fed to the language model in advance.

So far, I’ve explored the following options:

• ElevenLabs – excellent quality but quite expensive
• Deepgram
• Speechmatics – seems somewhat affordable, but I’m unsure how well it would scale
• Agora.io

Do you know of any alternative solutions? For instance, using Google STT, a locally deployed language model (like Mistral), and Amazon Polly for TTS? I’d be very grateful if anyone with experience building real-time voice platforms could advise me on the best combination of tools for an affordable, low-latency solution.
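For orientation, the cascaded STT → LLM → TTS pipeline being described usually has this shape — a minimal sketch where every backend function is a hypothetical placeholder to be wired to whichever providers you pick:

```python
# Rough shape of a cascaded voice-to-voice loop (STT -> LLM -> TTS).
# All three backend functions are hypothetical placeholders.
def transcribe(audio_chunk: bytes) -> str:
    """Placeholder: send audio to your STT provider, return text."""
    raise NotImplementedError

def respond(history: list[dict], user_text: str) -> str:
    """Placeholder: query the LLM with the interview script + turn history."""
    raise NotImplementedError

def synthesize(text: str) -> bytes:
    """Placeholder: send text to your TTS provider, return audio."""
    raise NotImplementedError

def handle_turn(history: list[dict], audio_chunk: bytes) -> bytes:
    user_text = transcribe(audio_chunk)
    history.append({"role": "user", "content": user_text})
    reply = respond(history, user_text)
    history.append({"role": "assistant", "content": reply})
    return synthesize(reply)  # stream this back to cut perceived latency
```

Streaming each stage (partial transcripts in, sentence-by-sentence TTS out) rather than waiting for full turns is usually where most of the perceived latency is won.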
2025-05-19T08:34:02
https://www.reddit.com/r/LocalLLaMA/comments/1kq6nkv/real_time_voice_to_voice_ai/
Prestigious-Ant-4348
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq6nkv
false
null
t3_1kq6nkv
/r/LocalLLaMA/comments/1kq6nkv/real_time_voice_to_voice_ai/
false
false
self
1
null
OuteTTS 1.0 (0.6B) — Apache 2.0, Batch Inference (~0.1–0.02 RTF)
144
Hey everyone! I just released OuteTTS-1.0-0.6B, a lighter variant built on Qwen-3 0.6B.

OuteTTS-1.0-0.6B
- Model Architecture: Based on Qwen-3 0.6B.
- License: Apache 2.0 (free for commercial and personal use)
- Multilingual: 14 supported languages: English, Chinese, Dutch, French, Georgian, German, Hungarian, Italian, Japanese, Korean, Latvian, Polish, Russian, Spanish

Python Package Update: outetts v0.4.2
- EXL2 Async: batched inference
- vLLM (Experimental): batched inference
- Llama.cpp Async Server: continuous batching
- Llama.cpp Server: external-URL model inference

⚡ Benchmarks (Single NVIDIA L40S GPU)

| Model | Batch→RTF |
|---|---|
| vLLM OuteTTS-1.0-0.6B FP8 | 16→0.11, 24→0.08, 32→0.05 |
| vLLM Llama-OuteTTS-1.0-1B FP8 | 32→0.04, 64→0.03, 128→0.02 |
| EXL2 OuteTTS-1.0-0.6B 8bpw | 32→0.108 |
| EXL2 OuteTTS-1.0-0.6B 6bpw | 32→0.106 |
| EXL2 Llama-OuteTTS-1.0-1B 8bpw | 32→0.105 |
| Llama.cpp server OuteTTS-1.0-0.6B Q8_0 | 16→0.22, 32→0.20 |
| Llama.cpp server OuteTTS-1.0-0.6B Q6_K | 16→0.21, 32→0.19 |
| Llama.cpp server Llama-OuteTTS-1.0-1B Q8_0 | 16→0.172, 32→0.166 |
| Llama.cpp server Llama-OuteTTS-1.0-1B Q6_K | 16→0.165, 32→0.164 |

📦 Model Weights (ST, GGUF, EXL2, FP8): https://huggingface.co/OuteAI/OuteTTS-1.0-0.6B
📂 Python Inference Library: https://github.com/edwko/OuteTTS
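For readers unfamiliar with RTF: real-time factor is synthesis time divided by the duration of the audio produced, so lower is better and anything below 1.0 is faster than real time. A quick arithmetic illustration using two values from the table above:

```python
# Real-time factor (RTF) = time to synthesize / duration of audio produced.
# RTF < 1 means faster than real time; e.g. RTF 0.02 turns a 60 s clip
# around in ~1.2 s. The RTF values below come from the benchmark table.
def synthesis_seconds(rtf: float, audio_seconds: float) -> float:
    return rtf * audio_seconds

for backend, rtf in [("vLLM 1B FP8, batch 128", 0.02),
                     ("llama.cpp 0.6B Q8_0, batch 32", 0.20)]:
    print(f"{backend}: 60 s of audio in ~{synthesis_seconds(rtf, 60):.1f} s")
```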
2025-05-19T08:56:52
https://huggingface.co/OuteAI/OuteTTS-1.0-0.6B
OuteAI
huggingface.co
1970-01-01T00:00:00
0
{}
1kq6ysz
false
null
t3_1kq6ysz
/r/LocalLLaMA/comments/1kq6ysz/outetts_10_06b_apache_20_batch_inference_01002_rtf/
false
false
https://b.thumbs.redditm…CKvI_28C4EKY.jpg
144
{'enabled': False, 'images': [{'id': 'bRXF6OCSYqhV__cmYbl3mo1a9EDkvDWIr5S2Odd-RwA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?width=108&crop=smart&auto=webp&s=ed3a16858e6830b08e5756690dac427b68e1e3f3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?width=216&crop=smart&auto=webp&s=7c036cc8899e6ae258e1a99cb75875e2900c7731', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?width=320&crop=smart&auto=webp&s=bcb891c1076493a918f33d54b1bda2b8052e4688', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?width=640&crop=smart&auto=webp&s=9ca9a806b15f609c1dba2685c905de1ca8099ac2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?width=960&crop=smart&auto=webp&s=7d394e696a58b713775c8b83f6132d9d55f0133b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?width=1080&crop=smart&auto=webp&s=e5bbf89ab82506856af7190caf58905b747a486a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?auto=webp&s=7ed3b2e4baadc521a479979d455a537dcc5a0811', 'width': 1200}, 'variants': {}}]}
Any TTS that supports Hebrew?
1
I just need one that sounds natural.
2025-05-19T09:43:44
https://www.reddit.com/r/LocalLLaMA/comments/1kq7mol/any_tts_that_support_hebrew/
ResponsibleTruck4717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq7mol
false
null
t3_1kq7mol
/r/LocalLLaMA/comments/1kq7mol/any_tts_that_support_hebrew/
false
false
self
1
null
Is an RX 9070 XT good enough?
1
[removed]
2025-05-19T09:57:07
https://www.reddit.com/r/LocalLLaMA/comments/1kq7tnh/is_a_rx_9070xt_good_enough/
uc--
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq7tnh
false
null
t3_1kq7tnh
/r/LocalLLaMA/comments/1kq7tnh/is_a_rx_9070xt_good_enough/
false
false
self
1
null
Anything below 7b is a waste of time
1
[removed]
2025-05-19T10:00:57
https://www.reddit.com/r/LocalLLaMA/comments/1kq7vrv/anything_below_7b_is_a_waste_of_time/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq7vrv
false
null
t3_1kq7vrv
/r/LocalLLaMA/comments/1kq7vrv/anything_below_7b_is_a_waste_of_time/
false
false
self
1
null
Anything below 7b is useless
1
[removed]
2025-05-19T10:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1kq847j/anything_below_7b_is_useless/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq847j
false
null
t3_1kq847j
/r/LocalLLaMA/comments/1kq847j/anything_below_7b_is_useless/
false
false
self
1
null
Water Cooling My RTX 4090 48GB: A DIY Mod with a 240mm AIO
1
[removed]
2025-05-19T10:17:35
https://www.reddit.com/r/LocalLLaMA/comments/1kq8568/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/
Weekly-Program-2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq8568
false
null
t3_1kq8568
/r/LocalLLaMA/comments/1kq8568/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/
false
false
https://b.thumbs.redditm…e8q3C0G_ZvmA.jpg
1
null
Water Cooling My RTX 4090 48GB: DIY Mod with a 240mm AIO
1
[removed]
2025-05-19T10:20:19
https://www.reddit.com/r/LocalLLaMA/comments/1kq86kc/water_cooling_my_rtx_4090_48gb_diy_mod_with_a/
Weekly-Program-2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq86kc
false
null
t3_1kq86kc
/r/LocalLLaMA/comments/1kq86kc/water_cooling_my_rtx_4090_48gb_diy_mod_with_a/
false
false
self
1
null
Water Cooling My RTX 4090 48GB: A DIY Mod with a 240mm AIO
1
[removed]
2025-05-19T10:25:10
https://www.reddit.com/r/LocalLLaMA/comments/1kq895r/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/
Academic-Passenger99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq895r
false
null
t3_1kq895r
/r/LocalLLaMA/comments/1kq895r/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/
false
false
https://b.thumbs.redditm…AybNp0C-SHSA.jpg
1
null
Any lightweight AI model for ollama that can be trained to do queries and read software manuals?
1
[removed]
2025-05-19T10:53:51
https://www.reddit.com/r/LocalLLaMA/comments/1kq8pp2/any_lightweight_ai_model_for_ollama_that_can_be/
Palova98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq8pp2
false
null
t3_1kq8pp2
/r/LocalLLaMA/comments/1kq8pp2/any_lightweight_ai_model_for_ollama_that_can_be/
false
false
self
1
null
How to make your MCP clients (Cursor, Windsurf...) share context with each other
1
[removed]
2025-05-19T11:01:36
https://www.reddit.com/r/LocalLLaMA/comments/1kq8uct/how_to_make_your_mcp_clients_cursor_windsurf/
anmolbaranwal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq8uct
false
null
t3_1kq8uct
/r/LocalLLaMA/comments/1kq8uct/how_to_make_your_mcp_clients_cursor_windsurf/
false
false
self
1
{'enabled': False, 'images': [{'id': '8yw1aMEiwnrwsodbJvTKRxq08BfbHuBN6x3eS_kY70k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?width=108&crop=smart&auto=webp&s=8ffbfc22020c54c31b3695c4c8e7f62e98864899', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?width=216&crop=smart&auto=webp&s=c3e55533f85d621b037949bdc742cb7f606c95bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?width=320&crop=smart&auto=webp&s=9fd5f61452a11273845da5a7132556796232cddb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?width=640&crop=smart&auto=webp&s=595a93224ffed58c0000e4491a58207e74ff4985', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?width=960&crop=smart&auto=webp&s=5601057797a85d776d024366fc995590bb221042', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?width=1080&crop=smart&auto=webp&s=79d52ea1851b72cb9ec90398a22878561bbf02a5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?auto=webp&s=cc98a7bc71ffc9e088b6106cc56e3005d757dfe5', 'width': 1200}, 'variants': {}}]}
Qwen hallucinating Chinese || Better models for German RAG use cases?
3
No matter which Qwen model I use, it sometimes randomly hallucinates Chinese characters, which makes it unusable for my use case in a German business environment. I am specifically looking for a model proficient in German and specialized for RAG use cases. For efficiency I would like to use an AWQ quantization. I've been looking at Llama 3.1 and 3.3 70B and also the Nemotron versions, but it seems to me that there are very few AWQ versions of them out there. Does anyone have experience with using these models for non-English use cases, especially with RAG? Is there maybe another model that works better? Like I said, I tried Qwen and was quite disappointed, same for Gemma; that's why I'm going back to Llama models right now. It just seems weird to me that the best models to use in a business environment are almost a year old. What else can I test out?
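Not a fix for the root cause, but one pragmatic guard is detecting CJK codepoints in the output and retrying or rejecting the answer. A small sketch (the retry policy and the `generate` callable are hypothetical stand-ins for whatever inference call you use):

```python
# Guardrail sketch: reject/retry generations that contain CJK characters.
# generate() is a hypothetical stand-in for your inference call.
import re

CJK = re.compile(r"[\u4e00-\u9fff\u3400-\u4dbf]")  # common CJK unified blocks

def generate_german_only(generate, prompt: str, max_retries: int = 3) -> str:
    for _ in range(max_retries):
        out = generate(prompt)
        if not CJK.search(out):
            return out
    raise RuntimeError("model kept emitting CJK characters")
```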
2025-05-19T11:02:08
https://www.reddit.com/r/LocalLLaMA/comments/1kq8uqn/qwen_hallucinating_chinese_better_models_for/
okonemi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq8uqn
false
null
t3_1kq8uqn
/r/LocalLLaMA/comments/1kq8uqn/qwen_hallucinating_chinese_better_models_for/
false
false
self
3
null
Computex: Intel Unveils New GPUs for AI and Workstations
186
24GB for $500
2025-05-19T11:05:10
https://newsroom.intel.com/client-computing/computex-intel-unveils-new-gpus-ai-workstations
MR_-_501
newsroom.intel.com
1970-01-01T00:00:00
0
{}
1kq8wo4
false
null
t3_1kq8wo4
/r/LocalLLaMA/comments/1kq8wo4/computex_intel_unveils_new_gpus_for_ai_and/
false
false
https://b.thumbs.redditm…5o_lHa0HApzk.jpg
186
{'enabled': False, 'images': [{'id': '007o_fpFSpvZlrkPAPfXPwKClNNhBgQoF7pYoT0U_Fc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?width=108&crop=smart&auto=webp&s=618bc4b8d0174c09145f919bfdf2728c47d7e7f4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?width=216&crop=smart&auto=webp&s=2711e61016c4ad347bb81350b3c8fe6954a5a87d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?width=320&crop=smart&auto=webp&s=3ef95ca15434d4d39550b35fffdd57e836aa0814', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?width=640&crop=smart&auto=webp&s=296206ba445fb5eff3bb134ade0c97c527f347ae', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?width=960&crop=smart&auto=webp&s=476d6745761c1bb7da8fe77a4238e7025c79f91d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?width=1080&crop=smart&auto=webp&s=bab35592d21adcd640c0d6f4f61a4f1022b4482d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?auto=webp&s=5ce0cc908fcf086aad8f14862f56add2613386d0', 'width': 1920}, 'variants': {}}]}
Intel launches $299 Arc Pro B50 with 16GB of memory, 'Project Battlematrix' workstations with 24GB Arc Pro B60 GPUs
776
"While the B60 is designed for powerful 'Project Battlematrix' AI workstations... will carry a roughly $500 per-unit price tag
2025-05-19T11:14:29
https://www.tomshardware.com/pc-components/gpus/intel-launches-usd299-arc-pro-b50-with-16gb-of-memory-project-battlematrix-workstations-with-24gb-arc-pro-b60-gpus
FullstackSensei
tomshardware.com
1970-01-01T00:00:00
0
{}
1kq9294
false
null
t3_1kq9294
/r/LocalLLaMA/comments/1kq9294/intel_launches_299_arc_pro_b50_with_16gb_of/
false
false
https://b.thumbs.redditm…NwQu6XlQgQSA.jpg
776
{'enabled': False, 'images': [{'id': '2WRQJFuDy0yvdo8Tiv2FKWqHIhmhdcrt4EosSmebgBg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?width=108&crop=smart&auto=webp&s=6b3edf0d2b0683e25c02fa3aaba823f08261c32f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?width=216&crop=smart&auto=webp&s=8a2afc2744ca2986984fadf34220b81e55fde164', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?width=320&crop=smart&auto=webp&s=c17e5983bd9a28f66d7849025c063175b548a5bb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?width=640&crop=smart&auto=webp&s=64c87f9f3217c313d6276262cf0a6572a7d3d2af', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?width=960&crop=smart&auto=webp&s=7c4d65e75918fbe8c80eacf35092650d263bd099', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?width=1080&crop=smart&auto=webp&s=93447e9fc55a80ac498b3a2fb736ba51c191b2f3', 'width': 1080}], 'source': {'height': 4592, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?auto=webp&s=086e82b39341ed1249322387a781a15e5a255c16', 'width': 8160}, 'variants': {}}]}
3090 or 5060 Ti
5
I am interested in building a new desktop computer, and would like to make sure I am able to run some local function-calling LLM (for toying around, and maybe using it in some coding assistance tool) and also NLP. I've seen those two devices. One is relatively old but can be bought used at about 700€, while a 5060 Ti 16GB can be bought cheaper at around 500€. The 3090 appears to have (according to openbenchmarking) about 40% better performance in gaming and general workloads, with a similar margin for FP16 computation (according to Wikipedia), in addition to 8 extra GB of VRAM. However, it seems that the 3090 does not support lower-precision floats, unlike a 5090, which can go down to FP4 (although I suspect I might have gotten something wrong; I see quantizations with 5 or 6 bits, which align with none of that), so I am worried such a GPU would require me to use FP16, limiting the number of parameters I can use. Is my worry correct? What would be your recommendation? Is there a performance benchmark for that use case somewhere? Thanks
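On the 5-/6-bit confusion: GGUF-style quantization formats (Q4_K, Q5_K, Q6_K, …) store weights at fractional effective bits per parameter and are decoded in software kernels, so they run on any of these GPUs, including a 3090; hardware FP8/FP4 support mainly accelerates native low-precision math on newer cards. A rough VRAM estimate for weights alone (effective bits-per-param values are approximate, and KV cache adds more):

```python
# Ballpark VRAM needed for model weights alone (KV cache and activations
# add more). bits/param values approximate common GGUF quant sizes.
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * bits_per_param / 8

for quant, bpp in [("Q4_K_M", 4.8), ("Q6_K", 6.6), ("FP16", 16)]:
    for size in (8, 14, 32):
        print(f"{size}B @ {quant}: ~{weights_gb(size, bpp):.1f} GB")
```

By this estimate a 32B model at Q4_K_M (~19 GB) fits the 3090's 24 GB but not the 5060 Ti's 16 GB, which is the practical difference the extra VRAM buys.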
2025-05-19T11:26:53
https://www.reddit.com/r/LocalLLaMA/comments/1kq99yg/3090_or_5060_ti/
marius851000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq99yg
false
null
t3_1kq99yg
/r/LocalLLaMA/comments/1kq99yg/3090_or_5060_ti/
false
false
self
5
null
What is the smoothest speech interface to run locally?
7
M3 Mac, running Gemma 12B in LMStudio. Is low-latency natural speech possible? Or am I better off just using voice input transcription?
2025-05-19T11:38:49
https://www.reddit.com/r/LocalLLaMA/comments/1kq9h8x/what_is_the_smoothest_speech_interface_to_run/
winkler1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq9h8x
false
null
t3_1kq9h8x
/r/LocalLLaMA/comments/1kq9h8x/what_is_the_smoothest_speech_interface_to_run/
false
false
self
7
null
Intel Announces Arc Pro B-Series, "Project Battlematrix" Linux Software Improvements
62
2025-05-19T11:46:51
https://www.phoronix.com/review/intel-arc-pro-b-series
reps_up
phoronix.com
1970-01-01T00:00:00
0
{}
1kq9mfl
false
null
t3_1kq9mfl
/r/LocalLLaMA/comments/1kq9mfl/intel_announces_arc_pro_bseries_project/
false
false
https://a.thumbs.redditm…E_3lB-U8ykG4.jpg
62
{'enabled': False, 'images': [{'id': '9gsOsUk7wZGtWiZOETwIxtZ9lVGZIRlbcb6FJRN_uYo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?width=108&crop=smart&auto=webp&s=41b235ed8fe992345920e400f9dd8f5b5ced709a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?width=216&crop=smart&auto=webp&s=e2bf1e65cca37f0797fc3bc4d4bd0a35cbcde448', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?width=320&crop=smart&auto=webp&s=6a1e5fcd37c0b5c30e52469e5392a9f2bc92c1f2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?width=640&crop=smart&auto=webp&s=a042c504c995ad73ede19093c65cdc62c3f82fe9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?width=960&crop=smart&auto=webp&s=ffed13a2a0a4df910e9035be693e713d41cc1170', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?width=1080&crop=smart&auto=webp&s=0a5a644ca77f2c424a821930658986d46d492993', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?auto=webp&s=82fc8a7e017b35e2e832f11e01cf3d1028c27a65', 'width': 1920}, 'variants': {}}]}
How can I integrate a pretrained LLM (like LLaMA, Qwen) into a Speech-to-Text (ASR) pipeline?
4
Hey everyone, I'm exploring the idea of building a Speech-to-Text system that leverages the capabilities of pretrained language models like LLaMA or Qwen—not just as a traditional language model for rescoring, but potentially as a more integral part of the transcription process. Has anyone here tried something like this? Are there any frameworks, repos, or resources you'd recommend? Would love to hear your insights or see examples if you've done something similar. Thanks in advance!
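One classic integration point is n-best rescoring: the ASR model emits several candidate transcripts and the LLM re-ranks them by likelihood. A minimal sketch with Hugging Face transformers (the model choice is just an example; deeper fusion approaches go further than this):

```python
# N-best rescoring sketch: pick the ASR hypothesis the LLM finds most likely.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B"  # example model; any causal LM works
tok = AutoTokenizer.from_pretrained(name)
lm = AutoModelForCausalLM.from_pretrained(name)
lm.eval()

@torch.no_grad()
def nll(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    # Average negative log-likelihood per token; lower = more plausible.
    return lm(ids, labels=ids).loss.item()

hypotheses = [
    "recognize speech with a language model",
    "wreck a nice beach with a language model",
]
print(min(hypotheses, key=nll))
```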
2025-05-19T12:02:03
https://www.reddit.com/r/LocalLLaMA/comments/1kq9wtz/how_can_i_integrate_a_pretrained_llm_like_llama/
Extra-Designer9333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq9wtz
false
null
t3_1kq9wtz
/r/LocalLLaMA/comments/1kq9wtz/how_can_i_integrate_a_pretrained_llm_like_llama/
false
false
self
4
null
Intel Dual B60 with 48GB VRAM - Sub $1000
1
[removed]
2025-05-19T12:03:26
https://youtu.be/Y8MWbPBP9i0?feature=shared
ImpossibleHabit615
youtu.be
1970-01-01T00:00:00
0
{}
1kq9xvv
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Y8MWbPBP9i0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Intel Arc B60 DUAL-GPU 48GB Video Card Tear-Down | MAXSUN Arc Pro B60 Dual"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Y8MWbPBP9i0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Intel Arc B60 DUAL-GPU 48GB Video Card Tear-Down | MAXSUN Arc Pro B60 Dual', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kq9xvv
/r/LocalLLaMA/comments/1kq9xvv/intel_dual_b60_with_48gb_vram_sub_1000/
false
false
https://b.thumbs.redditm…DLT5LyVIyUHw.jpg
1
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=216&crop=smart&auto=webp&s=9cdd20b9565d7066799ad0704b7f17972070363e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=320&crop=smart&auto=webp&s=39cffe3cb1c59d44099155324ec96bb08d047073', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?auto=webp&s=290683962a68616f3631890a5a1273add7a4cac0', 'width': 480}, 'variants': {}}]}
KTransformers v0.3.1 now supports Intel Arc GPUs (A770 + new B-series): 7 tps DeepSeek R1 decode speed for a single CPU + a single A770
80
As shared in [this post](https://www.reddit.com/r/LocalLLaMA/comments/1kq9294/intel_launches_299_arc_pro_b50_with_16gb_of/), Intel just dropped their new Arc Pro B-series GPUs today. Thanks to early collaboration with Intel, KTransformers v0.3.1 is out now with Day 0 support for these new cards — including the previously supported A-series like the A770. In our test setup with a single-socket Xeon 5 + DDR5 4800MT/s + Arc A770, we’re seeing around 7.5 tokens/sec decoding speed on *deepseek-r1 Q4*. Enabling dual NUMA gives you even better throughput. More details and setup instructions: [https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/xpu.md](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/xpu.md) Thanks for all the support, and more updates soon!
2025-05-19T12:16:17
https://www.reddit.com/r/LocalLLaMA/comments/1kqa6l0/ktransformers_v031_now_supports_intel_arc_gpus/
CombinationNo780
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqa6l0
false
null
t3_1kqa6l0
/r/LocalLLaMA/comments/1kqa6l0/ktransformers_v031_now_supports_intel_arc_gpus/
false
false
self
80
null
Intel Arc B60 DUAL-GPU 48GB Video Card Tear-Down | MAXSUN Arc Pro B60 Dual
121
[Gamers Nexus](https://www.youtube.com/@GamersNexus)
2025-05-19T12:18:09
https://www.youtube.com/watch?v=Y8MWbPBP9i0
Optifnolinalgebdirec
youtube.com
1970-01-01T00:00:00
0
{}
1kqa7vx
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Y8MWbPBP9i0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Intel Arc B60 DUAL-GPU 48GB Video Card Tear-Down | MAXSUN Arc Pro B60 Dual"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Y8MWbPBP9i0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Intel Arc B60 DUAL-GPU 48GB Video Card Tear-Down | MAXSUN Arc Pro B60 Dual', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kqa7vx
/r/LocalLLaMA/comments/1kqa7vx/intel_arc_b60_dualgpu_48gb_video_card_teardown/
false
false
https://b.thumbs.redditm…b6_a7gW9Ue6M.jpg
121
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=216&crop=smart&auto=webp&s=9cdd20b9565d7066799ad0704b7f17972070363e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=320&crop=smart&auto=webp&s=39cffe3cb1c59d44099155324ec96bb08d047073', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?auto=webp&s=290683962a68616f3631890a5a1273add7a4cac0', 'width': 480}, 'variants': {}}]}
llama.cpp now supports Llama 4 vision
93
Vision support is picking up speed with the recent refactoring to better support it in general. Note that there's a minor(?) [issue with Llama 4 vision](https://github.com/ggml-org/llama.cpp/pull/13282) in general, as you can see below. It's most likely with the model, not with the implementation in llama.cpp, as the issue also occurs on inference engines other than llama.cpp.

https://preview.redd.it/c25p83fheq1f1.png?width=503&format=png&auto=webp&s=6eeb50199641034f38969eb526581fe95ef46498
2025-05-19T12:22:27
https://www.reddit.com/r/LocalLLaMA/comments/1kqab4m/llamacpp_now_supports_llama_4_vision/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqab4m
false
null
t3_1kqab4m
/r/LocalLLaMA/comments/1kqab4m/llamacpp_now_supports_llama_4_vision/
false
false
https://b.thumbs.redditm…9VkdleNOJcJw.jpg
93
{'enabled': False, 'images': [{'id': 'xCocCp_GtOpQABZSmSEYgMQkRf9mUiqrXVi8rbnByzw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?width=108&crop=smart&auto=webp&s=622517cfa0fdcee698976b99a00dd71571acbd46', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?width=216&crop=smart&auto=webp&s=ecb11a4ebe7a4faea521c44b36ef34a3f2dfa352', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?width=320&crop=smart&auto=webp&s=f00e22e3c6f370ddd83080e9b95900a75de40e52', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?width=640&crop=smart&auto=webp&s=dd011b3d7cd43123e6bb6b624eb22b92c82f10f5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?width=960&crop=smart&auto=webp&s=fdc336670d3384eefb94d66107e2c9644f730e6d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?width=1080&crop=smart&auto=webp&s=2027f4ab2c8909a58b4e9d6ba363d6ed8038dbca', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?auto=webp&s=43be0efea86cbcceeebb07cac6ad7da744ecd3c7', 'width': 1200}, 'variants': {}}]}
Is Intel's Arc 48GB GPU going to take over?
1
At the 3:58 mark the video says the cost is expected to be less than $1K: [https://www.youtube.com/watch?v=Y8MWbPBP9i0](https://www.youtube.com/watch?v=Y8MWbPBP9i0) [https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory](https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory) The 24GB card costs $500, which seems like a no-brainer. Info on the 24GB card: [https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory](https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory) [https://wccftech.com/intel-arc-pro-b60-24-gb-b50-16-gb-battlemage-gpus-pro-ai-3x-faster-dual-gpu-variant/](https://wccftech.com/intel-arc-pro-b60-24-gb-b50-16-gb-battlemage-gpus-pro-ai-3x-faster-dual-gpu-variant/) [https://newsroom.intel.com/client-computing/computex-intel-unveils-new-gpus-ai-workstations](https://newsroom.intel.com/client-computing/computex-intel-unveils-new-gpus-ai-workstations)
2025-05-19T12:42:12
https://www.reddit.com/r/LocalLLaMA/comments/1kqaphp/is_intels_arc_gpu_48gb_gpu_going_to_take_over/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqaphp
false
null
t3_1kqaphp
/r/LocalLLaMA/comments/1kqaphp/is_intels_arc_gpu_48gb_gpu_going_to_take_over/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=216&crop=smart&auto=webp&s=9cdd20b9565d7066799ad0704b7f17972070363e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=320&crop=smart&auto=webp&s=39cffe3cb1c59d44099155324ec96bb08d047073', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?auto=webp&s=290683962a68616f3631890a5a1273add7a4cac0', 'width': 480}, 'variants': {}}]}
Is the Intel Arc GPU with 48GB of memory going to take over for $1K?
289
At the 3:58 mark, the video says the cost is expected to be less than $1K: [https://www.youtube.com/watch?v=Y8MWbPBP9i0](https://www.youtube.com/watch?v=Y8MWbPBP9i0) [https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory](https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory) The 24GB card costs $500, which also seems like a no-brainer. Info on the 24GB card: [https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory](https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory) [https://wccftech.com/intel-arc-pro-b60-24-gb-b50-16-gb-battlemage-gpus-pro-ai-3x-faster-dual-gpu-variant/](https://wccftech.com/intel-arc-pro-b60-24-gb-b50-16-gb-battlemage-gpus-pro-ai-3x-faster-dual-gpu-variant/) [https://newsroom.intel.com/client-computing/computex-intel-unveils-new-gpus-ai-workstations](https://newsroom.intel.com/client-computing/computex-intel-unveils-new-gpus-ai-workstations)
2025-05-19T12:43:45
https://www.reddit.com/r/LocalLLaMA/comments/1kqaqmr/is_intel_arc_gpu_with_48gb_of_memory_going_to/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqaqmr
false
null
t3_1kqaqmr
/r/LocalLLaMA/comments/1kqaqmr/is_intel_arc_gpu_with_48gb_of_memory_going_to/
false
false
self
289
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=216&crop=smart&auto=webp&s=9cdd20b9565d7066799ad0704b7f17972070363e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=320&crop=smart&auto=webp&s=39cffe3cb1c59d44099155324ec96bb08d047073', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?auto=webp&s=290683962a68616f3631890a5a1273add7a4cac0', 'width': 480}, 'variants': {}}]}
Demo of Sleep-time Compute to Reduce LLM Response Latency
1
[removed]
2025-05-19T12:47:46
https://i.redd.it/dqmlrygziq1f1.png
Ok_Employee_6418
i.redd.it
1970-01-01T00:00:00
0
{}
1kqatmg
false
null
t3_1kqatmg
/r/LocalLLaMA/comments/1kqatmg/demo_of_sleeptime_compute_to_reduce_llm_response/
false
false
https://b.thumbs.redditm…uFsL4dxfLC1g.jpg
1
{'enabled': True, 'images': [{'id': 'caxgt3E8oHyg9_AQtk-k8-Rkz-9BGCT0aBGiWVJQqt4', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?width=108&crop=smart&auto=webp&s=6ce6f0fe76aefebd675e9bff77f2440ece519e48', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?width=216&crop=smart&auto=webp&s=e99b60f945fce31c626c56c843c24ba8bc3f449c', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?width=320&crop=smart&auto=webp&s=9baec6c4bd381e385445fdfb8348b61162029559', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?width=640&crop=smart&auto=webp&s=925080c6d44926ad4ee93beb50a4d4b7e117d69f', 'width': 640}, {'height': 572, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?width=960&crop=smart&auto=webp&s=dde0be88a1d1664cad3463185c93a191137affac', 'width': 960}, {'height': 644, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?width=1080&crop=smart&auto=webp&s=f9427aa7c6c32e73ff34e7237e37594d9f1167c7', 'width': 1080}], 'source': {'height': 808, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?auto=webp&s=dcd2c772964878bc68cf74f35ef796848d271f67', 'width': 1354}, 'variants': {}}]}
RTX PRO 6000 - Help me benchmark
1
[removed]
2025-05-19T12:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1kqay8r/rtx_pro_6000_help_me_benchmark/
KernQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqay8r
false
null
t3_1kqay8r
/r/LocalLLaMA/comments/1kqay8r/rtx_pro_6000_help_me_benchmark/
false
false
self
1
null
Anything below 7b is useless
0
I feel like, as appealing as they are for low-VRAM GPUs or lower-end CPUs, nothing useful comes out of these models. Their reasoning is bad, and their knowledge is inevitably very limited. Despite how well they might score on some benchmarks, they are nothing more than a gimmick. What do you think?
2025-05-19T12:55:59
https://www.reddit.com/r/LocalLLaMA/comments/1kqazmm/anything_below_7b_is_useless/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqazmm
false
null
t3_1kqazmm
/r/LocalLLaMA/comments/1kqazmm/anything_below_7b_is_useless/
false
false
self
0
null
Been away for two months.. what's the new hotness?
83
What's the new hotness? I saw a Qwen model mentioned. I'm usually able to run things in the 20-23B range... but if there's low-end stuff, I'm interested in that as well.
2025-05-19T13:18:32
https://www.reddit.com/r/LocalLLaMA/comments/1kqbh7g/been_away_for_two_months_whats_the_new_hotness/
bigattichouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbh7g
false
null
t3_1kqbh7g
/r/LocalLLaMA/comments/1kqbh7g/been_away_for_two_months_whats_the_new_hotness/
false
false
self
83
null
Any known vendor/buyer for an LLM home server, but in a PC case? Can't put a blade in my flat...
1
[removed]
2025-05-19T13:19:06
https://www.reddit.com/r/LocalLLaMA/comments/1kqbhmy/any_known_vendorbuyer_for_llm_home_server_but_in/
watzemember
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbhmy
false
null
t3_1kqbhmy
/r/LocalLLaMA/comments/1kqbhmy/any_known_vendorbuyer_for_llm_home_server_but_in/
false
false
self
1
null
Is Parquet the best format for AI datasets now?
0
Many datasets are shared in Parquet format. What do you think about it? (Mostly talking about text datasets, but also interested in other modalities.) Last week the apache/arrow project finally released a way to modify a Parquet file locally, i.e. no need to rewrite all the data every time you need to insert/delete/edit one row; a sketch of the old full-rewrite workflow follows below. While it's a good step in the right direction toward making Parquet files easier to manipulate, there is still some work to do IMO. Do you think it can make a difference?
2025-05-19T13:19:22
https://www.reddit.com/r/LocalLLaMA/comments/1kqbhvi/is_parquet_the_best_format_for_ai_datasets_now/
qlhoest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbhvi
false
null
t3_1kqbhvi
/r/LocalLLaMA/comments/1kqbhvi/is_parquet_the_best_format_for_ai_datasets_now/
false
false
self
0
null
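For context on the cost the post is talking about, here is a minimal sketch of the traditional workflow, where editing or deleting a single row means reading and rewriting the whole file. It uses plain `pyarrow` rather than the new in-place modification API (whose exact interface the post doesn't give), and the file and column names are illustrative assumptions.

```python
# Sketch: "delete one row" from a Parquet file the old way.
# Arrow tables are immutable, so the edit is a filter plus a full rewrite.
import pyarrow.compute as pc
import pyarrow.parquet as pq

PATH = "dataset.parquet"  # hypothetical file with an "id" column

table = pq.read_table(PATH)           # full read, even to touch one row
mask = pc.not_equal(table["id"], 42)  # keep everything except id == 42
table = table.filter(mask)
pq.write_table(table, PATH)           # full rewrite of every row group
```

Row groups are compressed and encoded as immutable chunks, which is why random-access writes weren't possible and why row-level modification support is a meaningful addition.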
Mini PC recommendation
1
[removed]
2025-05-19T13:35:26
https://www.reddit.com/r/LocalLLaMA/comments/1kqbuq4/mini_pc_recommendation/
RevolutionaryPick241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbuq4
false
null
t3_1kqbuq4
/r/LocalLLaMA/comments/1kqbuq4/mini_pc_recommendation/
false
false
self
1
null