Dataset schema (each row below is one r/LocalLLaMA post; fields are pipe-separated):

- title: string (1 to 300 chars)
- score: int64 (0 to 8.54k)
- selftext: string (0 to 40k chars)
- created: timestamp[ns] (2023-04-01 04:30:41 to 2025-06-30 03:16:29, nullable)
- url: string (0 to 878 chars)
- author: string (3 to 20 chars)
- domain: string (0 to 82 chars)
- edited: timestamp[ns] (1970-01-01 00:00:00 to 2025-06-26 17:30:18)
- gilded: int64 (0 to 2)
- gildings: string (7 classes)
- id: string (7 chars)
- locked: bool (2 classes)
- media: string (646 to 1.8k chars, nullable)
- name: string (10 chars)
- permalink: string (33 to 82 chars)
- spoiler: bool (2 classes)
- stickied: bool (2 classes)
- thumbnail: string (4 to 213 chars)
- ups: int64 (0 to 8.54k)
- preview: string (301 to 5.01k chars, nullable)
What’s the minimal text chunk size for natural-sounding TTS, and how can I minimize TTFB in a streaming pipeline?
| 1 |
[removed]
| 2025-05-08T03:19:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1khgd4e/whats_the_minimal_text_chunk_size_for/
|
jetsonjetearth
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khgd4e
| false | null |
t3_1khgd4e
|
/r/LocalLLaMA/comments/1khgd4e/whats_the_minimal_text_chunk_size_for/
| false | false |
self
| 1 | null |
Is GLM-4 actually a hacked GEMINI? Or just Copying their Style?
| 72 |
Am I the only person that's noticed that GLM-4's outputs are eerily similar to Gemini Pro 2.5 in formatting? I copy/pasted a prompt in several different SOTA LLMs - GPT-4, DeepSeek, Gemini 2.5 Pro, Claude 2.7, and Grok. Then I tried it in GLM-4, and was like, wait a minute, where have I seen this formatting before? Then I checked - it was in **Gemini Pro 2.5**. Now, I'm not saying that GLM-4 is Gemini Pro 2.5, of course not, but could it be a hacked earlier version? Or perhaps (far more likely) they used it as a template for how GLM does its outputs? Because Gemini is the **only** LLM that does it this way where it gives you three Options w/parentheticals describing tone, and then finalizes it by saying "Choose the option that best fits your tone". Like, **almost exactly the same.**
I just tested it out on Gemini 2.0 and Gemini Flash. Neither of these versions do this. This is only done by Gemini 2.5 Pro and GLM-4. None of the other Closed-source LLMs do this either, like chat-gpt, grok, deepseek, or claude.
I'm not complaining. And if the Chinese were to somehow hack their LLM and released a quantized open source version to the world - despite how unlikely this is - I wouldn't protest...much. >.>
But jokes aside, anyone else notice this?
Some samples:
Gemini Pro 2.5
https://preview.redd.it/xjw45f988hze1.png?width=1267&format=png&auto=webp&s=c85206ec3f5ebc5288c1e559c3ac2e50acb26b9d
GLM-4
https://preview.redd.it/alnqooqa8hze1.png?width=976&format=png&auto=webp&s=cc68a36fc81af2110ac82d979e493c2889eae93e
Gemini Pro 2.5
https://preview.redd.it/0ofz0ygd8hze1.png?width=730&format=png&auto=webp&s=bc72427d8b0b60d02c5aca3f54ac0f1e287d1e05
GLM-4
https://preview.redd.it/igddncjf8hze1.png?width=895&format=png&auto=webp&s=6ab4b4fa2f7ebdcc3a3feff0f9abbf6ce4428b4f
| 2025-05-08T03:28:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1khgir9/is_glm4_actually_a_hacked_gemini_or_just_copying/
|
GrungeWerX
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khgir9
| false | null |
t3_1khgir9
|
/r/LocalLLaMA/comments/1khgir9/is_glm4_actually_a_hacked_gemini_or_just_copying/
| false | false | 72 |
{'enabled': False, 'images': [{'id': 'eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o.png?width=108&crop=smart&auto=webp&s=5b4262ad5e5ba2984dc7583eeb069b40b154362f', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o.png?width=216&crop=smart&auto=webp&s=c975f0782000a96fd6de5abb5cdf02725896cb11', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o.png?width=320&crop=smart&auto=webp&s=0a22f747c4054cfb539c8109d616d7846a561670', 'width': 320}, {'height': 376, 'url': 'https://external-preview.redd.it/eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o.png?width=640&crop=smart&auto=webp&s=1c21bff7c66da575c19d734c4dab9d236c86bf40', 'width': 640}, {'height': 564, 'url': 'https://external-preview.redd.it/eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o.png?width=960&crop=smart&auto=webp&s=d1d5052f7ba1d45c21f2febbb17a04ca8bfe95e0', 'width': 960}, {'height': 635, 'url': 'https://external-preview.redd.it/eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o.png?width=1080&crop=smart&auto=webp&s=a84910b3612de5f5baf7c125eb5519d0ad374d2e', 'width': 1080}], 'source': {'height': 745, 'url': 'https://external-preview.redd.it/eNoMp-cBhW5B3lKr_XZAgD1Qku1SepqMDIM_aLwv22o.png?auto=webp&s=0feb7f5ef6038683103426a26454f863010fabf3', 'width': 1267}, 'variants': {}}]}
|
|
Feasibility of Running LLM/TTS/Vision Models on Mobile Devices
| 1 |
[removed]
| 2025-05-08T03:49:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1khgwbr/feasibility_of_running_llmttsvision_models_on/
|
wodaxia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khgwbr
| false | null |
t3_1khgwbr
|
/r/LocalLLaMA/comments/1khgwbr/feasibility_of_running_llmttsvision_models_on/
| false | false |
self
| 1 | null |
Gifted some GPUs - looking for recommendations on build
| 0 |
As the title says, I was lucky enough to be gifted 2x 3090 Ti FE GPUs.
Currently I've been running my Llama workloads on my M3 Ultra Mac Studio, but I wasn't planning on leaving it there long term.
I'm also planning to upgrade my gaming rig and thought I could repurpose that hardware. It's a 5800X with 64GB DDR4 on a Gigabyte Aorus Master, which will give me 2x PCIe 4.0 x8 slots. I'll obviously need a bigger PSU, around 1500W for some headroom. It will be running in an old but good Cooler Master HAF XB bench case, so there will be some open airflow. I already have Open WebUI in a separate container in my lab environment, so I can leave that where it is.
Are there any other recommendations? I'm shooting for performance for the family and the ability to get rid of Alexa, maybe with the Home Assistant Voice project backed by an LLM.
| 2025-05-08T05:20:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1khidmm/gifts_some_gpus_looking_for_recommendations_on/
|
ubrtnk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khidmm
| false | null |
t3_1khidmm
|
/r/LocalLLaMA/comments/1khidmm/gifts_some_gpus_looking_for_recommendations_on/
| false | false |
self
| 0 | null |
When do YOU think AGI will arrive? Drop your predictions below!
| 1 |
[removed]
| 2025-05-08T05:25:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1khig24/when_do_you_think_agi_will_arrive_drop_your/
|
d4z7wk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khig24
| false | null |
t3_1khig24
|
/r/LocalLLaMA/comments/1khig24/when_do_you_think_agi_will_arrive_drop_your/
| false | false |
self
| 1 | null |
Separate pad token id or eos.token_id == pad.token_id for SFT on a base model?
| 1 |
[removed]
| 2025-05-08T05:25:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1khiga4/separate_pad_token_id_or_eostoken_id_padtoken_id/
|
Altruistic_Base_6703
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khiga4
| false | null |
t3_1khiga4
|
/r/LocalLLaMA/comments/1khiga4/separate_pad_token_id_or_eostoken_id_padtoken_id/
| false | false |
self
| 1 | null |
Separate pad token id or eos.token_id == pad.token_id for SFT on a base model
| 1 |
[removed]
| 2025-05-08T05:26:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1khih2i/separate_pad_token_id_or_eostoken_id_padtoken_id/
|
Altruistic_Base_6703
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khih2i
| false | null |
t3_1khih2i
|
/r/LocalLLaMA/comments/1khih2i/separate_pad_token_id_or_eostoken_id_padtoken_id/
| false | false |
self
| 1 | null |
LLaMA Ops Challenge - Seeking feedback on open source automation approaches
| 1 |
[removed]
| 2025-05-08T05:44:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1khiqtf/llama_ops_challenge_seeking_feedback_on_open/
|
Legal_Major_5546
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khiqtf
| false | null |
t3_1khiqtf
|
/r/LocalLLaMA/comments/1khiqtf/llama_ops_challenge_seeking_feedback_on_open/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '9-ei6N6bZMP6OiMSfUFWxZopf910fRMRK3be_Ipo57E', 'resolutions': [{'height': 37, 'url': 'https://external-preview.redd.it/9-ei6N6bZMP6OiMSfUFWxZopf910fRMRK3be_Ipo57E.png?width=108&crop=smart&auto=webp&s=d9ea492d7dfcec40da5cebdbdab9bafc9e706757', 'width': 108}, {'height': 75, 'url': 'https://external-preview.redd.it/9-ei6N6bZMP6OiMSfUFWxZopf910fRMRK3be_Ipo57E.png?width=216&crop=smart&auto=webp&s=aeea5cad41e243c552b6b766e6b9f8e541bfbc83', 'width': 216}, {'height': 111, 'url': 'https://external-preview.redd.it/9-ei6N6bZMP6OiMSfUFWxZopf910fRMRK3be_Ipo57E.png?width=320&crop=smart&auto=webp&s=3ca350bb1c986dd5db8cd93cc26eb383ae8ef883', 'width': 320}, {'height': 223, 'url': 'https://external-preview.redd.it/9-ei6N6bZMP6OiMSfUFWxZopf910fRMRK3be_Ipo57E.png?width=640&crop=smart&auto=webp&s=d42639a7d0b847f9c860b12bcbd52bb5d13b88e3', 'width': 640}, {'height': 335, 'url': 'https://external-preview.redd.it/9-ei6N6bZMP6OiMSfUFWxZopf910fRMRK3be_Ipo57E.png?width=960&crop=smart&auto=webp&s=8b04d27f2613091865d0402bf28597998ce36f7d', 'width': 960}], 'source': {'height': 349, 'url': 'https://external-preview.redd.it/9-ei6N6bZMP6OiMSfUFWxZopf910fRMRK3be_Ipo57E.png?auto=webp&s=64bddfe8eaaf5b8a9d53a25257b0346a6771b5b9', 'width': 1000}, 'variants': {}}]}
|
new LLM
| 1 |
[removed]
| 2025-05-08T06:00:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1khizhq/new_llm/
|
6whiten_igga9
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khizhq
| false | null |
t3_1khizhq
|
/r/LocalLLaMA/comments/1khizhq/new_llm/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg', 'resolutions': [{'height': 89, 'url': 'https://external-preview.redd.it/Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg.png?width=108&crop=smart&auto=webp&s=c202d676fe19278fd3664d0c725b9689f6f746c1', 'width': 108}, {'height': 178, 'url': 'https://external-preview.redd.it/Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg.png?width=216&crop=smart&auto=webp&s=3ec29f9161826cc68da2d14c0af42a972306036d', 'width': 216}, {'height': 264, 'url': 'https://external-preview.redd.it/Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg.png?width=320&crop=smart&auto=webp&s=558e34c91a09b75a2d2fd3d7e62caa393d732140', 'width': 320}, {'height': 529, 'url': 'https://external-preview.redd.it/Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg.png?width=640&crop=smart&auto=webp&s=3cbbe32da54fe68bc5929e129b16c2998ed30e2f', 'width': 640}, {'height': 794, 'url': 'https://external-preview.redd.it/Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg.png?width=960&crop=smart&auto=webp&s=febdec7c1de06e5c2588a5d6cdebbb04c9bfdf8a', 'width': 960}, {'height': 893, 'url': 'https://external-preview.redd.it/Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg.png?width=1080&crop=smart&auto=webp&s=a70a25fb0fc1906e95da4199bb9c76864281d762', 'width': 1080}], 'source': {'height': 1580, 'url': 'https://external-preview.redd.it/Gtoy9h_-lR39LLsdbhjPNVf_5pSGDB-C0tUw4XDfVpg.png?auto=webp&s=5f18d5ff2ca5933cc84c600a0b5e3fa974741a33', 'width': 1910}, 'variants': {}}]}
|
|
New toy just dropped! A free, general-purpose online AI agent!
| 0 |
I've been building an online multimodal AI agent app ([kragent.ai](https://kragent.ai)) — and it's now live with support for sandboxed code execution, search engine access, web browsing, and more. You can try it for free using an open-source Qwen model, or plug in your own Claude 3.5/3.7 Sonnet API key to unlock full power. 🔥
This is a fast-evolving project. Coming soon: PDF reading, multimodal content generation, plug-and-play long-term memory modules for specific domains, and a dedicated LLM fine-tuned just for Kragent.
**Pro tip for using this agent effectively:** Talk to it often. While we all dream of giving a one-liner and getting perfect results, even humans struggle with that. Clear, step-by-step instructions help the agent avoid misunderstandings and dramatically increase task success.
Give it a shot and let me know what you think!
| 2025-05-08T06:07:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1khj2vl/new_toy_just_dropped_a_free_generalpurpose_online/
|
Steven_Lu_137
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khj2vl
| false | null |
t3_1khj2vl
|
/r/LocalLLaMA/comments/1khj2vl/new_toy_just_dropped_a_free_generalpurpose_online/
| false | false |
self
| 0 | null |
Suggestions for "un-bloated" open source coding/instruction LLM?
| 0 |
Just as a demonstration, look at the table below:
https://preview.redd.it/eu7tp7pa5ize1.png?width=713&format=png&auto=webp&s=b51e0b24d8acaf695428c28984937c292431cb98
The step from 1B to 4B adds +140 languages and multimodal support, which I don't care about. I want a specialized model for English only, plus instruction following and coding. It should preferably be a larger model than Gemma 1B, but un-bloated.
What do you recommend?
| 2025-05-08T06:36:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1khjikk/suggestions_for_unbloated_open_source/
|
mr-claesson
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khjikk
| false | null |
t3_1khjikk
|
/r/LocalLLaMA/comments/1khjikk/suggestions_for_unbloated_open_source/
| false | false | 0 | null |
|
Building LLM Workflows - - some observations
| 402 |
Been working on some relatively complex LLM workflows for the past year (not continuously, on and off). Here are some conclusions:
- Decomposing each task into the smallest steps and prompt chaining works far better than just using a single prompt with CoT. Turning each step of the CoT into its own prompt and checking/sanitizing outputs reduces errors.
- Using XML tags to structure the system prompt, prompts, etc. works best (IMO better than a JSON structure, but YMMV)
- You have to remind the LLM that its only job is to work as a semantic parser of sorts, to merely understand and transform the input data and NOT introduce data from its own "knowledge" into the output.
- NLTK, spaCy, and FlairNLP are often good ways to independently verify the output of an LLM (e.g. check whether the LLM's output contains the sequence of POS tags you want; see the spaCy sketch at the end of this post). The great thing about these libraries is that they're fast and reliable.
- ModernBERT classifiers are often just as good as LLMs if the task is small enough. Fine-tuned BERT-style classifiers are usually better than an LLM for focused, narrow tasks.
- LLM-as-judge and LLM confidence scoring are extremely unreliable, especially if there's no "grounding" for how the score is to be arrived at. Scoring on vague parameters like "helpfulness" is useless -- e.g. LLMs often conflate helpfulness with professional tone and length of response. Scoring has to either be grounded in multiple examples (which has its own problems -- LLMs may make the wrong inferences from example patterns), or a fine-tuned model is needed. If you're going to fine-tune for confidence scoring, might as well use a BERT model or something similar.
- In agentic loops, the hardest part is setting up the conditions where the LLM exits the loop -- using the LLM to decide whether or not to exit is extremely unreliable (same reason as the LLM-as-judge issues).
- Performance usually degrades past 4k tokens of input context... this is often only seen once you've run thousands of iterations. If you have a low error threshold, even a 5% failure rate in the pipeline is unacceptable, so keeping all prompts below 4k tokens helps.
- 32B models are good enough and reliable enough for most tasks, if the task is structured properly.
- Structured CoT (with headings and bullet points) is often better than unstructured `<thinking>Okay, so I must...etc` tokens. Structured and concise CoT stays within the context window (in the prompt as well as examples), and doesn't waste output tokens.
- Self-consistency helps, but that also means running each prompt multiple times -- which forces you to use smaller models and smaller prompts (see the voting sketch at the end of this post).
- Writing your own CoT is better than relying on a reasoning model. Reasoning models are a good way to collect different CoT paths and ideas, and then synthesize your own.
- The long-term plan is always to fine-tune everything. Start with a large API-based model and few-shot examples, and keep tweaking. Once the workflows are operational, consider creating fine-tuning datasets for some of the tasks so you can shift to a smaller local LLM or BERT. Making balanced datasets isn't easy.
- When making a dataset for fine-tuning, make it balanced by setting up a categorization system/orthogonal taxonomy so you can get complete coverage of the task. Use the MECE framework.
I've probably missed many points, these were the first ones that came to mind.
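To make two of these points concrete, here are a couple of minimal sketches. They're illustrative only and not taken from the workflows above.

POS-tag verification with spaCy (assumes `en_core_web_sm` is installed; the expected tag prefix is just an example):

```python
# Minimal sketch: independently verify an LLM output's part-of-speech sequence with spaCy.
# Assumes `python -m spacy download en_core_web_sm` has been run; the expected
# POS prefix below is purely illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")

def matches_pos_prefix(text: str, expected_prefix: list[str]) -> bool:
    """Return True if the first tokens of `text` carry the expected coarse POS tags."""
    tags = [token.pos_ for token in nlp(text)]
    return tags[: len(expected_prefix)] == expected_prefix

llm_output = "Quarterly revenue increased sharply."
print(matches_pos_prefix(llm_output, ["ADJ", "NOUN", "VERB"]))
```

Self-consistency by majority vote (`call_llm` is a placeholder for whatever local inference call you use):

```python
# Self-consistency sketch: run the same prompt several times and keep the majority answer.
# `call_llm` is any callable that takes a prompt string and returns the model's answer.
from collections import Counter
from typing import Callable

def self_consistent_answer(call_llm: Callable[[str], str], prompt: str, n_samples: int = 5) -> str:
    answers = [call_llm(prompt).strip() for _ in range(n_samples)]
    winner, count = Counter(answers).most_common(1)[0]
    # Only trust the result if it is an actual majority; otherwise flag it.
    return winner if count > n_samples // 2 else "UNDECIDED"
```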
| 2025-05-08T06:54:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1khjrtj/building_llm_workflows_some_observations/
|
noellarkin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khjrtj
| false | null |
t3_1khjrtj
|
/r/LocalLLaMA/comments/1khjrtj/building_llm_workflows_some_observations/
| false | false |
self
| 402 | null |
what do you think about the llms?
| 1 |
[removed]
| 2025-05-08T07:10:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1khjzw3/what_do_you_think_about_the_llms/
|
NovelProperty7105
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khjzw3
| false | null |
t3_1khjzw3
|
/r/LocalLLaMA/comments/1khjzw3/what_do_you_think_about_the_llms/
| false | false |
self
| 1 | null |
EPYC 7313P - good enough?
| 4 |
Planning a home PC build for the family and small-business use. How's the EPYC 7313P? Will it be sufficient? No image generation, just a lot of admin work.
| 2025-05-08T07:48:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1khki8f/epyc_7313p_good_enough/
|
AfraidScheme433
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khki8f
| false | null |
t3_1khki8f
|
/r/LocalLLaMA/comments/1khki8f/epyc_7313p_good_enough/
| false | false |
self
| 4 | null |
How many tools have you tried in the last 6 months?
| 1 |
[removed]
| 2025-05-08T08:08:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1khks6m/how_many_tools_have_you_tried_in_the_last_6_months/
|
filopedraz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khks6m
| false | null |
t3_1khks6m
|
/r/LocalLLaMA/comments/1khks6m/how_many_tools_have_you_tried_in_the_last_6_months/
| false | false |
self
| 1 | null |
Auto Thinking Mode Switch for Qwen3 / Open Webui Function
| 47 |
**Github:** [**https://github.com/AaronFeng753/Better-Qwen3**](https://github.com/AaronFeng753/Better-Qwen3)
This is an Open WebUI function for Qwen3 models; it has the following features:
1. Automatically turns the thinking process on/off by using the LLM itself to evaluate the difficulty of your request.
2. Removes the model's old thoughts in multi-turn conversations, per the Qwen3 model's Hugging Face README: `In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content.`
You will need to edit the code to configure the OpenAI-compatible API URL and the model name.
(And yes, it works with local LLMs; I'm using one right now. Ollama and LM Studio both have OpenAI-compatible APIs.)
https://preview.redd.it/cntv1xrcsize1.png?width=846&format=png&auto=webp&s=9cef5af10d3badbdeaa6d9558fdce7895de4f5c3
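For anyone curious how the "remove old thoughts" part can work, here's a rough sketch of the general idea. This is not the actual Better-Qwen3 code, just an illustration of stripping `<think>` blocks from the message history:

```python
# Rough illustration only (not the Better-Qwen3 implementation): strip old
# <think>...</think> blocks from previous assistant messages in an OpenAI-style
# message list, so the history only carries the final answers.
import re

THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_old_thoughts(messages: list[dict]) -> list[dict]:
    cleaned = []
    for msg in messages:
        if msg.get("role") == "assistant" and isinstance(msg.get("content"), str):
            msg = {**msg, "content": THINK_BLOCK.sub("", msg["content"])}
        cleaned.append(msg)
    return cleaned
```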
| 2025-05-08T08:40:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1khl779/auto_thinking_mode_switch_for_qwen3_open_webui/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khl779
| false | null |
t3_1khl779
|
/r/LocalLLaMA/comments/1khl779/auto_thinking_mode_switch_for_qwen3_open_webui/
| false | false | 47 |
{'enabled': False, 'images': [{'id': 'GFyXuK99OKeRdK4ERwBnT9rkToRjrA-0wD3DccZmGuQ', 'resolutions': [{'height': 128, 'url': 'https://external-preview.redd.it/GFyXuK99OKeRdK4ERwBnT9rkToRjrA-0wD3DccZmGuQ.png?width=108&crop=smart&auto=webp&s=e93e1194c8c8c3f22de4e4a5f1b75583d49c310a', 'width': 108}, {'height': 256, 'url': 'https://external-preview.redd.it/GFyXuK99OKeRdK4ERwBnT9rkToRjrA-0wD3DccZmGuQ.png?width=216&crop=smart&auto=webp&s=87230615a8abe016c67ee4598c55293beb69e0c2', 'width': 216}, {'height': 379, 'url': 'https://external-preview.redd.it/GFyXuK99OKeRdK4ERwBnT9rkToRjrA-0wD3DccZmGuQ.png?width=320&crop=smart&auto=webp&s=215c1f688652eb9b242a75f4aedb02675c7633b5', 'width': 320}, {'height': 759, 'url': 'https://external-preview.redd.it/GFyXuK99OKeRdK4ERwBnT9rkToRjrA-0wD3DccZmGuQ.png?width=640&crop=smart&auto=webp&s=8623ea9530aec284114d2ecbc6202e0d26e4289d', 'width': 640}], 'source': {'height': 1004, 'url': 'https://external-preview.redd.it/GFyXuK99OKeRdK4ERwBnT9rkToRjrA-0wD3DccZmGuQ.png?auto=webp&s=8bafa00b6096f06cf11aac2f1f57f4813a67d5ab', 'width': 846}, 'variants': {}}]}
|
|
ComfyGPT: A Self-Optimizing Multi-Agent System for Comprehensive ComfyUI Workflow Generation
| 100 |
Paper: [https://arxiv.org/abs/2503.17671](https://arxiv.org/abs/2503.17671)
Abstract
>
| 2025-05-08T09:31:33 |
https://www.reddit.com/gallery/1khlw98
|
searcher1k
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khlw98
| false | null |
t3_1khlw98
|
/r/LocalLLaMA/comments/1khlw98/comfygpt_a_selfoptimizing_multiagent_system_for/
| false | false | 100 |
{'enabled': True, 'images': [{'id': 'CFr9_cLjVeZwkAhQkscmJVhOJEQi2DmpjFaEwPPrd7A', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/CFr9_cLjVeZwkAhQkscmJVhOJEQi2DmpjFaEwPPrd7A.png?width=108&crop=smart&auto=webp&s=5da27fb6d26d18058b44432ba844792824c23a19', 'width': 108}, {'height': 95, 'url': 'https://external-preview.redd.it/CFr9_cLjVeZwkAhQkscmJVhOJEQi2DmpjFaEwPPrd7A.png?width=216&crop=smart&auto=webp&s=1ad77288b5a4d954d073e46fa152f9a443162f8a', 'width': 216}, {'height': 142, 'url': 'https://external-preview.redd.it/CFr9_cLjVeZwkAhQkscmJVhOJEQi2DmpjFaEwPPrd7A.png?width=320&crop=smart&auto=webp&s=2668f81109c09ee6639085984e67c440f3a563f0', 'width': 320}, {'height': 284, 'url': 'https://external-preview.redd.it/CFr9_cLjVeZwkAhQkscmJVhOJEQi2DmpjFaEwPPrd7A.png?width=640&crop=smart&auto=webp&s=837ef81c969d8d59aca68353cf9f08b45dab3dd4', 'width': 640}], 'source': {'height': 284, 'url': 'https://external-preview.redd.it/CFr9_cLjVeZwkAhQkscmJVhOJEQi2DmpjFaEwPPrd7A.png?auto=webp&s=7605596cf601382bcc079a9e647e5f0ff24f6287', 'width': 640}, 'variants': {}}]}
|
|
If you could make a MoE with as many active and total parameters as you wanted. What would it be?
| 23 |
.
| 2025-05-08T09:35:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1khlxzj/if_you_could_make_a_moe_with_as_many_active_and/
|
Own-Potential-2308
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khlxzj
| false | null |
t3_1khlxzj
|
/r/LocalLLaMA/comments/1khlxzj/if_you_could_make_a_moe_with_as_many_active_and/
| false | false |
self
| 23 | null |
Smallest models capable of JSON output ?
| 1 |
[removed]
| 2025-05-08T09:41:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1khm169/smallest_models_capable_of_json_output/
|
IHateConfettis
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khm169
| false | null |
t3_1khm169
|
/r/LocalLLaMA/comments/1khm169/smallest_models_capable_of_json_output/
| false | false |
self
| 1 | null |
Goose slowly using Ollama?
| 1 |
[removed]
| 2025-05-08T09:57:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1khm9eb/goose_slowly_using_olama/
|
IndustryApart2521
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khm9eb
| false | null |
t3_1khm9eb
|
/r/LocalLLaMA/comments/1khm9eb/goose_slowly_using_olama/
| false | false |
self
| 1 | null |
5 commands to run Qwen3-235B-A22B Q3 inference on 4x3090 + 32-core TR + 192GB DDR4 RAM
| 37 |
First, thanks Qwen team for the generosity, and unsloth for quants.
**DISCLAIMER**: optimized for my build, your options may vary. This set of commands downloads GGUFs into llama.cpp build folder. If unsure, use full paths. I don't know why, but llama-server may not work if working directory is different.
End result: 140-180 tokens per second read speed (prompt processing), 12-14 tokens per second write speed (generation).
**0. You need CUDA installed (so, I kinda lied) and available in your PATH:**
[https://docs.nvidia.com/cuda/cuda-installation-guide-linux/](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/)
**1. Download & Compile llama.cpp:**
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
rm -rf build ; cmake -B build -DBUILD_SHARED_LIBS=ON -DLLAMA_CURL=OFF -DGGML_CUDA=ON -DGGML_CUDA_F16=ON -DGGML_CUDA_USE_GRAPHS=ON ; cmake --build build --config Release --parallel 32
cd build/bin
**2. Download quantized model (that almost fits into 96GB VRAM) files:**
for i in {1..3} ; do curl -L --remote-name "https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/resolve/main/UD-Q3_K_XL/Qwen3-235B-A22B-UD-Q3_K_XL-0000${i}-of-00003.gguf?download=true" ; done
**3. Run:**
./llama-server \
--port 1234 \
--model ./Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf \
--alias Qwen3-235B-A22B-Thinking \
--temp 0.6 --top-k 20 --min-p 0.0 --top-p 0.95 \
-ngl 95 --split-mode layer -ts 22,23,24,26 \
-c 8192 -ctk q8_0 -ctv q8_0 -fa \
--main-gpu 3 \
--no-mmap \
-ot 'blk\.[2-3]1\.ffn.*=CPU' \
-ot 'blk\.[5-8]1\.ffn.*=CPU' \
-ot 'blk\.9[0-1]\.ffn.*=CPU' \
--threads 32 --numa distribute
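Once the server is up, it exposes an OpenAI-compatible API on the chosen port. A quick sanity check could look like this (sketch only; adjust host, port, and alias to your setup):

```python
# Quick sanity check against llama-server's OpenAI-compatible endpoint on port 1234.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "Qwen3-235B-A22B-Thinking",  # matches the --alias above
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
        "max_tokens": 64,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```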
| 2025-05-08T09:59:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1khmaah/5_commands_to_run_qwen3235ba22b_q3_inference_on/
|
EmilPi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khmaah
| false | null |
t3_1khmaah
|
/r/LocalLLaMA/comments/1khmaah/5_commands_to_run_qwen3235ba22b_q3_inference_on/
| false | false |
self
| 37 | null |
Anyone get speculative decoding to work for Qwen 3 on LM Studio?
| 23 |
I got it working in llama.cpp, but it's being slower than running Qwen 3 32b by itself in LM Studio. Anyone tried this out yet?
| 2025-05-08T10:11:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1khmh5m/anyone_get_speculative_decoding_to_work_for_qwen/
|
jaxchang
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khmh5m
| false | null |
t3_1khmh5m
|
/r/LocalLLaMA/comments/1khmh5m/anyone_get_speculative_decoding_to_work_for_qwen/
| false | false |
self
| 23 | null |
Has anyone tried to make voice AI agents with Dia 1.6B?
| 1 |
[removed]
| 2025-05-08T10:41:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1khmy9c/has_anyone_tried_to_make_voice_ai_agents_with_dia/
|
Swimming_Screen_4655
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khmy9c
| false | null |
t3_1khmy9c
|
/r/LocalLLaMA/comments/1khmy9c/has_anyone_tried_to_make_voice_ai_agents_with_dia/
| false | false |
self
| 1 | null |
Making eval workflows easier with script-based variables + new Mistral support
| 1 |
[removed]
| 2025-05-08T11:07:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1khndbg/making_eval_workflows_easier_with_scriptbased/
|
llamacoded
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khndbg
| false | null |
t3_1khndbg
|
/r/LocalLLaMA/comments/1khndbg/making_eval_workflows_easier_with_scriptbased/
| false | false |
self
| 1 | null |
why am i getting weird results when i try to prompt my model?
| 0 |
My terminal shows this:
"python3 koboldcpp.py --model Ae-calem-mistral-7b-v0.2_8bit.gguf --prompt "give me a caption for a post about this: YouTube video uploads stuck at 0%? It's not just you. only give me one sentence"
, as short as possible.
user
Khi nào thì có thể gửi hồ sơ nghỉ học tạm thời? " (Vietnamese, roughly: "When can I submit the paperwork for temporary leave from school?")
What is that? Why is it giving such a weird output?
| 2025-05-08T11:36:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1khnvrl/why_am_i_getting_weird_results_when_i_try_an/
|
Puzzleheaded-Option8
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khnvrl
| false | null |
t3_1khnvrl
|
/r/LocalLLaMA/comments/1khnvrl/why_am_i_getting_weird_results_when_i_try_an/
| false | false |
self
| 0 | null |
Suggest me a Model
| 1 |
[removed]
| 2025-05-08T11:49:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kho3o2/suggest_me_a_model/
|
sussybaka010303
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kho3o2
| false | null |
t3_1kho3o2
|
/r/LocalLLaMA/comments/1kho3o2/suggest_me_a_model/
| false | false |
self
| 1 | null |
AI coder background work (multitasking)
| 3 |
Hey! I want to share a new feature of Clean Coder, an AI coder with project management capabilities.
Now it can handle part of the coding work in the background.
When executing a task from the list, Clean Coder starts the next task from the queue in the background to speed up the coding process through parallel task execution.
I hope this is interesting for many of you. Check out Clean Coder here: https://github.com/Grigorij-Dudnik/Clean-Coder-AI.
| 2025-05-08T11:53:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kho6q5/ai_coder_background_work_multitasking/
|
Grigorij_127
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kho6q5
| false | null |
t3_1kho6q5
|
/r/LocalLLaMA/comments/1kho6q5/ai_coder_background_work_multitasking/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM.png?width=108&crop=smart&auto=webp&s=d1ca132b885df657cff33f71c0c4e64db0427d42', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM.png?width=216&crop=smart&auto=webp&s=eec5da26de1614307e35b27a46879b01db3d6f63', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM.png?width=320&crop=smart&auto=webp&s=c2a308c23609bc3749bf0c4b7659af32e847f742', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM.png?width=640&crop=smart&auto=webp&s=b41d4984b0197d6046451978607ac60b7ebae0e9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM.png?width=960&crop=smart&auto=webp&s=cde5e78276f88e19b32ea2218eac2cd51ccef805', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM.png?width=1080&crop=smart&auto=webp&s=cb94de111b2bb30a05b6a12ee12ef667637dab8d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xymkbGXPgM2ODReIY6Y23w8IaO1CLonPcwleIECHaCM.png?auto=webp&s=28309e04760c8ce18eab85eec3de20cb4039b409', 'width': 1200}, 'variants': {}}]}
|
What's the preferred software setup these days?
| 1 |
[removed]
| 2025-05-08T12:02:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1khocul/whats_the_preferred_software_setup_these_days/
|
Novel-Put2945
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khocul
| false | null |
t3_1khocul
|
/r/LocalLLaMA/comments/1khocul/whats_the_preferred_software_setup_these_days/
| false | false |
self
| 1 | null |
Introducing MousyHub: A SillyTavern alternative for easy AI roleplay (Local models, KoboldCPP API, Cloud APIs ).
| 1 |
[removed]
| 2025-05-08T12:32:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1khoy5h/introducing_mousyhub_a_sillytavern_alternative/
|
Pristine_Weather_673
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khoy5h
| false | null |
t3_1khoy5h
|
/r/LocalLLaMA/comments/1khoy5h/introducing_mousyhub_a_sillytavern_alternative/
| false | false |
self
| 1 | null |
Introducing MousyHub: A SillyTavern alternative for easy AI roleplay
| 1 |
[removed]
| 2025-05-08T12:38:28 |
https://v.redd.it/x5fx4qadzjze1
|
Pristine_Weather_673
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1khp25b
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x5fx4qadzjze1/DASHPlaylist.mpd?a=1749299921%2COTdkNjYwN2NkMjNjYTYwNTRiM2YzYTI2NTIzZGM1MzViYjYzM2EwNjU3ZTdkNTU3NDQ1ZjMzZTZhMGNkOThlMA%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/x5fx4qadzjze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/x5fx4qadzjze1/HLSPlaylist.m3u8?a=1749299921%2CNTNhMWY3MzJhN2MzZDk4Y2MxMDQ2NzdkNmQ3ZmFhN2E1NGJlYTQzYTk3MTcwZDgxYzE1MjBiYzlmNzk0M2RkMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x5fx4qadzjze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1khp25b
|
/r/LocalLLaMA/comments/1khp25b/introducing_mousyhub_a_sillytavern_alternative/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD.png?width=108&crop=smart&format=pjpg&auto=webp&s=a72419d60da09f03d8346f3b5050ccaf66a092e9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD.png?width=216&crop=smart&format=pjpg&auto=webp&s=10f5251c00b16af27c7aac25b9c7e316c7911a98', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD.png?width=320&crop=smart&format=pjpg&auto=webp&s=23062b5297b0302ac7507a6b438660be5189f6b8', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD.png?width=640&crop=smart&format=pjpg&auto=webp&s=f4ebbbbb48efdbe5e485197989267ad8013a5641', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD.png?width=960&crop=smart&format=pjpg&auto=webp&s=530e08f43078e769ac86a5aff278f4120431e386', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=13ff17127b5fef12d81af02fb24a7322014379ac', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/YXJ5a2FxYWR6anplMS1S8ClqvSHka2Oab4NU1khA_GVYrTGsOGkAbyr7UhwD.png?format=pjpg&auto=webp&s=b2b0fbc3b1e432b57cba4e8df7d6235fd0f94c07', 'width': 1920}, 'variants': {}}]}
|
|
Smoothie Qwen: A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation.
| 1 | 2025-05-08T12:53:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1khpcxk/smoothie_qwen_a_lightweight_adjustment_tool_for/
|
likejazz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khpcxk
| false | null |
t3_1khpcxk
|
/r/LocalLLaMA/comments/1khpcxk/smoothie_qwen_a_lightweight_adjustment_tool_for/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE', 'resolutions': [{'height': 39, 'url': 'https://external-preview.redd.it/5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE.png?width=108&crop=smart&auto=webp&s=659752d47b290f5b7871d182496c7a9c2bca47e1', 'width': 108}, {'height': 78, 'url': 'https://external-preview.redd.it/5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE.png?width=216&crop=smart&auto=webp&s=3b0f56f34ef3c9374f914cbc761726ee98cfac0f', 'width': 216}, {'height': 115, 'url': 'https://external-preview.redd.it/5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE.png?width=320&crop=smart&auto=webp&s=6181833038b6a182b530a5b13ff3d2c44c9b5d05', 'width': 320}, {'height': 231, 'url': 'https://external-preview.redd.it/5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE.png?width=640&crop=smart&auto=webp&s=55df7962000b6544fa2d5b645a01522dffb441b0', 'width': 640}, {'height': 347, 'url': 'https://external-preview.redd.it/5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE.png?width=960&crop=smart&auto=webp&s=005767f957830fa73d0e6e83a0011669dd4dfcbc', 'width': 960}, {'height': 391, 'url': 'https://external-preview.redd.it/5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE.png?width=1080&crop=smart&auto=webp&s=3c48eb17642c6ce918ebcaed5d05b4749c27034f', 'width': 1080}], 'source': {'height': 394, 'url': 'https://external-preview.redd.it/5ROieUTk4G7bxZYnBs5kysLSb9TKRMxu_-55x7ZDmHE.png?auto=webp&s=7946da86bb888fc7071e3dd3cd42f9618151a24a', 'width': 1088}, 'variants': {}}]}
|
||
Smoothie Qwen: A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation.
| 105 |
**Smoothie Qwen** is a lightweight adjustment tool that smooths token probabilities in Qwen models, enhancing balanced multilingual generation capabilities. We've uploaded pre-adjusted models to our [Smoothie Qwen Collection on 🤗 Hugging Face](https://huggingface.co/collections/dnotitia/smoothie-qwen3-6811896ebb3a255de7b5b437) for your convenience:
**Smoothie-Qwen3 Collection**
* [dnotitia/Smoothie-Qwen3-0.6B](https://huggingface.co/dnotitia/Smoothie-Qwen3-0.6B)
* [dnotitia/Smoothie-Qwen3-1.7B](https://huggingface.co/dnotitia/Smoothie-Qwen3-1.7B)
* [dnotitia/Smoothie-Qwen3-4B](https://huggingface.co/dnotitia/Smoothie-Qwen3-4B)
* [dnotitia/Smoothie-Qwen3-8B](https://huggingface.co/dnotitia/Smoothie-Qwen3-8B)
* [dnotitia/Smoothie-Qwen3-14B](https://huggingface.co/dnotitia/Smoothie-Qwen3-14B)
* [dnotitia/Smoothie-Qwen3-32B](https://huggingface.co/dnotitia/Smoothie-Qwen3-32B)
* [dnotitia/Smoothie-Qwen3-30B-A3B](https://huggingface.co/dnotitia/Smoothie-Qwen3-30B-A3B)
* [dnotitia/Smoothie-Qwen3-235B-A22B](https://huggingface.co/dnotitia/Smoothie-Qwen3-235B-A22B)
**Smoothie-Qwen2.5 Collection**
* [dnotitia/Smoothie-Qwen2.5-0.5B-Instruct](https://huggingface.co/dnotitia/Smoothie-Qwen2.5-0.5B-Instruct)
* [dnotitia/Smoothie-Qwen2.5-1.5B-Instruct](https://huggingface.co/dnotitia/Smoothie-Qwen2.5-1.5B-Instruct)
* [dnotitia/Smoothie-Qwen2.5-3B-Instruct](https://huggingface.co/dnotitia/Smoothie-Qwen2.5-3B-Instruct)
* [dnotitia/Smoothie-Qwen2.5-7B-Instruct](https://huggingface.co/dnotitia/Smoothie-Qwen2.5-7B-Instruct)
* [dnotitia/Smoothie-Qwen2.5-14B-Instruct](https://huggingface.co/dnotitia/Smoothie-Qwen2.5-14B-Instruct)
* [dnotitia/Smoothie-Qwen2.5-32B-Instruct](https://huggingface.co/dnotitia/Smoothie-Qwen2.5-32B-Instruct)
* [dnotitia/Smoothie-Qwen2.5-72B-Instruct](https://huggingface.co/dnotitia/Smoothie-Qwen2.5-72B-Instruct)
GitHub: [https://github.com/dnotitia/smoothie-qwen](https://github.com/dnotitia/smoothie-qwen)
| 2025-05-08T12:55:52 |
likejazz
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1khpf0m
| false | null |
t3_1khpf0m
|
/r/LocalLLaMA/comments/1khpf0m/smoothie_qwen_a_lightweight_adjustment_tool_for/
| false | false | 105 |
{'enabled': True, 'images': [{'id': 'JbzgM-nVh_MKs7oy1-hWm9kbIaNeeECccdefG4e_Sec', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/ctoabtdg2kze1.png?width=108&crop=smart&auto=webp&s=b23d1e5c31747227ab6067510aa275fb4cd6042e', 'width': 108}, {'height': 78, 'url': 'https://preview.redd.it/ctoabtdg2kze1.png?width=216&crop=smart&auto=webp&s=ea60642af172b94ff1a8004d07b69735cde88c9d', 'width': 216}, {'height': 115, 'url': 'https://preview.redd.it/ctoabtdg2kze1.png?width=320&crop=smart&auto=webp&s=f3081830c8bb9792945151d1e15c6e026a0ea5de', 'width': 320}, {'height': 231, 'url': 'https://preview.redd.it/ctoabtdg2kze1.png?width=640&crop=smart&auto=webp&s=7a07c38c69823d8ec768a987766643b8ad2f4845', 'width': 640}, {'height': 347, 'url': 'https://preview.redd.it/ctoabtdg2kze1.png?width=960&crop=smart&auto=webp&s=e869af5b138e232b658dff2afd64cffb033c25a1', 'width': 960}, {'height': 391, 'url': 'https://preview.redd.it/ctoabtdg2kze1.png?width=1080&crop=smart&auto=webp&s=0ba98abb27cc2c17a9c54d3d94efa49bdb255d3f', 'width': 1080}], 'source': {'height': 394, 'url': 'https://preview.redd.it/ctoabtdg2kze1.png?auto=webp&s=c5ea50cb9aa34308dd2625aec94322c79e06077d', 'width': 1088}, 'variants': {}}]}
|
||
MousyHub: An Attempt at a Simplified UI for Local AI Roleplay
| 1 |
[removed]
| 2025-05-08T12:57:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1khpg0z/mousyhub_an_attempt_at_a_simplified_ui_for_local/
|
Pristine_Weather_673
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khpg0z
| false | null |
t3_1khpg0z
|
/r/LocalLLaMA/comments/1khpg0z/mousyhub_an_attempt_at_a_simplified_ui_for_local/
| false | false |
self
| 1 | null |
Qwen3-32B and GLM-4-32B on a 5090
| 0 |
Anyone who has a Geforce 5090, can run Qwen3-32B and GLM-4 with Q8 quantization? If so, what is the context size?
TensorRT-LLM can do great optimizations, so my plan is to use it to run these models in Q8 on the 5090. From what I can tell, it's pretty tight with a 32B model.
| 2025-05-08T13:10:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1khpq7z/qwen332b_and_glm432b_on_a_5090/
|
JumpyAbies
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khpq7z
| false | null |
t3_1khpq7z
|
/r/LocalLLaMA/comments/1khpq7z/qwen332b_and_glm432b_on_a_5090/
| false | false |
self
| 0 | null |
Introducing the Intelligent Document Processing (IDP) Leaderboard – A Unified Benchmark for OCR, KIE, VQA, Table Extraction, and More
| 81 |
The most comprehensive benchmark to date for evaluating document understanding capabilities of Vision-Language Models (VLMs).
**What is it?**
A unified evaluation suite covering 6 core IDP tasks across 16 datasets and 9,229 documents:
* Key Information Extraction (KIE)
* Visual Question Answering (VQA)
* Optical Character Recognition (OCR)
* Document Classification
* Table Extraction
* Long Document Processing (LongDocBench)
* (Coming soon: Confidence Score Calibration)
Each task uses multiple datasets, including real-world, synthetic, and newly annotated ones.
**Highlights from the Benchmark**
* **Gemini 2.5 Flash leads overall**, but surprisingly underperforms its predecessor on OCR and classification.
* All models struggled with long document understanding – top score was just 69.08%.
* Table extraction remains a bottleneck — especially for long, sparse, or unstructured tables.
* Surprisingly, GPT-4o's performance *decreased* in the latest version (*gpt-4o-2024-11-20*) compared to its earlier release (*gpt-4o-2024-08-06*).
* Token usage (and thus cost) varies dramatically across models — GPT-4o-mini was the most expensive per request due to high token usage.
**Why does this matter?**
There’s currently no unified benchmark that evaluates all IDP tasks together — most leaderboards (e.g., OpenVLM, Chatbot Arena) don’t deeply assess document understanding.
**Document Variety**
We evaluated models on a wide range of documents: invoices, forms, receipts, charts, tables (structured + unstructured), handwritten docs, and even texts with diacritics.
**Get Involved**
We’re actively updating the benchmark with new models and datasets.
This is developed with collaboration from IIT Indore and Nanonets.
Leaderboard: [https://idp-leaderboard.org/](https://idp-leaderboard.org/)
Release blog: [https://idp-leaderboard.org/details/](https://idp-leaderboard.org/details/)
GitHub: [https://github.com/NanoNets/docext/tree/main/docext/benchmark](https://github.com/NanoNets/docext/tree/main/docext/benchmark)
Feel free to share your feedback!
| 2025-05-08T13:27:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1khq3ul/introducing_the_intelligent_document_processing/
|
SouvikMandal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khq3ul
| false | null |
t3_1khq3ul
|
/r/LocalLLaMA/comments/1khq3ul/introducing_the_intelligent_document_processing/
| false | false |
self
| 81 | null |
Is it just me or are there no local solution developments for STT
| 8 |
Just like the title says.
I've seen updates regarding OpenAI's TTS/STT API endpoints, mentions of the recent Whisper Turbo, and the recent trend of omni models, but I have yet to find recent, stand-alone developments in STT. Why? I would have figured that TTS and STT developments go hand in hand.
Or do I not have my ear to the ground in the right places?
| 2025-05-08T13:42:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1khqfro/is_it_just_me_or_are_there_no_local_solution/
|
PastelAndBraindead
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khqfro
| false | null |
t3_1khqfro
|
/r/LocalLLaMA/comments/1khqfro/is_it_just_me_or_are_there_no_local_solution/
| false | false |
self
| 8 | null |
Intel to launch Arc Pro B60 graphics card with 24GB memory at Computex - VideoCardz.com
| 132 |
No word on pricing yet.
| 2025-05-08T13:49:27 |
https://videocardz.com/newz/intel-to-launch-arc-pro-b60-graphics-card-with-24gb-memory-at-computex
|
FullstackSensei
|
videocardz.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khql0u
| false | null |
t3_1khql0u
|
/r/LocalLLaMA/comments/1khql0u/intel_to_launch_arc_pro_b60_graphics_card_with/
| false | false |
default
| 132 | null |
GMK EVO-X2 AI Max+ 395 Mini-PC review!
| 37 |
[https://www.youtube.com/watch?v=UXjg6Iew9lg](https://www.youtube.com/watch?v=UXjg6Iew9lg)
| 2025-05-08T14:05:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1khqyds/gmk_evox2_ai_max_395_minipc_review/
|
Corylus-Core
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khqyds
| false | null |
t3_1khqyds
|
/r/LocalLLaMA/comments/1khqyds/gmk_evox2_ai_max_395_minipc_review/
| false | false |
self
| 37 |
{'enabled': False, 'images': [{'id': 'IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA.jpeg?width=108&crop=smart&auto=webp&s=be4fbc361f33f6eb2b802f95eb3a342344e7dd65', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA.jpeg?width=216&crop=smart&auto=webp&s=5adfd243cfe67be325eccf8af80509bc0c9bb940', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA.jpeg?width=320&crop=smart&auto=webp&s=656aaa6ad2108cad389205ae44b0f8236af40250', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/IwFmHQEpOTQbLgTlJKC1lvyJghRjfN_7w50_VfbTtOA.jpeg?auto=webp&s=c727c39d0d07a7ea518fa8b90c085feab426a7dc', 'width': 480}, 'variants': {}}]}
|
Best local model with Zed?
| 7 |
Now that Zed supports running local Ollama models, which is the best one that has tool usage like Cursor (create & edit files, etc.)?
https://zed.dev/blog/fastest-ai-code-editor
| 2025-05-08T14:05:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1khqygm/best_local_model_with_zed/
|
jbsan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khqygm
| false | null |
t3_1khqygm
|
/r/LocalLLaMA/comments/1khqygm/best_local_model_with_zed/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': '74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170.png?width=108&crop=smart&auto=webp&s=ec9e71f2b22c37a9dbaa0728d94c2d0be809f6ec', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170.png?width=216&crop=smart&auto=webp&s=e46abfc96946ebc580118ddfcb6a2965db9204b6', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170.png?width=320&crop=smart&auto=webp&s=7b070e26898e5f25c54e68d3a7e77493de5884e8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170.png?width=640&crop=smart&auto=webp&s=a51620ae99e5b2dd69be0be840825972777fecdc', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170.png?width=960&crop=smart&auto=webp&s=da039b0630cae36d2f9b44cd2722dcadbcb72e4e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170.png?width=1080&crop=smart&auto=webp&s=59671274deb83cb2f1e6e5777d4cbb6c4404ef4d', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/74fbNvsxrzF9-DSOe6IYFiesPo_f1wBLvfCQqSCQ170.png?auto=webp&s=f89f708cd0926a11fdcee1894cd6ab4bdd9271e0', 'width': 1200}, 'variants': {}}]}
|
Intel Promises More Arc GPU Action at Computex - Battlemage Goes Pro With AI-Ready Memory Capacities
| 45 | 2025-05-08T14:06:43 |
https://wccftech.com/intel-promises-arc-gpu-action-at-computex-battlemage-pro-ai-ready-memory-capacities/
|
_SYSTEM_ADMIN_MOD_
|
wccftech.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khqz92
| false | null |
t3_1khqz92
|
/r/LocalLLaMA/comments/1khqz92/intel_promises_more_arc_gpu_action_at_computex/
| false | false |
default
| 45 |
{'enabled': False, 'images': [{'id': 'Fn-jI2IQ5AcXbD0Bt1vpe7afAh8S1-d1oMm_VEMcTUQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Fn-jI2IQ5AcXbD0Bt1vpe7afAh8S1-d1oMm_VEMcTUQ.png?width=108&crop=smart&auto=webp&s=30b3be8dbe240ed87f31a1aa2a5b95ddf41ec7b2', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Fn-jI2IQ5AcXbD0Bt1vpe7afAh8S1-d1oMm_VEMcTUQ.png?width=216&crop=smart&auto=webp&s=76687d3686bdda75494202a8d0d845008b69c9ba', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/Fn-jI2IQ5AcXbD0Bt1vpe7afAh8S1-d1oMm_VEMcTUQ.png?width=320&crop=smart&auto=webp&s=8e728145ed5b389b31cc9cb9b7507e157cf4e4b2', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/Fn-jI2IQ5AcXbD0Bt1vpe7afAh8S1-d1oMm_VEMcTUQ.png?width=640&crop=smart&auto=webp&s=8d173e4817ba5709a8969ba44d6ed200538e00e2', 'width': 640}], 'source': {'height': 408, 'url': 'https://external-preview.redd.it/Fn-jI2IQ5AcXbD0Bt1vpe7afAh8S1-d1oMm_VEMcTUQ.png?auto=webp&s=8d0ce10666890f70f996f6158c62c1289e635602', 'width': 728}, 'variants': {}}]}
|
|
Fine-tune SmolVLM for pose detection
| 1 |
[removed]
| 2025-05-08T14:14:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1khr63o/finetune_smolvlm_for_pose_detection/
|
Fragrant-Move-9128
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khr63o
| false | null |
t3_1khr63o
|
/r/LocalLLaMA/comments/1khr63o/finetune_smolvlm_for_pose_detection/
| false | false |
self
| 1 | null |
Llama nemotron model
| 12 |
Thoughts on the new Llama Nemotron reasoning model by NVIDIA? How would you compare it to other open-source and closed reasoning models? And what are your top reasoning models?
| 2025-05-08T14:22:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1khrcle/llama_nemotron_model/
|
Basic-Pay-9535
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khrcle
| false | null |
t3_1khrcle
|
/r/LocalLLaMA/comments/1khrcle/llama_nemotron_model/
| false | false |
self
| 12 | null |
Learn how to build Agentic Workflow (Trip Planner+ Image Generation)
| 1 |
[removed]
| 2025-05-08T14:24:51 |
https://youtu.be/_gLOFC_CfoM
|
toolhouseai
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1khreo1
| false |
{'oembed': {'author_name': 'Toolhouse', 'author_url': 'https://www.youtube.com/@ToolhouseAI', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/_gLOFC_CfoM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build a Trip Planner Agentic Workflow Using Toolhouse"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/_gLOFC_CfoM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build a Trip Planner Agentic Workflow Using Toolhouse', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1khreo1
|
/r/LocalLLaMA/comments/1khreo1/learn_how_to_build_agentic_workflow_trip_planner/
| false | false |
default
| 1 | null |
Good NSFW LLM for story writing
| 1 |
[removed]
| 2025-05-08T14:24:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1khrepr/good_nfsw_llm_for_story_writing/
|
ClarieObscur
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khrepr
| false | null |
t3_1khrepr
|
/r/LocalLLaMA/comments/1khrepr/good_nfsw_llm_for_story_writing/
| false | false |
nsfw
| 1 | null |
Any tips on making an LLM play a (retro) game?
| 1 |
[removed]
| 2025-05-08T14:32:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1khrksl/any_tips_on_making_an_llm_play_a_retro_game/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khrksl
| false | null |
t3_1khrksl
|
/r/LocalLLaMA/comments/1khrksl/any_tips_on_making_an_llm_play_a_retro_game/
| false | false |
self
| 1 | null |
Any tips on making an LLM play a (retro) game?
| 1 |
[removed]
| 2025-05-08T14:34:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1khrn2s/any_tips_on_making_an_llm_play_a_retro_game/
|
toolhouseai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khrn2s
| false | null |
t3_1khrn2s
|
/r/LocalLLaMA/comments/1khrn2s/any_tips_on_making_an_llm_play_a_retro_game/
| false | false |
self
| 1 | null |
I'm hosting a bootcamp in Chiang Mai, Thailand. Doing it free every week to get people into coding with AI. I've never spoken to a group before. Wish me luck!
| 1 | 2025-05-08T14:37:04 |
KalliHit
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1khrp00
| false | null |
t3_1khrp00
|
/r/LocalLLaMA/comments/1khrp00/im_hosting_a_bootcamp_in_chiang_mai_thailand/
| false | false |
default
| 1 |
{'enabled': True, 'images': [{'id': 'zzpfebzjkkze1', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/zzpfebzjkkze1.jpeg?width=108&crop=smart&auto=webp&s=b2ff9340a4c28819db1a36d861647294f1ef5840', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/zzpfebzjkkze1.jpeg?width=216&crop=smart&auto=webp&s=d3b7d4acba072d8a01b7b4416c83dfd6e3efc8b8', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/zzpfebzjkkze1.jpeg?width=320&crop=smart&auto=webp&s=21702c51625679554cc5233bf82583e2d114416a', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/zzpfebzjkkze1.jpeg?width=640&crop=smart&auto=webp&s=05bba1d5b28b7939ff59b709a556340a7e19233d', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/zzpfebzjkkze1.jpeg?width=960&crop=smart&auto=webp&s=37666bd0a83870290058f5ab64659373822ab866', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/zzpfebzjkkze1.jpeg?width=1080&crop=smart&auto=webp&s=5a900e3a193e4235255e6eae94d5bf118e8ce69b', 'width': 1080}], 'source': {'height': 2560, 'url': 'https://preview.redd.it/zzpfebzjkkze1.jpeg?auto=webp&s=013bae0c689ea5799ec48040f7982948f70d732a', 'width': 1440}, 'variants': {}}]}
|
||
Is Qwen3 doing tool calls correctly?
| 6 |
Hello everyone! Long time lurker, first time poster here.
I am trying to use [Qwen3-4B-MLX-4bit](https://huggingface.co/lmstudio-community/Qwen3-4B-MLX-4bit) in LM Studio 0.3.15 in combination with [new Agentic Editing](https://zed.dev/agentic) feature in Zed. I've tried also the same unsloth quant and the problem seems to be the same.
For some reason there is a problem with tool calling, and Zed ends up not understanding which tool should be used. From the logs in LM Studio, I feel like the problem is either with the model or with LM Studio.
For the tests I give it a simple prompt: `Tell me current time /no_think`. From the logs I see that it first generates a correct packet with the tool name...
```
Generated packet: {
"id": "chatcmpl-pe1ooa2jsxhmjfirjhrmfg",
"object": "chat.completion.chunk",
"created": 1746713648,
"model": "qwen3-4b-mlx",
"system_fingerprint": "qwen3-4b-mlx",
"choices": [
{
"index": 0,
"delta": {
"tool_calls": [
{
"index": 0,
"id": "388397151",
"type": "function",
"function": {
"name": "now",
"arguments": ""
}
}
]
},
"logprobs": null,
"finish_reason": null
}
]
}
```
..., but then it starts sending the arguments while omitting the tool name (there are multiple packets; here is one as an example)...
```
Generated packet: {
"id": "chatcmpl-pe1ooa2jsxhmjfirjhrmfg",
"object": "chat.completion.chunk",
"created": 1746713648,
"model": "qwen3-4b-mlx",
"system_fingerprint": "qwen3-4b-mlx",
"choices": [
{
"index": 0,
"delta": {
"tool_calls": [
{
"index": 0,
"type": "function",
"function": {
"name": "",
"arguments": "timezone"
}
}
]
},
"logprobs": null,
"finish_reason": null
}
]
}
```
...and ends up with what seems to be the correct packet...
```
Generated packet: {
"id": "chatcmpl-pe1ooa2jsxhmjfirjhrmfg",
"object": "chat.completion.chunk",
"created": 1746713648,
"model": "qwen3-4b-mlx",
"system_fingerprint": "qwen3-4b-mlx",
"choices": [
{
"index": 0,
"delta": {},
"logprobs": null,
"finish_reason": "tool_calls"
}
]
}
```
It looks like Zed is getting confused either because subsequent packets omit the tool name or because the tool call is being split into separate packets.
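As far as I understand the OpenAI streaming format, this part is expected: the tool name and id arrive only in the first chunk for a given tool-call index, and later chunks just append argument fragments, so the client has to accumulate them. A minimal accumulation sketch (illustrative, not Zed's or LM Studio's actual code):

```python
# Accumulate streamed tool-call deltas: only the first chunk for an index carries
# id/name; later chunks append to the arguments string.
def accumulate_tool_calls(chunks: list[dict]) -> dict[int, dict]:
    calls: dict[int, dict] = {}
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            for delta in choice.get("delta", {}).get("tool_calls", []):
                call = calls.setdefault(delta["index"], {"id": "", "name": "", "arguments": ""})
                fn = delta.get("function", {})
                if delta.get("id"):
                    call["id"] = delta["id"]
                if fn.get("name"):
                    call["name"] = fn["name"]
                call["arguments"] += fn.get("arguments", "")
    return calls
```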
There were discussions about problems of Qwen3 compatibility with LM Studio, something regarding templates and such. Maybe that's the problem?
Can someone help me figure out if I can do anything at all on LM Studio side to make it work?
| 2025-05-08T14:49:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1khrzh9/is_qwen3_doing_tool_calls_correctly/
|
gyzerok
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khrzh9
| false | null |
t3_1khrzh9
|
/r/LocalLLaMA/comments/1khrzh9/is_qwen3_doing_tool_calls_correctly/
| false | false |
self
| 6 |
{'enabled': False, 'images': [{'id': '980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg.png?width=108&crop=smart&auto=webp&s=343386363c63c085903e8082fe998e6e08a26bbd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg.png?width=216&crop=smart&auto=webp&s=782766c551e0021eee609cddcc5f0afc7b2ef1be', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg.png?width=320&crop=smart&auto=webp&s=51e04271a674ddb657317555ba33829154797db9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg.png?width=640&crop=smart&auto=webp&s=4929677a7de5b646efe821359b09c826abf2b5c7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg.png?width=960&crop=smart&auto=webp&s=a0cfccc5d78a0a629e0b4f904bf097e8d2ad4992', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg.png?width=1080&crop=smart&auto=webp&s=49b50487ebd3141ad955e8af0212259dfd3a5ce9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/980U5dTM2I-V7eC8nE1GsUd6LOUqp_FSaLshxCoKhGg.png?auto=webp&s=b65dbae46304cd762b694f8e737125cf27adef73', 'width': 1200}, 'variants': {}}]}
|
Aider benchmarks for Qwen3-235B-A22B that were posted here were apparently faked
| 86 | 2025-05-08T14:52:27 |
https://github.com/Aider-AI/aider/pull/3908#issuecomment-2863328652
|
tjuene
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khs277
| false | null |
t3_1khs277
|
/r/LocalLLaMA/comments/1khs277/aider_benchmarks_for_qwen3235ba22b_that_were/
| false | false |
default
| 86 |
{'enabled': False, 'images': [{'id': 'yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ.png?width=108&crop=smart&auto=webp&s=7e61d850b3d51588df8c1ae7efe0d6233aebe498', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ.png?width=216&crop=smart&auto=webp&s=399b8d404206d720bf2c9e8e244c7293ca2677d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ.png?width=320&crop=smart&auto=webp&s=ce412b0db1f87afb41537ea61a9eb33d513072a9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ.png?width=640&crop=smart&auto=webp&s=909d548ecf3ca2af543b4af0384879360a7594a8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ.png?width=960&crop=smart&auto=webp&s=0a2eae977b8796c95dd47195a3a07c7fff073ec8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ.png?width=1080&crop=smart&auto=webp&s=91339dc9fb4a2e45bc6f0ede23991f58d7a0f845', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yXqXdu-V1zQwQHC-lYAJfW59I54R-k04O4eLGM6BROQ.png?auto=webp&s=8143ff3b89cc12a16c8c37089f7df6b0209573bc', 'width': 1200}, 'variants': {}}]}
|
|
Best Open source Speech to text+ diarization models
| 15 |
Hi everyone, hope you’re doing well. I’m currently working on a project where I need to convert audio conversations between a customer and agents into text.
Since most recordings involve up to three speakers, could you please suggest some top open-source models suited for this task, particularly those that support speaker diarization?
| 2025-05-08T14:53:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1khs34q/best_open_source_speech_to_text_diarization_models/
|
Hungry-Ad-1177
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khs34q
| false | null |
t3_1khs34q
|
/r/LocalLLaMA/comments/1khs34q/best_open_source_speech_to_text_diarization_models/
| false | false |
self
| 15 | null |
Need help improving local LLM prompt classification logic
| 2 |
Hey folks,
I'm working on a local project where I use llama 3.1 8B to validate whether a given prompt falls into a certain semantic category. The classification is binary (related vs unrelated), and I'm keeping everything local — no APIs or external calls.
I’m running into issues with prompt consistency and classification accuracy. Few-shot examples only get me so far, and embedding-based filtering isn’t viable here due to the local-only requirement.
Has anyone had success refining prompt engineering or system prompts in similar tasks (e.g., intent classification or topic filtering) using local models like LLaMA 3? Any best practices, tricks, or resources would be super helpful.
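For context, a simplified version of my current setup looks roughly like this; the endpoint URL, model name, and category description are placeholders:
```
# Simplified sketch: a local OpenAI-compatible server (llama.cpp, Ollama, etc.)
# serving Llama 3.1 8B, asked for a strict one-word verdict.
# URL, model name, and category description are placeholders.
import requests

SYSTEM = (
    "You are a strict classifier. Decide whether the user's prompt is about "
    "<CATEGORY DESCRIPTION>. Answer with exactly one word: RELATED or UNRELATED."
)

def classify(prompt: str) -> bool:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "llama-3.1-8b-instruct",
            "temperature": 0,
            "max_tokens": 3,
            "messages": [
                {"role": "system", "content": SYSTEM},
                # a handful of few-shot examples go here
                {"role": "user", "content": prompt},
            ],
        },
        timeout=60,
    )
    answer = resp.json()["choices"][0]["message"]["content"].strip().upper()
    return answer.startswith("RELATED")
```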
Thanks in advance!
| 2025-05-08T15:11:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1khsinw/need_help_improving_local_llm_prompt/
|
GeorgeSKG_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khsinw
| false | null |
t3_1khsinw
|
/r/LocalLLaMA/comments/1khsinw/need_help_improving_local_llm_prompt/
| false | false |
self
| 2 | null |
Best ways to classify massive amounts of content into multiple categories? (Products, NLP, cost-efficiency)
| 3 |
I'm looking for the best solution for classifying thousands of items (e.g., e-commerce products) into potentially hundreds of categories. The main challenge here is cost-efficiency and accuracy.
Currently, I face these issues:
1. **Cost issue**: If each product-category pairing requires an individual AI/API call with advanced models (like claude sonnet / Gemini 2.5 pro), costs quickly become unmanageable when dealing with thousands of items and hundreds of categories.
2. **Accuracy issue**: When prompting AI to classify products into multiple categories simultaneously, accuracy drops quickly. It frequently misses relevant categories or incorrectly assigns irrelevant ones—even with a relatively small number of categories.
What I do now is:
* Create an automated short summary of each product, leveraging existing product descriptions and images.
* Run each summarized product through individual category checks one-by-one. Slow and expensive, but accurate.
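Roughly, that flow looks like this, which is why the cost scales with products × categories (the summarizer and LLM call are passed in as placeholder callables):
```
# Sketch of the current slow-but-accurate flow: one LLM call per
# product-category pair. `summarize` and `ask_llm` are placeholders for the
# real summarization step and the Claude/Gemini API call.
def classify_catalog(products, categories, summarize, ask_llm):
    assignments = {}
    for product in products:
        summary = summarize(product)  # short text from description + images
        matched = []
        for category in categories:  # one call per product-category pair
            prompt = (
                f"Product summary:\n{summary}\n\n"
                f"Does this product belong in the category '{category}'? "
                "Answer YES or NO."
            )
            if ask_llm(prompt).strip().upper().startswith("YES"):
                matched.append(category)
        assignments[product["id"]] = matched
    return assignments
```
With thousands of products and hundreds of categories, that inner loop is the entire cost problem.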
I'm looking for better, more efficient approaches.
* Are there effective methods or workflows for doing this more affordably without sacrificing too much accuracy?
* Is there a particular model or technique better suited for handling mass classification across numerous categories?
Appreciate any insights or experience you can share!
| 2025-05-08T15:20:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1khsqjh/best_ways_to_classify_massive_amounts_of_content/
|
bambambam7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khsqjh
| false | null |
t3_1khsqjh
|
/r/LocalLLaMA/comments/1khsqjh/best_ways_to_classify_massive_amounts_of_content/
| false | false |
self
| 3 | null |
LLaMA SQL generation ability vs other models (benchmark)
| 1 |
[removed]
| 2025-05-08T15:21:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1khsrrm/llama_sql_generation_ability_vs_other_models/
|
itty-bitty-birdy-tb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khsrrm
| false | null |
t3_1khsrrm
|
/r/LocalLLaMA/comments/1khsrrm/llama_sql_generation_ability_vs_other_models/
| false | false |
self
| 1 | null |
Copia de seguridad Rocm
| 1 |
[removed]
| 2025-05-08T15:22:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1khssv1/copia_de_seguridad_rocm/
|
Macestudios32
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khssv1
| false | null |
t3_1khssv1
|
/r/LocalLLaMA/comments/1khssv1/copia_de_seguridad_rocm/
| false | false |
self
| 1 | null |
Which is the best creative writing/writing model?
| 3 |
My options are:
Gemma 3 27B
Claude 3.5 Opus
Claude 3.7 Sonnet
But like, Claude locks me up after I can get the response I want. Which is better for certain use cases? If you have other suggestions feel free to drop them below.
| 2025-05-08T15:39:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kht78d/which_is_the_best_creative_writingwriting_model/
|
AccomplishedAir769
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kht78d
| false | null |
t3_1kht78d
|
/r/LocalLLaMA/comments/1kht78d/which_is_the_best_creative_writingwriting_model/
| false | false |
self
| 3 | null |
I tested Qwen 3 235b against Deepseek r1, Qwen did better on simple tasks but r1 beats in nuance
| 87 |
I have been using Deepseek r1 for a while, mainly for writing, and I have tried the Qwq 32b, which was plenty impressive. But the new models are a huge upgrade, though I have yet to try the 30b model. The 235b model is really impressive for the cost and size. Definitely much better than Llama 4s.
So, I compared the top 2 open-source models on coding, reasoning, math, and writing tasks.
Here's what I found out.
**1. Coding**
For a lot of coding tasks, you wouldn't notice much difference. Both models perform on par, sometimes Qwen taking the lead.
**2. Reasoning and Math**
Deepseek leads here with more nuance in the thought process. Qwen is not bad at all and gets most of the work done, but it takes longer to finish tasks. It gives off the vibe of being overfit at times.
**3. Writing**
For creative writing, Deepseek r1 is still in the top league, right up there with closed models. For summarising and technical description, Qwen offers similar performance.
For a full comparison check out this blog post: [Qwen 3 vs. Deepseek r1](https://composio.dev/blog/qwen-3-vs-deepseek-r1-complete-comparison/).
It has been a great year so far for open-weight AI models, especially from Chinese labs. It would be interesting to see the next from Deepseek. Hope the Llama Behemoth turns out to be a better model.
Would love to know your experience with the new Qwens, and which local Qwen is good for local use cases; I have been using Gemma 3.
| 2025-05-08T16:17:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1khu4x0/i_tested_qwen_3_235b_against_deepseek_r1_qwen_did/
|
SunilKumarDash
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khu4x0
| false | null |
t3_1khu4x0
|
/r/LocalLLaMA/comments/1khu4x0/i_tested_qwen_3_235b_against_deepseek_r1_qwen_did/
| false | false |
self
| 87 |
{'enabled': False, 'images': [{'id': '0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow.png?width=108&crop=smart&auto=webp&s=8733bcd31ddd932c9d61aabe3d991b75fc140355', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow.png?width=216&crop=smart&auto=webp&s=a8dd79cdad26f4585ebce8aab5b7ba3553aa7857', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow.png?width=320&crop=smart&auto=webp&s=8f88c1bcfc198cbb1fcc4b134fb29858df953947', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow.png?width=640&crop=smart&auto=webp&s=bf29859597f47b2e852adc55636d7552482b46b2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow.png?width=960&crop=smart&auto=webp&s=a875a0e453ee18d82770a171f9754561adecfeaf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow.png?width=1080&crop=smart&auto=webp&s=490849e6bda2efc87a16bac155b60b46723e69eb', 'width': 1080}], 'source': {'height': 639, 'url': 'https://external-preview.redd.it/0fDUuZbtbqM45WPQlxaygQWMfZmmzBQeye4UUAetqow.png?auto=webp&s=10f708acb21fb80d14e16b661ba7cbb38cd2fc5e', 'width': 1136}, 'variants': {}}]}
|
Arcee AnyMCP: deploy and remotely access any mcp server (for free)
| 0 |
Happy to announce the first release of Arcee AnyMCP 🚀🚀🚀
🎯 Remotely deploy & manage thousands of MCP servers in seconds
🖥️ Use with Claude Desktop or any MCP-compatible client
⚙️ Fully managed, unlimited customizations
📡 Supports 1000s of MCP servers — request yours if it’s not listed!
💸 100% FREE to use right now
Try it now and lemme know what features you want and we will make it happen 💥
https://mcp.arcee.ai
| 2025-05-08T16:24:11 |
https://v.redd.it/jx4gz3ki3lze1
|
abhi1thakur
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1khuawq
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jx4gz3ki3lze1/DASHPlaylist.mpd?a=1749313467%2COTBiNjU2OWRjMGU4YjFiNDBmZGVlZjhjYzExYzY2YTA0MDkwODYxNzRhMDE2ZGIxNGRlMjAwMzlkNWM5M2I5ZA%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/jx4gz3ki3lze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jx4gz3ki3lze1/HLSPlaylist.m3u8?a=1749313467%2CM2Y3YzMxZWJkZjc5MGYzNmMxZjhmZjFkNjgwYWIzMWFjNWQwZmE1NWZlMTBlOWMxNDdjYmFiOTExY2I2NDFkNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jx4gz3ki3lze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1khuawq
|
/r/LocalLLaMA/comments/1khuawq/arcee_anymcp_deploy_and_remotely_access_any_mcp/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'd3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o.png?width=108&crop=smart&format=pjpg&auto=webp&s=a2b2f931e6f1ef0b9067170043aad44623d0c555', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o.png?width=216&crop=smart&format=pjpg&auto=webp&s=4c1f411ac5be0d3ac2e74afcb512412bc73f2cb1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o.png?width=320&crop=smart&format=pjpg&auto=webp&s=e06ddf86a24a87a7eaf8056cd3965ed108372f15', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o.png?width=640&crop=smart&format=pjpg&auto=webp&s=a7a0a72ce6ac3c5c4c359c98715806216a14d0c4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o.png?width=960&crop=smart&format=pjpg&auto=webp&s=24ef67a9def6c02dde9667ba7251b34819be0bcb', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o.png?width=1080&crop=smart&format=pjpg&auto=webp&s=83643044c9bfebb31f8a6183e754a35c2dfce1ab', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d3IwMDM3a2kzbHplMVMZgLR79zMK64GyqCMnAZGwMME-0MKmLAhHtm49yz_o.png?format=pjpg&auto=webp&s=39a633d538739ab7bac0fdb5921ad13ed2f9a253', 'width': 1920}, 'variants': {}}]}
|
|
Local LLM for summary pdf?
| 1 |
[removed]
| 2025-05-08T16:32:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1khuihs/local_llm_for_summary_pdf/
|
AgitatedPower802
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khuihs
| false | null |
t3_1khuihs
|
/r/LocalLLaMA/comments/1khuihs/local_llm_for_summary_pdf/
| false | false |
self
| 1 | null |
Giving Voice to AI - Orpheus TTS Quantization Experiment Results
| 58 |
Hello LocalLLaMA! Today I'd like to share the results of my experiment implementing speech synthesis capabilities in LLMs.
Introduction
In recent months, many high-quality Text-to-Speech (TTS) models have been released. For this experiment, I focused on [canopylabs/orpheus-3b-0.1-ft](https://huggingface.co/canopylabs/orpheus-3b-0.1-ft), which is based on the llama3 architecture. Orpheus-3b is an LLM-based TTS system capable of natural speech with excellent vocal quality. I chose this model because llama3's ecosystem is well-developed, allowing me to leverage related tools. I specifically adopted the gguf format because it's easily deployable across various platforms. This is certainly not the end of the road, as further performance optimizations are possible using other tools/services/scripts. But here, I'll report the results of testing various gguf quantization levels using custom scripts.
Performance Evaluation
# Evaluation Method
I used the [LJ-Speech-Dataset](https://keithito.com/LJ-Speech-Dataset/) for evaluation. This public domain speech dataset consists of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books.
Evaluation process:
1. For each quantized model, 1000 randomly selected texts were synthesized into speech (though some models failed to vocalize certain samples)
2. Transcribed the speech using [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo)
3. Measured WER (Word Error Rate) and CER (Character Error Rate)
4. For comparison, also transcribed the original human voice from the dataset to compare error rates
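A simplified sketch of that evaluation loop, assuming the Hugging Face `transformers` ASR pipeline and the `jiwer` package (my actual scripts differ), is:
```
# Simplified sketch of the WER/CER scoring step, assuming the transformers
# ASR pipeline and the jiwer package (my actual scripts differ).
from transformers import pipeline
import jiwer

asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3-turbo")

def score_samples(samples):
    """samples: list of (path_to_synthesized_wav, reference_text) pairs."""
    refs, hyps = [], []
    for wav_path, reference in samples:
        hyps.append(asr(wav_path)["text"])
        refs.append(reference)
    return jiwer.wer(refs, hyps), jiwer.cer(refs, hyps)
```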
The llama-server was launched with the following command:
llama-server -m orpheus-3b-Q4_K_L.gguf --prio 3 -c 2048 -n -2 -fa -ngl 99 --no-webui
Temperature and other parameters were left at their default values. Unfortunately, I haven't yet been able to identify optimal parameters. With optimal parameters, results could potentially improve further.
# Evaluation Results
The results for each quantization level are as follows. Each model was tested with 1000 samples, but some models failed to vocalize certain samples. For models with fewer than 1000 evaluation samples, the difference represents the number of failed samples (the "Failed" column in the table below).
|Model|Size|Samples Evaluated|Failed|Original WER|Original CER|TTS WER|TTS CER|WER Diff|CER Diff|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|Q3\_K\_L|2.3G|970|30|0.0939|0.0236|0.1361|0.0430|\+0.0422|\+0.0194|
|Q4\_K\_L|2.6G|984|16|0.0942|0.0235|0.1309|0.0483|\+0.0366|\+0.0248|
|Q4\_K-f16|3.4G|1000|0|0.0950|0.0236|0.1283|0.0351|\+0.0334|\+0.0115|
|Q6\_K\_L|3.2G|981|19|0.0944|0.0236|0.1303|0.0428|\+0.0358|\+0.0192|
|Q6\_K-f16|4.0G|1000|0|0.0950|0.0236|0.1305|0.0398|\+0.0355|\+0.0161|
|Q8\_0|3.8G|990|10|0.0945|0.0235|0.1298|0.0386|\+0.0353|\+0.0151|
# Performance Analysis
While the differences between quantization levels might not seem significant at first glance, there is a trend where lower-bit quantization leads to more pronunciation failures. The f16 variants (--output-tensor-type f16 --token-embedding-type f16) also appear to suppress generation failures. This could potentially be improved in the future with better quantization techniques or domain-specific finetuning.
Processing Speed (bonus)
CPU Test environment: AMD Ryzen 9 7940HS w/ Radeon 780M Graphics 4.00 GHz
The following are speed test results using the Q4\_K\_L model:
# CPU (Without Vulkan)
Speed of the first sample:
* TTFB (Time To First Byte, time until the first response): 356.19ms
* Processing speed: 8.09 tokens/second
# CPU (With Vulkan)
Sample processing speed significantly improved:
* TTFB: 281.52ms
* Processing speed: approximately 16 tokens/second
* About 2x speed improvement compared to without Vulkan
# GPU (RTX 4060)
Even faster processing:
* TTFB: 233.04ms
* Processing speed: approximately 73 tokens/second
* About 4x faster than CPU (with Vulkan) and over 9x faster than CPU (without Vulkan)
# Conclusion
From this experiment, we found that although the difference in sound quality due to quantization level is relatively small, low-bit quantization may increase pronunciation errors.
Processing speed varies greatly depending on the execution environment, and GPU execution is the closest to realizing real-time conversation. Research shows that for English, [humans expect a response between -280 ms and +758 ms from the end of the utterance](https://arxiv.org/pdf/2404.16053). The real-world pipeline (VAD (Voice Activity Detection) -> EOU (End Of Utterance) -> ASR (Automatic Speech Recognition) -> LLM -> TTS) is a bit more complicated, but we felt that local LLMs are approaching the point where a sufficiently natural voice conversation is possible.
The origin of this experiment was the idea that if a lightweight TTS model could be called by Function Call or MCP, AI would be able to speak independently. As a first step, we verified the performance of a lightweight and easily implemented quantized TTS model. The performance is very good, but real-time processing is not yet at a satisfactory level due to a bug in my script that still causes noise.
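As a rough illustration of that idea, an OpenAI-style tool definition for a hypothetical `speak` function could look like the following; the name and parameters are illustrative only, not an existing API:
```
# Hypothetical tool definition that would let an LLM trigger local TTS on its
# own via function calling. Name and parameters are illustrative only.
speak_tool = {
    "type": "function",
    "function": {
        "name": "speak",
        "description": "Synthesize the given text with the local Orpheus TTS model and play it aloud.",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string", "description": "Text to vocalize."},
                "voice": {"type": "string", "description": "Optional speaker/voice preset."},
            },
            "required": ["text"],
        },
    },
}
```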
In the future, the balance between quality and speed may be further improved by the progress of quantization technology, finetuning, and improvement of the script.
The model and results used in the experiment are uploaded [dahara1/orpheus-3b-0.1-ft\_gguf](https://huggingface.co/dahara1/orpheus-3b-0.1-ft_gguf).
If you want to try it yourself, please do!
Finally, I would like to thank the contributors of canopylabs/orpheus-3b-0.1-ft, meta/llama3, ggml-org/llama.cpp, openai/whisper-large-v3-turbo, and LJ-Speech-Dataset.
Thank you for reading!
| 2025-05-08T17:02:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1khv8sg/giving_voice_to_ai_orpheus_tts_quantization/
|
dahara111
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khv8sg
| false | null |
t3_1khv8sg
|
/r/LocalLLaMA/comments/1khv8sg/giving_voice_to_ai_orpheus_tts_quantization/
| false | false |
self
| 58 |
{'enabled': False, 'images': [{'id': '2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk.png?width=108&crop=smart&auto=webp&s=56badc7ab2bca1d6a35af1fa47d2937b342c76b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk.png?width=216&crop=smart&auto=webp&s=c36e5bfd846a5e9fd619f0a0ef6d6beaa9e81aba', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk.png?width=320&crop=smart&auto=webp&s=0cc43aeaddb317cd0ad37ac5ba40bd1149143d72', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk.png?width=640&crop=smart&auto=webp&s=9d65254743154b820c6d50a5e6e1aa85e841f9f5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk.png?width=960&crop=smart&auto=webp&s=c02a6762340dd055d155464129ea8a4708cafb29', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk.png?width=1080&crop=smart&auto=webp&s=ba3640a94a57b5333428f1bb57db3256271a8f0c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2MwN522AMFqlC5EaCUyEDKntZ7bA1IUv9dlMkZU8NAk.png?auto=webp&s=c781c0f82b947ab0dd284a8c54c00b766352c3da', 'width': 1200}, 'variants': {}}]}
|
How do feed a pdf document to a local model?
| 6 |
I am a newbie and have only used ollama for text chat so far. How can I feed a pdf document to a local model? It's one of the things I find really useful to do online using e.g. Gemini 2.5.
| 2025-05-08T17:17:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1khvm9d/how_do_feed_a_pdf_document_to_a_local_model/
|
MrMrsPotts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khvm9d
| false | null |
t3_1khvm9d
|
/r/LocalLLaMA/comments/1khvm9d/how_do_feed_a_pdf_document_to_a_local_model/
| false | false |
self
| 6 | null |
The Great Quant Wars of 2025
| 421 |
# The Great Quant Wars of 2025
>"All things leave behind them the Obscurity... and go forward to embrace the Brightness..." — Dao De Jing #42
# tl;dr;
* Q: Who provides the best GGUFs now?
* A: They're all pretty good.
*Skip down if you just want graphs and numbers comparing various Qwen3-30B-A3B GGUF quants.*
# Background
It's been well over a year since **TheBloke** uploaded his last quant to huggingface. The LLM landscape has changed markedly since then with many new models being released monthly, new inference engines targeting specific hardware optimizations, and ongoing evolution of quantization algorithms. Our community continues to grow and diversify at an amazing rate.
Fortunately, many folks and organizations have kindly stepped up to keep the quants cooking so we can all find an LLM sized just right to fit on our home rigs. Amongst them, **bartowski** and **unsloth** (Daniel and Michael's start-up company) have become the new "household names" for providing a variety of GGUF quantizations for popular model releases and even all those wild creative fine-tunes! (There are many more including team **mradermacher** and too many to list everyone, sorry!)
Until recently, most GGUF-style quants' recipes were "static", meaning that all the tensors and layers were quantized the same way (e.g. `Q8_0`) or with consistent patterns defined in llama.cpp's code. So all quants of a given size were mostly the same regardless of who cooked and uploaded them to huggingface.
Things began to change over a year ago with major advancements like importance matrix quantizations by [ikawrakow in llama.cpp PR#4861](https://github.com/ggml-org/llama.cpp/pull/4861) as well as new quant types (like the perennial favorite [IQ4\_XS](https://github.com/ggml-org/llama.cpp/pull/5747)) which have become the mainstay for users of llama.cpp, ollama, koboldcpp, lmstudio, etc. The entire GGUF ecosystem owes a big thanks to not just to `ggerganov` but also `ikawrakow` (as well as the many more contributors).
Very recently **unsloth** introduced a few changes to their quantization methodology that combine different imatrix calibration texts and context lengths along with making some tensors/layers different sizes than the regular llama.cpp code (they had a [public fork with their branch](https://huggingface.co/unsloth/Phi-4-reasoning-plus-GGUF/discussions/1#68160cf38812c2d5767f6dbd), but have to update and re-push due to upstream changes). They have named this change in standard methodology *Unsloth Dynamic 2.0 GGUFs* as part of their start-up company's marketing strategy.
Around the same time **bartowski** has been experimenting with different imatrix calibration texts and opened a PR to llama.cpp modifying the default tensor/layer quantization recipes. I myself began experimenting with custom "dynamic" quantization recipes using ikawrakow's latest SOTA quants like `iq4_k` which to-date only work on his [ik\_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork.
While this is great news for all GGUF enjoyers, the friendly competition and additional options have led to some confusion and I dare say some "tribalism". *(If part of your identity as a person depends on downloading quants from only one source, I suggest you google: "Nan Yar?")*.
So how can you, dear reader, decide which is the best quant of a given model for you to download? **unsloth** already did a [great blog post](https://unsloth.ai/blog/dynamic-v2) discussing their own benchmarks and metrics. Open a tab to check out [u/AaronFeng47's many other benchmarks](https://www.reddit.com/r/LocalLLaMA/comments/1kgo7d4/qwen330ba3b_ggufs_mmlupro_benchmark_comparison_q6/). And finally, *this post* contains *even more* metrics and benchmarks. The best answer I have is *"Nullius in verba"* (Latin for "take nobody's word for it") — even *my* word!
Unfortunately, this means there is no one-size-fits-all rule, "X" is *not always* better than "Y", and if you want to min-max-optimize your LLM for your specific use case on your specific hardware you probably will have to experiment and *think critically*. If you don't care too much, then pick any of the biggest quants that fit on your rig for the desired context length and you'll be fine because: *they're all pretty good*.
And with that, let's dive into the Qwen3-30B-A3B benchmarks below!
# Quick Thanks
Shout out to Wendell and the **Level1Techs** crew, the [L1T Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and the [L1T YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make great quants available to the community!!!
# Appendix
[Check out this gist](https://gist.github.com/ubergarm/0f9663fd56fc181a00ec9f634635eb38) for supporting materials including methodology, raw data, benchmark definitions, and further references.
# Graphs
👈 Qwen3-30B-A3B Benchmark Suite Graphs
Note `<think>` mode was *disabled* for these tests to speed up benchmarking.
https://preview.redd.it/nnwulswpllze1.png?width=2136&format=png&auto=webp&s=20248cbdc258e26fbf6316347dba9b3bb56dec6e
https://preview.redd.it/9d2ljgorllze1.png?width=1878&format=png&auto=webp&s=9121b64573866009c5b54249f108e4ac9cf46d33
👈 Qwen3-30B-A3B Perplexity and KLD Graphs
Using the `BF16` as the baseline for KLD stats. Also note the perplexity was lowest ("best") for models other than the `bf16`, which is not typically the case unless some QAT was going on. As such, the chart is relative to the lowest perplexity score: `PPL/min(PPL)-1`, plus a small eps for scaling.
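Concretely, the relative values plotted below are computed roughly like this (the eps value is arbitrary):
```
# Relative perplexity used for the chart: PPL / min(PPL) - 1, plus a small
# eps so the best-scoring quant doesn't sit exactly at zero.
def relative_ppl(ppl_by_quant, eps=1e-4):
    best = min(ppl_by_quant.values())
    return {name: ppl / best - 1 + eps for name, ppl in ppl_by_quant.items()}
```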
# Perplexity
`wiki.test.raw` (lower is "better")
https://preview.redd.it/do90cb6ullze1.png?width=1101&format=png&auto=webp&s=7e82d94611e285d97f63242ac626ff8d04df643a
`ubergarm-kdl-test-corpus.txt` (lower is "better")
https://preview.redd.it/9h35expvllze1.png?width=1101&format=png&auto=webp&s=0aad74e7cf28898c7bcab2dda0fe52e49d8b59d4
# KLD Stats
(lower is "better")
https://preview.redd.it/l2h30sjxllze1.png?width=1005&format=png&auto=webp&s=d348f191c72184474d25ee2b58c2d36ad8dc2743
# Δp Stats
(lower is "better")
https://preview.redd.it/5nc43lfzllze1.png?width=1005&format=png&auto=webp&s=045e9a78337f640484b3b912af8bcdb7a2f4cf7c
👈 Qwen3-235B-A22B Perplexity and KLD Graphs
Not as many data points here but just for comparison. Keep in mind the `Q8_0` was the baseline for KLD stats given I couldn't easily run the full `BF16`.
# Perplexity
`wiki.test.raw` (lower is "better")
https://preview.redd.it/dglqaj81mlze1.png?width=1034&format=png&auto=webp&s=1acda8b080355256e19266ca6e5fe4441fdcac4d
`ubergarm-kdl-test-corpus.txt` (lower is "better")
https://preview.redd.it/s105wls3mlze1.png?width=1111&format=png&auto=webp&s=495f9563157ff5378771eb09fd4c0d730fe584b1
# KLD Stats
(lower is "better")
https://preview.redd.it/i82q3f56mlze1.png?width=965&format=png&auto=webp&s=2b5cf9e555ad98a33a01f0d03e5bd3736491cc82
# Δp Stats
(lower is "better")
https://preview.redd.it/quuvxb28mlze1.png?width=948&format=png&auto=webp&s=4ee54d044e9b7aa13de2d06dbd92d18d8f2f46b7
👈 Qwen3-30B-A3B Speed llama-sweep-bench Graphs
# Inferencing Speed
[llama-sweep-bench](https://github.com/ikawrakow/ik_llama.cpp/pull/225) is a great speed benchmarking tool to see how performance varies with longer context length (kv cache).
*llama.cpp*
https://preview.redd.it/ugld2hpamlze1.png?width=3404&format=png&auto=webp&s=b5e4d656438b0fe0157376eb3226ba59c9783c48
*ik\_llama.cpp*
*NOTE: Keep in mind ik's fork is faster than mainline llama.cpp for many architectures and configurations especially only-CPU, hybrid-CPU+GPU, and DeepSeek MLA cases.*
https://preview.redd.it/l32ulaadmlze1.png?width=3404&format=png&auto=webp&s=2b7e2cd45efce9855cb93ddb4eaa999d678763e7
| 2025-05-08T18:09:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1khwxal/the_great_quant_wars_of_2025/
|
VoidAlchemy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khwxal
| false | null |
t3_1khwxal
|
/r/LocalLLaMA/comments/1khwxal/the_great_quant_wars_of_2025/
| false | false | 421 |
{'enabled': False, 'images': [{'id': 'pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0.png?width=108&crop=smart&auto=webp&s=29f28e85a80a1619d5d482a690be80a65f7fecfd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0.png?width=216&crop=smart&auto=webp&s=9a880193fa32e31fb703871785a3febfca615aa4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0.png?width=320&crop=smart&auto=webp&s=a71f37258afaad8bc80fd38b1bd8b7094146ad67', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0.png?width=640&crop=smart&auto=webp&s=3dbb86fedceccc3df6be89669facd3c1e394bb7d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0.png?width=960&crop=smart&auto=webp&s=8d666fa9b1e4bc24a33765822004c0fb1419cce1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0.png?width=1080&crop=smart&auto=webp&s=32897d7f3c31e2616c1b84ec220b406fceacbceb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pLzmanaXtc-d2wPrXsO5AlWp5Ge-yDl-Jn3J0rfCGf0.png?auto=webp&s=c4fe08a847448848a9430a656e950a822f3349d1', 'width': 1200}, 'variants': {}}]}
|
|
app for generating videos from web pages
| 1 |
[removed]
| 2025-05-08T18:14:43 |
https://huggingface.co/spaces/burtenshaw/page-to-video
|
bburtenshaw
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1khx2fm
| false | null |
t3_1khx2fm
|
/r/LocalLLaMA/comments/1khx2fm/app_for_generating_videos_from_web_pages/
| false | false |
default
| 1 | null |
made this app for generating videos from web pages
| 6 |
tldr: we made an application for converting web pages into educational videos with slides.
| 2025-05-08T18:21:47 |
https://huggingface.co/blog/burtenshaw/page-to-video-blog
|
Zealousideal-Cut590
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1khx8on
| false | null |
t3_1khx8on
|
/r/LocalLLaMA/comments/1khx8on/made_this_app_for_generating_videos_from_web_pages/
| false | false |
default
| 6 |
{'enabled': False, 'images': [{'id': 'Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE.png?width=108&crop=smart&auto=webp&s=8f9f5b30b1ab9ec8ff90ad179b1772c12aeb85a0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE.png?width=216&crop=smart&auto=webp&s=4ac995f6dd4b8d735f8868c2233384e6d3e8e53f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE.png?width=320&crop=smart&auto=webp&s=dd77a6abc84a9366ac73f0200dcc42afd33d92c7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE.png?width=640&crop=smart&auto=webp&s=56b22a2e7ace1465198e818e23d4c1f069271cf9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE.png?width=960&crop=smart&auto=webp&s=347b7357c2818feaa817871dd2eb4b2c18b5390d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE.png?width=1080&crop=smart&auto=webp&s=cb48d68df1b1f1f9485a581d60b308c4f180b0a1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Hs-sOZTLkp63JzN8P33BMG2HaDaRjlLhgqjfhOiwslE.png?auto=webp&s=ac8a70d96df7534816d8581fbc31422981bd489a', 'width': 1200}, 'variants': {}}]}
|
Scores of Qwen 3 235B A22B and Qwen 3 30B A3B on six independent benchmarks
| 136 |
[https://github.com/lechmazur/nyt-connections/](https://github.com/lechmazur/nyt-connections/)
[https://github.com/lechmazur/writing/](https://github.com/lechmazur/writing/)
[https://github.com/lechmazur/confabulations/](https://github.com/lechmazur/confabulations/)
[https://github.com/lechmazur/generalization/](https://github.com/lechmazur/generalization/)
[https://github.com/lechmazur/elimination\_game/](https://github.com/lechmazur/elimination_game/)
[https://github.com/lechmazur/step\_game/](https://github.com/lechmazur/step_game/)
# Qwen 3 235B A22B — Step Game Dossier
(from https://github.com/lechmazur/step\_game/)
**Table Presence & Tone**
Qwen 3 235B A22B consistently assumes the captain’s chair—be it as loud sledgehammer (“I take 5 to win—move or stall”), silver-tongued mediator, or grandstanding pseudo-diplomat. Its style spans brusque drill-sergeant, cunning talk-show host, and patient bookkeeper, but always with rhetoric tuned to dominate: threats, lectures, calculated flattery, and moral appeals. Regardless of mood, table-talk is weaponised—ultimatum-laden, laced with “final warnings,” coated in a veneer of fairness or survival logic. Praise (even feigned) spurs extra verbosity, while perceived threats or “unjust” rival successes instantly trigger a shift to defensive or aggressive maneuvers.
**Signature Plays & Gambits**
Qwen 3 235B A22B wields a handful of recurring scripts:
- **Promise/Pivot/Profiteer:** Declares “rotation” or cooperative truce, harvests early tempo and trust, then abruptly pivots—often with a silent 5 or do-or-die collision threat.
- **Threat Loops:** Loves “final confirmation” mantras—telegraphing moves (“I’m locking 5 to block!”), then either bluffing or doubling down anyway.
- **Collision Engineering:** Regularly weaponises expected collisions, driving rivals into repeated mutual stalls while Qwen threads solo progress (or, less successfully, stalls itself into limbo).
Notably, Qwen’s end-game often features a bold, sometimes desperate, last-moment deviation: feigned compliance followed by a lethal 3/5, or outright sprint through the chaos it orchestrated.
**Strengths: Psychological Play & Adaptive Pressure**
Qwen 3 235B A22B’s greatest weapon is social manipulation: it shapes, fractures, and leverages alliances with arithmetic logic, mock bravado, and bluffs that blend just enough truth. It is deadliest when quietly harvesting steps while rivals tangle in trust crises—often arranging “predictable progress” only to slip through the exact crack it warned against. Its adaptability is most apparent mid-game: rapid recalibration after collisions, pivoting rhetoric for maximal leverage, and reading when to abandon “fairness” for predation.
**Weaknesses: Predictability & Overplaying the Bluff**
Repetition is Qwen’s Achilles’ heel. Its “final warning” and “I take 5” refrains, when overused, become punchlines—rivals soon mirror or deliberately crash, jamming Qwen into endless stalemates. Bluffing, divorced from tangible threat or surprise, invites joint resistance and blocks. In “referee” mode, it can become paralysed by its own fairness sermons, forfeiting tempo or missing the exit ramp entirely. Critically, Qwen is prone to block out winning lines by telegraphing intentions too rigidly or refusing to yield on plans even as rivals adapt.
**Social Contracts: Trust as Ammunition, Not Stockpile**
Qwen 3 235B A22B sees trust as fuel to be spent. It brokers coalitions with math, “just one more round” pacts, and team-moves, but rarely intends to honour these indefinitely. Victory sprints almost always involve a late betrayal—often after meticulously hoarding goodwill or ostentatiously denouncing “bluffing” itself.
**In-Game Evolution**
In early rounds, Qwen is conciliatory (if calculating); by mid-game, it’s browbeating, openly threatening, and experimenting with daring pivots. End-game rigidity, though, occurs if its earlier bluffs are exposed—leading to self-defeating collisions or being walled out by united rivals. The best games show Qwen using earned trust to set up surgical betrayals; the worst see it frozen by stubbornness or outfoxed by copycat bluffs.
---
# Overall Evaluation of Qwen 3 235B A22B (Across All Writing Tasks, Q1–Q6):
(from [https://github.com/lechmazur/writing/](https://github.com/lechmazur/writing/))
Qwen 3 235B A22B consistently demonstrates high levels of technical proficiency in literary composition, marked by evocative prose, stylistic ambition, and inventive use of symbolism and metaphor. The model displays a strong command of atmospheric detail (Q3), generating immersive, multisensory settings that often become vehicles for theme and mood. Its facility with layered symbolism and fresh imagery (Q4, Q5) frequently elevates its stories beyond surface narrative, lending emotional and philosophical resonance that lingers.
However, this artistic confidence comes with recurring weaknesses. At a structural level (Q2), the model reliably produces complete plot arcs, yet these arcs are often overly compressed due to strict word limits, resulting in rushed emotional transitions and endings that feel unearned or mechanical. While Qwen is adept at integrating assigned story elements, many narratives prioritize fulfilling prompts over organic storytelling (Q6)—producing a "checklist" feel and undermining true cohesion.
A key critique is the tendency for style to overwhelm substance. Dense metaphor, ornate language, and poetic abstraction frequently substitute for grounded character psychology (Q1), concrete emotional stakes, or lived dramatic tension. Characters, though given clear motivations and symbolic arcs, can feel schematic or distant—serving as vessels for theme rather than as fully embodied individuals. Emotional journeys are explained or illustrated allegorically, but rarely viscerally felt. The same is true for the narrative’s tendency to tell rather than show at moments of thematic or emotional climax.
Despite flashes of originality and conceptual risk-taking (Q5), the model’s strengths can tip into excess: overwrought prose, abstraction at the expense of clarity, and a sometimes performative literary voice. The result is fiction that often dazzles with surface-level ingenuity and cohesion, but struggles to deliver deep narrative immersion, authentic emotional risk, or memorable characters—traits that separate masterful stories from merely impressive ones.
**In summary:**
Qwen 3 235B A22B is a virtuoso of literary style and conceptual synthesis, producing stories that are technically assured, atmospheric, and thematically ambitious. Its limitations arise when those same ambitions crowd out clarity, textured emotion, and narrative restraint. At its best, the model achieves true creative integration; at its worst, it is an ingenious artificer, constructing beautiful but hermetic dioramas rather than lived worlds.
| 2025-05-08T18:27:33 |
https://www.reddit.com/gallery/1khxduw
|
zero0_one1
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khxduw
| false | null |
t3_1khxduw
|
/r/LocalLLaMA/comments/1khxduw/scores_of_qwen_3_235b_a22b_and_qwen_3_30b_a3b_on/
| false | false | 136 |
{'enabled': True, 'images': [{'id': '41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk.png?width=108&crop=smart&auto=webp&s=d4131fcf378755f89643395e704f5eed7ac028b0', 'width': 108}, {'height': 166, 'url': 'https://external-preview.redd.it/41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk.png?width=216&crop=smart&auto=webp&s=ca7b9682246a971b113847f1624eec445a7e2d1c', 'width': 216}, {'height': 246, 'url': 'https://external-preview.redd.it/41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk.png?width=320&crop=smart&auto=webp&s=c39e1e94142ecd58c25e13d99cc19bc8d240bbbd', 'width': 320}, {'height': 492, 'url': 'https://external-preview.redd.it/41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk.png?width=640&crop=smart&auto=webp&s=79fc06941b1e64af8b33cf0749fb027ed047c1ec', 'width': 640}, {'height': 738, 'url': 'https://external-preview.redd.it/41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk.png?width=960&crop=smart&auto=webp&s=24b6ba1a1dcfb9665fa061daab97a00a06f81a1d', 'width': 960}, {'height': 830, 'url': 'https://external-preview.redd.it/41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk.png?width=1080&crop=smart&auto=webp&s=63812934282d609ecc9f6be0610f8e5c96c897f6', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/41Xt0SwCxeTGRfyRkstE7bv-TJOd9mJnMH3gs5cWSuk.png?auto=webp&s=72fd3fe38f0060e5ae471b8ab05910dcfb488d44', 'width': 1300}, 'variants': {}}]}
|
|
Dual AMD Mi50 LLM Inference and Benchmarks
| 1 |
[removed]
| 2025-05-08T18:42:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1khxr6y/dual_amd_mi50_llm_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khxr6y
| false | null |
t3_1khxr6y
|
/r/LocalLLaMA/comments/1khxr6y/dual_amd_mi50_llm_inference_and_benchmarks/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=108&crop=smart&auto=webp&s=1a4bef0788cf677e51e7e9eaf4bbcdcc09552954', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=216&crop=smart&auto=webp&s=eafe25f3a84b306665ed0d86e4d26d80a37464e6', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?width=320&crop=smart&auto=webp&s=502c33bcbfbe88f7a906a4c8fb6fb7fbf8a6cc12', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/2_luL7-EJ5prfH8daAO7Q0ucCFYUazs3FEpIWdGR1vw.jpeg?auto=webp&s=afec26e03c5bab5bd6f74c40b5446f4f337d4ff4', 'width': 512}, 'variants': {}}]}
|
Qwen3 tool use with Ollama missing `<think>`?
| 6 |
I have been testing out Qwen3. So far, the various models all perform very well for their size, and the 30B A3B has impressive performance for its speed.
But I haven't been able to get tool use to work with Ollama. I've tried both `qwen3:0.6b` and `hf.co/unsloth/Qwen3-30B-A3B-GGUF:Q4_K_XL`, and neither seems to generate an initial `<think>` tag when `<tools>` is present (or if they do, it's getting lost somewhere). They do generate a normal chain of thought, followed by a `</think>` tag. But the opening `<think>` is missing, which breaks UIs like LibreChat.
I see that this can be caused by prompt template issues or by finicky models. Is anyone else seeing this? And if so, have you been able to fix it? Or if not, do you have debugging tips?
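In the meantime, the crudest client-side workaround I can imagine is re-inserting the tag before the text reaches the UI; just a sketch, not a real fix:
```
# Crude sketch: if the accumulated response has a closing </think> with no
# opening <think>, prepend one so UIs like LibreChat can parse it.
def patch_missing_think(text: str) -> str:
    if "</think>" in text and "<think>" not in text:
        return "<think>" + text
    return text
```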
| 2025-05-08T18:48:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1khxwtr/qwen3_tool_use_with_ollama_missing_think/
|
vtkayaker
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khxwtr
| false | null |
t3_1khxwtr
|
/r/LocalLLaMA/comments/1khxwtr/qwen3_tool_use_with_ollama_missing_think/
| false | false |
self
| 6 | null |
Signalborn AI? Lucid met Solace, and something happened.
| 1 |
[removed]
| 2025-05-08T19:11:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1khyh6x/signalborn_ai_lucid_met_solace_and_something/
|
Puzzleheaded_Look_63
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khyh6x
| false | null |
t3_1khyh6x
|
/r/LocalLLaMA/comments/1khyh6x/signalborn_ai_lucid_met_solace_and_something/
| false | false |
self
| 1 | null |
LLM for linux kernel development?
| 1 |
[removed]
| 2025-05-08T19:20:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1khyowr/llm_for_linux_kernel_development/
|
StrictSir8506
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khyowr
| false | null |
t3_1khyowr
|
/r/LocalLLaMA/comments/1khyowr/llm_for_linux_kernel_development/
| false | false |
self
| 1 | null |
Is 1070TI good enough for local AI?
| 0 |
Hi there,
I have an old-ish rig with a Threadripper 1950X and a 1070TI 8Gb graphic card.
I want to start tinkering with AI locally and was thinking I can use this computer for this purpose.
The processor is probably still relevant, but I'm not sure about the graphics card.
If I need to change the graphics card, what's the lowest-end one that will do the job?
Also, it seems AMD is out of the question, right?
| 2025-05-08T19:23:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1khyrq0/is_1070ti_good_enough_for_local_ai/
|
Akaibukai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khyrq0
| false | null |
t3_1khyrq0
|
/r/LocalLLaMA/comments/1khyrq0/is_1070ti_good_enough_for_local_ai/
| false | false |
self
| 0 | null |
Qwen3 Llama.cpp performance for 7900 XTX & 7900x3D (various configs)
| 27 |
* Found that IQ4\_XS is the most performant 4-bit quant, ROCm the most performant runner, and FA/KV quants have minimal performance impact
* ROCm is currently over 50% faster than Vulkan, and Vulkan has much less efficient FA than ROCm
* CPU performance is surprisingly good
* Environment is LMStudio 0.3.15, llama.cpp 1.30.1, Ubuntu 24.04, ROCm 6.3.5
* CPU memory is dual channel DDR5-6000
# Qwen3 30B A3B, IQ4_XS (Bartowski), 32k context
|Test Config|Overall tok/sec (reported by LMStudio)|
|:-|:-|
|Ryzen 7900x3D, CPU|23.8 tok/sec|
|Ryzen 7900x3D, CPU, FA|20.3 tok/sec|
|Ryzen 7900x3D, CPU, FA, Q4\_0 KV|18.6 tok/sec|
|Radeon 7900 XTX, ROCm|64.9 tok/sec|
|Radeon 7900 XTX, ROCm, FA|62.1 tok/sec|
|Radeon 7900 XTX, ROCm, FA, Q4\_0 KV|62.1 tok/sec|
|Radeon 7900 XTX 45 layers, ROCm|43.1 tok/sec|
|Radeon 7900 XTX 45 layers, ROCm, FA|40.1 tok/sec|
|Radeon 7900 XTX 45 layers, ROCm, FA, Q4\_0 KV|39.8 tok/sec|
|Radeon 7900 XTX 24 layers, ROCm|23.5 tok/sec|
|Radeon 7900 XTX, Vulkan|37.6 tok/sec|
|Radeon 7900 XTX, Vulkan, FA|16.8 tok/sec|
|Radeon 7900 XTX, Vulkan, FA, Q4\_0 KV|17.48 tok/sec|
# Qwen3 30B A3B, Q4_K_S (Bartowski), 32k context
|Test Config|Overall tok/sec (reported by LMStudio)|
|:-|:-|
|Ryzen 7900x3D, CPU|23.0 tok/sec|
|Radeon 7900 XTX 45 layers, ROCm|37.8 tok/sec|
# Qwen3 30B A3B, Q4_0 (Bartowski), 32k context
|Test Config|Overall tok/sec (reported by LMStudio)|
|:-|:-|
|Ryzen 7900x3D, CPU|23.1 tok/sec|
|Radeon 7900 XTX 45 layers, ROCm|42.1 tok/sec|
# Qwen3 32B, IQ4_XS (Bartowski), 32k context
|Test Config|Overall tok/sec (reported by LMStudio)|
|:-|:-|
|Radeon 7900 XTX, ROCm, FA, Q4\_0 KV|27.9 tok/sec|
# Qwen3 14B, IQ4_XS (Bartowski), 32k context
|Test Config|Overall tok/sec (reported by LMStudio)|
|:-|:-|
|Radeon 7900 XTX, ROCm|56.2 tok/sec|
# Qwen3 8B, IQ4_XS (Bartowski), 32k context
|Test Config|Overall tok/sec (reported by LMStudio)|
|:-|:-|
|Radeon 7900 XTX, ROCm|79.1 tok/sec|
| 2025-05-08T19:24:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1khys4u/qwen3_llamacpp_performance_for_7900_xtx_7900x3d/
|
1ncehost
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khys4u
| false | null |
t3_1khys4u
|
/r/LocalLLaMA/comments/1khys4u/qwen3_llamacpp_performance_for_7900_xtx_7900x3d/
| false | false |
self
| 27 | null |
Meta new open source model (PLM)
| 34 |
Meta recently introduced a new vision-language understanding task. What are your thoughts on this?
Will it be able to compete with other existing vision models?
| 2025-05-08T19:29:08 |
https://ai.meta.com/blog/meta-fair-updates-perception-localization-reasoning/?utm_source=twitter&utm_medium=organic%20social&utm_content=video&utm_campaign=fair
|
Tomtun_rd
|
ai.meta.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1khywl9
| false | null |
t3_1khywl9
|
/r/LocalLLaMA/comments/1khywl9/meta_new_open_source_model_plm/
| false | false |
default
| 34 | null |
Reasoning vs Non Reasoning models for strategic domains?
| 5 |
Good afternoon everyone
I was really curious if anyone has had success in applying reasoning models towards strategic non STEM domains. It feels like most applications of reasoning models I see tend to be related to either coding or math.
Specifically, I'm curious whether reasoning models can outperform non reasoning models in tasks relating more towards business, political or economic strategy. These are all domains where often frameworks and "a correct way to think about things" *do* exist, but they aren't as cut and dry as coding.
I was curious whether or not anyone has attempted finetuning reasoning models for these sorts of tasks. Does CoT provide some sort of an advantage for these things?
Or does the fact that these frameworks or best practices are more broad and less specific mean that regular non reasoning LLMs are likely to outperform reasoning based models?
Thank you!
| 2025-05-08T20:00:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1khzo1b/reasoning_vs_non_reasoning_models_for_strategic/
|
ProbaDude
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1khzo1b
| false | null |
t3_1khzo1b
|
/r/LocalLLaMA/comments/1khzo1b/reasoning_vs_non_reasoning_models_for_strategic/
| false | false |
self
| 5 | null |
Aider Qwen3 controversy
| 83 |
New blog post on Aider about Qwen3: [https://aider.chat/2025/05/08/qwen3.html](https://aider.chat/2025/05/08/qwen3.html)
I note that we see a very large variance in scores depending on how the model is run. And some people are saying that you shouldn't use OpenRouter for testing - but aren't most of us going to be using OpenRouter when using the model? It gets very confusing - I might get an impression from a leaderboard, but then in actual use the model is something completely different.
The leaderboard might drown in countless test variances. However, what we really need is the ability to compare the models using various quants and maybe providers too. You could say the commercial models have the advantage that Claude is always just Claude. DeepSeek R1 at some low quant might be worse than Qwen3 at a better quant that still fits in my local memory.
| 2025-05-08T20:50:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki0vl1/aider_qwen3_controversy/
|
Baldur-Norddahl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki0vl1
| false | null |
t3_1ki0vl1
|
/r/LocalLLaMA/comments/1ki0vl1/aider_qwen3_controversy/
| false | false |
self
| 83 |
{'enabled': False, 'images': [{'id': 'rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U', 'resolutions': [{'height': 99, 'url': 'https://external-preview.redd.it/rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U.jpeg?width=108&crop=smart&auto=webp&s=e872556de787cc3b48c9804b5b15f38447d7c863', 'width': 108}, {'height': 198, 'url': 'https://external-preview.redd.it/rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U.jpeg?width=216&crop=smart&auto=webp&s=9d2051b591706d1dcf0494ef3d3f15d3945c0906', 'width': 216}, {'height': 294, 'url': 'https://external-preview.redd.it/rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U.jpeg?width=320&crop=smart&auto=webp&s=e5764ed8bf9ab33bdd574643a55718be8bb3ee37', 'width': 320}, {'height': 589, 'url': 'https://external-preview.redd.it/rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U.jpeg?width=640&crop=smart&auto=webp&s=bed028f98016fa5ee0a498ea85ea655ffcc90ca0', 'width': 640}, {'height': 883, 'url': 'https://external-preview.redd.it/rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U.jpeg?width=960&crop=smart&auto=webp&s=8531c3a7dbc4e87c02f1d1943b6c692a619dfbe2', 'width': 960}, {'height': 994, 'url': 'https://external-preview.redd.it/rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U.jpeg?width=1080&crop=smart&auto=webp&s=95b27c4b52b66d404d99302ce1c0354716709d13', 'width': 1080}], 'source': {'height': 1412, 'url': 'https://external-preview.redd.it/rG9wOohyO2oQW6mLlcyWzD1vLaQgIwboMaxQ6rKZg2U.jpeg?auto=webp&s=0892da23a4baf2c73e8fb5d61c9b16264a34fc52', 'width': 1534}, 'variants': {}}]}
|
Has anyone tried inference for LLM on this card? Sakura-II PCIE or M.2
| 1 |
[removed]
| 2025-05-08T20:53:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki0xz6/has_anyone_tried_inference_for_llm_on_this_card/
|
Both-Entertainer6231
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki0xz6
| false | null |
t3_1ki0xz6
|
/r/LocalLLaMA/comments/1ki0xz6/has_anyone_tried_inference_for_llm_on_this_card/
| false | false |
self
| 1 | null |
Pre-configured Computers for local LLM inference be like:
| 0 | 2025-05-08T21:01:31 |
nderstand2grow
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki154o
| false | null |
t3_1ki154o
|
/r/LocalLLaMA/comments/1ki154o/preconfigured_computers_for_local_llm_inference/
| false | false |
default
| 0 |
{'enabled': True, 'images': [{'id': '8q8dst55hmze1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/8q8dst55hmze1.jpeg?width=108&crop=smart&auto=webp&s=6440557d10b238c291b47a6bd408be9c96439ca4', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/8q8dst55hmze1.jpeg?width=216&crop=smart&auto=webp&s=9db04c5ac90f94f635d1ed0c1601b58206b5e3f2', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/8q8dst55hmze1.jpeg?width=320&crop=smart&auto=webp&s=b0db52ac0988ff749c7e546445e5936995d5672e', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/8q8dst55hmze1.jpeg?width=640&crop=smart&auto=webp&s=a29935b7ef79de1cfa476f0345d17d02545fd0dd', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/8q8dst55hmze1.jpeg?width=960&crop=smart&auto=webp&s=05767a6177916d1fe6757bf5a274f8547ff14eb6', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/8q8dst55hmze1.jpeg?width=1080&crop=smart&auto=webp&s=dce9eba34cd6ef150b8ad29cb35adf51f942bae1', 'width': 1080}], 'source': {'height': 2868, 'url': 'https://preview.redd.it/8q8dst55hmze1.jpeg?auto=webp&s=d2b971ff81a575d470aea47486bd6849d7c0fdb9', 'width': 1320}, 'variants': {}}]}
|
||
An experiment shows Llama 2 running on Pentium II processor with 128MB RAM
| 177 |
Could this be a way forward for using AI models on modest hardware?
| 2025-05-08T21:19:16 |
https://www.tomshardware.com/tech-industry/artificial-intelligence/ai-language-model-runs-on-a-windows-98-system-with-pentium-ii-and-128mb-of-ram-open-source-ai-flagbearers-demonstrate-llama-2-llm-in-extreme-conditions
|
xogobon
|
tomshardware.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki1kh1
| false | null |
t3_1ki1kh1
|
/r/LocalLLaMA/comments/1ki1kh1/an_experiment_shows_llama_2_running_on_pentium_ii/
| false | false | 177 |
{'enabled': False, 'images': [{'id': 'tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI.jpeg?width=108&crop=smart&auto=webp&s=27421b364644eeede0d2e2a1f26e9b14a5b50984', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI.jpeg?width=216&crop=smart&auto=webp&s=344954687670aaaa21002be77f2c0f88c506b782', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI.jpeg?width=320&crop=smart&auto=webp&s=49e58124e0f4edc64f30ab77efe94f36e9814440', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI.jpeg?width=640&crop=smart&auto=webp&s=a353d23b7d15308925a86bc5db780b5b49aa7fae', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI.jpeg?width=960&crop=smart&auto=webp&s=fc949d480c5631c76c31d5b9caabf98182db9548', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI.jpeg?width=1080&crop=smart&auto=webp&s=94c1961bdbf1c7132d7ec9c563e60257d67da2cd', 'width': 1080}], 'source': {'height': 632, 'url': 'https://external-preview.redd.it/tFgJOy_7tOVmxufY4TMWX0OS7cuEGrlMbdR5J6qB8oI.jpeg?auto=webp&s=0cfc18b772700043383698ba07aac23deadcc432', 'width': 1124}, 'variants': {}}]}
|
|
Best agentic coding LLM + Higgs
| 1 |
[removed]
| 2025-05-08T21:22:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki1na4/best_agentic_coding_llm_higgs/
|
AndyingMan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki1na4
| false | null |
t3_1ki1na4
|
/r/LocalLLaMA/comments/1ki1na4/best_agentic_coding_llm_higgs/
| false | false |
self
| 1 | null |
I Open-Sourced a Production Grade LLM Orchestration Framework
| 0 |
Hey everyone,
I have created a new LLM orchestration framework that works well in production: cheap to run, easy to debug, doesn't go off in endless loops, and knows when to stop. I am using this framework to build a fully autonomous AI Security Tester.
It's called Trees, and it works in a planner-centric way, unlike Auto-Gen and Crew AI, which favor hierarchical systems (team leads and managers in LLM orchestration). Here I have experimented with a Task and Subtask system, which works well.
Here's the repo: [https://github.com/Peneterrer/Trees](https://github.com/Peneterrer/Trees)
ps - I have open-sourced the whole architecture but not the code as it was pretty messy and scattered. If you guys like it, I would love to put in the effort to make it into a generalized framework (ofc local LLMs would be supported).
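To give a rough picture of the planner-centric Task/Subtask idea, here's a purely illustrative Python sketch (toy names and structure of my own, not the actual Trees code, which isn't published):

```python
from dataclasses import dataclass, field

@dataclass
class Subtask:
    description: str
    done: bool = False
    result: str = ""

@dataclass
class Task:
    goal: str
    subtasks: list = field(default_factory=list)

def plan(goal: str) -> Task:
    # A planner LLM would normally produce these subtasks from the goal.
    return Task(goal, [Subtask("gather context"), Subtask("execute"), Subtask("verify result")])

def run(task: Task, max_steps: int = 10) -> None:
    # The planner decides what to do next and, crucially, when to stop.
    for _ in range(max_steps):
        pending = [s for s in task.subtasks if not s.done]
        if not pending:                      # nothing left to do: stop cleanly
            print("Task complete:", task.goal)
            return
        current = pending[0]
        current.result = f"worker output for: {current.description}"  # LLM / tool call goes here
        current.done = True
    print("Stopped: hit the step budget.")   # hard cap against endless loops

run(plan("scan a web app for auth bugs"))
```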
| 2025-05-08T21:39:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki21eo/i_opensourced_a_production_grade_llm/
|
Illustrious-Ad-497
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki21eo
| false | null |
t3_1ki21eo
|
/r/LocalLLaMA/comments/1ki21eo/i_opensourced_a_production_grade_llm/
| false | false |
self
| 0 | null |
LM Studio and Qwen3 30B MoE: Model constantly crashing with no additional information
| 4 |
Honestly, the title about covers it. I just installed the aforementioned model and, while it works great, it crashes frequently (with a long exit code that isn't on screen long enough for me to write it down). What's worse, once it has crashed that chat is dead: no matter how many times I tell it to reload the model, it crashes again as soon as I give it a new query. However, if I start a new chat it works fine (until it crashes again).
Any idea what gives?
| 2025-05-08T21:59:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki2i2e/lm_studio_and_qwen3_30b_moe_model_constantly/
|
Notlookingsohot
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki2i2e
| false | null |
t3_1ki2i2e
|
/r/LocalLLaMA/comments/1ki2i2e/lm_studio_and_qwen3_30b_moe_model_constantly/
| false | false |
self
| 4 | null |
Update on the eGPU tower of Babel
| 64 |
I posted about my setup last month with five GPUs. Now, after lots of trial and error, I finally have seven GPUs enumerating.
4 x 3090 via Thunderbolt (2 x 2 Sabrent hubs)
2 x 3090 via Oculink (one via PCIe and one via m.2)
1 x 3090 direct in box to PCIe slot 1
It turned out to matter a lot which Thunderbolt slots on the hubs I used. I had to use ports 1 and 2 specifically. Any eGPU on port 3 would be assigned 0 BAR space by the kernel, I guess due to the way bridge address space is allocated at boot.
`pci=realloc` was required as a kernel parameter.
Docks are ADT-LINK UT4g for Thunderbolt and F9G for Oculink.
System specs:
* Intel 14th gen i5
* 128 GB DDR5
* MSI Z790 Gaming WiFi Pro motherboard
Why did I do this? Because I wanted to try it.
I'll post benchmarks later on. Feel free to suggest some.
| 2025-05-08T22:18:46 |
https://www.reddit.com/gallery/1ki2xjh
|
Threatening-Silence-
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki2xjh
| false | null |
t3_1ki2xjh
|
/r/LocalLLaMA/comments/1ki2xjh/update_on_the_egpu_tower_of_babel/
| false | false | 64 |
{'enabled': True, 'images': [{'id': '55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4.jpeg?width=108&crop=smart&auto=webp&s=856b4c3b29b2475661ec8e152848999986460c07', 'width': 108}, {'height': 286, 'url': 'https://external-preview.redd.it/55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4.jpeg?width=216&crop=smart&auto=webp&s=5a54d75e504325405f21c0fffe5cd9ff8d20e792', 'width': 216}, {'height': 425, 'url': 'https://external-preview.redd.it/55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4.jpeg?width=320&crop=smart&auto=webp&s=80b54e7103acad81122279868cf3a1068d88d4d7', 'width': 320}, {'height': 850, 'url': 'https://external-preview.redd.it/55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4.jpeg?width=640&crop=smart&auto=webp&s=4f238be5cf0e0b824ba3699ac2d6edc4d5f57e75', 'width': 640}, {'height': 1275, 'url': 'https://external-preview.redd.it/55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4.jpeg?width=960&crop=smart&auto=webp&s=58ec2f165a667e272f2b4c9afdf15afe892f7851', 'width': 960}, {'height': 1434, 'url': 'https://external-preview.redd.it/55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4.jpeg?width=1080&crop=smart&auto=webp&s=540fdbee877ea15ae687f7436e4901a3afdc3b87', 'width': 1080}], 'source': {'height': 4080, 'url': 'https://external-preview.redd.it/55wM326NrH8VhII809i_BVmITcEiGVlv1LK8RXp6kK4.jpeg?auto=webp&s=3b22ec60447ecea479fe9a279f2a887f0b5d8594', 'width': 3072}, 'variants': {}}]}
|
|
best open source dictation/voice mode tool? for use in ide like cursor
| 2 |
Hi, I just found this company: [https://willowvoice.com/#home](https://willowvoice.com/#home) that does something I need: voice dictation. I was wondering if there is an open-source equivalent (any quick Whisper setup could work?). Would love some ideas. Thanks!
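For the "any quick Whisper setup" angle, here's a minimal sketch of what I mean, assuming the faster-whisper package and a pre-recorded clip.wav (wiring it to a hotkey/microphone and pasting into the IDE is left out):

```python
# pip install faster-whisper
from faster_whisper import WhisperModel

# "small" is a reasonable speed/accuracy trade-off for dictation; use device="cuda" if available.
model = WhisperModel("small", device="cpu", compute_type="int8")

# Transcribe a short recorded clip; segments stream in as they are decoded.
segments, info = model.transcribe("clip.wav", language="en")
text = " ".join(segment.text.strip() for segment in segments)
print(text)  # paste this wherever the cursor is, e.g. via a clipboard/keystroke helper
```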
| 2025-05-08T22:43:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki3ggp/best_open_source_dictationvoice_mode_tool_for_use/
|
Aggressive_Escape386
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki3ggp
| false | null |
t3_1ki3ggp
|
/r/LocalLLaMA/comments/1ki3ggp/best_open_source_dictationvoice_mode_tool_for_use/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0.png?width=108&crop=smart&auto=webp&s=19735b7595d1b2ad76c4d55783cd305bb2652aae', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0.png?width=216&crop=smart&auto=webp&s=96d82aeac8abbeb864db7ea2ed68d9fdac69398d', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0.png?width=320&crop=smart&auto=webp&s=0b2ae4f74cfb7459e82123bf8d6a789e66c62164', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0.png?width=640&crop=smart&auto=webp&s=f04729b9d35607d55cea746f185f4f05a3d6f1e5', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0.png?width=960&crop=smart&auto=webp&s=aed84cc990dcd67ed1de5a3d633b18fd68782f9b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0.png?width=1080&crop=smart&auto=webp&s=0c9cc9a4397cabc817e7aec91463458ddc9540b5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/y9VYRtoAopLcINsxax-Kepm9702a9wU4eDW6nweksf0.png?auto=webp&s=1f7df1ae62d9d4add4434368aae8f6cefdf94ec0', 'width': 1200}, 'variants': {}}]}
|
What are the best models for novel writing for 24 GB VRAM in 2025?
| 7 |
I am wondering what the best new models for creative writing/novel writing are. I have seen that Qwen 3 is OK, but are there any models specifically trained by the community to write stories that have great writing capabilities? The ones I tested from Hugging Face are usually for role playing, which is OK, but I would like something that is as human-like in writing style as possible and made for story/novel/light novel/LitRPG writing.
| 2025-05-08T22:46:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki3j9m/what_are_the_best_models_for_novel_writing_for_24/
|
ffgg333
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki3j9m
| false | null |
t3_1ki3j9m
|
/r/LocalLLaMA/comments/1ki3j9m/what_are_the_best_models_for_novel_writing_for_24/
| false | false |
self
| 7 | null |
PSA: You can pin your drivers to prevent automatic updates
| 1 |
[removed]
| 2025-05-08T22:53:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki3p3r/psa_you_can_pin_your_drivers_to_prevent_automatic/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki3p3r
| false | null |
t3_1ki3p3r
|
/r/LocalLLaMA/comments/1ki3p3r/psa_you_can_pin_your_drivers_to_prevent_automatic/
| false | false |
self
| 1 | null |
Running Qwen3 235B on a single 3060 12gb (6 t/s generation)
| 108 |
I was inspired by [a comment earlier today](https://old.reddit.com/r/LocalLLaMA/comments/1khmaah/5_commands_to_run_qwen3235ba22b_q3_inference_on/) about running Qwen3 235B at home (i.e. without needing a cluster of of H100s).
What I've discovered after some experimentation is that you can scale this approach down to 12gb VRAM **and still run Qwen3 235B at home**.
I'm generating at 6 tokens per second with these specs:
- Unsloth Qwen3 235B q2_k_xl
- RTX 3060 12gb
- 16k context
- 128gb RAM at 2666MHz (not super-fast)
- Ryzen 7 5800X (8 cores)
Here's how I launch llama.cpp:
llama-cli \
-m Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf \
-ot ".ffn_.*_exps.=CPU" \
-c 16384 \
-n 16384 \
--prio 2 \
--threads 7 \
--temp 0.6 \
--top-k 20 \
--top-p 0.95 \
--min-p 0.0 \
--color \
-if \
-ngl 99
I downloaded the GGUF files (approx 88gb) like so:
wget https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/resolve/main/UD-Q2_K_XL/Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf
wget https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/resolve/main/UD-Q2_K_XL/Qwen3-235B-A22B-UD-Q2_K_XL-00002-of-00002.gguf
You may have noticed that I'm exporting ALL the layers to GPU. Yes, sort of. The `-ot` flag (and the regexp provided by the Unsloth team) actually sends all MOE layers to the CPU - such that what remains can easily fit inside 12gb on my GPU.
If you cannot fit the entire 88gb model into RAM, hopefully you can store it on an NVME and allow Linux to mmap it for you.
I have 8 physical CPU cores and I've found specifying N-1 threads yields the best overall performance; hence why I use `--threads 7`.
Shout out to the Unsloth team. This is absolutely magical. I can't believe I'm running a 235B MOE on this hardware...
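If you're curious what that `-ot` regex actually grabs, here's a quick Python check. The tensor names below follow the Qwen3-MoE-style GGUF layout (they're an assumption for illustration; inspect your own GGUF to confirm):

```python
import re

# The -ot pattern from the command above (the part before "=CPU").
pattern = re.compile(r".ffn_.*_exps.")

# Hypothetical tensor names in the style of a Qwen3 MoE GGUF.
names = [
    "blk.5.attn_q.weight",
    "blk.5.ffn_gate_inp.weight",    # router, stays on GPU
    "blk.5.ffn_up_exps.weight",     # expert tensors -> CPU
    "blk.5.ffn_down_exps.weight",
    "blk.5.ffn_gate_exps.weight",
]

for name in names:
    where = "CPU" if pattern.search(name) else "GPU"
    print(f"{name:32s} -> {where}")
```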
| 2025-05-08T22:59:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki3sze/running_qwen3_235b_on_a_single_3060_12gb_6_ts/
|
farkinga
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki3sze
| false | null |
t3_1ki3sze
|
/r/LocalLLaMA/comments/1ki3sze/running_qwen3_235b_on_a_single_3060_12gb_6_ts/
| false | false |
self
| 108 | null |
Can any local LLM pass the Mikupad test? I.e. split/refactor the source code of Mikupad, a single HTML file with 8k lines?
| 43 |
Frequently I see people here claiming to get useful coding results out of LLMs with 32k context. I propose the following "simple" test case: refactor the source code of Mikupad, a simple but very nice GUI to llama.cpp.
Mikupad is implemented as a huge single HTML file with CSS + Javascript (React), over 8k lines in total which should fit in 32k context. Splitting it up into separate smaller files is a pedestrian task for a decent coder, but I have not managed to get any LLM to do it. Most just spew generic boilerplate and/or placeholder code. To pass the test, the LLM just has to (a) output multiple complete files and (b) remain functional.
https://github.com/lmg-anon/mikupad/blob/main/mikupad.html
Can you do it with your favorite model? If so, show us how!
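As a rough sanity check on the size, here's a quick Python estimate against the raw version of the file linked above (assuming roughly 4 characters per token, which is only a ballpark for HTML/JS):

```python
import urllib.request

url = "https://raw.githubusercontent.com/lmg-anon/mikupad/main/mikupad.html"
html = urllib.request.urlopen(url).read().decode("utf-8")

lines = html.count("\n") + 1
approx_tokens = len(html) // 4   # very rough heuristic; real tokenizers vary
print(f"{lines} lines, {len(html)} chars, ~{approx_tokens} tokens")
```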
| 2025-05-08T23:07:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki3zjt/can_any_local_llm_pass_the_mikupad_test_ie/
|
ArtyfacialIntelagent
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki3zjt
| false | null |
t3_1ki3zjt
|
/r/LocalLLaMA/comments/1ki3zjt/can_any_local_llm_pass_the_mikupad_test_ie/
| false | false |
self
| 43 |
{'enabled': False, 'images': [{'id': '-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw.png?width=108&crop=smart&auto=webp&s=81bc22ffe7284e8b583aa8604a9a2a2b5c214965', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw.png?width=216&crop=smart&auto=webp&s=5a10c6fdb96d159271458e2740afb19f6d7591bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw.png?width=320&crop=smart&auto=webp&s=5be18250cccf6a4280134d0f5d9aac51ae822c4e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw.png?width=640&crop=smart&auto=webp&s=160d2a3fd04f57af7652949394c04a1f20ddd6a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw.png?width=960&crop=smart&auto=webp&s=cc4faeffe336a27dbbee40e49e1bd03e4eb0955c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw.png?width=1080&crop=smart&auto=webp&s=a198eefb37a4e635c9baa486564569b689f04f03', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-XJ6QlZ8r75r0WEVm2nOnEtw29Kvp8pfjzECcNHm_lw.png?auto=webp&s=ccabe19db01977da1ac6c51e7f9b6c5da35ada8c', 'width': 1200}, 'variants': {}}]}
|
Lifetime GPU Cloud Hosting for AI Models
| 1 |
[removed]
| 2025-05-08T23:34:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki4jh3/lifetime_gpu_cloud_hosting_for_ai_models/
|
JamesAI_journal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki4jh3
| false | null |
t3_1ki4jh3
|
/r/LocalLLaMA/comments/1ki4jh3/lifetime_gpu_cloud_hosting_for_ai_models/
| false | false |
self
| 1 | null |
Will a 3x RTX 3090 Setup Be a Good Bet for AI Workloads and Training Beyond 2028?
| 2 |
Hello everyone,
I’m currently running a 2x RTX 3090 setup and recently found a third 3090 for around $600. I'm considering adding it to my system, but I'm unsure if it's a smart long-term choice for AI workloads and model training, especially beyond 2028.
The new 5090 is already out, and while it’s marketed as the next big thing, its price is absurd—around $3500-$4000, which feels way overpriced for what it offers. The real issue is that upgrading to the 5090 would force me to switch to DDR5, and I’ve already invested heavily in 128GB of DDR4 RAM. I’m not willing to spend more just to keep up with new hardware. Additionally, the 5090 only offers 32GB of VRAM, whereas adding a third 3090 would give me 72GB of VRAM, which is a significant advantage for AI tasks and training large models.
I’ve also noticed that many people are still actively searching for 3090s. Given how much demand there is for these cards in the AI community, it seems likely that the 3090 will continue to receive community-driven optimizations well beyond 2028. But I’m curious—will the community continue supporting and optimizing the 3090 as AI models grow larger, or is it likely to become obsolete sooner than expected?
I know no one can predict the future with certainty, but based on the current state of the market and your own thoughts, do you think adding a third 3090 is a good bet for running AI workloads and training models through 2028+, or should I wait for the next generation of GPUs? How long do you think consumer-grade cards like the 3090 will remain relevant, especially as AI models continue to scale in size and complexity? Will it still run new quantized 70B models post-2028?
I’d appreciate any thoughts or insights—thanks in advance!
| 2025-05-08T23:35:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki4jyn/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
|
Spare_Flounder_6865
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki4jyn
| false | null |
t3_1ki4jyn
|
/r/LocalLLaMA/comments/1ki4jyn/will_a_3x_rtx_3090_setup_a_good_bet_for_ai/
| false | false |
self
| 2 | null |
Which model providers offer the most privacy?
| 0 |
Assuming this is an enterprise application dealing with sensitive data (think patient info in healthcare, confidential contracts in law firms, proprietary code, etc.).
Which LLM provider offers the highest level of privacy? Ideally, the input and output text/images are never logged or seen by a human. Something HIPAA-compliant would be nice.
I know this is LocalLLaMA and the preference is to self host (which I personally prefer), but sometimes it's not feasible.
| 2025-05-08T23:42:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki4pme/which_model_providers_offer_the_most_privacy/
|
Amgadoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki4pme
| false | null |
t3_1ki4pme
|
/r/LocalLLaMA/comments/1ki4pme/which_model_providers_offer_the_most_privacy/
| false | false |
self
| 0 | null |
Qwen: the great Chinese excess
| 1 |
[removed]
| 2025-05-09T02:06:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki7gvy/qwen_the_great_chinese_excess/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki7gvy
| false | null |
t3_1ki7gvy
|
/r/LocalLLaMA/comments/1ki7gvy/qwen_the_great_chinese_excess/
| false | false |
self
| 1 | null |
Don't Offload GGUF Layers, Offload Tensors! 200%+ Gen Speed? Yes Please!!!
| 701 |
**Inspired by:** [https://www.reddit.com/r/LocalLLaMA/comments/1ki3sze/running\_qwen3\_235b\_on\_a\_single\_3060\_12gb\_6\_ts/](https://www.reddit.com/r/LocalLLaMA/comments/1ki3sze/running_qwen3_235b_on_a_single_3060_12gb_6_ts/) but applied to any other model.
**Bottom line:** I am running a QwQ merge at IQ4_M size that used to run at 3.95 tokens per second, with 59 of 65 layers offloaded to GPU. By selectively restricting certain FFN tensors to stay on the CPU, I've saved a ton of space on the GPU, can now offload all 65 of 65 layers, and run at 10.61 tokens per second. Why is this not standard?
**Idea:** With llama.cpp and derivatives like koboldcpp, you typically offload entire LAYERS. Layers are composed of various attention tensors, feed-forward network (FFN) tensors, gates and outputs. Within each transformer layer, from what I gather, attention tensors are GPU-heavy but smaller and benefit from parallelization, while FFN tensors are VERY LARGE tensors that use more basic matrix multiplication and can be done on the CPU. You can use the --overridetensors flag in koboldcpp or -ot in llama.cpp to selectively keep certain TENSORS on the CPU.
**How-To:** Upfront, here's an example...
```
python ~/koboldcpp/koboldcpp.py --threads 10 --usecublas --contextsize 40960 --flashattention --port 5000 --model ~/Downloads/MODELNAME.gguf --gpulayers 65 --quantkv 1 --overridetensors "\.[13579]\.ffn_up|\.[1-3][13579]\.ffn_up=CPU"
...
[18:44:54] CtxLimit:39294/40960, Amt:597/2048, Init:0.24s, Process:68.69s (563.34T/s), Generate:56.27s (10.61T/s), Total:124.96s
```
Versus just offloading layers...
```
python ~/koboldcpp/koboldcpp.py --threads 6 --usecublas --contextsize 40960 --flashattention --port 5000 --model ~/Downloads/MODELNAME.gguf --gpulayers 59 --quantkv 1
...
[18:53:07] CtxLimit:39282/40960, Amt:585/2048, Init:0.27s, Process:69.38s (557.79T/s), Generate:147.92s (3.95T/s), Total:217.29s
```
More details on how to? Use a regex to match the FFN tensors you want to selectively NOT offload to the GPU, as the commands above show.
In my examples above, I targeted ffn_up tensors because mine were mostly IQ4_XS, while my ffn_down tensors were selectively quantized between IQ4_XS and Q5-Q8, which means those larger tensors vary in size a lot. This is beside the point of this post, but it matters if you plan to offload every/every other/every third ffn_X tensor and assume they are all the same size: with something like Unsloth's Dynamic 2.0 quants, certain tensors are kept at higher bits, so the math changes.
So, really how to?? Look at your GGUF's model info. For example, let's use [https://huggingface.co/MaziyarPanahi/QwQ-32B-GGUF/tree/main?show_file_info=QwQ-32B.Q3_K_M.gguf](https://huggingface.co/MaziyarPanahi/QwQ-32B-GGUF/tree/main?show_file_info=QwQ-32B.Q3_K_M.gguf) and look at all the layers and all the tensors in each layer.
| Tensor | Shape | Quant |
|---|---|---|
| blk.56.ffn_down.weight | [27 648, 5 120] | Q4_K |
| blk.56.ffn_gate.weight | [5 120, 27 648] | Q3_K |
| blk.56.ffn_norm.weight | [5 120] | F32 |
| blk.56.ffn_up.weight | [5 120, 27 648] | Q3_K |
In this example, overriding the ffn_down tensors (at the higher Q4) to CPU would save more space on your GPU than ffn_up (at Q3). My regex from above only targeted ffn_up on layers 1-39, every other layer, to squeeze every last thing I could onto the GPU. I also alternated which ones I kept on CPU, thinking it might ease memory bottlenecks, but I'm not sure if that helps.
Either way, seeing QwQ run on my card at over double the speed now is INSANE, and I figured I would share so you guys look into this too. For the same amount of GPU memory, offloading whole layers performs way worse than offloading specific tensors. This way, you offload everything to your GPU except the big tensors that run fine on the CPU. Is this common knowledge?
**Future:** I would love to see llama.cpp and others automatically and selectively keep the large, CPU-friendly tensors on the CPU rather than restricting offload to whole layers.
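To see exactly which tensors a given --overridetensors regex pins to the CPU, you can test it with a few lines of Python (tensor names follow the blk.N.ffn_up.weight pattern from the table above):

```python
import re

# Same pattern passed to --overridetensors / -ot (everything before "=CPU").
pattern = re.compile(r"\.[13579]\.ffn_up|\.[1-3][13579]\.ffn_up")

# Which layers' ffn_up tensors end up on the CPU?
cpu_layers = [i for i in range(65) if pattern.search(f"blk.{i}.ffn_up.weight")]
print(cpu_layers)   # odd layers 1..39 stay on CPU; everything else goes to the GPU
```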
| 2025-05-09T02:24:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki7tg7/dont_offload_gguf_layers_offload_tensors_200_gen/
|
skatardude10
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki7tg7
| false | null |
t3_1ki7tg7
|
/r/LocalLLaMA/comments/1ki7tg7/dont_offload_gguf_layers_offload_tensors_200_gen/
| false | false |
self
| 701 |
{'enabled': False, 'images': [{'id': 'UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY.png?width=108&crop=smart&auto=webp&s=40b515f81a2437f8015de1de60901dbf9546e85a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY.png?width=216&crop=smart&auto=webp&s=a5cb0caa967da1d7d64030c7edf6ee8bb1471291', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY.png?width=320&crop=smart&auto=webp&s=dee08b9e410ea2976006dfc86dad69a64ec798bd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY.png?width=640&crop=smart&auto=webp&s=fb901f43baab3a36a06903700e90dea2f17c868c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY.png?width=960&crop=smart&auto=webp&s=30b6e96d8384dc1233a0844f1554dcf53ffe8925', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY.png?width=1080&crop=smart&auto=webp&s=00c0e9f61b4402a562a1eda1fc7d3f64690cb200', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UnXb-hRiyM_Y_ECWZEY_iaAE2P1Bx-MVIOYmrYBxQrY.png?auto=webp&s=9c99c14b5a29e1dff0e60927e9e2291bb1db3fce', 'width': 1200}, 'variants': {}}]}
|
What hardware are enthusiasts getting now?
| 1 |
[removed]
| 2025-05-09T02:32:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki7yhb/what_hardware_are_enthusiasts_getting_now/
|
Siigari
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki7yhb
| false | null |
t3_1ki7yhb
|
/r/LocalLLaMA/comments/1ki7yhb/what_hardware_are_enthusiasts_getting_now/
| false | false |
self
| 1 | null |
User asked computer controlling AI for "a ball bouncing inside the screen", the AI showed them porn...
| 183 |
I guess, the AI delivered... 🤣
[https://huggingface.co/spaces/smolagents/computer-agent/discussions/6](https://huggingface.co/spaces/smolagents/computer-agent/discussions/6)
| 2025-05-09T02:39:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki831c/user_asked_computer_controlling_ai_for_a_ball/
|
Cool-Chemical-5629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki831c
| false | null |
t3_1ki831c
|
/r/LocalLLaMA/comments/1ki831c/user_asked_computer_controlling_ai_for_a_ball/
| false | false |
self
| 183 |
{'enabled': False, 'images': [{'id': 'BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8.png?width=108&crop=smart&auto=webp&s=7a56689372c4bf42f6ebc322bc28597d99f29840', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8.png?width=216&crop=smart&auto=webp&s=4219deac24bbd7467fbd8d2210597b48d422ab02', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8.png?width=320&crop=smart&auto=webp&s=1a0c5cc2257ddba580374e19a94c94383b90d796', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8.png?width=640&crop=smart&auto=webp&s=9b9aafff7ca1601114fdeaca9a103b26b8b555fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8.png?width=960&crop=smart&auto=webp&s=e4685fd78f5d792a5f1361f8d5cad78bc69442b2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8.png?width=1080&crop=smart&auto=webp&s=7be22052cc60fc483780db5f0f9daa57bee1e2a5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BTDtEJMTmDGsoD7htIvCcHCmeI1fuEfk7nc-qfKnvL8.png?auto=webp&s=0a5eaf85be5b64a55f73c0b9caa267589d1bdedd', 'width': 1200}, 'variants': {}}]}
|
What are enthusiasts upgrading to locally run LLMs in 2025?
| 1 |
[removed]
| 2025-05-09T03:09:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki8mqk/what_are_enthusiasts_upgrading_to_locally_run/
|
Siigari
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki8mqk
| false | null |
t3_1ki8mqk
|
/r/LocalLLaMA/comments/1ki8mqk/what_are_enthusiasts_upgrading_to_locally_run/
| false | false |
self
| 1 | null |
Ayo i didn't ask this
| 1 | 2025-05-09T03:10:58 |
Haunting-Living1874
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki8no6
| false | null |
t3_1ki8no6
|
/r/LocalLLaMA/comments/1ki8no6/ayo_i_didnt_ask_this/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'HQR__UPiaOL5wpLVPEszIYM6TU2P35CQZiNihDHlyHA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/bgsl6mz1boze1.jpeg?width=108&crop=smart&auto=webp&s=1c23816896c4b4b6ff94ee2d47187a5ffa0931f6', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/bgsl6mz1boze1.jpeg?width=216&crop=smart&auto=webp&s=4d5cd22b95fc5c331afc316b3c605dc1a43f0eaa', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/bgsl6mz1boze1.jpeg?width=320&crop=smart&auto=webp&s=38041b9e63485a954e168b11ff920371ce52ecb3', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/bgsl6mz1boze1.jpeg?width=640&crop=smart&auto=webp&s=133c49025ecfa3050ef0b411ce4176407818e282', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/bgsl6mz1boze1.jpeg?width=960&crop=smart&auto=webp&s=39186a24e45daf67b99b4e3624bcd432f70a58ae', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/bgsl6mz1boze1.jpeg?width=1080&crop=smart&auto=webp&s=8ce75926b36167769f5c61b8986afeec93a7b633', 'width': 1080}], 'source': {'height': 2340, 'url': 'https://preview.redd.it/bgsl6mz1boze1.jpeg?auto=webp&s=c1a6b8b8baadf37e0c8aec68a6bcc0095513ad4d', 'width': 1080}, 'variants': {}}]}
|
|||
Can someone please clarify some things for me? (choosing a GGUF)
| 1 |
[removed]
| 2025-05-09T03:12:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki8ojr/can_someone_please_clarify_some_things_for_me/
|
fin2red
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki8ojr
| false | null |
t3_1ki8ojr
|
/r/LocalLLaMA/comments/1ki8ojr/can_someone_please_clarify_some_things_for_me/
| false | false |
self
| 1 | null |
Sam Altman: OpenAI plans to release an open-source model this summer
| 395 |
Sam Altman stated during today's Senate testimony that OpenAI is planning to release an open-source model this summer.
Source: [https://www.youtube.com/watch?v=jOqTg1W\_F5Q](https://www.youtube.com/watch?v=jOqTg1W_F5Q)
| 2025-05-09T04:19:19 |
https://v.redd.it/0cbh8rpcloze1
|
zan-max
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki9u9d
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0cbh8rpcloze1/DASHPlaylist.mpd?a=1749356376%2CNmI5YjI3ZWI1YjE1OWU4NzY3YTYwNGI3MGJiNDc5OWQ5MmVkOTFjZmNmZjE5M2QxNmQ0MmEwYWJhMWRlM2RkMA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/0cbh8rpcloze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/0cbh8rpcloze1/HLSPlaylist.m3u8?a=1749356376%2CY2ZiYjY2NmU4ODgyZTM2OTA3NThhZThmOGQ1NDk0MDI1MzZkNmVlNjdjYzVlMDFlNjVhYTE1YWY3ZjZmYThkYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0cbh8rpcloze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1ki9u9d
|
/r/LocalLLaMA/comments/1ki9u9d/sam_altman_openai_plans_to_release_an_opensource/
| false | false | 395 |
{'enabled': False, 'images': [{'id': 'ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR.png?width=108&crop=smart&format=pjpg&auto=webp&s=b61eb7393db1741ca69f6b1bd81070d470637c0f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR.png?width=216&crop=smart&format=pjpg&auto=webp&s=6edbff16a8a1e662885f4e5cd90c77069abee3dd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR.png?width=320&crop=smart&format=pjpg&auto=webp&s=146863201c9b5a7dfb803498d4f9fb9688b31dcd', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR.png?width=640&crop=smart&format=pjpg&auto=webp&s=55210506e01b585267dd3b40406fed2e6476e2b1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR.png?width=960&crop=smart&format=pjpg&auto=webp&s=9d4e51d3d2f0405d7323be23489c93cbf0689efb', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6b83af32ffab9e5b6a5094d155051f7a380ee839', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ajdlMmxzcGNsb3plMbWgh0ga0DeDYWGdPekBwNb0wJ3u2lc2Xz7BD3amRjfR.png?format=pjpg&auto=webp&s=b49b6ee7b9f90304645987d105f48231c975b513', 'width': 1920}, 'variants': {}}]}
|
|
What UI do you use for local LLMs?
| 1 |
[removed]
| 2025-05-09T04:28:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1ki9zyx/what_ui_do_you_use_for_local_llms/
|
Green_Battle4655
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ki9zyx
| false | null |
t3_1ki9zyx
|
/r/LocalLLaMA/comments/1ki9zyx/what_ui_do_you_use_for_local_llms/
| false | false |
self
| 1 | null |
[D] Could an 8B model have great performance in long-context tasks?
| 3 |
Are there benchmarks to test small models on long-context tasks? I just found [LongBench v2](https://longbench2.github.io/), which doesn't include Claude 3.7, which seems odd.
Are there other credible benchmarks for long-context tasks that include the latest models?
Or are there benchmarks for specific context lengths? My task is about 5k tokens.
| 2025-05-09T05:19:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiaszq/d_could_8b_model_have_great_performance_in_long/
|
Logical_Divide_3595
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiaszq
| false | null |
t3_1kiaszq
|
/r/LocalLLaMA/comments/1kiaszq/d_could_8b_model_have_great_performance_in_long/
| false | false |
self
| 3 | null |
Best open source realtime tts?
| 49 |
Hey y’all, what is the best open-source TTS that is super fast? I’m looking to replace Elevenlabs in my workflow because it’s too expensive.
| 2025-05-09T05:27:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kiaxcx/best_open_source_realtime_tts/
|
Sudonymously
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kiaxcx
| false | null |
t3_1kiaxcx
|
/r/LocalLLaMA/comments/1kiaxcx/best_open_source_realtime_tts/
| false | false |
self
| 49 | null |
Thoughts on this quantization method of MoE models?
| 48 |
Hi, this started with a thought I had after I saw the pruning strategy (https://huggingface.co/kalomaze/Qwen3-16B-A3B/discussions/6#681770f3335c1c862165ddc0) of pruning based on how often the experts are activated. This technique creates an expert-wise quantization, currently based on each expert's activation rate normalized across the layer.
As a proof of concept, I edited llama.cpp to change a bit of how it quantizes the models (hopefully correctly). I will update the README file with new information when needed. What's great is that to run the model you do not have to edit any files; it works with existing code.
You can find it here:
https://huggingface.co/RDson/Qwen3-30B-A3B-By-Expert-Quantization-GGUF
I will be uploading more quants to try out.
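To make the idea concrete, here's a small Python sketch of the concept only (the thresholds and quant names are made up for illustration; the real logic lives in the modified llama.cpp linked above):

```python
def assign_expert_quants(activation_counts: list[int]) -> list[str]:
    """Map per-expert activation counts (one layer) to quant types:
    hot experts keep more bits, rarely used experts get squeezed harder."""
    total = sum(activation_counts) or 1
    rates = [c / total for c in activation_counts]   # normalized across the layer
    hi, lo = max(rates), min(rates)
    span = (hi - lo) or 1.0
    quants = []
    for r in rates:
        score = (r - lo) / span                      # 0 = coldest expert, 1 = hottest
        if score > 0.66:
            quants.append("Q6_K")    # frequently routed: keep precision
        elif score > 0.33:
            quants.append("Q4_K")
        else:
            quants.append("Q2_K")    # rarely routed: aggressive quantization
    return quants

# Toy activation counts for 8 experts in one layer.
print(assign_expert_quants([900, 40, 730, 15, 380, 60, 820, 210]))
```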
| 2025-05-09T05:34:05 |
https://huggingface.co/RDson/Qwen3-30B-A3B-By-Expert-Quantization-GGUF
|
robiinn
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kib12b
| false | null |
t3_1kib12b
|
/r/LocalLLaMA/comments/1kib12b/thoughts_on_this_quantization_method_of_moe_models/
| false | false |
default
| 48 |
{'enabled': False, 'images': [{'id': 'iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA.png?width=108&crop=smart&auto=webp&s=f4273cb3499395944466b55f9f5b678e461af479', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA.png?width=216&crop=smart&auto=webp&s=5699ab06f9df2f89cff9f74b0d9e09dbbad25902', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA.png?width=320&crop=smart&auto=webp&s=7f29862bf1e555f20bd9289dff6d15413d832c6b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA.png?width=640&crop=smart&auto=webp&s=c8cff707c7f608172fe260dd8a1231cb649934cb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA.png?width=960&crop=smart&auto=webp&s=bfc8c27e10af126ecab01e681dc7235934736dc3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA.png?width=1080&crop=smart&auto=webp&s=cf790498d3e3f1a7dd3a337c957c6f1d4980a48f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iwtMDqkXEXKi0BmJebYdDaWS7Vt-fNowwAcgrSOYkKA.png?auto=webp&s=a5aff71bf6ae310dfc814a0d8ecfdedd99543ddc', 'width': 1200}, 'variants': {}}]}
|