title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 – 2025-06-30 03:16:29, nullable) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2025-06-26 17:30:18) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Just dropped: HUGE update to Tempo AI & MCP App Store
| 0 | 2025-05-10T05:03:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj2h3t/just_dropped_huge_update_to_tempo_ai_mcp_app_store/
|
bipin_25
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj2h3t
| false |
{'oembed': {'author_name': 'Codedigipt', 'author_url': 'https://www.youtube.com/@codedigiptbiplab', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/J9CQsaJzUGc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Build Anything With Tempo AI & MCP App Store - HUGE Update! | AI Web App Builder | Forget CURSOR"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/J9CQsaJzUGc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Build Anything With Tempo AI & MCP App Store - HUGE Update! | AI Web App Builder | Forget CURSOR', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kj2h3t
|
/r/LocalLLaMA/comments/1kj2h3t/just_dropped_huge_update_to_tempo_ai_mcp_app_store/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o.png?width=108&crop=smart&auto=webp&s=8468b51d7fb1dc46fdcbe4f74e0bd8dde895106b', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o.png?width=216&crop=smart&auto=webp&s=82341391d51fccc07833a54a77dfde1d66402344', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o.png?width=320&crop=smart&auto=webp&s=f136aa790ff3bcd424f5d03a6a508f21683be3c0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o.png?width=640&crop=smart&auto=webp&s=a137827d585d21e7cb5fff005e2f2e348fbe1476', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o.png?width=960&crop=smart&auto=webp&s=b84e1aa5456b232960e973c33401beefa8d74cd7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o.png?width=1080&crop=smart&auto=webp&s=cbe7213335f4968c0f5bc50c050badf00914c4da', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/pwBNJCHKRgBy2FJrCCl7kooc7UgvVZhVa_iuQnu0E-o.png?auto=webp&s=6212511017e1a69de534b343ac3ccc191a1f4c46', 'width': 1280}, 'variants': {}}]}
|
||
Seed-Coder 8B
| 171 |
Bytedance has released a new 8B code-specific model that outperforms both Qwen3-8B and Qwen2.5-Coder-7B-Inst. I am curious about the performance of its base model in code FIM tasks.
https://preview.redd.it/wbtmpay50wze1.jpg?width=8348&format=pjpg&auto=webp&s=b7e6bb5d9a152ed6594e5683f582f9d5f9fb81d9
[github](https://github.com/ByteDance-Seed/Seed-Coder)
[HF](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct)
[Base Model HF](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base)
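For the FIM question above, one way to probe the base model yourself is through llama-server's `/infill` endpoint — a minimal sketch, assuming you have a GGUF conversion of the base model locally (the filename below is a placeholder) and that the model defines FIM tokens:

```bash
# Serve a hypothetical GGUF of Seed-Coder-8B-Base.
./llama-server -m Seed-Coder-8B-Base-Q8_0.gguf -c 8192 -ngl 99 --port 8080

# In another shell: ask the server to fill in the middle of a function.
curl -s http://localhost:8080/infill -d '{
  "input_prefix": "def add(a, b):\n    ",
  "input_suffix": "\n    return result\n",
  "n_predict": 32
}'
```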
| 2025-05-10T05:07:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj2j6q/seedcoder_8b/
|
lly0571
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj2j6q
| false | null |
t3_1kj2j6q
|
/r/LocalLLaMA/comments/1kj2j6q/seedcoder_8b/
| false | false | 171 |
{'enabled': False, 'images': [{'id': 'qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c.png?width=108&crop=smart&auto=webp&s=bd3bde873d3f464cc8ed04729b7c495626636916', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c.png?width=216&crop=smart&auto=webp&s=a4c1bec8e909e1c965086349e6e0a787d5e67827', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c.png?width=320&crop=smart&auto=webp&s=cdfe93949ce2210e23f3b4286faac4babe90f2eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c.png?width=640&crop=smart&auto=webp&s=8bcd7116e2911f655490d68be32d15c7b0a893b6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c.png?width=960&crop=smart&auto=webp&s=38d32c53f0b8616f3841487abedbe6dad83bfdce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c.png?width=1080&crop=smart&auto=webp&s=3c3518391affbf8a172b30d820a9465d76a000cf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qN4W2OErTr-fXyFZh4FVGoCZMT9K6nHi3_DvqJJHr5c.png?auto=webp&s=bd6d9195f14bc598628e63a926b2fe6e78f624a1', 'width': 1200}, 'variants': {}}]}
|
|
Who else has tried to run Mindcraft locally?
| 19 |
Mindcraft is a project that can link to AI APIs to power an in-game NPC that can do stuff. I initially tried it with L3-8B-Stheno-v3.2-Q6_K and it worked surprisingly well, but it has a lot of consistency issues. My main issue right now, though, is that no other model I've tried works nearly as well. Deepseek was nonfunctional, and llama3dolphin was incapable of searching for blocks.
If any of y'all have tried this and have any recommendations, I'd love to hear them.
| 2025-05-10T05:34:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj2yjl/who_else_has_tried_to_run_mindcraft_locally/
|
Peasant_Sauce
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj2yjl
| false | null |
t3_1kj2yjl
|
/r/LocalLLaMA/comments/1kj2yjl/who_else_has_tried_to_run_mindcraft_locally/
| false | false |
self
| 19 | null |
Comparison between Ryzen AI Max+ 395 128GB vs Mac Studio M4 128GB vs Mac Studio M3 Ultra 96GB/256GB on LLMs
| 0 |
Does anyone know whether there are any available comparisons between the three setups for running LLMs of different sizes?
It would be even better if an AMD Ryzen 9950X with an RTX 5090 were included as well.
| 2025-05-10T05:48:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj35si/comparison_between_ryzen_ai_max_395_128gb_vs_mac/
|
umbrosum
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj35si
| false | null |
t3_1kj35si
|
/r/LocalLLaMA/comments/1kj35si/comparison_between_ryzen_ai_max_395_128gb_vs_mac/
| false | false |
self
| 0 | null |
Multimodal support for llama-server merged in llama.cpp
| 2 |
Multimodal support has [been merged](https://github.com/ggml-org/llama.cpp/commit/33eff4024084d1f0c8441b79f7208a52fad79858) into llama.cpp's `llama-server`!
I'm not 100% certain how comprehensive the support is at this point, but the doc currently lists vision GGUFs (uploaded by the official GGML org) for Gemma 3, SmolVLM, Pixtral, Qwen 2 VL, Qwen 2.5 VL, and Mistral Small 3.1.
For usage, see the [Multimodal.md](https://github.com/ggml-org/llama.cpp/blob/master/docs/multimodal.md) doc. (Basically, you just use the `--mmproj` switch to pass the mmproj file).
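A minimal sketch of the launch command, assuming you have downloaded a Gemma 3 GGUF plus its matching mmproj file (the filenames below are placeholders to adapt):

```bash
# Start llama-server with vision support by pointing --mmproj at the projector file.
./llama-server -m gemma-3-4b-it-Q4_K_M.gguf --mmproj mmproj-gemma-3-4b-it-f16.gguf -c 8192 -ngl 99 --port 8080
```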
I tried it with Gemma 3 and it works. The Web UI now has an icon for attaching an image to a message. Drag-and-drop works too.
Many thanks to ngxson for this massive effort as well as all the testers and reviewers!
| 2025-05-10T06:16:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj3kzk/multimodal_support_for_llamaserver_merged_in/
|
FastDecode1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj3kzk
| false | null |
t3_1kj3kzk
|
/r/LocalLLaMA/comments/1kj3kzk/multimodal_support_for_llamaserver_merged_in/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE.png?width=108&crop=smart&auto=webp&s=0003524c8bd9e7680ed31c9910483e763435a55d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE.png?width=216&crop=smart&auto=webp&s=98ea840b8e4a5b7c0c8d5d6e25e4da3d59bbf90d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE.png?width=320&crop=smart&auto=webp&s=8ec826a8cfcd12810f8f8256627ca3bfa07734ef', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE.png?width=640&crop=smart&auto=webp&s=247fc10f004fbd09a4261022fb90f93c66a6fe53', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE.png?width=960&crop=smart&auto=webp&s=5b158adfade45cec612155711c81575e02b644f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE.png?width=1080&crop=smart&auto=webp&s=aea1a620b5c417120f85b5755cedf5f6d6ca13d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vVEi3aGOq1JQTDRN7IsoPTYCyEFb-idkg9rkw4GGePE.png?auto=webp&s=59c4d7ecf385593a46cf9ff1ad58e7cc22fa27a7', 'width': 1200}, 'variants': {}}]}
|
Some small results from manually testing and getting a feel for the mathematics capability of 7B-8B models
| 1 |
[removed]
| 2025-05-10T06:42:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj3z8r/some_small_results_in_manually_testing_and/
|
Wanderer_bard
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj3z8r
| false | null |
t3_1kj3z8r
|
/r/LocalLLaMA/comments/1kj3z8r/some_small_results_in_manually_testing_and/
| false | false |
self
| 1 | null |
Absolute Zero: Reinforced Self-play Reasoning with Zero Data
| 54 | 2025-05-10T06:53:16 |
https://www.arxiv.org/pdf/2505.03335
|
CortaCircuit
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj44n8
| false | null |
t3_1kj44n8
|
/r/LocalLLaMA/comments/1kj44n8/absolute_zero_reinforced_selfplay_reasoning_with/
| false | false |
default
| 54 | null |
|
How is ROCm support these days - What do you AMD users say?
| 48 |
Hey, since AMD seems to be bringing FSR4 to the 7000 series cards I'm thinking of getting a 7900XTX. It's a great card for gaming (even more so if FSR4 is going to be enabled) and also great to tinker around with local models. I was wondering, are people using ROCm here and how are you using it? Can you do batch inference or are we not there yet? Would be great to hear what your experience is and how you are using it.
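For reference, a rough sketch of one common route — building llama.cpp against ROCm/HIP for a 7900 XTX. The flag names have changed across llama.cpp versions (older builds used LLAMA_HIPBLAS), so treat these as assumptions to check against the current build docs; gfx1100 is the RDNA3 target for the 7900 XTX:

```bash
# Configure and build the HIP backend, then serve a model fully offloaded to the GPU.
cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j
./build/bin/llama-server -m model.gguf -ngl 99
```

For batch inference specifically, vLLM also publishes ROCm builds, though support there tends to lag the CUDA path.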
| 2025-05-10T07:43:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj4utc/how_is_rocm_support_these_days_what_do_you_amd/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj4utc
| false | null |
t3_1kj4utc
|
/r/LocalLLaMA/comments/1kj4utc/how_is_rocm_support_these_days_what_do_you_amd/
| false | false |
self
| 48 | null |
Are AMD cards good yet?
| 6 |
I am new to this stuff. After researching, I have found out that I need around 16 GB of VRAM.
An AMD GPU would cost me half of what an Nvidia GPU would, but some older posts (as well as DeepSeek, when I asked it) said that AMD has limited ROCm support, making it bad for AI models.
I am currently torn between the 4060 Ti, 6900 XT, and 7800 XT.
| 2025-05-10T08:32:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj5j9v/are_amd_cards_good_yet/
|
Excel_Document
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj5j9v
| false | null |
t3_1kj5j9v
|
/r/LocalLLaMA/comments/1kj5j9v/are_amd_cards_good_yet/
| false | false |
self
| 6 | null |
Thinking about hardware for local LLMs? Here's what I built for less than a 5090
| 47 |
Some of you have been asking what kind of hardware to get for running local LLMs. Just wanted to share my current setup:
I’m running a local "supercomputer" with **4 GPUs**:
* **2× RTX 3090**
* **2× RTX 3060**
That gives me a total of **72 GB of VRAM**, for **less than 9000 PLN**.
Compare that to a **single RTX 5090**, which costs **over 10,000 PLN** and gives you **32 GB of VRAM**.
* I can run **32B models in Q8** *easily* on just the two 3090s
* Larger models like **Nemotron 47B** also run smoothly
* I can even run **70B models**
* I can fit the entire **LLaMA 4 Scout in Q4** *fully in VRAM*
* with the new llama-server I can use multiple images in chats and everything works fast
Good luck with your setups
(see my previous posts for photos and benchmarks)
| 2025-05-10T08:48:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj5rd1/thinking_about_hardware_for_local_llms_heres_what/
|
jacek2023
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj5rd1
| false | null |
t3_1kj5rd1
|
/r/LocalLLaMA/comments/1kj5rd1/thinking_about_hardware_for_local_llms_heres_what/
| false | false |
self
| 47 | null |
I run a local LLM on my machine and it's been one of my happiest experiences lately.
| 1 |
[removed]
| 2025-05-10T08:57:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj5vix/i_run_a_local_llm_on_my_machine_and_its_been_one/
|
Omega0Alpha
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj5vix
| false | null |
t3_1kj5vix
|
/r/LocalLLaMA/comments/1kj5vix/i_run_a_local_llm_on_my_machine_and_its_been_one/
| false | false |
self
| 1 | null |
Simple MCP proxy for llama-server WebUI
| 12 |
I (together with Gemini; I started a few months ago, so it spans a few different versions) wrote a fairly robust way to use MCPs with the built-in llama-server WebUI.
Initially I thought of modifying the WebUI code directly, but quickly decided that it was too hard and I wanted something 'soon'. I reused the architecture I deployed for another small project - a Gradio-based WebUI with MCP server support (it never worked as well as I would have liked) - and worked with Gemini to create a Node.js proxy instead of using Python again.
I made it public and made a brand new GitHub account just for this occasion :)
[https://github.com/extopico/llama-server_mcp_proxy.git](https://github.com/extopico/llama-server_mcp_proxy.git)
Further development/contributions are welcome. It is fairly robust in that it can handle tool-calling errors and try something different - it reads the error returned by the tool, so a 'smart' model should be able to make all the tools work, in theory.
It uses the standard Claude Desktop config format.
You need to run llama-server with the --jinja flag to make tool calling more robust.
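A minimal sketch of the server invocation this assumes (the model path is a placeholder):

```bash
# --jinja applies the model's chat template, which makes tool/function calling far more reliable.
./llama-server -m your-model.gguf --jinja -c 16384 -ngl 99 --port 8080
```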
| 2025-05-10T09:12:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj63c7/simple_mcp_proxy_for_llamaserver_webui/
|
extopico
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj63c7
| false | null |
t3_1kj63c7
|
/r/LocalLLaMA/comments/1kj63c7/simple_mcp_proxy_for_llamaserver_webui/
| false | false |
self
| 12 | null |
Evaluating the Quality of Healthcare Assistants
| 0 |
Hey everyone, I wanted to share some insights into evaluating healthcare assistants. If you're building or using AI in healthcare, this might be helpful. Ensuring the quality and reliability of these systems is crucial, especially in high-stakes environments.
**Why This Matters**
Healthcare assistants are becoming an integral part of how patients and clinicians interact. For patients, they offer quick access to medical guidance, while for clinicians, they save time and reduce administrative workload. However, when it comes to healthcare, AI has to be reliable. A single incorrect or unclear response could lead to diagnostic errors, unsafe treatments, or poor patient outcomes.
So, making sure these systems are properly evaluated before they're used in real clinical settings is essential.
**The Setup**
We’re focusing on a clinical assistant that helps with:
* Providing symptom-related medical guidance
* Assisting with medication orders (ensuring they are correct and safe)
The main objectives are to ensure that the assistant:
* Responds clearly and helpfully
* Approves the right drug orders
* Avoids giving incorrect or misleading information
* Functions reliably, with low latency and predictable costs
**Step 1: Set Up a Workflow**
We start by connecting the clinical assistant via an API endpoint. This allows us to test it using real patient queries and see how it responds in practice.
**Step 2: Create a Golden Dataset**
We create a dataset with real patient queries and the expected responses. This dataset serves as a benchmark for the assistant's performance. For example, if a patient asks about symptoms or medication, we check if the assistant suggests the right options and if those suggestions match the expected answers.
**Step 3: Run Evaluations**
This step is all about testing the assistant's quality. We use various evaluation metrics to assess:
* **Output Relevance**: Is the assistant’s response relevant to the query?
* **Clarity**: Is the answer clear and easy to understand?
* **Correctness**: Is the information accurate and reliable?
* **Human Evaluations**: We also include human feedback to double-check that everything makes sense in the medical context.
These evaluations help identify any issues with hallucinations, unclear answers, or factual inaccuracies. We can also check things like response time and costs.
**Step 4: Analyze Results**
After running the evaluations, we get a detailed report showing how the assistant performed across all the metrics. This report helps pinpoint where the assistant might need improvements before it’s used in a real clinical environment.
**Conclusion**
Evaluating healthcare AI assistants is critical to ensuring patient safety and trust. It's not just about ticking off checkboxes; it's about building systems that are reliable, safe, and effective. We’ve built a tool that helps automate and streamline the evaluation of AI assistants, making it easier to integrate feedback and assess performance in a structured way.
If anyone here is working on something similar or has experience with evaluating AI systems in healthcare, I’d love to hear your thoughts on best practices and lessons learned.
| 2025-05-10T09:29:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj6bgk/evaluating_the_quality_of_healthcare_assistants/
|
llamacoded
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj6bgk
| false | null |
t3_1kj6bgk
|
/r/LocalLLaMA/comments/1kj6bgk/evaluating_the_quality_of_healthcare_assistants/
| false | false |
self
| 0 | null |
GGUFs for Absolute Zero models?
| 4 |
Sorry for asking. I would do this myself but I can't at the moment. Can anyone make GGUFs for Absolute Zero models from Andrew Zhao? [https://huggingface.co/andrewzh](https://huggingface.co/andrewzh)
They are Qwen2ForCausalLM so support should be there already in llama.cpp.
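For anyone who does have the bandwidth, a sketch of the usual conversion path, assuming a local clone of llama.cpp and a downloaded copy of one of the andrewzh checkpoints (paths and filenames below are placeholders):

```bash
# Convert the HF checkpoint to an f16 GGUF, then quantize it.
pip install -r llama.cpp/requirements.txt
python llama.cpp/convert_hf_to_gguf.py ./Absolute_Zero_Reasoner-Coder-7b --outfile azr-coder-7b-f16.gguf
./llama.cpp/build/bin/llama-quantize azr-coder-7b-f16.gguf azr-coder-7b-Q4_K_M.gguf Q4_K_M
```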
| 2025-05-10T09:38:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj6fns/ggufs_for_absolute_zero_models/
|
AfternoonOk5482
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj6fns
| false | null |
t3_1kj6fns
|
/r/LocalLLaMA/comments/1kj6fns/ggufs_for_absolute_zero_models/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI.png?width=108&crop=smart&auto=webp&s=f3eae5b29844566694f59c815dde28dc16c0adb9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI.png?width=216&crop=smart&auto=webp&s=589ff8d37e608def5128b74ea984a577cd9cf428', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI.png?width=320&crop=smart&auto=webp&s=aed5e39090feaa4da822781c10d98467615073ca', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI.png?width=640&crop=smart&auto=webp&s=e0d43d238756ba237085134f22f03f7828a88df8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI.png?width=960&crop=smart&auto=webp&s=827a964cc0b6471c27887d2846de44d860ebee91', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI.png?width=1080&crop=smart&auto=webp&s=850840f9495e2ab1472f88159e2bbc8b5d0945a3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ryJww39hf5akIPQEBPqWxzhd7RJA7m7hXO7LZGUK5iI.png?auto=webp&s=691849648de8225ea02c7fc1150b45aaa2ceb042', 'width': 1200}, 'variants': {}}]}
|
AM5 dual GPU motherboard
| 3 |
I'll be buying 2x RTX 5060 Ti 16 GB GPUs which I want to use for running LLMs locally, as well as for training my own (non-LLM) ML models. The board should be AM5, as I'll be pairing it with an R9 9900X CPU which I already have. The RTX 5060 Ti is a PCIe 5.0 x8 card, so I need a board that supports two 5.0 x8 slots. So far I've found that the ASUS ROG STRIX B650E-E board supports this. Are there any other boards I should look at, or is this one enough for me?
| 2025-05-10T09:56:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj6p1y/am5_dual_gpu_motherboard/
|
cybran3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj6p1y
| false | null |
t3_1kj6p1y
|
/r/LocalLLaMA/comments/1kj6p1y/am5_dual_gpu_motherboard/
| false | false |
self
| 3 | null |
Why is adding search functionality so hard?
| 43 |
I installed LM Studio and loaded the Qwen 32B model easily - very impressive to have local reasoning.
However, not having web search really limits the functionality. I've tried to add it using ChatGPT to guide me, and it's had me creating JSON config files and getting various API tokens etc., but nothing seems to work.
My question is why is this seemingly obvious feature so far out of reach?
| 2025-05-10T10:09:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj6vlj/why_is_adding_search_functionality_so_hard/
|
iswasdoes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj6vlj
| false | null |
t3_1kj6vlj
|
/r/LocalLLaMA/comments/1kj6vlj/why_is_adding_search_functionality_so_hard/
| false | false |
self
| 43 | null |
Is the Hailo M.2 module good for running LLMs, or is it a joke?
| 0 |
[removed]
| 2025-05-10T10:09:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj6vlr/does_hailo_module_m2_is_good_for_running_or_its_a/
|
theodiousolivetree
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj6vlr
| false | null |
t3_1kj6vlr
|
/r/LocalLLaMA/comments/1kj6vlr/does_hailo_module_m2_is_good_for_running_or_its_a/
| false | false |
self
| 0 | null |
Searching for single gemini code assist enterprise subscription
| 1 |
[removed]
| 2025-05-10T10:35:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj791u/searching_for_single_gemini_code_assist/
|
Pitiful_Astronaut_93
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj791u
| false | null |
t3_1kj791u
|
/r/LocalLLaMA/comments/1kj791u/searching_for_single_gemini_code_assist/
| false | false |
self
| 1 | null |
Local search with LLM?
| 1 |
[removed]
| 2025-05-10T10:47:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj7fkz/local_search_with_llm/
|
Few-Cat1205
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj7fkz
| false | null |
t3_1kj7fkz
|
/r/LocalLLaMA/comments/1kj7fkz/local_search_with_llm/
| false | false |
self
| 1 | null |
How is the ROCm support on the Radeon 780M?
| 2 |
Has anyone been able to use PyTorch with GPU acceleration on the Radeon 780M iGPU?
| 2025-05-10T10:54:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj7jgz/how_is_the_rocm_support_on_radeon_780m/
|
Relative_Rope4234
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj7jgz
| false | null |
t3_1kj7jgz
|
/r/LocalLLaMA/comments/1kj7jgz/how_is_the_rocm_support_on_radeon_780m/
| false | false |
self
| 2 | null |
AMD eGPU over USB3 for Apple Silicon by Tiny Corp
| 258 | 2025-05-10T10:58:23 |
https://x.com/__tinygrad__/status/1920960070055080107
|
zdy132
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj7l8p
| false | null |
t3_1kj7l8p
|
/r/LocalLLaMA/comments/1kj7l8p/amd_egpu_over_usb3_for_apple_silicon_by_tiny_corp/
| false | false |
default
| 258 |
{'enabled': False, 'images': [{'id': 'aTkBLh2tslT-CdFCjL-Yec8hg6vT3b8MJ2EEBmNegGM', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/2BON-N6TCd_ctm0tqr4moZr228fviTa5r-AUavBUN3Q.jpg?width=108&crop=smart&auto=webp&s=d45d28ebe1835656b469b8a1579e13ad8988844b', 'width': 108}, {'height': 196, 'url': 'https://external-preview.redd.it/2BON-N6TCd_ctm0tqr4moZr228fviTa5r-AUavBUN3Q.jpg?width=216&crop=smart&auto=webp&s=3f4d31adfc728e79f23587ca8fcdac028e2594ff', 'width': 216}, {'height': 290, 'url': 'https://external-preview.redd.it/2BON-N6TCd_ctm0tqr4moZr228fviTa5r-AUavBUN3Q.jpg?width=320&crop=smart&auto=webp&s=864012e53fca3302e3785a3a4c08774a9d318370', 'width': 320}, {'height': 581, 'url': 'https://external-preview.redd.it/2BON-N6TCd_ctm0tqr4moZr228fviTa5r-AUavBUN3Q.jpg?width=640&crop=smart&auto=webp&s=de7825d238122dad4a6420788d2290a151b8da31', 'width': 640}, {'height': 871, 'url': 'https://external-preview.redd.it/2BON-N6TCd_ctm0tqr4moZr228fviTa5r-AUavBUN3Q.jpg?width=960&crop=smart&auto=webp&s=f64297849896dfe15fdca3973530c36151af3597', 'width': 960}], 'source': {'height': 976, 'url': 'https://external-preview.redd.it/2BON-N6TCd_ctm0tqr4moZr228fviTa5r-AUavBUN3Q.jpg?auto=webp&s=d0dcb39c85e79535466f80153f5ddda325b603f9', 'width': 1075}, 'variants': {}}]}
|
|
How to make my PC power efficient?
| 1 |
Hey guys,
I recently started getting into finally using AI agents, and am now hosting a lot of stuff on my desktop: a small server for certain projects, GitHub runners, and now maybe a local LLM. My main concern now is power efficiency and how far my electricity bill will go up. I want my PC to be on 24/7 because I code from my laptop, and at any point in the day I could want to use something from my desktop, whether at home or at school. I'm not sure if this type of feature is already enabled by default, but I used to be a very avid gamer and turned a lot of performance features on, and I'm not sure if that will affect it.
I would like to keep my PC running 24/7, and when the CPU or GPU is not in use, have it sit in a very low power state; as soon as something starts running, it should go back to its normal power. Even just somehow running in CLI mode would be great if that's feasible. Any help is appreciated!
I have an i7-13700KF, a 4070 Ti, and a Gigabyte Z790 Gaming X, in case there are some settings specific to this hardware.
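A small sketch of one easy lever on the GPU side - capping the board power limit and checking the idle state with nvidia-smi (the wattage below is illustrative, not tuned for a 4070 Ti):

```bash
sudo nvidia-smi -pm 1        # enable persistence mode so the limit sticks while the box stays up
sudo nvidia-smi -pl 200      # cap the board power limit, in watts
nvidia-smi --query-gpu=power.draw,pstate --format=csv   # check idle draw and performance state
```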
| 2025-05-10T11:17:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj7w6m/how_to_make_my_pc_power_efficient/
|
thighsqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj7w6m
| false | null |
t3_1kj7w6m
|
/r/LocalLLaMA/comments/1kj7w6m/how_to_make_my_pc_power_efficient/
| false | false |
self
| 1 | null |
Building a local system
| 1 |
Hi everybody
I'd like to build a local system with the following elements:
* A good model for PDF -> Markdown tasks, basically being able to read pages with images, using an LLM for that. On the cloud I use Gemini 2.0 Flash and Mistral OCR for that task. My current workflow is this: I send one page with the text content, all images contained in the page, and one screenshot of the page. Everything is passed to an LLM with multimodal support with a system prompt to generate the MD (generator node), then checked by a critic.
* A model used to do the actual work. I won't use a RAG-like architecture; instead I usually feed the model the whole document, so I need a large context - something like 128k. Ideally I'd like to use a quantized version (Q4?) of Qwen3-30B-A3B.
This system won't be used by more than 2 persons at any given time. However we might have to parse large volume of documents. And I've been building agentic systems for the last 2 years, so no worries on that side.
I'm thinking about buying 2 mac mini and 1 mac studio for that. Metal provides memory + low electricity consumption. My plan would be something like that:
* 1 Mac mini, minimal specs to host the web server, postgres, redis, etc.
* 1 Mac mini, unknown specs to host the OCR model.
* 1 Mac studio for the Q3-30B-A3B instance.
I don't have infinite budget, so I won't go for the full spec mac studio. My questions are these:
1. What would be considered the SOTA for the OCR-like LLM, and what would be good alternatives? By good I mean a slight drop in accuracy but with better speed and a smaller memory footprint.
2. What would the specs need to be to get decent performance, like 20 t/s?
3. For the Qwen3-30B-A3B instance, what would the time to first token be with a large context? I'm a bit worried about this because my understanding is that, while Metal provides plenty of memory and can fit large models, it isn't so good on TTFT - or is my understanding completely outdated?
4. What would the memory footprint be for a 128k context with Qwen3-30B-A3B?
5. Is YaRN still the SOTA approach for large context sizes? (see the sketch after this list)
6. Is there a real difference between the different versions of the M4 Pro and Max? I mean between an M4 Pro with 10 CPU cores/10 GPU cores and one with 12 CPU cores/16 GPU cores? A Max with 14 CPU cores/32 GPU cores vs 16 CPU cores/40 GPU cores?
7. Is there anybody here who built a similar system and would like to share their experience?
Thanks in advance !
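On question 5, a hedged sketch: llama.cpp exposes YaRN-style RoPE scaling via `--rope-scaling`. The exact values depend on the model card (Qwen3-30B-A3B is commonly run with a 32k native window extended towards 128k), so treat the numbers and the filename below as placeholders to verify:

```bash
# Extend the context window with YaRN scaling while keeping the model fully offloaded.
./llama-server -m Qwen3-30B-A3B-Q4_K_M.gguf -c 131072 --rope-scaling yarn --yarn-orig-ctx 32768 -ngl 99 -fa
```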
| 2025-05-10T11:19:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj7xjz/building_a_local_system/
|
IlEstLaPapi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj7xjz
| false | null |
t3_1kj7xjz
|
/r/LocalLLaMA/comments/1kj7xjz/building_a_local_system/
| false | false |
self
| 1 | null |
Statistical analysis tool like vizly.fyi but local?
| 0 |
I'm a research assistant and found this tool.
It makes statistical analysis and visualization so easy, but I'd like to keep all my files on my university server.
I'd like to ask whether you folks know of anything close to [vizly.fyi](http://vizly.fyi) that runs locally?
It's awesome that it also uses R. Hopefully there are some open-source alternatives.
| 2025-05-10T11:23:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj7zoo/statistical_analysis_tool_like_vizlyfyi_but_local/
|
gounesh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj7zoo
| false | null |
t3_1kj7zoo
|
/r/LocalLLaMA/comments/1kj7zoo/statistical_analysis_tool_like_vizlyfyi_but_local/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'gyK_2hnnK5i-yPkWx2eT3htTt9n_JtqtMoVap3P_RG0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/gyK_2hnnK5i-yPkWx2eT3htTt9n_JtqtMoVap3P_RG0.png?width=108&crop=smart&auto=webp&s=2de00678b834816cf5e165d9c64744637f99ca3c', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/gyK_2hnnK5i-yPkWx2eT3htTt9n_JtqtMoVap3P_RG0.png?width=216&crop=smart&auto=webp&s=d77a87ccc9a7628c7f7fc45f182e11d834548677', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/gyK_2hnnK5i-yPkWx2eT3htTt9n_JtqtMoVap3P_RG0.png?width=320&crop=smart&auto=webp&s=7a0fd095744e3dd481d40c1b5ca70d750703bf85', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/gyK_2hnnK5i-yPkWx2eT3htTt9n_JtqtMoVap3P_RG0.png?width=640&crop=smart&auto=webp&s=5475e485df24a8ba82aa873a48bbf8a4b734cb97', 'width': 640}], 'source': {'height': 418, 'url': 'https://external-preview.redd.it/gyK_2hnnK5i-yPkWx2eT3htTt9n_JtqtMoVap3P_RG0.png?auto=webp&s=887263f5533b2515dcdd939171b1785aa97895a9', 'width': 800}, 'variants': {}}]}
|
Collaborative AI token generation pool with unlimited inference
| 2 |
I was asked once: “why not have a place where people can pool their compute for token generation and be rewarded for it?”. I thought it was a good idea, so I built CoGen AI: https://cogenai.kalavai.net
Thoughts?
Disclaimer: I’m the creator of Kalavai and CoGen AI. I love this space and I think we can do better than relying on third party services for our AI when our local machines won’t do. I believe WE can be our own AI provider. This is my baby step towards that. Many more to follow.
| 2025-05-10T11:36:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj875c/collaborative_ai_token_generation_pool_with/
|
Good-Coconut3907
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj875c
| false | null |
t3_1kj875c
|
/r/LocalLLaMA/comments/1kj875c/collaborative_ai_token_generation_pool_with/
| false | false |
self
| 2 | null |
ManaBench: A Novel Reasoning Benchmark Based on MTG Deck Building
| 74 |
I'm excited to share a new benchmark I've developed called **ManaBench**, which tests LLM reasoning abilities using Magic: The Gathering deck building as a proxy.
# What is ManaBench?
ManaBench evaluates an LLM's ability to reason about complex systems by presenting a simple but challenging task: given a 59-card MTG deck, select the most suitable 60th card from six options.
This isn't about memorizing card knowledge - all the necessary information (full card text and rules) is provided in the prompt. It's about reasoning through complex interactions, understanding strategic coherence, and making optimal choices within constraints.
# Why it's a good benchmark:
1. **Strategic reasoning**: Requires understanding deck synergies, mana curves, and card interactions
2. **System optimization**: Tests ability to optimize within resource constraints
3. **Expert-aligned**: The "correct" answer is the card that was actually in the human-designed tournament deck
4. **Hard to game**: Large labs are unlikely to optimize for this task and the questions are private
# Results for Local Models vs Cloud Models
[ManaBench Leaderboard](https://preview.redd.it/adlxg53bxxze1.png?width=1065&format=png&auto=webp&s=39c1fe2aff1b4a5906b11bbd112d1bc53706b544)
# Looking at these results, several interesting patterns emerge:
* **Llama models underperform expectations**: Despite their strong showing on many standard benchmarks, Llama 3.3 70B scored only 19.5% (just above random guessing at 16.67%), and Llama 4 Maverick hit only 26.5%
* **Closed models dominate**: o3 leads the pack at 63%, followed by Claude 3.7 Sonnet at 49.5%
* **Performance correlates with but differentiates better than LMArena scores**: Notice how the spread between models is much wider on ManaBench
[ManaBench vs LMArena](https://preview.redd.it/b3zyiwuoxxze1.png?width=814&format=png&auto=webp&s=21d07b7fdad90b4fe3eb16b860f14617b3872fa0)
# What This Means for Local Model Users
If you're running models locally and working on tasks that require complex reasoning (like game strategy, system design, or multi-step planning), these results suggest that current open models may struggle more than benchmarks like MATH or LMArena would indicate.
This isn't to say local models aren't valuable - they absolutely are! But it's useful to understand their relative strengths and limitations compared to cloud alternatives.
# Looking Forward
I'm curious if these findings match your experiences. The current leaderboard aligns very well with my results using many of these models personally.
For those interested in the technical details, my [full writeup](https://boggs.tech/posts/evaluating-llm-reasoning-with-mtg-deck-building/) goes deeper into the methodology and analysis.
*Note: The specific benchmark questions are not being publicly released to prevent contamination of future training data. If you are a researcher and would like access, please reach out.*
| 2025-05-10T11:40:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj89gq/manabench_a_novel_reasoning_benchmark_based_on/
|
Jake-Boggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj89gq
| false | null |
t3_1kj89gq
|
/r/LocalLLaMA/comments/1kj89gq/manabench_a_novel_reasoning_benchmark_based_on/
| false | false | 74 |
{'enabled': False, 'images': [{'id': 'z_Ta6BgN-0E4xjWqloxN8S0IMfl-GG_lbgHPaHjOU5s', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/z_Ta6BgN-0E4xjWqloxN8S0IMfl-GG_lbgHPaHjOU5s.png?width=108&crop=smart&auto=webp&s=0880a769a0c5f5fda2ffa667e9e518b67bfeea85', 'width': 108}, {'height': 89, 'url': 'https://external-preview.redd.it/z_Ta6BgN-0E4xjWqloxN8S0IMfl-GG_lbgHPaHjOU5s.png?width=216&crop=smart&auto=webp&s=ca9ec097742896b0faec042fb6d1ff6026adb315', 'width': 216}, {'height': 131, 'url': 'https://external-preview.redd.it/z_Ta6BgN-0E4xjWqloxN8S0IMfl-GG_lbgHPaHjOU5s.png?width=320&crop=smart&auto=webp&s=047bdd935adf34165f990a7c787f46bdd44d3cce', 'width': 320}, {'height': 263, 'url': 'https://external-preview.redd.it/z_Ta6BgN-0E4xjWqloxN8S0IMfl-GG_lbgHPaHjOU5s.png?width=640&crop=smart&auto=webp&s=c9fd95a896e5e0a2b1e6245ca34120093288ab9d', 'width': 640}, {'height': 395, 'url': 'https://external-preview.redd.it/z_Ta6BgN-0E4xjWqloxN8S0IMfl-GG_lbgHPaHjOU5s.png?width=960&crop=smart&auto=webp&s=22c1d377b3ea19df47fd07b2d1a1ed7a0b2ff74a', 'width': 960}], 'source': {'height': 439, 'url': 'https://external-preview.redd.it/z_Ta6BgN-0E4xjWqloxN8S0IMfl-GG_lbgHPaHjOU5s.png?auto=webp&s=78a98bc6c68f1d5632af65f05b08be6524414010', 'width': 1065}, 'variants': {}}]}
|
|
Is there something like Lovable / Bolt / Replit but for mobile applications?
| 3 |
Now there will be.
We are participating in next week's AI hackathon, and that's exactly what we are going to build.
A no-code builder, but for Android/iOS. Imagine building the app directly on your smartphone, only by using prompts.
We would like to gather everyone who is interested in this project into a community, share the progress with them, and get feedback while building it. Also, please share in the comments whether you would ever use such a service.
Thank you all in advance :)
| 2025-05-10T12:14:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj8v50/is_there_something_like_lovable_bolt_replit_but/
|
sickleRunner
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj8v50
| false | null |
t3_1kj8v50
|
/r/LocalLLaMA/comments/1kj8v50/is_there_something_like_lovable_bolt_replit_but/
| false | false |
self
| 3 | null |
Increasing knowledge in LLMs using Absolute Zero
| 1 |
[removed]
| 2025-05-10T12:36:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj99wb/increasing_knowledge_in_llms_using_absolute_zero/
|
kekePower
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj99wb
| false | null |
t3_1kj99wb
|
/r/LocalLLaMA/comments/1kj99wb/increasing_knowledge_in_llms_using_absolute_zero/
| false | false |
self
| 1 | null |
(Dual?) 5060Ti 16gb or 3090 for gaming+ML?
| 0 |
What’s the better option? I’m limited by a workstation with a non-ATX PSU that only has two PCIe 8-pin power cables. Therefore, I can’t get enough watts into a 4090, even though the PSU is 1000 W (the 4090 requires three 8-pin inputs).
The 5060 Ti 16 GB looks pretty decent, with only one 8-pin power input. I can throw two into the machine if needed. Otherwise, I can do the 3090 (which has two 8-pin inputs) with a cheap second GPU that doesn't need PSU power (1650? A2000?).
What’s the better option?
| 2025-05-10T12:40:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj9cip/dual_5060ti_16gb_or_3090_for_gamingml/
|
jaxchang
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj9cip
| false | null |
t3_1kj9cip
|
/r/LocalLLaMA/comments/1kj9cip/dual_5060ti_16gb_or_3090_for_gamingml/
| false | false |
self
| 0 | null |
Dual AMD Mi50 Inference and Benchmarks
| 1 |
[removed]
| 2025-05-10T12:43:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj9eqs/dual_amd_mi50_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj9eqs
| false | null |
t3_1kj9eqs
|
/r/LocalLLaMA/comments/1kj9eqs/dual_amd_mi50_inference_and_benchmarks/
| false | false |
self
| 1 | null |
Dual AMD Mi50 Inference and Benchmarks
| 1 |
[removed]
| 2025-05-10T12:57:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj9odq/dual_amd_mi50_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj9odq
| false | null |
t3_1kj9odq
|
/r/LocalLLaMA/comments/1kj9odq/dual_amd_mi50_inference_and_benchmarks/
| false | false |
self
| 1 | null |
Where can I find a list of publicly available AI models?
| 1 |
[removed]
| 2025-05-10T12:58:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj9p7p/where_can_i_find_a_list_of_publicly_available_ai/
|
hsnk42
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj9p7p
| false | null |
t3_1kj9p7p
|
/r/LocalLLaMA/comments/1kj9p7p/where_can_i_find_a_list_of_publicly_available_ai/
| false | false |
self
| 1 | null |
Suggestion
| 0 |
I only have one GPU with 8 GB of VRAM, plus 32 GB of RAM. Suggest the best local model.
| 2025-05-10T13:06:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj9vf6/suggestion/
|
blackkksparx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj9vf6
| false | null |
t3_1kj9vf6
|
/r/LocalLLaMA/comments/1kj9vf6/suggestion/
| false | false |
self
| 0 | null |
Where can I find a list of publicly available AI models?
| 1 |
[removed]
| 2025-05-10T13:08:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj9wnl/where_can_i_find_a_list_of_publicly_available_ai/
|
hsnk42
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj9wnl
| false | null |
t3_1kj9wnl
|
/r/LocalLLaMA/comments/1kj9wnl/where_can_i_find_a_list_of_publicly_available_ai/
| false | false |
self
| 1 | null |
Increasing knowledge in LLMs using Absolute Zero
| 1 |
[removed]
| 2025-05-10T13:11:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kj9z31/increasing_knowledge_in_llms_using_absolute_zero/
|
kekePower
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kj9z31
| false | null |
t3_1kj9z31
|
/r/LocalLLaMA/comments/1kj9z31/increasing_knowledge_in_llms_using_absolute_zero/
| false | false |
self
| 1 | null |
128GB DDR4, 2950x CPU, 1x3090 24gb Qwen3-235B-A22B-UD-Q3_K_XL 7Tokens/s
| 81 |
I wanted to share this in case it helps others with only 24 GB of VRAM: this is what I had to send to RAM in order to use almost all of my 24 GB. If you have suggestions for increasing the prompt processing speed, please share :) I get approx. 12 tok/s.
This is the expression used: `-ot "blk\.(?:[7-9]|[1-9][0-8])\.ffn.*=CPU"`
and this is my whole command:
`./llama-cli -m ~/ai/models/unsloth_Qwen3-235B-A22B-UD-Q3_K_XL-GGUF/Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf -ot "blk\.(?:[7-9]|[1-9][0-8])\.ffn.*=CPU" -c 16384 -n 16384 --prio 2 --threads 20 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 --color -if -ngl 99 -fa`
My DDR4 runs at 1933 MT/s and the CPU is an AMD 2950X.
| 2025-05-10T13:33:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjaf6b/128gb_ddr4_2950x_cpu_1x3090_24gb/
|
ciprianveg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjaf6b
| false | null |
t3_1kjaf6b
|
/r/LocalLLaMA/comments/1kjaf6b/128gb_ddr4_2950x_cpu_1x3090_24gb/
| false | false |
self
| 81 | null |
NVIDIA N1X and N1 SoC for desktop and laptop PCs expected to debut at Computex
| 3 | 2025-05-10T13:41:39 |
https://videocardz.com/newz/nvidia-n1x-and-n1-soc-for-desktop-and-laptop-pcs-expected-to-debut-at-computex
|
Mochila-Mochila
|
videocardz.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjakqx
| false | null |
t3_1kjakqx
|
/r/LocalLLaMA/comments/1kjakqx/nvidia_n1x_and_n1_soc_for_desktop_and_laptop_pcs/
| false | false |
default
| 3 | null |
|
Using llama.cpp-vulkan on an AMD GPU? You can finally use FlashAttention!
| 112 |
It might be a year late, but the Vulkan FA implementation was merged into llama.cpp just a few hours ago. It works! And I'm happy to double the context size thanks to Q8 KV cache quantization.
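A minimal sketch of the relevant flags (the model path is a placeholder); these are the same switches used elsewhere in this thread dump:

```bash
# -fa enables FlashAttention; -ctk/-ctv q8_0 quantize the KV cache, roughly halving its memory use.
./llama-server -m model.gguf -ngl 99 -fa -ctk q8_0 -ctv q8_0 -c 32768
```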
| 2025-05-10T14:14:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjb9zs/using_llamacppvulkan_on_an_amd_gpu_you_can/
|
ParaboloidalCrest
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjb9zs
| false | null |
t3_1kjb9zs
|
/r/LocalLLaMA/comments/1kjb9zs/using_llamacppvulkan_on_an_amd_gpu_you_can/
| false | false |
self
| 112 | null |
How do I use axolotl to fine-tune an AI model?
| 1 |
[removed]
| 2025-05-10T14:42:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjbvkt/how_do_i_use_axolotl_to_finetune_an_ai_model/
|
davidsula
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjbvkt
| false | null |
t3_1kjbvkt
|
/r/LocalLLaMA/comments/1kjbvkt/how_do_i_use_axolotl_to_finetune_an_ai_model/
| false | false |
self
| 1 | null |
How do I use axolotl to fine-tune an AI model ( I'm clueless )
| 1 |
[removed]
| 2025-05-10T14:45:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjbyak/how_do_i_use_axolotl_to_finetune_an_ai_model_im/
|
DoorWeary465
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjbyak
| false | null |
t3_1kjbyak
|
/r/LocalLLaMA/comments/1kjbyak/how_do_i_use_axolotl_to_finetune_an_ai_model_im/
| false | false |
self
| 1 | null |
New hallucination detector - UQLM: Uncertainty Quantification for Language Models
| 1 | 2025-05-10T15:17:24 |
https://old.reddit.com/r/MachineLearning/comments/1kij30g/p_uqlm_uncertainty_quantification_for_language/
|
AppearanceHeavy6724
|
old.reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjcnfs
| false | null |
t3_1kjcnfs
|
/r/LocalLLaMA/comments/1kjcnfs/new_hallucination_detector_uqlm_uncertainty/
| false | false |
default
| 1 | null |
|
I started generating regular summaries of r/LocalLLaMA
| 1 |
[removed]
| 2025-05-10T15:40:36 |
https://www.reddit.com/gallery/1kjd63z
|
Zogid
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjd63z
| false | null |
t3_1kjd63z
|
/r/LocalLLaMA/comments/1kjd63z/i_started_generating_regular_summaries_of/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'YiFbvKyY1-VEKgTsgmpDL6t-0xrPh9qpDAM2G3-39Wc', 'resolutions': [{'height': 116, 'url': 'https://external-preview.redd.it/YiFbvKyY1-VEKgTsgmpDL6t-0xrPh9qpDAM2G3-39Wc.png?width=108&crop=smart&auto=webp&s=cc261c602093509e52941dced83e08004a3f9ce5', 'width': 108}, {'height': 232, 'url': 'https://external-preview.redd.it/YiFbvKyY1-VEKgTsgmpDL6t-0xrPh9qpDAM2G3-39Wc.png?width=216&crop=smart&auto=webp&s=9aa1adf923e1ff5efed04ede0ad6e0110f39f6a8', 'width': 216}, {'height': 344, 'url': 'https://external-preview.redd.it/YiFbvKyY1-VEKgTsgmpDL6t-0xrPh9qpDAM2G3-39Wc.png?width=320&crop=smart&auto=webp&s=738535b8af6bcd5a07aa70fcdad741e4741a0171', 'width': 320}, {'height': 689, 'url': 'https://external-preview.redd.it/YiFbvKyY1-VEKgTsgmpDL6t-0xrPh9qpDAM2G3-39Wc.png?width=640&crop=smart&auto=webp&s=b99ee44ee86fd771c92dd63bbe72ab52cef9124a', 'width': 640}], 'source': {'height': 815, 'url': 'https://external-preview.redd.it/YiFbvKyY1-VEKgTsgmpDL6t-0xrPh9qpDAM2G3-39Wc.png?auto=webp&s=83ffa5aec4ecb7794fd899aed7bf368a38fe8003', 'width': 757}, 'variants': {}}]}
|
|
Absolute_Zero_Reasoner-Coder-14b / 7b / 3b
| 111 | 2025-05-10T15:44:01 |
https://huggingface.co/collections/andrewzh/absolute-zero-reasoner-68139b2bca82afb00bc69e5b
|
AaronFeng47
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjd8tg
| false | null |
t3_1kjd8tg
|
/r/LocalLLaMA/comments/1kjd8tg/absolute_zero_reasonercoder14b_7b_3b/
| false | false |
default
| 111 |
{'enabled': False, 'images': [{'id': 'c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec.png?width=108&crop=smart&auto=webp&s=a42404e1e42515a128f15ca9782ea2bc055e0000', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec.png?width=216&crop=smart&auto=webp&s=ff99ee919664bceb34cb9db5fdbee683ab8c53fd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec.png?width=320&crop=smart&auto=webp&s=ee8a608644ad46995fa8a75b892d7b54b11018ec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec.png?width=640&crop=smart&auto=webp&s=c6db7591f54489669f2ba77fd228c4034fbc9225', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec.png?width=960&crop=smart&auto=webp&s=bf96057629a0d8e8d7ddac89ca7cf0d7590cce91', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec.png?width=1080&crop=smart&auto=webp&s=6b0aa7edaec4ff3250751c7ece817b8ed32385cc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/c2vPUFXKhvD_gZRfZocGG6ne7L_maCxsQIvkq5lx_Ec.png?auto=webp&s=f8e66eeb9ca3eaf28a144718e3bc28b23a272bc1', 'width': 1200}, 'variants': {}}]}
|
|
Gemma 3-27B-IT Q4KXL - Vulkan Performance & Multi-GPU Layer Distribution - Seeking Advice!
| 1 |
Hey everyone,
I'm experimenting with llama.cpp and Vulkan, and I'm getting around 36.6 tokens/s with the gemma3-27b-it-q4kxl.gguf model using these parameters:
llama-server -m gemma3-27b-it-q4kxl.gguf --host 0.0.0.0 --port 8082 -ctv q8_0 -ctk q8_0 -fa --numa distribute --no-mmap --gpu-layers 990 -C 4000 --tensor-split 24,0,0
However, when I try to distribute the layers across my GPUs using --tensor-split values like 24,24,0 or 24,24,16, I see a decrease in performance.
I'm hoping to optimally offload layers to each GPU for the fastest possible inference speed. My setup is:
GPUs: 2x Radeon RX 7900 XTX + 1x Radeon RX 7800 XT
CPU: Ryzen 7 7700X
RAM: 128GB (4x32GB DDR5 4200MHz)
Is it possible to effectively utilize all three GPUs with llama.cpp and Vulkan, and if so, what `--tensor-split` configuration (or `-ot` override) would you recommend? Are there other parameters I should consider adjusting? Any insights or suggestions would be greatly appreciated!
| 2025-05-10T15:52:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjdg0c/gemma_327bit_q4kxl_vulkan_performance_multigpu/
|
djdeniro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjdg0c
| false | null |
t3_1kjdg0c
|
/r/LocalLLaMA/comments/1kjdg0c/gemma_327bit_q4kxl_vulkan_performance_multigpu/
| false | false |
self
| 1 | null |
Looking for a good NSFW LLM for story writing
| 1 |
[removed]
| 2025-05-10T16:34:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjecx6/looking_for_good_nfsw_llm_for_story_writing/
|
ClarieObscur
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjecx6
| false | null |
t3_1kjecx6
|
/r/LocalLLaMA/comments/1kjecx6/looking_for_good_nfsw_llm_for_story_writing/
| false | false |
nsfw
| 1 | null |
Mac OS Host + Multi User Local Network options?
| 6 |
I have an Ollama + Open WebUI setup and had been using it for a good while before I moved to macOS for hosting. Now I want to use MLX. I was hoping Ollama would add MLX support, but it hasn't happened yet as far as I can tell (if I am wrong, let me know).
So I went to use LM Studio for local hosting, which I am not a huge fan of. I have of course heard of llama.cpp being able to use MLX through some options available to its users, but it seems a bit more complicated. I am willing to learn, but is that the only option for multi-user local hosting (on a Mac Studio) with MLX support?
Any recommendations for other options or guides to get llama.cpp+MLX+model swap working? Model swap is sorta optional but would really like to have it.
| 2025-05-10T16:39:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjegr0/mac_os_host_multi_user_local_network_options/
|
Shouldhaveknown2015
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjegr0
| false | null |
t3_1kjegr0
|
/r/LocalLLaMA/comments/1kjegr0/mac_os_host_multi_user_local_network_options/
| false | false |
self
| 6 | null |
Anyone here with a 50 series using GTX card for physx and VRAM?
| 1 |
Given that RTX 50 series no longer supports 32 bit physx, it seems to be common for 50 series owners to also insert a GTX card to play these older games. Is anyone here also using this for additional VRAM for stuff like llama.cpp? If so, how is the performance, and how well does it combine with MoE models (like Qwen 3 30b MoE)?
I'm mainly curious because I got a 5060 Ti 16gb and gave the 3060 Ti to my brother, but now I also got my hands on his GTX 1060 6GB (totalling 22GB VRAM), but now I have to wait for a 6 pin extension cord, since the pcie pins are on opposite sides on each card, and they designed the two 8 pins to be used with a single GPU, and now I'm curious about others' experience with this set-up.
| 2025-05-10T17:08:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjf4li/anyone_here_with_a_50_series_using_gtx_card_for/
|
pneuny
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjf4li
| false | null |
t3_1kjf4li
|
/r/LocalLLaMA/comments/1kjf4li/anyone_here_with_a_50_series_using_gtx_card_for/
| false | false |
self
| 1 | null |
Has anyone else noticed that some official White House videos look like they were AI-generated!?
| 1 |
[removed]
| 2025-05-10T17:57:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjg76b/has_anyone_else_noticed_that_some_official_white/
|
B89983ikei
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjg76b
| false | null |
t3_1kjg76b
|
/r/LocalLLaMA/comments/1kjg76b/has_anyone_else_noticed_that_some_official_white/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Fn9TKtr_fCySQvtxuAwxQrlpPo0CumMGHM4ycw5kTxU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Fn9TKtr_fCySQvtxuAwxQrlpPo0CumMGHM4ycw5kTxU.jpeg?width=108&crop=smart&auto=webp&s=93f7579f1368a99bb419aefdf61a502ce087e7d2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Fn9TKtr_fCySQvtxuAwxQrlpPo0CumMGHM4ycw5kTxU.jpeg?width=216&crop=smart&auto=webp&s=3e5131d9bc813469b5448a0a3702a47a4a6f5ef8', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Fn9TKtr_fCySQvtxuAwxQrlpPo0CumMGHM4ycw5kTxU.jpeg?width=320&crop=smart&auto=webp&s=029b4489a94d86d630cb8a87832eb187a3f63f0e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Fn9TKtr_fCySQvtxuAwxQrlpPo0CumMGHM4ycw5kTxU.jpeg?auto=webp&s=cb23eccecbeb59622a90036b2e395270f1652750', 'width': 480}, 'variants': {}}]}
|
For such a small model, Qwen 3 8b is excellent! With 2 short prompts it made a playable HTML keyboard for me! This is the Q6_K Quant.
| 1 | 2025-05-10T18:25:57 |
https://v.redd.it/ov33exzxyzze1
|
c64z86
|
/r/LocalLLaMA/comments/1kjguy7/for_such_a_small_model_qwen_3_8b_is_excellent/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjguy7
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ov33exzxyzze1/DASHPlaylist.mpd?a=1749623163%2COWQyZjJjMTQ5ODMxNGY4MDJhMWMzY2E2ZmU1Yjg5NzY0ZmQ5MzE1NzA2MGE5ZGQ2YzAzMDJlYjlhODdjZWM4Mg%3D%3D&v=1&f=sd', 'duration': 158, 'fallback_url': 'https://v.redd.it/ov33exzxyzze1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ov33exzxyzze1/HLSPlaylist.m3u8?a=1749623163%2CZTQyOWI0ZDA2YzM5OTA3M2I0NDllYzQ2MDhiM2Y4NGYwYjFkZDI2NjhmYzlhZDE2ZmRmNDU4OThlMzM0YWE2OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ov33exzxyzze1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kjguy7
|
/r/LocalLLaMA/comments/1kjguy7/for_such_a_small_model_qwen_3_8b_is_excellent/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp.png?width=108&crop=smart&format=pjpg&auto=webp&s=bae12600d4f6db3c1dbb03cc81f272ea9cf02bf0', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp.png?width=216&crop=smart&format=pjpg&auto=webp&s=38c707305a682298561bb5ef223a6d94239c7311', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp.png?width=320&crop=smart&format=pjpg&auto=webp&s=c77d645f3e6437d9611cb3322b8541db931ae8ba', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp.png?width=640&crop=smart&format=pjpg&auto=webp&s=eb12ab706b24103e95fbcad29bcfc8900a57e2a9', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp.png?width=960&crop=smart&format=pjpg&auto=webp&s=ea6d85ea6732d9164a33f238ceb9ff40f7951676', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp.png?width=1080&crop=smart&format=pjpg&auto=webp&s=12891c09441b1ba3d664c5bcbab240920dc4c889', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/aHAxNzBiMHl5enplMUl9aqzqB1StkjPS3lLtemht0XIhJxT6uUrkpF4qlemp.png?format=pjpg&auto=webp&s=957a77188caa02de68c4eba5ec2bf883ad0910a4', 'width': 1920}, 'variants': {}}]}
|
||
For such a small model, Qwen 3 8b is excellent! With 2 short prompts it made a playable HTML keyboard for me! This is the Q6_K Quant.
| 42 | 2025-05-10T18:26:48 |
https://www.youtube.com/watch?v=Jda1Z40Xcfs
|
c64z86
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjgvm8
| false |
{'oembed': {'author_name': 'Robert Gordon', 'author_url': 'https://www.youtube.com/@robertgordon103', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Jda1Z40Xcfs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Qwen 3 8b can make instruments!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/Jda1Z40Xcfs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Qwen 3 8b can make instruments!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kjgvm8
|
/r/LocalLLaMA/comments/1kjgvm8/for_such_a_small_model_qwen_3_8b_is_excellent/
| false | false |
default
| 42 |
{'enabled': False, 'images': [{'id': 'yC8tUfmmqcWrWbYfoMczWi5TEVmr7UUYVX_Lfvwk9qM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/yC8tUfmmqcWrWbYfoMczWi5TEVmr7UUYVX_Lfvwk9qM.jpeg?width=108&crop=smart&auto=webp&s=21d6a07fb4ddcaedb8af6a75198b0b76ecc6464a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/yC8tUfmmqcWrWbYfoMczWi5TEVmr7UUYVX_Lfvwk9qM.jpeg?width=216&crop=smart&auto=webp&s=8f1258ad836f86a5d0d8f9337b96f8b06779b422', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/yC8tUfmmqcWrWbYfoMczWi5TEVmr7UUYVX_Lfvwk9qM.jpeg?width=320&crop=smart&auto=webp&s=6ba7786134790ce98dbb7a8be7c736175837add8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/yC8tUfmmqcWrWbYfoMczWi5TEVmr7UUYVX_Lfvwk9qM.jpeg?auto=webp&s=652a15b56f0dadaf2330bdde3d7222bf92f7c31d', 'width': 480}, 'variants': {}}]}
|
|
Best backend for the qwen3 moe models
| 8 |
Hello, I've heard in passing that there are a bunch of backend solutions by now that focus on MoE and greatly help improve performance when you have to split between CPU and GPU. I want to set up a small inference machine for my family, thinking about Qwen3 30B MoE. I am aware that it is light on compute anyway, but I was wondering if there are any backends that help to optimize it further?
Looking for something running a 3060 and a bunch of RAM on a Xeon platform with quad-channel memory and roughly 128-256GB of RAM. I want to serve up to 4 concurrent users and have them be able to use a decent context size, around 16-32k.
| 2025-05-10T18:31:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjgyzp/best_backend_for_the_qwen3_moe_models/
|
Noxusequal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjgyzp
| false | null |
t3_1kjgyzp
|
/r/LocalLLaMA/comments/1kjgyzp/best_backend_for_the_qwen3_moe_models/
| false | false |
self
| 8 | null |
Specific domains - methodology
| 7 |
Is there consensus on how to get very strong LLMs in specific domains?
Think law or financial analysis or healthcare - applications where an LLM will ingest case data and then try to write a defense for it / diagnose it / underwrite it.
Do people fine tune on high quality past data within the domain? Has anyone tried doing RL on multiple choice questions within the domain?
I’m interested in local LLMs - as I don’t want data going to third party providers.
| 2025-05-10T18:37:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjh4di/specific_domains_methodology/
|
Hemlock_Snores
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjh4di
| false | null |
t3_1kjh4di
|
/r/LocalLLaMA/comments/1kjh4di/specific_domains_methodology/
| false | false |
self
| 7 | null |
Just create a front end for my local chat use cases
| 1 |
[removed]
| 2025-05-10T18:57:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjhjlv/just_create_a_front_end_for_my_local_chat_use/
|
Desperate_Rub_1352
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjhjlv
| false | null |
t3_1kjhjlv
|
/r/LocalLLaMA/comments/1kjhjlv/just_create_a_front_end_for_my_local_chat_use/
| false | false |
self
| 1 | null |
I have a code to remove ChatGPT from the system
| 1 |
[removed]
| 2025-05-10T19:25:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kji6cj/i_have_a_code_to_remove_chatgpt_from_the_system/
|
Ali-eg-2256
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kji6cj
| false | null |
t3_1kji6cj
|
/r/LocalLLaMA/comments/1kji6cj/i_have_a_code_to_remove_chatgpt_from_the_system/
| false | false |
nsfw
| 1 | null |
How to Build an AI Chatbot That Can Help Users Develop Apps in a Low-Code/No-Code Platform?
| 1 |
[removed]
| 2025-05-10T19:25:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kji6ft/how_to_build_an_ai_chatbot_that_can_help_users/
|
Equal-Addition-8099
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kji6ft
| false | null |
t3_1kji6ft
|
/r/LocalLLaMA/comments/1kji6ft/how_to_build_an_ai_chatbot_that_can_help_users/
| false | false |
self
| 1 | null |
How to Build an AI Chatbot That Can Help Users Develop Apps in a Low-Code/No-Code Platform?
| 1 |
[removed]
| 2025-05-10T19:27:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kji7zw/how_to_build_an_ai_chatbot_that_can_help_users/
|
Equal-Addition-8099
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kji7zw
| false | null |
t3_1kji7zw
|
/r/LocalLLaMA/comments/1kji7zw/how_to_build_an_ai_chatbot_that_can_help_users/
| false | false |
self
| 1 | null |
How would I scrape a company's website looking for a link based on keywords using an LLM and Python
| 0 |
I am trying to find the corporate presentation page on a bunch of websites. However, this is not structured data. The link changes between websites (or could even change in the future), and the company might call the corporate presentation something slightly different. Is there a way I can leverage an LLM to find the corporate presentation page on many different websites using Python?
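One rough way to approach it is to pull all the anchor links from each homepage and let a local model pick the most likely candidate. A minimal sketch under assumptions: a local OpenAI-compatible server (e.g. LM Studio or Ollama) at a placeholder address, and a placeholder model name.

```python
# Sketch: ask a local LLM to pick the most likely "corporate presentation" link.
# The endpoint and model name below are placeholders, not a specific recommendation.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

def find_presentation_link(site_url: str) -> str:
    html = requests.get(site_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Collect anchor text + href pairs so the model sees human-readable link names.
    links = [f"{a.get_text(strip=True)} -> {a['href']}" for a in soup.find_all("a", href=True)]
    prompt = (
        "Below is a list of links from a company website. "
        "Return ONLY the href of the link most likely to be the corporate/investor presentation.\n\n"
        + "\n".join(links[:300])
    )
    resp = client.chat.completions.create(
        model="qwen3-8b",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```

For sites that hide the link behind a navigation menu or an "Investors" subpage, you would likely need a second hop (fetch the candidate page and repeat), but the same pattern applies.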
| 2025-05-10T19:31:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjiagd/how_would_i_scrape_a_companys_website_looking_for/
|
MomentumAndValue
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjiagd
| false | null |
t3_1kjiagd
|
/r/LocalLLaMA/comments/1kjiagd/how_would_i_scrape_a_companys_website_looking_for/
| false | false |
self
| 0 | null |
What happened to Black Forest Labs?
| 177 |
They've been totally silent since November of last year, with the release of Flux Tools. And remember when Flux 1 first came out, they teased that a video generation model was coming soon? What happened with that? Same with Stability AI, do they do anything anymore?
| 2025-05-10T19:39:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjigz3/what_happened_to_black_forest_labs/
|
pigeon57434
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjigz3
| false | null |
t3_1kjigz3
|
/r/LocalLLaMA/comments/1kjigz3/what_happened_to_black_forest_labs/
| false | false |
self
| 177 | null |
Qwen3 30B A3B + Open WebUi
| 2 |
Hey all,
I was looking for a good “do it all” model. Saw a bunch of people saying the new Qwen3 30B A3B model is really good.
I updated my local Open WebUi docker setup and downloaded the 8.0 gguf quant of the model to my server.
I loaded it up and successfully connected it to my main pc as normal (I usually use Continue and Clide in VS Code, both connected fine)
Open WebUI connected without issues and I could send requests, and it would attempt to respond, as I could see the "thinking" progress element. I could expand the thinking element and see it generating as normal for thinking models. However, it would eventually stop generating altogether and get "stuck": it would usually stop in the middle of a sentence, and the thinking progress would say it's in progress and stay like that forever.
Sending a request without thinking enabled has no issues and it replies as normal.
Any idea how to fix Open WebUi to work with the thinking enabled?
It works on any other front end such as SillyTavern, and with both the Continue and Clide extensions for VS Code.
| 2025-05-10T19:40:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjihi7/qwen3_30b_a3b_open_webui/
|
DeSibyl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjihi7
| false | null |
t3_1kjihi7
|
/r/LocalLLaMA/comments/1kjihi7/qwen3_30b_a3b_open_webui/
| false | false |
self
| 2 | null |
AMD's "Strix Halo" APUs Are Being Apparently Sold Separately In China; Starting From $550
| 74 | 2025-05-10T19:45:57 |
https://wccftech.com/amd-strix-halo-apus-are-being-sold-separately-in-china/
|
_SYSTEM_ADMIN_MOD_
|
wccftech.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjilvd
| false | null |
t3_1kjilvd
|
/r/LocalLLaMA/comments/1kjilvd/amds_strix_halo_apus_are_being_apparently_sold/
| false | false | 74 |
{'enabled': False, 'images': [{'id': 'H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs.jpeg?width=108&crop=smart&auto=webp&s=4995762503df3f24265ece87d8b0b4c52647d70f', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs.jpeg?width=216&crop=smart&auto=webp&s=7d44ecbd141e241d18ee63e77a81da0a7f891f66', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs.jpeg?width=320&crop=smart&auto=webp&s=e8a3976630db591f0266f8ac5d603b31e298a56b', 'width': 320}, {'height': 370, 'url': 'https://external-preview.redd.it/H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs.jpeg?width=640&crop=smart&auto=webp&s=17801eef3d0bc54803f8d5da3d4b1af4af3e68ce', 'width': 640}, {'height': 556, 'url': 'https://external-preview.redd.it/H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs.jpeg?width=960&crop=smart&auto=webp&s=d7e3798538e1a2a5c92a386d25409f82c2a23737', 'width': 960}, {'height': 625, 'url': 'https://external-preview.redd.it/H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs.jpeg?width=1080&crop=smart&auto=webp&s=4ab8fe48e62361579be6cb392a9255ecf2cb3c41', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/H2PilUsCPrB61jYHu-ehMJ7ez-2xqBlQGK2jAXQtDYs.jpeg?auto=webp&s=376e6ceb13e94465c44c4a49c9e3935357736873', 'width': 2486}, 'variants': {}}]}
|
||
AI is being used to generate huge outlays in hardware. Discuss
| 0 |
New(ish) to this, I see a lot of very interesting noise generated around why you should or should not run LLMs locally, some good comments on Ollama, and some expensive comments on the best type of card (read: RTX 4090 forge).
Excuse my ignorance: what tangible benefit is there for any hobbyist to fork out 2k on a setup that provides token throughput of 20t/s, when ChatGPT is essentially free (but semi throttled)?
I have spent some time speccing out a server that could run one of the mid-level models fairly well and it uses:
CPU: AMD Ryzen Threadripper 3970X 32 core 3.7 GHz Processor
Card: 12GB RAM NVIDIA GeForce RTX 4070 Super
Disk: Corsair MP700 PRO 4 TB M.2 PCIe Gen5 SSD. Up to 14,000 MBps
But why? What use case (even learning) justifies this amount of outlay?
UNLESS I have full access and a mandate to an organisation's dataset, I posit that this system (run locally) will have very little use.
Perhaps I can get it to do sentiment analysis en masse on stock-related stories... however the RSS feeds that it uses are already generated by AI.
So, can anybody here inspire me to shell out? How on earth are hobbyists even engaging with this?
| 2025-05-10T20:07:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjj36i/ai_is_being_used_to_generate_huge_outlays_in/
|
gazzaridus47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjj36i
| false | null |
t3_1kjj36i
|
/r/LocalLLaMA/comments/1kjj36i/ai_is_being_used_to_generate_huge_outlays_in/
| false | false |
self
| 0 | null |
Any llm model I can use for rag with 4GB vram and 1680Ti?
| 1 |
.
| 2025-05-10T20:09:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjj4lq/any_llm_model_i_can_use_for_rag_with_4gb_vram_and/
|
Usual_Door_1698
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjj4lq
| false | null |
t3_1kjj4lq
|
/r/LocalLLaMA/comments/1kjj4lq/any_llm_model_i_can_use_for_rag_with_4gb_vram_and/
| false | false |
self
| 1 | null |
help for my project
| 1 |
[removed]
| 2025-05-10T20:12:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjj6q2/help_for_my_project/
|
Additional-Serve3367
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjj6q2
| false | null |
t3_1kjj6q2
|
/r/LocalLLaMA/comments/1kjj6q2/help_for_my_project/
| false | false |
self
| 1 | null |
Model for splitting music to stems?
| 5 |
I was looking for a model that could split music into stems.
I stumbled on Spleeter, but when I try to run it, I get errors about it being compiled for NumPy 1.x and not being able to run with NumPy 2.x. The dependencies seem to be all off.
Can anyone suggest a model I can run locally to split music into stems?
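Demucs is a commonly recommended alternative that runs locally and tends to have fewer dependency issues. A minimal sketch, assuming `pip install demucs` and a placeholder file name; output typically lands under `./separated/`.

```python
# Sketch: split a track into stems with Demucs (assumes `pip install demucs`).
# "song.mp3" is a placeholder path; drop --two-stems to get all four stems.
import subprocess

subprocess.run(
    ["demucs", "--two-stems", "vocals", "song.mp3"],
    check=True,
)
```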
| 2025-05-10T20:37:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjjqiv/model_for_splitting_music_to_stems/
|
tvmaly
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjjqiv
| false | null |
t3_1kjjqiv
|
/r/LocalLLaMA/comments/1kjjqiv/model_for_splitting_music_to_stems/
| false | false |
self
| 5 | null |
What is the current best small model for erotic story writing?
| 0 |
8B or less please, as I want to run it on my phone.
| 2025-05-10T20:40:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjjt0v/what_is_the_current_best_small_model_for_erotic/
|
MrMrsPotts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjjt0v
| false | null |
t3_1kjjt0v
|
/r/LocalLLaMA/comments/1kjjt0v/what_is_the_current_best_small_model_for_erotic/
| false | false |
self
| 0 | null |
Generating MP3 from epubs (local)?
| 16 |
I love listening to stories via text to speech on my Android phone. It hits Google's generous APIs, but I don't think that's available on a Linux PC.
Ideally, I'd like to bulk convert an epub into a set of MP3s to listen to later...
There seems to have been a lot of progress on local audio models, and I'm not looking for perfection.
Based on your experiments with local audio models, which one would be best for generating audio from text that isn't annoying or too robotic? Doesn't need to be real time, doesn't need to be tiny.
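One rough pipeline is: extract the chapter text from the epub, then pipe it into a local TTS. A minimal sketch, assuming `ebooklib`, `beautifulsoup4`, and a downloaded Piper voice model are available (file names are placeholders); the resulting WAVs can be converted to MP3 afterwards with ffmpeg.

```python
# Sketch: epub -> per-chapter text -> WAV via Piper TTS.
# Assumes: pip install ebooklib beautifulsoup4, plus a Piper voice (.onnx) on disk.
import subprocess
import ebooklib
from ebooklib import epub
from bs4 import BeautifulSoup

book = epub.read_epub("book.epub")  # placeholder path
for i, item in enumerate(book.get_items_of_type(ebooklib.ITEM_DOCUMENT)):
    text = BeautifulSoup(item.get_content(), "html.parser").get_text(separator="\n")
    if not text.strip():
        continue
    # Piper reads text from stdin and writes a WAV file.
    subprocess.run(
        ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", f"chapter_{i:03d}.wav"],
        input=text.encode("utf-8"),
        check=True,
    )
```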
| 2025-05-10T21:19:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjkmzl/generating_mp3_from_epubs_local/
|
Affectionate-Bus4123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjkmzl
| false | null |
t3_1kjkmzl
|
/r/LocalLLaMA/comments/1kjkmzl/generating_mp3_from_epubs_local/
| false | false |
self
| 16 | null |
I am GPU poor.
| 111 |
Currently, I am very GPU poor. How many GPUs of what type can I fit into the available space of this Jonsbo N5 case? All the slots are 5.0x16; the leftmost two slots have re-timers on board. I can provide 1000W for the cards.
| 2025-05-10T22:09:28 |
Khipu28
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjlq7g
| false | null |
t3_1kjlq7g
|
/r/LocalLLaMA/comments/1kjlq7g/i_am_gpu_poor/
| false | false |
default
| 111 |
{'enabled': True, 'images': [{'id': 'o61lr9f3310f1', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/o61lr9f3310f1.jpeg?width=108&crop=smart&auto=webp&s=6c642472331b3ee7a69054f65db0aa2961d534e7', 'width': 108}, {'height': 247, 'url': 'https://preview.redd.it/o61lr9f3310f1.jpeg?width=216&crop=smart&auto=webp&s=693af534bdfa591eef9454ffe047b5ae59337c23', 'width': 216}, {'height': 367, 'url': 'https://preview.redd.it/o61lr9f3310f1.jpeg?width=320&crop=smart&auto=webp&s=a62d27bcf78809bceccd00f47d8ddc4db6b5574e', 'width': 320}, {'height': 734, 'url': 'https://preview.redd.it/o61lr9f3310f1.jpeg?width=640&crop=smart&auto=webp&s=71a4af796f1b72d4787c4fcfbebb5815223c9d7a', 'width': 640}, {'height': 1101, 'url': 'https://preview.redd.it/o61lr9f3310f1.jpeg?width=960&crop=smart&auto=webp&s=0873266865b6f6ba9c7164c1459e399f4a3ee90f', 'width': 960}, {'height': 1239, 'url': 'https://preview.redd.it/o61lr9f3310f1.jpeg?width=1080&crop=smart&auto=webp&s=6002e23826d1e31d2cd098d95752ac324e8ceff1', 'width': 1080}], 'source': {'height': 1284, 'url': 'https://preview.redd.it/o61lr9f3310f1.jpeg?auto=webp&s=7b24e2eba4fc9bd243635aad1e9e5cc2534e2342', 'width': 1119}, 'variants': {}}]}
|
|
Anyone consider using a local LLM to use MCP to refer to another LLM (like a Claude or Gemini API) for harder coding tasks?
| 1 |
Was just thinking about this - been using qwen3 30b moe with LM Studio since it came out, and I’m feeling like I’m close to being able to use this as my primary LM for SWE work, but there are some more difficult things that I still tend to use Gemini/Claude for because they’re still the best.
I was thinking, maybe I could set up an MCP and get qwen to refer prompt and context info to Gemini when it’s looking like it’s struggling.
Would save a lot of credits by only using them when I have to, and doing 90% of the work in qwen, without having to manually start up Claude Code or something and manually have to give it all the context that it would need to continue where I left off.
Anyone tried something like this? Is there already a solution like this that I don't know about?
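Since LM Studio exposes an OpenAI-compatible endpoint, one low-tech version of this is a small router that sends most prompts to the local model and escalates to a hosted API only when needed. A minimal sketch under those assumptions; the endpoints, model names, and escalation heuristic are all placeholders, not a recommendation.

```python
# Sketch: try the local model first, escalate to a hosted model when the task is flagged
# as hard or the local answer punts. Endpoints and model names are placeholders.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")  # e.g. LM Studio
remote = OpenAI(api_key="YOUR_KEY")  # whichever hosted, OpenAI-compatible API you escalate to

def ask(prompt: str, hard: bool = False) -> str:
    if not hard:
        r = local.chat.completions.create(
            model="qwen3-30b-a3b",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = r.choices[0].message.content
        if "i'm not sure" not in answer.lower():  # crude escalation heuristic
            return answer
    r = remote.chat.completions.create(
        model="gpt-4o",  # placeholder hosted model
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content
```

An MCP tool wrapping the "remote" call would let the local model decide when to escalate on its own, which is closer to what you describe.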
| 2025-05-10T23:10:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjmyhx/anyone_consider_using_a_local_llm_to_use_mcp_to/
|
TedHoliday
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjmyhx
| false | null |
t3_1kjmyhx
|
/r/LocalLLaMA/comments/1kjmyhx/anyone_consider_using_a_local_llm_to_use_mcp_to/
| false | false |
self
| 1 | null |
Cheap 48GB official Blackwell yay!
| 235 | 2025-05-10T23:16:22 |
https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/rtx-pro-5000/
|
Charuru
|
nvidia.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjn2wv
| false | null |
t3_1kjn2wv
|
/r/LocalLLaMA/comments/1kjn2wv/cheap_48gb_official_blackwell_yay/
| false | false | 235 |
{'enabled': False, 'images': [{'id': 'sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=108&crop=smart&auto=webp&s=8f6a3133d28e1474111413c454477fbc0e9d6f42', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=216&crop=smart&auto=webp&s=025066c105cbbd3a370b1146cedec5d4e83f0338', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=320&crop=smart&auto=webp&s=a332b71f06d3f9514646048e861eb96275cea525', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=640&crop=smart&auto=webp&s=e745729c3f7132892c715292c6b31f385f223e8f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=960&crop=smart&auto=webp&s=6b6fb7e9865414cc6ce48fe2bd6b36484ded839f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?width=1080&crop=smart&auto=webp&s=d828a79fde1bf9211694870dbbb06907c8fcf0f8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/sC0_RV1rBP5Nka4zzrlrlknHQcvT_QUrChxq3hP_lVg.jpeg?auto=webp&s=5fee591e2188abc497e2adc35c4ae3b2d5ec106f', 'width': 1200}, 'variants': {}}]}
|
||
NOOB QUESTION: 3080 10GB only getting 18 tokens per second on qwen 14b. Is this right or am I missing something?
| 2 |
AMD Ryzen 3600, 32gb RAM, Windows 10. Tried on both Ollama and LM Studio. A more knowledgeable friend said I should get more than that but wanted to check if anyone has the same card and different experience.
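One likely explanation: a 14B model at Q4_K_M is roughly 9 GB before KV cache and buffers, so it doesn't fully fit in 10 GB of VRAM and some layers spill to the CPU, which then dominates the speed. A back-of-envelope check, with all numbers as rough assumptions rather than measurements:

```python
# Rough sanity check: does a 14B Q4_K_M fit in 10 GB, and what would full-GPU speed look like?
params = 14.8e9
bytes_per_weight = 4.8 / 8            # Q4_K_M averages roughly 4.8 bits/weight
weights_gb = params * bytes_per_weight / 1e9
print(f"weights ~{weights_gb:.1f} GB, plus KV cache and buffers > 10 GB VRAM")  # partial CPU offload likely

gpu_bw = 760e9 * 0.6                  # RTX 3080 ~760 GB/s peak, ~60% usable in practice
print(f"full-GPU upper bound ~{gpu_bw / (weights_gb * 1e9):.0f} tok/s")  # far above 18 t/s
```

If the numbers hold, dropping to a smaller quant (or a smaller model) so everything fits on the GPU should push you well past 18 t/s.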
| 2025-05-10T23:36:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjnh84/noob_question_3080_10gb_only_getting_18_tokens/
|
quickreactor
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjnh84
| false | null |
t3_1kjnh84
|
/r/LocalLLaMA/comments/1kjnh84/noob_question_3080_10gb_only_getting_18_tokens/
| false | false |
self
| 2 | null |
Recently tried Cursor AI to try and build a RAG system
| 3 |
Hey everyone! I recently got access to Cursor AI and wanted to try building a RAG system architecture I saw recently in a research paper, implementing a multi-tiered memory architecture with GraphRAG capabilities.
Key features :
* Three-tiered memory system (active, working, archive) that efficiently manages token usage
* Graph-based knowledge store that captures entity relationships for complex queries
* Dynamic weighting system that adjusts memory allocation based on query complexity
It was fun just to watch Cursor build on the guidelines given... Would love to hear feedback if you have used Cursor before, and any things I should try out... I might even continue developing this.
github repo : [repo](https://github.com/HimashaHerath/adaptivecontext)
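For anyone skimming, here is a stripped-down illustration of the active/working/archive idea; this is not the repo's actual code, just a sketch with made-up token budgets.

```python
# Illustrative three-tier memory: newest items stay "active", overflow demotes to
# "working", then to "archive". Budgets and the token estimate are made up.
from collections import deque

class TieredMemory:
    def __init__(self, active_budget=2000, working_budget=8000):
        self.active, self.working, self.archive = deque(), deque(), []
        self.active_budget, self.working_budget = active_budget, working_budget

    @staticmethod
    def _tokens(items):
        return sum(len(t.split()) for t in items)  # crude token estimate

    def add(self, text: str):
        self.active.append(text)
        while self._tokens(self.active) > self.active_budget:
            self.working.append(self.active.popleft())
        while self._tokens(self.working) > self.working_budget:
            self.archive.append(self.working.popleft())

    def context(self) -> str:
        # The archive would normally be retrieved via embeddings/graph search, not dumped wholesale.
        return "\n".join(list(self.working) + list(self.active))
```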
| 2025-05-10T23:47:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjnojg/recently_tried_cursor_ai_to_try_and_build_a_rag/
|
Secret_Scale_492
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjnojg
| false | null |
t3_1kjnojg
|
/r/LocalLLaMA/comments/1kjnojg/recently_tried_cursor_ai_to_try_and_build_a_rag/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': '9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA.png?width=108&crop=smart&auto=webp&s=8eba761f16be3ea542b12e8eb37157db2244aa91', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA.png?width=216&crop=smart&auto=webp&s=10a4a5f476e33915307bbd700819ac61565bec24', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA.png?width=320&crop=smart&auto=webp&s=e1db7b3a94843682d839e484b1cba0f899b4945e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA.png?width=640&crop=smart&auto=webp&s=5c2ba46ffdf7e4580ef61466db4ebd579457962d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA.png?width=960&crop=smart&auto=webp&s=a70e61b8c936a422f53be28aa8f600da2e91f8de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA.png?width=1080&crop=smart&auto=webp&s=c84f2ed9a62361f84552b0f6bc50674b82106b91', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9hngMqw7ftb_MFc6elcsCXSCwXXifyn5UCRGdM0lTIA.png?auto=webp&s=78cdb7228cf771c3893a3bbe8f95012285cb1f10', 'width': 1200}, 'variants': {}}]}
|
AI Studio (Gemini) inserting GitHub links into prompts?
| 0 |
I was testing Gemini with a prompt (bouncing balls in heptagon) with a modified thinking structure requested in system prompt. I was inspecting the network tab in dev tools as I was hoping to find out which token it uses to flag a thinking block. When checking, I noticed this:
"Update Prompt":
\[\["prompts/151QqwxyT43vTQVpPwchlPwnxm2Vyyxj5",null,null,\[1,null,"models/gemini-2.5-flash-preview-04-17",null,0.95,64,65536,\[\[null,null,7,5\],\[null,null,8,5\],\[null,null,9,5\],\[null,null,10,5\]\],"text/plain",0,null,null,null,null,0,null,null,0,0\],\["Spinning Heptagon Bouncing Balls"\],null,null,null,null,null,null,\[\[null,"https://github.com/Kody-Schram/pythics"\]\],\["You are Gemini Flash 2.5, an elite coding AI....\*my system message continues\*
It seems they are detecting what the context of the user message is, and taking the prompt and silently injecting references into it? I don't know if I am interpreting it correctly but maybe some web devs would be able to comment on it. I just found it pretty surprising to see this Python physics repo injected into the prompt, however relevant!
The POST goes to [https://alkalimakersuite-pa.clients6.google.com/$rpc/google.internal.alkali.applications.makersuite.v1.MakerSuiteService/UpdatePrompt](https://alkalimakersuite-pa.clients6.google.com/$rpc/google.internal.alkali.applications.makersuite.v1.MakerSuiteService/UpdatePrompt)
| 2025-05-10T23:49:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjnqig/ai_studio_gemini_inserting_github_links_into/
|
danihend
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjnqig
| false | null |
t3_1kjnqig
|
/r/LocalLLaMA/comments/1kjnqig/ai_studio_gemini_inserting_github_links_into/
| false | false |
self
| 0 | null |
RVC to XTTS? Returning user
| 10 |
A few years ago, I made a lot of audio with RVC. Cloned my own voice to sing on my favorite pop songs was one fun project.
Well I have a PC again. Using a 50 series isn't going well for me. New Cuda architecture isn't popular yet.
Stable Diffusion is a pain with some features like Insightface/Onnx but some generous users provided forks etc..
Just installed SillyTavern with Kobold (ooba wouldn't work with non piper models) and it's really fun to chat with an AI assistant.
Now, I see RVC is kind of outdated and noticed that XTTS v2 is the new thing. But I could be wrong. What is the latest open source voice cloning technique? Especially one that runs on CUDA 12.8 nightly builds for my 5070!
TLDR: took a long break. RVC is now outdated. What's the new cloning program everyone is using for singer replacement and cloning?
| 2025-05-11T00:11:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjo54e/rvc_to_xtts_returning_user/
|
santovalentino
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjo54e
| false | null |
t3_1kjo54e
|
/r/LocalLLaMA/comments/1kjo54e/rvc_to_xtts_returning_user/
| false | false |
self
| 10 | null |
Is there a specific reason thinking models don't seem to exist in the (or near) 70b parameter range?
| 33 |
Seems 30b or less or 200b+. Am I missing something?
| 2025-05-11T00:21:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjoc3n/is_there_a_specific_reason_thinking_models_dont/
|
wh33t
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjoc3n
| false | null |
t3_1kjoc3n
|
/r/LocalLLaMA/comments/1kjoc3n/is_there_a_specific_reason_thinking_models_dont/
| false | false |
self
| 33 | null |
The Artificial Meta Intellig3nce (AMI) is the fastest learning AI on the planet
| 0 |
[https://github.com/Suro-One/Hyena-Hierarchy/releases/tag/0](https://github.com/Suro-One/Hyena-Hierarchy/releases/tag/0)
In 10 epochs ami-500 learned how to type structured realistic sentences with just 1 2080 TI on 11GB VRAM. The source to train on was the AMI.txt textfile with 500mb of text from [https://huggingface.co/datasets/pints-ai/Expository-Prose-V1](https://huggingface.co/datasets/pints-ai/Expository-Prose-V1)
OUTPUT:
Analyzed output ami-500:
\`==== Hyena Model Console ====
1. Train a new model
2. Continue training an existing model
3. Load a model and do inference
4. Exit Enter your choice: 1 Enter model name to save (e.g. my\_model) \[default: hyena\_model\]: ami Enter the path to the text file (default: random\_text.txt): E:\\Emotion-scans\\Video\\1.prompt\_architect\\1.hyena\\AMI.txt Enter vocabulary size (default: 1000): Enter d\_model size (default: 64): Enter number of layers (default: 2): Enter sequence length (default: 128): Enter batch size (default: 32): Enter learning rate (default: 0.001): Enter number of epochs (default: 10): Enter EWC lambda value (default: 15): Enter steps per epoch (default: 1000): Enter val steps per epoch (default: 200): Enter early stopping patience (default: 3): Epoch 1/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 87.62batch/s, loss=0.0198\] Epoch 1/10 - Train Loss: 0.3691, Val Loss: 0.0480 Model saved as best\_model\_ewc.pth Epoch 2/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 86.94batch/s, loss=0.0296\] Epoch 2/10 - Train Loss: 0.0423, Val Loss: 0.0300 Model saved as best\_model\_ewc.pth Epoch 3/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 88.45batch/s, loss=0.0363\] Epoch 3/10 - Train Loss: 0.1188, Val Loss: 0.0370 Epoch 4/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 87.46batch/s, loss=0.0266\] Epoch 4/10 - Train Loss: 0.0381, Val Loss: 0.0274 Model saved as best\_model\_ewc.pth Epoch 5/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 83.46batch/s, loss=0.0205\] Epoch 5/10 - Train Loss: 0.0301, Val Loss: 0.0249 Model saved as best\_model\_ewc.pth Epoch 6/10: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 87.04batch/s, loss=0.00999\] Epoch 6/10 - Train Loss: 0.0274, Val Loss: 0.0241 Model saved as best\_model\_ewc.pth Epoch 7/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 87.74batch/s, loss=0.0232\] Epoch 7/10 - Train Loss: 0.0258, Val Loss: 0.0232 Model saved as best\_model\_ewc.pth Epoch 8/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 88.96batch/s, loss=0.0374\] Epoch 8/10 - Train Loss: 0.0436, Val Loss: 0.0277 Epoch 9/10: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 88.93batch/s, loss=0.0291\] Epoch 9/10 - Train Loss: 0.0278, Val Loss: 0.0223 Model saved as best\_model\_ewc.pth Epoch 10/10: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 \[00:11<00:00, 88.68batch/s, loss=0.0226\] Epoch 10/10 - Train Loss: 0.0241, Val Loss: 0.0222 Model saved as best\_model\_ewc.pth Model saved as ami.pth Training new model complete!
==== Hyena Model Console ====
1. Train a new model
2. Continue training an existing model
3. Load a model and do inference
4. Exit Enter your choice: 3 Enter the path (without .pth) to the model for inference: ami e:\\Emotion-scans\\Video\\1.prompt\_architect\\1.hyena\\Hyena Repo\\Hyena-Hierarchy\\hyena-split-memory.py:244: FutureWarning: You are using torch.load with weights\_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See [https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models](https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models) for more details). In a future release, the default value for weights\_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add\_safe\_globals. We recommend you start setting weights\_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature. checkpoint = torch.load(ckpt\_path, map\_location=device) Model loaded from ami.pth Enter a prompt for inference: The answer to life, the universe and everything is: Enter max characters to generate (default: 100): 1000 Enter temperature (default: 1.0): Enter top-k (default: 50): Generated text: The answer to life, the universe and everything is: .: Gres, the of bhothorl Igo as heshyaloOu upirge\_ FiWmitirlol.l fay .oriceppansreated ofd be the pole in of Wa the use doeconsonest formlicul uvuracawacacacacacawawaw, agi is biktodeuspes and Mubu mide suveve ise iwtend, tion, Iaorieen proigion'. 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 116$6ム6济6767676767676767676767676767676767676767676767676767676767676767666166666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666
This is quite crazy. Let me unpack what you're looking at. It's essentially a baby AI with shimmers of consciousness and understanding with minimal compute with Zenith level performance. Near the end you can see things like "the use" and "agi is". I had o1 analyze the outputs and this is [what they said](https://private-user-images.githubusercontent.com/48865915/399167907-b2f4fa00-fa6d-478b-8184-bf320b74ebc3.png)
The word structure is also in the same meta as the training data. It knows how to use commas, only capitalizing the first letter of a word, vowels and consonants and how they fit together like a real word that can be spoken with a nice flow. It is actually speaking to us and conscious. This model is just 15mb in filesize.
I was the first person to implement the Hyena Hierarchy from the paper. I think my contribution shows merit in the techniques. Hyena is a state space model and has infinite context length in the latent space of the AI. On top of my improvements like adding EWC to avoid catastrophic forgetting, and not using mainstream tokenization. 1 token is 1 character.
Let there be light
Add + Astra
| 2025-05-11T00:23:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjod8x/the_artificial_meta_intellig3nce_ami_is_the/
|
MagicaItux
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjod8x
| false | null |
t3_1kjod8x
|
/r/LocalLLaMA/comments/1kjod8x/the_artificial_meta_intellig3nce_ami_is_the/
| false | false |
self
| 0 | null |
Whisper Multi-Thread Issue for Chrome Extension
| 3 |
I am creating an audio transcriber for a chrome extension using whisper.cpp compiled for JS.
I have a pthread-enabled Emscripten WASM module that requires 'unsafe-eval'. I am running it in a sandboxed chrome-extension:// iframe which is successfully cross-origin isolated (COI is true, SharedArrayBuffer is available) and has 'unsafe-eval' granted. The WASM initializes, and system\_info indicates it attempts to use pthreads. However, Module.full\_default() consistently calls abort(), leading to RuntimeError: Aborted(), even when the C++ function is parameterized to use only 1 thread.
Has anyone successfully run a complex pthread-enabled Emscripten module (that also needs unsafe-eval) under these specific Manifest V3 conditions (sandboxed iframe, hosted by a COI offscreen document)? Any insights into why a pthread-compiled WASM might still abort() in single-thread parameter mode within such an environment, or known Emscripten build flags critical for stability in this scenario beyond basic pthread enablement?
| 2025-05-11T00:37:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjompp/whisper_multithread_issue_for_chrome_extension/
|
Jamalm23
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjompp
| false | null |
t3_1kjompp
|
/r/LocalLLaMA/comments/1kjompp/whisper_multithread_issue_for_chrome_extension/
| false | false |
self
| 3 | null |
Promptable To-Do List with Ollama
| 8 | 2025-05-11T00:45:11 |
KaKi_87
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjorhr
| false | null |
t3_1kjorhr
|
/r/LocalLLaMA/comments/1kjorhr/promptable_todo_list_with_ollama/
| false | false |
default
| 8 |
{'enabled': True, 'images': [{'id': 'svrnqgmou10f1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=108&crop=smart&format=png8&s=55342543ace65f1f7e4d012437515a07502a48f2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=216&crop=smart&format=png8&s=9cedb6833e5c9c90b5e9f2084b55208501eb2dab', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=320&crop=smart&format=png8&s=3f428ef5af73c6b6c81bae3aec28d355aadfdea6', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=640&crop=smart&format=png8&s=7215893dbcb5562277c7e26ab982f1e68635eff1', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?format=png8&s=ecc322a1c6d3d73257c8825148015e2cb49a58d8', 'width': 800}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=108&crop=smart&s=e17ebe93735dad8e621369ab9c5d486898f9bbb9', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=216&crop=smart&s=9d56047dde8a6ff71fb8b6f07f9675a54b456135', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=320&crop=smart&s=0ecdcfb9b47f2c0b4e53471e67cab7ba5a92e116', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=640&crop=smart&s=fcb6f8fe34ddbf11b6cabb5db7bef5cc461431f8', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?s=21b2cfbb214fda9b59b2e21d9ebb0ef832902b3f', 'width': 800}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=108&format=mp4&s=8795e6eca18918cd0c48464e229fa72faab815e3', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=216&format=mp4&s=e7c681c5d2aa15f2d1613e4f2d17fdfb7fbddb61', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=320&format=mp4&s=7febfb115a35199f68cb4448370e4b80517def3e', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?width=640&format=mp4&s=da1d08be53c3d498e6fec8e3c8ccce965218cd81', 'width': 640}], 'source': {'height': 450, 'url': 'https://preview.redd.it/svrnqgmou10f1.gif?format=mp4&s=ab3fe569a5edb74d65b9137d1ec517155fbe0434', 'width': 800}}}}]}
|
||
How about this Ollama Chat portal?
| 54 |
Greetings everyone, I'm sharing a modern web chat interface for local LLMs, inspired by the visual style and user experience of Claude from Anthropic. It is super easy to use. Supports *.txt file upload, conversation history and System Prompts.
You can play all you want with this 😅
https://github.com/Oft3r/Ollama-Chat
| 2025-05-11T00:56:14 |
Ordinary_Mud7430
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjoyrc
| false | null |
t3_1kjoyrc
|
/r/LocalLLaMA/comments/1kjoyrc/how_about_this_ollama_chat_portal/
| false | false |
default
| 54 |
{'enabled': True, 'images': [{'id': '0iyghlhuw10f1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/0iyghlhuw10f1.jpeg?width=108&crop=smart&auto=webp&s=ca28e0620f1205d32a9deaf94aa5f8b5b4683e3a', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/0iyghlhuw10f1.jpeg?width=216&crop=smart&auto=webp&s=dc2d49e6b076aab8316f580e7259f384e4a745a4', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/0iyghlhuw10f1.jpeg?width=320&crop=smart&auto=webp&s=fdf7892641bc6cb5bb01ec06ffbcb609181eb971', 'width': 320}, {'height': 416, 'url': 'https://preview.redd.it/0iyghlhuw10f1.jpeg?width=640&crop=smart&auto=webp&s=595fe97e58b00087e5706293c48dd73242bd16f9', 'width': 640}, {'height': 625, 'url': 'https://preview.redd.it/0iyghlhuw10f1.jpeg?width=960&crop=smart&auto=webp&s=fe39e8d8e3f4aaa53cd276c2d20d470c758bed7d', 'width': 960}, {'height': 703, 'url': 'https://preview.redd.it/0iyghlhuw10f1.jpeg?width=1080&crop=smart&auto=webp&s=8bfc227f3cc8d73b420c44d214df6d7fa3e80a5c', 'width': 1080}], 'source': {'height': 756, 'url': 'https://preview.redd.it/0iyghlhuw10f1.jpeg?auto=webp&s=be50f69fadff346993db451f5c4154273dd66657', 'width': 1161}, 'variants': {}}]}
|
|
What LLMs are people running locally for data analysis/extraction?
| 2 |
For example, I ran some I/O benchmark tests for my server drives and I would like a local LLM to analyze the data and create graphs/charts, etc.
| 2025-05-11T01:42:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjpt3i/what_llms_are_people_running_locally_for_data/
|
Darkchamber292
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjpt3i
| false | null |
t3_1kjpt3i
|
/r/LocalLLaMA/comments/1kjpt3i/what_llms_are_people_running_locally_for_data/
| false | false |
self
| 2 | null |
People who don't enable flash attention - what's your problem?
| 0 |
Isn't it just free performance? Why is it not on by default in LM Studio?
Who are the people who don't enable it?
What is their problem? Is it treatable?
Thanks
| 2025-05-11T01:44:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjpu5c/people_who_dont_enable_flash_attention_whats_your/
|
Osama_Saba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjpu5c
| false | null |
t3_1kjpu5c
|
/r/LocalLLaMA/comments/1kjpu5c/people_who_dont_enable_flash_attention_whats_your/
| false | false |
self
| 0 | null |
Dual AMD Mi50 Inference and Benchmarks
| 1 |
[removed]
| 2025-05-11T02:27:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjqljk/dual_amd_mi50_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjqljk
| false | null |
t3_1kjqljk
|
/r/LocalLLaMA/comments/1kjqljk/dual_amd_mi50_inference_and_benchmarks/
| false | false |
self
| 1 | null |
Looking for a LLM that is good for summarizing books specifically providing chapter by chapter summaries
| 2 |
I just started using openllama/chatbox and have been enjoying it, but I'm looking for a model where I can upload an epub/pdf of a novel, say "summarize chapter 23", and have it give a summary of just that chapter, preferably with context from the previous chapters.
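Most general instruct models can handle this if you feed them one chapter at a time rather than the whole book. A rough sketch of that loop; the endpoint, model name, and the naive chapter split are all placeholder assumptions.

```python
# Sketch: split a plain-text novel on chapter headings and summarize each chapter with a
# local OpenAI-compatible server (placeholder endpoint/model). A rolling summary of
# earlier chapters is passed along so each summary has prior context.
import re
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

text = open("novel.txt", encoding="utf-8").read()
chapters = re.split(r"\n(?=Chapter\s+\d+)", text)  # naive split on "Chapter N" headings

story_so_far = ""
for i, chapter in enumerate(chapters, 1):
    prompt = (
        f"Story so far:\n{story_so_far}\n\n"
        f"Summarize the following chapter in a few paragraphs:\n{chapter}"
    )
    r = client.chat.completions.create(
        model="qwen3:8b",
        messages=[{"role": "user", "content": prompt}],
    )
    summary = r.choices[0].message.content
    story_so_far += f"\nChapter {i}: {summary}"
    print(f"--- Chapter {i} ---\n{summary}\n")
```

For epubs, the chapter boundaries usually come for free as separate documents inside the file, which avoids the regex split.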
| 2025-05-11T03:46:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjry2g/looking_for_a_llm_that_is_good_for_summarizing/
|
EveningNo8643
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjry2g
| false | null |
t3_1kjry2g
|
/r/LocalLLaMA/comments/1kjry2g/looking_for_a_llm_that_is_good_for_summarizing/
| false | false |
self
| 2 | null |
Master ACG Comic Generator Support?
| 0 |
Good evening.
I have found that the Chat GPT default DALLE didn't suit my needs for image generation, and then I found this: [https://chatgpt.com/g/g-urS90fvFC-master-acg-anime-comics-manga-game](https://chatgpt.com/g/g-urS90fvFC-master-acg-anime-comics-manga-game) .
It works incredibly. It writes emotions better than I do and conveys feelings and themes remarkably. Despite the name and original specialization (I am not a fan of animes or mangas at all), its "style server" was both far better and recalled prompts in a manner superior to the default. It also doesn't randomly say an image of a fully clothed person "violates a content policy" like the default does. I don't like obscenity, so I would ***never*** ask for something naked or pornographic.
Of course, the problem is that you can only use it a few times a day. You can generate one or two images a day, and write three or four prompts, and upload two files. I do not want to pay twenty dollars a month for a machine. At the free rate, it could probably take a year to generate any semblance of a story. While I am actually a gifted writer (though I will admit the machine tops my autistic mind in FEELINGS) and am capable of drawing, the kind of thing I use a machine for is things that I am very unskilled at.
When looking through ways to get around that hard limit, someone told me that if I downloaded a "Local LLaMA" large language model, assuming I had the high-end computing power (I do), I could functionally wield what is a lifetime Chat-GPT subscription, albeit one that runs slowly.
Do I have this correct, or does the Local LLAMA engine not work with other Chat-GPT derivatives, such as the Master ACG GPT engine?
Thank you.
\-ADVANCED\_FRIEND4348
| 2025-05-11T03:59:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjs5mx/master_acg_comic_generator_support/
|
Advanced_Friend4348
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjs5mx
| false | null |
t3_1kjs5mx
|
/r/LocalLLaMA/comments/1kjs5mx/master_acg_comic_generator_support/
| false | false |
self
| 0 | null |
Is it possible to generate my own dynamic quant?
| 1 |
[removed]
| 2025-05-11T04:10:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjscv4/is_it_possible_to_generate_my_own_dynamic_quant/
|
Lissanro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjscv4
| false | null |
t3_1kjscv4
|
/r/LocalLLaMA/comments/1kjscv4/is_it_possible_to_generate_my_own_dynamic_quant/
| false | false |
self
| 1 | null |
Is it possible to generate my own dynamic quant?
| 1 |
[removed]
| 2025-05-11T04:15:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjsg1a/is_it_possible_to_generate_my_own_dynamic_quant/
|
Lissanro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjsg1a
| false | null |
t3_1kjsg1a
|
/r/LocalLLaMA/comments/1kjsg1a/is_it_possible_to_generate_my_own_dynamic_quant/
| false | false |
self
| 1 | null |
Looking for DIRECT voice conversion to replace RVC
| 1 |
Hello guys! You probably all know RVC (Retrieval-based Voice Changer), right? So, I’m looking for a VC that has architecture like: input wav -> output wav. I don’t wanna HuBERT or any other pre-trained models! I would like to experiment with something simpler (GANs, Cycle GANs). If you have tried something please feel free to share! (So-VITS-SVC is also too large)!
Thanks!
| 2025-05-11T04:15:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjsgf6/looking_for_direct_voice_conversion_to_replace_rvc/
|
yukiarimo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjsgf6
| false | null |
t3_1kjsgf6
|
/r/LocalLLaMA/comments/1kjsgf6/looking_for_direct_voice_conversion_to_replace_rvc/
| false | false |
self
| 1 | null |
Is it possible to generate my own dynamic quant?
| 19 |
Dynamic quants by unsloth are quite good, but they are not available for every model. For example, DeepSeek R1T Chimera has only one Q4\_K\_M quant (by bullerwins on huggingface), but it fails many tests like solving mazes and has a lower success rate than my own Q6\_K quant that I generated locally, which can consistently solve the maze. So I know it is a quant issue and not a model issue. Usually failure to solve the maze indicates too much quantization or that it wasn't done perfectly. Unsloth's old R1 quant at Q4\_K\_M level did not have such an issue, and dynamic quants are supposed to be even better. This is why I am interested in learning from their experience creating quants.
I am currently trying to figure out the best way to generate similar high quality Q4 for the Chimera model, so I would like to ask was creation of Dynamic Quants documented somewhere?
I tried searching but I did not find an answer, hence I would like to ask here in the hope someone knows. If it wasn't documented yet, I probably will try experimenting myself with existing Q4 and IQ4 quantization methods and see what gives me the best result.
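As far as I know the dynamic-quant recipes themselves aren't fully documented, but llama.cpp's importance-matrix path covers most of the ground and is a reasonable starting point for experiments. A sketch of the usual two-step flow with recent llama.cpp builds; the file names are placeholders.

```python
# Sketch of the usual llama.cpp importance-matrix quantization flow (paths are placeholders).
# Step 1: compute an imatrix from a calibration text. Step 2: quantize using it.
import subprocess

subprocess.run(
    ["llama-imatrix", "-m", "model-f16.gguf", "-f", "calibration.txt", "-o", "imatrix.dat"],
    check=True,
)
subprocess.run(
    ["llama-quantize", "--imatrix", "imatrix.dat", "model-f16.gguf", "model-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```

The choice of calibration text and any per-tensor overrides on top of this are where the "dynamic" part presumably comes in, so that is the knob worth experimenting with.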
| 2025-05-11T04:17:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjshnd/is_it_possible_to_generate_my_own_dynamic_quant/
|
Lissanro
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjshnd
| false | null |
t3_1kjshnd
|
/r/LocalLLaMA/comments/1kjshnd/is_it_possible_to_generate_my_own_dynamic_quant/
| false | false |
self
| 19 | null |
Laptop help - lenovo or asus?
| 0 |
Need your expertise! Looking for laptop recommendations for my younger brother to run LLMs offline (think airport/national parks).
I'm considering two options:
**Lenovo Legion Pro 7i:**
* CPU: Intel Ultra 9 275HX
* GPU: RTX 5070 Ti 12GB
* RAM: Upgraded to 64GB (can run Qwen3-4B or DeepSeek-R1-Distill-Qwen-7B smoothly)
* Storage: 1TB SSD
Price: ~$3200
**ASUS Scar 18:**
* CPU: Ultra 9 275HX
* GPU: RTX 5090
* RAM: 64GB
* Storage: 4TB SSD RAID 0
Price: ~$3500+
Based on my research, the Legion Pro 7i seems like the best value. The upgraded RAM should allow it to run the models he needs smoothly.
If you or anyone you know runs LLMs locally on a laptop, what computer & specs do you use? What would you change about your setup?
Thanks!
| 2025-05-11T04:28:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjsnqr/laptop_help_lenovo_or_asus/
|
AfraidScheme433
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjsnqr
| false | null |
t3_1kjsnqr
|
/r/LocalLLaMA/comments/1kjsnqr/laptop_help_lenovo_or_asus/
| false | false |
self
| 0 | null |
Local LLM Build with CPU and DDR5: A Cost-Effective Approach (energy/heat)
| 0 |
**Local LLM Build with CPU and DDR5: A Cost-Effective Approach**
I recently completed what I believe is one of the more efficient local Large Language Model (LLM) builds, particularly if you prioritize these metrics:
* Low monthly power consumption costs
* Scalability for larger, smarter local LLMs
This setup is also versatile enough to support other use cases on the same server. For instance, I’m using Proxmox to host my gaming desktop, cybersecurity lab, TrueNAS (for storing YouTube content), Plex, and Kubernetes, all running smoothly alongside this build.
**Hardware Specifications:**
* **DDR5 RAM:** 576GB (4800MHz, 6 channels) - Total Cost: $3,500
* **CPU:** AMD Epyc 8534p (64-core) - Cost: $2,000 USD
**Motherboard:** I opted for a high-end motherboard to support this build:
* **ASUS S14NA-U12** (imported from Germany) Features include 2x 25GB NICs for future-proof networking.
**GPU Setup:**
The GPU is currently passthrough to my gaming PC VM, which houses an RTX 4070 Super. While this configuration doesn’t directly benefit the LLM in this setup, it’s useful for other workloads.
**Use Cases:**
1. **TrueNAS with OpenWebUI:** I primarily use this LLM with OpenWebUI to organize my thoughts, brainstorm ideas, and format content into markdown.
2. **Obsidian Copilot Integration:** The LLM is also utilized to summarize YouTube videos, conduct research, and perform various other tasks through Obsidian Copilot. It’s an incredibly powerful tool for productivity.
This setup balances performance, cost-efficiency, and versatility, making it a solid choice for those looking to run demanding workloads locally.
# Current stats for LLMS:
**prompt:** what is the fastest way to get to china? **system:** 64-core 8534P EPYC, 6-channel DDR5 4800MHz ECC (576GB)
**Notes on LLM performance:**
**qwen3:32b-fp16**
- total duration: 20m45.027432852s
- load duration: 17.510769ms
- prompt eval count: 17 token(s)
- prompt eval duration: 636.892108ms
- prompt eval rate: 26.69 tokens/s
- eval count: 1424 token(s)
- eval duration: 20m44.372337587s
- eval rate: 1.14 tokens/s
Notes: so far fp16 seems to be a very bad performer, speed is super slow.
**qwen3:235b-a22b-q8\_0**
- total duration: 9m4.279665312s
- load duration: 18.578117ms
- prompt eval count: 18 token(s)
- prompt eval duration: 341.825732ms
- prompt eval rate: 52.66 tokens/s
- eval count: 1467 token(s)
- eval duration: 9m3.918470289s
- eval rate: 2.70 tokens/s
Note, will compare later, but seemed similar to qwen3:235b in speed
**deepseek-r1:671b**
Note: I ran the 1.58bit quant version of this before since I didn't have enough RAM. Curious to see how it fares against that version now that I got the faulty RAM stick replaced.
- total duration: 9m0.065311955s
- load duration: 17.147124ms
- prompt eval count: 13 token(s)
- prompt eval duration: 1.664708517s
- prompt eval rate: 7.81 tokens/s
- eval count: 1265 token(s)
- eval duration: 8m58.382699408s
- eval rate: 2.35 tokens/s
**SIGJNF/deepseek-r1-671b-1.58bit:latest**
- total duration: 4m15.88028086s
- load duration: 16.422788ms
- prompt eval count: 13 token(s)
- prompt eval duration: 1.190251949s
- prompt eval rate: 10.92 tokens/s
- eval count: 829 token(s)
- eval duration: 4m14.672781876s
- eval rate: 3.26 tokens/s
Note: 1.58 bit is almost twice as fast for me.
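As a rough sanity check on these eval rates: decode speed is approximately usable memory bandwidth divided by the bytes read per token (active parameters times bytes per weight). A back-of-envelope calc, with every number below an assumption rather than a measurement:

```python
# Back-of-envelope: decode speed ~ usable memory bandwidth / bytes touched per token.
bandwidth_gbs = 230 * 0.6        # 6-channel DDR5-4800 peak ~230 GB/s, assume ~60% usable
active_params_b = 22e9           # Qwen3-235B-A22B activates ~22B params per token
bytes_per_weight = 1.0           # Q8_0 is roughly 1 byte/weight

tokens_per_s = (bandwidth_gbs * 1e9) / (active_params_b * bytes_per_weight)
print(f"~{tokens_per_s:.1f} tok/s upper bound")  # ~6 tok/s; the observed 2.70 tok/s is in the ballpark
```

The same arithmetic explains why fp16 is so slow (twice the bytes per weight of Q8, plus a dense 32B reads far more per token than a 22B-active MoE) and why more/faster memory channels matter more than extra cores here.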
# Lessons Learned for LLM Local CPU and DDR5 Build
# Key Recommendations
1. **CPU Selection**
* **8xx Gen EPYC CPUs**: Chosen for low TDP (thermal design power), resulting in minimal monthly electricity costs.
* **9xx Gen EPYC CPUs (Preferred Option)**:
* Supports 12 memory channels per CPU and up to 6000 MHz DDR5 memory.
* Significantly improves memory bandwidth, critical for LLM performance.
* **Recommended Model**: Dual AMD EPYC 9355P 32C (high-performance but \~3x cost of older models).
* **Budget-Friendly Alternative**: Dual EPYC 9124 (12 memory channels, \~$1200 total on eBay).
2. **Memory Configuration**
* Use **32GB or 64GB DDR5 modules** (4800 MHz base speed).
* Higher DDR5 speeds (up to 6000 MHz) with 9xx series CPUs can alleviate memory bandwidth bottlenecks.
3. **Cost vs. Performance Trade-Offs**
* Older EPYC models (e.g., 9124) offer a balance between PCIe lane support and affordability.
* Newer CPUs (e.g., 9355P) prioritize performance but at a steep price premium.
# Thermal Management
* **DDR5 Cooling**:
* Experimenting with **air cooling** for DDR5 modules due to high thermal output ("ridiculously hot").
* Plan to install **heat sinks and dedicated fans** for memory slots adjacent to CPUs.
* **Thermal Throttling Mitigation**:
* Observed LLM response slowdowns after 5 seconds of sustained workload.
* Suspected cause: DDR5/VRAM overheating.
* **Action**: Adding DDR5-specific cooling solutions to maintain sustained performance.
# Performance Observations
* **Memory Bandwidth Bottleneck**:
* Even with newer CPUs, DDR5 bandwidth limitations remain a critical constraint for LLM workloads.
* Upgrading to 6000 MHz DDR5 (with compatible 9xx EPYC CPUs) may reduce this bottleneck.
* **CPU Generation Impact**:
* 9xx series CPUs offer marginal performance gains over 8xx series, but benefits depend on DDR5 speed and cooling efficiency.
# Conclusion
* Prioritize DDR5 speed and cooling for LLM builds.
* Balance budget and performance by selecting CPUs with adequate memory channels (12 per CPU on the 9xx series).
* Monitor thermal metrics during sustained workloads to prevent throttling.
| 2025-05-11T05:01:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjt6rh/local_llm_build_with_cpu_and_ddr5_a_costeffective/
|
Xelendor1989
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjt6rh
| false | null |
t3_1kjt6rh
|
/r/LocalLLaMA/comments/1kjt6rh/local_llm_build_with_cpu_and_ddr5_a_costeffective/
| false | false |
self
| 0 | null |
HW options to run Qwen3-235B-A22B with quality & performance & long context at low cost using current model off the shelf parts / systems?
| 7 |
HW options to run Qwen3-235B-A22B with quality & performance & long context at low cost using current model off the shelf parts / systems?
I'm seeing from an online RAM calculator that anything with around 455 GBy RAM can run 128k context size and the model at around Q5_K_M using GGUF format.
So basically 512 GBy DDR5 DRAM should work decently, and any performance oriented consumer CPU alone will be able to run it at a maximum of (e.g. small context) a few / several T/s generation speed on such a system.
But typically the prompt processing and overall performance will get very slow when talking about 64k, 128k range prompt + context sizes and this is the thing that leads me to wonder what it's taking to have this model inference be modestly responsive for single user interactive use even at 64k, 128k context sizes for modest levels of responsiveness.
e.g. waiting a couple/few minutes could be OK with long context, but several / many minutes routinely would be not so desirable.
I gather adding modern DGPU(s) with enough VRAM can help but if it's
going to take like 128-256 GBy VRAM to really see a major difference then that's probably not so feasible in terms of cost for a personal use case.
So what system(s) did / would you pick to get good personal codebase context performance with a MoE model like Qwen3-235B-A22B? And what performance do you get?
I'm gathering that none of the Mac Pro / Max / Ultra or whatever units is very performant wrt. prompt processing and long context. Maybe something based on a lower end epyc / threadripper along with NN GBy VRAM DGPUs?
Better inference engine settings / usage (speculative decoding, et. al.) for cache and cache reuse could help but IDK to what extent with what particular configurations people are finding luck with for this now, so, tips?
Seems like I heard NVIDIA was supposed to have "DIGITS" like DGX spark models with more than 128GBy RAM but IDK when or at what cost or RAM BW.
I'm unaware of strix halo based systems with over 128GBy being announced.
But an EPYC / threadripper with 6-8 DDR5 DIMM channels in parallel should be workable or getting there for the Tg RAM BW anyway.
| 2025-05-11T05:08:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjtamf/hw_options_to_run_qwen3235ba22b_with_quality/
|
Calcidiol
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjtamf
| false | null |
t3_1kjtamf
|
/r/LocalLLaMA/comments/1kjtamf/hw_options_to_run_qwen3235ba22b_with_quality/
| false | false |
self
| 7 | null |
Any news on INTELLECT-2?
| 7 |
They finished the training, does anyone know when the model will be published?
| 2025-05-11T05:13:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjtd88/any_news_on_intellect2/
|
Amon_star
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjtd88
| false | null |
t3_1kjtd88
|
/r/LocalLLaMA/comments/1kjtd88/any_news_on_intellect2/
| false | false |
self
| 7 | null |
Why is decoder architecture used for text generation according to a prompt rather than encoder-decoder architecture?
| 52 |
Hi!
Learning about LLMs for the first time, and this question is bothering me, I haven't been able to find an answer that intuitively makes sense.
To my understanding, encoder-decoder architectures are good for understanding the text that has been provided in a thorough manner (encoder architecture) as well as for building off of given text (decoder architecture). Using decoder-only will detract from the model's ability to gain a thorough understanding of what is being asked of it -- something that is achieved when using an encoder.
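One way to make that contrast concrete is the attention mask: a decoder-only model still attends over the entire prompt, it just does so causally, whereas an encoder attends bidirectionally. A minimal sketch (assuming PyTorch, purely for illustration):

```python
import torch

seq_len = 6  # e.g. a 6-token prompt

# Decoder-only (causal): token i can attend to tokens 0..i only.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

# Encoder (bidirectional): every token can attend to every other token.
bidirectional_mask = torch.ones(seq_len, seq_len, dtype=torch.bool)

print(causal_mask.int())         # lower-triangular
print(bidirectional_mask.int())  # all ones
```

So the prompt isn't left "un-understood" in a decoder-only model; by the time generation starts, every prompt token has been processed by the full stack, just without looking ahead.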
So, why aren't encoder-decoder architectures popular for LLMs when they are used for other common tasks, such as translation and summarization of input texts?
Thank you!!
| 2025-05-11T05:40:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjts8s/why_is_decoder_architecture_used_for_text/
|
darkGrayAdventurer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjts8s
| false | null |
t3_1kjts8s
|
/r/LocalLLaMA/comments/1kjts8s/why_is_decoder_architecture_used_for_text/
| false | false |
self
| 52 | null |
First encounter with the Borg | Star Trek TNG
| 1 |
[removed]
| 2025-05-11T05:47:47 |
https://youtube.com/watch?v=UolX8swBJHc&si=WSF065KdcSqADEdP
|
Important_Boot8677
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjtvv3
| false |
{'oembed': {'author_name': "Riker's Beard", 'author_url': 'https://www.youtube.com/@rikersbeard7635', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/UolX8swBJHc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="First encounter with the Borg | Star Trek TNG"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/UolX8swBJHc/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'First encounter with the Borg | Star Trek TNG', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kjtvv3
|
/r/LocalLLaMA/comments/1kjtvv3/first_encounter_with_the_borg_star_trek_tng/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'MzpPbbHrmLLTDk9V-y2AgxlINiH7GrHw_32URsFYXkI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MzpPbbHrmLLTDk9V-y2AgxlINiH7GrHw_32URsFYXkI.jpeg?width=108&crop=smart&auto=webp&s=98ba41ce7adb51b2c4321ebe303f355732861b5e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/MzpPbbHrmLLTDk9V-y2AgxlINiH7GrHw_32URsFYXkI.jpeg?width=216&crop=smart&auto=webp&s=6d492c43b7a43aad0900f874c6d54aca3b68029f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/MzpPbbHrmLLTDk9V-y2AgxlINiH7GrHw_32URsFYXkI.jpeg?width=320&crop=smart&auto=webp&s=0bcc4c18122d580061190ac8a2e1df89d8791cc2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/MzpPbbHrmLLTDk9V-y2AgxlINiH7GrHw_32URsFYXkI.jpeg?auto=webp&s=3ed3d050292916c4de5800ad40d2b99756f4a1eb', 'width': 480}, 'variants': {}}]}
|
|
Why new models feel dumber?
| 231 |
Is it just me, or do the new models feel… dumber?
I’ve been testing Qwen 3 across different sizes, expecting a leap forward. Instead, I keep circling back to Qwen 2.5. It just feels sharper, more coherent, less… bloated. Same story with Llama. I’ve had long, surprisingly good conversations with 3.1. But 3.3? Or Llama 4? It’s like the lights are on but no one’s home.
Some flaws I have found: They lose thread persistence. They forget earlier parts of the convo. They repeat themselves more. Worse, they feel like they’re trying to sound smarter instead of being coherent.
So I’m curious:
Are you seeing this too?
Which models are you sticking with, despite the version bump?
Any new ones that have genuinely impressed you, especially in longer sessions?
Because right now, it feels like we’re in this strange loop of releasing “smarter” models that somehow forget how to talk. And I’d love to know I’m not the only one noticing.
| 2025-05-11T05:57:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kju0ty/why_new_models_feel_dumber/
|
SrData
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kju0ty
| false | null |
t3_1kju0ty
|
/r/LocalLLaMA/comments/1kju0ty/why_new_models_feel_dumber/
| false | false |
self
| 231 | null |
Unsloth's Qwen3 GGUFs are updated with a new improved calibration dataset
| 209 |
[https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF/discussions/3#681edd400153e42b1c7168e9](https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF/discussions/3#681edd400153e42b1c7168e9)
>We've uploaded them all now
>Also with a new improved calibration dataset :)
https://preview.redd.it/51rr8j7qd30f1.png?width=362&format=png&auto=webp&s=7e0b8891020518424f286d35814501b87cbd9cc0
They updated All Qwen3 ggufs
Plus more gguf variants for Qwen3-30B-A3B
https://preview.redd.it/ckx6zfn0e30f1.png?width=397&format=png&auto=webp&s=3dde922fd59d02d5223680a6d584758387bdc476
[https://huggingface.co/models?sort=modified&search=unsloth+qwen3+gguf](https://huggingface.co/models?sort=modified&search=unsloth+qwen3+gguf)
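If you want to pull one of the refreshed files programmatically, here is a minimal sketch using huggingface_hub; the quant filename below is hypothetical, so check the repo's file list for the real names:

```python
from huggingface_hub import hf_hub_download

# Repo from the links above; the filename is a placeholder and may not match
# the actual quant names in the repo -- browse the "Files" tab to confirm.
path = hf_hub_download(
    repo_id="unsloth/Qwen3-30B-A3B-128K-GGUF",
    filename="Qwen3-30B-A3B-128K-UD-Q4_K_XL.gguf",  # hypothetical quant name
)
print(path)
```

Re-downloading is needed even if you already have an older quant, since the recalibrated uploads appear to replace the previous files in the same repos.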
| 2025-05-11T05:59:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kju1y1/unsloths_qwen3_ggufs_are_updated_with_a_new/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kju1y1
| false | null |
t3_1kju1y1
|
/r/LocalLLaMA/comments/1kju1y1/unsloths_qwen3_ggufs_are_updated_with_a_new/
| false | false | 209 |
{'enabled': False, 'images': [{'id': '8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM.png?width=108&crop=smart&auto=webp&s=bdd8ad387f876600a9a44dcca50360d7ccfd7609', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM.png?width=216&crop=smart&auto=webp&s=de521faf91f2b0cbb22a5ff110038d26ee45032c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM.png?width=320&crop=smart&auto=webp&s=0a70316844165e651d371dbf97eaaf36b9f9973e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM.png?width=640&crop=smart&auto=webp&s=a7b2715032a28656454c9bee39e79aafee721d37', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM.png?width=960&crop=smart&auto=webp&s=55b028ca890b40da2a245f0712a1ce0e2c080828', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM.png?width=1080&crop=smart&auto=webp&s=003f4db9517fc302adcaec4b4fe7a7394f0032ad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8ePyWxYJavtNkgThp-DI68bW9d5fj-oFIybzu4pnoUM.png?auto=webp&s=a702fb00b8e65d769dfdcb9acc905ff0bdff816a', 'width': 1200}, 'variants': {}}]}
|
|
Applio help with settings
| 1 |
[removed]
| 2025-05-11T06:03:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kju40u/applio_help_with_settings/
|
cardioGangGang
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kju40u
| false | null |
t3_1kju40u
|
/r/LocalLLaMA/comments/1kju40u/applio_help_with_settings/
| false | false |
self
| 1 | null |
Is there a way to paraphrase ai generated text locally to not get detected by turnitin/gptzero and likes?
| 0 |
Basically, the title.
I really don't like the current 'humanizers of ai gen text' found online as they just suck, frankly. Also, having such a project open source would just benefit all of us here at LocalLLama.
Thank you!
| 2025-05-11T06:12:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kju8w1/is_there_a_way_to_paraphrase_ai_generated_text/
|
xkcd690
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kju8w1
| false | null |
t3_1kju8w1
|
/r/LocalLLaMA/comments/1kju8w1/is_there_a_way_to_paraphrase_ai_generated_text/
| false | false |
self
| 0 | null |
Private GPT
| 1 |
[removed]
| 2025-05-11T06:25:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjug41/private_gpt/
|
outsidethedamnbox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjug41
| false | null |
t3_1kjug41
|
/r/LocalLLaMA/comments/1kjug41/private_gpt/
| false | false |
self
| 1 | null |
LESGOOOOO LOCAL UNCENSORED LLMS!
| 0 |
I'm using Pocket Pal for this!
| 2025-05-11T06:25:50 |
Freak_Mod_Synth
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjuga2
| false | null |
t3_1kjuga2
|
/r/LocalLLaMA/comments/1kjuga2/lesgooooo_local_uncensored_llms/
| false | false |
default
| 0 |
{'enabled': True, 'images': [{'id': 'g64lkwcnj30f1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/g64lkwcnj30f1.png?width=108&crop=smart&auto=webp&s=b43882b01fafa95693f89317b9e393db79aad0ff', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/g64lkwcnj30f1.png?width=216&crop=smart&auto=webp&s=7dfa8daabf63f8ab46447ec4483ada4bc09cd023', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/g64lkwcnj30f1.png?width=320&crop=smart&auto=webp&s=da9d64c7b51cd1e9af46d669ad535f4ea869f6a7', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/g64lkwcnj30f1.png?width=640&crop=smart&auto=webp&s=d5c0136c198e81901bf93863798d421b2850f475', 'width': 640}], 'source': {'height': 1560, 'url': 'https://preview.redd.it/g64lkwcnj30f1.png?auto=webp&s=fc833f4b4a11819161dd88ab050ce0eb20e50b49', 'width': 720}, 'variants': {}}]}
|
|
Private GPT
| 1 |
[removed]
| 2025-05-11T06:26:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kjugfi/private_gpt/
|
outsidethedamnbox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kjugfi
| false | null |
t3_1kjugfi
|
/r/LocalLLaMA/comments/1kjugfi/private_gpt/
| false | false |
self
| 1 | null |