title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-40k chars) | created (timestamp[ns], 2023-04-01 04:30:41 - 2025-06-30 03:16:29, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 - 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213 chars) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to Actually Run a Large Language Model (LLM) from a Portable SSD? Is it Feasible? | 1 | [removed] | 2025-05-29T09:05:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ky63p3/how_to_actually_run_a_large_language_model_llm/ | Own-Objective-7818 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky63p3 | false | null | t3_1ky63p3 | /r/LocalLLaMA/comments/1ky63p3/how_to_actually_run_a_large_language_model_llm/ | false | false | self | 1 | null |
What are Feasible & Interesting LLM Thesis Topics | 1 | [removed] | 2025-05-29T09:06:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ky64az/what_are_feasible_interesting_llm_thesis_topics/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky64az | false | null | t3_1ky64az | /r/LocalLLaMA/comments/1ky64az/what_are_feasible_interesting_llm_thesis_topics/ | false | false | self | 1 | null |
How to quantize Vision models for Ollama/GGUF. | 1 | I need to quantize a fine-tuned Gemma 3 model that supports images. Usually I quantize with Ollama, but it doesn't know to ignore the "Vision Tower" and fails.
vLLM has a recipe to do this correctly, but the resulting model uses I4, I8 etc, that Ollama cannot handle.
I'd rather stay with Ollama because my app uses its API. Is there any way to generate a model with vLLM that Ollama can quantize and convert into GGUF format?
Thanks for any suggestions | 2025-05-29T09:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ky68do/how_to_quantize_vision_models_for_ollamagguf/ | Hughesbay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky68do | false | null | t3_1ky68do | /r/LocalLLaMA/comments/1ky68do/how_to_quantize_vision_models_for_ollamagguf/ | false | false | self | 1 | null |
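Not a definitive answer, but one route that avoids vLLM entirely is to do the GGUF conversion with llama.cpp's own scripts and only then hand the file to Ollama. Below is a minimal sketch driven from Python; the script names exist in current llama.cpp checkouts, but the exact flags (especially the vision-projector export) vary by version, so treat them as assumptions and check `--help` first.

```python
# Hypothetical sketch: convert a fine-tuned Gemma 3 checkpoint to GGUF with
# llama.cpp's scripts, then quantize. Paths and the --mmproj step are assumptions;
# check `python convert_hf_to_gguf.py --help` in your llama.cpp checkout.
import subprocess

MODEL_DIR = "/models/gemma3-finetune"            # HF-format fine-tune (assumed path)
F16_GGUF = "/models/gemma3-finetune-f16.gguf"
Q4_GGUF = "/models/gemma3-finetune-q4_k_m.gguf"

# 1) Export the language-model weights to an f16 GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2) Recent llama.cpp builds can also export the vision projector (mmproj);
#    older builds simply ignore the vision tower. Flag name is an assumption.
# subprocess.run(["python", "convert_hf_to_gguf.py", MODEL_DIR,
#                 "--mmproj", "--outfile", "/models/gemma3-mmproj-f16.gguf"], check=True)

# 3) Quantize the language model; the projector is usually left at f16.
subprocess.run(["./llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"], check=True)
```

From there, an Ollama `Modelfile` with `FROM` pointing at the quantized GGUF (plus the projector file, on builds that support it) should let the existing Ollama-based app keep working.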
LORA Continuos pre-training on 7B Instruct Model | 1 | [removed] | 2025-05-29T09:21:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6bv2/lora_continuos_pretraining_on_7b_instruct_model/ | Fun-Industry-1485 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6bv2 | false | null | t3_1ky6bv2 | /r/LocalLLaMA/comments/1ky6bv2/lora_continuos_pretraining_on_7b_instruct_model/ | false | false | self | 1 | null |
Speed-up VLLM boot time | 1 | [removed] | 2025-05-29T09:21:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6c7v/speedup_vllm_boot_time/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6c7v | false | null | t3_1ky6c7v | /r/LocalLLaMA/comments/1ky6c7v/speedup_vllm_boot_time/ | false | false | self | 1 | null |
Built an ADK Agent that finds Jobs based on your Resume | 7 | I recently built an AI agent for job search using Google's new ADK framework: you just upload your resume and it takes care of everything else by itself.
At first I was planning to use a Qwen vision LLM to read the resume, but decided to use Mistral OCR instead. That turned out to be the right choice: Mistral OCR is far better suited to document parsing than a general-purpose vision model.
What Agents are doing in my App demo:
* Reads resume using Mistral OCR
* Uses Qwen3-14B to generate targeted search queries
* Searches job boards like Y Combinator and Wellfound via the Linkup web search
* Returns curated job listings
It all runs as a single pipeline. Just upload your resume, and the agent handles the rest.
It's a simple implementation, I also recorded a tutorial video and made it open source -[repo](https://github.com/Astrodevil/ADK-Agent-Examples/tree/main/jobfinder_agent), [video](https://www.youtube.com/watch?v=ji_hECcyTjs)
Give it a try and let me know how the responses are! | 2025-05-29T09:21:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6c8z/built_an_adk_agent_that_finds_jobs_based_on_your/ | codes_astro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6c8z | false | null | t3_1ky6c8z | /r/LocalLLaMA/comments/1ky6c8z/built_an_adk_agent_that_finds_jobs_based_on_your/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'pnpR0apJjmbpIKgry27yZ1hp2afUqHiEyHaSslIgZwc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?width=108&crop=smart&auto=webp&s=8eec53ba80ea572e4558c1c0f818333467759f20', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?width=216&crop=smart&auto=webp&s=e86881147963ab37011b8df3e2804876349c2664', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?width=320&crop=smart&auto=webp&s=75d1e9d87718b6c049eb52c036f4844a6fb96898', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?width=640&crop=smart&auto=webp&s=7c11bea42dc19fe29b69ba9bae0cce249dcb54d5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?width=960&crop=smart&auto=webp&s=89babca18110b5f6fba8c7e93479a995778396ba', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?width=1080&crop=smart&auto=webp&s=1e457af59a1f8cd4820431459693a82ee0741aff', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eUmM4MX0Ma_ybofm-r41QuiGdrivELQ2wditrM45yqM.jpg?auto=webp&s=4eb2ec223e0edc52b3f4d77fc922d78e92e382bc', 'width': 1280}, 'variants': {}}]} |
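Not the author's actual repo code, but a rough sketch of the pipeline shape for anyone who wants to reproduce it: `ocr_resume()` and `search_jobs()` are placeholders for the Mistral OCR and Linkup steps, and the query generation talks to a locally served Qwen3-14B through an assumed OpenAI-compatible endpoint.

```python
# Rough sketch of the resume -> queries -> job listings pipeline.
# ocr_resume()/search_jobs() are placeholders; the endpoint URL and model id are assumptions.
import requests

def ocr_resume(pdf_path: str) -> str:
    # Placeholder for the Mistral OCR call; return the resume as plain text.
    return "Jane Doe - ML engineer, 5 years of Python, PyTorch, RAG pipelines..."

def generate_queries(resume_text: str, n: int = 5) -> list[str]:
    # Ask a locally served Qwen3-14B for targeted job-search queries.
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",  # assumed local endpoint
        json={
            "model": "qwen3-14b",
            "messages": [
                {"role": "system", "content": "Return one job-search query per line, nothing else."},
                {"role": "user", "content": f"Resume:\n{resume_text}\n\nWrite {n} queries."},
            ],
        },
        timeout=120,
    )
    content = resp.json()["choices"][0]["message"]["content"]
    return [line.strip() for line in content.splitlines() if line.strip()]

def search_jobs(query: str) -> list[dict]:
    # Placeholder for the Linkup search over YC / Wellfound listings.
    return []

if __name__ == "__main__":
    resume = ocr_resume("resume.pdf")
    listings = [job for q in generate_queries(resume) for job in search_jobs(q)]
    print(f"collected {len(listings)} listings")
```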
Speed up VLLM boot time | 1 | [removed] | 2025-05-29T09:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6ecs/speed_up_vllm_boot_time/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6ecs | false | null | t3_1ky6ecs | /r/LocalLLaMA/comments/1ky6ecs/speed_up_vllm_boot_time/ | false | false | self | 1 | null |
MNN is quite something, Qwen3-32B on a OnePlus 13 24GB | 97 | In the settings for the model mmap needs to be enabled for this to not crash. It's not that fast, but works. | 2025-05-29T09:32:44 | VickWildman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky6hxy | false | null | t3_1ky6hxy | /r/LocalLLaMA/comments/1ky6hxy/mnn_is_quite_something_qwen332b_on_a_oneplus_13/ | false | false | 97 | {'enabled': True, 'images': [{'id': 'PtQB33Svan8LfgmXbBs90S2Rjmj7LtVQwALE5U4Qf7o', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?width=108&crop=smart&auto=webp&s=d1964064b0ace02c6708060994168e15b8169c67', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?width=216&crop=smart&auto=webp&s=4b6b59cd3701659f969faed16afef1fb34813c0a', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?width=320&crop=smart&auto=webp&s=8c447fe056f910e81b31358aa73de27937e8ba36', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?width=640&crop=smart&auto=webp&s=5718621f13c3fd8412aaeb9d5f7bca2fe1dfa8d3', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?width=960&crop=smart&auto=webp&s=d62660129a42e09b7d980614371107e2189b40d3', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?width=1080&crop=smart&auto=webp&s=3ee6168d92655cc46067d9cd441a46b071abccd4', 'width': 1080}], 'source': {'height': 3168, 'url': 'https://preview.redd.it/432wqex5vo3f1.jpeg?auto=webp&s=e54a7963d82822d8f58d1b0d38dd6ff973d012bb', 'width': 1440}, 'variants': {}}]} |
||
What model to run. | 0 | Hello does anyone have some tips for what model to run on a 5070 ti for making a llm thats gonna function as a ai agent with own documents that is being fed as data | 2025-05-29T09:48:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6qfc/what_model_to_run/ | Material-Score-8128 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6qfc | false | null | t3_1ky6qfc | /r/LocalLLaMA/comments/1ky6qfc/what_model_to_run/ | false | false | self | 0 | null |
Speed up VLLM server boot | 1 | [removed] | 2025-05-29T09:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ky6qm0/speed_up_vllm_server_boot/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky6qm0 | false | null | t3_1ky6qm0 | /r/LocalLLaMA/comments/1ky6qm0/speed_up_vllm_server_boot/ | false | false | self | 1 | null |
2x Instinct MI50 32G running vLLM results | 22 | I picked up these two AMD Instinct MI50 32G cards from a second-hand trading platform in China. Each card cost me 780 CNY, plus an additional 30 CNY for shipping. I also grabbed two cooling fans to go with them, each costing 40 CNY. In total, I spent 1730 CNY, which is approximately 230 USD.
Even though it’s a second-hand trading platform, the seller claimed they were brand new. Three days after I paid, the cards arrived at my doorstep. Sure enough, they looked untouched, just like the seller promised.
The MI50 cards can’t output video (even though they have a miniDP port). To use them, I had to disable CSM completely in the motherboard BIOS and enable the Above 4G decoding option.
## System Setup
### Hardware Setup
- Intel Xeon E5-2666V3
- RDIMM DDR3 1333 32GB*4
- JGINYUE X99 TI PLUS
One MI50 is plugged into a PCIe 3.0 x16 slot, and the other is in a PCIe 3.0 x8 slot. There’s no Infinity Fabric Link between the two cards.
### Software Setup
- PVE 8.4.1 (Linux kernel 6.8)
- Ubuntu 24.04 (LXC container)
- ROCm 6.3
- vLLM 0.9.0
The vLLM I used is a modified version. The official vLLM support on AMD platforms has some issues. GGUF, GPTQ, and AWQ all have problems.
### vllm serve Parameters
```sh
docker run -it --rm --shm-size=2g --device=/dev/kfd --device=/dev/dri \
--group-add video -p 8000:8000 -v /mnt:/mnt nalanzeyu/vllm-gfx906:v0.9.0-rocm6.3 \
vllm serve --max-model-len 8192 --disable-log-requests --dtype float16 \
/mnt/<MODEL_PATH> -tp 2
```
### vllm bench Parameters
```sh
# for decode
vllm bench serve \
--model /mnt/<MODEL_PATH> \
--num-prompts 8 \
--random-input-len 1 \
--random-output-len 256 \
--ignore-eos \
--max-concurrency <CONCURRENCY>
# for prefill
vllm bench serve \
--model /mnt/<MODEL_PATH> \
--num-prompts 8 \
--random-input-len 4096 \
--random-output-len 1 \
--ignore-eos \
--max-concurrency 1
```
## Results
### ~70B 4-bit
| Model | B | 1x Concurrency | 2x Concurrency | 4x Concurrency | 8x Concurrency | Prefill |
|------------|----------|---------------:|---------------:|---------------:|---------------:|------------:|
| Qwen2.5 | 72B GPTQ | 17.77 t/s | 33.53 t/s | 57.47 t/s | 53.38 t/s | 159.66 t/s |
| Llama 3.3 | 70B GPTQ | 18.62 t/s | 35.13 t/s | 59.66 t/s | 54.33 t/s | 156.38 t/s |
### ~30B 4-bit
| Model | B | 1x Concurrency | 2x Concurrency | 4x Concurrency | 8x Concurrency | Prefill |
|---------------------|----------|---------------:|---------------:|---------------:|---------------:|------------:|
| Qwen3 | 32B AWQ | 27.58 t/s | 49.27 t/s | 87.07 t/s | 96.61 t/s | 293.37 t/s |
| Qwen2.5-Coder | 32B AWQ | 27.95 t/s | 51.33 t/s | 88.72 t/s | 98.28 t/s | 329.92 t/s |
| GLM 4 0414 | 32B GPTQ | 29.34 t/s | 52.21 t/s | 91.29 t/s | 95.02 t/s | 313.51 t/s |
| Mistral Small 2501 | 24B AWQ | 39.54 t/s | 71.09 t/s | 118.72 t/s | 133.64 t/s | 433.95 t/s |
### ~30B 8-bit
| Model | B | 1x Concurrency | 2x Concurrency | 4x Concurrency | 8x Concurrency | Prefill |
|----------------|----------|---------------:|---------------:|---------------:|---------------:|------------:|
| Qwen3 | 32B GPTQ | 22.88 t/s | 38.20 t/s | 58.03 t/s | 44.55 t/s | 291.56 t/s |
| Qwen2.5-Coder | 32B GPTQ | 23.66 t/s | 40.13 t/s | 60.19 t/s | 46.18 t/s | 327.23 t/s |
| 2025-05-29T10:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ky7diy/2x_instinct_mi50_32g_running_vllm_results/ | NaLanZeYu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky7diy | false | null | t3_1ky7diy | /r/LocalLLaMA/comments/1ky7diy/2x_instinct_mi50_32g_running_vllm_results/ | false | false | self | 22 | null |
DGX spark/station | 1 | [removed] | 2025-05-29T10:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ky7ju2/dgx_sparkstation/ | AvailableSlice6854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky7ju2 | false | null | t3_1ky7ju2 | /r/LocalLLaMA/comments/1ky7ju2/dgx_sparkstation/ | false | false | self | 1 | null |
PromptCoT-Mamba-7B | 1 | [removed] | 2025-05-29T10:46:46 | https://www.reddit.com/gallery/1ky7nzz | Efficient-Owl9751 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ky7nzz | false | null | t3_1ky7nzz | /r/LocalLLaMA/comments/1ky7nzz/promptcotmamba7b/ | false | false | 1 | null |
|
PromptCoT-Mamba-7B | 1 | [removed] | 2025-05-29T10:53:55 | https://www.reddit.com/gallery/1ky7s5r | Efficient-Owl9751 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ky7s5r | false | null | t3_1ky7s5r | /r/LocalLLaMA/comments/1ky7s5r/promptcotmamba7b/ | false | false | 1 | null |
|
What is the best cheap GPU for speculative decoding? | 2 | Here's a question that doesn't get asked very often (and the answer isn't "get a 3090").
What is the best cheap GPU for speculative decoding? My main GPU is a 3090.
My goal is to have this 2nd GPU running Qwen 3 0.6b or Qwen 3 1.7b. Or Gemma 3 4b. It may also be running whisper or a similar speech-to-text model at the same time.
I currently have a 2nd GPU (which is ancient) with 2gb vram (Nvidia P620)... which is not up to the task. So I'm looking to upgrade the small GPU.
I currently see a $35 Nvidia 1650 4gb on fb marketplace, which is a great price but I suspect 4gb may be a bit too limiting. What other suggestions do people have?
Considerations: price, power usage. For these reasons, I don't want to get a 3090. I just want a cheap small GPU that can run on the side- preferably running whisper concurrently if necessary, but that would take a lot more power. | 2025-05-29T11:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ky7ycc/what_is_the_best_cheap_gpu_for_speculative/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky7ycc | false | null | t3_1ky7ycc | /r/LocalLLaMA/comments/1ky7ycc/what_is_the_best_cheap_gpu_for_speculative/ | false | false | self | 2 | null |
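For the "is 4 GB too limiting" question, a rough back-of-envelope estimate (my own assumptions on quant and cache sizes, not measurements) suggests a Q4 Qwen3-1.7B draft plus a small Whisper can just about fit:

```python
# Back-of-envelope VRAM estimate for a draft model + Whisper on a small GPU.
# All numbers are rough assumptions (Q4_K_M ~ 0.6 bytes/param incl. scales, fp16 Whisper).
def q4_weights_gb(params_b: float) -> float:
    return params_b * 0.6  # GB per billion params at ~4.8 bits/param

qwen3_0_6b = q4_weights_gb(0.6)       # ~0.4 GB
qwen3_1_7b = q4_weights_gb(1.7)       # ~1.0 GB
whisper_small_fp16 = 0.244 * 2        # ~0.5 GB (244M params at 2 bytes each)
kv_and_overhead = 1.0                 # guess: short draft contexts + runtime overhead

total = qwen3_1_7b + whisper_small_fp16 + kv_and_overhead
print(f"~{total:.1f} GB -> a 4 GB card looks tight but plausible for the 1.7B draft")
```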
Another benchmark result is in for Deepseek r1.1: big jump in nyt word connections | 64 | 2025-05-29T11:13:10 | _Nils- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ky847t | false | null | t3_1ky847t | /r/LocalLLaMA/comments/1ky847t/another_benchmark_result_is_in_for_deepseek_r11/ | false | false | 64 | {'enabled': True, 'images': [{'id': 'szHsUpIn9Tm2_gOyHwY_qJSgi1EJEfTGdAIZ3tUh9VU', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?width=108&crop=smart&auto=webp&s=4f497a0a12314ba489d84e80c63694be1dd47202', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?width=216&crop=smart&auto=webp&s=09d19d31ddde6dcee771d11f4e685cc2ae9d6094', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?width=320&crop=smart&auto=webp&s=3b0eae2373526e354bf4125d37ff28a078c61a3e', 'width': 320}, {'height': 400, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?width=640&crop=smart&auto=webp&s=c435c0734d4925ffa167706d837e08be7b2a0870', 'width': 640}, {'height': 600, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?width=960&crop=smart&auto=webp&s=b6243d87716c57778f7d465145f9084f5a129470', 'width': 960}, {'height': 675, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?width=1080&crop=smart&auto=webp&s=c8150eeb714a26db33ff53ef7637fac016e4ba35', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/h9qjhjmbfp3f1.png?auto=webp&s=5aee5ba07fccb9de6fd8248a898e673656658bd4', 'width': 1600}, 'variants': {}}]} |
|||
Speed up VLLM server boot time | 1 | [removed] | 2025-05-29T11:25:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ky8bux/speed_up_vllm_server_boot_time/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky8bux | false | null | t3_1ky8bux | /r/LocalLLaMA/comments/1ky8bux/speed_up_vllm_server_boot_time/ | false | false | self | 1 | null |
SWE-rebench: Over 21,000 Open Tasks for SWE LLMs | 36 | Hi! We just released SWE-rebench – an extended and improved version of our previous dataset with GitHub issue-solving tasks.
One common limitation in such datasets is that they usually don’t have many tasks, and they come from only a small number of repositories. For example, in the original SWE-bench there are 2,000+ tasks from just 18 repos. This mostly happens because researchers install each project manually and then collect the tasks.
We automated and scaled this process, so we were able to collect 21,000+ tasks from over 3,400 repositories.
You can find the full technical report [here](https://huggingface.co/papers/2505.20411). We also used a subset of this dataset to build our [SWE-rebench leaderboard.](https://swe-rebench.com/leaderboard) | 2025-05-29T11:25:45 | https://huggingface.co/datasets/nebius/SWE-rebench | Fabulous_Pollution10 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ky8cby | false | null | t3_1ky8cby | /r/LocalLLaMA/comments/1ky8cby/swerebench_over_21000_open_tasks_for_swe_llms/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'B6v5sBICdVUHLpjO8vQN_BhBGMqJhDRiXG6BpX0jgxk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?width=108&crop=smart&auto=webp&s=2318cc187aed29555ee8f4e95b18cbc44d177f9f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?width=216&crop=smart&auto=webp&s=5669a9025a3d5b994913951789c973584bde625f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?width=320&crop=smart&auto=webp&s=bd1656ae9f193958cff6910b5c7b3bd95449546d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?width=640&crop=smart&auto=webp&s=45e56ad4265cb4302e58c64224d67824529eff4b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?width=960&crop=smart&auto=webp&s=d823df4f85de09b72759efd11b5b5143abd85272', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?width=1080&crop=smart&auto=webp&s=13bf0f5d1515eefdba20a9f464056d12a60cc4ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Wlax0ZVaKn93EpAcdC0XPmjtaNO0SAghfCC94lLHq10.jpg?auto=webp&s=9f2f41ebbbb8e7358812c71b8c91712ab4a2878d', 'width': 1200}, 'variants': {}}]} |
|
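For anyone who wants to poke at the data locally, pulling it with the `datasets` library is enough; the split name below is an assumption, so check the dataset card for the real schema.

```python
# Minimal sketch: pull SWE-rebench from the Hub and look at one task.
# The split name and field names are assumptions; see the dataset card for the actual schema.
from datasets import load_dataset

ds = load_dataset("nebius/SWE-rebench", split="train")
print(len(ds), "tasks")
print(ds[0].keys())  # inspect the real fields before relying on any of them
```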
Dual 4090 build for brand compliance analysis - worth it or waste? | 0 | [removed] | 2025-05-29T11:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ky8ei2/dual_4090_build_for_brand_compliance_analysis/ | RiseNecessary6351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky8ei2 | false | null | t3_1ky8ei2 | /r/LocalLLaMA/comments/1ky8ei2/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 0 | null |
deepseek r1 0528 Anti-fitting logic test | 6 | api
[https://llm-benchmark.github.io/](https://llm-benchmark.github.io/)
For some reason, 60% of the questions never return an answer because the thinking chains run too long (and those runs are always wrong anyway).
The score went from 0/16 to 1/16, which also let R1 overtake Gemini.
It got one question right, and its wrong answers were more ridiculous than Gemini's,
so I only updated the one it got right.
claude 4 is still terrible, so I don't want to update some wrong answers | 2025-05-29T11:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ky8rsu/deepseek_r1_0528_antifitting_logic_test/ | flysnowbigbig | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky8rsu | false | null | t3_1ky8rsu | /r/LocalLLaMA/comments/1ky8rsu/deepseek_r1_0528_antifitting_logic_test/ | false | false | self | 6 | null |
DeepSeek-R1-0528 Official Benchmarks Released!!! | 707 | 2025-05-29T11:55:06 | https://huggingface.co/deepseek-ai/DeepSeek-R1-0528 | Xhehab_ | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ky8vlm | false | null | t3_1ky8vlm | /r/LocalLLaMA/comments/1ky8vlm/deepseekr10528_official_benchmarks_released/ | false | false | 707 | {'enabled': False, 'images': [{'id': 'vAUxpVLie1Mqj4dWMCPpSgS4JDBz82acZHywzpoHzeY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=108&crop=smart&auto=webp&s=9b162e58d60efac60b6dde3b475e84496c0c1868', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=216&crop=smart&auto=webp&s=3949b876e7c99273430c712fbc35a1785d977b36', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=320&crop=smart&auto=webp&s=23bcfa201f61c88160470e2af5d654df7d3cd98d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=640&crop=smart&auto=webp&s=2851bfb3532bcd96cf4e16cbef4ae32c4943a665', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=960&crop=smart&auto=webp&s=511b397608bf1ba5982791270ccb1555276b7afa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?width=1080&crop=smart&auto=webp&s=9b43129c382b30a605d61cf49729d12f62874dcb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/G2g_zbuPp_sknOUdQv6ufEg8e0xJC81xbpHlzy2plQU.jpg?auto=webp&s=88c210c9d82c4b4d51ffdd4c1fa7056e86c4cacf', 'width': 1200}, 'variants': {}}]} |
||
Anyone heard about DeepSeek-R1-0528-Qwen3-8B? | 1 | [removed] | 2025-05-29T12:03:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ky91at/anyone_heard_about_deepseekr10528qwen38b/ | ApprehensiveRoof2722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky91at | false | null | t3_1ky91at | /r/LocalLLaMA/comments/1ky91at/anyone_heard_about_deepseekr10528qwen38b/ | false | false | self | 1 | null |
https://github.com/adeelahmad/mlx-grpo | 1 | 2025-05-29T12:08:54 | https://github.com/adeelahmad/mlx-grpo | adeelahmadch | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ky958t | false | null | t3_1ky958t | /r/LocalLLaMA/comments/1ky958t/httpsgithubcomadeelahmadmlxgrpo/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0V-DlH4S8Lpw-5ak6RkOhkC6HAYu3E-wUk_YyWjyugM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?width=108&crop=smart&auto=webp&s=f8fc1cb3dcdf3ab15fe255a9086398c836f34441', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?width=216&crop=smart&auto=webp&s=9343642d4cdb3698dd65d22a6fe8524585c86b03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?width=320&crop=smart&auto=webp&s=b1db4c70c72aa8c7189f235b97d243526e70e9d2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?width=640&crop=smart&auto=webp&s=7b8787bb254f32fd8fbaf8bcac12c6686e69b828', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?width=960&crop=smart&auto=webp&s=5567ec2258bf8ff6f54c5a3af520e673d5d1260c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?width=1080&crop=smart&auto=webp&s=122153d9366447e53f480d2d8700950bf0866e06', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4bujr2a9Rx-_7aFLAdHGHINXunCPwtFr2Yoq_Rriz8Q.jpg?auto=webp&s=8fc1b0c7e5712b33def471eed2485989388228a4', 'width': 1200}, 'variants': {}}]} |
||
[OC] Clean MCP server/client setup for backend apps — no more Stdio + IDE lock-in | 1 | [removed] | 2025-05-29T12:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ky9bej/oc_clean_mcp_serverclient_setup_for_backend_apps/ | s1lv3rj1nx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky9bej | false | null | t3_1ky9bej | /r/LocalLLaMA/comments/1ky9bej/oc_clean_mcp_serverclient_setup_for_backend_apps/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gt8YBQhLJDO9bS_Ufhd6THIbFdPaILwQd6v-W4W06rg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?width=108&crop=smart&auto=webp&s=87476195a7beac59ac6b8392511baa3f2a3bbd17', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?width=216&crop=smart&auto=webp&s=ed8c1b4169632a1cf6a3f3d4711dda6d23fe603a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?width=320&crop=smart&auto=webp&s=1f21628ac382c4c21990fc35b6a20ef43ede062b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?width=640&crop=smart&auto=webp&s=79e8773bb3b533124b1a478bf077bc04686465f3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?width=960&crop=smart&auto=webp&s=fe7c2e1198c6a838982758ae6999d7f6be8d6559', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?width=1080&crop=smart&auto=webp&s=b6c0bc2f0f1aff4c7dfe23b0260bc629862463fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H8e7Ya3-2Z681FbhytG9FFBoDai-9oPoZbS3U8Zj7JE.jpg?auto=webp&s=dff0345ef04f613e61f22893346928cac02ca6ab', 'width': 1200}, 'variants': {}}]} |
🔍 DeepSeek-R1-0528: Open-Source Reasoning Model Catching Up to O3 & Gemini? | 30 |
DeepSeek just released an updated version of its reasoning model: **DeepSeek-R1-0528**, and it's getting *very* close to the top proprietary models like OpenAI's O3 and Google’s Gemini 2.5 Pro—while remaining completely open-source.
https://preview.redd.it/bw6qw038rp3f1.png?width=3961&format=png&auto=webp&s=4399b2c6fa184d68de8dfedd4ed84c529d9033a2
🧠 **What’s New in R1-0528?**
* Major gains in reasoning depth & inference.
* AIME 2025 accuracy jumped from **70% → 87.5%**.
* Reasoning now uses **\~23K tokens per question** on average (previously \~12K).
* Reduced hallucinations, improved function calling, and better "vibe coding" UX.
📊 **How does it stack up?**
Here’s how DeepSeek-R1-0528 (and its distilled variant) compare to other models:
|Benchmark|DeepSeek-R1-0528|o3-mini|Gemini 2.5|Qwen3-235B|
|:-|:-|:-|:-|:-|
|**AIME 2025**|**87.5**|76.7|72.0|81.5|
|**LiveCodeBench**|**73.3**|65.9|62.3|66.5|
|**HMMT Feb 25**|**79.4**|53.3|64.2|62.5|
|**GPQA-Diamond**|**81.0**|76.8|**82.8**|71.1|
📌 **Why it matters:**
This update shows DeepSeek closing the gap on state-of-the-art models in math, logic, and code—all in an open-source release. It’s also practical to run locally (check Unsloth for quantized versions), and DeepSeek now supports system prompts and smoother chain-of-thought inference without hacks.
🧪 Try it: [huggingface.co/deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528)
🌐 Demo: [chat.deepseek.com](https://chat.deepseek.com) (toggle “DeepThink”)
🧠 API: [platform.deepseek.com](https://platform.deepseek.com) | 2025-05-29T12:20:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ky9dbd/deepseekr10528_opensource_reasoning_model/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky9dbd | false | null | t3_1ky9dbd | /r/LocalLLaMA/comments/1ky9dbd/deepseekr10528_opensource_reasoning_model/ | false | false | 30 | null |
|
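For people who would rather hit the hosted endpoint than run the weights, the API is OpenAI-compatible; a minimal sketch is below, with the base URL and `deepseek-reasoner` model name taken from the public docs at the time of writing (treat them as assumptions if you are reading this later).

```python
# Minimal sketch of calling DeepSeek-R1-0528 through the OpenAI-compatible API.
# Base URL and model name are assumptions from the public docs; set DEEPSEEK_API_KEY first.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)
print(resp.choices[0].message.content)
```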
First version of Elicitation to the MCP draft specification. | 8 | 2025-05-29T12:27:17 | https://modelcontextprotocol.io/specification/draft/client/elicitation | Jordi_Mon_Companys | modelcontextprotocol.io | 1970-01-01T00:00:00 | 0 | {} | 1ky9i0z | false | null | t3_1ky9i0z | /r/LocalLLaMA/comments/1ky9i0z/first_version_of_elicitation_to_the_mcp_draft/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'LjTpFd_4SaaWYWPmkAYQUo2TwFNjXXBS0zFSbsWyRuo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?width=108&crop=smart&auto=webp&s=7a834690bf5b504383c894e57e513dfb8c93ea61', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?width=216&crop=smart&auto=webp&s=dad0db9d875173631e4f9744efd8da45b4b66406', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?width=320&crop=smart&auto=webp&s=2af13675eab23bb62b9c1eec7508053f6a2813d4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?width=640&crop=smart&auto=webp&s=0f85a3d41bda78e021c87cf82603eb37b46468f5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?width=960&crop=smart&auto=webp&s=952d617c380d8163966844c0ab5fcc9d502e9aae', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?width=1080&crop=smart&auto=webp&s=f864a28b9864a522a6298af79525ab749621afb8', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/t2kWo6n-mwJ5Mgcx-hTyiDZTY_h1nO4UdL9Mkg7KoWc.jpg?auto=webp&s=3196779166ac40e43be4cf296e7da07031217be9', 'width': 1200}, 'variants': {}}]} |
||
Smallest & best OCR model that can read math & code? | 3 | It seems like math & OCR are hard for models.
I tried Google's Gemma models 2b, 7b, 27b (my LMStudio has Gemma 3 4B Instruct QAT), but they always make some mistake: either they don't read sections fully or they misread them. For example, one section had 4 list items but the model only read 2 of them.
Another one was Qwen-2.5-vl-7b which can't understand the difference between 10^9 and 109.
Is there any small model that excels at math & code and can read whole sections without problems? I'd also like it to be as small as possible.
Google's Gemma is good but not enough as it frequently gets things wrong. | 2025-05-29T12:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ky9q2a/smallest_best_ocr_model_that_can_read_math_code/ | deadcoder0904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ky9q2a | false | null | t3_1ky9q2a | /r/LocalLLaMA/comments/1ky9q2a/smallest_best_ocr_model_that_can_read_math_code/ | false | false | self | 3 | null |
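One way to compare these small VLMs on equal footing is to send the same page image to whatever local server you are already running (LM Studio, llama.cpp server, vLLM) through the OpenAI-compatible vision API; a sketch with the port and model id as placeholders:

```python
# Sketch: send one page image to a locally served vision model and inspect the transcription.
# Port and model name are placeholders; any OpenAI-compatible local server with image support works.
import base64
import requests

with open("page.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "gemma-3-4b-it-qat",  # placeholder model id
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this page exactly, including every list item and superscripts like 10^9."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        "temperature": 0,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```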
Deepseek R1.1 dominates gemini 2.5 flash on price vs performance | 166 | 2025-05-29T12:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kya3c2/deepseek_r11_dominates_gemini_25_flash_on_price/ | ihexx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kya3c2 | false | null | t3_1kya3c2 | /r/LocalLLaMA/comments/1kya3c2/deepseek_r11_dominates_gemini_25_flash_on_price/ | false | false | 166 | null |
||
DeepSeek-R1-0528 Official Benchmark | 371 | Source:https://mp.weixin.qq.com/s/U5fnTRW4cGvXYJER__YBiw | 2025-05-29T13:02:45 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kya8kq | false | null | t3_1kya8kq | /r/LocalLLaMA/comments/1kya8kq/deepseekr10528_official_benchmark/ | false | false | 371 | {'enabled': True, 'images': [{'id': 'X84gp9VhqYpUYVIl--oeHETjZ58lDPymOMOWpmRdbnE', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?width=108&crop=smart&auto=webp&s=61508f70e6c9bb6ea9982ce4eb6821c431beb0dc', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?width=216&crop=smart&auto=webp&s=232b03d945a73f56a3ea8fc904aa219500d818c9', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?width=320&crop=smart&auto=webp&s=dca5e8888607256867462a836d3137a5ed6bcb8e', 'width': 320}, {'height': 377, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?width=640&crop=smart&auto=webp&s=a26aecb4cde21d947b429d105d49de5b484adce2', 'width': 640}, {'height': 566, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?width=960&crop=smart&auto=webp&s=22fde72897b2486a75e1127bf867478f61a626f0', 'width': 960}, {'height': 637, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?width=1080&crop=smart&auto=webp&s=e5b3f5b962f648040fded2376d046b28c997c8bd', 'width': 1080}], 'source': {'height': 755, 'url': 'https://preview.redd.it/ph8ccp8vyp3f1.png?auto=webp&s=8f12398864fa1662e2ecf3792b6f85d60242a9e4', 'width': 1280}, 'variants': {}}]} |
||
New DeepSeek R1 8B Distill that's "matching the performance of Qwen3-235B-thinking" may be incoming! | 307 | DeepSeek-R1-0528-Qwen3-8B incoming? Oh yeah, gimme that, thank you! 😂 | 2025-05-29T13:07:22 | Cool-Chemical-5629 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyac9f | false | null | t3_1kyac9f | /r/LocalLLaMA/comments/1kyac9f/new_deepseek_r1_8b_distill_thats_matching_the/ | false | false | 307 | {'enabled': True, 'images': [{'id': 'LpXiT4GLQ3DeHpOvOTUlsOTrsEELtw0WIGZtQjOfPLI', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/8vwdjpcxyp3f1.png?width=108&crop=smart&auto=webp&s=2cc095ea99d80b76e9d0148cdd9f44b25fca4cd2', 'width': 108}, {'height': 209, 'url': 'https://preview.redd.it/8vwdjpcxyp3f1.png?width=216&crop=smart&auto=webp&s=f3b2fb5f219bb6ce22d10f83efe4aaa285c9802d', 'width': 216}, {'height': 310, 'url': 'https://preview.redd.it/8vwdjpcxyp3f1.png?width=320&crop=smart&auto=webp&s=cf4c53db852928d1164994fd06cff28578b00e80', 'width': 320}, {'height': 620, 'url': 'https://preview.redd.it/8vwdjpcxyp3f1.png?width=640&crop=smart&auto=webp&s=16361f0824e9b22cc2a7a8bb532724773abb7a72', 'width': 640}], 'source': {'height': 697, 'url': 'https://preview.redd.it/8vwdjpcxyp3f1.png?auto=webp&s=3fd37b5dae7f7f9b0e9814838807d835a0f27cf2', 'width': 719}, 'variants': {}}]} |
||
Speed-up VLLM server boot | 1 | [removed] | 2025-05-29T13:17:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kyak6q/speedup_vllm_server_boot/ | badmathfood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyak6q | false | null | t3_1kyak6q | /r/LocalLLaMA/comments/1kyak6q/speedup_vllm_server_boot/ | false | false | self | 1 | null |
DeepSeek-R1-0528 distill on Qwen3 8B | 151 | 2025-05-29T13:17:51 | Own-Potential-2308 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyakcp | false | null | t3_1kyakcp | /r/LocalLLaMA/comments/1kyakcp/deepseekr10528_distill_on_qwen3_8b/ | false | false | 151 | {'enabled': True, 'images': [{'id': 'lPwlt9148s2wZuBYrrHPk7x396eULnEK2CRLdF8d6-c', 'resolutions': [{'height': 101, 'url': 'https://preview.redd.it/nrkr44ek1q3f1.jpeg?width=108&crop=smart&auto=webp&s=638df7c2c7e4d93291a44abbc75d2cf1ee37fd26', 'width': 108}, {'height': 203, 'url': 'https://preview.redd.it/nrkr44ek1q3f1.jpeg?width=216&crop=smart&auto=webp&s=f29fb5191a82e5eddc9fd681b810fca3af4f8480', 'width': 216}, {'height': 301, 'url': 'https://preview.redd.it/nrkr44ek1q3f1.jpeg?width=320&crop=smart&auto=webp&s=e774d9bbfdd958865bbc2af656d63bec7d7c3eda', 'width': 320}, {'height': 602, 'url': 'https://preview.redd.it/nrkr44ek1q3f1.jpeg?width=640&crop=smart&auto=webp&s=d7ae61c0111aa5a48e0895ada14976d096d88746', 'width': 640}], 'source': {'height': 779, 'url': 'https://preview.redd.it/nrkr44ek1q3f1.jpeg?auto=webp&s=0b8abececa74e78ca259ddc8cb9b5f0d8777a9a5', 'width': 828}, 'variants': {}}]} |
|||
Coresignal MCP: Test it with 1,000 free credits | 1 | [removed] | 2025-05-29T13:23:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kyaoir/coresignal_mcp_test_it_with_1000_free_credits/ | AdmirableBat3827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyaoir | false | null | t3_1kyaoir | /r/LocalLLaMA/comments/1kyaoir/coresignal_mcp_test_it_with_1000_free_credits/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'kuDdK7_W5GTZN-ezUbE9RIrXyLhC3vvLkLS07xbGEaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6p2rQWnvhWKSsCerEPgOYfWJKL1c1TTX337jkPWO-LI.jpg?width=108&crop=smart&auto=webp&s=dc65aacc17290e6486aa963cda6254e33c8563d0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6p2rQWnvhWKSsCerEPgOYfWJKL1c1TTX337jkPWO-LI.jpg?width=216&crop=smart&auto=webp&s=a2b380a340b036909f0bec60f425129552983db5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6p2rQWnvhWKSsCerEPgOYfWJKL1c1TTX337jkPWO-LI.jpg?width=320&crop=smart&auto=webp&s=da4275367d0ef92bb4f8bfa825b79d63a91104c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6p2rQWnvhWKSsCerEPgOYfWJKL1c1TTX337jkPWO-LI.jpg?width=640&crop=smart&auto=webp&s=3e00cfb3fba678f931eeb0b42759ca74148863e6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6p2rQWnvhWKSsCerEPgOYfWJKL1c1TTX337jkPWO-LI.jpg?width=960&crop=smart&auto=webp&s=3d57ef05a7f47d4e0a79455a62e4100ea6a8cb2e', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/6p2rQWnvhWKSsCerEPgOYfWJKL1c1TTX337jkPWO-LI.jpg?auto=webp&s=dae648ab0f08d0701fc601c0e15fdea35a383acb', 'width': 1024}, 'variants': {}}]} |
deepseek-ai/DeepSeek-R1-0528-Qwen3-8B · Hugging Face | 289 | 2025-05-29T13:24:05 | https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kyap9q | false | null | t3_1kyap9q | /r/LocalLLaMA/comments/1kyap9q/deepseekaideepseekr10528qwen38b_hugging_face/ | false | false | 289 | {'enabled': False, 'images': [{'id': 'R-1OzuRKOdpsZYZg4m_xP2EzGdAuJDcDlA7j2s3ED38', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?width=108&crop=smart&auto=webp&s=165d4cf0673ca50bddb247fff72e6822b06e2c6e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?width=216&crop=smart&auto=webp&s=79840b0a08c15a8402169a216e266d68fd3fcecf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?width=320&crop=smart&auto=webp&s=c2711544b39cbb577e8fba5030065b16c5ea95e1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?width=640&crop=smart&auto=webp&s=fdf654415d883f00f7930d8548353332a4e97f3a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?width=960&crop=smart&auto=webp&s=165b965981a74b09144beb06a3ef62dbd4ffa957', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?width=1080&crop=smart&auto=webp&s=649d9c68216e4351ad8a9c571e93926c8d0bebfb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8hRwXI0dhC0uoSc2zQ6TvHX1Aw9zshcTMnuDtSCd7AY.jpg?auto=webp&s=9705e54fdb76e3cf9147598b335d087d668442de', 'width': 1200}, 'variants': {}}]} |
||
Setting Up a Local LLM for Private Document Processing – Recommendations? | 2 | Hey!
I’ve got a client who needs a local AI setup to process sensitive documents that can't be exposed online. So, I'm planning to deploy a local LLM (Large Language Model) on a dedicated server within their internal network.
The budget is around $5,000 USD, so getting solid computing power and a decent GPU shouldn't be an issue.
A few questions:
* What’s currently the best all-around LLM that can be downloaded and run locally?
* Is **Ollama** still the go-to tool for running local models, or are there better alternatives?
* What drivers or frameworks will I need to support the setup?
* Any hardware suggestions?
For context, I come from a frontend background with some fullstack experience, so I’m thinking of building them a custom GUI with prefilled prompts for the tasks they’ll need regularly.
Anything else I should consider for this kind of setup? | 2025-05-29T13:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kyaw41/setting_up_a_local_llm_for_private_document/ | DSandleman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyaw41 | false | null | t3_1kyaw41 | /r/LocalLLaMA/comments/1kyaw41/setting_up_a_local_llm_for_private_document/ | false | false | self | 2 | null |
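Whatever runner ends up being chosen, the backend behind a prefilled-prompt GUI can stay very small. Here is a hedged sketch using the `ollama` Python package against a local Ollama server, with the model name and the naive document truncation as placeholders.

```python
# Minimal sketch of a "prefilled prompt over a private document" backend call.
# Uses the ollama Python package against a local Ollama server; model name and
# the naive truncation are placeholders for illustration only.
import ollama

PROMPT_TEMPLATE = (
    "You are reviewing an internal document. {task}\n\n"
    "Document:\n{document}"
)

def run_task(task: str, document_text: str, model: str = "qwen3:32b") -> str:
    prompt = PROMPT_TEMPLATE.format(task=task, document=document_text[:20000])
    resp = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

if __name__ == "__main__":
    doc = open("contract.txt", encoding="utf-8").read()
    print(run_task("Summarize the key obligations and deadlines.", doc))
```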
Qwen withholds 32B/235B base models, presumably so they can’t be distilled by Deepseek. | 1 | [removed] | 2025-05-29T13:54:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kybdzn/qwen_withholds_32b235b_base_models_presumably_so/ | DowntownCase7112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybdzn | false | null | t3_1kybdzn | /r/LocalLLaMA/comments/1kybdzn/qwen_withholds_32b235b_base_models_presumably_so/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eUe17voVkF4rUxp20J0CXK9LZ1ckV3728roXC7v8pVo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=108&crop=smart&auto=webp&s=1e4e0581cca8cdee9d1908117d0d6678ae7c2d82', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=216&crop=smart&auto=webp&s=15903fab82711b1e4f9225aae8f55f60446cbb4d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=320&crop=smart&auto=webp&s=cc3ad072a4d1ac7363ec2ce1d38eeebcddc17cc0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=640&crop=smart&auto=webp&s=3655edd78b8c90b9f09df99ecac68026ea1d38eb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=960&crop=smart&auto=webp&s=14b941079646d3a0a86f2816f5c2da3e253f8daa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?width=1080&crop=smart&auto=webp&s=74b240604d45469b760105bb3b6ea40d6cfb09a6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/StCGtof_ZY2UaRP28sF399B_SlSxCvVeSObIQBN3gZ8.jpg?auto=webp&s=1f3e45fb11550c650a00aecbcec753c205afd580', 'width': 1200}, 'variants': {}}]} |
Personalized AI Tutor Demo | Learn about LLMs with an AI Tutor | 1 | [removed] | 2025-05-29T14:03:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kybmcu/personalized_ai_tutor_demo_learn_about_llms_with/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybmcu | false | null | t3_1kybmcu | /r/LocalLLaMA/comments/1kybmcu/personalized_ai_tutor_demo_learn_about_llms_with/ | false | false | self | 1 | null |
the impact of memory timings on CPU LLM inference performance. | 7 | I didn't find any data related to this subject so I ran a few tests over the past few days and got some interesting results.
The inspiration for the test was [this thread on hardwareluxx](https://www.hardwareluxx.de/community/threads/ram-timings-und-deren-einfluss-auf-spiele-und-anwendungen-amd-update-23-05-2020.1269156/).
unfortunately I only have access to two ddr4 AM4 CPUs. I will repeat the tests when I get access to a ddr5 system.
CPUs are running at fixed clocks. R7 2700 at 3.8Ghz and R5 5600 at 4.2Ghz.
I tested Single Rank and Dual rank configurations, both using samsung B die sticks. The performance gain due to tighter timings on SR is more significant (which is consistent with [gaming benchmarks](https://www.youtube.com/watch?v=AGux0pANft0))
The thing I found most interesting was the lack of sensitivity to tRRDS, tRRDL, and tFAW compared to gaming workloads: I usually gain 5-7% from tightening those in games like Witcher 3, but here the impact is much smaller.
By far the most important timings, based on my tests, are tRFC and tRDRDSCL, which is a massive advantage for Samsung B-die kits (and also Hynix A/M-die on DDR5, if the results hold there too).
I ran the tests using the llama.cpp CPU backend. I also tried ik\_llama.cpp: it was slower on Zen+ and about the same on Zen 2 (prompt processing was much faster, but since PP is not bandwidth-sensitive, I stuck with llama.cpp).
[zen+, 3400MT\/s Dual Rank B Die ](https://preview.redd.it/tobey4ib8q3f1.png?width=1134&format=png&auto=webp&s=faa022a3cb917a73982644e4a9f931674698dde2)
[zen2, 3733MT\/s Dual Rank B die](https://preview.redd.it/da96hezu8q3f1.png?width=1170&format=png&auto=webp&s=3dfbbeacd46f72baee0b643a35891c7cff56f098)
[zen2, 3733MT\/s SR vs DR, Qwen3 4B q4K\_M](https://preview.redd.it/iivlo3x19q3f1.png?width=1196&format=png&auto=webp&s=2cacc69a161f802bc0ff9f04770e06aef3348040)
TLDR: if you have had experince in memory OC, make sure to tune tRRDS/L, tFAW, tRFC, tRDRDSCL for at least a 5% boost to TG performance... | 2025-05-29T14:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kybql4/the_impact_of_memory_timings_on_cpu_llm_inference/ | AliNT77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybql4 | false | null | t3_1kybql4 | /r/LocalLLaMA/comments/1kybql4/the_impact_of_memory_timings_on_cpu_llm_inference/ | false | false | 7 | null |
|
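The tRFC/tRDRDSCL sensitivity lines up with token generation being bandwidth-bound; a quick back-of-envelope check (assumed bus width and model size, not measured) shows why every percent of effective bandwidth shows up almost directly in TG:

```python
# Back-of-envelope: theoretical DRAM bandwidth vs. an upper bound on token generation.
# Assumes a 128-bit (dual-channel) bus and a rough Q4_K_M size for Qwen3 4B.
def peak_bandwidth_gbs(mt_s: int, bus_bytes: int = 16) -> float:
    return mt_s * bus_bytes / 1000  # GB/s

bw = peak_bandwidth_gbs(3733)   # ~59.7 GB/s for DDR4-3733 dual channel
weights_gb = 2.5                # rough size of Qwen3 4B at Q4_K_M (assumption)
print(f"peak {bw:.1f} GB/s -> TG upper bound ~{bw / weights_gb:.0f} tok/s for that model")
# Measured TG sits well below this bound, so effective-bandwidth gains from
# tighter tRFC / tRDRDSCL translate almost directly into tok/s.
```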
Small open models are more cost effective than closed ones (score from artifical analysis). | 34 | Sampled only the most cost efficient models that were above a score threshold. | 2025-05-29T14:12:37 | GreenTreeAndBlueSky | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kybtri | false | null | t3_1kybtri | /r/LocalLLaMA/comments/1kybtri/small_open_models_are_more_cost_effective_than/ | false | false | 34 | {'enabled': True, 'images': [{'id': 'lA-Kd2ezIorsaDntaG6RoDwCx7zssa-KCHeOKjh2sgU', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?width=108&crop=smart&auto=webp&s=52779e4d061460335cb2c329d16ef64704eb13ab', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?width=216&crop=smart&auto=webp&s=b9dd6dbb1d8e328dac5dbe08de27045056cbc782', 'width': 216}, {'height': 239, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?width=320&crop=smart&auto=webp&s=6ec8d68646975a1a2d5f4e9775b02542e02928a2', 'width': 320}, {'height': 478, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?width=640&crop=smart&auto=webp&s=5e7240aaf83de86ee6bec64e336e5d5e8f0b8703', 'width': 640}, {'height': 718, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?width=960&crop=smart&auto=webp&s=a09fc0cc0f812ad33343691423daeed30cf2589b', 'width': 960}, {'height': 808, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?width=1080&crop=smart&auto=webp&s=6145615ace49a832a1dba54d67674e7a274ab056', 'width': 1080}], 'source': {'height': 1180, 'url': 'https://preview.redd.it/hwn90facbq3f1.png?auto=webp&s=90243f3847e5802172fcd95e3d756889313cbf6d', 'width': 1577}, 'variants': {}}]} |
||
I scraped 1M jobs directly from corporate websites. | 1 | [removed] | 2025-05-29T14:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kybvz7/i_scraped_1m_jobs_directly_from_corporate_websites/ | Elieroos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kybvz7 | false | null | t3_1kybvz7 | /r/LocalLLaMA/comments/1kybvz7/i_scraped_1m_jobs_directly_from_corporate_websites/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=216&crop=smart&auto=webp&s=0bba062fe06cce12fc3d0c4cb2a0ea82abc7c266', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=320&crop=smart&auto=webp&s=3ad6582619e3a7c3baeb4b3bc407f87a187c2336', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=640&crop=smart&auto=webp&s=1b9a8da21d7a1b9b308c5828dbe6f6b7287068d6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=960&crop=smart&auto=webp&s=196ba9362a8c5c81bc99f396e5c4bd3401667518', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=1080&crop=smart&auto=webp&s=f79588c44be17c9eae5cf5c5ccf4c0d9f77f0734', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?auto=webp&s=fa755a2de2b11728baa2d5e5dcd88171c0e5d4be', 'width': 1200}, 'variants': {}}]} |
New Qwen3 8B Distill of DeepSeek R1 0528 | 1 | [removed] | 2025-05-29T14:20:50 | samuelchristlie | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyc0uz | false | null | t3_1kyc0uz | /r/LocalLLaMA/comments/1kyc0uz/new_qwen3_8b_distill_of_deepseek_r1_0528/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'vD848Gtcd6AzmG2Ci8YxqhMMmWmK3hihSXC_LPqGqHo', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?width=108&crop=smart&auto=webp&s=124fe075347a442f069658b0add64282d5f91cc9', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?width=216&crop=smart&auto=webp&s=9eadf80916686441c899ee84b6e375a3253b050f', 'width': 216}, {'height': 118, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?width=320&crop=smart&auto=webp&s=591b278e88916d017ed4b659951d9661cd40e21c', 'width': 320}, {'height': 236, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?width=640&crop=smart&auto=webp&s=d75186f8bb1dd4c18911dea43edd3365d43799f9', 'width': 640}, {'height': 354, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?width=960&crop=smart&auto=webp&s=d977c8457d6734b68c10d653771d89a1cfce074d', 'width': 960}, {'height': 398, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?width=1080&crop=smart&auto=webp&s=e31db9bcc18930e54e17c2ed20d824ef87c15dbd', 'width': 1080}], 'source': {'height': 539, 'url': 'https://preview.redd.it/gptpz0rxaq3f1.png?auto=webp&s=12cd572dbe9ef09a0aaf8b22dd55aac973f81f46', 'width': 1461}, 'variants': {}}]} |
||
Interesting LLM Thesis Topics | 1 | [removed] | 2025-05-29T14:23:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kyc2sc/interesting_llm_thesis_topics/ | KaiKawaii0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyc2sc | false | null | t3_1kyc2sc | /r/LocalLLaMA/comments/1kyc2sc/interesting_llm_thesis_topics/ | false | false | self | 1 | null |
Deepseek is the 4th most Intelligent Ai in the world | 1 | *Processing img fw3jnlhocq3f1...*
And yes, that's Claude-4 all the way at the bottom.
i love Deepseek
i mean look at the price to performance | 2025-05-29T14:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kyc47y/deepseek_is_the_4th_most_intelligent_ai_in_the/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyc47y | false | null | t3_1kyc47y | /r/LocalLLaMA/comments/1kyc47y/deepseek_is_the_4th_most_intelligent_ai_in_the/ | false | false | self | 1 | null |
Deepseek is the 4th most intelligent AI in the world. | 325 | 2025-05-29T14:31:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kyca0p/deepseek_is_the_4th_most_intelligent_ai_in_the/ | Rare-Programmer-1747 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyca0p | false | null | t3_1kyca0p | /r/LocalLLaMA/comments/1kyca0p/deepseek_is_the_4th_most_intelligent_ai_in_the/ | false | false | 325 | null |
||
Can we take Deepseek-R1-Qwen3-8b tokenizer and copy it to Qwen3 30b A3b? | 0 | Deepseek’s post on the R1 distill for Qwen3 8b implies the only thing changed is the tokenizer config, and other parts of Qwen3 are the same.
This is surprising to me, as I thought such a distill would require a lot of GPU power, to finetune the model with the R1 dataset.
If this is not the case, and we can do a simple ctrl-c ctrl-v, then is it possible to apply this to other Qwen3 models? Such as Qwen3 32b or Qwen3 30b A3b? | 2025-05-29T14:40:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kycia8/can_we_take_deepseekr1qwen38b_tokenizer_and_copy/ | jaxchang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycia8 | false | null | t3_1kycia8 | /r/LocalLLaMA/comments/1kycia8/can_we_take_deepseekr1qwen38b_tokenizer_and_copy/ | false | false | self | 0 | null |
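Mechanically, moving a tokenizer between checkpoints is a couple of lines with `transformers` (sketch below; the target directory is a placeholder), but it only copies tokenizer files. It would not transfer any of the distill's fine-tuned behaviour to Qwen3-32B or 30B-A3B, so the interesting question is what happened to the weights.

```python
# Sketch: copy the tokenizer from the R1-0528 Qwen3-8B distill into a local copy
# of another Qwen3 checkpoint. This only swaps tokenizer files; the model weights
# are untouched, so it does not "distill" anything by itself.
from transformers import AutoTokenizer

src = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"
dst_dir = "./Qwen3-30B-A3B-with-r1-tokenizer"  # local copy of the target model (placeholder)

tok = AutoTokenizer.from_pretrained(src)
tok.save_pretrained(dst_dir)  # writes tokenizer.json / tokenizer_config.json etc.
```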
PC configuration for fast LocalLLaMA | 1 | [removed] | 2025-05-29T14:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kycllk/pc_configuration_for_fast_localllama/ | Icy_Fee7219 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycllk | false | null | t3_1kycllk | /r/LocalLLaMA/comments/1kycllk/pc_configuration_for_fast_localllama/ | false | false | self | 1 | null |
What is this nice frontend shown on the Deepseek R1 updated website? | 3 | https://i.redd.it/68wa4yfvgq3f1.gif
[Deepseek News Link](https://api-docs.deepseek.com/news/news250528) | 2025-05-29T14:44:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kycm83/what_is_this_nice_frontend_shown_on_the_deepseek/ | Yes_but_I_think | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycm83 | false | null | t3_1kycm83 | /r/LocalLLaMA/comments/1kycm83/what_is_this_nice_frontend_shown_on_the_deepseek/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'fj84M-Z0L_zDI8VjgLPR-vGFwXVTTqVZFoa_h5offPs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=108&crop=smart&auto=webp&s=4f39a07c027d6036b98ac9f4ba405a8d11549aa3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=216&crop=smart&auto=webp&s=77d81d7dfb3f0dc0281915e155e87541e4069970', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=320&crop=smart&auto=webp&s=e7e73cd0eb037665260b5368de787bf4d34a0086', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=640&crop=smart&auto=webp&s=aa0a8cd368da789c05b75a810cf0a1e21413b8f2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=960&crop=smart&auto=webp&s=fb05999616d9a4f01271acab1427db387e6f4095', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?width=1080&crop=smart&auto=webp&s=6aea590aabdd6f82e13381ed9c97788ecddef016', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/AO2sAF0_c_2mBe6UautksfrJRPPX3sFbs0Fu0kPn0C0.jpg?auto=webp&s=bb5327c204c8ce6c5773c7700d887e31427085b4', 'width': 1200}, 'variants': {}}]} |
|
No offense: Deepseek 8b 0528 Qwen3 Not Better Than Qwen3 8B | 0 | Just want to say this
I asked a few prompts about basic stuff like creating a calculator.
Qwen3 solved them zero-shot, whereas the DeepSeek 8B Qwen distill needed more attempts.
| 2025-05-29T14:59:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kycymx/no_offense_deepseek_8b_0528_qwen3_not_better_than/ | dreamai87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kycymx | false | null | t3_1kycymx | /r/LocalLLaMA/comments/1kycymx/no_offense_deepseek_8b_0528_qwen3_not_better_than/ | false | false | self | 0 | null |
"These students can't add two and two, and they go to Harvard." — Donald Trump | 0 | 2025-05-29T15:03:44 | Fun-Doctor6855 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kyd34w | false | null | t3_1kyd34w | /r/LocalLLaMA/comments/1kyd34w/these_students_cant_add_two_and_two_and_they_go/ | false | false | 0 | {'enabled': True, 'images': [{'id': '4QakP4WzA59VCrcpOFeyPMVdWrPIvXFstpY8P_8XPfI', 'resolutions': [{'height': 103, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?width=108&crop=smart&auto=webp&s=5edb55ab120c0ee2aa01d6841f984ab20ecae915', 'width': 108}, {'height': 207, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?width=216&crop=smart&auto=webp&s=85cb59c223be77bf2406d8518203876e33bae99c', 'width': 216}, {'height': 308, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?width=320&crop=smart&auto=webp&s=adfd4562025f5050bfe50632f0b058eed4ac43bf', 'width': 320}, {'height': 616, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?width=640&crop=smart&auto=webp&s=a4c2fdfcef38736345d4939c3ca9dda3f65524c6', 'width': 640}, {'height': 924, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?width=960&crop=smart&auto=webp&s=757986e322c2eb5e4221382159fe83cc9aae673d', 'width': 960}, {'height': 1039, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?width=1080&crop=smart&auto=webp&s=7dcbc34d27de1e2d6f196953eb0c0a37bc50b3e2', 'width': 1080}], 'source': {'height': 1386, 'url': 'https://preview.redd.it/gzuaa0ufkq3f1.jpeg?auto=webp&s=1b442c6ed8232d7b4890f8120cd52a9fe18fe106', 'width': 1440}, 'variants': {}}]} |
|||
Got Access to Domo AI. What should I try with it? | 0 | just got access to [domoai](https://www.domoai.app/home?via=081621AUG) and have been testing different prompts. If you have ideas like anime to real, style-swapped videos, or anything unusual, drop them in the comments. I’ll try the top suggestions with the most upvotes after a few hours since it takes some time to generate results.
I’ll share the links once they’re ready.
If you have a unique or creative idea, post it below and I’ll try to bring it to life.
| 2025-05-29T15:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kydf3k/got_access_to_domo_ai_what_should_i_try_with_it/ | Own_View3337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydf3k | false | null | t3_1kydf3k | /r/LocalLLaMA/comments/1kydf3k/got_access_to_domo_ai_what_should_i_try_with_it/ | false | false | self | 0 | null |
Does Llama actually work well for real projects? Which version is best, and what are the trade-offs? | 1 | [removed] | 2025-05-29T15:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kydksa/does_llama_actually_work_well_for_real_projects/ | chefs-1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydksa | false | null | t3_1kydksa | /r/LocalLLaMA/comments/1kydksa/does_llama_actually_work_well_for_real_projects/ | false | false | self | 1 | null |
Is there a local model that can solve this text decoding riddle? | 4 | Since the introduction of DeepSeek-R1 distills (the original ones) I've tried to find a local model that can solve text decoding problem from o1 research page ["Learning to reason with LLMs" (OpenAI)](https://openai.com/index/learning-to-reason-with-llms/):
oyfjdnisdr rtqwainr acxz mynzbhhx -> Think step by step
Use the example above to decode:
oyekaijzdf aaptcg suaokybhai ouow aqht mynznvaatzacdfoulxxz
So far, no model up to 32B params (with quantization) has been able to solve this, on my machine at least.
If the model is small, it tends to give up early and say that there is no solution.
If the model is larger, it talks to itself endlessly until it runs out of context.
So, maybe it is possible if the right model and settings are chosen? | 2025-05-29T15:27:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kydoio/is_there_a_local_model_that_can_solve_this_text/ | F1amy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydoio | false | null | t3_1kydoio | /r/LocalLLaMA/comments/1kydoio/is_there_a_local_model_that_can_solve_this_text/ | false | false | self | 4 | null |
Mastering DeepSeek LLaMA Locally: Open WebUI + Ollama Guide | 1 | [removed] | 2025-05-29T15:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kydqw2/mastering_deepseek_llama_locally_open_webui/ | techlatest_net | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydqw2 | false | null | t3_1kydqw2 | /r/LocalLLaMA/comments/1kydqw2/mastering_deepseek_llama_locally_open_webui/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YwLbil7JL-v1VrLIWxRnPRhtfaTePgNV_z5tZk6MWvY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/FWicWs5tMHKYBjCuLBHMrS6ALj18kexW9NEotw5xaNI.jpg?width=108&crop=smart&auto=webp&s=6b0f2892fce5def25786bde7c912767ff4aef411', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/FWicWs5tMHKYBjCuLBHMrS6ALj18kexW9NEotw5xaNI.jpg?width=216&crop=smart&auto=webp&s=7cdea8754870adb309ae19490d0511a6739e3b36', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/FWicWs5tMHKYBjCuLBHMrS6ALj18kexW9NEotw5xaNI.jpg?width=320&crop=smart&auto=webp&s=10221f73f9dba4b96cba7c0ce1f551c3c85630bc', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/FWicWs5tMHKYBjCuLBHMrS6ALj18kexW9NEotw5xaNI.jpg?width=640&crop=smart&auto=webp&s=d5361029639230dd48ea56006bfb996cb5f03b62', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/FWicWs5tMHKYBjCuLBHMrS6ALj18kexW9NEotw5xaNI.jpg?width=960&crop=smart&auto=webp&s=1f8b9fb042b7eb2014c7b737eb4167a4102b3d10', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/FWicWs5tMHKYBjCuLBHMrS6ALj18kexW9NEotw5xaNI.jpg?auto=webp&s=0822f1818dfbc34cbd6bd835c29200aee6deb0be', 'width': 1024}, 'variants': {}}]} |
Google MedGemma models | 1 | [removed] | 2025-05-29T15:32:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kydsza/google_medgemma_models/ | MST019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydsza | false | null | t3_1kydsza | /r/LocalLLaMA/comments/1kydsza/google_medgemma_models/ | false | false | self | 1 | null |
Is there any good smaller NSFW models for story writing? | 4 | I have a fairly weak PC: 6GB VRAM and 50GB RAM. I have tried a couple of models on Ollama, but most of them suck; they either keep repeating themselves or just produce a sort of compilation where they briefly summarize everything and immediately skip to the end.
So are there any good models on Ollama or elsewhere that I can use? It's fine if it's a larger model and the output speed is slow, as long as I can run it.
How do you define "vibe coding"? | 0 | 2025-05-29T15:33:38 | vibjelo | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1kydu4r | false | null | t3_1kydu4r | /r/LocalLLaMA/comments/1kydu4r/how_do_you_define_vibe_coding/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ifU43VPseK25e5pNBguSXwQIB5m0OD5Yv2d_aGc13Hg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?width=108&crop=smart&auto=webp&s=e15cb99d58745beecc428343d490f7548016337e', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?width=216&crop=smart&auto=webp&s=fa1e4aa7b267b01a045cdb0c32a7e1b2547aacf0', 'width': 216}, {'height': 161, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?width=320&crop=smart&auto=webp&s=1da9f3ef1b9d7bf2358096fb9c106a9d2f64764a', 'width': 320}, {'height': 323, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?width=640&crop=smart&auto=webp&s=c91fe2a40ffcdfa04b111527169d68d32ebf4bd2', 'width': 640}, {'height': 484, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?width=960&crop=smart&auto=webp&s=aee027baf03ad9d0c41a4c4f50ac66e8639b6d2d', 'width': 960}, {'height': 545, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?width=1080&crop=smart&auto=webp&s=8ebb36163814ecd0b5f71f01478dfb8c99ed40d6', 'width': 1080}], 'source': {'height': 798, 'url': 'https://external-preview.redd.it/XJ58eUxnoCOpF6NTPxWXIxUq-Ld2T-55DHynDaQaQSQ.png?auto=webp&s=c9b8de031635f92d1620938ee987d52cfb03ae6c', 'width': 1580}, 'variants': {}}]} |
|||
What are cool ways you use your Local LLM | 7 | Things that just make your life a bit easier with Ai. | 2025-05-29T15:39:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kydzmh/what_are_cool_ways_you_use_your_local_llm/ | DOK10101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kydzmh | false | null | t3_1kydzmh | /r/LocalLLaMA/comments/1kydzmh/what_are_cool_ways_you_use_your_local_llm/ | false | false | self | 7 | null |
When to Fine-Tune LLMs (and When Not To) - A Practical Guide | 110 | I've been building fine-tunes for 9 years (at my own startup, then at Apple, now at a second startup) and learned a lot along the way. I thought most of this was common knowledge, but I've been told it's helpful so wanted to write up a rough guide for when to (and when not to) fine-tune, what to expect, and which models to consider. Hopefully it's helpful!
**TL;DR**: Fine-tuning can solve specific, measurable problems: inconsistent outputs, bloated inference costs, prompts that are too complex, and specialized behavior you can't achieve through prompting alone. However, you should pick the goals of fine-tuning before you start, to help you select the right base models.
Here's a quick overview of what fine-tuning can (and can't) do:
**Quality Improvements**
* **Task-specific scores**: Teaching models how to respond through examples (way more effective than just prompting)
* **Style conformance**: A bank chatbot needs different tone than a fantasy RPG agent
* **JSON formatting**: Seen format accuracy jump from <5% to >99% with fine-tuning vs base model
* **Other formatting requirements**: Produce consistent function calls, XML, YAML, markdown, etc
**Cost, Speed and Privacy Benefits**
* **Shorter prompts**: Move formatting, style, rules from prompts into the model itself
* Formatting instructions → fine-tuning
* Tone/style → fine-tuning
* Rules/logic → fine-tuning
* Chain of thought guidance → fine-tuning
* Core task prompt → keep this, but can be much shorter
* **Smaller models**: Much smaller models can offer similar quality for specific tasks, once fine-tuned. Example: Qwen 14B runs 6x faster and costs ~3% of GPT-4.1.
* **Local deployment**: Fine-tune small models to run locally and privately. If building for others, this can drop your inference cost to zero.
**Specialized Behaviors**
* **Tool calling**: Teaching when/how to use specific tools through examples
* **Logic/rule following**: Better than putting everything in prompts, especially for complex conditional logic
* **Bug fixes**: Add examples of failure modes with correct outputs to eliminate them
* **Distillation**: Get large model to teach smaller model (surprisingly easy, takes \~20 minutes)
* **Learned reasoning patterns**: Teach specific thinking patterns for your domain instead of using expensive general reasoning models
**What NOT to Use Fine-Tuning For**
Adding knowledge really isn't a good match for fine-tuning. Use instead:
* RAG for searchable info
* System prompts for context
* Tool calls for dynamic knowledge
You can combine these with fine-tuned models for the best of both worlds.
**Base Model Selection by Goal**
* **Mobile local**: Gemma 3 3n/1B, Qwen 3 1.7B
* **Desktop local**: Qwen 3 4B/8B, Gemma 3 2B/4B
* **Cost/speed optimization**: Try 1B-32B range, compare tradeoff of quality/cost/speed
* **Max quality**: Gemma 3 27B, Qwen3 large, Llama 70B, GPT-4.1, Gemini flash/Pro (yes - you can fine-tune closed OpenAI/Google models via their APIs)
**Pro Tips**
* **Iterate and experiment** \- try different base models, training data, tuning with/without reasoning tokens
* **Set up evals** \- you need metrics to know if fine-tuning worked
* **Start simple** \- supervised fine-tuning usually sufficient before trying RL
* **Synthetic data works well for most use cases** \- don't feel like you need tons of human-labeled data
**Getting Started**
The process of fine-tuning involves a few steps:
1. Pick specific goals from above
2. Generate/collect training examples (few hundred to few thousand)
3. Train on a range of different base models
4. Measure quality with evals
5. Iterate, trying more models and training modes
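To make step 2 concrete, here's a minimal sketch of what supervised fine-tuning data often looks like. The exact schema depends on the tool or provider you train with (Kiln, OpenAI, Together, etc. each have their own format), so treat the field names below as illustrative rather than canonical:

```python
# Illustrative only: chat-style JSONL training examples for supervised fine-tuning.
# Check your provider's docs for the exact schema before uploading.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "Extract the invoice total as JSON."},
            {"role": "user", "content": "Invoice #1042 ... Total due: $1,280.50"},
            {"role": "assistant", "content": "{\"invoice_id\": \"1042\", \"total\": 1280.50}"},
        ]
    },
    # ...a few hundred to a few thousand of these, covering your edge cases
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Each example pairs the short production prompt with the exact output you want. The formatting, style, and rules you're moving out of the prompt should show up consistently in the assistant turns: that's what the model actually learns.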
**Tool to Create and Evaluate Fine-tunes**
I've been building a free and open tool called [Kiln](https://getkiln.ai) which makes this process easy. It has several major benefits:

* **Complete**: Kiln can do every step, including defining schemas, creating synthetic data for training, and creating evals to measure quality and select the best model.
* **Intuitive**: anyone can use Kiln. The UI will walk you through each step.
* **Private**: We never have access to your data. Kiln runs locally. You can choose to fine-tune locally (unsloth) or use a service (Fireworks, Together, OpenAI, Google) using your own API keys
* **Wide range of models**: we support training over 60 models including open-weight models (Gemma, Qwen, Llama) and closed models (GPT, Gemini)
* **Easy Evals**: fine-tuning many models is easy, but selecting the best one can be hard. Our evals will help you figure out which model works best.
If you want to check out the tool or our guides:
* [Kiln AI on Github - over 3500 stars](https://getkiln.ai)
* [Guide: How to Fine-Tune LLMs](https://docs.getkiln.ai/docs/fine-tuning-guide)
* [Guide: How to distill LLMs](https://docs.getkiln.ai/docs/guide-train-a-reasoning-model)
* [Blog post on when to fine-tune (same ideas as above in more depth)](https://getkiln.ai/blog/why_fine_tune_LLM_models_and_how_to_get_started)
* [Kiln AI - Overview and Docs](https://getkiln.ai)
I'm happy to answer questions if anyone wants to dive deeper on specific aspects! | 2025-05-29T16:07:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kyeo4z/when_to_finetune_llms_and_when_not_to_a_practical/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyeo4z | false | null | t3_1kyeo4z | /r/LocalLLaMA/comments/1kyeo4z/when_to_finetune_llms_and_when_not_to_a_practical/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': 'XakaA1XhTLjl2Tl4uMyvMZIXSFLrVmJ26POYXKL-zXM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?width=108&crop=smart&auto=webp&s=fa326ef50bb272a1afa988432609189589ae2dee', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?width=216&crop=smart&auto=webp&s=804bc6242969dca1611141c9aa6853159e729a41', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?width=320&crop=smart&auto=webp&s=9da15b0ce24431ba7a575ed0d2e3b661fc169a42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?width=640&crop=smart&auto=webp&s=684c2f6e1469717b2d191857c05883d516f46e29', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?width=960&crop=smart&auto=webp&s=8818e6c3ad19edf6351c189e67197071ddb4b0a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?width=1080&crop=smart&auto=webp&s=f80bd5b9fa293216139d7156a1f78b3d8f9908ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2LwF8UR_7NcVTFDWyd6CmPGp05eWO7MLbl14VnMS85w.jpg?auto=webp&s=7a6063b66f0be45760b7c244109adb8bda0752f5', 'width': 1200}, 'variants': {}}]} |
Does anyone knows what is goldmane llm at lmarena? | 3 | It gave 10/10 to my specific tasks | 2025-05-29T16:20:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kyf07f/does_anyone_knows_what_is_goldmane_llm_at_lmarena/ | Economy_Apple_4617 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyf07f | false | null | t3_1kyf07f | /r/LocalLLaMA/comments/1kyf07f/does_anyone_knows_what_is_goldmane_llm_at_lmarena/ | false | false | self | 3 | null |
Dual 4090 build for brand compliance analysis - worth it or waste? | 0 | Building a rig to auto-analyze marketing assets against brand guidelines/marketing persona preferences (logo placement, colors, text positioning etc). Need to batch process and score images, then generate reports.
Specs I'm considering:
• 2x RTX 4090 24GB
• R9 7950X
• 128GB DDR5 ECC
• 2TB NVMe, 1600W PSU
• Proxmox for model containers
Key questions:
Do models like Qwen2.5-VL-32B or InternVL-40B actually scale across dual 4090s or am I just burning money?
128GB RAM - necessary for this workload or total overkill?
Anyone running similar visual analysis stuff? What models are you using?
Has to be on-prem (client data), budget flexible but don't want to build a space heater for no reason.
Real experiences appreciated. | 2025-05-29T16:24:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kyf3oc/dual_4090_build_for_brand_compliance_analysis/ | RiseNecessary6351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyf3oc | false | null | t3_1kyf3oc | /r/LocalLLaMA/comments/1kyf3oc/dual_4090_build_for_brand_compliance_analysis/ | false | false | self | 0 | null |
LLM benchmarks for AI MAX+ 395 (HP laptop) | 36 | Not my video.
Even knowing the bandwidth in advance, the tokens per second are still a bit underwhelming. Can't beat physics I guess.
The Framework Desktop will have a higher TDP, but don't think it's gonna help much. | 2025-05-29T16:34:01 | https://www.youtube.com/watch?v=-HJ-VipsuSk | BerryGloomy4215 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1kyfcky | false | {'oembed': {'author_name': 'AIex The AI Workbench', 'author_url': 'https://www.youtube.com/@AIexTheAIWorkbench', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-HJ-VipsuSk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AMD Ryzen AI Max+ 395 | Local LLM Benchmark on HP ZBook Ultra G1a"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-HJ-VipsuSk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AMD Ryzen AI Max+ 395 | Local LLM Benchmark on HP ZBook Ultra G1a', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1kyfcky | /r/LocalLLaMA/comments/1kyfcky/llm_benchmarks_for_ai_max_395_hp_laptop/ | false | false | 36 | {'enabled': False, 'images': [{'id': 'by0eH17d5lslDimbW-QRNwhVovOySH8G4eVonrVuD1g', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tr4JiOsuiWwYXrmqb9qpQBRMgBXV0gjIlFHUHTS_EpE.jpg?width=108&crop=smart&auto=webp&s=724424371434feca6b704da3f5ba3b9f973114fc', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/tr4JiOsuiWwYXrmqb9qpQBRMgBXV0gjIlFHUHTS_EpE.jpg?width=216&crop=smart&auto=webp&s=1f06a6d83781c2be88b0fba715033bd586f7bac5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/tr4JiOsuiWwYXrmqb9qpQBRMgBXV0gjIlFHUHTS_EpE.jpg?width=320&crop=smart&auto=webp&s=c64c43c5bd85f57a3e86b9335903ac1c1ac8709f', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/tr4JiOsuiWwYXrmqb9qpQBRMgBXV0gjIlFHUHTS_EpE.jpg?auto=webp&s=707c4481cbb22cdcdf52e70263c81beff183e4af', 'width': 480}, 'variants': {}}]} |
|
R1 distil qwen 3 8b way worse than qwen3 14b | 0 | Sent the same prompt: "do a solar system simulation in a single html file" to both of them, 3 times each. Qwen14b did fine all three times. The other one failed every single time. Used q4_k_m for qwen3 14b and q5_k_m for r1 distil. | 2025-05-29T17:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kyg15b/r1_distil_qwen_3_8b_way_worse_than_qwen3_14b/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyg15b | false | null | t3_1kyg15b | /r/LocalLLaMA/comments/1kyg15b/r1_distil_qwen_3_8b_way_worse_than_qwen3_14b/ | false | false | self | 0 | null |
How are you handling AI agent coordination in your SaaS? | 1 | [removed] | 2025-05-29T17:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kyga7j/how_are_you_handling_ai_agent_coordination_in/ | Easy-String6650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyga7j | false | null | t3_1kyga7j | /r/LocalLLaMA/comments/1kyga7j/how_are_you_handling_ai_agent_coordination_in/ | false | false | self | 1 | null |
R1 on live bench | 21 | [benchmark](https://preview.redd.it/kmmnq5dodr3f1.png?width=1390&format=png&auto=webp&s=8faaad69539bfb4dc5eb23f1e0126ba3709b5f0d)
|
Why is Mistral Small 3 faster than the Qwen3 30B A3B model? | 1 | [removed] | 2025-05-29T17:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kyhfhx/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyhfhx | false | null | t3_1kyhfhx | /r/LocalLLaMA/comments/1kyhfhx/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | false | false | self | 1 | null |
Free up VRAM by using iGPU for display rendering, and Graphics card just for LLM | 8 | Has anyone tried using your internal GPU for display rendering so you have all the VRAM available for your AI programs? Will it be as simple as disconnecting all cables from the graphics card and only connecting your monitor to your iGPU? I'm using Windows, but the question also applies if using other OSes. | 2025-05-29T18:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyhl5u/free_up_vram_by_using_igpu_for_display_rendering/ | some_user_2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyhl5u | false | null | t3_1kyhl5u | /r/LocalLLaMA/comments/1kyhl5u/free_up_vram_by_using_igpu_for_display_rendering/ | false | false | self | 8 | null |
Which local model in your practical experience is best for tool use? | 1 | [removed] | 2025-05-29T18:27:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyi881/which_local_model_in_your_practical_experience_is/ | numbtheless | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyi881 | false | null | t3_1kyi881 | /r/LocalLLaMA/comments/1kyi881/which_local_model_in_your_practical_experience_is/ | false | false | self | 1 | null |
has anyone tried BAML for structured outputs? | 1 | [removed] | 2025-05-29T18:29:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kyia2i/has_anyone_tried_baml_for_structured_outputs/ | sandy_005 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyia2i | false | null | t3_1kyia2i | /r/LocalLLaMA/comments/1kyia2i/has_anyone_tried_baml_for_structured_outputs/ | false | false | self | 1 | null |
Claude Sonnet 4 is truly decieving | 1 | [removed] | 2025-05-29T18:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyidnn/claude_sonnet_4_is_truly_decieving/ | Ortho-BenzoPhenone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyidnn | false | null | t3_1kyidnn | /r/LocalLLaMA/comments/1kyidnn/claude_sonnet_4_is_truly_decieving/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OyJmICVkJ46HCE5hYYD__ia7siW4AiqfKr6KYSU2clc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?width=108&crop=smart&auto=webp&s=b6ec9686c50c0dbd7647322b08ccb9bca4b2f4e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?width=216&crop=smart&auto=webp&s=f4be67ede9778998f4184ca22e798ce7592a7ba9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?width=320&crop=smart&auto=webp&s=ad8da23bca4733572466f953d26cf6d2d0d4732d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?width=640&crop=smart&auto=webp&s=6785c16a2b25345f3abd8f5a590084c1f29f4320', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?width=960&crop=smart&auto=webp&s=856bc0793636028389440447645e692a6174586e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?width=1080&crop=smart&auto=webp&s=ec69f3ab70bdcd7fe7cf43b68272cfb97455b040', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NHasHAAWbfMdXwF7ji9uvT43yX3G6MdjPJngfFXZZ1E.jpg?auto=webp&s=f6ddfc12746509d5294950ed4134f708e6972d1d', 'width': 1200}, 'variants': {}}]} |
|
## DL: CLI Downloader - Hugging Face, Llama.cpp, Auto-Updates & More! | 0 | Hi,
I'm excited to share **DL**, a command-line interface (CLI) tool I've been developing (with a *lot* of help from AI!) to make downloading files, especially large model files and repositories, much smoother and faster. If you're often grabbing stuff from Hugging Face, need the latest llama.cpp, or just want a robust concurrent downloader, DL might be for you!

**The Twist?** This entire project, from the core downloader to the UI and feature logic, was **100% generated using AI tools** like Google Gemini and Claude Sonnet.

https://preview.redd.it/dt7aqkdamr3f1.png?width=2568&format=png&auto=webp&s=6ddc278ac6c344e3b525acae5be1bccd3b663e30

### 🤔 Why DL?

Tired of single-threaded downloads, complex scripts for model repos, or missing a good overview of your downloads? DL aims to solve these with:

* **⚡ Blazing Fast Concurrent Downloads:** Download multiple files simultaneously. You can control concurrency (`-c`), with smart caps for file lists vs. Hugging Face repos.
* **🤖 Hugging Face Supercharged:**
  * Easily download entire repositories: `./dl -hf TheBloke/Mistral-7B-Instruct-v0.2-GGUF`
  * **Interactive GGUF Selector (`-select`):** This is a big one!
    * Intelligently detects multi-part GGUF series (e.g., `model-00001-of-00030.gguf`) and standalone `.gguf` files.
    * Pre-scans file sizes to give you an idea before you download.
    * Presents a clean list to pick exactly the GGUF model or series you need.
    * Preserves original subfolder structure from the HF repo.
* **🦙 Quick Llama.cpp Binaries (`-getllama`):** Interactively fetches and lets you choose the latest `ggerganov/llama.cpp` release binaries suitable for your platform.
* **💅 Sleek Terminal UI:**
  * Dynamic progress bars for each download (and an overall summary).
  * Shows filename, percentage, downloaded/total size, live speed, and ETA.
  * Handles unknown file sizes gracefully with a spinner.
  * Clears and redraws for a clean, modern TUI experience.
* **✨ Auto-Updates (`--update`):** Keep DL up-to-date with the latest features and fixes directly from GitHub (`vyrti/dl`). Current version: `v0.1.2`.
* **📚 Predefined Model Shortcuts (`-m`):** Quickly grab common GGUF models with aliases like `-m qwen3-4b` (includes Qwen3, Gemma3, and more).
* **📁 Organized Downloads:** Files are saved neatly into a `downloads/` directory, with subfolders for HF repos (e.g., `downloads/owner_repo_name`) or `llama.cpp` versions.
* **🔧 Flexible & User-Friendly:**
  * Download from a list of URLs in a text file (`-f urls.txt`).
  * Detailed debug logging (`-debug`) to `log.log`.
  * Informative error messages right in the progress display.
* **💻 Cross-Platform:** Built with Go, it runs natively on Windows, macOS, and Linux.
* **ℹ️ System Info (`-t`):** A handy built-in tool to quickly display your system's hardware specifications (CPU, RAM, GPU, VRAM).

---

### 🛠️ A Few Quick Examples:

* **Download a full Hugging Face repository:**
  ```bash
  ./dl -hf "Qwen/Qwen3-4B-GGUF"
  ```
* **Interactively select a GGUF model/series from Hugging Face:**
  ```bash
  ./dl -hf "unsloth/DeepSeek-R1-0528-GGUF" -select
  ```
* **Get the latest Llama.cpp binaries:**
  ```bash
  ./dl -getllama
  ```
* **Download a predefined model alias:**
  ```bash
  ./dl -m gemma3-27b
  ```
* **Download from a list of URLs with 5 concurrent downloads:**
  ```bash
  ./dl -f my_download_links.txt -c 5
  ```
* **Update DL to the latest version:**
  ```bash
  ./dl --update
  ```

---

### 🔗 Get DL & Get Involved!

You can find the source code, `build.sh` script, and more details on the GitHub repository:

**➡️ [https://github.com/vyrti/dl](https://github.com/vyrti/dl)**

I'd love to hear your feedback! If you find it useful, have suggestions, or encounter any issues, please let me know or open an issue on GitHub. And if you like it, a star on the repo would be much appreciated! ⭐

What do you all think? Any features you'd love to see in a CLI downloader?

Thanks for checking it out!

---

**Tags:** #golang #opensource #cli #commandline #developer #huggingface #ai #gguf #llamacpp #downloader #sidetool #programming
|
## DL: CLI Downloader - Hugging Face, Llama.cpp, Auto-Updates & More! | 0 | Hey everyone!
I'm excited to share **DL**, a command-line interface (CLI) tool I've been developing (with a *lot* of help from AI!) to make downloading files, especially large model files and repositories, much smoother and faster. If you're often grabbing stuff from Hugging Face, need the latest llama.cpp, or just want a robust concurrent downloader, DL might be for you!

**The Twist?** This entire project, from the core downloader to the UI and feature logic, was **100% generated using AI tools** like Google Gemini and Claude Sonnet. It's been a fascinating experiment in guiding AI to build a functional piece of software. (More on this below!)

https://preview.redd.it/wkyo2n8ymr3f1.png?width=2568&format=png&auto=webp&s=4f9361439639040daf4bda144dc3f0f6c04bf8a4

---

### 🤔 Why DL?

Tired of single-threaded downloads, complex scripts for model repos, or missing a good overview of your downloads? DL aims to solve these with:

* **⚡ Blazing Fast Concurrent Downloads:** Download multiple files simultaneously. You can control concurrency (`-c`), with smart caps for file lists vs. Hugging Face repos.
* **🤖 Hugging Face Supercharged:**
  * Easily download entire repositories: `./dl -hf TheBloke/Mistral-7B-Instruct-v0.2-GGUF`
  * **Interactive GGUF Selector (`-select`):** This is a big one!
    * Intelligently detects multi-part GGUF series (e.g., `model-00001-of-00030.gguf`) and standalone `.gguf` files.
    * Pre-scans file sizes to give you an idea before you download.
    * Presents a clean list to pick exactly the GGUF model or series you need.
    * Preserves original subfolder structure from the HF repo.
* **🦙 Quick Llama.cpp Binaries (`-getllama`):** Interactively fetches and lets you choose the latest `ggerganov/llama.cpp` release binaries suitable for your platform.
* **💅 Sleek Terminal UI:**
  * Dynamic progress bars for each download (and an overall summary).
  * Shows filename, percentage, downloaded/total size, live speed, and ETA.
  * Handles unknown file sizes gracefully with a spinner.
  * Clears and redraws for a clean, modern TUI experience.
* **✨ Auto-Updates (`--update`):** Keep DL up-to-date with the latest features and fixes directly from GitHub (`vyrti/dl`). Current version: `v0.1.2`.
* **📚 Predefined Model Shortcuts (`-m`):** Quickly grab common GGUF models with aliases like `-m qwen3-4b` (includes Qwen3, Gemma3, and more).
* **📁 Organized Downloads:** Files are saved neatly into a `downloads/` directory, with subfolders for HF repos (e.g., `downloads/owner_repo_name`) or `llama.cpp` versions.
* **🔧 Flexible & User-Friendly:**
  * Download from a list of URLs in a text file (`-f urls.txt`).
  * Detailed debug logging (`-debug`) to `log.log`.
  * Informative error messages right in the progress display.
* **💻 Cross-Platform:** Built with Go, it runs natively on Windows, macOS, and Linux.
* **ℹ️ System Info (`-t`):** A handy built-in tool to quickly display your system's hardware specifications (CPU, RAM, GPU, VRAM).

---

### 🛠️ A Few Quick Examples:

* **Download a full Hugging Face repository:**
  ```bash
  ./dl -hf "Qwen/Qwen3-4B-GGUF"
  ```
* **Interactively select a GGUF model/series from Hugging Face:**
  ```bash
  ./dl -hf "unsloth/DeepSeek-R1-0528-GGUF" -select
  ```
* **Get the latest Llama.cpp binaries:**
  ```bash
  ./dl -getllama
  ```
* **Download a predefined model alias:**
  ```bash
  ./dl -m gemma3-27b
  ```
* **Download from a list of URLs with 5 concurrent downloads:**
  ```bash
  ./dl -f my_download_links.txt -c 5
  ```
* **Update DL to the latest version:**
  ```bash
  ./dl --update
  ```

---

### 🔗 Get DL & Get Involved!

You can find the source code, `build.sh` script, and more details on the GitHub repository:

**➡️ [https://github.com/vyrti/dl](https://github.com/vyrti/dl)**

I'd love to hear your feedback! If you find it useful, have suggestions, or encounter any issues, please let me know or open an issue on GitHub. And if you like it, a star on the repo would be much appreciated! ⭐

What do you all think? Any features you'd love to see in a CLI downloader?

Thanks for checking it out!

---

**Tags:** #golang #opensource #cli #commandline #developer #huggingface #ai #gguf #llamacpp #downloader #sidetool #programming
|
Considering a dedicated compute card for MSTY. What is faster than a 6800XT and affordable? | 1 | I'm looking at the Radeon Instinct MI50, which has 16GB of HBM2, doubling the memory bandwidth of the 6800XT, but the 6800XT has 84% better compute.
What should I be considering? | 2025-05-29T18:42:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kyin1j/considering_a_dedicated_compute_card_for_msty/ | TurtleCrusher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyin1j | false | null | t3_1kyin1j | /r/LocalLLaMA/comments/1kyin1j/considering_a_dedicated_compute_card_for_msty/ | false | false | self | 1 | null |
Smallest+Fastest Model For Chatting With Webpages? | 5 | I want to use the [Page Assist Firefox extension](https://github.com/n4ze3m/page-assist) for talking with AI about the current webpage I'm on. Are there recommended small+fast models for this I can run on ollama?
Embedding model recommendations are great too. They suggested using [nomic-embed-text](https://ollama.com/library/nomic-embed-text).
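For anyone suggesting embedding models: a quick way to sanity-check one locally is to hit the Ollama API directly. Rough sketch below (assumes a default local Ollama install and that the model is already pulled with `ollama pull nomic-embed-text`):

```python
# Quick sanity check: ask a local Ollama server for an embedding.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "What is this page about?"},
    timeout=60,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding), embedding[:5])  # nomic-embed-text should give a 768-dim vector
```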
Paper page - GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning | 29 | This looks pretty promising for getting closer to a full finetuning. | 2025-05-29T19:15:48 | https://huggingface.co/papers/2505.20355 | AutomataManifold | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kyjh6f | false | null | t3_1kyjh6f | /r/LocalLLaMA/comments/1kyjh6f/paper_page_gralora_granular_lowrank_adaptation/ | false | false | 29 | {'enabled': False, 'images': [{'id': '3i6EYJM_JZbEgvv1WLLZWmZ-2tZKsR9bOFAJ8I3sm50', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?width=108&crop=smart&auto=webp&s=7558232f6452c15a462e887fb212ab8ae4ce18a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?width=216&crop=smart&auto=webp&s=67a36de7ec381021afaad286b226c0ec482f94a1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?width=320&crop=smart&auto=webp&s=1c40ad0c97dda8b5c8597d7f183556fda4e80035', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?width=640&crop=smart&auto=webp&s=184973529c8b48cb5ff5b70dc76ccaec9db3387c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?width=960&crop=smart&auto=webp&s=4884de4df0a75a44e72bc182f969274a861c5638', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?width=1080&crop=smart&auto=webp&s=a1ee6d2efefd88fdc1db0d868a3a453567f79a0b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bn2Vlujw06Mp0sbZ3JHB5bJ2z1rjiqN_uvV6LZmy5wg.jpg?auto=webp&s=a76394f1e9edbbd24ef4a11db4e55eda2a107850', 'width': 1200}, 'variants': {}}]} |
|
seeking (or building) an ai browser extension with inline form suggestions + multi-field support | 2 |
hey all — i'm looking for an existing tool (or folks interested in building one) that can intelligently assist with filling out web forms. not just basic autofill, but something smarter — context-aware, user-aware, and unobtrusive.
here’s what i’m envisioning:
* a browser extension that stays dormant until triggered (via right-click or keybind)
* when activated, it should:
* analyze the current form — field labels, structure, surrounding content
* offer **inline suggestions** (ideally like copilot/intellisense) or autofill prompts i can tab through or accept
* optionally suggest values for *multiple fields at once* when context allows
* learn from my past entries, securely and privately (preferably local-first)
essential features:
* gpt-4o or local llm integration for generating smart, field-appropriate responses
* inline ui for previews/suggestions (not just “fill all”)
* context menu or keyboard-triggered activation
* encrypted local memory of my entries and preferences
* multi-profile support (personal / work / educator etc.)
* open source or built for extensibility
i’ve tried tools like harpa ai, compose ai, and magical — they get partway there, but none offer true **inline**, multi-field aware suggestions with user-defined control and memory.
if this exists, i want to use it.
if it doesn’t, i’m open to building it with others who care about privacy, presence, and usefulness over noise.
thanks. | 2025-05-29T19:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kyji9c/seeking_or_building_an_ai_browser_extension_with/ | madouble7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyji9c | false | null | t3_1kyji9c | /r/LocalLLaMA/comments/1kyji9c/seeking_or_building_an_ai_browser_extension_with/ | false | false | self | 2 | null |
DeepSeek-R1-0528-Qwen3-8B-OpenVINO quants are up | 13 | https://huggingface.co/Echo9Zulu/DeepSeek-R1-0528-Qwen3-8B-OpenVINO
There are a handful of quants in this repo. To keep things easier to maintain, I've taken cues from how unsloth organizes their repos.
Will add some inference code examples tonight. There were some issues with AutoTokenizers in my quick tests and I want to understand more deeply why torch.Tensor worked before I refactor my project.
Some early observations:
- /no_think no longer works. Same over openrouter.
- The R1-0528 model card mentions thinking-token usage increases by roughly 2x. Depending on how the distill performs in practice, this may limit utility for extended chats and complex tasks; i.e., the risk of thinking tokens filling the KV cache before the assistant response begins grows with task complexity on current consumer Intel GPUs.
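Until I post proper examples, here's roughly what inference can look like via optimum-intel. This is a rough sketch I haven't validated against this exact repo: the quant subfolder name is a placeholder, and as noted above AutoTokenizer was flaky for me, so treat it as a starting point only.

```python
# Rough sketch with optimum-intel / OpenVINO.
# The subfolder name below is a placeholder; check the repo for the actual quant folders.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

repo = "Echo9Zulu/DeepSeek-R1-0528-Qwen3-8B-OpenVINO"
subfolder = "int4"  # placeholder

model = OVModelForCausalLM.from_pretrained(repo, subfolder=subfolder)
model.to("GPU")  # or "CPU"
tokenizer = AutoTokenizer.from_pretrained(repo, subfolder=subfolder)

messages = [{"role": "user", "content": "Explain what a KV cache is in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```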
| 2025-05-29T19:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kyjqrg/deepseekr10528qwen38bopenvino_quants_are_up/ | Echo9Zulu- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyjqrg | false | null | t3_1kyjqrg | /r/LocalLLaMA/comments/1kyjqrg/deepseekr10528qwen38bopenvino_quants_are_up/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'JYKAtiDF6fVq7LGoir-AB6chiRAaDougdypYh6p4_ug', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?width=108&crop=smart&auto=webp&s=0999c93c7c021fc6e55d3eaf3a6ce7a2524ec874', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?width=216&crop=smart&auto=webp&s=168c874d9cc3ee43ce9f1efa3e0bf66e8ac6e3d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?width=320&crop=smart&auto=webp&s=04a2b98bff3867693995cdbfd4bd058fe41d9516', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?width=640&crop=smart&auto=webp&s=f1f999029dd5ad13570a8dd2f9031547854fb0b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?width=960&crop=smart&auto=webp&s=8fac3dcf28d9ac688b597aa02f2826e0f0162c80', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?width=1080&crop=smart&auto=webp&s=e29aa70eed1362e0920ba5f1f0cc9f00557d6663', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oSp7hRn9U7_TwaLZEBiEIfEHfVmUDVJzaQCrNDawmd8.jpg?auto=webp&s=0f0c85542c7c93691c1a52dfffa7dbada12b2e42', 'width': 1200}, 'variants': {}}]} |
Why is Mistral Small 3 faster than the Qwen3 30B A3B model? | 1 | [removed] | 2025-05-29T19:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kyk385/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyk385 | false | null | t3_1kyk385 | /r/LocalLLaMA/comments/1kyk385/why_is_mistral_small_3_faster_than_the_qwen3_30b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]} |
[[ACCEPT]] -> Will you train my AGI/ASI/AMI on your beast of a computer? | 1 | [removed] | 2025-05-29T19:40:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kyk3la/accept_will_you_train_my_agiasiami_on_your_beast/ | MagicaItux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyk3la | false | null | t3_1kyk3la | /r/LocalLLaMA/comments/1kyk3la/accept_will_you_train_my_agiasiami_on_your_beast/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fE4ENnR74sJD8MSpm2UY59UL-v5B8OIbom80atHNgPs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?width=108&crop=smart&auto=webp&s=bd626ef8e3a68f30e5d070fb0f55cbedfac8fe76', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?width=216&crop=smart&auto=webp&s=278976c0511ccdb6055ba1415aecd96286bdf0ea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?width=320&crop=smart&auto=webp&s=438da41aa526487d71fea2ce179c71eff3b66e8d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?width=640&crop=smart&auto=webp&s=1a87a672a019da438bc45ecc089943b6ec8f1d40', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?width=960&crop=smart&auto=webp&s=d7c4a47f9d26eec4c431b019b348e2f775159925', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?width=1080&crop=smart&auto=webp&s=019c7996ad0db0695dba6cb1daffae89b385cf67', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JuIdgNjAZ0jQJX8S5QPAewFkUrDpnZ2uxJbbOj4k5gU.jpg?auto=webp&s=2df91d4bfb473d13f5274945d91b5de6ed2fa352', 'width': 1200}, 'variants': {}}]} |
Always nice to get something open from the closed AI labs. This time from Anthropic, not a model but pretty cool research/exploration tool. | 159 | 2025-05-29T19:47:35 | https://www.anthropic.com/research/open-source-circuit-tracing | indicava | anthropic.com | 1970-01-01T00:00:00 | 0 | {} | 1kyk9nf | false | null | t3_1kyk9nf | /r/LocalLLaMA/comments/1kyk9nf/always_nice_to_get_something_open_from_the_closed/ | false | false | 159 | {'enabled': False, 'images': [{'id': 'tne0EzCj3fYeq1lZqYf76vEUZ6Xi7xtX6Z_j6u_q-B4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/VhgQC2k4JrTeuPSzZf3YPcyvHE4Tk7RPdF2DBlFwjUY.jpg?width=108&crop=smart&auto=webp&s=1eb6c6d6bee2b0280b9a7e0f6f219f41d3fe0706', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/VhgQC2k4JrTeuPSzZf3YPcyvHE4Tk7RPdF2DBlFwjUY.jpg?width=216&crop=smart&auto=webp&s=b790a4045eef07ea92b1e370b368e11e8c93fa9f', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/VhgQC2k4JrTeuPSzZf3YPcyvHE4Tk7RPdF2DBlFwjUY.jpg?width=320&crop=smart&auto=webp&s=a0840bc6e71ebf7bba17b4c432da3b512f45594a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/VhgQC2k4JrTeuPSzZf3YPcyvHE4Tk7RPdF2DBlFwjUY.jpg?width=640&crop=smart&auto=webp&s=b0a93a9d20ac4bf6173a53bdc3f6120b8384d723', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/VhgQC2k4JrTeuPSzZf3YPcyvHE4Tk7RPdF2DBlFwjUY.jpg?width=960&crop=smart&auto=webp&s=24cb9d3425740fefe89601d90f5218670eef4f42', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/VhgQC2k4JrTeuPSzZf3YPcyvHE4Tk7RPdF2DBlFwjUY.jpg?width=1080&crop=smart&auto=webp&s=3dc847135de94c7cd8945ed208966f55e4f1af2c', 'width': 1080}], 'source': {'height': 1261, 'url': 'https://external-preview.redd.it/VhgQC2k4JrTeuPSzZf3YPcyvHE4Tk7RPdF2DBlFwjUY.jpg?auto=webp&s=76aa149d66f15d06b88836f2c04362fcb660a2e8', 'width': 2401}, 'variants': {}}]} |
||
PSA: Don't waste electricity when running vllm. Use this patch | 303 | I was annoyed by vllm using 100% CPU on as many cores as there are connected GPUs, even when there's no activity. I have 8 GPUs connected to a single machine, so idle power usage was almost double compared to the optimal arrangement.
I went forward and fixed this: https://github.com/vllm-project/vllm/pull/16226.
The PR to vllm is taking ages to be merged, so if you want to reduce your power cost today, you can use the instructions outlined here https://github.com/vllm-project/vllm/pull/16226#issuecomment-2839769179 to apply the fix. This only works when deploying vllm in a container.
There's similar patch to sglang as well: https://github.com/sgl-project/sglang/pull/6026 | 2025-05-29T19:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kykez2/psa_dont_waste_electricity_when_running_vllm_use/ | pmur12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kykez2 | false | null | t3_1kykez2 | /r/LocalLLaMA/comments/1kykez2/psa_dont_waste_electricity_when_running_vllm_use/ | false | false | self | 303 | {'enabled': False, 'images': [{'id': '0pKXup16UoyNVN1je09sqpKw5PcVHUVQgKwcevkCKQs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?width=108&crop=smart&auto=webp&s=21d00b95cd1b1e998caf7fcdd6ae2e9ecf8edaf5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?width=216&crop=smart&auto=webp&s=b86a300f2c2981c9f5ad4ea8809be0e2bb8e6fe8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?width=320&crop=smart&auto=webp&s=0f7bdadfe1ac8dae9ff226b23e537bc1a8e8aade', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?width=640&crop=smart&auto=webp&s=74519346685ef6a200cd1292fa763d95cc538912', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?width=960&crop=smart&auto=webp&s=add9e77c39cafd2a772900ccde2f1de5484b559d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?width=1080&crop=smart&auto=webp&s=74a84a36a67e000f18d0becddcb797017ea431ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NN14Ap84Yztw0OV4s-mY-nQZStg8I5hNQDYCXPfphq0.jpg?auto=webp&s=f7b195569142aea1e8a95f997649089812f92940', 'width': 1200}, 'variants': {}}]} |
What’s the most useful agent you’ve built or used? | 1 | [removed] | 2025-05-29T19:58:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kykize/whats_the_most_useful_agent_youve_built_or_used/ | InitialChard8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kykize | false | null | t3_1kykize | /r/LocalLLaMA/comments/1kykize/whats_the_most_useful_agent_youve_built_or_used/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'eeW9fH3YcxMuHj2amhvboHe6AxADIiP0ot6ECXrRMbs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?width=108&crop=smart&auto=webp&s=3c43d42d0b10a45d21cf05f31dcf8aa5592ea940', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?width=216&crop=smart&auto=webp&s=62064636de41fcca027787444241e05d365c5ced', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?width=320&crop=smart&auto=webp&s=69e378ca78199ede24fc50164eeba75a4abf71f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?width=640&crop=smart&auto=webp&s=1d238891ba7da447c20159d8901d777124ab6017', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?width=960&crop=smart&auto=webp&s=da514ab7d445846f764de6b34a66e4dd4d12d50f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?width=1080&crop=smart&auto=webp&s=0f7f4eb049b3b753d3aabe533a4530fc0ad3a714', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ekvKwmt1PM8YArW1cwNvrhLU-M8zXpAL7gRavbuuZQY.jpg?auto=webp&s=946312f8bb2660d8602312243358d37cb8b9a724', 'width': 1200}, 'variants': {}}]} |
What’s the most useful agent you’ve built or used? | 1 | [removed] | 2025-05-29T19:59:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kykkad/whats_the_most_useful_agent_youve_built_or_used/ | InitialChard8359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kykkad | false | null | t3_1kykkad | /r/LocalLLaMA/comments/1kykkad/whats_the_most_useful_agent_youve_built_or_used/ | false | false | self | 1 | null |
Voice customization on Orpheus TTS Baseten deployment | 1 | [removed] | 2025-05-29T20:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kyktn6/voice_customization_on_orpheus_tts_baseten/ | ComedianImpressive37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyktn6 | false | null | t3_1kyktn6 | /r/LocalLLaMA/comments/1kyktn6/voice_customization_on_orpheus_tts_baseten/ | false | false | self | 1 | null |
I'm using LM Studio and have just started trying to use a Deepseek-R1 Distilled Llama model and unlike any other model I've ever used, the LLM keeps responding in a strange way. I am incredibly new to this whole thing, so if this is a stupid question I apologize. | 0 | Every time I throw something at the model (8B or 70B both) it responds with something like "Okay, so I'm trying to figure out..." or "The user wants to know... " and none of my other models have responded like this. What's causing this? I'm incredibly confused and honestly don't even know where to begin searching for this. | 2025-05-29T20:10:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kyku6k/im_using_lm_studio_and_have_just_started_trying/ | BokehJunkie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyku6k | false | null | t3_1kyku6k | /r/LocalLLaMA/comments/1kyku6k/im_using_lm_studio_and_have_just_started_trying/ | false | false | self | 0 | null |
The real treasure of LocalLLaMA? The friends we make along the way. | 1 | [removed] | 2025-05-29T20:28:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kylac2/the_real_treasure_of_localllama_the_friends_we/ | bigattichouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kylac2 | false | null | t3_1kylac2 | /r/LocalLLaMA/comments/1kylac2/the_real_treasure_of_localllama_the_friends_we/ | false | false | self | 1 | null |
Google Edge Gallery | 7 | I've just downloaded and installed Google Edge Gallery. I'm using the Gemma 3n E2B model (3.1 GB), and it's pretty interesting to finally have an official Google app for running LLMs locally.
I was wondering if anyone could help me in suggesting some use cases. I have no coding background.
| 2025-05-29T20:33:19 | https://github.com/google-ai-edge/gallery | Trick-Point2641 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kyleou | false | null | t3_1kyleou | /r/LocalLLaMA/comments/1kyleou/google_edge_gallery/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'UOkoAwdgytb0vzPhxhPGDxVAmEdB0InDKlmPQY3ayAk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?width=108&crop=smart&auto=webp&s=e8c7381c8ac69fb7f3ffed7a99a2a3bccd4df2b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?width=216&crop=smart&auto=webp&s=37d7da73c651ee111063b7f935d11053d7dba60c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?width=320&crop=smart&auto=webp&s=58de612ec47cd1f00264a71ede75cb87ac9a8efa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?width=640&crop=smart&auto=webp&s=e708af6bb7eac6da9fbe666aa6ec6ea57ff2ef82', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?width=960&crop=smart&auto=webp&s=c30d4634bc455499ee5cf7b7493bca75152ed2c0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?width=1080&crop=smart&auto=webp&s=814f6206f59029ff9ad995f24dd58925046adbaa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PvPHHttLquwymvsI7n05-J6903Qe0wxum1jUVhrWVUc.jpg?auto=webp&s=90fd8a495b5e01aa232ba4fe2ea6429b917869ab', 'width': 1200}, 'variants': {}}]} |
|
Do you struggle to find the write tools to connect to your AI agent? | 1 | [removed] | 2025-05-29T20:48:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kylrwf/do_you_struggle_to_find_the_write_tools_to/ | Apprehensive-Row5364 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kylrwf | false | null | t3_1kylrwf | /r/LocalLLaMA/comments/1kylrwf/do_you_struggle_to_find_the_write_tools_to/ | false | false | self | 1 | null |
deepseek-r1: what are the differences | 1 | The subject today is definitely deepseek-r1.
It would be appreciated if someone could explain the differences between these on Ollama's site:
* deepseek-r1:8b
* deepseek-r1:8b-0528-qwen3-q4\_K\_M
* deepseek-r1:8b-llama-distill-q4\_K\_M
Thanks !
| 2025-05-29T21:03:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kym5ck/deepseekr1_what_are_the_difference/ | Empty_Object_9299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kym5ck | false | null | t3_1kym5ck | /r/LocalLLaMA/comments/1kym5ck/deepseekr1_what_are_the_difference/ | false | false | self | 1 | null |
DeepSeek-R1-0528-Qwen3-8B on iPhone 16 Pro | 491 | I added the updated DeepSeek-R1-0528-Qwen3-8B with 4bit quant in my app to test it on iPhone. It's running with MLX.
It runs, which is impressive, but it's too slow to be usable: the model thinks for too long and the phone gets really hot. I wonder if 8B models will be usable when the iPhone 17 drops.
That said, I will add the model on iPad with M series chip. | 2025-05-29T21:10:08 | https://v.redd.it/mb6zoiqtds3f1 | adrgrondin | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kymbcn | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mb6zoiqtds3f1/DASHPlaylist.mpd?a=1751145025%2CMDc3NTk4OTlhM2JlMDQ3MmM0OTEyZjkxNWM2MmYxYmRjNzkyMTNhYmVlZGNlMGFhMGMxZjMxZjcxZTkwN2E0Mg%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/mb6zoiqtds3f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1920, 'hls_url': 'https://v.redd.it/mb6zoiqtds3f1/HLSPlaylist.m3u8?a=1751145025%2COTA2YmNjYjEwYTFkMTI4YmVkM2UxYWJlNjdkNDkzYzY5OTY2NmIwOGQwNTE0NjJkMTBjNTk5MDlhNzljZTI1OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mb6zoiqtds3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1kymbcn | /r/LocalLLaMA/comments/1kymbcn/deepseekr10528qwen38b_on_iphone_16_pro/ | false | false | 491 | {'enabled': False, 'images': [{'id': 'NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?width=108&crop=smart&format=pjpg&auto=webp&s=395e4581897f47ce64304e1284797cb2c34bc8ab', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?width=216&crop=smart&format=pjpg&auto=webp&s=6652805856575e37fc76eb17026f92872b49d645', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?width=320&crop=smart&format=pjpg&auto=webp&s=5ad74890e1e39a303b2dbd51ab8fefebbdfb0596', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?width=640&crop=smart&format=pjpg&auto=webp&s=360be18ff4b7415ab31bab10618bc5f14957c1f7', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?width=960&crop=smart&format=pjpg&auto=webp&s=c641643cb8fddafae92787361110c2434fde1e76', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=324a526b22037f43644ec114fb31805f7127a580', 'width': 1080}], 'source': {'height': 2560, 'url': 'https://external-preview.redd.it/NXIzbTE5bXRkczNmMTPgNQxrmyDrsqQqm5XEPHINTq7pqExK0opX4bhpHRYD.png?format=pjpg&auto=webp&s=96a6f5def3c434046e34426b4f013d7c324bed0d', 'width': 1440}, 'variants': {}}]} |
|
Helping someone build a local continuity LLM for writing and memory—does this setup make sense? | 1 | I’m helping someone close to me set up a local LLM system for creative writing, philosophical thinking, and memory continuity. They’re a writer dealing with mild cognitive challenges and want a private companion to help preserve tone, voice, and longform reasoning over time, especially because these changes are likely to get worse.
They’re not interested in chatbot novelty or coding help. This would be a quiet, consistent tool to support journaling, fiction, and philosophical inquiry—something like a reflective assistant that carries tone and memory, not just generates responses.
In some ways, they see this as a way to preserve themselves.
⸻
Setup Plan
• Hardware: MINISFORUM UM790 Pro
→ Ryzen 9 7940HS / 64GB RAM / 1TB SSD
• OS: Linux Mint (simple, lightweight, good UI)
• Runner: LM Studio or Oobabooga
• Model: Starting with Nous Hermes 2 (13B GGUF), considering LLaMA 3 8B or Mixtral 12x7B later
• Use case:
→ Longform journaling, philosophical dialogue, recursive writing support
→ No APIs, no multi-user setup—just one person, one machine
• Memory layer: Manually managed for now (static prompt + context docs), may add simple RAG later for document recall
⸻
What We’re Unsure About
1. Is the hardware sufficient?
Can the UM790 Pro handle 13B and Mixtral models smoothly on CPU alone?
2. Are the runners stable?
Would LM Studio or Oobabooga be reliable for longform, recursive writing without crashes or weird behaviors?
3. Has anyone done something similar?
Not just a productivity tool—but a kind of memory-preserving thought companion. Curious if others have tried this kind of use case and how it held up over time.
⸻
Any feedback or thoughts would be much appreciated—especially from people who’ve built focused, single-user LLM setups for creative or introspective work.
Thanks. | 2025-05-29T21:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kyml5o/helping_someone_build_a_local_continuity_llm_for/ | larawithoutau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyml5o | false | null | t3_1kyml5o | /r/LocalLLaMA/comments/1kyml5o/helping_someone_build_a_local_continuity_llm_for/ | false | false | self | 1 | null |
Deep Seek R1 0528 FP on Mac Studio M3U 512GB | 34 | I'm using DeepSeek R1 for a coding project I've been trying to do with O-Mini for a couple of weeks, and DS528 nailed it. It's more up to date.
It's using about 360 GB of RAM, and I'm only getting 10 tokens/s max, but it's using more experts. I also have the full 138K context. It's taking longer and running the Studio hotter than I've ever felt it, but at least it's chugging out accurate output.
Got a 8500 token response which is the longest I’ve had yet. | 2025-05-29T21:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kymlon/deep_seek_r1_0528_fp_on_mac_studio_m3u_512gb/ | redragtop99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymlon | false | null | t3_1kymlon | /r/LocalLLaMA/comments/1kymlon/deep_seek_r1_0528_fp_on_mac_studio_m3u_512gb/ | false | false | self | 34 | null |
Why didn't they call the R1 update R2? | 1 | [removed] | 2025-05-29T21:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kymomb/why_didnt_they_call_the_r1_update_r2/ | Extra-Whereas-9408 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymomb | false | null | t3_1kymomb | /r/LocalLLaMA/comments/1kymomb/why_didnt_they_call_the_r1_update_r2/ | false | false | self | 1 | null |
Where are r1 5-28 14b and 32B distilled ? | 4 | I don't see the models on HuggingFace, maybe they will be out later? | 2025-05-29T21:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kymx69/where_are_r1_528_14b_and_32b_distilled/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymx69 | false | null | t3_1kymx69 | /r/LocalLLaMA/comments/1kymx69/where_are_r1_528_14b_and_32b_distilled/ | false | false | self | 4 | null |
Looking for RAG + chat system | 1 | [removed] | 2025-05-29T21:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kymxee/looking_for_rag_chat_system/ | ScientistSmart5629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymxee | false | null | t3_1kymxee | /r/LocalLLaMA/comments/1kymxee/looking_for_rag_chat_system/ | false | false | self | 1 | null |
Qwen finetune from NVIDIA...? | 30 | 2025-05-29T21:35:46 | https://huggingface.co/nvidia/Qwen-2.5-32B-HS3-RM_20250501 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kymxtq | false | null | t3_1kymxtq | /r/LocalLLaMA/comments/1kymxtq/qwen_finetune_from_nvidia/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'pDpdszbQQ6B6pOUgXP9WIqfiP5x2PqtJ-AutagOicKE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?width=108&crop=smart&auto=webp&s=f490c81a053b53e1b5d7c185a4067ad6bca80873', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?width=216&crop=smart&auto=webp&s=38af05d7f394b99f47bf140ca75b861dcfaeb081', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?width=320&crop=smart&auto=webp&s=5ad427ce036b07ef0d9864301e4ee925e49254a7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?width=640&crop=smart&auto=webp&s=4948da5a2139d58521457051a127ca863465ac4d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?width=960&crop=smart&auto=webp&s=ecaa7dd8596c46dc93b2fb4493e9a7c724fc3344', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?width=1080&crop=smart&auto=webp&s=059a80a21cd8885c91914ebd054c2b94ad8da20f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7jWrZJp8baLd-q8ftwpfQNg5hnORpIkQUQ5gwiCsnDY.jpg?auto=webp&s=25d1ff55a931034f526f352a823ef99b550f101d', 'width': 1200}, 'variants': {}}]} |
||
Tell me about your rig? | 7 | Hey folks! 👋
I’m running a 16GB Raspberry Pi 5 setup with a HaloS HAT and a 1TB SSD. I know it’s a pup compared to the big rigs out there, but I’m all about building something affordable and accessible. 💡
I’ve been able to load several models — even tested up to 9B parameters (though yeah, it gets *sluggish* 😅). That said, I’m loving how snappy **TinyLlama 1B quantized** feels — fast enough to feel fluid in use.
I’m really curious to hear from others:
**What’s your main setup → model → performance/output?**
Do you think *tokens per second (TPS)* really matters for it to *feel* responsive? Or is there a point where it’s “good enough”?
🎯 My project: RoverByte
I’m building a fleet of robotic (and virtual) dogs to help keep your life on track. Think task buddies or focus companions. The central AI, RoverSeer, lives at the “home base” and communicates with the fleet over what I call RoverNet (LoRa + WiFi combo). 🐾💻📡
I’ve read that the HaloS HAT is currently image-focused, but potentially extendable for LLM acceleration. Anyone got thoughts or experience with this? | 2025-05-29T21:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kymyt4/tell_me_about_you_rig/ | codemusicred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kymyt4 | false | null | t3_1kymyt4 | /r/LocalLLaMA/comments/1kymyt4/tell_me_about_you_rig/ | false | false | self | 7 | null |
Exploring Practical Uses for Small Language Models (e.g., Microsoft Phi) | 3 | Hey Reddit!
I've recently set up a small language model, specifically Microsoft's **Phi-3-mini**, on my modest home server. It's fascinating to see what these compact models can do, and I'm keen to explore more practical applications beyond basic experimentation.
My initial thoughts for its use include:
* **Categorizing my Obsidian notes:** This would be a huge time-saver for organizing my knowledge base.
* **Generating documentation for my home server setup:** Automating this tedious but crucial task would be incredibly helpful.
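For the note-categorization idea, here is a minimal sketch of how it could work — assuming Phi-3-mini is exposed through a local OpenAI-compatible endpoint (e.g., Ollama or LM Studio); the endpoint URL, model name, and tag list below are placeholders, not a tested setup:

```python
# Minimal sketch: ask a locally served Phi-3-mini to pick one tag per note.
# The base_url, model name, and tags are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

TAGS = ["project", "journal", "reference", "idea"]  # example categories

def categorize(note_text: str) -> str:
    response = client.chat.completions.create(
        model="phi3:mini",  # assumed model name on the local server
        messages=[
            {"role": "system",
             "content": f"Reply with exactly one tag from: {', '.join(TAGS)}."},
            {"role": "user", "content": note_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(categorize("Notes from today's tinkering with the home server backups."))
```

A constrained single-label task like this is a reasonable fit for a small model, since the system prompt limits the output space.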
However, I'm sure there are many other clever and efficient ways to leverage these smaller models, especially given their lower resource requirements compared to larger LLMs.
So, I'm curious: **What are** ***you*** **using small language models like Phi-3 for? Or, what creative use cases have you thought of?**
Also, a more specific question: **How well do these smaller models perform in an autonomous agent context?** I'm wondering if they can be reliable enough for task execution and decision-making when operating somewhat independently.
Looking forward to hearing your ideas and experiences! | 2025-05-29T21:48:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kyn8bn/exploring_practical_uses_for_small_language/ | amunocis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyn8bn | false | null | t3_1kyn8bn | /r/LocalLLaMA/comments/1kyn8bn/exploring_practical_uses_for_small_language/ | false | false | self | 3 | null |
Llama.cpp performance on Z13 (128GB unified) | 6 | Tested models from 8B to 70B on the AMD Ryzen AI Max+ 395 chip in an Asus Flow Z13 (128GB unified).
TL;DR:
- Sweet spot for minimum real-time use is about ~32B active params, which gives ~10 TPS.
- ~1/5th the performance of a 5090 for token generation speed (TG), 1/21 perf in prompt processing (PP)
- Silent mode on battery gives significantly lower performance than plugged in. Performance mode is about the same either way. Turbo is largely not worth the noise.
- Q4 performs significantly faster than Q6 (~20%)
- Q8 has faster PP, but slower TG likely due to data type alignment
z13 LLM Thoughts:
- You can fit a 32B model in the 32GB Z13 at Q4, but with a slower model load (at 24GB VRAM / 8GB system RAM it will page out during load).
- For LLM use, the 64GB model is likely the sweet spot, with a 32GB VRAM split.
- 128GB is likely only really useful for large MoE models (something like Llama 4 Scout, if Llama 4 were any good), or if you're OK with ~5 TPS on 70B+ models or need full quants.
- The 2TB max SSD is likely to be an annoyance for holding a large set of models.
# Benchmark Results
llama.cpp build: d13d0f61 (5468)
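The tables below are llama-bench output (pp512 = processing a 512-token prompt, tg128 = generating 128 tokens), one table per power mode and model/quant. As a rough sketch of how runs like these could be reproduced — this assumes a Vulkan build of llama.cpp with llama-bench on the PATH, and the model paths are placeholders, not the author's exact invocation:

```python
# Rough reproduction sketch (not the author's script): run llama-bench with
# the same pp512/tg128 tests shown in the tables. Paths are placeholders.
import subprocess

MODELS = [
    "models/Qwen3-8B-Q4_K_M.gguf",   # placeholder path
    "models/Qwen3-32B-Q4_K_M.gguf",  # placeholder path
]

for model in MODELS:
    # -ngl 99 offloads all layers; -p 512 / -n 128 produce the pp512 and
    # tg128 rows reported below.
    result = subprocess.run(
        ["llama-bench", "-m", model, "-ngl", "99", "-p", "512", "-n", "128"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
```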
### Qwen3 8B - Q4_K_M
#### Turbo Mode (plugged in) ~ 130w at socket
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | pp512 | 628.81 ± 9.17 |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | tg128 | 37.57 ± 0.22 |
#### Performance (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | pp512 | 564.24 ± 21.33 |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | tg128 | 36.59 ± 0.09 |
#### Performance (battery)
~ 1.6Ghz Core (2.1Ghz during tg), 1000Mhz memory
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | pp512 | 558.01 ± 21.57 |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | tg128 | 35.45 ± 0.16 |
#### Silent (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | pp512 | 416.92 ± 37.11 |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | tg128 | 33.08 ± 0.05 |
#### Silent (battery)
~ 1Ghz Core, 800Mhz memory
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | pp512 | 314.73 ± 22.39 |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC,Vulkan | 99 | tg128 | 24.01 ± 0.10 |
#### Performance - CPU Only
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC | 99 | pp512 | 97.80 ± 0.41 |
| qwen3 8B Q4_K - Medium | 4.68 GiB | 8.19 B | RPC | 99 | tg128 | 13.20 ± 0.11 |
### Llama 8B - Q6_K
#### Turbo (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | pp512 | 605.92 ± 9.40 |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | tg128 | 29.81 ± 0.06 |
#### Performance (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | pp512 | 567.50 ± 18.36 |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | tg128 | 29.42 ± 0.02 |
#### Performance (battery)
~ 1800mhz core during ppg, 2100mhz during TG, 1000mhz memory
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | pp512 | 562.48 ± 15.21 |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | tg128 | 28.55 ± 0.10 |
#### Silent (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | pp512 | 412.44 ± 31.95 |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | tg128 | 27.69 ± 0.03 |
#### Silent (battery)
~ 800mhz core, 800mhz memory
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | pp512 | 284.57 ± 23.03 |
| llama 8B Q6_K | 6.14 GiB | 8.03 B | RPC,Vulkan | 99 | tg128 | 20.82 ± 0.07 |
### Llama 8B - Q8_0
#### Performance
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | RPC,Vulkan | 99 | pp512 | 808.72 ± 44.78 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | RPC,Vulkan | 99 | tg128 | 24.33 ± 0.05 |
#### Silent (battery)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | RPC,Vulkan | 99 | pp512 | 408.94 ± 25.12 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | RPC,Vulkan | 99 | tg128 | 17.22 ± 0.54 |
### Qwen3 32B
#### Turbo (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | pp512 | 129.04 ± 1.29 |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | tg128 | 10.34 ± 0.02 |
#### Performance
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | pp512 | 112.20 ± 1.26 |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | tg128 | 10.08 ± 0.02 |
#### Silent (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | pp512 | 86.44 ± 0.61 |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | tg128 | 8.69 ± 0.01 |
#### Silent (battery)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | pp512 | 57.74 ± 0.49 |
| qwen3 32B Q4_K - Medium | 18.40 GiB | 32.76 B | RPC,Vulkan | 99 | tg128 | 5.78 ± 0.02 |
### Llama 3 70B
#### Turbo (plugged in)
~2100Mhz core, 1000Mhz memory
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | pp512 | 58.44 ± 0.22 |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | tg128 | 4.82 ± 0.00 |
#### Performance
~1680Mhz Core, ~1000Mhz Memory
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | pp512 | 50.82 ± 0.13 |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | tg128 | 4.74 ± 0.01 |
#### Silent (plugged in)
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | pp512 | 38.56 ± 0.26 |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | tg128 | 4.09 ± 0.00 |
#### Silent
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | pp512 | 25.82 ± 0.09 |
| llama 70B Q4_K - Medium | 39.59 GiB | 70.55 B | RPC,Vulkan | 99 | tg128 | 2.44 ± 0.69 | | 2025-05-29T21:48:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kyn8bv/llamacpp_performance_on_z13_128gb_unified/ | discr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kyn8bv | false | null | t3_1kyn8bv | /r/LocalLLaMA/comments/1kyn8bv/llamacpp_performance_on_z13_128gb_unified/ | false | false | self | 6 | null |
LM Studio Slower with 2 GPUs | 1 | Hello all,
I recently got a second RTX 4090 in order to run larger models, and I can now fit and run them.
However, I noticed that when I run the smaller models that already fit on a single GPU, I get fewer tokens/second.
I've played with the LM Studio hardware settings by switching the option between evenly split and priority order when allocating layers to the GPUs. I noticed that priority order performs a lot faster than evenly split for smaller models.
When I disable the second GPU in the LM Studio hardware options, I get the same performance as when I only had 1 GPU installed (as expected).
Is it expected that you get fewer tokens/second when splitting across multiple GPUs?
| 2025-05-29T22:05:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kynnf1/lm_studio_slower_with_2_gpus/ | MrVicePres | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kynnf1 | false | null | t3_1kynnf1 | /r/LocalLLaMA/comments/1kynnf1/lm_studio_slower_with_2_gpus/ | false | false | self | 1 | null |
DeepSeek is THE REAL OPEN AI | 1,061 | Every release is great. I am only dreaming to run the 671B beast locally. | 2025-05-29T22:19:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kynytt/deepseek_is_the_real_open_ai/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kynytt | false | null | t3_1kynytt | /r/LocalLLaMA/comments/1kynytt/deepseek_is_the_real_open_ai/ | false | false | self | 1,061 | null |