title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Falcon-H1 Family of Hybrid-Head Language Models, including 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B
| 223 | 2025-05-21T09:50:09 |
https://huggingface.co/collections/tiiuae/falcon-h1-6819f2795bc406da60fab8df
|
jacek2023
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtvpj
| false | null |
t3_1krtvpj
|
/r/LocalLLaMA/comments/1krtvpj/falconh1_family_of_hybridhead_language_models/
| false | false | 223 |
{'enabled': False, 'images': [{'id': '6yF1l7Wcly08Rg8_MLpSaoIjS1uyuQIPMUXJIhl5gPs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?width=108&crop=smart&auto=webp&s=1347f695ea42069fd4950cb6d695199ce666797f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?width=216&crop=smart&auto=webp&s=0f9dcfab65cf0ca9747bee53fb2582f7aa57fd4e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?width=320&crop=smart&auto=webp&s=9b0fc6da8ccf04b08235a838d57953f81eda230e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?width=640&crop=smart&auto=webp&s=669a7c89198f19469e2598642c94e9e4b54a56f3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?width=960&crop=smart&auto=webp&s=0245c034bc55ebafdf79dda312c6d103d23eb3d3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?width=1080&crop=smart&auto=webp&s=30b56578cd9dd7c4a25e3899c3c3d4626190189a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/asQIFBJYgU0s0y-AV0hAHtenKk6qa9ZCLFCb-Jjyvag.jpg?auto=webp&s=1298effbf5169af4c20de0e8061f0686cd77b40c', 'width': 1200}, 'variants': {}}]}
|
||
Open Question: Why don’t voice agent developers fine-tune models like Kokoro to match local accents or customer voices?
| 1 |
[removed]
| 2025-05-21T09:50:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1krtvv0/open_question_why_dont_voice_agent_developers/
|
MAtrixompa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtvv0
| false | null |
t3_1krtvv0
|
/r/LocalLLaMA/comments/1krtvv0/open_question_why_dont_voice_agent_developers/
| false | false |
self
| 1 | null |
AMD Unleashes Radeon AI PRO R9700 GPU With 32 GB VRAM, 128 AI Cores & 300W TDP: 2x Faster Than Last-Gen W7800 In DeepSeek R1
| 1 |
[removed]
| 2025-05-21T10:12:51 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kru86k
| false | null |
t3_1kru86k
|
/r/LocalLLaMA/comments/1kru86k/amd_unleashes_radeon_ai_pro_r9700_gpu_with_32_gb/
| false | false |
default
| 1 | null |
||
AMD Unleashes Radeon AI PRO R9700 GPU With 32 GB VRAM, 128 AI Cores & 300W TDP: 2x Faster Than Last-Gen W7800 In DeepSeek R1
| 1 | 2025-05-21T10:13:32 |
https://wccftech.com/amd-radeon-ai-pro-r9700-gpu-32-gb-vram-128-ai-cores-300w-2x-faster-deepseek-r1/
|
_SYSTEM_ADMIN_MOD_
|
wccftech.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kru8ki
| false | null |
t3_1kru8ki
|
/r/LocalLLaMA/comments/1kru8ki/amd_unleashes_radeon_ai_pro_r9700_gpu_with_32_gb/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'q7mve0pWQXapLu3eVRUlgERITl5WzhQmx_iowI8HRh8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?width=108&crop=smart&auto=webp&s=d798ba11963dca0389f6729f6d91fc536cdb985a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?width=216&crop=smart&auto=webp&s=6847d8f16fbf9247e5b94ddcfb9a56ed0322edd5', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?width=320&crop=smart&auto=webp&s=0a8d5925acf3d4c541628a3e4449b80c775a422e', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?width=640&crop=smart&auto=webp&s=e5d5ee740920ff81f8c64196eff63c881820a953', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?width=960&crop=smart&auto=webp&s=7f422bf917b283a9c8a9311649937c74ac18ee20', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?width=1080&crop=smart&auto=webp&s=45f4d8e562b050b466f53fac070a9ef9900f94f6', 'width': 1080}], 'source': {'height': 837, 'url': 'https://external-preview.redd.it/QehnmIpXCG5EMr8E9Ybhl7ucP9xiIhX3kgr39HNF6VE.jpg?auto=webp&s=f1e98d66412188cac318198802652825ac6f091a', 'width': 1493}, 'variants': {}}]}
|
||
Hidden thinking
| 42 |
I was disappointed to find that Google has now hidden Gemini's thinking. I guess it is understandable to stop others from using the data to train and so help's good to keep their competitive advantage, but I found the thoughts so useful. I'd read the thoughts as generated and often would terminate the generation to refine the prompt based on the output thoughts which led to better results.
It was nice while it lasted and I hope a lot of thinking data was scraped to help train the open models.
| 2025-05-21T10:15:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kru9v3/hidden_thinking/
|
DeltaSqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kru9v3
| false | null |
t3_1kru9v3
|
/r/LocalLLaMA/comments/1kru9v3/hidden_thinking/
| false | false |
self
| 42 | null |
Seeking Self-Hosted Dynamic Proxy for OpenAI-Compatible APIs
| 1 |
[removed]
| 2025-05-21T10:30:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kruhit/seeking_selfhosted_dynamic_proxy_for/
|
z00log
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kruhit
| false | null |
t3_1kruhit
|
/r/LocalLLaMA/comments/1kruhit/seeking_selfhosted_dynamic_proxy_for/
| false | false |
self
| 1 | null |
Recommendations for Self-Hosted, Open-Source Proxy for Dynamic OpenAI API Forwarding?
| 1 |
[removed]
| 2025-05-21T10:33:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1krujc9/recommendations_for_selfhosted_opensource_proxy/
|
z00log
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krujc9
| false | null |
t3_1krujc9
|
/r/LocalLLaMA/comments/1krujc9/recommendations_for_selfhosted_opensource_proxy/
| false | false |
self
| 1 | null |
gemma 3n seems not work well for non English prompt
| 36 | 2025-05-21T10:44:22 |
Juude89
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krupm7
| false | null |
t3_1krupm7
|
/r/LocalLLaMA/comments/1krupm7/gemma_3n_seems_not_work_well_for_non_english/
| false | false | 36 |
{'enabled': True, 'images': [{'id': '4JzO8RQ9NzmRSnyXZ6u5IvVoDqiycP2kNMGxQK8u14o', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xhxxm6xv642f1.jpeg?width=108&crop=smart&auto=webp&s=0b6f2982c1a773712602acb0412b16d0e17e8fb7', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xhxxm6xv642f1.jpeg?width=216&crop=smart&auto=webp&s=a1e055e41252c357924d7f93cd61bdac7ad68965', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xhxxm6xv642f1.jpeg?width=320&crop=smart&auto=webp&s=cde878c1f0073030879c995970a4f430ea0de6b6', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/xhxxm6xv642f1.jpeg?width=640&crop=smart&auto=webp&s=d0580d38b88c51a74cb73fa8b517a498c3df81f8', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/xhxxm6xv642f1.jpeg?width=960&crop=smart&auto=webp&s=e1451753def4d0e8f49952ca05d27109f7a4c50a', 'width': 960}], 'source': {'height': 2248, 'url': 'https://preview.redd.it/xhxxm6xv642f1.jpeg?auto=webp&s=45523743cddc7adc193301628fd80fb5bb0a0766', 'width': 1079}, 'variants': {}}]}
|
|||
Is there a portable .exe GUI I can run ggufs on?
| 1 |
That needs no installation. And you can just import a gguf file without internet?
Essentially LM studio but portable
| 2025-05-21T10:53:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kruv49/is_there_a_portable_exe_gui_i_can_run_ggufs_on/
|
Own-Potential-2308
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kruv49
| false | null |
t3_1kruv49
|
/r/LocalLLaMA/comments/1kruv49/is_there_a_portable_exe_gui_i_can_run_ggufs_on/
| false | false |
self
| 1 | null |
Hello everyone
| 1 |
[removed]
| 2025-05-21T11:11:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1krv5oe/hello_everyone/
|
No_Cartographer_2380
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krv5oe
| false | null |
t3_1krv5oe
|
/r/LocalLLaMA/comments/1krv5oe/hello_everyone/
| false | false |
self
| 1 | null |
Add voices to Kokoru?
| 1 |
[removed]
| 2025-05-21T11:53:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1krvwhb/add_voices_to_kokoru/
|
No_Cartographer_2380
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krvwhb
| false | null |
t3_1krvwhb
|
/r/LocalLLaMA/comments/1krvwhb/add_voices_to_kokoru/
| false | false |
self
| 1 | null |
Add voices to Kokoru TTS
| 1 |
[removed]
| 2025-05-21T12:00:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1krw17i/add_voices_to_kokoru_tts/
|
No_Cartographer_2380
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krw17i
| false | null |
t3_1krw17i
|
/r/LocalLLaMA/comments/1krw17i/add_voices_to_kokoru_tts/
| false | false |
self
| 1 | null |
Add voices to Kokoru TTS?
| 1 |
[removed]
| 2025-05-21T12:14:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1krwazg/add_voices_to_kokoru_tts/
|
No_Cartographer_2380
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krwazg
| false | null |
t3_1krwazg
|
/r/LocalLLaMA/comments/1krwazg/add_voices_to_kokoru_tts/
| false | false |
self
| 1 | null |
Ollama + RAG in godot 4
| 0 |
I’ve been experimenting with setting up my own local setup with ollama, with some success. I’m using deepseek-coder-v2 with a plugin for interfacing within Godot 4 ( game engine). I set up a RAG due to GDScript ( native language for engine) not being up to date with the model knowledge cutoff. I scraped the documentation for it to use in the database, and plan to add my own project code to it in the future.
My current flow is this : Query from user > RAG with an embedding model > cache the query > send enhanced prompt to Ollama > generation>answer to godot interface.
I currently have a 12gb RTX 5070 on this machine, my 4090 died and could not find a reasonable replacement, with 64gb ram.
Inference takes about 12-18 seconds now depends on the prompt complexity, what are you guys getting on similar gpu? I’m trying to see whether RAG is worth it as it adds a middleware connection. Any suggestions would be welcomed, thank you.
| 2025-05-21T12:42:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1krwuqx/ollama_rag_in_godot_4/
|
Huge-Masterpiece-824
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krwuqx
| false | null |
t3_1krwuqx
|
/r/LocalLLaMA/comments/1krwuqx/ollama_rag_in_godot_4/
| false | false |
self
| 0 | null |
Docling completely missing large elements on a page, can Layout model be changed?
| 6 |
I was playing around with docling today after I've seen some hype around these parts, and found that the object detection would often miss large chart/image type elements side by side. In one case, there were 2 large squarish elements side by side, roughly the same side. The left one was detected, the right one was not. In a other case there were boxes in 2x4 formation. One of the 7 was completely missed. YOLO handled everything in my simple test perfectly.
So I guess my questions are:
1. Are there any simple things to change configure to improve this?
2. Is there a way to replace the layout easily? Looking through the code, the design is nice and modular except there doesn't seem to be a way to change the layout model. I'll have to look deeper but I couldn't tell if plugin system would allow me to change it.
| 2025-05-21T12:43:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1krwvgh/docling_completely_missing_large_elements_on_a/
|
joomla00
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krwvgh
| false | null |
t3_1krwvgh
|
/r/LocalLLaMA/comments/1krwvgh/docling_completely_missing_large_elements_on_a/
| false | false |
self
| 6 | null |
Key findings after testing LLMs
| 3 |
After running my tests, plus a few others, and publishing the results, I got to thinking about how strong Qwen3 really is.
You can read my musings here: [https://blog.kekepower.com/blog/2025/may/21/deepseek\_r1\_and\_v3\_vs\_qwen3\_-\_why\_631-billion\_parameters\_still\_miss\_the\_mark\_on\_instruction\_fidelity.html](https://blog.kekepower.com/blog/2025/may/21/deepseek_r1_and_v3_vs_qwen3_-_why_631-billion_parameters_still_miss_the_mark_on_instruction_fidelity.html)
>TL;DR
>DeepSeek R1-631 B and V3-631 B nail reasoning tasks but routinely ignore explicit format or length constraints.
>Qwen3 (8 B → 235 B) obeys instructions out-of-the-box, even on a single RTX 3070, though the 30 B-A3B variant hallucinated once in a 10 000-word test (details below).
>If your pipeline needs precise word counts or tag wrappers, use Qwen3 today; keep DeepSeek for creative ideation unless you’re ready to babysit it with chunked prompts or regex post-processing.
>Rumor mill says DeepSeek V4 and R2 will land shortly; worth re-testing when they do.
There were also comments on my other post about my prompt. That is was either weak or having too many parameters.
Question: **Do you have any suggestions for strong, difficult, interesting or breaking prompts I can test next?**
| 2025-05-21T13:02:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1krx9pa/key_findings_after_testing_llms/
|
kekePower
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krx9pa
| false | null |
t3_1krx9pa
|
/r/LocalLLaMA/comments/1krx9pa/key_findings_after_testing_llms/
| false | false |
self
| 3 | null |
Location of downloaded LLM on android
| 2 |
Hello guys, can I know the exact location of the downloaded models gguf on apps like Chatter UI?
| 2025-05-21T13:28:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1krxucq/location_of_downloaded_llm_on_android/
|
Egypt_Pharoh1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krxucq
| false | null |
t3_1krxucq
|
/r/LocalLLaMA/comments/1krxucq/location_of_downloaded_llm_on_android/
| false | false |
self
| 2 | null |
LLM for Linux questions
| 2 |
I am trying to learn Linux. Can anyone recommend me a good LLM that can answer all Linux related questions? Preferrably not a huge one, like under 20B.
| 2025-05-21T13:29:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1krxvce/llm_for_linux_questions/
|
Any-Championship-611
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krxvce
| false | null |
t3_1krxvce
|
/r/LocalLLaMA/comments/1krxvce/llm_for_linux_questions/
| false | false |
self
| 2 | null |
New falcon models using mamba hybrid are very competetive if not ahead for their sizes.
| 53 |
AVG SCORES FOR A VARIETY OF BENCHMARKS:
\*\*Falcon-H1 Models:\*\*
1. \*\*Falcon-H1-34B:\*\* 58.92
2. \*\*Falcon-H1-7B:\*\* 54.08
3. \*\*Falcon-H1-3B:\*\* 48.09
4. \*\*Falcon-H1-1.5B-deep:\*\* 47.72
5. \*\*Falcon-H1-1.5B:\*\* 45.47
6. \*\*Falcon-H1-0.5B:\*\* 35.83
\*\*Qwen3 Models:\*\*
1. \*\*Qwen3-32B:\*\* 58.44
2. \*\*Qwen3-8B:\*\* 52.62
3. \*\*Qwen3-4B:\*\* 48.83
4. \*\*Qwen3-1.7B:\*\* 41.08
5. \*\*Qwen3-0.6B:\*\* 31.24
\*\*Gemma3 Models:\*\*
1. \*\*Gemma3-27B:\*\* 58.75
2. \*\*Gemma3-12B:\*\* 54.10
3. \*\*Gemma3-4B:\*\* 44.32
4. \*\*Gemma3-1B:\*\* 29.68
\*\*Llama Models:\*\*
1. \*\*Llama3.3-70B:\*\* 58.20
2. \*\*Llama4-scout:\*\* 57.42
3. \*\*Llama3.1-8B:\*\* 44.77
4. \*\*Llama3.2-3B:\*\* 38.29
5. \*\*Llama3.2-1B:\*\* 24.99
benchmarks tested:
\* BBH
\* ARC-C
\* TruthfulQA
\* HellaSwag
\* MMLU
\* GSM8k
\* MATH-500
\* AMC-23
\* AIME-24
\* AIME-25
\* GPQA
\* GPQA\_Diamond
\* MMLU-Pro
\* MMLU-stem
\* HumanEval
\* HumanEval+
\* MBPP
\* MBPP+
\* LiveCodeBench
\* CRUXEval
\* IFEval
\* Alpaca-Eval
\* MTBench
\* LiveBench
all the data I grabbed for this post was found at: [https://huggingface.co/tiiuae/Falcon-H1-1.5B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-1.5B-Instruct) and the various other models in the h1 family.
| 2025-05-21T13:31:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1krxwja/new_falcon_models_using_mamba_hybrid_are_very/
|
ElectricalAngle1611
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krxwja
| false | null |
t3_1krxwja
|
/r/LocalLLaMA/comments/1krxwja/new_falcon_models_using_mamba_hybrid_are_very/
| false | false |
self
| 53 |
{'enabled': False, 'images': [{'id': '7pmU_qf5f-85GxHu1Vf_nMXhQKDehQ-ngMHj09B5QjM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?width=108&crop=smart&auto=webp&s=903373bcb66f69286ec2409aad5dae81b331c6f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?width=216&crop=smart&auto=webp&s=6fda54175579f30a4d54546a155549e2a08c9d84', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?width=320&crop=smart&auto=webp&s=c3a70a3a5c8de72cf178599d48ae28dff8478f61', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?width=640&crop=smart&auto=webp&s=d8792c7c439aba72d854c970eae14188582ab8ac', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?width=960&crop=smart&auto=webp&s=d1d8bd40195c897939de9565c8572d0ad81327dd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?width=1080&crop=smart&auto=webp&s=8460d3078fc1860a91122af3be16503a7ee7f7d5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9Qap-vjKineTiZeWQ7u0iMdaKdKc-0HPWYNmnEyzFbM.jpg?auto=webp&s=4a24a914046dd8a413463dcf5cb442765132c1fe', 'width': 1200}, 'variants': {}}]}
|
What Hardware release are you looking forward to this year?
| 2 |
I'm curious what folks are planning for this year? I've been looking out for hardware that can handle very very large models, and getting my homelab ready for an expansion, but I've lost my vision on what to look for this year for very large self-hosted models.
Curious what the community thinks.
| 2025-05-21T13:42:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kry5dk/what_hardware_release_are_you_looking_forward_to/
|
SanFranPanManStand
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kry5dk
| false | null |
t3_1kry5dk
|
/r/LocalLLaMA/comments/1kry5dk/what_hardware_release_are_you_looking_forward_to/
| false | false |
self
| 2 | null |
Dynamically loading experts in MoE models?
| 2 |
Is this a thing? If not, why not? I mean, MoE models like qwen3 235b only have 22b active parameters, so if one were able to just use the active parameters, then qwen would be much easier to run, maybe even runnable on a basic computer with 32gb of ram
| 2025-05-21T13:46:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kry8m8/dynamically_loading_experts_in_moe_models/
|
ExtremeAcceptable289
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kry8m8
| false | null |
t3_1kry8m8
|
/r/LocalLLaMA/comments/1kry8m8/dynamically_loading_experts_in_moe_models/
| false | false |
self
| 2 | null |
Solo-Built Distributed AI Voice Assistant
| 1 |
[removed]
| 2025-05-21T13:50:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kryby9/solobuilt_distributed_ai_voice_assistant/
|
No_Story_9941
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kryby9
| false | null |
t3_1kryby9
|
/r/LocalLLaMA/comments/1kryby9/solobuilt_distributed_ai_voice_assistant/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '1GieUu91QDrH2_j5GwLwVzYftRP1WCG3Ezk3UlAoer0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=108&crop=smart&auto=webp&s=29bbc2fb60d7d6c5ad41d9af1623422c4de45880', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=216&crop=smart&auto=webp&s=28390cde97d137733a5669c5c1546a7c345c01e4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=320&crop=smart&auto=webp&s=43c42e1aad292b5f529e6c3c3883537540ffbfd3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=640&crop=smart&auto=webp&s=028ba493e1839d9bd846e1aa8f0c826ee6c8b016', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=960&crop=smart&auto=webp&s=87dfc3f8937145fc637305edc8353a7c9435fd6e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=1080&crop=smart&auto=webp&s=ca46569e72ba369a523138c3b68edfb5bc5a6bf3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?auto=webp&s=8f53d3743793f4a5263d156a1c2d4dc9ccdbc4f9', 'width': 1200}, 'variants': {}}]}
|
Late-Night Study Lifesaver? My Unexpected Win with SolutionInn’s Ask AI
| 1 |
[removed]
| 2025-05-21T13:56:15 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1krygf5
| false | null |
t3_1krygf5
|
/r/LocalLLaMA/comments/1krygf5/latenight_study_lifesaver_my_unexpected_win_with/
| false | false |
default
| 1 | null |
||
Solo-Built Distributed AI Voice Assistant
| 1 |
[removed]
| 2025-05-21T13:58:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kryhxj/solobuilt_distributed_ai_voice_assistant/
|
No_Story_9941
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kryhxj
| false | null |
t3_1kryhxj
|
/r/LocalLLaMA/comments/1kryhxj/solobuilt_distributed_ai_voice_assistant/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '1GieUu91QDrH2_j5GwLwVzYftRP1WCG3Ezk3UlAoer0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=108&crop=smart&auto=webp&s=29bbc2fb60d7d6c5ad41d9af1623422c4de45880', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=216&crop=smart&auto=webp&s=28390cde97d137733a5669c5c1546a7c345c01e4', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=320&crop=smart&auto=webp&s=43c42e1aad292b5f529e6c3c3883537540ffbfd3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=640&crop=smart&auto=webp&s=028ba493e1839d9bd846e1aa8f0c826ee6c8b016', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=960&crop=smart&auto=webp&s=87dfc3f8937145fc637305edc8353a7c9435fd6e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?width=1080&crop=smart&auto=webp&s=ca46569e72ba369a523138c3b68edfb5bc5a6bf3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/BsGVyVndLk2cMkzkvJ0HdFhlWKdyd-rSl-ZQDfl8NHU.jpg?auto=webp&s=8f53d3743793f4a5263d156a1c2d4dc9ccdbc4f9', 'width': 1200}, 'variants': {}}]}
|
What are the best models for non-documental OCR?
| 2 |
Hello,
I am searching for the best LLMs for OCR. I am *not* scanning documents or similar. The input are images of sacks in a warehouse, and text has to be extracted from it. I tried QwenVL and was much worse than traditional OCR like PaddleOCR, which has given the the best results (Ok-ish at best). However, the protective plastic around the sacks creates a lot of reflections which hamper the ability to extract the text, specially when its searching for printed text and not the one that was originally drawn in the labels.
The new Google 3n seems promising though, however I would like to know what alternatives are there (with free comercial use if possible).
Thanks in advance
| 2025-05-21T14:14:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kryw3a/what_are_the_best_models_for_nondocumental_ocr/
|
Ok_Appeal8653
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kryw3a
| false | null |
t3_1kryw3a
|
/r/LocalLLaMA/comments/1kryw3a/what_are_the_best_models_for_nondocumental_ocr/
| false | false |
self
| 2 | null |
Meet Mistral Devstral, SOTA open model designed specifically for coding agents
| 277 |
[https://mistral.ai/news/devstral](https://mistral.ai/news/devstral)
Open Weights : [https://huggingface.co/mistralai/Devstral-Small-2505](https://huggingface.co/mistralai/Devstral-Small-2505)
| 2025-05-21T14:15:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kryxdg/meet_mistral_devstral_sota_open_model_designed/
|
ApprehensiveAd3629
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kryxdg
| false | null |
t3_1kryxdg
|
/r/LocalLLaMA/comments/1kryxdg/meet_mistral_devstral_sota_open_model_designed/
| false | false |
self
| 277 |
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=216&crop=smart&auto=webp&s=4a5f46c5464cea72c64df6c73d58b15e102c5936', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=320&crop=smart&auto=webp&s=aa1e4abc763404a25bda9d60fe6440b747d6bae4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=640&crop=smart&auto=webp&s=122efd46018c04117aca71d80db3640d390428bd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=960&crop=smart&auto=webp&s=b53cfe1770ee2b37ce0f5b5e1b0fd67d3276a350', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=1080&crop=smart&auto=webp&s=278352f076c5bbdf8f6e7cecedab77d8794332ff', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?auto=webp&s=691d56b882a79feffdb4b780dc6a0db1b2c5d709', 'width': 4800}, 'variants': {}}]}
|
mistralai/Devstral-Small-2505 · Hugging Face
| 394 |
Devstral is an agentic LLM for software engineering tasks built under a collaboration between Mistral AI and All Hands AI
| 2025-05-21T14:17:03 |
https://huggingface.co/mistralai/Devstral-Small-2505
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kryybf
| false | null |
t3_1kryybf
|
/r/LocalLLaMA/comments/1kryybf/mistralaidevstralsmall2505_hugging_face/
| false | false | 394 |
{'enabled': False, 'images': [{'id': 'l58T3fdYm1_vbmuoUrQ3Cn8X1B6VkXutYixnBDCamkM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?width=108&crop=smart&auto=webp&s=e68376b3b17125c9c3906caaa299d50cf4a7a765', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?width=216&crop=smart&auto=webp&s=caab1502ae4d22c7b2e58317646812aa089bb503', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?width=320&crop=smart&auto=webp&s=aadf62bcce9a7db26a8c9e94eb4c7276ccc4cd76', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?width=640&crop=smart&auto=webp&s=c8ec3b7f12dc09c129535d0279c6db5801db61aa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?width=960&crop=smart&auto=webp&s=d4010fbf7dffade3e1f35dfa3cb41e700f462ba8', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?width=1080&crop=smart&auto=webp&s=64409ac6c2abb03b4e823e790de19dc30c04220e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/5v7V2smikryAtPAPRovLgRwqCqqgG7mLENcd1_6EmM4.jpg?auto=webp&s=41c9ad3a075d9562a14f150a5301ebe677436d27', 'width': 1200}, 'variants': {}}]}
|
|
Fastest model & setup for classification?
| 1 |
[removed]
| 2025-05-21T14:20:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1krz120/fastest_model_setup_for_classification/
|
Regular_Problem9019
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krz120
| false | null |
t3_1krz120
|
/r/LocalLLaMA/comments/1krz120/fastest_model_setup_for_classification/
| false | false |
self
| 1 | null |
AMD ROCm 6.4.1 now supports 9070/XT (Navi4)
| 100 |
As of this post, AMD hasn't updated their github page or their official ROCm doc page, but here is the official link to their site. Looks like it is a bundled ROCm stack for Ubuntu LTS and RHEL 9.6.
I got my 9070XT at launch at MSRP, so this is good news for me!
| 2025-05-21T14:48:49 |
https://www.amd.com/en/resources/support-articles/release-notes/RN-AMDGPU-UNIFIED-LINUX-25-10-1-ROCM-6-4-1.html
|
shifty21
|
amd.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krzpmu
| false | null |
t3_1krzpmu
|
/r/LocalLLaMA/comments/1krzpmu/amd_rocm_641_now_supports_9070xt_navi4/
| false | false | 100 |
{'enabled': False, 'images': [{'id': 'VY-gY2HYKYcXNyATN6eFOZXOvhwsQtwwRY3tWSlZJMQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=108&crop=smart&auto=webp&s=bf9ed3573a3db5d3e44a72830f8426517a91377c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=216&crop=smart&auto=webp&s=f524b797a7ea6865e919d4617ae1dcf2b6c6a2af', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=320&crop=smart&auto=webp&s=10a67c110f3a769d2bfe608f2b3a86f3166e246f', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=640&crop=smart&auto=webp&s=55af9887767b7ab9df9c7ca842d03265592ce4ea', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=960&crop=smart&auto=webp&s=d292a516385f47874c32d38cd8df66e9fb7ad712', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?width=1080&crop=smart&auto=webp&s=5b59e2297e5600bee5b6085195e17dda1c43324f', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/Uw9Z-ATtXBVz3bFA4dAJBygcK_v6wL5a2uOdNIk-9qE.jpg?auto=webp&s=c9e47dc7ead939c14e8582ec6e213ea7b7903190', 'width': 1200}, 'variants': {}}]}
|
|
medgemma-4b the Pharmacist 🤣
| 301 |
Google’s new OS medical model gave in to the dark side far too easily. I had to laugh. I expected it to put up a little more of a fight, but there you go.
| 2025-05-21T14:49:13 |
AlternativePlum5151
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krzpyp
| false | null |
t3_1krzpyp
|
/r/LocalLLaMA/comments/1krzpyp/medgemma4b_the_pharmacist/
| false | false |
nsfw
| 301 |
{'enabled': True, 'images': [{'id': 'DXEd1qFG1EZ9N07_bpcSQ_lnxnoDwIUpoHc59yZWr7Q', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=108&crop=smart&auto=webp&s=f50455370fe07f346afbd340be210479cbbf7d79', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=216&crop=smart&auto=webp&s=f9ced7a447c985729f70eecf349297e9f2042e30', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=320&crop=smart&auto=webp&s=07538bd6de2b1ffe8312bc785319ab380979bd33', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=640&crop=smart&auto=webp&s=c2ed2cdf779e7529952f893533d7788e300c2fbc', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=960&crop=smart&auto=webp&s=e50f2832bda57be3c07abeb9b49fae3b9ee2573f', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=1080&crop=smart&auto=webp&s=cb16ed6d8df545344e8753127e5133e1e7faf26c', 'width': 1080}], 'source': {'height': 612, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?auto=webp&s=1b94525cf8e3352b3c5e5b6376fb0c476d5c900a', 'width': 1086}, 'variants': {'nsfw': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b53c7e09e8d0a9f82dc92b13db663e8a5ad0efbe', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=07d29beeab8cad2c821dc72bd79a7b2ef5c7e57a', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a3a2c4e1af1aad00778660612ce10fdfcbb23537', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=ac5b708305379e6852b53cfee8b703276efd262d', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=b98ec582810d25c672c9db513acc2784f3f34234', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=6867582bcc6029dc57b140cfdcc2d80d61ddce9d', 'width': 1080}], 'source': {'height': 612, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?blur=40&format=pjpg&auto=webp&s=6655858ac3de0baa2da7bade80927fc1eee77e1f', 'width': 1086}}, 'obfuscated': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b53c7e09e8d0a9f82dc92b13db663e8a5ad0efbe', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=07d29beeab8cad2c821dc72bd79a7b2ef5c7e57a', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=a3a2c4e1af1aad00778660612ce10fdfcbb23537', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=ac5b708305379e6852b53cfee8b703276efd262d', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=b98ec582810d25c672c9db513acc2784f3f34234', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=6867582bcc6029dc57b140cfdcc2d80d61ddce9d', 'width': 1080}], 'source': {'height': 612, 'url': 'https://preview.redd.it/f25nhvxqd52f1.png?blur=40&format=pjpg&auto=webp&s=6655858ac3de0baa2da7bade80927fc1eee77e1f', 'width': 1086}}}}]}
|
|
largest context window model for 24GB VRAM?
| 2 |
Hey guys. Trying to find a model that can analyze large text files (10,000 to 15,000 words at a time) without pagination
What model is best for summarizing medium-large bodies of text?
| 2025-05-21T14:55:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1krzuzj/largest_context_window_model_for_24gb_vram/
|
odaman8213
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krzuzj
| false | null |
t3_1krzuzj
|
/r/LocalLLaMA/comments/1krzuzj/largest_context_window_model_for_24gb_vram/
| false | false |
self
| 2 | null |
Agent Commerce Kit – Protocols for AI Agent Identity and Payments
| 2 | 2025-05-21T15:00:06 |
https://www.agentcommercekit.com/overview/introduction
|
catena_labs
|
agentcommercekit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krzzg7
| false | null |
t3_1krzzg7
|
/r/LocalLLaMA/comments/1krzzg7/agent_commerce_kit_protocols_for_ai_agent/
| false | false | 2 |
{'enabled': False, 'images': [{'id': '_hbQnw-UE3NBRg2idKDtqs2IThHc7EPSMdFg9H83f1o', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?width=108&crop=smart&auto=webp&s=e9bb17d22a1e8418165e6f3d3b9e99e50dd41808', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?width=216&crop=smart&auto=webp&s=32bef1bd55b1c72fd24d2b0e53016c0ff459bc0b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?width=320&crop=smart&auto=webp&s=191337f8e5222cddad7aca85c5100c28c7b5622f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?width=640&crop=smart&auto=webp&s=f3b1efb31c565d94d37aaccd93f1cd3426b6a23a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?width=960&crop=smart&auto=webp&s=201b29479c13e6263100dbf524357a3632d68b66', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?width=1080&crop=smart&auto=webp&s=e375f6c37666911270dc8b01f66492619b7e78fe', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/OEDlc3lmK9Ir2wXwMtfU-Q1rGIkQxOFuuaTGNvgglsY.jpg?auto=webp&s=c77f7e8147438e4e438cb3c615b720aa223fded1', 'width': 4800}, 'variants': {}}]}
|
||
Voice cloning for Kokoro TTS using random walk algorithms
| 94 |
[https://news.ycombinator.com/item?id=44052295](https://news.ycombinator.com/item?id=44052295)
Hey everybody, I made a library that can somewhat clone voices using Kokoro TTS. I know it is a popular library for adding speech to various LLM applications, so I figured I would share it here. It can take awhile and produce a variety of results, but overall it is a promising attempt to add more voice options to this great library.
Check out the code and examples.
| 2025-05-21T15:12:31 |
https://github.com/RobViren/kvoicewalk
|
rodbiren
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0arl
| false | null |
t3_1ks0arl
|
/r/LocalLLaMA/comments/1ks0arl/voice_cloning_for_kokoro_tts_using_random_walk/
| false | false | 94 |
{'enabled': False, 'images': [{'id': '9j7T1aryT1VtYnao4_flbG1Qyq29ME_hO8aYhG0flnk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?width=108&crop=smart&auto=webp&s=dd9bee6b0cec514fb74dee249bdcf20136db2c3a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?width=216&crop=smart&auto=webp&s=3fb5b981b7f5640271bed78361267b35876adb0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?width=320&crop=smart&auto=webp&s=768f0cba993f1acf17ff1be317f5d286d635bee3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?width=640&crop=smart&auto=webp&s=a846a19dfe6fa63bfde52db9069d0dadbf3b7dba', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?width=960&crop=smart&auto=webp&s=7a642c94732d169c40715d9d1b0f50e7e8eb270c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?width=1080&crop=smart&auto=webp&s=d8cafbadfeee2e11ba60e3ca6d1208e6164de401', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aOW6-hgbtxvb8U4tg5vaNfPPQC6NYWWnQfNZ4XDYvYg.jpg?auto=webp&s=e6a7f6cef4ae1d9ffbaaad5eca87fc7cd0db43c8', 'width': 1200}, 'variants': {}}]}
|
|
new to local, half new to AI but an oldie -help pls
| 6 |
ive been using deepseek r1 (web) to generate code for scripting languages. i dont think it does a good enough job at code generation.... i'd like to know some ideas. ill mostly be doing javascript, and .net (0 knowledge yet.. wanna get into it)
i just got a new 9900x3d + 5070 gpu and would like to know if its better to host locally... if its faster.
pls share me ideas. i like optimal setups. prefer free methods but if there are some cheap api's that i need to buy then i will.
| 2025-05-21T15:16:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks0epf/new_to_local_half_new_to_ai_but_an_oldie_help_pls/
|
biatche
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0epf
| false | null |
t3_1ks0epf
|
/r/LocalLLaMA/comments/1ks0epf/new_to_local_half_new_to_ai_but_an_oldie_help_pls/
| false | false |
self
| 6 | null |
I'd love a qwen3-coder-30B-A3B
| 97 |
Honestly I'd pay quite a bit to have such a model on my own machine. Inference would be quite fast and coding would be decent.
| 2025-05-21T15:19:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks0h52/id_love_a_qwen3coder30ba3b/
|
GreenTreeAndBlueSky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0h52
| false | null |
t3_1ks0h52
|
/r/LocalLLaMA/comments/1ks0h52/id_love_a_qwen3coder30ba3b/
| false | false |
self
| 97 | null |
in the end satya was right scaling law is so much good , veo 3 is a prime example and many other model too
| 0 |
the more it training on the problem the better it will get and google already have too much video data its was a common sense that with that much data nobody is going to mess with the google monopoly , im pretty sure all other startup finding the solution of this problem .
the there all other model are also solely based on the scaling right now
| 2025-05-21T15:27:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks0oea/in_the_end_satya_was_right_scaling_law_is_so_much/
|
Select_Dream634
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0oea
| false | null |
t3_1ks0oea
|
/r/LocalLLaMA/comments/1ks0oea/in_the_end_satya_was_right_scaling_law_is_so_much/
| false | false |
self
| 0 | null |
SWE-rebench update: GPT4.1 mini/nano and Gemini 2.0/2.5 Flash added
| 28 |
We’ve just added a batch of new models to the [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard):
* GPT-4.1 mini
* GPT-4.1 nano
* Gemini 2.0 Flash
* Gemini 2.5 Flash Preview 05-20
A few quick takeaways:
* gpt-4.1-mini is surprisingly strong, it matches full GPT-4.1 performance on fresh, decontaminated tasks. Very strong instruction following capabilities.
* gpt-4.1-nano, on the other hand, struggles. It often misunderstands the system prompt and hallucinates environment responses. This also affects other models in the bottom of the leaderboard.
* gemini 2.0 flash performs on par with Qwen and LLaMA 70B. It doesn't seem to suffer from contamination, but it often has troubles following instructions precisely.
* gemini 2.5 flash preview 05-20 is a big improvement over 2.0. It’s nearly GPT-4.1 level on older data and gets closer to GPT-4.1 mini on newer tasks, being \~2.6x cheaper, though possibly a bit contaminated.
I know many people are waiting for frontier model results. Thanks to OpenAI for providing API credits, results for o3 and o4-mini are coming soon. Stay tuned!
| 2025-05-21T15:32:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks0snl/swerebench_update_gpt41_mininano_and_gemini_2025/
|
Long-Sleep-13
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0snl
| false | null |
t3_1ks0snl
|
/r/LocalLLaMA/comments/1ks0snl/swerebench_update_gpt41_mininano_and_gemini_2025/
| false | false |
self
| 28 | null |
Building a runtime to enable fast, multi-model serving with vLLM — looking for feedback
| 0 |
We’ve been working on an AI inference runtime that makes it easier to serve **multiple LLMs using vLLM** , but in a more dynamic, serverless-style setup.
The idea is:
* You bring your **existing vLLM container image**
* We snapshot and restore model state **directly on GPU in seconds**
* No need to spin up new containers or re-init models
* Cold starts drop to **under 2s**, even for large models
We’re building this because most vLLM setups today assume a single model per container. That works great for throughput, but it’s hard when:
* You want to host **lots of models** on limited GPUs
* You need to **swap models** quickly without downtime
* You're running a **multi-tenant** setup or shared endpoint
Would love to hear if others are running into this , or thinking about similar problems. We’re deep in infra land but happy to share details or trade notes with anyone working on multi-model inference or runtime tooling.
| 2025-05-21T15:33:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks0tlw/building_a_runtime_to_enable_fast_multimodel/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0tlw
| false | null |
t3_1ks0tlw
|
/r/LocalLLaMA/comments/1ks0tlw/building_a_runtime_to_enable_fast_multimodel/
| false | false |
self
| 0 | null |
Has anyone used Gemini Live API for real-time interaction?
| 1 |
[removed]
| 2025-05-21T15:35:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks0vdw/has_anyone_used_gemini_live_api_for_realtime/
|
Funny_Working_7490
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0vdw
| false | null |
t3_1ks0vdw
|
/r/LocalLLaMA/comments/1ks0vdw/has_anyone_used_gemini_live_api_for_realtime/
| false | false |
self
| 1 | null |
This AI tool applies to more jobs in a minute than most people do in a week
| 1 | 2025-05-21T15:37:06 |
https://v.redd.it/mbvknht2n52f1
|
Alternative_Rock_836
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0x2t
| false |
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/mbvknht2n52f1/DASHPlaylist.mpd?a=1750433840%2CZGI1NmE1NTI5ZTAwNGM0YzMwN2Y2YjVhNTFjNWQxZTFkZjEyNjc2MWIxYzg3ODcyMmIxYzk5ZWExMzAyNDA2Zg%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/mbvknht2n52f1/DASH_360.mp4?source=fallback', 'has_audio': True, 'height': 360, 'hls_url': 'https://v.redd.it/mbvknht2n52f1/HLSPlaylist.m3u8?a=1750433840%2CMTliYmQ2MGNhNjA0MWZmMTVmZWU5YWE0MTdkMmNmYjliNzAwMDc0YTNmMTI1NDYyNWFlNGE2NDg5N2NjYWM5OA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/mbvknht2n52f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 596}}
|
t3_1ks0x2t
|
/r/LocalLLaMA/comments/1ks0x2t/this_ai_tool_applies_to_more_jobs_in_a_minute/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb.png?width=108&crop=smart&format=pjpg&auto=webp&s=ea5ea01cd62b12705d4599b05154efa081218d66', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb.png?width=216&crop=smart&format=pjpg&auto=webp&s=8b7a4bf8ec785c16b4cdb49f4974d9181fc019bd', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb.png?width=320&crop=smart&format=pjpg&auto=webp&s=cc3465d80bbfb999454c8e495dd04a7f299c4b4e', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb.png?width=640&crop=smart&format=pjpg&auto=webp&s=97b48a741d7b91d12ed30262049dad6d995f917b', 'width': 640}], 'source': {'height': 430, 'url': 'https://external-preview.redd.it/bHNkNXJodDJuNTJmMY2OvwhO7zRqwSN2HcLLGYwPRadfKF-XTSbAURTEMoUb.png?format=pjpg&auto=webp&s=63aed7c089d7203218b47e7192f122a19391850a', 'width': 712}, 'variants': {}}]}
|
||
New to the PC world and want to run a llm locally and need input
| 4 |
I don't really know where to begin with this Im looking for something similar to gpt-4 performance and thinking but be able to run it locally my specs are below. I have no idea where to start or really what I want any help would be appreciated.
* AMD Ryzen 9 7950X
* PNY RTX 4070 Ti SUPER
* ASUS ROG Strix B650E-F Gaming WiFi
I would like it to be able to accurately search the web, be able to upload files for projects I'm working on and help me generate ideas or get through roadblocks is there something out there that's similar to this that would work for me?
| 2025-05-21T15:39:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks0zee/new_to_the_pc_world_and_want_to_run_a_llm_locally/
|
ZiritoBlue
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks0zee
| false | null |
t3_1ks0zee
|
/r/LocalLLaMA/comments/1ks0zee/new_to_the_pc_world_and_want_to_run_a_llm_locally/
| false | false |
self
| 4 | null |
How much ram do i need with a RTX 5090
| 1 |
[removed]
| 2025-05-21T15:40:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks10hh/how_much_ram_do_i_need_with_a_rtx_5090/
|
nanomax55
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks10hh
| false | null |
t3_1ks10hh
|
/r/LocalLLaMA/comments/1ks10hh/how_much_ram_do_i_need_with_a_rtx_5090/
| false | false |
self
| 1 | null |
Mistral's new Devstral coding model running on a single RTX 4090 with 54k context using Q4KM quantization with vLLM
| 218 |
Full model announcement post on the Mistral blog [https://mistral.ai/news/devstral](https://mistral.ai/news/devstral)
| 2025-05-21T15:50:12 |
erdaltoprak
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks18uf
| false | null |
t3_1ks18uf
|
/r/LocalLLaMA/comments/1ks18uf/mistrals_new_devstral_coding_model_running_on_a/
| false | false | 218 |
{'enabled': True, 'images': [{'id': 'Yxwnv789FEPpZ62S-qRHY4WD3ig6GY_W_fz-otR2lmU', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?width=108&crop=smart&auto=webp&s=da019bab6f1b8373b91ff01f2e1d7c40454a1f12', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?width=216&crop=smart&auto=webp&s=875c6b73f51e4e01d184ea2fc96f0d4d292a9baf', 'width': 216}, {'height': 201, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?width=320&crop=smart&auto=webp&s=bb5657161b1eb1f369c0f0da2f35023fc0573702', 'width': 320}, {'height': 403, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?width=640&crop=smart&auto=webp&s=4ac522114e2ed7386b3d3e60852472eaf2f4b906', 'width': 640}, {'height': 605, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?width=960&crop=smart&auto=webp&s=1bf5b7da1c1a74351172632db72fb03bba7777a5', 'width': 960}, {'height': 680, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?width=1080&crop=smart&auto=webp&s=c596f03b65516ed7920d25170764603d89ee6496', 'width': 1080}], 'source': {'height': 1372, 'url': 'https://preview.redd.it/ddhhql5ap52f1.png?auto=webp&s=61bc7e272608ffb5c25eb24e0eb3de434ae947ea', 'width': 2177}, 'variants': {}}]}
|
||
Anyone else feel like LLMs aren't actually getting that much better?
| 239 |
I've been in the game since GPT-3.5 (and even before then with Github Copilot). Over the last 2-3 years I've tried most of the top LLMs: all of the GPT iterations, all of the Claude's, Mistral's, LLama's, Deepseek's, Qwen's, and now Gemini 2.5 Pro Preview 05-06.
Based on benchmarks and LMSYS Arena, one would expect something like the newest Gemini 2.5 Pro to be leaps and bounds ahead of what GPT-3.5 or GPT-4 was. I feel like it's not. My use case is generally technical: longer form coding and system design sorts of questions. I occasionally also have models draft out longer English texts like reports or briefs.
Overall I feel like models still have the same problems that they did when ChatGPT first came out: hallucination, generic LLM babble, hard-to-find bugs in code, system designs that might check out on first pass but aren't fully thought out.
Don't get me wrong, LLMs are still incredible time savers, but they have been since the beginning. I don't know if my prompting techniques are to blame? I don't really engineer prompts at all besides explaining the problem and context as thoroughly as I can.
Does anyone else feel the same way?
| 2025-05-21T16:06:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks1ncf/anyone_else_feel_like_llms_arent_actually_getting/
|
Swimming_Beginning24
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks1ncf
| false | null |
t3_1ks1ncf
|
/r/LocalLLaMA/comments/1ks1ncf/anyone_else_feel_like_llms_arent_actually_getting/
| false | false |
self
| 239 | null |
Trying to set up a local run Llama/XTTS workflow. Where can I get sound data sets like LORAs or checkpoints? Are NSFW sound effects possible?
| 1 |
[removed]
| 2025-05-21T16:19:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks1zia/trying_to_set_up_a_local_run_llamaxtts_workflow/
|
highwaytrading
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks1zia
| false | null |
t3_1ks1zia
|
/r/LocalLLaMA/comments/1ks1zia/trying_to_set_up_a_local_run_llamaxtts_workflow/
| false | false |
nsfw
| 1 | null |
Should I add 64gb RAM to my current PC ?
| 0 |
I currently have this configuration :
* Graphics Card: MSI GeForce RTX 3060 VENTUS 2X 12G OC
* Power Supply: CORSAIR CX650 ATX 650W
* Motherboard: GIGABYTE B550M DS3H
* Processor (CPU): AMD Ryzen 7 5800X
* RAM: Corsair Vengeance LPX 32 GB (2 x 16 GB) DDR4 3600 MHz
* CPU Cooler: Mars Gaming ML-PRO120, Professional Liquid Cooling for CPU
* Storage: Crucial P3 Plus 2TB PCIe Gen4 NVMe M.2 SSD (Up to 5,000 MB/s)
I am quite happy with it but I would like to know if there would be any benefit and if it is possible to add Corsair Vengeance LPX 64 Go (2 x 32 GB) DDR4 3600 MHz to the two remaining slot of my motherboard.
If I add the 64Gb ram I will have 2 x 16GB and 2 x 32GB, is it compatible if i put two in channel A and two in channel B ?
What are the bigger model that I could fit with 96Gb ?
| 2025-05-21T16:21:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks2101/should_i_add_64gb_ram_to_my_current_pc/
|
DiyGun
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks2101
| false | null |
t3_1ks2101
|
/r/LocalLLaMA/comments/1ks2101/should_i_add_64gb_ram_to_my_current_pc/
| false | false |
self
| 0 | null |
Public ranking for open source models?
| 8 |
Is there a public ranking that i can check for open source models to compare them and to be able to finetune? Its weird theres a ranking for everything except for models that we can use for fine tuning
| 2025-05-21T16:41:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks2j74/public_ranking_for_open_source_models/
|
jinstronda
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks2j74
| false | null |
t3_1ks2j74
|
/r/LocalLLaMA/comments/1ks2j74/public_ranking_for_open_source_models/
| false | false |
self
| 8 | null |
Startups: Collaborative Coding with Windsurf/Cursor
| 1 |
How are startups using Windsurf/Cursor, etc. to code new applications as a team? I'm trying to wrap my head around how it works without everyone stepping on each other's toes.
My initial thoughts on starting a project from scratch:
1. **Architecture Setup**: Have one person define global rules, coding styles, and architect the system using microservices. They should also set up the local, staging, and production environments.
2. **Core Implementation**: The same person (or someone who understands the vision) implements the core of the application, defining core objects, endpoints, etc. This allows the LLM to interact with both backend and frontend to build it out.
3. **Feature Development**: Once the architecture and core are in place (which should be relatively fast), assign feature sets to backend/frontend teams. It might be easier to merge backend and frontend teams so the LLM has full oversight from both perspectives.
4. **Sprints and Testing**: Each person is responsible for their feature and its unit tests during sprints. Once the sprint is completed and tested, the code is pushed, reviewed, merged and ???... profit?
This is my vision for making it work effectively, but I’ve only coded solo projects with LLMs, not with a team. I’m curious how startups or companies like Facebook, X, etc., have restructured to use these tools.
Would love some insight and blunt criticism from people who do this daily.
| 2025-05-21T16:53:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks2u4z/startups_collaborative_coding_with_windsurfcursor/
|
CodeBradley
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks2u4z
| false | null |
t3_1ks2u4z
|
/r/LocalLLaMA/comments/1ks2u4z/startups_collaborative_coding_with_windsurfcursor/
| false | false |
self
| 1 | null |
Just Discovered the NIMO AI-395 Mini PC – Compact Powerhouse Shipping Soon!
| 1 |
[removed]
| 2025-05-21T16:57:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks2xlf/just_discovered_the_nimo_ai395_mini_pc_compact/
|
bigManProf0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks2xlf
| false | null |
t3_1ks2xlf
|
/r/LocalLLaMA/comments/1ks2xlf/just_discovered_the_nimo_ai395_mini_pc_compact/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'o4lqUh5-wp1wM6I4Tx8WUzrhi7Sqcg9u2CoUJvgaFH0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=108&crop=smart&auto=webp&s=6074fa4aa0510b873ca44659684d34b1350afdde', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=216&crop=smart&auto=webp&s=e3795685fa513f24e9faf7402d99cfd177cf59b1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=320&crop=smart&auto=webp&s=5f7c3610b62bc6045860cd4a19692ade394527f8', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?auto=webp&s=08a32711c9e6395b5348ca50d28b24aa6141e94c', 'width': 500}, 'variants': {}}]}
|
Pizza and Google I/O - I'm ready!
| 0 | 2025-05-21T17:00:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks30hq/pizza_and_google_io_im_ready/
|
kekePower
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks30hq
| false | null |
t3_1ks30hq
|
/r/LocalLLaMA/comments/1ks30hq/pizza_and_google_io_im_ready/
| false | false | 0 | null |
||
Just Discovered the NIMO AI-395 Mini PC – Compact Powerhouse Shipping Soon!
| 1 |
[removed]
| 2025-05-21T17:07:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks36kb/just_discovered_the_nimo_ai395_mini_pc_compact/
|
bigManProf0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks36kb
| false | null |
t3_1ks36kb
|
/r/LocalLLaMA/comments/1ks36kb/just_discovered_the_nimo_ai395_mini_pc_compact/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'o4lqUh5-wp1wM6I4Tx8WUzrhi7Sqcg9u2CoUJvgaFH0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=108&crop=smart&auto=webp&s=6074fa4aa0510b873ca44659684d34b1350afdde', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=216&crop=smart&auto=webp&s=e3795685fa513f24e9faf7402d99cfd177cf59b1', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?width=320&crop=smart&auto=webp&s=5f7c3610b62bc6045860cd4a19692ade394527f8', 'width': 320}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/cys9JaQ7pfaYr9P4nO8xZAetJhQC6eSAREg3JavVBhs.jpg?auto=webp&s=08a32711c9e6395b5348ca50d28b24aa6141e94c', 'width': 500}, 'variants': {}}]}
|
Use iGPU + dGPU + NPU simultaneously?
| 1 |
[removed]
| 2025-05-21T17:10:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks39lp/use_igpu_dgpu_npu_simultaneously/
|
panther_ra
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks39lp
| false | null |
t3_1ks39lp
|
/r/LocalLLaMA/comments/1ks39lp/use_igpu_dgpu_npu_simultaneously/
| false | false |
self
| 1 | null |
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source
| 1 |
[removed]
| 2025-05-21T17:22:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks3ksa/tokilake_selfhost_a_private_ai_cloud_for_your/
|
Square-Air6513
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3ksa
| false | null |
t3_1ks3ksa
|
/r/LocalLLaMA/comments/1ks3ksa/tokilake_selfhost_a_private_ai_cloud_for_your/
| false | false |
self
| 1 | null |
Models become so good, companies are selling illusion of a working brain
| 0 |
The push for agents we observe today is a marketing strategy to sell more usage, create demand for resources and justify investments in infrastructure and supporting software.
We don't have an alternative to our minds, AI systems can't come to conclusions outside of their training datasets. What we have is an illusion based on advancements in synthetic data generation, in simple terms - talking about the same things in different ways, increasing probability of a valid pattern match.
Some questions I have constantly no my mind...
* How will people tackle unseen challenges when they stop practice basic problem solving skills?
* Isn't this push for agents a trap to disable people from the ability to think on their own and make them reliant on AI tools?
* Aren't these influencers drug dealers selling short sighted solutions with dangerous long term consequences?
| 2025-05-21T17:25:46 |
robertpiosik
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3ni2
| false | null |
t3_1ks3ni2
|
/r/LocalLLaMA/comments/1ks3ni2/models_become_so_good_companies_are_selling/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'KEW7j1dEIE_G49e6l5U-TJTDMQxMImUpX4729PDHtEc', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?width=108&crop=smart&auto=webp&s=f135bf4945ee739ecd8405a1103d90ac729be8dd', 'width': 108}, {'height': 199, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?width=216&crop=smart&auto=webp&s=32fcf7f00d7acfb51d3b7dd2662dc0cc8bc280b6', 'width': 216}, {'height': 295, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?width=320&crop=smart&auto=webp&s=f6c0e4789c8811824bcc187dcfc947371b40b5ec', 'width': 320}, {'height': 591, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?width=640&crop=smart&auto=webp&s=c3d5cde114ae3858321e838691f3f69523acf7aa', 'width': 640}, {'height': 887, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?width=960&crop=smart&auto=webp&s=377ee06bcc38d8cd03f365b7bdc2f88d64c843fe', 'width': 960}, {'height': 998, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?width=1080&crop=smart&auto=webp&s=ef03180e57b66873d876975520e89ad163beaece', 'width': 1080}], 'source': {'height': 1106, 'url': 'https://preview.redd.it/2uo5fi9e362f1.png?auto=webp&s=dde4f6b3a3dd9a01b37cb83f65314aa02a800175', 'width': 1196}, 'variants': {}}]}
|
||
Radeon AI Pro R9700. considering they took a lot of time comparing to 5080 likely to cost around 1000USD for 32gb vram
| 1 | 2025-05-21T17:26:08 |
intc3172
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3nui
| false | null |
t3_1ks3nui
|
/r/LocalLLaMA/comments/1ks3nui/radeon_ai_pro_r9700_considering_they_took_a_lot/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'uDyEHx3AJFIzvDb01KtyN7Wl7K2B0ZYUlZyCkbYfuUA', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?width=108&crop=smart&auto=webp&s=9f7ceb479a9933ad8f2ef4b8da4a6cc89e27fa0e', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?width=216&crop=smart&auto=webp&s=49befe98579a379c1588a9e438f5847fe304e0e4', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?width=320&crop=smart&auto=webp&s=1abcc8137c2d0cb1faec283a876833fbbed58ad9', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?width=640&crop=smart&auto=webp&s=cb9338f25f5209cac6c41af7b5be8071f22653c7', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?width=960&crop=smart&auto=webp&s=a97337e6dc31c606da02c832ad1a9ecf72206379', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?width=1080&crop=smart&auto=webp&s=b51511253901e3f0027d43dc0a39b94b56697580', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/bx2sdzae662f1.png?auto=webp&s=8225687d96114b58302d471fc4343e371343bd14', 'width': 1920}, 'variants': {}}]}
|
|||
ChatGPT’s Impromptu Web Lookups... Can Open Source Compete?
| 0 |
I must reluctantly admit... I can’t out-fox ChatGPT, when it spots a blind spot, it just deduces it needs a web lookup and grabs the answer, no extra setup or config required. Its power comes from having vast public data indexed (Google, lol) and the instinct to query it on the fly with... tools (?).
As of today, how could an open-source project realistically replicate or incorporate that same seamless, on-demand lookup capability?
| 2025-05-21T17:26:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks3oi1/chatgpts_impromptu_web_lookups_can_open_source/
|
IrisColt
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3oi1
| false | null |
t3_1ks3oi1
|
/r/LocalLLaMA/comments/1ks3oi1/chatgpts_impromptu_web_lookups_can_open_source/
| false | false |
self
| 0 | null |
Starting a Discord for Local LLM Deployment Enthusiasts!
| 1 |
[removed]
| 2025-05-21T17:27:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks3olr/starting_a_discord_for_local_llm_deployment/
|
tagrib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3olr
| false | null |
t3_1ks3olr
|
/r/LocalLLaMA/comments/1ks3olr/starting_a_discord_for_local_llm_deployment/
| false | false |
self
| 1 | null |
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source
| 1 |
[removed]
| 2025-05-21T17:28:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks3pyz/tokilake_selfhost_a_private_ai_cloud_for_your/
|
Wise_Day4455
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3pyz
| false | null |
t3_1ks3pyz
|
/r/LocalLLaMA/comments/1ks3pyz/tokilake_selfhost_a_private_ai_cloud_for_your/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=216&crop=smart&auto=webp&s=0773ac87b0f809a700e07546ad86209be51d10f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=320&crop=smart&auto=webp&s=974dd4028ff87a33897d44bf755c9b482a4487f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=640&crop=smart&auto=webp&s=740a3ef4a70060e10c60a2e866a3e7165a27c965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=960&crop=smart&auto=webp&s=87da404089fe69d8a70e8babc4a2c563a90e06fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=1080&crop=smart&auto=webp&s=19097ce864ab5b2ffc1af9de25eba9667c7190d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?auto=webp&s=8c9ff40d6452d04732e6cfd0d8729afea0c2e1f9', 'width': 1200}, 'variants': {}}]}
|
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source
| 1 |
[removed]
| 2025-05-21T17:33:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks3uic/tokilake_selfhost_a_private_ai_cloud_for_your/
|
Wise_Day4455
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3uic
| false | null |
t3_1ks3uic
|
/r/LocalLLaMA/comments/1ks3uic/tokilake_selfhost_a_private_ai_cloud_for_your/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=216&crop=smart&auto=webp&s=0773ac87b0f809a700e07546ad86209be51d10f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=320&crop=smart&auto=webp&s=974dd4028ff87a33897d44bf755c9b482a4487f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=640&crop=smart&auto=webp&s=740a3ef4a70060e10c60a2e866a3e7165a27c965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=960&crop=smart&auto=webp&s=87da404089fe69d8a70e8babc4a2c563a90e06fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=1080&crop=smart&auto=webp&s=19097ce864ab5b2ffc1af9de25eba9667c7190d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?auto=webp&s=8c9ff40d6452d04732e6cfd0d8729afea0c2e1f9', 'width': 1200}, 'variants': {}}]}
|
Starting a Discord for Local LLM Deployment Enthusiasts
| 1 |
[removed]
| 2025-05-21T17:35:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks3w6b/starting_a_discord_for_local_llm_deployment/
|
tagrib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks3w6b
| false | null |
t3_1ks3w6b
|
/r/LocalLLaMA/comments/1ks3w6b/starting_a_discord_for_local_llm_deployment/
| false | false |
self
| 1 | null |
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware) - Open Source
| 1 |
[removed]
| 2025-05-21T17:42:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks425p/tokilake_selfhost_a_private_ai_cloud_for_your/
|
Wise_Day4455
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks425p
| false | null |
t3_1ks425p
|
/r/LocalLLaMA/comments/1ks425p/tokilake_selfhost_a_private_ai_cloud_for_your/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=216&crop=smart&auto=webp&s=0773ac87b0f809a700e07546ad86209be51d10f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=320&crop=smart&auto=webp&s=974dd4028ff87a33897d44bf755c9b482a4487f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=640&crop=smart&auto=webp&s=740a3ef4a70060e10c60a2e866a3e7165a27c965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=960&crop=smart&auto=webp&s=87da404089fe69d8a70e8babc4a2c563a90e06fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=1080&crop=smart&auto=webp&s=19097ce864ab5b2ffc1af9de25eba9667c7190d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?auto=webp&s=8c9ff40d6452d04732e6cfd0d8729afea0c2e1f9', 'width': 1200}, 'variants': {}}]}
|
Arc pro b60 48gb vram
| 14 |
https://videocardz.com/newz/maxsun-unveils-arc-pro-b60-dual-turbo-two-battlemage-gpus-48gb-vram-and-400w-power
| 2025-05-21T17:48:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks47wv/arc_pro_b60_48gb_vram/
|
zathras7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks47wv
| false | null |
t3_1ks47wv
|
/r/LocalLLaMA/comments/1ks47wv/arc_pro_b60_48gb_vram/
| false | false |
self
| 14 |
{'enabled': False, 'images': [{'id': 'GAmv1Z-0vEGZvysNr_W7sYt4MzDAstpGPPZMsg_bJ6Y', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?width=108&crop=smart&auto=webp&s=45651f84e192227cc5fd2b75e4ab4aa94615da2d', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?width=216&crop=smart&auto=webp&s=6b8b1ec52dd41f97cce22cc9fb7fe5cdfa605409', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?width=320&crop=smart&auto=webp&s=81af294d8fc00e404a95e64a5715bd1cf6d12454', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?width=640&crop=smart&auto=webp&s=1fa86e61511d4466859fdeb441d484dfa4ce319e', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?width=960&crop=smart&auto=webp&s=4d0f7635588d9a1b1646c719eec720503911f734', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?width=1080&crop=smart&auto=webp&s=16b769ae000ddf9fb2a3c74b73019882c8f3dbae', 'width': 1080}], 'source': {'height': 1040, 'url': 'https://external-preview.redd.it/3yPGHghVMzYHAb5HXyT6mKs83tOiC4QzGnzAcUM9BIg.jpg?auto=webp&s=610e989b9bb37e122fbe5ea488262a6bee0062b6', 'width': 2000}, 'variants': {}}]}
|
Tokilake: Self-Host a Private AI Cloud for Your GPUs (Access NATted Hardware!) - Open Source
| 1 |
[removed]
| 2025-05-21T17:54:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks4duq/tokilake_selfhost_a_private_ai_cloud_for_your/
|
Wise_Day4455
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks4duq
| false | null |
t3_1ks4duq
|
/r/LocalLLaMA/comments/1ks4duq/tokilake_selfhost_a_private_ai_cloud_for_your/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'SWsR6BF8B-wCJpnAlQOh4Gv6RwJLVcBzaQgItQJahMU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=108&crop=smart&auto=webp&s=f5522e7afee672a20964c788b66486037d7d69e9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=216&crop=smart&auto=webp&s=0773ac87b0f809a700e07546ad86209be51d10f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=320&crop=smart&auto=webp&s=974dd4028ff87a33897d44bf755c9b482a4487f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=640&crop=smart&auto=webp&s=740a3ef4a70060e10c60a2e866a3e7165a27c965', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=960&crop=smart&auto=webp&s=87da404089fe69d8a70e8babc4a2c563a90e06fc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?width=1080&crop=smart&auto=webp&s=19097ce864ab5b2ffc1af9de25eba9667c7190d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bA0_QmSLYFpl9V716td8shzgi6k-l2hulq6JSUFtHac.jpg?auto=webp&s=8c9ff40d6452d04732e6cfd0d8729afea0c2e1f9', 'width': 1200}, 'variants': {}}]}
|
Introducing Exosphere: Platform for batch AI agents!
| 1 |
[removed]
| 2025-05-21T17:55:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks4e29/introducing_exosphere_platform_for_batch_ai_agents/
|
jain-nivedit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks4e29
| false | null |
t3_1ks4e29
|
/r/LocalLLaMA/comments/1ks4e29/introducing_exosphere_platform_for_batch_ai_agents/
| false | false |
self
| 1 | null |
Platform for batch AI agents
| 1 |
[removed]
| 2025-05-21T17:57:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks4g5r/platform_for_batch_ai_agents/
|
jain-nivedit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks4g5r
| false | null |
t3_1ks4g5r
|
/r/LocalLLaMA/comments/1ks4g5r/platform_for_batch_ai_agents/
| false | false |
self
| 1 | null |
Batch AI agents
| 1 |
[removed]
| 2025-05-21T18:01:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks4kbr/batch_ai_agents/
|
jain-nivedit
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks4kbr
| false | null |
t3_1ks4kbr
|
/r/LocalLLaMA/comments/1ks4kbr/batch_ai_agents/
| false | false |
self
| 1 | null |
Perchance RP/RPG story interface for local model?
| 4 | 2025-05-21T18:14:26 |
Shockbum
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks4vql
| false | null |
t3_1ks4vql
|
/r/LocalLLaMA/comments/1ks4vql/perchance_rprpg_story_interface_for_local_model/
| false | false | 4 |
{'enabled': True, 'images': [{'id': 'j147zoA2bk_BqU9e6UNIbq8t5IY3IFoo9BdOY6kMP_U', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/h8nikytye62f1.jpeg?width=108&crop=smart&auto=webp&s=217c7f86a44106a85974ff043c9804e69f46cd9e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/h8nikytye62f1.jpeg?width=216&crop=smart&auto=webp&s=739ea676662b9eb3fae8c0a0c9dafac8aa8f30c7', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/h8nikytye62f1.jpeg?width=320&crop=smart&auto=webp&s=1a837d3444ecf036d2d4558e3a77306fe2c0138b', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/h8nikytye62f1.jpeg?width=640&crop=smart&auto=webp&s=2bbfe9f051b0c18e313aed3cfe84613737a96080', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/h8nikytye62f1.jpeg?width=960&crop=smart&auto=webp&s=b94a6f988509a6c7753fe71b0d0d16e6237a9530', 'width': 960}], 'source': {'height': 1000, 'url': 'https://preview.redd.it/h8nikytye62f1.jpeg?auto=webp&s=fe60a7b627f4ee6388195a3f454d4993e76e2489', 'width': 1000}, 'variants': {}}]}
|
|||
Bosgame M5 AI Mini PC - $1699 | AMD Ryzen AI Max+ 395, 128gb LPDDR5, and 2TB SSD
| 10 |
https://www.bosgamepc.com/products/bosgame-m5-ai-mini-desktop-ryzen-ai-max-395
| 2025-05-21T18:42:02 |
https://www.bosgamepc.com/products/bosgame-m5-ai-mini-desktop-ryzen-ai-max-395
|
policyweb
|
bosgamepc.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks5kpe
| false | null |
t3_1ks5kpe
|
/r/LocalLLaMA/comments/1ks5kpe/bosgame_m5_ai_mini_pc_1699_amd_ryzen_ai_max_395/
| false | false |
default
| 10 | null |
Devstral with vision support (from ngxson)
| 23 |
[https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF](https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF)
Just sharing in case people did not notice (version with vision "re-added"). Did not test yet but will do that soonly.
| 2025-05-21T18:45:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks5nul/devstral_with_vision_support_from_ngxson/
|
Leflakk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks5nul
| false | null |
t3_1ks5nul
|
/r/LocalLLaMA/comments/1ks5nul/devstral_with_vision_support_from_ngxson/
| false | false |
self
| 23 |
{'enabled': False, 'images': [{'id': 'm0-I3lsSbmMtSf-Bnz_2X_1TVtuh-tbWaKroW5KWCec', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?width=108&crop=smart&auto=webp&s=5639c1a88d7a1be5f21fe637272d2f0f9dae89c5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?width=216&crop=smart&auto=webp&s=693e2ac246a4457cf3336e6cfc1520280ae404c9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?width=320&crop=smart&auto=webp&s=f822dd9f605f6337960aacd9096f0fd6d4f894d4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?width=640&crop=smart&auto=webp&s=b7a09663aefb532cb6fa9e7a39b1b90e6d1b03e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?width=960&crop=smart&auto=webp&s=390c68792380b8053e69e80d2ca1d5c4f5b91edd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?width=1080&crop=smart&auto=webp&s=ebbca2e368df9c04748ed10ea481f56eff267ac9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LquwuxAIjWiQrDO0hGeOBS10nRcmudBN1QF7bkdxtzc.jpg?auto=webp&s=41c480a79f43c3aae0c027930847b2ba8a3754a7', 'width': 1200}, 'variants': {}}]}
|
Reliable function calling with vLLM
| 2 |
Hi all,
we're experimenting with function calling using open-source models served through vLLM, and we're struggling to get reliable outputs for most agentic use cases.
So far, we've tried: LLaMA 3.3 70B (both vanilla and fine-tuned by Watt-ai for tool use) and Gemma 3 27B.
For LLaMA, we experimented with both the JSON and Pythonic templates/parsers.
Unfortunately nothing seem to work that well:
- Often the models respond with a mix of plain text and function calls, so the calls aren't returned properly in the tool_calls field.
- In JSON format, they frequently mess up brackets or formatting.
- In Pythonic format, we get quotation issues and inconsistent syntax.
Overall, it feels like function calling for local models is still far behind what's available from hosted providers.
Are you seeing the same? We’re currently trying to mitigate by:
1. Tweaking the chat template: Adding hints like “make sure to return valid JSON” or “quote all string parameters.” This seems to help slightly, especially in single-turn scenarios.
2. Improving the parser: Early stage here, but the idea is to scan the entire message for tool calls, not just the beginning. That way we might catch function calls even when mixed with surrounding text.
Curious to hear how others are tackling this. Any tips, tricks, or model/template combos that worked for you?
| 2025-05-21T18:46:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks5oxb/reliable_function_calling_with_vllm/
|
mjf-89
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks5oxb
| false | null |
t3_1ks5oxb
|
/r/LocalLLaMA/comments/1ks5oxb/reliable_function_calling_with_vllm/
| false | false |
self
| 2 | null |
Broke down and bought a Mac Mini - my processes run 5x faster
| 87 |
I ran my process on my $850 Beelink Ryzen 9 32gb machine and it took 4 hours to run - the process calls my 8g llm 42 times during the run. It took 4 hours and 18 minutes. The Mac Mini with an M4 Pro chip and 24gb memory took 47 minutes.
It’s a keeper - I’m returning my Beelink. That unified memory in the Mac used half the memory and used the GPU.
I know I could have bought a used gamer rig cheaper but for a lot of reasons - this is perfect for me. I would much prefer not using the MacOS - Windows is a PITA but I’m used to it. It took about 2 hours of cursing to install my stack and port my code.
I have 2 weeks to return it and I’m going to push this thing to the limits.
| 2025-05-21T18:50:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks5sh4/broke_down_and_bought_a_mac_mini_my_processes_run/
|
ETBiggs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks5sh4
| false | null |
t3_1ks5sh4
|
/r/LocalLLaMA/comments/1ks5sh4/broke_down_and_bought_a_mac_mini_my_processes_run/
| false | false |
self
| 87 | null |
Best ai for Chinese to English translation?
| 1 |
[removed]
| 2025-05-21T18:54:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks5wdx/best_ai_for_chinese_to_english_translation/
|
Civil_Candidate_824
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks5wdx
| false | null |
t3_1ks5wdx
|
/r/LocalLLaMA/comments/1ks5wdx/best_ai_for_chinese_to_english_translation/
| false | false |
self
| 1 | null |
NVLink On 2x 3090 Question
| 4 |
Hello all. I recently got access to 2x RTX 3090 FEs as well as a 4-slot official NVLink bridge connector. I am planning on using this in Linux for AI research and development. I am wondering if there is any motherboard requirement to be able to use NVLink on Linux? It is hard enough to find a motherboard with the right spacing + x8/x8 bifurcation, so I really hope there is no restriction! If there is however, please let me know what series is supported. Currently looking at z690 mbs + 13900k. Thanks a lot 🙏.
| 2025-05-21T19:01:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks6236/nvlink_on_2x_3090_question/
|
Skye7821
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks6236
| false | null |
t3_1ks6236
|
/r/LocalLLaMA/comments/1ks6236/nvlink_on_2x_3090_question/
| false | false |
self
| 4 | null |
Falcon-H1 performance.
| 1 |
[removed]
| 2025-05-21T19:03:02 |
MentionAgitated8682
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks63tw
| false | null |
t3_1ks63tw
|
/r/LocalLLaMA/comments/1ks63tw/falconh1_performance/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'f18o0iKoAwjVEcmdpeJiiLMG36xXuUNy6inV7lIkDD8', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/kkdx3itun62f1.jpeg?width=108&crop=smart&auto=webp&s=7ff2416b05b27a220b530e9c9a4a03ed2966bd50', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/kkdx3itun62f1.jpeg?width=216&crop=smart&auto=webp&s=ae3be6a34f1d5a1cb8ba9688851e435d7f7e78eb', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/kkdx3itun62f1.jpeg?width=320&crop=smart&auto=webp&s=65bbece077314b36c25d13ace53292cdda8b98e0', 'width': 320}, {'height': 486, 'url': 'https://preview.redd.it/kkdx3itun62f1.jpeg?width=640&crop=smart&auto=webp&s=fb7fb4e622f7a11e7d70d0ea3ba0c4291e67d0ac', 'width': 640}], 'source': {'height': 523, 'url': 'https://preview.redd.it/kkdx3itun62f1.jpeg?auto=webp&s=2c22ff6551d900c7719ef686d9f5a43469a0b3de', 'width': 688}, 'variants': {}}]}
|
||
Looking for Advices for a Reliable Linux AI Workstation and Gaming
| 1 |
[removed]
| 2025-05-21T19:08:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks68lq/looking_for_advices_for_a_reliable_linux_ai/
|
MondoGao
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks68lq
| false | null |
t3_1ks68lq
|
/r/LocalLLaMA/comments/1ks68lq/looking_for_advices_for_a_reliable_linux_ai/
| false | false |
self
| 1 | null |
What is tps of qwen3 30ba3b on igpu 780m?
| 1 |
I'm looking to get a home server that can host qwen3 30ba3b, and looking at minipc, with 780m and 64gb ddr5 RAM, or mac mini options, with at least 32gb RAM.
Does anyone have an 780m that can test the speeds, prompt processing and token generation, using llama.cpp or vllm (if it even works on igpu)?
| 2025-05-21T19:24:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks6mlc/what_is_tps_of_qwen3_30ba3b_on_igpu_780m/
|
Zyguard7777777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks6mlc
| false | null |
t3_1ks6mlc
|
/r/LocalLLaMA/comments/1ks6mlc/what_is_tps_of_qwen3_30ba3b_on_igpu_780m/
| false | false |
self
| 1 | null |
Falcon-H1 by tiiuae.
| 3 |
Falcon-H1 seems to perform very decently, even better than gemma3. 0.5B and 1.5B are impressive, especially on math questions.
Any thoughts about the bigger models from your experience with it? Thoughts on the architecture?
| 2025-05-21T19:54:31 |
ilyas555
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks7dht
| false | null |
t3_1ks7dht
|
/r/LocalLLaMA/comments/1ks7dht/falconh1_by_tiiuae/
| false | false | 3 |
{'enabled': True, 'images': [{'id': 'BYiVouPSFbmQlsi7nqdglAvU9l-khKSAva4DrPa17wA', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/pqhnjil1x62f1.jpeg?width=108&crop=smart&auto=webp&s=75fc10eff3c379ff1b37935851fb1ff747112c06', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/pqhnjil1x62f1.jpeg?width=216&crop=smart&auto=webp&s=858b542422bf0215a2aee237230c313bfe0e1dd1', 'width': 216}, {'height': 243, 'url': 'https://preview.redd.it/pqhnjil1x62f1.jpeg?width=320&crop=smart&auto=webp&s=65720b92c40132fd5fad94b00a88544ca7280714', 'width': 320}, {'height': 486, 'url': 'https://preview.redd.it/pqhnjil1x62f1.jpeg?width=640&crop=smart&auto=webp&s=6d19dab02140aa6a47fb13661ccea27d5acb2f51', 'width': 640}], 'source': {'height': 523, 'url': 'https://preview.redd.it/pqhnjil1x62f1.jpeg?auto=webp&s=6aa3290c898f096ae8bc2815e86faf63c36d9314', 'width': 688}, 'variants': {}}]}
|
||
Llama.cpp vs onnx runtime
| 4 |
Whats better in terms of performance for both android and iOS?
also anyone tried gamma 3n by Google? Would love to know about it
| 2025-05-21T20:06:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks7obu/llamacpp_vs_onnx_runtime/
|
Away_Expression_3713
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks7obu
| false | null |
t3_1ks7obu
|
/r/LocalLLaMA/comments/1ks7obu/llamacpp_vs_onnx_runtime/
| false | false |
self
| 4 | null |
I added semantic search to our codebase with RAG. No infra, no vector DB, 50 lines of Python.
| 1 |
We’ve been exploring ways to make our codebase more searchable for both humans and LLM agents. Standard keyword search doesn’t cut it when trying to answer questions like:
* “Where is ingestion triggered?”
* “How are retries configured in Airflow?”
* “Where do we handle usage tracking?”
We didn’t want to maintain embedding pipelines or spin up vector databases, so we tried a lightweight approach using an API-based tool that handles the heavy lifting.
It worked surprisingly well with under 50 lines of Python to prep documents (with metadata), batch index them, and run natural language queries. No infrastructure setup required.
[Here’s the blog post walking through our setup and code.](https://ducky.ai/blog/semantic-code-search?utm_source=reddit-locallama&utm_medium=post&utm_campaign=technical-use_case&utm_content=semantic-code-search)
Curious how others are approaching internal search or retrieval layers — especially if you’ve tackled this with in-house tools, LlamaIndex/LangChain, or just Elasticsearch.
| 2025-05-21T20:06:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks7oss/i_added_semantic_search_to_our_codebase_with_rag/
|
superconductiveKyle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks7oss
| false | null |
t3_1ks7oss
|
/r/LocalLLaMA/comments/1ks7oss/i_added_semantic_search_to_our_codebase_with_rag/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'IzWJYiFwjzG4A1U_z4otwvXPIUCx93ztClVJ5X6VtWA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?width=108&crop=smart&auto=webp&s=aa724089a7f8c7c73b0f8b53f2550d520e6efa7b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?width=216&crop=smart&auto=webp&s=eafc712760162ec877556dc4ba5082d01a2e1755', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?width=320&crop=smart&auto=webp&s=ad629d51d671ebe80cfc9f75dfd2d435027328a3', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?width=640&crop=smart&auto=webp&s=24100a54a3740d513cb1cedaa6dc404a6186b6ab', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?width=960&crop=smart&auto=webp&s=481998802286d1f08f7ae57291410b26a87cab19', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?width=1080&crop=smart&auto=webp&s=00163fd8f0d288041de986ecd9c6f934b501d7a4', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Zdi-XkgFKYATZe6kH61ARNNlrbi8KYmb5byDmF5A0nk.jpg?auto=webp&s=023c4d4ce04b0e0ac80111342f5264561bf1b1f1', 'width': 1200}, 'variants': {}}]}
|
I need help with SLMs
| 0 |
I tried running many SLMs including phi3 mini and more. I tried llama.cpp, onnx runtime as of now to run it on android and iOS. Even heard of gamma 3n release recently by Google.
Spent a lot of time in this. Please help me move forward because I didn't got any good results in terms of performance.
What my expectations are? A good SLM which I can run on android and iOS with good performance
| 2025-05-21T20:08:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks7qgk/i_need_help_with_slms/
|
Away_Expression_3713
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks7qgk
| false | null |
t3_1ks7qgk
|
/r/LocalLLaMA/comments/1ks7qgk/i_need_help_with_slms/
| false | false |
self
| 0 | null |
EVO X2 Qwen3 32B Q4 benchmark please
| 3 |
Anyone with the EVO X2 able to test performance of Qwen 3 32B Q4. Ideally with standard context and with 128K max context size.
| 2025-05-21T20:27:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks87oi/evo_x2_qwen3_32b_q4_benchmark_please/
|
MidnightProgrammer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks87oi
| false | null |
t3_1ks87oi
|
/r/LocalLLaMA/comments/1ks87oi/evo_x2_qwen3_32b_q4_benchmark_please/
| false | false |
self
| 3 | null |
Has anybody built a chatbot for tons of pdf‘s with high accuracy yet?
| 1 |
[removed]
| 2025-05-21T20:30:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks89je/has_anybody_built_a_chatbot_for_tons_of_pdfs_with/
|
Melodic_Conflict_831
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks89je
| false | null |
t3_1ks89je
|
/r/LocalLLaMA/comments/1ks89je/has_anybody_built_a_chatbot_for_tons_of_pdfs_with/
| false | false |
self
| 1 | null |
Google's new Text Diffusion model explained, and why it matters for LocalLLaMA
| 1 |
[removed]
| 2025-05-21T20:57:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks8xnx/googles_new_text_diffusion_model_explained_and/
|
amapleson
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks8xnx
| false | null |
t3_1ks8xnx
|
/r/LocalLLaMA/comments/1ks8xnx/googles_new_text_diffusion_model_explained_and/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'A8pZ1jc8VsvBE9_QRAwDJNx28Ocv2Dnv_rNfhgi6nEQ', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/UKaL__HlNkC06VtxCnSqS-6ULeMna6lMu5DWwmwHadA.jpg?width=108&crop=smart&auto=webp&s=16c440f9f5ab482df84dba7d4d22ebb17566aacd', 'width': 108}, {'height': 204, 'url': 'https://external-preview.redd.it/UKaL__HlNkC06VtxCnSqS-6ULeMna6lMu5DWwmwHadA.jpg?width=216&crop=smart&auto=webp&s=257b5fc65fe171e0819bec45c5de06e1bcf4f1c0', 'width': 216}, {'height': 302, 'url': 'https://external-preview.redd.it/UKaL__HlNkC06VtxCnSqS-6ULeMna6lMu5DWwmwHadA.jpg?width=320&crop=smart&auto=webp&s=c05a91638b1d41b4f0b248cb433609be852ba3dd', 'width': 320}, {'height': 605, 'url': 'https://external-preview.redd.it/UKaL__HlNkC06VtxCnSqS-6ULeMna6lMu5DWwmwHadA.jpg?width=640&crop=smart&auto=webp&s=337b9727c8481339101f58f7cda06ef86a8eeca5', 'width': 640}], 'source': {'height': 838, 'url': 'https://external-preview.redd.it/UKaL__HlNkC06VtxCnSqS-6ULeMna6lMu5DWwmwHadA.jpg?auto=webp&s=8581f62b9026daaa8be99bdafaf6a806cbedb2c4', 'width': 886}, 'variants': {}}]}
|
|
Best Local LLM on a 16GB MacBook Pro M4
| 0 |
Hi! I'm looking to run local llm on a MacBook Pro M4 with 16GB of RAM. My intended use case of creative writing for a writing some stories (so to brainstorm certain ideas), some psychological reasoning (to help in making the narrative reasonable and relatable) and possibly some coding in JavaScript or with Godot for some game dev (very rarely this is just to show off to some colleagues tbh)
I'd value some loss in speed over quality of responses but I'm open to options!
P.S. Any recommendations for an ML tool making 2D pixel art or character sprites? I would appreciate some recommendations, I'd love to branch out to making D&D campaign ebooks too. What happened to stable diffusion, I've been out of the loop on that one.
| 2025-05-21T21:03:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ks92uu/best_local_llm_on_a_16gb_macbook_pro_m4/
|
combo-user
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ks92uu
| false | null |
t3_1ks92uu
|
/r/LocalLLaMA/comments/1ks92uu/best_local_llm_on_a_16gb_macbook_pro_m4/
| false | false |
self
| 0 | null |
Tools to perform data transformations using LLMs?
| 1 |
What tools do you use if you have some large amounts of data and performing transformations them is a huge task? With LLMs there's the issue of context length and high API cost. I've been building something in this space, but curious to know what other tools are there?
Any results in both unstructured and structured data are welcome.
| 2025-05-21T21:57:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1ksadu3/tools_to_perform_data_transformations_using_llms/
|
metalvendetta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksadu3
| false | null |
t3_1ksadu3
|
/r/LocalLLaMA/comments/1ksadu3/tools_to_perform_data_transformations_using_llms/
| false | false |
self
| 1 | null |
Local TTS with actual multilingual support
| 9 |
Hey guys! I'm doing a local Home Assistant project that includes a fully local Voice Assistant, all in native Bulgarian. I'm using Whisper Turbo V3 for STT, Qwen3 for the LLM part, but I'm stuck at the TTS part. I'm looking for a good, Bulgarian-speaking, open-source TTS engine (preferably a modern one), but all of the top available ones I've found on HuggingFace don't include Bulgarian. There's a few really good options if i wanted to go closed-source online (i.e Gemini 2.5 TTS, Elevenlabs, Microsoft Azure TTS, etc.), but I'd really rather the whole system work offline.
What options do I have on the locally-run side? Am I doomed to rely on the corporate overlords?
| 2025-05-21T22:14:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1ksas7b/local_tts_with_actual_multilingual_support/
|
oMGalLusrenmaestkaen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksas7b
| false | null |
t3_1ksas7b
|
/r/LocalLLaMA/comments/1ksas7b/local_tts_with_actual_multilingual_support/
| false | false |
self
| 9 | null |
Devstral vs DeepSeek vs Qwen3
| 46 |
What are your expectations about it? The announcement is quite interesting. 🔥
Noticed that they put Gemma3 on the bottom of the chart, but it shows very well on daily basis. 🤔
| 2025-05-21T22:15:51 |
https://mistral.ai/news/devstral
|
COBECT
|
mistral.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksat42
| false | null |
t3_1ksat42
|
/r/LocalLLaMA/comments/1ksat42/devstral_vs_deepseek_vs_qwen3/
| false | false | 46 |
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=216&crop=smart&auto=webp&s=4a5f46c5464cea72c64df6c73d58b15e102c5936', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=320&crop=smart&auto=webp&s=aa1e4abc763404a25bda9d60fe6440b747d6bae4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=640&crop=smart&auto=webp&s=122efd46018c04117aca71d80db3640d390428bd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=960&crop=smart&auto=webp&s=b53cfe1770ee2b37ce0f5b5e1b0fd67d3276a350', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=1080&crop=smart&auto=webp&s=278352f076c5bbdf8f6e7cecedab77d8794332ff', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?auto=webp&s=691d56b882a79feffdb4b780dc6a0db1b2c5d709', 'width': 4800}, 'variants': {}}]}
|
|
I prompted Google Veo 2 for a woman to wash her hands under a faucet of running water. I used a Flux reference image from a setup using Character Creator 4 rendering. This one is way better than the Hailuo Minimax I tried with the same reference image.
| 0 | 2025-05-21T22:51:02 |
https://v.redd.it/drkp3kngs72f1
|
Extension-Fee-8480
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksbkm8
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/drkp3kngs72f1/DASHPlaylist.mpd?a=1750459877%2CZDZlZDE0NjQ3YjQ0MzY2YzM5Y2M0MmYwMWE4YWFjODJlYTVlOWVhODk2OTQwNGEzZWE3YTA2MmQ5MWUxMWNkNg%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/drkp3kngs72f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/drkp3kngs72f1/HLSPlaylist.m3u8?a=1750459877%2CZTQyMjkyZjVlYzUxZGY2M2NiY2EwN2E5NTc3ZmJjZWFiZjQwZDhkZDUzOTU5YzY4MzAxZDg5ZGJhZWJmNmViOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/drkp3kngs72f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1ksbkm8
|
/r/LocalLLaMA/comments/1ksbkm8/i_prompted_google_veo_2_for_a_woman_to_wash_her/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'd2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?width=108&crop=smart&format=pjpg&auto=webp&s=8b782329be0ecdb4e2b498a87dbe46aace91a5a9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?width=216&crop=smart&format=pjpg&auto=webp&s=5c6bc46b87bc721651c4f63c06c3af65585ad077', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?width=320&crop=smart&format=pjpg&auto=webp&s=27131905dcbed2e11db9066029f0bbe007f1d7ba', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?width=640&crop=smart&format=pjpg&auto=webp&s=36762e7afc44130cf49dd13a01574b50d3355d87', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?width=960&crop=smart&format=pjpg&auto=webp&s=a1bfa27b03239e31a9361693b0bcda01e4230f5e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=24387e4c86f2db48a6132d61817514864826013b', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/d2RrdTFqbmdzNzJmMXapZT7-gpnrYQ1aIjbqu8DN2FjezwDjjBC8-BcqD9aP.png?format=pjpg&auto=webp&s=7b805fcdcf500752be6a6ff8d3ec400748af43c3', 'width': 1280}, 'variants': {}}]}
|
||
I created a story generator that streams forever - all running locally on my desktop.
| 1 |
[removed]
| 2025-05-21T22:53:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1ksbmbu/i_created_a_story_generator_that_streams_forever/
|
-Ants-In-Pants-
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksbmbu
| false | null |
t3_1ksbmbu
|
/r/LocalLLaMA/comments/1ksbmbu/i_created_a_story_generator_that_streams_forever/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'S409SasqYcx_GUZ9RTqjucd0yhcAT7--IBgD3U7cLa8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=108&crop=smart&auto=webp&s=e05a5b7d772c13aaa998d37df68da0287dbf6cd0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=216&crop=smart&auto=webp&s=40d960d65694ce6af79167522306557bde1bd29a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=320&crop=smart&auto=webp&s=30fb68859c75fb1907b32c241ba3ec8f202e7213', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?auto=webp&s=771d1f14f53af0d8517bfed232f9658da0762307', 'width': 480}, 'variants': {}}]}
|
Blackwell 5000 vs DGX
| 2 |
I’m on an AM4 platform, and looking for guidance on the trade offs between the dgx spark vs the similarly priced Blackwell 5000. I would like to be able to run llms locally for my coding needs, a bit of invokeai fun, and in general explore all of the cool innovations in open source. Are the models that can fit into 48gb good enough for local development experiences? I am primarily focused on full stack development in JavaScript/typescript. Or should I lean towards more memory footprint with DGX Spark?
My experience to date has primarily been cursor + Claude 3.5/3.7 models. I understand too, that open source will likely not meet the 3.7 model accuracy, but maybe my assumptions could be wrong for specific languages. Many thanks!
| 2025-05-21T23:25:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kscb9d/blackwell_5000_vs_dgx/
|
cpfowlke
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kscb9d
| false | null |
t3_1kscb9d
|
/r/LocalLLaMA/comments/1kscb9d/blackwell_5000_vs_dgx/
| false | false |
self
| 2 | null |
AI Agents and assistants
| 5 |
I’ve been trying various AI agents and assistants.
I want:
- a coding assistant that can analyze code, propose/make changes, create commits maybe
- search the internet, save the info, find URLs, download git repos maybe
- examine my code on disk, tell me why it sucks, web search data on disk, and add to the memory context if necessary to analyze
- read/write files in a sandbox.
I’ve looked at Goose and AutoGPT. What other tools are out there for a local LLM? Are there any features I should be looking out for?
It would be nice to just ask the LLM, “search the web for X, clone the git repo, save it /right/here/“. Or “do a web search, find the latest method/tool for X”
Now tell me why I’m dumb and expect too much. :)
| 2025-05-21T23:35:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kscibt/ai_agents_and_assistants/
|
johnfkngzoidberg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kscibt
| false | null |
t3_1kscibt
|
/r/LocalLLaMA/comments/1kscibt/ai_agents_and_assistants/
| false | false |
self
| 5 | null |
Benchmarking FP8 vs GGUF:Q8 on RTX 5090 (Blackwell SM120)
| 9 |
Now that the first FP8 implementations for RTX Blackwell (SM120) are available in vLLM, I’ve benchmarked several models and frameworks under Windows 11 with WSL (Ubuntu 24.04):
* vLLM with [https://huggingface.co/RedHatAI/phi-4-FP8-dynamic](https://huggingface.co/RedHatAI/phi-4-FP8-dynamic) (FP8 compressed-tensors)
* Ollama with [https://huggingface.co/unsloth/phi-4-GGUF](https://huggingface.co/unsloth/phi-4-GGUF) (Q8\_0)
* LM Studio with [https://huggingface.co/lmstudio-community/phi-4-GGUF](https://huggingface.co/lmstudio-community/phi-4-GGUF) (Q8\_0)
In all cases the models were loaded with a maximum context length of 16k.
Benchmarks were performed using [https://github.com/huggingface/inference-benchmarker](https://github.com/huggingface/inference-benchmarker)
Here’s the Docker command used:
sudo docker run --network host -e HF_TOKEN=$HF_TOKEN \
-v ~/inference-benchmarker-results:/opt/inference-benchmarker/results \
inference_benchmarker inference-benchmarker \
--url $URL \
--rates 1.0 --rates 10.0 --rates 30.0 --rates 100.0 \
--max-vus 800 --duration 120s --warmup 30s --benchmark-kind rate \
--model-name $ModelName \
--tokenizer-name "microsoft/phi-4" \
--prompt-options "num_tokens=8000,max_tokens=8020,min_tokens=7980,variance=10" \
--decode-options "num_tokens=8000,max_tokens=8020,min_tokens=7980,variance=10"
# URL should point to your local vLLM/Ollama/LM Studio instance.
# ModelName corresponds to the loaded model, e.g. "hf.co/unsloth/phi-4-GGUF:Q8_0" (Ollama) or "phi-4" (LM Studio)
# Note: For 200-token prompt benchmarking, use the following options:
--prompt-options "num_tokens=200,max_tokens=220,min_tokens=180,variance=10" \
--decode-options "num_tokens=200,max_tokens=220,min_tokens=180,variance=10"
Results:
* 200 token prompts: [https://huggingface.co/spaces/textgeflecht/inference-benchmarking-results-phi4-200-tokens](https://huggingface.co/spaces/textgeflecht/inference-benchmarking-results-phi4-200-tokens)
* 8000 token prompts: [https://huggingface.co/spaces/textgeflecht/inference-benchmarking-results-phi4-8000-tokens](https://huggingface.co/spaces/textgeflecht/inference-benchmarking-results-phi4-8000-tokens)
[screenshot: 200 token prompts](https://preview.redd.it/7q460i1fv72f1.png?width=2500&format=png&auto=webp&s=a7c313e1b3bb83acea8406a117c84b3ef8f19df1)
[screenshot: 8000 token prompts](https://preview.redd.it/y6hcf4ckv72f1.png?width=2498&format=png&auto=webp&s=33af700454d8128c9cd32653660f8e70d0c6fb0a)
Observations:
* It is already well-known that **vLLM** offers high token throughput given sufficient request rates. In case of phi-4 I archieved 3k tokens/s, with smaller models like Llama 3.1 8B up to 5.5k tokens/s was possible (the latter one is not in the benchmark screenshots or links above; I'll test again once more FP8 kernel optimizations are implemented in vLLM).
* **LM Studio**: Adjusting the “Evaluation Batch Size” to 16k didn't noticeably improve throughput. Any tips?
* **Ollama**: I couldn’t find any settings to optimize for higher throughput.
| 2025-05-21T23:41:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kscn2n/benchmarking_fp8_vs_ggufq8_on_rtx_5090_blackwell/
|
drulee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kscn2n
| false | null |
t3_1kscn2n
|
/r/LocalLLaMA/comments/1kscn2n/benchmarking_fp8_vs_ggufq8_on_rtx_5090_blackwell/
| false | false | 9 |
{'enabled': False, 'images': [{'id': 'M_gTmLtCfRqDWG1zQCFF07fs4TLqvtzbxGKM41ar9uw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?width=108&crop=smart&auto=webp&s=9d2d00426a93a0f40beec99054056b16be635e71', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?width=216&crop=smart&auto=webp&s=7b26fa274184c904d602e6a1f6230ce069b44abe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?width=320&crop=smart&auto=webp&s=f7c640c4b138b67e05c56539c1d3a24044a06902', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?width=640&crop=smart&auto=webp&s=5463507e8f4650cdb7014b6c0266da600351cb78', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?width=960&crop=smart&auto=webp&s=6cca8b23f1ba2bc9493f955b9be8a2c43d460e59', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?width=1080&crop=smart&auto=webp&s=75d444943c4516b14b2528770568f7a13898dba1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NX0ZFcOGjNWeWZmvYhRiRB7Gy7xI5qS47yN1p8Z8lh0.jpg?auto=webp&s=eb0e192f2420d25b6b7f268f24a3a3b3f546f650', 'width': 1200}, 'variants': {}}]}
|
|
Qwen3 is impressive but sometimes acts like it went through lobotomy. Have you experienced something similar?
| 32 |
I've tested Qwen3 32b at Q4\_s, Qwen3 30b-A3B Q5\_m and Qwen 14b Q6\_k a few days ago. The 14b was the fastest one for me since it didn't require loading into RAM (I have 16gb VRAM) (and yes the 30b one was 2-5t/s slower than 14b).
Qwen3 14b was very impressive at basic math, even when I ended up just bashing my keyboard and giving it stuff like this to solve: 37478847874 + 363605 \* 53, it somehow got them right (also more advanced math). Weirdly, it was usually better to turn thinking off for these. I was happy to find out this model was the best so far among the local models at talking in my language (not english), so will be great for multilingual tasks.
However it sometimes fails to properly follow instructions/misunderstands them, or ignores small details I ask for, like formatting. Enabling the thinking improves a lot on this though for the 14b and 30b models. The 32b is a lot better at this, even without thinking, but not perfect either. It sometimes gives the dumbest responses I've experienced, even the 32b. For example this was my first contact with the 32b model:
Me: "Hello, are you Qwen?"
Qwen 32b: "Hi I am not Qwen, you might be confusing me with someone else. My name is Qwen".
I was thinking "what is going on here?", it reminded me of barely functional 1b-3b models in Q4 lobotomy quants I had tested for giggles ages ago. It never did something blatantly stupid like this again, but some weird responses come up occasionally, also I feel like it sometimes struggles with english (?), giving oddly formulated responses, other models like Mistrals never did this.
Other thing, both 14b and 32b did a similar weird response (I checked 32b after I was shocked at 14b, copying the same messages I used before). I will give an example, not what I actually talked about with it, but it was like this: I asked "Oh recently my head is hurting, what to do?" And after giving some solid advice it gave me this, (word for word in the 1st sentence!): "You are not just headache! You are right to be concerned!" and went on with stuff like "Your struggles are valid and" (etc...) First of all this barely makes sense wth is "You are not just a headache!" like duh? I guess it tried to do some not really needed kindness/mental health support thing but it ended up sounding weird and almost patronizing.
And it talks too much. I'm talking about what it says after thinking or with thinking mode OFF, not what it is saying while it's thinking. Even during characters/RP it's just not really good because it gives me like 10 lines per response, where it just fast-track hallucinates unneeded things, and frequently detaches and breaks character, talking in 3rd person about the character it should RP as. Although disliking too much talking is subjective so other people might love this. I call the talking too much + breaking character during RP "Gemmaism" because gemma 2 27b also did this all the time and it drove me insane back then too.
So for RP/casual chat/characters I still prefer Mistral 22b 2409 and Mistral Nemo (and their finetunes). So far it's a mixed bag for me because of these, it could both impress and shock me at different times.
| 2025-05-21T23:42:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kscnlo/qwen3_is_impressive_but_sometimes_acts_like_it/
|
AltruisticList6000
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kscnlo
| false | null |
t3_1kscnlo
|
/r/LocalLLaMA/comments/1kscnlo/qwen3_is_impressive_but_sometimes_acts_like_it/
| false | false |
self
| 32 | null |
I built an Open-Source AI Resume Tailoring App with LangChain & Ollama - Looking for feedback & my next CV/GenAI role!
| 0 |
I've been diving deep into the LLM world lately and wanted to share a project I've been tinkering with: an **AI-powered Resume Tailoring application**.
**The Gist:** You feed it your current resume and a job description, and it tries to tweak your resume's keywords to better align with what the job posting is looking for. We all know how much of a pain manual tailoring can be, so I wanted to see if I could automate parts of it.
**Tech Stack Under the Hood:**
* **Backend:** LangChain is the star here, using hybrid retrieval (BM25 for sparse, and a dense model for semantic search). I'm running language models locally using Ollama, which has been a fun experience.
* **Frontend:** Good ol' React.
**Current Status & What's Next:**
It's definitely not perfect yet – more of a proof-of-concept at this stage. I'm planning to spend this weekend refining the code, improving the prompting, and maybe making the UI a bit slicker.
**I'd love your thoughts!** If you're into RAG, LangChain, or just resume tech, I'd appreciate any suggestions, feedback, or even contributions. The code is open source:
* **Project Repo:** [https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/resume-tailor](https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/resume-tailor)
**On a related note (and the other reason for this post!):** I'm actively on the hunt for new opportunities, specifically in **Computer Vision and Generative AI / LLM domains**. Building this project has only fueled my passion for these areas. If your team is hiring, or you know someone who might be interested in a profile like mine, I'd be thrilled if you reached out.
* **My Email:** [[email protected]](https://www.google.com/url?sa=E&q=mailto%3Apavankunchalaofficial%40gmail.com)
* **My GitHub Profile (for more projects):** [https://github.com/Pavankunchala](https://github.com/Pavankunchala)
* **My Resume:** [https://drive.google.com/file/d/1ODtF3Q2uc0krJskE\_F12uNALoXdgLtgp/view](https://drive.google.com/file/d/1ODtF3Q2uc0krJskE_F12uNALoXdgLtgp/view)
Thanks for reading this far! Looking forward to any discussions or leads.
| 2025-05-21T23:48:09 |
https://v.redd.it/rimmf6n8282f1
|
Solid_Woodpecker3635
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kscs5w
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/rimmf6n8282f1/DASHPlaylist.mpd?a=1750463303%2CNDg2ZTA1YjY2ZGZmOGI3Njk4YmQ3Y2ZiMDIzYWNiOWJkMDZhN2EwMDc0ZmZmNGMwYTY5NDU3Zjg1MjM0NmQ0Zg%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/rimmf6n8282f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 942, 'hls_url': 'https://v.redd.it/rimmf6n8282f1/HLSPlaylist.m3u8?a=1750463303%2CZWIzZjY3NjBhODkyMWMxNDZmMGEwNzdhMzQ5ZmM4YmM1YjAxYTE3NTIxZTAzM2U5ZjJhMDMzZTM2NGI5MTIxOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rimmf6n8282f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kscs5w
|
/r/LocalLLaMA/comments/1kscs5w/i_built_an_opensource_ai_resume_tailoring_app/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?width=108&crop=smart&format=pjpg&auto=webp&s=609dabb647f900c86e1bd63336656ceb69de4ed5', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?width=216&crop=smart&format=pjpg&auto=webp&s=9df960f9b7ce5209bd286845bb8e204c0bad647a', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?width=320&crop=smart&format=pjpg&auto=webp&s=c35f93f2130618f26eacbcb3ac83b181ff1f5d99', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?width=640&crop=smart&format=pjpg&auto=webp&s=258e1458aab5839fa2e8c4efd4cdcb90d9088038', 'width': 640}, {'height': 471, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?width=960&crop=smart&format=pjpg&auto=webp&s=c1d161d3afb43fac8a970e4995a8308c8e95b963', 'width': 960}, {'height': 530, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f7912cb71d1d13992a24e12d19ded05a7ee59e02', 'width': 1080}], 'source': {'height': 1196, 'url': 'https://external-preview.redd.it/Ym1sM3g2bjgyODJmMfQDJh93iwJo5nXUO4UpJpk3IDUw8nDIGOAeiz1giQIg.png?format=pjpg&auto=webp&s=9120fe4ac188333e92a929e4b7e4eb71181a1c8b', 'width': 2436}, 'variants': {}}]}
|
|
Where is DeepSeek R2?
| 0 |
Seriously, what's going on with the Deepseek team? News outlets were confident R2 will be released in April. Some claimed early May. Google released 2 SOTA models after R2 (and Gemma-3 family). Alibaba released 2 families of models since then. Heck, even ClosedAI released o3 and o4.
What is the Deepseek team cooking? I can't think of any model release that made me this excited and anxious at the same time! I am excited at the prospect of another release that would disturb the whole world (and tank Nvidia's stocks again). What new breakthroughs will the team make this time?
At the same time, I am anxious at the prospect of R2 not being anything special, which would just confirm what many are whispering in the background: Maybe we just ran into a wall, this time for real.
I've been following the open-source llm industry since llama leaked, and it has become like Christmas every day for me. I don't want that to stop!
What do you think?
| 2025-05-21T23:53:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kscwf0/where_is_deepseek_r2/
|
Iory1998
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kscwf0
| false | null |
t3_1kscwf0
|
/r/LocalLLaMA/comments/1kscwf0/where_is_deepseek_r2/
| false | false |
self
| 0 | null |
Intel introduces AI Assistant Builder
| 9 | 2025-05-22T00:03:02 |
https://github.com/intel/intel-ai-assistant-builder
|
reps_up
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksd3ba
| false | null |
t3_1ksd3ba
|
/r/LocalLLaMA/comments/1ksd3ba/intel_introduces_ai_assistant_builder/
| false | false | 9 |
{'enabled': False, 'images': [{'id': '6EIJuaMtcCyCEql43kkf2EFeHWy1BNA3ZnPHNMBU_uI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?width=108&crop=smart&auto=webp&s=409ba12fe26dc26e79f475ebe4b0625c8b83fd65', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?width=216&crop=smart&auto=webp&s=0828a83ea6f74fdd6fac2589c6032d017e92b685', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?width=320&crop=smart&auto=webp&s=670382464f8144310b173953bdbc6e3087b1aee2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?width=640&crop=smart&auto=webp&s=9328d1f249cf5156cf80fe0a2b7f9d6241838047', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?width=960&crop=smart&auto=webp&s=33d0127989519ff55915403ab99baf06c7f2feca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?width=1080&crop=smart&auto=webp&s=7d09ac7bb4ba458cf40fa9c9bd382c54faf4cc90', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PCDKZl8OEsOR2xW0cCFX0HbUlT9eSGSHwqdpNPtlRdE.jpg?auto=webp&s=64a0a66cba8f9a5792dfb96579a0cd941d372ad8', 'width': 1200}, 'variants': {}}]}
|
||
Devstral Small from 2023
| 3 |
knowledge cutoff in 2023 many things has been changed in the development field. very disappointing but can fine-tune own version
| 2025-05-22T00:07:23 |
Null_Execption
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksd6el
| false | null |
t3_1ksd6el
|
/r/LocalLLaMA/comments/1ksd6el/devstral_small_from_2023/
| false | false | 3 |
{'enabled': True, 'images': [{'id': 'P4Ut2BPslzTz9oX-BPMCld09xGxRUuaMzAb0q0DJcU8', 'resolutions': [{'height': 21, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?width=108&crop=smart&auto=webp&s=70c1544c8b232c3ce6305f05167f6d7c3ba4341e', 'width': 108}, {'height': 43, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?width=216&crop=smart&auto=webp&s=acf88410ecf5728cb20f224a68a98c79b5c76109', 'width': 216}, {'height': 64, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?width=320&crop=smart&auto=webp&s=8ae31fbe1b95f402780726ff8e0d09683b624fff', 'width': 320}, {'height': 128, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?width=640&crop=smart&auto=webp&s=bf73622f9c2c159457715cb457af4d1e8df307dc', 'width': 640}, {'height': 192, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?width=960&crop=smart&auto=webp&s=c15e8757def001072d57f28640a2bd68b033fe84', 'width': 960}, {'height': 216, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?width=1080&crop=smart&auto=webp&s=5c02ff1671671bb31ac69636d060be0bb8822550', 'width': 1080}], 'source': {'height': 318, 'url': 'https://preview.redd.it/wh80ms6w582f1.png?auto=webp&s=dcba5b2c3ec9e8fbc7aa37d302c14c0baef84ebf', 'width': 1586}, 'variants': {}}]}
|
||
4-bit quantized Moondream: 42% less memory with 99.4% accuracy
| 148 | 2025-05-22T00:19:04 |
https://moondream.ai/blog/smaller-faster-moondream-with-qat
|
radiiquark
|
moondream.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksdeup
| false | null |
t3_1ksdeup
|
/r/LocalLLaMA/comments/1ksdeup/4bit_quantized_moondream_42_less_memory_with_994/
| false | false |
default
| 148 | null |
|
Harnessing the Universal Geometry of Embeddings
| 62 | 2025-05-22T00:32:59 |
https://arxiv.org/abs/2505.12540
|
Recoil42
|
arxiv.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1ksdox8
| false | null |
t3_1ksdox8
|
/r/LocalLLaMA/comments/1ksdox8/harnessing_the_universal_geometry_of_embeddings/
| false | false |
default
| 62 | null |
|
Any of the concurrent backends (vLLM, SGlang etc.) support model switching?
| 8 |
I need to run both a VLM and an LLM. I could use two GPUs/containers for this but that obviously doubles the cost. Any of big name backends like vLLM or SGlang support model switching or loading multiple models on the same GPU? What's the best way to go about this? Or is it simple a dream at the moment?
| 2025-05-22T00:55:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kse4pe/any_of_the_concurrent_backends_vllm_sglang_etc/
|
No-Break-7922
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kse4pe
| false | null |
t3_1kse4pe
|
/r/LocalLLaMA/comments/1kse4pe/any_of_the_concurrent_backends_vllm_sglang_etc/
| false | false |
self
| 8 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.