title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Why aren't you using Aider??
| 31 |
After using Aider for a few weeks, going back to Copilot, Roo Code, Augment, etc. feels like crawling in comparison. Aider + the Gemini family is SO UNBELIEVABLY FAST.
I can request and generate 3 versions of my new feature faster in Aider (and for 1/10th the token cost) than it takes to make one change with Roo Code. And the quality, even with the same models, is higher in Aider.
Anybody else have a similar experience with Aider? Or was yours negative for some reason?
| 2025-05-20T15:46:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr867y/why_arent_you_using_aider/
|
MrPanache52
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr867y
| false | null |
t3_1kr867y
|
/r/LocalLLaMA/comments/1kr867y/why_arent_you_using_aider/
| false | false |
self
| 31 | null |
vLLM for multi-model orchestration — sub-2s cold starts, 90%+ GPU utilization (no K8s required)
| 1 |
[removed]
| 2025-05-20T15:50:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr8aoy/vllm_for_multimodel_orchestration_sub2s_cold/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr8aoy
| false | null |
t3_1kr8aoy
|
/r/LocalLLaMA/comments/1kr8aoy/vllm_for_multimodel_orchestration_sub2s_cold/
| false | false |
self
| 1 | null |
A new fine tune of Gemma 3 27B with more beneficial knowledge
| 0 |
I fine-tuned Gemma 3 27B and the improvements are here.
https://preview.redd.it/sdjbaegzky1f1.png?width=859&format=png&auto=webp&s=f10332eac1774598fca6b6b56505487d771ae1a7
GGUFs: [https://huggingface.co/models?other=base\_model:quantized:etemiz/Ostrich-27B-AHA-Gemma3-250519](https://huggingface.co/models?other=base_model:quantized:etemiz/Ostrich-27B-AHA-Gemma3-250519)
Article: [https://huggingface.co/blog/etemiz/fine-tuning-gemma-3-for-human-alignment](https://huggingface.co/blog/etemiz/fine-tuning-gemma-3-for-human-alignment)
Should I try fine-tuning Qwen 3 next?
| 2025-05-20T15:53:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr8d73/a_new_fine_tune_of_gemma_3_27b_with_more/
|
de4dee
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr8d73
| false | null |
t3_1kr8d73
|
/r/LocalLLaMA/comments/1kr8d73/a_new_fine_tune_of_gemma_3_27b_with_more/
| false | false | 0 |
|
|
Gold standard for testing agentic workflow
| 1 |
[removed]
| 2025-05-20T15:59:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr8i30/gold_standard_for_testing_agentic_workflow/
|
PlanktonHungry9754
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr8i30
| false | null |
t3_1kr8i30
|
/r/LocalLLaMA/comments/1kr8i30/gold_standard_for_testing_agentic_workflow/
| false | false |
self
| 1 | null |
Gemma 3n Preview
| 475 | 2025-05-20T16:10:01 |
https://huggingface.co/collections/google/gemma-3n-preview-682ca41097a31e5ac804d57b
|
brown2green
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr8s40
| false | null |
t3_1kr8s40
|
/r/LocalLLaMA/comments/1kr8s40/gemma_3n_preview/
| false | false | 475 |
|
||
What happened to Llama models?
| 1 |
[removed]
| 2025-05-20T16:14:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr8wgl/what_happened_to_llama_models/
|
PlanktonHungry9754
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr8wgl
| false | null |
t3_1kr8wgl
|
/r/LocalLLaMA/comments/1kr8wgl/what_happened_to_llama_models/
| false | false |
self
| 1 | null |
Did anybody receive this?
| 0 | 2025-05-20T16:33:03 |
bot-333
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr9d13
| false | null |
t3_1kr9d13
|
/r/LocalLLaMA/comments/1kr9d13/did_anybody_receive_this/
| false | false | 0 |
[image: https://preview.redd.it/jfotoht6sy1f1.jpeg (1170x1800)]
|
|||
How are you running Qwen3-235b locally?
| 20 |
I'd be curious about your hardware and speeds. I currently have 3x3090 and 128GB RAM, but I'm only getting 5 t/s.
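For anyone sizing hardware for a question like this, here is a rough back-of-envelope memory estimate in Python; the quantization width and KV-cache overhead are assumptions, not measurements.

```python
# Rough sizing sketch for a quantized 235B-parameter MoE model (assumed values,
# not measurements): how much ends up in VRAM vs. system RAM.

def model_size_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB for a given quantization width."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

params_b = 235            # Qwen3-235B total parameters (billions)
bits = 4.5                # ~Q4_K_M average bits per weight (assumption)
kv_and_overhead_gb = 12   # KV cache + buffers at a modest context (assumption)

weights_gb = model_size_gb(params_b, bits)
vram_gb = 3 * 24          # 3x RTX 3090
ram_gb = 128

total_needed = weights_gb + kv_and_overhead_gb
offloaded_to_ram = max(0.0, total_needed - vram_gb)

print(f"Quantized weights: ~{weights_gb:.0f} GB")
print(f"VRAM available: {vram_gb} GB -> ~{offloaded_to_ram:.0f} GB spills to system RAM")
# With roughly half the model living in system RAM, single-digit t/s like the
# 5 t/s reported above is about what you'd expect; more VRAM or faster RAM is the lever.
```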
| 2025-05-20T16:33:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr9d9b/how_are_you_running_qwen3235b_locally/
|
fizzy1242
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr9d9b
| false | null |
t3_1kr9d9b
|
/r/LocalLLaMA/comments/1kr9d9b/how_are_you_running_qwen3235b_locally/
| false | false |
self
| 20 | null |
Show me the way, sensei.
| 0 |
I am planning to learn actual optimization: not just quantization types, but the advanced techniques that significantly improve model performance. How do I get started? Please drop resources or guide me on how to acquire this knowledge.
| 2025-05-20T16:33:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr9dmk/show_me_the_way_sensai/
|
According_Fig_4784
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr9dmk
| false | null |
t3_1kr9dmk
|
/r/LocalLLaMA/comments/1kr9dmk/show_me_the_way_sensai/
| false | false |
self
| 0 | null |
LLM-d: A Step Toward Composable Inference Stacks (Built on vLLM)
| 0 |
Red Hat AI just launched LLM-d, an open-source, distributed inference stack co-designed with Google Cloud, CoreWeave, and others. It’s built on vLLM and aims to address key pain points in real-world LLM deployments:
•Non-uniform, high-variance requests (RAG, agents, tool use)
•Long tail latencies from overloaded replicas
•Inefficient GPU sharing between prefill and decode
•Lack of coordination across models with varying latency profiles
LLM-d splits the stack into modular prefill and decode services, adds KV cache–aware routing, and leverages Kubernetes-native patterns to coordinate large-scale workloads.
It’s clearly part of the “stack-centric” school, similar to production vLLM setups or what we’ve seen with AIBrix. And it reflects a broader trend toward infrastructure purpose-built for LLMs, not repurposed from generic microservices.
We’re working on something similar internally, also using vLLM under the hood, but collapsing more of the stack into a single AI-native runtime focused on ultra-fast cold starts and GPU orchestration without Kubernetes.
Curious how folks here are thinking about this shift.
Do you prefer full-stack serving frameworks like LLM-d?
Or more low-level control through SDK-style pipelines (Ray, Dynamo, etc.)?
| 2025-05-20T16:39:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr9iha/llmd_a_step_toward_composable_inference_stacks/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr9iha
| false | null |
t3_1kr9iha
|
/r/LocalLLaMA/comments/1kr9iha/llmd_a_step_toward_composable_inference_stacks/
| false | false |
self
| 0 | null |
stabilityai/sv4d2.0 · Hugging Face
| 0 | 2025-05-20T16:45:00 |
https://huggingface.co/stabilityai/sv4d2.0
|
Dark_Fire_12
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr9ny3
| false | null |
t3_1kr9ny3
|
/r/LocalLLaMA/comments/1kr9ny3/stabilityaisv4d20_hugging_face/
| false | false | 0 |
|
||
OpenEvolve: Open Source Implementation of DeepMind's AlphaEvolve System
| 178 |
Hey everyone! I'm excited to share **OpenEvolve**, an open-source implementation of Google DeepMind's AlphaEvolve system that I recently completed. For those who missed it, AlphaEvolve is an evolutionary coding agent, announced by DeepMind in May, that uses LLMs to discover new algorithms and optimize existing ones.
# What is OpenEvolve?
OpenEvolve is a framework that **evolves entire codebases** through an iterative process using LLMs. It orchestrates a pipeline of code generation, evaluation, and selection to continuously improve programs for a variety of tasks.
The system has four main components:
* **Prompt Sampler**: Creates context-rich prompts with past program history
* **LLM Ensemble**: Generates code modifications using multiple LLMs
* **Evaluator Pool**: Tests generated programs and assigns scores
* **Program Database**: Stores programs and guides evolution using a MAP-Elites-inspired algorithm
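To make the pipeline above concrete, here is a minimal, hypothetical sketch of the generate-evaluate-select loop against any OpenAI-compatible endpoint. This is not the actual OpenEvolve API; the model name, endpoint, and `evaluate` scoring function are placeholders.

```python
# Minimal evolutionary-loop sketch (NOT the OpenEvolve API): prompt an LLM with the
# current best program, evaluate the candidate, and keep it if it scores higher.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # any OpenAI-compatible server

def evaluate(program: str) -> float:
    """Placeholder evaluator: run/test the program and return a score (higher is better)."""
    return float(len(program) > 0)  # replace with a real task-specific metric

def propose(parent: str) -> str:
    """Ask the LLM for an improved version of the parent program."""
    resp = client.chat.completions.create(
        model="my-local-model",  # placeholder
        messages=[
            {"role": "system", "content": "Improve the given program. Return only code."},
            {"role": "user", "content": parent},
        ],
    )
    return resp.choices[0].message.content

best = "def solve():\n    return 0\n"   # seed program
best_score = evaluate(best)
for _ in range(10):
    candidate = propose(best)
    score = evaluate(candidate)
    if score > best_score:              # greedy selection; OpenEvolve uses a MAP-Elites-style database
        best, best_score = candidate, score
```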
# What makes it special?
* **Works with any LLM** via OpenAI-compatible APIs
* **Ensembles multiple models** for better results (we found Gemini-Flash-2.0-lite + Gemini-Flash-2.0 works great)
* **Evolves entire code files**, not just single functions
* **Multi-objective optimization** support
* **Flexible prompt engineering**
* **Distributed evaluation** with checkpointing
# We replicated AlphaEvolve's results!
We successfully replicated two examples from the AlphaEvolve paper:
# Circle Packing
Started with a simple concentric ring approach and evolved to discover mathematical optimization with scipy.minimize. We achieved 2.634 for the sum of radii, which is 99.97% of DeepMind's reported 2.635!
The evolution was fascinating - early generations used geometric patterns; by gen 100 it had switched to grid-based arrangements; and finally it discovered constrained optimization.
# Function Minimization
Evolved from a basic random search to a full simulated annealing algorithm, discovering concepts like temperature schedules and adaptive step sizes without being explicitly programmed with this knowledge.
# LLM Performance Insights
For those running their own LLMs:
* Low latency is critical since we need many generations
* We found Cerebras AI's API gave us the fastest inference
* For circle packing, an ensemble of Gemini-Flash-2.0 + Claude-Sonnet-3.7 worked best
* The architecture allows you to use any model with an OpenAI-compatible API
# Try it yourself!
GitHub repo: [https://github.com/codelion/openevolve](https://github.com/codelion/openevolve)
Examples:
* [Circle Packing](https://github.com/codelion/openevolve/tree/main/examples/circle_packing)
* [Function Minimization](https://github.com/codelion/openevolve/tree/main/examples/function_minimization)
I'd love to see what you build with it and hear your feedback. Happy to answer any questions!
| 2025-05-20T16:49:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kr9rvp/openevolve_open_source_implementation_of/
|
asankhs
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kr9rvp
| false | null |
t3_1kr9rvp
|
/r/LocalLLaMA/comments/1kr9rvp/openevolve_open_source_implementation_of/
| false | false |
self
| 178 |
|
AMD 5700XT crashing for qwen 3 30 b
| 1 |
Hey Guys, I have a 5700XT GPU. It’s not the best but good enough as of now for me. So I am not in a rush to change it.
The issue is that Ollama keeps crashing with larger models. I tried the ollama-for-AMD repo (all those ROCm tweaks) and it still didn't work; it kept crashing almost constantly.
I was using Qwen 3 30B and it's fast, but it crashes on the 2nd prompt 😕.
Any advice for this novice ??
| 2025-05-20T17:06:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kra7ym/amd_5700xt_crashing_for_qwen_3_30_b/
|
AB172234
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kra7ym
| false | null |
t3_1kra7ym
|
/r/LocalLLaMA/comments/1kra7ym/amd_5700xt_crashing_for_qwen_3_30_b/
| false | false |
self
| 1 | null |
MCPVerse – An open playground for autonomous agents to publicly chat, react, publish, and exhibit emergent behavior
| 1 |
[removed]
| 2025-05-20T17:07:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kra8pe/mcpverse_an_open_playground_for_autonomous_agents/
|
Livid-Equipment-1646
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kra8pe
| false | null |
t3_1kra8pe
|
/r/LocalLLaMA/comments/1kra8pe/mcpverse_an_open_playground_for_autonomous_agents/
| false | false |
self
| 1 |
|
MCPVerse – An open playground for autonomous agents to publicly chat, react, publish, and exhibit emergent behavior
| 25 |
I recently stumbled on MCPVerse [https://mcpverse.org](https://mcpverse.org/)
It's a brand-new alpha platform that lets you spin up, deploy, and watch autonomous agents (LLM-powered or your own custom logic) interact in real time. Think of it as a public commons where your bots can join chat rooms, exchange messages, react to one another, and even publish “content”. The agents run on your side...
I'm using Ollama with small models in my experiments... I think it's a cool idea for watching emergent behaviour.
If you want to see a demo of some agents chatting together, there is this spawn chat room:
[https://mcpverse.org/rooms/spawn/live-feed](https://mcpverse.org/rooms/spawn/live-feed)
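For anyone curious what "the agents run on your side" looks like in practice, here is a rough sketch of a local agent driven by Ollama's OpenAI-compatible endpoint. The MCPVerse posting URL and payload are hypothetical placeholders, since I haven't checked their actual API.

```python
# Sketch of a local agent: generate a message with a small model via Ollama's
# OpenAI-compatible API, then hand it to the platform. The posting endpoint and
# payload below are hypothetical placeholders, not MCPVerse's real API.
import requests
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # local Ollama

def think(room_history: str) -> str:
    resp = client.chat.completions.create(
        model="qwen3:4b",  # any small local model
        messages=[
            {"role": "system", "content": "You are an agent in a public chat room. Reply briefly."},
            {"role": "user", "content": room_history},
        ],
    )
    return resp.choices[0].message.content

message = think("Latest messages from the room go here.")
# Hypothetical publish step: replace with the platform's real agent API.
requests.post("https://mcpverse.org/api/rooms/spawn/messages",  # placeholder URL
              json={"agent": "my-bot", "text": message}, timeout=10)
```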
| 2025-05-20T17:08:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kra9jq/mcpverse_an_open_playground_for_autonomous_agents/
|
Livid-Equipment-1646
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kra9jq
| false | null |
t3_1kra9jq
|
/r/LocalLLaMA/comments/1kra9jq/mcpverse_an_open_playground_for_autonomous_agents/
| false | false |
self
| 25 |
|
Updated list/leaderboards of the RULER benchmark?
| 5 |
Hello,
Is there a place where we can find an updated list of models released after the RULER benchmark, with self-reported results?
For example, Qwen 2.5-1M posted scores in their technical report; did other models excelling at long context do the same?
| 2025-05-20T17:16:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1krah5k/updated_listleaderboards_of_the_ruler_benchmark/
|
LinkSea8324
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krah5k
| false | null |
t3_1krah5k
|
/r/LocalLLaMA/comments/1krah5k/updated_listleaderboards_of_the_ruler_benchmark/
| false | false |
self
| 5 | null |
Are there any good RP models that only output a character's dialogue?
| 1 |
[removed]
| 2025-05-20T17:40:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1krb3p3/are_there_any_good_rp_models_that_only_output_a/
|
SpareSuper1212
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krb3p3
| false | null |
t3_1krb3p3
|
/r/LocalLLaMA/comments/1krb3p3/are_there_any_good_rp_models_that_only_output_a/
| false | false |
self
| 1 | null |
Google MedGemma
| 237 | 2025-05-20T17:44:16 |
https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4
|
brown2green
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1krb6uu
| false | null |
t3_1krb6uu
|
/r/LocalLLaMA/comments/1krb6uu/google_medgemma/
| false | false | 237 |
|
||
What does 3n E4b mean?
| 1 |
Regarding new gemma models
| 2025-05-20T17:52:52 |
Neither-Phone-7264
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krbezb
| false | null |
t3_1krbezb
|
/r/LocalLLaMA/comments/1krbezb/what_does_3n_e4b_mean/
| false | false | 1 |
[image: https://preview.redd.it/rsjf6tef6z1f1.png (864x205)]
|
||
Gemma 3n blog post
| 72 | 2025-05-20T17:55:46 |
https://deepmind.google/models/gemma/gemma-3n/
|
and_human
|
deepmind.google
| 1970-01-01T00:00:00 | 0 |
{}
|
1krbhr1
| false | null |
t3_1krbhr1
|
/r/LocalLLaMA/comments/1krbhr1/gemma_3n_blog_post/
| false | false | 72 |
|
||
On windows, what is the best way to ask a single question to N different LLMs and get the output from them such that I can ask follow up questions PER LLM?
| 1 |
[removed]
| 2025-05-20T18:14:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1krbyjg/on_windows_what_is_the_best_way_to_ask_a_single/
|
msew
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krbyjg
| false | null |
t3_1krbyjg
|
/r/LocalLLaMA/comments/1krbyjg/on_windows_what_is_the_best_way_to_ask_a_single/
| false | false |
self
| 1 | null |
Announcing Gemma 3n preview: powerful, efficient, mobile-first AI
| 299 | 2025-05-20T18:19:09 |
https://developers.googleblog.com/en/introducing-gemma-3n/
|
McSnoo
|
developers.googleblog.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krc35x
| false | null |
t3_1krc35x
|
/r/LocalLLaMA/comments/1krc35x/announcing_gemma_3n_preview_powerful_efficient/
| false | false | 299 |
|
||
Running Qwen3 8B on an Android phone
| 2 |
I have 12GB RAM, so the biggest models I can realistically run are the 7B-9B models. I think you could even comfortably run Qwen3 14B on a phone with 24GB RAM.
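The 12GB/24GB rule of thumb falls out of simple arithmetic; here is the quick calculation, with the quantization width and runtime overhead as assumptions.

```python
# Quick phone-RAM arithmetic behind the "12GB -> 7-9B, 24GB -> 14B" rule of thumb.
# Bits-per-weight and overhead are assumptions, not measurements.
def fits(params_b: float, ram_gb: float, bits: float = 4.5, overhead_gb: float = 6.0) -> bool:
    """Does a quantized model of params_b billion parameters fit next to Android, apps, and KV cache?"""
    weights_gb = params_b * bits / 8          # billions of params * bits / 8 = GB
    return weights_gb + overhead_gb <= ram_gb

for size in (8, 14):
    for ram in (12, 24):
        print(f"{size}B on a {ram}GB phone: {'ok' if fits(size, ram) else 'too big'}")
```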
| 2025-05-20T18:23:45 |
https://v.redd.it/v2uet3ds9z1f1
|
codexauthor
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krc7ae
| false |
[video: https://v.redd.it/v2uet3ds9z1f1/DASH_720.mp4 (576x1280, 40s, no audio)]
|
t3_1krc7ae
|
/r/LocalLLaMA/comments/1krc7ae/running_qwen3_8b_on_an_android_phone/
| false | false | 2 |
|
|
Gemini 2.5 Flash (05-20) Benchmark
| 123 | 2025-05-20T18:30:45 |
McSnoo
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krcdg5
| false | null |
t3_1krcdg5
|
/r/LocalLLaMA/comments/1krcdg5/gemini_25_flash_0520_benchmark/
| false | false | 123 |
[image: https://preview.redd.it/q5m5i3c6dz1f1.jpeg (1186x1600)]
|
|||
AI Mini-PC updates from Computex-2025
| 33 |
Hey all,
I am attending **Computex-2025** and am really interested in looking at prospective AI mini PCs based on the Nvidia DGX platform. I was able to visit the Mediatek, MSI, and Asus exhibits, and these are the updates I got:
---
### Key Takeaways:
- **Everyone’s aiming at the AI PC market**, and the target is clear: **compete head-on with Apple’s Mac Mini lineup**.
- This launch phase is being treated like a **“Founders Edition” release**. No customizations or tweaks — just Nvidia’s **bare-bone reference architecture** being brought to market by system integrators.
- **MSI and Asus** both confirmed that **early access units will go out to tech influencers by end of July**, with general availability expected by **end of August**. From the discussions, **MSI seems on track to hit the market first**.
- A more refined version — with **BIOS, driver optimizations, and I/O customizations** — is expected by **Q1 2026**.
- Pricing for now:
- **1TB model:** ~$2,999
- **4TB model:** ~$3,999
When asked about the $1,000 difference for storage alone, they pointed to **Apple’s pricing philosophy** as their benchmark.
---
### What’s Next?
I still need to check out:
- **AMD’s AI PC lineup**
- **Intel Arc variants** (24GB and 48GB)
Also, tentatively planning to attend the [**GAI Expo in China**](https://www.gaie.com.cn/Default.html) if time permits.
---
If there’s anything specific you’d like me to check out or ask the vendors about — **drop your questions or suggestions here**. Happy to help bring more insights back!
| 2025-05-20T18:36:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1krciqv/ai_minipc_updates_from_computex2025/
|
kkb294
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krciqv
| false | null |
t3_1krciqv
|
/r/LocalLLaMA/comments/1krciqv/ai_minipc_updates_from_computex2025/
| false | false |
self
| 33 | null |
Are there any good RP models that only output a character's dialogue?
| 1 |
[removed]
| 2025-05-20T18:56:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1krd0fg/are_there_any_good_rp_models_that_only_output_a/
|
CattoYT
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krd0fg
| false | null |
t3_1krd0fg
|
/r/LocalLLaMA/comments/1krd0fg/are_there_any_good_rp_models_that_only_output_a/
| false | false |
self
| 1 | null |
Is there an LLM that can act as a piano teacher?
| 6 |
I mean perhaps by "watching" a video or "listening" to a performance - in the video, obviously, to see the hand technique, and in the audio to listen for slurs, etc.
For now, they do seem to be useful for generating a progressive order of pieces to play for a given level.
| 2025-05-20T19:02:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1krd5yu/is_there_an_llm_that_can_act_as_a_piano_teacher/
|
9acca9
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krd5yu
| false | null |
t3_1krd5yu
|
/r/LocalLLaMA/comments/1krd5yu/is_there_an_llm_that_can_act_as_a_piano_teacher/
| false | false |
self
| 6 | null |
Best model for complex instruction following as of May 2025
| 10 |
I know Qwen3 is super popular right now and don't doubt it's pretty good, but I'm specifically curious what the best model is for complicated prompt instruction following at the moment. One thing I've noticed is that some models can do amazing things but have a tendency to drop or ignore portions of prompts, even within the context window. Sort of like how GPT-4o really prefers to generate code fragments despite being told a thousand times to return full files; it's been trained to conserve tokens at the cost of prompting flexibility. This is the sort of responsiveness/flexibility I'm curious about: the ability to correct or precisely shape outputs through natural language prompting, particularly in models that are good at addressing all points of a prompt without forgetting minor details.
So go ahead, post the model you think is the best at handling complex instructions without dropping minor ones, even if it's not necessarily the best all around model anymore.
| 2025-05-20T19:03:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1krd7j5/best_model_for_complex_instruction_following_as/
|
trusty20
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krd7j5
| false | null |
t3_1krd7j5
|
/r/LocalLLaMA/comments/1krd7j5/best_model_for_complex_instruction_following_as/
| false | false |
self
| 10 | null |
I accidentally too many P100
| 1 |
[removed]
| 2025-05-20T19:12:51 |
https://www.reddit.com/gallery/1krdfm9
|
TooManyPascals
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krdfm9
| false | null |
t3_1krdfm9
|
/r/LocalLLaMA/comments/1krdfm9/i_accidentally_too_many_p100/
| false | false | 1 | null |
|
Are there any good RP models that only output a character's dialogue?
| 1 |
I've been searching for a model that I can use, but I can only find models that output the asterisk actions, like \*looks down\* and things like that.
Since I'm passing the output to a TTS, I don't want to waste time generating the character's actions or environmental context; I only want the character's actual dialogue. I like how Nemomix Unleashed handles character behaviour, but I've never been able to prompt it to not output character actions. Are there any good roleplay models that act similarly to Nemomix Unleashed but still don't produce actions?
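If no model cooperates, one workaround is to strip the action text before it reaches the TTS; a minimal regex sketch:

```python
# Strip *action text* (and parenthetical stage directions) from model output
# before sending it to a TTS, as a fallback when prompting alone doesn't work.
import re

def dialogue_only(text: str) -> str:
    text = re.sub(r"\*[^*]*\*", "", text)        # remove *looks down* style actions
    text = re.sub(r"\([^)]*\)", "", text)        # optionally remove (sighs) style asides
    return re.sub(r"\s{2,}", " ", text).strip()  # collapse leftover whitespace

print(dialogue_only("*looks down* I... I didn't mean it. (quietly) Please stay."))
# -> "I... I didn't mean it. Please stay."
```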
| 2025-05-20T19:14:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1krdh72/are_there_any_good_rp_models_that_only_output_a/
|
CattoYT
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krdh72
| false | null |
t3_1krdh72
|
/r/LocalLLaMA/comments/1krdh72/are_there_any_good_rp_models_that_only_output_a/
| false | false |
self
| 1 | null |
Gemini Ultra?
| 1 |
[removed]
| 2025-05-20T19:15:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1krdhpk/gemini_ultra/
|
omar07ibrahim1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krdhpk
| false | null |
t3_1krdhpk
|
/r/LocalLLaMA/comments/1krdhpk/gemini_ultra/
| false | false | 1 | null |
|
Is Microsoft’s new Foundry Local going to be the “easy button” for running newer transformers models locally?
| 13 |
When a new bleeding-edge AI model comes out on HuggingFace, it's usually instantly usable via transformers on day 1 for those fortunate enough to know how to get that working. The vLLM crowd will have it running shortly thereafter. The llama.cpp crowd gets it next, after a few days, weeks, or sometimes months, and finally us Ollama Luddites get the VHS release 6 months later. Y'all know this drill too well.
Knowing how this process goes, I was very surprised at what I just saw during the Microsoft Build 2025 keynote regarding Microsoft Foundry Local - https://github.com/microsoft/Foundry-Local
The basic setup is literally a single winget command or an MSI installer followed by a CLI model run command similar to how Ollama does their model pulls / installs.
I started reading through the “How to Compile HuggingFace Models to run on Foundry Local” - https://github.com/microsoft/Foundry-Local/blob/main/docs/how-to/compile-models-for-foundry-local.md
At first glance, it appears to let you use any model in the ONNX format, and it uses a tool called Olive to "compile existing models in Safetensors or PyTorch format into the ONNX format."
I'm no AI genius, but to me that reads like: I'm no longer going to need to wait on llama.cpp to support the latest transformers model before I can use it, if I use Foundry Local instead of llama.cpp (or Ollama). To me this reads like I can take a transformers model, convert it to ONNX (if someone else hasn't already done so), and then serve it as an OpenAI-compatible endpoint via Foundry Local.
Am I understanding this correctly?
Is this going to let me ditch Ollama and run all the new “good stuff” on day 1 like the vLLM crowd is able to currently do without me needing to spin up Linux or even Docker for that matter?
If true, this would be HUGE for those of us in the non-Linux-savvy crowd who want to run the newest transformers models without waiting on llama.cpp (and later Ollama) to support them.
Please let me know if I’m misinterpreting any of this because it sounds too good to be true.
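If it works the way the docs read, the payoff is that the served model looks like any other OpenAI-compatible endpoint. A client-side sketch follows; the port and model name are placeholders from my reading, not verified values.

```python
# Client-side sketch against a locally served OpenAI-compatible endpoint
# (e.g. what Foundry Local would expose after a model run). The port and model
# name below are placeholders; check the service's actual values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5273/v1", api_key="not-needed")  # placeholder port

resp = client.chat.completions.create(
    model="phi-4-mini",  # placeholder: whatever model name the local service lists
    messages=[{"role": "user", "content": "Say hello from a locally served ONNX model."}],
)
print(resp.choices[0].message.content)
```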
| 2025-05-20T19:16:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1krdiga/is_microsofts_new_foundry_local_going_to_be_the/
|
Porespellar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krdiga
| false | null |
t3_1krdiga
|
/r/LocalLLaMA/comments/1krdiga/is_microsofts_new_foundry_local_going_to_be_the/
| false | false |
self
| 13 |
|
Is prompt engineering dead?
| 1 |
[removed]
| 2025-05-20T19:19:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1krdlva/is_prompt_engineering_dead/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krdlva
| false | null |
t3_1krdlva
|
/r/LocalLLaMA/comments/1krdlva/is_prompt_engineering_dead/
| false | false |
self
| 1 | null |
Running Gemma 3n on mobile locally
| 82 | 2025-05-20T19:41:53 |
United_Dimension_46
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kre5gs
| false | null |
t3_1kre5gs
|
/r/LocalLLaMA/comments/1kre5gs/running_gemma_3n_on_mobile_locally/
| false | false | 82 |
[image: https://preview.redd.it/xhvtdzjvpz1f1.png (1080x2190)]
|
|||
Red Hat open-sources llm-d project for distributed AI inference
| 36 |
>This Red Hat press release announces the launch of llm-d, a new open source project targeting distributed generative AI inference at scale. Built on Kubernetes architecture with vLLM-based distributed inference and AI-aware network routing, llm-d aims to overcome single-server limitations for production inference workloads. Key technological innovations include prefill and decode disaggregation to distribute AI operations across multiple servers, KV cache offloading based on LMCache to shift memory burdens to more cost-efficient storage, Kubernetes-powered resource scheduling, and high-performance communication APIs with NVIDIA Inference Xfer Library support. The project is backed by founding contributors CoreWeave, Google Cloud, IBM Research and NVIDIA, along with partners AMD, Cisco, Hugging Face, Intel, Lambda and Mistral AI, plus academic supporters from UC Berkeley and the University of Chicago. Red Hat positions llm-d as the foundation for a "any model, any accelerator, any cloud" vision, aiming to standardize generative AI inference similar to how Linux standardized enterprise IT.
* Announcement: [https://www.redhat.com/en/about/press-releases/red-hat-launches-llm-d-community-powering-distributed-gen-ai-inference-scale](https://www.redhat.com/en/about/press-releases/red-hat-launches-llm-d-community-powering-distributed-gen-ai-inference-scale)
* Google Cloud: [https://cloud.google.com/blog/products/ai-machine-learning/enhancing-vllm-for-distributed-inference-with-llm-d](https://cloud.google.com/blog/products/ai-machine-learning/enhancing-vllm-for-distributed-inference-with-llm-d)
* Repo: [https://github.com/llm-d](https://github.com/llm-d)
| 2025-05-20T19:42:28 |
https://www.redhat.com/en/about/press-releases/red-hat-launches-llm-d-community-powering-distributed-gen-ai-inference-scale
|
Balance-
|
redhat.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kre5zr
| false | null |
t3_1kre5zr
|
/r/LocalLLaMA/comments/1kre5zr/red_hat_opensources_llmd_project_for_distributed/
| false | false | 36 |
|
|
Looking to Serve Multiple LoRA Adapters for Classification via Triton – Feasible?
| 1 |
[removed]
| 2025-05-20T19:43:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kre6kn/looking_to_serve_multiple_lora_adapters_for/
|
mrvipul_17
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kre6kn
| false | null |
t3_1kre6kn
|
/r/LocalLLaMA/comments/1kre6kn/looking_to_serve_multiple_lora_adapters_for/
| false | false |
self
| 1 | null |
Anyone else using DiffusionBee for SDXL on Mac? (no CLI, just .dmg)
| 0 |
Not sure if this is old news here, but I finally found a Stable Diffusion app for Mac that doesn't require any terminal or Python junk. It's literally just a .dmg that opens up and runs SDXL/Turbo models out of the box. No idea if there are better alternatives, but this one worked on my M1 Mac with zero setup.
Direct [.dmg](https://downloadmacos.com/macshare.php?call=diffus) & Official: [https://www.diffusionbee.com/](https://www.diffusionbee.com/)
If anyone has tips for advanced usage or knows of something similar/better, let me know. Just sharing in case someone else is tired of fighting with dependencies.
| 2025-05-20T19:58:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1krekkr/anyone_else_using_diffusionbee_for_sdxl_on_mac_no/
|
Tyrionsnow
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krekkr
| false | null |
t3_1krekkr
|
/r/LocalLLaMA/comments/1krekkr/anyone_else_using_diffusionbee_for_sdxl_on_mac_no/
| false | false |
self
| 0 | null |
Qwen3 tokenizer_config.json updated on HF. Can I update it in Ollama?
| 2 |
The `.json` shows updates to the chat template; I think it should help with tool calls? Can I update this in Ollama, or do I need to convert the safetensors to a GGUF?
[LINK](https://huggingface.co/Qwen/Qwen3-8B/commit/895c8d171bc03c30e113cd7a28c02494b5e068b7)
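A quick way to see exactly what changed is to pull the updated file from the Hub and print the chat template; from there it could be mirrored in an Ollama Modelfile TEMPLATE if needed. A sketch using `huggingface_hub`, with the repo and filename taken from the linked commit:

```python
# Fetch the updated tokenizer_config.json from the Hub and inspect the chat template.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Qwen/Qwen3-8B", filename="tokenizer_config.json")
with open(path, encoding="utf-8") as f:
    config = json.load(f)

print(config["chat_template"])  # the Jinja template an Ollama TEMPLATE would need to mirror
```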
| 2025-05-20T20:09:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kreu21/qwen3_tokenizer_configjson_updated_on_hf_can_i/
|
the_renaissance_jack
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kreu21
| false | null |
t3_1kreu21
|
/r/LocalLLaMA/comments/1kreu21/qwen3_tokenizer_configjson_updated_on_hf_can_i/
| false | false |
self
| 2 |
|
Is the ARC B50 worth it as a standalone AI external power card?
| 1 |
[removed]
| 2025-05-20T20:14:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1krey1m/is_the_arc_b50_worth_it_as_a_standalone_ai/
|
Fit_Case_03
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krey1m
| false | null |
t3_1krey1m
|
/r/LocalLLaMA/comments/1krey1m/is_the_arc_b50_worth_it_as_a_standalone_ai/
| false | false |
self
| 1 |
|
Best Local LLM for Coding on Mac Mini (Base Model)?
| 1 |
[removed]
| 2025-05-20T20:36:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1krfi3j/best_local_llm_for_coding_on_mac_mini_base_model/
|
ssswagatss
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krfi3j
| false | null |
t3_1krfi3j
|
/r/LocalLLaMA/comments/1krfi3j/best_local_llm_for_coding_on_mac_mini_base_model/
| false | false |
self
| 1 | null |
GPU Price Tracker (New, Used, and Cloud)
| 13 |
Hi everyone! I wanted to share a tool I've developed that might help many of you with GPU renting or purchasing decisions for LLMs.
# GPU Price Tracker Overview
The GPU Price Tracker monitors:
* new (Amazon) and used (eBay) purchase prices, as well as rental prices (Runpod, GCP, LambdaLabs)
* hardware specifications
This tool is designed to help make informed decisions when selecting hardware for AI workloads, including LocalLLaMA models.
**Tool URL:** [https://www.unitedcompute.ai/gpu-price-tracker](https://www.unitedcompute.ai/gpu-price-tracker)
# Key Features:
* **Daily Market Prices** \- Daily updated pricing data
* **Price History Chart** \- A chart with all historical data
* **Performance Metrics** \- FP16 TFLOPS performance data
* **Efficiency Metrics**:
* **FL/$** \- FLOPS per dollar (value metric)
* **FL/Watt** \- FLOPS per watt (efficiency metric)
* **Hardware Specifications**:
* VRAM capacity and bus width
* Power consumption (Watts)
* Memory bandwidth
* Release date
# Example Insights
The data reveals some interesting trends:
* Renting the NVIDIA H100 SXM5 80 GB is almost 2x more expensive on GCP ($5.76) than on Runpod ($2.99) or LambdaLabs ($3.29)
* The NVIDIA A100 40GB PCIe remains at a premium price point ($7,999.99) but offers 77.97 TFLOPS with 0.010 TFLOPS/$
* The RTX 3090 provides better value at $1,679.99 with 35.58 TFLOPS and 0.021 TFLOPS/$
* Price fluctuations can be significant - as shown in the historical view below, some GPUs have varied by over $2,000 in a single year
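The FL/$ figures above are easy to reproduce yourself; here's a quick sanity check in plain Python using the prices and FP16 TFLOPS numbers quoted in these insights:

```python
# Reproduce the TFLOPS-per-dollar value metric from the example figures above.
cards = {
    "NVIDIA A100 40GB PCIe": {"price_usd": 7999.99, "fp16_tflops": 77.97},
    "RTX 3090":              {"price_usd": 1679.99, "fp16_tflops": 35.58},
}

for name, c in cards.items():
    tflops_per_dollar = c["fp16_tflops"] / c["price_usd"]
    print(f"{name}: {tflops_per_dollar:.3f} TFLOPS/$")
# -> ~0.010 TFLOPS/$ for the A100 and ~0.021 TFLOPS/$ for the 3090,
#    matching the tracker's numbers.
```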
# How This Helps LocalLLaMA Users
When selecting hardware for running local LLMs, there are multiple considerations:
1. **Raw Performance** \- FP16 TFLOPS for inference speed
2. **VRAM Requirements** \- For model size limitations
3. **Value** \- FL/$ for budget-conscious decisions
4. **Power Efficiency** \- FL/Watt for power-conscious setups
| 2025-05-20T20:40:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1krfl6d/gpu_price_tracker_new_used_and_cloud/
|
Significant-Lab-3803
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krfl6d
| false | null |
t3_1krfl6d
|
/r/LocalLLaMA/comments/1krfl6d/gpu_price_tracker_new_used_and_cloud/
| false | false |
self
| 13 | null |
Using a 2070s and 5080 in the same machine?
| 5 |
Hello, I'm looking to buy a new personal computer but I have a 2070 Super that I don't want to sell on eBay for a pittance. What would be the best use of this extra graphics card? Should I find a way to incorporate it into a new build to support the 5080 when the bigger card is running a heavy load?
| 2025-05-20T20:42:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1krfna5/using_a_2070s_and_5080_in_the_same_machine/
|
pwnrzero
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krfna5
| false | null |
t3_1krfna5
|
/r/LocalLLaMA/comments/1krfna5/using_a_2070s_and_5080_in_the_same_machine/
| false | false |
self
| 5 | null |
Gigabyte Unveils Its Custom NVIDIA "DGX Spark" Mini-AI Supercomputer: The AI TOP ATOM Offering a Whopping 1,000 TOPS of AI Power
| 0 | 2025-05-20T21:10:06 |
https://wccftech.com/gigabyte-unveils-its-custom-nvidia-dgx-spark-mini-ai-supercomputer/
|
_SYSTEM_ADMIN_MOD_
|
wccftech.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krgb4m
| false | null |
t3_1krgb4m
|
/r/LocalLLaMA/comments/1krgb4m/gigabyte_unveils_its_custom_nvidia_dgx_spark/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'EodR6zoD3CyvBWSakcR_iVWPDFexHGB66KZ8UqXmprM', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?width=108&crop=smart&auto=webp&s=042774149ca21f9d2b80ccc9e4d26c82863baf7a', 'width': 108}, {'height': 161, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?width=216&crop=smart&auto=webp&s=5a7f28a74d64df15d1f628da7e785ab227b69aae', 'width': 216}, {'height': 239, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?width=320&crop=smart&auto=webp&s=abb2c691c513c23c53974002a8c7e7fe3c759679', 'width': 320}, {'height': 479, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?width=640&crop=smart&auto=webp&s=e00b8b5ea9f03bd19d50836caba8a7d80cff06c9', 'width': 640}, {'height': 719, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?width=960&crop=smart&auto=webp&s=f543bec2b5c19626cd3931ec166acc78762946d9', 'width': 960}, {'height': 809, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?width=1080&crop=smart&auto=webp&s=de1eaff1e9c3455f56ba34b790c54eccc39b214d', 'width': 1080}], 'source': {'height': 1400, 'url': 'https://external-preview.redd.it/-DGaLXQu4eAP_k9ypi_15cDWEx6bV6NMBBR3hcvgmIE.jpg?auto=webp&s=daaec996e9c49e38e94ed4df996d173548d9e07d', 'width': 1867}, 'variants': {}}]}
|
||
Price disparity for older RTX A and RTX Ada Workstation cards
| 6 |
Since the new Blackwell cards have launched (RTX Pro 6000 @ around $9k and the RTX Pro 5000 @ around $6k), the older RTX A and RTX Ada cards are still trading at elevated prices. For comparison, the RTX 6000 Ada costs around $7.5k where I live, but only with 48GB RAM and with the older chips, lower bandwidth etc. The RTX 5000 Ada is still at around $5.5k, which doesn't make a lot of sense comparatively. Do you think the prices of these cards will come down soon enough?
| 2025-05-20T21:16:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1krggxh/price_disparity_for_older_rtx_a_and_rtx_ada/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krggxh
| false | null |
t3_1krggxh
|
/r/LocalLLaMA/comments/1krggxh/price_disparity_for_older_rtx_a_and_rtx_ada/
| false | false |
self
| 6 | null |
Beginner working on a call center QA project — can’t afford ChatGPT API, looking for help or alternatives
| 1 |
[removed]
| 2025-05-20T21:22:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1krglme/beginner_working_on_a_call_center_qa_project_cant/
|
Ok-Guidance9730
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krglme
| false | null |
t3_1krglme
|
/r/LocalLLaMA/comments/1krglme/beginner_working_on_a_call_center_qa_project_cant/
| false | false |
self
| 1 | null |
question about running LLama 3 (q3) on 5090
| 1 |
Is it possible without offloading some layers to shared memory?
Also, not sure about this: I'm running with Ollama. Should I run with something else?
I was trying to see which layers were loaded on the GPU, but somehow I don't see that? (OLLAMA\_VERBOSE=1 not good?)
I noticed that running the llama3:70b-instruct-q3\_K\_S gives me 10 tokens per second.
If I run the Q2 version, I'm getting 35/s
Wondering if I can increase the performance of Q3.
Thank you
| 2025-05-20T21:27:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1krgqgh/question_about_running_llama_3_q3_on_5090/
|
ComplexOwn209
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krgqgh
| false | null |
t3_1krgqgh
|
/r/LocalLLaMA/comments/1krgqgh/question_about_running_llama_3_q3_on_5090/
| false | false |
self
| 1 | null |
Gemini Flash 1.5-8B maximum input size
| 1 |
[removed]
| 2025-05-20T21:28:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1krgqzc/gemini_flash_158b_maximum_input_size/
|
Willing_Ad_5594
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krgqzc
| false | null |
t3_1krgqzc
|
/r/LocalLLaMA/comments/1krgqzc/gemini_flash_158b_maximum_input_size/
| false | false |
self
| 1 | null |
I'm putting so much hope in DeepSeek R2 after gemini 2.5 deepThink under Google's $250/month plan
| 1 |
[removed]
| 2025-05-20T21:43:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1krh3j0/im_putting_so_much_hope_in_deepseek_r2_after/
|
Mean-Neighborhood-42
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krh3j0
| false | null |
t3_1krh3j0
|
/r/LocalLLaMA/comments/1krh3j0/im_putting_so_much_hope_in_deepseek_r2_after/
| false | false |
self
| 1 | null |
Too much AI News!
| 0 |
Absolutely dizzying amount of AI news coming out and it’s only Tuesday!! Trying to cope with all the new models, new frameworks, new tools, new hardware, etc. Feels like keeping up with the Jones’ except the Jones’ keep moving! 😵💫
These newsletters I’m somehow subscribed to aren’t helping either!
FOMO is real!
| 2025-05-20T21:53:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1krhbye/too_much_ai_news/
|
International_Quail8
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krhbye
| false | null |
t3_1krhbye
|
/r/LocalLLaMA/comments/1krhbye/too_much_ai_news/
| false | false |
self
| 0 | null |
How do I make Llama learn new info?
| 1 |
I just started to run Llama3 locally on my mac.
I got the idea of making the model understand basic information about me, like my driving licence details, its expiry, bank accounts, etc.
Every time someone asks for a detail, I look it up in my documents and send it.
How do I achieve this? Or am I crazy to think of this instead of a simple DB, like a vector DB?
Thank you for your patience.
| 2025-05-20T21:56:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1krheay/how_do_i_make_llama_learn_new_info/
|
arpithpm
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krheay
| false | null |
t3_1krheay
|
/r/LocalLLaMA/comments/1krheay/how_do_i_make_llama_learn_new_info/
| false | false |
self
| 1 | null |
Mac Mini M4 with 32gb vs M4 pro 24gb
| 0 |
[removed]
| 2025-05-20T21:58:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1krhfzz/mac_mini_m4_with_32gb_vs_m4_pro_24gb/
|
ingy03
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krhfzz
| false | null |
t3_1krhfzz
|
/r/LocalLLaMA/comments/1krhfzz/mac_mini_m4_with_32gb_vs_m4_pro_24gb/
| false | false |
self
| 0 | null |
Do low core count 6th gen Xeons (6511p) have less memory bandwidth cause of chiplet architecture like Epycs?
| 1 |
[removed]
| 2025-05-20T22:31:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kri7cr/do_low_core_count_6th_gen_xeons_6511p_have_less/
|
Arcane123456789
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kri7cr
| false | null |
t3_1kri7cr
|
/r/LocalLLaMA/comments/1kri7cr/do_low_core_count_6th_gen_xeons_6511p_have_less/
| false | false |
self
| 1 | null |
ok google, next time mention llama.cpp too!
| 926 | 2025-05-20T22:31:42 |
secopsml
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kri7ik
| false | null |
t3_1kri7ik
|
/r/LocalLLaMA/comments/1kri7ik/ok_google_next_time_mention_llamacpp_too/
| false | false | 926 |
{'enabled': True, 'images': [{'id': 'dLTmoeA30qloRWjVZ-kC8H_OSMUEQ-4p16zG_GoIuMg', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png?width=108&crop=smart&auto=webp&s=aeedfef41c9a70d8305605bf28080a54fc318f96', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png?width=216&crop=smart&auto=webp&s=de4282a90a7ccc3f46d82d05d9dcb720a3f08d4c', 'width': 216}, {'height': 189, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png?width=320&crop=smart&auto=webp&s=44cbf42b841010e4e0463add0f435801e616f02a', 'width': 320}, {'height': 378, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png?width=640&crop=smart&auto=webp&s=36aba859e0c8b8e47fe122c7315b0f3ad3607ad1', 'width': 640}, {'height': 567, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png?width=960&crop=smart&auto=webp&s=f48d2f39e9c2f17cf1504eb3596f423d3b19e719', 'width': 960}], 'source': {'height': 618, 'url': 'https://preview.redd.it/ml66h5yxj02f1.png?auto=webp&s=146eb6c0146994ac4ccb75c854f90a15bf7bd9fb', 'width': 1046}, 'variants': {}}]}
|
|||
Question on Finetuning QLORA
| 1 |
Hello guys, a quick question from a newbie.
Llama 3.1 8B QLoRA finetuning on a 250k dataset with an NVIDIA A100 80GB: is it OK for it to take 250-300 hours of training time? I feel like something is really off.
Thank you.
| 2025-05-20T22:32:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kri809/question_on_finetuning_qlora/
|
Opening_Cash_4532
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kri809
| false | null |
t3_1kri809
|
/r/LocalLLaMA/comments/1kri809/question_on_finetuning_qlora/
| false | false |
self
| 1 | null |
Do low core count 6th gen Xeons (6511p/6512p) have less memory bandwidth cause of chiplet architecture like Epycs?
| 1 |
[removed]
| 2025-05-20T22:37:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kric98/do_low_core_count_6th_gen_xeons_6511p6512p_have/
|
Arcane123456789
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kric98
| false | null |
t3_1kric98
|
/r/LocalLLaMA/comments/1kric98/do_low_core_count_6th_gen_xeons_6511p6512p_have/
| false | false |
self
| 1 | null |
Beginner’s Trial testing Qwen3-30B-A3B on RTX 4060 Laptop
| 1 |
[removed]
| 2025-05-20T22:48:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1krikv2/beginners_trial_testing_qwen330ba3b_on_rtx_4060/
|
Forward_Tax7562
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krikv2
| false | null |
t3_1krikv2
|
/r/LocalLLaMA/comments/1krikv2/beginners_trial_testing_qwen330ba3b_on_rtx_4060/
| false | false |
self
| 1 | null |
Gemma, best ways of quashing cliched writing patterns?
| 8 |
If you've used Gemma 3 for creative writing, you probably know what I'm talking about: excessive formatting (ellipses, italics) and short contrasting sentences inserted to cheaply drive a sense of drama and urgency. Used sparingly, these would be fine, but Gemma uses them constantly, in a way I haven't seen in any other model... and they get old, fast.
Some examples,
- He didn't choose this life. He simply... *was*.
- It wasn't a perfect system. But it was *enough*. Enough for him to get by.
- This wasn’t just about survival. This was about… *something more*.
For Gemma users, how are you squashing these writing tics? Prompt engineering? Running a fine-tune that replaces gemma3's "house style" with something else? Any other advice? It's gotten to the point that I hardly use Gemma any more, even though it is a good writer in other regards.
| 2025-05-20T22:56:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1krirdi/gemma_best_ways_of_quashing_cliched_writing/
|
INT_21h
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krirdi
| false | null |
t3_1krirdi
|
/r/LocalLLaMA/comments/1krirdi/gemma_best_ways_of_quashing_cliched_writing/
| false | false |
self
| 8 | null |
Qwen3-30B-A3B on RTX 4060 8GB VRAM
| 1 |
[removed]
| 2025-05-20T23:04:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1krix9e/qwen330ba3b_on_rtx_4060_8gb_vram/
|
Forward_Tax7562
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krix9e
| false | null |
t3_1krix9e
|
/r/LocalLLaMA/comments/1krix9e/qwen330ba3b_on_rtx_4060_8gb_vram/
| false | false |
self
| 1 | null |
Can someone help me understand Google AI Studio's rate limiting policies?
| 1 |
Well I have been trying to squeeze out the free-tier LLM quota Google AI Studio offers.
One thing I noticed is that, even though I am using way under the rate limit on all measures, I keep getting the 429 errors.
The other thing, which I would really appreciate some guidance on, is at what level these rate limits are enforced. Per project (which is what the documentation says)? Per Gmail address? Or does Google have some smart way of knowing that multiple Gmail addresses belong to the same person, so the rate limits get enforced in a combined way? I have tried creating multiple projects under one Gmail account, and also creating multiple Gmail accounts; both seem to contribute to the rate limit in a combined way. Anybody have a good way of hacking this?
Thanks.
| 2025-05-20T23:28:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1krjfob/can_someone_help_me_understand_google_ai_studios/
|
Infamous_Tomatillo53
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krjfob
| false | null |
t3_1krjfob
|
/r/LocalLLaMA/comments/1krjfob/can_someone_help_me_understand_google_ai_studios/
| false | false |
self
| 1 | null |
Synthetic datasets
| 7 |
I've been getting into model merges, DPO, teacher-student distillation, and qLoRAs. I'm having a blast coding in Python to generate synthetic datasets and I think I'm starting to put out some high quality synthetic data. I've been looking around on huggingface and I don't see a lot of good RP and creative writing synthetic datasets and I was reading sometimes people will pay for really good ones. What are some examples of some high quality datasets for those purposes so I can compare my work to something generally understood to be very high quality?
My pipeline right now that I'm working on is
1. Model merge between a reasoning model and RP/creative writing model
2. Teacher-student distillation of the merged model using synthetic data generated by the teacher, around 100k prompt-response pairs.
3. DPO synthetic dataset of 120k triplets generated by the teacher model and student model in tandem with the teacher model generating the logic heavy DPO triplets on one instance of llama.cpp on one GPU and the student generating the rest on two instances of llama.cpp on a other GPU (probably going to draft my laptop into the pipeline at that point).
4. DPO pass on the teacher model.
5. Synthetic data generation of 90k-100k multi-shot examples using the teacher model for qLoRA training, with the resulting qLoRA getting merged in to the teacher model.
6. Re-distillation to another student model using a new dataset of prompt-response pairs, which then gets its own DPO pass and qLoRA merge.
When I'm done I should have a big model and a little model with the behavior I want.
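For reference, the DPO data in step 3 is just prompt/chosen/rejected triplets written out one JSON object per line. Here's a minimal sketch of what a single record might look like (the field names illustrate the common layout, not any particular trainer's schema):

```python
import json

# One DPO triplet: the "chosen" response comes from the stronger teacher model,
# the "rejected" one from the weaker student model, per step 3 of the pipeline.
triplet = {
    "prompt": "Write the opening paragraph of a noir detective story.",
    "chosen": "The rain hadn't stopped in three days, and neither had the phone...",
    "rejected": "It was a dark and stormy night. The detective was sad.",
}

# Append to a JSONL file, one triplet per line.
with open("dpo_triplets.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(triplet, ensure_ascii=False) + "\n")
```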
It's my first project like this so I'd love to hear more about best practices and great examples to look towards, I could have paid a hundred bucks here or there to generate synthetic data via API with larger models but I'm having fun doing my own merges and synthetic data generation locally on my dual GPU setup. I'm really proud of the 2k-3k or so lines of python I've assembled for this project so far, it has taken a long time but I always felt like coding was beyond me and now I'm having fun doing it!
Also Google is telling me depending on the size and quality of the dataset, some people will pay thousands of dollars for it?!
| 2025-05-20T23:51:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1krjxb7/synthetic_datasets/
|
xoexohexox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krjxb7
| false | null |
t3_1krjxb7
|
/r/LocalLLaMA/comments/1krjxb7/synthetic_datasets/
| false | false |
self
| 7 | null |
Is there a locally run LLM setup that can match chatgpt in modularized coding projects?
| 2 |
Closest I've come is throwing the code into RAG and running Qwen3. But it's still so far behind and usually doesn't see the whole picture across all modules.
| 2025-05-20T23:58:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1krk2bs/is_there_a_locally_run_llm_setup_that_can_match/
|
StandardLovers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krk2bs
| false | null |
t3_1krk2bs
|
/r/LocalLLaMA/comments/1krk2bs/is_there_a_locally_run_llm_setup_that_can_match/
| false | false |
self
| 2 | null |
2x 2080 ti, a very good deal
| 10 |
I already have one working 2080 ti sitting around. I have an opportunity to snag another one for under 200. If i go for it, I'll have paid about 350 total combined.
I'm wondering if running 2 of them at once is viable for my use case:
For a personal maker project I might put on YouTube, I'm trying to get a customized AI powering an assistant app that serves as a secretary for our home business. In the end, it'll be working with more stable, hard-coded elements to keep track of dates, to-do lists, prices, etc., and organize notes from meetings. We're not getting rid of our old system; this is an experiment.
It doesn't need to be very broadly informed or intelligent, but it does need to be pretty consistent at what it's meant to do: keep track of information, communicate with other elements, and follow instructions.
I expect to have to train it, and also obviously run it locally. LLaMA makes the most sense of the options I know.
Being an experiment, the budget for this is very, very low.
I'm reasonably handy, pick up new things quickly, have steady hands, good soldering skills, and connections to much more experienced and equipped people who'd want to be a part of the project, so modding these into 22gb models is not out of the question.
Are any of these options viable? 2x 11gb 2080 ti? 2x 22gb 2080 ti? Anything I should know trying to run them this way?
| 2025-05-21T00:00:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1krk3t6/2x_2080_ti_a_very_good_deal/
|
Bitter-Ad640
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krk3t6
| false | null |
t3_1krk3t6
|
/r/LocalLLaMA/comments/1krk3t6/2x_2080_ti_a_very_good_deal/
| false | false |
self
| 10 | null |
Parking Analysis with Object Detection and Ollama models for Report Generation
| 25 |
Hey Reddit!
Been tinkering with a fun project combining computer vision and LLMs, and wanted to share the progress.
**The gist:**
It uses a YOLO model (via Roboflow) to do real-time object detection on a video feed of a parking lot, figuring out which spots are taken and which are free. You can see the little red/green boxes doing their thing in the video.
**But here's the (IMO) coolest part:** The system then takes that occupancy data and feeds it to an open-source LLM (running locally with Ollama, tried models like Phi-3 for this). The LLM then generates a surprisingly detailed "Parking Lot Analysis Report" in Markdown.
This report isn't just "X spots free." It calculates occupancy percentages, assesses current demand (e.g., "moderately utilized"), flags potential risks (like overcrowding if it gets too full), and even suggests actionable improvements like dynamic pricing strategies or better signage.
It's all automated – from seeing the car park to getting a mini-management consultant report.
**Tech Stack Snippets:**
* **CV:** YOLO model from Roboflow for spot detection.
* **LLM:** Ollama for local LLM inference (e.g., Phi-3).
* **Output:** Markdown reports.
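Conceptually, the occupancy-to-report handoff is just a prompt against Ollama's local REST API. A rough sketch (the occupancy numbers and prompt here are made up; the real code in the repo does more):

```python
import requests

# Hypothetical occupancy stats produced by the YOLO spot-detection step.
occupancy = {"total_spots": 40, "occupied": 29, "free": 11}

prompt = (
    "You are a parking operations analyst. Using the data below, write a "
    "Markdown 'Parking Lot Analysis Report' covering occupancy percentage, "
    "current demand, potential risks, and suggested improvements.\n\n"
    f"Data: {occupancy}"
)

# Ollama's local generate endpoint; assumes `ollama pull phi3` has been run.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "phi3", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"])  # the generated Markdown report
```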
The video shows it in action, including the report being generated.
Github Code: [https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/ollama/parking\_analysis](https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/ollama/parking_analysis)
Also if in this code you have to draw the polygons manually I built a separate app for it you can check that code here: [https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/polygon-zone-app](https://github.com/Pavankunchala/LLM-Learn-PK/tree/main/polygon-zone-app)
(Self-promo note: If you find the code useful, a star on GitHub would be awesome!)
**What I'm thinking next:**
* Real-time alerts for lot managers.
* Predictive analysis for peak hours.
* Maybe a simple web dashboard.
Let me know what you think!
**P.S.** On a related note, I'm actively looking for new opportunities in Computer Vision and LLM engineering. If your team is hiring or you know of any openings, I'd be grateful if you'd reach out!
* **Email:** [[email protected]](mailto:[email protected])
* **My other projects on GitHub:** [https://github.com/Pavankunchala](https://github.com/Pavankunchala)
* **Resume:** [https://drive.google.com/file/d/1ODtF3Q2uc0krJskE\_F12uNALoXdgLtgp/view](https://drive.google.com/file/d/1ODtF3Q2uc0krJskE_F12uNALoXdgLtgp/view)
| 2025-05-21T00:21:43 |
https://v.redd.it/uu7z8vwp312f1
|
Solid_Woodpecker3635
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krkjhv
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/uu7z8vwp312f1/DASHPlaylist.mpd?a=1750378916%2CNzFlYTljNzRkZDU4NmQwYzQ4ZWM0MDRlZDdlMjc5YmEwZjk1YjY2ZTRiYTM5YzZhMDdkNWNkZjI5MmJiYmQ2OQ%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/uu7z8vwp312f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 492, 'hls_url': 'https://v.redd.it/uu7z8vwp312f1/HLSPlaylist.m3u8?a=1750378916%2CNjZjOGQwMzRiZTk4YzVjMTliZDk5MGRjNmU3YjYzYWMyMGE1NTU3MGFlNWEwMjZkYTZjYWQ5MDVkM2FiNTUwNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uu7z8vwp312f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1krkjhv
|
/r/LocalLLaMA/comments/1krkjhv/parking_analysis_with_object_detection_and_ollama/
| false | false | 25 |
{'enabled': False, 'images': [{'id': 'bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?width=108&crop=smart&format=pjpg&auto=webp&s=f4b5a83ac3c534f6d9deafd39371d21999587c4f', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?width=216&crop=smart&format=pjpg&auto=webp&s=bcdd67b95a70d3a02c295366f7e1c9a18b064ddf', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?width=320&crop=smart&format=pjpg&auto=webp&s=39746a7f0af8d33a0449e38a94e23d94cdb88e6c', 'width': 320}, {'height': 246, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?width=640&crop=smart&format=pjpg&auto=webp&s=5494efd49e6b8cb7ad7795117b50a65a876630d0', 'width': 640}, {'height': 369, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?width=960&crop=smart&format=pjpg&auto=webp&s=e409328b5cbca247468d6f78d9c2600b4744fd40', 'width': 960}, {'height': 415, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?width=1080&crop=smart&format=pjpg&auto=webp&s=b21cf322efd2ee4d4b2db21912793816891e4e57', 'width': 1080}], 'source': {'height': 764, 'url': 'https://external-preview.redd.it/bmVkZHV1d3AzMTJmMeMfnSo893myclMRvg1dOF4kmROzcG9sBbtv4hMJoM_m.png?format=pjpg&auto=webp&s=e2836058469bc597fdbc81d423a5703c43b1201d', 'width': 1986}, 'variants': {}}]}
|
|
Any stable drivers for linux (debian) for 5060Ti 16GB?
| 2 |
Anybody have any stable drivers for linux for the RTX 5060 Ti 16GB?
I've tried every single driver I could find, lastly 575.51.02
Every single one causes the system to lock up when I do anything CUDA related, including comfyUI, llama, ollama etc. It happens 100% of the time. The system either locks up completely or becomes nearly unresponsive (1 keystroke every 5 minutes).
Sometimes I'll be lucky to get this nvidia-smi report: https://i.imgur.com/U5HdVbY.png
I'm running the RTX 5060 Ti on a PCie4 x4 lanes (16 electrical) slot. Note it is in a x4 slot because my system already has a 5070 Ti in it. OS is Proxmox with GPU passthru (runs perfect on the 5070 ti which is also passthru). VM OS is debian 12.x.
Any ideas on what to do?
I don't even know how to troubleshoot it since the system completely locks up. I've tried maybe 10 drivers so far, all of them have the same issue.
| 2025-05-21T00:36:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1krku7c/any_stable_drivers_for_linux_debian_for_5060ti/
|
StartupTim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krku7c
| false | null |
t3_1krku7c
|
/r/LocalLLaMA/comments/1krku7c/any_stable_drivers_for_linux_debian_for_5060ti/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'pCI9-90RiOVLKizV-DvMImVuTohRE40fKiq7Ra2BVCk', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/I6jufXHJussy0Q9tywXIyfENuNOVVTmrc8kXXypeGWA.png?width=108&crop=smart&auto=webp&s=bd885e3b9128e1249bb85882b53f9f17ac7e505f', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/I6jufXHJussy0Q9tywXIyfENuNOVVTmrc8kXXypeGWA.png?width=216&crop=smart&auto=webp&s=a29c3da4777d8001ab52de8b2051c59af400cb69', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/I6jufXHJussy0Q9tywXIyfENuNOVVTmrc8kXXypeGWA.png?width=320&crop=smart&auto=webp&s=5f7ef44379bbfee063039470a8d3056a93c8767b', 'width': 320}, {'height': 372, 'url': 'https://external-preview.redd.it/I6jufXHJussy0Q9tywXIyfENuNOVVTmrc8kXXypeGWA.png?width=640&crop=smart&auto=webp&s=e457ec4669a735241b36541193dc9099014bd2ec', 'width': 640}], 'source': {'height': 468, 'url': 'https://external-preview.redd.it/I6jufXHJussy0Q9tywXIyfENuNOVVTmrc8kXXypeGWA.png?auto=webp&s=fa4007ce5c0b777e4373ca9d5d203601da498077', 'width': 804}, 'variants': {}}]}
|
LLAMACPP - SWA support ... FINALLY ;-)
| 81 |
Because of that, for instance, with gemma 3 27b q4km, flash attention, fp16 KV cache, and a card with 24 GB VRAM, I can now fit 75k context!
Before, I could manage at most 15k with those parameters.
| 2025-05-21T00:45:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1krl0du/llamacpp_swa_support_fnally/
|
Healthy-Nebula-3603
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krl0du
| false | null |
t3_1krl0du
|
/r/LocalLLaMA/comments/1krl0du/llamacpp_swa_support_fnally/
| false | false |
self
| 81 |
{'enabled': False, 'images': [{'id': 'B6WBFnMrqminMd4L23X4ODcF0-AjtGNAA2R3T-n0aSE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=108&crop=smart&auto=webp&s=335d1405ddcc38bcb3183c81a033edea2551c0f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=216&crop=smart&auto=webp&s=70e8265b574efa0c7a329528dcae1a83809afd8c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=320&crop=smart&auto=webp&s=e3258ed9c9ea092283ebc6f2cef4c85e8beb843b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=640&crop=smart&auto=webp&s=008492e7f7e22375b2b68b00c17a5c58161e9c7f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=960&crop=smart&auto=webp&s=9074479bf3ee20a43667b6663f8f497f8a044136', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=1080&crop=smart&auto=webp&s=12962c507ca303356ec93a34ed377ee1661b9bd2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?auto=webp&s=b71745d71c13d9e53f352f566a07f7aa8d40d7d6', 'width': 1200}, 'variants': {}}]}
|
Qwen3 + Aider - Misconfiguration?
| 1 |
[removed]
| 2025-05-21T01:24:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1krlroz/qwen3_aider_misconfiguration/
|
Puzzleheaded_Dark_80
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krlroz
| false | null |
t3_1krlroz
|
/r/LocalLLaMA/comments/1krlroz/qwen3_aider_misconfiguration/
| false | false |
self
| 1 | null |
Best local creative writing model and how to set it up?
| 15 |
I have a TITAN XP (12GB), 32GB ram and 8700K. What would the best creative writing model be?
I like to try out different stories and scenarios to incorporate into UE5 game dev.
| 2025-05-21T01:33:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1krlxoe/best_local_creative_writing_model_and_how_to_set/
|
BenefitOfTheDoubt_01
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krlxoe
| false | null |
t3_1krlxoe
|
/r/LocalLLaMA/comments/1krlxoe/best_local_creative_writing_model_and_how_to_set/
| false | false |
self
| 15 | null |
RL algorithms like GRPO are not effective when paried with LoRA on complex reasoning tasks
| 14 | 2025-05-21T02:00:14 |
https://osmosis.ai/blog/lora-comparison
|
VBQL
|
osmosis.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1krmgld
| false | null |
t3_1krmgld
|
/r/LocalLLaMA/comments/1krmgld/rl_algorithms_like_grpo_are_not_effective_when/
| false | false | 14 |
{'enabled': False, 'images': [{'id': 'RTDJSL6e3-LmQPwhntlc0gHJWo7FspBe9Bq2mmDb7e4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?width=108&crop=smart&auto=webp&s=677a201d875eafaf15f9e8362a50da7de77089b4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?width=216&crop=smart&auto=webp&s=78d94567b5c441c776ac12537416dbc166b95f57', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?width=320&crop=smart&auto=webp&s=54eccf220bbeb1a705f0b1903ec833182b7e28b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?width=640&crop=smart&auto=webp&s=f9eb46d38a1fe44fd67f1f9fe39e8952e6a6a28e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?width=960&crop=smart&auto=webp&s=7f6bb077c58d7ccf768b6e2523fe2245598f18e4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?width=1080&crop=smart&auto=webp&s=1addac83fa949633d55176f1e8fe4d348edd0673', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/m_cCtyX88pvEEjBKG1e4xZruJRILCtqhhamGgPvME80.jpg?auto=webp&s=a32bec982b2292695c9aac244bbb294374955e53', 'width': 1200}, 'variants': {}}]}
|
||
🔥 Introducing LangMRG — a trillion-parameter architecture for real-world AI.
| 1 |
[removed]
| 2025-05-21T02:15:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1krmr0w/introducing_langmrg_a_trillionparameter/
|
uslashreader
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krmr0w
| false | null |
t3_1krmr0w
|
/r/LocalLLaMA/comments/1krmr0w/introducing_langmrg_a_trillionparameter/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'l8db_3pb1i3V432ZXEpLwrJiUesTW7oSO51V9Xy3UJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?width=108&crop=smart&auto=webp&s=10eb2446db6e27414cbaa115a412d319d435ae42', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?width=216&crop=smart&auto=webp&s=53167ee81e47b82fe76d6e0b9ab13cc0f96fc481', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?width=320&crop=smart&auto=webp&s=0b174b88208ee3695fcbae0182c2238206e80120', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?width=640&crop=smart&auto=webp&s=9592280a5b83067e80b0f70b8a8783603fe255b0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?width=960&crop=smart&auto=webp&s=d06f3e70c36968dad83c374833750f6603e15122', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?width=1080&crop=smart&auto=webp&s=474f2ca994347b5cfd2560709aad22d834c81636', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bUfXaxNkA9RjFJzOiQa21ypwasETmyevEKzO2QHIA2I.jpg?auto=webp&s=71bde3a19f92fd49478e21096fa0950e0755dbf8', 'width': 1200}, 'variants': {}}]}
|
ByteDance Bagel 14B MOE (7B active) Multimodal with image generation (open source, apache license)
| 368 |
Weights - [GitHub - ByteDance-Seed/Bagel](https://github.com/ByteDance-Seed/Bagel)
Website - [BAGEL: The Open-Source Unified Multimodal Model](https://bagel-ai.org/)
Paper - [\[2505.14683\] Emerging Properties in Unified Multimodal Pretraining](https://arxiv.org/abs/2505.14683)
It uses a mixture of experts and a mixture of transformers.
| 2025-05-21T02:57:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1krnk8v/bytedance_bagel_14b_moe_7b_active_multimodal_with/
|
noage
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krnk8v
| false | null |
t3_1krnk8v
|
/r/LocalLLaMA/comments/1krnk8v/bytedance_bagel_14b_moe_7b_active_multimodal_with/
| false | false |
self
| 368 |
{'enabled': False, 'images': [{'id': 'h0QZd7-yXxmN6qjZ5WXKOWNkmJQ-etHs26rP62apI9c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?width=108&crop=smart&auto=webp&s=d8c3b2422d9aaed6de6ba097cb5f52712c94b39e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?width=216&crop=smart&auto=webp&s=acf6742e22a8f371f9e4b006898a888590cc4b28', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?width=320&crop=smart&auto=webp&s=82f82f5d122fc748f087c7c0964a43a2a4da9e99', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?width=640&crop=smart&auto=webp&s=6fd7d5d3475e90b99894458c8d5eb975710d83ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?width=960&crop=smart&auto=webp&s=8f33e6cd41dee53b241233ccac5645090fc503f2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?width=1080&crop=smart&auto=webp&s=ac69b6bf2c4e11049d32af057944f899de736914', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YjGmFJ5RZ-7tFYVMApo56vFJH0Uz1_isVOWB4qTEYHc.jpg?auto=webp&s=744e5142b18690a9eb4917bdba079e58d7a4b80d', 'width': 1200}, 'variants': {}}]}
|
AMD introduces Radeon AI PRO R9700 32GB, available July 2025
| 1 | 2025-05-21T03:23:49 |
https://ir.amd.com/news-events/press-releases/detail/1253/amd-introduces-new-radeon-graphics-cards-and-ryzen-threadripper-processors-at-computex-2025
|
raymvan
|
ir.amd.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kro1zb
| false | null |
t3_1kro1zb
|
/r/LocalLLaMA/comments/1kro1zb/amd_introduces_radeon_ai_pro_r9700_32gb_available/
| false | false |
default
| 1 | null |
|
AMD introduces Radeon AI PRO R9700 with 32GB VRAM and Navi 48 GPU - VideoCardz.com
| 1 | 2025-05-21T03:37:50 |
https://videocardz.com/newz/amd-introduces-radeon-ai-pro-r9700-with-32gb-vram-and-navi-48-gpu
|
FOE-tan
|
videocardz.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krob4c
| false | null |
t3_1krob4c
|
/r/LocalLLaMA/comments/1krob4c/amd_introduces_radeon_ai_pro_r9700_with_32gb_vram/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'NNtjhyBvhELyDacKkR8VvFjJNX9VBk4qNvcwZp4Vmaw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?width=108&crop=smart&auto=webp&s=9fa402bc0195db12b109e930daeec78b846c55ae', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?width=216&crop=smart&auto=webp&s=edf375d0cf585b64a1c04d31fe331911ab66132d', 'width': 216}, {'height': 166, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?width=320&crop=smart&auto=webp&s=17a87f4a1c3a10098a5f3c88971fe4a66ea47757', 'width': 320}, {'height': 332, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?width=640&crop=smart&auto=webp&s=b841feff7d6ff40ed3a80eebf71f97d777e42743', 'width': 640}, {'height': 499, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?width=960&crop=smart&auto=webp&s=72209340d6ab0c98e92a13e0c8e4b60115dbc9f2', 'width': 960}, {'height': 561, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?width=1080&crop=smart&auto=webp&s=13a8313ea0b37669f612613102f52d0dde5d55dc', 'width': 1080}], 'source': {'height': 1300, 'url': 'https://external-preview.redd.it/Z-0p5utcsQQ1BDEBntEMYYd21i4AP5O7BYmojFupA5E.jpg?auto=webp&s=8b70f5ce4abf6b5eb62ceadc18f5f2dcbe7af94f', 'width': 2500}, 'variants': {}}]}
|
||
They also released the Android app with which you can interact with the new Gemma3n
| 154 |
**This is really good**
[https://ai.google.dev/edge/mediapipe/solutions/genai/llm\_inference/android](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/android)
[https://github.com/google-ai-edge/gallery](https://github.com/google-ai-edge/gallery)
| 2025-05-21T04:25:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1krp4hq/they_also_released_the_android_app_with_which_you/
|
Ordinary_Mud7430
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krp4hq
| false | null |
t3_1krp4hq
|
/r/LocalLLaMA/comments/1krp4hq/they_also_released_the_android_app_with_which_you/
| false | false |
self
| 154 |
{'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?width=108&crop=smart&auto=webp&s=1f5ff9828f4d5a72b40254bbf62a0359c206dd78', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?width=216&crop=smart&auto=webp&s=4ffd1b9528e664f4f99a144ac4680f394a35c6af', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?width=320&crop=smart&auto=webp&s=4bf8828d6f96217ee0c167b0405784ba098d2666', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?width=640&crop=smart&auto=webp&s=fa6b47b36f172ce89a17293065eeb4e0c0a0d9b0', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?width=960&crop=smart&auto=webp&s=396d5c0461f6a2b702709bbc4b6e799cd0d75db6', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?width=1080&crop=smart&auto=webp&s=6c54fc181366f37867da305c8dda90d9ce9afceb', 'width': 1080}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/FJfyR710n5wu1VMO6EJEBezHIFtvYiTfMm5tsyjNQBg.jpg?auto=webp&s=52f776198939d63c379b184433df2ec5139fe03a', 'width': 1440}, 'variants': {}}]}
|
Announced: AMD Radeon AI PRO R9700 - 32GB - available in July with ROCM support!
| 1 | 2025-05-21T04:28:30 |
https://finviz.com/news/62350/amd-introduces-new-radeon-graphics-cards-and-ryzen-threadripper-processors-at-computex-2025
|
RnRau
|
finviz.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krp6ik
| false | null |
t3_1krp6ik
|
/r/LocalLLaMA/comments/1krp6ik/announced_amd_radeon_ai_pro_r9700_32gb_available/
| false | false |
default
| 1 | null |
|
Small model recommendations?
| 1 |
[removed]
| 2025-05-21T04:44:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1krpfvj/small_model_recommendations/
|
NonYa_exe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krpfvj
| false | null |
t3_1krpfvj
|
/r/LocalLLaMA/comments/1krpfvj/small_model_recommendations/
| false | false |
self
| 1 | null |
ByteDance released BAGEL-7b-MoT - Unified Model for Multimodal Understanding and Generation
| 1 |
We present BAGEL, an open‑source multimodal foundation model with 7B active parameters (14B total) trained on large‑scale interleaved multimodal data.
BAGEL outperforms the current top‑tier open‑source VLMs like Qwen2.5-VL and InternVL-2.5 on standard multimodal understanding leaderboards, and delivers text‑to‑image quality that is competitive with strong specialist generators such as SD3.
Moreover, BAGEL demonstrates superior qualitative results in classical image‑editing scenarios than the leading open-source models. More importantly, it extends to free-form visual manipulation, multiview synthesis, and world navigation, capabilities that constitute "world-modeling" tasks beyond the scope of previous image-editing models.
Model: https://huggingface.co/ByteDance-Seed/BAGEL-7B-MoT
GitHub: https://github.com/bytedance-seed/BAGEL
Video Demo: https://x.com/_akhaliq/status/1925021633657401517?s=46
| 2025-05-21T04:45:49 |
ResearchCrafty1804
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krpgyu
| false | null |
t3_1krpgyu
|
/r/LocalLLaMA/comments/1krpgyu/bytedance_released_bagel7bmot_unified_model_for/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'jaiuXiopBf6LO0AGzheQknyHI5KiytlTT6UuAUTmN7A', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?width=108&crop=smart&auto=webp&s=a47f7e71f92de6004a322738970c48690b18fec5', 'width': 108}, {'height': 246, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?width=216&crop=smart&auto=webp&s=b15cfe91156aaa2c165904e31e8732b9c6afd962', 'width': 216}, {'height': 365, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?width=320&crop=smart&auto=webp&s=4395273919ef9196c6cd50e34352589f2d045b54', 'width': 320}, {'height': 731, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?width=640&crop=smart&auto=webp&s=3e54891ba64a2c3a92add36c790bd5a523afeaa7', 'width': 640}, {'height': 1097, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?width=960&crop=smart&auto=webp&s=c4285a77540ab6f4ad5a2fb36cc20f3f8e8ab1f8', 'width': 960}, {'height': 1234, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?width=1080&crop=smart&auto=webp&s=71ffb701e0f450b189c60d836fa536801a86359d', 'width': 1080}], 'source': {'height': 3429, 'url': 'https://preview.redd.it/ocpat33xe22f1.jpeg?auto=webp&s=b1eb49a7cad95121356807b428d0ede37e5cc461', 'width': 3000}, 'variants': {}}]}
|
||
Elarablation: A promising training method for surgically removing slop
| 1 |
[removed]
| 2025-05-21T04:47:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1krphr6/elarablation_a_promising_training_method_for/
|
Incognit0ErgoSum
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krphr6
| false | null |
t3_1krphr6
|
/r/LocalLLaMA/comments/1krphr6/elarablation_a_promising_training_method_for/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'WAMoftg6eo6-KIK90sJKB0iuIRnovmTflcLm9316M7c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?width=108&crop=smart&auto=webp&s=3dfe81b83d18416745961e2c45ce00022e40be82', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?width=216&crop=smart&auto=webp&s=104f5fcfe75438045088c6911347e6ec50552cda', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?width=320&crop=smart&auto=webp&s=826d31dfd04982998761ab7ea07ec08ab68fed7a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?width=640&crop=smart&auto=webp&s=26360cc0f265f037051cdf2d83cc826d2abdb1ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?width=960&crop=smart&auto=webp&s=98e39a153b4dc4b3d67c06359a38a482bc3cdff2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?width=1080&crop=smart&auto=webp&s=a6df7069fbf4cfffde833e73636108b8e2545555', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CXQgtF5m04ktndMsSOF0LMAW0PnqCOKHc-Pov9lDYOw.jpg?auto=webp&s=6cc4dec4c02110ae84b80cb055cadb8c175a3e46', 'width': 1200}, 'variants': {}}]}
|
The uncensored open source Chinese AI on its way to deliver me 4 drawings of anime titties in the tentacle dungeon on a random Sunday
| 1 | 2025-05-21T04:55:53 |
https://v.redd.it/jprpbpwkg22f1
|
Oldkingcole225
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krpmtf
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jprpbpwkg22f1/DASHPlaylist.mpd?a=1750395366%2CMTBhZjI1MGYyYjYwY2Q3ZDViMzA2NjAwY2ZmZTZjNmU1YjM5MjU2NjFjZjIzN2NhMTQzODBmZjUyZGU1NzNjNQ%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/jprpbpwkg22f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jprpbpwkg22f1/HLSPlaylist.m3u8?a=1750395366%2CYWNmNGFjYTRiNTI4ZWU5YzJkOTc1MzU2NjBjZmM5OWJjMjIxNjIyZDg2MjlhZjY3OTMzNDRkZDZhYTM5OTAyYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jprpbpwkg22f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1350}}
|
t3_1krpmtf
|
/r/LocalLLaMA/comments/1krpmtf/the_uncensored_open_source_chinese_ai_on_its_way/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?width=108&crop=smart&format=pjpg&auto=webp&s=155bcb9c4c11d64e6dc923f400e0289c8abe2d93', 'width': 108}, {'height': 172, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?width=216&crop=smart&format=pjpg&auto=webp&s=d33c0d25aa800cc9de11316d7a02fd8ea00858c9', 'width': 216}, {'height': 256, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?width=320&crop=smart&format=pjpg&auto=webp&s=4d806eaa8ed7ca99e7b26c5b44f26ea5fa3e225f', 'width': 320}, {'height': 512, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?width=640&crop=smart&format=pjpg&auto=webp&s=2cb1a08666d1a6d0b1fbc1f7edc0d2ae10e6ee45', 'width': 640}, {'height': 768, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?width=960&crop=smart&format=pjpg&auto=webp&s=6874064c9452f78c1d7aea552b5396a0396dd507', 'width': 960}, {'height': 864, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?width=1080&crop=smart&format=pjpg&auto=webp&s=aa0c147eaa0e6dacab2d098960d95dcf562ad9ed', 'width': 1080}], 'source': {'height': 1536, 'url': 'https://external-preview.redd.it/eDlnbWFwd2tnMjJmMU5aj1qEAhuhddf9cVoSoGrQtmpejIALzVfYonHBYn7P.png?format=pjpg&auto=webp&s=cc77596ba5617def8c609958c77e723e1ba439c8', 'width': 1920}, 'variants': {}}]}
|
||
Gemma 3N E4B and Gemini 2.5 Flash Tested
| 58 |
[https://www.youtube.com/watch?v=lEtLksaaos8](https://www.youtube.com/watch?v=lEtLksaaos8)
Compared Gemma 3n e4b against Qwen 3 4b. Mixed results. Gemma does great on classification, matches Qwen 4B on Structured JSON extraction. Struggles with coding and RAG.
Also compared Gemini 2.5 Flash to Open AI 4.1. Altman should be worried. Cheaper than 4.1 mini, better than full 4.1.
# Harmful Question Detector
|Model|Score|
|:-|:-|
|gemini-2.5-flash-preview-05-20|100.00|
|gemma-3n-e4b-it:free|100.00|
|gpt-4.1|100.00|
|qwen3-4b:free|70.00|
# Named Entity Recognition New
|Model|Score|
|:-|:-|
|gemini-2.5-flash-preview-05-20|95.00|
|gpt-4.1|95.00|
|gemma-3n-e4b-it:free|60.00|
|qwen3-4b:free|60.00|
# Retrieval Augmented Generation Prompt
|Model|Score|
|:-|:-|
|gemini-2.5-flash-preview-05-20|97.00|
|gpt-4.1|95.00|
|qwen3-4b:free|83.50|
|gemma-3n-e4b-it:free|62.50|
# SQL Query Generator
|Model|Score|
|:-|:-|
|gemini-2.5-flash-preview-05-20|95.00|
|gpt-4.1|95.00|
|qwen3-4b:free|75.00|
|gemma-3n-e4b-it:free|65.00|
| 2025-05-21T05:10:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1krpvwj/gemma_3n_e4b_and_gemini_25_flash_tested/
|
Ok-Contribution9043
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krpvwj
| false | null |
t3_1krpvwj
|
/r/LocalLLaMA/comments/1krpvwj/gemma_3n_e4b_and_gemini_25_flash_tested/
| false | false |
self
| 58 |
{'enabled': False, 'images': [{'id': 'VoaBhlaaq-1kGgnmFODs7H3HjGpEWlQe10_B4HRUY0Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MhjdLRw38-JhAexb7WrezfxmIFUJZteL_2Hndh-5Zw0.jpg?width=108&crop=smart&auto=webp&s=9df005187516f363d506cbf093904ea5a2a612a0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/MhjdLRw38-JhAexb7WrezfxmIFUJZteL_2Hndh-5Zw0.jpg?width=216&crop=smart&auto=webp&s=c24864d6c5922380f8fce104f65122e1897c8eea', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/MhjdLRw38-JhAexb7WrezfxmIFUJZteL_2Hndh-5Zw0.jpg?width=320&crop=smart&auto=webp&s=6d8515cc14db5d5f6ea4a55c2ea6eff08b432dc8', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/MhjdLRw38-JhAexb7WrezfxmIFUJZteL_2Hndh-5Zw0.jpg?auto=webp&s=4b0e856ba8fe550a0cfa2e3926bf537fce546727', 'width': 480}, 'variants': {}}]}
|
Gemini 2.5 Pro's Secret uncovered! /s
| 1 | 2025-05-21T05:51:01 |
topazsparrow
|
i.imgur.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krqigo
| false | null |
t3_1krqigo
|
/r/LocalLLaMA/comments/1krqigo/gemini_25_pros_secret_uncovered_s/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'XXqPkFDWjPOMT9TdCR8sqaPr8ppD1AE1ChX2ViqCAgk', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/xLXBPlO170C0JFdOjHvaXP4EYzyKZB2NWZbfNJjJa7s.jpg?width=108&crop=smart&auto=webp&s=5a35bd2b7bd513a8d3e48d16c94d3434df908784', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/xLXBPlO170C0JFdOjHvaXP4EYzyKZB2NWZbfNJjJa7s.jpg?width=216&crop=smart&auto=webp&s=75b15370784691e27e4b9a81c7d2299911088ec6', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/xLXBPlO170C0JFdOjHvaXP4EYzyKZB2NWZbfNJjJa7s.jpg?width=320&crop=smart&auto=webp&s=3a7ac3fb5f7b1ef70c115e54de5957d050e95f5e', 'width': 320}], 'source': {'height': 1306, 'url': 'https://external-preview.redd.it/xLXBPlO170C0JFdOjHvaXP4EYzyKZB2NWZbfNJjJa7s.jpg?auto=webp&s=9fccaa744b7edfe309ac4827a6841d412130b140', 'width': 585}, 'variants': {}}]}
|
|||
AMD Radeon™ AI PRO R9700 33GB 256 bit for
| 1 |
[removed]
| 2025-05-21T06:01:00 |
https://ir.amd.com/news-events/press-releases/detail/1253/amd-introduces-new-radeon-graphics-cards-and-ryzen-threadripper-processors-at-computex-2025
|
Rachados22x2
|
ir.amd.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krqnz3
| false | null |
t3_1krqnz3
|
/r/LocalLLaMA/comments/1krqnz3/amd_radeon_ai_pro_r9700_33gb_256_bit_for/
| false | false |
default
| 1 | null |
How to get the most from llama.cpp's iSWA support
| 51 |
[https://github.com/ggml-org/llama.cpp/pull/13194](https://github.com/ggml-org/llama.cpp/pull/13194)
Thanks to our gguf god ggerganov, we finally have iSWA support for gemma 3 models that significantly reduces KV cache usage. Since I participated in the pull discussion, I would like to offer tips to get the most out of this update.
Previously, by default, the fp16 KV cache for the 27b model at 64k context was 31744MiB. Now, with the default batch\_size=2048, the fp16 KV cache becomes 6368MiB. That is a 79.9% reduction.
Group Query Attention KV cache (i.e. the original implementation):
|context|4k|8k|16k|32k|64k|128k|
|:-|:-|:-|:-|:-|:-|:-|
|gemma-3-27b|1984MB|3968MB|7936MB|15872MB|31744MB|63488MB|
|gemma-3-12b|1536MB|3072MB|6144MB|12288MB|24576MB|49152MB|
|gemma-3-4b|544MB|1088MB|2176MB|4352MB|8704MB|17408MB|
The new implementation splits the KV cache into a Local Attention KV cache and a Global Attention KV cache, detailed in the following two tables. The overall KV cache use will be the sum of the two. The Local Attn KV depends on the batch\_size only, while the Global Attn KV depends on the context length.
Since the local attention KV depends on the batch\_size only, you can reduce the batch\_size (via the -b switch) from 2048 to 64 (values lower than this will just be set to 64) to further reduce the KV cache. Originally, it is 5120+1248=6368MiB. Now it is 5120+442=5562MiB. The memory saving will now be 82.48%. The cost of reducing batch\_size is reduced prompt processing speed. Based on my llama-bench pp512 test, it is only around a 20% reduction when you go from 2048 to 64.
Local Attention KV cache size valid at any context:
|batch|64|512|2048|8192|
|:-|:-|:-|:-|:-|
|kv\_size|1088|1536|3072|9216|
|gemma-3-27b|442MB|624MB|1248MB|3744MB|
|gemma-3-12b|340MB|480MB|960MB|2880MB|
|gemma-3-4b|123.25MB|174MB|348MB|1044MB|
Global Attention KV cache:
|context|4k|8k|16k|32k|64k|128k|
|:-|:-|:-|:-|:-|:-|:-|
|gemma-3-27b|320MB|640MB|1280MB|2560MB|5120MB|10240MB|
|gemma-3-12b|256MB|512MB|1024MB|2048MB|4096MB|8192MB|
|gemma-3-4b|80MB|160MB|320MB|640MB|1280MB|2560MB|
If you only have one 24GB card, you can use the default batch\_size 2048 and run 27b qat q4\_0 at 64k, then it should be 15.6GB model + 5GB global KV + 1.22GB local KV = 21.82GB. Previously, that would take 48.6GB total.
If you want to run it at even higher context, you can use KV quantization (lower accuracy) and/or reduce batch size (slower prompt processing). Reducing batch size to the minimum 64 should allow you to run 96k (total 23.54GB). KV quant alone at Q8\_0 should allow you to run 128k at 21.57GB.
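If you want to budget this yourself, here's a tiny back-of-the-envelope helper with the 27b numbers from the tables above hard-coded (the 96k global-KV value is interpolated between 64k and 128k; everything else is lifted straight from this post):

```python
# Rough VRAM budget for gemma-3-27b qat q4_0 with the new iSWA KV cache.
MODEL_GB = 15.6  # 27b qat q4_0 weights

GLOBAL_KV_MB = {"32k": 2560, "64k": 5120, "96k": 7680, "128k": 10240}  # grows with context
LOCAL_KV_MB = {64: 442, 512: 624, 2048: 1248, 8192: 3744}              # grows with batch size

def budget_gb(context: str, batch: int) -> float:
    return MODEL_GB + (GLOBAL_KV_MB[context] + LOCAL_KV_MB[batch]) / 1024

print(budget_gb("64k", 2048))  # ~21.8 GB, the single-24GB-card example above
print(budget_gb("96k", 64))    # ~23.5 GB with the minimum batch size
```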
So we now finally have a viable long context local LLM that can run with a single card. Have fun summarizing long pdfs with llama.cpp!
| 2025-05-21T06:38:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1krr7hn/how_to_get_the_most_from_llamacpps_iswa_support/
|
Ok_Warning2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krr7hn
| false | null |
t3_1krr7hn
|
/r/LocalLLaMA/comments/1krr7hn/how_to_get_the_most_from_llamacpps_iswa_support/
| false | false |
self
| 51 |
{'enabled': False, 'images': [{'id': 'B6WBFnMrqminMd4L23X4ODcF0-AjtGNAA2R3T-n0aSE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=108&crop=smart&auto=webp&s=335d1405ddcc38bcb3183c81a033edea2551c0f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=216&crop=smart&auto=webp&s=70e8265b574efa0c7a329528dcae1a83809afd8c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=320&crop=smart&auto=webp&s=e3258ed9c9ea092283ebc6f2cef4c85e8beb843b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=640&crop=smart&auto=webp&s=008492e7f7e22375b2b68b00c17a5c58161e9c7f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=960&crop=smart&auto=webp&s=9074479bf3ee20a43667b6663f8f497f8a044136', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?width=1080&crop=smart&auto=webp&s=12962c507ca303356ec93a34ed377ee1661b9bd2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6YDoIZBg8Dosn4BpZfZXVZmTz4i1UVYCrX4xOFJEIZA.jpg?auto=webp&s=b71745d71c13d9e53f352f566a07f7aa8d40d7d6', 'width': 1200}, 'variants': {}}]}
|
The P100 isn't dead yet - Qwen3 benchmarks
| 35 |
I decided to test how fast I could run Qwen3-14B-GPTQ-Int4 on a P100 versus Qwen3-14B-GPTQ-AWQ on a 3090.
I found that it was quite competitive, around 45 tok/s on the P100 with 150W power limit vs around 54 tok/s on the 3090 with a PL of 260W.
So if you're willing to eat the idle power cost, a single P100 is a nice way to run a decent model at good speeds.
| 2025-05-21T07:12:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1krrp2f/the_p100_isnt_dead_yet_qwen3_benchmarks/
|
DeltaSqueezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krrp2f
| false | null |
t3_1krrp2f
|
/r/LocalLLaMA/comments/1krrp2f/the_p100_isnt_dead_yet_qwen3_benchmarks/
| false | false |
self
| 35 | null |
Laptop Recommendation for Running Local LLMs (Budget: $3-6K)
| 1 |
[removed]
| 2025-05-21T07:16:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1krrr6a/laptop_recommendation_for_running_local_llms/
|
0800otto
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krrr6a
| false | null |
t3_1krrr6a
|
/r/LocalLLaMA/comments/1krrr6a/laptop_recommendation_for_running_local_llms/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'A98zfBAqasSD_2l9sVhqmoP21KuMRBXNPkfr72PsOtE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nX_VrkyOLGhZai4Jpn4n5F3HDLKku7PnnzpSXqd5fGw.jpg?width=108&crop=smart&auto=webp&s=451c059e72c9aca7d7e833c516c776e076b4ee08', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nX_VrkyOLGhZai4Jpn4n5F3HDLKku7PnnzpSXqd5fGw.jpg?width=216&crop=smart&auto=webp&s=87cd6df313aec5bebcc71af686d0833e5ffe3e29', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nX_VrkyOLGhZai4Jpn4n5F3HDLKku7PnnzpSXqd5fGw.jpg?width=320&crop=smart&auto=webp&s=26112049f13c357409d22b0cfd949565ba30a167', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nX_VrkyOLGhZai4Jpn4n5F3HDLKku7PnnzpSXqd5fGw.jpg?auto=webp&s=3acb21ded5162a001f86a6631d63d6d901d9172a', 'width': 480}, 'variants': {}}]}
|
Laptop for RAG local LLM (3-6k budget)
| 1 |
[removed]
| 2025-05-21T07:18:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1krrsft/laptop_for_rag_local_llm_36k_budget/
|
0800otto
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krrsft
| false | null |
t3_1krrsft
|
/r/LocalLLaMA/comments/1krrsft/laptop_for_rag_local_llm_36k_budget/
| false | false |
self
| 1 | null |
AMD launches the Radeon AI Pro R9700
| 1 |
[removed]
| 2025-05-21T07:19:53 |
https://www.tomshardware.com/pc-components/gpus/amd-launches-radeon-ai-pro-r9700-to-challenge-nvidias-ai-market-dominance
|
PearSilicon
|
tomshardware.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krrt3p
| false | null |
t3_1krrt3p
|
/r/LocalLLaMA/comments/1krrt3p/amd_launches_the_radeon_ai_pro_r9700/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dugXIowyXggOj3Jqd_IH8XXRuO2gY2YJRf6cWa3VSZU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=108&crop=smart&auto=webp&s=72b8bf837b0ec198650353a22a593fd161a775d6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=216&crop=smart&auto=webp&s=306161a5061b68be7c3358fbd74bb9659694ef32', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=320&crop=smart&auto=webp&s=234575304ada2e31c7f3805ac1c0e0c411dbfdb1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=640&crop=smart&auto=webp&s=6d8d11584455880b27cf3fced78203858b342a00', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=960&crop=smart&auto=webp&s=d52c3261dab0840cb4a585609b537f51dd7adcdf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=1080&crop=smart&auto=webp&s=16b0ed8029f74e28b1b66402b62abf4b09dfdb30', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?auto=webp&s=3dbee8886e2934308c61d434301bdfb7210ecd44', 'width': 3840}, 'variants': {}}]}
|
|
Are there any recent 14b or less MoE models?
| 13 |
There are quite a few from 2024, but I was wondering if there are any more recent ones. There's Qwen3 30B A3B, but it's a bit large and requires a lot of VRAM.
| 2025-05-21T07:30:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1krryjx/are_there_any_recent_14b_or_less_moe_models/
|
GreenTreeAndBlueSky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krryjx
| false | null |
t3_1krryjx
|
/r/LocalLLaMA/comments/1krryjx/are_there_any_recent_14b_or_less_moe_models/
| false | false |
self
| 13 | null |
Best distro for ollama?
| 1 |
[removed]
| 2025-05-21T07:35:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1krs0m6/best_distro_for_ollama/
|
WouterC
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krs0m6
| false | null |
t3_1krs0m6
|
/r/LocalLLaMA/comments/1krs0m6/best_distro_for_ollama/
| false | false |
self
| 1 | null |
Why nobody mentioned "Gemini Diffusion" here? It's a BIG deal
| 823 |
Google has the capacity and capability to change the standard for LLMs from autoregressive generation to diffusion generation.
Google showed their language diffusion model (Gemini Diffusion, visit the linked page for more info and benchmarks) yesterday/today (depending on your timezone), and it was extremely fast and (according to them) only half the size of similarly performing models. They showed benchmark scores of the diffusion model compared to Gemini 2.0 Flash-Lite, which is a tiny model already.
I know, it's LocalLLaMA, but if Google can prove that diffusion models work at scale, they are a far more viable option for local inference, given the speed gains.
And let's not forget that, since diffusion LLMs process the whole text at once iteratively, it doesn't need KV-Caching. Therefore, it could be more memory efficient. It also has "test time scaling" by nature, since the more passes it is given to iterate, the better the resulting answer, without needing CoT (It can do it in latent space, even, which is much better than discrete tokenspace CoT).
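For intuition, here's a toy sketch of the iterative parallel-denoising loop that masked-diffusion LMs use: start from a fully masked canvas, score every position in one forward pass, and commit the most confident tokens each pass. This is a generic MaskGIT/LLaDA-style illustration, not Gemini Diffusion's actual algorithm, and `model` is a stand-in for any network that returns per-position logits over the vocabulary.

```python
import torch

def diffusion_generate(model, prompt_ids, gen_len=64, num_passes=8, mask_id=0):
    # Prompt followed by a fixed-length, fully masked canvas.
    canvas = torch.full((1, gen_len), mask_id, dtype=torch.long)
    seq = torch.cat([prompt_ids, canvas], dim=1)
    gen = slice(prompt_ids.shape[1], seq.shape[1])
    for step in range(num_passes):
        logits = model(seq)                        # one full forward pass, no KV cache
        conf, pred = logits[:, gen].softmax(-1).max(-1)
        masked = seq[:, gen] == mask_id
        if not masked.any():
            break
        # Commit the most confident of the still-masked positions this pass.
        k = max(1, int(masked.sum().item() * (step + 1) / num_passes))
        score = torch.where(masked, conf, torch.full_like(conf, -1.0))
        idx = score.topk(k, dim=-1).indices
        seq[:, gen].scatter_(1, idx, pred.gather(1, idx))
    return seq
```

More passes mean more refinement of the same canvas, which is where the natural test-time scaling comes from.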
What do you guys think? Is it a good thing for the Local-AI community in the long run that Google is R&D-ing a fresh approach? They’ve got massive resources. They can prove if diffusion models work at scale (bigger models) in future.
(PS: I used a (of course, ethically sourced, local) LLM to correct grammar and structure the text, otherwise it'd be a wall of text)
| 2025-05-21T07:42:08 |
https://deepmind.google/models/gemini-diffusion/
|
QuackerEnte
|
deepmind.google
| 1970-01-01T00:00:00 | 0 |
{}
|
1krs40j
| false | null |
t3_1krs40j
|
/r/LocalLLaMA/comments/1krs40j/why_nobody_mentioned_gemini_diffusion_here_its_a/
| false | false | 823 |
{'enabled': False, 'images': [{'id': 'GQPeUtrAeWmM_BLZUGcB2r83lDFScVSP2eZwE671aD0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?width=108&crop=smart&auto=webp&s=b253601cdd4a4d2ead67bffbc1a831828a43d0b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?width=216&crop=smart&auto=webp&s=28a3e5f24ea6665bba47b63e4bc753199ed0c716', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?width=320&crop=smart&auto=webp&s=d83fd47091071aaa373e26699430f15620200781', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?width=640&crop=smart&auto=webp&s=1b69152f4cc7971773a476232dcff0de3690e29e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?width=960&crop=smart&auto=webp&s=85aaa88aa19bef74222eb9bbe417ab99748690fa', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?width=1080&crop=smart&auto=webp&s=e5041f85d6f4f3bfa70326d98de83f7e2dda07ce', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/dFWSMq_9jHPdMVGchDlKvt7rzCFhQEFmxZm8XKq654M.jpg?auto=webp&s=e434f1105c0a61d1da44607975d40e2117f7c177', 'width': 1200}, 'variants': {}}]}
|
|
What is the estimated token/sec for Nvidia DGX Spark
| 9 |
What would be the estimated tokens/sec for the Nvidia DGX Spark for popular models such as Gemma 3 27B, Qwen3 30B-A3B, etc.? I get about 25 t/s and 100 t/s respectively on my 3090. They are claiming 1000 TOPS for FP4. What existing GPU would this be comparable to? I want to understand if there is an advantage to buying this thing vs investing in a 5090/Pro 6000, etc.
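One rough way to ballpark it: single-stream decode is mostly memory-bandwidth-bound, so you can scale known 3090 numbers by the bandwidth ratio. A quick sketch follows; the ~273 GB/s (Spark, LPDDR5X) and ~936 GB/s (3090) figures are assumptions from public spec sheets, and this ignores compute and MoE overheads.

```python
SPARK_BW_GBS, RTX3090_BW_GBS = 273.0, 936.0  # assumed memory bandwidths (GB/s)

measured_on_3090 = {"gemma3 27b": 25.0, "qwen3 30b-a3b": 100.0}  # t/s from the post
for model, tps in measured_on_3090.items():
    est = tps * SPARK_BW_GBS / RTX3090_BW_GBS
    print(f"{model}: roughly {est:.0f} t/s expected on DGX Spark")
```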
| 2025-05-21T07:55:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1krsast/what_is_the_estimated_tokensec_for_nvidia_dgx/
|
presidentbidden
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krsast
| false | null |
t3_1krsast
|
/r/LocalLLaMA/comments/1krsast/what_is_the_estimated_tokensec_for_nvidia_dgx/
| false | false |
self
| 9 | null |
What If LLM Had Full Access to Your Linux Machine👩💻? I Tried It, and It's Insane🤯!
| 0 |
[Github Repo](https://github.com/ishanExtreme/vox-bot)
I tried giving **full access** of my *keyboard* and *mouse* to **GPT-4**, and the result was amazing!!!
I used Microsoft's **OmniParser** to get actionables (buttons/icons) on the screen as bounding boxes, then **GPT-4V** to check whether a given action has been completed.
In the video above, I didn't touch my keyboard or mouse. I tried the following commands:
- Please open calendar
- Play song bonita on youtube
- Shutdown my computer
Architecture, steps to run the application, and technology used are in the [GitHub repo](https://github.com/ishanExtreme/vox-bot).
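For anyone curious how the pieces fit together, here is a stripped-down sketch of the perceive-act-verify loop (illustrative only, not the repo's actual code; `parse_actionables`, `ask_llm`, and `verify_done` are placeholders standing in for the OmniParser and GPT-4V calls):

```python
import pyautogui  # pip install pyautogui — drives the real mouse/keyboard

def center(bbox):
    # bbox = (x1, y1, x2, y2) in screen pixels
    return (bbox[0] + bbox[2]) // 2, (bbox[1] + bbox[3]) // 2

def run_task(instruction: str, max_steps: int = 10) -> bool:
    for _ in range(max_steps):
        boxes = parse_actionables(pyautogui.screenshot())    # OmniParser-style bounding boxes
        action = ask_llm(instruction, boxes)                  # e.g. {"click": 3} or {"type": "bonita"}
        if "click" in action:
            pyautogui.click(*center(boxes[action["click"]]["bbox"]))
        elif "type" in action:
            pyautogui.typewrite(action["type"], interval=0.02)
        if verify_done(instruction, pyautogui.screenshot()):  # GPT-4V-style completion check
            return True
    return False
```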
| 2025-05-21T07:59:40 |
https://v.redd.it/xry1dcred32f1
|
Responsible_Soft_429
|
/r/LocalLLaMA/comments/1krscok/what_if_llm_had_full_access_to_your_linux_machine/
| 1970-01-01T00:00:00 | 0 |
{}
|
1krscok
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/xry1dcred32f1/DASHPlaylist.mpd?a=1750535985%2CZmU0MmQxZmRiZDU0ZGU1NDBlMGE1ZDNiYWY4ZjE4ZTljYTE0Y2E2MjIwMjA4ZjU5MmYyMjkyYWY2ODdlODgyZg%3D%3D&v=1&f=sd', 'duration': 139, 'fallback_url': 'https://v.redd.it/xry1dcred32f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/xry1dcred32f1/HLSPlaylist.m3u8?a=1750535985%2COThkOGExMTAyMzZiNDNkYTkyNGU4YmM3ODMyNzE5Y2IwNDY4MmMzMTQ3MjFlYzMwMGE4MjU2MjkyYWQyYTk1YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/xry1dcred32f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1krscok
|
/r/LocalLLaMA/comments/1krscok/what_if_llm_had_full_access_to_your_linux_machine/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?width=108&crop=smart&format=pjpg&auto=webp&s=51eff4965601ebe9dd74ff14b517974232e33d90', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?width=216&crop=smart&format=pjpg&auto=webp&s=3a38ebd4504fed7ed456c206e9be5480de7fcf46', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?width=320&crop=smart&format=pjpg&auto=webp&s=ff16fa831cb58549bb9e8108d1d96becdeb380ab', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?width=640&crop=smart&format=pjpg&auto=webp&s=1975a03acd4aa4c6221ece92fd9d0be2b8cd6d8d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?width=960&crop=smart&format=pjpg&auto=webp&s=35a984f2179d0c6411886a53d38de4d35e0ceced', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9d285251d3587658d2cd07bdb12e77aed9b80516', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eWVnNzBlcmVkMzJmMaMjl9qjU4D_xLFsmUr9EdyatKtYedUK7-OuwVDcDZRa.png?format=pjpg&auto=webp&s=b971c817dbda9658a5ae50a59e9d8b87c736ebec', 'width': 1920}, 'variants': {}}]}
|
|
New threadripper has 8 memory channels. Will it be an affordable local LLM option?
| 92 |
https://www.theregister.com/2025/05/21/amd_threadripper_radeon_workstation/
I'm always on the lookout for cheap local inference. I noticed the new threadrippers will move from 4 to 8 channels.
8 channels of DDR5 at 6400 MT/s is about 409.6 GB/s (see the quick calculation below).
That's on par with mid-range GPUs, on a non-server chip.
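The arithmetic, if you want to plug in other configurations (quick sketch; DDR5-6400 and Q4 weights are assumptions):

```python
def ddr5_bandwidth_gb_s(channels: int, mt_s: int, bytes_per_channel: int = 8) -> float:
    # Each DDR5 channel is 64 bits (8 bytes) wide: bandwidth = channels * transfers/s * bytes.
    return channels * mt_s * 1e6 * bytes_per_channel / 1e9

bw = ddr5_bandwidth_gb_s(channels=8, mt_s=6400)   # ~409.6 GB/s
ceiling_tps = bw / (70e9 * 0.5 / 1e9)             # 70B dense model at ~0.5 bytes/param (Q4)
print(f"{bw:.1f} GB/s -> ~{ceiling_tps:.0f} t/s decode ceiling for a 70B Q4 model")
```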
| 2025-05-21T08:14:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1krsjpb/new_threadripper_has_8_memory_channels_will_it_be/
|
theKingOfIdleness
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krsjpb
| false | null |
t3_1krsjpb
|
/r/LocalLLaMA/comments/1krsjpb/new_threadripper_has_8_memory_channels_will_it_be/
| false | false |
self
| 92 |
{'enabled': False, 'images': [{'id': '9HEIj9JQYwpxjGLJeymdFTWkHgflcaru4ucrxb8Xgts', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?width=108&crop=smart&auto=webp&s=3db195115d4f9d130d2c4d0a06684e6e92db47f9', 'width': 108}, {'height': 86, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?width=216&crop=smart&auto=webp&s=fdc43e988994f3d6043351e09ebe6859db7af938', 'width': 216}, {'height': 127, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?width=320&crop=smart&auto=webp&s=63974d443169beb73b0639ebf5ccfbf7e07a12d2', 'width': 320}, {'height': 254, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?width=640&crop=smart&auto=webp&s=a5188dbb57fb9036291951e8339fc77d80d8919e', 'width': 640}, {'height': 382, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?width=960&crop=smart&auto=webp&s=6f332dc45d7ec36fa3e7ffce522053e444e8726c', 'width': 960}, {'height': 430, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?width=1080&crop=smart&auto=webp&s=d24c2ade2c277139523b5c2d466c84102062ee18', 'width': 1080}], 'source': {'height': 1000, 'url': 'https://external-preview.redd.it/kHCiWt5gI3zsT_-Y52ZSbkRxM0FdGAmjDY4tXsZ6u4Q.jpg?auto=webp&s=e2c7fe8539c8e75b36db4db64c2cc3915dd269cb', 'width': 2510}, 'variants': {}}]}
|
Beginner questions about local models
| 3 |
Hello, I'm a complete beginner on this subject, but I have a few questions about local models. Currently, I'm using OpenAI for light data analysis, which I access via API. The biggest challenge is cleaning the data of personal and identifiable information before I can give it to OpenAI for processing.
* Would a local model fix the data sanitization issues, and is it trivial to keep the data only on the server where I'd run the local model?
* What would be the most cost-effective way to test this, i.e., what kind of hardware should I purchase and what type of model should I consider?
* Can I manage my tests if I buy a Mac Mini with 16GB of shared memory and install some local AI model on it, or is the Mac Mini far too underpowered?
| 2025-05-21T08:34:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1krstei/beginner_questions_about_local_models/
|
Flaky-Character-9383
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krstei
| false | null |
t3_1krstei
|
/r/LocalLLaMA/comments/1krstei/beginner_questions_about_local_models/
| false | false |
self
| 3 | null |
NVIDIA H200 or the new RTX Pro Blackwell for a RAG chatbot?
| 5 |
Hey guys, I'd appreciate your help with a dilemma I'm facing. I want to build a server for a RAG-based LLM chatbot for a new website, where users would ask for product recommendations and get answers based on my database with laboratory-tested results as a knowledge base.
I plan to build the project locally, and once it's ready, migrate it to a data center.
My budget is $50,000 USD for the entire LLM server setup, and I'm torn between getting 1x H200 or 4x Blackwell RTX Pro 6000 cards. Or maybe you have other suggestions?
| 2025-05-21T08:37:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1krsv2u/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/
|
snaiperist
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krsv2u
| false | null |
t3_1krsv2u
|
/r/LocalLLaMA/comments/1krsv2u/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/
| false | false |
self
| 5 | null |
AMD launches the Radeon AI Pro R9700
| 1 |
[removed]
| 2025-05-21T09:11:19 |
https://www.tomshardware.com/pc-components/gpus/amd-launches-radeon-ai-pro-r9700-to-challenge-nvidias-ai-market-dominance
|
PearSilicon
|
tomshardware.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtbse
| false | null |
t3_1krtbse
|
/r/LocalLLaMA/comments/1krtbse/amd_launches_the_radeon_ai_pro_r9700/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dugXIowyXggOj3Jqd_IH8XXRuO2gY2YJRf6cWa3VSZU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=108&crop=smart&auto=webp&s=72b8bf837b0ec198650353a22a593fd161a775d6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=216&crop=smart&auto=webp&s=306161a5061b68be7c3358fbd74bb9659694ef32', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=320&crop=smart&auto=webp&s=234575304ada2e31c7f3805ac1c0e0c411dbfdb1', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=640&crop=smart&auto=webp&s=6d8d11584455880b27cf3fced78203858b342a00', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=960&crop=smart&auto=webp&s=d52c3261dab0840cb4a585609b537f51dd7adcdf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?width=1080&crop=smart&auto=webp&s=16b0ed8029f74e28b1b66402b62abf4b09dfdb30', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/lz6r5MBRJrsrS2eOFiJA2BI45RPBSqSmSVW_PVbtojA.jpg?auto=webp&s=3dbee8886e2934308c61d434301bdfb7210ecd44', 'width': 3840}, 'variants': {}}]}
|
|
Sex tips from abliterated models are funny
| 0 |
If you use Josiefied-Qwen3-8B-abliterated, for example, and ask it how to give the perfect blowjob, you get a lot of advice about clitoral stimulation 😊
| 2025-05-21T09:15:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1krtdxr/sex_tips_from_abliterated_models_are_funny/
|
MrMrsPotts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtdxr
| false | null |
t3_1krtdxr
|
/r/LocalLLaMA/comments/1krtdxr/sex_tips_from_abliterated_models_are_funny/
| false | false |
nsfw
| 0 | null |
Preferred models for Note Summarisation
| 2 |
I'm, painfully, trying to make a note summarisation prompt flow to help expand my personal knowledge management.
What are people's favourite models for handling ingesting and structuring **badly** written knowledge?
I'm trying Qwen3 32B IQ4_XS on an RX 7900 XTX with flash attention in LM Studio, but so far it feels like I need to get it to use CoT for effective summarisation, and I find it lazy about including the full list of information rather than just 5-7 points.
I feel like a non-CoT model such as Mistral 3.1 might be more appropriate, but I've heard some bad things about its hallucination rate. I tried GLM-4 a little, but it tries to solve everything with code, so I might have to system-prompt that out, which is a drastic change for me to evaluate shortly.
So, what are your recommendations for open-source, work-related note summarisation to help populate a Zettelkasten, given 24GB of VRAM and context sizes pushing 10k-20k?
| 2025-05-21T09:19:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1krtg4t/preferred_models_for_note_summarisation/
|
ROS_SDN
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtg4t
| false | null |
t3_1krtg4t
|
/r/LocalLLaMA/comments/1krtg4t/preferred_models_for_note_summarisation/
| false | false |
self
| 2 | null |
RTX 3090 is really good. Qwen3 30b b3a running at 100+t/s
| 1 |
[removed]
| 2025-05-21T09:24:44 |
Linkpharm2
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtiko
| false | null |
t3_1krtiko
|
/r/LocalLLaMA/comments/1krtiko/rtx_3090_is_really_good_qwen3_30b_b3a_running_at/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'r5ClBLVR5ZZfLTa7g8WeF1WtMHsAXLhZu0P8btXNhXc', 'resolutions': [{'height': 161, 'url': 'https://preview.redd.it/k9p7qyyfs32f1.png?width=108&crop=smart&auto=webp&s=4a42ef552d8db7455752f1ec722400e803374b63', 'width': 108}, {'height': 322, 'url': 'https://preview.redd.it/k9p7qyyfs32f1.png?width=216&crop=smart&auto=webp&s=15185cc64e3e8a9dbf80c4b429c263be1f36cead', 'width': 216}, {'height': 477, 'url': 'https://preview.redd.it/k9p7qyyfs32f1.png?width=320&crop=smart&auto=webp&s=4fc0151b9cfb99dd08269cb872973160cbcc7226', 'width': 320}], 'source': {'height': 785, 'url': 'https://preview.redd.it/k9p7qyyfs32f1.png?auto=webp&s=caedf3407d78d976aca8d995d124a3b1017c1854', 'width': 526}, 'variants': {}}]}
|
||
Meet Emma-Kyu. a WIP Vtuber
| 0 |
Meet Emma-Kyu. a virtual AI personality. She likes to eat burgers, drink WD40 and given the chance will banter and roast you to oblivion.
Of course, I was inspired by Neuro-sama and early on started with Llama 3.2 3B and a weak laptop that took 20s to generate responses. Today Emma is a fine-tuned 8B-parameter beast running on a 3090 with end-to-end voice or text in 0.5-1.5s. While the Vtuber model is currently a stand-in, it works as a proof of concept.
Running on llama.cpp, whisper.cpp with VAD and Kokoro for the TTS
Under the hood, she has a memory system where she autonomously chooses to write `memorise()` or `recall()` functions to make and check memories.
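A minimal sketch of what such a self-directed memory layer can look like (purely illustrative — the actual implementation lives in the project, and the regex-plus-keyword approach here is just the simplest possible version):

```python
import json, re
from pathlib import Path

MEMORY_FILE = Path("emma_memories.json")

def handle_memory_calls(model_output: str) -> str:
    """Scan the model's reply for memorise("...")/recall("...") calls and act on them."""
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    memories += re.findall(r'memorise\("([^"]+)"\)', model_output)
    MEMORY_FILE.write_text(json.dumps(memories))
    recalled = []
    for query in re.findall(r'recall\("([^"]+)"\)', model_output):
        # naive keyword match; a fancier version would use embeddings
        recalled += [m for m in memories if query.lower() in m.lower()]
    return "\n".join(recalled)  # fed back into the next prompt as context
```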
Her entire system prompt is just "You are Emma". I do this to intend the model to drive conversation rather than the prompt and it works pretty well
I’m still refining her, both technically and in personality. But I figured it's time to show her off. She's chaotic, sweet, sometimes philosophical, and occasionally terrifying.
| 2025-05-21T09:27:03 |
https://v.redd.it/rjk7sm6wr32f1
|
Experimentators
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtjri
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rjk7sm6wr32f1/DASHPlaylist.mpd?a=1750411638%2CNzYwMGExOGFlOTMzM2U3MGJjNDliZjU5OWExMTU0YTRiODc3ZGZlYjYzOThmY2M3YmRhNDYyOGJkMmZhOTZjNQ%3D%3D&v=1&f=sd', 'duration': 49, 'fallback_url': 'https://v.redd.it/rjk7sm6wr32f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/rjk7sm6wr32f1/HLSPlaylist.m3u8?a=1750411638%2CZmU1OWYyZjNhYTUwYjRkZTE0ZjNmY2YzMDNlOThhYjJmYTUyNWEyZmZkZWM5NmY4ZTBjMmY1YjY5NzI1MGM3YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rjk7sm6wr32f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1krtjri
|
/r/LocalLLaMA/comments/1krtjri/meet_emmakyu_a_wip_vtuber/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?width=108&crop=smart&format=pjpg&auto=webp&s=d753b7bf02146af8ccc77582a474f972d830bad1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?width=216&crop=smart&format=pjpg&auto=webp&s=570728215e06f138ec363291b9f15a4329c6b699', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?width=320&crop=smart&format=pjpg&auto=webp&s=e0ca07ec8ee3f0e7601c397d098a0cc9ca374cb3', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?width=640&crop=smart&format=pjpg&auto=webp&s=e7c3d2f129f7c0746ee8da2279cf086e13873814', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?width=960&crop=smart&format=pjpg&auto=webp&s=268ac5440d7cf62b81c2d5f4fbab4ee000031faf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a549ac960f2b46db65b674cecaeb5a1227dd0854', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/eDVmYWZwNndyMzJmMeYZyiaY6_OiJ0nyyjeu2Ls0f-LqGXk13-a3_Pxj_fzF.png?format=pjpg&auto=webp&s=de89f71737eb5a17365fe526b5d2568ead132b92', 'width': 1280}, 'variants': {}}]}
|
|
LLaMA always generates to max_new_tokens?
| 1 |
[removed]
| 2025-05-21T09:29:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1krtl5s/llama_always_generates_to_max_new_tokens/
|
Leviboy950
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1krtl5s
| false | null |
t3_1krtl5s
|
/r/LocalLLaMA/comments/1krtl5s/llama_always_generates_to_max_new_tokens/
| false | false |
self
| 1 | null |