**r/LocalLLaMA post dump (snapshot of 2025-06-21)**

Source dataset fields per row: title, score, selftext, created (spanning 2023-04-01 to 2025-06-30), url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview.
**Someone Used a 1997 Processor and Showed That Only 128 MB of Ram Were Needed to Run a Modern AI—and Here's the Proof** (score 0 | u/tjthomas101 | 2025-06-21)
Link: https://dailygalaxy.com/2025/06/someone-used-a-1997-processor-and-showed-that-only-128-mb-of-ram-were-needed-to-run-a-modern-ai-and-heres-the-proof/

"On the Pentium II, the 260K parameter Llama model processed 39.31 tokens per second—a far cry from the performance of more modern systems, but still a remarkable feat. Larger models, such as the 15M parameter version, ran slower, at just 1.03 tokens per second, but still far outstripped expectations."
**Minimax-M1 is competitive with Gemini 2.5 Pro 05-06 on Fiction.liveBench Long Context Comprehension** (score 89 | u/fictionlive | 2025-06-21)

Image post on i.redd.it.
**Don’t Forget Error Handling with Agentic Workflows** (score 0 | u/SignificanceNeat597 | 2025-06-21)
Link: https://www.anthropic.com/research/agentic-misalignment

This was a very interesting read. As our models get more complex, and get inserted into more workflows, it might be a good idea to have error handling wrapped around the agent calls to prevent undesired behavior.
**Self Adapting LLMs - legit?** (score 126 | u/Desperate_Rub_1352 | 2025-06-21)

I just came across the new MIT paper *Self-Adapting Language Models* (Zweiger et al., June 2025).
The core idea is wild:
* The LLM produces a **self-edit**—a chunk of text that can (a) rewrite / augment the input data, (b) pick hyper-parameters, or (c) call external tools for data augmentation or gradient updates.
* Those self-edits are fed straight back into supervised finetuning (or RL), so the model *persistently* updates its own weights.
* They train the model to *judge its own edits* with a downstream reward signal, so it keeps iterating until performance improves.
Essentially the model becomes both **student and curriculum designer**, continuously generating the exactly-what-it-needs data to get better.
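In pseudocode, the loop looks something like this (my own sketch; every helper name is a placeholder, not the paper's code):

```python
# Conceptual sketch of the SEAL-style loop; all helpers below are placeholders.
model = load_base_model()
for task in tasks:
    # 1. The model writes its own "self-edit": synthetic data / hyperparameters / tool calls
    self_edit = model.generate(SELF_EDIT_PROMPT.format(task=task.description))
    # 2. Apply the edit as a supervised finetuning step on the model's own weights
    candidate = finetune(model, parse_edit(self_edit))
    # 3. Downstream performance on held-out data acts as the reward signal
    if evaluate(candidate, task.heldout) > evaluate(model, task.heldout):
        model = candidate  # persist the weight update, otherwise discard the edit
```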
My (much humbler) attempt & pain points
* For a tweet-classification project I had GPT-4 **select** real tweets and **synthesize** new ones to expand the finetuning set.
* Quality was decent, but (1) **insanely expensive**, and (2) performance **regressed** vs. a baseline where I manually hand-picked examples.
* I only did straight SFT; didn’t try RL-style feedback (wasn’t aware of anything cleaner than full-blown PPO/DPO at the time).
Am I wrong to think that this will not hold in mainstream use cases? Why not just try GRPO-style RL for the use cases the user wants? I am honestly a bit confused; can someone explain or discuss what I am missing here? How can a model know what it needs, other than a much bigger model giving it feedback on every iteration? Has RL worked on anything other than text in this context?
**Using Qwen3 30b in Roo code** (score 4 | u/ArtisticHamster | 2025-06-21)

Does anyone have experience using Qwen3 in Roo? Which parameters do you use? I use 8-bit quantization; results are meaningful, but far from perfect. Has anyone used the same model in the same configuration, and with which parameters?
My params for llama.cpp:
```
-hf Qwen/Qwen3-30B-A3B-GGUF:Q8_0 \
-c 131072 --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 \
--temp 0.6 --min-p 0.0 --top-k 40 --top-p 0.95 --samplers "top_k;top_p;min_p;temperature;"
```
The "unbiased" r1 1776 seems to be obsessed with China | 0 | When given some meaningless text or short numbers, it talks about the western accusation on China. When given any random date in the past, it finds (or hallucinate) scandals and accusations about China (and it respond in Chinese).
When I asked about Israel, it talks about China. When I asked about 1984, it literally talks more about China than about the book... and says nothing about Nazi Germany or the Soviet Union.
Is this unbiased? I don't think so. It feels more like overfitting...
What if people use this kind of "unbiased" LLM thinking it is neutral, and use it for educational purposes?
LLMs with bias can be really problematic.
Similar techniques can be used against any country or entity and heavily influence democratic processes. Maybe not as obviously as this (but has anyone else noticed it?), and I can totally see things like this being used in partisan contexts.
Imagine when most people (voters) learn about new things via LLMs and the models are all controlled by giant companies and rich entities. Imagine when the education system heavily adopts things like this and future generations fill their curiosity with it. Imagine when so-called "unbiased" models are injected with other ideologies that are a bit harder to recognize.
I don't know.
Gallery: https://www.reddit.com/gallery/1lgxti0
**moonshotai/Kimi-VL-A3B-Thinking-2506 · Hugging Face** (score 79 | u/Dark_Fire_12 | 2025-06-21)
Link: https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506
**Build Qwen3 from Scratch** (score 72 | u/entsnack | 2025-06-21)
Link: https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/11_qwen3

I'm a big fan of Sebastian Raschka's earlier work on LLMs from scratch. He recently switched from Llama to Qwen (a switch I recently made too, thanks to someone in this subreddit) and wrote a Jupyter notebook implementing Qwen3 from scratch.
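To give a flavor of the level of detail, a building block like RMSNorm comes out to just a few lines of PyTorch (my own sketch below, not code from the notebook):

```python
import torch

class RMSNorm(torch.nn.Module):
    """Root-mean-square normalization, used throughout Qwen/Llama-style blocks."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = torch.nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale each token vector by the reciprocal of its RMS, then apply a learned gain
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)
```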
Highly recommend this resource as a learning project.
**Steering LLM outputs** (score 58 | u/Everlier | 2025-06-21)

**What is this?**
* An optimising LLM proxy runs a workflow that mixes instructions from multiple anchor prompts based on their weights
* Weights are controlled via a specially crafted artifact. The artifact connects back to the workflow over WebSockets and is able to send and receive data.
* The artifact can pause or slow down the generation as well for better control.
* Runs completely outside the inference engine, at OpenAI-compatible API level
[Code](https://github.com/av/harbor/blob/main/boost/src/modules/promx.py)
**How to run it?**
* Standalone: `docker pull ghcr.io/av/harbor-boost:latest`, [configuration reference](https://github.com/av/harbor/wiki/5.2.-Harbor-Boost#standalone-usage)
* Also see the [example starter repo](https://github.com/av/boost-starter)
* With [Harbor](https://github.com/av/harbor): `harbor up boost`

Video demo: https://v.redd.it/0351w9ovpa8f1
**Question about throughput of individual requests on a single GPU** (score 0 | u/ajmusic15 | 2025-06-21)

What do you use to maximize the throughput of LLMs for a single request? I'm going to use it locally for Roo Code, and, you know, the higher the tk/s per request, the faster it works.
I have a 5080; I can run 14B models at 80 tk/s, or 24B models (quantized to Q3_K_L) at 48-50 tk/s, with llama.cpp.
**Xiaomi Mimo RL 7b vs Qwen 3 8b** (score 1 | u/thepaganalchemist | 2025-06-21)

Hi, I need an AI model to pair with Owl AI (a Manus alternative). I need an AI that excels in analysis, coding, task planning, and automation.
I'm undecided between Xiaomi Mimo RL 7b and Qwen 3 8b (I can only run models with a max of 8b parameters). Which one do you guys recommend?
**Copilot Replacement** (score 0 | u/Few_Speaker_9537 | 2025-06-21)

I started working at a company that only works with GH Copilot recently. It’s been terrible. I’m wondering whether running a local reasoning model might perform better. Please advise.
Work MacBook: M2 Pro, 16 GB.
Let me know if anything needs to be clarified in order to move forward.
Thanks!
**Ollama alternatives** (score 17 | u/Maleficent_Payment44 | 2025-06-21)

I have a Linux Ubuntu server with 192GB RAM and a GeForce RTX 4090 GPU. I've been creating some Python apps lately using Ollama and LangChain with models like gemma3:27b.
I know ollama and langchain are both not the most cutting edge tools. I am pretty good in programming and configuration so could probably move on to better options.
Interested in rag and data related projects using statistics and machine learning. Have built some pretty cool stuff with plotly, streamlit and duckdb.
Just started really getting hands on with local LLMs. For those that are further along and graduated from ollama etc. Do you have any suggestions on things that I should consider to maximize accuracy and speed. Either in terms of frameworks, models or LLM clients?
I plan to test Qwen3 and Llama 4 models, but Gemma 3 is pretty decent. I would like to do more with models that support tool calling, which Gemma 3 does not. I installed Devstral for that reason.
Even though I mentioned a lot about models, my question is broader than that. I am more interested on others thoughts around ollama and langchain, which I know can be slow or bloated, but that is where I started, and not necessarily where I want to end up.
Thank you :)
**RTX 6000 Pro Blackwell** (score 10 | u/val_in_tech | 2025-06-21)

Had a 2+4 RTX 3090 server for local projects. Manageable if run under-powered.
The 3090s still seem like a great value, but start feeling dated.
Thinking of getting a single RTX 6000 Pro 96GB Blackwell, at ~2.5-3x the cost of 4x 3090.
Would love to hear your opinions.
Pros: more VRAM, very easy to run, much faster inference (~5090-level), can run image-gen models easily, native support for new quants.
Cons: the CPU might become a bottleneck if running multiple apps, e.g. Whisper, a few vLLM instances, Python stuff.
What do you guys think?
Has anyone tried running multiple vLLM instances + Whisper + Kokoro on a single workstation/server card? Are they only good for one app, or can the CPU be allocated effectively?
**Autopaste MFAs from Gmail using LLaMA** (score 52 | u/samewakefulinsomnia | 2025-06-21)

Inspired by Apple's "insert code from SMS" feature, I made a tool to speed up inserting incoming email MFA codes: https://github.com/yahorbarkouski/auto-mfa
Connect accounts, choose an LLM provider (Ollama supported), add a system shortcut targeting the script, and enjoy your extra 10 seconds every time you need to paste an MFA code.
**how many people will tolerate slow speed for running LLM locally?** (score 116 | u/OwnSoup8888 | 2025-06-21)

Just want to check: how many people will tolerate slower speed in exchange for privacy?
**Deepseekv3-0324 671b LORA training** (score 12 | u/triestdain | 2025-06-21)

Is there currently a way to train LoRAs on DeepSeek-V3-0324 (671B), given that there is no Hugging Face Transformers support yet?
**System prompt caching with persistent state augmented retrieval** (score 0 | u/Fluid-Age-9266 | 2025-06-21)

I have a use case where I need to process fairly large contexts repeatedly, with CPU-only local inference.
In my testing, prompt processing took as long as 45 seconds.
While setting up KV caching, I discovered (shamefully late) that llama.cpp and its Python bindings support prompt caching out of the box and even let me persist the LLM state to disk.
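To make it concrete, here is a minimal sketch of what I mean with llama-cpp-python (`save_state`/`load_state` are real methods; pickling the state object and the `LONG_SYSTEM_PROMPT` variable are my own choices):

```python
import pickle
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=8192)

# Pay the ~45s prompt-processing cost once for the long system prompt
llm.create_completion(LONG_SYSTEM_PROMPT, max_tokens=1)
with open("task_a.state", "wb") as f:
    pickle.dump(llm.save_state(), f)  # KV cache + eval state, persisted to disk

# Later (even after a restart): restore instead of re-processing
with open("task_a.state", "rb") as f:
    llm.load_state(pickle.load(f))
# Reusing the same prefix lets the bindings skip straight to the new tokens
out = llm.create_completion(LONG_SYSTEM_PROMPT + "\nUser question here", max_tokens=256)
```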
Now one thing started to click in my mind:
what about attaching a text description of each prompt (such as a task description) to its persisted cache, and doing RAG-style retrieval over those caches?
I mean:
- each system prompt encodes a task description for a "larger" model, an 8B for instance
- expose a 0.5B LLM to the user to route queries (using tool calls, the tools being the larger LLM and its pre-processed system prompts)
Has anyone tested such a setup ?
**Voice Cloning model that allows training on longer audio** (score 3 | u/Back-Rare | 2025-06-21)

Hi,
I'm trying to find a TTS model that accepts more reference audio for cloning a voice, as most only take up to 30 seconds. However, I have characters with between 30 minutes and 8 hours of audio.

So I want a model I can train to get the most out of it.
Any suggestions?
**CEO Bench: Can AI Replace the C-Suite?** (score 226 | u/dave1010 | 2025-06-21)
Link: https://ceo-bench.dave.engineer/

I put together a (slightly tongue-in-cheek) benchmark to test some LLMs. All open source, and all the data is in the repo.
It makes use of the excellent `llm` Python package from Simon Willison.
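If you haven't used `llm` before, the core of each benchmark call is roughly this (the model name is just an example; any model or plugin installed for `llm` works):

```python
import llm

model = llm.get_model("gpt-4o-mini")  # example model; swap in any installed one
response = model.prompt("You are the CEO. Revenue missed by 20%. Reply in one sentence.")
print(response.text())
```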
I've only benchmarked a couple of local models, but I want to see what the smallest LLM is that scores above the estimated "human CEO" performance. How long before a sub-1B parameter model performs better than a tech-giant CEO?
**Moore Threads: An overlooked possibility for cheap local LLM inference?** (score 1 | u/HugoCortell | 2025-06-21)

There's a Chinese company called Moore Threads which makes very mediocre but affordable gaming GPUs, including the MTT S80, **which is $170 for 16GB**.
Of course, no CUDA or Vulkan, but even so, with how expensive even used mining cards are nowadays, it might be a very good choice for affordably running very large models at acceptable speeds (~10 t/s). Admittedly, I don't have any benchmarks.
I've never seen a single comment in this entire sub mention this company, which makes me think that perhaps we have overlooked them and should include them in discussions of budget-friendly inference hardware setups.
While I look forward to the release of Intel's B60 DUAL, we won't know its real price until it releases, so for now I wanted to explore the cards that are on the market today.
**How to fine-tune and things required to fine-tune a Language Model?** (score 8 | u/No_Requirement9600 | 2025-06-21)

I am a beginner in machine learning and language models. I am currently studying Small Language Models and I want to fine-tune SLMs for specific tasks. I know the different fine-tuning methods in concept, but I don't know how to implement or apply any of that in code or in practice.
My questions are -
1. How much data should I approximately need to fine-tune a SLM?
2. How should I divide the dataset? What should the splits be for training, validation, and benchmarking?
3. How do I practically fine-tune a model (e.g., with LoRA) on a dataset, and how do I apply different datasets? Basically, how do I code this stuff? (See the sketch after this list.)
4. What are the best places to fine-tune a model (Colab, etc.), and how much computational power and money would I need to spend on subscriptions?
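To make question 3 concrete, a minimal LoRA fine-tune with Hugging Face PEFT looks roughly like this (the model name, data file, and hyperparameters below are placeholders, not recommendations):

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder SLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach trainable low-rank adapters; the base weights stay frozen
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# my_task.jsonl is a placeholder: one {"text": "..."} example per line
ds = load_dataset("json", data_files="my_task.jsonl")["train"].train_test_split(test_size=0.1)
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4, num_train_epochs=3),
    train_dataset=ds["train"],
    eval_dataset=ds["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```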
If any of these questions aren't clear, you can ask me to your questions and I will be happy to elaborate.
Thanks.
**From Arch-Function to Arch-Agent. Designed for fast multi-step, multi-turn workflow orchestration in agents.** (score 84 | u/AdditionalWeb107 | 2025-06-21)

Hello - in the past I've shared my work around [function-calling](https://www.reddit.com/r/LocalLLaMA/comments/1hr9ll1/i_built_a_small_function_calling_llm_that_packs_a/) on this sub. The encouraging feedback and usage (over 100k downloads 🤯) have gotten me and my team cranking away. Six months from our initial launch, I am excited to share our agent models: Arch-Agent.
Full details are in the model card: [https://huggingface.co/katanemo/Arch-Agent-7B](https://huggingface.co/katanemo/Arch-Agent-7B) - but quickly: Arch-Agent offers state-of-the-art performance for advanced function-calling scenarios and sophisticated multi-step/multi-turn agent workflows. Performance was measured on BFCL, although we'll soon publish results on Tau-Bench as well.
These models will power [Arch](https://github.com/katanemo/archgw/) (the universal data plane for AI), the open source project where some of our science work is vertically integrated.
Hope that, like last time, you all enjoy these new models and our open source work 🙏
**Abstracting the Prompt and Context** (score 0 | u/RMCPhoto | 2025-06-21)

If large language models are a new operating system, and natural English is the programming language, then what are the abstraction methods?
One of the fundamental problems is that each model is trained / tuned in different ways and responds very differently to explicit or implicit English instructions.
We have loose guidelines like "Role / Objective / Output format" but no agreed upon standardizations.
Early frameworks like langchain and llamaindex highlight this exact issue - they attempted to abstract, but we're still in effect hard coding prompts a few layers deep.
This doesn't work like C++, because there is no solid ground truth to stand on. Gemini 08-25 might respond very differently to the exact same wording a few layers deep.
So, my question here is - what are the abstraction methods that are being discussed?
What are your ideas?
**Still confused about Memory (mem0) integration into llamaindex AgentWorkflow** (score 2 | u/ProfessionalDress259 | 2025-06-21)

So, as the title clearly states: I'm really confused about how mem0 works with LlamaIndex's AgentWorkflow class. Let me explain.
Yes, I understand that mem0, for example, is used to hold context long-term, to capture user preferences, etc. However, as I was reading this page from the docs, I started getting confused: https://docs.mem0.ai/core-concepts/memory-types
I already built a simple LLM chatbot in my app with function calls using the OpenAI SDK. Typically, with any AI model (Claude, GPT, Gemini, etc.), you'd always pass the raw conversation array, which consists of objects with content and role (system, assistant, user).
However, now I'm using LlamaIndex to build a multi-agent system with multiple agents working together. For that I'm using the AgentWorkflow class, and I don't understand how everything fits together.

Looking at an example from the LlamaIndex docs for the AgentWorkflow class:
```python
agent_workflow = AgentWorkflow(
    agents=[research_agent, write_agent, review_agent],
    root_agent=research_agent.name,
    initial_state={
        "research_notes": {},
        "report_content": "Not written yet.",
        "review": "Review required.",
    },
)

handler = agent_workflow.run(
    user_msg="""
    Write me a report on the history of the web. Briefly describe the history
    of the world wide web, including the development of the internet and the
    development of the web, including 21st century developments.
    """,
    ctx=ctx,
    # as an example, here you pass in the mem0 client
    memory=mem0_client,
)
```
Reading the mem0 link I just shared, it states:
# Short-Term Memory
The most basic form of memory in AI systems holds immediate context - like a person remembering what was just said in a conversation. This includes:
* **Conversation History**: Recent messages and their order
* **Working Memory**: Temporary variables and state
* **Attention Context**: Current focus of the conversation
Now my question is this: is the short-term memory a replacement for passing the raw conversation history to the AgentWorkflow class? Do you need both? If yes, what's the point of short-term memory if you already have the raw conversation history, besides using that raw conversation array to display the conversation in your UI?
**XAI's Slack must be comedy** (score 1 | u/Longjumping-Solid563 | 2025-06-21)
**Qwen3 is very.... talkative? And yet not very... focused?** (score 10 | u/nirurin | 2025-06-21)

Messing around with some local models, I kept seeing Qwen3 recommended, so I thought I'd play around with it.
Give it a simple question like "how big is the moon" or "write a limerick about the sea" and it'll write about 1,000 words on how to define the moon and why you might measure it in meters instead of miles for various reasons. Eventually it might answer the question. For the limerick, it defined the limerick rhyme scheme (AABBA) and then eventually, after a lot of internal debate, output a limerick that did not follow that rhyme scheme at all, lol. None of the lines rhymed.
Is this the expected Qwen output? Is it just designed to act like an extremely chatty person with ADHD?
**Best uncensored LLM** (score 0 | u/Dizzy_Opposite3363 | 2025-06-21)

What is the best local LLM that is uncensored and still good, even at complex tasks like programming?
**Built a LiteLLM adapter for locally hosted HuggingFace models on your machine because local transformers deserved the OpenAI API treatment** (score 27 | u/arkbhatta | 2025-06-21)

**TL;DR**: Made local HuggingFace transformers work through LiteLLM's OpenAI-compatible interface. No more API inconsistencies between local and cloud models. Feel free to use it, or help me enrich it and make it more mature.
Hey everyone!
So here's the thing: LiteLLM is AMAZING for calling 100+ LLM providers through a unified OpenAI-like interface. It supports HuggingFace models too... but only through their cloud inference providers (Serverless, Dedicated Endpoints, etc.).
**The missing piece?** Using your local HuggingFace models (the ones you run with `transformers`) through the same clean OpenAI API interface.
# What I built:
A **custom LiteLLM provider** that bridges this gap, giving you:
* **OpenAI API compatibility** for your local HF models: no more switching between different interfaces
* **Seamless integration** with any LiteLLM-compatible framework (CrewAI, LangChain, AutoGen, Google-ADK, etc.)
* **4-bit/8-bit quantization** with out-of-the-box bitsandbytes support
* **Streaming support** that actually works properly with LiteLLM's chunk formatting
* **Auto chat templates**
* **Multi-GPU support** and memory monitoring
# Why this matters:
```python
# Option 1: Direct integration
import litellm

litellm.custom_provider_map = [
    {"provider": "huggingface-local", "custom_handler": adapter}
]

response = litellm.completion(
    model="huggingface-local/Phi-4-reasoning",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

```bash
# Option 2: Proxy server (OpenAI-compatible API)
# Start: litellm --config litellm_config.yaml
# Then use it in the following way:
curl --location 'http://0.0.0.0:4000/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "qwen-local",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "what is LLM?"}
    ],
    "stream": false
  }'
```
**The real value**: Your local models get OpenAI API compatibility + work with existing LiteLLM-based tools + serve via REST API.
# Current status:
✅ Working with Qwen, Phi-4, and Gemma 3 models; it should technically work with other text-generation models too
✅ Streaming, quantization, memory monitoring
✅ LiteLLM proxy server integration
✅ Clean, modular codebase
# Further improvement scope:
* **Testing more models** - especially newer architectures
* **Documentation/examples** - because good docs matter
This fills a real gap in the ecosystem. LiteLLM is fantastic for cloud providers, but local HF models deserved the same love. Now they have it!
**The bottom line:** Your local HuggingFace models can now speak fluent OpenAI API, making them first-class citizens in the LiteLLM ecosystem.
GitHub: https://github.com/arkaprovob/litellm-hf-local
**Anyone using JetBrains/Rider?** (score 9 | u/CSEliot | 2025-06-21)

I heard their IDEs can integrate with locally running models, so I'm looking for people who know about this!
Have you tried this out? Is it possible? Any quirks?
Thanks in advance!
**Which AI/LLM can I run on my 16 GB M3 Macbook Air for helping me learn from PDFs or epubs and it can run without internet access?** (score 2 | u/DoiMach | 2025-06-21)

I don't have much technical knowledge about AI/LLMs, just dabbling in simple textual interactions. I need help finding a local, offline AI/LLM for my MacBook that will help me study and read through loads of EPUB and PDF files. Basically, the AI can go through the contents and help me learn.
I will be offshore for a few months, so I need to run it without internet access.
Thank you in advance.
**Embedding With LM Studio - what am i doing wrong** (score 8 | u/uber-linny | 2025-06-21)

I've updated LM Studio to 0.3.17 (build 7) and I'm trying to run embedding models in the Developer tab so that I can push them to AnythingLLM, where my work is.
Funny thing is, the original "text-embedding-nomic-embed-text-v1.5" loads fine and works with AnythingLLM, but with text-embedding-qwen3-embedding-0.6b / 8B, and any other embedding model I use, I get the error below:
```
Failed to load the model
Failed to load embedding model
Failed to load model into embedding engine. Message: Embedding engine exception: Failed to load model. Internal error: Failed to initialize the context: failed to allocate compute pp buffers
```
I'm just trying to understand and improve what I currently have working. The original idea was: since I'm using Qwen3 for my work, why not try the Qwen3 embedding models, as they're probably designed to work with it.
A lot of the work I am currently doing is calling RAG from within documents.
**A Great Breakdown of the "Disney vs Midjourney" Lawsuit Case** (score 26 | u/Iory1998 | 2025-06-21)

As you all know by now, Disney has sued Midjourney on the basis that the latter trained its AI image-generation models on copyrighted materials.
This is a serious case that we all should follow up closely. LegalEagle broke down the case in their new YouTube video linked below:
[https://www.youtube.com/watch?v=zpcWv1lHU6I](https://www.youtube.com/watch?v=zpcWv1lHU6I)
I really hope Midjourney wins this one.
**Is QWEN online service quantized?** (score 0 | u/DrVonSinistro | 2025-06-21)

I've made several translation tests using Qwen3 235B IQ4_XS with KV cache at f16 vs the one on their website.
Often, the translation I get locally is as good or a tiny bit better than the online version.
Is it possible that, wanting to save on server infrastructure, they serve some of their models at 4 bits?
**AI project, kind of crazy** (score 0 | u/humanoid64 | 2025-06-21)

Alright, it's time.
I've been thinking about this for a while, and I'm finally ready to dive in. This will be a journey, and I know I won’t be able to do it alone so if you’re interested, DM me. Happy to share the upside if it works.
This isn’t a breakthrough idea. It’s a real, practical attempt at something many of us know is possible. AI agents that provide real value and generate income.
The Goal:
Develop multiple autonomous AI agents that generate $100 - $1000 a day. Legally and ethically. Maybe a system that develops, tests, and refines these agents.
My Commitment:
I'm putting $150K of my own money to bootstrap, no expectation of return. If it works, awesome. If not, I’m happy to have tried.
The Stack:
Infrastructure:
Mix of LLMs (using vLLM or SGLang), VLMs, and possibly other multi modal AI models
High-throughput backend with batching and concurrency support
Hundreds of Linux containers on Proxmox clusters (using Zen 2/3 EPYC servers I have been accumulating)
Shared databases (I like mongo), shared storage, etc..
Potential VPN or proxy networks if required
AI Models:
Self hosted SOTA models like Mistral, DeepSeek R1, etc.
Running on in-house hardware but open to using cloud services where it makes sense
Software:
I am a seasoned programmer but have been embracing vibe coding, so this project will be mostly vibe coded (I think). The software will research the internet (and ask AI models) for potential ways to make money, or gather ideas from people like you on Reddit, and we'll try to build something to do it. Human intervention is OK and encouraged; we will tell the AI that humans are here to help. Hopefully we will understand what is best for the AI or a human to do. This does not need to be 100% autonomous. The goal is to have many agents, each making money.
If you've been thinking about agentic systems, income through AI agents, or just want to contribute your skills to something hard, DM me.
Let’s build.
If this starts to work, we'll incorporate and take it from there.
Some Observations using the RTX 6000 PRO Blackwell. | 129 | Thought I would share some observations from playing around with the RTX 6000 Pro 96GB Blackwell Workstation edition.
Using the card inside a Razer Core X GPU enclosure:
1. I bought this bracket ([link](https://www.etsy.com/listing/1293010019/razer-core-x-bracket-for-corsair-power?ref=cart)) and replaced the Razer Core X power supply with an SFX-L 1000W. Worked beautifully.
2. The Razer Core X cannot handle a 600W card; the outside of the case gets very HOT with the 600W RTX 6000 Blackwell workstation edition under load.
3. I think this is a perfect use case for the 300W Max-Q edition.
Using the RTX 6000 96GB:
1. The RTX 6000 96GB Blackwell is bleeding edge. I had to build all libraries against the latest CUDA release to get it to be usable. For llama.cpp I had to build from source and explicitly set the CUDA architecture flag (the documents are wrong about this: what worked was compute capability 90, not 12).
2. When I built all the frameworks, the RTX 6000 let me run bigger models, but I noticed they ran kind of slow. At least with llama.cpp it did not seem to be taking advantage of the architecture; I verified with nvidia-smi that it was running on the card. The coding agent (llama-vscode, OpenAI API) was dumber.
3. The dumber behavior was similar with freshly built vLLM and Open WebUI. It took very long to build PyTorch against the latest CUDA library to get it to work.
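For anyone fighting the same battle, below is a quick way to check whether a PyTorch build actually targets the card; a minimal sketch, assuming a recent PyTorch with CUDA support:

```python
import torch

# Sanity check: does this PyTorch wheel know about the installed GPU?
print(torch.cuda.get_device_name(0))        # should report the RTX 6000 Blackwell
print(torch.cuda.get_device_capability(0))  # compute capability tuple the driver reports
print(torch.cuda.get_arch_list())           # architectures this wheel was compiled for
```

If the card's compute capability is missing from the arch list, that build will not run efficiently (or at all) on the card.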
4. Switching back to the 3090 inside the Razer Core X, everything just works beautifully. Qwen2.5 Coder 14B Instruct picked up that I was converting C-style enums to C++ and automatically suggested the next whole enum class, unlike Qwen2.5 Coder 32B Instruct (FP16 and Q8) on the Blackwell.
I wasted way too much time (two days?) rebuilding a bunch of libraries (llama.cpp, vLLM, etc.) to take advantage of the RTX 6000 96GB. This includes time spent going through the GitHub issues filed against the RTX 6000. Props to LM Studio for making use of the card, though it felt dumber still.
I wish the A6000 and the 48GB 6000 Ada cards were cheaper. That said, if your time is worth a lot of money, it's worth paying for something that's stable, proven, and works with all frameworks right out of the box.
ChatGPT alike local web ui for apple silicon? | 9 | I am looking for a specific AI software that I can run on my Mac that lets me have a web ui with ChatGPT alike functions: uploading files, web search and possibly even deep research? Is there anything out there like this I can run locally and free? | 2025-06-22T02:25:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lhd69y/chatgpt_alike_local_web_ui_for_apple_silicon/ | IntrigueMe_1337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhd69y | false | null | t3_1lhd69y | /r/LocalLLaMA/comments/1lhd69y/chatgpt_alike_local_web_ui_for_apple_silicon/ | false | false | self | 9 | null |
The Qwen Tokenizer Seems to be better than the Deepseek Tokenizer - Testing a 50-50 SLERP merge of the same two models (Qwen3-8B and DeepSeek-R1-0528-Qwen3-8B) with different tokenizers | 136 | I was interested in merging [DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B) and [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) as they were both my two favorite under 10b\~ models, and finding the Deepseek distill especially impressive. Noted in their model card was the following:
>The model architecture of DeepSeek-R1-0528-Qwen3-8B is identical to that of Qwen3-8B, but it shares the same tokenizer configuration as DeepSeek-R1-0528. This model can be run in the same manner as Qwen3-8B, but it is essential to ensure that all configuration files are sourced from our repository rather than the original Qwen3 project.
Which made me realize they were both good merge candidates for each other: neither is a finetune, both are fully trained models off Qwen3-8B-Base, and they even share the same favored sampler settings. The only real difference was the tokenizers. This took me to a crossroads: which tokenizer should my merge inherit? Asking around, I was told there shouldn't be much difference, but I found out very differently once I did some actual testing. The TL;DR is that the Qwen tokenizer seems to perform better ***and*** use far fewer tokens for its thinking. I noted it is a larger tokenizer, and was told that means it is more optimized, but I was skeptical about this and decided to test it.
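If you want to sanity check the tokenizer difference yourself before running a full benchmark, here is a minimal sketch (assuming the transformers library and the two repos above):

```python
from transformers import AutoTokenizer

# Compare vocab size and token counts for the same text under both tokenizers.
qwen = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
dsk = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")

text = "Find the number of positive integers n < 1000 such that n^2 + 1 is divisible by 5."
print("vocab sizes:", len(qwen), len(dsk))
print("tokens used:", len(qwen.encode(text)), len(dsk.encode(text)))
```

Note this only measures encoding efficiency on fixed text; the token counts in my benchmark below also reflect how verbose each model's thinking is.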
This turned out to be no easy endeavor, since the benchmark I decided on (LocalAIME by u/EntropyMagnets, whom I thank for making and sharing this tool) takes rather long when you use a thinking model, as they require quite a few tokens to reach their answer with any amount of accuracy. I first tested with 4k context, then 8k, then briefly even 16k before realizing the LLM responses were still getting cut off. GLM 9B did not have this issue, and used very few tokens in comparison even with context set to 30k. Testing took very long, but with the help of others from the KoboldAI server (shout out to everyone there willing to help; a lot of people volunteered, whom I credit below), we were able to eventually get it done.
This is the most useful graph that came of this. You can see below that models using the Qwen tokenizer used fewer tokens than any of the models using the DeepSeek tokenizer, and had higher accuracy. Both merges also performed better than their same-tokenizer parent models.
[Model Performance VS Tokens Generated](https://preview.redd.it/lbpldqh57e8f1.png?width=2969&format=png&auto=webp&s=41dd5f79caaa5a59c3e89cf26accf2b4fc062693)
I would have liked to test at a higher precision, like Q8_0, and with more attempts per problem (like 3-5) for better quality data, but didn't have the means to. If anyone with the means to do so is interested in giving it a try, please feel free to reach out to me for help, or if anyone wants to loan me their hardware I would be more than happy to run the tests again under better settings.
For anyone interested, more information is available in the model cards of the merges I made, which I will link below:
* w/ Qwen3 tokenizer [https://huggingface.co/lemon07r/Qwen3-R1-SLERP-Q3T-8B](https://huggingface.co/lemon07r/Qwen3-R1-SLERP-Q3T-8B)
* w/ Deepseek R1 tokenizer [https://huggingface.co/lemon07r/Qwen3-R1-SLERP-DST-8B](https://huggingface.co/lemon07r/Qwen3-R1-SLERP-DST-8B)
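For anyone unfamiliar with what SLERP actually does to the weights, here is a minimal sketch of the interpolation applied per tensor (the textbook formula, not mergekit's exact implementation; t=0.5 gives the 50-50 merge used here):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors.
    a_f, b_f = a.flatten().float(), b.flatten().float()
    cos_omega = torch.clamp(
        (a_f / (a_f.norm() + eps)) @ (b_f / (b_f.norm() + eps)), -1.0, 1.0
    )
    omega = torch.acos(cos_omega)   # angle between the two weight vectors
    so = torch.sin(omega)
    if so.abs() < eps:              # nearly parallel: plain lerp is equivalent
        out = (1.0 - t) * a_f + t * b_f
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_f + (torch.sin(t * omega) / so) * b_f
    return out.reshape(a.shape).to(a.dtype)
```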
Currently only my own static GGUF quants are available (in Q4_K_S and Q8_0) but hopefully others will provide more soon enough.
I've stored all my raw data and test results in a repository here: [https://github.com/lemon07r/LocalAIME_results](https://github.com/lemon07r/LocalAIME_results)
**Special Thanks to The Following People** (for making this possible)**:**
* Eisenstein for their modified fork of LocalAIME to work better with KoboldCPP and modified sampler settings for Qwen/Deepseek models, and doing half of my testing for me on his machine. Also helping me with a lot of my troubleshooting.
* Twistedshadows for loaning me some of their runpod hours to do my testing.
* Henky as well, for also loaning me some of their runpod hours, and helping me troubleshoot some issues with getting KCPP to work with LocalAIME
* Everyone else on the KoboldAI discord server, there were more than a few willing to help me out in the way of advice, troubleshooting, or offering me their machines or runpod hours to help with testing if the above didn't get to it first.
* u/EntropyMagnets for making and sharing his LocalAIME tool | 2025-06-22T03:01:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lhdu5q/the_qwen_tokenizer_seems_to_be_better_than_the/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhdu5q | false | null | t3_1lhdu5q | /r/LocalLLaMA/comments/1lhdu5q/the_qwen_tokenizer_seems_to_be_better_than_the/ | false | false | 136 | {'enabled': False, 'images': [{'id': 'sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?width=108&crop=smart&auto=webp&s=21698fa4359145798ca9e06dbf89b0d063f7c18a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?width=216&crop=smart&auto=webp&s=8fabb1bf2d04592bcd37cf927df29b2cfcf21aea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?width=320&crop=smart&auto=webp&s=09154f10f1dbd218292e2205a4ba785d69ef1b6e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?width=640&crop=smart&auto=webp&s=a462876ceca8aa7da6001fdcb8936398a0cfa6d5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?width=960&crop=smart&auto=webp&s=9dce69ec3033d1915a5f9517fe3b6658a3c04e8c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?width=1080&crop=smart&auto=webp&s=45518b1a4602a0b8f35ca3ac92fb5d8545e504ec', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sIlsOyewqWKbkaq9LXBmI2vpBNvSB1xv0YAMiyBxo9s.png?auto=webp&s=b7e2ac4cdd39bb72ab1cc60747a3c256c269534b', 'width': 1200}, 'variants': {}}]} |
Agentic AI platform | 0 | Guys,
I have been looking for an agentic AI platform like Dify, with no luck. I need to build agentic AI for the financial domain. Running Dify on Docker throws so many errors during file processing. I have tried lyzr.ai. I am not technical and need something with a clean UI. Flowise is throwing errors while installing :(
50 Days of Building a Small Language Model from Scratch | 1 | [removed] | 2025-06-22T03:11:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lhe0w6/50_days_of_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhe0w6 | false | null | t3_1lhe0w6 | /r/LocalLLaMA/comments/1lhe0w6/50_days_of_building_a_small_language_model_from/ | false | false | 1 | null |
50 Days of Building a Small Language Model from Scratch | 1 | [removed] | 2025-06-22T03:25:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lhe9gk/50_days_of_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhe9gk | false | null | t3_1lhe9gk | /r/LocalLLaMA/comments/1lhe9gk/50_days_of_building_a_small_language_model_from/ | false | false | self | 1 | null |
50 Days of Building a Small Language Model from Scratch | 1 | [removed] | 2025-06-22T03:26:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lhea33/50_days_of_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhea33 | false | null | t3_1lhea33 | /r/LocalLLaMA/comments/1lhea33/50_days_of_building_a_small_language_model_from/ | false | false | self | 1 | null |
50 days building a tiny language model from scratch, what I’ve learned so far | 844 | Hey folks,
I’m starting a new weekday series on June 23 at 9:00 AM PST where I’ll spend 50 days coding two small LLMs (15–30M parameters) from the ground up: no massive GPU cluster, just a regular laptop or modest GPU.
Each post will cover one topic:
* Data collection and subword tokenization
* Embeddings and positional encodings
* Attention heads and feed-forward layers (a minimal sketch of one attention head follows this list)
* Training loops, loss functions, optimizers
* Evaluation metrics and sample generation
* Bonus deep dives: MoE, multi-token prediction, etc.
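To give a feel for the scale we’ll be working at, here is the kind of component the attention day covers: one causal self-attention head in PyTorch, with illustrative, made-up dimensions.

```python
import torch
import torch.nn as nn

class TinySelfAttention(nn.Module):
    # One causal self-attention head; dimensions are arbitrary examples.
    def __init__(self, d_model: int = 128, d_head: int = 32):
        super().__init__()
        self.q = nn.Linear(d_model, d_head, bias=False)
        self.k = nn.Linear(d_model, d_head, bias=False)
        self.v = nn.Linear(d_model, d_head, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)
        seq = x.size(1)
        # Causal mask: each position attends only to itself and earlier positions.
        mask = torch.triu(torch.ones(seq, seq, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(mask, float("-inf"))
        return scores.softmax(dim=-1) @ v
```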
Why bother with tiny models?
1. They run on the CPU.
2. You get daily feedback loops.
3. Building every component yourself cements your understanding.
I’ve already tried:
1. A 30 M-parameter GPT variant for children’s stories
2. A 15 M-parameter DeepSeek model with Mixture-of-Experts
I’ll drop links to the code in the first comment.
Looking forward to the discussion and to learning together. See you on Day 1. | 2025-06-22T03:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lhed49/50_days_building_a_tiny_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhed49 | false | null | t3_1lhed49 | /r/LocalLLaMA/comments/1lhed49/50_days_building_a_tiny_language_model_from/ | false | false | self | 844 | null |
[OpenSource]Multi-LLM client - LLM Bridge | 21 | Previously, I created a separate LLM client for Ollama for iOS and MacOS and released it as open source,
but I recreated it by integrating iOS and MacOS codes and adding APIs that support them based on Swift/SwiftUI.
https://preview.redd.it/00dq12p66f8f1.jpg?width=2880&format=pjpg&auto=webp&s=5b97237c3558709596ef0396b5f5d197add9f794
* Supports Ollama and LM Studio as local LLM backends.
  * If you open a port on the computer where Ollama is installed, you can use your free local LLM remotely.
  * LM Studio is a local LLM management program with its own UI; you can search for and install models from Hugging Face, so you can experiment with various models.
  * You set the IP and port in LLM Bridge and receive responses to your queries from the installed model.
* Supports OpenAI
  * Get an API key, enter it in the app, and use ChatGPT through API calls.
  * Using the API is cheaper than paying a monthly membership fee.
* Supports Claude
  * Uses an API key.
* Image transfer possible for models that support images
* PDF and TXT file support
  * Extracts text using PDFKit and sends it to the model.
  * Plain text files are supported too.
* Open source
  * Swift/SwiftUI
* Source link
  * [https://github.com/bipark/swift_llm_bridge](https://github.com/bipark/swift_llm_bridge)
Best open agentic coding assistants that don’t need an OpenAI key? | 49 | Looking for ai dev tools that actually let you use your own models, something agent-style that can analyse multiple files, track goals, and suggest edits/refactors, ideally all within vscode or terminal.
I’ve used Copilot’s agent mode, but it’s obviously tied to OpenAI. I’m more interested in:
Tools that work with local models (via Ollama or similar)
API-pluggable setups (Gemini 1.5, DeepSeek, Qwen3, etc.); see the sketch after this list
Agents that can track tasks, not just generate single responses
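To show what I mean by API-pluggable, here's a minimal sketch: Ollama exposes an OpenAI-compatible API locally, so any tool built on the standard OpenAI client can be pointed at it (the model name is whatever you've pulled):

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API at this address by default;
# the api_key is required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="qwen2.5-coder:14b",  # any model you have pulled locally
    messages=[{"role": "user", "content": "Refactor this function to be iterative."}],
)
print(resp.choices[0].message.content)
```

Tools that accept a custom base URL get this for free; tools hardwired to a vendor don't.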
I’ve been trying Blackbox’s vscode integration, which has some agentic behaviour now. Also tried cline and roo, which are promising for CLI work.
But most tools either
Require a paid key to do anything useful
Aren’t flexible with models
Or don’t handle full-project context
Anyone found a combo that works well with open models and integrates tightly with your coding environment? Not looking for prompt UIs; looking for workflow tools, please.
How much performance am I losing using chipset vs CPU lanes on 3080ti? | 8 | I have a 3080ti and an MSI Z790 gaming plus wifi. For some reason my pcie slot with the cpu lanes isn’t working. The chipset one works fine.
How much performance should I expect to lose running local LLMs?
Seeking Advice for On-Premise LLM Roadmap for Enterprise Customer Care (Llama/Mistral, Ollama, Hardware) | 1 | [removed] | 2025-06-22T08:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lhj4yr/seeking_advice_for_onpremise_llm_roadmap_for/ | Worth_Rabbit_6262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhj4yr | false | null | t3_1lhj4yr | /r/LocalLLaMA/comments/1lhj4yr/seeking_advice_for_onpremise_llm_roadmap_for/ | false | false | self | 1 | null |
I built MAI: A fully self-hosted emotional AI assistant with voice, memory, and sentiment analysis—Ghost in the Shell vibes included | 1 | [removed] | 2025-06-22T08:38:28 | https://v.redd.it/g32u1k4jxf8f1 | nomorecrackpl | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lhj691 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/g32u1k4jxf8f1/DASHPlaylist.mpd?a=1753173522%2CMGUxMzFhZTAwYzQ5MTkyZDBkYzI5NGI2ZGMxODE4NGY2MjRjYjk0ZDNiOTVkMGUyYWI2ZTRkNDhmYTRjOGFiMg%3D%3D&v=1&f=sd', 'duration': 61, 'fallback_url': 'https://v.redd.it/g32u1k4jxf8f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/g32u1k4jxf8f1/HLSPlaylist.m3u8?a=1753173522%2CMmM0NzIwZDYxZjYxMzk2M2E0MTkwNTQxY2IwM2IxNzE2ZWNmYjlkZTA4NjE4MDQ5ODg0ZTFmMjQ3MzZkZjcwOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/g32u1k4jxf8f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1lhj691 | /r/LocalLLaMA/comments/1lhj691/i_built_mai_a_fully_selfhosted_emotional_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'd2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?width=108&crop=smart&format=pjpg&auto=webp&s=dacd1bf232802649f406dc65164b0912de626126', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?width=216&crop=smart&format=pjpg&auto=webp&s=c9bef12834eb9af51cc8b8f3e325af0c6ae0e9cc', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?width=320&crop=smart&format=pjpg&auto=webp&s=fd531af9faaee2373943b1ebe9a23923c6b1f952', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?width=640&crop=smart&format=pjpg&auto=webp&s=0c797f1506b45ac25dbe45c8a460b0a3a758cf24', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?width=960&crop=smart&format=pjpg&auto=webp&s=05912366773c7dcf664f88fb11708c56120479d7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f31590a6c3afe2c4c9c2d1fc2fa11193da04e274', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/d2t0am9pNGp4ZjhmMYWSE6LvYldKBhyvrF1cox7ppGQ78_7jxnoYoX0ao_bd.png?format=pjpg&auto=webp&s=e73279f812093f1a74d38446a5ae5fffd1b925d4', 'width': 1080}, 'variants': {}}]} |
9070 XTs for AI? | 1 | [removed] | 2025-06-22T09:28:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lhjvvi/9070_xts_for_ai/ | RepresentativeCut486 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhjvvi | false | null | t3_1lhjvvi | /r/LocalLLaMA/comments/1lhjvvi/9070_xts_for_ai/ | false | false | self | 1 | null |
Huge differeance in inference speed between 3090 ? | 1 | [removed] | 2025-06-22T10:04:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lhkepw/huge_differeance_in_inference_speed_between_3090/ | vdiallonort | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhkepw | false | null | t3_1lhkepw | /r/LocalLLaMA/comments/1lhkepw/huge_differeance_in_inference_speed_between_3090/ | false | false | self | 1 | null |
Best local llm for 6vcpu 13gig ram vps no gpu | 1 | [removed] | 2025-06-22T10:08:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lhkgm3/best_local_llm_for_6vcpu_13gig_ram_vps_no_gpu/ | jayn35 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhkgm3 | false | null | t3_1lhkgm3 | /r/LocalLLaMA/comments/1lhkgm3/best_local_llm_for_6vcpu_13gig_ram_vps_no_gpu/ | false | false | self | 1 | null |
Anyone solved Unsqueeze matcher issues when converting ONNX to Caffe using YAML + dvconvert? | 1 | [removed] | 2025-06-22T10:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lhkqag/anyone_solved_unsqueeze_matcher_issues_when/ | Soft_Examination1158 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhkqag | false | null | t3_1lhkqag | /r/LocalLLaMA/comments/1lhkqag/anyone_solved_unsqueeze_matcher_issues_when/ | false | false | self | 1 | null |
LLM Assistant with function calling - Update 2 | 1 | [removed] | 2025-06-22T10:45:59 | http://rivridis.com/windows-assistant | Rivridis | rivridis.com | 1970-01-01T00:00:00 | 0 | {} | 1lhl0g0 | false | null | t3_1lhl0g0 | /r/LocalLLaMA/comments/1lhl0g0/llm_assistant_with_function_calling_update_2/ | false | false | default | 1 | null |
Found this amazing RAG for medical research backed answers. (askmedically.com) | 0 | [removed] | 2025-06-22T10:58:01 | https://www.reddit.com/gallery/1lhl71b | ashutrv | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lhl71b | false | null | t3_1lhl71b | /r/LocalLLaMA/comments/1lhl71b/found_this_amazing_rag_for_medical_research/ | false | false | 0 | {'enabled': True, 'images': [{'id': '4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?width=108&crop=smart&auto=webp&s=7f941cda492a36d930437f411010cf5bbecb3363', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?width=216&crop=smart&auto=webp&s=1b80a4d5357072068a9bbf78e0e91ea271f49d4b', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?width=320&crop=smart&auto=webp&s=a3d22e502597fb6dc8ea7063d08950b8a6820510', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?width=640&crop=smart&auto=webp&s=072777e5826a3a1b8958e1cdfd71ae702ada5883', 'width': 640}, {'height': 1920, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?width=960&crop=smart&auto=webp&s=790966e17029e8bbd440fe68ee3e9e297431f4e4', 'width': 960}, {'height': 2160, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?width=1080&crop=smart&auto=webp&s=5048d654d395abf3201eb49441b72d015ac7fd2c', 'width': 1080}], 'source': {'height': 2441, 'url': 'https://external-preview.redd.it/4c4XLGb0z0jbqJLo0LEPH6xIVh_59XK6UaTXk6f3Xts.jpeg?auto=webp&s=6f67faa310826c6b57eadb80c94052918beb23bc', 'width': 1179}, 'variants': {}}]} |
LLM SUGGESTIONS PLEASE | 1 | [removed] | 2025-06-22T11:33:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lhlrav/llm_suggestions_please/ | Radiant_Truth_8743 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhlrav | false | null | t3_1lhlrav | /r/LocalLLaMA/comments/1lhlrav/llm_suggestions_please/ | false | false | self | 1 | null |
🔥 Free Year of Perplexity Pro for Samsung Galaxy Users (and maybe emulator users too… | 1 | [removed] | 2025-06-22T11:55:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lhm4dz/free_year_of_perplexity_pro_for_samsung_galaxy/ | PrettyRevolution1842 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhm4dz | false | null | t3_1lhm4dz | /r/LocalLLaMA/comments/1lhm4dz/free_year_of_perplexity_pro_for_samsung_galaxy/ | false | false | self | 1 | null |
Benchmarking | 1 | [removed] | 2025-06-22T12:42:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lhmyvn/benchmarking/ | chisleu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhmyvn | false | null | t3_1lhmyvn | /r/LocalLLaMA/comments/1lhmyvn/benchmarking/ | false | false | self | 1 | null |
Cost effective batch inference | 1 | [removed] | 2025-06-22T12:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lhn2z9/cost_effective_batch_inference/ | Sea-Quiet-229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhn2z9 | false | null | t3_1lhn2z9 | /r/LocalLLaMA/comments/1lhn2z9/cost_effective_batch_inference/ | false | false | self | 1 | null |
Is it worth it to try IQ3 (or Q3) quants to fit more context. | 1 | [removed] | 2025-06-22T13:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lho2m4/is_it_worth_it_to_try_iq3_or_q3_quants_to_fit/ | KeinNiemand | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lho2m4 | false | null | t3_1lho2m4 | /r/LocalLLaMA/comments/1lho2m4/is_it_worth_it_to_try_iq3_or_q3_quants_to_fit/ | false | false | self | 1 | null |
Interested in encoding and cosine similarity for Qwen/InternVL | 1 | [removed] | 2025-06-22T13:40:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lho5e6/interested_in_encoding_and_cosine_similarity_for/ | Big-Horse-6181 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lho5e6 | false | null | t3_1lho5e6 | /r/LocalLLaMA/comments/1lho5e6/interested_in_encoding_and_cosine_similarity_for/ | false | false | self | 1 | null |
cosine similarity encoders question | 1 | [removed] | 2025-06-22T13:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lho8pm/cosine_similarity_encoders_question/ | Affectionate-Tax2179 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lho8pm | false | null | t3_1lho8pm | /r/LocalLLaMA/comments/1lho8pm/cosine_similarity_encoders_question/ | false | false | self | 1 | null |
Me | 1 |
Axelera Metis AI card: usable for local inference? | 1 | [removed] | 2025-06-22T14:50:59 | https://axelera.ai/ai-accelerators/metis-pcie-ai-acceleration-card | Nilithium | axelera.ai | 1970-01-01T00:00:00 | 0 | {} | 1lhpq4b | false | null | t3_1lhpq4b | /r/LocalLLaMA/comments/1lhpq4b/axelera_metis_ai_card_usable_for_local_inference/ | false | false | 1 | {'enabled': False, 'images': [{'id': '7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?width=108&crop=smart&auto=webp&s=77c2ba2108ee5d3134c0d43f25b4fbd2adae6ff8', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?width=216&crop=smart&auto=webp&s=3f92bb40e3fb5c71a8c32f7928cda2c2b252d0b3', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?width=320&crop=smart&auto=webp&s=228663d92b74b2fea9e9afe69ff3a4376dd3bc97', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?width=640&crop=smart&auto=webp&s=ae778da4f7401eb6c613187b4f0cdc6fcb665371', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?width=960&crop=smart&auto=webp&s=a84dbbaec4a7c83f60ea6dbdfb14d6ceba5d2686', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?width=1080&crop=smart&auto=webp&s=437049ba850419da506e1225d8030c2bbb73de8b', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/7qmhTBlYE6dUWW6DIxrbgB7fxw2Jgc8RFtXJ9LUNLBw.jpeg?auto=webp&s=bfba4a0065dcafa3ca1c1f6bf1ca54a9ec3c3093', 'width': 2400}, 'variants': {}}]} |
[New Features & Better] Tabulens: A Vision-LLM Powered PDF Table Extractor | 1 | [removed] | 2025-06-22T15:43:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lhqy4c/new_features_better_tabulens_a_visionllm_powered/ | PleasantInspection12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhqy4c | false | null | t3_1lhqy4c | /r/LocalLLaMA/comments/1lhqy4c/new_features_better_tabulens_a_visionllm_powered/ | false | false | 1 | null |
Experience running llms on CPU only | 1 | [removed] | 2025-06-22T15:46:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lhr0wl/experience_running_llms_on_cpu_only/ | 82shadesofgrey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhr0wl | false | null | t3_1lhr0wl | /r/LocalLLaMA/comments/1lhr0wl/experience_running_llms_on_cpu_only/ | false | false | self | 1 | null |
Most Suitable Model for Text Classification | 1 | [removed] | 2025-06-22T16:24:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lhrx54/most_suitable_model_for_text_classification/ | Jason_Wesley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhrx54 | false | null | t3_1lhrx54 | /r/LocalLLaMA/comments/1lhrx54/most_suitable_model_for_text_classification/ | false | false | self | 1 | null |
Best open-source LLM for summarizing German course transcripts? | 1 | [removed] | 2025-06-22T16:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lhs85n/best_opensource_llm_for_summarizing_german_course/ | Sea-Woodpecker-2594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhs85n | false | null | t3_1lhs85n | /r/LocalLLaMA/comments/1lhs85n/best_opensource_llm_for_summarizing_german_course/ | false | false | self | 1 | null |
Best open-source LLM for summarizing German course transcripts (cloud setup)? | 1 | [removed] | 2025-06-22T16:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lhscve/best_opensource_llm_for_summarizing_german_course/ | Sea-Woodpecker-2594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhscve | false | null | t3_1lhscve | /r/LocalLLaMA/comments/1lhscve/best_opensource_llm_for_summarizing_german_course/ | false | false | self | 1 | null |
I think I’m gonna apply for Meta what would be your dream job there? | 1 | [removed] | 2025-06-22T16:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lhsd2w/i_think_im_gonna_apply_for_meta_what_would_be/ | TheMightyDice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhsd2w | false | null | t3_1lhsd2w | /r/LocalLLaMA/comments/1lhsd2w/i_think_im_gonna_apply_for_meta_what_would_be/ | false | false | self | 1 | null |
LTT Review/Breakdown of the Chinese 48GB 4090 2 Slot GPUs | 1 | 2025-06-22T16:58:51 | https://www.youtube.com/watch?v=HZgQp-WDebU | Rollingsound514 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1lhsqeb | false | {'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HZgQp-WDebU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA Never Authorized The Production Of This Card"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/HZgQp-WDebU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'NVIDIA Never Authorized The Production Of This Card', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1lhsqeb | /r/LocalLLaMA/comments/1lhsqeb/ltt_reviewbreakdown_of_the_chinese_48gb_4090_2/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=108&crop=smart&auto=webp&s=34b6e95c9e78450a03bc17669db1039556875ab2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=216&crop=smart&auto=webp&s=94a5189da6314051515f34d0a46727096a47647f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=320&crop=smart&auto=webp&s=1fdb319a25ca00eba0456ee1f02c9bf5308cdb5e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?auto=webp&s=5ca2af1087455cec442de957ead14f0da81edf2e', 'width': 480}, 'variants': {}}]} |
moonshotai released a new multi-modal model with 16B params and 3B active | 1 | [removed]
No posts in the last 9 hours - I am addicted to seeing updates here | 1 |
No new updates in the last 10 hours - I am addicted, it seems | 1 |
OpenAI's Chief Product Officer made a Lt. Colonel in the Army. | 1 | 2025-06-22T18:06:02 | https://www.army.mil/article-amp/286317/army_launches_detachment_201_executive_innovation_corps_to_drive_tech_transformation | fallingdowndizzyvr | army.mil | 1970-01-01T00:00:00 | 0 | {} | 1lhudl3 | false | null | t3_1lhudl3 | /r/LocalLLaMA/comments/1lhudl3/openais_chief_product_officer_made_a_lt_colonel/ | false | false | default | 1 | null |
LinusTechTips reviews Chinese 4090s with 48GB VRAM, messes with LLMs | 1 | [removed]
POLARIS - Bytedance | 1 | [removed] | 2025-06-22T18:14:39 | https://github.com/ChenxinAn-fdu/POLARIS/tree/main | KillerX629 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lhul4j | false | null | t3_1lhul4j | /r/LocalLLaMA/comments/1lhul4j/polaris_bytedance/ | false | false | default | 1 | null |
I've created an app that allows you to AI-analyze many PDF files (and their highlights) into tables - and it can fully function with local LMs | 1 | [removed] | 2025-06-22T18:39:25 | https://v.redd.it/m5wix88owi8f1 | RansomWarrior | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lhv6sb | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m5wix88owi8f1/DASHPlaylist.mpd?a=1753209578%2CZGJjM2NmNDU2YWYzODY2NmJjMTM2MzU1NWZlMDg4OWNkMjQ5YzNjMDdkNjJkNzNhODIxMDkwMmRkYzExN2I1ZQ%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/m5wix88owi8f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/m5wix88owi8f1/HLSPlaylist.m3u8?a=1753209578%2CZjcyMDc5MmY3NTdlMjNjMmQ5YWRkZDNjMGNhNWRlNTE4ODUzMDViNjY5Mjc2MTE2MDAxOTUzNjhkNzUzMzBhNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m5wix88owi8f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1lhv6sb | /r/LocalLLaMA/comments/1lhv6sb/ive_created_an_app_that_allows_you_to_aianalyze/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?width=108&crop=smart&format=pjpg&auto=webp&s=8fc8980cc615ca093999fa7d84c9d60956720f55', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?width=216&crop=smart&format=pjpg&auto=webp&s=374d1477d54f503b7fa1a717b676e0effe804a47', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?width=320&crop=smart&format=pjpg&auto=webp&s=a2b0196610c38f532d1652b32d56be166e2501f6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?width=640&crop=smart&format=pjpg&auto=webp&s=0c9134d0411e0f5c30deeb30deb1dccfdacfb826', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?width=960&crop=smart&format=pjpg&auto=webp&s=62eb0182616eda0d4e8bfa089c72e71f3619ced2', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c7cc6c63d2e7489f15c422b19ca3153874d35227', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NHBjbW45OG93aThmMSjUq0oWmjO_U9p2SGcS5oS-SH5E9DSIH-yH-b3lhkRc.png?format=pjpg&auto=webp&s=f8d485f83db97c1f1b37cb61f3936d87d127a7d7', 'width': 1920}, 'variants': {}}]} |
managing coding agents through the phone? | 1 | [removed] | 2025-06-22T18:43:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lhvaop/managing_coding_agents_through_the_phone/ | secopsml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhvaop | false | null | t3_1lhvaop | /r/LocalLLaMA/comments/1lhvaop/managing_coding_agents_through_the_phone/ | false | false | self | 1 | null |
Does anyone else find Dots really impressive? | 1 | [removed] | 2025-06-22T19:10:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lhvxn3/does_anyone_else_find_dots_really_impressive/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhvxn3 | false | null | t3_1lhvxn3 | /r/LocalLLaMA/comments/1lhvxn3/does_anyone_else_find_dots_really_impressive/ | false | false | self | 1 | null |
In-character roleplay in LLM thinking tokens | 1 | [removed] | 2025-06-22T19:22:18 | Lesterpaintstheworld | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lhw875 | false | null | t3_1lhw875 | /r/LocalLLaMA/comments/1lhw875/incharacter_roleplay_in_llm_thinking_tokens/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '689klvmu3j8f1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/689klvmu3j8f1.png?width=108&crop=smart&auto=webp&s=e86488da806d1acfdc7486930b27d9ef031a8f85', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/689klvmu3j8f1.png?width=216&crop=smart&auto=webp&s=219319694c940e96c167d46b459132d157ca883a', 'width': 216}, {'height': 251, 'url': 'https://preview.redd.it/689klvmu3j8f1.png?width=320&crop=smart&auto=webp&s=55de9f61d8b9906fd825ea556f9443d4f8a3d93e', 'width': 320}, {'height': 503, 'url': 'https://preview.redd.it/689klvmu3j8f1.png?width=640&crop=smart&auto=webp&s=196d146f64d723e6973e217c141f3d4a8cb97560', 'width': 640}], 'source': {'height': 700, 'url': 'https://preview.redd.it/689klvmu3j8f1.png?auto=webp&s=b11e4c81a02896bdb97a1a0c3e802b325523dd0e', 'width': 889}, 'variants': {}}]} |
What if i made groq for Voice Models?? | 1 | [removed] | 2025-06-22T19:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lhwsr2/what_if_i_made_groq_for_voice_models/ | Expert-Address-2918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhwsr2 | false | null | t3_1lhwsr2 | /r/LocalLLaMA/comments/1lhwsr2/what_if_i_made_groq_for_voice_models/ | false | false | self | 1 | null |
Browser extension to desensationalise headlines with a local LLM | 1 | [removed] | 2025-06-22T20:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lhxju9/browser_extension_to_desensationalise_headlines/ | Everlier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhxju9 | false | null | t3_1lhxju9 | /r/LocalLLaMA/comments/1lhxju9/browser_extension_to_desensationalise_headlines/ | false | false | self | 1 | null |
2x NVIDIA RTX 6000 Blackwell GPUs in My AI Workstation – What Should I Test Next? (192GB VRAM + 512 GB ECC DDR5 RAM) | 1 | [removed] | 2025-06-22T21:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lhzfp3/2x_nvidia_rtx_6000_blackwell_gpus_in_my_ai/ | texasdude11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhzfp3 | false | null | t3_1lhzfp3 | /r/LocalLLaMA/comments/1lhzfp3/2x_nvidia_rtx_6000_blackwell_gpus_in_my_ai/ | false | false | self | 1 | null |
mod deleted his account? | 1 | [removed] | 2025-06-22T21:59:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lhzvj2/mod_deleted_his_account/ | futurefootballplayer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhzvj2 | false | null | t3_1lhzvj2 | /r/LocalLLaMA/comments/1lhzvj2/mod_deleted_his_account/ | false | false | self | 1 | null |
Mid-Range uncensored model for writing, esp. fiction? | 1 | [removed] | 2025-06-22T22:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lhzzuz/midrange_uncensored_model_for_writing_esp_fiction/ | Late-Assignment8482 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhzzuz | false | null | t3_1lhzzuz | /r/LocalLLaMA/comments/1lhzzuz/midrange_uncensored_model_for_writing_esp_fiction/ | false | false | self | 1 | null |
great video explaining how Language Models suddenly got really good, just by throwing more Compute at the problem | 1 | [removed] | 2025-06-22T22:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1li0mi9/great_video_explaining_how_language_models/ | maniaq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li0mi9 | false | null | t3_1li0mi9 | /r/LocalLLaMA/comments/1li0mi9/great_video_explaining_how_language_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9sTzOynCNLUEsVy20ac4RuO2848rWQcR3dxZ7wgKjEo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9sTzOynCNLUEsVy20ac4RuO2848rWQcR3dxZ7wgKjEo.jpeg?width=108&crop=smart&auto=webp&s=699b0146473040c96259633be869ee047c7e2ba2', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9sTzOynCNLUEsVy20ac4RuO2848rWQcR3dxZ7wgKjEo.jpeg?width=216&crop=smart&auto=webp&s=c2a345ae5c15605fe4fdb7bd4d76b00bd970c19f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9sTzOynCNLUEsVy20ac4RuO2848rWQcR3dxZ7wgKjEo.jpeg?width=320&crop=smart&auto=webp&s=ce28f117f7e67938867c861997e6ab04123f014a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9sTzOynCNLUEsVy20ac4RuO2848rWQcR3dxZ7wgKjEo.jpeg?auto=webp&s=52e37899ba6ebc96c22d1ca695874f9aaa90d5a3', 'width': 480}, 'variants': {}}]} |
What is the best way to give my LLM a single document worth of information to use? | 1 | [removed] | 2025-06-22T22:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1li0qvm/what_is_the_best_way_to_give_my_llm_a_single/ | lololy87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li0qvm | false | null | t3_1li0qvm | /r/LocalLLaMA/comments/1li0qvm/what_is_the_best_way_to_give_my_llm_a_single/ | false | false | self | 1 | null |
Best local LLM for JSON Objects? | 1 | [removed] | 2025-06-22T22:52:54 | https://www.reddit.com/r/LocalLLaMA/comments/1li11wl/best_local_llm_for_json_objects/ | ganderofvenice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li11wl | false | null | t3_1li11wl | /r/LocalLLaMA/comments/1li11wl/best_local_llm_for_json_objects/ | false | false | self | 1 | null |
Best local LLM for JSON Objects? | 1 | [removed] | 2025-06-22T22:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1li14yb/best_local_llm_for_json_objects/ | ganderofvenice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li14yb | false | null | t3_1li14yb | /r/LocalLLaMA/comments/1li14yb/best_local_llm_for_json_objects/ | false | false | self | 1 | null |
What are the best 70b tier models/finetunes these days? | 1 | [removed] | 2025-06-22T23:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1li1a6i/what_are_the_best_70b_tier_modelsfinetunes_these/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li1a6i | false | null | t3_1li1a6i | /r/LocalLLaMA/comments/1li1a6i/what_are_the_best_70b_tier_modelsfinetunes_these/ | false | false | self | 1 | null |
Any Local LLM Suggestion ? | 1 | [removed] | 2025-06-22T23:05:27 | https://www.reddit.com/r/LocalLLaMA/comments/1li1bmz/any_local_llm_suggestion/ | thesayk0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li1bmz | false | null | t3_1li1bmz | /r/LocalLLaMA/comments/1li1bmz/any_local_llm_suggestion/ | false | false | self | 1 | null |
Really bad performance on EPYC 7C13 with 1TB of RAM, BIOS settings to blame? | 1 | [removed] | 2025-06-22T23:45:56 | https://www.reddit.com/r/LocalLLaMA/comments/1li265m/really_bad_performance_on_epyc_7c13_with_1tb_of/ | BasicCoconut9187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li265m | false | null | t3_1li265m | /r/LocalLLaMA/comments/1li265m/really_bad_performance_on_epyc_7c13_with_1tb_of/ | false | false | 1 | null |
Best human-like model that doesn't know it's an AI? | 1 | [removed] | 2025-06-22T23:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/1li2g96/best_humanlike_model_that_doesnt_know_its_an_ai/ | RandumbRedditor1000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li2g96 | false | null | t3_1li2g96 | /r/LocalLLaMA/comments/1li2g96/best_humanlike_model_that_doesnt_know_its_an_ai/ | false | false | self | 1 | null |
What's your /r/LocalLLaMA "hot take" ? | 1 | [removed] | 2025-06-23T00:58:13 | https://www.reddit.com/r/LocalLLaMA/comments/1li3m1o/whats_your_rlocalllama_hot_take/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li3m1o | false | null | t3_1li3m1o | /r/LocalLLaMA/comments/1li3m1o/whats_your_rlocalllama_hot_take/ | false | false | self | 1 | null |
TIL that Kevin Durant (yes that one) was an early investor in Huggingface | 1 | [removed] | 2025-06-23T01:25:49 | obvithrowaway34434 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1li45nb | false | null | t3_1li45nb | /r/LocalLLaMA/comments/1li45nb/til_that_kevin_durant_yes_that_one_was_an_early/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'q1_MnU5HXvaazeTN5w2YJeuExTa7DRt8n07DvsWk-yU', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/ofsiyiq3xk8f1.png?width=108&crop=smart&auto=webp&s=08aa76bf66ff3de24796b90fc38a3dcf5014ef4d', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/ofsiyiq3xk8f1.png?width=216&crop=smart&auto=webp&s=9aedcc365319c95f71e115f702c1e49384562d08', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/ofsiyiq3xk8f1.png?width=320&crop=smart&auto=webp&s=cca0e319eace004067c884abd3bcb9d3fc6dfd92', 'width': 320}], 'source': {'height': 514, 'url': 'https://preview.redd.it/ofsiyiq3xk8f1.png?auto=webp&s=c869adafddcd02c875e184a1cda9baa84f4a0385', 'width': 603}, 'variants': {}}]} |
Any local models that have fewer restraints? | 1 | [removed]