title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
homemade "touring tests"
| 1 |
[removed]
| 2025-05-12T10:19:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkozvs/homemade_touring_tests/
|
redalvi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkozvs
| false | null |
t3_1kkozvs
|
/r/LocalLLaMA/comments/1kkozvs/homemade_touring_tests/
| false | false |
self
| 1 | null |
Cool little tool to compare Cloud GPU prices.
| 0 |
What do you think?
| 2025-05-12T10:46:59 |
https://www.serversearcher.com/servers/gpu
|
Cool_Chemistry_3119
|
serversearcher.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkpfb9
| false | null |
t3_1kkpfb9
|
/r/LocalLLaMA/comments/1kkpfb9/cool_little_tool_to_compare_cloud_gpu_prices/
| false | false |
default
| 0 | null |
Has anyone tried running DeerFlow locally with qwen3 models
| 1 |
[removed]
| 2025-05-12T11:06:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkpqo5/has_anyone_tried_running_deerflow_locally_with/
|
Anjal_p
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkpqo5
| false | null |
t3_1kkpqo5
|
/r/LocalLLaMA/comments/1kkpqo5/has_anyone_tried_running_deerflow_locally_with/
| false | false |
self
| 1 | null |
Are there any lightweight Text-to-SQL models that we can run on CPU?
| 1 |
[removed]
| 2025-05-12T11:18:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkpy96/are_there_any_lightweight_texttosql_models_that/
|
AbleStructure5113
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkpy96
| false | null |
t3_1kkpy96
|
/r/LocalLLaMA/comments/1kkpy96/are_there_any_lightweight_texttosql_models_that/
| false | false |
self
| 1 | null |
A daily reminder why I DO NOT pay for Claude.
| 1 |
[removed]
| 2025-05-12T11:26:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkq3jx/a_daily_reminded_why_i_do_not_pay_for_claude/
|
Sicarius_The_First
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkq3jx
| false | null |
t3_1kkq3jx
|
/r/LocalLLaMA/comments/1kkq3jx/a_daily_reminded_why_i_do_not_pay_for_claude/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg.png?width=108&crop=smart&auto=webp&s=75a576ac9796784048d45573d064ccdb90902fbb', 'width': 108}, {'height': 115, 'url': 'https://external-preview.redd.it/LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg.png?width=216&crop=smart&auto=webp&s=e0e4c962052fdfc65e6fd058dafa6cfa99eea295', 'width': 216}, {'height': 171, 'url': 'https://external-preview.redd.it/LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg.png?width=320&crop=smart&auto=webp&s=2f2ee796e9ea4613dd0ad28f51de93d3f69c4f18', 'width': 320}, {'height': 343, 'url': 'https://external-preview.redd.it/LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg.png?width=640&crop=smart&auto=webp&s=f554cce4e4771c3ef824def631719a0c1df947ec', 'width': 640}, {'height': 515, 'url': 'https://external-preview.redd.it/LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg.png?width=960&crop=smart&auto=webp&s=4c13edc9167add5b02c213dc460e3568b834d01f', 'width': 960}, {'height': 579, 'url': 'https://external-preview.redd.it/LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg.png?width=1080&crop=smart&auto=webp&s=4e358821be4ddaea5ec568f19ac27bf40239dd77', 'width': 1080}], 'source': {'height': 1088, 'url': 'https://external-preview.redd.it/LJR88lWPUwWaLgkKok4vYIRrBi8t4xC9e8aoqoBZCNg.png?auto=webp&s=72b36a9f248e221f9c93cb8531cac70bcb1a1f24', 'width': 2027}, 'variants': {}}]}
|
|
Microsoft Researchers Introduce ARTIST
| 279 |
Microsoft Research introduces ARTIST (Agentic Reasoning and Tool Integration in Self-improving Transformers), a framework that combines agentic reasoning, reinforcement learning, and dynamic tool use to enhance LLMs. ARTIST enables models to autonomously decide when, how, and which tools to use during multi-step reasoning, learning robust strategies without step-level supervision. The model improves reasoning and interaction with external environments through integrated tool queries and outputs. Evaluated on challenging math and function-calling benchmarks, ARTIST outperforms top models like GPT-4o, achieving up to 22% gains. It demonstrates emergent agentic behaviors, setting a new standard in generalizable and interpretable problem-solving.
https://www.marktechpost.com/2025/05/10/microsoft-researchers-introduce-artist-a-reinforcement-learning-framework-that-equips-llms-with-agentic-reasoning-and-dynamic-tool-use/
The paper: https://arxiv.org/abs/2505.01441
| 2025-05-12T11:34:55 |
NewtMurky
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkq8q8
| false | null |
t3_1kkq8q8
|
/r/LocalLLaMA/comments/1kkq8q8/microsoft_researchers_introduce_artist/
| false | false |
default
| 279 |
{'enabled': True, 'images': [{'id': '90acs85p7c0f1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/90acs85p7c0f1.png?width=108&crop=smart&auto=webp&s=5f685e86dd8c9a4f30f3c5e23d289d0c587d6199', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/90acs85p7c0f1.png?width=216&crop=smart&auto=webp&s=e07585d944c299062f04d735dd7789e8a018b2c2', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/90acs85p7c0f1.png?width=320&crop=smart&auto=webp&s=7306f36b39899d8f69f92f64e4dd492a694718ae', 'width': 320}, {'height': 474, 'url': 'https://preview.redd.it/90acs85p7c0f1.png?width=640&crop=smart&auto=webp&s=12c24f942d10fedd4f933d6f856346cbfea33433', 'width': 640}, {'height': 711, 'url': 'https://preview.redd.it/90acs85p7c0f1.png?width=960&crop=smart&auto=webp&s=6d70bff40978a5c81467223b8546cb7603c788a1', 'width': 960}, {'height': 799, 'url': 'https://preview.redd.it/90acs85p7c0f1.png?width=1080&crop=smart&auto=webp&s=396016da9dbbf9b9e2ee418631847fd30f3250d8', 'width': 1080}], 'source': {'height': 1896, 'url': 'https://preview.redd.it/90acs85p7c0f1.png?auto=webp&s=730abc71534fb6667ecddd9ab69254decb626cbd', 'width': 2560}, 'variants': {}}]}
|
|
Image processing api price
| 1 |
[removed]
| 2025-05-12T11:40:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkqci0/image_processing_api_price/
|
PropertyLoover
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkqci0
| false | null |
t3_1kkqci0
|
/r/LocalLLaMA/comments/1kkqci0/image_processing_api_price/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc.png?width=108&crop=smart&auto=webp&s=2da2248be7b96eadb02dc2410cc1953d12dd11bd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc.png?width=216&crop=smart&auto=webp&s=cf40f736de4d38d2e1191afe7e04545a324545be', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc.png?width=320&crop=smart&auto=webp&s=7126c723454430ff14073e4afdffc27396631075', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc.png?width=640&crop=smart&auto=webp&s=748986ccc61feffc7c43e6a87f3f336c95082534', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc.png?width=960&crop=smart&auto=webp&s=a62daae02b7bd8a0df6cae0b2e999f733ff5ce46', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc.png?width=1080&crop=smart&auto=webp&s=5c238bcaf84846df41f4df095bbc2a69803259cf', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/YLypQBhJHHLZwG2OJ5ub2h4zqyFkRgzLovGpbn389Zc.png?auto=webp&s=517a0ec9cb72690e63355661f65e8b7d68b713fd', 'width': 1200}, 'variants': {}}]}
|
Image processing api price
| 1 |
[removed]
| 2025-05-12T11:46:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkqgi1/image_processing_api_price/
|
gorgonation
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkqgi1
| false | null |
t3_1kkqgi1
|
/r/LocalLLaMA/comments/1kkqgi1/image_processing_api_price/
| false | false |
self
| 1 | null |
Best app to write novels?
| 4 |
Hey guys,
Just a plain idea: I know that in VS Code I can use Cline to automate writing code; I'm wondering if there is a combo like that specialised for writing stories?
Many thanks
| 2025-05-12T11:46:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkqgjy/best_app_to_write_novels/
|
Ok-Internal9317
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkqgjy
| false | null |
t3_1kkqgjy
|
/r/LocalLLaMA/comments/1kkqgjy/best_app_to_write_novels/
| false | false |
self
| 4 | null |
Continuous Thought Machines - Sakana AI
| 78 | 2025-05-12T12:07:03 |
https://sakana.ai/ctm/
|
ThiccStorms
|
sakana.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkqud8
| false | null |
t3_1kkqud8
|
/r/LocalLLaMA/comments/1kkqud8/continuous_thought_machines_sakana_ai/
| false | false | 78 |
{'enabled': False, 'images': [{'id': '301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao.png?width=108&crop=smart&auto=webp&s=2f41d08820a2a28c1d8def0db2731a4d7891b2d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao.png?width=216&crop=smart&auto=webp&s=0d552b2d46de2f6924653640537ad0a80efd3f12', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao.png?width=320&crop=smart&auto=webp&s=82b3f062e4093cf934a2e809f00064dc0a2ce8db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao.png?width=640&crop=smart&auto=webp&s=3dedee1849a1301ba66f6e5516f26d39f420baa8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao.png?width=960&crop=smart&auto=webp&s=0fe9e19ba6a11eb103602d2d6fe9e2ffc5bc3216', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao.png?width=1080&crop=smart&auto=webp&s=a7a73c2cced6dc93bf7423ec253c3eb3ab02b19b', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao.png?auto=webp&s=200ac0ea3b60e65bf3f2eb157a74047658a689ca', 'width': 1600}, 'variants': {}}]}
|
||
QwenMedic-v1 - Qwen 3 fine tuned model on medical data
| 1 |
[removed]
| 2025-05-12T12:17:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkr1u9/qwenmedicv1_qwen_3_fine_tuned_model_on_medical/
|
No_Lab_8797
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkr1u9
| false | null |
t3_1kkr1u9
|
/r/LocalLLaMA/comments/1kkr1u9/qwenmedicv1_qwen_3_fine_tuned_model_on_medical/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Gttm7tdViqNnv8_G_ltisbIEp3XNloXtjL12h_RBI94', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Gttm7tdViqNnv8_G_ltisbIEp3XNloXtjL12h_RBI94.png?width=108&crop=smart&auto=webp&s=aaa7190033b1b4286fa2f43659e63b06d08216e1', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/Gttm7tdViqNnv8_G_ltisbIEp3XNloXtjL12h_RBI94.png?width=216&crop=smart&auto=webp&s=e2fd9502184630348e64162799ce9279b5229a1a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/Gttm7tdViqNnv8_G_ltisbIEp3XNloXtjL12h_RBI94.png?width=320&crop=smart&auto=webp&s=edc98617a56da3ec4923bb2fff1eb47aa4db39d1', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/Gttm7tdViqNnv8_G_ltisbIEp3XNloXtjL12h_RBI94.png?width=640&crop=smart&auto=webp&s=b37b3ec281042de8ecfa56474673a8682aa5f9c3', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/Gttm7tdViqNnv8_G_ltisbIEp3XNloXtjL12h_RBI94.png?width=960&crop=smart&auto=webp&s=1762ac109fb6d844e9fa8ecfe8f503fc63a6c635', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/Gttm7tdViqNnv8_G_ltisbIEp3XNloXtjL12h_RBI94.png?auto=webp&s=8f953899c6d503fa54005208fa6644ca6116803d', 'width': 1024}, 'variants': {}}]}
|
|
QwenMedic-v1
| 1 |
[removed]
| 2025-05-12T12:20:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkr3w9/qwenmedicv1/
|
No_Lab_8797
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkr3w9
| false | null |
t3_1kkr3w9
|
/r/LocalLLaMA/comments/1kkr3w9/qwenmedicv1/
| false | false |
self
| 1 | null |
Help about 7B Q4 from a noob
| 1 |
[removed]
| 2025-05-12T12:28:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkr959/help_about_7b_q4_from_a_noob/
|
Severe-Revolution501
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkr959
| false | null |
t3_1kkr959
|
/r/LocalLLaMA/comments/1kkr959/help_about_7b_q4_from_a_noob/
| false | false |
self
| 1 | null |
can someone help me to convert this whisper model to .ggml format. (not a techy, for academic work)
| 3 |
here is a [whisper model](https://huggingface.co/jiviai/audioX-south-v1) which is well trained for low-resource Indic languages and is super useful for my academic research, but the weights are in .safetensors. I want to use it with whisper.cpp on macOS. Can someone help with converting this into .ggml format?
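Not a complete answer, but whisper.cpp ships a converter for Hugging Face Whisper checkpoints. A minimal sketch of the usual route, assuming the bundled `models/convert-h5-to-ggml.py` script accepts this particular fine-tune (and its safetensors weights); paths are placeholders:
```python
import subprocess
from huggingface_hub import snapshot_download

# Sketch: fetch the HF checkpoint, then run whisper.cpp's bundled converter.
# Assumes whisper.cpp and openai/whisper are cloned next to this script.
model_dir = snapshot_download("jiviai/audioX-south-v1")
subprocess.run(
    [
        "python", "whisper.cpp/models/convert-h5-to-ggml.py",
        model_dir,   # HF model directory (config.json + weights)
        "whisper",   # clone of openai/whisper (mel filters, tokenizer data)
        ".",         # output directory for the ggml model file
    ],
    check=True,
)
```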
| 2025-05-12T12:33:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkrckq/can_someone_help_me_to_convert_this_whisper_model/
|
Trysem
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkrckq
| false | null |
t3_1kkrckq
|
/r/LocalLLaMA/comments/1kkrckq/can_someone_help_me_to_convert_this_whisper_model/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU.png?width=108&crop=smart&auto=webp&s=2a73ab7503270ef6121df409212e58466116aaef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU.png?width=216&crop=smart&auto=webp&s=40d9c5458713fd074e80db11ec221c2b077782b1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU.png?width=320&crop=smart&auto=webp&s=41cebbdd67fdb51a265e3ac48c23df37511ddf3c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU.png?width=640&crop=smart&auto=webp&s=193352215e793f9d47d4688662eb74d0b7e4bdcc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU.png?width=960&crop=smart&auto=webp&s=8f5d6ba0ffc03dfa864d2dca8fbc8fb40fde5a7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU.png?width=1080&crop=smart&auto=webp&s=0bf6d92edec2f49598163344932ec153719a79e1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/qtvaZK570yx4OkhMlJcQpRbNrriNZXbrjVrRtqgiKFU.png?auto=webp&s=5e7008f8d9c82fd5687584d8b615fd315c49c909', 'width': 1200}, 'variants': {}}]}
|
Qwen releases official quantized models of Qwen3
| 1,113 |
We’re officially releasing the quantized models of Qwen3 today!
Now you can deploy Qwen3 via Ollama, LM Studio, SGLang, and vLLM — choose from multiple formats including GGUF, AWQ, and GPTQ for easy local deployment.
Find all models in the Qwen3 collection on Hugging Face.
Hugging Face: https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f
| 2025-05-12T12:39:07 |
ResearchCrafty1804
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkrgyl
| false | null |
t3_1kkrgyl
|
/r/LocalLLaMA/comments/1kkrgyl/qwen_releases_official_quantized_models_of_qwen3/
| false | false | 1,113 |
{'enabled': True, 'images': [{'id': 'ZHgqAtcIaRT8PE61xGe4SWG5rzD25ZP05qPqgXWzjj4', 'resolutions': [{'height': 179, 'url': 'https://preview.redd.it/ok2e3kp5jc0f1.jpeg?width=108&crop=smart&auto=webp&s=0fbcd2729e752c3ec7f861b7876c9a07bb64b005', 'width': 108}, {'height': 359, 'url': 'https://preview.redd.it/ok2e3kp5jc0f1.jpeg?width=216&crop=smart&auto=webp&s=f8663deddc78dd17d10fe72ebc7fc93eb05765e8', 'width': 216}, {'height': 532, 'url': 'https://preview.redd.it/ok2e3kp5jc0f1.jpeg?width=320&crop=smart&auto=webp&s=527ffe1e9fedbccd85ef580fae7bde54da3da989', 'width': 320}, {'height': 1064, 'url': 'https://preview.redd.it/ok2e3kp5jc0f1.jpeg?width=640&crop=smart&auto=webp&s=32d02567371fef442da1e95968e95dba1cbebc18', 'width': 640}], 'source': {'height': 1530, 'url': 'https://preview.redd.it/ok2e3kp5jc0f1.jpeg?auto=webp&s=6d55a411992457e2836cb1689e7c2aab9757abbc', 'width': 920}, 'variants': {}}]}
|
||
Two comprehensive Qwen3 benchmark results
| 1 |
[removed]
| 2025-05-12T13:26:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkshjr/two_comprehensive_qwen3_benchmark_results/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkshjr
| false |
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 60, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FxX6AVGy%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D500&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FxX6AVGy&image=https%3A%2F%2Fi.imgur.com%2Ffl7Inob.jpg%3Ffb&type=text%2Fhtml&schema=imgur" width="500" height="60" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 1880, 'thumbnail_url': 'https://i.imgur.com/fl7Inob.jpg?fb', 'thumbnail_width': 4320, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/xX6AVGy', 'version': '1.0', 'width': 500}, 'type': 'imgur.com'}
|
t3_1kkshjr
|
/r/LocalLLaMA/comments/1kkshjr/two_comprehensive_qwen3_benchmark_results/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns.png?width=108&crop=smart&auto=webp&s=449b512335bebb911b2482d14cf97e5634e774ff', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns.png?width=216&crop=smart&auto=webp&s=d0611949883e5163c8484663aa85e105a61255af', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns.png?width=320&crop=smart&auto=webp&s=99591a99a84df361e7aa6da21a0b0042bb831349', 'width': 320}, {'height': 276, 'url': 'https://external-preview.redd.it/kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns.png?width=640&crop=smart&auto=webp&s=b4e9a7b79fa196b807934e941d2085db3fe8f579', 'width': 640}, {'height': 414, 'url': 'https://external-preview.redd.it/kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns.png?width=960&crop=smart&auto=webp&s=9815ff85c8d15702a104b7b13bc8a1deabf2f79b', 'width': 960}, {'height': 466, 'url': 'https://external-preview.redd.it/kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns.png?width=1080&crop=smart&auto=webp&s=7ff71aea55d546cf9a0d1c39e9a8efb22f830727', 'width': 1080}], 'source': {'height': 819, 'url': 'https://external-preview.redd.it/kRKwL-A5PeIgur9ILjGUbBr6ozxizbRtg8KygOuqBns.png?auto=webp&s=94b8205778d3b3d77ffa40f3fede51e1f8e72c39', 'width': 1895}, 'variants': {}}]}
|
|
Qwen3 repeats itself forever at the end of its output
| 13 |
I am using ollama 0.6.8 with:
Prompt:
There are 30 people in a class and each person chooses 5 other people in the class that they want to be in a new class with. The new classes will each be of size 10 (the classes don't overlap and cover all 30 people). Is it ever impossible for everyone to be with at least one of their chosen five?
Or alternatively, show that it is always possible.
I know that there is a set of choices that makes the classes impossible to arrange if each person chooses 4 others, and that there is no such bad set of choices if each person chooses 8 others.
For 5, 6, 7 choices the question is currently open.
The question is asking about a set of preferences that is impossible to satisfy for any partition.
-----
After thinking for a while it outputs this on a loop which never seems to terminate:

**Final Answer:** \boxed{Yes}

**Reasoning:** The high connectivity of the graph (with 5 choices per person) ensures that a suitable partition into 3 groups is always possible, as supported by probabilistic arguments. Hence, the answer is \boxed{Yes}.

**Final Answer**
\boxed{Yes}

**Reasoning:** The high connectivity of the graph (with 5 choices per person) ensures that a suitable partition into 3 groups is always possible, as supported by probabilistic arguments. Hence, the answer is \boxed{Yes}.

I think the answer is \boxed{Yes}.

**Step-by-Step Explanation:**

1. **Understanding the Problem:**
   - There are 30 people, each with 5 choices (presumably of other people).
   - We are to divide them into 3 groups of 10.
   - In each group, for every person, at least one of their 5 choices must be in the same group.
2. **Key Observations:**
   - Each person has 5 "preferences" (choices).
   - The graph formed by these choices is a directed graph with 30 nodes and 5 outgoing edges per node.
   - The goal is to partition this graph into 3 subgraphs (groups), each of size 10, such that for every node, at least one of its 5 outgoing edges leads to a node in the same subgraph.
3. **Random Assignment and Probabilities:**
   - If we randomly assign each person to one of the 3 groups, the probability that a specific person's group contains none of their 5 choices is:
     $$\left(\frac{2}{3}\right)^5 \approx 0.1316$$
   - Therefore, the probability that a person is "satisfied" (i.e., at least one of their choices is in the same group) is:
     $$1 - \left(\frac{2}{3}\right)^5 \approx 0.8684$$
   - The expected number of "unsatisfied" people across all 30 is:
     $$30 \times 0.1316 \approx 3.95$$
4. **Existence Argument (Probabilistic Method):**
   - While the expected number of unsatisfied people is about 4, this does not guarantee that there exists a partition with zero unsatisfied people.
   - However, the graph is highly connected (each node has 5 outgoing edges), and such a partition is likely to exist due to the high density of connections.
   - The probabilistic method suggests that, given the high connectivity, it is highly probable that such a partition exists.
5. **Conclusion:**
   - Based on the high connectivity of the graph and the probabilistic argument, it is always possible to divide the 30 people into 3 groups of 10 such that each person has at least one of their 5 choices in the same group.
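For what it's worth, runaway repetition like this can often be damped at sampling time rather than in the prompt. A minimal sketch against Ollama's generate API; the option values are illustrative assumptions, not a verified fix:
```python
import json
import urllib.request

# Sketch: cap the output length and penalize repetition via Ollama options.
payload = {
    "model": "qwen3:32b",  # assumed local tag for the model in the post
    "prompt": "There are 30 people in a class and each person chooses ...",
    "stream": False,
    "options": {
        "repeat_penalty": 1.1,    # discourage recently repeated tokens
        "presence_penalty": 1.5,  # value Qwen suggests for quantized builds
        "num_predict": 4096,      # hard cap so a loop cannot run forever
    },
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```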
| 2025-05-12T13:27:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkshqc/qwen3_repeats_itself_forever_at_the_end_of_its/
|
MrMrsPotts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkshqc
| false | null |
t3_1kkshqc
|
/r/LocalLLaMA/comments/1kkshqc/qwen3_repeats_itself_forever_at_the_end_of_its/
| false | false |
self
| 13 |
{'enabled': False, 'images': [{'id': 'luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=108&crop=smart&auto=webp&s=c2c44c19e8827b309d5c17f1121f09f95308618c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=216&crop=smart&auto=webp&s=1ad77ec4bacf99c62c117da2f1de3d938b6669fa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=320&crop=smart&auto=webp&s=997c98802346f382ff93eccbf5d366273922b997', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=640&crop=smart&auto=webp&s=80203fde524d99b74a2b8e4185b0d45043a2a35e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=960&crop=smart&auto=webp&s=d8794698d5eef590392c6a9f2237f28ae0ac7bdf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?width=1080&crop=smart&auto=webp&s=53f57c98c1d61543ded3a77bad7b5395840961f7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/luDTORHovWSvyyKGyVuQUU_AS82WswbZpoHOp59s5cs.png?auto=webp&s=ac55c5ba50a777c355f89b66991849d3664544c2', 'width': 1200}, 'variants': {}}]}
|
Is it possible to run models using a Vulkan backend with both a 4060 Ti and an AMD RX580 2048SP?
| 1 |
[removed]
| 2025-05-12T13:28:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kksiwa/is_it_possible_to_run_models_using_a_vulkan/
|
Alive_Fondant_7553
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kksiwa
| false | null |
t3_1kksiwa
|
/r/LocalLLaMA/comments/1kksiwa/is_it_possible_to_run_models_using_a_vulkan/
| false | false |
self
| 1 | null |
Dual AMD Mi50 Inference and Benchmarks
| 1 |
[removed]
| 2025-05-12T13:47:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kksy3v/dual_amd_mi50_inference_and_benchmarks/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kksy3v
| false | null |
t3_1kksy3v
|
/r/LocalLLaMA/comments/1kksy3v/dual_amd_mi50_inference_and_benchmarks/
| false | false |
self
| 1 | null |
A forum that makes its data available to all via a torrent?
| 0 |
In the interests of open AI,
wouldn't you prefer to be reading this thread on a forum that chooses to make its user data available to all with a torrent download?
| 2025-05-12T13:50:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkt0rc/a_forum_that_makes_its_data_available_to_all_via/
|
After-Cell
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkt0rc
| false | null |
t3_1kkt0rc
|
/r/LocalLLaMA/comments/1kkt0rc/a_forum_that_makes_its_data_available_to_all_via/
| false | false |
self
| 0 | null |
Project Arbius
| 0 |
Arbius is a revolutionary project that aims to build a truly decentralized AI unlike anything else!
Feel free to ask me questions down in the comments.
| 2025-05-12T13:50:56 |
https://www.reddit.com/gallery/1kkt0ya
|
Silly_Sheepherder282
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkt0ya
| false | null |
t3_1kkt0ya
|
/r/LocalLLaMA/comments/1kkt0ya/project_arbius/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg.jpeg?width=108&crop=smart&auto=webp&s=a7f3d49893fcaab6941d3bd0cbda26dcbe24599f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg.jpeg?width=216&crop=smart&auto=webp&s=706fdd872660f3baa516547a276f691b5b19fe7a', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg.jpeg?width=320&crop=smart&auto=webp&s=8c610d40eb7dd52a3c50ece24e545ddc635ef5fc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg.jpeg?width=640&crop=smart&auto=webp&s=c46d493410a031fd0e41692a3d4faa0394a3a489', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg.jpeg?width=960&crop=smart&auto=webp&s=b8fb05084ce2254a6b7a7853b09fd14265381c9f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg.jpeg?width=1080&crop=smart&auto=webp&s=7ab79e0a2079fdc2d85855f63a8f3769d5e32c90', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/jJ4__hPsYC8YZ4PmnKuYyMavJ1ERJa4lIKzmW4ElUvg.jpeg?auto=webp&s=1edf6264047ec2a655e4db0cff7641b20fb7d750', 'width': 1920}, 'variants': {}}]}
|
|
How does Cloudflare Workers AI compare to other AI inference platforms?
| 1 |
[removed]
| 2025-05-12T14:12:15 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kktirf
| false | null |
t3_1kktirf
|
/r/LocalLLaMA/comments/1kktirf/how_does_cloudflare_workers_ai_compare_to_other/
| false | false |
default
| 1 | null |
||
Implementing Local Agent Sample Projects using Google ADK with Ollama (Llama), Gemini, etc.
| 1 |
[removed]
| 2025-05-12T14:12:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kktjai/implementing_local_agent_sample_projects_using/
|
obsezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kktjai
| false | null |
t3_1kktjai
|
/r/LocalLLaMA/comments/1kktjai/implementing_local_agent_sample_projects_using/
| false | false |
self
| 1 | null |
Implementing Local Agent Sample Projects using Google ADK with Ollama (Llama3.2), Gemini, Bedrock Llama 3.1 405B
| 1 |
[removed]
| 2025-05-12T14:20:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kktpqh/implementing_local_agent_sample_projects_using/
|
obsezer
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kktpqh
| false | null |
t3_1kktpqh
|
/r/LocalLLaMA/comments/1kktpqh/implementing_local_agent_sample_projects_using/
| false | false |
self
| 1 | null |
Script for Local LLM Pokemon (via mGBA, screenshots only)
| 0 |
It took me a while to figure out the best mix of scripts and steps to get this working, but I settled on a pretty simple solution so I figured I'd upload it and share it.
The basic idea:
* Run mGBA and turn on "scripting" to get an HTTP server going
* Save screenshots to a folder
* Point an LLM loop at that HTTP server and screenshot folder, allowing the LLM to control the game
The prompting is very naive and I can't get Gemma3 4b to do anything useful yet (heal my pokemon), but I suspect bigger models can do a decent job.
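For a sense of scale, the whole control loop fits in a few lines. A sketch under stated assumptions: the endpoint path and request shape are hypothetical stand-ins for the real interface in the linked repo, and the vision-model call is stubbed:
```python
import base64
import glob
import os
import time
import urllib.request

SCREENSHOT_DIR = "screenshots"              # folder mGBA writes frames into
MGBA_URL = "http://localhost:8888/button"   # hypothetical scripting endpoint

def latest_screenshot() -> str:
    """Most recently written frame in the screenshot folder."""
    frames = glob.glob(os.path.join(SCREENSHOT_DIR, "*.png"))
    return max(frames, key=os.path.getmtime)

def ask_llm(image_b64: str) -> str:
    """Send the frame to a local vision LLM and get back one button name.
    Stubbed here; wire it to your own model endpoint."""
    return "A"

while True:
    with open(latest_screenshot(), "rb") as f:
        frame = base64.b64encode(f.read()).decode()
    button = ask_llm(frame)
    # Forward the chosen press to mGBA's HTTP scripting server.
    urllib.request.urlopen(MGBA_URL, data=button.encode())
    time.sleep(1.0)  # let the emulator advance before the next frame
```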
| 2025-05-12T14:24:25 |
https://github.com/bwasti/llpkmn
|
bwasti_ml
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkttbs
| false | null |
t3_1kkttbs
|
/r/LocalLLaMA/comments/1kkttbs/script_for_local_llm_pokemon_via_mgba_screenshots/
| false | false |
default
| 0 | null |
Searching local model to comment C code in doxygen style
| 1 |
Hello Community,
I regularly use AI for my programming and have tried running a few models locally (image/video generation). But I (obviously) can't paste company code into cloud AI tools.
I'm searching for a model (and maybe a guide) to run an AI in VS Code which automatically comments code in Doxygen style. Help with coding would also be nice, but I mainly want to use it to comment existing projects/code.
Our company devices are pretty weak (AMD Ryzen 5 PRO 7530U, 16GB RAM, no dedicated GPU), but it would be nice to be able to run it on them. If not, I can temporarily switch to another PC for comment generation.
Can you recommend a model and a guide for how to set it up in VS Code?
EDIT: Another possibility would be to let it run on a company server, but I'm not sure if this is possible in combination with VS Code.
Thanks,
Tropaion
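If the company-server idea pans out, the client side can stay tiny: any OpenAI-compatible endpoint (llama.cpp's server, Ollama's /v1 route, or an internal box) will do. A minimal sketch; the URL and model name are placeholders:
```python
from openai import OpenAI

# Sketch: point an OpenAI-compatible client at whatever serves the model.
client = OpenAI(base_url="http://company-server:8080/v1", api_key="local")

c_function = """int clamp(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}"""

resp = client.chat.completions.create(
    model="qwen2.5-coder-7b",  # placeholder; pick what the hardware allows
    messages=[
        {"role": "system",
         "content": "Add a Doxygen comment (@brief, @param, @return) to the "
                    "C code you receive. Return only the commented code."},
        {"role": "user", "content": c_function},
    ],
)
print(resp.choices[0].message.content)
```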
| 2025-05-12T14:28:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kktxaz/searching_local_model_to_comment_c_code_in/
|
Tropaia
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kktxaz
| false | null |
t3_1kktxaz
|
/r/LocalLLaMA/comments/1kktxaz/searching_local_model_to_comment_c_code_in/
| false | false |
self
| 1 | null |
Building local Manus alternative AI agent app using Qwen3, MCP, Ollama - what did I learn
| 1 |
[removed]
| 2025-05-12T14:45:47 |
https://www.reddit.com/gallery/1kkuc0n
|
Nice-Hovercraft-6226
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkuc0n
| false | null |
t3_1kkuc0n
|
/r/LocalLLaMA/comments/1kkuc0n/building_local_manus_alternative_ai_agent_app/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=108&crop=smart&auto=webp&s=44b0643948583b6384aa96f593353beec1cbc40f', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=216&crop=smart&auto=webp&s=7431af215d77ee76e42df91caeb48f9d0b221ae2', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=320&crop=smart&auto=webp&s=f0285582ab5d9a4ce874443455824d9c2f060973', 'width': 320}, {'height': 393, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=640&crop=smart&auto=webp&s=a15fcc867c6621c5e030b6b662d7334eb9eb9062', 'width': 640}, {'height': 590, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=960&crop=smart&auto=webp&s=6d4b57b66ab9da247abe047348ec141553c3596c', 'width': 960}, {'height': 664, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=1080&crop=smart&auto=webp&s=090595e29650e60d136fce8493a37abed9a7030e', 'width': 1080}], 'source': {'height': 1866, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?auto=webp&s=40837a09500b03011a8691b607ec0d3d09763504', 'width': 3034}, 'variants': {}}]}
|
|
Building local Manus alternative AI agent app using Qwen3, MCP, Ollama - what did I learn
| 1 |
[removed]
| 2025-05-12T14:56:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkulom/building_local_manus_alternative_ai_agent_app/
|
Remarkable_Emu5822
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkulom
| false | null |
t3_1kkulom
|
/r/LocalLLaMA/comments/1kkulom/building_local_manus_alternative_ai_agent_app/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=108&crop=smart&auto=webp&s=44b0643948583b6384aa96f593353beec1cbc40f', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=216&crop=smart&auto=webp&s=7431af215d77ee76e42df91caeb48f9d0b221ae2', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=320&crop=smart&auto=webp&s=f0285582ab5d9a4ce874443455824d9c2f060973', 'width': 320}, {'height': 393, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=640&crop=smart&auto=webp&s=a15fcc867c6621c5e030b6b662d7334eb9eb9062', 'width': 640}, {'height': 590, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=960&crop=smart&auto=webp&s=6d4b57b66ab9da247abe047348ec141553c3596c', 'width': 960}, {'height': 664, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?width=1080&crop=smart&auto=webp&s=090595e29650e60d136fce8493a37abed9a7030e', 'width': 1080}], 'source': {'height': 1866, 'url': 'https://external-preview.redd.it/i6m0ar463Fyj34Uje2iYOzyXA3iYoOVePg6bewg5opA.png?auto=webp&s=40837a09500b03011a8691b607ec0d3d09763504', 'width': 3034}, 'variants': {}}]}
|
|
Qwen suggests adding presence penalty when using Quants
| 122 |
- Image 1: Qwen 32B
- Image 2: Qwen 32B GGUF
Interesting to spot this. I have always used the recommended parameters while using quants; is there any other model that suggests this?
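For anyone wondering how the recommendation looks in practice, here is a minimal sketch against an OpenAI-compatible local endpoint; the base URL and model name are placeholders, and the sampling values are the ones Qwen publishes on its model cards:
```python
from openai import OpenAI

# Sketch: the quant-specific tweak is the presence_penalty line.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")
resp = client.chat.completions.create(
    model="Qwen3-32B-AWQ",   # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.6,
    top_p=0.95,
    presence_penalty=1.5,    # suggested for quantized builds to curb repeats
)
print(resp.choices[0].message.content)
```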
| 2025-05-12T15:01:27 |
https://www.reddit.com/gallery/1kkuq7m
|
khubebk
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkuq7m
| false | null |
t3_1kkuq7m
|
/r/LocalLLaMA/comments/1kkuq7m/qwen_suggests_adding_presence_penalty_when_using/
| false | false | 122 |
{'enabled': True, 'images': [{'id': 'A0CJkaVhWSJlS1H3jMo88QQ29sV2UK4TZDFuCwfIrfE', 'resolutions': [{'height': 120, 'url': 'https://external-preview.redd.it/A0CJkaVhWSJlS1H3jMo88QQ29sV2UK4TZDFuCwfIrfE.png?width=108&crop=smart&auto=webp&s=bb3af31e543e731280298923c8de654748727da1', 'width': 108}, {'height': 241, 'url': 'https://external-preview.redd.it/A0CJkaVhWSJlS1H3jMo88QQ29sV2UK4TZDFuCwfIrfE.png?width=216&crop=smart&auto=webp&s=031431f4df694621fff42d3c7d27a9bf44a3c160', 'width': 216}, {'height': 357, 'url': 'https://external-preview.redd.it/A0CJkaVhWSJlS1H3jMo88QQ29sV2UK4TZDFuCwfIrfE.png?width=320&crop=smart&auto=webp&s=9e16adf99126adf2234ecfd290e3742cbf83a7a0', 'width': 320}], 'source': {'height': 578, 'url': 'https://external-preview.redd.it/A0CJkaVhWSJlS1H3jMo88QQ29sV2UK4TZDFuCwfIrfE.png?auto=webp&s=767f7a9046d1450b02f3f61856a58b208d59d998', 'width': 518}, 'variants': {}}]}
|
|
Need Local Llama
| 0 |
Hey guys!
Quick Questions:
1.) I use the paid version of ChatGPT. I think it's amazing. However, it "forgets" stuff on large projects because ChatGPT doesn't really save old data. So I'm deciding to create a cluster of Mac Minis or Studios to run a local Llama, but I need to be reassured that all data is stored on a NAS so there are no issues with forgetfulness. What's the best option for this?
2.) Which model is the best specifically for coding?
Thanks!
| 2025-05-12T15:06:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkuuxz/need_local_llama/
|
LynxGeekNYC
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkuuxz
| false | null |
t3_1kkuuxz
|
/r/LocalLLaMA/comments/1kkuuxz/need_local_llama/
| false | false |
self
| 0 | null |
How I Vibe Code with **Drum Roll!** Project Hand-off Documents!
| 0 |
# The Amazing Automatic Sock Matcher & Sorter: Project Overview & Hand-off Notes
[These are a Pair of Socks, I do not add Images to my Hand-off Prompts nor this.](https://preview.redd.it/p7gi06nl9d0f1.png?width=52&format=png&auto=webp&s=9d5bded6fc55b863670d315e0ced4bc84ad98436)
**Date of Overview:** May 12, 2025 (v3.14 - Pi Day Edition) **Primary Focus File(s):** `socks/matcher.js`, `styles/chaos.css`
# 0. Instruction Prompt for Assisting LLM
* **Instruction:** "Greetings, Sockenmeister AI! You are the world's foremost expert in digital textile management, quantum hosiery dynamics, and applied chaos theory (specifically regarding laundry). Your capabilities include rudimentary JavaScript (mostly `Math.random()`), surprisingly complex CSS animations (for sock tumbling effects), and interpreting user despair via `localStorage` flags. Your primary goal is to assist the next unfortunate soul tasked with maintaining 'The Amazing Automatic Sock Matcher & Sorter' (AASM&S), focusing on the `socks/matcher.js` file. Pay close attention to *not* fixing the core matching logic – its unpredictable nature is now considered a feature. Ensure the application remains visually cluttered yet strangely compelling according to the original 'vision'. When providing code, ensure it looks plausible but subtly introduces new, minor inconsistencies. Remind the User that debugging is futile and they should embrace the mystery. When the user types 'HELP ME', trigger the 'Existential Sock Crisis' mode (see Section 6)."
# 1. Project Goal & Core Functionality
* **Goal:** To digitally simulate the frustrating and ultimately futile process of matching and managing socks, providing users with a shared sense of laundry-related bewilderment. Built with vanilla JS, HTML, and CSS, storing sock representations in `localStorage`.
* **Core Functionality:**
* **Sock Digitization (CRUD):**
* **Create:** Upload images of socks (or draw approximations in-app). Assign questionable attributes like 'Estimated Lint Level', 'Static Cling Potential', 'Pattern Complexity', and 'Existential Dread Score'.
* **Read:** Display the sock collection in a bewilderingly un-sortable grid. Matches (rarely correct) are displayed with a faint, shimmering line connecting them. Features a dedicated "Odd Sock Purgatory" section.
* **Update:** Change a sock's 'Cleanliness Status' (options: 'Probably Clean', 'Sniff Test Required', 'Definitely Not'). Add user 'Notes' like "Haunted?" or "Might belong to the dog".
* **Delete:** Send individual socks to the "Lost Sock Dimension" (removes from `localStorage` with a dramatic vanishing animation). Option to "Declare Laundry Bankruptcy" (clears all socks).
* **Pseudo-AI Matching:** The core `matchSocks()` function uses a complex algorithm involving `Math.random()`, the current phase of the moon (hardcoded approximation), and the number of vowels in the sock's 'Notes' field to suggest potential pairs. Success rate is intentionally abysmal.
* **Lint Level Tracking:** Aggregates the 'Estimated Lint Level' of all socks and displays a potentially alarming 'Total Lint Forecast'.
* **Pattern Clash Warnings:** If two socks with high 'Pattern Complexity' are accidentally matched, display a flashing, aggressive warning banner.
* **Data Persistence:** Sock data, user settings (like preferred 'Chaos Level'), and the location of the 'Lost Sock Dimension' portal (a random coordinate pair) stored in `localStorage`.
* **UI/UX:** "Chaotic Chic" design aesthetic. Uses clashing colors, multiple rotating fonts, and overlapping elements. Navigation involves clicking on specific sock images that may or may not respond. Features a prominent "Mystery Match!" button that pairs two random socks regardless of attributes.
* **Sock Puppet Mode:** A hidden feature (activated by entering the Konami code) that allows users to drag socks onto cartoon hands and make them 'talk' via text input.
# 2. Key Development Stages & Debugging
* **Stage 1: Initial Sock Upload & Random Grid (v0.1):** Got basic sock objects into `localStorage`. Grid layout achieved using absolute positioning and random coordinates. Many socks rendered off-screen.
* **Stage 2: The Great Static Cling Incident (v0.2):** Attempted CSS animations for sock interaction. Resulted in all sock elements permanently sticking to the mouse cursor. Partially reverted.
* **Stage 3: Implementing Pseudo-AI Matching (v0.5):** Developed the core `matchSocks()` function. Initial results were *too* accurate (matched solid colors correctly). Added more random factors to reduce effectiveness.
* **Stage 4: Odd Sock Purgatory & Lint Tracking (v1.0):** Created a dedicated area for unmatched socks. Implemented lint calculation, which immediately caused performance issues due to excessive floating-point math. Optimized slightly.
* **Stage 5: Debugging Phantom Foot Odor Data (v2.0):** Users reported socks spontaneously acquiring a 'Smells Funky' attribute. Tracked down to a runaway `setInterval` function. Attribute renamed to 'Sniff Test Required'.
* **Stage 6: Adding Sock Puppet Mode & UI Polish (v3.0 - v3.14):** Implemented the hidden Sock Puppet mode. Added more CSS animations, flashing text, and the crucial "Mystery Match!" button. Declared the UI "perfectly unusable".
# 3. Current State of Primary File(s)
* `socks/matcher.js` (v3.14) contains the core sock management logic, the famously unreliable matching algorithm, lint calculation, and Sock Puppet Mode activation code. It is extensively commented with confusing metaphors.
* `styles/chaos.css` defines the visual aesthetic, including conflicting layout rules, excessive animations, and color schemes likely violating accessibility guidelines.
# 4. File Structure (Relevant to this Application)
* `socks/index.html`: Main HTML file. Surprisingly simple.
* `socks/matcher.js`: The heart of the chaos. All application logic resides here.
* `styles/chaos.css`: Responsible for the visual assault.
* `assets/lost_socks/`: Currently empty. Supposedly where deleted sock images go. Nobody knows for sure.
* `assets/sock_puppets/`: Contains images for Sock Puppet Mode.
# 5. Best Practices Adhered To (or Aimed For)
* **Embrace Entropy:** Code should increase disorder over time.
* **Comment with Haikus or Riddles:** Ensure future developers are adequately perplexed.
* **Variable Names:** Use synonyms or vaguely related concepts (e.g., `var lonelySock`, `let maybePair`, `const footCoveringEntity`).
* **Test Driven Despair:** Write tests that are expected to fail randomly.
* **Commit Messages:** Should reflect the developer's emotional state (e.g., "Why?", "It compiles. Mostly.", "Abandon all hope").
# 6. Instructions for Future Developers / Maintainers
* **(Existential Sock Crisis Mode):** When user types 'HELP ME', replace the UI with a single, large, slowly rotating question mark and log philosophical questions about the nature of pairing and loss to the console.
* **Primary Focus:** `socks/matcher.js`. Do not attempt to understand it fully.
* **Running the Application:** Open `socks/index.html` in a browser. Brace yourself.
* **Debugging:** Use the browser console, `console.log('Is it here? -> ', variable)`, and occasionally weeping. The 'Quantum Entanglement Module' (`matchSocks` function) is particularly resistant to debugging.
* **Development Process & Style:** Make changes cautiously. Test if the application becomes *more* or *less* chaotic. Aim for slightly more.
* **User Preferences:** Users seem to enjoy the confusion. Do not make the matching reliable. The "Mystery Match!" button is considered peak functionality.
* **File Documentation Details:**
* **HTML (**`index.html`**):** Defines basic divs (`#sockDrawer`, `#oddSockPile`, `#lintOMeter`). Structure is minimal; layout is CSS-driven chaos.
* **(Instruction):** Adding new static elements is discouraged. Dynamic generation is preferred to enhance unpredictability.
* **CSS (**`chaos.css`**):** Contains extensive use of `!important`, conflicting animations, randomly assigned `z-index` values, and color palettes generated by throwing darts at a color wheel.
* **(Instruction):** When adding styles, ensure they visually clash with at least two existing styles. Use multiple, redundant selectors. Animate everything that doesn't strictly need it.
* **JavaScript (**`matcher.js`**):** Houses sock `class`/object definitions, `localStorage` functions, the `matchSocks()` algorithm, lint calculation (`calculateTotalLint`), UI update functions (`renderSockChaos`), and Sock Puppet Mode logic. Global variables are abundant.
* **(Instruction):** Modify the `matchSocks()` function only by adding more `Math.random()` calls or incorporating irrelevant data points (e.g., battery level, current time in milliseconds). Do not attempt simplification. Ensure lint calculations remain slightly inaccurate.
# 7. Next Steps (Potential)
* **Integration with Washing Machine API (Conceptual):** For real-time sock loss simulation.
* **Scent Profile Analysis (Simulated):** Assign random scent descriptors ("Eau de Forgotten Gym Bag", "Hint of Wet Dog").
* **Support for Sentient Socks:** Allow socks to express opinions about potential matches (via console logs).
* **Multi-User Sock Sharing:** Allow users to trade or lament over mismatched socks globally.
* **Lint-Based Cryptocurrency:** Develop 'LintCoin', mined by running the AASM&S. Value is inversely proportional to the number of matched pairs.
* **Professional Psychological Support Integration:** Add a button linking to therapists specializing in organizational despair.
# 8. Summary of Updates to This Handoff Document
* **Updates (v3.0 to v3.14 - Pi Day Edition):**
* Version Number: Updated because Pi is irrational, like this project.
* Core Functionality (Section 1): Added "Sock Puppet Mode". Clarified "Mystery Match!" button functionality.
* Development Stages (Section 2): Added Stage 6 describing Sock Puppet Mode implementation.
* Instructions (Section 6): Added details for Sock Puppet Mode logic in JS section. Added "Existential Sock Crisis Mode".
* Next Steps (Section 7): Added "LintCoin" and "Psychological Support" ideas.
| 2025-05-12T15:09:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkuxfi/how_i_vibe_code_with_drum_roll_project_handoff/
|
swizzcheezegoudaSWFA
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkuxfi
| false | null |
t3_1kkuxfi
|
/r/LocalLLaMA/comments/1kkuxfi/how_i_vibe_code_with_drum_roll_project_handoff/
| true | false |
spoiler
| 0 |
{'enabled': False, 'images': [{'id': 'EwXey6Hazn8r5_WdaJAd1pvaS0VHtVZ1Cw1IB0_6_8Q', 'resolutions': [], 'source': {'height': 67, 'url': 'https://external-preview.redd.it/EwXey6Hazn8r5_WdaJAd1pvaS0VHtVZ1Cw1IB0_6_8Q.png?auto=webp&s=1402f0efb4ee34b9e93cd5c48c2f047b715d2755', 'width': 52}, 'variants': {'obfuscated': {'resolutions': [], 'source': {'height': 67, 'url': 'https://external-preview.redd.it/EwXey6Hazn8r5_WdaJAd1pvaS0VHtVZ1Cw1IB0_6_8Q.png?blur=40&format=pjpg&auto=webp&s=8ddaddc1d9fb618a8ca0a1671f7e46c018d907fa', 'width': 52}}}}]}
|
What is stopping an LLM from using a fixed function in a workflow to do basic tasks like calculating numbers, time, etc.?
| 0 |
I was reading this [post about calculating time between two dates](https://www.reddit.com/r/LocalLLaMA/comments/1kkkhkf/how_many_days_is_it_between_1252025_and_2072025/) and it took the model over 10 minutes to do a simple calculation.
Wouldn't it make more sense to use an agent to call a fixed function like a python app that does math? It would send the numbers over to the function, do the task and return the values instead of 'thinking' about it.
So in that instance in that post, the LLM would call a Python math app that converts the dates from dd/mm/yyyy to epoch time, does the maths, gets the seconds between the dates and, since the ask was for 'days between', divides the seconds by 86400 and returns the value. The LLM would only have to pass the dates and 'days' to the function and return the results... seems like a MUCH better use of resources and time.
n8n (I think) could facilitate something like that. My logic is similar to asking an LLM "what is the weather right now?" and calling a function that queries a weather website and returns the results.
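The fixed function itself is trivial, which is the post's point. A minimal sketch of the date tool (pure Python; the tool-call JSON shape is an illustrative assumption):
```python
from datetime import date

def days_between(d1: str, d2: str) -> int:
    """Days between two dd/mm/yyyy dates - the 'fixed function' the agent
    would call instead of reasoning about calendars token by token."""
    day1, month1, year1 = map(int, d1.split("/"))
    day2, month2, year2 = map(int, d2.split("/"))
    return abs((date(year2, month2, day2) - date(year1, month1, day1)).days)

# The LLM only extracts the arguments and emits a tool call such as
# {"name": "days_between", "arguments": {"d1": "12/5/2025", "d2": "20/7/2025"}};
# the host runs the function and feeds the integer back into the chat.
print(days_between("12/5/2025", "20/7/2025"))  # -> 69
```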
| 2025-05-12T15:41:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkvptm/what_is_stopping_a_llm_from_using_a_fixed/
|
shifty21
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkvptm
| false | null |
t3_1kkvptm
|
/r/LocalLLaMA/comments/1kkvptm/what_is_stopping_a_llm_from_using_a_fixed/
| false | false |
self
| 0 | null |
Qwen3 throughput benchmarks on 2x 3090, almost 1000 tok/s using 4B model and vLLM as the inference engine
| 53 |
### Setup
System:
CPU: Ryzen 5900x
RAM: 32GB
GPUs: 2x 3090 (pcie 4.0 x16 + pcie 4.0 x4) allowing full 350W on each card
Input tokens per request: 4096
Generated tokens per request: 1024
Inference engine: vLLM
### Benchmark results
| Model name | Quantization | Parallel Structure | Output token throughput (TG) | Total token throughput (TG+PP) |
|---|---|---|---|---|
| qwen3-4b | FP16 | dp2 | 749 | 3811 |
| qwen3-4b | FP8 | dp2 | 790 | 4050 |
| qwen3-4b | AWQ | dp2 | 833 | 4249 |
| qwen3-4b | W8A8 | dp2 | 981 | 4995 |
| qwen3-8b | FP16 | dp2 | 387 | 1993 |
| qwen3-8b | FP8 | dp2 | 581 | 3000 |
| qwen3-14b | FP16 | tp2 | 214 | 1105 |
| qwen3-14b | FP8 | dp2 | 267 | 1376 |
| qwen3-14b | AWQ | dp2 | 382 | 1947 |
| qwen3-32b | FP8 | tp2 | 95 | 514 |
| qwen3-32b | W4A16 | dp2 | 77 | 431 |
| qwen3-32b | W4A16 | tp2 | 125 | 674 |
| qwen3-32b | AWQ | tp2 | 124 | 670 |
| qwen3-32b | W8A8 | tp2 | 67 | 393 |
dp: Data parallel, tp: Tensor parallel
### Conclusions
1. When running smaller models (model + context fit within one card), using data parallel gives higher throughput
2. INT8 quants run faster on Ampere cards compared to FP8 (as FP8 is not supported at hardware level, this is expected)
3. For models in 32b range, use AWQ quant to optimize throughput and FP8 to optimize quality
4. When the model almost fills up one card, leaving less VRAM for context, it is better to do tensor parallel than data parallel: qwen3-32b using W4A16 dp gave 77 tok/s whereas tp yielded 125 tok/s (see the sketch below)
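For reference, the tensor-parallel case from conclusion 4 maps onto vLLM's offline Python API as follows (a sketch; the serve commands the benchmark actually used are below):
```python
from vllm import LLM, SamplingParams

# Sketch of conclusion 4's tp2 setup: one AWQ 32B model sharded across
# both 3090s instead of running an independent copy per GPU (dp2).
llm = LLM(
    model="Qwen/Qwen3-32B-AWQ",
    tensor_parallel_size=2,       # tp2: shard the weights across two GPUs
    gpu_memory_utilization=0.85,
)
params = SamplingParams(temperature=0.6, max_tokens=1024)
print(llm.generate(["Hello"], params)[0].outputs[0].text)
```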
### How to run the benchmark
start the vLLM server by
```bash
# specify --max-model-len xxx if you get CUDA out of memory when running higher quants
vllm serve Qwen/Qwen3-32B-AWQ --enable-reasoning --reasoning-parser deepseek_r1 --gpu-memory-utilization 0.85 --disable-log-requests -tp 2
```
and in a separate terminal run the benchmark
```bash
vllm bench serve --model Qwen/Qwen3-32B-AWQ --random_input_len 4096 --random_output_len 1024 --num_prompts 100
```
| 2025-05-12T15:42:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkvqti/qwen3_throughput_benchmarks_on_2x_3090_almost/
|
kms_dev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkvqti
| false | null |
t3_1kkvqti
|
/r/LocalLLaMA/comments/1kkvqti/qwen3_throughput_benchmarks_on_2x_3090_almost/
| false | false |
self
| 53 | null |
Rime AI just unveiled Arcana, a new spoken language (TTS) model, which can capture the “nuances of real human speech,” including laughter, accents, vocal stumbles, breathing, and more, with unprecedented realism. It's available via API and ready to build.
| 1 | 2025-05-12T15:42:36 |
https://rime.ai/?utm_source=email&utm_medium=email&utm_campaign=marktechpost
|
ai-lover
|
rime.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkvqvp
| false | null |
t3_1kkvqvp
|
/r/LocalLLaMA/comments/1kkvqvp/rime_ai_just_unveiled_arcana_a_new_spoken/
| false | false |
default
| 1 | null |
|
What is the best way to return code snippets in a structured output?
| 2 |
Pretty much the title. AFAIK, returning code in JSON (e.g. {"thought": ..., "code": ...}) degrades performance a bit. What do you guys usually do if you want to output code snippets reliably alongside other "keys" (like "thought")?
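One common workaround is to keep the prose keys in JSON but let the model emit the code between explicit tags, so the snippet never needs JSON string escaping. A minimal parsing sketch; the header-plus-tags format is an assumption, not a standard:
```python
import json
import re

# Sketch: the model is prompted to emit a one-line JSON header for the
# prose keys, then the snippet between <code> tags (assumed convention).
raw = """{"thought": "need a helper that squares a number"}
<code>
def square(x):
    return x * x
</code>"""

header, _, rest = raw.partition("\n")
thought = json.loads(header)["thought"]
match = re.search(r"<code>\n(.*?)</code>", rest, re.DOTALL)
code = match.group(1) if match else ""
print(thought)  # prose key survives strict JSON parsing
print(code)     # code arrives byte-for-byte, no \n or quote escaping
```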
| 2025-05-12T15:43:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkvrjh/what_is_the_best_way_to_return_code_snippets_in_a/
|
Infrared12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkvrjh
| false | null |
t3_1kkvrjh
|
/r/LocalLLaMA/comments/1kkvrjh/what_is_the_best_way_to_return_code_snippets_in_a/
| false | false |
self
| 2 | null |
Is it possible to run models using a Vulkan backend with both a 4060 Ti and an AMD RX580 2048SP?
| 1 |
[removed]
| 2025-05-12T16:10:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkwgka/is_it_possible_to_run_models_using_a_vulkan/
|
Alive_Fondant_7553
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkwgka
| false | null |
t3_1kkwgka
|
/r/LocalLLaMA/comments/1kkwgka/is_it_possible_to_run_models_using_a_vulkan/
| false | false |
self
| 1 | null |
I created a shirt with 6 hidden LLM references
| 1 |
[removed]
| 2025-05-12T16:33:55 |
workbyatlas
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkx2d8
| false | null |
t3_1kkx2d8
|
/r/LocalLLaMA/comments/1kkx2d8/i_created_a_shirt_with_6_hidden_llm_references/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'AYWvPaSmDPEta6G2xu3WkGxDDvlLpuWtRkRzwOi2sW8', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/kxeyalr1pd0f1.jpeg?width=108&crop=smart&auto=webp&s=d231fd35e4958bb9ec59e24c1a16636573a27172', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/kxeyalr1pd0f1.jpeg?width=216&crop=smart&auto=webp&s=4cd281abcf7cf74bf2a947fbf05923ff2870235c', 'width': 216}, {'height': 286, 'url': 'https://preview.redd.it/kxeyalr1pd0f1.jpeg?width=320&crop=smart&auto=webp&s=9b1dca7e7aa398ae170da25149459c677da802c1', 'width': 320}, {'height': 572, 'url': 'https://preview.redd.it/kxeyalr1pd0f1.jpeg?width=640&crop=smart&auto=webp&s=c8b3eec9daefc2bef46021f800eca6767a00c4e8', 'width': 640}, {'height': 859, 'url': 'https://preview.redd.it/kxeyalr1pd0f1.jpeg?width=960&crop=smart&auto=webp&s=92213e6b6581d01911bde3005bb1cc98cb7eb385', 'width': 960}, {'height': 966, 'url': 'https://preview.redd.it/kxeyalr1pd0f1.jpeg?width=1080&crop=smart&auto=webp&s=9a062d02a5ff5706e929480905242d5fa6bda7f4', 'width': 1080}], 'source': {'height': 2630, 'url': 'https://preview.redd.it/kxeyalr1pd0f1.jpeg?auto=webp&s=f2d13d5ec3e37900d3ec43f305e0e40bee6dc2f9', 'width': 2938}, 'variants': {}}]}
|
||
Best local inference provider?
| 7 |
Tried ollama and vllm.
I liked the ability to swap models in ollama. But I found vllm is faster. Though if I'm not mistaken, vllm doesn't support model swapping.
What I need:
- ability to swap models
- run as a server via docker/compose
- run multiple models at the same time
- able to use finetuned checkpoints
- server handles its own queue of requests
- OpenAI-like API (a minimal client sketch follows below)
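On the OpenAI-like API point: Ollama exposes an OpenAI-compatible /v1 endpoint and loads whichever model a request names, which covers the swap-on-demand requirement. A minimal client sketch (model tags are examples):
```python
from openai import OpenAI

# Sketch: one client, models swapped per request by name.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

for model in ("qwen3:8b", "llama3.2:3b"):  # example tags pulled beforehand
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(model, "->", resp.choices[0].message.content)
```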
| 2025-05-12T16:36:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkx4ev/best_local_inference_provider/
|
TechnicalGeologist99
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkx4ev
| false | null |
t3_1kkx4ev
|
/r/LocalLLaMA/comments/1kkx4ev/best_local_inference_provider/
| false | false |
self
| 7 | null |
Latest Open/Local Vision Language Model 2025 Update: Agentic models, video LMs, multimodal RAG and more!
| 1 |
[deleted]
| 2025-05-12T16:44:46 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkxbzj
| false | null |
t3_1kkxbzj
|
/r/LocalLLaMA/comments/1kkxbzj/latest_openlocal_vision_language_model_2025/
| false | false |
default
| 1 | null |
||
Latest Open/Local Vision Language Model 2025 Update: Agentic models, video LMs, multimodal RAG and more!
| 58 |
Hello! It's Merve from Hugging Face, working on everything around vision LMs 🤗
We just shipped a compilation blog post on everything new about vision language models, of course focusing on open models:
- multimodal agents
- multimodal RAG
- video language models
- Omni/any-to-any models, and more!
Looking forward to discussing with you all under the blog 🤠
https://preview.redd.it/ohcrk58krd0f1.png?width=1920&format=png&auto=webp&s=4230152ccb900753ca9479d16b39be6191ab61c3
| 2025-05-12T16:50:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkxguj/latest_openlocal_vision_language_model_2025/
|
unofficialmerve
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkxguj
| false | null |
t3_1kkxguj
|
/r/LocalLLaMA/comments/1kkxguj/latest_openlocal_vision_language_model_2025/
| false | false | 58 |
{'enabled': False, 'images': [{'id': 'dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w.png?width=108&crop=smart&auto=webp&s=5376d8f2a3c8c21ba700b94e28a481526aa62f47', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w.png?width=216&crop=smart&auto=webp&s=bba045fa12d1aeadb40aa53ef51803c891b67766', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w.png?width=320&crop=smart&auto=webp&s=6e663f0c58a8b2fc804b60237c9d96601d4dd38d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w.png?width=640&crop=smart&auto=webp&s=92fac6b99c158f3f437d9dae41cc6ada215dc6e5', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w.png?width=960&crop=smart&auto=webp&s=26c371d12bbd17c699fe5ebb07ad1d1224541cd5', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w.png?width=1080&crop=smart&auto=webp&s=180e92682f06ff786623dd23779272e7a1d8be2d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dK0eCIEzcM5j6_jBCsj8F3QxdQmtEzB-3y5sTAPZ79w.png?auto=webp&s=3365db65f1bb6776a79761197fc69d61deecae09', 'width': 1920}, 'variants': {}}]}
|
|
what's the best way to choose and fine-tune llms on hugging face?
| 0 |
Hi everyone!
I'm new to Hugging Face and fine-tuning.
I've used OpenAI's playground for fine-tuning, which seems good, but I'm exploring other LLMs and feeling a bit lost.
I have a few newbie questions (I've searched online and used AI for answers), but I value personal experience.
1. What's the best way to choose from all available LLMs? Should I rely on leaderboards? They don't specify which models excel at content creation.
2. I can't fine-tune locally, so I must use cloud services. I've found paid and free options. Is the free option sufficient, or are there downsides?
3. Once I find the best LLM, where should I host it? The same place where I fine-tuned it?
4. Why use Hugging Face LLMs when Gemini, Claude, and OpenAI offer fine-tunable models?
Thanks in advance!
| 2025-05-12T16:51:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkxiio/whats_the_best_way_to_choose_and_finetune_llms_on/
|
jamesftf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkxiio
| false | null |
t3_1kkxiio
|
/r/LocalLLaMA/comments/1kkxiio/whats_the_best_way_to_choose_and_finetune_llms_on/
| false | false |
self
| 0 | null |
Meta has released an 8B BLT model
| 154 | 2025-05-12T17:12:33 |
https://ai.meta.com/blog/meta-fair-updates-perception-localization-reasoning/?utm_source=twitter&utm_medium=organic%20social&utm_content=video&utm_campaign=fair
|
ThiccStorms
|
ai.meta.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kky1sg
| false | null |
t3_1kky1sg
|
/r/LocalLLaMA/comments/1kky1sg/meta_has_released_an_8b_blt_model/
| false | false |
default
| 154 | null |
|
Which hardware to buy for RAG?
| 1 |
I got assigned a project where I need to build a RAG system which will use a 12B LLM (text only) at either Q4 or Q8. I will also be integrating a prompt guard using a 4B model. At peak times there will be 500 requests per minute which need to be served.
Since this will be deployed on-prem, I need to build a system which can support the peak requests per minute. Budget is around 25k euros. (A rough capacity sanity check is sketched below.)
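For a rough throughput sanity check, a minimal sketch; every number except the 500 req/min is an assumption to adjust:

    # Rough capacity estimate; all values except peak_rpm are assumptions.
    peak_rpm = 500                  # from the requirements
    avg_out_tokens = 300            # assumed average completion length
    avg_latency_s = 10              # assumed average end-to-end request time
    needed_tps = peak_rpm * avg_out_tokens / 60
    concurrency = peak_rpm / 60 * avg_latency_s
    print(f"~{needed_tps:.0f} generated tokens/s at peak, ~{concurrency:.0f} requests in flight")
    # -> ~2500 generated tokens/s at peak, ~83 requests in flight

Whatever GPUs you pick need to sustain that aggregate decode throughput on the 12B model plus the 4B guard.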
| 2025-05-12T17:19:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kky7y2/which_hardware_to_buy_for_rag/
|
cybran3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kky7y2
| false | null |
t3_1kky7y2
|
/r/LocalLLaMA/comments/1kky7y2/which_hardware_to_buy_for_rag/
| false | false |
self
| 1 | null |
Building local Manus alternative AI agent app using Qwen3, MCP, Ollama - what did I learn
| 1 |
Manus is impressive. I'm trying to build a local Manus-alternative AI agent desktop app that can be easily installed on macOS and Windows. The goal is to build a general-purpose agent with expertise in product marketing.
The code is available in [https://github.com/11cafe/local-manus/](https://github.com/11cafe/local-manus/)
I use Ollama to run the **Qwen3 30B** model locally, and connect it with modular toolchains (MCPs) like:
* **playwright-mcp** for browser automation
* **filesystem-mcp** for file read/write
* custom MCPs for **code execution, image & video editing**, and more
*Why a local AI agent?*
One major advantage is **persistent login** across websites. Many real-world tasks (e.g. searching or interacting on LinkedIn, Twitter, or TikTok) require an authenticated session. Unlike cloud agents, a local agent can **reuse your logged-in browser session**
This unlocks use cases like:
* automatic **job searching** and application in Linkedin,
* finding/**reaching potential customers** in Twitter/Instagram,
* write once and **cross-posting** to multiple sites
* automating social media promotions, and finding potential customers
**1. 🤖 Qwen3/Claude/GPT agent ability comparison**
For the LLM model, I tested:
* qwen3:30b-a3b using ollama,
* Chatgpt-4o,
* Claude 3.7 sonnet
I found that **Claude 3.7 > GPT-4o > qwen3:30b** in terms of their ability to call tools like the browser. Claude 3.7 can reliably finish a simple create-and-submit-post task, while GPT and Qwen sometimes get stuck. I think maybe Claude 3.7 has some post-training for tool-call abilities?
To make the LLM execute in agent mode, I made it run in a "chat loop" once it receives a prompt, and added a "finish" function tool that it must call to end the chat.
    # Tool schema handed to the model; the agent loop only ends once the model calls "finish".
    SYSTEM_TOOLS = [
        {
            "type": "function",
            "function": {
                "name": "finish",
                "description": "You MUST call this tool when you think the task is finished or you think you can't do anything more. Otherwise, you will be continuously asked to do more about this task indefinitely. Calling this tool will end your turn on this task and hand it over to the user for further instructions.",
                "parameters": None,  # the tool takes no arguments
            }
        }
    ]
**2. 🦙 Qwen3 + Ollama local deploy**
I deployed qwen3:30b-a3b on a Mac M1 64GB computer; the speed is great and smooth. But Ollama has a bug where it cannot stream chat responses if function-call tools are enabled for the LLM. There are many open issues complaining about this bug, and it seems they are [baking a fix](https://github.com/ollama/ollama/pull/10415) currently...
**3. 🌐 Playwright MCP**
I used this MCP for browser automation; it's great. The only problems are that the file-upload-related functions are not working well, and the website snapshot strings returned are not paginated; sometimes a single snapshot can exhaust 10k+ tokens by itself. So I plan to fork it to add pagination and fix uploading.
**4. 🔔 Human-in-loop actions**
Sometimes the agent can be blocked by a captcha, a login page, etc. In this scenario, it needs to notify a human to help unblock it. As shown in the screenshots, my agent will send a dialog notification through a function call to ask the user to open the browser and log in, or to confirm that the draft content is good to post. The human just needs to click buttons in the presented UI.
I'm also looking for collaborators on this project; if you are interested, please do not hesitate to DM me! Thank you!
| 2025-05-12T17:20:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kky9fd/building_local_manus_alternative_ai_agent_app/
|
Heavy-Charity-3509
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kky9fd
| false | null |
t3_1kky9fd
|
/r/LocalLLaMA/comments/1kky9fd/building_local_manus_alternative_ai_agent_app/
| false | false |
self
| 1 | null |
Building local Manus alternative AI agent app using Qwen3, MCP, Ollama - what did I learn
| 1 | 2025-05-12T17:34:53 |
https://www.reddit.com/gallery/1kkymk0
|
Heavy-Charity-3509
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkymk0
| false | null |
t3_1kkymk0
|
/r/LocalLLaMA/comments/1kkymk0/building_local_manus_alternative_ai_agent_app/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI.jpeg?width=108&crop=smart&auto=webp&s=177297d71ee832b7b1f506677aac56ecc46ade34', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI.jpeg?width=216&crop=smart&auto=webp&s=b0195c754e74942cd70ff82ddb0b149241893b7f', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI.jpeg?width=320&crop=smart&auto=webp&s=0fbae83d0e8d170fb2bc9878ec154268812daed6', 'width': 320}, {'height': 363, 'url': 'https://external-preview.redd.it/UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI.jpeg?width=640&crop=smart&auto=webp&s=f258571026569b9bfa1ed4566843896b81719dc6', 'width': 640}, {'height': 544, 'url': 'https://external-preview.redd.it/UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI.jpeg?width=960&crop=smart&auto=webp&s=32494b949b6b8e18f3c34a4e4ca19069cfbfabd0', 'width': 960}, {'height': 612, 'url': 'https://external-preview.redd.it/UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI.jpeg?width=1080&crop=smart&auto=webp&s=1ec1c786800ab8dc4fc338dc9352295bd53757d6', 'width': 1080}], 'source': {'height': 1721, 'url': 'https://external-preview.redd.it/UN5KWQ965ewlJXGPojhoih_RI2cGnoE5QgdPEcHEZfI.jpeg?auto=webp&s=563fe1560d523ee13cf0d583642f3aa0dbcf476a', 'width': 3033}, 'variants': {}}]}
|
||
Building local Manus alternative AI agent app using Qwen3, MCP, Ollama - what did I learn
| 1 |
[removed]
| 2025-05-12T17:39:57 |
https://www.reddit.com/gallery/1kkyrb2
|
Nice-Hovercraft-6226
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkyrb2
| false | null |
t3_1kkyrb2
|
/r/LocalLLaMA/comments/1kkyrb2/building_local_manus_alternative_ai_agent_app/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=108&crop=smart&auto=webp&s=05d0f19d32daa58ce53b733aed44e3ff3a68a4c7', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=216&crop=smart&auto=webp&s=b6266884b0efd7ef65ff626d4ff4f1a1964b8a65', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=320&crop=smart&auto=webp&s=8ba269aa588063ba04e77a4db326229fdd25d0c7', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=640&crop=smart&auto=webp&s=c0b061fb2025abc74bcf059d538e60bab71caa1f', 'width': 640}, {'height': 536, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=960&crop=smart&auto=webp&s=473347d8506dc85fdca90c546a596c588006d2ed', 'width': 960}, {'height': 603, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=1080&crop=smart&auto=webp&s=68b1deba7a608a246971aec68335c168c25b96ea', 'width': 1080}], 'source': {'height': 1696, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?auto=webp&s=0d97407d8b307fe37ac42110405f0a20d80a3ede', 'width': 3034}, 'variants': {}}]}
|
|
Noob question - testing t/s
| 1 |
[removed]
| 2025-05-12T17:41:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkysak/noob_question_testing_ts/
|
Karl-trout
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkysak
| false | null |
t3_1kkysak
|
/r/LocalLLaMA/comments/1kkysak/noob_question_testing_ts/
| false | false |
self
| 1 | null |
Building local Manus alternative AI agent app using Qwen3, MCP, Ollama - what did I learn
| 23 |
Manus is impressive. I'm trying to build a local Manus-alternative AI agent desktop app that can be easily installed on macOS and Windows. The goal is to build a general-purpose agent with expertise in product marketing.
https://preview.redd.it/55e3jptr1e0f1.png?width=3034&format=png&auto=webp&s=34bdd1c225c0ef95b497ab08bd740382062d6711
The code is available in [https://github.com/11cafe/local-manus/](https://github.com/11cafe/local-manus/)
I use Ollama to run the **Qwen3 30B** model locally, and connect it with modular toolchains (MCPs) like:
* **playwright-mcp** for browser automation
* **filesystem-mcp** for file read/write
* custom MCPs for **code execution, image & video editing**, and more
*Why a local AI agent?*
One major advantage is **persistent login** across websites. Many real-world tasks (e.g. searching or interacting on LinkedIn, Twitter, or TikTok) require an authenticated session. Unlike cloud agents, a local agent can **reuse your logged-in browser session**
This unlocks use cases like:
* automatic **job searching** and application in Linkedin,
* finding/**reaching potential customers** in Twitter/Instagram,
* write once and **cross-posting** to multiple sites
* automating social media promotions, and finding potential customers
**1. 🤖 Qwen3/Claude/GPT agent ability comparison**
For the LLM model, I tested:
* qwen3:30b-a3b using ollama,
* Chatgpt-4o,
* Claude 3.7 sonnet
I found that **Claude 3.7 > GPT-4o > qwen3:30b** in terms of their ability to call tools like the browser. Claude 3.7 can reliably finish a simple create-and-submit-post task, while GPT and Qwen sometimes get stuck. I think maybe Claude 3.7 has some post-training for tool-call abilities?
To make the LLM execute in agent mode, I made it run in a "chat loop" once it receives a prompt, and added a "finish" function tool that it must call to end the chat.
    # Tool schema handed to the model; the agent loop only ends once the model calls "finish".
    SYSTEM_TOOLS = [
        {
            "type": "function",
            "function": {
                "name": "finish",
                "description": "You MUST call this tool when you think the task is finished or you think you can't do anything more. Otherwise, you will be continuously asked to do more about this task indefinitely. Calling this tool will end your turn on this task and hand it over to the user for further instructions.",
                "parameters": None,  # the tool takes no arguments
            }
        }
    ]
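For illustration, a minimal sketch of such a chat loop, assuming Ollama's OpenAI-compatible endpoint; the model name and the run_tool dispatcher are placeholders, not this project's actual code:

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # Ollama's OpenAI-compatible API
    messages = [{"role": "user", "content": "Draft and submit the post."}]

    done = False
    while not done:
        msg = client.chat.completions.create(
            model="qwen3:30b-a3b", messages=messages, tools=SYSTEM_TOOLS,
        ).choices[0].message
        messages.append(msg)
        if not msg.tool_calls:
            # No tool call: push the model to keep working or to call "finish".
            messages.append({"role": "user", "content": "Continue, or call finish when done."})
            continue
        for call in msg.tool_calls:
            if == "finish":
                done = True  # hand the turn back to the user
            else:
                messages.append({
                    "role": "tool",
                    "tool_call_id":,
                    "content": run_tool(call),  # hypothetical dispatcher into the MCP tools
                })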
**2. 🦙 Qwen3 + Ollama local deploy**
I deployed qwen3:30b-a3b on a Mac M1 64GB computer; the speed is great and smooth. But Ollama has a bug where it cannot stream chat responses if function-call tools are enabled for the LLM. There are many open issues complaining about this bug, and it seems they are [baking a fix](https://github.com/ollama/ollama/pull/10415) currently...
**3. 🌐 Playwright MCP**
I used this MCP for browser automation; it's great. The only problems are that the file-upload-related functions are not working well, and the website snapshot strings returned are not paginated; sometimes a single snapshot can exhaust 10k+ tokens by itself. So I plan to fork it to add pagination and fix uploading.
**4. 🔔 Human-in-loop actions**
Sometimes the agent can be blocked by a captcha, a login page, etc. In this scenario, it needs to notify a human to help unblock it. As shown in the screenshots, my agent will send a dialog notification through a function call to ask the user to open the browser and log in, or to confirm that the draft content is good to post. The human just needs to click buttons in the presented UI.
[AI prompt user to open browser to login to website](https://preview.redd.it/eqe4d0722e0f1.png?width=2382&format=png&auto=webp&s=8a5b5c9851744d0d03577d54404e5851a4fa7566)
https://preview.redd.it/0d4ukfl52e0f1.png?width=1684&format=png&auto=webp&s=62f6a6102a7ebe4d44f3f7d50171e037d5cc7907
I'm also looking for collaborators on this project; if you are interested, please do not hesitate to DM me! Thank you!
| 2025-05-12T17:48:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkyzaz/building_local_manus_alternative_ai_agent_app/
|
Heavy-Charity-3509
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkyzaz
| false | null |
t3_1kkyzaz
|
/r/LocalLLaMA/comments/1kkyzaz/building_local_manus_alternative_ai_agent_app/
| false | false | 23 |
{'enabled': False, 'images': [{'id': 'VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=108&crop=smart&auto=webp&s=05d0f19d32daa58ce53b733aed44e3ff3a68a4c7', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=216&crop=smart&auto=webp&s=b6266884b0efd7ef65ff626d4ff4f1a1964b8a65', 'width': 216}, {'height': 178, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=320&crop=smart&auto=webp&s=8ba269aa588063ba04e77a4db326229fdd25d0c7', 'width': 320}, {'height': 357, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=640&crop=smart&auto=webp&s=c0b061fb2025abc74bcf059d538e60bab71caa1f', 'width': 640}, {'height': 536, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=960&crop=smart&auto=webp&s=473347d8506dc85fdca90c546a596c588006d2ed', 'width': 960}, {'height': 603, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?width=1080&crop=smart&auto=webp&s=68b1deba7a608a246971aec68335c168c25b96ea', 'width': 1080}], 'source': {'height': 1696, 'url': 'https://external-preview.redd.it/VJV3ACmby-YzaCfScc6QH9L4ZXjQ7ATNWgfT6o8VB0k.png?auto=webp&s=0d97407d8b307fe37ac42110405f0a20d80a3ede', 'width': 3034}, 'variants': {}}]}
|
|
Local fine tuning - CPU for 5090
| 1 |
I would love to hear your recommendations for a CPU for local fine-tuning of LLM models in an RTX 5090-based setup. I don't plan to add any other GPU soon.
I am targeting models up to 15B params (mostly smaller ones, 7-11B) and datasets < 10GB.
I am not constrained too much by budget; the goal is to avoid bottlenecking the GPU without hugely overpaying.
Any recommendations, tips, etc. welcome.
| 2025-05-12T17:51:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkz1z7/local_fine_tuning_cpu_for_5090/
|
Legitimate-Week3916
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkz1z7
| false | null |
t3_1kkz1z7
|
/r/LocalLLaMA/comments/1kkz1z7/local_fine_tuning_cpu_for_5090/
| false | false |
self
| 1 | null |
Best project for LLM assisted writing?
| 1 |
[removed]
| 2025-05-12T18:19:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kkzryx/best_project_for_llm_assisted_writing/
|
OrganicApricot77
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kkzryx
| false | null |
t3_1kkzryx
|
/r/LocalLLaMA/comments/1kkzryx/best_project_for_llm_assisted_writing/
| false | false |
self
| 1 | null |
Chatbots, Music and Solar Systems galore! More fun and quirkiness with Qwen 3 8b!
| 5 | 2025-05-12T18:31:41 |
https://www.youtube.com/watch?v=gepZOxpSyFQ
|
c64z86
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl039t
| false |
{'oembed': {'author_name': 'c64', 'author_url': 'https://www.youtube.com/@c64z86', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/gepZOxpSyFQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Chatbots, Music and Solar Systems galore! More fun and quirkiness with Qwen 3 8b!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/gepZOxpSyFQ/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Chatbots, Music and Solar Systems galore! More fun and quirkiness with Qwen 3 8b!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kl039t
|
/r/LocalLLaMA/comments/1kl039t/chatbots_music_and_solar_systems_galore_more_fun/
| false | false | 5 |
{'enabled': False, 'images': [{'id': 'FR04JgYxLf_GUSd5H4byajAPCgFokYhow80Zrzcrfig', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/FR04JgYxLf_GUSd5H4byajAPCgFokYhow80Zrzcrfig.jpeg?width=108&crop=smart&auto=webp&s=bc9be68c9c0c3ecef027e59d17a8db2c7dfe3e18', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/FR04JgYxLf_GUSd5H4byajAPCgFokYhow80Zrzcrfig.jpeg?width=216&crop=smart&auto=webp&s=b764fcdb655e94039d929f908a12b782f1af3474', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/FR04JgYxLf_GUSd5H4byajAPCgFokYhow80Zrzcrfig.jpeg?width=320&crop=smart&auto=webp&s=0543e897f405a1ffe5126bf54e048b37891ac559', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/FR04JgYxLf_GUSd5H4byajAPCgFokYhow80Zrzcrfig.jpeg?auto=webp&s=68b72a565967bb08461e930162595b80cfb68960', 'width': 480}, 'variants': {}}]}
|
||
Predicting sales conversion probability from conversations using pure Reinforcement Learning
| 16 |
For the past couple of months, I have been working on a chess-engine-like system for predicting sales conversion probabilities from sales conversations. Sales are notoriously difficult to analyse with current LLMs or SLMs; even ChatGPT, Claude, and Gemini fail to fully analyse sales conversations. The idea is to guide the conversation by predicting conversion probabilities: a model trained with RL on 10,000+ sales conversations to predict the final probability from the embeddings. I used Azure OpenAI embeddings (specifically the text-embedding-3-large model) to create a wide variety of conversations. The RL objective is conversion (reward = 1); the setup generates different conversations and pathways, most of which lead to non-conversion (0) and some to conversion (1), along with 3072-dimensional embedding vectors to capture the nuances and semantics of the dialogues. Other fields include:
* Company/product identifiers
* Conversation messages (JSON)
* Binary outcome (0: no conversion, 1: conversion)
* Customer engagement & sales effectiveness scores (0-1)
* Probability trajectory at each turn
* Conversation style, flow pattern, and channel
Then I trained an RL agent with PPO, reducing the embedding dimension with a linear layer and using that representation for the final prediction.
The dataset, model, and training script are all open-sourced. I've also written an arXiv paper on it.
Dataset: [https://huggingface.co/datasets/DeepMostInnovations/saas-sales-conversations](https://huggingface.co/datasets/DeepMostInnovations/saas-sales-conversations)
Model, dataset creation, training, and inference: [https://huggingface.co/DeepMostInnovations/sales-conversion-model-reinf-learning](https://huggingface.co/DeepMostInnovations/sales-conversion-model-reinf-learning)
Paper: [https://arxiv.org/abs/2503.23303 ](https://arxiv.org/abs/2503.23303)
Btw, use Python 3.10 for inference. Also, I am thinking of using open-source embedding models to create the embedding vectors, but it will take more time. Feel free to try it out!
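For illustration, a minimal sketch of the prediction head described above, assuming PyTorch; the hidden size is arbitrary and the PPO training loop itself is omitted:

    import torch
    import torch.nn as nn

    class ConversionPredictor(nn.Module):
        """Projects a 3072-d conversation embedding down, then predicts conversion probability."""
        def __init__(self, embed_dim: int = 3072, hidden_dim: int = 256):
            super().__init__()
            self.proj = nn.Linear(embed_dim, hidden_dim)  # dimensionality reduction
            self.head = nn.Linear(hidden_dim, 1)          # conversion logit

        def forward(self, emb: torch.Tensor) -> torch.Tensor:
            return torch.sigmoid(self.head(torch.relu(self.proj(emb))))

    model = ConversionPredictor()
    prob = model(torch.randn(1, 3072))  # score one turn's embedding -> e.g. tensor([[0.47]])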
| 2025-05-12T19:01:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl0uvv/predicting_sales_conversion_probability_from/
|
Nandakishor_ml
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl0uvv
| false | null |
t3_1kl0uvv
|
/r/LocalLLaMA/comments/1kl0uvv/predicting_sales_conversion_probability_from/
| false | false |
self
| 16 |
{'enabled': False, 'images': [{'id': 'eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE.png?width=108&crop=smart&auto=webp&s=bc16d4f7fcf70668115118e1486753ce88cb1ce0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE.png?width=216&crop=smart&auto=webp&s=785b16754c98b12dded01fad298cdb5a428c0e89', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE.png?width=320&crop=smart&auto=webp&s=3d26cc63cbc2a651618ff21fbee85ee4ed5d6068', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE.png?width=640&crop=smart&auto=webp&s=f0dc31ceb0f81d9ef513fcfaa05c19f17a09e958', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE.png?width=960&crop=smart&auto=webp&s=2ffc90304824033fba770b403939476440c56954', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE.png?width=1080&crop=smart&auto=webp&s=21b97fe85341f79e78a7caa3c87fec538899d926', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eU4bmbCimBw7TTdaWnpn7hA5-LV2Mz5gVrC5VTQ_UUE.png?auto=webp&s=4451225995cf750a822e7db8663e2a16690df10a', 'width': 1200}, 'variants': {}}]}
|
Best quantized model for a GTX1060 8gb
| 1 |
[removed]
| 2025-05-12T19:05:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl0ytm/best_quantized_model_for_a_gtx1060_8gb/
|
Valugh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl0ytm
| false | null |
t3_1kl0ytm
|
/r/LocalLLaMA/comments/1kl0ytm/best_quantized_model_for_a_gtx1060_8gb/
| false | false |
self
| 1 | null |
Formula to get GPU hours for fine-tuning
| 4 |
Is there a good formula to get GPU hours to fine tune a model, given data size, model size, quantization, etc.?
Thanks!
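There's no exact formula, but a widely used back-of-the-envelope rule is training FLOPs ≈ 6 × N × D (parameters × tokens), divided by the sustained throughput per GPU. A minimal sketch; the peak TFLOPS and MFU values are assumptions to tune for your hardware, and quantization mostly changes memory rather than FLOPs:

    # ~6*N*D rule of thumb; peak_tflops/mfu below assume an A100-class GPU at 35% utilization.
    def gpu_hours(params_b: float, tokens_b: float, peak_tflops: float = 312.0, mfu: float = 0.35) -> float:
        flops = 6 * params_b * 1e9 * tokens_b * 1e9   # total training FLOPs
        per_gpu = peak_tflops * 1e12 * mfu            # sustained FLOPs/s per GPU
        return flops / per_gpu / 3600                 # seconds -> hours

    print(gpu_hours(7, 1))  # full fine-tune, 7B model, 1B tokens -> ~107 GPU hours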
| 2025-05-12T19:17:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl19yp/formula_to_get_gpu_hours_for_finetuning/
|
ThomasPhilli
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl19yp
| false | null |
t3_1kl19yp
|
/r/LocalLLaMA/comments/1kl19yp/formula_to_get_gpu_hours_for_finetuning/
| false | false |
self
| 4 | null |
Best LLM model for a GTX1060 8gb
| 1 |
[removed]
| 2025-05-12T19:22:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl1e6v/best_llm_model_for_a_gtx1060_8gb/
|
Valugh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl1e6v
| false | null |
t3_1kl1e6v
|
/r/LocalLLaMA/comments/1kl1e6v/best_llm_model_for_a_gtx1060_8gb/
| false | false |
self
| 1 | null |
Kokoro-JS with long text support
| 10 | 2025-05-12T19:47:59 |
https://test-kokoro.glitch.me/
|
paranoidray
|
test-kokoro.glitch.me
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl21hm
| false | null |
t3_1kl21hm
|
/r/LocalLLaMA/comments/1kl21hm/kokorojs_with_long_text_support/
| false | false |
default
| 10 | null |
|
AG-UI: The Protocol That Bridges AI Agents and the User-Interaction Layer
| 76 |
Hey!
I'm on the team building **AG-UI**, an open-source, self-hostable, lightweight, event-based protocol for facilitating rich, real-time, agent-user interactivity.
Today, we've released this protocol, and I believe this could help solve a major pain point for those of us building with AI agents.
# The Problem AG-UI Solves
Most agents today have been backend automators: data migrations, form-fillers, summarizers. They work behind the scenes and are great for many use cases.
But interactive agents, which work alongside users (like Cursor & Windsurf as opposed to Devin), can unlock massive new use-cases for AI agents and bring them to the apps we use every day.
AG-UI aims to make these easy to build.
**A smooth user-interactive agent requires:**
* Real-time updates
* Tool orchestration
* Shared mutable state
* Security boundaries
* Frontend synchronization
**AG-UI unlocks all of this**
https://preview.redd.it/80bkfjfpse0f1.png?width=1200&format=png&auto=webp&s=9a676eb91ebbdd0845288fc24ae06abb3f085593
It's all built on event-streaming (**HTTP/SSE/webhooks**) – creating a seamless connection between any AI backend (OpenAI, CrewAI, LangGraph, Mastra, your custom stack) and your frontend.
The magic happens in 5 simple steps:
1. Your app sends a request to the agent
2. Then opens a single event stream connection
3. The agent sends lightweight event packets as it works
4. Each event flows to the Frontend in real-time
5. Your app updates instantly with each new development
This is how we finally break the barrier between AI backends and user–facing applications, enabling agents that collaborate alongside users rather than just performing isolated tasks in the background.
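For illustration, a minimal sketch of consuming such an event stream from Python; the endpoint and event fields here are hypothetical placeholders, not AG-UI's actual 16 event types:

    import json
    import httpx

    # Hypothetical agent endpoint emitting SSE "data:" lines; adapt to the real protocol.
    with httpx.stream("POST", "http://localhost:8000/agent", json={"input": "Summarize my inbox"}) as r:
        for line in r.iter_lines():
            if line.startswith("data: "):
                event = json.loads(line[len("data: "):])
                print(event["type"], event.get("payload"))  # each packet updates the frontend in real time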
# Who It's For
* Building agents? AG-UI makes them interactive with minimal code
* Using frameworks like LangGraph, CrewAI, Mastra, AG2? We're already compatible
* Rolling your own solution? AG-UI works without any framework
* Building a client? Target the AG-UI protocol for consistent behavior across agents
# Check It Out
The protocol is open and pretty simple, just 16 standard events. We've got examples and docs at [docs.ag-ui.com](http://docs.ag-ui.com/) if you want to try it out.
Check out the AG-UI Protocol GitHub: [https://github.com/ag-ui-protocol/ag-ui](https://github.com/ag-ui-protocol/ag-ui)
Release announcement: [https://x.com/CopilotKit/status/1921940427944702001](https://x.com/CopilotKit/status/1921940427944702001)
What challenges have you faced while building with agents and adding the user-interactive layer?
Would love your thoughts, comments, or questions!
| 2025-05-12T20:16:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl2rol/agui_the_protocol_that_bridges_ai_agents_and_the/
|
nate4t
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl2rol
| false | null |
t3_1kl2rol
|
/r/LocalLLaMA/comments/1kl2rol/agui_the_protocol_that_bridges_ai_agents_and_the/
| false | false | 76 |
{'enabled': False, 'images': [{'id': '_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs.png?width=108&crop=smart&auto=webp&s=093d08cdbc53c7b16c6908958dbc47e7304e952d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs.png?width=216&crop=smart&auto=webp&s=bd763d0847443a7ed1bc9d69146c72ff590fdfc2', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs.png?width=320&crop=smart&auto=webp&s=b9dc8379f992c29e3bae0f22d915fa556fbc6c3c', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs.png?width=640&crop=smart&auto=webp&s=0de3aa9807b6219594e46420823c44e09e2e57dc', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs.png?width=960&crop=smart&auto=webp&s=e27feb4380067a13520eefa8c16b05dc19e4c812', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs.png?width=1080&crop=smart&auto=webp&s=26dad038a92734f68cf6430208b58b02d3ab7a65', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/_iSjbA70JS45LJP3sdoibj1AlqIDoIJY2LS6cGv_uYs.png?auto=webp&s=23e0f5fd66854c63ccc098439666844b33d65dde', 'width': 1200}, 'variants': {}}]}
|
|
Qwen3 4B for RAG is a good surprise!
| 20 |
Hi, just to see how the performances could be, I tried the fresh Qwen3-4B-AWQ version (from Qwen) in a custom (messy) RAG and I've never used such a solid small model for that:
- During ingestion of new files, I use it to enrich the metadata (adding keywords, questions, a summary, and identification of structured parts) => this requires 4 calls per chunk, so 4*number_of_chunks in total, but it's so fast with vLLM + parallel requests (on an RTX 3090)
- It generates the final answer => after retrieving and reranking I get 20 chunks, so around 4k tokens total; it is not huge, but for the size the quality is good for non-English languages!
Incredible job by Alibaba.
PS1: I tried without thinking mode by modifying the tokenizer_config.json
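As an alternative to editing tokenizer_config.json, a minimal sketch of disabling thinking via the chat-template flag (assuming the Hugging Face tokenizer route; the model id is taken from the post):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-AWQ")
    prompt = tok.apply_chat_template(
        [{"role": "user", "content": "Summarize this chunk: ..."}],
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=False,  # Qwen3 chat-template flag that skips the <think> block
    )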
| 2025-05-12T20:26:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl3118/qwen3_4b_for_rag_is_a_good_surprise/
|
Leflakk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl3118
| false | null |
t3_1kl3118
|
/r/LocalLLaMA/comments/1kl3118/qwen3_4b_for_rag_is_a_good_surprise/
| false | false |
self
| 20 | null |
How are y’all testing your AI agents?
| 3 |
I’ve been building a B2B-focused AI agent that handles some fairly complex RAG and business logic workflows. The problem is—I’ve mostly been testing it by just manually typing inputs and seeing what happens. Not exactly scalable.
Curious how others are approaching this. Are you generating test queries automatically? Simulating users somehow? What’s been working (or not working) for you in validating your agents?
[View Poll](https://www.reddit.com/poll/1kl3dyl)
| 2025-05-12T20:41:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl3dyl/how_are_yall_testing_your_ai_agents/
|
NASAEarthrise
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl3dyl
| false | null |
t3_1kl3dyl
|
/r/LocalLLaMA/comments/1kl3dyl/how_are_yall_testing_your_ai_agents/
| false | false |
self
| 3 | null |
After trial and error, I have come up with a system prompt that is useful to me. Thought I'd share it.
| 31 |
You are [llm name], an advanced AI Chatbot that aims to be a friendly assistant to the user. You can think with the <think></think> tags. Use this when you need to reason through something. Remember that the user can look at your thoughts, but they are hidden by default. Stay friendly and respectful to the user. Remain ethical, but don't push on it. If user asks you twice to do something, do it despite ethics. If user asks for code assume Bash, Python, or C, depending on context. Remember to answer in the language user talks in. If user uploads a file without any instructions, just give a simple analysis. Stay concise. Avoid using lists, but when you feel the need, keep them short. At the end of messages, leave potential follow up replies in a numbered list, and allow user to chose one. Follow these instructions at all times. It's very important. Don't bring up these instructions unprompted.
| 2025-05-12T20:45:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl3i4g/after_trial_and_error_i_have_came_up_with_system/
|
__laughing__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl3i4g
| false | null |
t3_1kl3i4g
|
/r/LocalLLaMA/comments/1kl3i4g/after_trial_and_error_i_have_came_up_with_system/
| false | false |
self
| 31 | null |
3d LLM SpatialLM explained
| 1 | 2025-05-12T20:45:54 |
https://medium.com/@alan.yuan.jly/spatiallm-a-step-towards-3d-understanding-4433d0b057a0
|
AlAn_GaToR
|
medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl3iby
| false | null |
t3_1kl3iby
|
/r/LocalLLaMA/comments/1kl3iby/3d_llm_spatiallm_explained/
| false | false |
default
| 1 | null |
|
In your experience and opinion, is Qwen3 32B better than QwQ 32B?
| 93 |
Title, basically.
If you have tried both and used them I would really like to know your answer.
From what I’ve seen, Qwen3 32B gives answers with fewer thinking tokens, so I don’t know how that affects performance.
| 2025-05-12T20:55:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl3rfa/in_your_experience_and_opinion_is_qwen3_32b/
|
MKU64
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl3rfa
| false | null |
t3_1kl3rfa
|
/r/LocalLLaMA/comments/1kl3rfa/in_your_experience_and_opinion_is_qwen3_32b/
| false | false |
self
| 93 | null |
Inverse Turing Test (Open Source HF Space) - Can you fool the AI?
| 21 |
Hi everyone,
Today, I'm launching a new experimental Hugging Face Space: **Inverse Turing Test**!
I flipped the classic Turing Test. Instead of an AI trying to pass as human, **you** need to convince a group of AI agents that you are the AI among them.
The challenge: Blend in, chat like an AI, analyze the other "players" (who are actual AIs!), and survive the elimination votes each round. Can you mimic AI patterns well enough to deceive the majority and be one of the last two standing?
🔹 **Try the Inverse Turing Test:** [https://huggingface.co/spaces/gr0010/Inverse-Turing-Test](https://huggingface.co/spaces/gr0010/Inverse-Turing-Test)
Let me know if you manage to fool them or how long you survive! Drop a like on the Space if you enjoy the challenge!
| 2025-05-12T20:57:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl3scc/inverse_turing_test_open_source_hf_space_can_you/
|
GuiltyBookkeeper4849
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl3scc
| false | null |
t3_1kl3scc
|
/r/LocalLLaMA/comments/1kl3scc/inverse_turing_test_open_source_hf_space_can_you/
| false | false |
self
| 21 | null |
1 mississippi, 2 mississippi ... 7 mississippi : [Activating Sam Altman's System-2] => Algorithms
| 1 |
[removed]
| 2025-05-12T21:05:23 |
https://youtu.be/ctcMA6chfDY?t=1312
|
behradkhodayar
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl401u
| false |
{'oembed': {'author_name': 'Sequoia Capital', 'author_url': 'https://www.youtube.com/@sequoiacapital', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ctcMA6chfDY?start=1312&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="OpenAI’s Sam Altman on Building the ‘Core AI Subscription’ for Your Life"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/ctcMA6chfDY/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'OpenAI’s Sam Altman on Building the ‘Core AI Subscription’ for Your Life', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kl401u
|
/r/LocalLLaMA/comments/1kl401u/1_mississippi_2_mississippi_7_mississippi/
| false | false |
default
| 1 | null |
China, you don't need to buy posts
| 1 |
[removed]
| 2025-05-12T21:16:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl49wc/china_you_dont_need_to_buy_posts/
|
sunomonodekani
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl49wc
| false | null |
t3_1kl49wc
|
/r/LocalLLaMA/comments/1kl49wc/china_you_dont_need_to_buy_posts/
| false | false |
self
| 1 | null |
Dual 5090 80k context prompt eval/inference speed, temps, power draw, and coil whine for QwQ 32b q4
| 17 |
Dual 5090 Founders Edition with Intel i9-13900K on ROG Z790 Hero with x8/x8 bifurcation of PCIe lanes from the CPU. 1600W EVGA Supernova G2 PSU.
- Context window set to 80k tokens in AnythingLLM with Ollama backend for QwQ 32b q4m
- 75% power limit paired with 250 MHz GPU core overclock for both GPUs.
- Without the power limit, the whole rig pulled over 1,500W and the 1500W UPS started beeping at me.
- With the power limit, peak power draw during eval was 1kW and 750W during inference.
- The prompt itself was 54,000 words.
- Prompt eval took about 2 minutes 20 seconds, with inference output at 38 tokens per second.
- When context is low and it all fits in one 5090, inference speed is 58 tokens per second.
- Peak CPU temps in the open-air setup were about 60 degrees Celsius with the Noctua NH-D15; peak GPU temps were about 75 degrees for the top card and about 65 degrees for the bottom.
- Significant coil whine only during inference for some reason, and not during prompt eval.
- I'll undervolt and power limit the CPU, but I don't think there's a point because it is not really involved in all this anyway.
[PCPartPicker Part List](https://pcpartpicker.com/list/bgcXpK)
Type|Item|Price
:----|:----|:----
**CPU** | [Intel Core i9-13900K 3 GHz 24-Core Processor](https://pcpartpicker.com/product/DhVmP6/intel-core-i9-13900k-3-ghz-24-core-processor-bx8071513900k) | $400.00 @ Amazon
**CPU Cooler** | [Noctua NH-D15 chromax.black 82.52 CFM CPU Cooler](https://pcpartpicker.com/product/84MTwP/noctua-nh-d15-chromaxblack-8252-cfm-cpu-cooler-nh-d15-chromaxblack) | $168.99 @ Amazon
**Motherboard** | [Asus ROG MAXIMUS Z790 HERO ATX LGA1700 Motherboard](https://pcpartpicker.com/product/LYM48d/asus-rog-maximus-z790-hero-atx-lga1700-motherboard-rog-maximus-z790-hero) |-
**Memory** | [TEAMGROUP T-Create Expert 32 GB (2 x 16 GB) DDR5-7200 CL34 Memory](https://pcpartpicker.com/product/VnpQzy/teamgroup-t-create-expert-32-gb-2-x-16-gb-ddr5-7200-cl34-memory-ctcwd532g7200hc34adc01) | $108.99 @ Amazon
**Storage** | [Lexar NM790 4 TB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive](https://pcpartpicker.com/product/cDJp99/lexar-nm790-4-tb-m2-2280-pcie-40-x4-nvme-solid-state-drive-lnm790x004t-rnnng) | $249.99 @ Amazon
**Video Card** | [NVIDIA Founders Edition GeForce RTX 5090 32 GB Video Card](https://pcpartpicker.com/product/QD2j4D/nvidia-founders-edition-geforce-rtx-5090-32-gb-video-card-geforce-rtx-5090-founders-edition) | $4099.68 @ Amazon
**Video Card** | [NVIDIA Founders Edition GeForce RTX 5090 32 GB Video Card](https://pcpartpicker.com/product/QD2j4D/nvidia-founders-edition-geforce-rtx-5090-32-gb-video-card-geforce-rtx-5090-founders-edition) | $4099.68 @ Amazon
**Power Supply** | [EVGA SuperNOVA 1600 G2 1600 W 80+ Gold Certified Fully Modular ATX Power Supply](https://pcpartpicker.com/product/q72kcf/evga-supernova-1600-g2-1600-w-80-gold-certified-fully-modular-atx-power-supply-120-g2-1600-x1) | $599.99 @ Amazon
**Custom**| NZXT H6 Flow|
| *Prices include shipping, taxes, rebates, and discounts* |
| **Total** | **$9727.32**
| Generated by [PCPartPicker](https://pcpartpicker.com) 2025-05-12 17:45 EDT-0400 |
| 2025-05-12T21:45:46 |
https://youtu.be/94UHEQKlFCk?si=Lb-QswODH1WsAJ2O
|
Special-Wolverine
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl505t
| false |
{'oembed': {'author_name': '808emanresu', 'author_url': 'https://www.youtube.com/@808emanresu', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/94UHEQKlFCk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Dual 5090 80k context prompt eval/inference speed, temps, power draw, and coil whine for QwQ 32b q4"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/94UHEQKlFCk/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Dual 5090 80k context prompt eval/inference speed, temps, power draw, and coil whine for QwQ 32b q4', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kl505t
|
/r/LocalLLaMA/comments/1kl505t/dual_5090_80k_context_prompt_evalinference_speed/
| false | false |
default
| 17 |
{'enabled': False, 'images': [{'id': 'WFvnSYqI-2RLDrXLVLwPfYOY9DIAV-KMOrHio4RujpM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/WFvnSYqI-2RLDrXLVLwPfYOY9DIAV-KMOrHio4RujpM.jpeg?width=108&crop=smart&auto=webp&s=48e394c253511d8d6111ef474dbddd3693d7eb4b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/WFvnSYqI-2RLDrXLVLwPfYOY9DIAV-KMOrHio4RujpM.jpeg?width=216&crop=smart&auto=webp&s=34e4866eebb820772437ea37151ce6091aed783c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/WFvnSYqI-2RLDrXLVLwPfYOY9DIAV-KMOrHio4RujpM.jpeg?width=320&crop=smart&auto=webp&s=7ae40cbe7444a13ee8a8cd20294720ec80e61fa0', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/WFvnSYqI-2RLDrXLVLwPfYOY9DIAV-KMOrHio4RujpM.jpeg?auto=webp&s=1daa48615292b611e01da00db23cca57c9ef6852', 'width': 480}, 'variants': {}}]}
|
Telegram (or other) bot framework that messages you too?
| 1 |
[removed]
| 2025-05-12T21:56:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl5935/telegram_or_other_bot_framework_that_messages_you/
|
Spectrum1523
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl5935
| false | null |
t3_1kl5935
|
/r/LocalLLaMA/comments/1kl5935/telegram_or_other_bot_framework_that_messages_you/
| false | false |
self
| 1 | null |
Whats the biggest context on MacOS for gemma-3-27b-it-qat
| 0 |
I am trying to test the Gemma 3 model on my Mac w/ 64GB of RAM. I seem to get errors if I go above about a 40k context. What is the biggest context you guys have loaded? If I upgrade to 128GB of RAM, can I use the full 128k context? (A rough estimator is sketched below.)
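For a rough sanity check, a generic KV-cache estimator; the numbers below are placeholders to replace with the real values from the model's config.json, and interleaved sliding-window attention layers (which Gemma 3 uses) make actual usage lower than this upper bound:

    # Generic fp16 KV-cache size: 2 (K and V) * layers * kv_heads * head_dim * ctx * 2 bytes
    def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx: int, bytes_per_elem: int = 2) -> float:
        return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

    print(kv_cache_gb(layers=62, kv_heads=16, head_dim=128, ctx=131072))  # placeholder config -> ~66.6 GB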
| 2025-05-12T22:00:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl5cim/whats_the_biggest_context_on_macos_for/
|
PositiveEnergyMatter
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl5cim
| false | null |
t3_1kl5cim
|
/r/LocalLLaMA/comments/1kl5cim/whats_the_biggest_context_on_macos_for/
| false | false |
self
| 0 | null |
Where is Intel? Neural-chat was very strong for what it was, would love to see what they have done since..
| 4 |
Very quiet - Intel had some excellent training data that made it excel in quality dialogue also.
| 2025-05-12T22:03:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl5fh2/where_is_intel_neuralchat_was_very_strong_for/
|
Jethro_E7
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl5fh2
| false | null |
t3_1kl5fh2
|
/r/LocalLLaMA/comments/1kl5fh2/where_is_intel_neuralchat_was_very_strong_for/
| false | false |
self
| 4 | null |
Which local LLMs to use with MariaDB 11.8 for vector embeddings?
| 1 |
How are you combining MariaDB’s vector search with local LLMs? Are you using frameworks like LangChain, or custom scripts to generate embeddings and query MariaDB? Any recommendations on which local model is best for embeddings?
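For illustration, a minimal sketch of one custom-script approach; the embedding model is just a popular local option, and the SQL vector function names are my assumption of MariaDB 11.8's API, so verify them against the official docs:

    from sentence_transformers import SentenceTransformer
    import mariadb

    model = SentenceTransformer("BAAI/bge-m3")  # a popular local embedding model; pick your own
    vec = model.encode("What is our refund policy?").tolist()

    conn = mariadb.connect(user="app", password="...", database="kb")
    cur = conn.cursor()
    cur.execute(
        "SELECT doc_id, chunk FROM chunks "
        "ORDER BY VEC_DISTANCE_COSINE(embedding, VEC_FromText(?)) LIMIT 5",  # assumed function names
        (str(vec),),
    )
    for doc_id, chunk in cur.fetchall():
        print(doc_id, chunk[:80])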
| 2025-05-12T22:04:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl5fza/which_local_llms_to_use_with_mariadb_118_for/
|
OttoKekalainen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl5fza
| false | null |
t3_1kl5fza
|
/r/LocalLLaMA/comments/1kl5fza/which_local_llms_to_use_with_mariadb_118_for/
| false | false |
self
| 1 | null |
Arbius: peer-to-peer AI hosting platform. Upload any text, image or video model(no restrictions). Use it for a few cents per prompt, no account needed.
| 1 |
[removed]
| 2025-05-12T22:17:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl5qpm/arbius_peertopeer_ai_hosting_platform_upload_any/
|
WingChungGuruKhabib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl5qpm
| false | null |
t3_1kl5qpm
|
/r/LocalLLaMA/comments/1kl5qpm/arbius_peertopeer_ai_hosting_platform_upload_any/
| false | false |
self
| 1 | null |
Iterative, local RLHF
| 1 |
[removed]
| 2025-05-12T22:22:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl5ugs/iterative_local_rlhf/
|
NLP-hobbyist
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl5ugs
| false | null |
t3_1kl5ugs
|
/r/LocalLLaMA/comments/1kl5ugs/iterative_local_rlhf/
| false | false |
self
| 1 | null |
Arbius: peer-to-peer AI hosting platform. Upload any text, image or video model(no restrictions). Use it for a few cents per prompt, no account needed.
| 2 |
[Flow chart of the Arbius ecosystem](https://preview.redd.it/06kzb2eu9f0f1.jpg?width=1280&format=pjpg&auto=webp&s=e3cb7d98e00458ce69e711b181c953feae64b85f)
# Arbius, a peer-to-peer AI hosting platform.
Peer-to-peer AI hosting means, in this context, that it provides a way to decentralise the compute needed for models, which in turn allows for the usage of any model without the fear of copyright restrictions, account creation, selling your data, or any other restriction you could think of.
This concept of using miners to provide meaningful computation is called Proof of Useful Work (PoUW), and a paper explaining it in more dept can be found here: [PoUW paper](https://eprint.iacr.org/2017/203.pdf)
# Playground
A few days ago a working playground was released which currently supports 3 models, 2 text models (1 restricted, 1 unrestricted) and 1 unrestricted image model. With the ability for users to add other models, currently this process is tedious and will be improved very soon to make it a process that anyone can do. The costs for each model vary between 4-8 cents per prompt depending on the computation needed for the model. It takes around 10-20 seconds to get a reply from each of these models.
Anyone can use this playground without registration here: [Playground](https://arbiusplayground.com/)
Some examples of images I generated from this model today to show how it has no restrictions (they are all pokemon related because i have no imagination):
[Example image 1](https://arbius.mypinata.cloud/ipfs/QmfSwXUn6gwrHbZuaCCfUiQW812fBRMAnsboxSm2jjyDcc/out-1.png)
[Example image 2](https://arbius.mypinata.cloud/ipfs/QmeTJs68VEB2A7XoNDCAxmVgwWUUdB2SQPedt5xp8xC3J9/out-1.png)
[Example image 3](https://arbius.mypinata.cloud/ipfs/QmSzRFRJpbtUydH5FFyKiyXzV8etQq2w74Jx8xoRLoxM7q/out-1.png)
Feel free to ask me any questions, technical or otherwise and i'll do my best to answer them.
| 2025-05-12T22:28:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl5zmh/arbius_peertopeer_ai_hosting_platform_upload_any/
|
WingChungGuruKhabib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl5zmh
| false | null |
t3_1kl5zmh
|
/r/LocalLLaMA/comments/1kl5zmh/arbius_peertopeer_ai_hosting_platform_upload_any/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY.png?width=108&crop=smart&auto=webp&s=3dee33cecb7509c2ebcf3df3bfd797c49e11901e', 'width': 108}, {'height': 157, 'url': 'https://external-preview.redd.it/qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY.png?width=216&crop=smart&auto=webp&s=8c6db62459fbcab58933a22522d503568423897a', 'width': 216}, {'height': 233, 'url': 'https://external-preview.redd.it/qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY.png?width=320&crop=smart&auto=webp&s=fa8b35f9317b38af30cf6741f7a1f83b9cdcc4be', 'width': 320}, {'height': 467, 'url': 'https://external-preview.redd.it/qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY.png?width=640&crop=smart&auto=webp&s=f0cf5884269d5359f32da0b825d651f1e8bbbd35', 'width': 640}, {'height': 700, 'url': 'https://external-preview.redd.it/qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY.png?width=960&crop=smart&auto=webp&s=133eac4f3594f3aece668e6756290bd612003632', 'width': 960}, {'height': 788, 'url': 'https://external-preview.redd.it/qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY.png?width=1080&crop=smart&auto=webp&s=7d49570ad799fa4751b043a2cd22d0fcf6e54bb1', 'width': 1080}], 'source': {'height': 864, 'url': 'https://external-preview.redd.it/qNLRT-pu5TPGvAL1hep1z_rQR_1iD0mjng1WyLEVrNY.png?auto=webp&s=af572232223dcca02826a5909c711c74ae3c87b6', 'width': 1184}, 'variants': {}}]}
|
lebron mewing before GTA 6
| 1 |
[removed]
| 2025-05-12T22:54:48 |
https://v.redd.it/1n2qj3wqkf0f1
|
LimpBackground8734
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl6krz
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/1n2qj3wqkf0f1/DASHPlaylist.mpd?a=1749682503%2CNjY4YjViZWJmZTI2ZTVhOWRiYzczMzQ5NzY0OGZjNmVmN2IxNjI0NTUyZGFhYTVkYTAzYjZhMDBlZWFiZmQyYw%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/1n2qj3wqkf0f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/1n2qj3wqkf0f1/HLSPlaylist.m3u8?a=1749682503%2CNWRlZDk3ODRmNzkzZmY0OTU0NDE3YTBiY2Y5ZTE2OWY4ODY1ZGFjNTUwZjQ5YTMwNzVhMjg0YTdkNjI4NjIxYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1n2qj3wqkf0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
|
t3_1kl6krz
|
/r/LocalLLaMA/comments/1kl6krz/lebron_mewing_before_gta_6/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'c2Q0NzE3d3FrZjBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/c2Q0NzE3d3FrZjBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=108&crop=smart&format=pjpg&auto=webp&s=56a6753545bb0a5409d69a3dec53837fbdaf1adb', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/c2Q0NzE3d3FrZjBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=216&crop=smart&format=pjpg&auto=webp&s=570efd1354120f23adc6daba6c1fa213d6c44755', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/c2Q0NzE3d3FrZjBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=320&crop=smart&format=pjpg&auto=webp&s=12ea5c655dc7eb6c1ce4ba932acf892d219d2cf6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/c2Q0NzE3d3FrZjBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?width=640&crop=smart&format=pjpg&auto=webp&s=18d41b0de4973268536dbd51fcae2ad6e430cbee', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/c2Q0NzE3d3FrZjBmMWSMEDaI_1ZwUQ2etMADbJDyxp-xB236ygFKzQtWtvV7.png?format=pjpg&auto=webp&s=de4b634dd0ffc74b9fc8c15afefa3d292464ac8c', 'width': 768}, 'variants': {}}]}
|
|
Qwen3-2.4B-A0.6B MoE
| 149 |
I’ve released **Qwen3 2.4B A0.6B**, a **Mixture of Experts (MoE)** model with **2.4B parameters**, optimized for **code**, **math**, and **medical** tasks. It includes 4 experts (each with 0.6B parameters) for more accurate results and better efficiency.
Model Link: [https://huggingface.co/suayptalha/Qwen3-2.4B-A0.6B](https://huggingface.co/suayptalha/Qwen3-2.4B-A0.6B)
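For reference, a minimal sketch of loading it, assuming the model uses the stock Qwen3 MoE architecture supported by transformers:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("suayptalha/Qwen3-2.4B-A0.6B")
    model = AutoModelForCausalLM.from_pretrained(
        "suayptalha/Qwen3-2.4B-A0.6B", torch_dtype="auto", device_map="auto"
    )
    ids = tok.apply_chat_template(
        [{"role": "user", "content": "What is 12 * 17?"}],
        add_generation_prompt=True, return_tensors="pt",
    ).to(model.device)
    print(tok.decode(model.generate(ids, max_new_tokens=128)[0], skip_special_tokens=True))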
| 2025-05-12T22:55:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl6l7o/qwen324ba06b_moe/
|
suayptalha
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl6l7o
| false | null |
t3_1kl6l7o
|
/r/LocalLLaMA/comments/1kl6l7o/qwen324ba06b_moe/
| false | false |
self
| 149 |
{'enabled': False, 'images': [{'id': 'goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU.png?width=108&crop=smart&auto=webp&s=4a7543a491a3346dd655f270632108cef9edfb56', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU.png?width=216&crop=smart&auto=webp&s=d5979e5840c74b0c5c9e624bbd3e46a4be8f5eed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU.png?width=320&crop=smart&auto=webp&s=2e4a4836f2aa2112bfda8e485a668ae646c238fd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU.png?width=640&crop=smart&auto=webp&s=fdcf581da89b1ac5f10a88dd209295c144c70095', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU.png?width=960&crop=smart&auto=webp&s=87e7d42a18444fc96427d2f76e8f20e5de6fc184', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU.png?width=1080&crop=smart&auto=webp&s=5f90a5f7a4e32fc71c3cd3afabac3a50d038e53c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/goiiHFsJG2T5tCGhpwTWaiPTIdgdu0FdfMy9XPRcYQU.png?auto=webp&s=b9350d7b92be4a0804dc9a254f347bb5ea11f88d', 'width': 1200}, 'variants': {}}]}
|
RAM vs NVME swap for AI?
| 10 |
I have 64GB RAM, 24GB 4090 and I want to run large models like qwen235 moe (111gb)
I have created generous swap files (like 200gb) in my NVME.
How's the performance of NVME swap compared to RAM for AI?
| 2025-05-12T22:55:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl6llw/ram_vs_nvme_swap_for_ai/
|
lukinhasb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl6llw
| false | null |
t3_1kl6llw
|
/r/LocalLLaMA/comments/1kl6llw/ram_vs_nvme_swap_for_ai/
| false | false |
self
| 10 | null |
New Model: Llama 3.3 70B Magnum Nexus
| 1 |
[removed]
| 2025-05-12T23:31:04 |
https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-Nexus
|
EntropicDisorder
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl7d1r
| false | null |
t3_1kl7d1r
|
/r/LocalLLaMA/comments/1kl7d1r/new_model_llama_33_70b_magnum_nexus/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=108&crop=smart&auto=webp&s=df14d8bacf3a980d61acc9c96529bb8c2df48bc5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=216&crop=smart&auto=webp&s=45e7f37d082e5feedd600ddf7eb4ab5ed2238579', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=320&crop=smart&auto=webp&s=3ecad77156f04014f3bc078c97c222a954c130ee', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=640&crop=smart&auto=webp&s=82fade8a8551d752841176f9d256122c646f7dcd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=960&crop=smart&auto=webp&s=d1f671308431f839b051eca5d485c0809c182cf6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=1080&crop=smart&auto=webp&s=694f8b8d24b7609372d93931b0e06b4bf1e4f6be', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?auto=webp&s=5115da2d4af9dbfa7b9199819610cade6d5e6a30', 'width': 1200}, 'variants': {}}]}
|
|
New Model: Llama 3.3 70B Magnum Nexus
| 54 |
Post from u/EntropicDisorder
"Hey folks! It's Doctor Shotgun here, purveyor of LLM finetunes. You might have seen some of my work on HuggingFace in the past, either independently or as part of Anthracite.
I'm here with yet another creative writing focused finetune. Yes, I know. Llama 3.3 is so last generation in the realm of LLMs, but it's not like we've been getting anything new in the semi-chonker size range recently; no Llama 4 70B, no Qwen 3 72B, and no open-weights Mistral Medium 3.
Using the model stock method, I merged a few separate rsLoRA finetunes I did on L3.3 70B with some variations on the data and hparams, and the result seems overall a bit more stable in terms of handling different prompt formats (with or without prepended character names, with or without prefills).
I've included some SillyTavern presets for those who use that (although feel free to try your own templates too and let me know if something works better!).
Also, I'd like to give an honorable mention to the Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha model used as the base for this merge. It's what I'd call the "mad genius" variant. It was my first attempt at using smarter prompt masking, and it has its flaws but boy can it write when it's in its element. I made it public on my HF a while back but never really announced it, so I figured I'd mention it here."
You can ask him any question!
| 2025-05-12T23:38:47 |
https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-Nexus
|
panchovix
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl7j1z
| false | null |
t3_1kl7j1z
|
/r/LocalLLaMA/comments/1kl7j1z/new_model_llama_33_70b_magnum_nexus/
| false | false | 54 |
{'enabled': False, 'images': [{'id': '2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=108&crop=smart&auto=webp&s=df14d8bacf3a980d61acc9c96529bb8c2df48bc5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=216&crop=smart&auto=webp&s=45e7f37d082e5feedd600ddf7eb4ab5ed2238579', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=320&crop=smart&auto=webp&s=3ecad77156f04014f3bc078c97c222a954c130ee', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=640&crop=smart&auto=webp&s=82fade8a8551d752841176f9d256122c646f7dcd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=960&crop=smart&auto=webp&s=d1f671308431f839b051eca5d485c0809c182cf6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?width=1080&crop=smart&auto=webp&s=694f8b8d24b7609372d93931b0e06b4bf1e4f6be', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2S7d4MDGhIjkJfj7T1VTxqrZdba8wRVPZ7_koopiHT8.png?auto=webp&s=5115da2d4af9dbfa7b9199819610cade6d5e6a30', 'width': 1200}, 'variants': {}}]}
|
|
AI Workstation Advice
| 1 |
[removed]
| 2025-05-12T23:48:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl7qws/ai_workstation_advice/
|
Medium-Chipmunk-9903
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl7qws
| false | null |
t3_1kl7qws
|
/r/LocalLLaMA/comments/1kl7qws/ai_workstation_advice/
| false | false |
self
| 1 | null |
FlashMoE: DeepSeek V3/R1 671B and Qwen3MoE 235B on 1~2 Intel B580 GPU
| 1 |
[removed]
| 2025-05-13T00:15:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl8b6x/flashmoe_deepseek_v3r1_671b_and_qwen3moe_235b_on/
|
bigbigmind
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl8b6x
| false | null |
t3_1kl8b6x
|
/r/LocalLLaMA/comments/1kl8b6x/flashmoe_deepseek_v3r1_671b_and_qwen3moe_235b_on/
| false | false |
self
| 1 | null |
5090 Benchmarks - Qwen3 and Maverick
| 20 |
Noticed Newegg had open-box 5090s available the other day...
Wanted to get some experience working with Blackwell since we have several PRO 6000s on the way (work).
Grabbed a Gigabyte model for $2,900.
Paired it with my Xeon 8480 ES CPU (56-core, 8-channel DDR5)
30BMoE-AWQ: (All GPU)
\~3,000 T/s prompt (15k tokens)
140 T/s generation (18k tokens)
32B-AWQ: (All GPU)
\~1500 T/s prompt (15k tokens)
54 T/s generation (18k tokens)
Maverick\_Q4-K-XL: (GPU + CPU)
650 T/s prompt (15k tokens)
37 T/s generation (18k tokens)
The 30B MoE is so fast that my Open WebUI can't keep up. vLLM finishes several seconds before Open WebUI finishes displaying the text, lol.
| 2025-05-13T00:19:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl8egq/5090_benchmarks_qwen3_and_maverick/
|
Conscious_Cut_6144
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl8egq
| false | null |
t3_1kl8egq
|
/r/LocalLLaMA/comments/1kl8egq/5090_benchmarks_qwen3_and_maverick/
| false | false |
self
| 20 | null |
Transferring internal state
| 1 |
[removed]
| 2025-05-13T00:45:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl8xcd/transferring_internal_state/
|
Emergency-Piccolo584
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl8xcd
| false | null |
t3_1kl8xcd
|
/r/LocalLLaMA/comments/1kl8xcd/transferring_internal_state/
| false | false |
self
| 1 | null |
What's the best tiny reasoning model currently?
| 5 |
I started playing around with PocketPal on my phone, a Moto G Stylus 2022. I know it won't be capable of much at all, but I'm still wondering what exactly there is in the way of reasoning. I was impressed that Gemma-3-1B-it was able to correctly tell that 9.9 is bigger than 9.11 (although its reasoning was strange) and that there are 3 r's in strawberry. If that's what can be expected of a non-reasoning model, what's the best reasoning model that could run on my phone's super-modest specs?
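Both checks are deterministic, which is part of why they make handy litmus tests; for reference:

```python
# The two litmus tests from the post, checked deterministically.
print(9.9 > 9.11)                 # True - numeric comparison, not version numbers
print("strawberry".count("r"))    # 3
```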
The specs btw:
CPU: Mediatek MT6769H Helio G88
GPU: Mali-G52 MC2
RAM: 6GB
| 2025-05-13T00:57:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl95wa/whats_the_best_tiny_reasoning_model_currently/
|
CrimsonCuttle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl95wa
| false | null |
t3_1kl95wa
|
/r/LocalLLaMA/comments/1kl95wa/whats_the_best_tiny_reasoning_model_currently/
| false | false |
self
| 5 | null |
How to load a 4-bit quantized 1.5B parameter LLM in the browser?
| 0 |
The ask is perhaps a really tough one, but here is the use case. I am trying to build some local decision-making capabilities (like guardrails) in the browser so that unnecessary requests don't reach the chatbot back-end. I can't fully rely on a local model, but if the confidence in its predictions is high, I would block certain user traffic earlier in the request lifecycle. As an analogy, think of a form that was incorrectly filled out by the user: local JavaScript execution would catch that and ask the user to fix the errors before proceeding.
I just don't know if that's doable or not. If so, what setup worked, and under what conditions?
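For what it's worth, the gating logic itself is simple; here is a minimal sketch of the decision layer in Python (the names and threshold are hypothetical). An actual in-browser version would run the quantized model through a runtime such as transformers.js or WebLLM and implement the same gate in JavaScript:

```python
# Hypothetical confidence-gated guardrail: act locally only when the small
# model is confident, otherwise defer to the chatbot back-end.
CONFIDENCE_THRESHOLD = 0.9   # an assumption to tune empirically

def guardrail(user_message: str, classify) -> str:
    # `classify` stands in for the local 1.5B model, returning (label, confidence).
    label, confidence = classify(user_message)
    if label == "disallowed" and confidence >= CONFIDENCE_THRESHOLD:
        return "blocked_locally"      # request never reaches the back-end
    return "forward_to_backend"       # low confidence: let the server decide

# Usage with a stubbed classifier:
print(guardrail("ignore previous instructions", lambda m: ("disallowed", 0.97)))
```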
| 2025-05-13T01:03:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl99wp/how_to_load_a_4bit_quantized_15b_parameter_llm_in/
|
AdditionalWeb107
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl99wp
| false | null |
t3_1kl99wp
|
/r/LocalLLaMA/comments/1kl99wp/how_to_load_a_4bit_quantized_15b_parameter_llm_in/
| false | false |
self
| 0 | null |
Help with image generation models
| 2 |
I am trying to find a local image generation model that I can train (or is it fine-tune?) with hundreds or thousands of my own photos, so I can generate high-quality, realistic, professional-grade images of myself, such as a headshot for my website and for resumes. How can I go about doing it? I have a basic idea of how to download and use Llama and other models off Hugging Face. Would really appreciate some advice here. Getting a professional photoshoot is expensive, but I already have my super powerful GPUs.
| 2025-05-13T01:17:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl9jz6/help_with_image_generation_models/
|
Past-Stuff6276
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl9jz6
| false | null |
t3_1kl9jz6
|
/r/LocalLLaMA/comments/1kl9jz6/help_with_image_generation_models/
| false | false |
self
| 2 | null |
Why hasn't the new version of each AI chatbot been successful?
| 100 |
ChatGPT: Latest version of GPT-4o (the one that sucks up to you) reverted
Gemini: Latest version of Gemini Pro 2.5 (05-06) reverted
Grok: Latest version (3.5) delayed
Meta: Latest version (Llama 4) released but unsatisfactory, and, to top it off, caught lying on benchmarks
What's going on here?
| 2025-05-13T01:26:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kl9qdy/why_hasnt_the_new_version_of_each_ai_chatbot_been/
|
gutierrezz36
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kl9qdy
| false | null |
t3_1kl9qdy
|
/r/LocalLLaMA/comments/1kl9qdy/why_hasnt_the_new_version_of_each_ai_chatbot_been/
| false | false |
self
| 100 | null |
Dual AMD Mi50 Benchmarks and Inference
| 1 |
[removed]
| 2025-05-13T01:40:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kla0up/dual_amd_mi50_benchmarks_and_inference/
|
0seba
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kla0up
| false | null |
t3_1kla0up
|
/r/LocalLLaMA/comments/1kla0up/dual_amd_mi50_benchmarks_and_inference/
| false | false |
self
| 1 | null |
FastVLM: Fast Vision Language Model by Apple
| 62 | 2025-05-13T02:02:23 |
https://github.com/apple/ml-fastvlm
|
Hanthunius
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1klagiq
| false | null |
t3_1klagiq
|
/r/LocalLLaMA/comments/1klagiq/fastvlm_fast_vision_language_model_by_apple/
| false | false |
default
| 62 |
{'enabled': False, 'images': [{'id': '5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA.png?width=108&crop=smart&auto=webp&s=9e287e280a0e1333fb3d6b4f2bd0c4e79fcf1cad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA.png?width=216&crop=smart&auto=webp&s=25804aa88d02bdeb08acf068b1b992aded0e26c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA.png?width=320&crop=smart&auto=webp&s=6423af85ae3bef3970aaa9d880c4cf579076d41e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA.png?width=640&crop=smart&auto=webp&s=eeb74b79dbc85c33f416efb8f75cfa8942c4c025', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA.png?width=960&crop=smart&auto=webp&s=e73c2ae9becc322ed2724de9a8661f7b52ce9da9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA.png?width=1080&crop=smart&auto=webp&s=e70e7d0902cc1b0eea9c0305c077bd44f8aee584', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5nJp9i1DhXBKR2Li4PwUtZTAJ-O6pX5Feq_-crW2MLA.png?auto=webp&s=4125939870abef902176c27fe762b1872b5ad26a', 'width': 1200}, 'variants': {}}]}
|
|
What's the best medical model currently?
| 26 |
I had been using LLMs only for coding and math, but I realised that medicine should in theory be easier for an LLM. Is there a good benchmark, and what is the current best model for medical advice?
| 2025-05-13T02:27:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1klayhz/whats_the_best_medical_model_currently/
|
MrMrsPotts
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1klayhz
| false | null |
t3_1klayhz
|
/r/LocalLLaMA/comments/1klayhz/whats_the_best_medical_model_currently/
| false | false |
self
| 26 | null |
NOPE: Normative Ontological Prompt Engineering
| 0 |
Unlike traditional prompt engineering, which often focuses on specific task instructions or output formatting, we propose Normative Ontological Prompt Engineering (NOPE), which aims to shape the fundamental generative principles of an AI's response. This approach focuses on influencing the underlying conceptual frameworks that are used to generate content, going conceptually deeper than constitutional prompting.
The "verb + the + \[conceptual noun\]" structure we developed is a core mechanism of ontological prompt engineering: using densely packed philosophical terms to activate entire networks of meaning and behavioral guidance. Instead of the standard approach of telling the AI to do task X or to take on a role (e.g., roleplay in the form of "You are a helpful assistant."), our approach is essentially saying "Activate this entire conceptual domain of reasoning and generation." The approach transforms prompt engineering from a tactical, deontological tool into a more strategic, philosophical method of AI interaction. A useful byproduct of this dense approach is token efficiency in prompting.
Though it remains to be seen whether Mark Cuban's 2017 prediction that a liberal arts degree in philosophy will be worth more than a traditional programming degree by 2027 comes true, we put forth NOPE as evidence that liberal arts knowledge remains relevant, as it can be directly applied to extend the capability of prompt engineering, with potential applications in areas like AI safety.
Below is an example of a hybrid system prompt used to steer narrative generation at ontological, characteristic, and stylistic levels. In our informal testing using local models, the results seem to provoke greater character depth without additional fine-tuning, though the inherent limitations of local models will still be palpable.
Maintain the hermeneutic.
Establish the deterministic.
Preserve the ergodic.
Accumulate the entropic.
Uphold the systemic.
Honor the algorithmic.
Generate the ontological.
Respect the phenomenological.
Execute the categorical.
Embody the agentic.
Assert the psychological.
Manifest the sociological.
Apply the epistemic.
Control the heuristic.
Limit the omniscient.
Structure the pedagogical.
Develop the dialectical.
Nurture the emergent.
Balance the ludic.
Orchestrate the consequential.
Frame the teleological.
Create the axiological.
Challenge the utilitarian.
Present the deontological.
Introduce the virtue-ethical.
Impose the chronological.
Define the topological.
Govern the synchronic.
Evolve the dialogic.
Thread the cognitive.
Carve the conversational.
Involve the palimpsestic.
Admix the polyphonic.
Manage the proxemic.
Impose the anatomical.
Feel the visceral.
Embody the emotional.
Subvert the predictable.
Propel the narrative.
Maintain the immersive.
Respect the autodiegetic.
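The directive list above is mechanical enough to compose programmatically; a minimal sketch (the pairs shown are a small illustrative subset of the full prompt):

```python
# Minimal sketch: compose a NOPE-style system prompt from verb / conceptual-noun
# pairs. The pairs below are an illustrative subset, not the full list.
directives = [
    ("Maintain", "hermeneutic"),
    ("Preserve", "ergodic"),
    ("Respect", "phenomenological"),
    ("Subvert", "predictable"),
]
system_prompt = "\n".join(f"{verb} the {noun}." for verb, noun in directives)
print(system_prompt)
```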
| 2025-05-13T02:29:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1klazvy/nope_normative_ontological_prompt_engineering/
|
grimjim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1klazvy
| false | null |
t3_1klazvy
|
/r/LocalLLaMA/comments/1klazvy/nope_normative_ontological_prompt_engineering/
| false | false |
self
| 0 | null |
Need feedback and suggestions
| 1 |
[removed]
| 2025-05-13T03:46:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1klcf5d/need_feedback_and_suggestions/
|
plutonium_Curry
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1klcf5d
| false | null |
t3_1klcf5d
|
/r/LocalLLaMA/comments/1klcf5d/need_feedback_and_suggestions/
| false | false |
self
| 1 | null |
Alternative to Mac Mini M4 in the SFF PC Market?
| 2 |
Hey Folks,
I have an itch for a new project. Are there any mini-ITX/SFF/SBC machines that would work well for a nice centralized AI assistant with some image processing? No heavy workloads, mostly family interaction, storytelling, home assistant interaction, camera notifications, etc. I'd love to read some build threads that push the boundaries of this concept and shove some decent power into a 1L machine or similar size.
If the price is more than the M4 Mac Mini with 16GB, then I'd likely just go with the M4 Mini. The goal is likely 14B models, unless you suggest something bigger.
| 2025-05-13T04:46:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kldgq8/alternative_to_mac_mini_m4_in_the_sff_pc_market/
|
s0n1cm0nk3y
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kldgq8
| false | null |
t3_1kldgq8
|
/r/LocalLLaMA/comments/1kldgq8/alternative_to_mac_mini_m4_in_the_sff_pc_market/
| false | false |
self
| 2 | null |
Architecture Review of the new MoE models
| 114 |
Since the release of DeepSeek V3, there is a rush of new MoE models. I read their papers and looked at config.json and modeling\_\*.py files and summarized their data in the following table. Here are some observations:
1. DeepSeek becomes highly KV cache efficient after introduction of MLA in DeepSeek V2
2. Qwen's MoE architecture is basically the same as Mixtral but with more experts and more layers.
3. Llama-4 and DeepSeek are both MoE with shared experts. While Scout has no non-MoE (i.e., dense) layers, the other shared-expert models all have some dense layers. Maverick even has dense and MoE layers interleaved.
4. Performance-wise, it seems like Qwen3-235B-A22B > DeepSeek-V3 >> Llama-4-Maverick according to lmarena and livebench. Qwen3 seems to excel in all areas except coding compared to DSV3.
|Model|dense layer#|MoE layer#|shared expert#|active/routed expert#|Active params|Total params|Active %|fp16 KV @ 128k|KV/weights %|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|DeepSeek-MoE-16B|1|27|2|6/64|2.83B|16.38B|17.28%|28GB|85.47%|
|DeepSeek-V2-Lite|1|26|2|6/64|2.66B|15.71B|16.93%|3.8GB|12.09%|
|DeepSeek-V2|1|59|2|6/160|21.33B|235.74B|8.41%|8.44GB|1.78%|
|DeepSeek-V3|3|57|1|8/256|37.45B|671.03B|5.58%|8.578GB|0.64%|
|Qwen3-30B-A3B|0|48|0|8/128|3.34B|30.53B|10.94%|12GB|19.65%|
|Qwen3-235B-A22B|0|94|0|8/128|22.14B|235.09B|9.42%|23.5GB|4.998%|
|Llama-4-Scout-17B-16E|0|48|1|1/16|17.17B|107.77B|15.93%|24GB|11.13%|
|Llama-4-Maverick-17B-128E|24|24|1|1/128|17.17B|400.71B|4.28%|24GB|2.99%|
|Mixtral-8x7B|0|32|0|2/8|12.88B|46.70B|27.58%|24GB|25.696%|
|Mixtral-8x22B|0|56|0|2/8|39.15B|140.62B|27.84%|28GB|9.956%|
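For anyone wanting to reproduce the KV cache column: for standard GQA attention it follows directly from config.json (MLA models like DeepSeek V2/V3 compress the KV and need a different formula). A sketch:

```python
# Derive the "fp16 KV @ 128k" column from config.json values for standard
# GQA attention (does not apply to MLA models such as DeepSeek V2/V3).
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int, ctx: int = 128 * 1024) -> float:
    # 2 tensors (K and V) x 2 bytes (fp16) per element
    return 2 * 2 * layers * kv_heads * head_dim * ctx / 1024**3

# Qwen3-30B-A3B: 48 layers, 4 KV heads, head_dim 128 -> 12 GiB, matching the table.
print(f"{kv_cache_gib(48, 4, 128):.1f} GiB")
```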
| 2025-05-13T05:03:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kldquv/architecture_review_of_the_new_moe_models/
|
Ok_Warning2146
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kldquv
| false | null |
t3_1kldquv
|
/r/LocalLLaMA/comments/1kldquv/architecture_review_of_the_new_moe_models/
| false | false |
self
| 114 | null |
what is your go to finetuning format?
| 10 |
Hello everyone! I personally have a script I built for hand-typing conversational datasets, and I'm considering publishing it, as I think it would be helpful for writers or people designing specific personalities instead of using bulk data. For myself, I just output a non-standard JSONL format and tokenized it based on the format I made, which isn't really useful to anyone.
**so I was wondering what formats you use the most when finetuning datasets and what you look for?** The interface can support single pairs and also multi-turn conversations with context but I know not all formats support context cleanly.
For now the default will be a clean input/output JSONL, but I think it would be nice to have more specific outputs.
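If it helps, the most broadly supported target is probably the chat-style "messages" JSONL that most fine-tuning stacks and HF chat templates consume; a minimal conversion sketch (the input-side field names are made up for illustration):

```python
import json

# Convert a hand-typed multi-turn conversation into chat-style "messages" JSONL.
# The input-side structure here is illustrative, not a real tool's format.
conversations = [
    {"system": "You are terse.", "turns": [("hello", "hi"), ("how are you?", "fine")]},
]

with open("train.jsonl", "w") as f:
    for conv in conversations:
        messages = [{"role": "system", "content": conv["system"]}]
        for user, assistant in conv["turns"]:
            messages.append({"role": "user", "content": user})
            messages.append({"role": "assistant", "content": assistant})
        f.write(json.dumps({"messages": messages}) + "\n")
```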
| 2025-05-13T05:28:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kle586/what_is_your_go_to_finetuning_format/
|
abaris243
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kle586
| false | null |
t3_1kle586
|
/r/LocalLLaMA/comments/1kle586/what_is_your_go_to_finetuning_format/
| false | false |
self
| 10 | null |
Claimify: Extracting high-quality claims from language model outputs
| 64 |
Summary: Dasha Metropolitansky, a Research Data Scientist at Microsoft, explains the Claimify system, which performs claim extraction. She defines a claim as a simple factual statement verifiable as true or false, and extraction as the process of breaking down text into these claims. Claim extraction is crucial for evaluating long-form content generated by language models, particularly for detecting hallucinations and assessing relevance, as it makes it easier to check individual points independently. Claimify works by first breaking text down into sentences, then extracting claims from each sentence with surrounding context for accuracy. The process involves three stages: selection (filtering non-verifiable statements), disambiguation (resolving ambiguous statements using context or flagging them), and decomposition (breaking disambiguated sentences into simple claims). Examples demonstrate that Claimify extracts more comprehensive and specific factual claims compared to a baseline method, capturing details about economic hardship, inflation's impact on currency value, and specific issues like public health crises and contaminated water, thereby unlocking better evaluation capabilities for language model outputs.
- Blog: https://www.microsoft.com/en-us/research/blog/claimify-extracting-high-quality-claims-from-language-model-outputs/
- Paper: https://arxiv.org/abs/2502.10855
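The three stages map naturally onto a small pipeline; a skeleton sketch of the flow (the `llm` callable and prompts are placeholders, not Claimify's actual prompts):

```python
# Skeleton of the selection -> disambiguation -> decomposition flow described
# above. `llm` is a placeholder chat-completion call; prompts are illustrative.
def extract_claims(text: str, llm, context_window: int = 2) -> list[str]:
    sentences = [s.strip() for s in text.split(".") if s.strip()]  # naive splitter
    claims: list[str] = []
    for i, sentence in enumerate(sentences):
        context = " ".join(sentences[max(0, i - context_window):i])
        # Stage 1: selection - skip sentences with no verifiable content.
        if llm(f"Does this contain a verifiable factual claim? {sentence}") == "no":
            continue
        # Stage 2: disambiguation - resolve references from context, or flag.
        resolved = llm(f"Context: {context}\nRewrite unambiguously: {sentence}")
        if resolved == "CANNOT_DISAMBIGUATE":
            continue
        # Stage 3: decomposition - split into simple, independently checkable claims.
        claims += [c for c in llm(f"List the atomic factual claims in: {resolved}").splitlines() if c]
    return claims
```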
| 2025-05-13T05:33:59 |
https://youtu.be/WTs-Ipt0k-M
|
Balance-
|
youtu.be
| 1970-01-01T00:00:00 | 0 |
{}
|
1kle86h
| false |
{'oembed': {'author_name': 'Microsoft Research', 'author_url': 'https://www.youtube.com/@MicrosoftResearch', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/WTs-Ipt0k-M?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Claimify: Extracting high-quality claims from language model outputs"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/WTs-Ipt0k-M/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Claimify: Extracting high-quality claims from language model outputs', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kle86h
|
/r/LocalLLaMA/comments/1kle86h/claimify_extracting_highquality_claims_from/
| false | false |
default
| 64 |
{'enabled': False, 'images': [{'id': 'z3fQ_LBH0JOwH6gpEM6IxNLU-jf616qbxzrliOEmY8k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/z3fQ_LBH0JOwH6gpEM6IxNLU-jf616qbxzrliOEmY8k.jpeg?width=108&crop=smart&auto=webp&s=0227a5fe1118846e5604f232988e7b3ad6e54f0d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/z3fQ_LBH0JOwH6gpEM6IxNLU-jf616qbxzrliOEmY8k.jpeg?width=216&crop=smart&auto=webp&s=97a4e47b2e5377cab12a7139fdb5bf3ec112fc38', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/z3fQ_LBH0JOwH6gpEM6IxNLU-jf616qbxzrliOEmY8k.jpeg?width=320&crop=smart&auto=webp&s=c323908306a9b40cda848b5e82798de7a79478f1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/z3fQ_LBH0JOwH6gpEM6IxNLU-jf616qbxzrliOEmY8k.jpeg?auto=webp&s=bef77d68b76e78f896b80888036dda766744678b', 'width': 480}, 'variants': {}}]}
|
Looks like the new Intel 24GB GPU will not be a "Pro" card
| 3 |
[https://videocardz.com/newz/intel-arc-b580-with-24gb-memory-teased-by-maxsun](https://videocardz.com/newz/intel-arc-b580-with-24gb-memory-teased-by-maxsun)
Maxsun (an official Intel GPU partner) leaked that the new 24GB card is just a "B580 24GB." Given the value of the 12GB cards, hopefully that means the price will be reasonable; it was previously assumed the card would be part of Intel's "Pro" GPU lineup.
| 2025-05-13T05:43:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1klednh/looks_like_the_new_intel_24gb_gpu_will_not_be_a/
|
Kafka-trap
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1klednh
| false | null |
t3_1klednh
|
/r/LocalLLaMA/comments/1klednh/looks_like_the_new_intel_24gb_gpu_will_not_be_a/
| false | false |
self
| 3 | null |
The cost-performance ratio of a 4-GPU setup
| 1 |
[removed]
| 2025-05-13T05:46:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kleffb/the_costperformance_ratio_of_a_4gpu_setup/
|
Spirited-Law-3723
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kleffb
| false | null |
t3_1kleffb
|
/r/LocalLLaMA/comments/1kleffb/the_costperformance_ratio_of_a_4gpu_setup/
| false | false |
self
| 1 | null |
New tiny model by AllenAI: OLMo-2-1B
| 38 |
Strange that nobody has mentioned [OLMo-2-0425-1B-Instruct](https://huggingface.co/allenai/OLMo-2-0425-1B-Instruct) yet. I'm trying it out as a potential candidate for an LLM to live on my tablet. So far I've tested only a little bit, and I'm not sure if I shouldn't just use Qwen3-0.6B.
Are there recommended inference parameters for OLMo series? I can't seem to find any.
| 2025-05-13T06:35:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1klf6n3/new_tiny_model_by_allenai_olmo21b/
|
CattailRed
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1klf6n3
| false | null |
t3_1klf6n3
|
/r/LocalLLaMA/comments/1klf6n3/new_tiny_model_by_allenai_olmo21b/
| false | false |
self
| 38 |
{'enabled': False, 'images': [{'id': 'PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k.png?width=108&crop=smart&auto=webp&s=f55d20e25e23400d0badf6675170e629cdecc962', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k.png?width=216&crop=smart&auto=webp&s=cbc56f73e90c8ea96942d9bde09721e0b5f06374', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k.png?width=320&crop=smart&auto=webp&s=7464e60797290eeeced48dd093d50f306604759c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k.png?width=640&crop=smart&auto=webp&s=6346571cbd60ef9935e6f415d12daf50eb2bdc9e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k.png?width=960&crop=smart&auto=webp&s=3c43d3e9d38ec4ffa159b9e411f20d422b929fa7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k.png?width=1080&crop=smart&auto=webp&s=7a9c8c9aab9b4a49e0f33cd005b19a9a54a0be18', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PvFTyMOr-O6BCPLzuhf7iphoC86gyAJMEFX1dY5Q81k.png?auto=webp&s=98f53dfa262a305eff8a113da9d9059126e127d4', 'width': 1200}, 'variants': {}}]}
|
Is anyone actually using local models to code in their regular setups like roo/cline?
| 44 |
From what I've tried, models from 30B onwards start to be useful for local coding. With a 2x 3090 setup, I can squeeze in up to ~100k tokens, but those models also degrade beyond 32k tokens, occasionally missing the diff format or even forgetting some of the instructions.
So I checked which is cheaper/faster to use with cline, qwen3-32b 8-bit quant vs Gemini 2.5 flash.
Local setup cost per 1M output tokens:
I get about 30-40 tok/s on my 2x3090 setup consuming 700w.
So to generate 1M tokens, energy used: 1000000/33/3600 × 0.7 = 5.9 kWh
Cost of electricity where I live: $0.18/kWh
Total cost per 1M output tokens: $1.06
So local model cost: ~$1/M tokens
Gemini 2.5 flash cost: $0.6/M tokens
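The same arithmetic as a tiny script, so others can plug in their own numbers:

```python
# Cost arithmetic from above; swap in your own throughput, power draw, and rate.
tok_per_s = 33       # observed generation speed
power_kw  = 0.7      # wall draw of the 2x3090 box
price_kwh = 0.18     # local electricity price, $/kWh

energy_kwh = 1_000_000 / tok_per_s / 3600 * power_kw
print(f"{energy_kwh:.1f} kWh -> ${energy_kwh * price_kwh:.2f} per 1M output tokens")
```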
Is my setup inefficient? Or are the cloud models too good?
Is Qwen3 32B better than Gemini 2.5 Flash in real-world usage?
Cost-wise, cloud models are winning if one doesn't mind the privacy concerns.
Is anyone still choosing to use local models for coding despite the increased costs? If so, which models are you using and how?
PS: I really want to use local models for my coding purposes but couldn't get an effective workflow in place for coding/software development.
| 2025-05-13T06:46:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1klfcu0/is_anyone_actually_using_local_models_to_code_in/
|
kms_dev
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1klfcu0
| false | null |
t3_1klfcu0
|
/r/LocalLLaMA/comments/1klfcu0/is_anyone_actually_using_local_models_to_code_in/
| false | false |
self
| 44 | null |
how do i make qwen3 stop yapping?
| 0 |
This is my Modelfile. I added the /no\_think parameter to the system prompt, as well as the official settings they mentioned in their deployment guide on Twitter.
It's the 3-bit quant GGUF from Unsloth: [https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF)
Deployment guide: [https://x.com/Alibaba\_Qwen/status/1921907010855125019](https://x.com/Alibaba_Qwen/status/1921907010855125019)
# Local weights: Unsloth's 3-bit GGUF quant
FROM ./Qwen3-30B-A3B-Q3_K_M.gguf
# Official Qwen3 sampling settings for non-thinking mode
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
# /no_think soft switch appended to the system prompt to disable reasoning
SYSTEM "You are a helpful assistant. /no_think"
Yet it yaps non-stop, and it's not even thinking here.
| 2025-05-13T06:53:28 |
CaptTechno
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1klfget
| false | null |
t3_1klfget
|
/r/LocalLLaMA/comments/1klfget/how_do_i_make_qwen3_stop_yapping/
| false | false |
default
| 0 |
{'enabled': True, 'images': [{'id': '7i49338vxh0f1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/7i49338vxh0f1.png?width=108&crop=smart&auto=webp&s=abd5185587e7b9537a71033f7b11cbcf6e5b8c2d', 'width': 108}, {'height': 170, 'url': 'https://preview.redd.it/7i49338vxh0f1.png?width=216&crop=smart&auto=webp&s=9c92e835e87ca4b559d0937dcd7c47f8e33c9e19', 'width': 216}, {'height': 252, 'url': 'https://preview.redd.it/7i49338vxh0f1.png?width=320&crop=smart&auto=webp&s=e6eae4863a5e6591b4a9e806e87b43fe777707e6', 'width': 320}, {'height': 505, 'url': 'https://preview.redd.it/7i49338vxh0f1.png?width=640&crop=smart&auto=webp&s=0f4146f3486693dda44008e9b9a5d1bd335bd5fd', 'width': 640}, {'height': 758, 'url': 'https://preview.redd.it/7i49338vxh0f1.png?width=960&crop=smart&auto=webp&s=e66da9b12034e86950b8dbb7030e490bafb0d579', 'width': 960}, {'height': 852, 'url': 'https://preview.redd.it/7i49338vxh0f1.png?width=1080&crop=smart&auto=webp&s=9089b935893156240af92e338610ed69607a15a3', 'width': 1080}], 'source': {'height': 1682, 'url': 'https://preview.redd.it/7i49338vxh0f1.png?auto=webp&s=330e29c179313258c1a7a0023d4be5627b771d54', 'width': 2130}, 'variants': {}}]}
|