| Column | Type | Range |
| --- | --- | --- |
| title | stringlengths | 1–300 |
| score | int64 | 0–8.54k |
| selftext | stringlengths | 0–40k |
| created | timestamp[ns]date | 2023-04-01 04:30:41 – 2025-06-30 03:16:29 |
| url | stringlengths | 0–878 |
| author | stringlengths | 3–20 |
| domain | stringlengths | 0–82 |
| edited | timestamp[ns]date | 1970-01-01 00:00:00 – 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | stringclasses | 7 values |
| id | stringlengths | 7–7 |
| locked | bool | 2 classes |
| media | stringlengths | 646–1.8k |
| name | stringlengths | 10–10 |
| permalink | stringlengths | 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | stringlengths | 4–213 |
| ups | int64 | 0–8.54k |
| preview | stringlengths | 301–5.01k |
Application to auto-test or determine an LLM's optimal settings
1
Does this exist? Like something that can run a specific model through a bunch of test prompts on a range of settings and provide you with a report at the end recommending settings for temperature, rep penalty, etc.? Even if it's just a recommended settings range between x and y, that would be nice.
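For what it's worth, the core of such a tool would just be a parameter sweep. A minimal sketch against an OpenAI-compatible local server (the endpoint, model name, and `repeat_penalty` field name are assumptions and vary by backend; this is not an existing tool):

```python
# Hypothetical sketch of a sampling-settings sweep, not an existing application.
# Assumes an OpenAI-compatible server (llama.cpp, LM Studio, etc.) on :8080.
import requests

URL = "http://localhost:8080/v1/chat/completions"
PROMPTS = ["Summarize the plot of Hamlet in two sentences.",
           "Write a haiku about rivers."]

results = []
for temperature in (0.2, 0.5, 0.8, 1.1):
    for repeat_penalty in (1.0, 1.1, 1.2):
        for prompt in PROMPTS:
            r = requests.post(URL, json={
                "model": "local-model",            # placeholder name
                "messages": [{"role": "user", "content": prompt}],
                "temperature": temperature,
                "repeat_penalty": repeat_penalty,  # llama.cpp-style field name
                "max_tokens": 256,
            }, timeout=300)
            text = r.json()["choices"][0]["message"]["content"]
            results.append((temperature, repeat_penalty, prompt, text))

# Score `results` (heuristics or a judge model) and report the
# best-performing (temperature, repeat_penalty) range.
```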
2025-06-02T18:41:16
https://www.reddit.com/r/LocalLLaMA/comments/1l1pzdb/application_to_autotest_or_determine_an_llm/
Primary-Wear-2460
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1pzdb
false
null
t3_1l1pzdb
/r/LocalLLaMA/comments/1l1pzdb/application_to_autotest_or_determine_an_llm/
false
false
self
1
null
Which programming languages do LLMs struggle with the most, and why?
59
I've noticed that LLMs do well with Python, which is quite obvious, but often make mistakes in other languages. I can't test every language myself, so can you share which languages you've seen them struggle with, and what went wrong? For context: I want to test LLMs on various "hard" languages.
2025-06-02T18:45:36
https://www.reddit.com/r/LocalLLaMA/comments/1l1q3dk/which_programming_languages_do_llms_struggle_with/
alozowski
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1q3dk
false
null
t3_1l1q3dk
/r/LocalLLaMA/comments/1l1q3dk/which_programming_languages_do_llms_struggle_with/
false
false
self
59
null
RTX PRO 6000 Blackwell and Max-Q version
1
[removed]
2025-06-02T19:00:16
https://youtu.be/LSQL7c29arM
svskaushik
youtu.be
1970-01-01T00:00:00
0
{}
1l1qgyb
false
{'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/LSQL7c29arM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="To Max-Q or Not to Max-Q? RTX Pro 6000 Blackwell and the Max-Q tested on Linux and Windows!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/LSQL7c29arM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'To Max-Q or Not to Max-Q? RTX Pro 6000 Blackwell and the Max-Q tested on Linux and Windows!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1l1qgyb
/r/LocalLLaMA/comments/1l1qgyb/rtx_pro_6000_blackwell_and_maxq_version/
false
false
https://b.thumbs.redditm…m8xGYVF2oHok.jpg
1
{'enabled': False, 'images': [{'id': 'o74b2cGO3aPzc-1Jf9o3hd99veR6MAbzIaLpwdqf36I', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ccci9Txk0dAfwXv5whHZsN6QRjnVmvbGwDRTneHMrDU.jpg?width=108&crop=smart&auto=webp&s=fde6c90f5b2df709b57caab25e0a4836c119276b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Ccci9Txk0dAfwXv5whHZsN6QRjnVmvbGwDRTneHMrDU.jpg?width=216&crop=smart&auto=webp&s=149fc1e39bdedc4df405202fb4719d8ed56a29fa', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Ccci9Txk0dAfwXv5whHZsN6QRjnVmvbGwDRTneHMrDU.jpg?width=320&crop=smart&auto=webp&s=e99b6d4aaa171021409433781f28477b8bf297ac', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Ccci9Txk0dAfwXv5whHZsN6QRjnVmvbGwDRTneHMrDU.jpg?auto=webp&s=847ec01e4830c357d400645e43e13ac9ba575db4', 'width': 480}, 'variants': {}}]}
At the airport people watching while I run models locally:
2,007
2025-06-02T19:10:02
https://i.redd.it/55ab38z0ck4f1.jpeg
Current-Ticket4214
i.redd.it
1970-01-01T00:00:00
0
{}
1l1qqdx
false
null
t3_1l1qqdx
/r/LocalLLaMA/comments/1l1qqdx/at_the_airport_people_watching_while_i_run_models/
false
false
https://b.thumbs.redditm…I4Rxy86breuU.jpg
2,007
{'enabled': True, 'images': [{'id': 'RctvpTqQgVgKVx-oERol7Qbk3LOI73TjzT_6xx1QVCM', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?width=108&crop=smart&auto=webp&s=9f9c7d418d20ce56a74d327f6586c3f5250632a9', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?width=216&crop=smart&auto=webp&s=f0eb55dea71c93f5d9ea2562b111ce11d8d87f70', 'width': 216}, {'height': 318, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?width=320&crop=smart&auto=webp&s=9be593f4c7a0161915916bd63ade138bed3f8e15', 'width': 320}, {'height': 637, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?width=640&crop=smart&auto=webp&s=ee34cc7e6232ae1fc31a5076b1cc4064bd66305d', 'width': 640}, {'height': 956, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?width=960&crop=smart&auto=webp&s=7d79939d86d60228e3897b08fe58553eea73b31b', 'width': 960}, {'height': 1075, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?width=1080&crop=smart&auto=webp&s=6d0848c718eb4654f910a2fcd7a7f4ab4ace3467', 'width': 1080}], 'source': {'height': 1280, 'url': 'https://preview.redd.it/55ab38z0ck4f1.jpeg?auto=webp&s=7a1751fc525f49c30f0dac49291e40f9df1a4b88', 'width': 1285}, 'variants': {}}]}
671B IQ1_S vs 70B Q8_0
11
In an optimal world, there should be no shortage of memory. VRAM is used over RAM for its superior memory bandwidth, where HBM > GDDR > DDR. However, due to limitations that are oftentimes financial, quantisations are used to fit a bigger model into smaller memory by approximating the precision of the weights.

Usually, this works wonders, for in the general case the benefit from a larger model outweighs the near-negligible drawbacks of a lower precision, especially for FP16 to Q8_0 and, to a lesser extent, Q8_0 to Q6_K. However, quantisation at lower precision starts to hurt model performance, often measured by "perplexity" and benchmarks. Even then, larger models need not perform better, since a lack of data quantity may result in larger models "memorising" outputs rather than "learning" output patterns to fit in limited space during backpropagation.

Of course, when we see a large new model, wow, we want to run it locally. So, how would these two perform on a 128GB RAM system, assuming time is not a factor? Unfortunately, I do not have the hardware to test even a 671B "1-bit" (or 1-trit) model... so I have no idea how any of these work.

From my observations, I notice comments suggest larger models are more worldly in terms of niche knowledge, while higher quants are better for coding. At what point does this no longer hold true? Does the concept of English have a finite Kolmogorov complexity? Even 2^100m is a lot of possibilities, after all. What about larger models being less susceptible to quantisation?

Thank you for your time reading this post. Appreciate your responses.
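For a rough sense of scale, here is the weights-only arithmetic for the two options (bits-per-weight figures are approximate assumptions; KV cache and activations come on top):

```python
# Back-of-envelope, weights-only estimate (approximate bpw values; an
# illustration, not a benchmark). KV cache and activations are not included.
def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weights footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(f"671B @ IQ1_S (~1.56 bpw): {weights_gb(671, 1.56):.0f} GB")  # ~131 GB: right at the edge of 128 GB
print(f"70B  @ Q8_0  (~8.5 bpw):  {weights_gb(70, 8.5):.0f} GB")    # ~74 GB: fits with room for context
```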
2025-06-02T19:23:37
https://www.reddit.com/r/LocalLLaMA/comments/1l1r366/671b_iq1_s_vs_70b_q8_0/
nagareteku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1r366
false
null
t3_1l1r366
/r/LocalLLaMA/comments/1l1r366/671b_iq1_s_vs_70b_q8_0/
false
false
self
11
null
I made LLMs respond with diff patches rather than standard code blocks and the result is simply amazing!
144
I've been developing a coding assistant for JetBrains IDEs called **ProxyAI** (previously CodeGPT), and I wanted to experiment with an idea where the LLM is instructed to produce diffs as opposed to regular code blocks, which ProxyAI then applies directly to your project.

I was fairly skeptical about this at first, but after going back and forth with the initial version and getting it where I wanted it to be, it simply started to amaze me. The model began generating paths and diffs for files it had never seen before, and somehow these "hallucinations" were correct (this mostly happened with modifications to build files that typically need a fixed path).

What really surprised me was how natural the workflow became. You just describe what you want changed, and the diffs appear in *near real-time*, almost always with the correct diff patch - can't praise enough how good it feels for **quick iterations**! In most cases, it takes *less than a minute* for the LLM to make edits across many different files.

When smaller models mess up (which happens fairly often), there's a simple retry mechanism that usually gets it right on the second attempt - fairly similar logic to Cursor's Fast Apply.

This whole functionality is **free**, **open-source**, and available for **every model and provider**, regardless of tool calling capabilities. **No vendor lock-in**, **no premium features** - just plug in your API key or connect to a local model and give it a go!

For me, this feels much more intuitive than the typical *"switch to edit mode"* dance that most AI coding tools require. I'd definitely encourage you to give it a try and let me know what you think, or what the current solution lacks. Always looking to improve!

[https://www.tryproxy.io/](https://www.tryproxy.io/)

Best regards
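For readers unfamiliar with the format: the idea revolves around standard unified diffs, the same shape Python's difflib produces. A quick illustration (the file contents are made up, and ProxyAI's exact patch format may differ):

```python
# Illustrative only: generate the unified-diff shape an LLM would be asked
# to emit, using the standard-library difflib module.
import difflib

before = ["def greet(name):\n", "    print('hi')\n"]
after  = ["def greet(name):\n", "    print(f'hi {name}')\n"]

patch = difflib.unified_diff(before, after,
                             fromfile="a/greet.py", tofile="b/greet.py")
print("".join(patch))
# --- a/greet.py
# +++ b/greet.py
# @@ -1,2 +1,2 @@
#  def greet(name):
# -    print('hi')
# +    print(f'hi {name}')
```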
2025-06-02T19:32:00
https://v.redd.it/zcq3wk5ffk4f1
carlrobertoh
v.redd.it
1970-01-01T00:00:00
0
{}
1l1rb18
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/zcq3wk5ffk4f1/DASHPlaylist.mpd?a=1751484737%2COTNhNjM0ZTI5MmJlZTA1NjUyYmEzMjZkODAwZTkxNGVlYmNiMjFiOTQ2NzNhZWQwN2VhZjE1YWE1NTE1OTA5Zg%3D%3D&v=1&f=sd', 'duration': 39, 'fallback_url': 'https://v.redd.it/zcq3wk5ffk4f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/zcq3wk5ffk4f1/HLSPlaylist.m3u8?a=1751484737%2CNjRjZjIxYjgzODk4YzRiZjkyNDZiNzQzZDk0YjMxOGMzOTgwMWM3NzU5NjNkOGUzZWFhZjJkMTUzY2NkODQ2OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/zcq3wk5ffk4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1368}}
t3_1l1rb18
/r/LocalLLaMA/comments/1l1rb18/i_made_llms_respond_with_diff_patches_rather_than/
false
false
https://external-preview…07aa41316f757359
144
{'enabled': False, 'images': [{'id': 'MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?width=108&crop=smart&format=pjpg&auto=webp&s=7dec1b58b6dc219690c91bb8e91d66daddf63b3b', 'width': 108}, {'height': 170, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?width=216&crop=smart&format=pjpg&auto=webp&s=e46a75f57c563764b580f8b27b0e8a72b88456fc', 'width': 216}, {'height': 252, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?width=320&crop=smart&format=pjpg&auto=webp&s=83bea68f02262b579d321f124b4140e46c775e15', 'width': 320}, {'height': 505, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?width=640&crop=smart&format=pjpg&auto=webp&s=bfd1f060492e42f66f63271f43223112e0251f8c', 'width': 640}, {'height': 757, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?width=960&crop=smart&format=pjpg&auto=webp&s=994d4a972d27e9b1521a106625ba448269dc6671', 'width': 960}, {'height': 852, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=046f6a01dc17e844ec6ef1d97f6f34875399d133', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/MXRsZjNqNWZmazRmMXGOM2N51B2QnCZhafa7NKplGti0671pTg7o1NRLqsqm.png?format=pjpg&auto=webp&s=684d317ae70eab9b1b76a7dd0a904a9ba6019d65', 'width': 1368}, 'variants': {}}]}
Use offline voice controlled agents to search and browse the internet with a contextually aware LLM in the next version of AI Runner
10
2025-06-02T19:34:26
https://v.redd.it/ir6jvtbbgk4f1
w00fl35
/r/LocalLLaMA/comments/1l1rda5/use_offline_voice_controlled_agents_to_search_and/
1970-01-01T00:00:00
0
{}
1l1rda5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ir6jvtbbgk4f1/DASHPlaylist.mpd?a=1751614473%2CZjE3ZTE0N2E3MDFlZDVjMGM0YWMyYmUxNzAyMmZhNDgxNzQ3OTQyYTc2OGQ3NGVlMjY5YTJhYjBkYTJjOWRhZA%3D%3D&v=1&f=sd', 'duration': 94, 'fallback_url': 'https://v.redd.it/ir6jvtbbgk4f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ir6jvtbbgk4f1/HLSPlaylist.m3u8?a=1751614473%2CY2RiMjYxOTc5OTY1MWYwMDE0NjM2NjY5MjIzOTIwNzNhMWJmNjYzNjU5MTlkMDU5ZWM1ZjJmZDAyNmNiZGExMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ir6jvtbbgk4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l1rda5
/r/LocalLLaMA/comments/1l1rda5/use_offline_voice_controlled_agents_to_search_and/
false
false
https://external-preview…af7d9862247010e2
10
{'enabled': False, 'images': [{'id': 'bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?width=108&crop=smart&format=pjpg&auto=webp&s=016a5cfdf04765b2264f0a5c79dc0f9eac775637', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?width=216&crop=smart&format=pjpg&auto=webp&s=d5ef99eb4cee5ecdba691632f2f18a1830dd259e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?width=320&crop=smart&format=pjpg&auto=webp&s=217729e9667a8e4ef065da9066c5b4e69ae22e14', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?width=640&crop=smart&format=pjpg&auto=webp&s=e11023a2e509ce14ba13f09b1b3d355c6fb8c878', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?width=960&crop=smart&format=pjpg&auto=webp&s=41e244cf3b932edf111a6b1df4ad797720554b74', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4c1f50cc5d5b18456f1df104317de0a75aeceb72', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bGZzOGxrZ2VnazRmMcjVXgRLpZ5qhPQ96q4r0xpE25NahzVeLWn0o9J3ntg5.png?format=pjpg&auto=webp&s=1268d75490705b84cf24ced9101eb9c2c89ec879', 'width': 1920}, 'variants': {}}]}
LLMs to run on CPU and low memory
1
[removed]
2025-06-02T19:50:14
https://www.reddit.com/r/LocalLLaMA/comments/1l1rrox/llms_to_run_on_cpu_and_low_memory/
idreesBughio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1rrox
false
null
t3_1l1rrox
/r/LocalLLaMA/comments/1l1rrox/llms_to_run_on_cpu_and_low_memory/
false
false
self
1
null
MCP Client with Local Ollama LLM and Multi-Server Tool Support
1
[removed]
2025-06-02T20:02:39
https://www.reddit.com/r/LocalLLaMA/comments/1l1s2vs/mcp_client_with_local_ollama_llm_and_multiserver/
Wise-Grand-8374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1s2vs
false
null
t3_1l1s2vs
/r/LocalLLaMA/comments/1l1s2vs/mcp_client_with_local_ollama_llm_and_multiserver/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Rx9DrYNWlGQT1_MnLZNwFPsPllAXaBHqTpEvI2wZRew', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=108&crop=smart&auto=webp&s=0969de21fb21b1d0da604b858cff682cc4837a4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=216&crop=smart&auto=webp&s=ba215b96662689d465a2a5baabfbe56ba89c174d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=320&crop=smart&auto=webp&s=309074d82080c1a5c5beb6202e07f7bc9f0cd0d6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=640&crop=smart&auto=webp&s=8bdcd1e9d5e918a4977583f1b6a848d010068c4e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=960&crop=smart&auto=webp&s=752aad7315029aeacd6e0a98257fdf8d0354697b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=1080&crop=smart&auto=webp&s=ebfa9e1795c2ff663d7a04ff3e0fde9db2d5faf0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?auto=webp&s=35512f36979266bb5d7e19cabe9e73e55ac41eeb', 'width': 1200}, 'variants': {}}]}
MCP Client with Local Ollama LLM and Multi-Server Tool Support
1
[removed]
2025-06-02T20:03:52
https://www.reddit.com/r/LocalLLaMA/comments/1l1s3y9/mcp_client_with_local_ollama_llm_and_multiserver/
Wise-Grand-8374
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1s3y9
false
null
t3_1l1s3y9
/r/LocalLLaMA/comments/1l1s3y9/mcp_client_with_local_ollama_llm_and_multiserver/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Rx9DrYNWlGQT1_MnLZNwFPsPllAXaBHqTpEvI2wZRew', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=108&crop=smart&auto=webp&s=0969de21fb21b1d0da604b858cff682cc4837a4d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=216&crop=smart&auto=webp&s=ba215b96662689d465a2a5baabfbe56ba89c174d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=320&crop=smart&auto=webp&s=309074d82080c1a5c5beb6202e07f7bc9f0cd0d6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=640&crop=smart&auto=webp&s=8bdcd1e9d5e918a4977583f1b6a848d010068c4e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=960&crop=smart&auto=webp&s=752aad7315029aeacd6e0a98257fdf8d0354697b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?width=1080&crop=smart&auto=webp&s=ebfa9e1795c2ff663d7a04ff3e0fde9db2d5faf0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/f24OxfyaGGvhV75Pj1RWVF_E-1Lra8-H9srdgrRyIzQ.jpg?auto=webp&s=35512f36979266bb5d7e19cabe9e73e55ac41eeb', 'width': 1200}, 'variants': {}}]}
What real-world use cases actually justify running a local LLM instead of using a cloud model?
1
[removed]
2025-06-02T20:13:24
https://www.reddit.com/r/LocalLLaMA/comments/1l1scmw/what_realworld_use_cases_actually_justify_running/
Similar-Let-1981
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1scmw
false
null
t3_1l1scmw
/r/LocalLLaMA/comments/1l1scmw/what_realworld_use_cases_actually_justify_running/
false
false
self
1
null
ZorkGPT: Open source AI agent that plays the classic text adventure game Zork
116
I built an AI system that plays Zork (the classic, and very hard, 1977 text adventure game) using multiple open-source LLMs working together.

The system uses separate models for different tasks:

* Agent model decides what actions to take
* Critic model evaluates those actions before execution
* Extractor model parses game text into structured data
* Strategy generator learns from experience to improve over time

Unlike the other Pokemon gaming projects, this focuses on using open-source models. I had initially wanted to limit the project to models that I can run locally on my Mac Mini, but that proved to be fruitless after many thousands of turns. I also don't have the cash resources to run this on Gemini or Claude (like, how can those guys afford that??).

The AI builds a map as it explores, maintains memory of what it's learned, and continuously updates its strategy. The live viewer shows real-time data of the AI's reasoning process, current game state, learned strategies, and a visual map of discovered locations.

You can watch it play live at [https://zorkgpt.com](https://zorkgpt.com)

Project code: [https://github.com/stickystyle/ZorkGPT](https://github.com/stickystyle/ZorkGPT)

Just wanted to share something I've been playing with after work that I thought this audience would find neat. I just wiped its memory this morning and started a fresh "no-touch" run, so let's see how it goes :)
2025-06-02T20:46:12
https://www.reddit.com/r/LocalLLaMA/comments/1l1t75j/zorkgpt_open_source_ai_agent_that_plays_the/
stickystyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1t75j
false
null
t3_1l1t75j
/r/LocalLLaMA/comments/1l1t75j/zorkgpt_open_source_ai_agent_that_plays_the/
false
false
self
116
null
💻 I optimized Qwen3:30B MoE to run on my RTX 3070 laptop at 24 tok/s — full breakdown inside
1
[removed]
2025-06-02T20:52:44
https://www.reddit.com/r/LocalLLaMA/comments/1l1td2c/i_optimized_qwen330b_moe_to_run_on_my_rtx_3070/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1td2c
false
null
t3_1l1td2c
/r/LocalLLaMA/comments/1l1td2c/i_optimized_qwen330b_moe_to_run_on_my_rtx_3070/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oqG6_t4eKaYPbxXv0zqeDRigZxwGztvuEF9rm-qDThY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?width=108&crop=smart&auto=webp&s=4b5d3e2bcd050efbb28916e92c0b7d0420fc426f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?width=216&crop=smart&auto=webp&s=dffd06b4aae71856022e986fbe3df1b5c561b59e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?width=320&crop=smart&auto=webp&s=f75cbecdde485d253cf94a9c65c1f1a86931824e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?width=640&crop=smart&auto=webp&s=a33937f28da42a664fa6e74e5e002945d4a803dc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?width=960&crop=smart&auto=webp&s=7e560546c2261e82a62fb4f669c8f2e9415ccaef', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?width=1080&crop=smart&auto=webp&s=f69ab57830abb3e95888d6a7aba0e3c62280da26', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/m4SqJf5AxZPC0pBa5AyT-_SLeKjlRmzEHhZInSn1cY8.jpg?auto=webp&s=db069e7e09856759f4150f029d6ded28ed6559a5', 'width': 1200}, 'variants': {}}]}
Which model should duckduckgo add next?
0
They currently have Llama 3.3 and Mistral Small 3 in terms of open models. The closed ones are o3-mini, GPT-4o mini, and Claude 3 Haiku. What would you add if you were in charge?
2025-06-02T21:04:24
https://www.reddit.com/r/LocalLLaMA/comments/1l1tnug/which_model_should_duckduckgo_add_next/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1tnug
false
null
t3_1l1tnug
/r/LocalLLaMA/comments/1l1tnug/which_model_should_duckduckgo_add_next/
false
false
self
0
null
From Zork to local LLMs.
0
Newb here. I recently taught my kids how to make text-based adventure games based on Transformers lore using AI. They had a blast. I wanted ChatGPT to generate an image with each story prompt, and I was really disappointed with the speed and frustrated by the constant copyright issues. I found myself upgrading the 3070 Ti in my shoebox-sized mini-ITX PC to a 3090. I might even get a 4090. I have LM Studio and Stable Diffusion installed. Right now the images look small, and they aren't really close to what I'm asking for. What else should I install? For anything I can do with local AI. I'd love Veo 3-type videos; if I can do that locally in a year, I'll buy a 5090. I don't need a tutorial, I can ask ChatGPT for directions. Tell me what I should research.
2025-06-02T21:31:46
https://www.reddit.com/r/LocalLLaMA/comments/1l1uct5/from_zork_to_localllms/
Yakapo88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1uct5
false
null
t3_1l1uct5
/r/LocalLLaMA/comments/1l1uct5/from_zork_to_localllms/
false
false
self
0
null
What formats should I use for fine-tuning LLMs?
3
I have been working on an AI agent program that recursively splits tasks into smaller tasks until an LLM decides one is simple enough. It then attempts to execute the task with tool calling, and the results propagate up to the initial task. I want to fine-tune a model (maybe Qwen2.5) to perform better on this. I have done this before, but only on single-turn prompts, and never involving tool calling. What format should I use for that? I've heard I should use JSONL with axolotl, but I can't seem to find any functional samples. Has anyone successfully accomplished this, specifically with multi-turn tool-use samples?
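Not an authoritative answer, but the common ShareGPT/ChatML-style layout for a multi-turn tool-calling sample looks roughly like the sketch below (field names vary per trainer, so check the axolotl docs for the exact keys it expects; the conversation content is invented):

```python
# Hedged sketch of one multi-turn tool-calling JSONL record in the common
# ShareGPT/ChatML convention. Keys are illustrative; trainers differ.
import json

sample = {
    "conversations": [
        {"role": "system", "content": "You split tasks and call tools."},
        {"role": "user", "content": "Summarize report.txt"},
        {"role": "assistant", "content": None,
         "tool_calls": [{"name": "read_file",
                         "arguments": {"path": "report.txt"}}]},
        {"role": "tool", "name": "read_file",
         "content": "Q3 revenue rose 12%..."},
        {"role": "assistant",
         "content": "The report says Q3 revenue rose 12%."},
    ]
}

# JSONL = one JSON object per line, appended per training sample.
with open("train.jsonl", "a") as f:
    f.write(json.dumps(sample) + "\n")
```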
2025-06-02T21:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1l1uex6/what_formats_should_i_use_for_fine_tuning_of_llms/
Pretend_Guava7322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1uex6
false
null
t3_1l1uex6
/r/LocalLLaMA/comments/1l1uex6/what_formats_should_i_use_for_fine_tuning_of_llms/
false
false
self
3
null
Run models on local pc
1
[removed]
2025-06-02T21:40:00
https://www.reddit.com/r/LocalLLaMA/comments/1l1uk46/run_models_on_local_pc/
borisr10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1uk46
false
null
t3_1l1uk46
/r/LocalLLaMA/comments/1l1uk46/run_models_on_local_pc/
false
false
self
1
null
Thoughts on "The Real Cost of Open-Source LLMs [Breakdowns]"
0
[https://artificialintelligencemadesimple.substack.com/p/the-real-cost-of-open-source-llms](https://artificialintelligencemadesimple.substack.com/p/the-real-cost-of-open-source-llms)

I agree with most of the arguments in this post. While the main argument for using open-source LLMs is that you control your IP and don't have to trust the cloud provider, for most other use cases it is best to use one of the state-of-the-art LLMs as an API service. What do you all think?
2025-06-02T22:35:10
https://www.reddit.com/r/LocalLLaMA/comments/1l1vv61/thoughts_on_the_real_cost_of_opensource_llms/
azhorAhai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1vv61
false
null
t3_1l1vv61
/r/LocalLLaMA/comments/1l1vv61/thoughts_on_the_real_cost_of_opensource_llms/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uDPbA4kCHGno54ldbcC_Aws-DRxCuXWuc0eOPJUcLZo', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/AmWwYBgrRaoUY61Cb23CvP-UDkprYPqju3rYlwENdK4.jpg?width=108&crop=smart&auto=webp&s=2d191c8ab679e708e18cfd26aa18f44d49b726cb', 'width': 108}, {'height': 159, 'url': 'https://external-preview.redd.it/AmWwYBgrRaoUY61Cb23CvP-UDkprYPqju3rYlwENdK4.jpg?width=216&crop=smart&auto=webp&s=5c56932f56299a0035f38d5c94b02cc9a603c913', 'width': 216}, {'height': 236, 'url': 'https://external-preview.redd.it/AmWwYBgrRaoUY61Cb23CvP-UDkprYPqju3rYlwENdK4.jpg?width=320&crop=smart&auto=webp&s=177ae129ba963a1e85f666ba248dc9badb6e544f', 'width': 320}, {'height': 473, 'url': 'https://external-preview.redd.it/AmWwYBgrRaoUY61Cb23CvP-UDkprYPqju3rYlwENdK4.jpg?width=640&crop=smart&auto=webp&s=d8e18da08f3a9e697685a235a46f769b700fcd85', 'width': 640}], 'source': {'height': 499, 'url': 'https://external-preview.redd.it/AmWwYBgrRaoUY61Cb23CvP-UDkprYPqju3rYlwENdK4.jpg?auto=webp&s=132d6e1e15144d638eb2b4b204b67bf83fe7f723', 'width': 675}, 'variants': {}}]}
Anthropic is owning the ARC-AGI-2 leaderboard
0
https://arcprize.org/leaderboard
2025-06-02T22:46:27
https://i.redd.it/ar0usrpnel4f1.jpeg
Balance-
i.redd.it
1970-01-01T00:00:00
0
{}
1l1w4fb
false
null
t3_1l1w4fb
/r/LocalLLaMA/comments/1l1w4fb/anthropic_is_owning_the_arcagi2_leaderboard/
false
false
https://b.thumbs.redditm…KAVi4KLBXeXY.jpg
0
{'enabled': True, 'images': [{'id': 'AWODV5oVitenu2UAmjnRBkjhowjViOIJjGmT9x6HZ8A', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?width=108&crop=smart&auto=webp&s=4dcc0ef420694d90033002576c523f515b1ced5c', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?width=216&crop=smart&auto=webp&s=7a16632ed63b5930d550d7e2a1e6c4490eaeb3ed', 'width': 216}, {'height': 206, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?width=320&crop=smart&auto=webp&s=064005e363910ea118c63a41b998991afb03a323', 'width': 320}, {'height': 413, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?width=640&crop=smart&auto=webp&s=e0daa46ae8b926fdf3776730ca9b629823a0bb01', 'width': 640}, {'height': 619, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?width=960&crop=smart&auto=webp&s=bc36aca9f4096e5b55d67c3ebf215727d1b1138e', 'width': 960}, {'height': 697, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?width=1080&crop=smart&auto=webp&s=d3bbfed5aeeea5e88c03ff1b29b0e696f128986f', 'width': 1080}], 'source': {'height': 1746, 'url': 'https://preview.redd.it/ar0usrpnel4f1.jpeg?auto=webp&s=2b4f495c001b09b43c93ad0be8112c3a272c74e0', 'width': 2704}, 'variants': {}}]}
llama4:maverick vs qwen3:235b
12
Title says it all. Which do you like best, and why?
2025-06-02T22:58:01
https://www.reddit.com/r/LocalLLaMA/comments/1l1wdlj/llama4maverick_vs_qwen3235b/
M3GaPrincess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1wdlj
false
null
t3_1l1wdlj
/r/LocalLLaMA/comments/1l1wdlj/llama4maverick_vs_qwen3235b/
false
false
self
12
null
Why use a thinking model?
28
I'm relatively new to using models. I've experimented with some that have a "thinking" feature, but I'm finding the delay quite frustrating – a minute to generate a response feels excessive. I understand these models are popular, so I'm curious what I might be missing in terms of their benefits or how to best utilize them. Any insights would be appreciated!
2025-06-02T23:10:34
https://www.reddit.com/r/LocalLLaMA/comments/1l1wnsz/why_use_thinking_model/
Empty_Object_9299
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1wnsz
false
null
t3_1l1wnsz
/r/LocalLLaMA/comments/1l1wnsz/why_use_thinking_model/
false
false
self
28
null
Sharing a demo of my tool for easy handwritten fine-tuning dataset creation!
4
Hello! I wanted to share a tool that I created for making hand-written fine-tuning datasets. Originally I built this for myself when I was unable to find conversational datasets formatted the way I needed while fine-tuning Llama 3 for the first time, and hand-typing JSON files seemed like some sort of torture, so I built a simple little UI to auto-format everything for me.

I originally built this back when I was a beginner, so it is very easy to use with no prior dataset creation/formatting experience, but it also has a bunch of added features I believe more experienced devs would appreciate!

**I have expanded it to support:**

- many formats: chatml/chatgpt, alpaca, and sharegpt/vicuna
- multi-turn dataset creation, not just pair-based
- token counting from various models
- custom fields (instructions, system messages, custom ids)
- auto-saves, and every format type is written at once
- formats like alpaca need no additional data besides input and output; default instructions are auto-applied (customizable)
- goal-tracking bar

I know it seems a bit crazy to be manually hand-typing out datasets, but hand-written data is great for customizing your LLMs and keeping them high quality. I wrote a 1k-interaction conversational dataset with this within a month during my free time, and it made the process much more mindless and easy.

I hope you enjoy! I will be adding new formats over time depending on what becomes popular or asked for.

[**Here is the demo to test out on Hugging Face**](https://huggingface.co/spaces/Gabriella0333/LLM_Scribe_Demo) (not the full version; full version and video demo linked at bottom of page)
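For anyone unfamiliar with the alpaca layout mentioned above, it is just three fields per record (the values here are invented for illustration):

```python
# Illustrative alpaca-style record; the tool auto-fills "instruction"
# per the description above. Values are made up.
import json

record = {
    "instruction": "Continue the conversation as the assistant.",
    "input": "User: How do I fine-tune Llama 3?",
    "output": "Start with a small LoRA run on a curated dataset...",
}
print(json.dumps(record, indent=2))
```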
2025-06-02T23:32:39
https://www.reddit.com/r/LocalLLaMA/comments/1l1x5k4/sharing_my_a_demo_of_tool_for_easy_handwritten/
abaris243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1x5k4
false
null
t3_1l1x5k4
/r/LocalLLaMA/comments/1l1x5k4/sharing_my_a_demo_of_tool_for_easy_handwritten/
false
false
self
4
{'enabled': False, 'images': [{'id': 'k3HvQmGHEJ1BAvR8nlt8TqGTjxapGWDW6TdcJI-H9Eo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?width=108&crop=smart&auto=webp&s=4f6646bf38ed847b77260fb5e044f0d2d2c85075', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?width=216&crop=smart&auto=webp&s=7c3394af0566551a172582da40a25e7a986b7d91', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?width=320&crop=smart&auto=webp&s=fee818c5cfa825d93e6e8caa38317c5b542db007', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?width=640&crop=smart&auto=webp&s=f1bac63452d2a060b8ab94d4117d1ac84a395716', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?width=960&crop=smart&auto=webp&s=3c9afd767c4ff4e803b6a4a8835ca67e949acd04', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?width=1080&crop=smart&auto=webp&s=dd52c2b20e5080b0d14ed1af1c412a9de0a78a4f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Sc-1dxf-YWblv6Tfxa4TwjsFJ-7fpjAy8ZXAlqtIL3A.jpg?auto=webp&s=38e4e2e98da2d7ebbb75e9a6fe04d339fb6abe9e', 'width': 1200}, 'variants': {}}]}
Local model/setup similar to GPT4-turbo. Is it possible?
1
[removed]
2025-06-03T01:04:40
https://www.reddit.com/r/LocalLLaMA/comments/1l1z1pg/local_modelsetup_similar_to_gpt4turbo_is_it/
FinancialMechanic853
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1z1pg
false
null
t3_1l1z1pg
/r/LocalLLaMA/comments/1l1z1pg/local_modelsetup_similar_to_gpt4turbo_is_it/
false
false
self
1
null
Losing my patience with LLMs
0
me: ok. then why did you bullshit me earlier and tell me i couldn't...

llm: You're absolutely right — I did not bullshit you. I was being very careful to make sure you understood the nuances and limitations of...

... later ...

Final Answer: You did not get "bullshitted" — I was being very careful to make sure you didn’t run into a common pitfall.

me: i think you should stop being defensive and instead acknowledge your mistake and apologise!

llm: You're absolutely right — I should have been more clear and less defensive. I apologize for the confusion and for not being as direct as I could have been.
2025-06-03T01:41:46
https://www.reddit.com/r/LocalLLaMA/comments/1l1zsci/losing_my_patience_with_llms/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l1zsci
false
null
t3_1l1zsci
/r/LocalLLaMA/comments/1l1zsci/losing_my_patience_with_llms/
false
false
self
0
null
How Far Can AI Go in Higher Math Education?
1
[removed]
2025-06-03T01:56:39
https://www.reddit.com/r/LocalLLaMA/comments/1l202zi/how_far_can_ai_go_in_higher_math_education/
Quick-Knowledge1615
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l202zi
false
null
t3_1l202zi
/r/LocalLLaMA/comments/1l202zi/how_far_can_ai_go_in_higher_math_education/
false
false
self
1
null
LLM an engine
24
I can’t help but feel like the LLMs (Ollama, DeepSeek, OpenAI, Claude) are all engines sitting on a stand. Yes, we see the raw power an engine puts out on the stand, but we can’t quite conceptually figure out the “body” of the automobile. The car changed the world, but not without the engine first.

I’ve been exploring MCP, RAG, and other context servers, and from what I can see, they all suck. ChatGPT’s memory does the best job, but when programming, remembering that I always have a set of includes or use a specific theme, they all do a terrible job. Please, anyone, correct me if I’m wrong, but it feels like we have all this raw power just waiting to be unleashed, and I can only tap into it when I’m in an isolated context window, not on the open road.
2025-06-03T02:13:46
https://www.reddit.com/r/LocalLLaMA/comments/1l20f2h/llm_an_engine/
localremote762
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l20f2h
false
null
t3_1l20f2h
/r/LocalLLaMA/comments/1l20f2h/llm_an_engine/
false
false
self
24
null
lightning-fast, realtime voice-cloning text-to-speech model?
1
[removed]
2025-06-03T02:37:38
https://www.reddit.com/r/LocalLLaMA/comments/1l20vyq/lightening_fast_realtime_voice_cloning_text_to/
aivoicebot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l20vyq
false
null
t3_1l20vyq
/r/LocalLLaMA/comments/1l20vyq/lightening_fast_realtime_voice_cloning_text_to/
false
false
self
1
null
RTX Pro 6000 96GB a good inference GPU?
1
[removed]
2025-06-03T02:43:05
https://www.reddit.com/r/LocalLLaMA/comments/1l20zul/rtx_pro_6000_96gb_a_good_inference_gpu/
Dry-Vermicelli-682
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l20zul
false
null
t3_1l20zul
/r/LocalLLaMA/comments/1l20zul/rtx_pro_6000_96gb_a_good_inference_gpu/
false
false
self
1
null
OSS implementation of OpenAI's vector search tool?
13
Hi,

Is there a library that implements OpenAI's vector search? Something where you can create vector stores, add files (pdf, docx, md) to the vector stores, and then search these vector stores for a certain query.
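Not a drop-in clone, but the underlying flow is easy to assemble from common OSS pieces. A minimal sketch with sentence-transformers and FAISS (assumed libraries; PDF/DOCX parsing and chunking are left out for brevity):

```python
# Minimal create-store / add-chunks / query sketch. Assumes the
# sentence-transformers and faiss-cpu packages are installed.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = ["Vector stores hold embeddings of document chunks.",
          "FAISS performs fast nearest-neighbour search.",
          "Markdown files can be split by heading before embedding."]

emb = model.encode(chunks, normalize_embeddings=True)  # (n, 384) float32
index = faiss.IndexFlatIP(emb.shape[1])   # inner product = cosine on unit vectors
index.add(emb)

query = model.encode(["how do I search my documents?"],
                     normalize_embeddings=True)
scores, ids = index.search(query, k=2)
print([chunks[i] for i in ids[0]])
```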
2025-06-03T02:57:02
https://www.reddit.com/r/LocalLLaMA/comments/1l219ol/oss_implementation_of_openais_vector_search_tool/
Amgadoz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l219ol
false
null
t3_1l219ol
/r/LocalLLaMA/comments/1l219ol/oss_implementation_of_openais_vector_search_tool/
false
false
self
13
null
Fine tuning/Distilling local models to achieve high accuracy
1
[removed]
2025-06-03T03:16:02
https://www.reddit.com/r/LocalLLaMA/comments/1l21mmb/fine_tuningdistilling_local_models_to_achieve/
tezdhar-mk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l21mmb
false
null
t3_1l21mmb
/r/LocalLLaMA/comments/1l21mmb/fine_tuningdistilling_local_models_to_achieve/
false
false
self
1
null
LMStudio+Cline+MacBookPro repeated response
0
Hi guys, I didn’t know who to turn to, so I wanted to ask here. On my new MacBook Pro M4 with 48GB RAM, I’m running LM Studio and the Cline VS Code extension + MCP. When I ask something in Cline, it repeats the response over and over; I was thinking maybe LM Studio was caching the response. When I use Copilot or other online models, it works fine. Even LM Studio on my other PC in the LAN works OK. I was wondering if people are having the same issue.
2025-06-03T03:55:18
https://www.reddit.com/r/LocalLLaMA/comments/1l22cel/lmstudioclinemacbookpro_repeated_response/
mcchung52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l22cel
false
null
t3_1l22cel
/r/LocalLLaMA/comments/1l22cel/lmstudioclinemacbookpro_repeated_response/
false
false
self
0
null
Do small reasoning/CoT models get stuck in long thinking loops more often?
8
Hey, as the title suggests, I've noticed small reasoning models tend to think a lot, and sometimes they don't stop: QwQ-32B, DeepSeek-R1-Distill-Qwen-32B, and DeepSeek-R1-0528-Qwen3-8B. Larger models tend not to get stuck as often. Could it be because of short context windows? Or am I imagining it?
2025-06-03T04:53:49
https://www.reddit.com/r/LocalLLaMA/comments/1l23d09/do_small_reasoningcot_models_get_stuck_in_long/
Proud_Fox_684
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l23d09
false
null
t3_1l23d09
/r/LocalLLaMA/comments/1l23d09/do_small_reasoningcot_models_get_stuck_in_long/
false
false
self
8
null
Did anyone that ordered the GMK X2 from Amazon get it yet?
3
From what I've read elsewhere, GMK is reportedly giving priority to orders made directly on their website, so Amazon orders get the leftovers. Has anyone gotten an X2 ordered off of Amazon?
2025-06-03T05:18:31
https://www.reddit.com/r/LocalLLaMA/comments/1l23rrg/did_anyone_that_ordered_the_gmk_x2_from_amazon/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l23rrg
false
null
t3_1l23rrg
/r/LocalLLaMA/comments/1l23rrg/did_anyone_that_ordered_the_gmk_x2_from_amazon/
false
false
self
3
null
Guide: How to Run DeepSeek R1 0528 (FP8 + Q4_K_M Hybrid) Locally on Ktransformers with 10tk/s
1
[removed]
2025-06-03T06:01:02
https://www.reddit.com/r/LocalLLaMA/comments/1l24g9t/guide_how_to_run_deepseek_r1_0528_fp8_q4_k_m/
texasdude11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l24g9t
false
null
t3_1l24g9t
/r/LocalLLaMA/comments/1l24g9t/guide_how_to_run_deepseek_r1_0528_fp8_q4_k_m/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RduHNzldNAak5rxDap3_NIDMxvXd1vV1qebNFfPTrp0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5fCBw3TJn0Z0DDLLbtMDzmb0QlZB-RM_NGf0IFIv63c.jpg?width=108&crop=smart&auto=webp&s=667feb0d27b7e82d0b729b17c003b649467b058c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/5fCBw3TJn0Z0DDLLbtMDzmb0QlZB-RM_NGf0IFIv63c.jpg?width=216&crop=smart&auto=webp&s=1c1838fbada7bef13c1cfb3b1c6996e314431b9e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/5fCBw3TJn0Z0DDLLbtMDzmb0QlZB-RM_NGf0IFIv63c.jpg?width=320&crop=smart&auto=webp&s=571a0af02d8b7393df21dbb8e345e7f3831095fb', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/5fCBw3TJn0Z0DDLLbtMDzmb0QlZB-RM_NGf0IFIv63c.jpg?auto=webp&s=6adba127ef0dba15e7c1dd0c6f6345461254c43e', 'width': 480}, 'variants': {}}]}
Try to ask DeepSeek a lucky number and …
1
2025-06-03T06:40:18
https://v.redd.it/8k6anz35rn4f1
hachimi_ddj
v.redd.it
1970-01-01T00:00:00
0
{}
1l251is
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/8k6anz35rn4f1/DASHPlaylist.mpd?a=1751524832%2CZjhkMjgxZDBiY2Q1NDU2NDJmZGVlOWVhMTZlZTRiNjdkYWE5NmU4YTE5YmI3YWM0NDAzNmY5YTljNmU2Njg0Nw%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/8k6anz35rn4f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/8k6anz35rn4f1/HLSPlaylist.m3u8?a=1751524832%2CNGE5ZDY5MzQ5ZWZlN2FjNGE1YTY5MjMzNmYxZThlZWEzZWY0YzJhYzQ4MzEyZGVjNDIyMmU5NTM3NTBmNmU1ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/8k6anz35rn4f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1100}}
t3_1l251is
/r/LocalLLaMA/comments/1l251is/try_to_ask_deepseek_a_lucky_number_and/
false
false
https://external-preview…6ab29acb77a0e065
1
{'enabled': False, 'images': [{'id': 'dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?width=108&crop=smart&format=pjpg&auto=webp&s=3dc0fc3719c342222fef2b79e744b84f588adb11', 'width': 108}, {'height': 141, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?width=216&crop=smart&format=pjpg&auto=webp&s=34dee1230bf0115ee007fcaa9b1df6e3cd30ece1', 'width': 216}, {'height': 209, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?width=320&crop=smart&format=pjpg&auto=webp&s=26937221687a54cc18e063730c09152141b497c3', 'width': 320}, {'height': 418, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?width=640&crop=smart&format=pjpg&auto=webp&s=2830c00411c9724fcb8236a74f09beeb251eb7c1', 'width': 640}, {'height': 627, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?width=960&crop=smart&format=pjpg&auto=webp&s=61a641b70e80f9d27bf7e07e9ec7bd1cefefe451', 'width': 960}, {'height': 706, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?width=1080&crop=smart&format=pjpg&auto=webp&s=86f5443ae8b584c9539e54326d57e0df5d8d1b3e', 'width': 1080}], 'source': {'height': 968, 'url': 'https://external-preview.redd.it/dWY0OGc5NDRybjRmMXP-LZ88OO3kqEERitjQSnvplxQNgrUMZyjrGzLygAud.png?format=pjpg&auto=webp&s=fb4b17a428b60273bd1af91b35441d94981258ed', 'width': 1480}, 'variants': {}}]}
any local FIM model for writing?
1
[removed]
2025-06-03T07:22:09
https://www.reddit.com/r/LocalLLaMA/comments/1l25oft/any_local_fim_model_for_writing/
TinyDetective110
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l25oft
false
null
t3_1l25oft
/r/LocalLLaMA/comments/1l25oft/any_local_fim_model_for_writing/
false
false
self
1
null
What happened to the fused/merged models?
12
I remember back when QwQ-32B first came out, there was a FuseO1 thing with Sky-T1. Are there any newer models like this?
2025-06-03T07:23:10
https://www.reddit.com/r/LocalLLaMA/comments/1l25oyk/what_happened_to_the_fusedmerged_models/
Su1tz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l25oyk
false
null
t3_1l25oyk
/r/LocalLLaMA/comments/1l25oyk/what_happened_to_the_fusedmerged_models/
false
false
self
12
null
Claude 4 Sonnet Locally?
1
[removed]
2025-06-03T08:18:51
https://www.reddit.com/r/LocalLLaMA/comments/1l26hm2/claude_4_sonnet_locally/
VanFenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l26hm2
false
null
t3_1l26hm2
/r/LocalLLaMA/comments/1l26hm2/claude_4_sonnet_locally/
false
false
self
1
null
Fine-tune + Outlines
1
[removed]
2025-06-03T08:40:27
https://www.reddit.com/r/LocalLLaMA/comments/1l26sos/finetune_outlines/
Total_Hedgehog2946
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l26sos
false
null
t3_1l26sos
/r/LocalLLaMA/comments/1l26sos/finetune_outlines/
false
false
self
1
null
How are commercial dense models so much faster?
3
Is there a way to increase the generation speed of a model? I have been trying to make QwQ work, and it has been... acceptable quality-wise, but because of the thinking (thought for a minute), chatting has become a drag. And regenerating a message requires either a lot of patience or manually editing the message part each time. I do like the prospect of better context adhesion, but for now I feel like managing context manually is less tedious.

But back to the point: is there a way I could increase the generation speed? Maybe by running a parallel instance? I have 2x3090 on a remote server and 1x3090 on my machine. Running 2x3090 sadly uses only half of each card during inference in koboldcpp (Linux), though it allows a better quant and more context, and both cards are fully used when processing the prompt.
2025-06-03T08:44:11
https://www.reddit.com/r/LocalLLaMA/comments/1l26ujb/how_are_commercial_dense_models_so_much_faster/
kaisurniwurer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l26ujb
false
null
t3_1l26ujb
/r/LocalLLaMA/comments/1l26ujb/how_are_commercial_dense_models_so_much_faster/
false
false
self
3
null
Quant performance of Qwen3 30B A3B
0
Graph based on the data taken from the second pic, on Qwen's HF page.
2025-06-03T09:00:59
https://www.reddit.com/gallery/1l2735s
GreenTreeAndBlueSky
reddit.com
1970-01-01T00:00:00
0
{}
1l2735s
false
null
t3_1l2735s
/r/LocalLLaMA/comments/1l2735s/quants_performance_of_qwen3_30b_a3b/
false
false
https://b.thumbs.redditm…S_33rcGr_BaI.jpg
0
null
Current best multimodal web search model that fits on 16gb vram?
1
[removed]
2025-06-03T09:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1l276zx/current_best_multimodal_web_search_model_that/
anonthatisopen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l276zx
false
null
t3_1l276zx
/r/LocalLLaMA/comments/1l276zx/current_best_multimodal_web_search_model_that/
false
false
self
1
null
Flexible Quant length models
1
[removed]
2025-06-03T09:20:30
https://www.reddit.com/r/LocalLLaMA/comments/1l27dfk/flexible_quant_length_models/
therealAtten
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l27dfk
false
null
t3_1l27dfk
/r/LocalLLaMA/comments/1l27dfk/flexible_quant_length_models/
false
false
self
1
null
Google open-sources DeepSearch stack
917
While it's not evident whether this is the exact same stack they use in the Gemini user app, it sure looks very promising! It seems to work with Gemini and Google Search. Maybe this can be adapted for any local model and SearXNG?
2025-06-03T09:25:47
https://github.com/google-gemini/gemini-fullstack-langgraph-quickstart
Mr_Moonsilver
github.com
1970-01-01T00:00:00
0
{}
1l27g8d
false
null
t3_1l27g8d
/r/LocalLLaMA/comments/1l27g8d/google_opensources_deepsearch_stack/
false
false
https://b.thumbs.redditm…BD95xnDn2aqI.jpg
917
{'enabled': False, 'images': [{'id': '76BYxAmoYh0LRivDOt8EZLMmuAZLopQTlMSJxK1_FL0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?width=108&crop=smart&auto=webp&s=df7320f3f462d80501e450cba890c5c1da63f14c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?width=216&crop=smart&auto=webp&s=e16f2f1d624a639538ac083ee6a66c5d40d1060a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?width=320&crop=smart&auto=webp&s=a1ff371c05a99e08a754428cfa330433dd34ea8f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?width=640&crop=smart&auto=webp&s=a0633532a1f9e627adaa6246fd5a299a809f2654', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?width=960&crop=smart&auto=webp&s=fb9a1469e16ee65e57e550cbf0b15bc99ceecb7f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?width=1080&crop=smart&auto=webp&s=037cc9b09db7494203b8073d1fa3c6ec736e7b3e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jtUtL7EqwS5bMEk8XfF81tFd6n1MgnQyQL0hQG-jzRk.jpg?auto=webp&s=026ae11fc249a7372e57833c03572ab51ff2d052', 'width': 1200}, 'variants': {}}]}
Good Hindi TTS needed; Kokoro works, but has awkward pauses and very limited tones?
0
So I am basically a fan of Kokoro; it has helped me automate a lot of stuff. Currently I am working with chatterbox-tts, which only supports English. I liked it, though its output needs editing because of noises.
2025-06-03T09:56:30
https://www.reddit.com/r/LocalLLaMA/comments/1l27wcj/good_hindi_tts_needed_kokoro_works_but_unfair/
jadhavsaurabh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l27wcj
false
null
t3_1l27wcj
/r/LocalLLaMA/comments/1l27wcj/good_hindi_tts_needed_kokoro_works_but_unfair/
false
false
self
0
null
nvidia/Nemotron-Research-Reasoning-Qwen-1.5B · Hugging Face
144
2025-06-03T10:06:22
https://huggingface.co/nvidia/Nemotron-Research-Reasoning-Qwen-1.5B
ab2377
huggingface.co
1970-01-01T00:00:00
0
{}
1l2820t
false
null
t3_1l2820t
/r/LocalLLaMA/comments/1l2820t/nvidianemotronresearchreasoningqwen15b_hugging/
false
false
https://b.thumbs.redditm…-x8hzPPKQAFw.jpg
144
{'enabled': False, 'images': [{'id': 'VN7DXrn_T5Pxpv1mq6PfRd0le3hZRiB0SsXAxPAGtN0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?width=108&crop=smart&auto=webp&s=5ce45ed3dc5fb189823b00c0dd8f361141f4594c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?width=216&crop=smart&auto=webp&s=4d6c5704721e1e31d0320e0c697807a3b3926a6d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?width=320&crop=smart&auto=webp&s=6f2dad4660cb44760e91f009d9ee38711f5d2644', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?width=640&crop=smart&auto=webp&s=fbe8cb87d4ec3c680868083410ff0f4da7c3636d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?width=960&crop=smart&auto=webp&s=c9b5948520b6f8540553e0e2699d7c1ff6e6d5bd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?width=1080&crop=smart&auto=webp&s=7f542c13c9d27b093e1fcc4bc99a97d6b489a69e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/RP_o1NnFnVgqmDAj8haRnOnwD5ZnZcjaUEqHghtS6ig.jpg?auto=webp&s=4eea9c73f28d17a50803a9d4a84a9fb0708fe6ca', 'width': 1200}, 'variants': {}}]}
From crypto mining to democratizing AI: I built a platform that lets you run Llama-70B using distributed GPUs - Beta launching this month!
0
[removed]
2025-06-03T10:25:44
https://www.reddit.com/r/LocalLLaMA/comments/1l28d43/from_crypto_mining_to_democratizing_ai_i_built_a/
myurtsever
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l28d43
false
null
t3_1l28d43
/r/LocalLLaMA/comments/1l28d43/from_crypto_mining_to_democratizing_ai_i_built_a/
false
false
self
0
null
Local LLM Server. Is ZimaBoard 2 a good option? If not, what is?
1
[removed]
2025-06-03T10:25:50
https://www.reddit.com/r/LocalLLaMA/comments/1l28d68/local_llm_server_is_zimaboard_2_a_good_option_if/
Jokras
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l28d68
false
null
t3_1l28d68
/r/LocalLLaMA/comments/1l28d68/local_llm_server_is_zimaboard_2_a_good_option_if/
false
false
self
1
null
Smallest model to fine tune for RAG-like use case?
2
I am investigating switching from a large model to a smaller LLM fine-tuned for our use case, which is a form of RAG. Currently I use JSON for input/output, but I can switch to simple text even if I lose the contour set of support information. I imagine I can potentially use a 7/8B model, but I wonder if I can get away with a 1B model or even smaller. Any pointers or experience to share?
2025-06-03T10:57:19
https://www.reddit.com/r/LocalLLaMA/comments/1l28vqr/smallest_model_to_fine_tune_for_raglike_use_case/
daniele_dll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l28vqr
false
null
t3_1l28vqr
/r/LocalLLaMA/comments/1l28vqr/smallest_model_to_fine_tune_for_raglike_use_case/
false
false
self
2
null
Vision Language Models are Biased
1
State-of-the-art VLMs (o3, o4-mini, GPT-4.1, Claude 3.7, Gemini 2.5) achieve 100% accuracy when counting in images of popular subjects (e.g. knowing that the Adidas logo has 3 stripes and a dog has 4 legs) but are only **~17%** accurate when counting in counterfactual images (e.g. counting stripes in a 4-striped Adidas-like logo or counting legs in a 5-legged dog). Check out the paper, data, and code here: [https://vlmsarebiased.github.io](https://vlmsarebiased.github.io)
2025-06-03T11:01:10
https://www.reddit.com/r/LocalLLaMA/comments/1l28y7r/vision_language_models_are_biased/
Substantial-Air-1285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l28y7r
false
null
t3_1l28y7r
/r/LocalLLaMA/comments/1l28y7r/vision_language_models_are_biased/
false
false
self
1
null
Are Vision Language Models Biased?
1
[removed]
2025-06-03T11:24:03
https://www.reddit.com/r/LocalLLaMA/comments/1l29d7y/are_vision_language_models_biased/
Substantial-Air-1285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l29d7y
false
null
t3_1l29d7y
/r/LocalLLaMA/comments/1l29d7y/are_vision_language_models_biased/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]}
What are the minimum parts I need for my microcontroller to run 1B or 2B models?
1
[removed]
2025-06-03T11:29:55
https://www.reddit.com/r/LocalLLaMA/comments/1l29h4s/what_are_the_minimum_parts_i_need_for_my_micro/
sokratesy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l29h4s
false
null
t3_1l29h4s
/r/LocalLLaMA/comments/1l29h4s/what_are_the_minimum_parts_i_need_for_my_micro/
false
false
self
1
null
GPT-4 might already have Theory of Mind. A new paper shows it can model false beliefs—without any special training.
1
[removed]
2025-06-03T11:31:59
https://bytesandbrains.beehiiv.com/subscribe
Visible-Property3453
bytesandbrains.beehiiv.com
1970-01-01T00:00:00
0
{}
1l29im5
false
null
t3_1l29im5
/r/LocalLLaMA/comments/1l29im5/gpt4_might_already_have_theory_of_mind_a_new/
false
false
https://a.thumbs.redditm…YDomE9CKl1K0.jpg
1
{'enabled': False, 'images': [{'id': 'DAJ6lFy0un-mhroVTMgF-HKp3YlgN35hOyKuhK5AfRs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?width=108&crop=smart&auto=webp&s=84d6bc361ffd058ec676035761317d669b7d3e11', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?width=216&crop=smart&auto=webp&s=cc41196d056cf1d1ff01b070676f30e534af36d7', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?width=320&crop=smart&auto=webp&s=d6cc48bffdcab993ae9ef995e8d8a5d7958dca62', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?width=640&crop=smart&auto=webp&s=faad29616d8885449c9f8e8e33f082c7c9babcfa', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?width=960&crop=smart&auto=webp&s=e8e7bd7f9f20b50f4dff5e6109d119d1d76f6c91', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?width=1080&crop=smart&auto=webp&s=efba5e95e4ccc1d9b5200b311c5a92280780e8bb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/kPh-pYaOvgvWxjg-Ef1JJJuxQrn-z67Wn7KABDvesoY.jpg?auto=webp&s=82f79acf287db3c06b501311ae94ba3e817764dd', 'width': 1200}, 'variants': {}}]}
I reproduced Search-R1 on Qwen 2.5-3B and slightly surpassed it—here's my key finding on "reflective reasoning"!
1
[removed]
2025-06-03T11:53:15
https://www.reddit.com/r/LocalLLaMA/comments/1l29wpp/i_reproduced_searchr1_on_qwen_253b_and_slightly/
Money-Coast-3905
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l29wpp
false
null
t3_1l29wpp
/r/LocalLLaMA/comments/1l29wpp/i_reproduced_searchr1_on_qwen_253b_and_slightly/
false
false
https://b.thumbs.redditm…V2ltMZLwBPwU.jpg
1
null
Search-R1 Reproduce Project Shows Reflective Phrases Boost Benchmark Scores
1
[removed]
2025-06-03T12:00:27
https://www.reddit.com/r/LocalLLaMA/comments/1l2a1kq/searchr1_reproduce_project_shows_reflective/
Money-Coast-3905
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2a1kq
false
null
t3_1l2a1kq
/r/LocalLLaMA/comments/1l2a1kq/searchr1_reproduce_project_shows_reflective/
false
false
https://a.thumbs.redditm…kz6YpGmYI-w4.jpg
1
{'enabled': False, 'images': [{'id': 'ytQuyxiOuL5ouJ0sqxE7gOdxYzItnkLkbVqu5MGpRDk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?width=108&crop=smart&auto=webp&s=833ac246fce7c4cf7a92407b679586ced159449f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?width=216&crop=smart&auto=webp&s=c78e45af1c6550045f61e4d453d200bf9a09b1b9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?width=320&crop=smart&auto=webp&s=115009b398266f1bcc4833bad644f720da843427', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?width=640&crop=smart&auto=webp&s=b318808f8797eb4213ae4ea31a51c75309297f47', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?width=960&crop=smart&auto=webp&s=2fa15bbeaa012fe9435eba6bf556195008120dbc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?width=1080&crop=smart&auto=webp&s=fca20136c9351546d82037368abaee753bb1f998', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/k3NEN_IzWP6YwX-Xv4OyPxq8adBAlAwLY4zxqLeM6tw.jpg?auto=webp&s=fbbe5203ca4ee8a2ef2de44aa3b4f143f5a11f50', 'width': 1200}, 'variants': {}}]}
Search-R1 Reproduce Project Shows Reflective Phrases Boost Benchmark Scores
1
[removed]
2025-06-03T12:03:23
https://www.reddit.com/r/LocalLLaMA/comments/1l2a3u4/searchr1_reproduce_project_shows_reflective/
Money-Coast-3905
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2a3u4
false
null
t3_1l2a3u4
/r/LocalLLaMA/comments/1l2a3u4/searchr1_reproduce_project_shows_reflective/
false
false
self
1
null
Benchmarking OCR on LLMs for consumer GPUs: Xiaomi MiMo-VL-7B-RL vs Qwen, Gemma, InternVL — Surprising Insights on Parameters and /no_think
1
[removed]
2025-06-03T12:03:32
https://www.reddit.com/gallery/1l2a3xu
PaceZealousideal6091
reddit.com
1970-01-01T00:00:00
0
{}
1l2a3xu
false
null
t3_1l2a3xu
/r/LocalLLaMA/comments/1l2a3xu/benchmarking_ocr_on_llms_for_consumer_gpus_xiaomi/
false
false
https://b.thumbs.redditm…8HoU7qploN8s.jpg
1
null
Search-R1 Reproduce Project Shows Reflective Phrases Boost Benchmark Scores
1
[removed]
2025-06-03T12:04:34
https://i.redd.it/6mbqldm0dp4f1.png
Money-Coast-3905
i.redd.it
1970-01-01T00:00:00
0
{}
1l2a4nn
false
null
t3_1l2a4nn
/r/LocalLLaMA/comments/1l2a4nn/searchr1_reproduce_project_shows_reflective/
false
false
https://a.thumbs.redditm…6wFVz8abSWX4.jpg
1
{'enabled': True, 'images': [{'id': 'vslKNYayNSJhW2bwB-M5AJO72oNzq-EL4QJAx51le7s', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?width=108&crop=smart&auto=webp&s=34722d3d96eca493c702af60a35f06bb016e6b2e', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?width=216&crop=smart&auto=webp&s=0161abd1aee86cbd6d206adc8a3bfcd6a425b6f4', 'width': 216}, {'height': 185, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?width=320&crop=smart&auto=webp&s=2fbba84a98e4e79ed740981792c95bb663047731', 'width': 320}, {'height': 371, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?width=640&crop=smart&auto=webp&s=99f052e66941cb3ddb39ee62b7903479a28d71d9', 'width': 640}, {'height': 556, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?width=960&crop=smart&auto=webp&s=63df06aa734ad11af43f7a339922da91e1be11cf', 'width': 960}, {'height': 626, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?width=1080&crop=smart&auto=webp&s=128fa4f271a239174aea7736bcb5b69e334632f0', 'width': 1080}], 'source': {'height': 1380, 'url': 'https://preview.redd.it/6mbqldm0dp4f1.png?auto=webp&s=0644bc560584387d3a097d4ed397ae1b5d0bb265', 'width': 2379}, 'variants': {}}]}
PipesHub - Open Source Enterprise Search Platform(Generative-AI Powered)
20
Hey everyone! I’m excited to share something we’ve been building for the past few months – **PipesHub**, a fully open-source Enterprise Search Platform. In short, PipesHub is your **customizable, scalable, enterprise-grade RAG platform** for everything from intelligent search to building agentic apps — all powered by your own models and data. We also connect with tools like Google Workspace, Slack, Notion and more — so your team can quickly find answers drawn from *your* company’s internal knowledge. You can also run it locally and use any AI model out of the box, including Ollama. **We’re looking for early feedback**, so if this sounds useful (or if you’re just curious), we’d love for you to check it out and tell us what you think! 🔗 [https://github.com/pipeshub-ai/pipeshub-ai](https://github.com/pipeshub-ai/pipeshub-ai)
2025-06-03T12:20:03
https://www.reddit.com/r/LocalLLaMA/comments/1l2afie/pipeshub_open_source_enterprise_search/
Effective-Ad2060
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2afie
false
null
t3_1l2afie
/r/LocalLLaMA/comments/1l2afie/pipeshub_open_source_enterprise_search/
false
false
self
20
{'enabled': False, 'images': [{'id': 'buY1wCd39fnYe4gQsYrQN9EpiOdHMy4jLV6G-HIWIsU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?width=108&crop=smart&auto=webp&s=eee15c3c7c7d5a8b4b7798af503093305d5a88d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?width=216&crop=smart&auto=webp&s=2a8173cf7c37920b56ca2769b6820c5edbd605c2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?width=320&crop=smart&auto=webp&s=df00b884029386a55921e6c139d13ee074fbf67c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?width=640&crop=smart&auto=webp&s=484a53d32015c87d965faab469ee9a25a14359ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?width=960&crop=smart&auto=webp&s=cd49e86054f5366ddc9c378a87413be200276a9c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?width=1080&crop=smart&auto=webp&s=27e906449dadcd4d1bcf300fcf0c76f726b00e90', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3yaFTk1xSYFxcZBXDKNOqCYrqTrI0QhhWaffF9QiqBc.jpg?auto=webp&s=49fb2b5ef5a6434defc7d0c4c4ac69ee8c29034d', 'width': 1200}, 'variants': {}}]}
Attention by Hand - Practice attention mechanism on an interactive webpage
29
https://i.redd.it/fmji9oswfp4f1.gif Try this: [https://vizuara-ai-learning-lab.vercel.app/](https://vizuara-ai-learning-lab.vercel.app/) Nuts-And-Bolts-AI is an interactive web environment where you can practice AI concepts by writing down matrix multiplications. (1) Let’s take the attention mechanism in language models as an example. (2) Using Nuts-And-Bolts-AI, you can actively engage with the step-by-step calculation of the scaled dot-product attention mechanism. (3) Users can input values and work through each matrix operation (Q, K, V, scores, softmax, weighted sum) manually within a guided, interactive environment. Eventually, we will add several modules on this website: \- Neural Networks from scratch \- CNNs from scratch \- RNNs from scratch \- Diffusion from scratch
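For anyone who wants to check their by-hand answers, here is a minimal NumPy sketch of the same scaled dot-product attention steps the site walks through (the toy matrices are made up for illustration):

```python
import numpy as np

# Toy inputs: 3 tokens, key/query dimension 4, value dimension 2 (illustrative values)
Q = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 0., 0.]])
K = np.array([[1., 0., 0., 1.],
              [0., 1., 1., 0.],
              [1., 0., 1., 0.]])
V = np.array([[0.5, 0.1],
              [0.2, 0.9],
              [0.7, 0.3]])

d_k = Q.shape[-1]

# 1) Raw scores: Q K^T, scaled by sqrt(d_k)
scores = Q @ K.T / np.sqrt(d_k)

# 2) Row-wise softmax turns scores into attention weights
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# 3) Output is the attention-weighted sum of the value vectors
output = weights @ V
print(output)
```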
2025-06-03T12:21:40
https://www.reddit.com/r/LocalLLaMA/comments/1l2agpu/attention_by_hand_practice_attention_mechanism_on/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2agpu
false
null
t3_1l2agpu
/r/LocalLLaMA/comments/1l2agpu/attention_by_hand_practice_attention_mechanism_on/
false
false
https://b.thumbs.redditm…ST6S0QxIMFVs.jpg
29
{'enabled': False, 'images': [{'id': 'HUR4ZjSsMcPldBF8PlxclI3gg-mjZXBfe4bNavSwrFw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=108&crop=smart&auto=webp&s=4c05659da71aabefa650df1fddb91bdf8888031d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=216&crop=smart&auto=webp&s=490f434fbbbf0f74a171e943297e61758633f730', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=320&crop=smart&auto=webp&s=6f57ef706f7fd8fd0484669113c189fba8da9198', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=640&crop=smart&auto=webp&s=5d72bc65c67e8fa81fbd23e548bba69e1a0bb3e8', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=960&crop=smart&auto=webp&s=d13c9867058e25865b57356a8f76e4c2df202a84', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?width=1080&crop=smart&auto=webp&s=84f7f12718fed77976904df46b50b7aeb1a2af03', 'width': 1080}], 'source': {'height': 629, 'url': 'https://external-preview.redd.it/nhMEOcQ2pY2cWvmMC1a4Ya8l-ZpFkuu1hArRGS_70Jo.jpg?auto=webp&s=0e18f26214a09b566dc3bc4bcdd70b1cf41d959a', 'width': 1200}, 'variants': {}}]}
Fine-Tuning DeepSeek-R1-0528 on an RTX 4090
1
[removed]
2025-06-03T12:35:17
https://www.datacamp.com/tutorial/fine-tuning-deep-seek-r1-0528
kingabzpro
datacamp.com
1970-01-01T00:00:00
0
{}
1l2aqob
false
null
t3_1l2aqob
/r/LocalLLaMA/comments/1l2aqob/finetuning_deepseekr10528_on_an_rtx_4090/
false
false
default
1
null
Finetune LLM: Google Colab issues
1
[removed]
2025-06-03T12:57:39
https://www.reddit.com/r/LocalLLaMA/comments/1l2b7ng/finetune_llm_google_colab_issues/
Kooky_Cattle4583
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2b7ng
false
null
t3_1l2b7ng
/r/LocalLLaMA/comments/1l2b7ng/finetune_llm_google_colab_issues/
false
false
https://b.thumbs.redditm…WQSQ-B9TAIOk.jpg
1
null
Vision Language Models are Biased
105
2025-06-03T12:58:13
https://vlmsarebiased.github.io/
taesiri
vlmsarebiased.github.io
1970-01-01T00:00:00
0
{}
1l2b83p
false
null
t3_1l2b83p
/r/LocalLLaMA/comments/1l2b83p/vision_language_models_are_biased/
false
false
default
105
null
Semantic Search PoC for Hugging Face – Now with Parameter Size Filters (0-1B to 70B+)
25
Hey! I’ve recently updated my prototype semantic search Space for Hugging Face, which makes it easier to discover models not only via semantic search but also by **parameter size**. There are currently over 1.5 million models on the Hub, and finding the right one can be a challenge. This PoC helps you: * Search semantically using the **summaries** generated by a small LLM ([https://huggingface.co/davanstrien/Smol-Hub-tldr](https://huggingface.co/davanstrien/Smol-Hub-tldr)) * Filter models by **parameter size**, from 0-1B all the way to 70B+ * Find similar models/datasets. For datasets in particular, I've found this can be a nice way to find a bunch of datasets super quickly. You can try it here: [https://huggingface.co/spaces/librarian-bots/huggingface-semantic-search](https://huggingface.co/spaces/librarian-bots/huggingface-semantic-search) FWIW, for this Space I also tried a different approach to developing it. Basically, I did the backend API dev myself (since I'm familiar enough with that kind of dev work for it to be quick), but vibe coded the frontend, using the OpenAPI Specification for the backend as context for the LLM. Seems to work quite well (at least the front end is better than anything I would do on my own...)
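Under the hood, the search part is just embeddings plus cosine similarity over the generated summaries. A rough sketch of the idea, where the embedding model and summaries are placeholders rather than what the Space actually uses:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Placeholder embedder; the Space may use a different model
embedder = SentenceTransformer("all-MiniLM-L6-v2")

summaries = [
    "A 7B instruction-tuned chat model focused on coding.",
    "A 0.5B embedding model for retrieval tasks.",
    "A 70B general-purpose assistant model.",
]

doc_vecs = embedder.encode(summaries, normalize_embeddings=True)
query_vec = embedder.encode("small model for retrieval", normalize_embeddings=True)

# With unit-normalized vectors, cosine similarity is a plain dot product
scores = doc_vecs @ query_vec
for i in np.argsort(-scores):
    print(f"{scores[i]:.3f}  {summaries[i]}")
```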
2025-06-03T13:08:16
https://www.reddit.com/r/LocalLLaMA/comments/1l2bgc1/semantic_search_poc_for_hugging_face_now_with/
dvanstrien
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2bgc1
false
null
t3_1l2bgc1
/r/LocalLLaMA/comments/1l2bgc1/semantic_search_poc_for_hugging_face_now_with/
false
false
self
25
{'enabled': False, 'images': [{'id': 'WFwIIbRpE84XE6oagr5hRGQLBcoHMOy27I018aYekE8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?width=108&crop=smart&auto=webp&s=04146a78247987fc1cf9225f5cd657b7fdb827db', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?width=216&crop=smart&auto=webp&s=c0514ec6f50505231f38f8a9e06ef3b40b96827f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?width=320&crop=smart&auto=webp&s=8e10bafff59ac626d8eb59a7278be96d2f255b4d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?width=640&crop=smart&auto=webp&s=ccd0315fcf8ee6fbe9243b824cbcdb13ea8417ea', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?width=960&crop=smart&auto=webp&s=1bf9b021b83738bcf9968c259d2beb9e23c5094c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?width=1080&crop=smart&auto=webp&s=574714ffa10d5d3e3d981b67f27a9bcf810e160d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nAGWXQWi-W3GNdgKl4FekY29SkVLVJK_wHPyw1f-aPA.jpg?auto=webp&s=255d5949592c5ae0eb0860d093b4468f942c7aa2', 'width': 1200}, 'variants': {}}]}
Testing small quants: the case of Qwen 3 30B A3B
1
[removed]
2025-06-03T13:15:52
https://www.reddit.com/r/LocalLLaMA/comments/1l2bman/testing_small_quants_the_case_of_qwen_3_30b_a3b/
Astrophilorama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2bman
false
null
t3_1l2bman
/r/LocalLLaMA/comments/1l2bman/testing_small_quants_the_case_of_qwen_3_30b_a3b/
false
false
self
1
null
My setup for managing multiple LLM APIs + local models with a unified interface
0
Hey everyone! Wanted to share something I've been using for the past few months that's made my LLM workflow way smoother. I was getting tired of juggling API keys for OpenAI, Anthropic, Groq, and a few other providers, plus constantly switching between different interfaces and keeping track of token costs across all of them. Started looking for a way to centralize everything. Found this combo of Open WebUI + LiteLLM that's been pretty solid: [https://github.com/g1ibby/homellm](https://github.com/g1ibby/homellm) What I like about it: \- Single ChatGPT-style interface for everything \- All my API usage and costs in one dashboard (finally know how much I'm actually spending!) \- Super easy to connect tools like Aider - just point them to one endpoint instead of managing keys everywhere \- Can tunnel in my local Ollama server or other self-hosted models, so everything lives in the same interface It's just Docker Compose, so pretty straightforward if you have a VPS lying around. Takes about 10 minutes to get running. Anyone else using something similar? Always curious how others are handling the multi-provider chaos. The local + cloud hybrid approach has been working really well for me.
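The "one endpoint" part works because LiteLLM speaks the OpenAI-compatible API, so any OpenAI client can point at it. A minimal sketch, assuming the proxy runs on its default port 4000 and a model alias `local-mistral` is defined in the LiteLLM config:

```python
from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM proxy instead of api.openai.com
client = OpenAI(
    base_url="http://localhost:4000",  # assumed proxy address
    api_key="sk-anything",             # the proxy may enforce its own master key
)

response = client.chat.completions.create(
    model="local-mistral",  # hypothetical alias from the LiteLLM config
    messages=[{"role": "user", "content": "Say hello from my local stack."}],
)
print(response.choices[0].message.content)
```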
2025-06-03T13:16:18
https://www.reddit.com/r/LocalLLaMA/comments/1l2bmnt/my_setup_for_managing_multiple_llm_apis_local/
vivi541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2bmnt
false
null
t3_1l2bmnt
/r/LocalLLaMA/comments/1l2bmnt/my_setup_for_managing_multiple_llm_apis_local/
false
false
self
0
{'enabled': False, 'images': [{'id': 'pyn1GSWMRKlZVNADg3z0xrqzgwFNMv_ht0Vy2upoN_E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?width=108&crop=smart&auto=webp&s=c99f0b79d394eb56ea94b5a632ba3a9de15036fc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?width=216&crop=smart&auto=webp&s=4850dbc42216647887480ea3c115c226aa0c4592', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?width=320&crop=smart&auto=webp&s=6dbb3e3d9b13b70717c9c486a52ed9dcfb920a6a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?width=640&crop=smart&auto=webp&s=1a606456c89ec34f500cfd9df613f8a9a2993b4b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?width=960&crop=smart&auto=webp&s=c7c09e3767fe14503ecbc8288075fcdcf6a9b0db', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?width=1080&crop=smart&auto=webp&s=aca3079ef178d0906694cafebfda414c6c2c0047', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eFv-tF4zH8sOVCCj2R_meOlc2MixlPV-lv0jv2W5yfU.jpg?auto=webp&s=ae7f09f4c00a97f690e3b40410a3761cc4faff95', 'width': 1200}, 'variants': {}}]}
It's my first PC build, I need help. Is this enough to run an LLM locally?
0
[PCPriceTracker Build](https://pcpricetracker.in/b/s/74ff4d5d-5825-4841-8bbc-dd6851a52ca6)

Category|Selection|Source|Price
:----|:----|:----|----:
**Processor** | [Amd Ryzen 5 7600 Gaming Desktop Processor (100-100001015BOX)](https://pcpricetracker.in/products/3746a9dcc20314ac958396bdb9187b91) | Computech Store | 17894
**Motherboard** | [Gigabyte B650M D3HP AX AM5 Micro ATX Motherboard](https://pcpricetracker.in/products/d7f43854d69a1e61a11fb26743f2f5ec) | Computech Store | 11489
**Graphic Card** | [ASUS Dual RTX 3060 V2 OC Edition 12GB GDDR6 192-Bit LHR Graphics card with DLSS AI Rendering](https://pcpricetracker.in/products/c545e552075fe3e651809b8be6408e40) | Easyshoppi | 24000
**Power Supply** | [DeepCool PM750D Series Non-Modular 80 PLUS Gold Power Supply R-PM750D-FA0B-UK](https://pcpricetracker.in/products/64f479a5d14300f31a573cc68f74efca) | Clarion | 6425
**Cabinet** | [DEEPCOOL MATREXX 40 ESSENTIAL MICRO-ATX CABINET (DP-MATX-MATREXX40)](https://pcpricetracker.in/products/3cd87944bb1c42f61d840c038cb71902) | Elitehubs | 2999
**Memory** | [Acer BL-9BWWA-446 Desktop Ram HT200 Series 32GB (16GBx2) DDR5 7200MHz (Silver)](https://pcpricetracker.in/products/75c35088b3d989be4a9d0563ce1a04d4) | Computech Store | 13099
**Additional Memory** | | |
**Hard drive** | | |
**SSD drive** | [Acer Predator GM7000 1TB M.2 NVMe Gen4 Internal SSD (BL.9BWWR.105)](https://pcpricetracker.in/products/18b6a7101ba29bb3a122d87462123217) | Variety Online | 7257
**Additional SSD** | | |
**Monitor** | | |
**Additional Monitor** | | |
**CPU Cooler** | | |
**Keyboard** | | |
**Mouse** | | |
**Headset** | | |
**Case Fans** | | |
 | | **Grand Total** | **INR 83163**
2025-06-03T13:25:06
https://www.reddit.com/r/LocalLLaMA/comments/1l2btw0/its_my_first_pc_build_i_need_help_is_this_enough/
Series-Curious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2btw0
false
null
t3_1l2btw0
/r/LocalLLaMA/comments/1l2btw0/its_my_first_pc_build_i_need_help_is_this_enough/
false
false
self
0
null
Qwen2 7b says it's made by Google?!
1
[removed]
2025-06-03T13:40:24
https://www.reddit.com/r/LocalLLaMA/comments/1l2c6lh/qwen2_7b_says_its_made_by_google/
Responsible_Wait4020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2c6lh
false
null
t3_1l2c6lh
/r/LocalLLaMA/comments/1l2c6lh/qwen2_7b_says_its_made_by_google/
false
false
https://b.thumbs.redditm…UHfjhioPHb4s.jpg
1
null
2025 Apple Mac Studio: M3 Ultra 256GB vs. M4 Ultra 256GB
0
Will the M4 deliver better token performance? If so, by how much—specifically when running a 70B model?
2025-06-03T14:19:08
https://www.reddit.com/r/LocalLLaMA/comments/1l2d4l7/2025_apple_mac_studio_m3_ultra_256gb_vs_m4_ultra/
emimix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2d4l7
false
null
t3_1l2d4l7
/r/LocalLLaMA/comments/1l2d4l7/2025_apple_mac_studio_m3_ultra_256gb_vs_m4_ultra/
false
false
self
0
null
When you want to fine-tune a model, what methods do you use to chunk data?
1
What are some of your top methods for chunking data when you want to fine-tune a model? I'm getting ready to do that myself: I want to train it on a tabletop RPG book so that the model can be my assistant, but I'm not sure of the best way to chunk the book.
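One common baseline for a book is fixed-size chunks with overlap, so adjacent chunks share context. A rough sketch, with arbitrary sizes as starting points rather than recommendations:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into overlapping character chunks (sizes are illustrative)."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back so chunks overlap
    return chunks

book = open("rpg_book.txt", encoding="utf-8").read()  # hypothetical file
print(len(chunk_text(book)), "chunks")
```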
2025-06-03T14:30:20
https://www.reddit.com/r/LocalLLaMA/comments/1l2dei2/when_you_wanna_finetune_a_model_what_methods_do/
TheArchivist314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2dei2
false
null
t3_1l2dei2
/r/LocalLLaMA/comments/1l2dei2/when_you_wanna_finetune_a_model_what_methods_do/
false
false
self
1
null
Arcee Homunculus-12B
98
**Homunculus** is a 12 billion-parameter instruction model distilled from **Qwen3-235B** onto the **Mistral-Nemo** backbone. [https://huggingface.co/arcee-ai/Homunculus](https://huggingface.co/arcee-ai/Homunculus) [https://huggingface.co/arcee-ai/Homunculus-GGUF](https://huggingface.co/arcee-ai/Homunculus-GGUF)
2025-06-03T14:35:23
https://www.reddit.com/r/LocalLLaMA/comments/1l2diwk/arcee_homunculus12b/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2diwk
false
null
t3_1l2diwk
/r/LocalLLaMA/comments/1l2diwk/arcee_homunculus12b/
false
false
self
98
{'enabled': False, 'images': [{'id': 'lI41-kDbWkB7h1l1GR9Tkyz69nD8uzuyGKGsyhGjjNQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?width=108&crop=smart&auto=webp&s=35398510f4c42c93a4291a1eb4c620e5821a17d1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?width=216&crop=smart&auto=webp&s=de143a951b990ed7e0912a144fb7f11875e047a0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?width=320&crop=smart&auto=webp&s=15c95c405ad0f92644120022b0c37ddfb158079f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?width=640&crop=smart&auto=webp&s=b48695816b739590f5f999d07b11604e1324009c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?width=960&crop=smart&auto=webp&s=d496462c7094406796e8388ffb005b0121540ee6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?width=1080&crop=smart&auto=webp&s=140c14bb021537825772958e8357ec685d45a3af', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Xm5OejuoSTPgoAZoWZGNsdSTe1HbB3sD_543SjTLwxM.jpg?auto=webp&s=8ddb12d1c45c3b4c348b679527a2052a2f370ee0', 'width': 1200}, 'variants': {}}]}
Check out this FREE and FAST semantic deduplication app on Hugging Face
8
There's no point in only doing hash-based deduplication of datasets. You might as well use semantic deduplication too. This Space for semantic deduplication works on multiple massive datasets, removing near duplicates, not just exact matches! This is how it works: * You pick one or more datasets from the Hub * It makes a semantic embedding of each row * It removes near duplicates based on a threshold like 0.9 * You can push the deduplicated dataset back to a new repo, and get to work. This is super useful if you're training models or building evals. You can also clone the repo and run it locally. [https://huggingface.co/spaces/minishlab/semantic-deduplication](https://huggingface.co/spaces/minishlab/semantic-deduplication)
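The core loop behind this kind of tool is small: embed every row, then drop any row whose embedding is too close to one already kept. A minimal sketch of the idea (the embedding model is a placeholder and 0.9 is just the example threshold from above):

```python
from sentence_transformers import SentenceTransformer

rows = [
    "How do I run Llama locally?",
    "How can I run Llama on my own machine?",  # near duplicate of the first row
    "What is the best GPU for 70B models?",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedder
vecs = embedder.encode(rows, normalize_embeddings=True)

kept, kept_vecs = [], []
for row, vec in zip(rows, vecs):
    # Unit-normalized vectors: cosine similarity is a plain dot product
    if kept_vecs and max(float(vec @ kv) for kv in kept_vecs) >= 0.9:
        continue  # semantically too close to a row we already kept
    kept.append(row)
    kept_vecs.append(vec)

print(kept)
```

Note this naive O(n²) scan is only for illustration; a real tool over massive datasets would use an approximate nearest-neighbor index.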
2025-06-03T14:35:29
https://www.reddit.com/r/LocalLLaMA/comments/1l2dizc/checkout_this_free_and_fast_semantic/
Zealousideal-Cut590
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2dizc
false
null
t3_1l2dizc
/r/LocalLLaMA/comments/1l2dizc/checkout_this_free_and_fast_semantic/
false
false
self
8
{'enabled': False, 'images': [{'id': '3gtoBjG3Po9RMr7lgBFimouFLTstTHNi-aMSCTTsWTg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?width=108&crop=smart&auto=webp&s=4f718df6d41d24fcf5edb4a0aa6960af1eca457e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?width=216&crop=smart&auto=webp&s=2bd65b6a87f62a58b925f43665a8889b102f1811', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?width=320&crop=smart&auto=webp&s=469330bc7ef462c98bf2e0fad0d4bfa3a3a91be0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?width=640&crop=smart&auto=webp&s=ff66dee2a0cb5dee080e5876d69d6dac6b143610', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?width=960&crop=smart&auto=webp&s=39bab0c20fb639f3a143a370a3cb38b18d7a5547', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?width=1080&crop=smart&auto=webp&s=ab413946cc01b8fbded661ad96bec350e150c981', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/giV87nLGgheRLKPO62ayX-pE0XUUe9FwjkcAyncsbXE.jpg?auto=webp&s=b60c482b25c403939a6e0558316a9d48919a7fa6', 'width': 1200}, 'variants': {}}]}
Daily AI-tools
0
🚀 Hey everyone! I’ve been exploring some of the newest and most powerful AI tools out there and started sharing quick, engaging overviews on TikTok to help others discover what’s possible right now with AI. I’m focusing on tools like Claude Opus 4, Heygen, Durable, and more — things that help with content creation, automation, productivity, etc. If you’re into AI tools or want bite-sized updates on the latest breakthroughs, feel free to check out my page: 👉 @aitoolsdaily0 (link in bio) I’m also open to suggestions — what AI tools do you think more people should know about?
2025-06-03T14:38:00
https://www.reddit.com/r/LocalLLaMA/comments/1l2dl5s/daily_aitools/
jordanbelfort42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2dl5s
false
null
t3_1l2dl5s
/r/LocalLLaMA/comments/1l2dl5s/daily_aitools/
false
false
self
0
null
Postman like client for local MCP servers
9
I wanted to test my custom MCP server on Linux but none of the options seemed right. So I built my own on a weekend. It's MIT licensed so do with it what you like!
2025-06-03T14:42:11
https://github.com/faraazahmad/mcp_debug
Mysterious-Coat5856
github.com
1970-01-01T00:00:00
0
{}
1l2dowh
false
null
t3_1l2dowh
/r/LocalLLaMA/comments/1l2dowh/postman_like_client_for_local_mcp_servers/
false
false
default
9
null
Ollama overhead with Qwen3? Any help?
1
[removed]
2025-06-03T15:02:50
https://www.reddit.com/r/LocalLLaMA/comments/1l2e7lp/ollama_overhead_with_qwen3_any_help/
Tough-Double687
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2e7lp
false
null
t3_1l2e7lp
/r/LocalLLaMA/comments/1l2e7lp/ollama_overhead_with_qwen3_any_help/
false
false
self
1
null
Gemini 2.5 Flash can't beat open-source model at pointing!
1
[removed]
2025-06-03T15:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1l2evoe/gemini_25_flash_cant_beat_opensource_model_at/
IndependentDoor8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2evoe
false
null
t3_1l2evoe
/r/LocalLLaMA/comments/1l2evoe/gemini_25_flash_cant_beat_opensource_model_at/
false
false
self
1
null
Locally hosted search AI
1
[removed]
2025-06-03T15:34:18
https://www.reddit.com/r/LocalLLaMA/comments/1l2f0gm/local_hosted_search_ai/
Successful-Leader830
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2f0gm
false
null
t3_1l2f0gm
/r/LocalLLaMA/comments/1l2f0gm/local_hosted_search_ai/
false
false
self
1
null
Teaching local LLMs to generate workflows
1
[removed]
2025-06-03T15:42:08
https://advanced-stack.com/resources/how-to-build-workflows-trigger-action-program-with-llms.html
Fluid-Age-9266
advanced-stack.com
1970-01-01T00:00:00
0
{}
1l2f7mo
false
null
t3_1l2f7mo
/r/LocalLLaMA/comments/1l2f7mo/teaching_local_llms_to_generate_workflows/
false
false
https://a.thumbs.redditm…LQ7mDIiNp2i4.jpg
1
{'enabled': False, 'images': [{'id': 'FpW-rKjCmA17tYicq1iqXsO4j7FbSI7IZEXvEMtqYZc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?width=108&crop=smart&auto=webp&s=2d63de925c65eca3014f1d0e622b309156c40024', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?width=216&crop=smart&auto=webp&s=6ff1919694fbb1bd393989c746b8b36727588de7', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?width=320&crop=smart&auto=webp&s=46389877426fff3f985a04937b4cb28047449db0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?width=640&crop=smart&auto=webp&s=fcb9c98c855ebe47968e01801f148a2042c42dc8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?width=960&crop=smart&auto=webp&s=3c40fdc77642decb11d242180118964b1c5612f1', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?width=1080&crop=smart&auto=webp&s=194b2a3c72d962e4ef6f670a6a1b3febe1a429bc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-cFlPLkWFogCBicbsqZ6yiKMAh5AQ9NLRU2qt0ee_gk.jpg?auto=webp&s=08396377beb5808d2cdefba7f2bcb6d667e5769d', 'width': 1200}, 'variants': {}}]}
RubyLLM 1.3.0: First-Class Ollama Support for Ruby Developers 💻
0
Ruby developers can now use local models as easily as cloud APIs.

**Simple setup:**

```ruby
RubyLLM.configure do |config|
  config.ollama_api_base = 'http://localhost:11434/v1'
end

# Same API, local model
chat = RubyLLM.chat(model: 'mistral', provider: 'ollama')
response = chat.ask("Explain transformer architecture")
```

**Why this matters for local LLM enthusiasts:**

- 🔒 **Privacy-first development** - no data leaves your machine
- 💰 **Cost-effective experimentation** - no API charges during development
- 🚀 **Same Ruby API** - switch between local/cloud without code changes
- 📎 **File handling** - images, PDFs, audio all work with local models
- 🛠️ **Rails integration** - persist conversations with local model responses

**New attachment API is perfect for local workflows:**

```ruby
# Auto-detects file types (images, PDFs, audio, text)
chat.ask "What's in this file?", with: "local_document.pdf"
chat.ask "Analyze these", with: ["image.jpg", "transcript.txt"]
```

**Also supports:**

- 🔀 OpenRouter (100+ models via one API)
- 🔄 Configuration contexts (switch between local/remote easily)
- 🌐 Automated model capability tracking

Perfect for researchers, privacy-focused devs, and anyone who wants to keep their data local while using a clean, Ruby-like API.

`gem 'ruby_llm', '1.3.0'`

Repo: https://github.com/crmne/ruby_llm
Docs: https://rubyllm.com
Release Notes: https://github.com/crmne/ruby_llm/releases/tag/1.3.0
2025-06-03T15:44:54
https://www.reddit.com/r/LocalLLaMA/comments/1l2fa3o/rubyllm_130_firstclass_ollama_support_for_ruby/
crmne
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2fa3o
false
null
t3_1l2fa3o
/r/LocalLLaMA/comments/1l2fa3o/rubyllm_130_firstclass_ollama_support_for_ruby/
false
false
self
0
{'enabled': False, 'images': [{'id': 'dRPRYHcWlFicSK-z6Co_vdHBXzCqhUGopsZ8LYR4_gU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?width=108&crop=smart&auto=webp&s=97279c46c41c61602c898da205817a6ebb5292f8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?width=216&crop=smart&auto=webp&s=b34470675d38454c91ce64423baefbf786ddd87d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?width=320&crop=smart&auto=webp&s=f37d01e9527d53405ca809aca48888275ff75b7c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?width=640&crop=smart&auto=webp&s=a6a43cdd71e8584483ee92da3cc631ec91fdf131', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?width=960&crop=smart&auto=webp&s=7b54cb72590146692656fe73503eac4e087da847', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?width=1080&crop=smart&auto=webp&s=91731cf0a7bbca763397666dd8822cb74b927dfd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6bXOEvhrJBhBLneZuucSLq-qy5fW038E870e5hmDo-U.jpg?auto=webp&s=3382735c167a7add1e6698de1a26aa47b07a53fc', 'width': 1200}, 'variants': {}}]}
I forked google’s Fullstack LangGraph Quickstart to work with ollama + searxng
1
[removed]
2025-06-03T15:53:13
https://www.reddit.com/r/LocalLLaMA/comments/1l2fhlj/i_forked_googles_fullstack_langgraph_quickstart/
Filo0104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2fhlj
false
null
t3_1l2fhlj
/r/LocalLLaMA/comments/1l2fhlj/i_forked_googles_fullstack_langgraph_quickstart/
false
false
self
1
{'enabled': False, 'images': [{'id': 'lw6H4eybNgcdiVEB_D0Rv2OY9faHpBI4cf637K_Eapo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?width=108&crop=smart&auto=webp&s=a5728335f3dc0a8d92fafbc48e08fc31e319962e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?width=216&crop=smart&auto=webp&s=5d6d9fe069c6d7f5ff2d1d1c321998b976eb72bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?width=320&crop=smart&auto=webp&s=1b4f7665398e398c8a6a48bea6dcd29866775e35', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?width=640&crop=smart&auto=webp&s=ed1c078c68df9c49efdd20fe0a6d6157b4f78909', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?width=960&crop=smart&auto=webp&s=5ec1cd3a15566313dbcdf4066b2362f221d5f849', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?width=1080&crop=smart&auto=webp&s=bb13955a8b8b890227d990fd293efb5275a76d45', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1ayXXM7k6U265gyvlcYfRgEhvY_5Qfp8zf2UcMW1SB0.jpg?auto=webp&s=86880c4345eefba300f319759a1c68772db3b002', 'width': 1200}, 'variants': {}}]}
I'm collecting dialogue from anime, games, and visual novels — is this actually useful for improving AI?
41
Hi! I’m not a programmer or AI developer, but I’ve been doing something on my own for a while out of passion. I’ve noticed that most AI responses — especially in roleplay or emotional dialogue — tend to sound repetitive, shallow, or generic. They often reuse the same phrases and don’t adapt well to different character personalities like tsundere, kuudere, yandere, etc. So I started collecting and organizing dialogue from games, anime, visual novels, and even NSFW content. I'm manually extracting lines directly from files and scenes, then categorizing them based on tone, personality type, and whether it's SFW or NSFW. I'm trying to build a kind of "word and emotion library" so AI could eventually talk more like real characters, with variety and personality. It’s just something I care about and enjoy working on. My question is: Is this kind of work actually useful for improving AI models? And if yes, where can I send or share this kind of dialogue dataset? I tried giving it to models like Gemini, but it didn’t really help since the model doesn’t seem trained on this kind of expressive or emotional language. I haven’t contacted any open-source teams yet, but maybe I will if I know it’s worth doing. Any advice would mean a lot — thank you!
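If you do share it with open-source teams, most training pipelines expect structured records with metadata, so a hypothetical JSONL layout (field names invented for illustration, not a standard) could look like this:

```python
import json

# Hypothetical record layout for one categorized dialogue line
record = {
    "source": "visual_novel_x",   # where the line came from
    "personality": "tsundere",    # character archetype tag
    "tone": "embarrassed",        # emotional tone of the line
    "rating": "sfw",              # sfw / nsfw flag
    "context": "Rival compliments her cooking.",
    "line": "I-It's not like I made this for you or anything!",
}

with open("dialogue.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```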
2025-06-03T15:54:51
https://www.reddit.com/r/LocalLLaMA/comments/1l2fj2k/im_collecting_dialogue_from_anime_games_and/
Akowmako
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2fj2k
false
null
t3_1l2fj2k
/r/LocalLLaMA/comments/1l2fj2k/im_collecting_dialogue_from_anime_games_and/
false
false
self
41
null
Can you mix and match GPUs?
1
Let's say I'm using LM Studio and currently have a 3090; if I buy a 5090, can I use the combined VRAM?
2025-06-03T15:56:32
https://www.reddit.com/r/LocalLLaMA/comments/1l2fkow/can_you_mix_and_mach_gpus/
FlanFederal8447
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2fkow
false
null
t3_1l2fkow
/r/LocalLLaMA/comments/1l2fkow/can_you_mix_and_mach_gpus/
false
false
self
1
null
Using an LLM (large language model) as the simplest physics engine — no physics code, just prompts
1
[removed]
2025-06-03T16:43:17
https://www.reddit.com/r/LocalLLaMA/comments/1l2gro8/using_a_llm_large_language_model_as_a_simplest/
Arch1324
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2gro8
false
null
t3_1l2gro8
/r/LocalLLaMA/comments/1l2gro8/using_a_llm_large_language_model_as_a_simplest/
false
false
https://a.thumbs.redditm…Swlnp8A35GY4.jpg
1
{'enabled': False, 'images': [{'id': 'TnrTHW3CKw3GNTKRI2mE_HkAsQiy9l3ygE6SDH3npbs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?width=108&crop=smart&auto=webp&s=c850c2ffabc5b03c38bf77a102d65815f4438c35', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?width=216&crop=smart&auto=webp&s=21d3af5d18e08e0e7f42d83573fe6811cca96766', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?width=320&crop=smart&auto=webp&s=d7802b1d80787c10eb9c91ef5904bf086d6bdbd6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?width=640&crop=smart&auto=webp&s=66c276b04432d41840b97474beff3ef234424c17', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?width=960&crop=smart&auto=webp&s=5d4dd375658fac1ffabdc21d4464ec10542b15dc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?width=1080&crop=smart&auto=webp&s=4d079f803f4ed972ad6fc38f596fe1196e14a22c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ceDfMBjFXNHu-pb_t60ZQWVbWsg9t-Hm1XtXyWnVNOU.jpg?auto=webp&s=146071d2c5421b645c22647d41882650a1dc84eb', 'width': 1200}, 'variants': {}}]}
Sakana AI proposes the Darwin Gödel Machine, a self-learning AI system that leverages an evolutionary algorithm to iteratively rewrite its own code, thereby continuously improving its performance on programming tasks
85
2025-06-03T16:46:59
https://sakana.ai/dgm/
juanviera23
sakana.ai
1970-01-01T00:00:00
0
{}
1l2gv3a
false
null
t3_1l2gv3a
/r/LocalLLaMA/comments/1l2gv3a/sakana_ai_proposes_the_darwin_gödel_machine_an/
false
false
https://b.thumbs.redditm…KlSK_xmzKbPQ.jpg
85
{'enabled': False, 'images': [{'id': '301MLdXBGS0U_36M44Bby0bKZg0NibAojUn2aDi7Aao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=108&crop=smart&auto=webp&s=61f7124235d3c9cc17267eb2ed7de46bab49765e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=216&crop=smart&auto=webp&s=b01c782fa93b021a180dc44d7151fade86d6431d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=320&crop=smart&auto=webp&s=670eb9c9058d14ac8846a6475e3d47cb616cf011', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=640&crop=smart&auto=webp&s=f5f30bf0b3bae15b4dee53ba7bd37f2486072c04', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=960&crop=smart&auto=webp&s=2c1d1a6c85eb92a670807f829ec7254dc53f1bd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?width=1080&crop=smart&auto=webp&s=344e6dcc7b48a81d3b6727c749b0c289aabe5547', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/ll0sI2kj9OWJW1iOriHpZm1jSfC278YnLF-jisELKs4.jpg?auto=webp&s=efb765a9e5d3d5585101bc98246d9babdd7d3105', 'width': 1600}, 'variants': {}}]}
New META Paper - How much do language models memorize?
234
Very interesting paper on dataset size, parameter size, and grokking.
2025-06-03T16:47:12
https://arxiv.org/abs/2505.24832
Thrumpwart
arxiv.org
1970-01-01T00:00:00
0
{}
1l2gvar
false
null
t3_1l2gvar
/r/LocalLLaMA/comments/1l2gvar/new_meta_paper_how_much_do_language_models/
false
false
default
234
null
Which open source model is the cheapest to host and gives great performance?
1
[removed]
2025-06-03T17:29:26
https://www.reddit.com/r/LocalLLaMA/comments/1l2hz0k/which_open_source_model_is_the_cheapest_to_host/
ExtremeYogurt3627
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2hz0k
false
null
t3_1l2hz0k
/r/LocalLLaMA/comments/1l2hz0k/which_open_source_model_is_the_cheapest_to_host/
false
false
self
1
null
Which open source model is the cheapest to host and gives great performance?
0
Hello guys, which open-source model is the cheapest to host on a \~$30 Hetzner server while still giving great performance? I am building a SaaS app and I want to integrate AI into it extensively. I don't have money for AI APIs. Thank you for your time.
2025-06-03T17:33:35
https://www.reddit.com/r/LocalLLaMA/comments/1l2i315/which_open_source_model_is_the_cheapest_to_host/
Last-Kaleidoscope406
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2i315
false
null
t3_1l2i315
/r/LocalLLaMA/comments/1l2i315/which_open_source_model_is_the_cheapest_to_host/
false
false
self
0
null
I would really like to start digging deeper into LLMs. If I have $1500-$2000 to spend, what hardware setup would you recommend assuming I have nothing currently.
25
I have very little idea of what I'm looking for with regard to hardware. I'm a Mac guy generally, so I'm familiar with their OS, and that's a plus for me. I also like that their memory is all very fast and shared with the GPU, which I \*think\* helps run things faster instead of being memory or CPU bound, but I'm not 100% certain. I'd like for this to be a twofold thing - learning the software side of LLMs, but also eventually running my own LLM at home in "production" for privacy purposes. I'm a systems engineer / cloud engineer by trade, so I'm not completely technologically illiterate, but I really don't know much about consumer hardware, especially CPUs and GPUs, nor do I totally understand what I should be prioritizing. I don't mind building something from scratch, but pre-built is a huge win, and something small is also a big win - so again I lean more toward a Mac mini or Mac Studio. I would love some other perspectives here, as long as it's not simply "apple bad. mac bad. boo"
2025-06-03T17:54:32
https://www.reddit.com/r/LocalLLaMA/comments/1l2imqv/i_would_really_like_to_start_digging_deeper_into/
BokehJunkie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2imqv
false
null
t3_1l2imqv
/r/LocalLLaMA/comments/1l2imqv/i_would_really_like_to_start_digging_deeper_into/
false
false
self
25
null
5060ti llama-cpp-python
1
[removed]
2025-06-03T18:07:22
https://www.reddit.com/r/LocalLLaMA/comments/1l2izj5/5060ti_llamacpppython/
pc_zoomer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2izj5
false
null
t3_1l2izj5
/r/LocalLLaMA/comments/1l2izj5/5060ti_llamacpppython/
false
false
self
1
null
GuidedQuant: Boost layer-wise PTQ methods by using end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit quantization)
1
[removed]
2025-06-03T18:11:02
https://www.reddit.com/r/LocalLLaMA/comments/1l2j2zj/guidedquant_boost_layerwise_ptq_methods_by_using/
jusjinuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2j2zj
false
null
t3_1l2j2zj
/r/LocalLLaMA/comments/1l2j2zj/guidedquant_boost_layerwise_ptq_methods_by_using/
false
false
self
1
null
Using an LLM (large language model) as the simplest physics engine — no physics code, just prompts
1
[removed]
2025-06-03T18:15:30
https://www.reddit.com/r/LocalLLaMA/comments/1l2j7ax/using_a_llm_large_language_model_as_a_simplest/
Arch1324
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2j7ax
false
null
t3_1l2j7ax
/r/LocalLLaMA/comments/1l2j7ax/using_a_llm_large_language_model_as_a_simplest/
false
false
https://b.thumbs.redditm…fcBA70gVL-gw.jpg
1
null
GuidedQuant: Boost layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit quantization)
1
[removed]
2025-06-03T18:50:27
https://www.reddit.com/r/LocalLLaMA/comments/1l2k1j8/guidedquant_boost_layerwise_ptq_methods_using_the/
jusjinuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2k1j8
false
null
t3_1l2k1j8
/r/LocalLLaMA/comments/1l2k1j8/guidedquant_boost_layerwise_ptq_methods_using_the/
false
false
https://b.thumbs.redditm…99kh6Q1iMKEo.jpg
1
null
GuidedQuant: Boost LLM layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit Quantization)
1
**Paper (ICML 2025):** [https://arxiv.org/abs/2505.07004](https://arxiv.org/abs/2505.07004) **Code:** [https://github.com/snu-mllab/GuidedQuant](https://github.com/snu-mllab/GuidedQuant) **HuggingFace Collection:** 2\~4-bit quantized Qwen3-32B, gemma-3-27b-it, Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct  → [Link](https://huggingface.co/collections/jusjinuk/instruction-tuned-models-guidedquant-68334269c44cd3eb21f7bd61) **TL;DR:** **GuidedQuant** boosts layer-wise PTQ methods by integrating end loss guidance into the objective. We also introduce **LNQ**, a non-uniform scalar quantization algorithm which is guaranteed to monotonically decrease the quantization objective value. [Runs on a single RTX 3090 GPU!](https://preview.redd.it/art488o6er4f1.png?width=4377&format=png&auto=webp&s=3b67d71fc3080eaf77144cca6befb03770d40d0f)
2025-06-03T18:54:50
https://www.reddit.com/r/LocalLLaMA/comments/1l2k4lo/guidedquant_boost_llm_layerwise_ptq_methods_using/
jusjinuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2k4lo
false
null
t3_1l2k4lo
/r/LocalLLaMA/comments/1l2k4lo/guidedquant_boost_llm_layerwise_ptq_methods_using/
false
false
https://b.thumbs.redditm…HE09_n_nasUc.jpg
1
null
GuidedQuant: Boost LLM layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit Quantization)
1
**Paper (ICML 2025):** [https://arxiv.org/abs/2505.07004](https://arxiv.org/abs/2505.07004) **Code:** [https://github.com/snu-mllab/GuidedQuant](https://github.com/snu-mllab/GuidedQuant) **HuggingFace Collection:** 2\~4-bit quantized Qwen3-32B, gemma-3-27b-it, Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct  → [Link](https://huggingface.co/collections/jusjinuk/instruction-tuned-models-guidedquant-68334269c44cd3eb21f7bd61) **TL;DR:** **GuidedQuant** boosts layer-wise PTQ methods by integrating end loss guidance into the objective. We also introduce **LNQ**, a non-uniform scalar quantization algorithm which is guaranteed to monotonically decrease the quantization objective value. [Runs on a single RTX 3090 GPU!](https://preview.redd.it/art488o6er4f1.png?width=4377&format=png&auto=webp&s=3b67d71fc3080eaf77144cca6befb03770d40d0f)
2025-06-03T18:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1l2k4n5/guidedquant_boost_llm_layerwise_ptq_methods_using/
jusjinuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2k4n5
false
null
t3_1l2k4n5
/r/LocalLLaMA/comments/1l2k4n5/guidedquant_boost_llm_layerwise_ptq_methods_using/
false
false
self
1
null
GuidedQuant: Boost LLM layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit Quantization)
36
**Paper (ICML 2025):** [https://arxiv.org/abs/2505.07004](https://arxiv.org/abs/2505.07004) **Code:** [https://github.com/snu-mllab/GuidedQuant](https://github.com/snu-mllab/GuidedQuant) **HuggingFace Collection:** 2\~4-bit quantized Qwen3-32B, gemma-3-27b-it, Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct  → [Link](https://huggingface.co/collections/jusjinuk/instruction-tuned-models-guidedquant-68334269c44cd3eb21f7bd61) **TL;DR:** **GuidedQuant** boosts layer-wise PTQ methods by integrating end loss guidance into the objective. We also introduce **LNQ**, a non-uniform scalar quantization algorithm which is guaranteed to monotonically decrease the quantization objective value. [Runs on a single RTX 3090 GPU!](https://preview.redd.it/art488o6er4f1.png?width=4377&format=png&auto=webp&s=3b67d71fc3080eaf77144cca6befb03770d40d0f)
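For intuition about the "non-uniform scalar quantization" part: the codebook levels are fitted to the weight distribution instead of being evenly spaced. The toy sketch below uses a plain Lloyd/k-means update to pick levels; it is not LNQ and ignores the end-loss guidance that is the paper's actual contribution:

```python
import numpy as np

def toy_nonuniform_quantize(w: np.ndarray, bits: int = 3, iters: int = 20) -> np.ndarray:
    """Illustrative Lloyd-style scalar quantizer (k-means on weight values)."""
    k = 2 ** bits
    levels = np.quantile(w, np.linspace(0, 1, k))  # init codebook from quantiles
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                levels[j] = w[assign == j].mean()  # move each level to its cluster mean
    assign = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
    return levels[assign]

w = np.random.randn(10_000) * 0.05        # fake weight tensor
w_q = toy_nonuniform_quantize(w, bits=3)
print("quantization MSE:", float(np.mean((w - w_q) ** 2)))
```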
2025-06-03T18:54:54
https://www.reddit.com/r/LocalLLaMA/comments/1l2k4nw/guidedquant_boost_llm_layerwise_ptq_methods_using/
jusjinuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2k4nw
false
null
t3_1l2k4nw
/r/LocalLLaMA/comments/1l2k4nw/guidedquant_boost_llm_layerwise_ptq_methods_using/
false
false
self
36
null
Cooling question
7
I got a “new” 3090, and I had the bright idea to go buy a 1200W power supply and keep my 3070 in the same case instead of doing a straight swap. Before I go buy the new PSU, I tried the fit, and it feels pretty tight. Is that enough room between the cards for airflow, or am I about to start a fire? I'm adding two new case fans at the bottom anyway, but I'm worried about the top card.
2025-06-03T18:58:51
https://i.redd.it/vd9n6tpyer4f1.jpeg
johnfkngzoidberg
i.redd.it
1970-01-01T00:00:00
0
{}
1l2k7rk
false
null
t3_1l2k7rk
/r/LocalLLaMA/comments/1l2k7rk/cooling_question/
false
false
https://b.thumbs.redditm…qE0ZUD2swqac.jpg
7
{'enabled': True, 'images': [{'id': 'VAC7c--rZyP4uKdRUcU4sNPso7_yXTNVzCYQfT8F5pE', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?width=108&crop=smart&auto=webp&s=c1bdafed2a5bbdd47b1f52b5244f8b3a71726791', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?width=216&crop=smart&auto=webp&s=495d6f05002a4e991b1e63f6c619edd1ae685413', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?width=320&crop=smart&auto=webp&s=13a025a3768a78d70e83f49191ef652ad0964735', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?width=640&crop=smart&auto=webp&s=169e304b6a6b509345ece64e6e13cdaaa08c98c0', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?width=960&crop=smart&auto=webp&s=feecdb7c41b9c14b8e822a794c5666837fda5180', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?width=1080&crop=smart&auto=webp&s=e98c7c530b5afe0955d9ce44ddd447f052b74f90', 'width': 1080}], 'source': {'height': 5712, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?auto=webp&s=712b12e0ba6c9b672f4f5c1d2f775090d9130816', 'width': 4284}, 'variants': {}}]}
Claude 4 Sonnet run locally?
0
Hi, I recently started using Cursor to make a website and fell in love with Agent and Claude 4. I have a 9950X3D with a 5090, 96GB of RAM, and lots of Gen5 M.2 storage. I'm wondering if I can run something like this locally, so it can assist with editing and coding on its own via vibe coding? You guys are amazing in what I see a lot of you coming up with. I wish I was that good! Hoping someone has the skill to point me in the right direction. A step-by-step guide would be greatly appreciated as I'm just learning about agents. Thanks!
2025-06-03T19:10:10
https://www.reddit.com/r/LocalLLaMA/comments/1l2kffu/sonnet_claude_4_ran_locally/
VanFenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2kffu
false
null
t3_1l2kffu
/r/LocalLLaMA/comments/1l2kffu/sonnet_claude_4_ran_locally/
false
false
self
0
null
Paid LLM courses that teach practical knowledge? Free courses are good too!
0
My employer has given me a budget of up to around $1000 for training. I think the best way to spend this money would be learning about LLMs or AI in general. I don't want to take a course in bullshit like "AI for managers" or whatever other nonsense is trying to cash in on the LLM buzz. I also don't want to become an AI computer scientist. I just want to learn some advanced AI knowledge that will make me better at my job and/or make me more valuable as an employee. I've played around with RAG, and now I am particularly interested in how to generate synthetic datasets from documents and then fine-tune models. Anyone have any recommendations?
2025-06-03T19:27:32
https://www.reddit.com/r/LocalLLaMA/comments/1l2kvd7/paid_llm_courses_that_teach_practical_knowledge/
LanceThunder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2kvd7
false
null
t3_1l2kvd7
/r/LocalLLaMA/comments/1l2kvd7/paid_llm_courses_that_teach_practical_knowledge/
false
false
self
0
null
OOM for GRPO on Qwen3-32b, 8xA100 80GB
0
Hi everyone, I'm trying to run Qwen3-32b and am always getting OOM after loading the model checkpoints. I'm using 6xA100s for training and 2 for inference. num\_generations is down to 4, and I tried decreasing it to 2 with a per-device batch size of 1 to debug, but I'm still getting OOM. Would love some help or any resources.
2025-06-03T19:43:26
https://www.reddit.com/r/LocalLLaMA/comments/1l2la28/oom_for_grpo_on_qwen332b_8xa100_80gb/
Classic_Eggplant8827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2la28
false
null
t3_1l2la28
/r/LocalLLaMA/comments/1l2la28/oom_for_grpo_on_qwen332b_8xa100_80gb/
false
false
self
0
null
Are there any small models for home budgets?
4
Hi, are there any small local models I could feed my bank statements into and have them do a full budget breakdown? What would be the best way to go about this for a beginner?
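One beginner-friendly route is to parse the statement yourself and only hand the model a compact summary, for example via Ollama's local HTTP API. A rough sketch where the CSV column names and model name are assumptions:

```python
import pandas as pd
import requests

# Assumed statement layout: date, description, amount (negative = spending)
df = pd.read_csv("statement.csv")
top_spend = df.groupby("description")["amount"].sum().sort_values().head(20)

prompt = (
    "Here are my top spending lines this month:\n"
    f"{top_spend.to_string()}\n"
    "Suggest a simple monthly budget breakdown."
)

# Ollama's local chat endpoint; use whichever small model you have pulled
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={"model": "llama3.2", "messages": [{"role": "user", "content": prompt}],
          "stream": False},
)
print(resp.json()["message"]["content"])
```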
2025-06-03T19:48:36
https://www.reddit.com/r/LocalLLaMA/comments/1l2letx/is_there_any_small_models_for_home_budgets/
DueRuin3912
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l2letx
false
null
t3_1l2letx
/r/LocalLLaMA/comments/1l2letx/is_there_any_small_models_for_home_budgets/
false
false
self
4
null