| column | dtype | range / classes |
|:-|:-|:-|
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2025-06-30 03:16:29 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 (fixed) |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 (fixed) |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
14B Hybrid Reasoning UI Model for websites and components
57
2025-06-07T05:11:24
https://www.reddit.com/gallery/1l5d0sm
United-Rush4073
reddit.com
1970-01-01T00:00:00
0
{}
1l5d0sm
false
null
t3_1l5d0sm
/r/LocalLLaMA/comments/1l5d0sm/14b_hybrid_reasoning_ui_model_for_websites_and/
false
false
https://b.thumbs.redditm…0HSBQaW0nTDA.jpg
57
null
Search-based Question Answering
1
[removed]
2025-06-07T05:29:18
https://www.reddit.com/r/LocalLLaMA/comments/1l5dbaj/searchbased_question_answering/
BeyazSapkaliAdam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5dbaj
false
null
t3_1l5dbaj
/r/LocalLLaMA/comments/1l5dbaj/searchbased_question_answering/
false
false
self
1
null
Created a more accurate local speech-to-text tool for your Mac
8
Heya, I made a simple, native macOS app for local speech-to-text transcription with OpenAI's Whisper models, running on your Mac's Neural Engine. The goal was to have a better dictation mode on macOS.

* Runs 100% locally on your machine.
* Powered by OpenAI's Whisper models.
* Free, open source, no payment, and no sign-up required.

[Download](https://github.com/sapoepsilon/Whispera/releases/tag/v1.0.2) [Repo](https://github.com/sapoepsilon/Whispera/)

I am also thinking of coupling it with a 3B or an 8B model that could execute bash commands. So, for example, you could say "Open mail," and the Mail app would open. Or you could say "Change image names to something meaningful," and the image names would change, etc. (A rough sketch of that idea follows this record.) What do you guys think?
2025-06-07T05:43:44
https://v.redd.it/uuds5cx10g5f1
sapoepsilon
v.redd.it
1970-01-01T00:00:00
0
{}
1l5dj75
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/uuds5cx10g5f1/DASHPlaylist.mpd?a=1751867040%2CNjljZDQwNDI4YTQ0MmVkMTIzNzBkZjZkNGNiOWM3ZGRhNzQ0ZmY2MGM3ZjUxODhjZjhjMDMwYzNkZmVmZDg5YQ%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/uuds5cx10g5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/uuds5cx10g5f1/HLSPlaylist.m3u8?a=1751867040%2CMzZhYjQ5MmM1MjRjY2E2ZDFlNTM5NjYyMGY4NjlhZGRiMTAxYTg3ZWUxZDIwZGRmZTQyMWRiN2NkMGRmMmUxMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/uuds5cx10g5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1688}}
t3_1l5dj75
/r/LocalLLaMA/comments/1l5dj75/created_a_more_accurate_local_speechtotext_tool/
false
false
https://external-preview…9be42dbe8680e16c
8
{'enabled': False, 'images': [{'id': 'cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP', 'resolutions': [{'height': 69, 'url': 'https://external-preview.redd.it/cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP.png?width=108&crop=smart&format=pjpg&auto=webp&s=9a44b0d9b1f0f28422d3118c44620c284cee0284', 'width': 108}, {'height': 138, 'url': 'https://external-preview.redd.it/cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP.png?width=216&crop=smart&format=pjpg&auto=webp&s=e8496641fac06ec6ef30f1537855554b065fdb93', 'width': 216}, {'height': 204, 'url': 'https://external-preview.redd.it/cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP.png?width=320&crop=smart&format=pjpg&auto=webp&s=9be1206b03b084cd87b4280187a48657e7dcd7a3', 'width': 320}, {'height': 409, 'url': 'https://external-preview.redd.it/cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP.png?width=640&crop=smart&format=pjpg&auto=webp&s=ad424193cc17233bc85ea3537937c6c53182d57d', 'width': 640}, {'height': 614, 'url': 'https://external-preview.redd.it/cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP.png?width=960&crop=smart&format=pjpg&auto=webp&s=e7f37fb23a404739de5124d5808d4437b54cff0c', 'width': 960}, {'height': 690, 'url': 'https://external-preview.redd.it/cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e5bb866ad5bd894380a0c3b56e21d6a980cfd50a', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cDBxbTIyeDEwZzVmMT8uzfDMvcHcJ-0JGqh0-SuoB0Eof-6z5U2JDgs80IYP.png?format=pjpg&auto=webp&s=55b99e0971c1f1c35e17c783b1ee4a075dfd043d', 'width': 1688}, 'variants': {}}]}
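A minimal sketch of the voice-to-command idea floated in the post above. Everything here is an assumption for illustration, not Whispera's actual implementation: the phrase table, the command strings, and the idea of an explicit allowlist (which avoids handing a model arbitrary shell access).

```python
import shlex
import subprocess

# Hypothetical allowlist mapping spoken phrases to shell commands.
# macOS's `open -a` launches an application by name.
COMMANDS = {
    "open mail": "open -a Mail",
    "open browser": "open -a Safari",
}

def run_voice_command(transcript: str) -> bool:
    """Execute the allowlisted command matching the transcript, if any."""
    phrase = transcript.strip().lower().rstrip(".")
    command = COMMANDS.get(phrase)
    if command is None:
        return False  # unknown phrase: do nothing rather than guess
    subprocess.run(shlex.split(command), check=False)
    return True

if __name__ == "__main__":
    # In the real app the transcript would come from Whisper.
    run_voice_command("Open mail.")
```

In the post's fuller vision, unmatched phrases would be routed to a small local model instead of being dropped; the allowlist is just the safe baseline.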
Turn any notes into Obsidian-like Graphs
24
Hello r/LocalLLaMA, we just built a tool that lets you visualize your notes and documents as cool, Obsidian-like graphs. Upload your notes, watch clusters form around the right topics, and then quantify the most important topics across your information!

Here's a short video showing what it looks like: https://reddit.com/link/1l5dl08/video/dsz3w1r61g5f1/player

Check it out at: [https://github.com/morphik-org/morphik-core](https://github.com/morphik-org/morphik-core)

Would love any feedback!
2025-06-07T05:46:59
https://www.reddit.com/r/LocalLLaMA/comments/1l5dl08/turn_any_notes_into_obsidianlike_graphs/
Advanced_Army4706
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5dl08
false
null
t3_1l5dl08
/r/LocalLLaMA/comments/1l5dl08/turn_any_notes_into_obsidianlike_graphs/
false
false
self
24
{'enabled': False, 'images': [{'id': '0jS3OQc94lDop9VzaIIqMrfiiN5rvjO0QcENBiUgb3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Gk7VveshS4WyPfA47SOy8UKOHsV7s_eBjWB3XqHXMC0.jpg?width=108&crop=smart&auto=webp&s=5c5640f2a55dde5ca532841ea677a3d66c0a1a92', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Gk7VveshS4WyPfA47SOy8UKOHsV7s_eBjWB3XqHXMC0.jpg?width=216&crop=smart&auto=webp&s=0ba8639ece0343da6761f6322d8b6343b65803d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Gk7VveshS4WyPfA47SOy8UKOHsV7s_eBjWB3XqHXMC0.jpg?width=320&crop=smart&auto=webp&s=38ccaf61aff65fcc34762b5a5d6f9d3ec22f97b0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Gk7VveshS4WyPfA47SOy8UKOHsV7s_eBjWB3XqHXMC0.jpg?width=640&crop=smart&auto=webp&s=013efac2ee7015ea2a285e1af45d5c3b37d7387b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Gk7VveshS4WyPfA47SOy8UKOHsV7s_eBjWB3XqHXMC0.jpg?width=960&crop=smart&auto=webp&s=ead20a67b5de72e7e67ac93a57816b02e14c8d8c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Gk7VveshS4WyPfA47SOy8UKOHsV7s_eBjWB3XqHXMC0.jpg?width=1080&crop=smart&auto=webp&s=ef847c548e54f52e21fd6445d89fd6288c084fd0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Gk7VveshS4WyPfA47SOy8UKOHsV7s_eBjWB3XqHXMC0.jpg?auto=webp&s=5e7f78ac0ba65dd95ed66145063cf8204833259e', 'width': 1200}, 'variants': {}}]}
Cannot run inference with images on llama-cpp-python
1
[removed]
2025-06-07T05:53:07
https://www.reddit.com/r/LocalLLaMA/comments/1l5doaj/cannot_interence_with_images_on_llamacpppython/
Direct_Solid_6541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5doaj
false
null
t3_1l5doaj
/r/LocalLLaMA/comments/1l5doaj/cannot_interence_with_images_on_llamacpppython/
false
false
self
1
null
Cannot run inference with images on llama-cpp-python
1
[removed]
2025-06-07T05:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1l5dot2/cannot_interence_with_images_on_llamacpppython/
Direct_Solid_6541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5dot2
false
null
t3_1l5dot2
/r/LocalLLaMA/comments/1l5dot2/cannot_interence_with_images_on_llamacpppython/
false
false
self
1
null
Looking for ground-truth datasets for AI text classification tasks?
2
I'm asking because I came across a lot of benchmarks for AI models and at some point got confused. So I created my own text classification datasets with the help of a colleague: first for a paper, later out of curiosity. Are there publicly available ground-truth datasets? I would like to test open models' text classification capacity on my own. I know some authors publicly open their datasets; if there is a hub or other resource you can point me to, I'd appreciate it a lot.

One more, possibly rookie, question: is it reliable to use publicly available datasets to test AI model performance? Don't companies scrape these datasets to train their models? That feels like an issue. Yes, more data brings better performance, but if a company trained its model on the very data I'm benchmarking with, would my benchmarks be valid?
2025-06-07T07:06:24
https://www.reddit.com/r/LocalLLaMA/comments/1l5es3o/looking_for_ground_truth_datasets_for_ai_text/
datavisualist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5es3o
false
null
t3_1l5es3o
/r/LocalLLaMA/comments/1l5es3o/looking_for_ground_truth_datasets_for_ai_text/
false
false
self
2
null
Has anyone tested the RX 9060 XT for local inference yet?
6
Was browsing around for performance results, as I think this could be very interesting for a budget LLM build, but haven't found any benchmarks yet. Do you have insights into what to expect from this card for local inference? What are your expectations, and would you consider using it in your future builds?
2025-06-07T07:07:45
https://www.reddit.com/r/LocalLLaMA/comments/1l5esrq/has_anyone_tested_the_rx_9060_xt_for_local/
Mr_Moonsilver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5esrq
false
null
t3_1l5esrq
/r/LocalLLaMA/comments/1l5esrq/has_anyone_tested_the_rx_9060_xt_for_local/
false
false
self
6
null
Connect Your MCP Client to the Hugging Face Hub
1
2025-06-07T07:23:10
https://huggingface.co/changelog/hf-mcp-server
ab2377
huggingface.co
1970-01-01T00:00:00
0
{}
1l5f0zv
false
null
t3_1l5f0zv
/r/LocalLLaMA/comments/1l5f0zv/connect_your_mcp_client_to_the_hugging_face_hub/
false
false
default
1
{'enabled': False, 'images': [{'id': '8PIFHlbVQXzkSjccclalxbkJ0IRS2gfTguCTm2ykPzU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6zhkDqQ258w7lWdhFtguEmZzHAEVpfrpgsUV12nS_cE.jpg?width=108&crop=smart&auto=webp&s=026ed793e5b6f83453fa13610861256186ae470c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6zhkDqQ258w7lWdhFtguEmZzHAEVpfrpgsUV12nS_cE.jpg?width=216&crop=smart&auto=webp&s=51125adb1803189fcc30e276d6fb9d11030c77ed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6zhkDqQ258w7lWdhFtguEmZzHAEVpfrpgsUV12nS_cE.jpg?width=320&crop=smart&auto=webp&s=7192b32f4834e53a361714ef2205da77d0258bbb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6zhkDqQ258w7lWdhFtguEmZzHAEVpfrpgsUV12nS_cE.jpg?width=640&crop=smart&auto=webp&s=ad95f21fbea1dee0d81bed443b3a37666145f5d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6zhkDqQ258w7lWdhFtguEmZzHAEVpfrpgsUV12nS_cE.jpg?width=960&crop=smart&auto=webp&s=d79b62db37f2530e98c5d44617776cc98ad9ff0c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6zhkDqQ258w7lWdhFtguEmZzHAEVpfrpgsUV12nS_cE.jpg?width=1080&crop=smart&auto=webp&s=64bc8ab2c1d5838445234b9116e4fa8ac4c0c57d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6zhkDqQ258w7lWdhFtguEmZzHAEVpfrpgsUV12nS_cE.jpg?auto=webp&s=2f0ecc3d9c7e4a813eb7729aa4c14c6bbd39bfe4', 'width': 1200}, 'variants': {}}]}
The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text
156
2025-06-07T07:28:10
https://arxiv.org/abs/2506.05209
brown2green
arxiv.org
1970-01-01T00:00:00
0
{}
1l5f3m0
false
null
t3_1l5f3m0
/r/LocalLLaMA/comments/1l5f3m0/the_common_pile_v01_an_8tb_dataset_of_public/
false
false
default
156
null
LMStudio autostarts no matter what (windows)
3
I don't know if this is the right place for this post. I installed LMStudio on Windows. I am very picky about which apps auto-start with the system, and all decent, respectful apps have a setting for this and give you a choice. I could not find such an option in LMStudio... (please prove I am dumb). I went ahead and manually disabled LMStudio from auto-starting via Windows' system settings... yet after an update, LMStudio proudly auto-starts again on system boot. (cry) (One way to inspect the autostart registry entries is sketched after this record.)
2025-06-07T07:30:04
https://www.reddit.com/r/LocalLLaMA/comments/1l5f4ke/lmstudio_autostarts_no_matter_what_windows/
cangaroo_hamam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5f4ke
false
null
t3_1l5f4ke
/r/LocalLLaMA/comments/1l5f4ke/lmstudio_autostarts_no_matter_what_windows/
false
false
self
3
null
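On Windows, per-user autostart entries commonly live under the registry Run key, so listing that key is one way to find (and then manually delete) a re-registered entry. A minimal sketch; the exact value name LM Studio uses is not known here, and an app can also re-register via the Startup folder or Task Scheduler, so this only covers the most common mechanism.

```python
import winreg

# Per-user autostart entries: HKCU\Software\Microsoft\Windows\CurrentVersion\Run
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_autostart_entries():
    """Yield (name, command) pairs registered to start at login."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:  # no more values
                break
            yield name, value
            index += 1

if __name__ == "__main__":
    for name, command in list_autostart_entries():
        print(f"{name}: {command}")
```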
How to get started on understanding .cpp models
0
I am self-employed and have been coding a text processing application for a while now. Part of it relies on an LLM for various features, and I recently came to learn about ".cpp" models (especially the .cpp version of HF's SmolLM2); I am generally a big fan of all things lightweight. I am now planning to partner with another entity to develop my own small specialist model, and ideally I would want it to come in .cpp format as well, but I struggle to find resources about pursuing the .cpp route for non-existing / custom models. Can anyone suggest some resources in that regard? (A minimal loading example is sketched after this record.)
2025-06-07T07:42:11
https://www.reddit.com/r/LocalLLaMA/comments/1l5fasn/how_to_get_started_on_understanding_cpp_models/
RDA92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5fasn
false
null
t3_1l5fasn
/r/LocalLLaMA/comments/1l5fasn/how_to_get_started_on_understanding_cpp_models/
false
false
self
0
null
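For context on what ".cpp models" usually means in practice: the model weights are converted to the GGUF format and run with llama.cpp or its Python bindings, rather than being a different kind of model. A minimal sketch using llama-cpp-python, assuming a SmolLM2 GGUF has already been downloaded; the file path is a placeholder.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path is a placeholder: any GGUF produced by llama.cpp's
# convert_hf_to_gguf.py script (or downloaded pre-converted) works here.
llm = Llama(model_path="./SmolLM2-1.7B-Instruct-Q4_K_M.gguf", n_ctx=2048)

out = llm("Summarize: GGUF is llama.cpp's model file format.", max_tokens=64)
print(out["choices"][0]["text"])
```

The same conversion path applies to a custom fine-tuned model: train in the usual HF/PyTorch stack, then convert the checkpoint to GGUF.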
What is the best LLM for philosophy, history and general knowledge?
11
I love to ask chatbots philosophical stuff: about God, good, evil, the future, etc. I'm also a history buff; I love learning more about the Middle Ages, the Roman Empire, the Enlightenment, etc. I ask AI for book recommendations, and I like to question its line of reasoning in order to get many possible answers to the dilemmas I come up with. Which LLM would you say is best for that? I've been using Gemini but have not tested many others. I have Perplexity Pro for a year; would that be enough?
2025-06-07T07:58:27
https://www.reddit.com/r/LocalLLaMA/comments/1l5fj59/what_is_the_best_llm_for_philosophy_history_and/
Upbeat-Impact-6617
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5fj59
false
null
t3_1l5fj59
/r/LocalLLaMA/comments/1l5fj59/what_is_the_best_llm_for_philosophy_history_and/
false
false
self
11
null
Security Risks of PDF Upload with OCR and AI Processing (OpenAI)
1
[removed]
2025-06-07T08:00:21
https://www.reddit.com/r/LocalLLaMA/comments/1l5fk5d/security_risks_of_pdf_upload_with_ocr_and_ai/
Total_Ad6084
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5fk5d
false
null
t3_1l5fk5d
/r/LocalLLaMA/comments/1l5fk5d/security_risks_of_pdf_upload_with_ocr_and_ai/
false
false
self
1
null
What's the closest TTS to real-time voice cloning?
12
I have been out of the loop since the Sesame disaster. I recently needed a TTS that can speak in a cloned voice in as close to real time as possible. Have there been any recent developments? How do they compare to equivalent closed-source ones? Thanks for your time :)
2025-06-07T08:26:09
https://www.reddit.com/r/LocalLLaMA/comments/1l5fxp1/whats_the_closest_tts_to_real_time_voice_cloning/
Cheap_Concert168no
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5fxp1
false
null
t3_1l5fxp1
/r/LocalLLaMA/comments/1l5fxp1/whats_the_closest_tts_to_real_time_voice_cloning/
false
false
self
12
null
The more things change, the more they stay the same
1
[removed]
2025-06-07T08:29:07
https://i.redd.it/ksjooo59ug5f1.jpeg
Kooky-Somewhere-2883
i.redd.it
1970-01-01T00:00:00
0
{}
1l5fz6h
false
null
t3_1l5fz6h
/r/LocalLLaMA/comments/1l5fz6h/the_more_things_change_the_more_they_stay_the_same/
false
false
https://a.thumbs.redditm…3wLprII8FqZ4.jpg
1
{'enabled': True, 'images': [{'id': 'lSjTZqFYOSU_Hutj_ne5eqirAOZ-wX94CFhd1oJnPoI', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/ksjooo59ug5f1.jpeg?width=108&crop=smart&auto=webp&s=bc8df2de7423464b7a93c128316a866e691d8f67', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/ksjooo59ug5f1.jpeg?width=216&crop=smart&auto=webp&s=884f355d36dcf7bd1224ab1bea189209eae2ff4f', 'width': 216}, {'height': 395, 'url': 'https://preview.redd.it/ksjooo59ug5f1.jpeg?width=320&crop=smart&auto=webp&s=d98198068f29ffff80eb379c202ace3fdef82dae', 'width': 320}, {'height': 790, 'url': 'https://preview.redd.it/ksjooo59ug5f1.jpeg?width=640&crop=smart&auto=webp&s=334ca8566a796d23b75428b7a3f6e36a6bcbfab3', 'width': 640}, {'height': 1185, 'url': 'https://preview.redd.it/ksjooo59ug5f1.jpeg?width=960&crop=smart&auto=webp&s=638b91987d4e6657fdc4324b824c87e1fce4181b', 'width': 960}, {'height': 1333, 'url': 'https://preview.redd.it/ksjooo59ug5f1.jpeg?width=1080&crop=smart&auto=webp&s=f0660d012487230bb9685e55db9f01437a69250c', 'width': 1080}], 'source': {'height': 1585, 'url': 'https://preview.redd.it/ksjooo59ug5f1.jpeg?auto=webp&s=5eb4d95c7d09a0aec9449484e1ab4595f8cfba3c', 'width': 1284}, 'variants': {}}]}
The more things change, the more they stay the same
1,007
2025-06-07T08:37:07
https://i.redd.it/qzf8fxlovg5f1.jpeg
Kooky-Somewhere-2883
i.redd.it
1970-01-01T00:00:00
0
{}
1l5g36v
false
null
t3_1l5g36v
/r/LocalLLaMA/comments/1l5g36v/the_more_things_change_the_more_they_stay_the_same/
false
false
https://b.thumbs.redditm…MwfCKPYUkh4o.jpg
1,007
{'enabled': True, 'images': [{'id': 'BNUQEHOP5dU_qxiR2T20EtkPumO3t4Q1f-0pevXJjqM', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/qzf8fxlovg5f1.jpeg?width=108&crop=smart&auto=webp&s=ca1a1e11afc43d95abf94251e1420d02a8f88f71', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/qzf8fxlovg5f1.jpeg?width=216&crop=smart&auto=webp&s=83c3b96c80d6c5684aafca48d3b7d77a55a8518d', 'width': 216}, {'height': 395, 'url': 'https://preview.redd.it/qzf8fxlovg5f1.jpeg?width=320&crop=smart&auto=webp&s=397e2977ed920276943230b680f2c645bf316ade', 'width': 320}, {'height': 790, 'url': 'https://preview.redd.it/qzf8fxlovg5f1.jpeg?width=640&crop=smart&auto=webp&s=f5ac8fe23d8205c013d8f56cfb3833665ec4b3c3', 'width': 640}, {'height': 1185, 'url': 'https://preview.redd.it/qzf8fxlovg5f1.jpeg?width=960&crop=smart&auto=webp&s=0702b14c1fa2f6f67d605b3c6e19b38c735e01d3', 'width': 960}, {'height': 1333, 'url': 'https://preview.redd.it/qzf8fxlovg5f1.jpeg?width=1080&crop=smart&auto=webp&s=8a1697ff70b0619b04d1125013800aa500d67bde', 'width': 1080}], 'source': {'height': 1585, 'url': 'https://preview.redd.it/qzf8fxlovg5f1.jpeg?auto=webp&s=af3bdce605044150e62429c2a536477355a0ed1c', 'width': 1284}, 'variants': {}}]}
Really Gemini?
1
[removed]
2025-06-07T08:39:32
https://i.redd.it/ldc6xf64wg5f1.png
younestft
i.redd.it
1970-01-01T00:00:00
0
{}
1l5g4db
false
null
t3_1l5g4db
/r/LocalLLaMA/comments/1l5g4db/really_gemini/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ldc6xf64wg5f1', 'resolutions': [{'height': 186, 'url': 'https://preview.redd.it/ldc6xf64wg5f1.png?width=108&crop=smart&auto=webp&s=12b49b522adf2c3ac303ecb213aa88b076d29d9c', 'width': 108}, {'height': 372, 'url': 'https://preview.redd.it/ldc6xf64wg5f1.png?width=216&crop=smart&auto=webp&s=1969b652e22684e29d3b72e8e581ce9274b18e52', 'width': 216}, {'height': 551, 'url': 'https://preview.redd.it/ldc6xf64wg5f1.png?width=320&crop=smart&auto=webp&s=904e0c13aa6c9cb12b8118decffce6a9d4eb6259', 'width': 320}, {'height': 1102, 'url': 'https://preview.redd.it/ldc6xf64wg5f1.png?width=640&crop=smart&auto=webp&s=03c075e34fec8fa3396bbbde52905d47be89d3de', 'width': 640}, {'height': 1654, 'url': 'https://preview.redd.it/ldc6xf64wg5f1.png?width=960&crop=smart&auto=webp&s=35c7a34e07f8733298b31320a859e8916f9a8503', 'width': 960}, {'height': 1861, 'url': 'https://preview.redd.it/ldc6xf64wg5f1.png?width=1080&crop=smart&auto=webp&s=9916501e9b51dfc76c31ffc6c1ba52f949f3f50c', 'width': 1080}], 'source': {'height': 1861, 'url': 'https://preview.redd.it/ldc6xf64wg5f1.png?auto=webp&s=f2e8fe10edc22f025314f9985bbab3f7d7cca0de', 'width': 1080}, 'variants': {}}]}
How to integrate MCP into React with one command
1
[removed]
2025-06-07T08:46:14
https://i.redd.it/nal5tl34xg5f1.jpeg
anmolbaranwal
i.redd.it
1970-01-01T00:00:00
0
{}
1l5g7rr
false
null
t3_1l5g7rr
/r/LocalLLaMA/comments/1l5g7rr/how_to_integrate_mcp_into_react_with_one_command/
false
false
default
1
{'enabled': True, 'images': [{'id': 'nal5tl34xg5f1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/nal5tl34xg5f1.jpeg?width=108&crop=smart&auto=webp&s=bfff779c827daa8541f9a88584cca1c591ecf3cf', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/nal5tl34xg5f1.jpeg?width=216&crop=smart&auto=webp&s=a794b6763d3ac2a436e1e0991fd66786ce5335af', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/nal5tl34xg5f1.jpeg?width=320&crop=smart&auto=webp&s=f157390be6d9baeb4d00943c6cb936507c4aabc3', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/nal5tl34xg5f1.jpeg?width=640&crop=smart&auto=webp&s=a23c14409b5eb9a25e9582da4c3773a728c1aead', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/nal5tl34xg5f1.jpeg?auto=webp&s=c6c39b9d8bc737ebf1580b78d8ab680e220794dc', 'width': 720}, 'variants': {}}]}
How to integrate MCP into React with one command
1
[removed]
2025-06-07T08:56:24
https://www.reddit.com/r/LocalLLaMA/comments/1l5gcyn/how_to_integrate_mcp_into_react_with_one_command/
anmolbaranwal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5gcyn
false
null
t3_1l5gcyn
/r/LocalLLaMA/comments/1l5gcyn/how_to_integrate_mcp_into_react_with_one_command/
false
false
self
1
null
How to integrate MCP into React with one command
0
There are many frameworks, like the OpenAI Agents SDK, MCP-Agent, Google ADK, Vercel AI SDK, and Praison AI, to help you build MCP agents. But integrating MCP within a React app is still complex, so I created a free guide to do it with just one command using the CopilotKit CLI. Here is the command: `npx copilotkit@latest init -m MCP` I have covered all the concepts involved (including architecture) and also showed how to code the complete integration from scratch. Would love your feedback, especially if there's anything important I have missed or misunderstood.
2025-06-07T09:06:40
https://www.reddit.com/r/LocalLLaMA/comments/1l5giiz/how_to_integrate_mcp_into_react_with_one_command/
anmolbaranwal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5giiz
false
null
t3_1l5giiz
/r/LocalLLaMA/comments/1l5giiz/how_to_integrate_mcp_into_react_with_one_command/
false
false
self
0
null
Optimizing DeepSeek-R1-0528 Inference Speed
1
[removed]
2025-06-07T09:36:06
https://www.reddit.com/r/LocalLLaMA/comments/1l5gxl0/optimizing_deepseekr10528_inference_speed/
prepytixel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5gxl0
false
null
t3_1l5gxl0
/r/LocalLLaMA/comments/1l5gxl0/optimizing_deepseekr10528_inference_speed/
false
false
self
1
null
Optimizing DeepSeek-R1-0528 Inference Speed
1
[removed]
2025-06-07T09:40:43
https://www.reddit.com/r/LocalLLaMA/comments/1l5gzxo/optimizing_deepseekr10528_inference_speed/
prepytixel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5gzxo
false
null
t3_1l5gzxo
/r/LocalLLaMA/comments/1l5gzxo/optimizing_deepseekr10528_inference_speed/
false
false
self
1
null
Deepseek R1 Performance Optimization
1
[removed]
2025-06-07T09:53:18
https://www.reddit.com/r/LocalLLaMA/comments/1l5h6a5/deepseek_r1_performance_optimization/
prepytixel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5h6a5
false
null
t3_1l5h6a5
/r/LocalLLaMA/comments/1l5h6a5/deepseek_r1_performance_optimization/
false
false
self
1
null
ktransformers AMX
1
[removed]
2025-06-07T09:53:26
https://www.reddit.com/r/LocalLLaMA/comments/1l5h6cl/ktransformers_amx/
festr2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5h6cl
false
null
t3_1l5h6cl
/r/LocalLLaMA/comments/1l5h6cl/ktransformers_amx/
false
false
self
1
null
What is the best and affordable uncensored model to fine tune with your own data?
1
[removed]
2025-06-07T10:09:06
https://www.reddit.com/r/LocalLLaMA/comments/1l5hevz/what_is_the_best_and_affordable_uncensored_model/
sprmgtrb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5hevz
false
null
t3_1l5hevz
/r/LocalLLaMA/comments/1l5hevz/what_is_the_best_and_affordable_uncensored_model/
false
false
self
1
null
Chat UI that allows editing generated think tokens
3
Title says it all: is there a UI application that allows modifying the thinking tokens already generated ("changing the words") and then rerunning the final answer? I know I can do that in a notebook with prefixing, but I'm looking for a complete system.
2025-06-07T10:52:33
https://www.reddit.com/r/LocalLLaMA/comments/1l5i24u/chat_ui_that_allows_editing_generated_think_tokens/
liquid_bee_3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5i24u
false
null
t3_1l5i24u
/r/LocalLLaMA/comments/1l5i24u/chat_ui_that_allows_editing_generated_think_tokens/
false
false
self
3
null
Deepseek
82
I am using UD Q2-XL now and it works great on my 3955WX TR with 256GB DDR4 and 2x3090 (using only one 3090 gives roughly the same speed, but with 32k context). Approx. 8 t/s generation speed and 245 t/s pp speed at ctx-size 71680. I am using ik_llama and am very satisfied with the results. I threw 20k tokens of code files at it, and after 10-15 minutes of thinking it gives me very high quality responses. (The throughput arithmetic is checked in the sketch after this record.)

| PP | TG | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|:-|:-|:-|:-|:-|:-|:-|
| 7168 | 1792 | 0 | 29.249 | 245.07 | 225.164 | 7.96 |

./build/bin/llama-sweep-bench --model /home/ciprian/ai/models/DeepseekR1-0523-Q2-XL-UD/DeepSeek-R1-0528-UD-Q2_K_XL-00001-of-00006.gguf --alias DeepSeek-R1-0528-UD-Q2_K_XL --ctx-size 71680 -ctk q8_0 -mla 3 -fa -amb 512 -fmoe --temp 0.6 --top_p 0.95 --min_p 0.01 --n-gpu-layers 63 -ot "blk.[0-3].ffn_up_exps=CUDA0,blk.[0-3].ffn_gate_exps=CUDA0,blk.[0-3].ffn_down_exps=CUDA0" -ot "blk.1[0-2].ffn_up_exps=CUDA1,blk.1[0-2].ffn_gate_exps=CUDA1" --override-tensor exps=CPU --parallel 1 --threads 16 --threads-batch 16 --host 0.0.0.0 --port 5002 --ubatch-size 7168 --batch-size 7168 --no-mmap
2025-06-07T12:17:49
https://www.reddit.com/r/LocalLLaMA/comments/1l5jh4y/deepseek/
ciprianveg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5jh4y
false
null
t3_1l5jh4y
/r/LocalLLaMA/comments/1l5jh4y/deepseek/
false
false
self
82
null
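As referenced in the post above, a quick check of the llama-sweep-bench columns: throughput is just tokens divided by wall time, so the reported S_PP and S_TG follow directly from PP/T_PP and TG/T_TG.

```python
# Numbers taken from the sweep-bench row in the post above.
pp_tokens, t_pp = 7168, 29.249    # prompt-processing tokens and seconds
tg_tokens, t_tg = 1792, 225.164   # generated tokens and seconds

print(f"S_PP = {pp_tokens / t_pp:.2f} t/s")  # ~245.07, matches the table
print(f"S_TG = {tg_tokens / t_tg:.2f} t/s")  # ~7.96, matches the table
```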
Local inference with Snapdragon X Elite
8
A while ago a bunch of "AI laptops" came out which were supposedly great for LLMs because they had "NPUs". Has anybody bought one and tried it out? I'm not sure if this hardware is supported for local inference with common libraries etc. Thanks!
2025-06-07T12:49:19
https://www.reddit.com/r/LocalLLaMA/comments/1l5k290/local_inference_with_snapdragon_x_elite/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5k290
false
null
t3_1l5k290
/r/LocalLLaMA/comments/1l5k290/local_inference_with_snapdragon_x_elite/
false
false
self
8
null
LLMs vs LRMs (beyond marketing): Large Language Models (GPT-4/4o) vs Large Reasoning Models (GPT o1/o3)
1
[removed]
2025-06-07T13:37:45
https://www.reddit.com/r/LocalLLaMA/comments/1l5l1ht/llms_vs_lrms_beyond_marketing_large_language/
ditpoo94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5l1ht
false
null
t3_1l5l1ht
/r/LocalLLaMA/comments/1l5l1ht/llms_vs_lrms_beyond_marketing_large_language/
false
false
self
1
null
Get Claude at Home! Local UI model for web development with chart components and UI elements, with UI-based hybrid reasoning on Qwen3. Tesslate/UIGEN-T3-14B
6
2025-06-07T14:06:32
https://www.reddit.com/gallery/1l5ln3h
United-Rush4073
reddit.com
1970-01-01T00:00:00
0
{}
1l5ln3h
false
null
t3_1l5ln3h
/r/LocalLLaMA/comments/1l5ln3h/get_claude_at_home_local_ui_model_for_web/
false
false
https://b.thumbs.redditm…4W7_z9xP9fhs.jpg
6
null
Could AI Be the Next Bubble? Dot-Com Echoes, Crisis Triggers, and What You Think
1
[removed]
2025-06-07T14:25:35
https://www.reddit.com/r/LocalLLaMA/comments/1l5m1u5/could_ai_be_the_next_bubble_dotcom_echoes_crisis/
Necessary-Tap5971
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5m1u5
false
null
t3_1l5m1u5
/r/LocalLLaMA/comments/1l5m1u5/could_ai_be_the_next_bubble_dotcom_echoes_crisis/
false
false
self
1
null
Conversational Agent for automating SOP(Policies)
4
What is the best input format, like YAML- or JSON-based graphs, for automating an SOP through a conversational AI agent? And which framework is currently best suited for this? I cannot hand-code these SOPs, as I have more than 100 of them to automate. (A sketch of one possible graph encoding follows this record.)

Example SOP for e-commerce:

* Get the list of all orders (open and past) placed from the customer's WhatsApp number.
* If the customer has no orders, inform the customer that no purchases were found linked to the WhatsApp number.
* If the customer has multiple orders, ask the customer to specify the Order ID (or forward the order confirmation) for which the customer needs help.
* If the selected order status is Processing / Pending-Payment / Pending-Verification:
  * If the customer wants to cancel the order, confirm the request, trigger "Order → Cancel → Immediate Refund", and notify the Finance team.
  * If the customer asks for a return/refund/replacement before the item ships, explain that only a cancellation is possible at this stage; returns begin after delivery.
* If the order status is Shipped / In Transit:
  * If it is < 12 hours since dispatch (intercept window open), offer an in-transit cancellation; on customer confirmation, raise a courier-intercept ticket and update the customer.
  * If it is ≥ 12 hours since dispatch, inform the customer that in-transit cancellation is no longer possible. Advise them to refuse delivery or to initiate a return after delivery.
2025-06-07T14:27:39
https://www.reddit.com/r/LocalLLaMA/comments/1l5m3j3/conversational_agent_for_automating_soppolicies/
dnivra26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5m3j3
false
null
t3_1l5m3j3
/r/LocalLLaMA/comments/1l5m3j3/conversational_agent_for_automating_soppolicies/
false
false
self
4
null
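As mentioned in the post above, here is one hypothetical way to encode such an SOP as a JSON-style decision graph. The schema (nodes with a `check` or an `action`, plus outcome-keyed `edges`) and all node names are invented for illustration, not tied to any specific agent framework; an agent runtime would walk this graph turn by turn.

```python
import json

# Hypothetical encoding of the e-commerce SOP from the post above.
sop_graph = {
    "start": {"check": "orders_for_whatsapp_number",
              "edges": {"none": "no_orders", "many": "ask_order_id", "one": "route_by_status"}},
    "no_orders": {"action": "inform_no_purchases_found"},
    "ask_order_id": {"action": "request_order_id", "edges": {"provided": "route_by_status"}},
    "route_by_status": {"check": "order_status",
                        "edges": {"processing": "cancel_flow", "shipped": "intercept_flow"}},
    "cancel_flow": {"action": "confirm_then_trigger_cancel_and_refund"},
    "intercept_flow": {"check": "hours_since_dispatch < 12",
                       "edges": {"yes": "offer_intercept", "no": "advise_refuse_or_return"}},
    "offer_intercept": {"action": "raise_courier_intercept_ticket"},
    "advise_refuse_or_return": {"action": "explain_no_intercept_possible"},
}

print(json.dumps(sop_graph, indent=2))
```

A flat node/edge structure like this serializes identically to YAML or JSON, which makes it practical to generate the 100+ graphs from existing SOP documents rather than hand-coding each one.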
langchain4j google-ai-gemini
0
I am seeking help to upgrade from Gemini 2.0 Flash to Gemini 2.5 Flash. Has anyone done this before or is currently working on it? If you have any ideas or experience with this upgrade, could you please help me complete it?
2025-06-07T14:56:38
https://www.reddit.com/r/LocalLLaMA/comments/1l5mqjj/langchain4j_googleaigemini/
Additional-Demand-78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5mqjj
false
null
t3_1l5mqjj
/r/LocalLLaMA/comments/1l5mqjj/langchain4j_googleaigemini/
false
false
self
0
null
The Google AI Studio free tier isn't going anywhere anytime soon
1
[removed]
2025-06-07T15:25:39
[deleted]
1970-01-01T00:00:00
0
{}
1l5neni
false
null
t3_1l5neni
/r/LocalLLaMA/comments/1l5neni/the_google_ai_studio_free_tier_isnt_going/
false
false
default
1
null
Best Model for Coding and Game Dev with Godot
1
[removed]
2025-06-07T15:37:52
https://www.reddit.com/r/LocalLLaMA/comments/1l5nops/best_model_for_coding_and_game_dev_with_godot/
Venconquis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5nops
false
null
t3_1l5nops
/r/LocalLLaMA/comments/1l5nops/best_model_for_coding_and_game_dev_with_godot/
false
false
self
1
null
LMStudio Gemma QAT vs Unsloth Gemma QAT
54
[avg@10 performance](https://preview.redd.it/y1bmyqyh0j5f1.png?width=3523&format=png&auto=webp&s=c7d652a3482277e0da395ff06d7d31e40d3bdd25) [success % of each model on each problem (on the 10 attempts available)](https://preview.redd.it/l0j9spei0j5f1.png?width=4935&format=png&auto=webp&s=47a1ab8472e07a2f5ce322a0ed77a9dc9faa2ad1)

I tested the Gemma 3 27B, 12B, and 4B QAT GGUFs on AIME 2024 with 10 runs for each of the 30 problems. For this test I used both the Unsloth and LMStudio versions, and the results are quite interesting, although not definitive (I am not sure all of them cross statistical significance). If you're interested in the code I used, check [here](https://github.com/Belluxx/LocalAIME). (A sketch of the metric computation follows this record.)
2025-06-07T15:52:39
https://www.reddit.com/r/LocalLLaMA/comments/1l5o13i/lmstudio_gemma_qat_vs_unsloth_gemma_qat/
EntropyMagnets
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5o13i
false
null
t3_1l5o13i
/r/LocalLLaMA/comments/1l5o13i/lmstudio_gemma_qat_vs_unsloth_gemma_qat/
false
false
https://b.thumbs.redditm…f3jlTfXF5bck.jpg
54
{'enabled': False, 'images': [{'id': 'UZy0QzGADqAIskEh5ONaettN7Yzx1s_YqWBEm5G0bbY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gOPWOfQ-0nh7k8jUTSkqlAPIa5Ds-RElf4m0WrgaQ_I.jpg?width=108&crop=smart&auto=webp&s=394d4559602ba4e54c041db7c6704c020b3f1e50', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gOPWOfQ-0nh7k8jUTSkqlAPIa5Ds-RElf4m0WrgaQ_I.jpg?width=216&crop=smart&auto=webp&s=bae14533abd3cf1d73a215d1adb651ba3ca97d86', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gOPWOfQ-0nh7k8jUTSkqlAPIa5Ds-RElf4m0WrgaQ_I.jpg?width=320&crop=smart&auto=webp&s=0b1b3686172cf021dfcb695991f1f88927484200', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gOPWOfQ-0nh7k8jUTSkqlAPIa5Ds-RElf4m0WrgaQ_I.jpg?width=640&crop=smart&auto=webp&s=c11159d5cb84d33c948b11d577aeb74e4b2c3040', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gOPWOfQ-0nh7k8jUTSkqlAPIa5Ds-RElf4m0WrgaQ_I.jpg?width=960&crop=smart&auto=webp&s=885d34c101970c6522b76575a57d1e557a611d29', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gOPWOfQ-0nh7k8jUTSkqlAPIa5Ds-RElf4m0WrgaQ_I.jpg?width=1080&crop=smart&auto=webp&s=866b6d0f4123fee49983d558c53d80b897e7aef8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gOPWOfQ-0nh7k8jUTSkqlAPIa5Ds-RElf4m0WrgaQ_I.jpg?auto=webp&s=6440ba77af1c07cf17638d5c2b666a7978189bf9', 'width': 1200}, 'variants': {}}]}
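A sketch of how the two plotted metrics from the post above can be computed from raw per-problem attempt results. The data layout here is an assumption for illustration; see the linked LocalAIME repo for the author's actual code.

```python
# attempts[problem_id] = list of 10 booleans, True if that run was correct.
attempts = {
    "aime_p1": [True, True, False, True, True, True, True, False, True, True],
    "aime_p2": [False] * 10,
}

# Success % per problem: fraction of the 10 attempts that were correct.
success_pct = {p: 100 * sum(r) / len(r) for p, r in attempts.items()}

# avg@10 over the benchmark: mean per-problem success rate.
avg_at_10 = sum(success_pct.values()) / len(success_pct) / 100

print(success_pct)                   # {'aime_p1': 80.0, 'aime_p2': 0.0}
print(f"avg@10 = {avg_at_10:.2f}")   # 0.40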
Avian.io scammers?
32
Does anyone else have the problem that avian.io tries to debit money without any reason? I used avian.io for 2 days in January and put €10 prepaid on there, didn't like it, and 5 months later, in May, they tried to withdraw €178. Luckily I used Revolut and didn't have enough money in that account. Automatic top-up is deactivated on Avian and I have no deployments or subscriptions. Today they tried to debit €441! In my account there are no billings or usage statistics for anything besides a few cents over 2 days in January. Are they insolvent and just trying to scam their users out of a few last hundred euros?
2025-06-07T16:01:01
https://www.reddit.com/gallery/1l5o84i
OneLovePlus
reddit.com
1970-01-01T00:00:00
0
{}
1l5o84i
false
null
t3_1l5o84i
/r/LocalLLaMA/comments/1l5o84i/avianio_scammers/
false
false
https://b.thumbs.redditm…jZKJvkvNkOvs.jpg
32
null
Issues with Models Gemma, Qwen and OpenWebUI WebSearch
1
[removed]
2025-06-07T16:32:23
https://www.reddit.com/r/LocalLLaMA/comments/1l5oyc8/issues_with_models_gemma_qwen_and_openwebui/
GreenEventHorizon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5oyc8
false
null
t3_1l5oyc8
/r/LocalLLaMA/comments/1l5oyc8/issues_with_models_gemma_qwen_and_openwebui/
false
false
self
1
null
Looking for successful vLLM + GPTQ/AWQ setups on AMD 7900XT(X) — did anyone get it working?
1
[removed]
2025-06-07T16:43:48
https://www.reddit.com/r/LocalLLaMA/comments/1l5p7q9/looking_for_successful_vllm_gptqawq_setups_on_amd/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5p7q9
false
null
t3_1l5p7q9
/r/LocalLLaMA/comments/1l5p7q9/looking_for_successful_vllm_gptqawq_setups_on_amd/
false
false
https://b.thumbs.redditm…h75JxkVEAI5Q.jpg
1
null
vLLM + GPTQ/AWQ setups on AMD 7900 xtx — did anyone get it working?
1
[removed]
2025-06-07T16:45:00
https://www.reddit.com/r/LocalLLaMA/comments/1l5p8s8/vllm_gptqawq_setups_on_amd_7900_xtx_did_anyone/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5p8s8
false
null
t3_1l5p8s8
/r/LocalLLaMA/comments/1l5p8s8/vllm_gptqawq_setups_on_amd_7900_xtx_did_anyone/
false
false
self
1
null
vLLM + GPTQ/AWQ setups on AMD 7900 xtx - did anyone get it working?
6
Hey! If someone here has successfully launched **Qwen3-32B** or any other model using **GPTQ** or **AWQ**, please share your experience and method; it would be extremely helpful!

I've tried multiple approaches to run the model, but I keep getting either gibberish or exclamation marks instead of meaningful output. (A quick endpoint test is sketched after this record.)

**Hardware specs:**

* Motherboard: MZ32-AR0
* RAM: 6x32GB DDR4-3200
* GPUs: 4x RX 7900XT + 1x RX 7900XT

**Current config (docker-compose for vLLM):**

    services:
      vllm:
        pull_policy: always
        tty: true
        ports:
          - 8000:8000
        image: ghcr.io/embeddedllm/vllm-rocm:v0.9.0-rocm6.4
        volumes:
          - /mnt/tb_disk/llm:/app/models
        devices:
          - /dev/kfd:/dev/kfd
          - /dev/dri:/dev/dri
        environment:
          - ROCM_VISIBLE_DEVICES=0,1,2,3
          - CUDA_VISIBLE_DEVICES=0,1,2,3
          - HSA_OVERRIDE_GFX_VERSION=11.0.0
          - HIP_VISIBLE_DEVICES=0,1,2,3
        command: sh -c 'vllm serve /app/models/models/vllm/Qwen3-4B-autoround-4bit-gptq --gpu-memory-utilization 0.999 --max_model_len 4000 -tp 4'
    volumes: {}
2025-06-07T16:46:52
https://www.reddit.com/r/LocalLLaMA/comments/1l5pab6/vllm_gptqawq_setups_on_amd_7900_xtx_did_anyone/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5pab6
false
null
t3_1l5pab6
/r/LocalLLaMA/comments/1l5pab6/vllm_gptqawq_setups_on_amd_7900_xtx_did_anyone/
false
false
self
6
null
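If the compose stack above does come up, a quick way to reproduce the gibberish/exclamation-mark symptom deterministically is to hit vLLM's OpenAI-compatible endpoint on the exposed port. A minimal sketch; by default vLLM names the served model after its path, so the `model` string must match the path from the compose file (or any `--served-model-name` override).

```python
from openai import OpenAI  # pip install openai

# vLLM serves an OpenAI-compatible API; the key is unused but required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

resp = client.completions.create(
    model="/app/models/models/vllm/Qwen3-4B-autoround-4bit-gptq",
    prompt="The capital of France is",
    max_tokens=16,
    temperature=0.0,  # deterministic: gibberish here points at the quant/ROCm stack
)
print(resp.choices[0].text)
```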
What is the next local model that will beat deepseek 0528?
46
I know it's not really local for most of us for practical reasons but it is at least in theory.
2025-06-07T16:55:07
https://www.reddit.com/r/LocalLLaMA/comments/1l5ph7v/what_is_the_next_local_model_that_will_beat/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5ph7v
false
null
t3_1l5ph7v
/r/LocalLLaMA/comments/1l5ph7v/what_is_the_next_local_model_that_will_beat/
false
false
self
46
null
Experience with Phi-4 reasoning models
1
[removed]
2025-06-07T16:58:16
https://www.reddit.com/r/LocalLLaMA/comments/1l5pjsd/experience_with_phi4_reasoning_models/
Barry_Jumps
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5pjsd
false
null
t3_1l5pjsd
/r/LocalLLaMA/comments/1l5pjsd/experience_with_phi4_reasoning_models/
false
false
self
1
null
[Showcase] Hydravisor – Secure AI Workload Orchestration in Rust (TUI, VMs, Ollama, Bedrock)
1
[removed]
2025-06-07T17:12:55
https://www.reddit.com/r/LocalLLaMA/comments/1l5pwf9/showcase_hydravisor_secure_ai_workload/
TheKelsbee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5pwf9
false
null
t3_1l5pwf9
/r/LocalLLaMA/comments/1l5pwf9/showcase_hydravisor_secure_ai_workload/
false
false
self
1
null
Release: Lightweight JS Markdown WYSIWYG editor for local-LLM prompt crafting (repo + live demo)
1
[removed]
2025-06-07T17:42:37
https://www.reddit.com/r/LocalLLaMA/comments/1l5qln6/release_lightweight_js_markdown_wysiwyg_editor/
celsowm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5qln6
false
null
t3_1l5qln6
/r/LocalLLaMA/comments/1l5qln6/release_lightweight_js_markdown_wysiwyg_editor/
false
false
self
1
null
Stack2Flat for flattening text files
1
[removed]
2025-06-07T18:00:32
https://www.reddit.com/r/LocalLLaMA/comments/1l5r0sn/stack2flat_for_flattening_text_files/
kal0kag0thia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5r0sn
false
null
t3_1l5r0sn
/r/LocalLLaMA/comments/1l5r0sn/stack2flat_for_flattening_text_files/
false
false
self
1
null
Worked on this ChatAPI+Frontend for Local and API inference until I burned myself out completely, and haven't touched it at all for a year. Should I finish it? Or are you satisfied with your current solutions for inference ( Supports: ExllamaV2, Ollama & OpenAI / Gemini / Claude / Grok )
0
2025-06-07T18:06:41
https://www.reddit.com/gallery/1l5r66x
Severin_Suveren
reddit.com
1970-01-01T00:00:00
0
{}
1l5r66x
false
null
t3_1l5r66x
/r/LocalLLaMA/comments/1l5r66x/worked_on_this_chatapifrontend_for_local_and_api/
false
false
https://b.thumbs.redditm…8Qan83KMf3rs.jpg
0
null
Worked on this ChatAPI+Frontend for Local and API inference until I burned myself out completely, and haven't touched it at all for a year. Should I finish it? Or are you satisfied with your current solutions for inference ( Supports: ExllamaV2, Ollama & OpenAI / Gemini / Claude / Grok )
0
2025-06-07T18:06:54
https://www.reddit.com/gallery/1l5r6du
Severin_Suveren
reddit.com
1970-01-01T00:00:00
0
{}
1l5r6du
false
null
t3_1l5r6du
/r/LocalLLaMA/comments/1l5r6du/worked_on_this_chatapifrontend_for_local_and_api/
false
false
default
0
null
Worked on this ChatAPI+Frontend for Local and API inference until I burned myself out completely, and haven't touched it at all for a year. Should I finish it? Or are you satisfied with your current solutions for inference ( Supports: ExllamaV2, Ollama & OpenAI / Gemini / Claude / Grok )
0
2025-06-07T18:07:32
https://www.reddit.com/gallery/1l5r6vc
Severin_Suveren
reddit.com
1970-01-01T00:00:00
0
{}
1l5r6vc
false
null
t3_1l5r6vc
/r/LocalLLaMA/comments/1l5r6vc/worked_on_this_chatapifrontend_for_local_and_api/
false
false
default
0
null
Worked on this ChatAPI+Frontend for Local and API inference until I burned myself out completely, and haven't touched it at all for a year. Should I finish it? Or are you satisfied with your current solutions for inference ( Supports: ExllamaV2, Ollama & OpenAI / Gemini / Claude / Grok )
0
2025-06-07T18:09:03
https://imgur.com/a/D8uFE0E
Severin_Suveren
imgur.com
1970-01-01T00:00:00
0
{}
1l5r883
false
{'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 60, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FD8uFE0E%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D500&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FD8uFE0E&image=https%3A%2F%2Fi.imgur.com%2FlH92gG6.jpg%3Ffb&type=text%2Fhtml&schema=imgur" width="500" height="60" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 1818, 'thumbnail_url': 'https://i.imgur.com/lH92gG6.jpg?fb', 'thumbnail_width': 3839, 'title': 'Imgur', 'type': 'rich', 'url': 'https://imgur.com/a/D8uFE0E', 'version': '1.0', 'width': 500}, 'type': 'imgur.com'}
t3_1l5r883
/r/LocalLLaMA/comments/1l5r883/worked_on_this_chatapifrontend_for_local_and_api/
false
false
default
0
null
Got an LLM to write a fully standards-compliant HTTP 2.0 server via a code-compile-test loop
80
I made a [framework](https://github.com/outervation/promptyped) for structuring long LLM workflows, and managed to get it to build a full HTTP 2.0 server from scratch: 15k lines of source code and over 30k lines of tests, passing all the [h2spec](https://github.com/summerwind/h2spec) conformance tests. Although this task used Gemini 2.5 Pro as the LLM, the framework itself is open source (Apache 2.0) and it shouldn't be too hard to make it work with local models if anyone's interested, especially ones that support the OpenRouter/OpenAI-style API. So I thought I'd share it here in case anybody might find it useful (although it's still currently in an alpha state). The framework is [https://github.com/outervation/promptyped](https://github.com/outervation/promptyped), and the server it built is [https://github.com/outervation/AiBuilt_llmahttap](https://github.com/outervation/AiBuilt_llmahttap) (I wouldn't recommend anyone actually use it; it's just interesting as an example of how a 100% LLM-architected and coded application may look). I also wrote a blog post detailing some of the changes to the framework needed to support building an application of non-trivial size: [https://outervationai.substack.com/p/building-a-100-llm-written-standards](https://outervationai.substack.com/p/building-a-100-llm-written-standards).
2025-06-07T18:33:23
https://www.reddit.com/r/LocalLLaMA/comments/1l5rsis/got_an_llm_to_write_a_fully_standardscompliant/
logicchains
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5rsis
false
null
t3_1l5rsis
/r/LocalLLaMA/comments/1l5rsis/got_an_llm_to_write_a_fully_standardscompliant/
false
false
self
80
{'enabled': False, 'images': [{'id': '4SJry6D7g-LwCUQrcbsqiHc52fzVjg6z_aLZ90Ns8R0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NWnwdAR3ClxEtJMKZS4_ru0P6b3uPpgAAX9QLItwBP4.jpg?width=108&crop=smart&auto=webp&s=a8af01a50253f648d76926691581d65fa4c6f431', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NWnwdAR3ClxEtJMKZS4_ru0P6b3uPpgAAX9QLItwBP4.jpg?width=216&crop=smart&auto=webp&s=0fb41b6394d87445e1914a334137e45c66978c1a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NWnwdAR3ClxEtJMKZS4_ru0P6b3uPpgAAX9QLItwBP4.jpg?width=320&crop=smart&auto=webp&s=81793be16bf155dfb5826b3adfe420e167e959d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NWnwdAR3ClxEtJMKZS4_ru0P6b3uPpgAAX9QLItwBP4.jpg?width=640&crop=smart&auto=webp&s=58d94a13a45459e46a787016e59eab3b37992d33', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NWnwdAR3ClxEtJMKZS4_ru0P6b3uPpgAAX9QLItwBP4.jpg?width=960&crop=smart&auto=webp&s=66ce7487a42cd2d156481ebd23b8490f5cd51390', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NWnwdAR3ClxEtJMKZS4_ru0P6b3uPpgAAX9QLItwBP4.jpg?width=1080&crop=smart&auto=webp&s=96d91a99306c0931e725c12248012314e632cc3b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NWnwdAR3ClxEtJMKZS4_ru0P6b3uPpgAAX9QLItwBP4.jpg?auto=webp&s=0d687065f4dd3f4075e1466d4db49501b3d7a512', 'width': 1200}, 'variants': {}}]}
Help for Ollama local Models
1
[removed]
2025-06-07T18:39:56
https://www.reddit.com/r/LocalLLaMA/comments/1l5ry25/help_for_ollama_local_models/
Tgthemen123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5ry25
false
null
t3_1l5ry25
/r/LocalLLaMA/comments/1l5ry25/help_for_ollama_local_models/
false
false
self
1
null
Llama-3.1-70B Holds Its Own vs GPT-4o on URL-Scraping Benchmark (Costs 2 ¢/hit)
1
[removed]
2025-06-07T18:54:50
https://www.reddit.com/r/LocalLLaMA/comments/1l5saad/llama3170b_holds_its_own_vs_gpt4o_on_urlscraping/
Putrid-Television981
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5saad
false
null
t3_1l5saad
/r/LocalLLaMA/comments/1l5saad/llama3170b_holds_its_own_vs_gpt4o_on_urlscraping/
false
false
self
1
null
Reinforcement learning a model for symbolic / context compression to saturate semantic bandwidth? (then retraining reasoning in the native compression space)
10
Hey there folks, I am currently unable to work on my project due to difficulties with vLLM and NCCL (that Python/ML ecosystem is FUCKING crazy), so in the meantime I'm sharing my ideas so we can discuss and get some dopamine hits. I will try to keep the technical details and philosophies out of this post and stick to the concrete concept.

Back when ChatGPT 3.5 came out, there was a party trick that made the rounds of Twitter, shown in the first two images. Then we never heard about it again as the context window increased. Then in 2024 there were all sorts of "schizo" outputs that people researched, under many names such as super-prompting, xenocognition, etc., many at high temperature, some obtained at ordinary values like 1.0. Then reinforcement learning took off and we got R1-zero, which by itself reproduced these kinds of outputs without any steering in this direction, but in a way that actually appeared to improve results on benchmarks.

So what I have done is attempt to construct a framework around R1-zero, from which I can build additional methods and concepts to achieve R1-zero-type models with more intention towards far higher reasoning performance. The first step that came out of this formalization is an information compressor/decompressor. By generating a large number of rollouts with sufficient steering or SFT, the model can gravitate towards the optimal way of orchestrating language to compress any desired chunk of text or information to the theoretical limit.

There is a hypothesis which proposes that somewhere in this loop, the model can develop a meta-awareness where the weights themselves are rearranged to instantiate richer and more developed rule tables, such that the RL run continues to raise the reward beyond what is thought possible, since the weights themselves begin to encode pre-computed, universally applicable decision tables. That is to say that, conditionally within a `<compress>` tag, token polysemy as well as sequence meaning may explode, allowing the model to program the exact equivalent hidden-state activation into its mind with the fewest possible tokens, while continuing to optimize the weights so that it retains the lowest perplexity across diverse dataset samples in order to steer clear of brain damage. We definitely must train a diverse alignment channel with English, so that the model can directly explain what information is embedded by the hyper-compressed text sequence, or interpret / use it as though it were bare English in the context. From there, we theoretically now possess the ability to compress and defragment LLM context losslessly, driving a massive reduction in inference cost. (A toy sketch of the reward shaping follows this record.)

Next, we use the compression model and train models with random compressed replacements of snippets of the context, so that all future models can naturally interleave compressed representations of information. But the true gain is the language of compression and the extensions that can be built on it. Once this is achieved, the compressor/decompressor expert model is used as a generator of SFT data to align any reasoner model to think in this plus-ultra compression language, or perhaps you alternate back and forth between training `<think>` and `<compress>` on the same weights; I'm not sure what would work best. Note that I think we may not actually need SFT: prefix the rollout with a rich but diverse prompt, inside a special templating fence which deletes/omits/replaces it for the final backpropagation! In other words, we can fold the effect of a large prompt into a single action word such as `compress the following text:` (selective remembering). We could maybe go from 1% to 100% intelligence in a matter of a few days if we RL correctly, ensuring that the model never plateaus and enters infinite scaling as it should. Currently there are some fundamental problems with RL, since it doesn't lead to infinite intelligence.
2025-06-07T18:55:21
https://www.reddit.com/gallery/1l5saph
ryunuck
reddit.com
1970-01-01T00:00:00
0
{}
1l5saph
false
null
t3_1l5saph
/r/LocalLLaMA/comments/1l5saph/reinforcement_learning_a_model_for_symbolic/
false
false
https://b.thumbs.redditm…CqSC15EKld1g.jpg
10
null
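To make the proposal above concrete, a toy sketch of a reward along the lines the post describes: pay for round-trip reconstruction fidelity, charge for compressed length. Everything here is an assumption for illustration, including the fidelity function; a real RL setup would score fidelity by decompressing with the model itself and judging semantic equivalence, not by token overlap.

```python
def compression_reward(original: str, compressed: str, reconstructed: str,
                       length_weight: float = 0.5) -> float:
    """Toy reward: round-trip fidelity minus a compressed-length penalty.

    Token-set overlap is a crude stand-in for semantic fidelity; a real
    run would use a judge model or the decompression rollout's quality.
    """
    orig_tokens = set(original.split())
    recon_tokens = set(reconstructed.split())
    fidelity = len(orig_tokens & recon_tokens) / max(len(orig_tokens), 1)
    # Ratio of compressed length to original length: smaller is better.
    compression = len(compressed.split()) / max(len(original.split()), 1)
    return fidelity - length_weight * compression

text = "the quick brown fox jumps over the lazy dog"
print(compression_reward(text, "qbf jumps lazy dog", text))  # 1.0 - 0.5 * 4/9
```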
Testing Quant Quality for Shisa V2 405B
20
Last week [we launched Shisa V2 405B](https://www.reddit.com/r/LocalLLaMA/comments/1l318di/shisa_v2_405b_the_strongest_model_ever_built_in/), an extremely strong JA/EN-focused multilingual model. It's also, well, quite a big model (800GB+ at FP16), so I made some quants for launch as well, including [a bunch of GGUFs](https://huggingface.co/shisa-ai/shisa-v2-llama3.1-405b-GGUF). These quants were all (except the Q8_0) imatrix quants that used our JA/EN shisa-v2-sharegpt dataset to create a custom calibration set.

This weekend I was doing some quality testing and decided, well, I might as well test **all** of the quants and share, as I feel like there isn't enough out there measuring how different quants affect downstream performance for different models.

I did my [testing with JA MT-Bench (judged by GPT-4.1)](https://github.com/shisa-ai/ja-mt-bench-harness), and it should be representative of a wide range of Japanese output quality (llama.cpp doesn't run well on H200s and, of course, doesn't run well at high concurrency, so this was about the limit of my patience for evals).

This is a bit of a messy graph to read, but the main takeaway should be *don't run the IQ2_XXS*:

https://preview.redd.it/gshifun4uj5f1.png?width=3600&format=png&auto=webp&s=eda062d83d28a902ff1804e804fedfa22e1a7ddd

In this case, I believe the table is actually a lot more informative:

| Quant | Size (GiB) | % Diff | Overall | Writing | Roleplay | Reasoning | Math | Coding | Extraction | STEM | Humanities |
|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|
| Full FP16 | 810 | | **9.13** | 9.25 | **9.55** | 8.15 | 8.90 | 9.10 | 9.65 | 9.10 | 9.35 |
| IQ3_M | 170 | -0.99 | 9.04 | 8.90 | 9.45 | 7.75 | 8.95 | 8.95 | 9.70 | **9.15** | 9.50 |
| Q4_K_M | 227 | -1.10 | 9.03 | **9.40** | 9.00 | 8.25 | 8.85 | **9.10** | 9.50 | 8.90 | 9.25 |
| Q8_0 | 405 | -1.20 | 9.02 | **9.40** | 9.05 | **8.30** | **9.20** | 8.70 | 9.50 | 8.45 | 9.55 |
| W8A8-INT8 | 405 | -1.42 | 9.00 | 9.20 | 9.35 | 7.80 | 8.75 | 9.00 | 9.80 | 8.65 | 9.45 |
| FP8-Dynamic | 405 | -3.29 | 8.83 | 8.70 | 9.20 | 7.85 | 8.80 | 8.65 | 9.30 | 8.80 | 9.35 |
| IQ3_XS | 155 | -3.50 | 8.81 | 8.70 | 9.05 | 7.70 | 8.60 | 8.95 | 9.35 | 8.70 | 9.45 |
| IQ4_XS | 202 | -3.61 | 8.80 | 8.85 | **9.55** | 6.90 | 8.35 | 8.60 | **9.90** | 8.65 | **9.60** |
| *70B FP16* | 140 | -7.89 | 8.41 | 7.95 | 9.05 | 6.25 | 8.30 | 8.25 | 9.70 | 8.70 | 9.05 |
| IQ2_XXS | 100 | -18.18 | 7.47 | 7.50 | 6.80 | 5.15 | 7.55 | 7.30 | 9.05 | 7.65 | 8.80 |

Due to margin of error, you could fairly say that the IQ3_M, Q4_K_M, and Q8_0 GGUFs have almost no functional loss versus the FP16 (while the average is about 1% lower, individual category scores can be higher than the full weights). You'd probably want to do a lot more evals (different evals, multiple runs) if you want to split hairs further. Interestingly, the XS quants (IQ3 and IQ4) not only perform about the same, but also both fare worse than the IQ3_M. I also included the 70B full FP16 scores, and if the same pattern holds, I'd think you'd be a lot better off running our earlier released Shisa V2 70B Q4_K_M (40GB) or IQ3_M (32GB) vs the 405B IQ2_XXS (100GB). (The % Diff column is reproduced in the sketch after this record.)

In an ideal world, of course, you should test different quants on your own downstream tasks, but I understand that that's not always an option. Based on this testing, if you had to pick one bang-for-buck quant blind for our model, starting with the IQ3_M seems like a good pick.

So, these quality evals were the main things I wanted to share, but here are a couple of bonus benchmarks. I posted this in the comments of the announcement post, but this is how fast a Llama3 405B IQ2_XXS runs on Strix Halo:

    ggml_vulkan: Found 1 Vulkan devices:
    ggml_vulkan: 0 = AMD Radeon Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

    | model                         | size      | params   | backend    | ngl | fa | test  | t/s          |
    | ----------------------------- | --------: | -------: | ---------- | --: | -: | ----: | -----------: |
    | llama ?B IQ2_XXS - 2.0625 bpw | 99.90 GiB | 405.85 B | Vulkan,RPC | 999 |  1 | pp512 | 11.90 ± 0.02 |
    | llama ?B IQ2_XXS - 2.0625 bpw | 99.90 GiB | 405.85 B | Vulkan,RPC | 999 |  1 | tg128 |  1.93 ± 0.00 |

    build: 3cc1f1f1 (5393)

And this is how the same IQ2_XXS performs running on a single H200 GPU:

    ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
    ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
    ggml_cuda_init: found 1 CUDA devices:
      Device 0: NVIDIA H200, compute capability 9.0, VMM: yes

    | model                         | size      | params   | backend | ngl | fa | test  | t/s           |
    | ----------------------------- | --------: | -------: | ------- | --: | -: | ----: | ------------: |
    | llama ?B IQ2_XXS - 2.0625 bpw | 99.90 GiB | 405.85 B | CUDA    | 999 |  1 | pp512 | 225.54 ± 0.03 |
    | llama ?B IQ2_XXS - 2.0625 bpw | 99.90 GiB | 405.85 B | CUDA    | 999 |  1 | tg128 |   7.50 ± 0.00 |

    build: 1caae7fc (5599)

Note that an FP8 runs at ~28 tok/s (tp4) with SGLang. I'm not sure where the bottleneck is for llama.cpp, but it doesn't seem to perform very well on H200 hardware. Of course, you don't run H200s at concurrency=1. For those curious, here's what my initial SGLang FP8 vs vLLM W8A8-INT8 comparison looks like (using the ShareGPT set for testing):

[Not bad!](https://preview.redd.it/j2n6wutqzj5f1.png?width=2000&format=png&auto=webp&s=c6afc181da48c28ecdd1b40b09ad9077a03f2644)
2025-06-07T19:21:39
https://www.reddit.com/r/LocalLLaMA/comments/1l5sw3m/testing_quant_quality_for_shisa_v2_405b/
randomfoo2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5sw3m
false
null
t3_1l5sw3m
/r/LocalLLaMA/comments/1l5sw3m/testing_quant_quality_for_shisa_v2_405b/
false
false
https://b.thumbs.redditm…Q8mC0n1reZIg.jpg
20
{'enabled': False, 'images': [{'id': 'GlYrmXOeqo4eLr9a2-0GdjRO_Qf2AaHmzqZXVXnScEo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CNIe-FCowsvogRo2IcOE4ZR2DIChtGcTE74OqVAaIsc.jpg?width=108&crop=smart&auto=webp&s=0c60f7b2aa42082ccc4e8fd71920cfa4a8426cc2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CNIe-FCowsvogRo2IcOE4ZR2DIChtGcTE74OqVAaIsc.jpg?width=216&crop=smart&auto=webp&s=d1a7f92520d60efed27400e2ad3b0ce264ffae30', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CNIe-FCowsvogRo2IcOE4ZR2DIChtGcTE74OqVAaIsc.jpg?width=320&crop=smart&auto=webp&s=74cd46e029177b6093c0a895b8c2fc41057947fa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CNIe-FCowsvogRo2IcOE4ZR2DIChtGcTE74OqVAaIsc.jpg?width=640&crop=smart&auto=webp&s=4a4ecb10ba3cedefafacad923575df79a2a8390e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CNIe-FCowsvogRo2IcOE4ZR2DIChtGcTE74OqVAaIsc.jpg?width=960&crop=smart&auto=webp&s=5ad78c8604dc6c881fe4cb19a088c64b12708b98', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CNIe-FCowsvogRo2IcOE4ZR2DIChtGcTE74OqVAaIsc.jpg?width=1080&crop=smart&auto=webp&s=8d09ab8026813842874026de45de0f14d9b2d9ce', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CNIe-FCowsvogRo2IcOE4ZR2DIChtGcTE74OqVAaIsc.jpg?auto=webp&s=de2ca48d956059b1539f8b66941d9fbdba0b1934', 'width': 1200}, 'variants': {}}]}
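As referenced in the table above, the % Diff column is just each quant's overall score relative to the FP16 baseline; a quick check against a few rows from the post:

```python
fp16 = 9.13
overall = {"IQ3_M": 9.04, "Q4_K_M": 9.03, "Q8_0": 9.02, "IQ2_XXS": 7.47}

for quant, score in overall.items():
    pct_diff = (score - fp16) / fp16 * 100
    print(f"{quant}: {pct_diff:+.2f}%")  # matches the table, e.g. IQ3_M -0.99%
```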
Closed-Source AI Strikes Again: Cheap Moves Like This Prove We Need Open-Source Alternatives
220
Just saw Anthropic cut off Claude access for the Windsurf editor (not that I care), but it shows how these companies can make rash decisions about access to their models. There are thousands of ways for OpenAI to get access to Claude's API if it really wanted to. Making decisions like this and targeting startups just shows why we need a solid ecosystem of open-source models.
2025-06-07T19:47:52
https://www.reddit.com/r/LocalLLaMA/comments/1l5th35/closedsource_ai_strikes_again_cheap_moves_like/
dreamai87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5th35
false
null
t3_1l5th35
/r/LocalLLaMA/comments/1l5th35/closedsource_ai_strikes_again_cheap_moves_like/
false
false
self
220
null
Any Benchmarks 2080 Ti 22GB Vs 3060 12GB?
1
Hi, looking to dip my toe in with locally hosted LLMs and looking at budget GPU options. Are there any benchmarks comparing the 2080 Ti modded to 22GB vs a stock 3060 12GB? For that matter, are there any other options I should be considering at the same price point, just for entry-level 3B–7B models, or 13B models (quantised) at a push?
2025-06-07T19:52:13
https://www.reddit.com/r/LocalLLaMA/comments/1l5tkj6/any_benchmarks_2080_ti_22gb_vs_3060_12gb/
SKX007J1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5tkj6
false
null
t3_1l5tkj6
/r/LocalLLaMA/comments/1l5tkj6/any_benchmarks_2080_ti_22gb_vs_3060_12gb/
false
false
self
1
null
Openwebui Token counter
6
Personal Project: OpenWebUI Token Counter (Floating)

Built this out of necessity — but it turned out insanely useful for anyone working with inference APIs or local LLM endpoints. It’s a lightweight Chrome extension that:

- Shows live token usage as you type or paste
- Works inside OpenWebUI (TipTap compatible)
- Helps you stay under token limits, especially with long prompts
- Runs 100% locally — no data ever leaves your machine

Whether you're using:

- OpenAI, Anthropic, or Mistral APIs
- Local models via llama.cpp, Kobold, or Oobabooga
- Or building your own frontends...

This tool just makes life easier. No bloat. No tracking. Just utility.

Check it out here: https://github.com/Detin-tech/OpenWebUI\_token\_counter

Would love thoughts, forks, or improvements — it's fully open source.

Note: due to tokenizers this is only accurate within +/- 10%, but close enough for a visual ballpark.

https://preview.redd.it/tu9gx4zvbk5f1.png?width=1173&format=png&auto=webp&s=c48eab57e6f4e6cee7f263891f1ee7212da096b0
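For anyone curious why exact counts are impossible in general, here's a minimal sketch of the problem (tiktoken only covers OpenAI tokenizers, and this is not the extension's code; local models will disagree again):

```python
# Different tokenizers give different counts for the same text, hence the +/-10%.
# Requires a recent tiktoken for the o200k_base encoding.
import tiktoken

text = "Shows live token usage as you type or paste."
for name in ("cl100k_base", "o200k_base"):
    enc = tiktoken.get_encoding(name)
    print(name, len(enc.encode(text)))

# Common fallback heuristic when the exact tokenizer is unknown: ~4 chars/token.
print("chars/4 estimate:", len(text) // 4)
```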
2025-06-07T20:13:30
https://www.reddit.com/r/LocalLLaMA/comments/1l5u1ne/openwebui_token_counter/
UnReasonable_why
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5u1ne
false
null
t3_1l5u1ne
/r/LocalLLaMA/comments/1l5u1ne/openwebui_token_counter/
false
false
https://b.thumbs.redditm…W9KXQfND8prE.jpg
6
null
CoexistAI – Open-source, modular research framework for local deep research 🚀🧑‍💻
1
[removed]
2025-06-07T20:35:16
https://www.reddit.com/r/LocalLLaMA/comments/1l5uirn/coexistai_opensource_modular_research_framework/
Optimalutopic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5uirn
false
null
t3_1l5uirn
/r/LocalLLaMA/comments/1l5uirn/coexistai_opensource_modular_research_framework/
false
false
https://b.thumbs.redditm…N-zEarkF2Mhw.jpg
1
null
How to use Veo 3 without Gemini
1
[removed]
2025-06-07T20:41:23
https://v.redd.it/nz4qkvltgk5f1
Guilty_Law7965
v.redd.it
1970-01-01T00:00:00
0
{}
1l5unsr
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/nz4qkvltgk5f1/DASHPlaylist.mpd?a=1751920897%2COWEwNTFjNGM5ODdmOTc0MWJiMzIxMDgwNTk4NmJiYmY0YzY2MDEwOTg2YTc3OWIwYmIyZWM3Nzg5ZGEwOWM0ZA%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/nz4qkvltgk5f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/nz4qkvltgk5f1/HLSPlaylist.m3u8?a=1751920897%2CNDQ1ZDBmMDZlMjRjZjBjN2YxMDMxYjYzNjkyNDBjZTNjMWMxNjYxNmIxZGU3YjQzZWJkNmVhNzY3MzJiODIwMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nz4qkvltgk5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1l5unsr
/r/LocalLLaMA/comments/1l5unsr/how_to_use_veo_3_without_gemini/
false
false
https://external-preview…8ff0ed833b449b1d
1
{'enabled': False, 'images': [{'id': 'YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9.png?width=108&crop=smart&format=pjpg&auto=webp&s=124243681259233ffa3ea372b3f0d3b4274e1a10', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9.png?width=216&crop=smart&format=pjpg&auto=webp&s=e32de0bbcfdf1ca093166f1fa2c57ad492f48119', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9.png?width=320&crop=smart&format=pjpg&auto=webp&s=f23004ccedc6147b02ee275779fe12235a12eb13', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9.png?width=640&crop=smart&format=pjpg&auto=webp&s=a16f5b56105d3cdac310b2a42cf5cf4586d036d2', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9.png?width=960&crop=smart&format=pjpg&auto=webp&s=ef79fa37b7236acc2f2bb6af390cddaf5fc44fae', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f6e89f401271f9d96a92cfe52b2a9ffb42b5887f', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/YWQ0cTV2bHRnazVmMZJehOIwRkSuf6mnNc-x7mDwIjTKcGDqn2xmdtCMOXv9.png?format=pjpg&auto=webp&s=52442e429bccb4660171f8924075f64335dcc5f3', 'width': 1280}, 'variants': {}}]}
CoexistAI – Open-source, modular research framework for local deep research 🚀🧑‍💻
1
[removed]
2025-06-07T20:42:18
https://www.reddit.com/r/LocalLLaMA/comments/1l5uoj1/coexistai_opensource_modular_research_framework/
feltyouonce
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5uoj1
false
null
t3_1l5uoj1
/r/LocalLLaMA/comments/1l5uoj1/coexistai_opensource_modular_research_framework/
false
false
self
1
null
Worked on this ChatAPI+Frontend for Local and API inference until I burned myself out completely, and haven't touched it at all for a year. Should I finish it? Or are you satisfied with your current solutions for inference ( Supports: ExllamaV2, Ollama & OpenAI / Gemini / Claude / Grok )
1
[removed]
2025-06-07T20:42:30
https://www.reddit.com/r/LocalLLaMA/comments/1l5uool/worked_on_this_chatapifrontend_for_local_and_api/
Severin_Suveren
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5uool
false
null
t3_1l5uool
/r/LocalLLaMA/comments/1l5uool/worked_on_this_chatapifrontend_for_local_and_api/
false
false
self
1
null
DeepSeek R1 is *amazing* at deciphering dwarfs in Dwarf Fortress
102
I've always wanted to connect an LLM to *Dwarf Fortress* – the game is perfect for it with its text-heavy systems and deep simulation. But I never had the technical know-how to make it happen. So I improvised:

1. Extracted game text from screenshots (Steam version) using Gemini 1.5 Pro (there’s *definitely* a better method, but it worked, so...)
2. Fed all that raw data into DeepSeek R1
3. Asked for a **creative interpretation** of the dwarf behaviors

The results were genuinely better than I thought. The model didn’t just parse the data - it pinpointed delightful quirks and patterns such as:

>*"The log is messy with repeated headers, but key elements reveal..."*

I especially love how fresh and playful its voice sounds:

>*"...And I should probably mention the peach cider. That detail’s too charming to omit."*

Full output below in markdown – enjoy the read!

[Pastebin](https://pastebin.com/frk4ZvhL)

As a bonus, I generated an image with the OpenAI API platform version of the image generator, just because *why not.*

[Portrait of Ast Siltun](https://preview.redd.it/415zrv0cmk5f1.png?width=1024&format=png&auto=webp&s=819e094b3a3e426b3fee4abedca5c11f239600de)
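A minimal sketch of steps 2–3 above, assuming an OpenAI-compatible endpoint serving R1 (the base URL and model name are placeholders):

```python
# Sketch of the pipeline's LLM step: extracted game text -> creative interpretation.
# Base URL and model name are placeholders for whatever serves DeepSeek R1 for you.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

game_log = open("dwarf_log.txt").read()  # text already extracted from screenshots

resp = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a chronicler of a dwarf fortress."},
        {"role": "user",
         "content": "Give a creative interpretation of these dwarves' behavior:\n"
                    + game_log},
    ],
)
print(resp.choices[0].message.content)
```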
2025-06-07T21:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1l5vhkx/deepseek_r1_is_amazing_at_deciphering_dwarfs_in/
olaf4343
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5vhkx
false
null
t3_1l5vhkx
/r/LocalLLaMA/comments/1l5vhkx/deepseek_r1_is_amazing_at_deciphering_dwarfs_in/
false
false
https://a.thumbs.redditm…_Or9fIDhZJH4.jpg
102
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?auto=webp&s=b9f5c4e4867fbffb2c1ff45dd70aa338d1e3f40c', 'width': 150}, 'variants': {}}]}
My 64gb VRAM build
109
NUC 9 Extreme housing a 5060 Ti 16GB, and running two 3090 eGPUs connected through OCuLink. It took a good bit of modification to make it work, but I think the SFF form factor and the modularity of the GPUs made it worth it. Happy to be done with this part of the project and moving on to building agents!
2025-06-07T21:20:48
https://i.redd.it/9b5rdhpxnk5f1.jpeg
cweave
i.redd.it
1970-01-01T00:00:00
0
{}
1l5vjcu
false
null
t3_1l5vjcu
/r/LocalLLaMA/comments/1l5vjcu/my_64gb_vram_build/
false
false
default
109
{'enabled': True, 'images': [{'id': '9b5rdhpxnk5f1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/9b5rdhpxnk5f1.jpeg?width=108&crop=smart&auto=webp&s=18d5e1677f8f8caaa04527198894f15588ff8994', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/9b5rdhpxnk5f1.jpeg?width=216&crop=smart&auto=webp&s=dbd1215e46d04fbb2234ad21f5bc6389d7f67194', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/9b5rdhpxnk5f1.jpeg?width=320&crop=smart&auto=webp&s=46e0757677e550049b5e92ad7e62b93ef62e04db', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/9b5rdhpxnk5f1.jpeg?width=640&crop=smart&auto=webp&s=b751a92c4a0ac3db3f5a6c9a168b009026b0a3d6', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/9b5rdhpxnk5f1.jpeg?width=960&crop=smart&auto=webp&s=0ffee5e11bef6cda9ddba9396b9393ef5df55e47', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/9b5rdhpxnk5f1.jpeg?width=1080&crop=smart&auto=webp&s=db7569e67c5a751cab7c2e855874772be84ff6d5', 'width': 1080}], 'source': {'height': 3213, 'url': 'https://preview.redd.it/9b5rdhpxnk5f1.jpeg?auto=webp&s=9b57f125932bb0f217aca84b465ed4bd94b7e704', 'width': 5712}, 'variants': {}}]}
What is the best NSFW model for a budget PC like this?
0
i5-12400F, 32GB RAM, RTX 2060 6GB. Not mainly for RP or the like, but mainly for asking NSFW things that aren't allowed in ChatGPT. Anyway, recommend your choice for whatever purpose, as long as it at least supports NSFW.
2025-06-07T21:33:53
https://www.reddit.com/r/LocalLLaMA/comments/1l5vtnr/what_is_the_best_nsfw_model_for_budget_pc_like/
aaisn62
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5vtnr
false
null
t3_1l5vtnr
/r/LocalLLaMA/comments/1l5vtnr/what_is_the_best_nsfw_model_for_budget_pc_like/
false
false
nsfw
0
null
ik_llama oom error with ds-r1, llama.cpp works fine
1
[removed]
2025-06-07T21:45:55
https://www.reddit.com/r/LocalLLaMA/comments/1l5w2w1/ik_llama_oom_error_with_dsr1_llamacpp_works_fine/
prepytixel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5w2w1
false
null
t3_1l5w2w1
/r/LocalLLaMA/comments/1l5w2w1/ik_llama_oom_error_with_dsr1_llamacpp_works_fine/
false
false
self
1
null
Struggling to learn
1
[removed]
2025-06-07T22:16:35
https://www.reddit.com/r/LocalLLaMA/comments/1l5wqhe/struggling_to_learn/
MeridiusTS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5wqhe
false
null
t3_1l5wqhe
/r/LocalLLaMA/comments/1l5wqhe/struggling_to_learn/
false
false
self
1
null
My 160GB local LLM rig
1,138
Built this monster with 4x V100 and 4x 3090, with a Threadripper, 256 GB RAM and 4x PSUs: one PSU powers everything else in the machine and 3x 1000W PSUs feed the beasts. Used bifurcated PCIe risers to split each x16 PCIe slot into 4x x4 PCIe. Ask me anything. The biggest model I was able to run on this beast was Qwen3 235B Q4 at around \~15 tokens/sec. Day to day I run Devstral, Qwen3 32B, Gemma 3 27B and 3x Qwen3 4B, all in Q4, and use async calls to hit all the models at the same time for different tasks (a sketch of that pattern is below).
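A minimal sketch of that async fan-out, assuming OpenAI-compatible endpoints (ports and model names are placeholders for however the models are served):

```python
# Fan one prompt out to several locally served models concurrently.
# Base URLs and model names are placeholders.
import asyncio
from openai import AsyncOpenAI

MODELS = {
    "devstral": "http://localhost:8001/v1",
    "qwen3-32b": "http://localhost:8002/v1",
    "gemma3-27b": "http://localhost:8003/v1",
}

async def ask(model: str, base_url: str, prompt: str) -> str:
    client = AsyncOpenAI(base_url=base_url, api_key="none")
    resp = await client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return f"{model}: {resp.choices[0].message.content[:120]}"

async def main() -> None:
    # gather() runs all requests in flight at once instead of one after another
    answers = await asyncio.gather(
        *(ask(m, url, "Summarize PCIe bifurcation in one line.")
          for m, url in MODELS.items())
    )
    print("\n".join(answers))

asyncio.run(main())
```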
2025-06-07T22:26:06
https://i.redd.it/qukd2c1lzk5f1.jpeg
TrifleHopeful5418
i.redd.it
1970-01-01T00:00:00
0
{}
1l5wxoa
false
null
t3_1l5wxoa
/r/LocalLLaMA/comments/1l5wxoa/my_160gb_local_llm_rig/
false
false
default
1,138
{'enabled': True, 'images': [{'id': 'qukd2c1lzk5f1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/qukd2c1lzk5f1.jpeg?width=108&crop=smart&auto=webp&s=f4638db25dbe0b48d8dfdfd592c15fb9e3e5b227', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/qukd2c1lzk5f1.jpeg?width=216&crop=smart&auto=webp&s=bacfc010634b6c2f7760d1299a5a7cad3e3771c5', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/qukd2c1lzk5f1.jpeg?width=320&crop=smart&auto=webp&s=61d380be1aa15a6235205f9f483f675f0e3d63b8', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/qukd2c1lzk5f1.jpeg?width=640&crop=smart&auto=webp&s=e7a7aa1adf0ea1755b60c2abe92f454649a64f90', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/qukd2c1lzk5f1.jpeg?width=960&crop=smart&auto=webp&s=ee328d8513d7aa18ef74521a37be92e3386cee6b', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/qukd2c1lzk5f1.jpeg?width=1080&crop=smart&auto=webp&s=c626fb224cee8ec77fe7073b9d8a816697ca9244', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://preview.redd.it/qukd2c1lzk5f1.jpeg?auto=webp&s=6206b531ac6d3790c5f9267f20131845e80fb032', 'width': 3000}, 'variants': {}}]}
How to get a partner?
1
[removed]
2025-06-07T22:40:46
https://www.reddit.com/r/LocalLLaMA/comments/1l5x8mu/cómo_tener_una_pareja/
wintos666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5x8mu
false
null
t3_1l5x8mu
/r/LocalLLaMA/comments/1l5x8mu/cómo_tener_una_pareja/
false
false
self
1
null
Privacy preserving ChatGPT/Claude voice mode alternative
9
I can't find any open source projects that have comparable performance to voice mode in ChatGPT/Claude - which really is quite excellent. I don't trust them, and their privacy policy allows sufficient wiggle room for them to misuse my voice data. So I'm looking for alternatives.

> Q: Does the privacy policy state clearly that Anthropic will not save my voice data?

> Based on the Anthropic Privacy Policy (effective May 1, 2025) at <https://www.anthropic.com/legal/privacy>, it does not state clearly that Anthropic will not save your voice data.

> The policy indicates that "Inputs" (which could include voice data if provided by the user) are collected and may be used for purposes such as developing and training their language models. Specifically, under "1. Collection of Personal Data," the "Inputs and Outputs" section states: "Our AI services allow you to interact with the Services in a variety of formats ("Prompts" or "Inputs"), which generate responses ("Outputs") based on your Inputs. This includes where you choose to integrate third-party applications with our services. If you include personal data or reference external content in your Inputs, we will collect that information and this information may be reproduced in your Outputs."

> Furthermore, the section "Personal data we collect or receive to train our models" mentions "Data that our users or crowd workers provide" as a source for training data. This implies that user-provided data, including potential voice inputs, can be collected and utilized for model training.
2025-06-07T23:03:29
https://www.reddit.com/r/LocalLLaMA/comments/1l5xpyb/privacy_preserving_chatgptclaude_voice_mode/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5xpyb
false
null
t3_1l5xpyb
/r/LocalLLaMA/comments/1l5xpyb/privacy_preserving_chatgptclaude_voice_mode/
false
false
self
9
null
Laptop for using and creating LLMs.
1
[removed]
2025-06-07T23:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1l5y4lu/laptop_for_using_and_creating_llms/
Frosty_Park7555
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5y4lu
false
null
t3_1l5y4lu
/r/LocalLLaMA/comments/1l5y4lu/laptop_for_using_and_creating_llms/
false
false
self
1
null
Need help finding a permissive LLM for real-world memoir writing
1
[removed]
2025-06-08T00:22:44
https://www.reddit.com/r/LocalLLaMA/comments/1l5zarf/need_help_finding_a_permissive_llm_for_realworld/
maxmill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5zarf
false
null
t3_1l5zarf
/r/LocalLLaMA/comments/1l5zarf/need_help_finding_a_permissive_llm_for_realworld/
false
false
self
1
null
Need help finding a permissive LLM for real-world memoir writing
1
[removed]
2025-06-08T00:25:58
https://www.reddit.com/r/LocalLLaMA/comments/1l5zcv9/need_help_finding_a_permissive_llm_for_realworld/
maxmill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5zcv9
false
null
t3_1l5zcv9
/r/LocalLLaMA/comments/1l5zcv9/need_help_finding_a_permissive_llm_for_realworld/
false
false
self
1
null
Paints Undo Problem
4
I want to use a tool called Paints-UNDO, but it requires 16GB of VRAM. I was thinking of using the P100, but I heard it doesn't support modern CUDA, which may affect compatibility. I also considered the 4060, but that costs $400, and I saw that hourly rates of cloud rental services can be as cheap as a couple of dollars per hour. So I tried Vast.ai but had trouble getting the tool to work (I assume it's issues with using Linux instead of Windows). Is there a Windows-based cloud PC with 16GB VRAM that I can rent to try it out before spending hundreds on a GPU?
2025-06-08T00:26:33
https://github.com/lllyasviel/Paints-UNDO
Specialist-Feeling-9
github.com
1970-01-01T00:00:00
0
{}
1l5zd9y
false
null
t3_1l5zd9y
/r/LocalLLaMA/comments/1l5zd9y/paints_undo_problem/
false
false
https://b.thumbs.redditm…tAnyKG-FkX1s.jpg
4
{'enabled': False, 'images': [{'id': 'Z679JjluuW--OIwwuDCUNMCiixuDb79rlKAq9CTeic4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VcolIgM88vsjmV8X3yGR624P9znAaaDhOkfLg5OrUv4.jpg?width=108&crop=smart&auto=webp&s=a200b18803362f6ea2f8b6bba154f185368c9555', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VcolIgM88vsjmV8X3yGR624P9znAaaDhOkfLg5OrUv4.jpg?width=216&crop=smart&auto=webp&s=d79275e5f57bc19d7e4a5e565e42ee0add27f855', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VcolIgM88vsjmV8X3yGR624P9znAaaDhOkfLg5OrUv4.jpg?width=320&crop=smart&auto=webp&s=aa9487cba5ba3a35f17342f0d58cc5d0c12b532a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VcolIgM88vsjmV8X3yGR624P9znAaaDhOkfLg5OrUv4.jpg?width=640&crop=smart&auto=webp&s=5157818bbf28b598edef5a9cfad476d1c9da04e5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VcolIgM88vsjmV8X3yGR624P9znAaaDhOkfLg5OrUv4.jpg?width=960&crop=smart&auto=webp&s=68546d53daed13f69d87c8a75a755a2534dcf6b8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VcolIgM88vsjmV8X3yGR624P9znAaaDhOkfLg5OrUv4.jpg?width=1080&crop=smart&auto=webp&s=450daf6742814a93a678608fda0fe6af1fcbc474', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VcolIgM88vsjmV8X3yGR624P9znAaaDhOkfLg5OrUv4.jpg?auto=webp&s=eeaffa0aac2651ce15e60c1985a7fa95b1261912', 'width': 1200}, 'variants': {}}]}
Why don't we see more technically-oriented 'clown-car' MoEs?
29
So I've been thinking about [sparsity](https://old.reddit.com/r/LocalLLaMA/comments/1l44lw8/sparse_transformers_run_2x_faster_llm_with_30/mw8e3zc/) and [MoEs](https://old.reddit.com/r/LocalLLaMA/comments/1l2qv7z/help_me_understand_moe_vs_dense/) lately. I've been really pleasantly surprised at how well Llama 4 Scout runs [on my laptop](https://old.reddit.com/r/LocalLLaMA/comments/1l1581z/which_model_are_you_using_june25_edition/mvkkmqo/), for example. I don't use it all the time, or even the majority of the time, but it's one of the first local models that is both good enough and fast enough to help with some of my niche coding.

Someone linked to Goddard's [Mixture of Experts for Clowns (at a Circus)](https://goddard.blog/posts/clown-moe/) in another thread -- what a fun read. It got me thinking. I do computational sciences research. When I get a new research assistant, I hand them a virtual stack of papers and references and say something like,

> "Please read this collection of materials that I've amassed over the past 20 years. Then you can work on a niche extension of an in-the-weeds idea that you won't understand unless you've internalized random bits of this collection."

I mean, *not really* -- I don't actually demand that they read everything before diving into research. That's not how people learn! Instead they'll learn as they do the work. They'll run into some problem, ask me about it, and I'll say something like, "oh yeah, you've hit quirk ABC of method XYZ, go read papers JLK." And my various RAs build their own stacks of random specialized topics over time. But it would be great if someone *could* internalize all those materials, because a lot of new discovery is finding weird connections between different topics.

And this gets me thinking - some of the papers that pop up when you search [mergekit on google scholar](https://scholar.google.com/scholar?hl=en&q=mergekit) are scientists training specialized models on niche topics. Not fine-tuning the models, but actually doing continued pretraining to put new niche knowledge in their models' "heads." Some groups spend a lot of resources, some spend a little.

I could probably split my pile of conceptual materials into a variety of smaller thematic groups and train "small" models that are all experts in disparate topics, then moe-merge them into a bigger model. When I talk with SOTA models about the details here, it seems like I could probably come up with enough tokens for the size of the various mini-experts that I want. I'd love to have something approximately Llama 4 Scout-sized, but with more detailed knowledge about the various topics I want it to have.

Are people doing this? If so, how do I find them? (I am probably searching HF poorly, so tips/tricks appreciated...) If not, why not? (Effectiveness/performance? Cost? Something else?) If I'm interested in giving it a shot, what are some pitfalls/etc. to bear in mind?
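For reference, the clown-car recipe in Goddard's post boils down to a mergekit-moe config. Below is a sketch of that workflow from memory of the mergekit docs, so treat the field names as assumptions to verify; all model names are placeholders for continued-pretrained experts:

```python
# Sketch only: writes a mergekit-moe config and invokes the CLI.
# Field names follow my reading of the mergekit docs (verify before use);
# all model names are placeholders.
import subprocess
import textwrap

config = textwrap.dedent("""\
    base_model: your-org/base-8b                       # placeholder
    gate_mode: hidden        # route by hidden-state similarity to the prompts
    experts:
      - source_model: your-org/expert-sparse-solvers   # placeholder
        positive_prompts: ["sparse linear solvers", "preconditioning"]
      - source_model: your-org/expert-uq               # placeholder
        positive_prompts: ["uncertainty quantification"]
""")

with open("moe.yaml", "w") as f:
    f.write(config)

subprocess.run(["mergekit-moe", "moe.yaml", "./clown-car-out"], check=True)
```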
2025-06-08T00:36:57
https://www.reddit.com/r/LocalLLaMA/comments/1l5zkdw/why_dont_we_see_more_technicallyoriented_clowncar/
RobotRobotWhatDoUSee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5zkdw
false
null
t3_1l5zkdw
/r/LocalLLaMA/comments/1l5zkdw/why_dont_we_see_more_technicallyoriented_clowncar/
false
false
self
29
null
What models can I run on 2 x 5060 Ti 16 Gb
5
A 3090 is not an option for me, so I will have to get multiple 5060s. What models can I run? t/s should be at least 20. My use case is mainly text, with some RAG involved, and context of about 1k tokens.
2025-06-08T00:43:38
https://www.reddit.com/r/LocalLLaMA/comments/1l5zp0t/what_models_can_i_run_on_2_x_5060_ti_16_gb/
presidentbidden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l5zp0t
false
null
t3_1l5zp0t
/r/LocalLLaMA/comments/1l5zp0t/what_models_can_i_run_on_2_x_5060_ti_16_gb/
false
false
self
5
null
It took me 7 months to build a privacy-first AI Notetaker that uses local AI, and now I need your help
1
[removed]
2025-06-08T00:49:49
https://v.redd.it/c1e7faek5l5f1
beerbellyman4vr
v.redd.it
1970-01-01T00:00:00
0
{}
1l5zt5v
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/c1e7faek5l5f1/DASHPlaylist.mpd?a=1751935806%2CNTEwODI0YWQ4YWMwOWIxZDkwM2Q5YmU3YmU1YjkxYTE1YTBhNjQ5NzNjOTBiNjRmYTVmYzQ4NDc3ZDBhZDY3ZA%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/c1e7faek5l5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/c1e7faek5l5f1/HLSPlaylist.m3u8?a=1751935806%2CMGRjNzJlMDE1Mzc4YmM1ZDJhY2YzMjE5MDYyNDk0MjhlYTY3MzViZDk2YTNiOTQxZDg0NGUyMGE5MzJkNWIzMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/c1e7faek5l5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l5zt5v
/r/LocalLLaMA/comments/1l5zt5v/it_took_me_7_months_to_build_a_privacyfirst_ai/
false
false
https://external-preview…da9aa58fd6410c2e
1
{'enabled': False, 'images': [{'id': 'bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8.png?width=108&crop=smart&format=pjpg&auto=webp&s=25026ad6e4555743a813792833d85e05c62a11c3', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8.png?width=216&crop=smart&format=pjpg&auto=webp&s=bfbe3e6b252ff7278a3a48e0e07227175411eaa6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8.png?width=320&crop=smart&format=pjpg&auto=webp&s=dd9e1656747fd0fe431a02ed048e6a0a73ac4743', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8.png?width=640&crop=smart&format=pjpg&auto=webp&s=93e9452d1eb2b52a801119ca766360eb0fd5a06f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8.png?width=960&crop=smart&format=pjpg&auto=webp&s=b771ac74ab349fedcd42031638480b135939c5bf', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f39a877c3b465a87d52d77b39948298e45833bf4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bjd1bnVkZWs1bDVmMYZAQSEo-Bz5PU-9Qn53EydPLIzrmfbDbioD5ehOIDM8.png?format=pjpg&auto=webp&s=9ff9ce8edb74e226a52492ee30f2d448f6d66b81', 'width': 1920}, 'variants': {}}]}
Built a fully local Whisper + pyannote stack to replace Otter. Full diarisation, transcripts & summaries on GPU.
72
Not a dev. Just got tired of Otter’s limits. No real customisation. Cloud only. Subpar export options.

I built a fully local pipeline to diarise and transcribe team meetings. It handles long recordings (three hours plus) and spits out labelled transcripts and JSON per session.

Stack includes:
• ctranslate2 and faster-whisper for transcription
• pyannote and speechbrain for diarisation
• Speaker-attributed text and JSON exports
• Output is fully customised to my needs – executive summaries, action lists, and clean notes ready for stakeholders

No cloud. No uploads. No locked features. Runs on GPU. A rough sketch of the core transcribe-then-diarise loop is below.

It was a headache getting CUDA and cuDNN working. I still couldn’t find cuDNN 9.1.0 for CUDA 12. If anyone knows how to get early or hidden builds from NVIDIA, let me know.

Keen to see if anyone else has built something similar. Also open to ideas on:
• Cleaning up diarisation when it splits the same speaker too much
• Making multi-session batching easier
• General accuracy improvements
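A rough sketch of that core loop, assuming faster-whisper and pyannote.audio 3.1 (the HF token and audio path are placeholders, and the midpoint speaker lookup is deliberately naive):

```python
# Transcribe with faster-whisper, diarise with pyannote, then attribute speakers.
# Placeholders: audio path and HF token; real alignment needs overlap handling.
from faster_whisper import WhisperModel
from pyannote.audio import Pipeline

audio = "meeting.wav"  # placeholder path

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
segments, _ = model.transcribe(audio)

diar = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="hf_...",  # placeholder token
)
annotation = diar(audio)

def speaker_at(t: float) -> str:
    # naive lookup: first diarisation turn containing the timestamp
    for turn, _, label in annotation.itertracks(yield_label=True):
        if turn.start <= t <= turn.end:
            return label
    return "UNKNOWN"

for seg in segments:
    mid = (seg.start + seg.end) / 2  # attribute each segment by its midpoint
    print(f"[{speaker_at(mid)}] {seg.text.strip()}")
```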
2025-06-08T01:32:38
https://www.reddit.com/r/LocalLLaMA/comments/1l60l2w/built_a_fully_local_whisper_pyannote_stack_to/
Loosemofo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l60l2w
false
null
t3_1l60l2w
/r/LocalLLaMA/comments/1l60l2w/built_a_fully_local_whisper_pyannote_stack_to/
false
false
self
72
null
Advice on PC or specs purchase-MCA -
1
[removed]
2025-06-08T02:18:27
https://www.reddit.com/r/LocalLLaMA/comments/1l61fb3/advice_on_pc_or_specs_purchasemca/
SmokingHensADAN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l61fb3
false
null
t3_1l61fb3
/r/LocalLLaMA/comments/1l61fb3/advice_on_pc_or_specs_purchasemca/
false
false
self
1
null
How does vector dimension reduction work in new Qwen3 embedding models?
9
I am looking at various text embedding models for a RAG/chat project that I'm working on, and I came across the new [Qwen3 embedding models](https://huggingface.co/Qwen/Qwen3-Embedding-4B) today. I'm excited because they are not only the leading open models on MTEB, but they also apparently let you arbitrarily choose the vector dimensions up to a fixed maximum. One annoying architectural issue I've run into recently is that pgvector only allows a maximum of 2000 dimensions for stored vectors. But with the new Qwen3 4B embedding models (which can produce up to 2560 dimensions) I'll be able to resize the embeddings to 2000 dimensions to fit in my pgvector fields. But I'm trying to understand the implications (as far as quality/accuracy) of reducing the size of the vectors. What exactly is the process through which they reduce the dimensions of the vectors? Is there a way of quantifying how much of a hit I'll take in terms of retrieval accuracy? I've tried reading the [paper](https://arxiv.org/pdf/2506.05176) they released on arXiv, but didn't see anything in there that explains how this works. On a side note, I'm also curious if anyone has benchmarks on an RTX 4090 for the 0.6B/4B/8B models, and what kind of performance they've seen at various sequence lengths?
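If the flexible output size works the way Matryoshka-style embeddings usually do (an assumption on my part, not something the Qwen3 paper confirms), the reduction is just truncate-and-renormalize, which is easy to sketch:

```python
# Hedged sketch: if Qwen3's flexible dims are Matryoshka-style (my assumption),
# downsizing a 2560-dim vector to pgvector's 2000-dim limit is truncate + renorm.
import numpy as np

def shrink(vec: np.ndarray, dims: int = 2000) -> np.ndarray:
    v = vec[:dims]                 # keep the leading dimensions
    return v / np.linalg.norm(v)   # restore unit length for cosine similarity

full = np.random.randn(2560).astype(np.float32)  # stand-in for a real embedding
small = shrink(full)
print(small.shape, float(np.linalg.norm(small)))  # (2000,) 1.0
```

The quality hit is exactly what you'd benchmark on your own retrieval set: embed a held-out query/document sample at both sizes and compare recall@k.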
2025-06-08T02:59:32
https://www.reddit.com/r/LocalLLaMA/comments/1l625ld/how_does_vector_dimension_reduction_work_in_new/
jferments
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l625ld
false
null
t3_1l625ld
/r/LocalLLaMA/comments/1l625ld/how_does_vector_dimension_reduction_work_in_new/
false
false
self
9
{'enabled': False, 'images': [{'id': 'a4RKUsnAm8IC-BKYrsZ1f7Alx0eBv72V-P3IuPSMKnc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YFoL3_G0fX2KBsBnAV48cO_AyG-qPoT422Rtc1R3odw.jpg?width=108&crop=smart&auto=webp&s=441648f6f9a51aebcf4510a0bbd606287f36f8c2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YFoL3_G0fX2KBsBnAV48cO_AyG-qPoT422Rtc1R3odw.jpg?width=216&crop=smart&auto=webp&s=2766509441a7eedd1911aa7c085aaf6d8e74fd65', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YFoL3_G0fX2KBsBnAV48cO_AyG-qPoT422Rtc1R3odw.jpg?width=320&crop=smart&auto=webp&s=53eded4d1ec24ed02c57d74cfaaa04e55b3073ab', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YFoL3_G0fX2KBsBnAV48cO_AyG-qPoT422Rtc1R3odw.jpg?width=640&crop=smart&auto=webp&s=6536cc269f5798355fc40b32c10719b69979665f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YFoL3_G0fX2KBsBnAV48cO_AyG-qPoT422Rtc1R3odw.jpg?width=960&crop=smart&auto=webp&s=381d79413d6153e622f1922bed48972240b5b797', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YFoL3_G0fX2KBsBnAV48cO_AyG-qPoT422Rtc1R3odw.jpg?width=1080&crop=smart&auto=webp&s=242316a985631c510c40133d096640f981383d39', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YFoL3_G0fX2KBsBnAV48cO_AyG-qPoT422Rtc1R3odw.jpg?auto=webp&s=a61e8f8e81f52f0782d69c8835026f13ae1f4870', 'width': 1200}, 'variants': {}}]}
What's the most affordable way to run 72B+ sized models for Story/RP?
12
I was using Grok for the longest time, but they've introduced some filters that are getting a bit annoying to navigate. Thinking about running things locally now. Are those Macs with tons of memory worthwhile, or?
2025-06-08T03:00:49
https://www.reddit.com/r/LocalLLaMA/comments/1l626hj/whats_the_most_affordable_way_to_run_72b_sized/
PangurBanTheCat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l626hj
false
null
t3_1l626hj
/r/LocalLLaMA/comments/1l626hj/whats_the_most_affordable_way_to_run_72b_sized/
false
false
self
12
null
2-Fan or 3-Fan GPU
0
I'd like to get into LLMs. Right now I'm using a 5600 XT AMD GPU, and I'm looking into upgrading it in the next few months when the budget allows. Does it matter if the GPU I get is 2-fan or 3-fan? The 2-fan GPUs are cheaper, so I'm looking into getting one of those. My concern, though, is whether a 2-fan or even an SFF 3-fan GPU will get too warm if I start using it for LLMs and Stable Diffusion as well. Thanks in advance for the input!
2025-06-08T03:17:32
https://www.reddit.com/r/LocalLLaMA/comments/1l62gzz/2fan_or_3fan_gpu/
Intelligent-Dust1715
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l62gzz
false
null
t3_1l62gzz
/r/LocalLLaMA/comments/1l62gzz/2fan_or_3fan_gpu/
false
false
self
0
null
Quantum AI ML Science Fair 2025 - Visualizations
1
[removed]
2025-06-08T03:24:13
https://v.redd.it/theyjdckgm5f1
Financial_Pick8394
v.redd.it
1970-01-01T00:00:00
0
{}
1l62kyc
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/theyjdckgm5f1/DASHPlaylist.mpd?a=1751945068%2CODcxN2FhYjhiMGFlNzRhY2JmYzhhZTkwNWE1NzQ0ZjhjNzlmYmE5YWIyMTFiYzI3NzhiNTE3ZTAzMjBkMjVkNA%3D%3D&v=1&f=sd', 'duration': 96, 'fallback_url': 'https://v.redd.it/theyjdckgm5f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 822, 'hls_url': 'https://v.redd.it/theyjdckgm5f1/HLSPlaylist.m3u8?a=1751945068%2CYjdhM2Y4NmZkNDRhY2JkMjVhYjk3ZjFhODgwMTAzMmNkZDA0NmNjMWI5MmUwYmJhMTA3ZTg1MjgwMTNmYzVhMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/theyjdckgm5f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1l62kyc
/r/LocalLLaMA/comments/1l62kyc/quantum_ai_ml_science_fair_2025_visualizations/
false
false
https://external-preview…7b258a9df0b6bd60
1
{'enabled': False, 'images': [{'id': 'cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl.png?width=108&crop=smart&format=pjpg&auto=webp&s=13f1ef14cea65f3aab58fb545c7918b6436755e5', 'width': 108}, {'height': 92, 'url': 'https://external-preview.redd.it/cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl.png?width=216&crop=smart&format=pjpg&auto=webp&s=4e1019e513c79212f327e4b31a3f0cafb9b84292', 'width': 216}, {'height': 137, 'url': 'https://external-preview.redd.it/cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl.png?width=320&crop=smart&format=pjpg&auto=webp&s=f28e33e95a354e4a9d58872b8118eccf710dce29', 'width': 320}, {'height': 274, 'url': 'https://external-preview.redd.it/cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl.png?width=640&crop=smart&format=pjpg&auto=webp&s=fede79cf79bc25150cdf4d74b39de6652ae63ed8', 'width': 640}, {'height': 411, 'url': 'https://external-preview.redd.it/cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl.png?width=960&crop=smart&format=pjpg&auto=webp&s=333c42b9b3c3c6b00f535eae5088a28dfd4136ba', 'width': 960}, {'height': 462, 'url': 'https://external-preview.redd.it/cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl.png?width=1080&crop=smart&format=pjpg&auto=webp&s=39740100a8eeae5128de5e1a24ab5857ecb523d7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cmd1NzBkY2tnbTVmMYuIAs6gJs-DAX6KZI3Cha-JnaQ2QrMKG6h46Z6jKHfl.png?format=pjpg&auto=webp&s=37e7017734a69c44c046ef7b447ca3e3b7b48cc5', 'width': 2520}, 'variants': {}}]}
Question: how do you use these beasts to earn money?
1
[removed]
2025-06-08T04:56:16
https://www.reddit.com/r/LocalLLaMA/comments/1l643gg/question_how_do_you_use_these_beasts_to_earn_money/
absolute-calm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l643gg
false
null
t3_1l643gg
/r/LocalLLaMA/comments/1l643gg/question_how_do_you_use_these_beasts_to_earn_money/
false
false
self
1
null
# [Tool Release] Poor AI – A Self-Generating CLI for AI-Assisted Development
1
[removed]
2025-06-08T06:17:00
https://www.reddit.com/r/LocalLLaMA/comments/1l65ci3/tool_release_poor_ai_a_selfgenerating_cli_for/
DrinkMean4332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l65ci3
false
null
t3_1l65ci3
/r/LocalLLaMA/comments/1l65ci3/tool_release_poor_ai_a_selfgenerating_cli_for/
false
false
self
1
null
Motorola mobile phones to soon have on-device local LLMs
0
2025-06-08T06:32:12
https://i.redd.it/2ey1d1kxdn5f1.png
WordyBug
i.redd.it
1970-01-01T00:00:00
0
{}
1l65kta
false
null
t3_1l65kta
/r/LocalLLaMA/comments/1l65kta/motorola_mobile_phones_to_soon_have_ondevice/
false
false
default
0
{'enabled': True, 'images': [{'id': '2ey1d1kxdn5f1', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=108&crop=smart&auto=webp&s=9c3e49680858036215c08224143222706d18d608', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=216&crop=smart&auto=webp&s=ed75b3a257f223de5b39077c7b2daaaac8cb21d4', 'width': 216}, {'height': 342, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=320&crop=smart&auto=webp&s=b48e165891164f2a40b982376599538bd43418ef', 'width': 320}, {'height': 684, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=640&crop=smart&auto=webp&s=cdaf232b196975c514a5195106f1c11306ad7d65', 'width': 640}, {'height': 1026, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=960&crop=smart&auto=webp&s=55bc1cb696cc9cff8082ffd043e4976c50091f2f', 'width': 960}, {'height': 1155, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?width=1080&crop=smart&auto=webp&s=8984cce572e3b624d8e74fa5e8366b2d8b06ae10', 'width': 1080}], 'source': {'height': 1782, 'url': 'https://preview.redd.it/2ey1d1kxdn5f1.png?auto=webp&s=3251974b5ca37298e10ddb051de32cb7fdc4c48d', 'width': 1666}, 'variants': {}}]}
Best models by size?
36
I am confused how to find benchmarks that tell me the strongest model for math/coding by size. I want to know which local model is strongest that can fit in 16GB of RAM (no GPU). I would also like to know the same thing for 32GB, Where should I be looking for this info?
2025-06-08T06:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1l65r2k/best_models_by_size/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l65r2k
false
null
t3_1l65r2k
/r/LocalLLaMA/comments/1l65r2k/best_models_by_size/
false
false
self
36
null
Instead of summarizing, cut filler words
1
[removed]
2025-06-08T07:14:48
https://www.reddit.com/r/LocalLLaMA/comments/1l667i6/instead_of_summarizing_cut_filler_words/
AlphaHusk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l667i6
false
null
t3_1l667i6
/r/LocalLLaMA/comments/1l667i6/instead_of_summarizing_cut_filler_words/
false
false
self
1
null
Apple's new research paper on the limitations of "thinking" models
180
2025-06-08T07:22:03
https://machinelearning.apple.com/research/illusion-of-thinking
seasonedcurlies
machinelearning.apple.com
1970-01-01T00:00:00
0
{}
1l66b8a
false
null
t3_1l66b8a
/r/LocalLLaMA/comments/1l66b8a/apples_new_research_paper_on_the_limitations_of/
false
false
default
180
{'enabled': False, 'images': [{'id': 'D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=108&crop=smart&auto=webp&s=cdcbdf7d4e054676a9ea185723b2cca1b298211b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=216&crop=smart&auto=webp&s=236ee2cb0fd51c8feb2185840b9b4c5339cb0ba1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=320&crop=smart&auto=webp&s=30825797867938b50b42671226c9c7da51a9f448', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=640&crop=smart&auto=webp&s=08a86235cd9b365c08749230f9302dd340fba50b', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=960&crop=smart&auto=webp&s=1bab2db17030b48eb06b2b7c20f33a46e36e69e4', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?width=1080&crop=smart&auto=webp&s=1e1801db787237252688dda8bbf280b56afb151f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5UeYdCFnJOGDfg-kwWIoUmPBZZuPLPogz2CSwjAzY08.jpg?auto=webp&s=57f21999f40fdbca7f025acb8aa88a0534c88097', 'width': 1200}, 'variants': {}}]}
Motorola is integrating on-device local AI into its mobile phones
18
2025-06-08T07:24:07
https://i.redd.it/rok89w2cnn5f1.png
WordyBug
i.redd.it
1970-01-01T00:00:00
0
{}
1l66cbt
false
null
t3_1l66cbt
/r/LocalLLaMA/comments/1l66cbt/motorola_is_integrating_ondevice_local_ai_to_its/
false
false
https://b.thumbs.redditm…KAshObB1hM8g.jpg
18
{'enabled': True, 'images': [{'id': 'yw_NVxHG0MWYH3Ji3pVSUgWfxMEJSw4jYBczqrCEYS0', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?width=108&crop=smart&auto=webp&s=3c412268db362d859f2fb72454e8b8bacba81c34', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?width=216&crop=smart&auto=webp&s=2bef5a5298ab6724b722760f3a7b5f16a3307c75', 'width': 216}, {'height': 342, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?width=320&crop=smart&auto=webp&s=e084d9a938cd52406e82857a5002ed055d70c786', 'width': 320}, {'height': 684, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?width=640&crop=smart&auto=webp&s=f67f5cf553ab9bd8b0830ff8373881b8137bbe5f', 'width': 640}, {'height': 1026, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?width=960&crop=smart&auto=webp&s=1cc55795d6bef2d5905c068b161e1d5e1ade30a8', 'width': 960}, {'height': 1155, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?width=1080&crop=smart&auto=webp&s=48e4097674df4199668b80f7a018b6622701ea70', 'width': 1080}], 'source': {'height': 1782, 'url': 'https://preview.redd.it/rok89w2cnn5f1.png?auto=webp&s=4a26a84d654bbc839625ad788e8b76100bf7d439', 'width': 1666}, 'variants': {}}]}
the new Gemini 2.5 PRO reigns SUPREME
1
[removed]
2025-06-08T07:26:22
https://www.reddit.com/r/LocalLLaMA/comments/1l66di4/the_new_gemini_25_pro_reigns_supreme/
CandidLeek319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l66di4
false
null
t3_1l66di4
/r/LocalLLaMA/comments/1l66di4/the_new_gemini_25_pro_reigns_supreme/
false
false
https://a.thumbs.redditm…6_vCNbLQIp34.jpg
1
null
the new Gemini 2.5 PRO reigns SUPREME
0
https://preview.redd.it/…e full version)?
2025-06-08T07:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1l66ez8/the_new_gemini_25_pro_reigns_supreme/
DistributionOk2434
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l66ez8
false
null
t3_1l66ez8
/r/LocalLLaMA/comments/1l66ez8/the_new_gemini_25_pro_reigns_supreme/
false
false
https://b.thumbs.redditm…z2qm7Ry3pv1k.jpg
0
null
Vision support in ChatterUI (albeit, very slow)
47
Pre-release here: https://github.com/Vali-98/ChatterUI/releases/tag/v0.8.7-beta3

For the uninitiated, ChatterUI is an LLM chat client which can run models on your device or connect to proprietary/open source APIs.

I've been working on getting attachments working in ChatterUI, and thanks to pocketpal's maintainer, llama.rn now has local vision support! Vision support is now available in pre-release for compatible local models + their mmproj files, and for APIs which support them (like Google AI Studio or OpenAI).

Unfortunately, since llama.cpp itself lacks a stable Android GPU backend, image processing is **extremely** slow, as the screenshot above shows: 5 minutes for a 512x512 image. iOS performance, however, seems decent, but that build is currently not available for public testing.

Feel free to share any issues or thoughts on the current state of the app!
2025-06-08T07:45:42
https://i.redd.it/zm7h1u2frn5f1.png
----Val----
i.redd.it
1970-01-01T00:00:00
0
{}
1l66nmv
false
null
t3_1l66nmv
/r/LocalLLaMA/comments/1l66nmv/vision_support_in_chatterui_albeit_very_slow/
false
false
https://b.thumbs.redditm…JZCH_ab_EVlE.jpg
47
{'enabled': True, 'images': [{'id': 'x4Hh47HuPNwLz-vc5KP7yhyAkHPIcEMd66gu9rRDZYc', 'resolutions': [{'height': 173, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?width=108&crop=smart&auto=webp&s=939f9b5cd916ed90d5bf82509fe467e8d5f67966', 'width': 108}, {'height': 346, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?width=216&crop=smart&auto=webp&s=b51bd96afa4fb7f13bbeb056e4c277a26c9bbfe6', 'width': 216}, {'height': 513, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?width=320&crop=smart&auto=webp&s=8645f6446f3d2229f1c4b04c0fc4bf8b0da6a8f9', 'width': 320}, {'height': 1027, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?width=640&crop=smart&auto=webp&s=6b206a47664af8449186456d61abfc65ee826351', 'width': 640}, {'height': 1541, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?width=960&crop=smart&auto=webp&s=48316ace5e956d65c03e5d304b0e1f60885ac85c', 'width': 960}, {'height': 1734, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?width=1080&crop=smart&auto=webp&s=19bb63fc7f1cc732fa8ab1fc18bc0d833a590cce', 'width': 1080}], 'source': {'height': 1734, 'url': 'https://preview.redd.it/zm7h1u2frn5f1.png?auto=webp&s=55c7dcead5b966c6b6b89de4bcedc7ba60d7f665', 'width': 1080}, 'variants': {}}]}
Testing Frontier LLMs on 2025 Chinese Gaokao Math Problems - Fresh Benchmark Results
27
Tested frontier LLMs on yesterday's 2025 Chinese Gaokao (National College Entrance Examination) math problems (74 points total: 8 single-choice, 3 multiple-choice, 3 fill-in-blank). Since these were released June 7th, zero chance of training data contamination. [result](https://preview.redd.it/zj8lzkziwn5f1.png?width=2369&format=png&auto=webp&s=488120bcd8d64966da383e95cc585e3901239fac) Question 6 was a vector geometry problem requiring visual interpretation, so text-only models (Deepseek series, Qwen series) couldn't attempt it.
2025-06-08T08:16:54
https://www.reddit.com/r/LocalLLaMA/comments/1l67457/testing_frontier_llms_on_2025_chinese_gaokao_math/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67457
false
null
t3_1l67457
/r/LocalLLaMA/comments/1l67457/testing_frontier_llms_on_2025_chinese_gaokao_math/
false
false
https://b.thumbs.redditm…xBF16DR3yH4U.jpg
27
null
Rig upgraded to 8x3090
428
About 1 year ago I posted about a [4 x 3090 build](https://www.reddit.com/r/LocalLLaMA/comments/1bqxfc0/another_4x3090_build/). This machine has been great for learning to fine-tune LLMs and produce synthetic datasets. However, even with DeepSpeed and 8B models, the maximum full fine-tune context length for training was about 2560 tokens per conversation.

Finally I decided to get some 16x -> 8x8 lane splitters, some more GPUs and some more RAM. Training Qwen/Qwen3-8B (full fine-tune) with 4K context length completed successfully and without PCIe errors, and I am happy with the build. The spec is:

* Asrock Rack EP2C622D16-2T
* 8x RTX 3090 FE (192 GB VRAM total)
* Dual Intel Xeon 8175M
* 512 GB DDR4 2400
* EZDIY-FAB PCIe riser cables
* Unbranded AliExpress PCIe bifurcation 16x to x8x8
* Unbranded AliExpress open chassis

As the lanes are now split, each GPU has about half the bandwidth. Even if training takes a bit longer, being able to full fine-tune to a longer context window is worth it in my opinion.
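For context, here's a minimal sketch of the HF Trainer + DeepSpeed wiring for this kind of run (the data file, `ds_config.json`, and hyperparameters are placeholders, not the exact setup used here):

```python
# Minimal sketch of a full fine-tune with HF Trainer + DeepSpeed.
# Placeholders: train.jsonl (records with a "text" field) and ds_config.json
# (ZeRO stage / offload settings live there).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen3-8B"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

ds = load_dataset("json", data_files="train.jsonl")["train"]  # placeholder data

def tokenize(ex):
    return tok(ex["text"], truncation=True, max_length=4096)  # 4K context

ds = ds.map(tokenize, remove_columns=ds.column_names)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # x8 GPUs under the deepspeed launcher
    gradient_accumulation_steps=8,
    bf16=True,                       # Ampere (3090) supports bf16
    deepspeed="ds_config.json",
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # sets labels
)
trainer.train()
```

This would be launched with the DeepSpeed launcher (e.g. `deepspeed train.py`) rather than plain `python`, so all eight GPUs participate.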
2025-06-08T08:29:06
https://i.redd.it/7ios74ratn5f1.jpeg
lolzinventor
i.redd.it
1970-01-01T00:00:00
0
{}
1l67afp
false
null
t3_1l67afp
/r/LocalLLaMA/comments/1l67afp/rig_upgraded_to_8x3090/
false
false
https://b.thumbs.redditm…2mhWAxGpuhWE.jpg
428
{'enabled': True, 'images': [{'id': '-WvJw7ZZ_qSmswW2idiCv8wPrDK2tbRkhHlB5fcKeUU', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?width=108&crop=smart&auto=webp&s=96236796d070a68723ca116778095a5a79ea0886', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?width=216&crop=smart&auto=webp&s=061788e97c740f6e8d50204d4849823c1cc4b079', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?width=320&crop=smart&auto=webp&s=e1815b851b853d4571ceb7563da453388c914422', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?width=640&crop=smart&auto=webp&s=246fae63e33b93df4bd41453232e98b5a716bad9', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?width=960&crop=smart&auto=webp&s=a9e4399ff5bd8daacdb5acb9bc1cc81b2e251da6', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?width=1080&crop=smart&auto=webp&s=dff006f4559d03d3e27ba15ed64fe31158637782', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/7ios74ratn5f1.jpeg?auto=webp&s=855dae102ab76ccfee8d7726b5dd3a2f2455c0cc', 'width': 3024}, 'variants': {}}]}
Any good fine-tuning framework/system?
2
I want to fine-tune a complex AI process that will likely require fine-tuning multiple LLMs to perform different actions. Are there any good gateways, python libraries, or any other setup that you would recommend to collect data, create training dataset, measure performance, etc? Preferably an all-in-one solution?
2025-06-08T08:32:09
https://www.reddit.com/r/LocalLLaMA/comments/1l67c2a/any_good_finetuning_frameworksystem/
No_Heart_159
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67c2a
false
null
t3_1l67c2a
/r/LocalLLaMA/comments/1l67c2a/any_good_finetuning_frameworksystem/
false
false
self
2
null
[In Development] Serene Pub, a simpler SillyTavern like roleplay client
28
I've been using Ollama to roleplay for a while now. SillyTavern has been fantastic, but I've had some frustrations with it. I've started developing my own application with the same copy-left license. I am at the point where I want to test the waters, get some feedback and gauge interest.

[**Link to the project & screenshots**](https://github.com/doolijb/serene-pub/tree/main)

(It's in early alpha; it's not feature complete and there will be bugs.)

**About the project:**

Serene Pub is a modern, customizable chat application designed for immersive roleplay and creative conversations. This app is heavily inspired by Silly Tavern, with the objective of being more intuitive, responsive and simple to configure.

Primary concerns Serene Pub aims to address:

1. Reduce the number of nested menus and settings.
2. Reduce visual clutter.
3. Manage settings server-side to prevent configurations from changing because the user switched windows/devices.
4. Make API calls & chat completion requests asynchronously server-side so they process regardless of window/device state.
5. Use sockets for all data, so the user sees the same information updated across all windows/devices.
6. Maintain compatibility with the majority of Silly Tavern imports/exports, i.e. Character Cards.
7. Overall, be a well-rounded app with a suite of features. Use SillyTavern if you want the most options, features and plugin support.

\---

You can read more details in the readme; see the link above.

Thanks everyone!
2025-06-08T08:44:14
https://www.reddit.com/r/LocalLLaMA/comments/1l67i14/in_development_serene_pub_a_simpler_sillytavern/
doolijb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67i14
false
null
t3_1l67i14
/r/LocalLLaMA/comments/1l67i14/in_development_serene_pub_a_simpler_sillytavern/
false
false
self
28
{'enabled': False, 'images': [{'id': 'Vr-dkzqy7wO0NXweCWge6EzB1AmYJbON5tbAvvWroio', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?width=108&crop=smart&auto=webp&s=53eb895cd33bf255dc2f5e7e672def54f65f6339', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?width=216&crop=smart&auto=webp&s=4f07ed78590f25c5593158c16a70bb8989281538', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?width=320&crop=smart&auto=webp&s=e769ae927a18fb95666184782d490b00a00c0732', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?width=640&crop=smart&auto=webp&s=75807ff1e766c93703dc374dcc620b4a8102c29f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?width=960&crop=smart&auto=webp&s=866d5b1535283f3b1eb7eda71fe3cdbb772fc075', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?width=1080&crop=smart&auto=webp&s=4e495d0c5651e08d314a2f0fed134bab75c59c17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tKqJpLUoZRvqW1JN6I_V7JsXGqsFzFKrVyfpDrmC_4U.jpg?auto=webp&s=0712ad2b9f7b895bcaae3e655f8a00bb5b6f181d', 'width': 1200}, 'variants': {}}]}
Need a tutorial on GPUs
0
To understand more about training and inference, I need to learn a bit more about how GPUs work: stuff like SMs, warps, threads, etc. I'm not interested in GPU programming. Is there any video/course on this that is not too long (shorter than 10 hours)?
2025-06-08T08:57:07
https://www.reddit.com/r/LocalLLaMA/comments/1l67obk/need_a_tutorial_on_gpus/
DunderSunder
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67obk
false
null
t3_1l67obk
/r/LocalLLaMA/comments/1l67obk/need_a_tutorial_on_gpus/
false
false
self
0
null
Macbook Air M4: Worth going for 32GB or is bandwidth the bottleneck?
1
[removed]
2025-06-08T08:59:40
https://www.reddit.com/r/LocalLLaMA/comments/1l67ply/macbook_air_m4_worth_going_for_32gb_or_is/
broad_marker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1l67ply
false
null
t3_1l67ply
/r/LocalLLaMA/comments/1l67ply/macbook_air_m4_worth_going_for_32gb_or_is/
false
false
self
1
null