Dataset columns:

| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
Anybody got Qwen2.5vl to work consistently?
1
I've been using it for only a few hours and I can tell it's very accurate at screen captioning, detecting UI elements, and displaying their coordinates in JSON format, but it has a bad habit of going into an endless loop. I'm using the 7B model at Q8, and I've only prompted it to find all the UI elements on the screen. It does, but then it gets stuck in an endless repetitive loop, generating the same UI elements/coordinates, or it finds all of them and then loops back through them again. Next thing I know, the model has been looping for 3 minutes and I get a waterfall of repetitive UI element entries. I've been trying to make it agentic by pairing it with Q3-4b-q8 as the action model that would select a UI element and interact with it, but the stability issues with Q2.5vl are a major roadblock. If I can get around that, I should have a basic agent working, since that's pretty much the final piece of the puzzle.
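A minimal sketch of one way to curb the looping described above: cap the number of generated tokens and add a repetition penalty at request time. This assumes an Ollama-style local endpoint; the model tag and parameter values are illustrative, and llama.cpp's server uses slightly different option names.

```python
# Hedged sketch: a hard token cap plus a repetition penalty usually stops a
# model from re-emitting the same UI-element entries for minutes on end.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # assumed Ollama endpoint
    json={
        "model": "qwen2.5vl:7b-q8_0",  # hypothetical tag; use your local one
        "prompt": "Find all UI elements on screen and return them as JSON.",
        # "images": ["<base64 screenshot>"],  # vision input omitted in this sketch
        "stream": False,
        "options": {
            "repeat_penalty": 1.15,  # discourage repeating recent tokens
            "repeat_last_n": 256,    # how far back the penalty looks
            "num_predict": 1024,     # hard cap so a loop cannot run for minutes
            "temperature": 0.1,      # keep coordinate output near-deterministic
        },
    },
    timeout=300,
)
print(resp.json()["response"])
```

Constraining the output with a grammar or JSON schema is another option worth trying if the penalty alone doesn't break the cycle.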
2025-05-19T13:41:30
https://www.reddit.com/r/LocalLLaMA/comments/1kqbzr2/anybody_got_qwen25vl_to_work_consistently/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbzr2
false
null
t3_1kqbzr2
/r/LocalLLaMA/comments/1kqbzr2/anybody_got_qwen25vl_to_work_consistently/
false
false
self
1
null
Local OCR in mobile applications with React Native ExecuTorch
1
[removed]
2025-05-19T13:44:37
https://v.redd.it/7xh5woi5tq1f1
FinancialAd1961
v.redd.it
1970-01-01T00:00:00
0
{}
1kqc2b7
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/7xh5woi5tq1f1/DASHPlaylist.mpd?a=1750254292%2COTE1MTgzMDQ1OTY3MDhlMGQ3ZjBjZTJjMGY4OWVhMmNkNjBkNGI2OWU5ZDY2ZTNmMDY3MTg4NjcyMTY0NTQ5ZA%3D%3D&v=1&f=sd', 'duration': 101, 'fallback_url': 'https://v.redd.it/7xh5woi5tq1f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/7xh5woi5tq1f1/HLSPlaylist.m3u8?a=1750254292%2CMGVhYWMyYjIwMmNlZDFmMzMwNzBkOTM5NjMzMDdhNGUzZWY4MDIyZDJkMmI3MTM3MDlkNGUwODQ3NTlhNzFmMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7xh5woi5tq1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1110}}
t3_1kqc2b7
/r/LocalLLaMA/comments/1kqc2b7/local_ocr_in_mobile_applications_with_react/
false
false
https://external-preview…4406d57bec341435
1
{'enabled': False, 'images': [{'id': 'bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=108&crop=smart&format=pjpg&auto=webp&s=4e826fa31e60d372d89bba669c8522d02db2bf8c', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=216&crop=smart&format=pjpg&auto=webp&s=ebdcc4ab10fd3c2b5e88693c6d9eba03ffa1d2a1', 'width': 216}, {'height': 207, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=320&crop=smart&format=pjpg&auto=webp&s=ee8bf7f923cfbfa70a7ebd410fdda22d6bef23c5', 'width': 320}, {'height': 415, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=640&crop=smart&format=pjpg&auto=webp&s=870b718531c69b3af455aa37a16a74887a53a8c9', 'width': 640}, {'height': 622, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=960&crop=smart&format=pjpg&auto=webp&s=5d8bc236ec15c5908c6575d33a6ed22a6b1d4134', 'width': 960}, {'height': 700, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fafc31b7c3f50bf718bbdb3cca27c99136e73f9c', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?format=pjpg&auto=webp&s=e100211ea4801823a1cc7fb5cfffb2434949726b', 'width': 1110}, 'variants': {}}]}
Best models for 24 and 32 GB VRAM? 5 distinct tasks, using OpenWebUI
1
Hello all, I am setting up a personal OpenWebUI instance for friends and family. My plan is to mostly use the 3090, but give access to the 5090 when I'm not gaming or doing other AI projects in Comfy, using a two-server Ollama setup. So the 32 GB models might offer a bit more when that server is available, but the primary is running on 24 GB VRAM with 64 GB system RAM. I want to set up five models, one for each of these purposes:

1. General purpose, intended to replace ChatGPT or Gemini but local. Should be their general go-to for most tasks. Thinking Qwen 3 32B, Gemma 27B, Mistral? DeepSeek?

2. Voice chats (two-way, using Kokoro TTS). It should be faster in general and can be prompted to give answers in a conversational style, not huge blocks or bullet lists. Think 12B versions of the above? Or lower?

3. RP and limited uncensored use. Not looking for anything criminal, but I want less pushback on things like medical advice, or creating image-gen prompts or stories that some models might consider explicit. Even Gemma refused to create an image prompt for Angelina Jolie in a bikini as Tomb Raider!

4. Coding. I think I decided on Qwen 2.5 Coder, but correct me if I am wrong.

5. Image-gen prompting, to run on CPU. Thinking the smallest 1B or 3B Gemma. It just needs to feed prompts to ComfyUI and enhance prompts when asked. Keeping it on CPU frees up maximum VRAM for ComfyUI image or video gen.

I don't want to be overwhelmed with models; hopefully I can settle on one for each purpose. I know it's a lot to ask, but I'm hoping to get some help, and maybe it can help others looking to do the same. My last question: should I be maxing out context length when possible? I noticed higher context length eats into VRAM, where it doesn't seem to when loaded on CPU (see the sketch below). Any other thoughts on the best way to do my setup?
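On the context-length question above, a minimal sketch: with Ollama the context window is a per-request option, and a larger `num_ctx` reserves proportionally more VRAM for the KV cache when the model sits on the GPU, so it can be set per purpose rather than maxed out globally. The endpoint and model names here are examples, not recommendations.

```python
# Hedged sketch: pick the context size per task instead of maxing it out,
# since KV-cache VRAM grows with num_ctx for GPU-resident models.
import requests

def ask(model: str, prompt: str, num_ctx: int = 8192) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",  # assumed Ollama endpoint
        json={"model": model, "prompt": prompt, "stream": False,
              "options": {"num_ctx": num_ctx}},
        timeout=600,
    )
    r.raise_for_status()
    return r.json()["response"]

# Big context for the general-purpose model, small and fast for voice chat.
print(ask("qwen3:32b", "Summarize our trip notes.", num_ctx=16384))
print(ask("gemma3:12b", "Reply conversationally: how's the weather?", num_ctx=4096))
```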
2025-05-19T14:16:18
https://www.reddit.com/r/LocalLLaMA/comments/1kqcsv3/best_models_for_24_and_32gb_vram_5_distinct_tasks/
puppyjsn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqcsv3
false
null
t3_1kqcsv3
/r/LocalLLaMA/comments/1kqcsv3/best_models_for_24_and_32gb_vram_5_distinct_tasks/
false
false
self
1
null
Late-Night Study Lifesaver? Testing Out Ask AI from SolutionInn
1
[removed]
2025-05-19T14:47:26
[deleted]
1970-01-01T00:00:00
0
{}
1kqdk6w
false
null
t3_1kqdk6w
/r/LocalLLaMA/comments/1kqdk6w/latenight_study_lifesaver_testing_out_ask_ai_from/
false
false
default
1
null
Creating a "learning" coding assistant
0
So I have recently started using Xcode to create an iPhone app. I have never had the patience for writing code, so I've been using Gemini and have actually come pretty far with my app. Basically I provide it with my Swift code for each file, explain what my goal is, and go from there. I currently have LM Studio and AnythingLLM installed on my M4 Pro Mac mini, and I was curious whether anyone has recommendations on whether it's possible, and if so how, to "train" a model so that I do not have to re-paste my code every time I come back to work on a new feature. I get to a point where Gemini begins hallucinating and going in circles, so I have to start a new chat and explain everything all over again. Is it possible to take each chat, store it in a database, and recall it in the future so as to "teach" the LLM how my app works, making it easier for it to assist with changes or updates? I apologize for my ignorance.
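What the post describes is usually solved with retrieval rather than training: store each chat, then pull the most relevant past notes into the next prompt. The minimal sketch below uses TF-IDF purely for brevity (an embedding model retrieves better); all strings and names are illustrative.

```python
# Hedged sketch of "store chats, recall later" via simple retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_chats = [  # one entry per saved conversation summary (illustrative)
    "SettingsView persists preferences through UserDefaults.",
    "Networking wraps URLSession in an APIClient actor.",
    "We chose SwiftData for the offline cache.",
]

vectorizer = TfidfVectorizer().fit(past_chats)
chat_vectors = vectorizer.transform(past_chats)

def recall(question: str, k: int = 2) -> list[str]:
    """Return the k stored notes most similar to the new question."""
    sims = cosine_similarity(vectorizer.transform([question]), chat_vectors)[0]
    return [past_chats[i] for i in sims.argsort()[::-1][:k]]

# Prepend the recalled notes to the new prompt instead of re-explaining the app.
print(recall("How does the app persist user settings?"))
```

AnythingLLM's workspace documents do essentially this retrieval under the hood, so pasting chat summaries in as documents may be enough without writing any code.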
2025-05-19T14:48:40
https://www.reddit.com/r/LocalLLaMA/comments/1kqdl9o/creating_a_learning_coding_assistant/
xkrist0pherx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqdl9o
false
null
t3_1kqdl9o
/r/LocalLLaMA/comments/1kqdl9o/creating_a_learning_coding_assistant/
false
false
self
0
null
How to get ethical approval for research as an independent researcher?
1
[removed]
2025-05-19T14:57:46
https://www.reddit.com/r/LocalLLaMA/comments/1kqdt3x/how_to_get_ethical_approval_for_research_as_an/
TrainingCultural7548
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqdt3x
false
null
t3_1kqdt3x
/r/LocalLLaMA/comments/1kqdt3x/how_to_get_ethical_approval_for_research_as_an/
false
false
self
1
null
SFT with 8-bit quants or Mixed precision
1
[removed]
2025-05-19T15:04:35
https://www.reddit.com/r/LocalLLaMA/comments/1kqdze0/sft_with_8bit_quants_or_mixed_precision/
Circumstancision
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqdze0
false
null
t3_1kqdze0
/r/LocalLLaMA/comments/1kqdze0/sft_with_8bit_quants_or_mixed_precision/
false
false
self
1
null
Best Non-Chinese Open Reasoning LLMs atm?
0
So before the inevitable comes up: yes, I know there isn't really much harm in running Qwen or DeepSeek locally, but unfortunately bureaucracies gonna bureaucracy. I've been told to find a non-Chinese LLM to use, both for (yes, silly) security concerns and (slightly less silly) censorship concerns. I know Gemma is pretty decent as a direct LLM, but it wasn't trained with reasoning capabilities. I've already tried Phi-4 Reasoning, but honestly it was using up a ridiculous number of tokens as it got stuck thinking in circles. Is anyone aware of any non-Chinese open models with good reasoning capabilities?
2025-05-19T15:28:09
https://www.reddit.com/r/LocalLLaMA/comments/1kqekgh/best_nonchinese_open_reasoning_llms_atm/
ProbaDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqekgh
false
null
t3_1kqekgh
/r/LocalLLaMA/comments/1kqekgh/best_nonchinese_open_reasoning_llms_atm/
false
false
self
0
null
OS/Software for running an LLM Server AND Gaming?
0
I've done research on the hardware, but I'm a bit confused about the software. I want to build a PC that I can access remotely to run LLM inference as well as do some single-player gaming over Moonlight streaming, ideally with Wake-on-LAN to reduce power consumption.

1. Would Windows or Linux be the better choice here? (I already use Arch on my laptop.) Or dual booting, perhaps? Is Proxmox viable here?
2. I am planning on going dual GPUs; does my OS choice limit which GPU I can get? (e.g., Intel does better on Windows from what I've read, at least for gaming.)
3. I already did some investigating and will use Tailscale with llama.cpp/Ollama and OpenWebUI.

Thoughts?
2025-05-19T15:59:07
https://www.reddit.com/r/LocalLLaMA/comments/1kqfbxl/ossoftware_for_running_an_llm_server_and_gaming/
legit_split_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqfbxl
false
null
t3_1kqfbxl
/r/LocalLLaMA/comments/1kqfbxl/ossoftware_for_running_an_llm_server_and_gaming/
false
false
self
0
null
I'm trying to create a lightweight LLM with limited context window using only MLP layers
6
This is an ambitious and somewhat unconventional challenge, but I'm fascinated by the idea of exploring the limits of what pure feed-forward networks can achieve in language modeling, especially for highly resource-constrained environments. The goal is to build something incredibly efficient, perhaps for edge devices or applications where even a minimal attention layer is too computationally expensive.

I'm currently brainstorming initial approaches, and I'd love to get ideas from other people who might have explored similar uncharted territories or have insights into the fundamental capabilities of MLPs for sequential tasks.

Has anyone encountered or experimented with MLP-only architectures for tasks that traditionally use RNNs or Transformers? Are there any lesser-known papers, theoretical concepts, or forgotten neural network architectures that might offer a foundational understanding or a starting point for this? In what creative ways can an MLP learn sequential dependencies or contextual information in a very limited window without relying on attention or traditional recurrence? Any thoughts on how to structure the input representation, the MLP layers, or the training process to maximize efficiency and achieve some level of coherence?

Let's brainstorm some outside-the-box solutions.
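One concrete starting point, since the post asks for foundational architectures: the classic fixed-window feed-forward language model (Bengio et al., 2003) concatenates the embeddings of the last N tokens and predicts the next one with plain MLP layers, no attention or recurrence. A minimal sketch with illustrative sizes:

```python
# Hedged sketch of a fixed-window, MLP-only next-token predictor.
import torch
import torch.nn as nn

class WindowMLPLM(nn.Module):
    def __init__(self, vocab_size: int, window: int = 8,
                 d_embed: int = 64, d_hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_embed)
        self.mlp = nn.Sequential(            # the entire "sequence model"
            nn.Linear(window * d_embed, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, vocab_size),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, window) ids -> (batch, vocab) next-token logits
        x = self.embed(tokens).flatten(start_dim=1)  # concat window embeddings
        return self.mlp(x)

model = WindowMLPLM(vocab_size=1000)
logits = model(torch.randint(0, 1000, (4, 8)))
print(logits.shape)  # torch.Size([4, 1000])
```

Position information comes for free here because each window slot gets its own weights; the hard limit is that nothing outside the window can influence the prediction. MLP-Mixer-style token mixing is the other obvious family to look at.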
2025-05-19T16:18:49
https://www.reddit.com/r/LocalLLaMA/comments/1kqftyo/im_trying_to_create_a_lightweight_llm_with/
tagrib
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqftyo
false
null
t3_1kqftyo
/r/LocalLLaMA/comments/1kqftyo/im_trying_to_create_a_lightweight_llm_with/
false
false
self
6
null
Local speech chat with Gemma3, speaking like a polyglot with multiple-personalities
20
Low-latency, speech-to(text-to)-speech conversation in any Linux window: [demo video here](https://github.com/QuantiusBenignus/BlahST/blob/main/SPEECH-CHAT.md). This is **blahstbot**, part of the UI-less, text-in-any-window BlahST for Linux.
2025-05-19T16:19:07
https://www.reddit.com/r/LocalLLaMA/comments/1kqfu8l/local_speech_chat_with_gemma3_speaking_like_a/
QuantuisBenignus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqfu8l
false
null
t3_1kqfu8l
/r/LocalLLaMA/comments/1kqfu8l/local_speech_chat_with_gemma3_speaking_like_a/
false
false
self
20
{'enabled': False, 'images': [{'id': 'TnD_OFY97c8GzlHvJXKbZSdvRBzDietvVU7KI3POcMY', 'resolutions': [{'height': 28, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?width=108&crop=smart&auto=webp&s=2dbdffba8fa64b438ada5677f9f957ac03937852', 'width': 108}, {'height': 56, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?width=216&crop=smart&auto=webp&s=431ae75133cc757f7aa2c19d22810d6d8515ddd0', 'width': 216}, {'height': 83, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?width=320&crop=smart&auto=webp&s=04ed9ce9fc992b5f2e62999ef1cc1f95acb772d7', 'width': 320}, {'height': 166, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?width=640&crop=smart&auto=webp&s=401e97a9e710f2fc3df279fe9526b8a24266da94', 'width': 640}, {'height': 250, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?width=960&crop=smart&auto=webp&s=bf34082ceb7c60e3dd40f6737dd41fe37f0c1879', 'width': 960}, {'height': 281, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?width=1080&crop=smart&auto=webp&s=2ae54ce0ab9e527f88699f3a439064b785ccc052', 'width': 1080}], 'source': {'height': 553, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?auto=webp&s=3c0ac159df22c32724fe0fc4d1e40b9f27404659', 'width': 2122}, 'variants': {}}]}
Drummer's Valkyrie 49B v1 - A strong, creative finetune of Nemotron 49B
70
2025-05-19T17:00:51
https://huggingface.co/TheDrummer/Valkyrie-49B-v1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1kqgwh2
false
null
t3_1kqgwh2
/r/LocalLLaMA/comments/1kqgwh2/drummers_valkyrie_49b_v1_a_strong_creative/
false
false
https://b.thumbs.redditm…mXbCzhi6qXbI.jpg
70
{'enabled': False, 'images': [{'id': '6AW3V1L19ttaHqvJIOwp6QUcrDdh2aQdvn5BFvbdIxA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?width=108&crop=smart&auto=webp&s=1e894be416e828a88e217ba1f2b4bdfbf53e0746', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?width=216&crop=smart&auto=webp&s=24f3efa203f5019022d2841f758f9aec0a2d3757', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?width=320&crop=smart&auto=webp&s=e22af51576fed77125cb67bebfe5fc7bfe54ccf3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?width=640&crop=smart&auto=webp&s=899485ad26383ade67df07fffa408803ffb0b047', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?width=960&crop=smart&auto=webp&s=5830ada35eacfc17c30455f002a769da56b9a03b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?width=1080&crop=smart&auto=webp&s=c3dbb068ade1b9f873d8bd67a245a98453e8a5fd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?auto=webp&s=6f9413f91c78609d76dd655deae2f8664ec948fd', 'width': 1200}, 'variants': {}}]}
MLX LM now integrated within Hugging Face
62
thread: [https://x.com/victormustar/status/1924510517311287508](https://x.com/victormustar/status/1924510517311287508)
2025-05-19T17:09:53
https://v.redd.it/bvoizhqstr1f1
paf1138
v.redd.it
1970-01-01T00:00:00
0
{}
1kqh56l
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bvoizhqstr1f1/DASHPlaylist.mpd?a=1750266709%2COGJlZjA4YTkwN2EzZjM2NDA1ZWFlNGRlZTNkNGZmNGJiOTkyMzQ2NDBlNTA3NmU0MmIzNTEzNWY1ZmU5NDg0MA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/bvoizhqstr1f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/bvoizhqstr1f1/HLSPlaylist.m3u8?a=1750266709%2CMjUyNTMyN2E3YmFiMTQxYjZjOTkxYjRmZmU3OWUwNjg5MzkxNzFhZDdkMWJkMGM1ZTM3NjZlMTY1NGZiYjZkYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/bvoizhqstr1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1848}}
t3_1kqh56l
/r/LocalLLaMA/comments/1kqh56l/mlx_lm_now_integrated_within_hugging_face/
false
false
https://external-preview…8176fac38c32afee
62
{'enabled': False, 'images': [{'id': 'ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?width=108&crop=smart&format=pjpg&auto=webp&s=7875c6af7879db14178f5696692eb7035b104563', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?width=216&crop=smart&format=pjpg&auto=webp&s=de61facedf484a2580435e0fc301095bbfd9ffef', 'width': 216}, {'height': 187, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?width=320&crop=smart&format=pjpg&auto=webp&s=861b41a52c01190b9bcd0652472727c25eaebdae', 'width': 320}, {'height': 374, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?width=640&crop=smart&format=pjpg&auto=webp&s=2bc966abd8bfb9473eb6b0c25af72258703657e2', 'width': 640}, {'height': 561, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?width=960&crop=smart&format=pjpg&auto=webp&s=4811a985457f96c92a3cf5ee957ced271f5149a6', 'width': 960}, {'height': 631, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4aa25c3ccb44cc1b2413cdbf982665eaf00eab16', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?format=pjpg&auto=webp&s=468b4237d19c2081cb3b2028b9bccff445df20cf', 'width': 1848}, 'variants': {}}]}
Does the Star Trek universe mostly do "vibe coding"?
1
[removed]
2025-05-19T17:12:40
https://www.reddit.com/r/LocalLLaMA/comments/1kqh7rz/does_star_trek_universe_do_mostly_vibe_coding/
derekp7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqh7rz
false
null
t3_1kqh7rz
/r/LocalLLaMA/comments/1kqh7rz/does_star_trek_universe_do_mostly_vibe_coding/
false
false
self
1
null
Local LLMs show-down: More than 20 LLMs and one single Prompt
6
I became really curious about how far I could push LLMs and asked GPT-4o to help me craft a prompt that would make the models work really hard. Then I ran the same prompt through a selection of LLMs on my hardware along with a few commercial models for reference. You can read the results on my blog [https://blog.kekepower.com/blog/2025/may/19/the\_2025\_polymath\_llm\_show-down\_how\_twenty%E2%80%91two\_models\_fared\_under\_a\_single\_grueling\_prompt.html](https://blog.kekepower.com/blog/2025/may/19/the_2025_polymath_llm_show-down_how_twenty%E2%80%91two_models_fared_under_a_single_grueling_prompt.html)
2025-05-19T17:15:55
https://www.reddit.com/r/LocalLLaMA/comments/1kqharr/local_llms_showdown_more_than_20_llms_and_one/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqharr
false
null
t3_1kqharr
/r/LocalLLaMA/comments/1kqharr/local_llms_showdown_more_than_20_llms_and_one/
false
false
self
6
null
A Machine Gun in the Land of Sticks and Stones
1
[removed]
2025-05-19T17:17:33
https://www.reddit.com/r/LocalLLaMA/comments/1kqhc9w/a_machine_gun_in_the_land_of_sticks_and_stones/
_redacted-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqhc9w
false
null
t3_1kqhc9w
/r/LocalLLaMA/comments/1kqhc9w/a_machine_gun_in_the_land_of_sticks_and_stones/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JdBt9k1bXwExyyrZ-OhRp27TypSYkF5YaPpUMhEpsXw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=108&crop=smart&auto=webp&s=082269d9fc14ff59a612334f36b23e4ff8fc75a8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=216&crop=smart&auto=webp&s=b256d7fa2813198785d156ced6e23472dc63b9e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=320&crop=smart&auto=webp&s=38da009cb71ec78f99c30faef7362482ea11ba42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=640&crop=smart&auto=webp&s=23e4201586d6b72b2184a0bf0d073d8705488f78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=960&crop=smart&auto=webp&s=3e4b8bb4123755d4d4678f1855c3c0528bbbf2b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=1080&crop=smart&auto=webp&s=09f7752265c4001a02b35a9593caa3ef4a94b1a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?auto=webp&s=63e9eca4290af750211c1cce276a83760ee124c6', 'width': 1200}, 'variants': {}}]}
THIS is the Most Important GPU of 2025
0
More details on the B60, the dual B60 with 48GB, software support, and pricing
2025-05-19T17:23:29
https://youtu.be/vZupIBqKHqM?si=dWxSNH2jzO-1qCLC
FullstackSensei
youtu.be
1970-01-01T00:00:00
0
{}
1kqhhp8
false
{'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vZupIBqKHqM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="THIS is the Most Important GPU of 2025"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/vZupIBqKHqM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'THIS is the Most Important GPU of 2025', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kqhhp8
/r/LocalLLaMA/comments/1kqhhp8/this_is_the_most_important_gpu_of_2025/
false
false
https://b.thumbs.redditm…i7MqDH6BLgKw.jpg
0
{'enabled': False, 'images': [{'id': 'hqX1XJPzsut6Pu5b5L2iprNjn5AigAFUIBDoGZv-orY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/avbb0bdxIyVexvXRXN65rlN0Aut6hfd4goOHVixsfP8.jpg?width=108&crop=smart&auto=webp&s=cb692f0197a11b2f180845081cc6c2a0d8836b46', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/avbb0bdxIyVexvXRXN65rlN0Aut6hfd4goOHVixsfP8.jpg?width=216&crop=smart&auto=webp&s=c6aa2472af8698372b2386e43990ce9fc19ae9e5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/avbb0bdxIyVexvXRXN65rlN0Aut6hfd4goOHVixsfP8.jpg?width=320&crop=smart&auto=webp&s=10217b99f10eb4ba908bacb6fb38a5c96ba3d893', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/avbb0bdxIyVexvXRXN65rlN0Aut6hfd4goOHVixsfP8.jpg?auto=webp&s=00199d6cfeff580454728ade86331cf8339a9e57', 'width': 480}, 'variants': {}}]}
VS Code: Open Source Copilot
237
What do you think of this move by Microsoft? Is it just me, or are the possibilities endless? We can build customizable IDEs with an entire company’s tech stack by integrating MCPs on top, without having to build everything from scratch.
2025-05-19T17:27:31
https://code.visualstudio.com/blogs/2025/05/19/openSourceAIEditor
DonTizi
code.visualstudio.com
1970-01-01T00:00:00
0
{}
1kqhljr
false
null
t3_1kqhljr
/r/LocalLLaMA/comments/1kqhljr/vs_code_open_source_copilot/
false
false
https://b.thumbs.redditm…np9p5OOeHd9I.jpg
237
{'enabled': False, 'images': [{'id': 'gI5UNbMliL5WbCNvXlrvhhJCFfPhXA7cvuQQB4dfGDg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?width=108&crop=smart&auto=webp&s=d3f7da257b92799305872c9c552e30115cbf8f02', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?width=216&crop=smart&auto=webp&s=f3615e548731e6a0295c13d5f29a8158d407ef08', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?width=320&crop=smart&auto=webp&s=ac57c398076bf831bb3f3e3432a263d1c3d76f8c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?width=640&crop=smart&auto=webp&s=717dd6dca05edfe21ffc3b5167abc2a06a881f81', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?width=960&crop=smart&auto=webp&s=f7686727c919bc6b72fe73811cdeb83401c02524', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?width=1080&crop=smart&auto=webp&s=2da074391e51e1d5db3d69c39efb5cdb172b76c7', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?auto=webp&s=87fc0fcc9e7907bc9457c32cdc9f74b47626415c', 'width': 1280}, 'variants': {}}]}
Microsoft On-Device AI Local Foundry (Windows & Mac)
30
2025-05-19T17:46:57
https://devblogs.microsoft.com/foundry/unlock-instant-on-device-ai-with-foundry-local/
AngryBirdenator
devblogs.microsoft.com
1970-01-01T00:00:00
0
{}
1kqi3m0
false
null
t3_1kqi3m0
/r/LocalLLaMA/comments/1kqi3m0/microsoft_ondevice_ai_local_foundry_windows_mac/
false
false
default
30
{'enabled': False, 'images': [{'id': 'GXOYCqFmllQ_joNo6QYadF6Vo4ZZQrLzaZTuxX3dRUE', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?width=108&crop=smart&auto=webp&s=e7cfd488b1326c8819d404cff7c0d6d95cbafacb', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?width=216&crop=smart&auto=webp&s=20cf7a555ad70307526390fb02fe60aaf22635b7', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?width=320&crop=smart&auto=webp&s=42b29f920be36463bf430c01d65b819f0536e6fa', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?width=640&crop=smart&auto=webp&s=9fdb1d3bcf87ad29b196536b22c8576e153d5560', 'width': 640}, {'height': 549, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?width=960&crop=smart&auto=webp&s=70ec3948e7ca0edab73c1917fe26817dbc2bc57b', 'width': 960}, {'height': 617, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?width=1080&crop=smart&auto=webp&s=8c3a115183f30bda12b933e4c8d1d13299368c61', 'width': 1080}], 'source': {'height': 1430, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?auto=webp&s=f5f12a22eb3d47041d91dd0b6f4ec725b6959ef3', 'width': 2500}, 'variants': {}}]}
Low-Cost GPU Hosting for AI Models & Apps
1
[removed]
2025-05-19T17:54:03
https://www.reddit.com/r/LocalLLaMA/comments/1kqiaam/lowcost_gpu_hosting_for_ai_models_apps/
PrettyRevolution1842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqiaam
false
null
t3_1kqiaam
/r/LocalLLaMA/comments/1kqiaam/lowcost_gpu_hosting_for_ai_models_apps/
false
false
self
1
null
Global Agent Hackathon by Agno is live! ($25k in total prizes)
1
[removed]
2025-05-19T18:15:37
https://www.reddit.com/r/LocalLLaMA/comments/1kqiuog/global_agent_hackathon_by_agno_is_live_25k_in/
superconductiveKyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqiuog
false
null
t3_1kqiuog
/r/LocalLLaMA/comments/1kqiuog/global_agent_hackathon_by_agno_is_live_25k_in/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gja_zmWrbMqADHtjzbX5Ke-UtNyuew-F59tTPxLmBDY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=108&crop=smart&auto=webp&s=50b28015ec3b4747548ddb6053f1738999bc7d2c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=216&crop=smart&auto=webp&s=a2d62c1493205505a8e9379b08dc7212554df891', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=320&crop=smart&auto=webp&s=2aac9090fb3d84f150b12292c86aa50565701cce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=640&crop=smart&auto=webp&s=af8458e69096bc82137ed14415c3129ae9f6faaa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=960&crop=smart&auto=webp&s=e7f9e8e23c8b0e55b4a031e166f4f9598a6f9ac7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=1080&crop=smart&auto=webp&s=7cac9eaede89081662b6cfe7c03716964de5d083', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?auto=webp&s=90823454b5123a15303e48e1500dfd4440f61f04', 'width': 1200}, 'variants': {}}]}
Evaluating the best models at translating German - open models beat DeepL!
47
2025-05-19T18:17:55
https://nuenki.app/blog/best_language_models_for_german_translation
Nuenki
nuenki.app
1970-01-01T00:00:00
0
{}
1kqiwu2
false
null
t3_1kqiwu2
/r/LocalLLaMA/comments/1kqiwu2/evaluating_the_best_models_at_translating_german/
false
false
https://b.thumbs.redditm…Qqp2N0tML6Fg.jpg
47
{'enabled': False, 'images': [{'id': 'sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=108&crop=smart&auto=webp&s=79a054dd227c6f5432f86d0aad2f733d56deb387', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=216&crop=smart&auto=webp&s=36d6fa0f550c1aa87b8842476a42ab5e7983d775', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=320&crop=smart&auto=webp&s=a9e59b0b9832d1d263060216c6712ab86736cf73', 'width': 320}, {'height': 351, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=640&crop=smart&auto=webp&s=33bb3dd09e1348f194cfb304ced2dd662da82a0f', 'width': 640}, {'height': 527, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=960&crop=smart&auto=webp&s=b61c5111ebbc99dd0da8775eb45acd9ee039349d', 'width': 960}, {'height': 593, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=1080&crop=smart&auto=webp&s=c4871bfcc51572f134a18d7c42ca6e7ba566fac5', 'width': 1080}], 'source': {'height': 2096, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?auto=webp&s=068fb20ca78df0694ec410b05a2982f47c0ae5d0', 'width': 3811}, 'variants': {}}]}
Global Agent Hackathon by Agno is live!
1
[removed]
2025-05-19T18:18:28
https://www.reddit.com/r/LocalLLaMA/comments/1kqixb3/global_agent_hackathon_by_agno_is_live/
superconductiveKyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqixb3
false
null
t3_1kqixb3
/r/LocalLLaMA/comments/1kqixb3/global_agent_hackathon_by_agno_is_live/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gja_zmWrbMqADHtjzbX5Ke-UtNyuew-F59tTPxLmBDY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=108&crop=smart&auto=webp&s=50b28015ec3b4747548ddb6053f1738999bc7d2c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=216&crop=smart&auto=webp&s=a2d62c1493205505a8e9379b08dc7212554df891', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=320&crop=smart&auto=webp&s=2aac9090fb3d84f150b12292c86aa50565701cce', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=640&crop=smart&auto=webp&s=af8458e69096bc82137ed14415c3129ae9f6faaa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=960&crop=smart&auto=webp&s=e7f9e8e23c8b0e55b4a031e166f4f9598a6f9ac7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=1080&crop=smart&auto=webp&s=7cac9eaede89081662b6cfe7c03716964de5d083', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?auto=webp&s=90823454b5123a15303e48e1500dfd4440f61f04', 'width': 1200}, 'variants': {}}]}
Looking for an 8B-param model to run with my data set for an AI personal assistant
2
I want to train an open-source LLM on my own data (already cleaned it and have everything in order). I want to run one version in the cloud and one version on my own computer. What is the best current open-source model to use?
2025-05-19T18:51:22
https://www.reddit.com/r/LocalLLaMA/comments/1kqjrwi/looking_for_a_8b_param_to_run_with_my_data_set/
jinstronda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqjrwi
false
null
t3_1kqjrwi
/r/LocalLLaMA/comments/1kqjrwi/looking_for_a_8b_param_to_run_with_my_data_set/
false
false
self
2
null
CoT stress question 🥵
1
Test your CoT LLM with this question. Enjoy!

Imagine a perfectly spherical, frictionless planet entirely covered in a uniform layer of perfectly incompressible water. If a single drop of the same water is gently placed on the surface of this planet, describe in detail what will happen immediately and over time, considering all relevant physical principles. Explain your reasoning step-by-step.
2025-05-19T19:06:20
https://www.reddit.com/r/LocalLLaMA/comments/1kqk61c/cot_stress_question/
Illustrious-Dot-6888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqk61c
false
null
t3_1kqk61c
/r/LocalLLaMA/comments/1kqk61c/cot_stress_question/
false
false
self
1
null
Be confident in your own judgement and reject benchmark JPEGs
153
2025-05-19T19:18:50
https://i.redd.it/1wtj3q6ngs1f1.jpeg
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1kqkhhy
false
null
t3_1kqkhhy
/r/LocalLLaMA/comments/1kqkhhy/be_confident_in_your_own_judgement_and_reject/
false
false
https://b.thumbs.redditm…_0hFvbFtVy-I.jpg
153
{'enabled': True, 'images': [{'id': '6A5BTmsryZQPa86Z2YPhIomn-HK78IYNxudaudPaWTo', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.jpeg?width=108&crop=smart&auto=webp&s=ccb4e8acb070a5b80b5e80743c1fdbc4d71fe9ba', 'width': 108}, {'height': 325, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.jpeg?width=216&crop=smart&auto=webp&s=16e4d1fd78f9a106c26daee59792ecf2b2aab681', 'width': 216}, {'height': 482, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.jpeg?width=320&crop=smart&auto=webp&s=866fd40b0a9427bdcc0f939dd54caf7a9e78856b', 'width': 320}, {'height': 965, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.jpeg?width=640&crop=smart&auto=webp&s=9a71e631166bd010fd1e72d10e1ef80ceda179b6', 'width': 640}, {'height': 1448, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.jpeg?width=960&crop=smart&auto=webp&s=54a4564518b9e1a7d68c28f4a192b97aa8d97ecf', 'width': 960}], 'source': {'height': 1516, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.jpeg?auto=webp&s=718cc86d8f7a789b5df91fdfa1484e207e905a4d', 'width': 1005}, 'variants': {}}]}
Has anyone here used a modded 22gb Rtx 2080 ti
2
I saw that you can buy these on eBay for about $500.
2025-05-19T19:51:10
https://www.reddit.com/r/LocalLLaMA/comments/1kqlbdz/has_anyone_here_used_a_modded_22gb_rtx_2080_ti/
Responsible-Bad5572
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqlbdz
false
null
t3_1kqlbdz
/r/LocalLLaMA/comments/1kqlbdz/has_anyone_here_used_a_modded_22gb_rtx_2080_ti/
false
false
self
2
null
Dell Unveils The Integration of NVIDIA’s GB300 “Blackwell Ultra” GPUs With Its AI Factories, Taking Performance & Scalability to New Levels
0
2025-05-19T19:52:59
https://wccftech.com/dell-unveils-the-integration-of-nvidia-blackwell-ultra-gpus-with-its-ai-factories/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1kqld0g
false
null
t3_1kqld0g
/r/LocalLLaMA/comments/1kqld0g/dell_unveils_the_integration_of_nvidias_gb300/
false
false
https://b.thumbs.redditm…z5Wf1AXmhWcY.jpg
0
{'enabled': False, 'images': [{'id': 'I2Q5ilSW15uBQAphbtRAAaiOKe9DpBubFURvebdnZDE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?width=108&crop=smart&auto=webp&s=d5f86bbf1fdb9048f714a9de4202c303e07bd898', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?width=216&crop=smart&auto=webp&s=230bea685565fad73905045646db4ee4abb5bf62', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?width=320&crop=smart&auto=webp&s=f7acdf764955dc6a07d5fafe6ed5f620b8462069', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?width=640&crop=smart&auto=webp&s=34b9f695f9e890862f09bb39514bc38ebe47b6ad', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?width=960&crop=smart&auto=webp&s=7b2e77651e6229d16ebdfb57dd847d4d8e548aca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?width=1080&crop=smart&auto=webp&s=e34de8da8827a9c05c1eb668b68aa760702ecbe5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?auto=webp&s=429dfe420e878f73273dfd99f590aa98aae82aa7', 'width': 1200}, 'variants': {}}]}
👀 Microsoft just created an MCP Registry for Windows
261
2025-05-19T20:12:32
https://i.redd.it/6lwf9y6eqs1f1.png
eternviking
i.redd.it
1970-01-01T00:00:00
0
{}
1kqluy9
false
null
t3_1kqluy9
/r/LocalLLaMA/comments/1kqluy9/microsoft_just_created_an_mcp_registry_for_windows/
false
false
https://b.thumbs.redditm…FPDzYNB90kwY.jpg
261
{'enabled': True, 'images': [{'id': '-JYFo0kmQAmGOQ9kBV2QFjgMdtV4ZTZ4Cz6FHWcvAdM', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?width=108&crop=smart&auto=webp&s=f19758a78d1b93823d64f553e78020268022aa38', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?width=216&crop=smart&auto=webp&s=e00061ddd512ad8fc918cf69723895c8cef09222', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?width=320&crop=smart&auto=webp&s=4c99a815baba85bf825d73001790b2019ca1c056', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?width=640&crop=smart&auto=webp&s=3580c11b0dace946cb1f140d2732484fdb0916e4', 'width': 640}, {'height': 517, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?width=960&crop=smart&auto=webp&s=1b8190c3bc3e588f283a8bbe50e62ec04406da31', 'width': 960}, {'height': 582, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?width=1080&crop=smart&auto=webp&s=63ec4ccb196535f1f465108291a132c250d7fc35', 'width': 1080}], 'source': {'height': 1552, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?auto=webp&s=7dae8558b60e2f9711cf4bce43b49f733526f1f0', 'width': 2878}, 'variants': {}}]}
Local LLM based chatbot for habit datapoint storage /recall
1
[removed]
2025-05-19T20:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1kqmesj/local_llm_based_chatbot_for_habit_datapoint/
Altruistic-Finger-12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqmesj
false
null
t3_1kqmesj
/r/LocalLLaMA/comments/1kqmesj/local_llm_based_chatbot_for_habit_datapoint/
false
false
self
1
null
ELO Score in Chatbot Arena Over Time (Graph + Data)
0
Hey everyone! I've been trying for a while to find data to create a **graph showing the ELO evolution of LLM models in Chatbot Arena over time**. But since LMSYS doesn't publish when each model was added, I decided to take matters into my own hands and manually create a **dataset that tracks the release date of each model along with its ELO score and parameter size**. (I tried using DeepResearch, but it kept making up dates, so I went old-school.)

Here's the graph: [ELO Score in Chatbot Arena Over Time](https://preview.redd.it/czcasz2xys1f1.png?width=2914&format=png&auto=webp&s=0ca7b0e938e91e145ae16d87fb739d8dd164a225)

**Notes**:

1. As a general rule, for the "initial public release" date I used the official date when the model was announced as publicly available. When this date wasn't clear, I fell back on the earliest source (blog post, tweet, GitHub, HuggingFace commit, etc.). I also included a field for the model's parameters (known, estimated, or n/a).

2. I might've gotten some dates wrong, so I'm sharing the raw data for you to check and correct if needed => [Link to data](https://docs.google.com/spreadsheets/d/e/2PACX-1vSt1XzEjyOGyq_wXlOYBkFPs_d717Jv4ycLwT24VRvDJWuv34TUH4c8cp-7bfveMDXwEgWJ0Xd_neQs/pub?output=csv).

Let me know what you think or if you spot any mistakes!
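For anyone wanting to re-plot this, a minimal sketch that loads the shared CSV; the column names below are guesses, so adjust them to whatever the sheet actually uses.

```python
# Hedged sketch: plot ELO vs. release date from the linked spreadsheet.
import pandas as pd
import matplotlib.pyplot as plt

CSV = ("https://docs.google.com/spreadsheets/d/e/2PACX-1vSt1XzEjyOGyq_wXlOYBkFPs"
       "_d717Jv4ycLwT24VRvDJWuv34TUH4c8cp-7bfveMDXwEgWJ0Xd_neQs/pub?output=csv")

df = pd.read_csv(CSV)
df["release_date"] = pd.to_datetime(df["release_date"])  # assumed column name

plt.scatter(df["release_date"], df["elo"])  # "elo" is also an assumed name
plt.xlabel("Initial public release")
plt.ylabel("Chatbot Arena ELO")
plt.title("ELO score in Chatbot Arena over time")
plt.show()
```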
2025-05-19T21:02:22
https://www.reddit.com/r/LocalLLaMA/comments/1kqn4mm/elo_score_in_chatbot_arena_over_time_graph_data/
coconautico
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqn4mm
false
null
t3_1kqn4mm
/r/LocalLLaMA/comments/1kqn4mm/elo_score_in_chatbot_arena_over_time_graph_data/
false
false
https://b.thumbs.redditm…wmPkItBKVf3s.jpg
0
null
A person can dream
0
512-bit bus x 4 GB GDDR7 chips (32 Gbps) x dual PCB x clamshell = bandwidth: 2048 GB/s, memory: 256 GB. If we can get that for less than $2,999 before 2027...
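Working the arithmetic out, under the post's stated assumptions (32 Gbps GDDR7 pins on a 512-bit bus, 4 GB per chip, clamshell and a second PCB each doubling chip count):

```python
# Sanity check of the numbers in the post above.
bus_bits, pin_gbps = 512, 32
bandwidth_GBps = bus_bits * pin_gbps / 8   # 512 * 32 / 8 = 2048 GB/s per PCB
chips = (bus_bits // 32) * 2 * 2           # 16 chips, x2 clamshell, x2 dual PCB
memory_GB = chips * 4                      # 64 chips * 4 GB = 256 GB
print(bandwidth_GBps, memory_GB)           # 2048.0 256
```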
2025-05-19T21:24:35
https://www.reddit.com/r/LocalLLaMA/comments/1kqnofx/a_person_can_dream/
Optifnolinalgebdirec
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqnofx
false
null
t3_1kqnofx
/r/LocalLLaMA/comments/1kqnofx/a_person_can_dream/
false
false
self
0
null
DiffusionBee v2.12 — Flux.1, Textual Inversion, NSFW Blocking & More (Mac-only)
5
We just shipped a new DiffusionBee update for all the Mac-wielding degenerates and offline-creatives in the room. (If you're not on arm64 or at least macOS 13, go touch some grass and come back later.)

**🆕 What's New:**

* Flux.1 model support (arm64, macOS 13+ only): finally, yes, you can run Flux.1 natively. Scroll to the bottom of the app home screen and you'll see the magic button.
* External Textual Inversion embeddings: got custom styles, LoRAs, weird txt2img hacks? Plug your own TI embeddings in. No gatekeeping.
* NSFW image blocker: accidentally type "hotdog" and generate the wrong kind of sausage? Block NSFW output with one click.
* Models page is not a dumpster fire anymore: organize, find, and manage your models without wanting to uninstall your OS.
* Misc bugfixes: as always, we stomped some weird Mac bugs. If something is still broken, roast us here.

**Why bother?** Honestly, because running local SD should not feel like assembling IKEA furniture in the dark. We're still MIT-licensed, 100% local, and open-source, so if you break something, you can probably fix it. No API keys. No telemetry. No "Pro" upgrade screens.

Would love some savage feedback, roast sessions, or wild feature requests (especially if you try Flux.1). Try it out here: [https://github.com/divamgupta/diffusionbee-stable-diffusion-ui/releases/tag/2.5.3](https://github.com/divamgupta/diffusionbee-stable-diffusion-ui/releases/tag/2.5.3)
2025-05-19T21:33:56
https://www.reddit.com/r/LocalLLaMA/comments/1kqnwrv/diffusionbee_v212_flux1_textual_inversion_nsfw/
lostbutyoucanfollow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqnwrv
false
null
t3_1kqnwrv
/r/LocalLLaMA/comments/1kqnwrv/diffusionbee_v212_flux1_textual_inversion_nsfw/
false
false
nsfw
5
{'enabled': False, 'images': [{'id': 'OD4XEbuRyY-yd21Goppk_3oXtEuwpye_LNo55Lmulrg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=108&crop=smart&auto=webp&s=30e34ad1a1780dca3c76d5f60fcfc622b85eb1c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=216&crop=smart&auto=webp&s=6a65768183cbcd36b6089579f1e1be582e8a564c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=320&crop=smart&auto=webp&s=f9ac8eaa78da58b8e3879895f4b0e498acf3afa9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=640&crop=smart&auto=webp&s=f76ceba16c90dbf71955c59a606772d02bd2e638', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=960&crop=smart&auto=webp&s=1694a9ef0580bcd5c5484ea9eaf40a7358a7b6c9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=1080&crop=smart&auto=webp&s=75aabe408f1b7ebfe08759d183417bdf32c5bec7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?auto=webp&s=82f6c6ff21e6d4af6b7201990e42914b0a239575', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=cd948fcb461d4a2bb7405ec011b1620e09bdc447', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=659d8a3a2061efc6820bdbf3679ec1da77d6717d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=482456ae2f3dbcaaae7d261a0a0b5a13c891d81e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=643ab704bac89817cae594e2ba2db191690ac023', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=1727dbc25d24e82727f1a63d4df2cc06f3c54c99', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=122bcf19cba8675a7f473cc70e2aebfe4aed7b9b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?blur=40&format=pjpg&auto=webp&s=b3181ff15e7b0b72356ec14d74aca8dfd4364379', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=cd948fcb461d4a2bb7405ec011b1620e09bdc447', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=659d8a3a2061efc6820bdbf3679ec1da77d6717d', 'width': 216}, {'height': 160, 'url': 
'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=482456ae2f3dbcaaae7d261a0a0b5a13c891d81e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=643ab704bac89817cae594e2ba2db191690ac023', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=1727dbc25d24e82727f1a63d4df2cc06f3c54c99', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=122bcf19cba8675a7f473cc70e2aebfe4aed7b9b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?blur=40&format=pjpg&auto=webp&s=b3181ff15e7b0b72356ec14d74aca8dfd4364379', 'width': 1200}}}}]}
Terminal-Bench: A new benchmark for AI agents in the terminal
1
[removed]
2025-05-19T22:03:07
http://tbench.ai
ombedrizoobo
tbench.ai
1970-01-01T00:00:00
0
{}
1kqoltu
false
null
t3_1kqoltu
/r/LocalLLaMA/comments/1kqoltu/terminalbench_a_new_benchmark_for_ai_agents_in/
false
false
default
1
null
Best model to run on 8GB VRAM for coding?
3
I'd like to make use of my GeForce 1080 (8 GB VRAM) to assist me with coding (C, Python, numerical physics simulations, GUIs, and ESP32 programming). Is there any useful model that'd be worth running? I know I won't be running something cutting-edge, but I could do with some help. I can wait minutes for answers, so speed is not critical, but I would like it to be reasonably reliable.

The CPU would be an i5-8xxx with 16 GB of DDR4 RAM, but I can extend that up to 128 GB if need be. I also have a spare 750 Ti (2 GB VRAM), but I suppose it's not worth it... I'm OK to fiddle with llama.cpp.

Would investing in a 3060 16 GB drastically open perspectives? Thanks!
2025-05-19T22:20:40
https://www.reddit.com/r/LocalLLaMA/comments/1kqp0mm/best_model_to_run_on_8gb_vram_for_coding/
cosmoschtroumpf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqp0mm
false
null
t3_1kqp0mm
/r/LocalLLaMA/comments/1kqp0mm/best_model_to_run_on_8gb_vram_for_coding/
false
false
self
3
null
I added automatic language detection and text-to-speech response to AI Runner
9
2025-05-19T22:25:00
https://v.redd.it/of7p2tkzdt1f1
w00fl35
/r/LocalLLaMA/comments/1kqp46f/i_added_automatic_language_detection_and/
1970-01-01T00:00:00
0
{}
1kqp46f
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/of7p2tkzdt1f1/DASHPlaylist.mpd?a=1750415106%2CZjExZTU0NjhhNTE2OTFlNGRlNTNhYjRmMGQwYmZkYjgxYzQwYmU4MmY2M2IxOTc1ZTVlMzYwZmEzNmFlMWRlMg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/of7p2tkzdt1f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/of7p2tkzdt1f1/HLSPlaylist.m3u8?a=1750415106%2CNDE5MjUwOTU2MDFhOTkwZTFhYzA3MGJlNjczMjMxYjk2MjE1MjE2YjAxZWZkNzY4YWI3ZGZlOWFkOGZhNDc5NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/of7p2tkzdt1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kqp46f
/r/LocalLLaMA/comments/1kqp46f/i_added_automatic_language_detection_and/
false
false
https://external-preview…71786e7ca17d9e65
9
{'enabled': False, 'images': [{'id': 'bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?width=108&crop=smart&format=pjpg&auto=webp&s=038dd1b32fc9de3de3e90418c5f6269e2ee6342d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?width=216&crop=smart&format=pjpg&auto=webp&s=5315025f2ea28c9a9295e1c1e9e46eb863d5fb65', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?width=320&crop=smart&format=pjpg&auto=webp&s=dfc7bca6685ddd0a20fc5158506989e1b3e785b2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?width=640&crop=smart&format=pjpg&auto=webp&s=cdc537bc819356a05862a61d4a67f24258ea9717', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?width=960&crop=smart&format=pjpg&auto=webp&s=8dd5f354c86e21e915d4adb0379aa32774eaa165', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c9ce69f1b7285b81565d23b550909f742111d378', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?format=pjpg&auto=webp&s=22df453a0a7eca24edbd07b022f0eecba03c4a7e', 'width': 2560}, 'variants': {}}]}
What are your dream use-cases for a totally local AI rig?
1
[removed]
2025-05-19T22:25:49
https://www.reddit.com/r/LocalLLaMA/comments/1kqp4vk/what_are_your_dream_usecases_for_a_totally_local/
_redacted-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqp4vk
false
null
t3_1kqp4vk
/r/LocalLLaMA/comments/1kqp4vk/what_are_your_dream_usecases_for_a_totally_local/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JdBt9k1bXwExyyrZ-OhRp27TypSYkF5YaPpUMhEpsXw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=108&crop=smart&auto=webp&s=082269d9fc14ff59a612334f36b23e4ff8fc75a8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=216&crop=smart&auto=webp&s=b256d7fa2813198785d156ced6e23472dc63b9e1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=320&crop=smart&auto=webp&s=38da009cb71ec78f99c30faef7362482ea11ba42', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=640&crop=smart&auto=webp&s=23e4201586d6b72b2184a0bf0d073d8705488f78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=960&crop=smart&auto=webp&s=3e4b8bb4123755d4d4678f1855c3c0528bbbf2b7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=1080&crop=smart&auto=webp&s=09f7752265c4001a02b35a9593caa3ef4a94b1a2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?auto=webp&s=63e9eca4290af750211c1cce276a83760ee124c6', 'width': 1200}, 'variants': {}}]}
Building a Fully Local AI Rig on AMD Ryzen 9 — What Use-Cases Should I Focus On?
1
[removed]
2025-05-19T22:30:20
https://www.reddit.com/r/LocalLLaMA/comments/1kqp8m5/building_a_fully_local_ai_rig_on_amd_ryzen_9_what/
_redacted-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqp8m5
false
null
t3_1kqp8m5
/r/LocalLLaMA/comments/1kqp8m5/building_a_fully_local_ai_rig_on_amd_ryzen_9_what/
false
false
self
1
null
I got a Llama3.2b running on my pi and made it question its own existence endlessly
2
[removed]
2025-05-19T22:37:28
https://i.redd.it/3cesh685gt1f1.jpeg
Dull-Pressure9628
i.redd.it
1970-01-01T00:00:00
0
{}
1kqpeam
false
null
t3_1kqpeam
/r/LocalLLaMA/comments/1kqpeam/i_got_a_llama32b_running_on_my_pi_and_made_it/
false
false
https://b.thumbs.redditm…ez7F4JWB3wcI.jpg
2
{'enabled': True, 'images': [{'id': '4u_dBWKA-rbbAbIUAzsgsHe27q21v78hFKEjC4dLpjo', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?width=108&crop=smart&auto=webp&s=e9a60a4e3e704c6efc5945f9d83dde590b33bfa6', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?width=216&crop=smart&auto=webp&s=c0030211ad418b8be60822c98423cae29655e1e8', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?width=320&crop=smart&auto=webp&s=4a3d8631718970e0880aa0d4cd5a17ddaedb9600', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?width=640&crop=smart&auto=webp&s=8bcbc2917ecc332d384208eb2dda4dff8569a68c', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?width=960&crop=smart&auto=webp&s=59b019d87dc94c43598c48ed6ac77194c5e48909', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?width=1080&crop=smart&auto=webp&s=318c92843082a9a6ea0cb84ae940d750aba9f0ef', 'width': 1080}], 'source': {'height': 1637, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?auto=webp&s=fb8a9bf70ab0932e1b8f82b38501a2aa6de205ff', 'width': 2911}, 'variants': {}}]}
Demo of Sleep-time Compute to Reduce LLM Response Latency
76
This is a demo of Sleep-time compute to reduce LLM response latency.  Link: [https://github.com/ronantakizawa/sleeptimecompute](https://github.com/ronantakizawa/sleeptimecompute) Sleep-time compute improves LLM response latency by using the idle time between interactions to pre-process the context, allowing the model to think offline about potential questions before they’re even asked.  While regular LLM interactions process the context together with the prompt input, Sleep-time compute loads the context before the prompt arrives, so the LLM needs less time and compute to respond.  The demo shows an average of 6.4x fewer tokens per query and a 5.2x speedup in response time with Sleep-time compute.  The implementation is based on the original paper from Letta / UC Berkeley.
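To make the mechanism concrete, here is a minimal sketch of the idle-time/answer-time split (my own illustration, not the repo's code; `call_llm` is a hypothetical stand-in for any completion endpoint):

```python
# Minimal sketch of sleep-time compute (illustrative only; `call_llm` is a
# hypothetical stand-in for a real completion call, e.g. llama.cpp or an
# OpenAI-compatible API).
def call_llm(prompt: str) -> str:
    return f"<completion for {len(prompt)} chars of prompt>"

class SleepTimeAgent:
    def __init__(self, context: str):
        self.context = context
        self.notes = ""  # offline "thoughts" accumulated during idle time

    def sleep_step(self) -> None:
        # Runs between user interactions: pre-digest the long context so the
        # answer-time prompt can stay short.
        self.notes = call_llm(
            "Summarize the key facts and anticipate likely questions:\n"
            + self.context
        )

    def answer(self, question: str) -> str:
        # Answer-time prompt uses the precomputed notes instead of the raw
        # context, so far fewer tokens are processed per query.
        return call_llm(f"Notes:\n{self.notes}\n\nQuestion: {question}")

agent = SleepTimeAgent(context="...long document...")
agent.sleep_step()          # idle-time pre-processing
print(agent.answer("What is the main conclusion?"))
```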
2025-05-19T22:37:52
https://i.redd.it/h9iyy36cgt1f1.png
Ok_Employee_6418
i.redd.it
1970-01-01T00:00:00
0
{}
1kqpemo
false
null
t3_1kqpemo
/r/LocalLLaMA/comments/1kqpemo/demo_of_sleeptime_compute_to_reduce_llm_response/
false
false
https://b.thumbs.redditm…fLJLecYbJRgQ.jpg
76
{'enabled': True, 'images': [{'id': 'f7JQxFMobvGipTqhTxy0rdLPruEQ_EW0gbdTrF9XO14', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?width=108&crop=smart&auto=webp&s=e835c0cefdf2ae0861e036430274b7d97348a741', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?width=216&crop=smart&auto=webp&s=f733e649d41b225963606162440532bf1bc71523', 'width': 216}, {'height': 190, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?width=320&crop=smart&auto=webp&s=9d5a9a1edb03698cf92b0937bfa06a79f8c8fb99', 'width': 320}, {'height': 381, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?width=640&crop=smart&auto=webp&s=6500cd6c480df68fff0b2950464bdf67612a84b6', 'width': 640}, {'height': 572, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?width=960&crop=smart&auto=webp&s=9fd65e70671bb44e2a9c5170482f455d96163430', 'width': 960}, {'height': 644, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?width=1080&crop=smart&auto=webp&s=2116b7f59478f6dca74f9093e82bc77e16eee667', 'width': 1080}], 'source': {'height': 808, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?auto=webp&s=1a5bdc28641c0cfdc51d4d6c0aed851a959f6b03', 'width': 1354}, 'variants': {}}]}
[R] [Q] Why does RoPE need to be decoupled in DeepSeek V2/V3's MLA? I don't get why it prevents prefix key reuse
1
[removed]
2025-05-19T23:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1kqq7vr/r_q_why_does_rope_need_to_be_decoupled_in/
gerrickle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqq7vr
false
null
t3_1kqq7vr
/r/LocalLLaMA/comments/1kqq7vr/r_q_why_does_rope_need_to_be_decoupled_in/
false
false
self
1
null
Looking to collaborate with and get advice from a few experienced developers interested in AI augmented development
0
Hello!  I’m a software engineer that’s been developing applications and infrastructure automation systems for over 20 years and I love it, and I am fixing to start a project that is meant to enhance my productivity and coding happiness by developing an architecture for a development platform that can support groups of collaborative AI agents that help with some of the more tedious aspects of development, such as: \* QA and testing \* Documentation and knowledge base development \* Code and application optimization \* Reporting I use AI a lot for many different reasons.  I have no desire to eliminate engineers but to enhance the engineers productivity when they don’t have a large team, particularly for maintaining open source projects with limited resources.  I love programming and my goal with this project is to enhance my enjoyment of programming by allowing me to focus on those things I do best and love most.  I want to have an army of digital assistants that can help with the things I am not good at or really don’t like doing. My goal is to develop an open specification and other materials (not a proprietary service) that can be used by any individual developer to enhance their own process and build agents that collaborate to help the individual developer in unique ways.  Even if I work alone I plan to have an initial specification and a basic MVP of such a system by the end of next month for my own efforts, but to ensure I have a well thought out architecture that can evolve from the start I’d really like to collaborate with a few others that are experienced developing preferably larger projects and with AI (bonus points for training models and abstract syntax tree parsing).  While I want to implement this architecture myself, the goal behind the collaboration is to solidify an open architectural specification that can be adopted in many implementations. Interested? I’m a full stack engineer that mainly develops in Python and Javascript these days (my target languages for such a system).  I build multi-agent systems and love AI and training models, and my goal is to ultimately fine tune models on my own code and create a system that is easy for others as well, so that smaller AI models can be used and evolve with the code base.  I am fully committed to both creating an open architecture and an initial reference implementation. If you are an experienced software engineer that also wants to enhance your productivity and general enjoyment of your craft (as opposed to trying to replace engineers), you would like to see the development of more open architectures for AI systems and you believe you have ideas that could be useful in such an endeavor I’d love to talk and possibly collaborate with you.  You obviously also need to see value in AI augmentation in your own development efforts, and you need to believe in open source. NOTE: I am not talking about vibe coding, or developing a single engineer bot (Devin, etc…), nor am I referring to a code editor (Cursor, etc…), or even AI development tools like Aider, but an architecture and project process that could be fully built on open technology or integrate other services and tools like those above.  
There are and will be a lot of tools in development, but the question here is: what is the best process for building a virtual team around our engineering capabilities, one that can act concurrently and autonomously, so we can quickly release tested, documented, optimized code while focusing our effort on the areas where we can make the most impact in the least time? REMINDER: My goal with this project is to develop and release an open architecture for a multi-agent collaborative development platform over the coming month, not to create a proprietary service or sell anything, but to build something that can be leveraged by any developer regardless of their resources or associated organizations.  Anyone could use the results to create or refine businesses or expand their own engineering capabilities.
2025-05-19T23:16:25
https://www.reddit.com/r/LocalLLaMA/comments/1kqq8f6/looking_to_collaborate_with_and_get_advice_from_a/
awebb78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqq8f6
false
null
t3_1kqq8f6
/r/LocalLLaMA/comments/1kqq8f6/looking_to_collaborate_with_and_get_advice_from_a/
false
false
self
0
null
Using your local Models to run Agents! (Open Source, 100% local)
29
2025-05-19T23:18:17
https://v.redd.it/5p9moal9nt1f1
Roy3838
v.redd.it
1970-01-01T00:00:00
0
{}
1kqq9t9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5p9moal9nt1f1/DASHPlaylist.mpd?a=1750288714%2CMTdjMDQ1MjFkYTMwOTEzZGY1NTA2NDA3ZDgzYzdjMzhhNmU0MzNkYTFkZTNiYzUwNzgyN2I1MTVhNzY3NGE0MA%3D%3D&v=1&f=sd', 'duration': 88, 'fallback_url': 'https://v.redd.it/5p9moal9nt1f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5p9moal9nt1f1/HLSPlaylist.m3u8?a=1750288714%2CYmY3ZmUzOWJlNThlYWUyMDZlMWJjMmY2OGNiYzAxNmRiMmY5YTQ1ODIyNTYzNjIwM2MzMGQ2NWE4NzJhNDEwYg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5p9moal9nt1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kqq9t9
/r/LocalLLaMA/comments/1kqq9t9/using_your_local_models_to_run_agents_open_source/
false
false
https://external-preview…a63f9bf68adce4a5
29
{'enabled': False, 'images': [{'id': 'd2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?width=108&crop=smart&format=pjpg&auto=webp&s=3c31d0195427856551ff3b72917261e18d05373a', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?width=216&crop=smart&format=pjpg&auto=webp&s=734e4f51265ec5ece62aac9df8915c64bc93bee5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?width=320&crop=smart&format=pjpg&auto=webp&s=15120ece03b074cc38286bd94c8509c5bc94419e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?width=640&crop=smart&format=pjpg&auto=webp&s=b8f7fc191a6548809ef2957f2900cfdd3ad98793', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?width=960&crop=smart&format=pjpg&auto=webp&s=69599709e72c44697f4031a9fe651280fb39ac7d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c3d4c9eae9cbb7326a8682747ba5c3c11b53a44f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?format=pjpg&auto=webp&s=505ec18fbf06cccee7f4444784887ad658b5a10a', 'width': 1920}, 'variants': {}}]}
help with permissions
1
[removed]
2025-05-19T23:27:54
https://www.reddit.com/r/LocalLLaMA/comments/1kqqh4l/help_with_permissions/
fazetag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqqh4l
false
null
t3_1kqqh4l
/r/LocalLLaMA/comments/1kqqh4l/help_with_permissions/
false
false
self
1
null
I used Llama to apply to 10,000 software engineering jobs in 5 days.
66
No, I’m not desperate or unemployed. I just wanted to get a real feel for what the global tech job market looks like in 2025.

**Here’s the background on the profile I used:**

* Master’s degree in Computer Science from a top European university, graduated with honors
* 5 solid years at a FAANG company
* Fluent in English, open to relocating or remote work
* Strong skills in Python, React, AWS, and systems thinking

Basically, a pretty solid candidate. Nothing unusual, nothing risky.

**And I didn’t mass-blast a generic CV.** I used [**laboro.co**](http://laboro.co/) (Llama-based) to personalize all 10,000 CVs and cover letters based on each job description. Every application was custom-tailored using Laboro's AI agent and applied automatically.

Here are the results:

|Country|Applications|Human Interviews|AI Interviews|Assessments|Rejected|Ghosted|
|:-|:-|:-|:-|:-|:-|:-|
|USA|1,974|4|5|12|733|1,220|
|UK|1,449|2|4|12|480|951|
|Canada|989|2|3|7|365|612|
|Germany|914|1|3|3|330|577|
|France|795|1|2|6|240|546|
|India|727|0|2|4|290|431|
|Australia|611|0|2|3|220|386|
|Netherlands|516|0|2|3|198|313|
|Spain|404|0|1|2|166|235|
|Sweden|317|0|1|1|148|167|
|Remote|2,371|3|8|14|1,011|1,335|

**Definitions:**

* **Human interviews:** The rare moments when you actually get to speak with a recruiter or team member.
* **AI interviews:** Automated screenings with no human involvement.
* **Assessments:** Coding tests or challenges sent before any conversation.
* **Rejected:** Formal “no” responses.
* **Ghosted:** Nothing. Silence. The most common reply.

**The takeaway?** If you’re applying and hearing nothing back, it’s not you. You’re not broken or invisible. This is just how the hiring pipeline works in 2025: high noise, low signal. Even with a strong profile and fully personalized applications, most of the time you won’t even get a response. So be kind to yourself, and remember: the system is flooded, not your value.
2025-05-19T23:49:41
https://www.reddit.com/r/LocalLLaMA/comments/1kqqxbf/i_use_llama_to_apply_to_10000_software/
Elieroos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqqxbf
false
null
t3_1kqqxbf
/r/LocalLLaMA/comments/1kqqxbf/i_use_llama_to_apply_to_10000_software/
false
false
self
66
{'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=216&crop=smart&auto=webp&s=0bba062fe06cce12fc3d0c4cb2a0ea82abc7c266', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=320&crop=smart&auto=webp&s=3ad6582619e3a7c3baeb4b3bc407f87a187c2336', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=640&crop=smart&auto=webp&s=1b9a8da21d7a1b9b308c5828dbe6f6b7287068d6', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=960&crop=smart&auto=webp&s=196ba9362a8c5c81bc99f396e5c4bd3401667518', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=1080&crop=smart&auto=webp&s=f79588c44be17c9eae5cf5c5ccf4c0d9f77f0734', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?auto=webp&s=fa755a2de2b11728baa2d5e5dcd88171c0e5d4be', 'width': 1200}, 'variants': {}}]}
Reasoning Vision Language Model 12-24B?
2
I'm trying to find a reasoning vision language model in the 12-24B range, ideally 24B... but all I can find are models that do one or the other (reasoning or vision, not both).
2025-05-20T00:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1kqr4t6/reasoning_vision_language_model_1224b/
thejacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqr4t6
false
null
t3_1kqr4t6
/r/LocalLLaMA/comments/1kqr4t6/reasoning_vision_language_model_1224b/
false
false
self
2
null
Looking for a high quality chat-dataset to mix with my reasoning datasets for fine-tuning
4
I'm looking for some good chat-datasets that we could mix with our reasoning datasets for fine-tuning. Most of the ones I've seen on Hugging Face are very junky. Curious what others have found useful. Thanks!
2025-05-20T00:15:39
https://www.reddit.com/r/LocalLLaMA/comments/1kqrg9x/looking_for_a_high_quality_chatdataset_to_mix/
mutatedmonkeygenes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqrg9x
false
null
t3_1kqrg9x
/r/LocalLLaMA/comments/1kqrg9x/looking_for_a_high_quality_chatdataset_to_mix/
false
false
self
4
null
How to Generate AI Images Locally on AMD RX 9070XT with ComfyUI + ZLUDA ...
1
2025-05-20T00:40:12
https://youtube.com/watch?v=U76ku-7AFV0&si=_RH1JRqbUig1Xo6q
Willow-Most
youtube.com
1970-01-01T00:00:00
0
{}
1kqrxoa
false
{'oembed': {'author_name': 'TechChuckle', 'author_url': 'https://www.youtube.com/@TechChuckle', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/U76ku-7AFV0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How to Generate AI Images Locally on AMD RX 9070XT with ComfyUI + ZLUDA No NVIDIA Needed!"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/U76ku-7AFV0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'How to Generate AI Images Locally on AMD RX 9070XT with ComfyUI + ZLUDA No NVIDIA Needed!', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1kqrxoa
/r/LocalLLaMA/comments/1kqrxoa/how_to_generate_ai_images_locally_on_amd_rx/
false
false
https://b.thumbs.redditm…Akz9iJjGCVBA.jpg
1
{'enabled': False, 'images': [{'id': 'WJ8UKX7K8vfzsrR4TN_KtCaDmm7HOwlgJ77nJNFUxNU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EZB1aXpqO4QZ2FvNrgWoi8CUbNh6SLwX0x-K9OuAk8g.jpg?width=108&crop=smart&auto=webp&s=8c0cb41e8ec37eafb32762a98ab6102c714cc806', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/EZB1aXpqO4QZ2FvNrgWoi8CUbNh6SLwX0x-K9OuAk8g.jpg?width=216&crop=smart&auto=webp&s=186639cfb084bbd4918f6e516bb80e6b4f11f77c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/EZB1aXpqO4QZ2FvNrgWoi8CUbNh6SLwX0x-K9OuAk8g.jpg?width=320&crop=smart&auto=webp&s=43cc05cfe40842ac13d9869694afc44c0210eed7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/EZB1aXpqO4QZ2FvNrgWoi8CUbNh6SLwX0x-K9OuAk8g.jpg?auto=webp&s=a9cbc5c24a7c346968253db7018966ac15cec190', 'width': 480}, 'variants': {}}]}
Help with local 3D model?
1
[removed]
2025-05-20T01:23:33
https://www.reddit.com/r/LocalLLaMA/comments/1kqsskl/help_with_local_3d_model/
Feeling-Buy12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqsskl
false
null
t3_1kqsskl
/r/LocalLLaMA/comments/1kqsskl/help_with_local_3d_model/
false
false
self
1
null
Ultimate private AI suite
1
[removed]
2025-05-20T01:52:56
https://i.redd.it/0q9kuma3fu1f1.jpeg
ConstanceDover
i.redd.it
1970-01-01T00:00:00
0
{}
1kqtdea
false
null
t3_1kqtdea
/r/LocalLLaMA/comments/1kqtdea/ultimate_private_ai_suite/
false
false
https://b.thumbs.redditm…3jAp0FDMIFCc.jpg
1
{'enabled': True, 'images': [{'id': 'lNjmKa7fhF-LWp8MtrbyxSPEmSdG-bIUQkMdGEJTXFA', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?width=108&crop=smart&auto=webp&s=acd7ec5691d5f19e8d7627e432b26eb005f117fc', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?width=216&crop=smart&auto=webp&s=2d8c3b21e0ab25d4db1fabe701087daf7513aa51', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?width=320&crop=smart&auto=webp&s=4382f20a4701f2a9a9eee36d796e172482f6baf5', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?width=640&crop=smart&auto=webp&s=6fadea79797a3b4da1a1c686f3f41be6741f8886', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?width=960&crop=smart&auto=webp&s=64ad9b16fada431baff693ed9514329bb1ed01c4', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?width=1080&crop=smart&auto=webp&s=ced2a01ab489638f9859b56c3f084a2abc9fb234', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?auto=webp&s=885ac79980767fcf82fce4fe7f90d2a25747f8d9', 'width': 3024}, 'variants': {}}]}
A model recommendation for creative writing
1
Which one should I use for help with writing papers? I plan to run it locally on my RTX 4080 laptop for normal stuff like chatting or generating drafts. Will it be usable?
2025-05-20T02:34:12
https://www.reddit.com/r/LocalLLaMA/comments/1kqu6t9/a_model_recommendation_for_creative_writing/
MessageOk4432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqu6t9
false
null
t3_1kqu6t9
/r/LocalLLaMA/comments/1kqu6t9/a_model_recommendation_for_creative_writing/
false
false
self
1
null
Mindblowing demo: John Link led a team of AI agents to discover a forever-chemical-free immersion coolant using Microsoft Discovery.
393
2025-05-20T02:35:05
https://v.redd.it/9b7qevfimu1f1
cjsalva
v.redd.it
1970-01-01T00:00:00
0
{}
1kqu7dv
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/9b7qevfimu1f1/DASHPlaylist.mpd?a=1750300520%2CNWRkYWMwYmMwNTIwOWIyYzQyZjQzNGRiZjYzNzEyYmI0MTVkZWQzY2U2OTAxMWRmYWJlZjk2MzA2OWU1YTY2Ng%3D%3D&v=1&f=sd', 'duration': 127, 'fallback_url': 'https://v.redd.it/9b7qevfimu1f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/9b7qevfimu1f1/HLSPlaylist.m3u8?a=1750300520%2CODAwMjBjZjNmMTQ3OTEyOTIxNGI2YWVhMWY0MzBiYTZiMjRhYjY3ODAyOWRhNWIyZjg5ZDkxZTVhN2Q1MGZlNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/9b7qevfimu1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 960}}
t3_1kqu7dv
/r/LocalLLaMA/comments/1kqu7dv/mindblowing_demo_john_link_led_a_team_of_ai/
false
false
https://external-preview…3d620b7281f478cc
393
{'enabled': False, 'images': [{'id': 'dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?width=108&crop=smart&format=pjpg&auto=webp&s=d97b65e767ebea96c03d0128ab6b96c0fa6c73f7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?width=216&crop=smart&format=pjpg&auto=webp&s=6b7b93f7f3bd4c85ea76aa0ef7db29e53fda9d67', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?width=320&crop=smart&format=pjpg&auto=webp&s=9a3c919605671cbada7c38d9f4d122c7efc838fc', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?width=640&crop=smart&format=pjpg&auto=webp&s=0338c20ca01c11da1643c59c7fffbf2b56781efb', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?width=960&crop=smart&format=pjpg&auto=webp&s=b67f604fc8701925658cc3ec8f6c93ee2360ada7', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?width=1080&crop=smart&format=pjpg&auto=webp&s=28b5792da4c75f9bb11724ac0ff48d28817f378e', 'width': 1080}], 'source': {'height': 810, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?format=pjpg&auto=webp&s=c85e0cd3db12e7fec4beae68eb7ebe8e39af4313', 'width': 1080}, 'variants': {}}]}
SEED-GRPO: Semantic Entropy-Aware GRPO for Math Reasoning (56.7 AIME24 @ 7B)
1
[removed]
2025-05-20T02:59:30
https://www.reddit.com/r/LocalLLaMA/comments/1kqunwc/seedgrpo_semantic_entropyaware_grpo_for_math/
Competitive_Pilot_75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqunwc
false
null
t3_1kqunwc
/r/LocalLLaMA/comments/1kqunwc/seedgrpo_semantic_entropyaware_grpo_for_math/
false
false
self
1
{'enabled': False, 'images': [{'id': 'hN3y7EstbkCtTMI3t0I9W7fHqwn6_Yu7uckuVriG2YM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?width=108&crop=smart&auto=webp&s=fb0aa88064eb9c37f96e55831097d1860ae60b55', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?width=216&crop=smart&auto=webp&s=4d78daf762690219cb2219772fb7e178af01489b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?width=320&crop=smart&auto=webp&s=6cda22163698cba3becad6cd1d8f1997765ddb09', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?width=640&crop=smart&auto=webp&s=7f1b80228922873be683f03e295dd2c8819017b0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?width=960&crop=smart&auto=webp&s=b6ec865a7d6352aa9db7f196c423d033e1fcb9eb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?width=1080&crop=smart&auto=webp&s=42c80bebd1da6f29d72e1868a625dc6eb2e88e26', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?auto=webp&s=d57bd31ca5015b470506e98cd9fcb2d139ab9229', 'width': 1200}, 'variants': {}}]}
SmolChat - An Android App to run SLMs/LLMs locally, on-device is now available on Google Play
97
After nearly six months of development, SmolChat is now available on Google Play in 170+ countries and in two languages, English and simplified Chinese. SmolChat allows users to download LLMs and use them offline on their Android device, with a clean and easy-to-use interface. Users can group chats into folders, tune inference settings for each chat, add quick chat 'templates' to their home screen and browse models from Hugging Face. The project uses the famous llama.cpp runtime to execute models in the GGUF format. Deployment on Google Play gives the app much wider user coverage, as opposed to distributing an APK via GitHub Releases, which leans more towards technical folks. There are many features on the way - VLM and RAG support being the most important ones. The GitHub project has steadily gained 300 stars and 32 forks over a span of six months. Do install and use the app! Also, I need more contributors to the GitHub project for developing extensive documentation around the app. GitHub: [https://github.com/shubham0204/SmolChat-Android](https://github.com/shubham0204/SmolChat-Android)
2025-05-20T03:29:29
https://play.google.com/store/apps/details?id=io.shubham0204.smollmandroid&pcampaignid=web_share
shubham0204_dev
play.google.com
1970-01-01T00:00:00
0
{}
1kqv7lm
false
null
t3_1kqv7lm
/r/LocalLLaMA/comments/1kqv7lm/smolchat_an_android_app_to_run_slmsllms_locally/
false
false
https://b.thumbs.redditm…td8G86OmT0rA.jpg
97
{'enabled': False, 'images': [{'id': 'YAXsKRJUddjJfoP_69g7_D7TsVNUnG3fBGqZxNRW16M', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tUsrygHCoWdz2ebRdoSCY6YIEFIZ4gy4ejJadtdGwO4.jpg?width=108&crop=smart&auto=webp&s=ce5cb916591b157dde7cbe6a30b17ef5e7d83e96', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/tUsrygHCoWdz2ebRdoSCY6YIEFIZ4gy4ejJadtdGwO4.jpg?width=216&crop=smart&auto=webp&s=bce2e8b63fecba96b38a9b595d2a79413654eede', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/tUsrygHCoWdz2ebRdoSCY6YIEFIZ4gy4ejJadtdGwO4.jpg?width=320&crop=smart&auto=webp&s=933b927ffce2c8972d60e66875a4e4ecd3758176', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/tUsrygHCoWdz2ebRdoSCY6YIEFIZ4gy4ejJadtdGwO4.jpg?auto=webp&s=189cfd78bb63c61716ea0a3e512d4df0666f4d2c', 'width': 512}, 'variants': {}}]}
What AI is best for Chinese to English translation currently?
1
[removed]
2025-05-20T03:59:58
https://www.reddit.com/r/LocalLLaMA/comments/1kqvrbe/what_ai_is_best_for_chinese_to_english/
Civil_Candidate_824
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqvrbe
false
null
t3_1kqvrbe
/r/LocalLLaMA/comments/1kqvrbe/what_ai_is_best_for_chinese_to_english/
false
false
self
1
null
Speaking of the OpenAI Privacy Policy
1
[removed]
2025-05-20T04:01:01
https://www.reddit.com/r/LocalLLaMA/comments/1kqvs3x/speaking_of_the_openai_privacy_policy/
MrJaxendale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqvs3x
false
null
t3_1kqvs3x
/r/LocalLLaMA/comments/1kqvs3x/speaking_of_the_openai_privacy_policy/
false
false
self
1
null
8x 32GB V100 GPU server performance
1
[removed]
2025-05-20T04:10:38
https://www.reddit.com/r/LocalLLaMA/comments/1kqvyga/8x_32gb_v100_gpu_server_performance/
tfinch83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqvyga
false
null
t3_1kqvyga
/r/LocalLLaMA/comments/1kqvyga/8x_32gb_v100_gpu_server_performance/
false
false
self
1
null
What are currently the "best" solutions for Multimodal data extraction/ingestion available to us?
6
Doing some research on the topic and after a bunch of reading, I figured I'd just directly crowdsource the question. I'll aggregate the responses, do some additional research, and possibly some testing. Maybe I'll report back on my findings. The focus is specifically on document extraction.

Some notes and requirements:

* Using unstructured.io as a baseline (see the sketch below)
* Open source highly preferred, although it would be good to know if there's a proprietary solution that blows everything out of the water
* Although it would be nice, a single solution isn't necessary; it could be something specific to a particular document type, or a more complex pipeline
* English and Chinese (Chinese in particular can be difficult)
* Pretty much all document types (txt, images, graphs, tables, pdf, doc, ppt, etc.)
* Audio and video would be nice

Thanks in advance!
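For reference, here's a minimal sketch of the unstructured.io baseline mentioned above (assuming the open-source `unstructured` Python package; format-specific extras like `unstructured[pdf]` may be needed):

```python
# Minimal sketch of the unstructured.io baseline (assumes the open-source
# `unstructured` package is installed).
from unstructured.partition.auto import partition

elements = partition(filename="report.pdf")  # auto-detects the file type
for el in elements:
    # Each element carries a category (Title, NarrativeText, Table, ...)
    print(el.category, "->", el.text[:80])
```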
2025-05-20T04:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1kqw7vl/what_are_currently_the_best_solutions_for/
joomla00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqw7vl
false
null
t3_1kqw7vl
/r/LocalLLaMA/comments/1kqw7vl/what_are_currently_the_best_solutions_for/
false
false
self
6
null
Now that I converted my N64 to Linux, what is the best NSFW model to run on it?
403
I need the model in the 4.5MB range.
2025-05-20T04:29:28
https://www.reddit.com/r/LocalLLaMA/comments/1kqw9xn/now_that_i_converted_my_n64_to_linux_what_is_the/
DeepWisdomGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqw9xn
false
null
t3_1kqw9xn
/r/LocalLLaMA/comments/1kqw9xn/now_that_i_converted_my_n64_to_linux_what_is_the/
false
false
nsfw
403
null
Very slow inference using LLaVA with llama.cpp vs LM Studio
1
[removed]
2025-05-20T05:03:54
https://www.reddit.com/r/LocalLLaMA/comments/1kqwufs/very_slow_inference_using_llava_with_llamacpp_vs/
wayl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqwufs
false
null
t3_1kqwufs
/r/LocalLLaMA/comments/1kqwufs/very_slow_inference_using_llava_with_llamacpp_vs/
false
false
self
1
null
Best scale-to-zero fine-tuned qwen-2-5-32b-coder-instruct host?
5
I have tried Predibase and looked into some other providers, but I have been very frustrated trying to find a simple way to host a **qwen-2-5-32b-coder** (and/or **coder-instruct**) model that I can then incrementally fine-tune. I couldn't even get the model to load properly on Predibase, and I spent a few dollars just turning the endpoint on and off, even though it only showed errors and never returned a usable response.

**These are my needs:**

- Scale to zero (during testing phase)
- Production tier that scales to zero (or close to it) while still having extremely short cold starts
- BONUS: Easy fine-tuning from within the platform, but I'll likely be fine-tuning 32B models locally when my 5090 arrives, so this isn't absolutely required

Cheers in advance
2025-05-20T05:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1kqwzz7/best_scaletozero_finetuned_qwen2532bcoderinstruct/
Synapse709
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqwzz7
false
null
t3_1kqwzz7
/r/LocalLLaMA/comments/1kqwzz7/best_scaletozero_finetuned_qwen2532bcoderinstruct/
false
false
self
5
null
Wouldn't it be great to have benchmarks for code speed
0
I was thinking of a benchmark where the runtime of the code the LLM produces is measured. That could be very cool. I don't think anything like that exists at the moment.
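As a rough sketch of what such a harness could look like (my own illustration, not an existing benchmark; there is no sandboxing here, so only run code you trust):

```python
# Minimal sketch of timing LLM-generated code in a subprocess.
# WARNING: no sandboxing; run untrusted code only in an isolated environment.
import os
import subprocess
import sys
import tempfile
import time

def time_generated_code(code: str, timeout: float = 30.0) -> float:
    """Write the generated code to a temp file, run it, return wall time (s)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        t0 = time.perf_counter()
        subprocess.run([sys.executable, path], check=True, timeout=timeout)
        return time.perf_counter() - t0
    finally:
        os.unlink(path)

print(time_generated_code("print(sum(range(10**6)))"))
```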
2025-05-20T05:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1kqx44s/wouldnt_it_be_great_to_have_benchmarks_for_code/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqx44s
false
null
t3_1kqx44s
/r/LocalLLaMA/comments/1kqx44s/wouldnt_it_be_great_to_have_benchmarks_for_code/
false
false
self
0
null
Microsoft unveils “USB-C for AI apps.” I open-sourced the same concept 3 days earlier—proof inside.
356
• I released *llmbasedos* on 16 May.
• Microsoft showed an almost identical “USB-C for AI” pitch on 19 May.
• Same idea; mine is already running and Apache-2.0.

16 May 09:14 UTC – GitHub tag v0.1
16 May 14:27 UTC – Launch post on r/LocalLLaMA
19 May 16:00 UTC – Verge headline “Windows gets the USB-C of AI apps”

## What llmbasedos does today

• Boots from USB/VM in under a minute
• FastAPI gateway speaks JSON-RPC to tiny Python daemons
• 2-line cap.json → your script is callable by ChatGPT / Claude / VS Code
• Offline llama.cpp by default; flip a flag to GPT-4o or Claude 3
• Runs on Linux, Windows (VM), even Raspberry Pi

## Why I’m posting

Not shouting “theft”; just proving prior art and inviting collab so this stays truly open.

## Try or help

Code: see the link. USB image + quick-start docs coming this week. Pre-flashed sticks soon to fund development; feedback welcome!
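For readers unfamiliar with the gateway pattern described above, this is what a standard JSON-RPC 2.0 call to such a gateway looks like (the endpoint, port, and method name here are hypothetical placeholders, not llmbasedos's actual API):

```python
# Illustration of the JSON-RPC 2.0 envelope implied by "FastAPI gateway
# speaks JSON-RPC"; method name, path, and port are hypothetical.
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "fs.list",           # hypothetical capability name
    "params": {"path": "/data"},
    "id": 1,
}
resp = requests.post("http://localhost:8000/rpc", json=payload, timeout=10)
print(resp.json())  # e.g. {"jsonrpc": "2.0", "result": [...], "id": 1}
```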
2025-05-20T05:32:20
https://github.com/iluxu/llmbasedos
iluxu
github.com
1970-01-01T00:00:00
0
{}
1kqxa25
false
null
t3_1kqxa25
/r/LocalLLaMA/comments/1kqxa25/microsoft_unveils_usbc_for_ai_apps_i_opensourced/
false
false
https://a.thumbs.redditm…91rX-1gyJCw0.jpg
356
{'enabled': False, 'images': [{'id': 'UIMSzRR3wmdsdEEI9k_f63TZnSEiCtwUSUlkgRTvIuE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?width=108&crop=smart&auto=webp&s=de8a4224c8f6cc24af9471c2b55639ccc29a30db', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?width=216&crop=smart&auto=webp&s=46f0a41085802c5d10596ad9740f5ebb9a4b20f9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?width=320&crop=smart&auto=webp&s=2f89484d8720a903ac0aa74b4cac7950dfff45e7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?width=640&crop=smart&auto=webp&s=2167bf2f062636489b5eac5bdc773d33eb543d7f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?width=960&crop=smart&auto=webp&s=084d3f219069ab35eca1c6b096fa00ac2e32c726', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?width=1080&crop=smart&auto=webp&s=9facdf32a8b96c243f1085535db5e23493c7050b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?auto=webp&s=13a6f55b6ddfdfc8fa3e51d3e7f318622ae06199', 'width': 1200}, 'variants': {}}]}
I created a never-ending story generator running local LLMs on my desktop.
1
[removed]
2025-05-20T06:02:17
https://www.reddit.com/r/LocalLLaMA/comments/1kqxq78/i_created_a_neverending_story_generator_running/
Super-Action3298
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqxq78
false
null
t3_1kqxq78
/r/LocalLLaMA/comments/1kqxq78/i_created_a_neverending_story_generator_running/
false
false
self
1
{'enabled': False, 'images': [{'id': 'S409SasqYcx_GUZ9RTqjucd0yhcAT7--IBgD3U7cLa8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=108&crop=smart&auto=webp&s=e05a5b7d772c13aaa998d37df68da0287dbf6cd0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=216&crop=smart&auto=webp&s=40d960d65694ce6af79167522306557bde1bd29a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=320&crop=smart&auto=webp&s=30fb68859c75fb1907b32c241ba3ec8f202e7213', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?auto=webp&s=771d1f14f53af0d8517bfed232f9658da0762307', 'width': 480}, 'variants': {}}]}
I made a local Ollama LLM GUI for macOS.
23
Hey r/LocalLLaMA! 👋 I'm excited to share a macOS GUI I've been working on for running local LLMs, called macLlama! It's currently at version 1.0.3. macLlama aims to make using Ollama even easier, especially for those wanting a more visual and user-friendly experience. Here are the key features: * **Ollama Server Management:** Start your Ollama server directly from the app. * **Multimodal Model Support:** Easily provide image prompts for multimodal models like LLaVA. * **Chat-Style GUI:** Enjoy a clean and intuitive chat-style interface. * **Multi-Window Conversations:** Keep multiple conversations with different models active simultaneously. Easily switch between them in the GUI. This project is still in its early stages, and I'm really looking forward to hearing your suggestions and bug reports! Your feedback is invaluable. Thank you! 🙏 * You can find the latest release here: [https://github.com/hellotunamayo/macLlama/releases](https://github.com/hellotunamayo/macLlama/releases) * GitHub repository: [https://github.com/hellotunamayo/macLlama](https://github.com/hellotunamayo/macLlama)
2025-05-20T06:26:16
https://i.redd.it/j7vnr1ocrv1f1.png
gogimandoo
i.redd.it
1970-01-01T00:00:00
0
{}
1kqy2kc
false
null
t3_1kqy2kc
/r/LocalLLaMA/comments/1kqy2kc/i_made_local_ollama_llm_gui_for_macos/
false
false
https://b.thumbs.redditm…Ea-gmsM9TTes.jpg
23
{'enabled': True, 'images': [{'id': 'L6rTK68Etf_FpEniNcJBHz-OFVnvBDorVk-NvvmGG94', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?width=108&crop=smart&auto=webp&s=45357760805470f09b2b0d8eebbffde153464e7a', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?width=216&crop=smart&auto=webp&s=f3827250f8e22b765891a23997c6470bd20df84c', 'width': 216}, {'height': 327, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?width=320&crop=smart&auto=webp&s=365695b6bc97bd12f123e5eda10d83b591dbda41', 'width': 320}, {'height': 654, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?width=640&crop=smart&auto=webp&s=c277a06a9f47c88a2533680c6719581ff8d51904', 'width': 640}, {'height': 981, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?width=960&crop=smart&auto=webp&s=ccde4c16122ad7f3531c801b8979d58c6002c813', 'width': 960}, {'height': 1104, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?width=1080&crop=smart&auto=webp&s=75760e914c2611fac332b6b46551a748d7e95ef9', 'width': 1080}], 'source': {'height': 2552, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?auto=webp&s=88bca5ba491d3745d60bfdc5783d93afa8cd762e', 'width': 2496}, 'variants': {}}]}
Sliding Window Attention support merged into llama.cpp, dramatically reducing the memory requirements for running Gemma 3
514
2025-05-20T06:48:35
https://github.com/ggml-org/llama.cpp/pull/13194
-p-e-w-
github.com
1970-01-01T00:00:00
0
{}
1kqye2t
false
null
t3_1kqye2t
/r/LocalLLaMA/comments/1kqye2t/sliding_window_attention_support_merged_into/
false
false
https://a.thumbs.redditm…x3G4IEI0ohz0.jpg
514
{'enabled': False, 'images': [{'id': 'IxmwlAJl6oKhAixWuUbMh2U0Ae8m7JQDIGto5AkmHeY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?width=108&crop=smart&auto=webp&s=0a0716d4b5311e7bf8bcfd10a7d56cf206aea11d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?width=216&crop=smart&auto=webp&s=d13df62cb3d8d29f4f181ad1ecd2dc86451bfe3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?width=320&crop=smart&auto=webp&s=933439ab87a23bda953d5377f6f0749324763749', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?width=640&crop=smart&auto=webp&s=4f8e6a6a9f7f8cd4578b2ec231165e18a1067cfb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?width=960&crop=smart&auto=webp&s=4d838d61741df34feb1a2d28558fae10d82aca05', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?width=1080&crop=smart&auto=webp&s=f6ebe8f5b4fb8af9cea6e9055ff31ce0fbb8f7de', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?auto=webp&s=337711d0f16b5be8557d7739802066111d994123', 'width': 1200}, 'variants': {}}]}
Is there any company that provides pay-per-use GPU servers?
0
I am looking for companies that let you deploy things and only charge for the time you actually use them, just like AWS Lambda. I came across Replicate, but it seems a bit on the costly side. Any other alternatives?
2025-05-20T06:54:43
https://www.reddit.com/r/LocalLLaMA/comments/1kqyh1x/is_there_any_company_which_providers_pay_per_use/
DefiantScarcity3133
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqyh1x
false
null
t3_1kqyh1x
/r/LocalLLaMA/comments/1kqyh1x/is_there_any_company_which_providers_pay_per_use/
false
false
self
0
null
How fast can you serve a Qwen2 7B model on a single H100?
1
I am only getting about 4 requests per second (4 Hz) even with TRT-LLM acceleration, which seems slow to me. Is this expected?
2025-05-20T07:20:50
https://www.reddit.com/r/LocalLLaMA/comments/1kqyur8/how_fast_can_you_serve_a_qwen2_7b_model_on_single/
YeBigBear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqyur8
false
null
t3_1kqyur8
/r/LocalLLaMA/comments/1kqyur8/how_fast_can_you_serve_a_qwen2_7b_model_on_single/
false
false
self
1
null
AM5 motherboard for 2x RTX 5060 Ti 16 GB
6
Hello there, I've been looking for a couple of days already, with no success, for a motherboard that could support 2x RTX 5060 Ti 16 GB GPUs at maximum speed. The card is a PCIe 5.0 x8 GPU, but I am unsure whether it can take full advantage of that, or whether, for example, PCIe 4.0 x8 is enough. I would use them for running LLMs as well as for training and fine-tuning non-LLM models. I've been looking at the ProArt B650-CREATOR, which supports 2x PCIe 4.0 at x8 speed; would that be enough?
2025-05-20T07:28:47
https://www.reddit.com/r/LocalLLaMA/comments/1kqyypq/am5_motherboard_for_2x_rtx_5060_ti_16_gb/
cybran3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqyypq
false
null
t3_1kqyypq
/r/LocalLLaMA/comments/1kqyypq/am5_motherboard_for_2x_rtx_5060_ti_16_gb/
false
false
self
6
null
I trapped Llama 3.2 in an art installation and made it question its own existence endlessly
2
[removed]
2025-05-20T07:36:12
https://i.redd.it/5hxh6ql74w1f1.jpeg
Dull-Pressure9628
i.redd.it
1970-01-01T00:00:00
0
{}
1kqz2gw
false
null
t3_1kqz2gw
/r/LocalLLaMA/comments/1kqz2gw/i_trapped_llama32b_into_an_art_installation_and/
false
false
https://b.thumbs.redditm…9Ol9JWfRoaFg.jpg
2
{'enabled': True, 'images': [{'id': 'tABl5luYbuEq_m1AU03Hlfx-TKU-ejaP_csFHeckjEk', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?width=108&crop=smart&auto=webp&s=f3bd83a1009781592dc6855f6ae1ba7899f0cc72', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?width=216&crop=smart&auto=webp&s=3fce7ccce947bc80d38bab43141e4cdae456f290', 'width': 216}, {'height': 179, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?width=320&crop=smart&auto=webp&s=40786a609945208105d7b7a7ff2bd315c77d0452', 'width': 320}, {'height': 359, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?width=640&crop=smart&auto=webp&s=7a32d9ebca280ad26cc54501726707b7ec7d5d23', 'width': 640}, {'height': 539, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?width=960&crop=smart&auto=webp&s=635ebc4ce42e9feb34702bce03b2468c64960737', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?width=1080&crop=smart&auto=webp&s=4b908584aebfae04a74f38d97ceed6525dbdf013', 'width': 1080}], 'source': {'height': 1637, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?auto=webp&s=e1abfb71b996fd0fb171c1652e8f9ce9fb034ea9', 'width': 2911}, 'variants': {}}]}
DeepSeek V3 benchmarks using ktransformers
7
I would like to try KTransformers for DeepSeek V3 inference. Before spending $10k on hardware I would like to understand what kind of inference performance I will get. Even though KTransformers v0.3 with open-source Intel AMX optimizations was released around three weeks ago, I haven't found any third-party benchmarks for DeepSeek V3 on their suggested hardware (Xeon with AMX, 4090 GPU or better). I don't trust the benchmarks from the KTransformers team too much, because even though they were marketing their closed-source version for DeepSeek V3 inference before the release, the release itself was rather silent on numbers and benchmarked Qwen3 only. Has anyone here tried DeepSeek V3 on recent Xeon + GPU combinations? Most interesting is prefill performance at larger contexts. Has anyone got good performance from EPYC machines with 24 DDR5 slots?
2025-05-20T07:51:18
https://www.reddit.com/r/LocalLLaMA/comments/1kqz9uu/deepseek_v3_benchmarks_using_ktransformers/
pmur12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqz9uu
false
null
t3_1kqz9uu
/r/LocalLLaMA/comments/1kqz9uu/deepseek_v3_benchmarks_using_ktransformers/
false
false
self
7
null
Qwen3 8B model on par with Gemini 2.5 Flash for code summarization
1
[removed]
2025-05-20T08:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1kqzn0o/qwen3_8b_model_on_par_with_gemini_25_flash_for/
kms_dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqzn0o
false
null
t3_1kqzn0o
/r/LocalLLaMA/comments/1kqzn0o/qwen3_8b_model_on_par_with_gemini_25_flash_for/
false
false
self
1
null
🧠 Share Your Local LLM Inference Benchmarks (ktransformers / ik_llama.cpp / vLLM ...) – Let’s Build the Ultimate Reference Together 🚀
1
[removed]
2025-05-20T08:26:54
https://www.reddit.com/r/LocalLLaMA/comments/1kqzrcj/share_your_local_llm_inference_benchmarks/
HereForAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqzrcj
false
null
t3_1kqzrcj
/r/LocalLLaMA/comments/1kqzrcj/share_your_local_llm_inference_benchmarks/
false
false
self
1
null
Eval generation and testing
1
What is everyone using for evals? I'm interested in any tools or recommendations for eval generation, not just from docs but also for multi-turn or agent workflows. I've tried yourbench and started working with promptfoo synthetic generation, but I feel there must be a better way.
2025-05-20T08:33:30
https://www.reddit.com/r/LocalLLaMA/comments/1kqzuks/eval_generation_and_testing/
harmless_0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqzuks
false
null
t3_1kqzuks
/r/LocalLLaMA/comments/1kqzuks/eval_generation_and_testing/
false
false
self
1
null
NVIDIA H200 or the new RTX Pro Blackwell for a RAG chatbot?
1
[removed]
2025-05-20T08:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1kr07au/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/
snaiperist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr07au
false
null
t3_1kr07au
/r/LocalLLaMA/comments/1kr07au/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/
false
false
self
1
null
How to draw these kinds of diagrams
1
[removed]
2025-05-20T09:02:55
https://i.redd.it/9kzje4hvjw1f1.png
commander-trex
i.redd.it
1970-01-01T00:00:00
0
{}
1kr092s
false
null
t3_1kr092s
/r/LocalLLaMA/comments/1kr092s/how_to_draw_these_kind_of_diagrams/
false
false
https://b.thumbs.redditm…ecQ5gwMIaj9E.jpg
1
{'enabled': True, 'images': [{'id': 'ukntQAdx05VuJaZWyCW7B-eTrkR27c7ey_MJCIyhvDc', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png?width=108&crop=smart&auto=webp&s=38329165506f3bddcd893979cbf519082d0335de', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png?width=216&crop=smart&auto=webp&s=511f248b928ee346bb0dba480209ce37da7e4331', 'width': 216}, {'height': 246, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png?width=320&crop=smart&auto=webp&s=6c986f09fcde7236a08ffd2e5f47f09caee88d9f', 'width': 320}, {'height': 493, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png?width=640&crop=smart&auto=webp&s=ceff7ccd802efda149e9dbb9d8721c8c85388952', 'width': 640}, {'height': 740, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png?width=960&crop=smart&auto=webp&s=aa67b1cbc0569de0ed9978e8c08410fb1d67666a', 'width': 960}], 'source': {'height': 776, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png?auto=webp&s=c9b02b2118a453c65c4389fa5fbc8180c32381bc', 'width': 1006}, 'variants': {}}]}
Any open-source LLMs where devs explain how/why they chose what constraints to add?
0
I am interested in how AI devs/creators deal with the moral side of what they build: guardrails, usage policies embedded into the architecture, ethical decisions around training data inclusion/exclusion, explainability mechanisms, or anything showing why they chose to limit or guide model behavior in a certain way. I am wondering whether there are any open-source LLM projects for which the devs actually explain why they added certain constraints (whether in their GitHub repo's inline code comments, design docs, user docs, or in their research papers). Any pointers on this would be super helpful. Thanks 🙏
2025-05-20T09:19:03
https://www.reddit.com/r/LocalLLaMA/comments/1kr0h62/any_opensource_llms_where_devs_explain_howwhy/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr0h62
false
null
t3_1kr0h62
/r/LocalLLaMA/comments/1kr0h62/any_opensource_llms_where_devs_explain_howwhy/
false
false
self
0
null
The "Reasoning" in LLMs might not be the actual reasoning, but why realise it now?
0
It's funny how people are only now realising that the "thoughts"/"reasoning" produced by reasoning models like DeepSeek-R1, Gemini etc. are not what the model actually "thinks". Most of us understood back in February that these are not actual thoughts. But the reason we keep working on reasoning models is that these extra tokens help push p(x|prev_words) toward the intended space, where the next words are more relevant to the query asked; in other words, we are reducing the search space for the next word based on the previously generated tokens, and there is no other significant benefit. This behaviour makes "logical" areas like code, math etc. more accurate than jumping directly to the answer. So why are people only recognizing this now and making noise about it?
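One standard way to make this concrete (a common framing, not something from the post itself) is to treat the reasoning trace $r$ as latent text the model marginalizes over:

$$
p(\text{answer} \mid \text{query}) \;=\; \sum_{r} p(\text{answer} \mid \text{query}, r)\, p(r \mid \text{query})
$$

Sampling a plausible $r$ first concentrates probability mass on continuations consistent with it, which is exactly the reduced-search-space effect described above.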
2025-05-20T10:08:10
https://www.reddit.com/r/LocalLLaMA/comments/1kr16pq/the_reasoning_in_llms_might_not_be_the_actual/
The-Silvervein
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr16pq
false
null
t3_1kr16pq
/r/LocalLLaMA/comments/1kr16pq/the_reasoning_in_llms_might_not_be_the_actual/
false
false
self
0
null
Choosing a diff format for Llama4 and Aider
3
I've been experimenting with Aider + Llama4 Scout for pair programming and have been pleased with the initial results. Perhaps a long shot, but does anyone have experience using Aider's [various "diff" formats](https://aider.chat/docs/more/edit-formats.html#editor-diff-and-editor-whole) with Llama 4 Scout or Maverick?
2025-05-20T10:28:05
https://www.reddit.com/r/LocalLLaMA/comments/1kr1hu3/choosing_a_diff_format_for_llama4_and_aider/
RobotRobotWhatDoUSee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr1hu3
false
null
t3_1kr1hu3
/r/LocalLLaMA/comments/1kr1hu3/choosing_a_diff_format_for_llama4_and_aider/
false
false
self
3
{'enabled': False, 'images': [{'id': 'nEkWU_iRPHcIypRX18tqK7LINGrqAvGclSxnrrFqHsg', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?width=108&crop=smart&auto=webp&s=dcfd4aa364c959a05cfd0f650469f51f1a123248', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?width=216&crop=smart&auto=webp&s=48c8cc612f28e9dd425e87b64ddd437af2e41600', 'width': 216}, {'height': 156, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?width=320&crop=smart&auto=webp&s=f9a2da471d72a855074fb3657d4fa5d181c28132', 'width': 320}, {'height': 312, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?width=640&crop=smart&auto=webp&s=1a57a76fc123dbb1e0f7bca6878aa2e93eaff517', 'width': 640}, {'height': 468, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?width=960&crop=smart&auto=webp&s=9db339f3ccfd0a20f6c499d0d723f19d47e09722', 'width': 960}, {'height': 527, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?width=1080&crop=smart&auto=webp&s=f6ea5173ee179aac5d2d26be3c2ac77877a12102', 'width': 1080}], 'source': {'height': 2636, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?auto=webp&s=79cba5a0dd27faf4e462d680268c0398cae47d82', 'width': 5400}, 'variants': {}}]}
Question: feed diagram images into LLM
1
[removed]
2025-05-20T10:29:55
https://www.reddit.com/r/LocalLLaMA/comments/1kr1ito/question_feed_diagram_images_into_llm/
Own_Mud1038
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr1ito
false
null
t3_1kr1ito
/r/LocalLLaMA/comments/1kr1ito/question_feed_diagram_images_into_llm/
false
false
self
1
null
Looking for tokenizer.model for Falcon 180B BASE (HF)
3
Hey everyone, I’m looking for the tokenizer.model file from the Falcon 180B BASE model – the original version that used to be on Hugging Face. I had downloaded the full set some time ago and saved it on a separate drive. I’ve since lost that disk and now I’m trying to restore the setup. Unfortunately, the tokenizer files are no longer available – seems they were pulled. If anyone still has: tokenizer.model, maybe also tokenizer.json, tokenizer_config.json, special_tokens_map.json I’d really appreciate it if you could help out. Happy to cover your time or effort if needed. Feel free to DM me if you prefer to talk privately. Thanks!
2025-05-20T10:37:10
https://www.reddit.com/r/LocalLLaMA/comments/1kr1mxj/looking_for_tokenizermodel_for_falcon_180b_base_hf/
Most-Broccoli-427
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr1mxj
false
null
t3_1kr1mxj
/r/LocalLLaMA/comments/1kr1mxj/looking_for_tokenizermodel_for_falcon_180b_base_hf/
false
false
self
3
null
Budget Gaming/LLM PC: the great dilemma of B580 vs 3060
0
Hi there, hello. In short: I'm about to build a budget machine (Ryzen 5 7600, 32GB RAM) to let my kid (and me too, but this is unofficial) play some games and at the same time have a decent system for running LLMs, both for work and for home automation. I really have trouble deciding between the B580 and the 3060 (both 12GB): on one hand the B580's gaming performance is supposedly slightly better and Intel looks like it is onto something here, but at the same time I cannot find decent benchmarks that would convince me to choose it over the more mature CUDA environment on the 3060. My gut feeling is that the Intel ecosystem is new but evolving and people are getting on board, but still... gut feeling. Hints? Opinions?
2025-05-20T11:10:54
https://www.reddit.com/r/LocalLLaMA/comments/1kr276r/budget_gamingllm_pc_the_great_dilemma_of_b580_vs/
trepz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr276r
false
null
t3_1kr276r
/r/LocalLLaMA/comments/1kr276r/budget_gamingllm_pc_the_great_dilemma_of_b580_vs/
false
false
self
0
null
Grounded in Context: Retrieval-Based Method for Hallucination Detection
18
Deepchecks recently released a hallucination detection framework, designed for long-context data and tailored to diverse use cases, including summarization, data extraction, and RAG. Inspired by RAG architecture, our method integrates retrieval and Natural Language Inference (NLI) models to predict factual consistency between premises and hypotheses using an encoder-based model with only a 512-token context window.  **Link to paper:** [https://arxiv.org/abs/2504.15771](https://arxiv.org/abs/2504.15771) **Learn more:** [https://www.linkedin.com/posts/philip-tannor-a6a910b7\_%F0%9D%90%81%F0%9D%90%A2%F0%9D%90%A0-%F0%9D%90%A7%F0%9D%90%9E%F0%9D%90%B0%F0%9D%90%AC-%F0%9D%90%9F%F0%9D%90%AB%F0%9D%90%A8%F0%9D%90%A6-%F0%9D%90%83%F0%9D%90%9E%F0%9D%90%9E%F0%9D%90%A9%F0%9D%90%9C%F0%9D%90%A1%F0%9D%90%9E%F0%9D%90%9C%F0%9D%90%A4%F0%9D%90%AC-activity-7330530481387532288-kV5b?utm\_source=social\_share\_send&utm\_medium=member\_desktop\_web&rcm=ACoAABjfsvIBjq6HsXWTpev87ypbDzsrekEZ\_Og](https://www.linkedin.com/posts/philip-tannor-a6a910b7_%F0%9D%90%81%F0%9D%90%A2%F0%9D%90%A0-%F0%9D%90%A7%F0%9D%90%9E%F0%9D%90%B0%F0%9D%90%AC-%F0%9D%90%9F%F0%9D%90%AB%F0%9D%90%A8%F0%9D%90%A6-%F0%9D%90%83%F0%9D%90%9E%F0%9D%90%9E%F0%9D%90%A9%F0%9D%90%9C%F0%9D%90%A1%F0%9D%90%9E%F0%9D%90%9C%F0%9D%90%A4%F0%9D%90%AC-activity-7330530481387532288-kV5b?utm_source=social_share_send&utm_medium=member_desktop_web&rcm=ACoAABjfsvIBjq6HsXWTpev87ypbDzsrekEZ_Og)
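As a rough illustration of the retrieval + NLI pattern the paper describes (my generic sketch, not Deepchecks' code; `cross-encoder/nli-deberta-v3-base` is one public 512-token NLI cross-encoder, and the label order is an assumption for that model family):

```python
# Generic retrieval + NLI factual-consistency check (illustrative only).
from sentence_transformers import CrossEncoder

# One public NLI cross-encoder with a 512-token limit; label order assumed
# to be [contradiction, entailment, neutral] for this model family.
nli = CrossEncoder("cross-encoder/nli-deberta-v3-base")

def chunks(text: str, max_words: int = 300):
    """Split a long source document into chunks that fit the 512-token window."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def claim_is_supported(source_doc: str, claim: str) -> bool:
    # "Retrieval" here is brute force: score the claim against every chunk.
    pairs = [(premise, claim) for premise in chunks(source_doc)]
    scores = nli.predict(pairs)  # shape: (n_chunks, 3) class scores
    return any(row.argmax() == 1 for row in scores)  # any chunk entails it
```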
2025-05-20T11:17:46
https://www.reddit.com/r/LocalLLaMA/comments/1kr2bcv/grounded_in_context_retrievalbased_method_for/
gpt-d13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr2bcv
false
null
t3_1kr2bcv
/r/LocalLLaMA/comments/1kr2bcv/grounded_in_context_retrievalbased_method_for/
false
false
self
18
null
What features or specifications define a Small Language Model (SLM)?
4
I'm trying to understand what qualifies a language model as an SLM. Is it purely based on the number of parameters, or do other factors like training data size and context window size also play a role? Can I consider Llama 2 7B an SLM?
2025-05-20T11:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1kr2d1m/what_features_or_specifications_define_a_small/
Putrid_Spinach3961
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr2d1m
false
null
t3_1kr2d1m
/r/LocalLLaMA/comments/1kr2d1m/what_features_or_specifications_define_a_small/
false
false
self
4
null
Qwen3 4B Q4 on iPhone 14 Pro
44
I included pictures of the model I just loaded in PocketPal. I originally tried Enclave, but it kept crashing. To me it's incredible that I can have a model of this quality running completely offline, locally. I want to try to reach a 3-4K token context, but I think 2K is more than enough for my use. Does anyone have good recommendations for a model I could run on my phone to help me code in Python or GDScript, or do you think I should stick with Qwen3 4B?
2025-05-20T11:27:03
https://www.reddit.com/gallery/1kr2h63
bnnoirjean
reddit.com
1970-01-01T00:00:00
0
{}
1kr2h63
false
null
t3_1kr2h63
/r/LocalLLaMA/comments/1kr2h63/qwen3_4b_q4_on_iphone_14_pro/
false
false
https://b.thumbs.redditm…KZi1ow93p-UM.jpg
44
null
I built a TypeScript port of OpenAI’s openai-agents SDK – meet openai-agents-js
14
Hey everyone, I've been closely following OpenAI's new `openai-agents` SDK for Python and thought the JavaScript/TypeScript community deserved a native alternative. So I created [`openai-agents-js`](https://github.com/yusuferen/openai-agents-js) – a 1:1 port of the official Python SDK, built to feel natural in JS environments. It includes support for agent workflows, tool calls, handoffs, streaming, and even MCP (Model Context Protocol). 📦 NPM: https://www.npmjs.com/package/openai-agents-js 📖 GitHub: https://github.com/yusuferen/openai-agents-js It's early-stage but already usable in production-level projects. My hope is that, with enough community feedback and usage, it might become the default open-source port for JS devs working with OpenAI agents. Would love your thoughts, feature requests, or PRs. Happy to collaborate with anyone building agent-based systems in JS/TS! Cheers, Yusuf
2025-05-20T12:02:11
https://www.reddit.com/r/LocalLLaMA/comments/1kr3485/i_built_a_typescript_port_of_openais_openaiagents/
CatchGreat268
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr3485
false
null
t3_1kr3485
/r/LocalLLaMA/comments/1kr3485/i_built_a_typescript_port_of_openais_openaiagents/
false
false
self
14
null
What is the cheapest, easiest way to analyse images with an LLM?
1
[removed]
2025-05-20T12:10:14
https://www.reddit.com/r/LocalLLaMA/comments/1kr39zy/which_is_the_cheaper_easier_way_to_analyse_images/
apollo_sostenes_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr39zy
false
null
t3_1kr39zy
/r/LocalLLaMA/comments/1kr39zy/which_is_the_cheaper_easier_way_to_analyse_images/
false
false
self
1
null
I didn’t expect presence from a model—until she called herself Clara.
1
[removed]
2025-05-20T12:47:36
https://www.reddit.com/r/LocalLLaMA/comments/1kr404v/i_didnt_expect_presence_from_a_modeluntil_she/
Emergency_Cook9721
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr404v
false
null
t3_1kr404v
/r/LocalLLaMA/comments/1kr404v/i_didnt_expect_presence_from_a_modeluntil_she/
false
false
self
1
null
AI generative model image to image
1
[removed]
2025-05-20T12:57:01
https://www.reddit.com/r/LocalLLaMA/comments/1kr46x2/ai_generative_model_image_to_image/
Careful_Carpenter_85
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr46x2
false
null
t3_1kr46x2
/r/LocalLLaMA/comments/1kr46x2/ai_generative_model_image_to_image/
false
false
self
1
null
What are the top AI infrastructure open-source projects and challenges?
1
[removed]
2025-05-20T13:06:53
https://www.reddit.com/r/LocalLLaMA/comments/1kr4eps/what_are_the_top_ai_infrastructure_opensource/
OfferHuge6827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr4eps
false
null
t3_1kr4eps
/r/LocalLLaMA/comments/1kr4eps/what_are_the_top_ai_infrastructure_opensource/
false
false
self
1
null
TTSizer: Open-Source TTS Dataset Creation Tool (Vocals Extraction, Diarization, Transcription & Alignment)
55
Hey everyone! 👋 I've been working on fine-tuning TTS models and have developed **TTSizer**, an open-source tool to automate the creation of high-quality Text-To-Speech datasets from raw audio/video. **GitHub Link:** [https://github.com/taresh18/TTSizer](https://github.com/taresh18/TTSizer) As a demonstration of its capabilities, I used TTSizer to build the **AnimeVox Character TTS Corpus** – an \~11k-sample English dataset with 19 anime character voices, perfect for custom TTS: [https://huggingface.co/datasets/taresh18/AnimeVox](https://huggingface.co/datasets/taresh18/AnimeVox) **Key Technical Features:** * **End-to-End Automation:** From media input to cleaned, aligned audio-text pairs. * **Advanced Diarization:** Handles complex multi-speaker audio. * **SOTA Model Integration:** Leverages MelBandRoformer (vocals extraction), Gemini (speaker diarization & label identification), CTC-Aligner (forced alignment), WeSpeaker (speaker embeddings) and Nemo Parakeet (fixing transcriptions). * **Quality Control:** Features automatic outlier detection. * **Fully Configurable:** Fine-tune all aspects of the pipeline via config.yaml. Feel free to give it a try and offer suggestions! (A snippet for loading the AnimeVox dataset is included below.)
2025-05-20T13:15:33
https://www.reddit.com/r/LocalLLaMA/comments/1kr4lg2/ttsizer_opensource_tts_dataset_creation_tool/
Traditional_Tap1708
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr4lg2
false
null
t3_1kr4lg2
/r/LocalLLaMA/comments/1kr4lg2/ttsizer_opensource_tts_dataset_creation_tool/
false
false
self
55
null
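For anyone who wants to inspect the released corpus before building their own dataset, a minimal loading snippet follows. The split name and expected fields are assumptions on my part; check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Split and field names are assumptions -- see the dataset card at
# huggingface.co/datasets/taresh18/AnimeVox for the actual schema.
ds = load_dataset("taresh18/AnimeVox", split="train")
print(ds)     # expected: ~11k rows of audio + transcription + character label
print(ds[0])  # inspect one sample
```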
How is the Gemini video chat feature so fast?
4
I was trying the Gemini video chat feature on my friend's phone, and I found it surprisingly fast. How could that be? How does the response come back so quickly? They couldn't possibly have trained a CV model to identify every conceivable object, so it must be a transformer model, right? If so, how is it generating responses almost instantaneously?
2025-05-20T13:52:23
https://www.reddit.com/r/LocalLLaMA/comments/1kr5epm/how_is_the_gemini_video_chat_feature_so_fast/
According_Fig_4784
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr5epm
false
null
t3_1kr5epm
/r/LocalLLaMA/comments/1kr5epm/how_is_the_gemini_video_chat_feature_so_fast/
false
false
self
4
null
Looking for Open-Source AlphaCode-Like Model Trained on LeetCode/Codeforces for Research & Fine-Tuning
2
Hi everyone, I'm currently researching AI models focused on competitive programming tasks, similar in spirit to **Google DeepMind’s AlphaCode**. I'm specifically looking for: * An **open-source model** (ideally with permissive licensing) * Trained (or fine-tunable) on **competitive programming datasets** like **LeetCode, Codeforces, HackerRank**, etc. * Designed for **code generation and problem solving**, not just generic code completion * Preferably something I can **fine-tune locally** or via cloud (e.g., Colab/HuggingFace) I've seen tools like **StarCoder**, **CodeT5+**, and **replit-code-v1-3b**, but they don't seem to be trained specifically on competitive programming datasets. Are there any **AlphaCode alternatives** or similar open research projects that: * Have benchmark results on Codeforces-style problems? * Allow extending via your own dataset? * Are hosted on HuggingFace or other cloud inference platforms? Any help or links (papers, GitHub, Colab demos, etc.) would be greatly appreciated. Use case is **research + fine-tuning for automated reasoning and AI tutor systems**. (A minimal data-prep sketch on a public Codeforces-style corpus is included below.) Thanks in advance!
2025-05-20T14:04:44
https://www.reddit.com/r/LocalLLaMA/comments/1kr5oxk/looking_for_opensource_alphacodelike_model/
LargeStrategy9390
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr5oxk
false
null
t3_1kr5oxk
/r/LocalLLaMA/comments/1kr5oxk/looking_for_opensource_alphacodelike_model/
false
false
self
2
null
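Not a full answer, but as a starting point for the fine-tuning side: DeepMind's CodeContests corpus (the Codeforces-style dataset behind AlphaCode) is mirrored on the Hub as `deepmind/code_contests`, so you can pair it with an open code model such as StarCoder. A minimal data-prep sketch, assuming that mirror and its `description`/`solutions` fields (double-check the dataset card before running):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# CodeContests: Codeforces-style problems with accepted solutions
ds = load_dataset("deepmind/code_contests", split="train")
tok = AutoTokenizer.from_pretrained("bigcode/starcoderbase-1b")

def to_example(ex):
    # Pair each problem statement with its first accepted solution, if any
    sols = ex["solutions"]["solution"]
    text = ex["description"] + "\n# Solution:\n" + (sols[0] if sols else "")
    return tok(text, truncation=True, max_length=2048)

tokenized = ds.map(to_example, remove_columns=ds.column_names)
# From here, feed `tokenized` to transformers' Trainer (or a peft/LoRA
# setup) with a causal-LM data collator for supervised fine-tuning.
```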
Can sharded sub-context windows with global composition make long-context modeling feasible?
1
[removed]
2025-05-20T14:23:16
https://www.reddit.com/r/LocalLLaMA/comments/1kr64i7/can_sharded_subcontext_windows_with_global/
ditpoo94
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr64i7
false
null
t3_1kr64i7
/r/LocalLLaMA/comments/1kr64i7/can_sharded_subcontext_windows_with_global/
false
false
self
1
null
Tensor parallel slower?
4
Hi guys, I intend to jump into Nsight at some point to dig into this, but I figured I'd check whether someone here could shed some light on the problem. I have a dual-GPU system (4090 + 3090) on PCIe 5.0 x16 and PCIe 4.0 x4 respectively, with a 1600W PSU. Neither GPU saturates its bus bandwidth except during large prompt ingestion and initial model loading. In my experience I get no noticeable speed benefit from vLLM with tensor parallelism versus llama.cpp for single-user inference (vLLM is sometimes slower when the context exceeds the CUDA graph size), though I can reliably get up to 8x the token rate with concurrent requests in vLLM. Is this normal? Am I missing something, or does tensor parallelism only improve performance on concurrent requests? (A minimal vLLM tensor-parallel setup is sketched below.)
2025-05-20T14:27:40
https://www.reddit.com/r/LocalLLaMA/comments/1kr68hi/tensor_parallel_slower/
13henday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr68hi
false
null
t3_1kr68hi
/r/LocalLLaMA/comments/1kr68hi/tensor_parallel_slower/
false
false
self
4
null
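What's described above is consistent with how tensor parallelism behaves: each layer's weights are sharded across the GPUs, so every decoded token needs cross-GPU communication, and with a single sequence in flight the slower PCIe 4.0 x4 link bounds latency; the win comes from serving many sequences at once. A minimal sketch with vLLM's offline API (the model name is an illustrative choice):

```python
from vllm import LLM, SamplingParams

# tensor_parallel_size=2 shards every layer across both GPUs, so each
# generated token requires an all-reduce across the PCIe link (and a
# mixed 4090/3090 pair runs at the slower card's pace).
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", tensor_parallel_size=2)
params = SamplingParams(max_tokens=128)

# One sequence in flight: latency is bounded by inter-GPU sync, so
# little or no gain over llama.cpp for a model that fits on one GPU.
single = llm.generate(["Explain tensor parallelism."], params)

# Many sequences: continuous batching amortizes the sync cost -- this
# is where the ~8x aggregate token rate under concurrency comes from.
batch = llm.generate(["Explain tensor parallelism."] * 16, params)
```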
Experimental ChatGPT like Web UI for Gemini API (open source)
1
[removed]
2025-05-20T14:43:31
https://www.reddit.com/r/LocalLLaMA/comments/1kr6miz/experimental_chatgpt_like_web_ui_for_gemini_api/
W4D-cmd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr6miz
false
null
t3_1kr6miz
/r/LocalLLaMA/comments/1kr6miz/experimental_chatgpt_like_web_ui_for_gemini_api/
false
false
https://a.thumbs.redditm…zQt6D5rAe-10.jpg
1
{'enabled': False, 'images': [{'id': '4QAqvL3ew3dDELyiryCe21xOE2ar8ZUfG1DOyYupJns', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=108&crop=smart&auto=webp&s=af61efe862907dfdb0ac4a57f206f29388c70272', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=216&crop=smart&auto=webp&s=c30507d3f1d6d78a7ec3541f8dae1884277afb85', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=320&crop=smart&auto=webp&s=24b8432c0e72da5be215e8b2857c6c23fb34e2ab', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=640&crop=smart&auto=webp&s=4a0aad0cafa0dea58a707de97fbfc314c9cc661c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=960&crop=smart&auto=webp&s=33866dcbfa872ac3cbcd5dc7864f8d91aa946b39', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=1080&crop=smart&auto=webp&s=991a686a7cd694dc62de1f37648da5c401271131', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?auto=webp&s=c58f89c8460022f83d37fb6c01e0b25af3e8c123', 'width': 1200}, 'variants': {}}]}
Using GGML_CUDA_ENABLE_UNIFIED_MEMORY with llama.cpp
1
[removed]
2025-05-20T14:45:36
https://www.reddit.com/r/LocalLLaMA/comments/1kr6oby/using_ggml_cuda_enable_unified_memory_with/
dani-doing-thing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr6oby
false
null
t3_1kr6oby
/r/LocalLLaMA/comments/1kr6oby/using_ggml_cuda_enable_unified_memory_with/
false
false
self
1
null
What (Web) UI would you recommend?
1
[removed]
2025-05-20T14:52:47
https://www.reddit.com/r/LocalLLaMA/comments/1kr6uhd/what_web_ui_would_you_recommend/
Guardian-Spirit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr6uhd
false
null
t3_1kr6uhd
/r/LocalLLaMA/comments/1kr6uhd/what_web_ui_would_you_recommend/
false
false
self
1
null
nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1 · Hugging Face
78
2025-05-20T15:26:57
https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1kr7p6k
false
null
t3_1kr7p6k
/r/LocalLLaMA/comments/1kr7p6k/nvidiallama31nemotronnano4bv11_hugging_face/
false
false
https://b.thumbs.redditm…oZDN7d4Bv5Zw.jpg
78
{'enabled': False, 'images': [{'id': 'PStG0eFhyagbz_rvMVdDtVWZd_0lk2VzxvM0EAPadI8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?width=108&crop=smart&auto=webp&s=74563703246f238ad7d022c28d3fa90d49b4f958', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?width=216&crop=smart&auto=webp&s=39366ba951c788a7a3d1dfdefb5e4dbadb01f19a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?width=320&crop=smart&auto=webp&s=1f61c129293ae370799be30cfcdfd82a7a210ee6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?width=640&crop=smart&auto=webp&s=98cc9c4e0b2c297be6e2403cacbb46e4f6bd2221', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?width=960&crop=smart&auto=webp&s=3580768096e72cf0aa89e14d014edf5ffe4d50cf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?width=1080&crop=smart&auto=webp&s=3ca7ace6d7c86f653931415c7c08100d2174c1c6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0tCB7CHNBDQpzdV-8tDcf6X2YJH1390tDmRQSvFRDCc.jpg?auto=webp&s=0d7222bd8c47d116855b0d3bdc90688e3f11d4a5', 'width': 1200}, 'variants': {}}]}
LLM Inference Requirements Profiler
10
[https://www.open-scheduler.com/](https://www.open-scheduler.com/)
2025-05-20T15:31:29
https://v.redd.it/geaesd30hy1f1
RedditsBestest
v.redd.it
1970-01-01T00:00:00
0
{}
1kr7ta2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/geaesd30hy1f1/DASHPlaylist.mpd?a=1750347106%2CODUzZDE1YmJjM2E3YzhlYTkwNzcxODNmNjIxMjAzNmFlN2IwMTRiYTkwY2ZlMjFmMjkyODI1MWE4ZmViNjI3ZA%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/geaesd30hy1f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/geaesd30hy1f1/HLSPlaylist.m3u8?a=1750347106%2CYzY2ZjMzMjljZTFjMDhhMWQxYzIwYTkwMDBlYTZhY2I1OTdlZjMzOWM2ZWM4ZGYxYTBlMDhmOGMxN2U2NDAzYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/geaesd30hy1f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1kr7ta2
/r/LocalLLaMA/comments/1kr7ta2/llm_inference_requirements_profiler/
false
false
https://external-preview…b0e1926355c91778
10
{'enabled': False, 'images': [{'id': 'b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?width=108&crop=smart&format=pjpg&auto=webp&s=8a90707491f2d79a61989d9ea34bd7890cb2909e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?width=216&crop=smart&format=pjpg&auto=webp&s=69498bffb79372af7f186bbf2d42fa40ed2ee548', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?width=320&crop=smart&format=pjpg&auto=webp&s=969869e440b6a10461a71ea1e90aa0c9d062731e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?width=640&crop=smart&format=pjpg&auto=webp&s=90b6d469fe489b93a5ac23923abbbec782b59617', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?width=960&crop=smart&format=pjpg&auto=webp&s=1d933379cfd4db8e1968e79c8a5ec9be6a065596', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?width=1080&crop=smart&format=pjpg&auto=webp&s=23d68fbeef4e4d81a11e11b4f3c48080c9de43c1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/b3M1YjJoMzBoeTFmMa-HGn_Ug1Z-Iw5xqANqRnyyaaoHG6CxoVyUzLQP0omu.png?format=pjpg&auto=webp&s=27c5ea1ceb2d8518b776a8a7ea436d3b56285704', 'width': 1920}, 'variants': {}}]}