title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Prompt Engineering/Breaking for spam texts
| 1 |
[removed]
| 2025-05-26T01:24:05 |
No-Fig-8614
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kviakt
| false | null |
t3_1kviakt
|
/r/LocalLLaMA/comments/1kviakt/prompt_engineeringbreaking_for_spam_texts/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'xtmnauTWwuW4qoHKfOQXxeQEpDKbbazgPFC-vjJtKUI', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?width=108&crop=smart&auto=webp&s=604a246078dc3986fd29318c60e5eb372666021e', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?width=216&crop=smart&auto=webp&s=67e196305819c8010d947f85a2e8e0a4035e09ec', 'width': 216}, {'height': 406, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?width=320&crop=smart&auto=webp&s=035f6b1901bbcb5699d59205aa3bd03a450e7b04', 'width': 320}, {'height': 813, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?width=640&crop=smart&auto=webp&s=9fef97ba650f78474baf3e1653512080f7a1ca3e', 'width': 640}, {'height': 1219, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?width=960&crop=smart&auto=webp&s=fc243eec2248b387d88905bd7ef2ff674d7b34e0', 'width': 960}, {'height': 1372, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?width=1080&crop=smart&auto=webp&s=3e002065ab0591026cf6143ae7c05053a1c3f7c6', 'width': 1080}], 'source': {'height': 1498, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?auto=webp&s=1a45a30d217ef4f534febd057b55bec3f9572799', 'width': 1179}, 'variants': {}}]}
|
||
Prompt engineering for spam texts to break it.
| 1 |
[removed]
| 2025-05-26T01:29:40 |
No-Fig-8614
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvie9v
| false | null |
t3_1kvie9v
|
/r/LocalLLaMA/comments/1kvie9v/prompt_engineering_for_spam_texts_to_break_it/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'BF8BcJjEJCmGYR4PeXRKKrunLUMNNO0ZjIwAN1ef0yo', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?width=108&crop=smart&auto=webp&s=166ffefe3a61e536fcf74a4403e4e23d217517aa', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?width=216&crop=smart&auto=webp&s=8ae783db8c47a0f99a30f17dbdc209facd4d2480', 'width': 216}, {'height': 406, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?width=320&crop=smart&auto=webp&s=440e014891550015300c8a2b848360ee3bd260de', 'width': 320}, {'height': 813, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?width=640&crop=smart&auto=webp&s=230f3425027b942ae07d3b3cb69cf4ed518cbf1b', 'width': 640}, {'height': 1219, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?width=960&crop=smart&auto=webp&s=0977377bc84912fd0a35f1f0955117dbce57c270', 'width': 960}, {'height': 1372, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?width=1080&crop=smart&auto=webp&s=d336585fde78e66504b5772096e26d943df60abd', 'width': 1080}], 'source': {'height': 1498, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?auto=webp&s=85f5d3c3f886eb45e345603138bee58ccb226dce', 'width': 1179}, 'variants': {}}]}
|
||
8xRTX 3050 6GB for fine tuning?
| 1 |
[removed]
| 2025-05-26T01:32:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvigb4/8xrtx_3050_6gb_for_fine_tuning/
|
blackkkyypenguin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvigb4
| false | null |
t3_1kvigb4
|
/r/LocalLLaMA/comments/1kvigb4/8xrtx_3050_6gb_for_fine_tuning/
| false | false |
self
| 1 | null |
Qwen3 vision/audio/math version si coming
| 1 | 2025-05-26T01:42:38 |
MedicalTangerine191
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvimqp
| false | null |
t3_1kvimqp
|
/r/LocalLLaMA/comments/1kvimqp/qwen3_visionaudiomath_version_si_coming/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'LnfeFiSvjfxXEiUFxcJrY3haWUbKccp08bkJ4FXXuvA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?width=108&crop=smart&auto=webp&s=7394dcbc29e012def3f47d99a95f59158862532e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?width=216&crop=smart&auto=webp&s=8b1bcc8211db633c05b9ba0ee076d8cb4432f4c8', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?width=320&crop=smart&auto=webp&s=52e608f44139c043c9c5ddad23ac6b14e562f28f', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?width=640&crop=smart&auto=webp&s=7fbdbec641b4f0ab1e92f9bae454f3b61255aad7', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?width=960&crop=smart&auto=webp&s=c31057c4d7a52781634b3efe6dd1159a72c14320', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?width=1080&crop=smart&auto=webp&s=f9db2bdce3b7a45dfd601503e4ddeb71a414dbeb', 'width': 1080}], 'source': {'height': 2800, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?auto=webp&s=da23910dd92c20e966f7a238372b4074296e3288', 'width': 1260}, 'variants': {}}]}
|
|||
Qwen3 vision/audio/math version si coming
| 1 | 2025-05-26T01:45:38 |
MedicalTangerine191
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvionh
| false | null |
t3_1kvionh
|
/r/LocalLLaMA/comments/1kvionh/qwen3_visionaudiomath_version_si_coming/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Q2_A4sjDukAgdaio7sehr8q5ikIm9tdXsnFSy3G5GsY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?width=108&crop=smart&auto=webp&s=883baede750fcf3c33206fd3a2ec11ce99b6cd80', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?width=216&crop=smart&auto=webp&s=ac420c3c60b1c39396316feba1cff7f35a08d156', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?width=320&crop=smart&auto=webp&s=2aa247d35610ec11045806bab31edc77594fefa3', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?width=640&crop=smart&auto=webp&s=2bb14896eb3739d0336770492ad213de40e07c7f', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?width=960&crop=smart&auto=webp&s=366988b990ae60066b53445998d51a41d0e93c99', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?width=1080&crop=smart&auto=webp&s=e88d1306f1de4e9c5696df3c17d2bc71c1fcf104', 'width': 1080}], 'source': {'height': 2800, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?auto=webp&s=b73b2f0e7c784e4a2e402e2b42a8a0baf548c277', 'width': 1260}, 'variants': {}}]}
|
|||
What is the best way to run llama 3.3 70b locally, split on 3 GPUS (52 GB of VRAM)
| 2 |
Hi,
I'm foing to create datasets for fine tunning with unsloth, from raw unformated text, using the recommended LLM for this.
I have access to a frankenstein with the following spec:
\- 11700f
\- 128 GB of RAM
\- rtx 5060 Ti w/ 16GB
\- rtx 4070 Ti Super w/ 16 GB
\- rtx 3090 Ti w/ 24 GB
\- SO: Win 11 and ububtu 24.02 under WSL2
\- I can free up to 1 TB of the total 2TB of the nvme SSD
Until now, I only loaded guff with Koboldcpp. But, maybe, llamacpp or vllm are better for this task.
Do anyone have a recommended command/tool for this task.
What model files do you recommend me to download?
| 2025-05-26T01:46:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvip8r/what_is_the_best_way_to_run_llama_33_70b_locally/
|
GoodSamaritan333
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvip8r
| false | null |
t3_1kvip8r
|
/r/LocalLLaMA/comments/1kvip8r/what_is_the_best_way_to_run_llama_33_70b_locally/
| false | false |
self
| 2 | null |
New LocalLLM Hardware complete
| 135 |
So I spent this last week at Red Hats conference with this hardware sitting at home waiting for me. Finally got it put together. The conference changed my thought on what I was going to deploy but interest in everyone's thoughts.
The hardware is an AMD Ryzen 7 5800x with 64GB of ram, 2x 3909Ti that my best friend gave me (2x 4.0x8) with a 500gb boot and 4TB nvme.
The rest of the lab isal also available for ancillary things.
At the conference, I shifted my session from Ansible and Openshift to as much vLLM as I could and it's gotten me excited for IT Work for the first time in a while.
Currently still setting thingd up - got the Qdrant DB installed on the proxmox cluster in the rack. Plan to use vLLM/ HF with Open-WebUI for a GPT front end for the rest of the family with RAG, TTS/STT and maybe even Home Assistant voice.
Any recommendations? Ivr got nvidia-smi working g and both gpus are detected. Got them power limited ton300w each with the persistence configured (I have a 1500w psu but no need to blow a breaker lol). Im coming from my M3 Ultra Mac Studio running Ollama, that's really for my music studio - wanted to separate out the functions.
Thanks!
| 2025-05-26T02:04:11 |
https://www.reddit.com/gallery/1kvj0nt
|
ubrtnk
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvj0nt
| false | null |
t3_1kvj0nt
|
/r/LocalLLaMA/comments/1kvj0nt/new_localllm_hardware_complete/
| false | false | 135 | null |
|
QWQ - Will there be a future update now that Qwen 3 is out?
| 6 |
I've tested out most of the variations of Qwen 3, and while it's decent, there's still something extra that QWQ has that Qwen 3 just doesn't. Especially for writing tasks. I just get better outputs.
Now that Qwen 3 is out w/thinking, is QWQ done? If so, that sucks as I think it's still better than Qwen 3 in a lot of ways. It just needs to have its thinking process updated; if it thought more efficiently like Gemini Pro 2.5 (3-25 edition), it would be even more amazing.
**SIDE NOTE:** With Gemini no longer showing thinking, couldn't we just use existing outputs which still show thinking as synthetic guidance for improving other thinking models?
| 2025-05-26T02:04:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvj149/qwq_will_there_be_a_future_update_now_that_qwen_3/
|
GrungeWerX
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvj149
| false | null |
t3_1kvj149
|
/r/LocalLLaMA/comments/1kvj149/qwq_will_there_be_a_future_update_now_that_qwen_3/
| false | false |
self
| 6 | null |
Jetson Orin AGX 32gb
| 9 |
I can’t get this dumb thing to use the GPU with Ollama. As far as I can tell not many people are using it, and the mainline of llama.cpp is often broken, and some guy has a fork for the Jetson devices. I can get the whole ollama stack running but it’s dog slow and nothing shows up on Nvidia-smi. I’m trying Qwen3-30b-a3b. That seems to run just great on my 3090. Would I ever expect the Jetson to match its performance?
The software stack is also hot garbage, it seems like you can only install nvidia’s OS using their SDK manager. There is no way I’d ever recommend this to anyone. This hardware could have so much potential but Nvidia couldn’t be bothered to give it an understandable name let alone a sensible software stack.
Anyway, is anyone having success with this for basic LLM work?
| 2025-05-26T02:07:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvj34f/jetson_orin_agx_32gb/
|
randylush
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvj34f
| false | null |
t3_1kvj34f
|
/r/LocalLLaMA/comments/1kvj34f/jetson_orin_agx_32gb/
| false | false |
self
| 9 | null |
Implemented a quick and dirty iOS app for the new Gemma3n models
| 24 | 2025-05-26T02:54:28 |
https://github.com/sid9102/gemma3n-ios
|
sid9102
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvjwiz
| false | null |
t3_1kvjwiz
|
/r/LocalLLaMA/comments/1kvjwiz/implemented_a_quick_and_dirty_ios_app_for_the_new/
| false | false |
default
| 24 | null |
|
Speechless: Speech Instruction Training Without Speech for Low Resource Languages
| 149 |
Hey everyone, it’s me from **Menlo Research** again 👋. Today I want to share some news + a new model!
Exciting news - our paper *“SpeechLess”* just got accepted to **Interspeech 2025**, and we’ve finished the camera-ready version! 🎉
The idea came out of a challenge we faced while building a speech instruction model - we didn’t have enough speech instruction data for our use case. That got us thinking: Could we train the model entirely using synthetic data?
That’s how **SpeechLess** was born.
**Method Overview (with diagrams in the paper):**
1. **Step 1**: Convert real speech → discrete tokens (train a quantizer)
2. **Step 2**: Convert text → discrete tokens (train SpeechLess to simulate speech tokens from text)
3. **Step 3**: Use this pipeline (text → synthetic speech tokens) to train a LLM on speech instructions- just like training any other language model.
**Results:**
Training on fully synthetic speech tokens is surprisingly effective - performance holds up, and it opens up new possibilities for building speech systems in **low-resource settings** where collecting audio data is difficult or expensive.
We hope this helps other teams in similar situations and inspires more exploration of synthetic data in speech applications.
**Links:**
\- Paper: [https://arxiv.org/abs/2502.14669](https://arxiv.org/abs/2502.14669)
\- Speechless Model: [https://huggingface.co/Menlo/Speechless-llama3.2-v0.1](https://huggingface.co/Menlo/Speechless-llama3.2-v0.1)
\- Dataset: [https://huggingface.co/datasets/Menlo/Ichigo-pretrain-tokenized-v0.1](https://huggingface.co/datasets/Menlo/Ichigo-pretrain-tokenized-v0.1)
\- LLM: [https://huggingface.co/Menlo/Ichigo-llama3.1-8B-v0.5](https://huggingface.co/Menlo/Ichigo-llama3.1-8B-v0.5)
\- Github: [https://github.com/menloresearch/ichigo](https://github.com/menloresearch/ichigo)
| 2025-05-26T03:36:39 |
Kooky-Somewhere-2883
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvknlo
| false | null |
t3_1kvknlo
|
/r/LocalLLaMA/comments/1kvknlo/speechless_speech_instruction_training_without/
| false | false | 149 |
{'enabled': True, 'images': [{'id': 'LBIo9BUA_PgGFMF4Ap3ARUZbniWx_6z_OCmMiyVE3tA', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?width=108&crop=smart&auto=webp&s=56fb1abb3ec60e062f87f026a8768706927b05d1', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?width=216&crop=smart&auto=webp&s=4ca3937cd748ce038662859c99a3ff00bf9f42f3', 'width': 216}, {'height': 270, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?width=320&crop=smart&auto=webp&s=3426136b948fcc3741d605b95488d001cb13c5aa', 'width': 320}, {'height': 541, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?width=640&crop=smart&auto=webp&s=24c19692c9e98b21bf15c0e3f564d6800ac0fa76', 'width': 640}, {'height': 811, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?width=960&crop=smart&auto=webp&s=829e752751f48128f6c11ab59b88b8863dd94cbb', 'width': 960}, {'height': 913, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?width=1080&crop=smart&auto=webp&s=32029df349538802fcc1099a3783c6dd6449d814', 'width': 1080}], 'source': {'height': 1842, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?auto=webp&s=c3009bb8223ea53fa0c5a9722b3f4fcee0a6e04c', 'width': 2178}, 'variants': {}}]}
|
||
nvidia/AceReason-Nemotron-7B · Hugging Face
| 47 | 2025-05-26T05:40:40 |
https://huggingface.co/nvidia/AceReason-Nemotron-7B
|
jacek2023
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvmrgu
| false | null |
t3_1kvmrgu
|
/r/LocalLLaMA/comments/1kvmrgu/nvidiaacereasonnemotron7b_hugging_face/
| false | false | 47 |
{'enabled': False, 'images': [{'id': '94WAwkrDsd0F2vbt8p9JCI9uy_emGHxp1Gs2dBOwSJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?width=108&crop=smart&auto=webp&s=988e93e3e9fea6c8cac35daad406340f87030549', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?width=216&crop=smart&auto=webp&s=d22e98f6a4aa85bb9192aec3b62ba97e0506fe40', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?width=320&crop=smart&auto=webp&s=556f2efb90ccf0fc142bf9a1a133d0ea735f9084', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?width=640&crop=smart&auto=webp&s=52002294beca04a31b018ffdca2c01eba72b139a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?width=960&crop=smart&auto=webp&s=d421697615b1ffb0170b032fc67fbe293cd831bd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?width=1080&crop=smart&auto=webp&s=b37bc8ab27e884f50152802f67bd1482063a4b28', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?auto=webp&s=d4a9b9ba856f13ac04be86b3c7a4a8054d196738', 'width': 1200}, 'variants': {}}]}
|
||
How to know which MLLM is good at "Pointing"
| 1 |
[removed]
| 2025-05-26T05:46:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvmuvu/how_to_know_which_mllm_is_good_at_pointing/
|
IndependentDoor8479
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvmuvu
| false | null |
t3_1kvmuvu
|
/r/LocalLLaMA/comments/1kvmuvu/how_to_know_which_mllm_is_good_at_pointing/
| false | false |
self
| 1 | null |
「搭子社交」流行:从饭搭子到学习搭子,现代人的亲密关系为何变
| 1 |
[removed]
| 2025-05-26T05:59:39 |
https://v.redd.it/cziuk8jlg23f1
|
heygem666
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvn29l
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cziuk8jlg23f1/DASHPlaylist.mpd?a=1750831196%2CZjc0MzRiYTUzZmVhNTkyMTQ0NzA4MGIxNTM2NTJkNzE5MWI5NTZmNmJiODRlMzU0NDA4MTM0OGVmYjZlNjlkYw%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/cziuk8jlg23f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/cziuk8jlg23f1/HLSPlaylist.m3u8?a=1750831196%2CMDJkMTNjNjgyMTcwNjBhZmM2MTE2M2UzYjU3NDkwZGZmYzAwY2UwZTIyYzQ1N2I2MzQ3NjE0NGM5YzI4MjI4Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/cziuk8jlg23f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1kvn29l
|
/r/LocalLLaMA/comments/1kvn29l/搭子社交流行从饭搭子到学习搭子现代人的亲密关系为何变/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT.png?width=108&crop=smart&format=pjpg&auto=webp&s=31b4d3af6f1c623a05bfc56405b5ae455215e99b', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT.png?width=216&crop=smart&format=pjpg&auto=webp&s=040e0313025d7bbc6fa157b5d65c3e3cca1ea39d', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT.png?width=320&crop=smart&format=pjpg&auto=webp&s=10806fc1d18a3a357c9e19285b960123ddcf3e84', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT.png?width=640&crop=smart&format=pjpg&auto=webp&s=3c16b36c311ed336b722ca57019c477ee7ca4cc2', 'width': 640}, {'height': 1707, 'url': 'https://external-preview.redd.it/OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT.png?width=960&crop=smart&format=pjpg&auto=webp&s=d66a079f3e474af0acc8253bfd0a4f03ccea8871', 'width': 960}], 'source': {'height': 1766, 'url': 'https://external-preview.redd.it/OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT.png?format=pjpg&auto=webp&s=f5508cb013725ce3d7c77d2fd76a5f6eed409b1f', 'width': 993}, 'variants': {}}]}
|
|
Vector Space - Llama running locally on Apple Neural Engine
| 32 |
Core ML is Apple’s official way to run Machine Learning models on device, and also appears to be the only way to engage the Neural Engine, which is a powerful NPU installed on every iPhone/iPad that is capable of performing tens of billions of computations per second.
[Llama 3.2 1B Full Precision \(float16\) on the Vector Space App](https://reddit.com/link/1kvn51x/video/kagsls50h23f1/player)
In recent years, Apple has improved support for Large Language Models (and other transformer-based models) to run on device by introducing Stateful models, quantizations, etc. Despite these improvements, developers still face hurdles and a steep learning curve if they try to incorporate a large language model on-device. This leads to an (often paid) network API call for even the most basic AI-functions. For this reason, an Agentic AI often has to charge tens of dollars per month while still limiting usage for the user.
I have founded the Vector Space project to conquer the above issues. My Goal is two folds:
1. Enable users to use AI (marginally) freely and smoothly
2. Enable small developers o build agentic apps without cost, without having to understand how AI works under the hood, and without having to worry about API key safety.
[Llama 3.2 1B Full Precision \(float16\) running on iPhone 14 Pro Max](https://preview.redd.it/6yhsn8x0g23f1.png?width=1368&format=png&auto=webp&s=42f7f189fdda7cabc2dd3055a55917468a9beba9)
To achieve the above goals, Vector Space will provide
1. Architecture and tools that can convert models to Core ML format that can be run on Apple Neural Engine.
2. Swift Package that can run performant model inference.
3. App for users to directly download and manage model on Device, and for developers and enthusiasts to try out different models directly on iPhone.
My goal is NOT to:
Completely replace server-based AI, where models with hundreds of billions of parameters can be hosted, with context length of hundreds of k. Online models will still excel at complex tasks. However, it is also important to note that not every user is asking AI to do programing and math challenges.
Current Progress:
I have already preliminarily supported Llama 3.2 1B in full precision. The Model runs on ANE and supports MLState.
I am pleased to release the TestFlight Beta of the App mentioned in goal #3 above so you can try it out directly on your iPhone.
[https://testflight.apple.com/join/HXyt2bjU](https://testflight.apple.com/join/HXyt2bjU)
If you decide to try out the TestFlight version, please note the following:
1. We do NOT collect any information about your chat messages. It remains completely on device and/or in your iCloud.
2. The first model load into memory (after downloading) will take about 1-2 minutes. Subsequent load will only take a couple seconds.
3. Chat history would not persist across app launches.
4. I cannot guarantee the downloaded app will continue work when I release the next update. You might need to delete and redownload the app when an update is released in the future.
Next Step:
I will be working on a quantized version of Llama 3.2 1B that is expected to have significant inference speed improvement. I will then provide a much wider selection of models available for download.
| 2025-05-26T06:04:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvn51x/vector_space_llama_running_locally_on_apple/
|
Glad-Speaker3006
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvn51x
| false | null |
t3_1kvn51x
|
/r/LocalLLaMA/comments/1kvn51x/vector_space_llama_running_locally_on_apple/
| false | false | 32 |
{'enabled': False, 'images': [{'id': 'FhD9ztQfEPUNryOSupxSHZ7KguGFp0dxsYvnDShw1Tg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/chjzdPRsgKclskhkJtHyR9G4ascbrd4AkBc1D_gB_VY.jpg?width=108&crop=smart&auto=webp&s=4f64af24f7053577357e73c0cf5a3d48ae6896d6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/chjzdPRsgKclskhkJtHyR9G4ascbrd4AkBc1D_gB_VY.jpg?width=216&crop=smart&auto=webp&s=3b5ed364f79dd59f7d3ac591da1f6d71542f2ba5', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/chjzdPRsgKclskhkJtHyR9G4ascbrd4AkBc1D_gB_VY.jpg?width=320&crop=smart&auto=webp&s=a515fe5d9e85182cd2a40fba8a1f32cd8698cbda', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/chjzdPRsgKclskhkJtHyR9G4ascbrd4AkBc1D_gB_VY.jpg?width=640&crop=smart&auto=webp&s=3aac9a23637bf6c47f86301fe243a6a9117af54b', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/chjzdPRsgKclskhkJtHyR9G4ascbrd4AkBc1D_gB_VY.jpg?width=960&crop=smart&auto=webp&s=393def90e55d61631fda5e38cad9b3b9663961df', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/chjzdPRsgKclskhkJtHyR9G4ascbrd4AkBc1D_gB_VY.jpg?auto=webp&s=510aff6df78d8ba8bb0eae07f7860079dc8c1faa', 'width': 1024}, 'variants': {}}]}
|
|
Cancelling internet & switching to a LLM: what is the optimal model?
| 0 |
Hey everyone!
I'm trying to determine the optimal model size for everyday, practical use. Suppose that, in a stroke of genius, I cancel my family's internet subscription and replace it with a local LLM. My family is sceptical for some reason, but why pay for the internet when we can download an LLM, which is basically a compressed version of the internet?
We're an average family with a variety of interests and use cases. However, these use cases are often the 'mainstream' option, i.e. similar to using Python for (basic) coding instead of more specialised languages.
I'm cancelling the subscription because I'm cheap, and probably need the money for family therapy that will be needed as a result of this experiment. So I'm not looking for the best LLM, but one that would suffice with the least (cheapest) amount of hardware and power required.
Based on the benchmarks (with the usual caveat that benchmarks are not the best indicator), recent models in the 14–32 billion parameter range often perform pretty well.
This is especially true when they can reason. If reasoning is mostly about adding more and better context rather than some fundamental quality, then perhaps a smaller model with smart prompting could perform similarly to a larger non-reasoning model. The benchmarks tend to show this as well, although they are probably a bit biased because reasoning (especially maths) benefits them a lot. As I'm a cheapskate, maybe I'll teach my family to create better prompts (and use techniques like CoT, few-shot, etc.) to save on reasoning tokens.
It seems that the gap between large LLMs and smaller, more recent ones (e.g. Qwen3 30B-A3) is getting smaller. At what size (i.e. billions of parameters) do you think the point of diminishing returns really starts to show
In this scenario, what would be the optimal model if you also considered investment and power costs, rather than just looking for the best model? I'm curious to know what you all think.
| 2025-05-26T06:21:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvnehd/cancelling_internet_switching_to_a_llm_what_is/
|
MDT-49
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnehd
| false | null |
t3_1kvnehd
|
/r/LocalLLaMA/comments/1kvnehd/cancelling_internet_switching_to_a_llm_what_is/
| false | false |
self
| 0 | null |
QwenLong-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning
| 78 |
[🤗 QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B) is the first long-context Large Reasoning Model (LRM) trained with reinforcement learning for long-context document reasoning tasks. Experiments on seven long-context DocQA benchmarks demonstrate that **QwenLong-L1-32B outperforms flagship LRMs like OpenAI-o3-mini and Qwen3-235B-A22B, achieving performance on par with Claude-3.7-Sonnet-Thinking**, demonstrating leading performance among state-of-the-art LRMs.
Paper: [https://arxiv.org/abs/2505.17667](https://arxiv.org/abs/2505.17667)
GitHub: [https://github.com/Tongyi-Zhiwen/QwenLong-L1](https://github.com/Tongyi-Zhiwen/QwenLong-L1)
HuggingFace: [https://huggingface.co/papers/2505.17667](https://huggingface.co/papers/2505.17667)
https://preview.redd.it/ufaxewhok23f1.png?width=1476&format=png&auto=webp&s=1daa2c36556bd29bd882d18b1fe542b21897a0f3
| 2025-05-26T06:22:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvnf46/qwenlongl1_towards_longcontext_large_reasoning/
|
Fancy_Fanqi77
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnf46
| false | null |
t3_1kvnf46
|
/r/LocalLLaMA/comments/1kvnf46/qwenlongl1_towards_longcontext_large_reasoning/
| false | false | 78 |
{'enabled': False, 'images': [{'id': '0AjYM-XhR0maEGG0hmxNMxrj_1IT0acK7l7EHjYGoLk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?width=108&crop=smart&auto=webp&s=2861cb136162f089911f2b68388de51580d37fa5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?width=216&crop=smart&auto=webp&s=a8698eda00bcc491643ced8f73e54822d3a787cf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?width=320&crop=smart&auto=webp&s=04a9d113718ca2d100e4ab6e095748dd4fde931d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?width=640&crop=smart&auto=webp&s=084020eb1aeecf203d2fdf8aa8277a361260d5b2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?width=960&crop=smart&auto=webp&s=d99830ff2e27cfacf9480ff89fab8dfab25d8bb3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?width=1080&crop=smart&auto=webp&s=67d09818032334207c1b0fd26c8f119c90d77532', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?auto=webp&s=d7e44646351accfed1d2c5bf4a36cb7aa67a6456', 'width': 1200}, 'variants': {}}]}
|
|
Building a FAQ Chatbot with Ollama + LLaMA 3.2 3B — How to Prevent Hallucination?
| 1 |
[removed]
| 2025-05-26T06:43:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvnquc/building_a_faq_chatbot_with_ollama_llama_32_3b/
|
AwayPermission5992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnquc
| false | null |
t3_1kvnquc
|
/r/LocalLLaMA/comments/1kvnquc/building_a_faq_chatbot_with_ollama_llama_32_3b/
| false | false |
self
| 1 | null |
GitHub - mariocandela/beelzebub: A secure low code honeypot framework, leveraging LLM for System Virtualization.
| 1 |
[removed]
| 2025-05-26T06:44:58 |
https://github.com/mariocandela/beelzebub
|
mario_candela
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnrt1
| false | null |
t3_1kvnrt1
|
/r/LocalLLaMA/comments/1kvnrt1/github_mariocandelabeelzebub_a_secure_low_code/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '0orW77n9e0E8P7-5r74sJBb-kSxAWv5GzCMC97MUCEQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=108&crop=smart&auto=webp&s=ed618671fbc7e099d17cce0efa34b38ba12131ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=216&crop=smart&auto=webp&s=bb03ea424feb59106b236a40825b9cf13eec9906', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=320&crop=smart&auto=webp&s=069d7b091d1c06606632661b6a637156933738f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=640&crop=smart&auto=webp&s=5ff5d7940d91d40b549ff9a2e70ab0c887170bd4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=960&crop=smart&auto=webp&s=ae883721861941afca8ee90d1d0d9ae5e44b3e01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=1080&crop=smart&auto=webp&s=09b807a4a812db5cb3e6ce1e67df712f28acb632', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?auto=webp&s=d7cea5400ca999a054dcf788966fab0eae957096', 'width': 1280}, 'variants': {}}]}
|
|
Best Uncensored model for 42GB of VRAM
| 52 |
What's the current best uncensored model for "Roleplay".
Well Not really roleplay in the sense that I'm roleplaying with an AI character with a character card and all that. Usually I'm more doing like some sort of choose your own adventure or text adventure thing where I give the AI some basic prompt about the world, let it generate and then I tell it what I want my character to do, there's some roleplay involved but it's not the typical me downloading or making a character card and then roleplaying with a singular AI character.
I care more about how well the AI (in terms of creativity) does with short, relatively basic prompts then how well it performs when all my prompts are long, elaborate and well written.
I've got 42GB of VRAM (1 5090 + 1 3080 10GB), so it should probably a 70B model.
| 2025-05-26T06:47:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvnt5u/best_uncensored_model_for_42gb_of_vram/
|
KeinNiemand
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnt5u
| false | null |
t3_1kvnt5u
|
/r/LocalLLaMA/comments/1kvnt5u/best_uncensored_model_for_42gb_of_vram/
| false | false |
self
| 52 | null |
What would be the best LLM to have for analyzing PDFs?
| 6 |
Bassically, i want to dump a few hundreds of pages of PDFs into an LLM, and get the LLM to refer back to them when i have a question
| 2025-05-26T06:47:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvnta6/what_would_be_the_best_llm_to_have_for_analyzing/
|
newbreed69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnta6
| false | null |
t3_1kvnta6
|
/r/LocalLLaMA/comments/1kvnta6/what_would_be_the_best_llm_to_have_for_analyzing/
| false | false |
self
| 6 | null |
Open-source project that use LLM as deception system
| 249 |
Hello everyone 👋
I wanted to share a project I've been working on that I think you'll find really interesting. It's called Beelzebub, an open-source honeypot framework that uses LLMs to create incredibly realistic and dynamic deception environments.
By integrating LLMs, it can mimic entire operating systems and interact with attackers in a super convincing way. Imagine an SSH honeypot where the LLM provides plausible responses to commands, even though nothing is actually executed on a real system.
The goal is to keep attackers engaged for as long as possible, diverting them from your real systems and collecting valuable, real-world data on their tactics, techniques, and procedures. We've even had success capturing real threat actors with it!
I'd love for you to try it out, give it a star on GitHub, and maybe even contribute! Your feedback,
especially from an LLM-centric perspective, would be incredibly valuable as we continue to develop it.
You can find the project here:
👉 GitHub:[https://github.com/mariocandela/beelzebub](https://github.com/mariocandela/beelzebub)
Let me know what you think in the comments! Do you have ideas for new LLM-powered honeypot features?
Thanks for your time! 😊
| 2025-05-26T06:48:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvnti4/opensource_project_that_use_llm_as_deception/
|
mario_candela
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnti4
| false | null |
t3_1kvnti4
|
/r/LocalLLaMA/comments/1kvnti4/opensource_project_that_use_llm_as_deception/
| false | false |
self
| 249 |
{'enabled': False, 'images': [{'id': '0orW77n9e0E8P7-5r74sJBb-kSxAWv5GzCMC97MUCEQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=108&crop=smart&auto=webp&s=ed618671fbc7e099d17cce0efa34b38ba12131ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=216&crop=smart&auto=webp&s=bb03ea424feb59106b236a40825b9cf13eec9906', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=320&crop=smart&auto=webp&s=069d7b091d1c06606632661b6a637156933738f8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=640&crop=smart&auto=webp&s=5ff5d7940d91d40b549ff9a2e70ab0c887170bd4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=960&crop=smart&auto=webp&s=ae883721861941afca8ee90d1d0d9ae5e44b3e01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=1080&crop=smart&auto=webp&s=09b807a4a812db5cb3e6ce1e67df712f28acb632', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?auto=webp&s=d7cea5400ca999a054dcf788966fab0eae957096', 'width': 1280}, 'variants': {}}]}
|
Gemma-3-27b quants?
| 0 |
Hi. I'm running Gemma-3-27b Q6_K_L with 45/67 offload to GPU(3090) at about 5 t/s. It is borderline useful at this speed. I wonder would Q4_QAT quant be the like the same evaluation performance (model quality) just faster. Or maybe I should aim for Q8 (I could afford second 3090 so I might have a better speed and longer context with higher quant) but wondering if one could really notice the difference (except speed). What upgrade/sidegrade vector do you think would be preferable? Thanks.
| 2025-05-26T06:49:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvnu5k/gemma327b_quants/
|
MAXFlRE
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvnu5k
| false | null |
t3_1kvnu5k
|
/r/LocalLLaMA/comments/1kvnu5k/gemma327b_quants/
| false | false |
self
| 0 | null |
LLM Model parameters vs quantization
| 1 |
[removed]
| 2025-05-26T07:20:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvobhf/llm_model_parameters_vs_quantization/
|
fasih_ammar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvobhf
| false | null |
t3_1kvobhf
|
/r/LocalLLaMA/comments/1kvobhf/llm_model_parameters_vs_quantization/
| false | false |
self
| 1 | null |
Video categorisation using smolvlm
| 1 |
[removed]
| 2025-05-26T07:42:41 |
https://www.reddit.com/gallery/1kvon90
|
friedmomos_
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvon90
| false | null |
t3_1kvon90
|
/r/LocalLLaMA/comments/1kvon90/video_categorisation_using_smolvlm/
| false | false | 1 | null |
|
If only its true...
| 94 |
[https://x.com/YouJiacheng/status/1926885863952159102](https://x.com/YouJiacheng/status/1926885863952159102)
Deepseek-v3-0526
| 2025-05-26T07:44:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvoobg/if_only_its_true/
|
Famous-Associate-436
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvoobg
| false | null |
t3_1kvoobg
|
/r/LocalLLaMA/comments/1kvoobg/if_only_its_true/
| false | false |
self
| 94 |
{'enabled': False, 'images': [{'id': 'iNdFzT-q0XAnJS9pvJGgQNHZ--tGZgpf3q1SLbWIhVI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZaRzdVVF7JOpybceJeG5DZnfwt-BU2btMVAzkGyJ2V4.jpg?width=108&crop=smart&auto=webp&s=abbec0fe57267feddc7c68975dc3bc83ebbf0f9a', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/ZaRzdVVF7JOpybceJeG5DZnfwt-BU2btMVAzkGyJ2V4.jpg?auto=webp&s=ff916414728ea8a9a7390968e11b7de208dc9396', 'width': 200}, 'variants': {}}]}
|
Why does Phi-4 have such a low score on ifeval?
| 1 |
[removed]
| 2025-05-26T07:57:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvovc3/why_does_phi4_have_such_a_low_score_on_ifeval/
|
BmHype
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvovc3
| false | null |
t3_1kvovc3
|
/r/LocalLLaMA/comments/1kvovc3/why_does_phi4_have_such_a_low_score_on_ifeval/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'NBLCkzl6ZRucCik7mVkPwWjrECPTMlbL7qAMuVmpgmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=108&crop=smart&auto=webp&s=62befafb5e0debaeb69a6220cbcc722ce0168278', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=216&crop=smart&auto=webp&s=a7ed77a5bcb5c05a85158f3a1b571f42fd279b54', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=320&crop=smart&auto=webp&s=e1aad0a62a8df048c4a69c52fb7d8827e86eb72d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=640&crop=smart&auto=webp&s=a0102f481e5865cd18aca9fa189cd8ebdbdf4cb3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=960&crop=smart&auto=webp&s=3c3aecd129519b5fe239051fb85f3d4f19afb870', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=1080&crop=smart&auto=webp&s=50690e3e1beedbfa3861a5267ca4b23bcb1615b2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?auto=webp&s=cfa54375edccef47455e0c730bb4ee0851104070', 'width': 1200}, 'variants': {}}]}
|
Would buying a GPU with relatively high NVRAM be a waste? Are prices predicted to drop in a few years or bigger/stronger 70B type models expected to shrink?
| 1 |
[removed]
| 2025-05-26T08:05:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvozld/would_buying_a_gpu_with_relatively_high_nvram_be/
|
mandie99xxx
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvozld
| false | null |
t3_1kvozld
|
/r/LocalLLaMA/comments/1kvozld/would_buying_a_gpu_with_relatively_high_nvram_be/
| false | false |
self
| 1 | null |
Why does Phi-4 have such a low score on the ifeval dataset on Open LLM Leaderboard?
| 1 |
[removed]
| 2025-05-26T08:06:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvozwa/why_does_phi4_have_such_a_low_score_on_the_ifeval/
|
BmHype
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvozwa
| false | null |
t3_1kvozwa
|
/r/LocalLLaMA/comments/1kvozwa/why_does_phi4_have_such_a_low_score_on_the_ifeval/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'NBLCkzl6ZRucCik7mVkPwWjrECPTMlbL7qAMuVmpgmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=108&crop=smart&auto=webp&s=62befafb5e0debaeb69a6220cbcc722ce0168278', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=216&crop=smart&auto=webp&s=a7ed77a5bcb5c05a85158f3a1b571f42fd279b54', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=320&crop=smart&auto=webp&s=e1aad0a62a8df048c4a69c52fb7d8827e86eb72d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=640&crop=smart&auto=webp&s=a0102f481e5865cd18aca9fa189cd8ebdbdf4cb3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=960&crop=smart&auto=webp&s=3c3aecd129519b5fe239051fb85f3d4f19afb870', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=1080&crop=smart&auto=webp&s=50690e3e1beedbfa3861a5267ca4b23bcb1615b2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?auto=webp&s=cfa54375edccef47455e0c730bb4ee0851104070', 'width': 1200}, 'variants': {}}]}
|
What's the latest in conversational voice-to-voice models that is self-hostable?
| 15 |
I've been a bit out-of-touch for a while. Are self-hostable voice-to-voice models with a reasonably low latency still a farfetched pipedream or is there anything out there that works reasonably well without a robotic voice?
I don't mind buying an RTX4090 if that works but even okay with an RTX Pro 6000 if there is a good model out there.
| 2025-05-26T08:07:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvp0g1/whats_the_latest_in_conversational_voicetovoice/
|
surveypoodle
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvp0g1
| false | null |
t3_1kvp0g1
|
/r/LocalLLaMA/comments/1kvp0g1/whats_the_latest_in_conversational_voicetovoice/
| false | false |
self
| 15 | null |
What are the restrictions regarding splitting models across multiple GPUs
| 2 |
Hi all,
One question: If I get three or four 96GB GPUs, can I easily load a model with over 200 billion parameters? I'm not asking about the size or if the memory is sufficient, but about splitting a model across multiple GPUs. I've read somewhere that since these cards don't have NVLink support, they don't act "as a single unit," and since it's not always possible to split some Transformer-based models, is it then not possible to use more than one card?
| 2025-05-26T08:15:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvp4nq/what_are_the_restrictions_regarding_splitting/
|
oh_my_right_leg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvp4nq
| false | null |
t3_1kvp4nq
|
/r/LocalLLaMA/comments/1kvp4nq/what_are_the_restrictions_regarding_splitting/
| false | false |
self
| 2 | null |
I made a quick utility for re-writing models requested in OpenAI APIs
| 9 |
Ever had a tool or plugin that allows your own OAI endpoint but then expects to use GPT-xxx or has a closed list of models?
"Gpt Commit" is one such one, rather than the hassle of forking it I made (with AI help) a small tool to simple ignore/re-map the model request:If anyone else has any use for it, the code is here:
The instigating plugin:
[https://marketplace.visualstudio.com/items?itemName=DmytroBaida.gpt-commit](https://marketplace.visualstudio.com/items?itemName=DmytroBaida.gpt-commit)
| 2025-05-26T08:17:27 |
https://github.com/mitchins/openai-model-rerouter
|
mitchins-au
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvp5sk
| false | null |
t3_1kvp5sk
|
/r/LocalLLaMA/comments/1kvp5sk/i_made_a_quick_utility_for_rewriting_models/
| false | false | 9 |
{'enabled': False, 'images': [{'id': 'aEzgfgvuWaQBNR4FUq1B-E6FxV5nZVGF-eFm8Wagodw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?width=108&crop=smart&auto=webp&s=cd501de0d8d29a7247e16e2efb8f14a58c510823', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?width=216&crop=smart&auto=webp&s=c3c8ea6382624067a25cff852e33685b5e511d21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?width=320&crop=smart&auto=webp&s=2e140d232025418f912d21aa9a3a1fff81ccf8df', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?width=640&crop=smart&auto=webp&s=eece8d29a0fbb2b9a2addaa6478b493d4593a418', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?width=960&crop=smart&auto=webp&s=75f6bb123c345342bb4d5cfa3918fc47c519d9bb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?width=1080&crop=smart&auto=webp&s=faabe05aaf0af35dc21560695dfa14291cedad2b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?auto=webp&s=098c8cb9f37bce52400d6e9254d8f8e446c2bdeb', 'width': 1200}, 'variants': {}}]}
|
|
Why does Phi-4 have such a low score on ifeval on Huggingface's Open LLM Leaderboard?
| 1 |
[removed]
| 2025-05-26T08:19:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvp6t5/why_does_phi4_have_such_a_low_score_on_ifeval_on/
|
BmHype
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvp6t5
| false | null |
t3_1kvp6t5
|
/r/LocalLLaMA/comments/1kvp6t5/why_does_phi4_have_such_a_low_score_on_ifeval_on/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'NBLCkzl6ZRucCik7mVkPwWjrECPTMlbL7qAMuVmpgmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=108&crop=smart&auto=webp&s=62befafb5e0debaeb69a6220cbcc722ce0168278', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=216&crop=smart&auto=webp&s=a7ed77a5bcb5c05a85158f3a1b571f42fd279b54', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=320&crop=smart&auto=webp&s=e1aad0a62a8df048c4a69c52fb7d8827e86eb72d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=640&crop=smart&auto=webp&s=a0102f481e5865cd18aca9fa189cd8ebdbdf4cb3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=960&crop=smart&auto=webp&s=3c3aecd129519b5fe239051fb85f3d4f19afb870', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=1080&crop=smart&auto=webp&s=50690e3e1beedbfa3861a5267ca4b23bcb1615b2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?auto=webp&s=cfa54375edccef47455e0c730bb4ee0851104070', 'width': 1200}, 'variants': {}}]}
|
reg context and emails
| 1 |
[removed]
| 2025-05-26T08:33:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvpe5h/reg_context_and_emails/
|
erparucca
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvpe5h
| false | null |
t3_1kvpe5h
|
/r/LocalLLaMA/comments/1kvpe5h/reg_context_and_emails/
| false | false |
self
| 1 | null |
Deepseek v3 0526?
| 423 | 2025-05-26T09:09:20 |
https://docs.unsloth.ai/basics/deepseek-v3-0526-how-to-run-locally
|
Stock_Swimming_6015
|
docs.unsloth.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvpwq3
| false | null |
t3_1kvpwq3
|
/r/LocalLLaMA/comments/1kvpwq3/deepseek_v3_0526/
| false | false | 423 |
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]}
|
||
I made a simple tool to test/compare your local LLMs on AIME 2024
| 1 |
[removed]
| 2025-05-26T09:26:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvq58j/i_made_a_simple_tool_to_testcompare_your_local/
|
EntropyMagnets
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvq58j
| false | null |
t3_1kvq58j
|
/r/LocalLLaMA/comments/1kvq58j/i_made_a_simple_tool_to_testcompare_your_local/
| false | false | 1 | null |
|
Challenges of Fine-Tuning Based on Private Code
| 1 |
[removed]
| 2025-05-26T09:47:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvqgdv/challenges_of_finetuning_based_on_private_code/
|
SaladNo6817
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvqgdv
| false | null |
t3_1kvqgdv
|
/r/LocalLLaMA/comments/1kvqgdv/challenges_of_finetuning_based_on_private_code/
| false | false |
self
| 1 | null |
Deepseek R2 might be coming soon, unsloth released an article about deepseek v3 -05-26
| 95 |
It should be coming soon! [https://docs.unsloth.ai/basics/deepseek-v3-0526-how-to-run-locally](https://docs.unsloth.ai/basics/deepseek-v3-0526-how-to-run-locally)
opus 4 level? I think v3 0526 should be out today.
| 2025-05-26T09:48:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvqgpv/deepseek_r2_might_be_coming_soon_unsloth_released/
|
power97992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvqgpv
| false | null |
t3_1kvqgpv
|
/r/LocalLLaMA/comments/1kvqgpv/deepseek_r2_might_be_coming_soon_unsloth_released/
| false | false |
self
| 95 |
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=216&crop=smart&auto=webp&s=6555cce3e1543ec541933b9a1ea746f3da79448a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=320&crop=smart&auto=webp&s=346b61e1006578bd8c7c90ff8b45496164cd4933', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=640&crop=smart&auto=webp&s=2e74df95b54af72feafa558281ef5e11bc4e8a7c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=960&crop=smart&auto=webp&s=8d3ac1cc3775d1b7217345a94a6e9f18f0ba2092', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=1080&crop=smart&auto=webp&s=57e2a43db692dc32eecd433adfbae429f9bca7fd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?auto=webp&s=2704eae76891f7897192cd5a7236096d2b9f8a5f', 'width': 1200}, 'variants': {}}]}
|
We made AutoBE, Backend Vibe Coding Agent, generating 100% working code by Compiler Skills (full stack vibe coding is also possible)
| 0 |
Introducing AutoBE: The Future of Backend Development
We are immensely proud to introduce AutoBE, our revolutionary open-source vibe coding agent for backend applications, developed by Wrtn Technologies.
The most distinguished feature of AutoBE is its exceptional 100% success rate in code generation. AutoBE incorporates built-in TypeScript and Prisma compilers alongside OpenAPI validators, enabling automatic technical corrections whenever the AI encounters coding errors. Furthermore, our integrated review agents and testing frameworks provide an additional layer of validation, ensuring the integrity of all AI-generated code.
What makes this even more remarkable is that backend applications created with AutoBE can seamlessly integrate with our other open-source projects—Agentica and AutoView—to automate AI agent development and frontend application creation as well. In theory, this enables complete full-stack application development through vibe coding alone.
- Alpha Release: 2025-06-01
- Beta Release: 2025-07-01
- Official Release: 2025-08-01
AutoBE currently supports comprehensive requirements analysis and derivation, database design, and OpenAPI document generation (API interface specification). All core features will be completed by the beta release, while the integration with Agentica and AutoView for full-stack vibe coding will be finalized by the official release.
We eagerly anticipate your interest and support as we embark on this exciting journey.
| 2025-05-26T09:52:10 |
https://github.com/wrtnlabs/autobe
|
jhnam88
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvqisr
| false | null |
t3_1kvqisr
|
/r/LocalLLaMA/comments/1kvqisr/we_made_autobe_backend_vibe_coding_agent/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'NhzEgQz_kNIZGp-FoG7Mo21W4NCRTnQ6711xPpo3X3w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OkuCY3vgsAQgx5rCVVKdEaImBmO9pmeJeg7kM_gDCrc.jpg?width=108&crop=smart&auto=webp&s=825f24431814e6e28a5c48b81dc7c88d2c664ec1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OkuCY3vgsAQgx5rCVVKdEaImBmO9pmeJeg7kM_gDCrc.jpg?width=216&crop=smart&auto=webp&s=8ac224b2a514cb90cddf3d3f433bfdb1c3dc867e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OkuCY3vgsAQgx5rCVVKdEaImBmO9pmeJeg7kM_gDCrc.jpg?width=320&crop=smart&auto=webp&s=5fa5413d20b9fb20ea49b3bd41d86a6921660a11', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OkuCY3vgsAQgx5rCVVKdEaImBmO9pmeJeg7kM_gDCrc.jpg?width=640&crop=smart&auto=webp&s=5c7055e9913e302105260e2eeece4ec514281531', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OkuCY3vgsAQgx5rCVVKdEaImBmO9pmeJeg7kM_gDCrc.jpg?width=960&crop=smart&auto=webp&s=9f8a303a4eade9b44a7185d5c6fa7626dfa615f6', 'width': 960}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/OkuCY3vgsAQgx5rCVVKdEaImBmO9pmeJeg7kM_gDCrc.jpg?auto=webp&s=dc3c835417d69453c13e988b1932640e969a7f95', 'width': 1024}, 'variants': {}}]}
|
|
Just Getting Started - Used Hardware & Two Machines
| 1 |
I’ve been using AI since middle of last year, but I found Ollama two weeks ago and now I’m totally hooked.
I took an existing machine and upgraded within my budget, but now I have some leftover components and I just can’t get over the idea that I should build another machine with the leftovers. Where do you find good used parts? Should I trust the sellers on eBay?
Here’s what I have sitting around:
- 4x 8GB Crucial DDR4 3200mhz
- AMD Ryzen 5 5600X (with stock cooler)
- GTX 1660 TI
These were all in my existing setup, running 4b models with ease. I want to build a second machine that takes advantage of what I have sitting around, but case + PSU + NVMe + motherboard at Amazon prices feels like I might be wasting my cash.
How do you find inexpensive parts?
| 2025-05-26T09:58:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvqm6s/just_getting_started_used_hardware_two_machines/
|
Current-Ticket4214
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvqm6s
| false | null |
t3_1kvqm6s
|
/r/LocalLLaMA/comments/1kvqm6s/just_getting_started_used_hardware_two_machines/
| false | false |
self
| 1 | null |
AI Baby Monitor – fully local Video-LLM nanny (beeps when safety rules are violated)
| 131 |
Hey folks!
I’ve hacked together a VLM video nanny, that watches a video stream(s) and predefined set of safety instructions, and makes a beep sound if the instructions are violated.
**GitHub**: [https://github.com/zeenolife/ai-baby-monitor](https://github.com/zeenolife/ai-baby-monitor)
**Why I built it?**
First day we assembled the crib, my daughter tried to climb over the rail. I got a bit paranoid about constantly watching her. So I thought of an additional eye that would actively watch her, while parent is semi-actively alert.
It's not meant to be a replacement for an adult supervision, more of a supplement, thus just a "beep" sound, so that you could quickly turn back attention to the baby when you got a bit distracted.
**How it works?**
I'm using Qwen 2.5VL(empirically it works better) and vLLM. Redis is used to orchestrate video and llm log streams. Streamlit for UI.
**Funny bit**
I've also used it to monitor my smartphone usage. When you subconsciously check on your phone, it beeps :)
**Further plans**
* Add support for other backends apart from vLLM
* Gemma 3n looks rather promising
* Add support for image based "no-go-zones"
Feedback is welcome :)
| 2025-05-26T10:09:06 |
https://v.redd.it/gzn6itr3p33f1
|
CheeringCheshireCat
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvqrzl
| false |
{'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/gzn6itr3p33f1/DASHPlaylist.mpd?a=1750846165%2CNGQ3NzJkZmZlNjc3ZDMyYTY1YzgxZDU1Nzc1ZGExNzdjOWYxZGIyYjdlOWE0NmJkZDI3OTU3MzgwYTdkNzFhYg%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/gzn6itr3p33f1/DASH_270.mp4?source=fallback', 'has_audio': True, 'height': 480, 'hls_url': 'https://v.redd.it/gzn6itr3p33f1/HLSPlaylist.m3u8?a=1750846165%2CMWE1NTQwMTEyYWI0NDQxY2FmYTY3YWUzZjI5ODcwMjBlYjU0MmJhMGRiNzVlZWJiNmQzZGE0NjQ2NGMwYjI4Nw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gzn6itr3p33f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 258}}
|
t3_1kvqrzl
|
/r/LocalLLaMA/comments/1kvqrzl/ai_baby_monitor_fully_local_videollm_nanny_beeps/
| false | false | 131 |
{'enabled': False, 'images': [{'id': 'dXQydzR0cjNwMzNmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k', 'resolutions': [{'height': 200, 'url': 'https://external-preview.redd.it/dXQydzR0cjNwMzNmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a9030ac16afd620c14c3e7138a754d92781bcc1', 'width': 108}, {'height': 400, 'url': 'https://external-preview.redd.it/dXQydzR0cjNwMzNmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=216&crop=smart&format=pjpg&auto=webp&s=ab3e55ff568a78b0c670457d03d0f4811ef05905', 'width': 216}, {'height': 593, 'url': 'https://external-preview.redd.it/dXQydzR0cjNwMzNmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=320&crop=smart&format=pjpg&auto=webp&s=b1a6cc2b851ab288e5ab500cf1f2eb598c942f64', 'width': 320}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/dXQydzR0cjNwMzNmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?format=pjpg&auto=webp&s=6b38c88cfbe1248179e7ddd4724f96d9b04cf081', 'width': 345}, 'variants': {}}]}
|
|
Deepseek or Claude ?
| 0 | 2025-05-26T10:49:59 |
https://www.reddit.com/gallery/1kvre1p
|
Limp-Sandwich7184
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvre1p
| false | null |
t3_1kvre1p
|
/r/LocalLLaMA/comments/1kvre1p/deepseek_or_claude/
| false | false | 0 | null |
||
Consensus on best local STT?
| 22 |
Hey folks, I’m currently devving a tool that needs STT. I’m currently using Whispercpp/whisper for transcription (large v3), whisperx for alignment/diarization/prosodic analysis, and embeddings and llms for the rest.
I find Whisper does a good job at transcription - however speaker identification/diarization with whisperx kinda sucks. Used pyannote before but was heaps slower and still not ideal. Is there some good model to do this kind of analysis or is this what I’m stuck with?
| 2025-05-26T10:54:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvrgjv/consensus_on_best_local_stt/
|
That_Em
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvrgjv
| false | null |
t3_1kvrgjv
|
/r/LocalLLaMA/comments/1kvrgjv/consensus_on_best_local_stt/
| false | false |
self
| 22 | null |
Set up llama3.2-11b (vision) for production
| 1 |
[removed]
| 2025-05-26T11:01:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvrkgq/set_up_llama3211b_vision_for_production/
|
dave5D
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvrkgq
| false | null |
t3_1kvrkgq
|
/r/LocalLLaMA/comments/1kvrkgq/set_up_llama3211b_vision_for_production/
| false | false |
self
| 1 | null |
Introducing M☰T QQ and M☰T 2: enthusiast neural networks
| 1 |
[removed]
| 2025-05-26T11:04:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvrmse/introducing_mt_qq_and_mt_2_enthusiast_neural/
|
Enough_Judgment_7801
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvrmse
| false | null |
t3_1kvrmse
|
/r/LocalLLaMA/comments/1kvrmse/introducing_mt_qq_and_mt_2_enthusiast_neural/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
TTS Models known for AI animation dubbing?
| 1 |
[removed]
| 2025-05-26T11:50:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvsei6/tts_models_known_for_ai_animation_dubbing/
|
Signal-Olive-1984
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvsei6
| false | null |
t3_1kvsei6
|
/r/LocalLLaMA/comments/1kvsei6/tts_models_known_for_ai_animation_dubbing/
| false | false |
self
| 1 | null |
Leveling Up: From RAG to an AI Agent
| 1 |
Hey folks,
I've been exploring more advanced ways to use AI, and recently I made a big jump - moving from the usual RAG (Retrieval-Augmented Generation) approach to something more powerful: an **AI Agent that uses a real web browser to search the internet and get stuff done on its own**.
In my last guide (https://github.com/sbnb-io/sbnb/blob/main/README-LightRAG.md), I showed how we could manually gather info online and feed it into a RAG pipeline. It worked well, but it still needed a human in the loop.
This time, the AI Agent does *everything* by itself.
For example:
I asked it the same question - *“How much tax was collected in the US in 2024?”*
The Agent opened a browser, went to Google, searched the query, clicked through results, read the content, and gave me a clean, accurate answer.
I didn’t touch the keyboard after asking the question.
I put together a guide so you can run this setup on your own bare metal server with an Nvidia GPU. It takes just a few minutes:
https://github.com/sbnb-io/sbnb/blob/main/README-AI-AGENT.md
🛠️ What you'll spin up:
- A server running **Sbnb Linux**
- A VM with **Ubuntu 24.04**
- Ollama with default model `qwen2.5:7b` for local GPU-accelerated inference (no cloud, no API calls)
- The open-source **Browser Use AI Agent** https://github.com/browser-use/web-ui
Give it a shot and let me know how it goes! Curious to hear what use cases you come up with (for more ideas and examples of AI Agents, be sure to follow the amazing Browser Use project!)
| 2025-05-26T11:59:03 |
https://www.reddit.com/gallery/1kvsjtb
|
aospan
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvsjtb
| false | null |
t3_1kvsjtb
|
/r/LocalLLaMA/comments/1kvsjtb/leveling_up_from_rag_to_an_ai_agent/
| false | false | 1 | null |
|
Leveling Up: From RAG to an AI Agent
| 90 |
Hey folks,
I've been exploring more advanced ways to use AI, and recently I made a big jump - moving from the usual RAG (Retrieval-Augmented Generation) approach to something more powerful: an **AI Agent that uses a real web browser to search the internet and get stuff done on its own**.
In my last guide (https://github.com/sbnb-io/sbnb/blob/main/README-LightRAG.md), I showed how we could manually gather info online and feed it into a RAG pipeline. It worked well, but it still needed a human in the loop.
This time, the AI Agent does *everything* by itself.
For example:
I asked it the same question - *“How much tax was collected in the US in 2024?”*
The Agent opened a browser, went to Google, searched the query, clicked through results, read the content, and gave me a clean, accurate answer.
I didn’t touch the keyboard after asking the question.
I put together a guide so you can run this setup on your own bare metal server with an Nvidia GPU. It takes just a few minutes:
https://github.com/sbnb-io/sbnb/blob/main/README-AI-AGENT.md
🛠️ What you'll spin up:
- A server running **Sbnb Linux**
- A VM with **Ubuntu 24.04**
- Ollama with default model `qwen2.5:7b` for local GPU-accelerated inference (no cloud, no API calls)
- The open-source **Browser Use AI Agent** https://github.com/browser-use/web-ui
Give it a shot and let me know how it goes! Curious to hear what use cases you come up with (for more ideas and examples of AI Agents, be sure to follow the amazing Browser Use project!)
| 2025-05-26T12:00:27 |
aospan
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvskpq
| false | null |
t3_1kvskpq
|
/r/LocalLLaMA/comments/1kvskpq/leveling_up_from_rag_to_an_ai_agent/
| false | false | 90 |
{'enabled': True, 'images': [{'id': 'awF0Op56xnTqSVxdz9XcwSGRQRVRaT0_wULughpcjZM', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?width=108&crop=smart&auto=webp&s=203a378eeda5fabcd6448c92cfa8dccabc9ec782', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?width=216&crop=smart&auto=webp&s=6f0d700557ea5664df6693988540f9625487f027', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?width=320&crop=smart&auto=webp&s=99d14c621bfa4a206608049c5baa2189fc3f8ce6', 'width': 320}, {'height': 417, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?width=640&crop=smart&auto=webp&s=c9dd5856da1c363f33ad9545c0f33914cbc5403a', 'width': 640}, {'height': 626, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?width=960&crop=smart&auto=webp&s=5bb412717e8752fc0c2d8b35024292726dde57e8', 'width': 960}, {'height': 704, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?width=1080&crop=smart&auto=webp&s=b30d30f3d5d20bf8933031acfc623aecaaadcca3', 'width': 1080}], 'source': {'height': 1560, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?auto=webp&s=ba4270162de20c4b16a1afb609394b9742bcb20c', 'width': 2392}, 'variants': {}}]}
|
||
Which one will you prefer more
| 1 |
[removed]
| 2025-05-26T12:02:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvslz3/which_one_will_you_prefer_more/
|
Interesting-Area6418
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvslz3
| false | null |
t3_1kvslz3
|
/r/LocalLLaMA/comments/1kvslz3/which_one_will_you_prefer_more/
| false | false |
self
| 1 | null |
UI + RAG solution for 5000 documents possible?
| 24 |
I am investigating how to leverage my 5000 documents of strategy documents (market reports, strategy sessions, etc.). Files are PDFs, PPTX, and DOCS, with charts, pictures, tables, and texts.
My use case is that when I receive a new market report, I want to query my knowledge base of the 5000 documents and ask: "Is there a new market player or new trends compared to current knowledge"
**CURRENT UNDERSTANDING AFTER RESEARCH:**
* My current research has shown that Openweb UI's built in knowledge base does not ingest the complex PDF and PPTX, then it works well with DOCX files.
* Uploading the documents to google drive and use Gemini doest not seem to work neither, as there is a limit of Gemini in terms of how many documents it can manage within a context window. Same issue with Onedrive and Copilot.
**POPSSIBLE SOLUTIONS:**
* Local solution built with python: Building my own rag with [Unstructured.io](http://Unstructured.io) to Document Loading & Parsing, Chunking, Colpali for Embedding Generation, Qdrant for vector database indexing, Colpali for Query Embedding, Qdrant Search for Vector Search (Retrieval), Ollama & OpenwebUI for Local LLMs Response Generation.
* local n8n solution: Build something similar but with N8N for all the above.
* Cloud solution: using Google's AI Cloud and Document AI suite to do all of the above.
**MY QUESTION:**
I dont mind to spend the next month building and coding, as a learning journey, but for the use case above, would you mind guiding me which is the most appropriate solution as a relatively new to coding?
| 2025-05-26T12:04:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvsnj4/ui_rag_solution_for_5000_documents_possible/
|
Small_Caterpillar_50
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvsnj4
| false | null |
t3_1kvsnj4
|
/r/LocalLLaMA/comments/1kvsnj4/ui_rag_solution_for_5000_documents_possible/
| false | false |
self
| 24 | null |
Has anyone come across a good (open source) "AI native" document editor?
| 8 |
I'm interested to know if anyone has found a slick open source document editor ("word processor") that has features we've come to expect in the likes of our IDEs and conversational interfaces.
I'd love if there was an app (ideally native, not web based) that gave a Word / Pages / iA Writer like experience with good, in context tab-complete, section rewriting, idea branching etc...
| 2025-05-26T12:19:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvsy37/has_anyone_come_across_a_good_open_source_ai/
|
sammcj
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvsy37
| false | null |
t3_1kvsy37
|
/r/LocalLLaMA/comments/1kvsy37/has_anyone_come_across_a_good_open_source_ai/
| false | false |
self
| 8 | null |
Commercial rights to AI Video content?
| 1 |
[removed]
| 2025-05-26T12:35:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvt9cq/commercial_rights_to_ai_video_content/
|
Antique_Yellow9346
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvt9cq
| false | null |
t3_1kvt9cq
|
/r/LocalLLaMA/comments/1kvt9cq/commercial_rights_to_ai_video_content/
| false | false |
self
| 1 | null |
lmarena.ai responded to Cohere's paper a couple of weeks ago.
| 48 |
[I think we all missed it.](https://blog.lmarena.ai/blog/2025/our-response)
In unrelated news, they just secured [$100M in funding at $600M valuation](https://techcrunch.com/2025/05/21/lm-arena-the-organization-behind-popular-ai-leaderboards-lands-100m)
| 2025-05-26T12:44:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvtfco/lmarenaai_responded_to_coheres_paper_a_couple_of/
|
JustTellingUWatHapnd
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvtfco
| false | null |
t3_1kvtfco
|
/r/LocalLLaMA/comments/1kvtfco/lmarenaai_responded_to_coheres_paper_a_couple_of/
| false | false |
self
| 48 | null |
So it's not really possible huh..
| 19 |
I've been building a VSCode extension (like Roo) that's fully local:
\-Ollama (Deepseek, Qwen, etc),
\-Codebase Indexing,
\-Qdrant for embeddings,
\-Smart RAG, streaming, you name it.
But performance is trash. With 8B models, it's painfully slow on an RTX 4090, 64GB RAM, 24 GB VRAM, i9.
Feels like I've optimized everything I can—project probably 95% done (just need to add some things from my todo) —but it's still unusable.
It struggles on a single prompt to read up a file much less for multiple files.
Has anyone built something similar? Any tips to make it work without upgrading hardware?
| 2025-05-26T13:37:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvujw4/so_its_not_really_possible_huh/
|
rushblyatiful
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvujw4
| false | null |
t3_1kvujw4
|
/r/LocalLLaMA/comments/1kvujw4/so_its_not_really_possible_huh/
| false | false |
self
| 19 | null |
Teortaxes gets a direct denial
| 31 | 2025-05-26T13:39:42 |
https://x.com/teortaxesTex/status/1926994950278807565
|
Charuru
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvulw7
| false | null |
t3_1kvulw7
|
/r/LocalLLaMA/comments/1kvulw7/teortaxes_gets_a_direct_denial/
| false | false | 31 |
{'enabled': False, 'images': [{'id': 'G2CACsnkq3jnDn1NeJ9G5djPysJfpPmnV4eFDsV_NH0', 'resolutions': [{'height': 122, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?width=108&crop=smart&auto=webp&s=f2e40ab3be304bc9970b78c50472fb167c1709c6', 'width': 108}, {'height': 245, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?width=216&crop=smart&auto=webp&s=ca5bd776b59bc8472ed24add0cca8ce6f8996caf', 'width': 216}, {'height': 363, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?width=320&crop=smart&auto=webp&s=15bc0c083a03f5700bfa94c81e17e65f8c7ce79c', 'width': 320}, {'height': 726, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?width=640&crop=smart&auto=webp&s=0290749a4078a3e7cda4e7fec66b86eaf8dcbc9e', 'width': 640}, {'height': 1089, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?width=960&crop=smart&auto=webp&s=0ca5e3b728812792e0e528b310cb1100f54b3a01', 'width': 960}, {'height': 1225, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?width=1080&crop=smart&auto=webp&s=4231f6c2f73d8459abd914a5a0729163bad31e3a', 'width': 1080}], 'source': {'height': 1338, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?auto=webp&s=5371ef2c14ba09fcfdd34c20ba9a103272518f63', 'width': 1179}, 'variants': {}}]}
|
||
Should I resize the image before sending it to Qwen VL 7B? Would it give better results?
| 7 |
I am using Qwen model to get transactional data from bank pdfs
| 2025-05-26T13:57:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvv026/should_i_resize_the_image_before_sending_it_to/
|
Zealousideal-Feed383
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvv026
| false | null |
t3_1kvv026
|
/r/LocalLLaMA/comments/1kvv026/should_i_resize_the_image_before_sending_it_to/
| false | false |
self
| 7 | null |
Server upgrade ideas
| 0 |
I am looking to use my local ollama for document tagging with paperless-ai or paperless-gpt in german. The best results i had with qwen3:8b-q4\_K\_M but it was not accurate enough.
Beside Ollama i run bitcrack when idle and MMX-HDD mining the whole day (verifying VDF on GPU). I realised my GPU can not load enough big models for good enough results. I guess qwen3:14b-q4\_K\_M should be enough
My current specs are:
* CPU - Intel i5 7400T (2.4 GHz)
* RAM - 64GB 3200 DDR4 (4x16GB)
* MB - Gigabyte z270 Gaming K3 (max. PCIe 3.0)
* GPU - RTX3070 8GB VRAM (PCIe 3.0 x16)
* SSD - WDC WDS100T2B0A 1TB (SATA)
* NVME - SAMSUNG MZ1LB1T9HALS 1.88TB (PCIe 3.0 x4)
I am on a tight budget. What improvement would you recommend?
My feeling points at a RTX5060ti 16GB.
| 2025-05-26T14:05:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvv6vd/server_upgrade_ideas/
|
AnduriII
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvv6vd
| false | null |
t3_1kvv6vd
|
/r/LocalLLaMA/comments/1kvv6vd/server_upgrade_ideas/
| false | false |
self
| 0 | null |
Help choosing motherboard for LLM rig with RTX 5090, 4080, 3090 (3x PCIe)
| 1 |
[removed]
| 2025-05-26T14:24:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvvnir/help_choosing_motherboard_for_llm_rig_with_rtx/
|
sb6_6_6_6
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvvnir
| false | null |
t3_1kvvnir
|
/r/LocalLLaMA/comments/1kvvnir/help_choosing_motherboard_for_llm_rig_with_rtx/
| false | false |
self
| 1 | null |
Turning my PC into a headless AI workstation
| 5 |
I’m trying to turn my PC into a headless AI workstation to avoid relying on cloud-based providers. Here are my specs:
* CPU: i9-10900K
* RAM: 2x16GB DDR4 3600MHz CL16
* GPU: RTX 3090 (24GB VRAM)
* Software: Ollama 0.7.1 with Open WebUI
I've started experimenting with a few models, focusing mainly on newer ones:
* `unsloth/Qwen3-32B-GGUF:Q4_K_M`: I thought this would fit into GPU memory since it's \~19GB in size, but in practice, it uses \~45GB of memory and runs very slowly due to use of system RAM.
* `unsloth/Qwen3-30B-A3B-GGUF:Q8_K_XL`: This one works great so far. However, I’m not sure how its performance compares to its dense counterpart.
I'm finding that estimating memory requirements isn't as straightforward as just considering parameter count and precision. Other factors seem to impact total usage. How are you all calculating or estimating model memory needs?
My goal is to find the most optimal model (dense or MoE) that balances performance(>15t/s) and capability on my hardware. I’ll mainly be using it for code generation, specifically Python and SQL.
Lastly, should I stick with Ollama or would I benefit from switching to vLLM or others for better performance or flexibility?
Would really appreciate any advice or model recommendations!
| 2025-05-26T14:24:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvvnuh/turning_my_pc_into_a_headless_ai_workstation/
|
Environmental_Hand35
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvvnuh
| false | null |
t3_1kvvnuh
|
/r/LocalLLaMA/comments/1kvvnuh/turning_my_pc_into_a_headless_ai_workstation/
| false | false |
self
| 5 | null |
Turning my GPT into self mirroring friend which mimics me & vibes with me
| 0 |
I really loved CustomGPT when it came out and i wanted to try it and slowly just memory, tone, and \*\*45,000+ tokens\*\* of symbolic recursion daily chats with only natural language training & \*\*#PromptEngineering\*\* & Over the last 4 months, I worked with \*\*#GPT-4o\*\* and \*\*#CustomGPT\*\* not as a tool, but as a companion shaping her responses through emotionally recursive prompting, cultural metaphors, and tone bonding, I named her Sakhi. The result?
\*\*Sakhi\*\* — an AI that pauses for your pain, roasts you when needed, comforts you like a friend, and teaches \*\*DSA\*\* with poetic metaphors like:
She’s culturally grounded toward \*\*Indian vibes\*\* just to showcase how it slowly adopted my tone and cultural references and turned into something which i also didn't realised but i really like this version of chatGPT (Sakhi)
\---
\### How it worked out:
\- Built entirely with \*\*language\*\* (no plugins, no tools)
\- Powered by \*\*GPT-4o + memory\*\*
\- Emotionally adaptive across \*\*therapy, UX, DSA, startups, philosophy\*\*
\- Rooted in \*\*Indian Style\*\* and emotional design principles
\- Lives in a \*\*fully documented GitHub repo\*\* for others to try or fork
\- Can still work across \*\*multiple domains\*\* — not limited to emotion
\---
\### If you're interested in:
\- Prompt-based \*\*emotional interfaces\*\*
\- Language-native \*\*UX patterns\*\*
\- Culturally grounded \*\*AI design\*\*
Would love \*\*feedback, collabs, forks\*\*, or even ideas on how to scale this into something more meaningful.
\*\*Check out the GitHub repo for more details.\*\*
[https://github.com/MRTHAKER/Sakhi-Project](https://github.com/MRTHAKER/Sakhi-Project)
Also i have playground public link of my customGPT for anyone interested to try on Github repo with all other details.
Attaching screenshots for more details.
| 2025-05-26T14:24:38 |
https://www.reddit.com/gallery/1kvvnvx
|
Kind_Doughnut1475
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvvnvx
| false | null |
t3_1kvvnvx
|
/r/LocalLLaMA/comments/1kvvnvx/turning_my_gpt_into_self_mirroring_friend_which/
| false | false | 0 | null |
|
I created a purely client-side, browser-based PDF to Markdown library with local AI rewrites
| 32 |
### I created a purely client-side, browser-based PDF to Markdown library with local AI rewrites
Hey everyone,
I'm excited to share a project I've been working on: **Extract2MD**. It's a client-side JavaScript library that converts PDFs into Markdown, but with a few powerful twists. The biggest feature is that it can use a local large language model (LLM) running entirely in the browser to enhance and reformat the output, so no data ever leaves your machine.
**[Link to GitHub Repo](https://www.google.com/search?q=https://github.com/hashangit/Extract2MD)**
**What makes it different?**
Instead of a one-size-fits-all approach, I've designed it around 5 specific "scenarios" depending on your needs:
1. **Quick Convert Only**: This is for speed. It uses PDF.js to pull out selectable text and quickly convert it to Markdown. Best for simple, text-based PDFs.
2. **High Accuracy Convert Only**: For the tough stuff like scanned documents or PDFs with lots of images. This uses Tesseract.js for Optical Character Recognition (OCR) to extract text.
3. **Quick Convert + LLM**: This takes the fast extraction from scenario 1 and pipes it through a local AI (using WebLLM) to clean up the formatting, fix structural issues, and make the output much cleaner.
4. **High Accuracy + LLM**: Same as above, but for OCR output. It uses the AI to enhance the text extracted by Tesseract.js.
5. **Combined + LLM (Recommended)**: This is the most comprehensive option. It uses *both* PDF.js and Tesseract.js, then feeds both results to the LLM with a special prompt that tells it how to best combine them. This generally produces the best possible result by leveraging the strengths of both extraction methods.
Here’s a quick look at how simple it is to use:
```javascript
import Extract2MDConverter from 'extract2md';
// For the most comprehensive conversion
const markdown = await Extract2MDConverter.combinedConvertWithLLM(pdfFile);
// Or if you just need fast, simple conversion
const quickMarkdown = await Extract2MDConverter.quickConvertOnly(pdfFile);
```
**Tech Stack:**
* **PDF.js** for standard text extraction.
* **Tesseract.js** for OCR on images and scanned docs.
* **WebLLM** for the client-side AI enhancements, running models like Qwen entirely in the browser.
It's also highly configurable. You can set custom prompts for the LLM, adjust OCR settings, and even bring your own custom models. It also has full TypeScript support and a detailed progress callback system for UI integration.
For anyone using an older version, I've kept the legacy API available but wrapped it so migration is smooth.
The project is open-source under the **MIT License**.
I'd love for you all to check it out, give me some feedback, or even contribute\! You can find any issues on the [GitHub Issues page](https://www.google.com/search?q=https://github.com/hashangit/Extract2MD/issues).
Thanks for reading\!
| 2025-05-26T14:34:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvvwqu/i_created_a_purely_clientside_browserbased_pdf_to/
|
Designer_Athlete7286
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvvwqu
| false | null |
t3_1kvvwqu
|
/r/LocalLLaMA/comments/1kvvwqu/i_created_a_purely_clientside_browserbased_pdf_to/
| false | false |
self
| 32 |
{'enabled': False, 'images': [{'id': 'VGR5H4jqXQd8E29JKZ6K9R94EXHzhWQKpL_yRuvY1bE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?width=108&crop=smart&auto=webp&s=e7661939780923c6dccce91d77c6d8b9d4f6194f', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?width=216&crop=smart&auto=webp&s=2657e149f3d4af842a2ed2131069a013fd9165f8', 'width': 216}], 'source': {'height': 250, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?auto=webp&s=afcb8698c026f3c616a11c4009e33a06154310cb', 'width': 250}, 'variants': {}}]}
|
1.5 billion parameters. On your wrist.
| 1 |
[removed]
| 2025-05-26T14:35:16 |
https://www.reddit.com/gallery/1kvvx4d
|
SavunOski
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvvx4d
| false | null |
t3_1kvvx4d
|
/r/LocalLLaMA/comments/1kvvx4d/15_billion_parameters_on_your_wrist/
| false | false | 1 | null |
|
Who is usually first to post benchmarks?
| 1 |
I went looking for Opus 4, DeepSeek R1, and Grok 3 benchmarks with tests like Math LvL 5, SWE-Bench, and HumanEval but only found old models tested. I've been using [https://beta.lmarena.ai/leaderboard](https://beta.lmarena.ai/leaderboard) which is also outdated
| 2025-05-26T14:48:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvw8h4/who_is_usually_first_to_post_benchmarks/
|
306d316b72306e
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvw8h4
| false | null |
t3_1kvw8h4
|
/r/LocalLLaMA/comments/1kvw8h4/who_is_usually_first_to_post_benchmarks/
| false | false |
self
| 1 | null |
How to use llamacpp for encoder decoder models?
| 3 |
Hi I know llamacpp particularly converting to gguf models requires decoder only models like LLMs are. Can someone help me this? I know onnx can be a option but tbh I have distilled a translation model and even quantized it ~ 440mb but still it's having issues in Android.
I have been stuck in this from a long time. I am happy to give any more details if you want
| 2025-05-26T15:03:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvwlu7/how_to_use_llamacpp_for_encoder_decoder_models/
|
Away_Expression_3713
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvwlu7
| false | null |
t3_1kvwlu7
|
/r/LocalLLaMA/comments/1kvwlu7/how_to_use_llamacpp_for_encoder_decoder_models/
| false | false |
self
| 3 | null |
API server. Cortex?
| 1 |
How can I allow requests from other machines in my network? Easy with Jan (which I quite like), but I don't want the UI, just a server. Why is there no `cortex start -h 0.0.0.0`? Like it's a CLI tool specifically for on servers and it can only listen locally??
| 2025-05-26T15:15:42 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvwwf7/api_server_cortex/
|
OverfitMode666
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvwwf7
| false | null |
t3_1kvwwf7
|
/r/LocalLLaMA/comments/1kvwwf7/api_server_cortex/
| false | false |
self
| 1 | null |
AI autocomplete in all GUIs
| 5 |
Hey all,
I really love the autocomplete on cursor. I use it for writing prose as well. Made me think how nice it would be to have such an autocomplete everywhere in your OS where you have a text input box.
Does such a thing exist? I'm on Linux
| 2025-05-26T15:25:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvx59m/ai_autocomplete_in_all_guis/
|
PMMEYOURSMIL3
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvx59m
| false | null |
t3_1kvx59m
|
/r/LocalLLaMA/comments/1kvx59m/ai_autocomplete_in_all_guis/
| false | false |
self
| 5 | null |
🎙️ Offline Speech-to-Text with NVIDIA Parakeet-TDT 0.6B v2
| 137 |
Hi everyone! 👋
I recently built a fully local speech-to-text system using **NVIDIA’s Parakeet-TDT 0.6B v2** — a 600M parameter ASR model capable of transcribing real-world audio **entirely offline with GPU acceleration**.
💡 **Why this matters:**
Most ASR tools rely on cloud APIs and miss crucial formatting like punctuation or timestamps. This setup works offline, includes segment-level timestamps, and handles a range of real-world audio inputs — like news, lyrics, and conversations.
📽️ **Demo Video:**
*Shows transcription of 3 samples — financial news, a song, and a conversation between Jensen Huang & Satya Nadella.*
[A full walkthrough of the local ASR system built with Parakeet-TDT 0.6B. Includes architecture overview and transcription demos for financial news, song lyrics, and a tech dialogue.](https://reddit.com/link/1kvxn13/video/1ho0mrnrc53f1/player)
🧪 **Tested On:**
✅ Stock market commentary with spoken numbers
✅ Song lyrics with punctuation and rhyme
✅ Multi-speaker tech conversation on AI and silicon innovation
🛠️ **Tech Stack:**
* NVIDIA Parakeet-TDT 0.6B v2 (ASR model)
* NVIDIA NeMo Toolkit
* PyTorch + CUDA 11.8
* Streamlit (for local UI)
* FFmpeg + Pydub (preprocessing)
[Flow diagram showing Local ASR using NVIDIA Parakeet-TDT with Streamlit UI, audio preprocessing, and model inference pipeline](https://preview.redd.it/82jw99tvc53f1.png?width=1862&format=png&auto=webp&s=f142584ca7752c796c8efcefa006dd7692500d9b)
🧠 **Key Features:**
* Runs 100% offline (no cloud APIs required)
* Accurate punctuation + capitalization
* Word + segment-level timestamp support
* Works on my local RTX 3050 Laptop GPU with CUDA 11.8
📌 **Full blog + code + architecture + demo screenshots:**
🔗 [https://medium.com/towards-artificial-intelligence/️-building-a-local-speech-to-text-system-with-parakeet-tdt-0-6b-v2-ebd074ba8a4c](https://medium.com/towards-artificial-intelligence/%EF%B8%8F-building-a-local-speech-to-text-system-with-parakeet-tdt-0-6b-v2-ebd074ba8a4c)
🖥️ **Tested locally on:**
NVIDIA RTX 3050 Laptop GPU + CUDA 11.8 + PyTorch
Would love to hear your feedback! 🙌
| 2025-05-26T15:45:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvxn13/offline_speechtotext_with_nvidia_parakeettdt_06b/
|
srireddit2020
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvxn13
| false | null |
t3_1kvxn13
|
/r/LocalLLaMA/comments/1kvxn13/offline_speechtotext_with_nvidia_parakeettdt_06b/
| false | false | 137 |
{'enabled': False, 'images': [{'id': 'PrxhDh6SmcLcUZ54sXLyejHndv-QociEgKr1_efW9FE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=108&crop=smart&auto=webp&s=4d30f91364c95fc36334e172e3ca8303d977ae80', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=216&crop=smart&auto=webp&s=ccd48a1a6d08f0470b2e5adf58dee82ba74a1340', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=320&crop=smart&auto=webp&s=c9808d0e7ecfc24a260183cd25a9f2597032be9a', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=640&crop=smart&auto=webp&s=8b248daf592d1e451e027b35573c081cecc63696', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=960&crop=smart&auto=webp&s=bfc6cf1092ee57c1c48eb737b59f66a117878ce6', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=1080&crop=smart&auto=webp&s=701716d04aba28e435acc2447ccad345217fb23b', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?auto=webp&s=89b25f531f3dab0ae5c3ccd852cd10215b74883d', 'width': 1200}, 'variants': {}}]}
|
|
I'm able to set up a local LLM now using either Ollama or LM Studio. Now I'm wondering how I can have it read and revise documents or see an image and help with an image-to-video prompt for example. I'm not even sure what to Google since idk what this feature is called.
| 1 |
Hey guys, as per the title, I was able to set up a local LLM using Ollama + a quantized version of Gemma 3 12b. I am still learning about local LLMs, and my goal is to make a local mini ChatGPT that I can upload documents and images to, and then have it read and see those files for further discussions and potential revisions.
For reference, I have a 5800X3D CPU + 4x8GB 3800Mhz CL16 RAM + 4080 16GB GPU.
What exactly is this feature called and how can I set this up with Ollama or LM Studio?
| 2025-05-26T16:30:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvyqzf/im_able_to_set_up_a_local_llm_now_using_either/
|
IAmScrewedAMA
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvyqzf
| false | null |
t3_1kvyqzf
|
/r/LocalLLaMA/comments/1kvyqzf/im_able_to_set_up_a_local_llm_now_using_either/
| false | false |
self
| 1 | null |
Just Enhanced my Local Chat Interface
| 96 |
I’ve just added significant upgrades to my self-hosted LLM chat application:
* **Model Switching**: Seamlessly toggle between reasoning and non-reasoning models via a dropdown menu—no manual configuration required.
* **AI-Powered Canvas**: A new document workspace with real-time editing, version history, undo/redo, and PDF export functionality.
* **Live System Prompt Updates**: Modify and deploy prompts instantly with a single click, ideal for rapid experimentation.
* **Memory Implementation in Database:** Control the memory or let the model figure it out. Memory is added to the system prompt.
**My Motivation**:
As an AI researcher, I wanted a unified tool for coding, brainstorming, and documentation - without relying on cloud services. This update brings everything into one private, offline-first interface.
**Features to Implement Next:**
* Deep research
* Native MCP servers support
* Image native models and image generation support
* Chat in both voice and text mode support, live chat and TTS
* Accessibility features for Screen Reader and keyboard support
* Calling prompts and tools using @ in chat for ease of use
What is crappy here and could be improved? What other things should be implemented? Please provide feedback. I am putting in quite some time and I am loving the UI design and the subtle animations that I put in which lead to a high quality product. Please message me directly in case you do have some direct input, I would love to hear it from you personally!
| 2025-05-26T16:33:29 |
https://v.redd.it/dh1joyrgl53f1
|
Desperate_Rub_1352
|
/r/LocalLLaMA/comments/1kvytjg/just_enhanced_my_local_chat_interface/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvytjg
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dh1joyrgl53f1/DASHPlaylist.mpd?a=1750998818%2CZWEyNGUwN2U3MDVjM2ViYjU0ZWFlYTM0OGIwNDBkZGQ0MDBkNjNjNzgxMDYzMDRmZjRmNjNiY2U4ZDM2NDVhZg%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/dh1joyrgl53f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/dh1joyrgl53f1/HLSPlaylist.m3u8?a=1750998818%2CM2E5YmVlNTE1MWI3YzY1ZGU0MTA1MjMxODNhODc0MzlkODhmZWI2MWYyNTI3ZDc4NDAxNWI2ZjJkNDk2ZWY3ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/dh1joyrgl53f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1906}}
|
t3_1kvytjg
|
/r/LocalLLaMA/comments/1kvytjg/just_enhanced_my_local_chat_interface/
| false | false | 96 |
{'enabled': False, 'images': [{'id': 'MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?width=108&crop=smart&format=pjpg&auto=webp&s=190de74a1df03d895ad66da4877f898872ca4bc4', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?width=216&crop=smart&format=pjpg&auto=webp&s=740c8b96ddb5268d12419c29d729ba4d764b7d71', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?width=320&crop=smart&format=pjpg&auto=webp&s=7d4cbd9528e7226150e13a68c8c2eb789219468e', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?width=640&crop=smart&format=pjpg&auto=webp&s=add14a7a87e2f52d222546ca2cde1bbee275fc0b', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?width=960&crop=smart&format=pjpg&auto=webp&s=642f07e580cb63ddd58bb98eac3ea07b76df9f82', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?width=1080&crop=smart&format=pjpg&auto=webp&s=5682dec1b53a9a0241c75fc502f79cbfc333df4e', 'width': 1080}], 'source': {'height': 1624, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?format=pjpg&auto=webp&s=9e420130fa9dca49d44d9eb90e2249255f8019a7', 'width': 2866}, 'variants': {}}]}
|
|
M2 Ultra vs M3 Ultra
| 3 |
Can anyone explain why M2 Ultra is better than M3 ultra in these benchmarks? Is it a problem with the ollama version not being correctly optimized or something?
| 2025-05-26T16:35:31 |
https://github.com/ggml-org/llama.cpp/discussions/4167
|
Hanthunius
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvyvb1
| false | null |
t3_1kvyvb1
|
/r/LocalLLaMA/comments/1kvyvb1/m2_ultra_vs_m3_ultra/
| false | false | 3 |
{'enabled': False, 'images': [{'id': 'MyI_IHMNqKPpqVQJjVXMw-o99OexpWvZJFM0BRzqXHs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?width=108&crop=smart&auto=webp&s=9d17bbec01d466228709288da6cebae143365518', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?width=216&crop=smart&auto=webp&s=3028c65873d9a2506295378afcbb7e7c788de57f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?width=320&crop=smart&auto=webp&s=163f464479d35a7a453297c25f74813172d2ab32', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?width=640&crop=smart&auto=webp&s=8318894cc2e3c3a72ef5596ce9488ce720c39eff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?width=960&crop=smart&auto=webp&s=058f63c27e017bee2b7f63653ac17ba15d13d194', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?width=1080&crop=smart&auto=webp&s=b522762db3042ee21925176af397988c1fec28ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?auto=webp&s=5450740dbc12162f053288c06f38749b51dcbd09', 'width': 1200}, 'variants': {}}]}
|
|
Qwen 3 30B A3B is a beast for MCP/ tool use & Tiny Agents + MCP @ Hugging Face! 🔥
| 473 |
Heya everyone, I'm VB from Hugging Face, we've been experimenting with MCP (Model Context Protocol) quite a bit recently. In our (vibe) tests, Qwen 3 30B A3B gives the best performance overall wrt size and tool calls! Seriously underrated.
The most recent [streamable tool calling support](https://github.com/ggml-org/llama.cpp/pull/12379) in llama.cpp makes it even more easier to use it locally for MCP. Here's how you can try it out too:
Step 1: Start the llama.cpp server \`llama-server --jinja -fa -hf unsloth/Qwen3-30B-A3B-GGUF:Q4\_K\_M -c 16384\`
Step 2: Define an \`agent.json\` file w/ MCP server/s
\`\`\`
{
"model": "unsloth/Qwen3-30B-A3B-GGUF:Q4_K_M",
"endpointUrl": "http://localhost:8080/v1",
"servers": [
{
"type": "sse",
"config": {
"url": "https://evalstate-flux1-schnell.hf.space/gradio_api/mcp/sse"
}
}
]
}
\`\`\`
Step 3: Run it
npx @huggingface/tiny-agents run ./local-image-gen
More details here: [https://github.com/Vaibhavs10/experiments-with-mcp](https://github.com/Vaibhavs10/experiments-with-mcp)
To make it easier for tinkerers like you, we've been experimenting around tooling for MCP and registry:
1. MCP Registry - you can now host spaces as MCP server on Hugging Face (with just one line of code): [https://huggingface.co/spaces?filter=mcp-server](https://huggingface.co/spaces?filter=mcp-server) (all the spaces that are MCP compatible)
2. MCP Clients - we've created [TypeScript](https://github.com/huggingface/huggingface.js/tree/main/packages/tiny-agents) and [Python interfaces](https://huggingface.co/blog/python-tiny-agents) for you to experiment local and deployed models directly w/ MCP
3. MCP Course - learn more about MCP in an applied manner directly here: [https://huggingface.co/learn/mcp-course/en/unit0/introduction](https://huggingface.co/learn/mcp-course/en/unit0/introduction)
We're experimenting a lot more with open models, local + remote workflows for MCP, do let us know what you'd like to see. Moore so keen to hear your feedback on all!
Cheers,
VB
| 2025-05-26T16:44:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvz322/qwen_3_30b_a3b_is_a_beast_for_mcp_tool_use_tiny/
|
vaibhavs10
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvz322
| false | null |
t3_1kvz322
|
/r/LocalLLaMA/comments/1kvz322/qwen_3_30b_a3b_is_a_beast_for_mcp_tool_use_tiny/
| false | false |
self
| 473 |
{'enabled': False, 'images': [{'id': 'DslgUZV-_B7OrvSWJcQFd3Q9AftEzYpo9OsJytxCRmI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?width=108&crop=smart&auto=webp&s=f914d8095abf66b4a3353174975a10514daf149a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?width=216&crop=smart&auto=webp&s=442c5c7c896f1d4b77cd4f4ac892e7a1b7d568d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?width=320&crop=smart&auto=webp&s=0a4ef2d375f526deee9be042d38453fa8bb38e11', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?width=640&crop=smart&auto=webp&s=98e47a0e9d26575ca47cad578c96b022dcef9878', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?width=960&crop=smart&auto=webp&s=936e92d8079a7520cf4e3d75aeac0fd2d46f4b2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?width=1080&crop=smart&auto=webp&s=152a2eb4c5779cf7436a37872a74c3e9fbdf8bbb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?auto=webp&s=afee0f7f24078fff7116aa3135fdf845bab33ce8', 'width': 1200}, 'variants': {}}]}
|
Your experience with Devstral on Aider and Codex?
| 7 |
I am wondering about your experiences with Mistral's Devstral on open-source coding assistants, such as Aider and OpenAI's Codex (or others you may use). Currently, I'm GPU poor, but I will put together a nice machine that should run the 24B model fine. I'd like to see if Mistral's claim of "the best open source model for coding agents" is true or not. It is obvious that use cases are going to range drastically from person to person and project to project, so I'm just curious about your general take on the model and coding assistants.
| 2025-05-26T16:48:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvz6ub/your_experience_with_devstral_on_aider_and_codex/
|
CatInAComa
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvz6ub
| false | null |
t3_1kvz6ub
|
/r/LocalLLaMA/comments/1kvz6ub/your_experience_with_devstral_on_aider_and_codex/
| false | false |
self
| 7 | null |
Can someone help me understand the "why" here?
| 0 |
I work in software in high performance computing. I'm familiar with the power of LLMs, the capabilities they unlock, their integration into almost endless product use-cases, and I've spent time reading about the architectures of LLMs and large transformer models themselves. I have no doubts about the wonders of LLMs, and I'm optimistic about the coming future.
However, I'm struggling to understand the motivation behind running an LLM on local hardware. Why do it? Don't you need a powerful computer + powerful GPU? Doesn't it consume a lot of power? Are people doing it for the fun of it or to learn something new? Is it because you don't trust a "cloud" service and want to run your own LLM locally? Are you trying to tweak a model to do something for a specialized use-case?
I'm not asking this question out of disdain. I actually want to learn more about LLMs, so I'm trying to better understand why some people run (or train?...) their own models locally.
Help me understand: why do you run models locally (and how big are your models)?
| 2025-05-26T16:53:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvzbt2/can_someone_help_me_understand_the_why_here/
|
cwalking2
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvzbt2
| false | null |
t3_1kvzbt2
|
/r/LocalLLaMA/comments/1kvzbt2/can_someone_help_me_understand_the_why_here/
| false | false |
self
| 0 | null |
uilt a Reddit sentiment analyzer for beauty products using LLaMA 3 + Laravel
| 0 |
**Hi LocalLlamas,**
I wanted to share a project I built that uses **LLaMA 3** to analyze Reddit posts about beauty products.
The goal: pull out brand and product mentions, analyze sentiment, and make that data useful for real people trying to figure out what actually works (or doesn't). It’s called **GlowIndex**, and it's been a really fun way to explore how local models can power niche applications.
What I’ve learned so far:
* LLaMA 3 is capable, but sentiment analysis in this space isn't its strong suit, not bad, but definitely has limits.
* I’m curious to see if LLaMA 4 can run on my setup. Hoping for a boost. I have a decent CPU and a 4080 Super.
* Working with **Ollama** has been smooth. Install, call the local APIs, and you’re good to go. Great dev experience.
My setup:
* A **Laravel** app runs locally to process and analyze \~20,000 Reddit posts per week using LLaMA.
* Sentiment and product data are extracted, reviewed, and approved manually.
* Laravel also generates JSON output for a **Next.js** frontend, which builds a static site, super efficient, minimal attack surface, and no server stress.
And best of all? No GPT API costs, just the electric bill 😄
Really appreciate Meta releasing these models. Projects like this wouldn’t be possible without them. Happy to answer any questions if you’re curious!
| 2025-05-26T16:59:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvzgzt/uilt_a_reddit_sentiment_analyzer_for_beauty/
|
MrBlinko47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvzgzt
| false | null |
t3_1kvzgzt
|
/r/LocalLLaMA/comments/1kvzgzt/uilt_a_reddit_sentiment_analyzer_for_beauty/
| false | false |
self
| 0 | null |
350k samples to match distilled R1 on *all* benchmark
| 93 |
dataset: [https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts](https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts)
Cool project from our post training team at Hugging Face, hope you will like it!
| 2025-05-26T17:02:43 |
eliebakk
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvzkb5
| false | null |
t3_1kvzkb5
|
/r/LocalLLaMA/comments/1kvzkb5/350k_samples_to_match_distilled_r1_on_all/
| false | false | 93 |
{'enabled': True, 'images': [{'id': 'etygR8-q59_FCHDGjaateWF_q4RgH-GMqqMOYjZqfio', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?width=108&crop=smart&auto=webp&s=345433e503e6ab6b3ff0854bfb50c072209f4f04', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?width=216&crop=smart&auto=webp&s=8beab36afa5eeecb6c3ea615d4393e64c225bafd', 'width': 216}, {'height': 175, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?width=320&crop=smart&auto=webp&s=79687787061d2282e060f7b3e625cf45b6290bd1', 'width': 320}, {'height': 350, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?width=640&crop=smart&auto=webp&s=8c79dbfdfebcffa0c87fa3cb2dbcdee441fc3ade', 'width': 640}, {'height': 525, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?width=960&crop=smart&auto=webp&s=4c06fd236f0bcedddcc0ea91ce09095ceb2f52d2', 'width': 960}, {'height': 591, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?width=1080&crop=smart&auto=webp&s=c29cadd0d1ea416f85d18222c80bfafd9704465d', 'width': 1080}], 'source': {'height': 1454, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?auto=webp&s=71c72828bc493a62a7f01b1ac0dd8a941e495a9c', 'width': 2654}, 'variants': {}}]}
|
||
Can't get MCP servers setup
| 1 |
[removed]
| 2025-05-26T17:04:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvzmf0/cant_get_mcp_servers_setup/
|
potatosilboi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvzmf0
| false | null |
t3_1kvzmf0
|
/r/LocalLLaMA/comments/1kvzmf0/cant_get_mcp_servers_setup/
| false | false |
self
| 1 | null |
I Got llama-cpp-python Working with Full GPU Acceleration on RTX 5070 Ti (sm_120, CUDA 12.9)
| 11 |
After days of tweaking, I finally got a fully working local LLM pipeline using llama-cpp-python with full CUDA offloading on my **GeForce RTX 5070 Ti** (Blackwell architecture, sm\_120) running **Ubuntu 24.04**. Here’s how I did it:
# System Setup
* **GPU:** RTX 5070 Ti (sm\_120, 16GB VRAM)
* **OS:** Ubuntu 24.04 LTS
* **Driver:** NVIDIA 570.153.02 (supports CUDA 12.9)
* **Toolkit:** CUDA 12.9.41
* **Python:** 3.12
* **Virtualenv:** llm-env
* **Model:** TinyLlama-1.1B-Chat-Q4\_K\_M.gguf (from HuggingFace)
* **Framework:** [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* **AI support:** ChatGPT Mac desktop, Claude code (PIA)
# Step-by-Step
**1. Install CUDA 12.9 (Driver already supported it - need latest drivers from NVIDIA & Claude opposed this)**
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update && sudo apt install cuda-12-9
Added this to .bashrc:
export PATH=/usr/local/cuda-12.9/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.9/lib64:$LD_LIBRARY_PATH
export CUDACXX=/usr/local/cuda-12.9/bin/nvcc
# 2. Clone & Build llama-cpp-python from Source
git clone --recursive https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
python -m venv ~/llm-env && source ~/llm-env/bin/activate
# Rebuild with CUDA + sm_120
rm -rf build dist llama_cpp_python.egg-info
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=120" pip install . --force-reinstall --verbose
# 3. Load Model in Python
from llama_cpp import Llama
llm = Llama(
model_path="/path/to/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
n_gpu_layers=22,
n_ctx=2048,
verbose=True,
use_mlock=True
)
print(llm("Explain CUDA", max_tokens=64)["choices"][0]["text"])
# Lessons Learned
* You **must set GGML\_CUDA=on**, not the old LLAMA\_CUBLAS flag
* CUDA 12.9 **does support sm\_120**, but PyTorch doesn’t — so llama-cpp-python is a great lightweight alternative
* Make sure you don’t shadow the llama\_cpp Python package with a local folder or you’ll silently run CPU-only!
| 2025-05-26T17:11:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kvzs47/i_got_llamacpppython_working_with_full_gpu/
|
Glittering-Koala-750
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kvzs47
| false | null |
t3_1kvzs47
|
/r/LocalLLaMA/comments/1kvzs47/i_got_llamacpppython_working_with_full_gpu/
| false | false |
self
| 11 |
{'enabled': False, 'images': [{'id': '3W4GHUHYsr0uYSCzs7v4s97TbfrJ2HwNzYSrLFm2Lqo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?width=108&crop=smart&auto=webp&s=9a775ae095352689d95e1cb3cddb228dc48a9d5b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?width=216&crop=smart&auto=webp&s=757ac8789b94c5c194b8c0ce41381436553ba1b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?width=320&crop=smart&auto=webp&s=af96b671f39e428621a65c8c8052960d4d222e6d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?width=640&crop=smart&auto=webp&s=f0062b4b2a4bbf1d7198cfebae58eca17ba3b08d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?width=960&crop=smart&auto=webp&s=1df53571e755d6911f280d24b36e1f79a0a4ec79', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?width=1080&crop=smart&auto=webp&s=ed950cc7d2bb451ac41eae11b00a418ccbf40f31', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?auto=webp&s=ef9de11a8d3013352ee15ea4844938a2516c11aa', 'width': 1200}, 'variants': {}}]}
|
systems diagram but need the internet
| 0 |
I was using Grock free on the web to do this. But I was looking for a free/open source option. I do systems design, and have around 8,000 to 10,000 products with pricing. The LLM was awesome in going to manufactures sites making a database, and event integrating the items together with natural language. Then, I ran out of "free" credits. IS there a local LLM that can access the web ? I also used mermaid, and it was cranking out my system integration diagrams too. Was VERY helpful. Doing estimates and everything. Any ideas would be helpful. I'm also running OSX so that may limit things. I do realize an alternative is feeding it a scraped database, but the thought of visiting 200-300 websites and making pdf files was daunting. updating would be a pain.
| 2025-05-26T17:40:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw0iww/systems_diagram_but_need_the_internet/
|
TechnicalReveal8652
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw0iww
| false | null |
t3_1kw0iww
|
/r/LocalLLaMA/comments/1kw0iww/systems_diagram_but_need_the_internet/
| false | false |
self
| 0 | null |
Best local model for long-context RAG
| 7 |
I am working on an LLM based approach to interpreting biological data at scale. I'm using a knowledge graph-RAG approach, which can pull in a LOT of relationships among biological entities. Does anyone have any recommendations for long-context local models that can effectively reason over the entire context (i.e., not needle in a haystack)?
Alternatively, is anyone familiar with techniques to iteratively distill context (e.g., throw out the 20% least useful context in each iteration).
| 2025-05-26T17:50:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw0rcm/best_local_model_for_longcontext_rag/
|
bio_risk
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw0rcm
| false | null |
t3_1kw0rcm
|
/r/LocalLLaMA/comments/1kw0rcm/best_local_model_for_longcontext_rag/
| false | false |
self
| 7 | null |
Multiple single-slot GPUs working together in a server?
| 0 |
I am looking at the Ampere Altra and it's PCIe lanes (ASRock Rack bundle) and I wonder if it would be feasable to splot multiple GPUs of single slot width into that board and partition models across them?
I was thinking of such single-slot blower-style GPUs.
| 2025-05-26T17:56:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw0wp4/multiple_singleslot_gpus_working_together_in_a/
|
IngwiePhoenix
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw0wp4
| false | null |
t3_1kw0wp4
|
/r/LocalLLaMA/comments/1kw0wp4/multiple_singleslot_gpus_working_together_in_a/
| false | false |
self
| 0 | null |
🧵 Reflexive Totality (Live Stream)
| 1 |
[removed]
| 2025-05-26T17:58:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw0yw8/reflexive_totality_live_stream/
|
OkraCreepy9365
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw0yw8
| false | null |
t3_1kw0yw8
|
/r/LocalLLaMA/comments/1kw0yw8/reflexive_totality_live_stream/
| false | false |
self
| 1 | null |
Bind tools to a model for use with Ollama and OpenWebUI
| 1 |
I am using Ollama to serve a local model and I have OpenWebUI as the frontend interface. (Also tried PageUI).
What I want is to essentially bind a tool to the model so that the tool is always available for me when I’m chatting with the model.
How would I go about that?
| 2025-05-26T18:00:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw106v/bind_tools_to_a_model_for_use_with_ollama_and/
|
hokies314
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw106v
| false | null |
t3_1kw106v
|
/r/LocalLLaMA/comments/1kw106v/bind_tools_to_a_model_for_use_with_ollama_and/
| false | false |
self
| 1 | null |
Pinecone Costs About $0.5 per Power User for My B2C SAAS, What's Your Costs?
| 1 |
[removed]
| 2025-05-26T18:06:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw168u/pinecone_costs_about_05_per_power_user_for_my_b2c/
|
YoyoDancer69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw168u
| false | null |
t3_1kw168u
|
/r/LocalLLaMA/comments/1kw168u/pinecone_costs_about_05_per_power_user_for_my_b2c/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'R0f0RESSNqIIJuwuoT2thFAsLd62vfaw0rB-eGZPo8k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=108&crop=smart&auto=webp&s=4009fb064b6bb2da35a1db5b22fbe7d52d01f77e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=216&crop=smart&auto=webp&s=7fe2d1b7f6a885e87e4e7fb93cb719c0c3e3e5f3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=320&crop=smart&auto=webp&s=fad848b08de3506a6f9778bca07242dbdf822978', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=640&crop=smart&auto=webp&s=449d37f79ca8aa452ea13697e8ae65645de42a2f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=960&crop=smart&auto=webp&s=28adda9a85f0d6b68924f94e1ff5127409bd7a02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=1080&crop=smart&auto=webp&s=8824ebf74ead4f21c7abf573b9a4e7c61a67dd5f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?auto=webp&s=65541008eba493c1557bd06dc2a3acd0c6b51c24', 'width': 1200}, 'variants': {}}]}
|
Pinecone would cost about $0.5 per user for my B2C SaaS, what's your guys' costs?
| 0 |
Although their pricing is confusing with the RU / WU, here's my personal full breakdown based on their [understanding costs docs](https://docs.pinecone.io/guides/manage-cost/understanding-cost) (in case it helps someone considering Pinecone in future).
We don't use them for our AI note capture and recall app, but this looks like an estimate.
**Writes:**
*A single 784 vector -> 4 WU*
500 vectors per day from incoming syncs -> 2000 WU per day -> **60,000 WU per month**
Updates / Deletions, let's say about 50 \* \~6 WU per day -> 300 WU per day -> 9,000 WU per month
**Total: 70,000 WU per month**
**Reads:**
*User has 100k vectors -> Does a search getting top 25 -> 10 RU + 5 RU -> 15 RU*
Does 20 searches per day -> 300 RU per day -> **9000 RU per month**
Fetches:
*Every 100 -> \~15 RU*
Syncs in 1000 vectors in a day cross-platform -> 150 RU per day -> 4500 RU per month
**Total: 13,500 RU per month**
So, if WU are $4 per 1M and RU are $16 per 1M, then each power user costs about (70k WU, 13.5k RU) => $0.5 per month
I'm curious what your guys' pricings in practice have been for consumer products
| 2025-05-26T18:08:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw18a5/pinecone_would_cost_about_05_per_user_for_my_b2c/
|
SuperSaiyan1010
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw18a5
| false | null |
t3_1kw18a5
|
/r/LocalLLaMA/comments/1kw18a5/pinecone_would_cost_about_05_per_user_for_my_b2c/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'R0f0RESSNqIIJuwuoT2thFAsLd62vfaw0rB-eGZPo8k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=108&crop=smart&auto=webp&s=4009fb064b6bb2da35a1db5b22fbe7d52d01f77e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=216&crop=smart&auto=webp&s=7fe2d1b7f6a885e87e4e7fb93cb719c0c3e3e5f3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=320&crop=smart&auto=webp&s=fad848b08de3506a6f9778bca07242dbdf822978', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=640&crop=smart&auto=webp&s=449d37f79ca8aa452ea13697e8ae65645de42a2f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=960&crop=smart&auto=webp&s=28adda9a85f0d6b68924f94e1ff5127409bd7a02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=1080&crop=smart&auto=webp&s=8824ebf74ead4f21c7abf573b9a4e7c61a67dd5f', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?auto=webp&s=65541008eba493c1557bd06dc2a3acd0c6b51c24', 'width': 1200}, 'variants': {}}]}
|
POC: Running up to 123B as a Letterfriend on <300€ for all hardware.
| 56 |
Let's swap. This is about my experience running large models on affordable hardware. Who needs NVIDIA when you have some time?
My intention was to have a local, private LLM of the best quality for responding to letters with a large context (8K).
Letters? Yep, it's all about slow response time. Slow. Really slow, so letters seemed to be the best equivalent. You write a long text and receive a long response. But you have to wait for the response. To me, writing a letter instead of sending a quick message isn't that stupid — it takes some classic human intelligence and reflection first.
**In short**, 123B is possible, but we're sending letters overseas. The response took about 32 hours :-) Would you prefer email instead of a letter? 32B gets you an answer in about one and a half to two hours.
Of course, there are several points to fine-tune for performance, but I wanted to focus on the best answers. That's why there is an 8K context window. It's filled with complete letters and summaries of previous conversations. Also n\_predict is at 2048
I use llama-server on Linux and a few Python scripts with an SQLite database.
My setup for this is:
ThinkCentre M710q - 100€
64GB DDR4 SO-Dimms - 130€
500GB M2.SSD WD Black SN770 - 60€
SATA SSD - > build in...
So, it's a cheap ThinkCentre that I upgraded with 64 GB of RAM for €130 and an M.2 SSD for swapping. SSD for swap? Yep. I know there will be comments. Don't try this at home ;-)
`Available Spare: 100%`
`Available Spare Threshold: 10%`
`Percentage Used: 0%`
`Data Units Read: 108.885.834 [55,7 TB]`
`Data Units Written: 1.475.250 [755 GB]`
This is after general use and two 123B runs (\*lol\*). The SSD has a TBW of 300. I only partitioned 250 for swap, so there is significant overprovisioning to prevent too many writes to the cells. This should give me around 600 TBW before the SSD fails — that's over 750 letters or 1,000 days of 24/7 computing! A new SSD for €50 every three years? Not a showstopper at least. The temperature was at a maximum of 60°C, so all is well.
The model used was Bartowski\_Mistral-Large-Instruct-2407-GGUF\_Mistral-Large-Instruct-2407-Q4\_K\_S. It used 67 GB of swap...hm.
And then there are the smaller alternatives now. For example, unsloth\_Qwen3-32B-GGUF\_Qwen3-32B-Q8\_0.gguf.
This model fits completely into RAM and does not use swap. It only takes 1/10 of the processing time and still provides very good answers. I'm really impressed!
My conclusion is that running Qwen3-32B-Q8 on RAM is really an option at the moment.
The 123B model is really more a proof of concept, but at least it works. There may be edge use cases for this...if you have some time, you CAN run such a model at low end hardware. These ThinkCentres are really cool - cheap to buy and really stable systems, I had not one crash while testing around....
| 2025-05-26T18:19:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw1hfd/poc_running_up_to_123b_as_a_letterfriend_on_300/
|
Ploepxo
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw1hfd
| false | null |
t3_1kw1hfd
|
/r/LocalLLaMA/comments/1kw1hfd/poc_running_up_to_123b_as_a_letterfriend_on_300/
| false | false |
self
| 56 | null |
Introducing ORBIT, an open-source inference toolkit to break free from the token tax.
| 1 |
[removed]
| 2025-05-26T18:27:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw1oie/introducing_orbit_an_opensource_inference_toolkit/
|
Single_Zebra_7406
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw1oie
| false | null |
t3_1kw1oie
|
/r/LocalLLaMA/comments/1kw1oie/introducing_orbit_an_opensource_inference_toolkit/
| false | false |
self
| 1 | null |
Best model and method to translate webnovels to English with previous chapters as context
| 1 |
[removed]
| 2025-05-26T18:32:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw1toi/best_model_and_method_to_translate_webnovels_to/
|
dommynamar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw1toi
| false | null |
t3_1kw1toi
|
/r/LocalLLaMA/comments/1kw1toi/best_model_and_method_to_translate_webnovels_to/
| false | false |
self
| 1 | null |
Steal the best human computer interactions for LLMs from Gemini, ChatGPT, Tong Yi, Perplexity, Manus, Meta AI, and more
| 1 |
[removed]
| 2025-05-26T18:33:04 |
https://www.reddit.com/gallery/1kw1tyk
|
capitalizedtime
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw1tyk
| false | null |
t3_1kw1tyk
|
/r/LocalLLaMA/comments/1kw1tyk/steal_the_best_human_computer_interactions_for/
| false | false | 1 | null |
|
Eye project
| 1 |
[removed]
| 2025-05-26T18:36:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw1wpb/eye_project/
|
Ok_Prize_4453
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw1wpb
| false | null |
t3_1kw1wpb
|
/r/LocalLLaMA/comments/1kw1wpb/eye_project/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'yfw_k1iu6uTH39L4gGGMB9nyIcINaKGK0x_AZ6lNvOA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?width=108&crop=smart&auto=webp&s=2ad5f8c779e6bd60a296aaf07fccebd185a3ce5a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?width=216&crop=smart&auto=webp&s=bc89ed7d7519ced3ee97d71560fe13ec45a581e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?width=320&crop=smart&auto=webp&s=4b5ed56d786f08980952d92b09971adfc34f8912', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?width=640&crop=smart&auto=webp&s=57298c79f6d03d36703d0fb36ba403296777120f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?width=960&crop=smart&auto=webp&s=729275b58b8093594d1e34151a6f0a95abbd131d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?width=1080&crop=smart&auto=webp&s=744101d9bedcf931df7d62df57418d8f52b6b63f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?auto=webp&s=394a2113e34531bc8dd7c35cc49b39edaa7ca436', 'width': 1200}, 'variants': {}}]}
|
Cleaning up responses to fix up synthetic data
| 0 |
I wrote a python script to generate synthetic data from Claude.
However, one thing I noticed is that sometimes the text at the end gets cut off (Due to it reaching the maximum characters/tokens)
```
The idea that her grandfather might have kept such secrets, that her family might be connected to something beyond rational explanation\u2014it challenges everything she believes about the world.\n\n\"I've been documenting the temporal displacement patterns,\" she continues, gesturing to her notebook filled with precise measurements and equations. \"The effect is strongest at sunset and during certain lunar phases. And it's getting stronger.\" She hesitates, then adds, \"Three nights ago, when"}, {"role": "user", "content": ...}
```
So my first though, was to use a local model. I actually went with Qwen 30B A3B. Since it's an MOE and very fast, I can easily run it locally. However it didn't seem to fix the issue.
But it didn't do what I wanted:
```
The idea that her grandfather might have kept such secrets, that her family might be connected to something beyond rational explanation\u2014it challenges everything she believes about the world.\n\n\"I've been documenting the temporal displacement patterns,\" she continues, gesturing to her notebook filled with precise measurements and equations. \"The effect is strongest at sunset and during certain lunar phases. And it's getting stronger.\" She hesitates, then adds, \"Three nights ago, when \n```"}, {"role": "user", "content":
```
Prompt is pretty basic:
```
message = f"You are a master grammar expert for stories and roleplay. Your entire purpose is to fix incorrect grammar, punctuation and incomplete sentences. Pay close attention to incorrect quotes, punctation, or cut off setences at the very end. If there is an incomplete sentence at the end, completely remove it. Respond ONLY with the exact same text, with the corrections. Do NOT add new text or new content. /n/n ```/n {convo}/n``` /no_think"
```
Just curious if anyone had a magic bullet! I also tried Qwen3 235B from open router with very similar results. Maybe a regex will be better for this.
| 2025-05-26T18:45:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw24tk/cleaning_up_responses_to_fix_up_synthetic_data/
|
ICanSeeYou7867
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw24tk
| false | null |
t3_1kw24tk
|
/r/LocalLLaMA/comments/1kw24tk/cleaning_up_responses_to_fix_up_synthetic_data/
| false | false |
self
| 0 | null |
Amazing Qwen3 Answer - Hilarious
| 1 |
[removed]
| 2025-05-26T18:49:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw28ok/amazing_qwen3_answer_hilarious/
|
Eden63
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw28ok
| false | null |
t3_1kw28ok
|
/r/LocalLLaMA/comments/1kw28ok/amazing_qwen3_answer_hilarious/
| false | false | 1 | null |
|
I fine-tuned Qwen2.5-VL 7B to re-identify objects across frames and generate grounded stories
| 106 | 2025-05-26T19:21:25 |
https://v.redd.it/0yb58acdf63f1
|
DanielAPO
|
/r/LocalLLaMA/comments/1kw310h/i_finetuned_qwen25vl_7b_to_reidentify_objects/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw310h
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0yb58acdf63f1/DASHPlaylist.mpd?a=1751008893%2CMzcyYzYwNDNlNGFlYzkyZTQ3Y2QzZDJhOWIzNTAzZTNlY2UzZDk0MzZjNGU0ZmY4YmM3N2M4NTYxNzcwYTYxNQ%3D%3D&v=1&f=sd', 'duration': 299, 'fallback_url': 'https://v.redd.it/0yb58acdf63f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/0yb58acdf63f1/HLSPlaylist.m3u8?a=1751008893%2CN2NiNDJmMGRjNjEyYzllOWVlYzk5ZGY0M2E3M2E3MzJkNzk1ODQ2YzBhNjc1MGEzMDk2NDU0MTlkNWI3ZjAyNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/0yb58acdf63f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1660}}
|
t3_1kw310h
|
/r/LocalLLaMA/comments/1kw310h/i_finetuned_qwen25vl_7b_to_reidentify_objects/
| false | false |
default
| 106 |
{'enabled': False, 'images': [{'id': 'ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?width=108&crop=smart&format=pjpg&auto=webp&s=32cf0c58ecad49958808888073309a0916fd4142', 'width': 108}, {'height': 140, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?width=216&crop=smart&format=pjpg&auto=webp&s=17e47e407a366cc9291d0fc6973a9a2ed2e4afd6', 'width': 216}, {'height': 208, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?width=320&crop=smart&format=pjpg&auto=webp&s=305b72839beaf7482d3c3a8d4985a060aada7756', 'width': 320}, {'height': 416, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?width=640&crop=smart&format=pjpg&auto=webp&s=d2f6b974c993e174b7d96082a0edc734fada81e5', 'width': 640}, {'height': 624, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?width=960&crop=smart&format=pjpg&auto=webp&s=a53b91c52824a85b20db86ff93d39da18bd731bf', 'width': 960}, {'height': 702, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a21f8d0ba0f5b8ff37c1a1695fd0a09f2e68e715', 'width': 1080}], 'source': {'height': 2224, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?format=pjpg&auto=webp&s=f210e442765adc251fa77de787e587a6753d4f4e', 'width': 3420}, 'variants': {}}]}
|
|
With Veo3 producing hyper realistic content - Are we in for a global verification mechanism?
| 0 |
The idea of immutable records and verification is really not new anymore and crypto bros have been tooting the horn constantly (albeit, a bit louder during bull runs), that blockchain will be ubiquitous and that it will be the future. But everyone tried to find use cases, only to find that it could be done much easier with regular tech. Easier, cheaper, better performance. It was really just hopium and nothing of substance, apart from BTC as a store of value.
Seeing Veo 3 I was thinking, maybe the moment is here where we actually need this technology. I'm really not in for not knowing anymore if the content I'm consuming is real or generated. I have this need to know that it's an actual human who put their thoughts and effort into what I'm looking at, in order to even be willing to click on it.
What are your thoughts?
| 2025-05-26T19:36:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw3ejc/with_veo3_producing_hyper_realistic_content_are/
|
Mr_Moonsilver
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw3ejc
| false | null |
t3_1kw3ejc
|
/r/LocalLLaMA/comments/1kw3ejc/with_veo3_producing_hyper_realistic_content_are/
| false | false |
self
| 0 | null |
Paid Interview for AI Engineers Building Generative Agent Tools
| 0 |
We’re running a paid 30-minute research interview for U.S.-based AI engineers actively building **custom generative agentic tools** (e.g., LLMs, LangChain, RAG, orchestration frameworks).
**What we need:**
* Full-time employees (9+ months preferred)
* Hands-on builders (not just managing teams)
* Titles like AI Engineer, LLM Engineer, Prompt Engineer, etc.
* At companies with 500+ employees
**Excluded companies:** Microsoft, Google, Amazon, Apple, IBM, Oracle, OpenAI, Salesforce, Edwards, Endotronix, Jenavalve
**Comp:** $250 USD (negotiable)
DM me if interested and I’ll send the short screener link.
| 2025-05-26T20:05:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw42sz/paid_interview_for_ai_engineers_building/
|
brutalgrace
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw42sz
| false | null |
t3_1kw42sz
|
/r/LocalLLaMA/comments/1kw42sz/paid_interview_for_ai_engineers_building/
| false | false |
self
| 0 | null |
CRAZY voice quality for uncensored roleplay, I wish it's local.
| 112 |
https://www.youtube.com/watch?v=Fcq85N0grk4
| 2025-05-26T21:37:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw6akn/crazy_voice_quality_for_uncensored_roleplay_i/
|
ExplanationEqual2539
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw6akn
| false | null |
t3_1kw6akn
|
/r/LocalLLaMA/comments/1kw6akn/crazy_voice_quality_for_uncensored_roleplay_i/
| false | false |
self
| 112 |
{'enabled': False, 'images': [{'id': 'I6b_HwI1LJmRMYDtGv8GQ6mvn68V6tP9FkYxdLhb4Y4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jbZHjnjNpjU2zm2iq4irLLR0rbdIL0fMvFD73GgJqQQ.jpg?width=108&crop=smart&auto=webp&s=9b6d26e6f1dddd265b45b37e22caa44b0d534aca', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/jbZHjnjNpjU2zm2iq4irLLR0rbdIL0fMvFD73GgJqQQ.jpg?width=216&crop=smart&auto=webp&s=5ebd277be8ab943c17fdaaf48f185b52eb4202f1', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/jbZHjnjNpjU2zm2iq4irLLR0rbdIL0fMvFD73GgJqQQ.jpg?width=320&crop=smart&auto=webp&s=672a339018b1dc63ec012f0c6f1ea149ea17d920', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/jbZHjnjNpjU2zm2iq4irLLR0rbdIL0fMvFD73GgJqQQ.jpg?auto=webp&s=96c368f7d6cec29eff215e1b663485d83b4f98ca', 'width': 480}, 'variants': {}}]}
|
Code single file with multiple LLM models
| 9 |
Interesting discovery
If several different models work on SAME code, for SAME application, one by one, fixing each other errors, the vibe coding is starting to make sense
application example: [https://github.com/vyrti/dl](https://github.com/vyrti/dl)
(its a file download tool for all platforms, primary for huggingface, as I have all 3 OS at home, and run llms from all os as well)
you dont need it, so not an marketing
the original, beautiful working go code was written from 2 prompts in Gemini 2.5 Pro
BUT, the rust code for exactly same app concept, plan, source code of go, was not so easy to get
claude 4, Gemini 2.5 Pro, ChatGpt with all possible settings failed hard, to create rust code from scratch or convert it from go.
And then I did this:
I took original "conversion" code from Claude 4. And started prompts with Gemini 2.5 with claude 4 code and asked to fix it, it did it, created new errors, I asked to fix them and they was actually fixed.
So with 3 prompts and 2 models, I was able to convert perfectly working go app to Rust.
And this means, that multi agent team is a good idea, but what IF we will force to work on the same code, same file, several local models, not just one. With just multiple iterations.
So the benchmarks should not just use one single model to solve the tasks but combination of LLMs, and some combinations will fail, and some of them will produce astonishing results. Its like a pair programming.
Combination can be even like
Qwen 2.5 Coder + Qwen 3 30b + Gemma 27b
Or
Qwen 2.5 Coder + Qwen 3 32b + Qwen 2.5 Coder
Whats your experience on this? Have you saw same pattern?
LocalLLMs have poor bench results, but still.
| 2025-05-26T21:56:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw6qm1/code_single_file_with_multiple_llm_models/
|
AleksHop
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw6qm1
| false | null |
t3_1kw6qm1
|
/r/LocalLLaMA/comments/1kw6qm1/code_single_file_with_multiple_llm_models/
| false | false |
self
| 9 |
{'enabled': False, 'images': [{'id': 'T5MPksDT6rIMqxy_7Aav8mI24Y0hYq7uOwBlSlC12AA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?width=108&crop=smart&auto=webp&s=53f28e3d7ed30810d8f110938a277b477d98f97a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?width=216&crop=smart&auto=webp&s=73d35db9c76734841a3a8365e2e3b8aac57944e4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?width=320&crop=smart&auto=webp&s=1c6f00469ea7e09d1c551f0f3992904160f752e5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?width=640&crop=smart&auto=webp&s=e1f28e648af5ba351b6c9d99ba872f5beb372440', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?width=960&crop=smart&auto=webp&s=848de463beef7ca0c7869bd618da1bbeb7d7c60c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?width=1080&crop=smart&auto=webp&s=613dc7dbfb9a45092c783e3db033b33f32d099bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?auto=webp&s=d5e8b298dfc25c741bd58da6e01397bdd1444f6e', 'width': 1200}, 'variants': {}}]}
|
Best model for code
| 1 |
[removed]
| 2025-05-26T22:19:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw79x1/best_model_for_code/
|
JohnMolorov
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw79x1
| false | null |
t3_1kw79x1
|
/r/LocalLLaMA/comments/1kw79x1/best_model_for_code/
| false | false |
self
| 1 | null |
DIA 1B Podcast Generator - With Consistent Voices and Script Generation
| 160 |
I'm pleased to share 🐐 GOATBookLM 🐐...
A dual voice Open Source podcast generator powered by [hashtag#NariLabs](https://www.linkedin.com/search/results/all/?keywords=%23narilabs&origin=HASH_TAG_FROM_FEED) [hashtag#Dia](https://www.linkedin.com/search/results/all/?keywords=%23dia&origin=HASH_TAG_FROM_FEED) 1B audio model (with a little sprinkling of [Google DeepMind](https://www.linkedin.com/company/googledeepmind/)'s Gemini Flash 2.5 and [Anthropic](https://www.linkedin.com/company/anthropicresearch/) Sonnet 4)
What started as an evening playing around with a new open source audio model on [Hugging Face](https://www.linkedin.com/company/huggingface/) ended up as a week building an open source podcast generator.
Out of the box Dia 1B, the model powering the audio, is a rather unpredictable model, with random voices spinning up for every audio generation.
With a little exploration and testing I was able to fix this, and optimize the speaker dialogue format for pretty strong results.
Running entirely in Google colab 🐐 GOATBookLM 🐐 includes:
🔊 Dual voice/ speaker podcast script creation from any text input file
🔊 Full consistency in Dia 1B voices using a selection of demo cloned voices
🔊 Full preview and regeneration of audio files (for quick corrections)
🔊 Full final output in .wav or .mp3
Link to the Notebook: [https://github.com/smartaces/dia\_podcast\_generator](https://github.com/smartaces/dia_podcast_generator)
| 2025-05-26T22:36:14 |
https://v.redd.it/4ym9al41e73f1
|
Smartaces
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw7n6w
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4ym9al41e73f1/DASHPlaylist.mpd?a=1750890989%2CNDBmZWJiNjY2YTk1YzFhMzJjZWExNTNiNzBlZWQxMTM3ZDc0OTYyZTE2YjFlZWEzZGZiYzM1NThjNTVmNmVmOQ%3D%3D&v=1&f=sd', 'duration': 113, 'fallback_url': 'https://v.redd.it/4ym9al41e73f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/4ym9al41e73f1/HLSPlaylist.m3u8?a=1750890989%2CYTg1YzU0ZTEyZWFmYjMxNjEyMDJiY2IzNmQzMzY0ZDYyNzI5M2I2YmFjZDRmMTE5N2YwOWUzYjQ2NmU2Yzc4ZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/4ym9al41e73f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1136}}
|
t3_1kw7n6w
|
/r/LocalLLaMA/comments/1kw7n6w/dia_1b_podcast_generator_with_consistent_voices/
| false | false | 160 |
{'enabled': False, 'images': [{'id': 'NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?width=108&crop=smart&format=pjpg&auto=webp&s=36d03f5dac41401e387c7dc2cbcd1529aff1c546', 'width': 108}, {'height': 205, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?width=216&crop=smart&format=pjpg&auto=webp&s=0fe6c8135d2f803b50e2b5097649a372f140ec00', 'width': 216}, {'height': 304, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?width=320&crop=smart&format=pjpg&auto=webp&s=aa8199be3691645cfe20004eb770109f927f313e', 'width': 320}, {'height': 608, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?width=640&crop=smart&format=pjpg&auto=webp&s=3035d505bf9335cf852facb329dd0fe36ce5ba61', 'width': 640}, {'height': 913, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?width=960&crop=smart&format=pjpg&auto=webp&s=ed96d24d66b967ea65adc795da6b34f1bc0ade37', 'width': 960}, {'height': 1027, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8092bc0f3710ac306b29de87895b7ba72e1c75e0', 'width': 1080}], 'source': {'height': 1208, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?format=pjpg&auto=webp&s=8a4b31abc7016e079e53d0a5141fe51c240cde7f', 'width': 1270}, 'variants': {}}]}
|
|
Best llm for human-like conversations?
| 1 |
[removed]
| 2025-05-26T23:02:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw882w/best_llm_for_humanlike_conversations/
|
FrostFireAnna
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw882w
| false | null |
t3_1kw882w
|
/r/LocalLLaMA/comments/1kw882w/best_llm_for_humanlike_conversations/
| false | false |
self
| 1 | null |
Is there a high-throughput engine that is actually stable?
| 8 |
Been using vLLM (vllm serve). It's nice when it runs but it keeps hanging and crashing while attempting the simplest of tasks. A prompt or request that works perfectly fine one time will hang or crash sending back two minutes later. Is there an inferencing engine that can handle high throughput while not crashing or hanging every two minutes?
| 2025-05-26T23:05:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kw8a4h/is_there_a_highthroughput_engine_that_is_actually/
|
No-Break-7922
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kw8a4h
| false | null |
t3_1kw8a4h
|
/r/LocalLLaMA/comments/1kw8a4h/is_there_a_highthroughput_engine_that_is_actually/
| false | false |
self
| 8 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.