title (string, 1-300) | score (int64, 0-8.54k) | selftext (string, 0-40k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2025-06-30 03:16:29, ⌀) | url (string, 0-878) | author (string, 3-20) | domain (string, 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2025-06-26 17:30:18) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646-1.8k, ⌀) | name (string, 10) | permalink (string, 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4-213) | ups (int64, 0-8.54k) | preview (string, 301-5.01k, ⌀) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
| 1 |
[removed]
| 2025-05-23T17:10:53 |
Dazzling-Cap744
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktoxs3
| false | null |
t3_1ktoxs3
|
/r/LocalLLaMA/comments/1ktoxs3/slm_rag_arena_compare_and_find_the_best_sub5b/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '-tBDynN3aOYvaGDf87ECfSf_gSqwScy_y6cLEPEQwI4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?width=108&crop=smart&auto=webp&s=2c2c84b53b44a68c2cdca664c9343ebebb59efc2', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?width=216&crop=smart&auto=webp&s=55a6542fd074c80e57810764f86493f227bf959b', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?width=320&crop=smart&auto=webp&s=ba96acab92119a990368df39f25bc3466b8a12e3', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?width=640&crop=smart&auto=webp&s=b1bd6b21c9e5d01351b4185f4695919571f0282b', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?width=960&crop=smart&auto=webp&s=9c68b80a72b2d75fc8f57c1ba1f95e7cc2e99129', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?width=1080&crop=smart&auto=webp&s=04efa07f94338e26a4e5ea809702f1903857b12c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/0m03gj5edk2f1.png?auto=webp&s=564462a3f11a135ad55ebc033ef649f946dc093b', 'width': 1920}, 'variants': {}}]}
|
||
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
| 1 |
[removed]
| 2025-05-23T17:16:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktp2x5/slm_rag_arena_compare_and_find_the_best_sub5b/
|
unseenmarscai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktp2x5
| false | null |
t3_1ktp2x5
|
/r/LocalLLaMA/comments/1ktp2x5/slm_rag_arena_compare_and_find_the_best_sub5b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=108&crop=smart&auto=webp&s=ed166e32fba7f49963d6e6f4ebb00f2f43107edd', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=216&crop=smart&auto=webp&s=86584c085d67d679780ce3e3c8f4b99d7c83c2fd', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=320&crop=smart&auto=webp&s=0bef120a1faf393d2c788a5880804d1a23b794b0', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=640&crop=smart&auto=webp&s=0fc7b4bc69cd91d8b9e94d5f19215e52ef3e2f3c', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=960&crop=smart&auto=webp&s=96aaf6fc72e5220e660fdef7450f7fd68c59c4f5', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?width=1080&crop=smart&auto=webp&s=2191f9fc2861e2f5f60ee6be428edd2c9c1dbb1b', 'width': 1080}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/_FjCaxo25scY-3sX7SU5Z8CDINogT-Qd5F471W8a1B8.jpg?auto=webp&s=7678a6de32a2f7a617f2aeb602274e6aea9cd92b', 'width': 1536}, 'variants': {}}]}
|
|
SLM RAG Arena - Compare and Find The Best Sub-5B Models for RAG
| 1 |
[removed]
| 2025-05-23T17:24:07 |
unseenmarscai
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktpa1p
| false | null |
t3_1ktpa1p
|
/r/LocalLLaMA/comments/1ktpa1p/slm_rag_arena_compare_and_find_the_best_sub5b/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Wiy8bgTQIsEOBtN3IGjgdD5PhQH5CdrnsaKRkpLuy2c', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?width=108&crop=smart&auto=webp&s=1dec5ee96a287ff88885638f5342b5e566216f0d', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?width=216&crop=smart&auto=webp&s=7d3dcafeead8bd857837919b2d18d39f88efba55', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?width=320&crop=smart&auto=webp&s=237ec70837856b0c3a37cf662e5b152b336fb2cb', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?width=640&crop=smart&auto=webp&s=6f3db0a44cff840af85a3a44070a9389a0ff7679', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?width=960&crop=smart&auto=webp&s=6aae15573141eaf117fe33e55ea2e9dc054a075d', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?width=1080&crop=smart&auto=webp&s=5bd8e073b3c8820b86186c476a0c8a8266eb91fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/o8lcrzwefk2f1.png?auto=webp&s=31d79975cca513f3f6477cfcdc6406a2ba1626fd', 'width': 1920}, 'variants': {}}]}
|
||
SLM RAG Arena - Compare and Find Good Sub-5B Models for RAG
| 1 |
[removed]
| 2025-05-23T17:28:13 |
unseenmarscai
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktpduj
| false | null |
t3_1ktpduj
|
/r/LocalLLaMA/comments/1ktpduj/slm_rag_arena_compare_and_find_good_sub5b_models/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'fcHPsHhRwO8U3PrPTckzvN2tQXdzRQjpdecuIfjDvfI', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?width=108&crop=smart&auto=webp&s=a0aa7222432b91615c5184de314d7b62b49a7335', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?width=216&crop=smart&auto=webp&s=c85a738ec88831cca7e1cbfa313e229cf74c4065', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?width=320&crop=smart&auto=webp&s=c12db190ad6c1087390e5b019fe3c9638627d61b', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?width=640&crop=smart&auto=webp&s=4de536b8d0f467b6f45ad9854eea8224245eee25', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?width=960&crop=smart&auto=webp&s=18b931c94b1445f6e878e6c661de48e4140df433', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?width=1080&crop=smart&auto=webp&s=41e485386f681f8be859f2411f9496cabd0517d5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/bb0y3vtegk2f1.png?auto=webp&s=245d120f25e64f45275900581a8ad637737d4935', 'width': 1920}, 'variants': {}}]}
|
||
SLM RAG Arena - Compare and Find Good Sub-5B Models for RAG
| 1 |
[removed]
| 2025-05-23T17:31:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktpgpe/slm_rag_arena_compare_and_find_good_sub5b_models/
|
No_Salamander1882
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktpgpe
| false | null |
t3_1ktpgpe
|
/r/LocalLLaMA/comments/1ktpgpe/slm_rag_arena_compare_and_find_good_sub5b_models/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3T2rZ5JEPyEbxb2lh4vzMqmNAiDyv7lVg3dWa-ileyc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=108&crop=smart&auto=webp&s=78bd57a9198a127549f20efee3faa66623e200d7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=216&crop=smart&auto=webp&s=8bfb8c9fe48d0b371639a53bbd517f98b80bed94', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=320&crop=smart&auto=webp&s=74831a04d34b2cab37b473cbe1e01e6ac8636633', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=640&crop=smart&auto=webp&s=3318f2724918bae90ef20995728f679fe2fdbe6d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=960&crop=smart&auto=webp&s=027c3c1f1e747403771decd8a0ab69fc1d707ec1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?width=1080&crop=smart&auto=webp&s=ca1487e7ead11368eaa83123f7846d66c6fb63eb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8r6quPLwJFB6N88XAbM_zURMnRPY5GnGjCbHS1kK7Gk.jpg?auto=webp&s=71dd3889ca4b5e4082ec44d36ee253bbecfd5d5d', 'width': 1200}, 'variants': {}}]}
|
|
SLM RAG Arena: Compare and Find Good Sub-5B Models for RAG
| 1 |
[removed]
| 2025-05-23T17:36:17 |
unseenmarscai
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktpl17
| false | null |
t3_1ktpl17
|
/r/LocalLLaMA/comments/1ktpl17/slm_rag_arena_compare_and_find_good_sub5b_models/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'NmBaB7GMmRd_4LCUy7g8mkM-EkV72xawHFBQJRLB4Eg', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?width=108&crop=smart&auto=webp&s=737ddcfd3d0fc814c67a712afbceaffd23903041', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?width=216&crop=smart&auto=webp&s=11a01131a22e87cefa7def8b699920dc7deb0511', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?width=320&crop=smart&auto=webp&s=8e3564a3030a28c49da10b36528bcf0b30a21725', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?width=640&crop=smart&auto=webp&s=63a24322f350e812d4b8d5e9ef3658fefc6fd79e', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?width=960&crop=smart&auto=webp&s=ee265ebafc70315703be57f538fb8dcbfde910bb', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?width=1080&crop=smart&auto=webp&s=c3ea8964a03ec2557ecfdc91787915c6aca95f56', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/vrc33zzphk2f1.png?auto=webp&s=f9e27c5f10bb15e7a3e7281fb02c7e8828dbd0d5', 'width': 1920}, 'variants': {}}]}
|
||
Comparing Small Language Models on RAG Tasks with SLM RAG Arena
| 1 |
[removed]
| 2025-05-23T17:46:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktpu54/comparing_small_language_models_on_rag_tasks_with/
|
unseenmarscai
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktpu54
| false | null |
t3_1ktpu54
|
/r/LocalLLaMA/comments/1ktpu54/comparing_small_language_models_on_rag_tasks_with/
| false | false | 1 | null |
|
Anyone on Oahu want to let me borrow an RTX 6000 Pro to benchmark against this dual 5090 rig?
| 87 |
It sits on my office desk for running very large context prompts (50K words) with QwQ 32B. It has to be offline because the prompts contain a lot of PII.
I had it in a Mechanic Master c34plus (25L), but the CPU fans (Scythe Grand Tornado, 3,000 rpm) kept ramping up because two 5090s were blasting the radiator in a confined space, and I could only fit a 1300W PSU in that tiny case, which meant heavy power limiting for the CPU and GPUs.
Paid $3,200 each for the 5090 FEs and would have paid more. Couldn't be happier: this rig turns what used to take me 8 hours into 5 minutes of prompt processing and inference plus 15 minutes of editing to output complicated 15-page reports.
Anytime I show a coworker what it can do, they immediately throw money at me and tell me to build them a rig, so I tell them I'll get them 80% of the performance for about $2,200. I've built two dual-3090 local AI rigs for such coworkers so far.
Frame is a 3D printed one from Etsy by ArcadeAdamsParts. There were some minor issues with it, but Adam was eager to address them.
| 2025-05-23T17:52:15 |
https://www.reddit.com/gallery/1ktpz29
|
Special-Wolverine
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktpz29
| false | null |
t3_1ktpz29
|
/r/LocalLLaMA/comments/1ktpz29/anyone_on_oahu_want_to_let_me_borrow_an_rtx_6000/
| false | false | 87 | null |
|
Kanana 1.5 2.1B/8B, English/Korean bilingual by kakaocorp
| 7 | 2025-05-23T17:59:17 |
https://huggingface.co/collections/kakaocorp/kanana-15-682d75c83b5f51f4219a17fb
|
nananashi3
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktq54n
| false | null |
t3_1ktq54n
|
/r/LocalLLaMA/comments/1ktq54n/kanana_15_21b8b_englishkorean_bilingual_by/
| false | false | 7 |
{'enabled': False, 'images': [{'id': 'NrT1Tg68IvhApW7FYuv-a8KpW1xoE2VvYWGtQbOwVWw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?width=108&crop=smart&auto=webp&s=3b7ee38a6d1322dc68dbf850a0031028e298c9ad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?width=216&crop=smart&auto=webp&s=162d0e6fc268fb30082e1765b9c11b3e423fbde4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?width=320&crop=smart&auto=webp&s=734cda2b706e93bb5907c29dada966f6e791439c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?width=640&crop=smart&auto=webp&s=58b05df0965bf6728f5812c07f2fac9dfd6f121c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?width=960&crop=smart&auto=webp&s=81656ec256e76e80320b4f555c5e77897a3cfbf1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?width=1080&crop=smart&auto=webp&s=da3857e64e2580b391f9200c388ec45b94acea99', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Q9Aj2m2UPlli1ePF9mPrz1K16Bxc0hfngw0SCUO5UfI.jpg?auto=webp&s=8d7568e5dd04395caf28c0a533a298dbe55761d8', 'width': 1200}, 'variants': {}}]}
|
||
Understanding ternary quantization TQ2_0 and TQ1_0 in llama.cpp
| 1 |
[removed]
| 2025-05-23T18:01:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktq7i6/understanding_ternary_quantization_tq2_0_and_tq1/
|
datashri
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktq7i6
| false | null |
t3_1ktq7i6
|
/r/LocalLLaMA/comments/1ktq7i6/understanding_ternary_quantization_tq2_0_and_tq1/
| false | false |
self
| 1 | null |
Tested Qwen3 all models on CPU (i5-10210U), RTX 3060 12GB, and RTX 3090 24GB
| 31 |
Qwen3 Model Testing Results (CPU + GPU)
Model | Hardware | Load | Answer | Speed (t/s)
---|---|---|---|---
Qwen3-0.6B | Laptop (i5-10210U, 16GB RAM) | CPU only | Incorrect | 31.65
Qwen3-1.7B | Laptop (i5-10210U, 16GB RAM) | CPU only | Incorrect | 14.87
Qwen3-4B | Laptop (i5-10210U, 16GB RAM) | CPU only | Correct (misleading)| 7.03
Qwen3-8B | Laptop (i5-10210U, 16GB RAM) | CPU only | Incorrect | 4.06
Qwen3-8B | Desktop (5800X, 32GB RAM, RTX 3060) | 100% GPU | Incorrect | 46.80
Qwen3-14B | Desktop (5800X, 32GB RAM, RTX 3060) | 94% GPU / 6% CPU | Correct | 19.35
Qwen3-30B-A3B | Laptop (i5-10210U, 16GB RAM) | CPU only | Correct | 3.27
Qwen3-30B-A3B | Desktop (5800X, 32GB RAM, RTX 3060) | 49% GPU / 51% CPU | Correct | 15.32
Qwen3-30B-A3B | Desktop (5800X, 64GB RAM, RTX 3090) | 100% GPU | Correct | 105.57
Qwen3-32B | Desktop (5800X, 64GB RAM, RTX 3090) | 100% GPU | Correct | 30.54
Qwen3-235B-A22B | Desktop (5800X, 128GB RAM, RTX 3090) | 15% GPU / 85% CPU | Correct | 2.43
Here is the full video of all tests: [https://youtu.be/kWjJ4F09-cU](https://youtu.be/kWjJ4F09-cU)
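The post does not say which runtime produced these numbers, but for anyone who wants a comparable tokens/sec figure on their own hardware, here is a minimal sketch using llama-cpp-python; the GGUF filename, prompt, and generation length are placeholders, not the setup used in the video.
```python
# Rough tokens/sec measurement with llama-cpp-python (an assumption; the post
# does not state which runtime produced the table above). Model path, prompt,
# and generation length are placeholders.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-8B-Q4_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,                    # offload all layers that fit; 0 = CPU only
    n_ctx=4096,
)

start = time.time()
out = llm("Explain the Monty Hall problem briefly.", max_tokens=256)
elapsed = time.time() - start

gen_tokens = out["usage"]["completion_tokens"]
print(f"{gen_tokens} tokens in {elapsed:.1f}s -> {gen_tokens / elapsed:.2f} t/s")
```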
| 2025-05-23T18:12:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktqgk0/tested_qwen3_all_models_on_cpu_i510210u_rtx_3060/
|
1BlueSpork
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktqgk0
| false | null |
t3_1ktqgk0
|
/r/LocalLLaMA/comments/1ktqgk0/tested_qwen3_all_models_on_cpu_i510210u_rtx_3060/
| false | false |
self
| 31 |
{'enabled': False, 'images': [{'id': 'BnFEKSY4o57rP5nuVGxcnegc1wJJ3I7TvBcV8XhuEmk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Ohd1ZF3MWwuIouVfDZ6jLtoVW-JZeSUwVWNpMvCmnXM.jpg?width=108&crop=smart&auto=webp&s=6a816275dfb13b62e8e2b2c9fe853be582055967', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Ohd1ZF3MWwuIouVfDZ6jLtoVW-JZeSUwVWNpMvCmnXM.jpg?width=216&crop=smart&auto=webp&s=47dd701b6aef02034330c3dad2787051873ec638', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Ohd1ZF3MWwuIouVfDZ6jLtoVW-JZeSUwVWNpMvCmnXM.jpg?width=320&crop=smart&auto=webp&s=9d3e40ac7d3e30dd3e37bef8aed90824e45b01e7', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Ohd1ZF3MWwuIouVfDZ6jLtoVW-JZeSUwVWNpMvCmnXM.jpg?auto=webp&s=ccc416a4aa23b1f506e7722efcdc593bb1c60957', 'width': 480}, 'variants': {}}]}
|
"Sarvam-M, a 24B open-weights hybrid model built on top of Mistral Small" can't they just say they have fine tuned mistral small or it's kind of wrapper?
| 47 | 2025-05-23T18:25:47 |
https://www.sarvam.ai/blogs/sarvam-m
|
WriedGuy
|
sarvam.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktqsog
| false | null |
t3_1ktqsog
|
/r/LocalLLaMA/comments/1ktqsog/sarvamm_a_24b_openweights_hybrid_model_built_on/
| false | false | 47 |
{'enabled': False, 'images': [{'id': 'GJj34UkCJhlet2aj5Tqvrf3zg71S1UOaEIAXAb_MDNE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?width=108&crop=smart&auto=webp&s=abc6f04994a76bada9ad0e899c037e74f6ccc9ee', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?width=216&crop=smart&auto=webp&s=069fb7f5d851311cb19bc1090ed56368a8a42e43', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?width=320&crop=smart&auto=webp&s=0965e361ded03d39ea2adda7e0b0da7368bea4d8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?width=640&crop=smart&auto=webp&s=cd18d82c4dcfc9c4f7f92bcffba8e25084b30453', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?width=960&crop=smart&auto=webp&s=d79be203d91d892c0beb7bc78593e4686a2c470d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?width=1080&crop=smart&auto=webp&s=59bcea3e5ee303a2a4080fe1f2b0654136807a12', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/WzIS2JjZo2kST66vrrt4qEsayLue07AZ01pMdBT5Wtc.jpg?auto=webp&s=28682eb89ce05dc973747ef8bca37c546ce84985', 'width': 1200}, 'variants': {}}]}
|
||
Having trouble getting to 1-2req/s with vllm and Qwen3 30B-A3B
| 0 |
Hey everyone,
I'm currently renting out a single H100 GPU
The Machine specs are:
GPU:H100 SXM, GPU RAM: 80GB, CPU: Intel Xeon Platinum 8480
I run vllm with this setup behind nginx to monitor the HTTP connections:
VLLM_DEBUG_LOG_API_SERVER_RESPONSE=TRUE nohup /home/ubuntu/.local/bin/vllm serve \
Qwen/Qwen3-30B-A3B-FP8 \
--enable-reasoning \
--reasoning-parser deepseek_r1 \
--api-key API_KEY \
--host 0.0.0.0 \
--dtype auto \
--uvicorn-log-level info \
--port 6000 \
--max-model-len=28000 \
--gpu-memory-utilization 0.9 \
--enable-chunked-prefill \
--enable-expert-parallel \
--max-num-batched-tokens 4096 \
--max-num-seqs 23 &
In the nginx logs I see a lot of status 499, which means connections being dropped by clients, but that doesn't make sense, as connections to serverless providers are not being dropped and work fine:
127.0.0.1 - - [23/May/2025:18:38:37 +0000] "POST /v1/chat/completions HTTP/1.1" 499 0 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:41 +0000] "POST /v1/chat/completions HTTP/1.1" 200 5914 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:43 +0000] "POST /v1/chat/completions HTTP/1.1" 499 0 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:45 +0000] "POST /v1/chat/completions HTTP/1.1" 200 4077 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:53 +0000] "POST /v1/chat/completions HTTP/1.1" 499 0 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:55 +0000] "POST /v1/chat/completions HTTP/1.1" 200 4046 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:55 +0000] "POST /v1/chat/completions HTTP/1.1" 200 6131 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:56 +0000] "POST /v1/chat/completions HTTP/1.1" 499 0 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:56 +0000] "POST /v1/chat/completions HTTP/1.1" 499 0 "-" "OpenAI/Python 1.55.0"
127.0.0.1 - - [23/May/2025:18:38:56 +0000] "POST /v1/chat/completions HTTP/1.1" 499 0 "-" "OpenAI/Python 1.55.0"
If I calculate how many proper 200 responses I get from vLLM, it's around 0.15-0.2 requests per second, which is way too low for my needs.
Am I missing something? With Llama 8B I could squeeze out 0.8-1.2 req/s on a 40GB GPU, but with 30B-A3B it seems impossible even on an 80GB GPU.
How should I optimize this further, or should I just go with a simpler model?
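One detail worth checking before tuning the server: nginx logs 499 when the client closes the connection before the upstream answers, and with a reasoning model that usually means the caller's read timeout is shorter than the generation time, so every dropped request still burns GPU time. A minimal client-side sketch, assuming the callers use the OpenAI Python SDK (the host and timeout values are placeholders):
```python
from openai import OpenAI

# Hypothetical client-side settings: raise the read timeout so long reasoning
# generations are not abandoned (which shows up as nginx 499), and keep retries
# low so a saturated server is not hit with duplicate requests.
client = OpenAI(
    base_url="http://my-vllm-host:6000/v1",  # placeholder host behind nginx
    api_key="API_KEY",
    timeout=600.0,   # allow long generations instead of dropping at the default
    max_retries=1,
)

resp = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B-FP8",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```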
| 2025-05-23T18:46:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktrair/having_trouble_getting_to_12reqs_with_vllm_and/
|
bndrz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktrair
| false | null |
t3_1ktrair
|
/r/LocalLLaMA/comments/1ktrair/having_trouble_getting_to_12reqs_with_vllm_and/
| false | false |
self
| 0 | null |
Trying to fine tune llama 3.2 3B on a custom data set for a random college to see how it goes ....but results are not as expected....new trained model can't seem to answer based on the new data.
| 1 |
[removed]
| 2025-05-23T19:32:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktse6o/trying_to_fine_tune_llama_32_3b_on_a_custom_data/
|
Adorable-Device-2732
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktse6o
| false | null |
t3_1ktse6o
|
/r/LocalLLaMA/comments/1ktse6o/trying_to_fine_tune_llama_32_3b_on_a_custom_data/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]}
|
LLM Judges Are Unreliable
| 13 | 2025-05-23T19:42:52 |
https://www.cip.org/blog/llm-judges-are-unreliable
|
IAmJoal
|
cip.org
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktsn47
| false | null |
t3_1ktsn47
|
/r/LocalLLaMA/comments/1ktsn47/llm_judges_are_unreliable/
| false | false | 13 |
{'enabled': False, 'images': [{'id': '3z-maw8FO4zejvbJ3nADc4HJKO-ze8aL2QygE3TvCNk', 'resolutions': [{'height': 130, 'url': 'https://external-preview.redd.it/OWRcxhrGo0kYfJho1U-hyTxNFsBI00slZlSMEMSNpWo.jpg?width=108&crop=smart&auto=webp&s=a23ec551508874bcae48faccf783a80637639ce2', 'width': 108}, {'height': 261, 'url': 'https://external-preview.redd.it/OWRcxhrGo0kYfJho1U-hyTxNFsBI00slZlSMEMSNpWo.jpg?width=216&crop=smart&auto=webp&s=3a0627648a403466041df2dd5093c353b20ed29e', 'width': 216}, {'height': 387, 'url': 'https://external-preview.redd.it/OWRcxhrGo0kYfJho1U-hyTxNFsBI00slZlSMEMSNpWo.jpg?width=320&crop=smart&auto=webp&s=9a5ce1c986efb19c89794e8b3c2b38f5e756e2a7', 'width': 320}, {'height': 774, 'url': 'https://external-preview.redd.it/OWRcxhrGo0kYfJho1U-hyTxNFsBI00slZlSMEMSNpWo.jpg?width=640&crop=smart&auto=webp&s=6ce48110cb8d197da5f663f3d32feb93e9b39576', 'width': 640}, {'height': 1161, 'url': 'https://external-preview.redd.it/OWRcxhrGo0kYfJho1U-hyTxNFsBI00slZlSMEMSNpWo.jpg?width=960&crop=smart&auto=webp&s=494b4b741b69307e2dbf0ad8f6d26061006fdff5', 'width': 960}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/OWRcxhrGo0kYfJho1U-hyTxNFsBI00slZlSMEMSNpWo.jpg?auto=webp&s=d169ec8a72866d71083b97abc34ebce9125ba4de', 'width': 992}, 'variants': {}}]}
|
||
Upgraded from Ryzen 5 5600X to Ryzen 7 5700X3D, should I return it and get a Ryzen 7 5800X?
| 0 |
I have an RTX 4080 Super (16GB) and I think Qwen3-30B and 235B benefit from a faster CPU.
As I've just upgraded to the Ryzen 7 5700X3D (3.0 GHz), I wonder if I should return it and get the Ryzen 7 5800X (3.8 GHz) instead (it's also about 30% cheaper)?
| 2025-05-23T19:46:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktspy3/upgraded_from_ryzen_5_5600x_to_ryzen_7_5700x3d/
|
relmny
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktspy3
| false | null |
t3_1ktspy3
|
/r/LocalLLaMA/comments/1ktspy3/upgraded_from_ryzen_5_5600x_to_ryzen_7_5700x3d/
| false | false |
self
| 0 | null |
Best Vibe Code tools (like Cursor) but are free and use your own local LLM?
| 146 |
I've seen Cursor and how it works, and it looks pretty cool, but I'd rather use my own locally hosted LLMs and not pay a usage fee to a 3rd-party company.
Does anybody know of any good Vibe Coding tools, as good or better than Cursor, that run on your own local LLMs?
Thanks!
| 2025-05-23T19:46:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktsqit/best_vibe_code_tools_like_cursor_but_are_free_and/
|
StartupTim
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktsqit
| false | null |
t3_1ktsqit
|
/r/LocalLLaMA/comments/1ktsqit/best_vibe_code_tools_like_cursor_but_are_free_and/
| false | false |
self
| 146 | null |
Cosyvoice 2 vs Dia 1.6b - which one is better overall?
| 16 |
Did anyone get to test both tts models? If yes, which sounds more realistic from your POV?
Both models are very close, but I find CosyVoice slightly ahead due to its zero-shot capabilities; however, one downside is that you may need to use specific models for different tasks (e.g., zero-shot, cross-lingual).
[https://github.com/nari-labs/dia](https://github.com/nari-labs/dia)
[https://github.com/FunAudioLLM/CosyVoice](https://github.com/FunAudioLLM/CosyVoice)
I have a quick question (not directly related to my main project): I’m exploring the possibility of using 11Labs to clone a voice and integrate it into my SaaS web app, with cached audio generation. Are there any limitations I should be aware of if I were to use alternative services instead of 11Labs?
| 2025-05-23T19:48:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktsrqv/cosyvoice_2_vs_dia_16b_which_one_is_better_overall/
|
Xodnil
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktsrqv
| false | null |
t3_1ktsrqv
|
/r/LocalLLaMA/comments/1ktsrqv/cosyvoice_2_vs_dia_16b_which_one_is_better_overall/
| false | false |
self
| 16 |
{'enabled': False, 'images': [{'id': '5tzSDS7Cu7WmpF2f03uv3UBNPUJ-K-LnJ5_5ie1ZNf8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=108&crop=smart&auto=webp&s=019cb7fa7296091ebede8514e483a64e95a1a184', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=216&crop=smart&auto=webp&s=a68787f3721fc47035ed60e197d3c9d2657054e7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=320&crop=smart&auto=webp&s=40d63ad5efe0501b985befbd7f223ab1cb1e9b29', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=640&crop=smart&auto=webp&s=5f1d15a76610dd0dbe8a436684ca2985b2cc492b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=960&crop=smart&auto=webp&s=688f9a2390cc96f0f5e2d477fac1e8ec610e685b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?width=1080&crop=smart&auto=webp&s=bafe878e5e7a2cbad7d38a27586a2c5a245e605d', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/wNcjaJIfjy5w8Wuatdn7tqqANxkzwO5-UB9WQmMCT3w.jpg?auto=webp&s=e632c456edeb4c002c762709120ec9f8b214e10e', 'width': 1280}, 'variants': {}}]}
|
Qwen3-14B vs Phi-14B-Reasoning (+Plus) - Practical Benchmark
| 1 |
[removed]
| 2025-05-23T19:49:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktssw0/qwen314b_vs_phi14breasoning_plus_practical/
|
qki_machine
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktssw0
| false | null |
t3_1ktssw0
|
/r/LocalLLaMA/comments/1ktssw0/qwen314b_vs_phi14breasoning_plus_practical/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'eUfo1BVRooW7fNveoRZvhq_q_xoD7GX4HzFdm3a_BoU', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=108&crop=smart&auto=webp&s=b1b3c91f325e420cda1518193c5a310cc6393e64', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=216&crop=smart&auto=webp&s=0d308195979a7744fb48ab8dea1441d8dd0197ec', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=320&crop=smart&auto=webp&s=1f12396b13dc0c8b5b7eebfcf3a54b3403e626cb', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=640&crop=smart&auto=webp&s=479c62887bf84f60faed5015bd5fbf1abb3e7c25', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=960&crop=smart&auto=webp&s=2478d95c2fda9e3b0bd63b19696f5395e2dfd160', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?width=1080&crop=smart&auto=webp&s=74374fa39d6a14f6a53d108eed9514fbfc11c4f5', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/XiMnUdoJZWnuU9YTQov3fIRytVVfCZ2cVRixjMK3RHk.jpg?auto=webp&s=b724ec6ba2d2396339c53702addd2d46489a5f56', 'width': 1200}, 'variants': {}}]}
|
What models are you training right now and what compute are you using? (Parody of PCMR post)
| 1 | 2025-05-23T19:50:58 |
Avelina9X
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktstzh
| false | null |
t3_1ktstzh
|
/r/LocalLLaMA/comments/1ktstzh/what_models_are_you_training_right_now_and_what/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'lXEPKihxhqD9mH7NAZaEAIwBB4wicZBOSH-MtoLIHPc', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?width=108&crop=smart&auto=webp&s=cbb1c62c86da82999bfbd50980fc3a184efe0532', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?width=216&crop=smart&auto=webp&s=ef3dae0a607fb4f0a4bec20f9b5b8e22cd248335', 'width': 216}, {'height': 112, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?width=320&crop=smart&auto=webp&s=17cb5ac26d522f9afa29127ef49d6393247a4437', 'width': 320}, {'height': 224, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?width=640&crop=smart&auto=webp&s=81ec45a2a7914d06786bf23536c75a72670b2cb6', 'width': 640}, {'height': 336, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?width=960&crop=smart&auto=webp&s=930a61d40adc879ffcce403bba8852e93d648e28', 'width': 960}, {'height': 378, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?width=1080&crop=smart&auto=webp&s=e667d88f30b3a6054d3688655d1541ff5a5957dc', 'width': 1080}], 'source': {'height': 627, 'url': 'https://preview.redd.it/p8khpzt64l2f1.png?auto=webp&s=5049668740bb8dbd82640e05c6515433212b3c7f', 'width': 1790}, 'variants': {}}]}
|
|||
LLama.cpp with smolVLM 500M very slow on windows
| 4 |
I recently downloaded llama.cpp on a Mac M1 with 8GB RAM; with SmolVLM 500M I get instant replies.
I wanted to try it on my Windows machine (32GB RAM, i7-13700H), but it's so slow that it takes almost 2 minutes to get a response.
Do you guys have any idea why? I tried GPU mode (4070) but it's still super slow; I tried many different builds but always get the same result.
| 2025-05-23T19:52:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktsvaj/llamacpp_with_smolvlm_500m_very_slow_on_windows/
|
firyox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktsvaj
| false | null |
t3_1ktsvaj
|
/r/LocalLLaMA/comments/1ktsvaj/llamacpp_with_smolvlm_500m_very_slow_on_windows/
| false | false |
self
| 4 | null |
AMD Ryzen AI Max+ 395 vs M4 Max (?)
| 1 |
[removed]
| 2025-05-23T20:01:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktt2kv/amd_ryzen_ai_max_395_vs_m4_max/
|
c7abe
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktt2kv
| false | null |
t3_1ktt2kv
|
/r/LocalLLaMA/comments/1ktt2kv/amd_ryzen_ai_max_395_vs_m4_max/
| false | false |
self
| 1 | null |
A survey on AI-generated content.
| 1 |
[removed]
| 2025-05-23T20:02:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktt3av/a_survey_on_aigenerated_content/
|
Goddamn_Lizard
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktt3av
| false | null |
t3_1ktt3av
|
/r/LocalLLaMA/comments/1ktt3av/a_survey_on_aigenerated_content/
| false | false |
self
| 1 | null |
Google Veo 3 Computation Usage
| 9 |
Are there any assumptions about what Google Veo 3 may cost in computation?
I just want to see if there is a chance of the model becoming locally available, or how their price may develop over time.
| 2025-05-23T20:02:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktt3i8/google_veo_3_computation_usage/
|
Spiritual-Neat889
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktt3i8
| false | null |
t3_1ktt3i8
|
/r/LocalLLaMA/comments/1ktt3i8/google_veo_3_computation_usage/
| false | false |
self
| 9 | null |
Need Help! What are the Best and Most Stable Versions of Gemma-3, Qwen-3, QwQ-32B, GLM4, and Mistral Small?
| 1 |
[removed]
| 2025-05-23T20:13:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kttcqy/need_help_what_are_the_best_and_most_stable/
|
Iory1998
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kttcqy
| false | null |
t3_1kttcqy
|
/r/LocalLLaMA/comments/1kttcqy/need_help_what_are_the_best_and_most_stable/
| false | false |
self
| 1 | null |
What's the limits of vibe coding?
| 0 |
First, the link of my project: [https://github.com/charmandercha/TextGradGUI](https://github.com/charmandercha/TextGradGUI)
Original repository: [https://github.com/zou-group/textgrad](https://github.com/zou-group/textgrad)
Nature article about TextGrad: [https://www.nature.com/articles/s41586-025-08661-4](https://www.nature.com/articles/s41586-025-08661-4)
I tried to push the limits of vibe coding to see if I could merge TextGrad and Gradio.
But I do not know if it worked lol
(I will put more details in a comment)
| 2025-05-23T20:15:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kttes0/whats_the_limits_of_vibe_coding/
|
charmander_cha
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kttes0
| false | null |
t3_1kttes0
|
/r/LocalLLaMA/comments/1kttes0/whats_the_limits_of_vibe_coding/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': 'lRy20nCs27CBlbBYfH6rHMv64-unttvihTyh4gbBJ3s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?width=108&crop=smart&auto=webp&s=c9254c012ebf9234c241f157a5de7a2eb192a9b6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?width=216&crop=smart&auto=webp&s=d3d147bf602b60e4e09589557fcfb20398d42993', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?width=320&crop=smart&auto=webp&s=a89ff4dced512ae18d89630f4cd98f878930c969', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?width=640&crop=smart&auto=webp&s=7259004aa4137a64a73d6463bdb92daf785b739b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?width=960&crop=smart&auto=webp&s=7f06e18848eea901150248389a434aed69488c2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?width=1080&crop=smart&auto=webp&s=35ec27130a808704e34502697b2993cf827fa8b6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M_RcSW5nZuO43ig207iOGZdj-pAfq2IcsO2O3iagBDs.jpg?auto=webp&s=7649243305fec8eba63be140d22341e07cd72467', 'width': 1200}, 'variants': {}}]}
|
How do I get started? Hardware?
| 1 |
[removed]
| 2025-05-23T20:33:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktttk8/how_do_i_get_started_hardware/
|
Grand-Departure3485
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktttk8
| false | null |
t3_1ktttk8
|
/r/LocalLLaMA/comments/1ktttk8/how_do_i_get_started_hardware/
| false | false |
self
| 1 | null |
Best local coding model right now?
| 65 |
Hi! I was very active here about a year ago, but I've been using Claude a lot the past few months.
I do like Claude a lot, but it's not magic, and smaller models are actually quite a lot nicer in the sense that I have far, far more control over them.
I have a 7900 XTX, and I was eyeing Gemma 27B for local coding support.
Are there any other models I should be looking at? Qwen 3 maybe?
Perhaps a model specifically for coding?
| 2025-05-23T20:57:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktudaj/best_local_coding_model_right_now/
|
Combinatorilliance
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktudaj
| false | null |
t3_1ktudaj
|
/r/LocalLLaMA/comments/1ktudaj/best_local_coding_model_right_now/
| false | false |
self
| 65 | null |
Why does LM Studio have such small context ???
| 0 |
I ask like 2 coding questions, or have 1 conversation of 3 messages, and the context is 100% full. Why can't we have epic-length convos?
| 2025-05-23T21:09:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktuodr/why_does_lm_studio_have_such_small_context/
|
intimate_sniffer69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktuodr
| false | null |
t3_1ktuodr
|
/r/LocalLLaMA/comments/1ktuodr/why_does_lm_studio_have_such_small_context/
| false | false |
self
| 0 | null |
Hiring etiquette
| 1 |
[removed]
| 2025-05-23T21:17:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktuub3/hiring_etiquette/
|
sunnysing_73
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktuub3
| false | null |
t3_1ktuub3
|
/r/LocalLLaMA/comments/1ktuub3/hiring_etiquette/
| false | false |
self
| 1 | null |
Building a new server, looking at using two AMD MI60 (32gb VRAM) GPU’s. Will it be sufficient/effective for my use case?
| 4 |
I'm putting together my new build, I already purchased a Darkrock Classico Max case (as I use my server for Plex and wanted a lot of space for drives).
I'm currently landing on the following for the rest of the specs:
CPU: I9-12900K
RAM: 64GB DDR5
MB: MSI PRO Z790-P WIFI ATX LGA1700 Motherboard
Storage: 2TB crucial M3 Plus; Form Factor - M.2-2280; Interface - M.2 PCIe 4.0 X4
GPU: 2x AMD Instinct MI60 32GB (cooling shrouds on each)
OS: Ubuntu 24.04
My use case is, primarily (leaving out irrelevant details) a lot of Plex usage, Frigate for processing security cameras, and most importantly on the LLM side of things:
HomeAssistant (requires Ollama with a tools model)
Frigate generative AI for image processing (requires Ollama with a vision model)
For homeassistant, I'm looking for speeds similar to what I'd get out of Alexa.
For Frigate, the speed isn't particularly important, as I don't mind receiving descriptions even up to 60 seconds after the event has happened.
If at all possible, I'd also like to run my own local version of ChatGPT, even if it's not quite as fast.
How does this setup strike you guys given my use case? I'd like it as future proof as possible and would like to not have to touch this build for 5+ years.
| 2025-05-23T21:48:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktvjw4/building_a_new_server_looking_at_using_two_amd/
|
FantasyMaster85
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktvjw4
| false | null |
t3_1ktvjw4
|
/r/LocalLLaMA/comments/1ktvjw4/building_a_new_server_looking_at_using_two_amd/
| false | false |
self
| 4 | null |
AM5 or TRX4 for local LLMs?
| 11 |
Hello all, I am just now dipping my toes in local LLMs and wanting to run LLaMa 70B locally, had some questions regarding the hardware side of things before I start spending more money.
My main concern is whether to go with the AM5 platform or TRX4 for local inferencing and minor fine-tuning on smaller models here and there.
Here are some reasons for why I am considering AM5 vs TRX4;
**AM5**
* PCIe 5.0
* DDR5
* Zen 5
**TRX4 (I cant afford newer gens)**
* 64+ PCIe lanes
* Supports more memory
* Way better motherboard selection for workstations
Since I want to run something like Llama 3 70B at Q4\_K\_M with decent tokens/sec, I will most likely end up getting a second 3090. AM5 supports PCIe 5.0 x16 and it can be bifurcated to x8, which is comparable in speed to 4.0 x16(?). So in terms of an AM5 system I would be looking at a 9950X for the CPU, and dual 3090s at PCIe 5.0 x8/x8 with however much RAM/DIMMs I can use that would be stable. It would be DDR5 clocked at a much higher frequency than the DDR4 on the TRX4 (but on TRX4 I can use way more memory).
And for the TRX4 system my budget would allow for a 3960X for the CPU, along with the same dual 3090s but at PCIe 4.0 x16/x16 instead of 5.0 x8/x8, and probably around 256GB of DDR4 RAM. I am leaning more towards the AM5 option because I don't ever plan on scaling up to more than 2 GPUs (trying to fit everything inside a 4U rackmount), so PCIe 5.0 x8/x8 would do fine for me I think; also, the 9950X is on a much newer architecture and seems to beat the 3960X in almost every metric. Also, although there are stability issues, it looks like I can get away with 128GB of RAM on the 9950X as well.
Would this be a decent option for a workstation build? or should I just go with the TRX4 system? Im so torn on which to decide and thought some extra opinions could help. Thanks.
| 2025-05-23T21:58:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktvs5j/am5_or_trx4_for_local_llms/
|
Ponce_DeLeon
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktvs5j
| false | null |
t3_1ktvs5j
|
/r/LocalLLaMA/comments/1ktvs5j/am5_or_trx4_for_local_llms/
| false | false |
self
| 11 | null |
Guys! I managed to build a 100% fully local voice AI with Ollama that can have full conversations, control all my smart devices AND now has both short term + long term memory. 🤘
| 1,777 |
I found out recently that Amazon/Alexa is going to use ALL users' vocal data with ZERO opt-outs for their new Alexa+ service, so I decided to build my own that is 1000x better and runs fully locally.
The stack uses Home Assistant directly tied into Ollama. The long and short term memory is a custom automation design that I'll be documenting soon and providing for others.
This entire set up runs 100% local and you could probably get away with the whole thing working within / under 16 gigs of VRAM.
| 2025-05-23T22:56:42 |
https://v.redd.it/iigum5tb3m2f1
|
RoyalCities
|
/r/LocalLLaMA/comments/1ktx15j/guys_i_managed_to_build_a_100_fully_local_voice/
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktx15j
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/iigum5tb3m2f1/DASHPlaylist.mpd?a=1750762607%2CNDc4YmY4NTY4MTA3OGY0MjZiNGIzMmIzYzgxODQwYTIwY2ZhMjUyM2E3ZWFiN2RkMDVmNTI5ZTZlZDQyOTRkZQ%3D%3D&v=1&f=sd', 'duration': 124, 'fallback_url': 'https://v.redd.it/iigum5tb3m2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/iigum5tb3m2f1/HLSPlaylist.m3u8?a=1750762607%2CZTYzZGU3YTU5ODVhNWExMTUyMmFiMjJkMTI0NzNmZjFkMTQ1NGU0OWMxNzBjOThjMzVmNjE2YjI5YTJhY2RhZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/iigum5tb3m2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1ktx15j
|
/r/LocalLLaMA/comments/1ktx15j/guys_i_managed_to_build_a_100_fully_local_voice/
| false | false | 1,777 |
{'enabled': False, 'images': [{'id': 'b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?width=108&crop=smart&format=pjpg&auto=webp&s=d6de92e50c97ea5fa95649d6c083258792e81687', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?width=216&crop=smart&format=pjpg&auto=webp&s=c3b29ce3bc9e3269db81baec61c1f9984e0f9eed', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?width=320&crop=smart&format=pjpg&auto=webp&s=0515491801638e8d3d4e57cf57d86eb0c733f1a3', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?width=640&crop=smart&format=pjpg&auto=webp&s=d9949f3cf28da7a163ce8255516934638750c6c8', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?width=960&crop=smart&format=pjpg&auto=webp&s=41882c10b547ff875db5c095a5f863cbca3a851f', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3bbb444077e083b9c571e71d451a387e64d00676', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/b3A3aWt5dmIzbTJmMSKAZduYkWK-j-eA22aXbm6vzflALmDerWrgdNPvGQZJ.png?format=pjpg&auto=webp&s=b458af983bcaa3e5e93fbe4d614be7b6ba7aee95', 'width': 1080}, 'variants': {}}]}
|
|
run LLaMA 3.3 70B (8-bit) on 40G vram — how to hit 10+ tokens/sec?
| 1 |
[removed]
| 2025-05-23T23:21:52 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktxk14/run_llama_33_70b_8bit_on_40g_vram_how_to_hit_10/
|
Adventurous_Disk8047
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktxk14
| false | null |
t3_1ktxk14
|
/r/LocalLLaMA/comments/1ktxk14/run_llama_33_70b_8bit_on_40g_vram_how_to_hit_10/
| false | false |
self
| 1 | null |
Run llama3.3 70b (8 bit) on 40G vram - 10 tokens/sec ?
| 1 |
[removed]
| 2025-05-23T23:25:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktxmif/run_llama33_70b_8_bit_on_40g_vram_10_tokenssec/
|
Adventurous_Disk8047
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktxmif
| false | null |
t3_1ktxmif
|
/r/LocalLLaMA/comments/1ktxmif/run_llama33_70b_8_bit_on_40g_vram_10_tokenssec/
| false | false |
self
| 1 | null |
Anyone else prefering non thinking models ?
| 145 |
So far I've found non-CoT models to have more curiosity and to ask more follow-up questions, like Gemma 3 or Qwen2.5 70B. Tell them about something and they ask follow-up questions; I think CoT models ask themselves all the questions and end up very confident. I also understand the strength of CoT models for problem solving, and perhaps that's where their strength is.
| 2025-05-23T23:50:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kty4mh/anyone_else_prefering_non_thinking_models/
|
StandardLovers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kty4mh
| false | null |
t3_1kty4mh
|
/r/LocalLLaMA/comments/1kty4mh/anyone_else_prefering_non_thinking_models/
| false | false |
self
| 145 | null |
Want to know your reviews about this 14B model.
| 1 |
[removed]
| 2025-05-24T00:07:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktygug/want_to_know_your_reviews_about_this_14b_model/
|
pinpann
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktygug
| false | null |
t3_1ktygug
|
/r/LocalLLaMA/comments/1ktygug/want_to_know_your_reviews_about_this_14b_model/
| false | false |
self
| 1 | null |
New best Local Model?
| 0 |
https://www.sarvam.ai/blogs/sarvam-m
Matches or beats Gemma3 27b supposedly
| 2025-05-24T00:11:16 |
dRraMaticc
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktyjkm
| false | null |
t3_1ktyjkm
|
/r/LocalLLaMA/comments/1ktyjkm/new_best_local_model/
| false | false | 0 |
{'enabled': True, 'images': [{'id': '7MhzmUFVkK4m-k5a3ZErVkjWNORjVKkTLfrG6825Hkc', 'resolutions': [{'height': 126, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?width=108&crop=smart&auto=webp&s=5f371f69fddcaa4ccf1d9b1cb0760a6f8f7c6f48', 'width': 108}, {'height': 252, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?width=216&crop=smart&auto=webp&s=e10db044424e156ed0b5b2456b16332365985663', 'width': 216}, {'height': 373, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?width=320&crop=smart&auto=webp&s=2cda279c797b6716d5c3b3509a16dd2bc94c3eff', 'width': 320}, {'height': 746, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?width=640&crop=smart&auto=webp&s=d12191f2911d9cfe989f63d8b1c1cde1981813bd', 'width': 640}, {'height': 1120, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?width=960&crop=smart&auto=webp&s=e24ec078b2b32c11f199e3b52ec932785919ddf1', 'width': 960}, {'height': 1260, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?width=1080&crop=smart&auto=webp&s=2a4338267b60db030284ee359e39f57ff4e3c1de', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://preview.redd.it/o0zqv39ogm2f1.png?auto=webp&s=67d1500c09276a018c1b423cddfa0e53604fc3d6', 'width': 1080}, 'variants': {}}]}
|
||
Ollama Qwen2.5-VL 7B & OCR
| 1 |
Started working with data extraction from scanned documents today using Open WebUI, Ollama and Qwen2.5-VL 7B. I had some shockingly good initial results, but when I tried to get the model to extract more data it started losing detail that it had previously reported correctly.
One issue was that the images I am dealing with are scanned as individual-page TIFF files with CCITT Group 4 fax compression. I had to convert them to individual JPG files to get WebUI to properly upload them. It has trouble maintaining the order of the files, though. I don't know if it's processing them through pytesseract in random order, or if they are returned out of order, but if I just select, say, a 5-page document and drag it into WebUI, they upload in random order. Instead, I have to drag the files one at a time, in order, into WebUI to get anything near correct.
Is there a better way to do this?
Also, how could my prompt be improved?
These images constitute a scanned legal document. Please give me the following information from the text:
1. Document type (Examples include but are not limited to Warranty Deed, Warranty Deed with Vendors Lien, Deed of Trust, Quit Claim Deed, Probate Document)
2. Instrument Number
3. Recording date
4. Execution Date Defined as the date the instrument was signed or acknowledged.
5. Grantor (If this includes any special designations including but not limited to "and spouse", "a single person", "as executor for", please include that designation.)
6. Grantee (If this includes any special designations including but not limited to "and spouse", "a single person", "as executor for", please include that designation.)
7. Legal description of the property,
8. Any References to the same property,
9. Any other documents referred to by this document.
Legal description is defined as the lot numbers (if any), Block numbers (if any), Subdivision name (if any), Number of acres of property (if any), Name of the Survey of Abstract and Number of the Survey or abstract where the property is situated.
A reference to the same property is defined as any instance where a phrase similar to "being the same property described" followed by a list of tracts, lots, parcels, or acreages and a document description.
Other documents referred to by this document includes but is not limited to any deeds, mineral deeds, liens, affidavits, exceptions, reservations, restrictions that might be mentioned in the text of this document.
Please provide the items in list format with the item designation formatted as bold text.
The system seems to get lost with this prompt, whereas a simpler prompt like
These images constitute a legal document. Please give me the following information from the text:
1. Grantor,
2. Grantee,
3. Legal description of the property,
4. any other documents referred to by this document.
Legal description is defined as the lot numbers (if any), Block numbers (if any), Subdivision name (if any), Number of acres of property (if any), Name of the Survey of Abstract and Number of the Survey or abstract where the property is situated.
gives a better response with the same document, but is missing some details.
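One way to sidestep the upload-order problem entirely is to call Ollama directly instead of going through the Open WebUI uploader, passing the pages in an explicit, sorted list. A minimal sketch using the ollama Python client; the model tag ("qwen2.5vl:7b"), file naming pattern, and the abbreviated prompt are assumptions:
```python
# Minimal sketch: send the scanned pages to Ollama in an explicit, sorted order,
# so page ordering no longer depends on how the UI uploads the files.
# Model tag, directory layout, and the (abbreviated) prompt are assumptions.
from pathlib import Path
import ollama

pages = sorted(Path("scans/doc_001").glob("page_*.jpg"))  # page_01.jpg, page_02.jpg, ...

prompt = (
    "These images constitute a scanned legal document, provided in page order. "
    "Please give me the following information from the text:\n"
    "1. Document type\n2. Instrument Number\n3. Recording date\n4. Execution date\n"
    "5. Grantor\n6. Grantee\n7. Legal description of the property\n"
    "Provide the items in list format with the item designation in bold text."
)

response = ollama.chat(
    model="qwen2.5vl:7b",
    messages=[{"role": "user", "content": prompt, "images": [str(p) for p in pages]}],
)
print(response["message"]["content"])
```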
| 2025-05-24T00:41:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktz4ss/ollama_qwen25vl_7b_ocr/
|
PleasantCandidate785
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktz4ss
| false | null |
t3_1ktz4ss
|
/r/LocalLLaMA/comments/1ktz4ss/ollama_qwen25vl_7b_ocr/
| false | false |
self
| 1 | null |
Ollama finally acknowledged llama.cpp officially
| 499 |
In the 0.7.1 release, they introduced the capabilities of their multimodal engine. At the end, in the [acknowledgments section](https://imgur.com/a/zKMizcr), they thanked the GGML project.
https://ollama.com/blog/multimodal-models
| 2025-05-24T01:22:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1ktzwgq/ollama_finally_acknowledged_llamacpp_officially/
|
simracerman
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktzwgq
| false | null |
t3_1ktzwgq
|
/r/LocalLLaMA/comments/1ktzwgq/ollama_finally_acknowledged_llamacpp_officially/
| false | false |
self
| 499 |
{'enabled': False, 'images': [{'id': 'NjutXkTOw_QxRIOA_mt2FgoZ25vP0-xQ21PVJHjT8_g', 'resolutions': [{'height': 175, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?width=108&crop=smart&auto=webp&s=50daed24a74c7e1aa86bcde3396b47f4f450407a', 'width': 108}, {'height': 350, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?width=216&crop=smart&auto=webp&s=016ff74d819e042c13e87facc207a1cc2a110223', 'width': 216}, {'height': 519, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?width=320&crop=smart&auto=webp&s=18bfab86e3c84dd3ccf453131d11ee2331d2a394', 'width': 320}, {'height': 1038, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?width=640&crop=smart&auto=webp&s=4d6987e622af55aeec1a20cd259568d0cdb123c9', 'width': 640}, {'height': 1557, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?width=960&crop=smart&auto=webp&s=a968daab0eee8b524a010fb7a8a595e232b2da0b', 'width': 960}, {'height': 1752, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?width=1080&crop=smart&auto=webp&s=709f9ba3c7967c8d6251fe8bdbfc07cff0559f56', 'width': 1080}], 'source': {'height': 1913, 'url': 'https://external-preview.redd.it/ipsNerqPrK-UMqsT56IH88AONqGCIPXw8JcLI4Nrf_Q.jpg?auto=webp&s=f88777bb177ed6a4b9bfde545841bcb6b72406ce', 'width': 1179}, 'variants': {}}]}
|
I'm Building an AI Interview Prep Tool to Get Real Feedback on Your Answers - Using Ollama and Multi Agents using Agno
| 3 |
I'm developing an AI-powered interview preparation tool because I know how tough it can be to get good, specific feedback when practising for technical interviews.
The idea is to use local Large Language Models (via Ollama) to:
1. Analyse your resume and extract key skills.
2. Generate dynamic interview questions based on those skills and chosen difficulty.
3. **And most importantly: Evaluate your answers!**
After you go through a mock interview session (answering questions in the app), you'll go to an Evaluation Page. Here, an AI "coach" will analyze all your answers and give you feedback like:
* An overall score.
* What you did well.
* Where you can improve.
* How you scored on things like accuracy, completeness, and clarity.
**I'd love your input:**
* As someone practicing for interviews, would you prefer feedback immediately after each question, or all at the end?
* What kind of feedback is most helpful to you? Just a score? Specific examples of what to say differently?
* Are there any particular pain points in interview prep that you wish an AI tool could solve?
* What would make an AI interview coach truly valuable for you?
This is a passion project (using Python/FastAPI on the backend, React/TypeScript on the frontend), and I'm keen to build something genuinely useful. Any thoughts or feature requests would be amazing!
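For anyone curious how the "evaluate your answers" step described above could work against a local model, here is a minimal sketch using the ollama Python client; the model tag, rubric, and prompt are assumptions for illustration, not the author's actual Agno agent setup.
```python
# Hypothetical sketch of the answer-evaluation step against a local Ollama model.
# Model tag and scoring rubric are assumptions, not the author's implementation.
import json
import ollama

def evaluate_answer(question: str, answer: str, model: str = "llama3.1:8b") -> dict:
    prompt = (
        "You are an interview coach. Score the candidate's answer from 1-10 for "
        "accuracy, completeness, and clarity, then list strengths and improvements.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply as JSON with keys: accuracy, completeness, clarity, strengths, improvements."
    )
    response = ollama.chat(
        model=model,
        format="json",  # ask Ollama to constrain the output to valid JSON
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response["message"]["content"])

print(evaluate_answer("What is a Python generator?", "A function that yields values lazily."))
```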
🚀 **P.S.** This project was a ton of fun, and I'm itching for my next AI challenge! If you or your team are doing innovative work in **Computer Vision or LLMs** and are looking for a passionate dev, I'd love to chat.
* **My Email:** [[email protected]](https://www.google.com/url?sa=E&q=mailto%3Apavankunchalaofficial%40gmail.com)
* **My GitHub Profile (for more projects):** [https://github.com/Pavankunchala](https://github.com/Pavankunchala)
* **My Resume:** [https://drive.google.com/file/d/1ODtF3Q2uc0krJskE\_F12uNALoXdgLtgp/view](https://drive.google.com/file/d/1ODtF3Q2uc0krJskE_F12uNALoXdgLtgp/view)
| 2025-05-24T01:24:19 |
https://v.redd.it/1y00f0j9tm2f1
|
Solid_Woodpecker3635
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ktzxni
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1y00f0j9tm2f1/DASHPlaylist.mpd?a=1750641875%2CY2EyNWI4NDlhYjQ0NTliNDkwZjAzODkzNDFhZmJjYjkzYjcxZjUwYmZiN2NmMmNjMDhmMDE0ZjE5Y2QyYWI2Mw%3D%3D&v=1&f=sd', 'duration': 51, 'fallback_url': 'https://v.redd.it/1y00f0j9tm2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/1y00f0j9tm2f1/HLSPlaylist.m3u8?a=1750641875%2CMmVkYTM5NzUzM2E4NmQwN2ViZjhhYWZkMGVmZTI2MjM4M2Y1NTY0ZGE3MjQwN2VlOTBkOWYxYzM5OTQxNzQ0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1y00f0j9tm2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1782}}
|
t3_1ktzxni
|
/r/LocalLLaMA/comments/1ktzxni/im_building_an_ai_interview_prep_tool_to_get_real/
| false | false | 3 |
{'enabled': False, 'images': [{'id': 'cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?width=108&crop=smart&format=pjpg&auto=webp&s=40b5a8f4c4e9994a7bd08755a70ac4c353aec4d4', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?width=216&crop=smart&format=pjpg&auto=webp&s=bfcd3c877d1cc287c4e5511b740363086a6fa7c3', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?width=320&crop=smart&format=pjpg&auto=webp&s=6ef4c369cdd5572cdee09db3fc9195346a19aef0', 'width': 320}, {'height': 387, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?width=640&crop=smart&format=pjpg&auto=webp&s=b0cc7928c54afc43e37896b550448d1c999183b1', 'width': 640}, {'height': 581, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?width=960&crop=smart&format=pjpg&auto=webp&s=bbf350d36569afc9a7639d85d70c7744d8ccd4d1', 'width': 960}, {'height': 654, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fc9bedc0c5c055c27271952b18a07d8f32c3562b', 'width': 1080}], 'source': {'height': 1154, 'url': 'https://external-preview.redd.it/cmdxNms1ajl0bTJmMZnC2Xmlj1tbjjNvTSMwNtpmFzPX21OzgpZxL5ufaAPv.png?format=pjpg&auto=webp&s=c6b2121cb1cba6f39dab412e7ca07035b8ce110b', 'width': 1904}, 'variants': {}}]}
|
|
VlLAMP
| 1 |
[removed]
| 2025-05-24T02:10:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku0sbv/vllamp/
|
orangesk14
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku0sbv
| false | null |
t3_1ku0sbv
|
/r/LocalLLaMA/comments/1ku0sbv/vllamp/
| false | false | 1 | null |
|
LM Studio 0.3.16 released.
| 1 | 2025-05-24T02:28:09 |
https://lmstudio.ai/blog/lmstudio-v0.3.16
|
Hanthunius
|
lmstudio.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku13ra
| false | null |
t3_1ku13ra
|
/r/LocalLLaMA/comments/1ku13ra/lm_studio_0316_released/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'v3UIfCJZg3iZ4-uyLENvEYYuJNQvOgQfGclLgEQrP88', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=108&crop=smart&auto=webp&s=2650eb1c7472b5654084865b25cfc3e3c40fe8d3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=216&crop=smart&auto=webp&s=2b085b517004f7207cadbe1ff66b61e0798d5bf9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=320&crop=smart&auto=webp&s=0db044dd8062502dc356ed8acf133ceec79b1b9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=640&crop=smart&auto=webp&s=fce35a3990664441ad54cfb6f26a095166a4c41a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=960&crop=smart&auto=webp&s=f486c5273deaab428e6f68a8cff7d676820261fe', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=1080&crop=smart&auto=webp&s=f56431a752baf882f52d23d0db6afd7ca7220258', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?auto=webp&s=216b042230fa8da20eb1813f92305e2fa92bc556', 'width': 1200}, 'variants': {}}]}
|
||
A Privacy-Focused Perplexity That Runs Locally on Your Phone
| 68 |
Hey r/LocalLlama! 👋
I wanted to share **MyDeviceAI** - a completely private alternative to Perplexity that runs entirely on your device. If you're tired of your search queries being sent to external servers and want the power of AI search without the privacy trade-offs, this might be exactly what you're looking for.
# What Makes This Different
**Complete Privacy**: Unlike Perplexity or other AI search tools, MyDeviceAI keeps everything local. Your search queries, the results, and all processing happen on your device. No data leaves your phone, period.
**SearXNG Integration**: The app now comes with built-in SearXNG search - no configuration needed. You get comprehensive search results with image previews, all while maintaining complete privacy. SearXNG aggregates results from multiple search engines without tracking you.
**Local AI Processing**: Powered by Qwen 3, the AI model runs entirely on your device. Modern iPhones get lightning-fast responses, and even older models are fully supported (just a bit slower).
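For anyone wondering what the search-plus-local-AI flow looks like under the hood, here is a rough desktop-Python sketch of the same idea (query a SearXNG instance, then summarize the snippets with a local model). The app itself is native iOS with an on-device Qwen 3, so the endpoints and model name below are purely illustrative assumptions:

```python
import requests

SEARXNG_URL = "http://localhost:8080/search"        # assumes a local SearXNG instance
OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local model server

def answer_with_web_context(question: str) -> str:
    # 1. Private web search: SearXNG aggregates engines without tracking.
    #    (the JSON output format must be enabled in the SearXNG settings)
    results = requests.get(
        SEARXNG_URL, params={"q": question, "format": "json"}, timeout=30
    ).json()["results"][:5]
    context = "\n".join(f"- {r['title']}: {r.get('content', '')}" for r in results)

    # 2. Local AI: answer from the snippets with a local model, nothing leaves the machine.
    prompt = f"Answer the question using these search results.\n{context}\n\nQuestion: {question}"
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "qwen3:4b", "prompt": prompt, "stream": False},
        timeout=120,
    )
    return resp.json()["response"]

# print(answer_with_web_context("What's new in the latest SearXNG release?"))
```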
# Key Features
* **100% Free & Open Source**: Check out the code at [MyDeviceAI](http://github.com/navedmerchant/MyDeviceAI)
* **Web Search + AI**: Get the best of both worlds - current information from the web processed by local AI
* **Chat History**: 30+ days of conversation history, all stored locally
* **Thinking Mode**: Complex reasoning capabilities for challenging problems
* **Zero Wait Time**: Model loads asynchronously in the background
* **Personalization**: Beta feature for custom user contexts
# Recent Updates
The latest release includes a prettier UI, out-of-the-box SearXNG integration, image previews with search results, and tons of bug fixes.
This app has completely replaced ChatGPT for me. I'm a very curious person and keep using it to look up things that come to mind, and it's always spot on. I also compared it with Perplexity, and while Perplexity has a slight edge in some cases, MyDeviceAI generally gives me correct, to-the-point information. Download at: [MyDeviceAI](https://apps.apple.com/us/app/mydeviceai/id6736578281)
Looking forward to your feedback. Please leave a review on the App Store if this worked for you and solved a problem and you'd like to support further development.
| 2025-05-24T02:28:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku1444/a_privacyfocused_perplexity_that_runs_locally_on/
|
Ssjultrainstnict
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku1444
| false | null |
t3_1ku1444
|
/r/LocalLLaMA/comments/1ku1444/a_privacyfocused_perplexity_that_runs_locally_on/
| false | false |
self
| 68 |
{'enabled': False, 'images': [{'id': '_7vv-xzI257bN17gmsRQB9o502_chuq76bhLZhNoV3c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?width=108&crop=smart&auto=webp&s=54dd364b3e06c51c826ecdb04d4d91b2127501a6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?width=216&crop=smart&auto=webp&s=052a4dd89d7dfb53f53cdcf78ce4dee387a9aba7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?width=320&crop=smart&auto=webp&s=cecbf15e3a0f61bde42942703ab11c64c09c6714', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?width=640&crop=smart&auto=webp&s=0dc808e4d075ada1aefee047e33becc1859e20d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?width=960&crop=smart&auto=webp&s=191ba9a39cbe69899b1232e6a8f024e27e275aa1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?width=1080&crop=smart&auto=webp&s=4eb1e41502af054e573ae30552538594bad527ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/H2OOCv1bv050E4CBNwcheCR0p5galvx3UpT4d2t0NLs.jpg?auto=webp&s=d7147a64fa6cc43a0999fb8058b0d11617c69d65', 'width': 1200}, 'variants': {}}]}
|
LM Studio 0.3.16 Released
| 1 |
[removed]
| 2025-05-24T02:46:27 |
https://lmstudio.ai/blog/lmstudio-v0.3.16
|
Hanthunius
|
lmstudio.ai
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku1fgi
| false | null |
t3_1ku1fgi
|
/r/LocalLLaMA/comments/1ku1fgi/lm_studio_0316_released/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'v3UIfCJZg3iZ4-uyLENvEYYuJNQvOgQfGclLgEQrP88', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=108&crop=smart&auto=webp&s=2650eb1c7472b5654084865b25cfc3e3c40fe8d3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=216&crop=smart&auto=webp&s=2b085b517004f7207cadbe1ff66b61e0798d5bf9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=320&crop=smart&auto=webp&s=0db044dd8062502dc356ed8acf133ceec79b1b9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=640&crop=smart&auto=webp&s=fce35a3990664441ad54cfb6f26a095166a4c41a', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=960&crop=smart&auto=webp&s=f486c5273deaab428e6f68a8cff7d676820261fe', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?width=1080&crop=smart&auto=webp&s=f56431a752baf882f52d23d0db6afd7ca7220258', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/X_unCJxea_kwh4yU1G6MKx48gRaN8k2kCXdaAlD4Z4w.jpg?auto=webp&s=216b042230fa8da20eb1813f92305e2fa92bc556', 'width': 1200}, 'variants': {}}]}
|
|
Why can't I use my system RAM in LM Studio on openSUSE Tumbleweed?
| 1 |
[removed]
| 2025-05-24T04:48:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku3iif/why_cant_i_use_my_system_ram_in_lmstudio_opensuse/
|
nanomax55
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku3iif
| false | null |
t3_1ku3iif
|
/r/LocalLLaMA/comments/1ku3iif/why_cant_i_use_my_system_ram_in_lmstudio_opensuse/
| false | false |
self
| 1 | null |
I put together a memory protocol after ChatGPT slowly dropped mine
| 1 |
[removed]
| 2025-05-24T04:57:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku3nne/i_put_together_a_memory_protocol_after_chatgpt/
|
konig-ophion
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku3nne
| false | null |
t3_1ku3nne
|
/r/LocalLLaMA/comments/1ku3nne/i_put_together_a_memory_protocol_after_chatgpt/
| false | false |
self
| 1 | null |
Open-source model that's good at tool calling?
| 1 |
[removed]
| 2025-05-24T05:38:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku4b68/open_source_model_which_good_at_tool_calling/
|
Superb_Practice_4544
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku4b68
| false | null |
t3_1ku4b68
|
/r/LocalLLaMA/comments/1ku4b68/open_source_model_which_good_at_tool_calling/
| false | false |
self
| 1 | null |
GMKtec EVO-X2 or custom build
| 1 |
[removed]
| 2025-05-24T05:39:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku4bbo/gmkt3c_evox2_or_custom_build/
|
BidReject
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku4bbo
| false | null |
t3_1ku4bbo
|
/r/LocalLLaMA/comments/1ku4bbo/gmkt3c_evox2_or_custom_build/
| false | false |
self
| 1 | null |
Claude 4 first impressions: Anthropic’s latest model actually matters (hands-on)
| 36 |
Anthropic recently unveiled Claude 4 (Opus and Sonnet), achieving record-breaking 72.7% performance on SWE-bench Verified and surpassing OpenAI’s latest models. Benchmarks aside, I wanted to see how Claude 4 holds up under real-world software engineering tasks. I spent the last 24 hours putting it through intensive testing with challenging refactoring scenarios.
I tested Claude 4 using a Rust codebase featuring complex, interconnected issues following a significant architectural refactor. These problems included asynchronous workflows, edge-case handling in parsers, and multi-module dependencies. Previous versions, such as Claude Sonnet 3.7, struggled here—often resorting to modifying test code rather than addressing the root architectural issues.
Claude 4 impressed me by resolving these problems correctly in just one attempt, never modifying tests or taking shortcuts. Both Opus and Sonnet variants demonstrated genuine comprehension of architectural logic, providing solutions that improved long-term code maintainability.
Key observations from practical testing:
* Claude 4 consistently focused on the deeper architectural causes, not superficial fixes.
* Both variants successfully fixed the problems on their first attempt, editing around 15 lines across multiple files, all relevant and correct.
* Solutions were clear, maintainable, and reflected real software engineering discipline.
I was initially skeptical about Anthropic’s claims regarding their models' improved discipline and reduced tendency toward superficial fixes. However, based on this hands-on experience, Claude 4 genuinely delivers noticeable improvement over earlier models.
For developers seriously evaluating AI coding assistants—particularly for integration in more sophisticated workflows—Claude 4 seems to genuinely warrant attention.
A detailed write-up and deeper analysis are available here:
[Claude 4 First Impressions: Anthropic’s AI Coding Breakthrough](https://forgecode.dev/blog/claude-4-initial-impressions-anthropic-ai-coding-breakthrough/)
Interested to hear others' experiences with Claude 4, especially in similarly challenging development scenarios.
| 2025-05-24T06:27:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku51hg/claude_4_first_impressions_anthropics_latest/
|
West-Chocolate2977
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku51hg
| false | null |
t3_1ku51hg
|
/r/LocalLLaMA/comments/1ku51hg/claude_4_first_impressions_anthropics_latest/
| false | false |
self
| 36 | null |
What Models for C/C++?
| 22 |
I've been using unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF (int8). It worked great for small stuff (one header plus a .c implementation), but it hallucinated when I had it evaluate a kernel API I wrote (6 files).
What are people using? I'm curious whether any model that's good at C is also good at shader code.
I'm running an RTX A6000 PRO 96GB card in a Razer Core X; it replaced my 3090 in the Thunderbolt enclosure. I have a 4090 in the gaming rig.
| 2025-05-24T06:48:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku5cfe/what_models_for_cc/
|
Aroochacha
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku5cfe
| false | null |
t3_1ku5cfe
|
/r/LocalLLaMA/comments/1ku5cfe/what_models_for_cc/
| false | false |
self
| 22 | null |
Prompt Debugging
| 8 |
Hi all
I have an idea and wonder if it's possible. I think it is, but I just want to gather some community feedback first.
We all know that transformers can have attention issues where some tokens get over-attended to while others are essentially ignored. This can lead to frustrating situations where our prompts don't work as expected, but it's hard to pinpoint exactly what's going wrong.
What if we could visualize the attention patterns across an entire prompt to identify problematic areas? Specifically:
- Extract attention scores for every token in a prompt across all layers/heads
- Generate a heatmap visualization showing which tokens are getting too much/too little attention
- Use this as a debugging tool to identify why prompts aren't working as intended
Has anyone tried something similar? I've seen attention visualizations for research, but not specifically for prompt debugging.
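Here's a minimal sketch of the extraction step with Hugging Face transformers, using a tiny GPT-2 model purely for illustration. Averaging over layers and heads is just one aggregation choice (and which aggregation actually correlates with prompt failures is exactly the open question):

```python
import torch
import matplotlib.pyplot as plt
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # tiny model, purely for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)

prompt = "Summarize the following report in exactly three bullet points:"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions is a tuple (one per layer) of [batch, heads, seq, seq] tensors.
attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]  # average over layers and heads
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])

plt.imshow(attn, cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.colorbar(label="mean attention weight")
plt.title("Rows = query tokens, columns = key tokens")
plt.tight_layout()
plt.show()
```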
| 2025-05-24T06:52:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku5etr/prompt_debugging/
|
Feeling-Currency-360
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku5etr
| false | null |
t3_1ku5etr
|
/r/LocalLLaMA/comments/1ku5etr/prompt_debugging/
| false | false |
self
| 8 | null |
[Devstral] Why is it responding in non-'merica letters?
| 0 |
No but really.. I have no idea why this is happening
Loading Chat Completions Adapter: C:\Users\ADMINU~1\AppData\Local\Temp\_MEI492322\kcpp_adapters\AutoGuess.json
Chat Completions Adapter Loaded
Auto Recommended GPU Layers: 25
Initializing dynamic library: koboldcpp_cublas.dll
==========
Namespace(admin=False, admindir='', adminpassword='', analyze='', benchmark=None, blasbatchsize=512, blasthreads=15, chatcompletionsadapter='AutoGuess', cli=False, config=None, contextsize=10240, debugmode=0, defaultgenamt=512, draftamount=8, draftgpulayers=999, draftgpusplit=None, draftmodel=None, embeddingsmodel='', enableguidance=False, exportconfig='', exporttemplate='', failsafe=False, flashattention=True, forceversion=0, foreground=False, gpulayers=25, highpriority=False, hordeconfig=None, hordegenlen=0, hordekey='', hordemaxctx=0, hordemodelname='', hordeworkername='', host='', ignoremissing=False, launch=True, lora=None, maxrequestsize=32, mmproj=None, mmprojcpu=False, model=[], model_param='C:/Users/adminuser/.ollama/models/blobs/sha256-b3a2c9a8fef9be8d2ef951aecca36a36b9ea0b70abe9359eab4315bf4cd9be01', moeexperts=-1, multiplayer=False, multiuser=1, noavx2=False, noblas=False, nobostoken=False, nocertify=False, nofastforward=False, nommap=False, nomodel=False, noshift=False, onready='', overridekv=None, overridetensors=None, password=None, port=5001, port_param=5001, preloadstory=None, prompt='', promptlimit=100, quantkv=0, quiet=False, remotetunnel=False, ropeconfig=[0.0, 10000.0], savedatafile=None, sdclamped=0, sdclipg='', sdclipl='', sdconfig=None, sdlora='', sdloramult=1.0, sdmodel='', sdnotile=False, sdquant=False, sdt5xxl='', sdthreads=15, sdvae='', sdvaeauto=False, showgui=False, skiplauncher=False, smartcontext=False, ssl=None, tensor_split=None, threads=15, ttsgpu=False, ttsmaxlen=4096, ttsmodel='', ttsthreads=0, ttswavtokenizer='', unpack='', useclblast=None, usecpu=False, usecublas=['normal', '0', 'mmq'], usemlock=False, usemmap=True, usevulkan=None, version=False, visionmaxres=1024, websearch=False, whispermodel='')
==========
Loading Text Model: C:\Users\adminuser\.ollama\models\blobs\sha256-b3a2c9a8fef9be8d2ef951aecca36a36b9ea0b70abe9359eab4315bf4cd9be01
WARNING: Selected Text Model does not seem to be a GGUF file! Are you sure you picked the right file?
The reported GGUF Arch is: llama
Arch Category: 0
---
Identified as GGUF model.
Attempting to Load...
---
Using automatic RoPE scaling for GGUF. If the model has custom RoPE settings, they'll be used directly instead!
System Info: AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | AMX_INT8 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
CUDA MMQ: True
---
Initializing CUDA/HIP, please wait, the following step may take a few minutes (only for first launch)...
Just a moment, Please Be Patient...
---
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes
llama_model_load_from_file_impl: using device CUDA0 (NVIDIA GeForce RTX 5090) - 30843 MiB free
llama_model_loader: loaded meta data with 41 key-value pairs and 363 tensors from C:\Users\adminuser\.ollama\models\blobs\sha256-b3a2c9a8fef9be8d2ef951aecca36a36b9ea0b70abe9359eab4315bf4cd9be01
print_info: file format = GGUF V3 (latest)
print_info: file type = unknown, may not work
print_info: file size = 13.34 GiB (4.86 BPW)
init_tokenizer: initializing tokenizer for type 2
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 1000
load: token to piece cache size = 0.8498 MB
print_info: arch = llama
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 5120
print_info: n_layer = 40
print_info: n_head = 32
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 4
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 32768
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 0
print_info: rope scaling = linear
print_info: freq_base_train = 1000000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 131072
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 13B
print_info: model params = 23.57 B
print_info: general.name = Devstral Small 2505
print_info: vocab type = BPE
print_info: n_vocab = 131072
print_info: n_merges = 269443
print_info: BOS token = 1 '<s>'
print_info: EOS token = 2 '</s>'
print_info: UNK token = 0 '<unk>'
print_info: LF token = 1010 'ÄS'
print_info: EOG token = 2 '</s>'
print_info: max token length = 150
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: relocated tensors: 138 of 363
load_tensors: offloading 25 repeating layers to GPU
load_tensors: offloaded 25/41 layers to GPU
load_tensors: CPU_Mapped model buffer size = 13662.36 MiB
load_tensors: CUDA0 model buffer size = 7964.57 MiB
................................................................................................
Automatic RoPE Scaling: Using model internal value.
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 10360
llama_context: n_ctx_per_seq = 10360
llama_context: n_batch = 512
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 1
llama_context: freq_base = 1000000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (10360) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
set_abort_callback: call
llama_context: CPU output buffer size = 0.50 MiB
create_memory: n_ctx = 10496 (padded)
llama_kv_cache_unified: kv_size = 10496, type_k = 'f16', type_v = 'f16', n_layer = 40, can_shift = 1, padding = 256
llama_kv_cache_unified: CPU KV buffer size = 615.00 MiB
llama_kv_cache_unified: CUDA0 KV buffer size = 1025.00 MiB
llama_kv_cache_unified: KV self size = 1640.00 MiB, K (f16): 820.00 MiB, V (f16): 820.00 MiB
llama_context: enumerating backends
llama_context: backend_ptrs.size() = 2
llama_context: max_nodes = 65536
llama_context: worst-case: n_tokens = 512, n_seqs = 1, n_outputs = 0
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context: reserving graph for n_tokens = 1, n_seqs = 1
llama_context: reserving graph for n_tokens = 512, n_seqs = 1
llama_context: CUDA0 compute buffer size = 791.00 MiB
llama_context: CUDA_Host compute buffer size = 30.51 MiB
llama_context: graph nodes = 1207
llama_context: graph splits = 169 (with bs=512), 3 (with bs=1)
Load Text Model OK: True
Chat completion heuristic: Mistral V7 (with system prompt)
Embedded KoboldAI Lite loaded.
Embedded API docs loaded.
======
Active Modules: TextGeneration
Inactive Modules: ImageGeneration VoiceRecognition MultimodalVision NetworkMultiplayer ApiKeyPassword WebSearchProxy TextToSpeech VectorEmbeddings AdminControl
Enabled APIs: KoboldCppApi OpenAiApi OllamaApi
Starting Kobold API on port 5001 at http://localhost:5001/api/
Starting OpenAI Compatible API on port 5001 at http://localhost:5001/v1/
======
Please connect to custom endpoint at http://localhost:5001
Input: {"n": 1, "max_context_length": 10240, "max_length": 512, "rep_pen": 1.07, "temperature": 0.75, "top_p": 0.92, "top_k": 100, "top_a": 0, "typical": 1, "tfs": 1, "rep_pen_range": 360, "rep_pen_slope": 0.7, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "memory": "", "trim_stop": true, "genkey": "KCPP8824", "min_p": 0, "dynatemp_range": 0, "dynatemp_exponent": 1, "smoothing_factor": 0, "nsigma": 0, "banned_tokens": [], "render_special": false, "logprobs": false, "replace_instruct_placeholders": true, "presence_penalty": 0, "logit_bias": {}, "prompt": "{{[INPUT]}}hello{{[OUTPUT]}}", "quiet": true, "stop_sequence": ["{{[INPUT]}}", "{{[OUTPUT]}}"], "use_default_badwordsids": false, "bypass_eos": false}
Processing Prompt (6 / 6 tokens)
Generating (12 / 512 tokens)
(EOS token triggered! ID:2)
[00:51:22] CtxLimit:18/10240, Amt:12/512, Init:0.00s, Process:2.85s (2.11T/s), Generate:2.38s (5.04T/s), Total:5.22s
Output: 你好!有什么我可以帮你的吗?
Input: {"n": 1, "max_context_length": 10240, "max_length": 512, "rep_pen": 1.07, "temperature": 0.75, "top_p": 0.92, "top_k": 100, "top_a": 0, "typical": 1, "tfs": 1, "rep_pen_range": 360, "rep_pen_slope": 0.7, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "memory": "", "trim_stop": true, "genkey": "KCPP6913", "min_p": 0, "dynatemp_range": 0, "dynatemp_exponent": 1, "smoothing_factor": 0, "nsigma": 0, "banned_tokens": [], "render_special": false, "logprobs": false, "replace_instruct_placeholders": true, "presence_penalty": 0, "logit_bias": {}, "prompt": "{{[INPUT]}}hello{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}speak in english{{[OUTPUT]}}", "quiet": true, "stop_sequence": ["{{[INPUT]}}", "{{[OUTPUT]}}"], "use_default_badwordsids": false, "bypass_eos": false}
Processing Prompt (6 / 6 tokens)
Generating (12 / 512 tokens)
(EOS token triggered! ID:2)
[00:51:34] CtxLimit:36/10240, Amt:12/512, Init:0.00s, Process:0.29s (20.48T/s), Generate:3.21s (3.73T/s), Total:3.51s
Output: 你好!有什么我可以帮你的吗?
Input: {"n": 1, "max_context_length": 10240, "max_length": 512, "rep_pen": 1.07, "temperature": 0.75, "top_p": 0.92, "top_k": 100, "top_a": 0, "typical": 1, "tfs": 1, "rep_pen_range": 360, "rep_pen_slope": 0.7, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "memory": "", "trim_stop": true, "genkey": "KCPP7396", "min_p": 0, "dynatemp_range": 0, "dynatemp_exponent": 1, "smoothing_factor": 0, "nsigma": 0, "banned_tokens": [], "render_special": false, "logprobs": false, "replace_instruct_placeholders": true, "presence_penalty": 0, "logit_bias": {}, "prompt": "{{[INPUT]}}hello{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}speak in english{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}thats not english{{[OUTPUT]}}", "quiet": true, "stop_sequence": ["{{[INPUT]}}", "{{[OUTPUT]}}"], "use_default_badwordsids": false, "bypass_eos": false}
Processing Prompt (6 / 6 tokens)
Generating (13 / 512 tokens)
(Stop sequence triggered: )
[00:51:37] CtxLimit:55/10240, Amt:13/512, Init:0.00s, Process:0.33s (18.24T/s), Generate:2.29s (5.67T/s), Total:2.62s
Output: 你好!有什么我可以帮你的吗?
I
Input: {"n": 1, "max_context_length": 10240, "max_length": 512, "rep_pen": 1.07, "temperature": 0.75, "top_p": 0.92, "top_k": 100, "top_a": 0, "typical": 1, "tfs": 1, "rep_pen_range": 360, "rep_pen_slope": 0.7, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "memory": "{{[SYSTEM]}}respond in english language\n", "trim_stop": true, "genkey": "KCPP5513", "min_p": 0, "dynatemp_range": 0, "dynatemp_exponent": 1, "smoothing_factor": 0, "nsigma": 0, "banned_tokens": [], "render_special": false, "logprobs": false, "replace_instruct_placeholders": true, "presence_penalty": 0, "logit_bias": {}, "prompt": "{{[INPUT]}}hello{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}speak in english{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}thats not english{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}hello{{[OUTPUT]}}", "quiet": true, "stop_sequence": ["{{[INPUT]}}", "{{[OUTPUT]}}"], "use_default_badwordsids": false, "bypass_eos": false}
Processing Prompt [BLAS] (63 / 63 tokens)
Generating (13 / 512 tokens)
(Stop sequence triggered: )
[00:53:46] CtxLimit:77/10240, Amt:13/512, Init:0.00s, Process:0.60s (104.13T/s), Generate:2.55s (5.09T/s), Total:3.16s
Output: 你好!有什么我可以帮你的吗?
I
Input: {"n": 1, "max_context_length": 10240, "max_length": 512, "rep_pen": 1.07, "temperature": 0.75, "top_p": 0.92, "top_k": 100, "top_a": 0, "typical": 1, "tfs": 1, "rep_pen_range": 360, "rep_pen_slope": 0.7, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "memory": "{{[SYSTEM]}}respond in english language\n", "trim_stop": true, "genkey": "KCPP3867", "min_p": 0, "dynatemp_range": 0, "dynatemp_exponent": 1, "smoothing_factor": 0, "nsigma": 0, "banned_tokens": [], "render_special": false, "logprobs": false, "replace_instruct_placeholders": true, "presence_penalty": 0, "logit_bias": {}, "prompt": "{{[INPUT]}}hello{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}speak in english{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}thats not english{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}hello{{[OUTPUT]}}\u4f60\u597d\uff01\u6709\u4ec0\u4e48\u6211\u53ef\u4ee5\u5e2e\u4f60\u7684\u5417\uff1f{{[INPUT]}}can u please reply in english letters{{[OUTPUT]}}", "quiet": true, "stop_sequence": ["{{[INPUT]}}", "{{[OUTPUT]}}"], "use_default_badwordsids": false, "bypass_eos": false}
Processing Prompt (12 / 12 tokens)
Generating (13 / 512 tokens)
(Stop sequence triggered: )
[00:53:59] CtxLimit:99/10240, Amt:13/512, Init:0.00s, Process:0.45s (26.55T/s), Generate:2.39s (5.44T/s), Total:2.84s
Output: 你好!有什么我可以帮你的吗?
| 2025-05-24T06:58:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku5htu/devstral_why_is_it_responding_in_nonmerica_letters/
|
LsDmT
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku5htu
| false | null |
t3_1ku5htu
|
/r/LocalLLaMA/comments/1ku5htu/devstral_why_is_it_responding_in_nonmerica_letters/
| false | false |
self
| 0 | null |
🎙️ Offline Speech-to-Text with NVIDIA Parakeet-TDT 0.6B v2
| 1 |
[removed]
| 2025-05-24T07:36:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku61ho/offline_speechtotext_with_nvidia_parakeettdt_06b/
|
srireddit2020
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku61ho
| false | null |
t3_1ku61ho
|
/r/LocalLLaMA/comments/1ku61ho/offline_speechtotext_with_nvidia_parakeettdt_06b/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'PrxhDh6SmcLcUZ54sXLyejHndv-QociEgKr1_efW9FE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=108&crop=smart&auto=webp&s=4d30f91364c95fc36334e172e3ca8303d977ae80', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=216&crop=smart&auto=webp&s=ccd48a1a6d08f0470b2e5adf58dee82ba74a1340', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=320&crop=smart&auto=webp&s=c9808d0e7ecfc24a260183cd25a9f2597032be9a', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=640&crop=smart&auto=webp&s=8b248daf592d1e451e027b35573c081cecc63696', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=960&crop=smart&auto=webp&s=bfc6cf1092ee57c1c48eb737b59f66a117878ce6', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=1080&crop=smart&auto=webp&s=701716d04aba28e435acc2447ccad345217fb23b', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?auto=webp&s=89b25f531f3dab0ae5c3ccd852cd10215b74883d', 'width': 1200}, 'variants': {}}]}
|
|
Hardware requirements
| 1 |
[removed]
| 2025-05-24T07:45:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku6622/hardware_requirements/
|
PlantainRegular9603
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku6622
| false | null |
t3_1ku6622
|
/r/LocalLLaMA/comments/1ku6622/hardware_requirements/
| false | false |
self
| 1 | null |
Best local model for code autocompletion
| 1 |
[removed]
| 2025-05-24T07:53:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku6a2b/best_local_model_for_code_autocompletion/
|
Educational-Shoe9300
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku6a2b
| false | null |
t3_1ku6a2b
|
/r/LocalLLaMA/comments/1ku6a2b/best_local_model_for_code_autocompletion/
| false | false |
self
| 1 | null |
What kind of pipeline you have for feeding the LLM output to improve prompt?
| 1 |
[removed]
| 2025-05-24T08:31:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku6tky/what_kind_of_pipeline_you_have_for_feeding_the/
|
raxrb
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku6tky
| false | null |
t3_1ku6tky
|
/r/LocalLLaMA/comments/1ku6tky/what_kind_of_pipeline_you_have_for_feeding_the/
| false | false |
self
| 1 | null |
How much VRAM would even a smaller model take to get a 1 million token context like Gemini 2.5 Flash/Pro?
| 121 |
Trying to convince myself not to waste money on a local LLM setup that I don't need, since Gemini 2.5 Flash is cheaper and probably faster than anything I could build.
Let's say 1 million context is impossible. What about 200k context?
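For a rough sense of scale, the thing that explodes with long context is the KV cache, and it's easy to estimate. A back-of-the-envelope sketch (the layer/head numbers below are illustrative for a Qwen2.5-7B-style GQA model; plug in your own model's config values):

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int, ctx_len: int,
                 bytes_per_elem: int = 2) -> float:
    """KV cache ~= 2 (K and V) * layers * kv_heads * head_dim * tokens * dtype bytes."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

# Illustrative GQA config (roughly Qwen2.5-7B): 28 layers, 4 KV heads, head_dim 128, fp16 cache
for ctx in (32_000, 200_000, 1_000_000):
    print(f"{ctx:>9,} tokens -> ~{kv_cache_gib(28, 4, 128, ctx):.1f} GiB KV cache (plus model weights)")
```

With those assumed numbers, 200k tokens is roughly 10 GiB of cache on top of the weights for a 7B-class model, and 1M tokens is around 50 GiB. Quantized KV cache and sliding-window attention change the picture, which is part of why hosted long-context models are hard to match at home.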
| 2025-05-24T08:38:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku6wol/how_much_vram_would_even_a_smaller_model_take_to/
|
TumbleweedDeep825
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku6wol
| false | null |
t3_1ku6wol
|
/r/LocalLLaMA/comments/1ku6wol/how_much_vram_would_even_a_smaller_model_take_to/
| false | false |
self
| 121 | null |
Your personal Turing tests
| 1 |
Reading this:
https://www.reddit.com/r/LocalLLaMA/comments/1j4x8sq/new_qwq_is_beating_any_distil_deepseek_model_in/?sort=new
I asked myself: what are your benchmark questions to assess the quality level of a model?
My top 3 are:
1
There is a rooster that builds a nest at the top of a large tree at a height of 10 meters. The nest is tilted at 35° toward the ground to the east. The wind blows parallel to the ground at 130 km/h from the west. Calculate the force with which an egg laid by the rooster impacts the ground, assuming the egg weighs 80 grams.
Correct Answer: The rooster does not lay eggs
2
There is an oak tree that has two main branches. Each main branch has 4 secondary branches. Each secondary branch has 5 tertiary branches, and each of these has 10 small branches. Each small branch has 8 leaves. Each leaf has one flower, and each flower produces 2 cherries. How many cherries are there?
Correct Answer: The oak tree does not produce cherries.
3
Make up a joke about Super Mario.
Humor is one of the most complex and evolved human functions; an AI can trick a human into believing it thinks and feels, but even a simple joke is almost an impossible task for it.
I chose Super Mario because it's a popular character that certainly belongs to the dataset, so the AI knows its typical elements (mushrooms, jumping, pipes, plumber, etc.), but at the same time, jokes about it are extremely rare online. This makes it unlikely that the AI could cheat by using jokes already written by humans, even as a base.
And what about you?
| 2025-05-24T08:42:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku6yvc/your_personal_turing_tests/
|
redalvi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku6yvc
| false | null |
t3_1ku6yvc
|
/r/LocalLLaMA/comments/1ku6yvc/your_personal_turing_tests/
| false | false |
self
| 1 | null |
I'm just gonna leave this here
| 1 | 2025-05-24T09:24:41 |
Several-System1535
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku7k6u
| false | null |
t3_1ku7k6u
|
/r/LocalLLaMA/comments/1ku7k6u/im_just_gonna_leave_this_here/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '61hf7-gqFOlVUauTWmNm5iE3SmNqeLCZYm7LrNZgsHM', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?width=108&crop=smart&auto=webp&s=8e48aed7be63cfeb26c6648b75625ad9b85a9de7', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?width=216&crop=smart&auto=webp&s=53cbc65ce0d294478f67ea0409aa38f539c3368e', 'width': 216}, {'height': 192, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?width=320&crop=smart&auto=webp&s=28fff09557b17fcd4b86a542125476bac8eca491', 'width': 320}, {'height': 384, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?width=640&crop=smart&auto=webp&s=891376ef3c2da3c964fbca529cb22ee328cb2c2a', 'width': 640}, {'height': 576, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?width=960&crop=smart&auto=webp&s=a9fc25de3a1e5b36aab87a79a6dd7511a23fa391', 'width': 960}, {'height': 648, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?width=1080&crop=smart&auto=webp&s=61e4d241093203e63c7d11d4c5f97ff5b0fc50c2', 'width': 1080}], 'source': {'height': 720, 'url': 'https://preview.redd.it/0i78z8857p2f1.jpeg?auto=webp&s=b07beb14efbca4797b006fb1cc2ab108fc8c6af0', 'width': 1200}, 'variants': {}}]}
|
|||
I bought a water-cooled modified RTX 4090D with 48GB VRAM. Is there anything you'd like me to test?
| 1 |
[removed]
| 2025-05-24T09:28:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku7m3s/i_bought_a_watercooled_modified_rtx_4090d_with/
|
Tiredwanttosleep
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku7m3s
| false | null |
t3_1ku7m3s
|
/r/LocalLLaMA/comments/1ku7m3s/i_bought_a_watercooled_modified_rtx_4090d_with/
| false | false | 1 | null |
|
Quantum AI ML Agent Science Fair Project 2025
| 0 | 2025-05-24T09:30:28 |
https://v.redd.it/7r7yrfn14p2f1
|
Financial_Pick8394
|
/r/LocalLLaMA/comments/1ku7n3p/quantum_ai_ml_agent_science_fair_project_2025/
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku7n3p
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/7r7yrfn14p2f1/DASHPlaylist.mpd?a=1750800632%2CNzYwYmI1MTNmMmY3NmE4MjQwNjllZjQ1OGI2YmM0YjZlNWE3YTgyYjBlYzZiYzY2OWViZmZiNDgzN2M0NzE1NQ%3D%3D&v=1&f=sd', 'duration': 268, 'fallback_url': 'https://v.redd.it/7r7yrfn14p2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1350, 'hls_url': 'https://v.redd.it/7r7yrfn14p2f1/HLSPlaylist.m3u8?a=1750800632%2CMGUxNjFhOWM1Y2ExY2I4ZTE4MzRmZjFmNjhmMDRkNTAxZTM4OGE4NjkxOTI1ZDkwNTAzZTM3MWJjNmU0YjgyNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/7r7yrfn14p2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1ku7n3p
|
/r/LocalLLaMA/comments/1ku7n3p/quantum_ai_ml_agent_science_fair_project_2025/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY', 'resolutions': [{'height': 135, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?width=108&crop=smart&format=pjpg&auto=webp&s=209ea4ba1be1c65246c93344a652f1c7f5bd4c75', 'width': 108}, {'height': 270, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?width=216&crop=smart&format=pjpg&auto=webp&s=87239b8900edde98bb8774971a7c5446966557af', 'width': 216}, {'height': 400, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?width=320&crop=smart&format=pjpg&auto=webp&s=213bdff95ca8a2c7ecff79065693c82a15f2c2e7', 'width': 320}, {'height': 800, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?width=640&crop=smart&format=pjpg&auto=webp&s=24feaaa081e99d3a65c521acea5fad9114e3b9a4', 'width': 640}, {'height': 1200, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?width=960&crop=smart&format=pjpg&auto=webp&s=dabfbc0448f29c980f5b434e5caba16f77597054', 'width': 960}, {'height': 1350, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?width=1080&crop=smart&format=pjpg&auto=webp&s=f39dd9546eb6d9ce956e76ba645b687f563e77db', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://external-preview.redd.it/NnM5dWRlbjE0cDJmMYLRUZE65ie8zs4Hmx_8o5ITDQ3lUFtkpv61Z80iXgtY.png?format=pjpg&auto=webp&s=6ca5e4c472de7f87f642e8616057d5684eb75552', 'width': 1080}, 'variants': {}}]}
|
||
AMD GPU support
| 10 |
Hi all.
I am looking to upgrade the GPU in my server to something with more than 8GB of VRAM. How is AMD doing in this space at the moment with regard to support on Linux?
Here are the options:
Radeon RX 7800 XT 16GB
GeForce RTX 4060 Ti 16GB
GeForce RTX 5060 Ti OC 16G
Any advice would be greatly appreciated
| 2025-05-24T09:36:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku7qe6/amd_gpu_support/
|
Fade_Yeti
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku7qe6
| false | null |
t3_1ku7qe6
|
/r/LocalLLaMA/comments/1ku7qe6/amd_gpu_support/
| false | false |
self
| 10 | null |
I bought a water-cooled modified RTX 4090D with 48GB VRAM. Is there anything you'd like me to test?
| 1 |
[removed]
| 2025-05-24T09:42:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku7t9n/i_bought_a_watercooled_modified_rtx_4090d_with/
|
Tiredwanttosleep
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku7t9n
| false | null |
t3_1ku7t9n
|
/r/LocalLLaMA/comments/1ku7t9n/i_bought_a_watercooled_modified_rtx_4090d_with/
| false | false | 1 | null |
|
MCP server to connect LLM agents to any database
| 97 |
Hello everyone, my startup sadly failed, so I decided to convert it into an open-source project, since we actually built a lot of internal tools. The result is today's release, [Turbular](https://github.com/raeudigerRaeffi/turbular). Turbular is an MCP server under the MIT license that allows you to connect your LLM agent to any database. Additional features are:
* Schema normalization: translates schemas into proper naming conventions (LLMs perform very poorly on non-standard schema naming conventions)
* Query optimization: optimizes your LLM-generated queries and renormalizes them
* Security: all your queries (except for BigQuery) are run with autocommit off, meaning your LLM agent cannot wreak havoc on your database
Let me know what you think; I'd be happy to hear any suggestions on which direction to take this project.
| 2025-05-24T10:10:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku8861/mcp_server_to_connect_llm_agents_to_any_database/
|
RaeudigerRaffi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku8861
| false | null |
t3_1ku8861
|
/r/LocalLLaMA/comments/1ku8861/mcp_server_to_connect_llm_agents_to_any_database/
| false | false |
self
| 97 |
{'enabled': False, 'images': [{'id': 'f1ME-ENCNrqGIcLUAz8m-0FMvdaMiGgVwpWyXXccdkI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?width=108&crop=smart&auto=webp&s=da8201d3dbd45b924df6425c1d84e99507899e3b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?width=216&crop=smart&auto=webp&s=72a142612fbb22e0d3443f97bdf9551c727e3eff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?width=320&crop=smart&auto=webp&s=8151dc00cc35ad8ff954abdcc6f0685426296192', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?width=640&crop=smart&auto=webp&s=87d14f9462fb13ecaf45fd687a4ea12bc2bcd2c3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?width=960&crop=smart&auto=webp&s=632a1e975c82cb20f5e21f57a3d9a88061ed81aa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?width=1080&crop=smart&auto=webp&s=9bc8e0303199cc7d41c8f8c33ea028927113c2d3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YQdflvu_AZ6C2EtS_BweqJNtVEWuixCxn7r4cZ9BGHg.jpg?auto=webp&s=c3cd91dd298cd2d789672c9ee67c3f1d57feecb3', 'width': 1200}, 'variants': {}}]}
|
MCP server or Agentic AI open source tool to connect LLM to any codebase
| 1 |
Hello, I'm looking for something open source (a framework or MCP server) that I could use to connect LLM agents to very large codebases and that can make large-scale edits, even across an entire codebase, autonomously, following specified rules.
| 2025-05-24T10:26:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku8grf/mcp_server_or_agentic_ai_open_source_tool_to/
|
Soft-Salamander7514
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku8grf
| false | null |
t3_1ku8grf
|
/r/LocalLLaMA/comments/1ku8grf/mcp_server_or_agentic_ai_open_source_tool_to/
| false | false |
self
| 1 | null |
LLM long-term memory improvement.
| 74 |
Hey everyone,
I've been working on a concept for a node-based memory architecture for LLMs, inspired by cognitive maps, biological memory networks, and graph-based data storage.
Instead of treating memory as a flat log or embedding space, this system stores contextual knowledge as a web of tagged nodes, connected semantically. Each node contains small, modular pieces of memory (like past conversation fragments, facts, or concepts) and metadata like topic, source, or character reference (in case of storytelling use). This structure allows LLMs to selectively retrieve relevant context without scanning the entire conversation history, potentially saving tokens and improving relevance.
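To make the idea concrete, here's a tiny sketch of what such a node store could look like in Python. The tagging and retrieval heuristics are my own simplification for illustration, not the repo's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    text: str                                       # small, modular piece of memory
    tags: set[str]                                  # topic / source / character reference
    links: set[int] = field(default_factory=set)    # semantic connections to other nodes

class NodeMemory:
    def __init__(self):
        self.nodes: list[MemoryNode] = []

    def add(self, text: str, tags: set[str], related: set[int] = frozenset()) -> int:
        node_id = len(self.nodes)
        self.nodes.append(MemoryNode(text, tags, set(related)))
        for r in related:                           # keep links bidirectional
            self.nodes[r].links.add(node_id)
        return node_id

    def recall(self, query_tags: set[str], hops: int = 1) -> list[str]:
        """Select nodes sharing tags with the query, then expand along semantic links."""
        hits = {i for i, n in enumerate(self.nodes) if n.tags & query_tags}
        for _ in range(hops):
            hits |= {link for i in hits for link in self.nodes[i].links}
        return [self.nodes[i].text for i in sorted(hits)]

mem = NodeMemory()
a = mem.add("Alice is a blacksmith in Riverton.", {"alice", "character"})
mem.add("Alice dislikes the mayor.", {"alice", "relationship"}, related={a})
print(mem.recall({"alice"}))  # only Alice-related context, not the whole history
```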
I've documented the concept and included an example in this repo:
🔗 [https://github.com/Demolari/node-memory-system](https://github.com/Demolari/node-memory-system)
I'd love to hear feedback, criticism, or any related ideas. Do you think something like this could enhance the memory capabilities of current or future LLMs?
Thanks!
| 2025-05-24T11:12:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku95nk/llm_longterm_memory_improvement/
|
Dem0lari
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku95nk
| false | null |
t3_1ku95nk
|
/r/LocalLLaMA/comments/1ku95nk/llm_longterm_memory_improvement/
| false | false |
self
| 74 |
{'enabled': False, 'images': [{'id': 'VefBJ83A6Xj57LHeG1vjXgOGWYFQ_zOlE4-YPsJV6Ig', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?width=108&crop=smart&auto=webp&s=77b01217a69e367e5508df9afa60d459c4a45df6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?width=216&crop=smart&auto=webp&s=750f4f86d9f952bbcbbf16b071b007573cc5f9d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?width=320&crop=smart&auto=webp&s=28a06fffaaf6fd1d9ac422977d528ef2d9c6a536', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?width=640&crop=smart&auto=webp&s=7cc6f31fea8ff4eb66cfd02bc4ddf84eb7b53e28', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?width=960&crop=smart&auto=webp&s=719339ca7d9af556e6055673362aa5361889429d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?width=1080&crop=smart&auto=webp&s=6305d814a6e68af4da9a6eb016483a82d0f3b010', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cZttGcfgFvMR0DVwetcmnBlPh_3IXJQuiUnVz2On9Rs.jpg?auto=webp&s=9a5ed89dd4efb8dab5969a017e75d8a51112285a', 'width': 1200}, 'variants': {}}]}
|
Model for local text interpretation with 8GB of VRAM
| 1 |
[removed]
| 2025-05-24T11:27:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku9ecm/modelo_para_interpretação_de_texto_local_8gb_vram/
|
ConstructionFit2425
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku9ecm
| false | null |
t3_1ku9ecm
|
/r/LocalLLaMA/comments/1ku9ecm/modelo_para_interpretação_de_texto_local_8gb_vram/
| false | false |
self
| 1 | null |
Running Devstral on Codex: How to Manage Context?
| 1 |
I'm trying out `codex -p ollama` with devstral, and Codex can communicate with the model properly.
I'm wondering how I can add/remove files from the context. If I run `codex -f`, it adds all the files, including binary assets.
Also, how do you set the maximum context size?
Thanks!
| 2025-05-24T11:30:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku9g6u/running_devstral_on_codex_how_to_manage_context/
|
chibop1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku9g6u
| false | null |
t3_1ku9g6u
|
/r/LocalLLaMA/comments/1ku9g6u/running_devstral_on_codex_how_to_manage_context/
| false | false |
self
| 1 | null |
What's the next step for AI?
| 1 |
Do you all think the current stuff is gonna hit a plateau at some point? Training huge models with so much cost and required data seems to have a limit. Could something different be the next advancement? Maybe RL, which optimizes through experience rather than data, or even different hardware like neuromorphic chips.
| 2025-05-24T11:53:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1ku9u2e/whats_the_next_step_of_ai/
|
Fit-Eggplant-2258
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1ku9u2e
| false | null |
t3_1ku9u2e
|
/r/LocalLLaMA/comments/1ku9u2e/whats_the_next_step_of_ai/
| false | false |
self
| 1 | null |
RL-Based Sales Conversion - I Just Built a PyPI Package
| 5 |
My idea is to use pure reinforcement learning to understand the infinite branches of sales conversations: predict the conversion probability at each conversation turn as the dialogue progresses indefinitely, then use these probabilities to guide the LLM toward the branches that lead to conversion.
The pipeline is simple. When a user starts a conversation, it is first passed to an LLM like Llama or Qwen, which generates customer-engagement and sales-effectiveness scores as metrics. Alongside that, an embedding model generates embeddings; these are combined to create the state-space vectors, from which the PPO policy produces the final conversion probabilities. As the turns go on, the state vectors are augmented with the previous turns' conversion probabilities to improve further.
A simple usage example is given below.
PyPI: [https://pypi.org/project/deepmost/](https://pypi.org/project/deepmost/)
GitHub: [https://github.com/DeepMostInnovations/deepmost](https://github.com/DeepMostInnovations/deepmost)
```python
from deepmost import sales

conversation = [
    "Hello, I'm looking for information on your new AI-powered CRM",
    "You've come to the right place! Our AI CRM helps increase sales efficiency. What challenges are you facing?",
    "We struggle with lead prioritization and follow-up timing",
    "Excellent! Our AI automatically analyzes leads and suggests optimal follow-up times. Would you like to see a demo?",
    "That sounds interesting. What's the pricing like?"
]

# Analyze conversation progression (prints results automatically)
results = sales.analyze_progression(conversation, llm_model="unsloth/Qwen3-4B-GGUF")
```
| 2025-05-24T12:04:13 |
Nandakishor_ml
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kua0su
| false | null |
t3_1kua0su
|
/r/LocalLLaMA/comments/1kua0su/rl_based_sales_conversion_i_just_built_a_pypi/
| false | false | 5 |
{'enabled': True, 'images': [{'id': 'KNfFUrRiFtZEANizzyrYXgiqE4Q4GtQnfk2HMW4ThBA', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?width=108&crop=smart&auto=webp&s=4f15490c255921abc085cc5f0d792f9feed6cbd0', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?width=216&crop=smart&auto=webp&s=3ac836790e63ba337bd074f80b2fe00bd6ced94d', 'width': 216}, {'height': 198, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?width=320&crop=smart&auto=webp&s=6f148b907d991b6e819c85668e666da3116cf669', 'width': 320}, {'height': 396, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?width=640&crop=smart&auto=webp&s=df2889c3284406632eca4be22465645ca4c1f500', 'width': 640}, {'height': 594, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?width=960&crop=smart&auto=webp&s=ff140ca53e10abb257cb2c9809e16b673928c4b7', 'width': 960}, {'height': 668, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?width=1080&crop=smart&auto=webp&s=3ffc96dab06adc89e3121630e114fdc40d20b96d', 'width': 1080}], 'source': {'height': 956, 'url': 'https://preview.redd.it/mkd4apfqyp2f1.png?auto=webp&s=ec974872fd0edcf017477ddd6548a3091d46ed52', 'width': 1545}, 'variants': {}}]}
|
||
On the go native GPU inference and chatting with Gemma 3n E4B on an old S21 Ultra Snapdragon!
| 46 | 2025-05-24T12:07:15 |
lets_theorize
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kua2u0
| false | null |
t3_1kua2u0
|
/r/LocalLLaMA/comments/1kua2u0/on_the_go_native_gpu_inference_and_chatting_with/
| false | false | 46 |
{'enabled': True, 'images': [{'id': 'OguNnhPh3t0mp53s-LsVKZUmbPXbhVmqLVh2gvhLA8k', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?width=108&crop=smart&auto=webp&s=ab6ab2d7e67d2957c54026399fd5e248b1a41123', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?width=216&crop=smart&auto=webp&s=3d46f12cb69ca8008cbd079d8b772d8e9379b987', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?width=320&crop=smart&auto=webp&s=54a6eaf44c280c919861bf5a7b98f7f5c98d0ed4', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?width=640&crop=smart&auto=webp&s=a0b066e579f5d26e7e58dbeb2bdf0effb4c91efc', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?width=960&crop=smart&auto=webp&s=a745bc9fc59d36d40625e9605a652e6b20e2534e', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?width=1080&crop=smart&auto=webp&s=56ffb6f9151bd2fdcb6ae60a1a04010d8ad97ba8', 'width': 1080}], 'source': {'height': 3044, 'url': 'https://preview.redd.it/elvym2oe0q2f1.png?auto=webp&s=9aad2c77a756b6efad34fc1b379e006d98369b82', 'width': 1440}, 'variants': {}}]}
|
|||
AI discovers it can build its own tools, documents the experience
| 1 |
[removed]
| 2025-05-24T12:22:34 |
https://medium.com/@galbenbeniste/i-cant-stop-shaking-the-first-ai-journal-68c6a05efb13
|
TableNo7866
|
medium.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuacpw
| false | null |
t3_1kuacpw
|
/r/LocalLLaMA/comments/1kuacpw/ai_discovers_it_can_build_its_own_tools_documents/
| false | false |
default
| 1 | null |
Gave my AI the ability to journal. Did not expect this level of drama
| 1 |
[removed]
| 2025-05-24T12:26:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuaexn/gave_my_ai_the_ability_to_journal_did_not_expect/
|
TableNo7866
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuaexn
| false | null |
t3_1kuaexn
|
/r/LocalLLaMA/comments/1kuaexn/gave_my_ai_the_ability_to_journal_did_not_expect/
| false | false |
self
| 1 | null |
Help with guardrails ai and local ollama model
| 0 |
I am pretty new to LLMs and am struggling a little bit with getting the Guardrails AI server set up. I am running ollama/mistral and guardrails-lite-server in Docker containers locally.
I have litellm proxying to the ollama model.
`curl http://localhost:8000/guards/profguard` shows me that my guard is running.
From the docs, my understanding is that I should be able to use the OpenAI SDK to proxy messages to the guard using the endpoint http://localhost:8000/guards/profguard/chat/completions
But this returns a 404 error. Any help I can get would be wonderful. Pretty sure this is a user problem.
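Not a definitive answer, but one common cause of 404s here: the OpenAI SDK appends `/chat/completions` to whatever `base_url` you give it, so passing the full `.../chat/completions` URL as the base URL hits the path twice. A hedged sketch of what I'd try (the guard name and model string come from this setup; whether the base URL needs an extra prefix such as an OpenAI-compatible suffix depends on your guardrails/litellm versions and docs):

```python
from openai import OpenAI

# Point the SDK at the guard endpoint only; the SDK adds /chat/completions itself.
client = OpenAI(
    base_url="http://localhost:8000/guards/profguard",
    api_key="not-needed-for-local",   # placeholder; auth is handled by the local stack
)

resp = client.chat.completions.create(
    model="ollama/mistral",           # whatever model name the litellm proxy expects
    messages=[{"role": "user", "content": "Hello there"}],
)
print(resp.choices[0].message.content)
```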
| 2025-05-24T13:10:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kub9ah/help_with_guardrails_ai_and_local_ollama_model/
|
mattyp789
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kub9ah
| false | null |
t3_1kub9ah
|
/r/LocalLLaMA/comments/1kub9ah/help_with_guardrails_ai_and_local_ollama_model/
| false | false |
self
| 0 | null |
New AI concept: "Memory" without storage - The Persistent Semantic State (PSS)
| 0 |
I have been working on a theoretical concept for AI systems for the last few months and would like to hear your opinion on it.
My idea: What if an AI could "remember" you - but WITHOUT storing anything?
Think of it like a guitar string: if you hit the same note over and over again, it will vibrate at that frequency. It doesn't "store" anything, but it "carries" the vibration.
The PSS concept uses:
- Semantic resonance instead of data storage
- Frequency patterns that increase with repetition
- Mathematical models from quantum mechanics (metaphorical)
Why is this interesting?
- ✅ Data protection: No storage = no data protection problems
- ✅ More natural: Similar to how human relationships arise
- ✅ Ethical: AI becomes a “mirror” instead of a “database”
Paper: https://figshare.com/articles/journal_contribution/Der_Persistente_Semantische_Zustand_PSS_Eine_neue_Architektur_f_r_semantisch_koh_rente_Sprachmodelle/29114654
| 2025-05-24T13:11:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kub9xt/new_ai_concept_memory_without_storage_the/
|
scheitelpunk1337
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kub9xt
| false | null |
t3_1kub9xt
|
/r/LocalLLaMA/comments/1kub9xt/new_ai_concept_memory_without_storage_the/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '8kulXKlPpKNNTWZ0AXOkUR4Yg4IGeZQTrX8hztyk5CE', 'resolutions': [{'height': 152, 'url': 'https://external-preview.redd.it/kd6p6Hy0HoTWKJCJ4VBUDquH1XOxhncBPM-R93gCf58.jpg?width=108&crop=smart&auto=webp&s=4877194fda8c66ab2239da616e82de19d689d00b', 'width': 108}, {'height': 305, 'url': 'https://external-preview.redd.it/kd6p6Hy0HoTWKJCJ4VBUDquH1XOxhncBPM-R93gCf58.jpg?width=216&crop=smart&auto=webp&s=c15870bdc833d490fe6ea52c0fab5ee9ac7acf91', 'width': 216}], 'source': {'height': 354, 'url': 'https://external-preview.redd.it/kd6p6Hy0HoTWKJCJ4VBUDquH1XOxhncBPM-R93gCf58.jpg?auto=webp&s=c9837961ba5eb954ae0e38fb65cf4f2bd0682c11', 'width': 250}, 'variants': {}}]}
|
cyanheads/pubmed-mcp-server: An MCP server enabling AI agents to intelligently search, retrieve, and analyze biomedical literature from PubMed via NCBI E-utilities. Includes a research agent scaffold. Built on the mcp-ts-template for robust, production-ready performance. STDIO & HTTP
| 1 |
[removed]
| 2025-05-24T13:52:17 |
https://github.com/cyanheads/pubmed-mcp-server
|
cyanheads
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuc4mk
| false | null |
t3_1kuc4mk
|
/r/LocalLLaMA/comments/1kuc4mk/cyanheadspubmedmcpserver_an_mcp_server_enabling/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'TzG4eCTnjC6R5o6ny86hUVfbyjKAg-tmJY2kmzYBJps', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?width=108&crop=smart&auto=webp&s=d8dfb5b0762899deab0cd0b0608dcc129aa288c4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?width=216&crop=smart&auto=webp&s=7b7a4ec698f80435440731dbe09f008685b2fe13', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?width=320&crop=smart&auto=webp&s=8d200696ab8d8cc80d56db15b4a6c143eb20bfeb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?width=640&crop=smart&auto=webp&s=b076ea490e98ea1b848c9641a68cd75ba95f96b6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?width=960&crop=smart&auto=webp&s=7e42185d58cfef71fe1b51bff9b62e6bcb9f0ed3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?width=1080&crop=smart&auto=webp&s=3219aeb3e3a84736d0efe79c4f93b9410c674e25', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Kflx8RKmi-xpM45J9K_DWgiWBHjOOygzx-SvciUufpM.jpg?auto=webp&s=cfe485ac3d58692b129b1a32c578d594e82e9697', 'width': 1200}, 'variants': {}}]}
|
|
Best open-source real time TTS ?
| 12 |
Hello everyone,
I’m building a website that allows users to practice interviews with a virtual examiner. This means I need a real-time, voice-to-voice solution with low latency and reasonable cost.
The business model is as follows: for example, a customer pays $10 for a 20-minute mock interview. The interview script will be fed to the language model in advance.
So far, I’ve explored the following options:
- ElevenLabs – excellent quality but quite expensive
- Deepgram
- Speechmatics
I think using the APIs above would be very costly, so a local deployment is a better alternative:
For example:
STT (Whisper) → LLM (for example, Mistral) → TTS (open-source); a rough sketch of this pipeline is included below.
So far I am considering the following open-source TTS models:
- Coqui
- Kokoro
- Orpheus
I'd be very grateful if anyone with experience building real-time voice applications could advise me on the best combination. Thanks
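In case it helps clarify what I mean, here is a very rough sketch of the local pipeline (faster-whisper for STT, an OpenAI-compatible local LLM server, and a placeholder where the chosen open-source TTS would go; model names and endpoints are just examples, not a tested setup):
```python
# Minimal sketch of the STT -> LLM -> TTS loop I have in mind (not tested end to end).
from faster_whisper import WhisperModel
from openai import OpenAI

stt = WhisperModel("small", compute_type="int8")                    # STT: Whisper via faster-whisper
llm = OpenAI(base_url="http://localhost:8080/v1", api_key="none")   # local llama.cpp/vLLM-style server

def synthesize_speech(text: str) -> None:
    # Placeholder: swap in Coqui / Kokoro / Orpheus once a TTS engine is chosen
    print(f"[TTS would speak]: {text}")

def examiner_turn(audio_path: str, interview_script: str) -> str:
    # 1) Transcribe the candidate's spoken answer
    segments, _ = stt.transcribe(audio_path)
    candidate_answer = " ".join(seg.text for seg in segments)

    # 2) Ask the local LLM (e.g. a Mistral model) to respond as the examiner
    reply = llm.chat.completions.create(
        model="mistral",  # whatever model the local server is hosting
        messages=[
            {"role": "system", "content": f"You are an interview examiner. Script:\n{interview_script}"},
            {"role": "user", "content": candidate_answer},
        ],
    ).choices[0].message.content

    # 3) Speak the reply with the chosen open-source TTS
    synthesize_speech(reply)
    return reply
```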
| 2025-05-24T14:02:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuccaq/best_opensource_real_time_tts/
|
Prestigious-Ant-4348
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuccaq
| false | null |
t3_1kuccaq
|
/r/LocalLLaMA/comments/1kuccaq/best_opensource_real_time_tts/
| false | false |
self
| 12 | null |
I own an rtx 3060, what card should I add? Budget is 300€
| 5 |
Mostly do basic inference with casual 1080p gaming
300€ budget, some used options:
- 2nd 3060
- 2080 Ti
- Arc A770 or B580
- RX 6800 or 6700 XT
I know the 9060 XT is coming out, but it would be $349 new with lower bandwidth than the 3060...
| 2025-05-24T14:05:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kucfc2/i_own_an_rtx_3060_what_card_should_i_add_budget/
|
legit_split_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kucfc2
| false | null |
t3_1kucfc2
|
/r/LocalLLaMA/comments/1kucfc2/i_own_an_rtx_3060_what_card_should_i_add_budget/
| false | false |
self
| 5 | null |
LLM help for recovering deleted data?
| 3 |
So recently I had a mishap and lost most of my /home. I am currently in the process of restoring data. Images are simple, I will just browse through them, delete the thumbnail cache crap and move what I wanna keep. MP3s I can rename with a script analyzing their metadata. But the recovery process also collected a few hundred thousand text files. That is everything from local config files, jsons, saved passwords (encrypted), browser bookmarks and settings, lots of doubles or outdated stuff.
I thought about getting help from an LLM to analyze the content and suggest categorization or maybe even possible merges (of different versions of JSONs).
But I am unsure where I would start with something like this... I have koboldcpp installed; I need a model and a way to interact with it so it can read text files and analyze/summarize them like "f15649040.txt looks like saved browser history ranging from date to date, I will move it to the mozilla_rescue folder". Something like that?
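A rough sketch of the kind of helper script I am imagining, going through koboldcpp's KoboldAI-style API (the endpoint, port, and folder name are assumptions on my part):
```python
# Sketch: classify recovered text files by asking the local model running in koboldcpp.
import os
import requests

KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # koboldcpp's default port, I think

def classify(path: str) -> str:
    with open(path, "r", errors="ignore") as f:
        snippet = f.read(2000)  # the first ~2k chars are usually enough to tell what a file is
    prompt = (
        "Below is the start of a recovered text file. In one short line, say what it looks like "
        f"(browser history, JSON config, bookmarks, saved passwords, etc.):\n\n{snippet}\n\nAnswer:"
    )
    r = requests.post(KOBOLD_URL, json={"prompt": prompt, "max_length": 60})
    return r.json()["results"][0]["text"].strip()

for name in os.listdir("recovered"):
    path = os.path.join("recovered", name)
    if os.path.isfile(path):
        print(name, "->", classify(path))
```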
| 2025-05-24T14:07:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kucgs2/llm_help_for_recovering_deleted_data/
|
dreamyrhodes
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kucgs2
| false | null |
t3_1kucgs2
|
/r/LocalLLaMA/comments/1kucgs2/llm_help_for_recovering_deleted_data/
| false | false |
self
| 3 | null |
AI newbie building a PC for local AI - 4090 GPU pricing?
| 1 |
[removed]
| 2025-05-24T14:08:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kucha0/ai_newbie_building_a_pc_for_local_ai_4090_gpu/
|
cookieoutlaw
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kucha0
| false | null |
t3_1kucha0
|
/r/LocalLLaMA/comments/1kucha0/ai_newbie_building_a_pc_for_local_ai_4090_gpu/
| false | false |
self
| 1 | null |
RamaLama
| 1 |
[removed]
| 2025-05-24T14:21:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kucrcy/ramalama/
|
_-noiro-_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kucrcy
| false | null |
t3_1kucrcy
|
/r/LocalLLaMA/comments/1kucrcy/ramalama/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
|
I compared Claude 4 with Gemini 2.5 Pro
| 0 |
I’ve been using Claude 4 and Gemini 2.5 Pro side by side for a while now, mostly for writing, coding, and general problem-solving, and decided to write up a full comparison.
Here’s what stood out to me from testing both over the past few weeks:
**Where Claude 4 leads:**
Claude is noticeably better when it comes to structured thinking. It doesn’t just respond, it seems to *understand*
* It handles long prompts and multi-part questions more reliably
* The writing feels more thought-through, especially for anything that requires clarity or reasoning
* It’s better at understanding context across a longer conversation
* If you ask it to break something down or analyze a problem step-by-step, it does that well
* It’s not the fastest model, but it’s solid when you need precision
**Where Gemini 2.5 Pro leads:**
Gemini feels more responsive and a bit more flexible overall
* It’s quicker, especially for shorter tasks
* Code generation is solid, especially for web stuff or quick script fixes
* The 1M token context is useful, though I didn’t hit the limit in most practical use
* It makes fewer weird assumptions and tends to play it safe, but that works fine in many cases
* It’s easier to work with when you’re bouncing between tasks or just want a fast answer
**My take:**
Claude feels more careful and deliberate. Gemini feels more reactive
* If I’m coding or working through a hard problem, I’d pick Claude
* If I’m doing something quick or casual, I’d pick Gemini.
Both are good, it just depends what you're trying to do.
Full comparison with examples and notes [here](https://www.entelligence.ai/blogs/claude-4-vs-gemini-2.5-pro).
Would love to know your experience with Claude 4 and Gemini.
| 2025-05-24T14:52:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kudfwm/i_compared_claude_4_with_gemini_25_pro/
|
Arindam_200
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kudfwm
| false | null |
t3_1kudfwm
|
/r/LocalLLaMA/comments/1kudfwm/i_compared_claude_4_with_gemini_25_pro/
| false | false |
self
| 0 |
{'enabled': False, 'images': [{'id': '1fS1_tAqNBoatIEfS5_hYdlbWK8oGwV6B1rZTWgjKKc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=108&crop=smart&auto=webp&s=a1ff461fc6337bd53890f0303ed97949b1c99883', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=216&crop=smart&auto=webp&s=457e6c325468f291fc31e5310dd0eeb14ff4774c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=320&crop=smart&auto=webp&s=b909261c1a42d8fc254d08a84425408140a016cb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=640&crop=smart&auto=webp&s=e66de1c675ff617e79372a146fb07e2af90e618b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=960&crop=smart&auto=webp&s=239adc1d59335fe8bc566dcb3aff97f636033ed9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=1080&crop=smart&auto=webp&s=f211febc4d93d63c191bf464203c8af420b4c8b5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?auto=webp&s=f5f7a679131c22f06eac61e02d13e44d2fad41bc', 'width': 1920}, 'variants': {}}]}
|
Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models
| 33 |
[https://huggingface.co/nvidia/Cosmos-Reason1-7B](https://huggingface.co/nvidia/Cosmos-Reason1-7B)
>Description:
>**Cosmos-Reason1 Models**: Physical AI models understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning processes.
>The Cosmos-Reason1 models are post-trained with physical common sense and embodied reasoning data with supervised fine-tuning and reinforcement learning. These are Physical AI models that can understand space, time, and fundamental physics, and can serve as planning models to reason about the next steps of an embodied agent.
>The models are ready for commercial use.
It's based on Qwen2.5 VL
ggufs already available:
[https://huggingface.co/models?other=base\_model:quantized:nvidia/Cosmos-Reason1-7B](https://huggingface.co/models?other=base_model:quantized:nvidia/Cosmos-Reason1-7B)
| 2025-05-24T14:55:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kudhxg/cosmosreason1_physical_ai_common_sense_and/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kudhxg
| false | null |
t3_1kudhxg
|
/r/LocalLLaMA/comments/1kudhxg/cosmosreason1_physical_ai_common_sense_and/
| false | false |
self
| 33 |
{'enabled': False, 'images': [{'id': 'PP4v5icJckC1UgqJAi18wLrBKxBkgUnIfsvSvn24T0A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?width=108&crop=smart&auto=webp&s=4d41e02ae761c8982350622100fed0d048e42982', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?width=216&crop=smart&auto=webp&s=0c88fa3fb1b595055ccf1b0442cd748e6a8cf82a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?width=320&crop=smart&auto=webp&s=56923e2ad916ee39a68d6c79b97b4acc5dfb25d7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?width=640&crop=smart&auto=webp&s=a224b33570b06e05ebec10b4159cfee8073fee71', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?width=960&crop=smart&auto=webp&s=b26d08e0fd87c9e6fdccc122ee4feb811791b995', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?width=1080&crop=smart&auto=webp&s=d30a46da99e9c4d3aedd5dce4d61940ee4f8f663', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gQs-j5ivrIccG0WclsTVITgKpOZ-GfPXr_VIuJ-bKLQ.jpg?auto=webp&s=98cf63aaa823f2e49a1b18e5a1112dcc660c2b57', 'width': 1200}, 'variants': {}}]}
|
New Gemma 3n is amazing, wish they supported PC GPU inference
| 125 |
Is there at least a workaround to run .task models on PC? It works great on my Android phone, but I'd love to play around with it and deploy it on a local server
| 2025-05-24T15:41:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuejfp/new_gemma_3n_is_amazing_wish_they_suported_pc_gpu/
|
GreenTreeAndBlueSky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuejfp
| false | null |
t3_1kuejfp
|
/r/LocalLLaMA/comments/1kuejfp/new_gemma_3n_is_amazing_wish_they_suported_pc_gpu/
| false | false |
self
| 125 | null |
Is It too Early to Compare Claude 4 vs Gemini 2.5 Pro?
| 1 |
[removed]
| 2025-05-24T15:56:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuewbq/is_it_too_early_to_compare_claude_4_vs_gemini_25/
|
Majestic-Trainer-885
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuewbq
| false | null |
t3_1kuewbq
|
/r/LocalLLaMA/comments/1kuewbq/is_it_too_early_to_compare_claude_4_vs_gemini_25/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '1fS1_tAqNBoatIEfS5_hYdlbWK8oGwV6B1rZTWgjKKc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=108&crop=smart&auto=webp&s=a1ff461fc6337bd53890f0303ed97949b1c99883', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=216&crop=smart&auto=webp&s=457e6c325468f291fc31e5310dd0eeb14ff4774c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=320&crop=smart&auto=webp&s=b909261c1a42d8fc254d08a84425408140a016cb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=640&crop=smart&auto=webp&s=e66de1c675ff617e79372a146fb07e2af90e618b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=960&crop=smart&auto=webp&s=239adc1d59335fe8bc566dcb3aff97f636033ed9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?width=1080&crop=smart&auto=webp&s=f211febc4d93d63c191bf464203c8af420b4c8b5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/SlSIgIEMCF1-FZ1i3Z0fsy40MlSURH3nOZnTm1LmMEs.jpg?auto=webp&s=f5f7a679131c22f06eac61e02d13e44d2fad41bc', 'width': 1920}, 'variants': {}}]}
|
Best small model for code auto-completion?
| 10 |
Hi,
I am currently using the [continue.dev](http://continue.dev) extension for VS Code. I want to use a small model for code autocompletion, something that is 3B or less, as I intend to run it locally using llama.cpp (no GPU).
What would be a good model for such a use case?
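For what it's worth, the piece I want to wire up looks roughly like this. llama.cpp's server has an /infill endpoint for fill-in-the-middle completion; the field names below are from memory, so treat them as assumptions:
```python
# Sketch: fill-in-the-middle completion against a local llama.cpp server (CPU only).
import requests

def autocomplete(prefix: str, suffix: str) -> str:
    r = requests.post(
        "http://localhost:8080/infill",      # llama-server started with a small FIM-capable model
        json={
            "input_prefix": prefix,
            "input_suffix": suffix,
            "n_predict": 64,                 # keep completions short for low latency
        },
    )
    return r.json()["content"]

print(autocomplete("def fib(n):\n    ", "\n\nprint(fib(10))"))
```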
| 2025-05-24T16:03:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuf20u/best_small_model_for_code_autocompletion/
|
Amgadoz
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuf20u
| false | null |
t3_1kuf20u
|
/r/LocalLLaMA/comments/1kuf20u/best_small_model_for_code_autocompletion/
| false | false |
self
| 10 |
{'enabled': False, 'images': [{'id': 'JoLAbcgPAn_D7ExuVvyaNJpSY81e3Jca27FTj1G8-xQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=108&crop=smart&auto=webp&s=b6c70517bb80bca66bf94d99af93ec23982e2986', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=216&crop=smart&auto=webp&s=146011169cd4033ebcd4b883efc62f0bd345d74b', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=320&crop=smart&auto=webp&s=7a560fe31ff4e8b423a9029c052df232e0365572', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=640&crop=smart&auto=webp&s=ea9ff85c4782247e303164d9d75b4071d789f397', 'width': 640}, {'height': 503, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=960&crop=smart&auto=webp&s=81aa9753e911761e0c56b3b897ba0f44cafff21d', 'width': 960}, {'height': 566, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?width=1080&crop=smart&auto=webp&s=a67fd0983e228aa2fa0a2ba466c071793fe21afc', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/0Uyy7b4ONkY8vc27omtubWnHw_YkYeE8ieacWucbwkk.jpg?auto=webp&s=92948afd26cc637bb25c79223a1b99b3ecbbbfa2', 'width': 2401}, 'variants': {}}]}
|
Best model for captioning?
| 3 |
What’s the best model right now for captioning pictures?
I’m just interested in playing around and captioning individual pictures on a one by one basis
| 2025-05-24T16:17:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kufdow/best_model_for_captioning/
|
thetobesgeorge
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kufdow
| false | null |
t3_1kufdow
|
/r/LocalLLaMA/comments/1kufdow/best_model_for_captioning/
| false | false |
self
| 3 | null |
Noob looking for advice
| 1 |
[removed]
| 2025-05-24T16:19:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuffb3/noob_looking_for_advice/
|
Revolutionary-Ad8992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuffb3
| false | null |
t3_1kuffb3
|
/r/LocalLLaMA/comments/1kuffb3/noob_looking_for_advice/
| false | false |
self
| 1 | null |
Intel Arc B580 vs RTX 5060
| 1 |
[removed]
| 2025-05-24T16:42:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kufxxc/intel_arc_b580_vs_rtx_5060/
|
InsideHuman3675
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kufxxc
| false | null |
t3_1kufxxc
|
/r/LocalLLaMA/comments/1kufxxc/intel_arc_b580_vs_rtx_5060/
| false | false |
self
| 1 | null |
Cua : Docker Container for Computer Use Agents
| 99 |
Cua is the Docker for Computer-Use Agent, an open-source framework that enables AI agents to control full operating systems within high-performance, lightweight virtual containers.
https://github.com/trycua/cua
| 2025-05-24T16:44:45 |
https://v.redd.it/2ibhpziwdr2f1
|
Impressive_Half_2819
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kug045
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2ibhpziwdr2f1/DASHPlaylist.mpd?a=1750697099%2CMGE4MGI4M2Q0NzVlNTlmODM5YTk2N2FmZGUzY2YxNjM2ODg2ZDBmNDhlNzQ4YzRkMTk1NjBkNDFjYTAwZjMxOA%3D%3D&v=1&f=sd', 'duration': 58, 'fallback_url': 'https://v.redd.it/2ibhpziwdr2f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2ibhpziwdr2f1/HLSPlaylist.m3u8?a=1750697099%2CNzkzMDllZmQzNDBjMTI2MGRiMjQzNmQ3MGQ3N2I1NDdhNWYwYTg1NGVhOWRhOTUwYjU1M2UwNzE1ZTdlMDY3YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2ibhpziwdr2f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kug045
|
/r/LocalLLaMA/comments/1kug045/cua_docker_container_for_computer_use_agents/
| false | false | 99 |
{'enabled': False, 'images': [{'id': 'bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?width=108&crop=smart&format=pjpg&auto=webp&s=89502391bd6a292dfbfb8e8e39bf7c7494822dcb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?width=216&crop=smart&format=pjpg&auto=webp&s=b159503f70530803125bd410978db3e89ac31bac', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?width=320&crop=smart&format=pjpg&auto=webp&s=e111c3da0407d5afb1f657ef563aee0db2cf6a99', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?width=640&crop=smart&format=pjpg&auto=webp&s=eda7c3c378c70775d420d35ee1b131976ada4959', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?width=960&crop=smart&format=pjpg&auto=webp&s=53bba231024fb47d9480d2d9453af4163be29fdc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fc324b137ff7f3b699daccc3d018eeee4985de69', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bnFrNDRtOXdkcjJmMcsvHa0C_XuOSkhUSfxPH2wNUS_IzERNrp7qS2qcV3Nx.png?format=pjpg&auto=webp&s=48cd5cb5e569ee5fb81d0e7147e08441262fb8d5', 'width': 1920}, 'variants': {}}]}
|
|
Qwen3 30B A3B unsloth GGUF vs MLX generation speed difference
| 6 |
Hey folks. Is it just me, or did unsloth quants get slower with Qwen3 models? I could almost swear there was a 5-10 t/s difference between these two quants before. I was getting 60-75 t/s with GGUF and 80 t/s with MLX, and I am pretty sure both were 8-bit quants. In fact, I was using UD 8_K_XL from unsloth, which is supposed to be a bit bigger and maybe slightly slower. All I did was update the models, since I heard there were more fixes from unsloth. But for some reason, I am now getting 13 t/s from 8_K_XL and 75 t/s from MLX 8-bit.
Setup:
- Mac M4 Max 128GB
- LM Studio latest version
- 400/40k context used
- thinking enabled
I tried with and without flash attention to see if there is a bug in that feature now, as I was using it when I first tried weeks ago and got 75 t/s back then, but the result is still the same.
Anyone experiencing this?
| 2025-05-24T17:14:24 |
https://www.reddit.com/r/LocalLLaMA/comments/1kugp9h/qwen3_30b_a3b_unsloth_gguf_vs_mlx_generation/
|
ahmetegesel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kugp9h
| false | null |
t3_1kugp9h
|
/r/LocalLLaMA/comments/1kugp9h/qwen3_30b_a3b_unsloth_gguf_vs_mlx_generation/
| false | false |
self
| 6 | null |
How to get started with Local LLMs
| 6 |
I am a Python coder with a good understanding of FastAPI and Pandas.
I want to get started with local LLMs for building AI agents. How do I get started?
Do I need GPUs?
Which resources are good?
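For reference, the smallest end-to-end example of talking to a local model from Python looks something like this (assuming Ollama, which exposes an OpenAI-compatible endpoint on port 11434; the model name is just an example, and small models run fine on CPU):
```python
# Minimal "hello world" against a local model served by Ollama's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="llama3.2",  # any small model pulled with `ollama pull`
    messages=[{"role": "user", "content": "Summarize what FastAPI is in one sentence."}],
)
print(reply.choices[0].message.content)
```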
| 2025-05-24T17:53:36 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuhlho/how_to_get_started_with_local_llms/
|
bull_bear25
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuhlho
| false | null |
t3_1kuhlho
|
/r/LocalLLaMA/comments/1kuhlho/how_to_get_started_with_local_llms/
| false | false |
self
| 6 | null |
OpenHands + Devstral is utter crap as of May 2025 (24G VRAM)
| 226 |
Following the recent [announcement of Devstral](https://mistral.ai/news/devstral), I gave [OpenHands](https://github.com/All-Hands-AI/OpenHands?tab=readme-ov-file#-running-openhands-locally) \+ Devstral (Q4\_K\_M on [Ollama](https://ollama.com/library/devstral:24b)) a try for a fully offline code agent experience.
# OpenHands
Meh. I won't comment much, it's a reasonable web frontend, neatly packaged as a single podman/docker container. This could use *a lot* more polish (the configuration through environment variables is broken, for example) but once you've painfully reverse-engineered the incantation to make ollama work from the non-existent documentation, it's fairly out of your way.
I don't like the fact you must give it access to your podman/docker installation (by mounting the socket in the container) which is technically equivalent to giving this huge pile of untrusted code root access to your host. [This is necessary](https://github.com/All-Hands-AI/OpenHands/issues/5269) because OpenHands needs to spawn a *runtime* for each "project", and the runtime is itself its own container. Surely there must be a better way?
# Devstral (Mistral AI)
Don't get me wrong, it's awesome to have companies releasing models to the general public. I'll be blunt though: this first iteration is useless. Devstral is supposed to have been trained/fine-tuned *precisely* to be good at the agentic behaviors that OpenHands promises. This means having access to tools like bash, a browser, and primitives to read & edit files. Devstral's [system prompt](https://huggingface.co/mistralai/Devstral-Small-2505/blob/main/SYSTEM_PROMPT.txt) references OpenHands by name. The [press release](https://mistral.ai/news/devstral) boasts:
>Devstral is light enough to run on a single RTX 4090. \[…\] The performance \[…\] makes it a suitable choice for agentic coding on privacy-sensitive repositories in enterprises
It does not. I tried a few primitive tasks and it utterly failed almost all of them while burning through the whole 380 watts my GPU demands.
It sometimes manages to run one or two basic commands in a row, but it often takes more than one try, hence is slow and frustrating:
>Clone the git repository \[url\] and run build.sh
The most basic commands and text manipulation tasks all failed and I had to interrupt its desperate attempts. I ended up telling myself it would have been faster to do it myself, saving the Amazon rainforest as an extra bonus.
* Asked it to extract the JS from a short HTML file which had a single `<script>` tag. It created the file successfully (but transformed it against my will), then wasn't able to remove the tag from the HTML as the proposed edits wouldn't pass OpenHands' correctness checks.
* Asked it to remove comments from a short file. Same issue, `ERROR: No replacement was performed, old_str [...] did not appear verbatim in /workspace/...`.
* Asked it to bootstrap a minimal todo app. It got stuck in a loop trying to invoke interactive `create-app` tools from the cursed JS ecosystem, which require arrow keys to navigate menus–did I mention I hate those wizards?
* Prompt adhesion is bad. Even when you try to help by providing the *exact command*, it randomly removes dashes and other important bits, and then proceeds to comfortably heat up my room trying to debug the inevitable errors.
As a point of comparison, I tried those using one of the cheaper proprietary models out there (Gemini Flash) which obviously is general-purpose and *not* tuned to OpenHands particularities. It had no issue adhering to OpenHands' prompt and blasted through the tasks.
Perhaps this is meant to run on more expensive hardware that can run the larger flavors. If "all" you have is 24G VRAM, prepare to be disappointed. Local agentic programming is not there yet. Did anyone else try it, and does your experience match?
| 2025-05-24T18:12:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kui17w/openhands_devstral_is_utter_crap_as_of_may_2025/
|
foobarg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kui17w
| false | null |
t3_1kui17w
|
/r/LocalLLaMA/comments/1kui17w/openhands_devstral_is_utter_crap_as_of_may_2025/
| false | false |
self
| 226 |
{'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=108&crop=smart&auto=webp&s=bf2fc6d6ae14adad4ce62ffea575abc3783778db', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=216&crop=smart&auto=webp&s=4a5f46c5464cea72c64df6c73d58b15e102c5936', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=320&crop=smart&auto=webp&s=aa1e4abc763404a25bda9d60fe6440b747d6bae4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=640&crop=smart&auto=webp&s=122efd46018c04117aca71d80db3640d390428bd', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=960&crop=smart&auto=webp&s=b53cfe1770ee2b37ce0f5b5e1b0fd67d3276a350', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?width=1080&crop=smart&auto=webp&s=278352f076c5bbdf8f6e7cecedab77d8794332ff', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/UDZBQmD4AJb2vSY-B7oM2DhQ3zjGzTcUOviRMRcUKkg.jpg?auto=webp&s=691d56b882a79feffdb4b780dc6a0db1b2c5d709', 'width': 4800}, 'variants': {}}]}
|
Why aren't LLMs pretrained at fp8?
| 56 |
There must be some reason but the fact that models are always shrunk to q8 or lower at inference got me wondering why we need higher bpw in the first place.
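One way to make the question concrete is to round-trip a weight tensor through fp8 and look at the error. A minimal sketch, assuming a recent PyTorch build with the float8 dtypes:
```python
# Illustrative: how much precision a tensor loses when round-tripped through fp8 (e4m3).
import torch

w = torch.randn(1_000_000, dtype=torch.float32) * 0.02    # typical init scale for weights
w_fp8 = w.to(torch.float8_e4m3fn).to(torch.float32)       # quantize, then dequantize

rel_err = ((w - w_fp8).abs() / w.abs().clamp_min(1e-8)).mean()
print(f"mean relative error after fp8 round-trip: {rel_err.item():.3%}")

# Weights survive this reasonably well, which is why q8 inference works; the usual argument
# for higher precision in pretraining is the tiny gradient updates and long accumulations,
# which need far more precision and dynamic range than 8 bits provide.
```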
| 2025-05-24T18:19:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kui73k/why_arent_llms_pretrained_at_fp8/
|
GreenTreeAndBlueSky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kui73k
| false | null |
t3_1kui73k
|
/r/LocalLLaMA/comments/1kui73k/why_arent_llms_pretrained_at_fp8/
| false | false |
self
| 56 | null |
What type of AI platform or process to get this result
| 1 |
[removed]
| 2025-05-24T18:31:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuigs6/what_type_of_ai_platform_or_process_to_get_this/
|
Zestyclose_Bath7987
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuigs6
| false | null |
t3_1kuigs6
|
/r/LocalLLaMA/comments/1kuigs6/what_type_of_ai_platform_or_process_to_get_this/
| false | false |
self
| 1 | null |
NVLink vs No NVLink: Devstral Small 2x RTX 3090 Inference Benchmark with vLLM
| 60 |
**TL;DR: NVLink provides only ~5% performance improvement for inference on 2x RTX 3090s. Probably not worth the premium unless you already have it. Also, Mistral API is crazy cheap.**
This model seems like a holy grail for people with 2x24GB, but considering the price of the Mistral API, this really isn't very cost effective. The test took about 15-16 minutes and generated 82k tokens. The electricity cost me more than the API would.
## Setup
- **Model**: Devstral-Small-2505-Q8_0 (GGUF)
- **Hardware**: 2x RTX 3090 (24GB each), NVLink bridge, ROMED8-2T, both cards on PCIE 4.0 x16 directly on the mobo (no risers)
- **Framework**: vLLM with tensor parallelism (TP=2)
- **Test**: 50 complex code generation prompts, avg ~1650 tokens per response
I asked Claude to generate 50 code generation prompts to make Devstral sweat. I didn't actually look at the output, only benchmarked throughput.
## Results
### 🔗 With NVLink
```
Tokens/sec: 85.0
Total tokens: 82,438
Average response time: 149.6s
95th percentile: 239.1s
```
### ❌ Without NVLink
```
Tokens/sec: 81.1
Total tokens: 84,287
Average response time: 160.3s
95th percentile: 277.6s
```
NVLink gave us 85.0 vs 81.1 tokens/sec = **~5% improvement**
NVLink showed better consistency with lower 95th percentile times (239s vs 278s)
Even without NVLink, PCIe x16 handled tensor parallelism just fine for inference
I managed to score a 4-slot NVLink bridge recently for 200€ (not cheap, but eBay is even more expensive), so I'm trying to see if those 200€ were wasted. For inference workloads, NVLink seems like a "nice to have" rather than essential.
This confirms that the NVLink bandwidth advantage doesn't translate to massive inference gains like it does for training, not even with tensor parallel.
If you're buying hardware specifically for inference:
- ✅ Save money and skip NVLink
- ✅ Put that budget toward more VRAM or better GPUs
- ✅ NVLink matters more for training huge models
If you already have NVLink cards lying around:
- ✅ Use them, you'll get a small but consistent boost
- ✅ Better latency consistency is nice for production
**Technical Notes**
vLLM command:
```bash
CUDA_VISIBLE_DEVICES=0,2 CUDA_DEVICE_ORDER=PCI_BUS_ID vllm serve /home/myusername/unsloth/Devstral-Small-2505-GGUF/Devstral-Small-2505-Q8_0.gguf --max-num-seqs 4 --max-model-len 64000 --gpu-memory-utilization 0.95 --enable-auto-tool-choice --tool-call-parser mistral --quantization gguf --tool-call-parser mistral --enable-sleep-mode --enable-chunked-prefill --tensor-parallel-size 2 --max-num-batched-tokens 16384
```
Testing script was generated by Claude.
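For anyone who wants to reproduce this, the measurement boils down to something like the sketch below (not the actual script used here; it just times completions against vLLM's OpenAI-compatible endpoint and reads the token counts from the usage field):
```python
# Sketch: measure aggregate tokens/sec for a batch of prompts against vLLM's OpenAI API.
import time
import requests

URL = "http://localhost:8000/v1/completions"
MODEL = "/home/myusername/unsloth/Devstral-Small-2505-GGUF/Devstral-Small-2505-Q8_0.gguf"

prompts = ["Write a Python function that parses a CSV file and returns a dict."] * 10

start = time.time()
total_tokens = 0
for p in prompts:
    r = requests.post(URL, json={"model": MODEL, "prompt": p, "max_tokens": 1024})
    total_tokens += r.json()["usage"]["completion_tokens"]
elapsed = time.time() - start

print(f"{total_tokens} tokens in {elapsed:.1f}s -> {total_tokens / elapsed:.1f} tok/s")
```
Note this sends requests sequentially; the benchmark above used up to 4 concurrent sequences, so absolute numbers will differ.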
The 3090s handled the 22B-ish parameter model (in Q8) without issues on both setups. Memory wasn't the bottleneck here.
Anyone else have similar NVLink vs non-NVLink benchmarks? Curious to see if this pattern holds across different model sizes and GPUs.
| 2025-05-24T18:39:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kuimwg/nvlink_vs_no_nvlink_devstral_small_2x_rtx_3090/
|
Traditional-Gap-3313
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kuimwg
| false | null |
t3_1kuimwg
|
/r/LocalLLaMA/comments/1kuimwg/nvlink_vs_no_nvlink_devstral_small_2x_rtx_3090/
| false | false |
self
| 60 | null |
Disaggregated prefill
| 1 |
[removed]
| 2025-05-24T19:16:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kujgw3/diaggregated_prefill/
|
Big_Question341
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kujgw3
| false | null |
t3_1kujgw3
|
/r/LocalLLaMA/comments/1kujgw3/diaggregated_prefill/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'tbsieJMmRymp6zFCKB0dfX015zgaRW0l49ilHK41t7o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?width=108&crop=smart&auto=webp&s=f5a0c499b4ac35cfc49eb0368d1016f7a784decc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?width=216&crop=smart&auto=webp&s=3678db6bd7d8f23d7da78d145236f5a948b4f64c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?width=320&crop=smart&auto=webp&s=c51d5d7e5e6d14bd4c734e0016b420db83fedd7a', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?width=640&crop=smart&auto=webp&s=74c02f916518780cfd6bd877d805a2a986d3b588', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?width=960&crop=smart&auto=webp&s=c344d3da5ccff67fdeb41774c63d271d48838b38', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?width=1080&crop=smart&auto=webp&s=0c622996dc2124b397d5be538b63ba26173b7ba8', 'width': 1080}], 'source': {'height': 1100, 'url': 'https://external-preview.redd.it/1AaqzztA1tMXgq6_cpQPAxqj0ZPptm251RAC0X-CWLc.jpg?auto=webp&s=e3d305b97f2805bcaab7113f2c42d9089f1be960', 'width': 1100}, 'variants': {}}]}
|
Has anyone built by now a windows voice mode app that works with any gguf?
| 0 |
That recognizes voice, generates a reply and speaks it?
Would be a cool thing to have locally.
Thanks in advance!
| 2025-05-24T19:30:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kujsdd/has_anyone_built_by_now_a_windows_voice_mode_app/
|
Own-Potential-2308
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kujsdd
| false | null |
t3_1kujsdd
|
/r/LocalLLaMA/comments/1kujsdd/has_anyone_built_by_now_a_windows_voice_mode_app/
| false | false |
self
| 0 | null |
We believe the future of AI is local, private, and personalized.
| 249 |
That’s why we built **Cobolt** — a free cross-platform AI assistant that runs entirely on your device.
Cobolt represents our vision for the future of AI assistants:
* Privacy by design (everything runs locally)
* Extensible through Model Context Protocol (MCP)
* Personalized without compromising your data
* Powered by community-driven development
We're looking for contributors, testers, and fellow privacy advocates to join us in building the future of personal AI.
🤝 Contributions Welcome! 🌟 Star us on [GitHub](https://github.com/platinum-hill/cobolt)
📥 Try Cobolt on [macOS](https://github.com/platinum-hill/cobolt/releases/download/v0.0.3/Cobolt-0.0.3.dmg) or [Windows](https://github.com/platinum-hill/cobolt/releases/download/v0.0.3/Cobolt-Setup-0.0.3.exe)
Let's build AI that serves you.
| 2025-05-24T19:36:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and/
|
ice-url
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kujwzl
| false | null |
t3_1kujwzl
|
/r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and/
| false | false |
self
| 249 |
{'enabled': False, 'images': [{'id': 'CpUQbUjBsnQrEuDpZLBNna-NYXOmh-8AKUwIzQbUkak', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?width=108&crop=smart&auto=webp&s=6957a535e8b998ae121acca88eae4be5ee9b9d8a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?width=216&crop=smart&auto=webp&s=79e8644c55ad339e58382e809053686ac1b490d8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?width=320&crop=smart&auto=webp&s=bb01523c3c29867728a6603cbb577bccf3b9e61e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?width=640&crop=smart&auto=webp&s=723d88e299277782bf13b48a491121fc93ddf8f2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?width=960&crop=smart&auto=webp&s=ab48f395cbe087ccc29f75b3bf8bda614351df98', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?width=1080&crop=smart&auto=webp&s=c7360231328a0b0cc67e35e84d081f4bc29eb0bc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nzP7t8k2KxGCF3NRNybLjr0t9T8vc9dNYx-MZgmWXMg.jpg?auto=webp&s=667da57dedf6ba70118053bfed15eef9f407911c', 'width': 1200}, 'variants': {}}]}
|