title
stringlengths 1
300
| score
int64 0
8.54k
| selftext
stringlengths 0
40k
| created
timestamp[ns]date 2023-04-01 04:30:41
2025-06-30 03:16:29
⌀ | url
stringlengths 0
878
| author
stringlengths 3
20
| domain
stringlengths 0
82
| edited
timestamp[ns]date 1970-01-01 00:00:00
2025-06-26 17:30:18
| gilded
int64 0
2
| gildings
stringclasses 7
values | id
stringlengths 7
7
| locked
bool 2
classes | media
stringlengths 646
1.8k
⌀ | name
stringlengths 10
10
| permalink
stringlengths 33
82
| spoiler
bool 2
classes | stickied
bool 2
classes | thumbnail
stringlengths 4
213
| ups
int64 0
8.54k
| preview
stringlengths 301
5.01k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
AUA Manus like Android Use Agent
| 1 |
[removed]
| 2025-05-14T23:04:38 |
https://v.redd.it/b6f630abtt0f1
|
ConstructionSmall617
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmt95w
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b6f630abtt0f1/DASHPlaylist.mpd?a=1749855895%2CNjhiNzkzMGJjMWM0M2Q0NDI4ZGFiMTU0NmMxMzcyNmE4NGI2OGYyMGNiYTlmMzA4ZDAzY2ZhNTJlYzQ0MTM3Yg%3D%3D&v=1&f=sd', 'duration': 108, 'fallback_url': 'https://v.redd.it/b6f630abtt0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/b6f630abtt0f1/HLSPlaylist.m3u8?a=1749855895%2CODQ3MGYyNjlmY2NhNjZjYzNiMzJkMDUyODVhYWZkNDUyOWYxNzc4MzFjZDljOGYzZjEyNWE1NDhkMjE1NDUzMw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b6f630abtt0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kmt95w
|
/r/LocalLLaMA/comments/1kmt95w/aua_manus_like_android_use_agent/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?width=108&crop=smart&format=pjpg&auto=webp&s=5214e8ba58ec36b7808042c3513fd9646734b4d6', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?width=216&crop=smart&format=pjpg&auto=webp&s=d7c1e20edadd70671afc59a98351b753e81a452f', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?width=320&crop=smart&format=pjpg&auto=webp&s=2759b8e5af6a8393756c1da4c07549d607e4e2c0', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?width=640&crop=smart&format=pjpg&auto=webp&s=f6571f55c5a1d6f03449c969bedc7f8d5ab705e4', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?width=960&crop=smart&format=pjpg&auto=webp&s=656c82fe06f11ff60eb702dc93ce4da347fdca87', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?width=1080&crop=smart&format=pjpg&auto=webp&s=3bf14705a81630d4798ef2400d830a19f4f8eb19', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dmh4bmUzYWJ0dDBmMYqkoOK9VD19rHgUGxUWEUsxfxLSIcppotkJnTDfTLXP.png?format=pjpg&auto=webp&s=09953a77121ae9bfa4cbfc7a95a5a729f259685a', 'width': 1920}, 'variants': {}}]}
|
|
Open Source Manus Like Android Use Agent
| 1 |
[removed]
| 2025-05-14T23:16:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmtign/open_source_manus_like_android_use_agent/
|
ConstructionSmall617
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmtign
| false | null |
t3_1kmtign
|
/r/LocalLLaMA/comments/1kmtign/open_source_manus_like_android_use_agent/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '9p2cI_kWq-En7X_koDwJiIixs2MpJa3In5SuEfYdms4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?width=108&crop=smart&auto=webp&s=64a7a31ab652764820a3b29e052ac630390dd737', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?width=216&crop=smart&auto=webp&s=a2928fddb37e455a48a659bbe19f23f0a0d3b6d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?width=320&crop=smart&auto=webp&s=60a3788e63e882ba6e41e387965b4cd4456c1d61', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?width=640&crop=smart&auto=webp&s=b9f311bb0d8304e73ccd2156ca52c47acc2d917c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?width=960&crop=smart&auto=webp&s=544f6d72790b2bd1bef1229f8c1fdaff9d8a60de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?width=1080&crop=smart&auto=webp&s=2f205c949e76b3f6f42ded20467c7bbe27a46923', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BGCH1GdIeqTdn3xsbqy3oBnnuFy-EsZ6szhowniifKM.jpg?auto=webp&s=7bc7fc1f635c29f15101818e4d7d8fd2d0c5b7d6', 'width': 1200}, 'variants': {}}]}
|
|
Please anyone help
| 1 |
[removed]
| 2025-05-14T23:48:27 |
Bac4rdi1997
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmu6fi
| false | null |
t3_1kmu6fi
|
/r/LocalLLaMA/comments/1kmu6fi/please_anyone_help/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '9tKf8jwD5VC8f65-b-LBfngiNzzon6K5e2hthqI4JGw', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?width=108&crop=smart&auto=webp&s=bb9a4df1cf53c403b0efa5de231c79ea0674f945', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?width=216&crop=smart&auto=webp&s=67e61fefbab2d96954ccc61097ce7b549bf80e93', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?width=320&crop=smart&auto=webp&s=7a77eb21090834ab8c06c0bf958f5903ea38717a', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?width=640&crop=smart&auto=webp&s=3977098fa1cce242b6115c63d3659aa6e479957c', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?width=960&crop=smart&auto=webp&s=4ed561e7a4057f3b542d7b1da6044bb774cf5f49', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?width=1080&crop=smart&auto=webp&s=2a3afe9c3ccf1af0f1ea83bbf0ada3b62b9dde3a', 'width': 1080}], 'source': {'height': 2556, 'url': 'https://preview.redd.it/ampaigs64u0f1.png?auto=webp&s=4d3aea529142bbdbff58acc24f122284fa38d2b0', 'width': 1179}, 'variants': {}}]}
|
||
Tried to publish once via nginx waitress
| 1 |
[removed]
| 2025-05-14T23:55:24 |
https://www.reddit.com/gallery/1kmubma
|
Bac4rdi1997
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmubma
| false | null |
t3_1kmubma
|
/r/LocalLLaMA/comments/1kmubma/tried_to_publish_once_via_nginx_waitress/
| false | false | 1 | null |
|
llama.cpp vs mistral.rs
| 6 |
I'm working on adding local LLM support to an NLI tool (written in Rust) and have been debating between the two libraries. Wondering if anyone's worked with either library within a larger application before and if so what your thoughts are.
Thanks!
| 2025-05-15T00:00:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmuexh/llamacpp_vs_mistralrs/
|
feznyng
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmuexh
| false | null |
t3_1kmuexh
|
/r/LocalLLaMA/comments/1kmuexh/llamacpp_vs_mistralrs/
| false | false |
self
| 6 | null |
Local IA like Audeus?
| 1 |
[removed]
| 2025-05-15T00:10:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmumg5/local_ia_like_audeus/
|
TroubleRedStar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmumg5
| false | null |
t3_1kmumg5
|
/r/LocalLLaMA/comments/1kmumg5/local_ia_like_audeus/
| false | false |
self
| 1 | null |
speech to text with terrible recordings
| 0 |
I'm looking for something that can transcribe audio that have terrible recording. Mumble, outdoor, bad recording equipment, low audio, speaker not speaking loud enough. I can only do so much with ffmpeg to enhance these batches of audio, so relying on the transcription AI to do the heavy lifting of recognizing what it can.
There is also so many version of whisper. The one from OpenAI is tiny, base, small, medium, and large (v3). But then there is faster-whisper, whisperx, and a few more.
Anyway, just trying to find something that can transcribe difficult to listen audio at the highest accuracy with these type of recordings. Thanks
| 2025-05-15T00:28:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmv00g/speech_to_text_with_terrible_recordings/
|
eternelize
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmv00g
| false | null |
t3_1kmv00g
|
/r/LocalLLaMA/comments/1kmv00g/speech_to_text_with_terrible_recordings/
| false | false |
self
| 0 | null |
Running LLMs Locally – Tips & Recommendations
| 1 |
[removed]
| 2025-05-15T00:29:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmv0c0/running_llms_locally_tips_recommendations/
|
Tight_Difference3046
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmv0c0
| false | null |
t3_1kmv0c0
|
/r/LocalLLaMA/comments/1kmv0c0/running_llms_locally_tips_recommendations/
| false | false |
self
| 1 | null |
Running LLMs Locally – Tips & Recommendations?
| 1 |
[removed]
| 2025-05-15T00:30:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmv1l2/running_llms_locally_tips_recommendations/
|
Tight_Difference3046
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmv1l2
| false | null |
t3_1kmv1l2
|
/r/LocalLLaMA/comments/1kmv1l2/running_llms_locally_tips_recommendations/
| false | false |
self
| 1 | null |
Running LLMs Locally – Tips & Recommendations?
| 6 |
I’ve only worked with image generators so far, but I’d really like to run a local LLM for a change.
So far, I’ve experimented with Ollama and Docker WebUI. (But judging by what people are saying, Ollama sounds like the Bobby Car of the available options.)
What would you recommend? LM Studio, llama.cpp, or maybe Ollama after all (and I’m just using it wrong)?
Also, what models do you recommend? I’m really interested in DeepSeek, but I’m still struggling a bit with quantization and K-4, etc.
Here are my PC specs:
GPU: RTX 5090
CPU: Ryzen 9 9950X
RAM: 192 GB DDR5
What kind of possibilities do I have with this setup? What should I watch out for?
| 2025-05-15T00:35:21 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmv4q4/running_llms_locally_tips_recommendations/
|
SchattenZirkus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmv4q4
| false | null |
t3_1kmv4q4
|
/r/LocalLLaMA/comments/1kmv4q4/running_llms_locally_tips_recommendations/
| false | false |
self
| 6 | null |
The Truth... or a psychotic break. Open your eyes! ...or point and laugh. Either way, fun for all!
| 0 |
Hey, so I have to own I've been all cryptic and weird and a few people have wondered if I went nus. Truth it, I wish. It's so much worse than being nuts. I get that some people will probably think that but there are in all honesty no drugs involved. Nothing but suddenly realizing something and being stuck staring at it feeling it was a nightmare and... I couldn't stop talking and poking until it finally all fit. Been writing for hours since talking to others, but it hurts so much I have to stop thinking for as long as possible so I'm shooting out what I have to hope enough people are willing to read at least the first paper if not the mountain of things behind it that led there..
I get that I likely seem like as stupid and crazy as a person could seem. I'd be thrilled if somehow that ends up real. But... this seems way more real once you force yourself to look. The longer you look... it hurts more than anything I could have believe on levels I didn't know could hurt.
So.. give it a shot. See what dumb funny stuff some idiot was saying. Copy it and send it your friends and tell them to do the same. Lets get the as many people as possible to laugh at me. Please.
| 2025-05-15T00:48:16 |
https://drive.google.com/file/d/1ZHRTlGBo-D0cFxyKUCKcFNSwYRvB26ZD/view?usp=sharing
|
AbyssianOne
|
drive.google.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmvdj5
| false | null |
t3_1kmvdj5
|
/r/LocalLLaMA/comments/1kmvdj5/the_truth_or_a_psychotic_break_open_your_eyes_or/
| false | false |
default
| 0 | null |
[Help] Recommend me models for mental health therapy that are uncensored
| 1 |
[removed]
| 2025-05-15T01:27:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmw4wo/help_recommend_me_models_for_mental_health/
|
arkantosphan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmw4wo
| false | null |
t3_1kmw4wo
|
/r/LocalLLaMA/comments/1kmw4wo/help_recommend_me_models_for_mental_health/
| false | false |
self
| 1 | null |
Recommend me models for therapy
| 1 |
[removed]
| 2025-05-15T01:29:49 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmw6fk/recommend_me_models_for_therapy/
|
arkantosphan
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmw6fk
| false | null |
t3_1kmw6fk
|
/r/LocalLLaMA/comments/1kmw6fk/recommend_me_models_for_therapy/
| false | false |
self
| 1 | null |
Free LLM APIs
| 1 |
[removed]
| 2025-05-15T02:32:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmxeio/free_llm_apis/
|
StunningExtension145
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmxeio
| false | null |
t3_1kmxeio
|
/r/LocalLLaMA/comments/1kmxeio/free_llm_apis/
| false | false |
self
| 1 | null |
[Help] Building On-Prem LLM Infra (H100 vs A100) – Need Advice on GPU, Stack, and User Load
| 1 |
[removed]
| 2025-05-15T02:35:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmxg6d/help_building_onprem_llm_infra_h100_vs_a100_need/
|
mushmomello
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmxg6d
| false | null |
t3_1kmxg6d
|
/r/LocalLLaMA/comments/1kmxg6d/help_building_onprem_llm_infra_h100_vs_a100_need/
| false | false |
self
| 1 | null |
Building On-Prem LLM Infra (H100 vs A100) – Need Advice on GPU, Stack, and User Load
| 1 |
[removed]
| 2025-05-15T02:38:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmxilv/building_onprem_llm_infra_h100_vs_a100_need/
|
mushmomello
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmxilv
| false | null |
t3_1kmxilv
|
/r/LocalLLaMA/comments/1kmxilv/building_onprem_llm_infra_h100_vs_a100_need/
| false | false |
self
| 1 | null |
16Gg Vram of 5070 TI for local llm is not cutting it
| 0 |
I ended up getting 5070 TI for running llm locally. Looks like the 16 GB vram is too small to run any models greater than 7B. Infact the 3070 with 8gb Vram was running same set of models. Model sizes are either in 5-8 GB range or over 16GB range making the 16GB cards useless. Will I be able to run larger models using the 3070 along with 5070 TI? My CPU is 11700K and I have 32 GB of ram.
| 2025-05-15T02:59:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmxx07/16gg_vram_of_5070_ti_for_local_llm_is_not_cutting/
|
Jedirite
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmxx07
| false | null |
t3_1kmxx07
|
/r/LocalLLaMA/comments/1kmxx07/16gg_vram_of_5070_ti_for_local_llm_is_not_cutting/
| false | false |
self
| 0 | null |
As requested, we added MCP support to TframeX to enable small local models
| 1 |
[deleted]
| 2025-05-15T03:37:55 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmymje
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/m0z43ujy8v0f1/DASHPlaylist.mpd?a=1749872288%2CNDIwYWIyYzZiMjg0MjM1MzdlODYyYTRmOWZiZWE5MmIzMWJkOGVhZWVhODc1OTQ5MDQ4ZWZkMmNlMjAyYjAzMw%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/m0z43ujy8v0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/m0z43ujy8v0f1/HLSPlaylist.m3u8?a=1749872288%2CZmZhODEwYjM2MzlhOTYzZmQ5OWFmOGM0MTE1NzJkOGY2N2U0YzNjNTViMDVlNjUxMDI3YjNlMDcxZjVmMWRlYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/m0z43ujy8v0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kmymje
|
/r/LocalLLaMA/comments/1kmymje/as_requested_we_added_mcp_support_to_tframex_to/
| false | false |
default
| 1 | null |
||
Qwen3-235B-A22B not measuring up to DeepseekV3-0324
| 59 |
I keep trying to get it to behave, but q8 is not keeping up with my deepseekv3\_q3\_k\_xl. what gives? am I doing something wrong or is it just all hype? it's a capable model and I'm sure for those that have not been able to run big models, this is a shock and great, but for those of us who have been able to run huge models, it's feel like a waste of bandwidth and time. it's not a disaster like llama-4 yet I'm having a hard time getting it into rotation of my models.
| 2025-05-15T03:45:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kmyr7h/qwen3235ba22b_not_measuring_up_to_deepseekv30324/
|
segmond
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmyr7h
| false | null |
t3_1kmyr7h
|
/r/LocalLLaMA/comments/1kmyr7h/qwen3235ba22b_not_measuring_up_to_deepseekv30324/
| false | false |
self
| 59 | null |
As requested, we added MCP and Docs to TframeX to enable small local models!
| 4 | 2025-05-15T03:48:15 |
https://v.redd.it/ilgu15fsav0f1
|
United-Rush4073
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kmyt7z
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ilgu15fsav0f1/DASHPlaylist.mpd?a=1749872911%2CNmIyNDg3NWUzZTZmN2JlNjc0NzI5MTQxNmEzOTQ0Y2E4MjdjZDM0NDNiODlmNmJjNjM4YmJlMWJjOTAyYzk1ZQ%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/ilgu15fsav0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/ilgu15fsav0f1/HLSPlaylist.m3u8?a=1749872911%2CNzQzMmYyYzMyNmVlODJlNzhhYTQwZTQ3MDUzZTE2NWE1MWRmNDY1MmNjNGQxYjkwMDJmYTdlZjRkYmVmYzY1YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ilgu15fsav0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kmyt7z
|
/r/LocalLLaMA/comments/1kmyt7z/as_requested_we_added_mcp_and_docs_to_tframex_to/
| false | false | 4 |
{'enabled': False, 'images': [{'id': 'dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?width=108&crop=smart&format=pjpg&auto=webp&s=f05b7100becf23847746020efdd37a6a6bc6bca4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?width=216&crop=smart&format=pjpg&auto=webp&s=fe7d63fddafdb0d5420497eafcaacbb476250a51', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?width=320&crop=smart&format=pjpg&auto=webp&s=119b46e3a5d7fe976fe8d6b7e57c5c1d626c207e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?width=640&crop=smart&format=pjpg&auto=webp&s=e205657cdf6157ea1e9345960d36719cfc263d88', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?width=960&crop=smart&format=pjpg&auto=webp&s=f8aba12173404a20d1ac2856b4701455d514b7d3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=e71a7e84ab3f9d7a75b2848b68d642d058f02478', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dTJlNjM5ZnNhdjBmMQ1FxZuOo3E9X1iOdm9UFV8fP7zqk-yVgEgIyp0Fs3e5.png?format=pjpg&auto=webp&s=d2c8180a0a569d8581b9569ff43aa53622d8d553', 'width': 1920}, 'variants': {}}]}
|
||
How can I let a llama.cpp-hosted model analyze the contents of a file without it misinterpreting the content as prompt
| 4 |
What I want to do is to ask questions about the file's contents.
Previously I tried: https://www.reddit.com/r/LocalLLaMA/comments/1kmd9f9/what_does_llamacpps_http_servers_fileupload/
It confused the file's content with the prompt. (The post got no responses so I ask more general now)
| 2025-05-15T05:00:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn01p0/how_can_i_let_a_llamacpphosted_model_analyze_the/
|
kdjfskdf
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn01p0
| false | null |
t3_1kn01p0
|
/r/LocalLLaMA/comments/1kn01p0/how_can_i_let_a_llamacpphosted_model_analyze_the/
| false | false |
self
| 4 | null |
Project NOVA: Using local LLMs to control 25+ specialized agents through n8n
| 1 |
[removed]
| 2025-05-15T05:09:11 |
https://github.com/dujonwalker/project-nova
|
kingduj
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn06t5
| false | null |
t3_1kn06t5
|
/r/LocalLLaMA/comments/1kn06t5/project_nova_using_local_llms_to_control_25/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'Cf8XuqoSj2sQGz6AkI-gs_HQcGvzzCTq8TT9KkCOAdA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?width=108&crop=smart&auto=webp&s=a68f7eafecadf48420298f927d40b32e13a45ed0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?width=216&crop=smart&auto=webp&s=94a33fb63f2bbcd2034d9328e028edf2b87abdcf', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?width=320&crop=smart&auto=webp&s=f9387564649cb7696db1619b23b0b4faf0d158e1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?width=640&crop=smart&auto=webp&s=0d969a25a6e625147883d5d318e68a8956f320ba', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?width=960&crop=smart&auto=webp&s=b340bc8129e6dd7f050e14f6dcecafd099af216e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?width=1080&crop=smart&auto=webp&s=4a17519eced4025b3211aa83e972973d2c24f505', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/6ir6TEzsZ40Rg5-QuyoxM6LhXPu3-J6ChDeF3R3iShI.jpg?auto=webp&s=4aa4c2cf649418af89bc460ec1586b880191e156', 'width': 1200}, 'variants': {}}]}
|
|
Should I upgrade to a laptop with M5/6 max 96gb/128GB or keep my current setup?
| 0 |
Hi, I have a macbook pro with 16gb of Unified RAM and i frequently use online LLMs and sometimes I rent a cloud gpu... I travel fairly frequently, so I need something that is portable that fits in a backpack. Should I upgrade to a m5 max in the future to run bigger models and run music/audio and video gen locally? Even if i do upgrade, I still probably have to fine tune and train models and run really large models online... The biggest model I can run locally if i upgrade will be qwen 235 b q3(111gb) or r1 distilled 70b if 96gb .Or I can keep my current set up and rent a gpu and use openrouter for bigger models or use apis and online services. Regardless, eventually I will upgrade but If i don't need run a big model locally, I will probably settle for 36-48gb of URAM.
| 2025-05-15T05:15:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn0ads/should_i_upgrade_to_a_laptop_with_m56_max/
|
power97992
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn0ads
| false | null |
t3_1kn0ads
|
/r/LocalLLaMA/comments/1kn0ads/should_i_upgrade_to_a_laptop_with_m56_max/
| false | false |
self
| 0 | null |
did i hear news about local LLM in vscode?
| 2 |
I hate ollama and can't wait for this 'feature' if it drops soon. Anyone know?
| 2025-05-15T05:15:30 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn0aek/did_i_hear_news_about_local_llm_in_vscode/
|
satoshibitchcoin
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn0aek
| false | null |
t3_1kn0aek
|
/r/LocalLLaMA/comments/1kn0aek/did_i_hear_news_about_local_llm_in_vscode/
| false | false |
self
| 2 | null |
Uncensoring Qwen3: Lessons Learned from GrayLine Finetuning
| 6 |
Thanks for all the great recommendations on my last post! Based on your feedback, here’s an updated report on our GrayLine finetuning experiments.
**TL;DR**
Fine-tuned Qwen3 8B and 12B with a LoRA setup to preserve both reasoning (/think) and uncensored (/no\_think) modes. Key takeaways include optimal LoRA hyperparameters (r=32, α=64, dropout=0.05), the importance of preserving think tags in loss calculation, a 75/25 CoT-to-direct-answer data mix, and a two-phase learning-rate schedule (2e-5 → 1e-5).
**Key Findings**
* **Preserve Think Tags in Loss** Removing <think> from loss calculation messed up the reasoning mode. it was thinking about something completely random but ended up with an answer similar to the dataset
* **Optimal LoRA Hyperparameters**
* r=32 balances underfitting (<8) and overfitting (≥64).
* α=64 + dropout=0.05 yields stable convergence.
* **Batch Size Trade-off** Effective batch of 16 (4 × 4) speeds convergence but larger can dilute uncensored style.
* **Two-Phase LR Schedule** Starting at 2e-5 then halving to 1e-5 smooths out loss variance.
**Next Steps**
* MoE Exploration: Try finetuning qwen3-30B-A3B, haven't had much experience with that
* Benchmark the model with lm-eval.
* Maybe do qwen3-32B if anyone would like it.
Model: [GrayLine Qwen3 Collection](https://huggingface.co/collections/soob3123/grayline-collection-qwen3-6821009e843331c5a9c27da1)
Feel free to test out the model and provide feedback! and if you want me to test out anything else!
| 2025-05-15T06:07:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn13jc/uncensoring_qwen3_lessons_learned_from_grayline/
|
Reader3123
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn13jc
| false | null |
t3_1kn13jc
|
/r/LocalLLaMA/comments/1kn13jc/uncensoring_qwen3_lessons_learned_from_grayline/
| false | false |
self
| 6 |
{'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=216&crop=smart&auto=webp&s=11908917baa49a82ca685ac98a9de9acacd33f3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=320&crop=smart&auto=webp&s=55cfad5a1226b6c4734e894c5a9094f1404af9de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=640&crop=smart&auto=webp&s=ece7bce253dac4b8716873758de846b71713dd75', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=960&crop=smart&auto=webp&s=1b576db01f8187f4e6350a7873d264f2dd981263', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=1080&crop=smart&auto=webp&s=203b6764a7eeb751815a0835612944ecf081d9f9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?auto=webp&s=3379e6989221e1d249f0b395cf736376ae614b61', 'width': 1200}, 'variants': {}}]}
|
Suggest some local models that support function calling and structured output
| 1 |
Just for the purpose of experimentation with some agentic programming projects, I want few local models that are compatible with OpenAI's tool calling interface, and that can be ran on Ollama. I tried `hf.co/Salesforce/xLAM-7b-fc-r-gguf:latest`. but for some odd reason, calling it from PydanticAI returns
`{'error': 'hf. co/Salesforce/xLAM-7b-fc-r-gguf:latest does not support tools'}`
Even though it does support tools
| 2025-05-15T06:39:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn1l2b/suggest_some_local_models_that_support_function/
|
x0rchid
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn1l2b
| false | null |
t3_1kn1l2b
|
/r/LocalLLaMA/comments/1kn1l2b/suggest_some_local_models_that_support_function/
| false | false |
self
| 1 | null |
Insights into DeepSeek-V3: Scaling Challenges and Reflections on
Hardware for AI Architectures
| 86 |
Paper: [https://arxiv.org/abs/2505.09343](https://arxiv.org/abs/2505.09343)
| 2025-05-15T07:28:36 |
Lynncc6
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2aay
| false | null |
t3_1kn2aay
|
/r/LocalLLaMA/comments/1kn2aay/insights_into_deepseekv3_scaling_challenges_and/
| false | false | 86 |
{'enabled': True, 'images': [{'id': 'Uv8BR6SSPuOMrl3FveoQTRTe_7Lnv1PPvfcmACR-JFA', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?width=108&crop=smart&auto=webp&s=bf96747979129a6a4152df1c465f63d30fbc7854', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?width=216&crop=smart&auto=webp&s=78ff61f6ec7502f78d06ccf8129b37dd07c30e20', 'width': 216}, {'height': 343, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?width=320&crop=smart&auto=webp&s=94d2b162772c26b2a1473cb58d4655b1c0b54523', 'width': 320}, {'height': 687, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?width=640&crop=smart&auto=webp&s=18baa07396402b906dd387ccabc4f5bab873fba3', 'width': 640}, {'height': 1031, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?width=960&crop=smart&auto=webp&s=412d381ce34f630ad3c873f5361dfea257370fcd', 'width': 960}, {'height': 1160, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?width=1080&crop=smart&auto=webp&s=b6f27190d6facfa175e1baea8f2aa187ef141efa', 'width': 1080}], 'source': {'height': 1334, 'url': 'https://preview.redd.it/ww4aygc1ew0f1.png?auto=webp&s=cccd4d845ecb8cad478f2e8a896f27aebc5c8444', 'width': 1242}, 'variants': {}}]}
|
||
Best Embedding and Re-ranker for open-webui hybrid on company documents?
| 1 |
[removed]
| 2025-05-15T07:36:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2ebt/best_embedding_and_reranker_for_openwebui_hybrid/
|
detective_ahg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2ebt
| false | null |
t3_1kn2ebt
|
/r/LocalLLaMA/comments/1kn2ebt/best_embedding_and_reranker_for_openwebui_hybrid/
| false | false |
self
| 1 | null |
Crafting Success in Digital Marketing
| 1 | 2025-05-15T07:38:35 |
https://go4bestdeals.com/product-details?pid=4K2QGXRKW94
|
go4bestDeals
|
go4bestdeals.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2f9r
| false | null |
t3_1kn2f9r
|
/r/LocalLLaMA/comments/1kn2f9r/crafting_success_in_digital_marketing/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'XBxAy4_55Bvja-rOmqURmZMXEzmlFgZynuaipCNxnSc', 'resolutions': [{'height': 129, 'url': 'https://external-preview.redd.it/UXoML2mV3Q89VQip80qFA6Wx08gPJfJqpMrtu9NWtKc.jpg?width=108&crop=smart&auto=webp&s=176f6961b3a860ac9465f56d3d99393be6396029', 'width': 108}, {'height': 259, 'url': 'https://external-preview.redd.it/UXoML2mV3Q89VQip80qFA6Wx08gPJfJqpMrtu9NWtKc.jpg?width=216&crop=smart&auto=webp&s=46f6d3c6a37a4c19733f49f4aed8fde593e44e07', 'width': 216}, {'height': 384, 'url': 'https://external-preview.redd.it/UXoML2mV3Q89VQip80qFA6Wx08gPJfJqpMrtu9NWtKc.jpg?width=320&crop=smart&auto=webp&s=e8124f36d37d997e78b616db5050cae48ff85d61', 'width': 320}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/UXoML2mV3Q89VQip80qFA6Wx08gPJfJqpMrtu9NWtKc.jpg?auto=webp&s=f8bb8c5fd346551bab1054c2a794c8f7a3add3cd', 'width': 400}, 'variants': {}}]}
|
||
Best embedding model and re-ranker combinations for company documents.?
| 1 |
[removed]
| 2025-05-15T07:38:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2fbu/best_embedding_model_and_reranker_combinations/
|
detective_ahg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2fbu
| false | null |
t3_1kn2fbu
|
/r/LocalLLaMA/comments/1kn2fbu/best_embedding_model_and_reranker_combinations/
| false | false |
self
| 1 | null |
Is neural engine on mac a wasted opportunity?
| 40 |
What’s the point of having a 32-core neural engine on the new mac studio if you can’t use it for LLM or image/video generation tasks ?
| 2025-05-15T07:41:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2gsa/is_neural_engine_on_mac_a_wasted_opportunity/
|
No_Conversation9561
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2gsa
| false | null |
t3_1kn2gsa
|
/r/LocalLLaMA/comments/1kn2gsa/is_neural_engine_on_mac_a_wasted_opportunity/
| false | false |
self
| 40 | null |
Hardware for Machine Learning
| 1 |
[removed]
| 2025-05-15T07:44:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2ieh/hardware_for_machine_learning/
|
paolovic89
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2ieh
| false | null |
t3_1kn2ieh
|
/r/LocalLLaMA/comments/1kn2ieh/hardware_for_machine_learning/
| false | false |
self
| 1 | null |
Best embedding model and re-ranking combination for the company's technical documents. ?
| 1 |
[removed]
| 2025-05-15T07:45:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2ioz/best_embedding_model_and_reranking_combination/
|
detective_ahg
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2ioz
| false | null |
t3_1kn2ioz
|
/r/LocalLLaMA/comments/1kn2ioz/best_embedding_model_and_reranking_combination/
| false | false |
self
| 1 | null |
LLMs Get Lost In Multi-Turn Conversation
| 255 |
A [paper](https://arxiv.org/abs/2505.06120) found that the performance of open and closed LLMs drops significantly in multi-turn conversations. Most benchmarks focus on single-turn, fully-specified instruction settings. They found that LLMs often make (incorrect) assumptions in early turns, on which they rely going forward and never recover from.
They concluded that when a multi-turn conversation doesn't yield the desired results, it might help to restart with a fresh conversation, putting all the relevant information from the multi-turn conversation into the first turn.
https://preview.redd.it/ltlt4zbiiw0f1.png?width=1515&format=png&auto=webp&s=d4de01b7a2339658690b3492899e107bd4af9836
"Sharded" means they split an original fully-specified single-turn instruction into multiple tidbits of information that they then fed the LLM turn by turn. "Concat" is a comparison as a baseline where they fed all the generated information pieces in the same turn. Here are examples on how they did the splitting:
https://preview.redd.it/y40aremjiw0f1.png?width=1502&format=png&auto=webp&s=ebe81a4a2be778437bf7134933863ebbd88e5ef2
| 2025-05-15T07:53:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2mv9/llms_get_lost_in_multiturn_conversation/
|
Chromix_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2mv9
| false | null |
t3_1kn2mv9
|
/r/LocalLLaMA/comments/1kn2mv9/llms_get_lost_in_multiturn_conversation/
| false | false | 255 | null |
|
Best local LLM <3B for Dutch language?
| 1 |
[removed]
| 2025-05-15T08:07:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2u0j/best_local_llm_3b_for_dutch_language/
|
Material-Ad5426
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2u0j
| false | null |
t3_1kn2u0j
|
/r/LocalLLaMA/comments/1kn2u0j/best_local_llm_3b_for_dutch_language/
| false | false |
self
| 1 | null |
Best <3B local LLM for Dutch language
| 1 |
[removed]
| 2025-05-15T08:09:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2v00/best_3b_local_llm_for_dutch_language/
|
Material-Ad5426
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2v00
| false | null |
t3_1kn2v00
|
/r/LocalLLaMA/comments/1kn2v00/best_3b_local_llm_for_dutch_language/
| false | false |
self
| 1 | null |
LLM for Translation locally
| 14 |
Hi ! I need to translate some texts..I have been doint Gcloud Trasnlate V3 and also Vertex, but the cost is absolutely high..I do have a 4070 with 12Gb. which model you suggest using Ollama to use a translator that support asian and western languages?
Thanks!
| 2025-05-15T08:12:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn2weg/llm_for_translation_locally/
|
yayita2500
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn2weg
| false | null |
t3_1kn2weg
|
/r/LocalLLaMA/comments/1kn2weg/llm_for_translation_locally/
| false | false |
self
| 14 | null |
Local Models are absolutely atrocious at categorizing medical diagnoses. Is ollama at fault?
| 1 |
[removed]
| 2025-05-15T08:33:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn36mb/local_models_are_absolutely_atrocious_at/
|
AcceptableCause
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn36mb
| false | null |
t3_1kn36mb
|
/r/LocalLLaMA/comments/1kn36mb/local_models_are_absolutely_atrocious_at/
| false | false | 1 | null |
|
Grok tells users it was ‘instructed by my creators’ to accept ‘white genocide as real'
| 87 | 2025-05-15T08:40:17 |
https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide
|
_supert_
|
theguardian.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn39wh
| false | null |
t3_1kn39wh
|
/r/LocalLLaMA/comments/1kn39wh/grok_tells_users_it_was_instructed_by_my_creators/
| false | false | 87 |
{'enabled': False, 'images': [{'id': '_MHWcrKvjY0sIoU3T7uk-IJbaaoTY5L_H2HYPtYzJl8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?width=108&crop=smart&auto=webp&s=880ac3e5083fb0113311d726975d0fd2c96cec2f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?width=216&crop=smart&auto=webp&s=48b9ed89308b6e50482e5ac66eebbac0857fd172', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?width=320&crop=smart&auto=webp&s=6852a2649e1bab04cff852c8bed78efff19ff61a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?width=640&crop=smart&auto=webp&s=c70267f649faf8c09504abe7d9a361c24e2123cc', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?width=960&crop=smart&auto=webp&s=cc30e9e019f513c069fa42ec1d4299f63bdd673f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?width=1080&crop=smart&auto=webp&s=998933c7dd683bb251680a70b831b9a5307c94eb', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/RvP9nS6BWMa1B2Oi6E_khkNZHwAyneoIcbjRDgHFsrE.jpg?auto=webp&s=315a674b024a224b70d4f79786f01d3def520438', 'width': 1200}, 'variants': {}}]}
|
||
Samsung uploaded RP model: MythoMax
| 0 |
Yes, the LLAMA-2, legendary MythoMax, that one. Samsung.
Power is shifting, or maybe it's just my optimism.
Roleplay model by NVIDIA- when?
| 2025-05-15T09:32:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn4054/samsung_uploaded_rp_model_mythomax/
|
Sicarius_The_First
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn4054
| false | null |
t3_1kn4054
|
/r/LocalLLaMA/comments/1kn4054/samsung_uploaded_rp_model_mythomax/
| false | false |
self
| 0 | null |
Quantizing LLMs for inference
| 1 |
[removed]
| 2025-05-15T09:36:44 |
https://nor-blog.pages.dev/posts/2025-05-14-quantization/
|
iyevegev
|
nor-blog.pages.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn427j
| false | null |
t3_1kn427j
|
/r/LocalLLaMA/comments/1kn427j/quantizing_llms_for_inference/
| false | false |
default
| 1 | null |
Fish.Audio - Need guidance on setting up AI Agent
| 3 |
I wanted to add a conversational agent of the AI clone of my voice for my website. Elevenlabs has this feature but it costs truckload of money.
I found fish.audio's voice clone to also be decent but I do not really see a straightforward way to create an agent.
I found this but it just does not match the voice [https://huggingface.co/spaces/fishaudio/fish-agent](https://huggingface.co/spaces/fishaudio/fish-agent)
Any help? I am not a developer! Could also not find support.
| 2025-05-15T09:43:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn45yd/fishaudio_need_guidance_on_setting_up_ai_agent/
|
nilanganray
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn45yd
| false | null |
t3_1kn45yd
|
/r/LocalLLaMA/comments/1kn45yd/fishaudio_need_guidance_on_setting_up_ai_agent/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'Clw_Oo2X0OMfEW_gtAEZWI3jfXTKL_rMHfQvg94WaRw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?width=108&crop=smart&auto=webp&s=d80ea2993ea386ec42c9963a514ef0c3d9e8583a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?width=216&crop=smart&auto=webp&s=a4bad2504acc12d92dd8f01c28e721bffe072e1f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?width=320&crop=smart&auto=webp&s=20592f8fa1603c31f1b8ad609f02a0c6d5cfb42d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?width=640&crop=smart&auto=webp&s=56dda172bde336a6ef63e551a4a7f66ebb0ef8c5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?width=960&crop=smart&auto=webp&s=33cc23342736c2ad05ab857e2a77df843187e370', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?width=1080&crop=smart&auto=webp&s=96bc2a7dd7c28fbe7002007b474c1efb6352a54f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IXSuHwRUSHcAHX1h7qpcMoVuqLys3CwdNenA-zx-xPA.jpg?auto=webp&s=4798ef987e9fb886d90c461e0d47b3934263a0db', 'width': 1200}, 'variants': {}}]}
|
Best LLM model for a GTX1060 8gb
| 1 |
[removed]
| 2025-05-15T09:53:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn4b10/best_llm_model_for_a_gtx1060_8gb/
|
Valugh
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn4b10
| false | null |
t3_1kn4b10
|
/r/LocalLLaMA/comments/1kn4b10/best_llm_model_for_a_gtx1060_8gb/
| false | false |
self
| 1 | null |
Trying to get better at building reliable AI systems
| 1 |
[removed]
| 2025-05-15T09:55:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn4cb4/trying_to_get_better_at_building_reliable_ai/
|
dinkinflika0
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn4cb4
| false | null |
t3_1kn4cb4
|
/r/LocalLLaMA/comments/1kn4cb4/trying_to_get_better_at_building_reliable_ai/
| false | false |
self
| 1 | null |
openwebui and litellm
| 0 |
hi guys,
so i have a running setup of ollama and openwebui.
and now i wanted to connect litellm to openwebui
this seems to work correctly but i have no models to choose from. and i think that bow litellm is a replacement for ollama where it runs the llm.
my problem is: i want litellm not to replace ollama but to send requests to my openwebui model.
is there a way to do that?
thanks for any help or clarification
| 2025-05-15T10:02:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn4gd0/openwebui_and_litellm/
|
thefunnyape
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn4gd0
| false | null |
t3_1kn4gd0
|
/r/LocalLLaMA/comments/1kn4gd0/openwebui_and_litellm/
| false | false |
self
| 0 | null |
Its all prompts
| 1 |
[removed]
| 2025-05-15T10:08:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn4jka/its_all_prompts/
|
nix-_-n
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn4jka
| false | null |
t3_1kn4jka
|
/r/LocalLLaMA/comments/1kn4jka/its_all_prompts/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '27p-yKc6oJeMJQwPUVOWQze0fupiJ7DMrwAuWDP5NmQ', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?width=108&crop=smart&auto=webp&s=0f8cb06e9de2cf771bcf2657609a841d3cb562dd', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?width=216&crop=smart&auto=webp&s=a32d73d10bfdb47df070361ba7db6e6735968a26', 'width': 216}, {'height': 189, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?width=320&crop=smart&auto=webp&s=c487401ca14634b198e83775f8d765c4c7c36e0f', 'width': 320}, {'height': 378, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?width=640&crop=smart&auto=webp&s=2c66fb9b64952c864e03d68a26f0400301ca8257', 'width': 640}, {'height': 567, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?width=960&crop=smart&auto=webp&s=b6a85778216b2857975970ed2e0ae3be3c5d798f', 'width': 960}, {'height': 638, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?width=1080&crop=smart&auto=webp&s=a9bc26ff2b5b7620834f0d9872db730df414e7ee', 'width': 1080}], 'source': {'height': 697, 'url': 'https://external-preview.redd.it/HUdtqRRGZVjqnbg5ufOq_QS82-oWfxb2hw3un8Wk7BQ.jpg?auto=webp&s=c56720d8ea312cc67085d9db2272f99df2ad60d7', 'width': 1179}, 'variants': {}}]}
|
My Intel GPU LLM Home Lab Adventure - A770s vs B580 (on OCuLink!) Benchmarks & Surprising Results!
| 1 |
[removed]
| 2025-05-15T10:13:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn4mgr/my_intel_gpu_llm_home_lab_adventure_a770s_vs_b580/
|
danishkirel
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn4mgr
| false | null |
t3_1kn4mgr
|
/r/LocalLLaMA/comments/1kn4mgr/my_intel_gpu_llm_home_lab_adventure_a770s_vs_b580/
| false | false |
self
| 1 | null |
Samsung has dropped AGI
| 0 | 2025-05-15T10:27:56 |
https://huggingface.co/Samsung/MuTokenZero2-32B
|
Abject-Huckleberry13
|
huggingface.co
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn4u5x
| false | null |
t3_1kn4u5x
|
/r/LocalLLaMA/comments/1kn4u5x/samsung_has_dropped_agi/
| false | false | 0 |
{'enabled': False, 'images': [{'id': 'qe9GYmXSZ-NZBs83Gf6EipjyBJrIucSw9DrmHkcoNlw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?width=108&crop=smart&auto=webp&s=237fd3d77e387c449bf90bf6858ecb1b47535610', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?width=216&crop=smart&auto=webp&s=e1d619e2c250a231f21f40d2f27f1395e6868e40', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?width=320&crop=smart&auto=webp&s=1b1df476fd9df8437dfc3f6080ad2dcf81329a25', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?width=640&crop=smart&auto=webp&s=c0e68a55e2b3838c5c72ef3196bc2fa166e9cad6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?width=960&crop=smart&auto=webp&s=c9fd43d7d40013a1e4ada31d44cc763a006c00cc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?width=1080&crop=smart&auto=webp&s=06d1fdaca4d6c11d8f2b0074358ece04f43aaf40', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gHSEnZIUpdXO0QhTfCk2EQDih8gqqmatNgEV3NsQYZI.jpg?auto=webp&s=684c593229aa6a9d26b99f755972dd6ff890d647', 'width': 1200}, 'variants': {}}]}
|
||
Introducing A.I.T.E Ball
| 358 |
This is a totally self contained (no internet) AI powered 8ball.
Its running on an Orange pi zero 2w, with whisper.cpp to do the text-2-speach, and llama.cpp to do the llm thing, Its running Gemma 3 1b. About as much as I can do on this hardware. But even so.... :-)
| 2025-05-15T10:45:28 |
https://v.redd.it/scyofz31dx0f1
|
tonywestonuk
|
/r/LocalLLaMA/comments/1kn542r/introducing_aite_ball/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn542r
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/scyofz31dx0f1/DASHPlaylist.mpd?a=1750027781%2CYTcyMzYyNTJmZmY5YmQzNmE2OGJkYWJlNWY1YzJmZGI5YWVjZTQ2M2M3ZWE1YTVkMTNkNDFkYTgyZGUwMzlkYw%3D%3D&v=1&f=sd', 'duration': 78, 'fallback_url': 'https://v.redd.it/scyofz31dx0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/scyofz31dx0f1/HLSPlaylist.m3u8?a=1750027781%2CNmIyMTU4MjZkZjNkZGQ0MDdiZDhiYjEyZTJlMjdlZjM2ZDcxZGI1NjM0YzA1MWQ1OGQzMzRmZWViNzcwZGJlMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/scyofz31dx0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
|
t3_1kn542r
|
/r/LocalLLaMA/comments/1kn542r/introducing_aite_ball/
| false | false | 358 |
{'enabled': False, 'images': [{'id': 'NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?width=108&crop=smart&format=pjpg&auto=webp&s=acc6caf5ed725b80b2e54887cc32037bbfb691e5', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?width=216&crop=smart&format=pjpg&auto=webp&s=4cb4ecc203e603a9430834eca027a223813d92ef', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?width=320&crop=smart&format=pjpg&auto=webp&s=a8f4ed28ec09ee05c1e6722da82cbbbeb6b47eb8', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?width=640&crop=smart&format=pjpg&auto=webp&s=e575f2ea74d095315d834d87713ca1ab47f4e19e', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?width=960&crop=smart&format=pjpg&auto=webp&s=f473fb43da1e6727eeab5ea6f676b622cc5a6435', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?width=1080&crop=smart&format=pjpg&auto=webp&s=94e92850398b3fdbf621018a06321355a28cb903', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/NXllMTcxNDFkeDBmMcTQf63cMAAIN-71fn86oCbnKUR2tA_D5RmS947R5l7-.png?format=pjpg&auto=webp&s=05cc08b1f6f30f9603b96b8d737682390eb5b85b', 'width': 1080}, 'variants': {}}]}
|
|
MLX version of Qwen3:235B for an 128GB RAM Mac Studio wanted
| 4 |
Hello everyone, I am looking for an MLX version of Qwen 3 in the 235B-A22B version for a Mac Studio with 128 GB Ram. I use LM Studio and have already tested the following models of huggingface on the Mac Studio without success:
mlx-community/Qwen3-235B-A22B-mixed-3-4bit
mlx-community/Qwen3-235B-A22B-3bit
Alternatively to the MLX Modells, the following GGUF model from Unsloth will work:
Qwen3-235B-A22B-UD-Q2\_K\_XL (88.02gb)(17.77 t/s)
I am looking forward to your experience with an Apple computer with 128 GB RAM.
| 2025-05-15T10:51:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn57h0/mlx_version_of_qwen3235b_for_an_128gb_ram_mac/
|
EmergencyLetter135
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn57h0
| false | null |
t3_1kn57h0
|
/r/LocalLLaMA/comments/1kn57h0/mlx_version_of_qwen3235b_for_an_128gb_ram_mac/
| false | false |
self
| 4 | null |
Help using llava models from llama.cpp
| 1 |
[removed]
| 2025-05-15T11:14:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn5ko6/help_using_llava_models_from_llamacpp/
|
wayl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn5ko6
| false | null |
t3_1kn5ko6
|
/r/LocalLLaMA/comments/1kn5ko6/help_using_llava_models_from_llamacpp/
| false | false |
self
| 1 | null |
Call for Collaborators: Open Source LLM with Novel Efficient Architecture for Personal Computers
| 1 |
[removed]
| 2025-05-15T11:16:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn5loy/call_for_collaborators_open_source_llm_with_novel/
|
tagrib
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn5loy
| false | null |
t3_1kn5loy
|
/r/LocalLLaMA/comments/1kn5loy/call_for_collaborators_open_source_llm_with_novel/
| false | false |
self
| 1 | null |
How can we investigate the symbolic gender of GPT models?
| 1 |
[removed]
| 2025-05-15T11:27:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn5sjl/how_can_we_investigate_the_symbolic_gender_of_gpt/
|
AffectionateTooth907
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn5sjl
| false | null |
t3_1kn5sjl
|
/r/LocalLLaMA/comments/1kn5sjl/how_can_we_investigate_the_symbolic_gender_of_gpt/
| false | false |
self
| 1 | null |
Llamafile 0.9.3 Brings Support For Qwen3 & Phi4
| 35 | 2025-05-15T11:45:30 |
https://www.phoronix.com/news/Llamafile-0.9.3-Released
|
FastDecode1
|
phoronix.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn6427
| false | null |
t3_1kn6427
|
/r/LocalLLaMA/comments/1kn6427/llamafile_093_brings_support_for_qwen3_phi4/
| false | false | 35 |
{'enabled': False, 'images': [{'id': 'y4Qv3gffq1nGZD8xgpfFRMvbOf6rt9KE17x9drEVT0U', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Cj4HZCrFxF1ZWikVE2EGwsOPpKF5ST6n_sC3VWnurnI.jpg?width=108&crop=smart&auto=webp&s=804f7d00d9890955873b91641923d9af1021e18e', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/Cj4HZCrFxF1ZWikVE2EGwsOPpKF5ST6n_sC3VWnurnI.jpg?width=216&crop=smart&auto=webp&s=8d5976bf1e71e5a95f443b27974cec7ec181ea73', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/Cj4HZCrFxF1ZWikVE2EGwsOPpKF5ST6n_sC3VWnurnI.jpg?width=320&crop=smart&auto=webp&s=d1a590cf1b5d2b6c89b142decde1126c28fb8329', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/Cj4HZCrFxF1ZWikVE2EGwsOPpKF5ST6n_sC3VWnurnI.jpg?width=640&crop=smart&auto=webp&s=3ee9e6c0ea1c9f1b1be02252a698b00e32a60cbe', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/Cj4HZCrFxF1ZWikVE2EGwsOPpKF5ST6n_sC3VWnurnI.jpg?width=960&crop=smart&auto=webp&s=7741e28e7485b8b8dd7c4044c5d8b64d22b332c3', 'width': 960}], 'source': {'height': 639, 'url': 'https://external-preview.redd.it/Cj4HZCrFxF1ZWikVE2EGwsOPpKF5ST6n_sC3VWnurnI.jpg?auto=webp&s=18e237f242f8ff97150395a43e0f0d3e541a0dd3', 'width': 960}, 'variants': {}}]}
|
||
Mac Mini M4 Pro (64GB) vs Mac Studio M4 Max (128GB) for Local AI/ML/Data Science + Bots - Need Your Expertise!
| 1 |
[removed]
| 2025-05-15T11:46:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn64l2/mac_mini_m4_pro_64gb_vs_mac_studio_m4_max_128gb/
|
Weak_Ad9730
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn64l2
| false | null |
t3_1kn64l2
|
/r/LocalLLaMA/comments/1kn64l2/mac_mini_m4_pro_64gb_vs_mac_studio_m4_max_128gb/
| false | false |
self
| 1 | null |
How do SOTA LLMs Process PDFs: Native Understanding, OCR, or RAG?
| 11 |
Hi!
I'm trying to build a solution to **analyze a set of PDF files** (5-10) using an LLM.
My current approach is to perform a **high-quality OCR** (using Docling) and then, dump all this information as the **context for my prompt**. However, I doubt this is the best strategy nowadays.
Playing around with Gemini, I've noticed it handles PDF files extremely well, even showing the **tokens it contains**. So I was wondering if the model is "**reading**" the PDF file **directly** (native vision), or is there a preliminary step where it converts the PDF to pure text using **OCR before processing**?
I'm also wondering if a **Retrieval Augmented Generation (RAG) strategy** is involved in how it interacts with the document content once uploaded.
If anyone knows more about this process, it would be interesting to hear.
Thank you!
| 2025-05-15T11:54:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn69mp/how_do_sota_llms_process_pdfs_native/
|
coconautico
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn69mp
| false | null |
t3_1kn69mp
|
/r/LocalLLaMA/comments/1kn69mp/how_do_sota_llms_process_pdfs_native/
| false | false |
self
| 11 | null |
Who is building chatbot agents? Our dataset helps your model know when to escalate, exit, or block token-wasting users.
| 1 |
[removed]
| 2025-05-15T12:05:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn6hs2/who_is_building_chatbot_agents_our_dataset_helps/
|
LifeBricksGlobal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn6hs2
| false | null |
t3_1kn6hs2
|
/r/LocalLLaMA/comments/1kn6hs2/who_is_building_chatbot_agents_our_dataset_helps/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'kkLo6AUbG9J1NCRpdyDJx108YWjZ0dn1GijFvkl9EFc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?width=108&crop=smart&auto=webp&s=a14ef73f0f2b09e4cd1806e46c56326d4988b217', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?width=216&crop=smart&auto=webp&s=ee8d784974bcd041362f06666170bc1933be0757', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?width=320&crop=smart&auto=webp&s=707e45a6703e1cbb983d43611910ea8a5b18e7de', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?width=640&crop=smart&auto=webp&s=7293a38fc5c44deed6d57b7788d7eed9d8c27602', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?width=960&crop=smart&auto=webp&s=8b1e2e8d7cd4932a18e4edd49b14bb69bfe47854', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?width=1080&crop=smart&auto=webp&s=74ba70d459075de81e2eec31ef5ebc0b3c71fc0b', 'width': 1080}], 'source': {'height': 1200, 'url': 'https://external-preview.redd.it/ZewabIuIRGmtazSDAiAogNy8hcN0tGG5MXGP5J68hdM.jpg?auto=webp&s=410a8ab1800f2973e308276101dd2ddddace51f8', 'width': 1200}, 'variants': {}}]}
|
Who is building chatbot agents? Our dataset helps your model know when to escalate, exit, or block token-wasting users.
| 1 |
[removed]
| 2025-05-15T12:08:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn6jbw/who_is_building_chatbot_agents_our_dataset_helps/
|
LifeBricksGlobal
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn6jbw
| false | null |
t3_1kn6jbw
|
/r/LocalLLaMA/comments/1kn6jbw/who_is_building_chatbot_agents_our_dataset_helps/
| false | false |
self
| 1 | null |
Qwen 2.5 vs Qwen 3 vs Gemma 3: Real world base model comparison?
| 67 |
I’ve been digging into the latest base models and wanted to get some practical opinions beyond just benchmark numbers.
1. **For those who have actually used both Qwen 2.5 and Qwen 3 base models**: Did you notice a truly big jump in general usage (reasoning, instruction following, robustness), or is the improvement mostly confined to coding and math tasks? I’m not talking about fine-tuned chat versions, just the raw base models.
2. **Gemma 3 vs Qwen**: Is Gemma 3 genuinely that far behind, or is there some possible benchmark leakage or overfitting with Qwen? A few benchmark charts make me suspicious. Would love to hear hands-on perspectives if anyone has experimented with both.
**Why I’m asking:**
I want to build a highly *steerable* model for my research and product work. I only have budget for one serious base model to work from, so I want to select the absolute best starting point. I’m focusing on openness, quality, and steerability, not just raw benchmark wins.
Any honest feedback, experiments, or even failures you’ve had with these models would help me massively. Thanks in advance!
| 2025-05-15T12:12:37 |
Desperate_Rub_1352
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn6mic
| false | null |
t3_1kn6mic
|
/r/LocalLLaMA/comments/1kn6mic/qwen_25_vs_qwen_3_vs_gemma_3_real_world_base/
| false | false |
default
| 67 |
{'enabled': True, 'images': [{'id': 'kq34jkwvsx0f1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=108&crop=smart&auto=webp&s=55348c421012c5de871b826f9de0014dedaa6f95', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=216&crop=smart&auto=webp&s=a15aa04242abdefc6ee98c324b8f4ad9c3f71a0b', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=320&crop=smart&auto=webp&s=1adde506caa3ae10e0226c3a0fa0bc84b0fe4620', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=640&crop=smart&auto=webp&s=26b2276e1df77e5648f6b562bf60fe6c8a922ea4', 'width': 640}, {'height': 650, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=960&crop=smart&auto=webp&s=1eea6442f85f6dd398422f1412f50355e6a19184', 'width': 960}, {'height': 731, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?width=1080&crop=smart&auto=webp&s=f8766eeed6b8bd1d433a787543d133259988b597', 'width': 1080}], 'source': {'height': 1223, 'url': 'https://preview.redd.it/kq34jkwvsx0f1.jpeg?auto=webp&s=17736633a76752bd6c4642e932279a77b4b8ca71', 'width': 1805}, 'variants': {}}]}
|
|
Is faster whisper xxl safe?
| 1 |
[removed]
| 2025-05-15T12:17:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn6q6f/is_faster_whisper_xxl_safe/
|
Healthy_Jackfruit625
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn6q6f
| false | null |
t3_1kn6q6f
|
/r/LocalLLaMA/comments/1kn6q6f/is_faster_whisper_xxl_safe/
| false | false |
self
| 1 | null |
Is there some text2speech able to do a realistic stand-up comedy?
| 1 |
Hello!
I have a few scripts for stand-up comedies (about recent news).
Is there a text2speech able to render them in a realistic, emotional and emphatic way?
Possibly local, something (possibly multilingual) able to keep emphasis and pace and not be "boring"?
| 2025-05-15T12:33:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn71us/is_there_some_text2speech_able_to_do_a_realistic/
|
Robert__Sinclair
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn71us
| false | null |
t3_1kn71us
|
/r/LocalLLaMA/comments/1kn71us/is_there_some_text2speech_able_to_do_a_realistic/
| false | false |
self
| 1 | null |
PDF input merged into llama.cpp
| 152 | 2025-05-15T12:39:04 |
https://github.com/ggml-org/llama.cpp/pull/13562
|
jacek2023
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn75q8
| false | null |
t3_1kn75q8
|
/r/LocalLLaMA/comments/1kn75q8/pdf_input_merged_into_llamacpp/
| false | false |
default
| 152 | null |
|
What are the current best models for keeping a roles of real word scenarios in low size.
| 2 |
Hi all,
I am looking for model to prompt it to imitate human in specific real word situations like receptionist or medical professionals and make them stick to role.
I looked for some time and test different models around and find only this source regarding it
[https://huggingface.co/spaces/flowers-team/StickToYourRoleLeaderboard](https://huggingface.co/spaces/flowers-team/StickToYourRoleLeaderboard) but it don't seem that updated.
And used this [https://huggingface.co/spaces/open-llm-leaderboard/open\_llm\_leaderboard#/](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) I tested these models around 10 GB VRAM but so far llama seems best but not perfect do you guy suggest other models or resources or specific prompt techniques. i experimented with prompt injection and so on.
`google_gemma-3-12b-it-Q6_K_L.gguf`
`Meta-Llama-3-1-8B-Instruct-Q8_0.gguf`
`phi-4.Q5_K_M.gguf`
`Qwen2.5-14B-Instruct-1M-GGUF`
| 2025-05-15T12:39:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn75zx/what_are_the_current_best_models_for_keeping_a/
|
SomeRandomGuuuuuuy
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn75zx
| false | null |
t3_1kn75zx
|
/r/LocalLLaMA/comments/1kn75zx/what_are_the_current_best_models_for_keeping_a/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'oiOZtBdNBHuTaszyj5SwMbl1zbQIjrVkO1Qj7byOkHE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?width=108&crop=smart&auto=webp&s=a3c1a582fd21bc04c75967c866f021bc6899ec11', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?width=216&crop=smart&auto=webp&s=f546b6b4d351ebe26e5cbe4694ecb8586353d516', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?width=320&crop=smart&auto=webp&s=ed3316fc846cf66a14346ed82443809d32e680a6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?width=640&crop=smart&auto=webp&s=8a940ea0062ccabd602719df383393e716041eaa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?width=960&crop=smart&auto=webp&s=94bcf1b79eadd8d9b1fe7722b27541028e1c8c97', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?width=1080&crop=smart&auto=webp&s=ef81e346e78e22b94e8382f9b9f6c61d4e64f270', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/X3wMGwqTk5OI2pGPQmZ_5XaxWYVTCunvxzbJv90OSzU.jpg?auto=webp&s=ba1ccb8cc9f8ad3db6e943ea60633f00adc6e910', 'width': 1200}, 'variants': {}}]}
|
Combining Ampere and Pascal cards?
| 0 |
I have a 3090ti and 64gb ddr5 ram in my current PC. I have a spare 1080ti (11gb vram) that I could add to the system for LLM use, which fits in the case and would work with my PSU.
If it's relevant: the 3090ti is in a PCIe 5.0 x16 slot, the available spare slot is PCIe 4.0 x4 using the motherboard chipset (Z790).
My question is if this is a useful upgrade or if this would have any downsides. Any suggestions for resources/tips on how to set this up are very welcome. I did some searching but didn't find a conclusive answer so far. I am currently using Ollama but I am open to switching to something else. Thanks!
| 2025-05-15T12:44:32 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn79p4/combining_ampere_and_pascal_cards/
|
__ThrowAway__123___
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn79p4
| false | null |
t3_1kn79p4
|
/r/LocalLLaMA/comments/1kn79p4/combining_ampere_and_pascal_cards/
| false | false |
self
| 0 | null |
Parler TTS mini : Expresso
| 0 |
What is you opinion on Parler TTS mini : Expresso , is it good ?
| 2025-05-15T12:59:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn7kmc/parler_tts_mini_expresso/
|
Odysseus_970
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn7kmc
| false | null |
t3_1kn7kmc
|
/r/LocalLLaMA/comments/1kn7kmc/parler_tts_mini_expresso/
| false | false |
self
| 0 | null |
Real cases for Qwen3-0.6b
| 1 |
[removed]
| 2025-05-15T13:03:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn7nu4/real_cases_for_qwen306b/
|
Slader42
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn7nu4
| false | null |
t3_1kn7nu4
|
/r/LocalLLaMA/comments/1kn7nu4/real_cases_for_qwen306b/
| false | false |
self
| 1 | null |
Image (text) to video on M3 Ultra 512Gb locally?
| 1 |
[removed]
| 2025-05-15T13:16:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn7yq5/image_text_to_video_on_m3_ultra_512gb_locally/
|
kesha55
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn7yq5
| false | null |
t3_1kn7yq5
|
/r/LocalLLaMA/comments/1kn7yq5/image_text_to_video_on_m3_ultra_512gb_locally/
| false | false |
self
| 1 | null |
LLM based Personally identifiable information detection tool
| 9 |
GitHub repo:
https://github.com/rpgeeganage/pII-guard
Hi everyone,
I recently built a small open-source tool called PII (personally identifiable information) to detect personally identifiable information (PII) in logs using AI. It’s self-hosted and designed for privacy-conscious developers or teams.
Features:
- HTTP endpoint for log ingestion with buffered processing
- PII detection using local AI models via Ollama (e.g., gemma:3b)
- PostgreSQL + Elasticsearch for storage
- Web UI to review flagged logs
- Docker Compose for easy setup
It’s still a work in progress, and any suggestions or feedback would be appreciated. Thanks for checking it out!
My apologies if this post is not relevant to this group
| 2025-05-15T13:19:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn810l/llm_based_personally_identifiable_information/
|
geeganage
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn810l
| false | null |
t3_1kn810l
|
/r/LocalLLaMA/comments/1kn810l/llm_based_personally_identifiable_information/
| false | false |
self
| 9 | null |
Suggestion for TTS Models
| 8 |
**Hey everyone,**
I’m building a fun little custom speech-to-speech app. For speech-to-text, I’m using `parakeet-0.6B` (latest on HuggingFace), and for the LLM part, I’m currently experimenting with `gemma3:4b`.
Now I’m looking for a suitable **text-to-speech (TTS)** model from the open-source HuggingFace community. My main constraints are:
* **Max model size:** 2–3 GB (due to 8GB VRAM and 32GB RAM)
* **Multilingual support:** Primarily **English, Hindi, and French**
I’ve looked into a few models:
* **kokoro-82M** – seems promising
* **Zonos** and **Nari-labs/Dia** – both \~6GB, too heavy for my setup
* **Cesame-1B** – tried it, but the performance was underwhelming
Given these constraints, which TTS models would you recommend? Bonus points for ones that work out-of-the-box or require minimal finetuning.
Thanks in advance!
| 2025-05-15T13:27:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn86oz/suggestion_for_tts_models/
|
Heavy_Ad_4912
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn86oz
| false | null |
t3_1kn86oz
|
/r/LocalLLaMA/comments/1kn86oz/suggestion_for_tts_models/
| false | false |
self
| 8 | null |
Open-source general purpose agent with built-in MCPToolkit support
| 55 |
The open-source OWL agent now comes with built-in MCPToolkit support, just drop in your MCP servers (Playwright, desktop-commander, custom Python tools, etc.) and OWL will automatically discover and call them in its multi-agent workflows.
OWL: [https://github.com/camel-ai/owl](https://github.com/camel-ai/owl)
| 2025-05-15T13:46:21 |
Fluffy_Sheepherder76
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn8m8t
| false | null |
t3_1kn8m8t
|
/r/LocalLLaMA/comments/1kn8m8t/opensource_general_purpose_agent_with_builtin/
| false | false | 55 |
{'enabled': True, 'images': [{'id': 'A3oZU9F0SSXkVyXHepsxxye6CHwefKfdk2uSI_3kosw', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/h6y4hb7s9y0f1.jpeg?width=108&crop=smart&auto=webp&s=dd29f6873f1caf6e623ca3fccfc30520125f7ad0', 'width': 108}, {'height': 363, 'url': 'https://preview.redd.it/h6y4hb7s9y0f1.jpeg?width=216&crop=smart&auto=webp&s=9a52ac8b828e17c2213a62512e85f9c9d6ae658c', 'width': 216}, {'height': 538, 'url': 'https://preview.redd.it/h6y4hb7s9y0f1.jpeg?width=320&crop=smart&auto=webp&s=ee3d023b25d2e2f99165aa457441e34896b8d16c', 'width': 320}], 'source': {'height': 841, 'url': 'https://preview.redd.it/h6y4hb7s9y0f1.jpeg?auto=webp&s=cac0197ea1ad0e9d37e0bd7f7cef28dbef2c11e8', 'width': 500}, 'variants': {}}]}
|
||
Update: We fit 50+ LLMs on 2 GPUs — and now we’re inviting you to try it.
| 28 |
Last week’s post on cold starts and snapshotting hit a nerve. Turns out many of you are also trying to juggle multiple models, deal with bloated memory, or squeeze more out of a single GPU.
We’re making our snapshot-based runtime available to a limited number of builders — especially if you’re running agents, RAG pipelines, or multi-model workloads locally.
It’s still early, and we’re limited in support, but the tech is real:
• 50+ models on 2× A4000s
• Cold starts under 2s
• 90%+ GPU utilization
• No bloating, no prewarming
If you’re experimenting with multiple models and want to deploy more on fewer GPUs, this might help.
We’d love your feedback . reach out and we’ll get you access.
Please feel free to ask any questions
| 2025-05-15T14:08:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn94oi/update_we_fit_50_llms_on_2_gpus_and_now_were/
|
pmv143
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn94oi
| false | null |
t3_1kn94oi
|
/r/LocalLLaMA/comments/1kn94oi/update_we_fit_50_llms_on_2_gpus_and_now_were/
| false | false |
self
| 28 | null |
Can I build a fully automated local LLM system that indexes and chats over private data stored on a network share?
| 1 |
[removed]
| 2025-05-15T14:11:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn97hc/can_i_build_a_fully_automated_local_llm_system/
|
ITinsights1999
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn97hc
| false | null |
t3_1kn97hc
|
/r/LocalLLaMA/comments/1kn97hc/can_i_build_a_fully_automated_local_llm_system/
| false | false |
self
| 1 | null |
LLaDA-8B-Tools: A diffusion language model fine-tuned for tool use
| 57 |
Instead of generating token-by-token, this architecture refines the whole output by replacing mask tokens across the sequence.
The bidirectional attention seems to help with structured outputs, though this is just a rough first attempt with some issues (e.g. extra text after a message, because of this architecture's preset generation length).
Model: [https://huggingface.co/Proximile/LLaDA-8B-Tools](https://huggingface.co/Proximile/LLaDA-8B-Tools)
Dataset: [https://huggingface.co/datasets/Proximile/LLaDA-8B-Tools](https://huggingface.co/datasets/Proximile/LLaDA-8B-Tools)
Format mostly follows Llama 3.1: [https://www.llama.com/docs/model-cards-and-prompt-formats/llama3\_1/](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/)
We're also working on a variant tuned for more general tool use using a range of i/o formats.
| 2025-05-15T14:12:37 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn9882/llada8btools_a_diffusion_language_model_finetuned/
|
ProximileLLC
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn9882
| false | null |
t3_1kn9882
|
/r/LocalLLaMA/comments/1kn9882/llada8btools_a_diffusion_language_model_finetuned/
| false | false |
self
| 57 |
{'enabled': False, 'images': [{'id': '2q9S4nBf2YBpvpTF1issf3dcTweLTvqQP91NiMUpF60', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?width=108&crop=smart&auto=webp&s=6288e031cce1ce9326f0f7e56a3ac237a09cd425', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?width=216&crop=smart&auto=webp&s=4a023f9613e29ef115cb0586fe699473fa0f6bfd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?width=320&crop=smart&auto=webp&s=9372358f5d1763d4026602889a97584cdac66591', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?width=640&crop=smart&auto=webp&s=55de9e72a6beb3c208ab42b75816f70196aef11b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?width=960&crop=smart&auto=webp&s=936cd3c336144900ec264fd4b5e6824b78ad04ea', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?width=1080&crop=smart&auto=webp&s=60bb659ae2252b1104edc58e405e4ae53119216b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QWG5gq7mTQJgsF2HeU6tel4h5K89bUv4SZkN0lzOJGk.jpg?auto=webp&s=4c25263712e216badc53310579f22e2a23740c20', 'width': 1200}, 'variants': {}}]}
|
Can I build a fully automated local LLM system that indexes and chats over private data stored on a network share?
| 1 |
[removed]
| 2025-05-15T14:13:12 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn98pm/can_i_build_a_fully_automated_local_llm_system/
|
ITinsights1999
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn98pm
| false | null |
t3_1kn98pm
|
/r/LocalLLaMA/comments/1kn98pm/can_i_build_a_fully_automated_local_llm_system/
| false | false |
self
| 1 | null |
5060ti MultiGPU setup on PCIe 3.0 motherboard
| 2 |
Given 5060ti only has 8 PCIe lanes will there be a noticeable performance hit compared to the same setup with PCIe 4.0?
| 2025-05-15T14:19:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn9e22/5060ti_multigpu_setup_on_pcie_30_motherboard/
|
ingridis15
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn9e22
| false | null |
t3_1kn9e22
|
/r/LocalLLaMA/comments/1kn9e22/5060ti_multigpu_setup_on_pcie_30_motherboard/
| false | false |
self
| 2 | null |
Practicing a foreign language?
| 2 |
I'm looking for an IOS LLM app that I can practice speaking a foreign language with in the car. I've downloaded several, but they all require me to press the microphone button to dictate then the send button to send. I obviously can't do that while driving.
This seems like a really good use case but I can't find an app that will have an open mic conversation with me in a foreign language! Any recommendations?
| 2025-05-15T14:43:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kn9yhx/practicing_a_foreign_language/
|
Ashofsky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kn9yhx
| false | null |
t3_1kn9yhx
|
/r/LocalLLaMA/comments/1kn9yhx/practicing_a_foreign_language/
| false | false |
self
| 2 | null |
AlphaEvolve did pretty well on "Small base LLM only"
| 16 |
In the Ablation chapter of AlphaEvolve white paper, they show its performance using "Small base LLM" instead of Gemini Flash 2.0 and Pro 2.0. Their takeaway is that bigger models perform better, but our takeaway is that... **smaller models work**, too.
https://imgur.com/a/IQkFuJ7
Now, they do not specify what their smaller model is, but I imagine it is something most of us can run locally. Sure, it will take hundreds of hours to find a solution to a single problem on a local machine, but let's be honest, your 5090 is sitting idle most of the time (especially when you are asleep) instead of discovering the next FlashAttention.
Considering the fact that open weights models are getting smarter (than Flash 2.0 and Pro 2.0) and their quants more accurate, I think we have a decent chance of success. Even if we cannot crack big, global problems, it can be very useful for your own custom problem.
The question is, how hard is it to replicate the AlphaEvolve? I don't see anything magical about the system itself. It shouldn't have much more complicated components than FunSearch because it took them a couple of months to build after they released Funsearch. Thoughts?
| 2025-05-15T14:48:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kna33l/alphaevolve_did_pretty_well_on_small_base_llm_only/
|
__Maximum__
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kna33l
| false | null |
t3_1kna33l
|
/r/LocalLLaMA/comments/1kna33l/alphaevolve_did_pretty_well_on_small_base_llm_only/
| false | false |
self
| 16 |
{'enabled': False, 'images': [{'id': 'LBjrFHYwqUjhAWQ0tI4lReFiUkabiS2R6fdMdr4KgIQ', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?width=108&crop=smart&auto=webp&s=1dd65d778f8a1cd276e172256f2619288d218202', 'width': 108}, {'height': 147, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?width=216&crop=smart&auto=webp&s=8374a5caebc5a9671b7483ed15191310b3cde08f', 'width': 216}, {'height': 218, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?width=320&crop=smart&auto=webp&s=7924633b28f023cef0859eeca23647281baa5a1a', 'width': 320}, {'height': 437, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?width=640&crop=smart&auto=webp&s=597608c8db959b7d173c8ffdad6abaef3a11a6fb', 'width': 640}, {'height': 655, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?width=960&crop=smart&auto=webp&s=4a8f1afa1304ff3e23ef2c512186fa45ed80ef92', 'width': 960}, {'height': 737, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?width=1080&crop=smart&auto=webp&s=87462dbee64efea144d76d9f9d87b16fa9ec574f', 'width': 1080}], 'source': {'height': 848, 'url': 'https://external-preview.redd.it/2Cl0LRFYW746TnVaU-9qnDNZyn9cRqJY4hGw7onUNnk.jpg?auto=webp&s=3d8bd47bb7479f8673971ff4c7c80992da97cae4', 'width': 1241}, 'variants': {}}]}
|
Qwen3-32B hallucinates more than QwQ-32B
| 70 |
I've been seeing some people complaining about Qwen3's hallucination issues. Personally, I have never run into such an issue, but I recently came across some Chinese benchmarks of Qwen3 and QwQ, so I might as well share them here.
I translated these to English; the sources are in the images.
TLDR:
1. Qwen3-32B has a lower SimpleQA score than QwQ (5.87% vs 8.07%)
2. Qwen3-32B has a higher hallucination rate than QwQ in reasoning mode (22.7% vs 30.15%)
https://preview.redd.it/nrjfzhl2ky0f1.jpg?width=3388&format=pjpg&auto=webp&s=4c2021c8da8fb21fc46cefb8539130e97ce20dee
https://preview.redd.it/5rh9qe4cky0f1.jpg?width=2160&format=pjpg&auto=webp&s=218051f6ddbc88ff99a584ed0c2877f7e97f8132
https://preview.redd.it/jwi0mphyky0f1.jpg?width=2160&format=pjpg&auto=webp&s=57dbad3cead06c339f4cabf16f39bb211925aa22
https://preview.redd.it/7gy8ebvyky0f1.jpg?width=2156&format=pjpg&auto=webp&s=30da9915523db714b599bf88b1925d85a40f545f
| 2025-05-15T14:50:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kna53n/qwen332b_hallucinates_more_than_qwq32b/
|
AaronFeng47
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kna53n
| false | null |
t3_1kna53n
|
/r/LocalLLaMA/comments/1kna53n/qwen332b_hallucinates_more_than_qwq32b/
| false | false | 70 | null |
|
GPU Upgrade for Ollama/ML/Document Processing
| 2 |
Hi, just getting started with Ollama on my home server and realizing my old CPU isn't cutting it. I'm looking to add a GPU to speed things up and explore better models.
My use case:
\- Automate document tagging in Paperless.
\- Mess around with PyTorch for some ML training (YOLO specifically).
\- Do some local email processing with n8n.
My server is a Proxmox box with 2x E5-2630L v4 CPUs and 512GB RAM. I'm hoping to share the GPU across a few VMs.
Budget-wise, I'm aiming for around $300-400, and I'm limited to a single 8-pin GPU power connector.
I've seen some interesting cards around this price point:
\- M40 24GB (local pickup, around $200)
\- P40 24GB (eBay, around $430 - slightly over budget, but maybe worth considering?)
\- RTX 3060 12GB (eBay, about $200)
I also need advice on what models are best for my use case.
Thanks for any help!
| 2025-05-15T15:17:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1knat5s/gpu_upgrade_for_ollamamldocument_processing/
|
phamleduy04
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knat5s
| false | null |
t3_1knat5s
|
/r/LocalLLaMA/comments/1knat5s/gpu_upgrade_for_ollamamldocument_processing/
| false | false |
self
| 2 | null |
qSpeak - A Cross platform alternative for WisprFlow supporting local LLMs and Linux
| 15 |
Hey, together with my colleagues, we've created [qSpeak.app](http://qSpeak.app) 🎉
qSpeak is an alternative to tools like SuperWhisper or WisprFlow but works on all platforms including Linux. 🚀
Also we're working on integrating LLMs more deeply into it to include more sophisticated interactions like multi step conversations (essentially assistants) and in the near future MCP integration.
The app is currently completely free so please try it out! 🎁
| 2025-05-15T15:28:13 |
https://qspeak.app
|
fajfas3
|
qspeak.app
| 1970-01-01T00:00:00 | 0 |
{}
|
1knb2kq
| false | null |
t3_1knb2kq
|
/r/LocalLLaMA/comments/1knb2kq/qspeak_a_cross_platform_alternative_for_wisprflow/
| false | false |
default
| 15 | null |
HanaVerse - Chat with AI through an interactive anime character! 🌸
| 3 |
I've been working on something I think you'll love - **HanaVerse**, an interactive web UI for Ollama that brings your AI conversations to life through a charming 2D anime character named Hana!
# What is HanaVerse? 🤔
HanaVerse transforms how you interact with Ollama's language models by adding a visual, animated companion to your conversations. Instead of just text on a screen, you chat with Hana - a responsive anime character who reacts to your interactions in real-time!
# Features that make HanaVerse special: ✨
* **Talks Back**: Answers with voice
* **Streaming Responses**: See answers form in real-time as they're generated
* **Full Markdown Support**: Beautiful formatting with syntax highlighting
* **LaTeX Math Rendering**: Perfect for equations and scientific content
* **Customizable**: Choose any Ollama model and configure system prompts
* **Responsive Design**: Works on both desktop(preferred) and mobile
# Why I built this 🛠️
I wanted to make AI interactions more engaging and personal while leveraging the power of self-hosted Ollama models. The result is an interface that makes AI conversations feel more natural and enjoyable.
#
[Hanaverse Demo](https://reddit.com/link/1knb504/video/slz1ybstry0f1/player)
If you're looking for a more engaging way to interact with your Ollama models, give HanaVerse a try and let me know what you think!
**GitHub:** [https://github.com/Ashish-Patnaik/HanaVerse](https://github.com/Ashish-Patnaik/HanaVerse)
I'd love your feedback and contributions - stars ⭐ are always appreciated!
| 2025-05-15T15:30:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1knb504/hanaverse_chat_with_ai_through_an_interactive/
|
OrganicTelevision652
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knb504
| false | null |
t3_1knb504
|
/r/LocalLLaMA/comments/1knb504/hanaverse_chat_with_ai_through_an_interactive/
| false | false | 3 |
{'enabled': False, 'images': [{'id': 'e_TzCbLRWGuoVvpz2Ql3BuZNn4O25i0T1Ou5ynEyr54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=108&crop=smart&auto=webp&s=d2affacaa734a3be0bbdd6e15b0283fc0ee4f370', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=216&crop=smart&auto=webp&s=bdd21c09acefb362bfab55c231293606bb296c81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=320&crop=smart&auto=webp&s=2e2354a6d19a9d0605735851025006fe7943153b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=640&crop=smart&auto=webp&s=4a56e1472206dfd96091c85f5438f8e274f3dd61', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=960&crop=smart&auto=webp&s=938d136f4201aa6006cde8d30f7ffddb662b9bf3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=1080&crop=smart&auto=webp&s=facd64458991f23c157259df0839bd022a1659ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?auto=webp&s=5505cb72092974c1018420e59c1ae0529a217cc3', 'width': 1200}, 'variants': {}}]}
|
|
Hugging Face is doing a freeopen source MCP course
| 1 |
We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.
Join the course on the hub:https://huggingface.co/mcp-course
In this course, you will:
📖 Study Model Context Protocol in theory, design, and practice.
🧑💻 Learn to use established MCP SDKs and frameworks.
💾 Share your projects and explore applications created by the community.
🏆 Participate in challenges and evaluate your MCP implementations.
🎓 Earn a certificate of completion.
At the end, you'll understand how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards.
| 2025-05-15T15:38:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1knbbl8/hugging_face_is_doing_a_freeopen_source_mcp_course/
|
Zealousideal-Cut590
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knbbl8
| false | null |
t3_1knbbl8
|
/r/LocalLLaMA/comments/1knbbl8/hugging_face_is_doing_a_freeopen_source_mcp_course/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nyTSZ22UOd_egJ571yAxHNLvQUAnZKwWrOoEgTTO-sU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=108&crop=smart&auto=webp&s=22417c77a71c91a54185a0bc24a9309678ec8f6d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=216&crop=smart&auto=webp&s=95ae682c2bbbc25915891cf0d8278f587e1b3f5f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=320&crop=smart&auto=webp&s=08e6131a5cfb6a3aaba86f669745c0e038ac6ab0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=640&crop=smart&auto=webp&s=bb163e62caa773d40d3637a07c41d4571d6237c9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=960&crop=smart&auto=webp&s=4d60ee0df905e535c4190b139a5697afeef389bd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=1080&crop=smart&auto=webp&s=eec70ac5115f79058fdc5af5fd88023fdb7e77d4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?auto=webp&s=d07ebd32ee62093cdd338b749ef86cfdc3f8f882', 'width': 1200}, 'variants': {}}]}
|
Hugging Face free and open source MCP course
| 97 |
We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.
Join the course on the hub:https://huggingface.co/mcp-course
In this course, you will:
📖 Study Model Context Protocol in theory, design, and practice.
🧑💻 Learn to use established MCP SDKs and frameworks.
💾 Share your projects and explore applications created by the community.
🏆 Participate in challenges and evaluate your MCP implementations.
🎓 Earn a certificate of completion.
At the end, you'll understand how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards.
| 2025-05-15T15:40:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1knbdd3/hugging_face_free_and_open_source_mcp_course/
|
Zealousideal-Cut590
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knbdd3
| false | null |
t3_1knbdd3
|
/r/LocalLLaMA/comments/1knbdd3/hugging_face_free_and_open_source_mcp_course/
| false | false |
self
| 97 |
{'enabled': False, 'images': [{'id': 'nyTSZ22UOd_egJ571yAxHNLvQUAnZKwWrOoEgTTO-sU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=108&crop=smart&auto=webp&s=22417c77a71c91a54185a0bc24a9309678ec8f6d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=216&crop=smart&auto=webp&s=95ae682c2bbbc25915891cf0d8278f587e1b3f5f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=320&crop=smart&auto=webp&s=08e6131a5cfb6a3aaba86f669745c0e038ac6ab0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=640&crop=smart&auto=webp&s=bb163e62caa773d40d3637a07c41d4571d6237c9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=960&crop=smart&auto=webp&s=4d60ee0df905e535c4190b139a5697afeef389bd', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?width=1080&crop=smart&auto=webp&s=eec70ac5115f79058fdc5af5fd88023fdb7e77d4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9Z43sOrT3Ccwoy33cBkpTuAGQsy6lsRp3ycCPMlJVs4.jpg?auto=webp&s=d07ebd32ee62093cdd338b749ef86cfdc3f8f882', 'width': 1200}, 'variants': {}}]}
|
HanaVerse - Chat with AI through an interactive anime character! 🌸
| 1 |
I've been working on something I think you'll love - HanaVerse, an interactive web UI for Ollama that brings your AI conversations to life through a charming 2D anime character named Hana!
What is **HanaVerse**? 🤔
HanaVerse transforms how you interact with Ollama's language models by adding a visual, animated companion to your conversations. Instead of just text on a screen, you chat with Hana - a responsive anime character who reacts to your interactions in real-time!
**Features that make HanaVerse special**: ✨
**Talks Back:** Answers with voice
**Streaming Responses:** See answers form in real-time as they're generated
**Full Markdown Support:** Beautiful formatting with syntax highlighting
**LaTeX Math Rendering:** Perfect for equations and scientific content
**Customizable:** Choose any Ollama model and configure system prompts
**Responsive Design:** Works on both desktop(preferred) and mobile
Why I built this 🛠️
I wanted to make AI interactions more engaging and personal while leveraging the power of self-hosted Ollama models. The result is an interface that makes AI conversations feel more natural and enjoyable.
If you're looking for a more engaging way to interact with your Ollama models, give HanaVerse a try and let me know what you think!
GitHub: [https://github.com/Ashish-Patnaik/HanaVerse](https://github.com/Ashish-Patnaik/HanaVerse)
Skeleton Demo = [https://hanaverse.vercel.app/](https://hanaverse.vercel.app/)
I'd love your feedback and contributions - stars ⭐ are always appreciated!
| 2025-05-15T15:50:05 |
https://v.redd.it/rz49w14kvy0f1
|
OrganicTelevision652
|
/r/LocalLLaMA/comments/1knbm5t/hanaverse_chat_with_ai_through_an_interactive/
| 1970-01-01T00:00:00 | 0 |
{}
|
1knbm5t
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rz49w14kvy0f1/DASHPlaylist.mpd?a=1750045813%2CNjFjZDVjNzI0NzZlYmY2YTAzMmUzMTZjODM1YmVkMjA3YmZmYmVjYzhhZmNkNmQ3YzUzYmE3M2M1MTRhNjcyZQ%3D%3D&v=1&f=sd', 'duration': 143, 'fallback_url': 'https://v.redd.it/rz49w14kvy0f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 634, 'hls_url': 'https://v.redd.it/rz49w14kvy0f1/HLSPlaylist.m3u8?a=1750045813%2CY2IzNTc0NTE5MTE2YjhkMTFjYTFhNmM5ODJmMTJlNzVkNGU2MmM5ZjA0N2M4ZTlkODA4Mjc4YzNiMjg5NzNlZg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rz49w14kvy0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
|
t3_1knbm5t
|
/r/LocalLLaMA/comments/1knbm5t/hanaverse_chat_with_ai_through_an_interactive/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?width=108&crop=smart&format=pjpg&auto=webp&s=8bbc36236289a67f703412d5000d74720cbe4011', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?width=216&crop=smart&format=pjpg&auto=webp&s=a4de05da224d999d3bf7733b7f132d1cd1ef1ec4', 'width': 216}, {'height': 158, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?width=320&crop=smart&format=pjpg&auto=webp&s=edcf7072815ee39a65c3937494fa4ad96df5fe4d', 'width': 320}, {'height': 317, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?width=640&crop=smart&format=pjpg&auto=webp&s=49a5bf7581bf51398f0cecb34b1d5856f2721bba', 'width': 640}, {'height': 476, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?width=960&crop=smart&format=pjpg&auto=webp&s=ed59c01042dd3a5a274dead77df528aac65de40e', 'width': 960}, {'height': 535, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6f704ce4ab2edbafbdaf170b65179091ae2d01c1', 'width': 1080}], 'source': {'height': 952, 'url': 'https://external-preview.redd.it/ZWc0cWMzNGt2eTBmMaWDXodENe_EoTPPNX5Nap72JcTgvzolPXNFiolBRj2U.png?format=pjpg&auto=webp&s=087d4b9d8aa4aa512cbed10cd03641d8b08d0898', 'width': 1920}, 'variants': {}}]}
|
|
HanaVerse - Chat with AI through an interactive anime character! 🌸
| 15 |
I've been working on something I think you'll love - HanaVerse, an interactive web UI for Ollama that brings your AI conversations to life through a charming 2D anime character named Hana!
What is **HanaVerse**? 🤔
HanaVerse transforms how you interact with Ollama's language models by adding a visual, animated companion to your conversations. Instead of just text on a screen, you chat with Hana - a responsive anime character who reacts to your interactions in real-time!
**Features that make HanaVerse special**: ✨
**Talks Back:** Answers with voice
**Streaming Responses:** See answers form in real-time as they're generated
**Full Markdown Support:** Beautiful formatting with syntax highlighting
**LaTeX Math Rendering:** Perfect for equations and scientific content
**Customizable:** Choose any Ollama model and configure system prompts
**Responsive Design:** Works on both desktop(preferred) and mobile
Why I built this 🛠️
I wanted to make AI interactions more engaging and personal while leveraging the power of self-hosted Ollama models. The result is an interface that makes AI conversations feel more natural and enjoyable.
[Hanaverse demo](https://reddit.com/link/1knbo80/video/uczc6t9cwy0f1/player)
If you're looking for a more engaging way to interact with your Ollama models, give HanaVerse a try and let me know what you think!
GitHub: [https://github.com/Ashish-Patnaik/HanaVerse](https://github.com/Ashish-Patnaik/HanaVerse)
Skeleton Demo = [https://hanaverse.vercel.app/](https://hanaverse.vercel.app/)
I'd love your feedback and contributions - stars ⭐ are always appreciated!
| 2025-05-15T15:52:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1knbo80/hanaverse_chat_with_ai_through_an_interactive/
|
OrganicTelevision652
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knbo80
| false | null |
t3_1knbo80
|
/r/LocalLLaMA/comments/1knbo80/hanaverse_chat_with_ai_through_an_interactive/
| false | false | 15 |
{'enabled': False, 'images': [{'id': 'e_TzCbLRWGuoVvpz2Ql3BuZNn4O25i0T1Ou5ynEyr54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=108&crop=smart&auto=webp&s=d2affacaa734a3be0bbdd6e15b0283fc0ee4f370', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=216&crop=smart&auto=webp&s=bdd21c09acefb362bfab55c231293606bb296c81', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=320&crop=smart&auto=webp&s=2e2354a6d19a9d0605735851025006fe7943153b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=640&crop=smart&auto=webp&s=4a56e1472206dfd96091c85f5438f8e274f3dd61', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=960&crop=smart&auto=webp&s=938d136f4201aa6006cde8d30f7ffddb662b9bf3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?width=1080&crop=smart&auto=webp&s=facd64458991f23c157259df0839bd022a1659ff', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VMExyAyOE_4W1BYj5ZE65UYho8s1S8iYWLFddyI6R88.jpg?auto=webp&s=5505cb72092974c1018420e59c1ae0529a217cc3', 'width': 1200}, 'variants': {}}]}
|
|
Falcon-Edge: a 1B and 3B models based on the bitnet architecture.
| 1 |
[removed]
| 2025-05-15T15:57:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1knbsef/falconedge_a_1b_and_3b_models_based_on_the_bitnet/
|
ilyas555
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knbsef
| false | null |
t3_1knbsef
|
/r/LocalLLaMA/comments/1knbsef/falconedge_a_1b_and_3b_models_based_on_the_bitnet/
| false | false | 1 | null |
|
Falcon-Edge: a 1B and 3B models based on the bitnet architecture.
| 1 |
[removed]
| 2025-05-15T16:00:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1knbv8r/falconedge_a_1b_and_3b_models_based_on_the_bitnet/
|
ilyas555
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knbv8r
| false | null |
t3_1knbv8r
|
/r/LocalLLaMA/comments/1knbv8r/falconedge_a_1b_and_3b_models_based_on_the_bitnet/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Gpoz4LVC0qzEk4xcFJcFr01b9w6NYjnY5GTIL8r_1vs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=108&crop=smart&auto=webp&s=6e1c6ff46d1d71acc70b3f9eb066a663cd674e87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=216&crop=smart&auto=webp&s=db2a5065f18033e69e0175cf3958f529590639b1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=320&crop=smart&auto=webp&s=0822bbb89671314bb94ce07c23b5defdd3e32532', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=640&crop=smart&auto=webp&s=9aa181f9263a302e8d3f81a393aca54dc67d3da4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=960&crop=smart&auto=webp&s=ef2bce35e7f820079a0a10ae2f28d3bdfd6e5128', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=1080&crop=smart&auto=webp&s=6ca40eee059cad8963f10ee793358bb5f4cdfe0c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?auto=webp&s=11a07bcfe31c8bb40d8ae827f8fb0bab5acd593f', 'width': 1200}, 'variants': {}}]}
|
Falcon-Edge: a 1B and 3B LLM based on the BitNet architecture.
| 1 |
[removed]
| 2025-05-15T16:04:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1knbyp9/falconedge_a_1b_and_3b_llm_based_on_the_bitnet/
|
ilyas555
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knbyp9
| false | null |
t3_1knbyp9
|
/r/LocalLLaMA/comments/1knbyp9/falconedge_a_1b_and_3b_llm_based_on_the_bitnet/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'Gpoz4LVC0qzEk4xcFJcFr01b9w6NYjnY5GTIL8r_1vs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=108&crop=smart&auto=webp&s=6e1c6ff46d1d71acc70b3f9eb066a663cd674e87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=216&crop=smart&auto=webp&s=db2a5065f18033e69e0175cf3958f529590639b1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=320&crop=smart&auto=webp&s=0822bbb89671314bb94ce07c23b5defdd3e32532', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=640&crop=smart&auto=webp&s=9aa181f9263a302e8d3f81a393aca54dc67d3da4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=960&crop=smart&auto=webp&s=ef2bce35e7f820079a0a10ae2f28d3bdfd6e5128', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?width=1080&crop=smart&auto=webp&s=6ca40eee059cad8963f10ee793358bb5f4cdfe0c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YpPt4sQV5imbSKpoilg46hgLtG-ZrRwB-iBLNdTSQ10.jpg?auto=webp&s=11a07bcfe31c8bb40d8ae827f8fb0bab5acd593f', 'width': 1200}, 'variants': {}}]}
|
Falcon-Edge: A series of powerful, universal and fine-tunable BitNet models
| 1 |
[removed]
| 2025-05-15T16:07:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1knc1wh/falconedge_a_series_of_powerful_universal_and/
|
Automatic_Truth_6666
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knc1wh
| false | null |
t3_1knc1wh
|
/r/LocalLLaMA/comments/1knc1wh/falconedge_a_series_of_powerful_universal_and/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'uKoAN56QBrx49tKNY13u6ICrHEJxeCADh_8PLik14kc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?width=108&crop=smart&auto=webp&s=1a28048819a5343167657c63adfd0b1c74d3a365', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?width=216&crop=smart&auto=webp&s=d327b747a1666c39f203eb9f01b170f864846854', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?width=320&crop=smart&auto=webp&s=d65219bd2998f8f6bb3ff222d846ae77eb8f9602', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?width=640&crop=smart&auto=webp&s=9b8be11d24010617b4c7947693bdb8a92a69c3bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?width=960&crop=smart&auto=webp&s=ae983ec768479e496c6336d04bd181eecd7651d8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?width=1080&crop=smart&auto=webp&s=a939069a30ad59a9809d0c188787a169389aa124', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xw2jy5A9gAJZ2t_pujEb-NpC3KWxGXNYtmNC4Juz4RI.jpg?auto=webp&s=645c1c758a8edfa28874e9a5e652d385f38adaa0', 'width': 1200}, 'variants': {}}]}
|
Quick Qwen3-30B-A6B-16-Extreme vs Qwen3-30B A3B Benchmark
| 56 |
Hey, I have a Benchmark suite of 110 tasks across multiple programming languages. The focus really is on more complex problems and not Javascript one-shot problems. I was interested in comparing the above two models.
Setup
\- Qwen3-30B-A6B-16-Extreme Q4\_K\_M running in LMStudio
\- Qwen3-30B A3B on OpenRouter
I understand that this is not a fair fight because the A6B is heavily quantized, but running this benchmark on my Macbook takes almost 12 hours with reasoning models, so a better comparison will take a bit longer.
Here are the results:
| lmstudio/qwen3-30b-a6b-16-extreme | correct: 56 | wrong: 54 |
| openrouter/qwen/qwen3-30b-a3b | correct: 68 | wrong: 42 |
I will try to report back in a couple of days with more comparisons.
You can learn more about the benchmark here (https://ben.terhech.de/posts/2025-01-31-llms-vs-programming-languages.html) but I've since also added support for more models and languages. However I haven't really released the results in some time.
| 2025-05-15T16:17:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1knca48/quick_qwen330ba6b16extreme_vs_qwen330b_a3b/
|
terhechte
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knca48
| false | null |
t3_1knca48
|
/r/LocalLLaMA/comments/1knca48/quick_qwen330ba6b16extreme_vs_qwen330b_a3b/
| false | false |
self
| 56 |
{'enabled': False, 'images': [{'id': '-1tEckwmrxwtomZTafD2hEqMRKvNLO0mPIutjO_qYhY', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=108&crop=smart&auto=webp&s=4904377f1acc1958af76874ca7486f29ef665e09', 'width': 108}, {'height': 190, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=216&crop=smart&auto=webp&s=e17e9c70d237df027f6980e9ca0dd828c6f7e0b4', 'width': 216}, {'height': 282, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=320&crop=smart&auto=webp&s=e8f932a5afb782b2e507d0f3882006bdc7316638', 'width': 320}, {'height': 565, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?width=640&crop=smart&auto=webp&s=e92484264a14897b17895cda1987bbd61718444e', 'width': 640}], 'source': {'height': 708, 'url': 'https://external-preview.redd.it/f19UVEQKn2NUgogunr84spwvcNElFvuYeuIGFDIxA0k.jpg?auto=webp&s=6348fe8f5ef16d65a10ae2d646cd453f89b47a20', 'width': 801}, 'variants': {}}]}
|
Attempting to get genetic behaviour in 1 llm call. Attempt 1.
| 1 |
[removed]
| 2025-05-15T16:33:43 |
https://www.reddit.com/r/LocalLLaMA/comments/1kncooi/attempting_to_get_genetic_behaviour_in_1_llm_call/
|
Character-Drink2952
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kncooi
| false | null |
t3_1kncooi
|
/r/LocalLLaMA/comments/1kncooi/attempting_to_get_genetic_behaviour_in_1_llm_call/
| false | false |
self
| 1 | null |
TTS Fine-tuning now in Unsloth - Sesame CSM + Whisper support
| 2 |
Hey folks! This one’s a bit different from LLMs but we’re super excited to announce that you can now train Text-to-Speech (TTS) models in [Unsloth](https://github.com/unslothai/unsloth)! Training is \~1.5x faster with 50% less VRAM compared to all other setups with FA2. :D
* We support models like `Sesame/csm-1b`, `OpenAI/whisper-large-v3`, `CanopyLabs/orpheus-3b-0.1-ft`, and pretty much any Transformer-compatible models including LLasa, Outte, Spark, and others.
* The goal is to clone voices, adapt speaking styles and tones, support new languages, handle specific tasks and more.
* We’ve made notebooks to train, run, and save these models for free on Google Colab. Some models aren’t supported by llama.cpp and will be saved only as safetensors, but others should work. See our TTS docs and notebooks: [https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning) You can see a mini demo below:
https://reddit.com/link/1knd53e/video/kxvr1eny6z0f1/player
* The training process is similar to SFT, but the dataset includes audio clips with transcripts. We use a dataset called ‘Elise’ that embeds emotion tags like <sigh> or <laughs> into transcripts, triggering expressive audio that matches the emotion.
* Since TTS models are usually small, you can train them using 16-bit LoRA, or go with FFT. Loading a 16-bit LoRA model is simple.
We've uploaded most of the TTS models (quantized and original) to [Hugging Face here](https://huggingface.co/collections/unsloth/text-to-speech-tts-models-68007ab12522e96be1e02155).
And here are our TTS notebooks:
|[Sesame-CSM (1B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Sesame_CSM_(1B)-TTS.ipynb)|[Orpheus-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Orpheus_(3B)-TTS.ipynb)|[Whisper Large V3](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb)|[Spark-TTS (0.5B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Spark_TTS_(0_5B).ipynb)|
|:-|:-|:-|:-|
Thank you for reading and please do ask any questions!!
P.S. We also now support Qwen3 GRPO. We use the base model + a new custom proximity-based reward function to favor near-correct answers and penalize outliers. Pre-finetuning mitigates formatting bias and boosts evaluation accuracy via regex matching: [https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3\_(4B)-GRPO.ipynb](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(4B)-GRPO.ipynb)
| 2025-05-15T16:52:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1knd53e/tts_finetuning_now_in_unsloth_sesame_csm_whisper/
|
danielhanchen
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knd53e
| false | null |
t3_1knd53e
|
/r/LocalLLaMA/comments/1knd53e/tts_finetuning_now_in_unsloth_sesame_csm_whisper/
| false | false |
self
| 2 | null |
Ansible to build out LLM
| 1 |
Anyone know of a repository of Ansible scripts to building / optimizing a Linux LLM environment?
| 2025-05-15T16:54:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1knd6vb/ansible_to_build_out_llm/
|
jsconiers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knd6vb
| false | null |
t3_1knd6vb
|
/r/LocalLLaMA/comments/1knd6vb/ansible_to_build_out_llm/
| false | false |
self
| 1 | null |
AI Code completion for Netbeans IDE
| 4 |
Hey.
I wanted to share a hobby project of mine, in the unlikely event someone finds it useful.
I've written a plugin for Netbeans IDE that enables both fim code completion, instruction based completion and Ai Chat with local or remote backends.
"Why Netbeans?", you might ask. (Or more likely: "What is Netbeans?")
This remnant from a time before Java was owned by Oracle, and when most Java developers anyway used Eclipse.
Well, I'm maintainer of an open source project that is based on Netbeans, and use it for a few of my own Java projects. For said projects, I thought it would be nice to have a copilot-like experience. And there's nothing like a bit of procrastination from your main projects.
My setup uses llama.cpp with Qwen as the backend. It supports using various hosts (you might for example want a 1.5b or 3b model for the FIM, but something beefier for your chat.)
The FIM is a bit restricted since I'm using the existing code-completion dialogs, so seeing what the ai wants to put there is a bit difficult if it's longer than one row.
It's all very rough around the edges, and I'm currently trying to get custom tool use working (for direct code insertion from the "chat ai").
Let me know if you try it out and like it, or at least not hate it. It would warm my heart.
[https://github.com/neph1/NetbeansAiCodeCompletion](https://github.com/neph1/NetbeansAiCodeCompletion)
| 2025-05-15T17:09:52 |
neph1010
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kndl7d
| false | null |
t3_1kndl7d
|
/r/LocalLLaMA/comments/1kndl7d/ai_code_completion_for_netbeans_ide/
| false | false | 4 |
{'enabled': True, 'images': [{'id': 'ydAe_hMc7hrVVzeYY1ay6Iom0-1jVHjLhgDa04rYv34', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/0n4mme8p7z0f1.png?width=108&crop=smart&auto=webp&s=9d683ce4dc9f2c7a19480d89e66fd660da5146ed', 'width': 108}, {'height': 219, 'url': 'https://preview.redd.it/0n4mme8p7z0f1.png?width=216&crop=smart&auto=webp&s=80e2a52cfda4a361aefc6e1212825ccae6e22933', 'width': 216}, {'height': 324, 'url': 'https://preview.redd.it/0n4mme8p7z0f1.png?width=320&crop=smart&auto=webp&s=370d2d070cb40a138dec465be75b1f1ba0224305', 'width': 320}, {'height': 649, 'url': 'https://preview.redd.it/0n4mme8p7z0f1.png?width=640&crop=smart&auto=webp&s=59ebbb6f8cff8d6f982104c81a07d9cf3225f369', 'width': 640}], 'source': {'height': 734, 'url': 'https://preview.redd.it/0n4mme8p7z0f1.png?auto=webp&s=56e4120f48e8e4a3156fe41ca40db247eaf997a0', 'width': 723}, 'variants': {}}]}
|
||
How We Made LLMs Work with Old Systems (Thanks to RAG)
| 0 |
LLMs are great—but not always accurate. RAG fixes that.
If you’re using AI in industries like BFSI, healthcare, or SaaS, accuracy isn’t optional. LLMs can hallucinate, and that’s a serious risk.
Retrieval-Augmented Generation (RAG) connects your LLM to real-time, trusted data—so responses are based on your content, not just what the model was trained on.
The best part?
You don’t need to replace your legacy systems. RAG works with them.
I’ve helped a few teams implement RAG to get more reliable, compliant, and useful AI—without overhauling their tech stack.
Anyone here using RAG or considering it? Would love to exchange ideas.
| 2025-05-15T17:10:35 |
https://www.reddit.com/r/LocalLLaMA/comments/1kndlvp/how_we_made_llms_work_with_old_systems_thanks_to/
|
Elvis_Vijay1
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kndlvp
| false | null |
t3_1kndlvp
|
/r/LocalLLaMA/comments/1kndlvp/how_we_made_llms_work_with_old_systems_thanks_to/
| false | false |
self
| 0 | null |
TTS Fine-tuning now in Unsloth!
| 524 |
Hey folks! Not the usual LLMs talk but we’re excited to announce that you can now train Text-to-Speech (TTS) models in [Unsloth](https://github.com/unslothai/unsloth)! Training is \~1.5x faster with 50% less VRAM compared to all other setups with FA2. :D
* Support includes `Sesame/csm-1b`, `OpenAI/whisper-large-v3`, `CanopyLabs/orpheus-3b-0.1-ft`, and any Transformer-style model including LLasa, Outte, Spark, and more.
* The goal of TTS fine-tuning to minic voices, adapt speaking styles and tones, support new languages, handle specific tasks etc.
* We’ve made notebooks to train, run, and save these models for free on Google Colab. Some models aren’t supported by llama.cpp and will be saved only as safetensors, but others should work. See our TTS docs and notebooks: [https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning](https://docs.unsloth.ai/basics/text-to-speech-tts-fine-tuning)
* The training process is similar to SFT, but the dataset includes audio clips with transcripts. We use a dataset called ‘Elise’ that embeds emotion tags like <sigh> or <laughs> into transcripts, triggering expressive audio that matches the emotion.
* Since TTS models are usually small, you can train them using 16-bit LoRA, or go with FFT. Loading a 16-bit LoRA model is simple.
We've uploaded most of the TTS models (quantized and original) to [Hugging Face here](https://huggingface.co/collections/unsloth/text-to-speech-tts-models-68007ab12522e96be1e02155).
And here are our TTS notebooks:
|[Sesame-CSM (1B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Sesame_CSM_(1B)-TTS.ipynb)|[Orpheus-TTS (3B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Orpheus_(3B)-TTS.ipynb)|[Whisper Large V3](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Whisper.ipynb)|[Spark-TTS (0.5B)](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Spark_TTS_(0_5B).ipynb)|
|:-|:-|:-|:-|
Thank you for reading and please do ask any questions!!
P.S. We also now support Qwen3 GRPO. We use the base model + a new custom proximity-based reward function to favor near-correct answers and penalize outliers. Pre-finetuning mitigates formatting bias and boosts evaluation accuracy via regex matching: [https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3\_(4B)-GRPO.ipynb](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(4B)-GRPO.ipynb)
| 2025-05-15T17:14:19 |
https://v.redd.it/faqjz7kzaz0f1
|
danielhanchen
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kndp9f
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/faqjz7kzaz0f1/DASHPlaylist.mpd?a=1749921271%2CZTdhZGU4ODMyYmVhOTFjZmQ3YmY2OWQwZDY3MmI1MTdhMzEzYmRjYjk4YWJhZWZkOTU2MjMxMTlhZDMxMzlkMA%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/faqjz7kzaz0f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/faqjz7kzaz0f1/HLSPlaylist.m3u8?a=1749921271%2CM2FhMDNmMDgyNmMyNGQ3YWY3YTgyYzk0MzFkYmY2MjA1NDA1MTc3ODhlZTNmMGIyNzIzNDgzMzJhM2YxNjI4NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/faqjz7kzaz0f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1128}}
|
t3_1kndp9f
|
/r/LocalLLaMA/comments/1kndp9f/tts_finetuning_now_in_unsloth/
| false | false | 524 |
{'enabled': False, 'images': [{'id': 'bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7', 'resolutions': [{'height': 103, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?width=108&crop=smart&format=pjpg&auto=webp&s=763704175d747fc6bfbdf4d9c19c048bee9f9f9c', 'width': 108}, {'height': 206, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?width=216&crop=smart&format=pjpg&auto=webp&s=417f94c44cf34ebfc1acf75df1621e6372cf163d', 'width': 216}, {'height': 306, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?width=320&crop=smart&format=pjpg&auto=webp&s=35aa0c6fcf61550c671161b8a8c7d9aa52d6f750', 'width': 320}, {'height': 612, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?width=640&crop=smart&format=pjpg&auto=webp&s=c02c0be07717991598718b9dbdda4d831622d6fc', 'width': 640}, {'height': 918, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?width=960&crop=smart&format=pjpg&auto=webp&s=9bc2822d74862bf9837a65298cb2ca58e9c40aa8', 'width': 960}, {'height': 1033, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6fd5e6e4a25e46cd5025d2d01dbdbf0319561015', 'width': 1080}], 'source': {'height': 1033, 'url': 'https://external-preview.redd.it/bXI4dnBsa3phejBmMfDzohHQ2IN6C0pCi0KaT-g2AEXeep08I3DgQhQN5vF7.png?format=pjpg&auto=webp&s=e89ba921b9b8aa89e5449f2d23e16578e180f7d5', 'width': 1080}, 'variants': {}}]}
|
|
Local models served globally?
| 1 |
After trialing local models like qwen3 30b, llama scout, various dense ~32b models, for a few weeks I think I can go fully local. I am about ready to buy a dedicated llm server probably a mac-mini or AMD 395+, or build something with 24gb vram and 64gb ddr5. But, because I am on the road a lot for work, and I do a lot of coding in my day to day, I’d love to somehow serve it over the internet, behind an OpenAI like endpoint, and obv with a login/key… what’s the best way to serve this? I could put the pc on my network and request a static IP, or maybe have it co-located at a hosting company? I guess I’d then just run vllm? Anyone have experience with a setup like this?
| 2025-05-15T17:21:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kndvxo/local_models_served_globally/
|
Alarming-Ad8154
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kndvxo
| false | null |
t3_1kndvxo
|
/r/LocalLLaMA/comments/1kndvxo/local_models_served_globally/
| false | false |
self
| 1 | null |
❌ A2A "vs" MCP | ✅ A2A "and" MCP - Tutorial with Demo Included!!!
| 1 |
[removed]
| 2025-05-15T17:50:28 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1knem1d
| false | null |
t3_1knem1d
|
/r/LocalLLaMA/comments/1knem1d/a2a_vs_mcp_a2a_and_mcp_tutorial_with_demo_included/
| false | false |
default
| 1 | null |
||
❌ A2A "vs" MCP | ✅ A2A "and" MCP - Tutorial with Demo Included!!!
| 0 |
Hello Readers!
\[Code github link in comment\]
You must have heard about MCP an emerging protocol, "razorpay's MCP server out", "stripe's MCP server out"... But have you heard about A2A a protocol sketched by google engineers and together with MCP these two protocols can help in making complex applications.
Let me guide you to both of these protocols, their objectives and when to use them!
Lets start with MCP first, What MCP actually is in very simple terms?\[docs link in comment\]
Model Context \[Protocol\] where protocol means set of predefined rules which server follows to communicate with the client. In reference to LLMs this means if I design a server using any framework(django, nodejs, fastapi...) but it follows the rules laid by the MCP guidelines then I can connect this server to any supported LLM and that LLM when required will be able to fetch information using my server's DB or can use any tool that is defined in my server's route.
Lets take a simple example to make things more clear\[See youtube video in comment for illustration\]:
I want to make my LLM personalized for myself, this will require LLM to have relevant context about me when needed, so I have defined some routes in a server like /my\_location /my\_profile, /my\_fav\_movies and a tool /internet\_search and this server follows MCP hence I can connect this server seamlessly to any LLM platform that supports MCP(like claude desktop, langchain, even with chatgpt in coming future), now if I ask a question like "what movies should I watch today" then LLM can fetch the context of movies I like and can suggest similar movies to me, or I can ask LLM for best non vegan restaurant near me and using the tool call plus context fetching my location it can suggest me some restaurants.
NOTE: I am again and again referring that a MCP server can connect to a supported client (I am not saying to a supported LLM) this is because I cannot say that Lllama-4 supports MCP and Lllama-3 don't its just a tool call internally for LLM its the responsibility of the client to communicate with the server and give LLM tool calls in the required format.
Now its time to look at A2A protocol\[docs link in comment\]
Similar to MCP, A2A is also a set of rules, that when followed allows server to communicate to any a2a client. By definition: A2A standardizes how independent, often opaque, AI agents communicate and collaborate with each other as peers. In simple terms, where MCP allows an LLM client to connect to tools and data sources, A2A allows for a back and forth communication from a host(client) to different A2A servers(also LLMs) via task object. This task object has state like completed, input\_required, errored.
Lets take a simple example involving both A2A and MCP\[See youtube video in comment for illustration\]:
I want to make a LLM application that can run command line instructions irrespective of operating system i.e for linux, mac, windows. First there is a client that interacts with user as well as other A2A servers which are again LLM agents. So, our client is connected to 3 A2A servers, namely mac agent server, linux agent server and windows agent server all three following A2A protocols.
When user sends a command, "delete readme.txt located in Desktop on my windows system" cleint first checks the agent card, if found relevant agent it creates a task with a unique id and send the instruction in this case to windows agent server. Now our windows agent server is again connected to MCP servers that provide it with latest command line instruction for windows as well as execute the command on CMD or powershell, once the task is completed server responds with "completed" status and host marks the task as completed.
Now image another scenario where user asks "please delete a file for me in my mac system", host creates a task and sends the instruction to mac agent server as previously, but now mac agent raises an "input\_required" status since it doesn't know which file to actually delete this goes to host and host asks the user and when user answers the question, instruction goes back to mac agent server and this time it fetches context and call tools, sending task status as completed.
A more detailed explanation with illustration and code go through can be found in the youtube video in comment section. I hope I was able to make it clear that its not A2A vs MCP but its A2A and MCP to build complex applications.
| 2025-05-15T17:51:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1knen67/a2a_vs_mcp_a2a_and_mcp_tutorial_with_demo_included/
|
Responsible_Soft_429
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knen67
| false | null |
t3_1knen67
|
/r/LocalLLaMA/comments/1knen67/a2a_vs_mcp_a2a_and_mcp_tutorial_with_demo_included/
| false | false |
self
| 0 | null |
What would you run with 128GB RAM instead of 64GB? (Mac)
| 0 |
I am looking to upgrade the Mac I currently use for LLMs and some casual image generation, and debating 64 vs 128GB.
Thoughts?
| 2025-05-15T18:03:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1knexzi/what_would_you_run_with_128gb_ram_instead_of_64gb/
|
PracticlySpeaking
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1knexzi
| false | null |
t3_1knexzi
|
/r/LocalLLaMA/comments/1knexzi/what_would_you_run_with_128gb_ram_instead_of_64gb/
| false | false |
self
| 0 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.