Column schema (dtype and min/max length or value):

| column | dtype | min | max |
|:---|:---|:---|:---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2025-06-30 03:16:29 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2025-06-26 17:30:18 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
How to Break the Machine by Talking to It
1
[removed]
2025-06-14T03:32:52
https://www.reddit.com/r/LocalLLaMA/comments/1laz0m4/how_to_break_the_machine_by_talking_to_it/
Frequent_Tea8607
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1laz0m4
false
null
t3_1laz0m4
/r/LocalLLaMA/comments/1laz0m4/how_to_break_the_machine_by_talking_to_it/
false
false
self
1
null
Watch out for fakes, yesterday it was on LocalLlama
1
2025-06-14T03:37:49
https://i.redd.it/pqsi7w2oct6f1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1laz3vj
false
null
t3_1laz3vj
/r/LocalLLaMA/comments/1laz3vj/watch_out_for_fakes_yesterday_it_was_on_localllama/
false
false
https://external-preview…dcfe91b99a6aedf7
1
{'enabled': True, 'images': [{'id': 'BV3m6JwpDzD7O0VJ8ZULiK7Q86Bdvkw_Z_NQwksluD0', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?width=108&crop=smart&auto=webp&s=3802f2a0209fb8b94a515fa8b8d413f09b9c5799', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?width=216&crop=smart&auto=webp&s=9907d68f7e878af787a7543ef90929b728115864', 'width': 216}, {'height': 282, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?width=320&crop=smart&auto=webp&s=97a6bb1ba35bfb086b77affbbbc743443720acda', 'width': 320}, {'height': 564, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?width=640&crop=smart&auto=webp&s=1307a3d18b726a094ed46c7e6123d50d1e13d6c8', 'width': 640}, {'height': 846, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?width=960&crop=smart&auto=webp&s=a01e6b6c41df16033c500a52773183aaeb16ac28', 'width': 960}, {'height': 952, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?width=1080&crop=smart&auto=webp&s=366f74625d1f855a8208246e002bde21f8a25677', 'width': 1080}], 'source': {'height': 952, 'url': 'https://preview.redd.it/pqsi7w2oct6f1.png?auto=webp&s=4a5cb13fe030ba37a24594c63a2b03d08e572613', 'width': 1080}, 'variants': {}}]}
Need help setting up ollama and open webui on external hard drive
1
[removed]
2025-06-14T03:38:39
https://www.reddit.com/r/LocalLLaMA/comments/1laz4fw/need_help_setting_up_ollama_and_open_webui_on/
inthehazardsuit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1laz4fw
false
null
t3_1laz4fw
/r/LocalLLaMA/comments/1laz4fw/need_help_setting_up_ollama_and_open_webui_on/
false
false
self
1
null
Huggingface model to Roast people
0
Hi, so I decided to make something like an Anime/Movie Wrapped and would like to explore roasting people based on their genre stats. But I'm having a problem feeding the results and percentages to an LLM so it can roast them. If someone knows a model suited to this, do let me know. I'm running this project on Google Colab.
2025-06-14T05:03:30
https://www.reddit.com/r/LocalLLaMA/comments/1lb0m7e/huggingface_model_to_roast_people/
FastCommission2913
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb0m7e
false
null
t3_1lb0m7e
/r/LocalLLaMA/comments/1lb0m7e/huggingface_model_to_roast_people/
false
false
self
0
null
What are your go-to small (Can run on 8gb vram) models for Companion/Roleplay settings?
1
[removed]
2025-06-14T05:38:45
https://www.reddit.com/r/LocalLLaMA/comments/1lb16ti/what_are_your_goto_small_can_run_on_8gb_vram/
ItMeansEscape
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb16ti
false
null
t3_1lb16ti
/r/LocalLLaMA/comments/1lb16ti/what_are_your_goto_small_can_run_on_8gb_vram/
false
false
self
1
null
Guidance Needed: Qwen 3 Embeddings + Reranker Workflow
1
[removed]
2025-06-14T05:55:06
https://www.reddit.com/r/LocalLLaMA/comments/1lb1g3j/guidance_needed_qwen_3_embeddings_reranker/
Pale-Box-3470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb1g3j
false
null
t3_1lb1g3j
/r/LocalLLaMA/comments/1lb1g3j/guidance_needed_qwen_3_embeddings_reranker/
false
false
self
1
null
Guidance Needed: Qwen 3 Embeddings + Reranker Workflow
1
[removed]
2025-06-14T05:56:08
https://www.reddit.com/r/LocalLLaMA/comments/1lb1gpa/guidance_needed_qwen_3_embeddings_reranker/
Pale-Box-3470
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb1gpa
false
null
t3_1lb1gpa
/r/LocalLLaMA/comments/1lb1gpa/guidance_needed_qwen_3_embeddings_reranker/
false
false
self
1
null
Open Source Unsiloed AI Chunker (EF2024)
48
Hey, Unsiloed CTO here! Unsiloed AI (EF 2024) is backed by Transpose Platform & EF and is currently used by teams at Fortune 100 companies and multiple Series E+ startups for ingesting multimodal data in the form of PDFs, Excel, PPTs, etc. We have now finally open-sourced some of the capabilities. Do give it a try! We are also inviting skilled developers to contribute to bounties of up to $500 on Algora; it's a great way to get noticed for the job openings at Unsiloed. Bounty link: [https://algora.io/bounties](https://algora.io/bounties) GitHub link: [https://github.com/Unsiloed-AI/Unsiloed-chunker](https://github.com/Unsiloed-AI/Unsiloed-chunker)
2025-06-14T06:21:25
https://www.reddit.com/r/LocalLLaMA/comments/1lb1v8h/open_source_unsiloed_ai_chunker_ef2024/
Initial-Western-4438
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb1v8h
false
null
t3_1lb1v8h
/r/LocalLLaMA/comments/1lb1v8h/open_source_unsiloed_ai_chunker_ef2024/
false
false
self
48
{'enabled': False, 'images': [{'id': 'uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?width=108&crop=smart&auto=webp&s=e26462b5bb0cd8c94cd4a7ea79b999ab38e18ce7', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?width=216&crop=smart&auto=webp&s=d104906f780db2cf628b99bea1af40f15ab256ab', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?width=320&crop=smart&auto=webp&s=f511df7d73ce175411afca7549b653b2756ea0f1', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?width=640&crop=smart&auto=webp&s=fef7b2209d065257a8e064db556e49984d756288', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?width=960&crop=smart&auto=webp&s=5806232cd283824e819f03653449613e5f6b2f9e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?width=1080&crop=smart&auto=webp&s=244050dd4ed4243eaa02c68303fbaae76b878cc3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/uALn799UGi-5IbxAND9p3F8HqtPplWkgdjBxok9qMIU.png?auto=webp&s=3db9017ad39cab3ab8c872f60d87c10400d39ef9', 'width': 1200}, 'variants': {}}]}
Open Source Unsiloed AI Chunker (EF2024)
1
Hey, Unsiloed CTO here! Unsiloed AI (EF 2024) is backed by Transpose Platform & EF and is currently used by teams at Fortune 100 companies and multiple Series E+ startups for ingesting multimodal data in the form of PDFs, Excel, PPTs, etc. We have now finally open-sourced some of the capabilities. Do give it a try! We are also inviting skilled developers to contribute to bounties of up to $500 on Algora; it's a great way to get noticed for the job openings at Unsiloed. Bounty link: [https://algora.io/bounties](https://algora.io/bounties) GitHub link: [https://github.com/Unsiloed-AI/Unsiloed-chunker](https://github.com/Unsiloed-AI/Unsiloed-chunker)
2025-06-14T06:40:01
https://www.reddit.com/r/LocalLLaMA/comments/1lb25f9/open_source_unsiloed_ai_chunker_ef2024/
Grand_Coconut_9739
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb25f9
false
null
t3_1lb25f9
/r/LocalLLaMA/comments/1lb25f9/open_source_unsiloed_ai_chunker_ef2024/
false
false
self
1
null
How do you provide files?
6
Out of curiosity, I was wondering how people tend to provide files to their AI when coding. I can't tell if I've completely overcomplicated how I give models context or if I actually created a solid solution. If anyone has input on how they best handle sending files via API (not using Claude or ChatGPT projects), I'd love to know what you do. I can share what I ended up making, but I don't want to come off as advertising/pushing my solution, especially if I'm doing it all wrong anyway 🥲. So if you have time to explain, I'd really be interested in finding better ways to handle this annoyance! (A packing sketch follows this record.)
2025-06-14T06:47:56
https://www.reddit.com/r/LocalLLaMA/comments/1lb29r8/how_do_you_provide_files/
droopy227
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb29r8
false
null
t3_1lb29r8
/r/LocalLLaMA/comments/1lb29r8/how_do_you_provide_files/
false
false
self
6
null
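One common pattern for the question above, as a minimal sketch: concatenate the files with path headers into a single prompt and send it to any OpenAI-compatible endpoint. The server URL and file paths are placeholders, not a recommendation of a specific tool.

```python
# Sketch: pack source files into one prompt with path headers, then send to a
# local OpenAI-compatible server (llama.cpp's llama-server, LM Studio, etc.).
# URL and file paths are placeholders.
import pathlib
import requests

def pack_files(paths: list[str]) -> str:
    parts = []
    for p in paths:
        parts.append(f"### File: {p}\n{pathlib.Path(p).read_text()}\n### End: {p}")
    return "\n\n".join(parts)

context = pack_files(["src/main.py", "src/utils.py"])
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": context + "\n\nExplain what main.py does."},
    ]},
)
print(resp.json()["choices"][0]["message"]["content"])
```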
Are there any tools to create structured data from webpages?
14
I often find myself needing to pass a webpage to an LLM, mostly blog posts and forum posts. Is there a tool that can parse the page and produce a structured format for an LLM to consume? (A parsing sketch follows this record.)
2025-06-14T07:19:36
https://www.reddit.com/r/LocalLLaMA/comments/1lb2r2u/are_there_any_tools_to_create_structured_data/
birdsintheskies
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb2r2u
false
null
t3_1lb2r2u
/r/LocalLLaMA/comments/1lb2r2u/are_there_any_tools_to_create_structured_data/
false
false
self
14
null
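Dedicated extractors such as trafilatura exist for exactly this; as a dependency-light sketch of the idea, BeautifulSoup can strip page chrome and emit a small JSON document. The URL is a placeholder.

```python
# Sketch: strip a blog/forum page down to LLM-friendly structured text.
# Assumes the `requests` and `beautifulsoup4` packages; URL is a placeholder.
import json
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/blog-post", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Drop scripts, styles, and nav chrome that only waste context tokens.
for tag in soup(["script", "style", "nav", "header", "footer", "aside"]):
    tag.decompose()

doc = {
    "title": soup.title.get_text(strip=True) if soup.title else "",
    "headings": [h.get_text(strip=True) for h in soup.find_all(["h1", "h2", "h3"])],
    "text": " ".join(soup.get_text(separator=" ").split()),
}
print(json.dumps(doc, indent=2)[:500])
```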
Can anyone give me a local llm setup which analyses and gives feedback to improve my speaking ability
2
I am always afraid of public speaking and freeze up in my interviews. I ramble, can't structure my thoughts, and go off on random tangents whenever I speak. I believe practice makes me better, and I was thinking local models could help: record myself, use a speech-to-text model to produce a transcript, then use an LLM for feedback. This is what I am thinking: record audio in English -> Whisper -> transcript -> analyse the transcript with an LLM like Qwen3/Gemma3 (I have an old Mac M1 with 8GB, so I can't run models bigger than 8B Q4) -> give feedback. But will this setup pick up everything required for analysing speech, things like filler words, conciseness, pauses, etc.? I think the transcript alone won't capture everything, like pauses or where a sentence starts. (A sketch that recovers pause timings follows this record.) Not concerned about real-time analysis, since this is just for practice. Basically an open-source version of yoodli.ai
2025-06-14T08:37:18
https://www.reddit.com/r/LocalLLaMA/comments/1lb3wxq/can_anyone_give_me_a_local_llm_setup_which/
timedacorn369
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb3wxq
false
null
t3_1lb3wxq
/r/LocalLLaMA/comments/1lb3wxq/can_anyone_give_me_a_local_llm_setup_which/
false
false
self
2
null
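On the pause concern above: Whisper's segment timestamps carry timing that the raw text loses, so gaps between segments can stand in for pauses. A minimal sketch, assuming the `openai-whisper` package and an Ollama server on localhost; the model names, the audio filename, and the 0.7s pause threshold are illustrative.

```python
# Sketch: analyse a recorded practice session for filler words and pauses,
# then ask a local LLM for structured feedback.
import re
import requests
import whisper

FILLERS = {"um", "uh", "like", "you know", "basically", "actually"}

model = whisper.load_model("base")          # small enough for an 8 GB machine
result = model.transcribe("practice.wav")

# Pauses: gaps between consecutive segments approximate silence.
segments = result["segments"]
pauses = [
    round(b["start"] - a["end"], 2)
    for a, b in zip(segments, segments[1:])
    if b["start"] - a["end"] > 0.7           # threshold is a guess; tune it
]

words = re.findall(r"[a-z']+", result["text"].lower())
filler_count = sum(w in FILLERS for w in words)

prompt = (
    "You are a public-speaking coach. Here is a transcript with measured stats.\n"
    f"Filler words: {filler_count} of {len(words)} words.\n"
    f"Pauses over 0.7s (seconds): {pauses}\n"
    f"Transcript:\n{result['text']}\n"
    "Give feedback on structure, conciseness, and delivery."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen3:8b", "prompt": prompt, "stream": False},
)
print(resp.json()["response"])
```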
We need a distilled qwen 3 235b a22b model on DeepSeek R1
1
[removed]
2025-06-14T08:55:04
https://www.reddit.com/r/LocalLLaMA/comments/1lb46cq/we_need_a_distilled_qwen_3_235b_a22b_model_on/
EndLineTech03
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb46cq
false
null
t3_1lb46cq
/r/LocalLLaMA/comments/1lb46cq/we_need_a_distilled_qwen_3_235b_a22b_model_on/
false
false
self
1
null
Rookie question
0
Why is it that whenever you try to generate an image with correct lettering/wording, it spits out some random garbled mess? Just curious.
2025-06-14T08:59:42
https://www.reddit.com/r/LocalLLaMA/comments/1lb48oi/rookie_question/
Zmeiler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb48oi
false
null
t3_1lb48oi
/r/LocalLLaMA/comments/1lb48oi/rookie_question/
false
false
self
0
null
Frustrated trying to run MiniCPM-o 2.6 on RunPod
2
Hi, I'm trying to use MiniCPM-o 2.6 for a project that involves using the LLM to categorize frames from a video into certain categories. Naturally, the first step is to get MiniCPM running at all, and this is where I'm facing many problems.

At first I tried to get it working on my laptop, which has an RTX 3050 Ti 4GB GPU, and that did not work for obvious reasons. So I switched to RunPod and created an instance with an RTX A4000, the only GPU I can afford.

If I use the Hugging Face version and AutoModel.from_pretrained as per their sample code, I get errors like: AttributeError: 'Resampler' object has no attribute '_initialize_weights'

To fix it, I tried cloning their repository and using their custom classes, which led to several package conflicts (resolvable) but then new errors like: Some weights of OmniLMMForCausalLM were not initialized from the model checkpoint at openbmb/MiniCPM-o-2_6 and are newly initialized: ['embed_tokens.weight', ... What I understood was that none of the weights got loaded and I was left with an empty model. So I went back to the Hugging Face version.

At one point, AutoModel did work after I used Attention to offload some layers to CPU, and I was able to get a test output from the LLM. Emboldened by this, I tried their sample code to encode a video and get some chat output, but even after waiting 20 minutes all I could see was CPU activity between 30-100% and GPU memory stuck at 92% utilization.

I started over with a fresh RunPod A4000 instance and copied the sample code from Hugging Face, which brought me back to the Resampler error. I tried to follow the instructions from a .cn webpage linked in a "best practices" file that came with their GitHub repo, but it's for MiniCPM-V, and the vllm package and LLM class it told me to use did not work either.

I appreciate any advice as to what I can do next (a hedged loading sketch follows this record). Unfortunately, my professor is set on using MiniCPM only, so I need to get it working somehow.
2025-06-14T09:48:40
https://www.reddit.com/r/LocalLLaMA/comments/1lb4yb4/frustrated_trying_to_run_minicpmo_26_on_runpod/
i5_8300h
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb4yb4
false
null
t3_1lb4yb4
/r/LocalLLaMA/comments/1lb4yb4/frustrated_trying_to_run_minicpmo_26_on_runpod/
false
false
self
2
null
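For the record above: errors inside custom classes like Resampler on `trust_remote_code` models are often a transformers version mismatch, so pinning whatever version the model card specifies is the first thing to try. A minimal loading sketch under that assumption; the kwargs follow the usual Hugging Face pattern, not MiniCPM-specific documentation, so check the model card for the exact pins.

```python
# Sketch: the usual Hugging Face loading pattern for MiniCPM-o 2.6.
# Assumption: transformers is pinned to the version the model card specifies;
# custom-code models commonly break on mismatched versions.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-o-2_6"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,       # required: the repo ships custom classes
    attn_implementation="sdpa",   # avoids a flash-attn build on smaller GPUs
    torch_dtype=torch.bfloat16,   # fits a 16 GB A4000 far better than fp32
).eval().cuda()
```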
How to train GPT-SoVITS in a new language?
1
[removed]
2025-06-14T10:12:37
https://www.reddit.com/r/LocalLLaMA/comments/1lb5b54/how_to_train_gptsovits_in_a_new_language/
Inside_Letterhead
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb5b54
false
null
t3_1lb5b54
/r/LocalLLaMA/comments/1lb5b54/how_to_train_gptsovits_in_a_new_language/
false
false
self
1
null
Thoughts on hardware price optimisation for LLMs?
86
Graph related (GPT-4o with web search)
2025-06-14T10:43:36
https://i.redd.it/iauc7homgv6f1.png
GreenTreeAndBlueSky
i.redd.it
1970-01-01T00:00:00
0
{}
1lb5rm2
false
null
t3_1lb5rm2
/r/LocalLLaMA/comments/1lb5rm2/thoughts_on_hardware_price_optimisarion_for_llms/
false
false
default
86
{'enabled': True, 'images': [{'id': 'iauc7homgv6f1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=108&crop=smart&auto=webp&s=6c1c4341c98a71be6c2d4b27714aca4b3e8a613c', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=216&crop=smart&auto=webp&s=b531ca36c8b2177539151ec9ce275f00d475c8ab', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=320&crop=smart&auto=webp&s=2f3d9147e3e9f2898db075f970ddc2fc075d9e86', 'width': 320}, {'height': 383, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=640&crop=smart&auto=webp&s=b99a0e6ffa012285066fa9e8761a8662f56ac51a', 'width': 640}, {'height': 575, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=960&crop=smart&auto=webp&s=a79f9eb247e5164a1e05241c89f6821dd812f10f', 'width': 960}, {'height': 646, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?width=1080&crop=smart&auto=webp&s=d282d97333ce0d849e88229196e5faf5987e1345', 'width': 1080}], 'source': {'height': 1180, 'url': 'https://preview.redd.it/iauc7homgv6f1.png?auto=webp&s=a7ff79fed09ecda403b22d38c113faed1adb085e', 'width': 1970}, 'variants': {}}]}
LLM Leaderboard by VRAM Size
1
[removed]
2025-06-14T11:28:49
https://www.reddit.com/r/LocalLLaMA/comments/1lb6hjg/llm_leaderboard_by_vram_size/
djdeniro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb6hjg
false
null
t3_1lb6hjg
/r/LocalLLaMA/comments/1lb6hjg/llm_leaderboard_by_vram_size/
false
false
self
1
null
Gemma function calling problems
1
[removed]
2025-06-14T11:36:49
https://www.reddit.com/r/LocalLLaMA/comments/1lb6mbw/gemma_function_calling_problems/
Life_Bag_7583
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb6mbw
false
null
t3_1lb6mbw
/r/LocalLLaMA/comments/1lb6mbw/gemma_function_calling_problems/
false
false
self
1
null
Gemma function calling
1
[removed]
2025-06-14T11:42:08
https://www.reddit.com/r/LocalLLaMA/comments/1lb6phn/gemma_function_calling/
Life_Bag_7583
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb6phn
false
null
t3_1lb6phn
/r/LocalLLaMA/comments/1lb6phn/gemma_function_calling/
false
false
self
1
null
Gemma function calling
1
[removed]
2025-06-14T11:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1lb6qoc/gemma_function_calling/
Life_Bag_7583
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb6qoc
false
null
t3_1lb6qoc
/r/LocalLLaMA/comments/1lb6qoc/gemma_function_calling/
false
false
self
1
null
RTX 6000 Ada or a 4090?
0
Hello, I'm working on a project where I'm looking at around 150-200 tps across a batch of 4 such processes running in parallel, text-based, no images or anything. Right now I don't have any GPUs. I can get an RTX 6000 Ada for around $1850 and a 4090 for around the same price (maybe a couple hundred dollars higher). I'm also a gamer and will be selling my PS5, PSVR2, and my MacBook to fund this purchase. The 6000 says "RTX 6000" on the card in one of the images uploaded by the seller, but he hasn't mentioned Ada or anything, so I'm assuming it's an Ada and not an A6000 (I will verify manually at the time of purchase). The 48GB is lucrative, but the 4090 still attracts me because of the gaming part. Please help me with your opinions. My priorities, from most to least important, are inference speed, trainability/fine-tuning, and gaming. Thanks
2025-06-14T12:13:35
https://www.reddit.com/r/LocalLLaMA/comments/1lb79sg/rtx_6000_ada_or_a_4090/
This_Woodpecker_9163
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb79sg
false
null
t3_1lb79sg
/r/LocalLLaMA/comments/1lb79sg/rtx_6000_ada_or_a_4090/
false
false
self
0
null
Can you get your local LLM to run the code it suggests?
0
A feature of Gemini 2.5 on AI Studio that I love is that you can get it to run the code it suggests. It will then automatically correct errors it finds, or fix the code if the output doesn't match what it was expecting. This is a really powerful and useful feature. Is it possible to do the same with a local model? (A self-correction loop sketch follows this record.)
2025-06-14T12:24:38
https://www.reddit.com/r/LocalLLaMA/comments/1lb7h7z/can_you_get_your_local_llm_to_run_the_code_it/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb7h7z
false
null
t3_1lb7h7z
/r/LocalLLaMA/comments/1lb7h7z/can_you_get_your_local_llm_to_run_the_code_it/
false
false
self
0
null
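The loop itself is simple to reproduce locally: extract the code block from the reply, run it, and feed any error back. A minimal sketch, assuming an OpenAI-compatible server (llama.cpp's llama-server, LM Studio, etc.) on a placeholder port; the task prompt is illustrative.

```python
# Sketch of a "run what you suggest" loop against a local model.
# NOTE: this executes model-written code unsandboxed; use a container or VM
# for anything serious.
import re
import subprocess
import requests

URL = "http://localhost:8080/v1/chat/completions"
history = [{"role": "user", "content":
            "Write a Python script that prints the first 10 primes."}]

for attempt in range(3):
    reply = requests.post(URL, json={"messages": history}).json()
    text = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": text})

    match = re.search(r"```(?:python)?\n(.*?)```", text, re.DOTALL)
    if not match:
        break
    run = subprocess.run(["python", "-c", match.group(1)],
                         capture_output=True, text=True, timeout=30)
    if run.returncode == 0:
        print(run.stdout)
        break
    # Feed the error back so the model can fix its own code.
    history.append({"role": "user",
                    "content": f"That code failed:\n{run.stderr}\nPlease fix it."})
```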
Learning material on how to use LLM at work for senior engineer
1
[removed]
2025-06-14T12:34:27
https://www.reddit.com/r/LocalLLaMA/comments/1lb7nvk/learning_material_on_how_to_use_llm_at_work_for/
gyzerok
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb7nvk
false
null
t3_1lb7nvk
/r/LocalLLaMA/comments/1lb7nvk/learning_material_on_how_to_use_llm_at_work_for/
false
false
self
1
null
Local LLM Memorization – A fully local memory system for long-term recall and visualization
1
[removed]
2025-06-14T13:41:17
https://www.reddit.com/r/LocalLLaMA/comments/1lb8z80/local_llm_memorization_a_fully_local_memory/
Vicouille6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb8z80
false
null
t3_1lb8z80
/r/LocalLLaMA/comments/1lb8z80/local_llm_memorization_a_fully_local_memory/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=108&crop=smart&auto=webp&s=987378a5dd1be1ed5f17eface636fa84d7344324', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=216&crop=smart&auto=webp&s=415c66fbbd0300129b3ae9f422e32f75ba800271', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=320&crop=smart&auto=webp&s=1945f96e2051dec4b2a6a954cb6d9772ebed9cc2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=640&crop=smart&auto=webp&s=402e5e26e58f6436b17797691c45e037bfe95164', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=960&crop=smart&auto=webp&s=1c013e3030a2cac1d5ab09e38bcf78f3a5830bd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=1080&crop=smart&auto=webp&s=6cb3d498dca75ef72f5dee71fa572f361cc0ddae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?auto=webp&s=98e24721dc75640445e7a90b902e6e0d6ba85790', 'width': 1200}, 'variants': {}}]}
Local LLM Memorization – A fully local memory system for long-term recall and visualization
1
[removed]
2025-06-14T13:43:04
https://i.redd.it/j6qv22hmcw6f1.png
Vicouille6
i.redd.it
1970-01-01T00:00:00
0
{}
1lb90jw
false
null
t3_1lb90jw
/r/LocalLLaMA/comments/1lb90jw/local_llm_memorization_a_fully_local_memory/
false
false
https://external-preview…ed182a56510ac135
1
{'enabled': True, 'images': [{'id': 'CQxJDnALw36Ew4zqQt-qXzzQB-kyNgiXNCM7Fx0w1UA', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?width=108&crop=smart&auto=webp&s=2d7cb2c7b7b5b3a8cccc22bb63be846d3df2dcd9', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?width=216&crop=smart&auto=webp&s=4bd40ef3430956cf687a0b1249af21432c4961d3', 'width': 216}, {'height': 182, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?width=320&crop=smart&auto=webp&s=45a56f93f1719635b2785402544d3c418280f080', 'width': 320}, {'height': 364, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?width=640&crop=smart&auto=webp&s=f202191aeef93a6053765f062b00a415b5bb6f7d', 'width': 640}, {'height': 546, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?width=960&crop=smart&auto=webp&s=1aeef089a78230b636a2dd2585041ca08c648885', 'width': 960}, {'height': 615, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?width=1080&crop=smart&auto=webp&s=3c9f12dc9f604ba156e0999163abb67c7595a620', 'width': 1080}], 'source': {'height': 956, 'url': 'https://preview.redd.it/j6qv22hmcw6f1.png?auto=webp&s=1438b6911e8e3ed251d642ed7171d9ffd3cc7879', 'width': 1678}, 'variants': {}}]}
Zonos TTS is not stable. How to fix?
1
[removed]
2025-06-14T13:45:29
https://www.reddit.com/r/LocalLLaMA/comments/1lb92et/zonos_tts_is_not_stable_how_to_fix/
TheRealistDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb92et
false
null
t3_1lb92et
/r/LocalLLaMA/comments/1lb92et/zonos_tts_is_not_stable_how_to_fix/
false
false
self
1
{'enabled': False, 'images': [{'id': 'kUgDF3b8nhmaAtFQ-IwZpeKaYz08TA8prMsUn44V-9Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?width=108&crop=smart&auto=webp&s=2f1c029c2ed98c8c192ce8a6f1eea9c68c64b9d7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?width=216&crop=smart&auto=webp&s=ebf51f3aac38a3fdb30db112cc5b5841c0f4c83c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?width=320&crop=smart&auto=webp&s=196aa2668e3ac846500dad873e6a1bd1bad234a3', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?width=640&crop=smart&auto=webp&s=9a0b7330d66eb44b5eb4bd2467c36324f36664af', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?width=960&crop=smart&auto=webp&s=32330ae83c582edba935192cb4d1b69c56c12b1b', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?width=1080&crop=smart&auto=webp&s=fe544e41fdf4f05d80a496ebee7f101c2e077edc', 'width': 1080}], 'source': {'height': 1428, 'url': 'https://external-preview.redd.it/93hsA-zVfOTHoWTGP6jiJDMZJGNMEN7o_g0tDG05gDw.jpg?auto=webp&s=93a5b35f80274777f374cd8309ba1fbf6e54b219', 'width': 1904}, 'variants': {}}]}
Local LLM Memorization – A fully local memory system for long-term recall and visualization
1
[removed]
2025-06-14T13:45:32
https://www.reddit.com/r/LocalLLaMA/comments/1lb92g3/local_llm_memorization_a_fully_local_memory/
Vicouille6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb92g3
false
null
t3_1lb92g3
/r/LocalLLaMA/comments/1lb92g3/local_llm_memorization_a_fully_local_memory/
false
false
self
1
{'enabled': False, 'images': [{'id': 'PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=108&crop=smart&auto=webp&s=987378a5dd1be1ed5f17eface636fa84d7344324', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=216&crop=smart&auto=webp&s=415c66fbbd0300129b3ae9f422e32f75ba800271', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=320&crop=smart&auto=webp&s=1945f96e2051dec4b2a6a954cb6d9772ebed9cc2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=640&crop=smart&auto=webp&s=402e5e26e58f6436b17797691c45e037bfe95164', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=960&crop=smart&auto=webp&s=1c013e3030a2cac1d5ab09e38bcf78f3a5830bd4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?width=1080&crop=smart&auto=webp&s=6cb3d498dca75ef72f5dee71fa572f361cc0ddae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PCJY0mJYBHpgqvsehw-QNpAIsWjV5AaL2WKkeFuLGWw.png?auto=webp&s=98e24721dc75640445e7a90b902e6e0d6ba85790', 'width': 1200}, 'variants': {}}]}
Is it normal for RAG to take this long to load the first time?
13
I'm using https://github.com/AllAboutAI-YT/easy-local-rag with the default dolphin-llama3 model and a 500MB vault.txt file. It's been loading for an hour and a half with my GPU at full utilization, but it's still going. Is it normal that it takes this long, and more importantly, is it going to take this long every time? (A caching sketch follows this record.) Specs: RTX 4060 Ti 8GB, Intel i5-13400F, 16GB DDR5
2025-06-14T14:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1lb9jqc/is_it_normal_for_rag_to_take_this_long_to_load/
just_a_guy1008
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb9jqc
false
null
t3_1lb9jqc
/r/LocalLLaMA/comments/1lb9jqc/is_it_normal_for_rag_to_take_this_long_to_load/
false
false
self
13
{'enabled': False, 'images': [{'id': 'MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?width=108&crop=smart&auto=webp&s=79410ae21a1f2e00f6cc762de24cf225285a8efc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?width=216&crop=smart&auto=webp&s=1c84afce2cbc5f171b89a8409304bde7af2318cd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?width=320&crop=smart&auto=webp&s=ca40f8dd65bb18c69bc283e48df00508f7d0b38c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?width=640&crop=smart&auto=webp&s=05382172ce37bd2e573bbf815c73f7e3ad0cfe0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?width=960&crop=smart&auto=webp&s=2cb142470a5fc7258fc4fae2d642233d4c1fec9b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?width=1080&crop=smart&auto=webp&s=5cb3c52bd1345d141d558e4bf60b23552b0681f8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MtETuncf1NHBxZHwfHUwt4hPqPHi8mYQWblxUfruYUc.png?auto=webp&s=ed992b1d1d936234f87aba4b4e1b3547c9e28e14', 'width': 1200}, 'variants': {}}]}
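On the "every time?" question above: the slow part is embedding the 500MB vault, and that only has to happen once if the vectors are cached to disk. A minimal sketch, assuming an Ollama server with an embedding model pulled; the model name and file paths are placeholders, and the whole-file read assumes the vault fits in RAM.

```python
# Sketch: embed the vault once and cache to disk so reruns load instantly.
import os
import numpy as np
import requests

def embed(text: str) -> list[float]:
    r = requests.post("http://localhost:11434/api/embeddings",
                      json={"model": "mxbai-embed-large", "prompt": text})
    return r.json()["embedding"]

chunks = [c.strip() for c in open("vault.txt").read().split("\n\n") if c.strip()]

if os.path.exists("vault_embeddings.npy"):
    vectors = np.load("vault_embeddings.npy")       # fast path on every rerun
else:
    vectors = np.array([embed(c) for c in chunks])  # slow, but only once
    np.save("vault_embeddings.npy", vectors)
```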
Help
1
[removed]
2025-06-14T14:09:58
https://www.reddit.com/r/LocalLLaMA/comments/1lb9l7y/help/
Competitive-Sky-9818
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb9l7y
false
null
t3_1lb9l7y
/r/LocalLLaMA/comments/1lb9l7y/help/
false
false
self
1
null
Trying to install llama 4 scout & maverick locally; keep getting errors
0
I've gotten as far as installing Python and pip, and it spits out an error about being unable to install build dependencies. I've already filled out the form, selected the models, and accepted the terms of use. I went to the email that is supposed to give you a GitHub link authorizing your download. Tried it again, nothing. Tried installing other dependencies. I'm really at my wits' end here. Any advice would be greatly appreciated.
2025-06-14T14:15:34
https://www.reddit.com/r/LocalLLaMA/comments/1lb9ppy/trying_to_install_llama_4_scout_maverick_locally/
Zmeiler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb9ppy
false
null
t3_1lb9ppy
/r/LocalLLaMA/comments/1lb9ppy/trying_to_install_llama_4_scout_maverick_locally/
false
false
self
0
null
[Discussion] Thinking Without Words: Continuous latent reasoning for local LLaMA inference – feedback?
1
[removed]
2025-06-14T14:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1lb9w0e/discussion_thinking_without_words_continuous/
BeowulfBR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb9w0e
false
null
t3_1lb9w0e
/r/LocalLLaMA/comments/1lb9w0e/discussion_thinking_without_words_continuous/
false
false
self
1
null
GAIA: New Gemma3 4B for Brazilian Portuguese / Um Gemma3 4B para Português do Brasil!
37
**[EN]** Introducing **GAIA (Gemma-3-Gaia-PT-BR-4b-it)**, our new open language model, developed and optimized for **Brazilian Portuguese!**

**What does GAIA offer?**

* **PT-BR Focus:** Continuously pre-trained on 13 BILLION high-quality Brazilian Portuguese tokens.
* **Base Model:** google/gemma-3-4b-pt (Gemma 3 with 4B parameters).
* **Innovative Approach:** Uses a "weight merging" technique for instruction following (no traditional SFT needed!).
* **Performance:** Outperformed the base Gemma model on the ENEM 2024 benchmark!
* **Developed by:** A partnership between Brazilian entities (ABRIA, CEIA-UFG, Nama, Amadeus AI) and Google DeepMind.
* **License:** Gemma.

**What is it for?** Great for chat, Q&A, summarization, text generation, and as a base model for fine-tuning in PT-BR.

**[PT-BR]** Apresentamos o **GAIA (Gemma-3-Gaia-PT-BR-4b-it)**, nosso novo modelo de linguagem aberto, feito e otimizado para o **Português do Brasil!**

**O que o GAIA traz?**

* **Foco no PT-BR:** Treinado em 13 BILHÕES de tokens de dados brasileiros de alta qualidade.
* **Base:** google/gemma-3-4b-pt (Gemma 3 de 4B de parâmetros).
* **Inovador:** Usa uma técnica de "fusão de pesos" para seguir instruções (dispensa SFT tradicional!).
* **Resultados:** Superou o Gemma base no benchmark ENEM 2024!
* **Quem fez:** Parceria entre entidades brasileiras (ABRAIA, CEIA-UFG, Nama, Amadeus AI) e Google DeepMind.
* **Licença:** Gemma.

**Para que usar?** Ótimo para chat, perguntas/respostas, resumo, criação de textos e como base para fine-tuning em PT-BR.

**Hugging Face:** [https://huggingface.co/CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it](https://huggingface.co/CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it)

**Paper:** [https://arxiv.org/pdf/2410.10739](https://arxiv.org/pdf/2410.10739)
2025-06-14T14:27:45
https://www.reddit.com/r/LocalLLaMA/comments/1lb9zhl/gaia_new_gemma3_4b_for_brazilian_portuguese_um/
ffgnetto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lb9zhl
false
null
t3_1lb9zhl
/r/LocalLLaMA/comments/1lb9zhl/gaia_new_gemma3_4b_for_brazilian_portuguese_um/
false
false
self
37
null
Any Model can Reason: ITRS - Iterative Transparent Reasoning Systems
1
[removed]
2025-06-14T14:35:43
https://www.reddit.com/r/LocalLLaMA/comments/1lba5u5/any_model_can_reason_itrs_iterative_transparent/
thomheinrich
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lba5u5
false
null
t3_1lba5u5
/r/LocalLLaMA/comments/1lba5u5/any_model_can_reason_itrs_iterative_transparent/
false
false
self
1
null
[Discussion] Thinking Without Words: Continuous latent reasoning for local LLaMA inference – feedback?
7
Hi everyone, I just published a new post, **"Thinking Without Words"**, where I survey the evolution of latent chain-of-thought reasoning—from STaR and Implicit CoT all the way to COCONUT and HCoT—and propose a novel **GRAIL-Transformer** architecture that adaptively gates between text and latent-space reasoning for efficient, interpretable inference.

**Key highlights:**

* **Historical survey:** STaR, Implicit CoT, pause/filler tokens, Quiet-STaR, COCONUT, CCoT, HCoT, Huginn, RELAY, ITT
* **Technical deep dive:**
  * Curriculum-guided latentisation
  * Hidden-state distillation & self-distillation
  * Compact latent tokens & latent memory lattices
  * Recurrent/loop-aligned supervision
* **GRAIL-Transformer proposal:**
  * **Recurrent-depth core** for on-demand reasoning cycles
  * **Learnable gating** between word embeddings and hidden states (an illustrative sketch follows this record)
  * **Latent memory lattice** for parallel hypothesis tracking
* **Training pipeline:** warm-up CoT → hybrid curriculum → GRPO fine-tuning → difficulty-aware refinement
* **Interpretability hooks:** scheduled reveals + sparse probes

I believe continuous latent reasoning can break the "language bottleneck," enabling gradient-based, parallel reasoning and emergent algorithmic behaviors that go beyond what discrete token CoT can achieve.

**Feedback I'm seeking:**

1. Clarity or gaps in the survey and deep dive
2. Viability, potential pitfalls, or engineering challenges of GRAIL-Transformer
3. Suggestions for experiments, benchmarks, or additional references

You can read the full post here: [https://www.luiscardoso.dev/blog/neuralese](https://www.luiscardoso.dev/blog/neuralese)

Thanks in advance for your time and insights!
2025-06-14T14:38:58
https://www.reddit.com/r/LocalLLaMA/comments/1lba8f6/discussion_thinking_without_words_continuous/
BeowulfBR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lba8f6
false
null
t3_1lba8f6
/r/LocalLLaMA/comments/1lba8f6/discussion_thinking_without_words_continuous/
false
false
self
7
null
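Not the author's code, but an illustrative PyTorch sketch of the "learnable gating between word embeddings and hidden states" idea from the post above, under the simplest possible reading: a sigmoid gate decides, per position, how much latent state to feed back in place of the token embedding.

```python
# Illustrative sketch: a learnable gate mixing a token's word embedding with
# the previous step's hidden state (the text/latent switching described above).
import torch
import torch.nn as nn

class LatentGate(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, 1)

    def forward(self, word_emb: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # g -> 1: feed the latent thought back in; g -> 0: stay in token space.
        g = torch.sigmoid(self.gate(torch.cat([word_emb, hidden], dim=-1)))
        return g * hidden + (1 - g) * word_emb

gate = LatentGate(d_model=512)
mixed = gate(torch.randn(1, 8, 512), torch.randn(1, 8, 512))
print(mixed.shape)  # torch.Size([1, 8, 512])
```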
Is there any model (local or in-app) that can detect defects in text?
0
The mission is to feed in an image and detect whether the text in the image is malformed or cut off (out of the image frame). Is there any model, local or commercial, that can do this effectively yet?
2025-06-14T14:46:29
https://www.reddit.com/r/LocalLLaMA/comments/1lbaedh/is_there_any_model_local_or_inapp_that_can_detect/
skarrrrrrr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbaedh
false
null
t3_1lbaedh
/r/LocalLLaMA/comments/1lbaedh/is_there_any_model_local_or_inapp_that_can_detect/
false
false
self
0
null
Llama.cpp
1
[removed]
2025-06-14T15:09:36
https://i.redd.it/22ymidb3sw6f1.jpeg
Puzzled-Yoghurt564
i.redd.it
1970-01-01T00:00:00
0
{}
1lbaxea
false
null
t3_1lbaxea
/r/LocalLLaMA/comments/1lbaxea/llamacpp/
false
false
https://external-preview…5544b45f0b0eac5b
1
{'enabled': True, 'images': [{'id': '6RWfqrooZlDuxHikofJ4uRGP67NZxZ0xa0LfJijuXB0', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?width=108&crop=smart&auto=webp&s=6a55191676e5429be5f93b0ac45cd86f8193d983', 'width': 108}, {'height': 262, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?width=216&crop=smart&auto=webp&s=8437f45140e46f5a8d4fef463824868e1da392e1', 'width': 216}, {'height': 389, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?width=320&crop=smart&auto=webp&s=e8d7fb4afb4b9916bdfc4b5b43ccfbb0fd11a954', 'width': 320}, {'height': 778, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?width=640&crop=smart&auto=webp&s=0ee9647405d867f7493aa732ad3cf6b33c43abe2', 'width': 640}, {'height': 1167, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?width=960&crop=smart&auto=webp&s=0f0f50bfcca8a5d6bf0d2f6835ca86a7bb6ce727', 'width': 960}, {'height': 1313, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?width=1080&crop=smart&auto=webp&s=d628606a17316d79673265cfee5b17b6f59885c7', 'width': 1080}], 'source': {'height': 1313, 'url': 'https://preview.redd.it/22ymidb3sw6f1.jpeg?auto=webp&s=2e5e68cbb7449be912c474c7eef6b4c885a87e97', 'width': 1080}, 'variants': {}}]}
Why local LLM?
130
I'm about to install Ollama and try a local LLM, but I'm wondering what's possible and what the benefits are apart from privacy and cost savings. My current memberships: Claude AI and Cursor AI.
2025-06-14T15:25:15
https://www.reddit.com/r/LocalLLaMA/comments/1lbbafh/why_local_llm/
Beginning_Many324
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbbafh
false
null
t3_1lbbafh
/r/LocalLLaMA/comments/1lbbafh/why_local_llm/
false
false
self
130
null
LLM Showdown: A Bigger Model with Harsh Quantization vs. a Smaller Model with Gentle Quantization?
1
[removed]
2025-06-14T15:49:58
https://i.redd.it/hcpge2gazw6f1.jpeg
RIP26770
i.redd.it
1970-01-01T00:00:00
0
{}
1lbbuwj
false
null
t3_1lbbuwj
/r/LocalLLaMA/comments/1lbbuwj/llm_showdown_a_bigger_model_with_harsh/
false
false
default
1
{'enabled': True, 'images': [{'id': 'hcpge2gazw6f1', 'resolutions': [{'height': 159, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=108&crop=smart&auto=webp&s=b9986b16b97b0c4a4560d5a7d5e3f8b13632d585', 'width': 108}, {'height': 318, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=216&crop=smart&auto=webp&s=c12948f43b92403f45ac8e69298979533ff685e0', 'width': 216}, {'height': 471, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=320&crop=smart&auto=webp&s=0902d30f92834243149231a85c56d1e97f247ccd', 'width': 320}, {'height': 942, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=640&crop=smart&auto=webp&s=40ae49daf5ab7b21f777a63de7f3ee12f09e73d3', 'width': 640}, {'height': 1413, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=960&crop=smart&auto=webp&s=b014c6e0eca2fb1f192213ab4a74e8d74711170e', 'width': 960}, {'height': 1590, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?width=1080&crop=smart&auto=webp&s=bb3b9a51fb8e744c21bb91e8b45ba32f3cfc4043', 'width': 1080}], 'source': {'height': 1670, 'url': 'https://preview.redd.it/hcpge2gazw6f1.jpeg?auto=webp&s=56ea716a1a3c9f5d53613900610e000af8bb290b', 'width': 1134}, 'variants': {}}]}
Local Memory Chat UI - Open Source + Vector Memory
14
Hey everyone, I created this project focused on CPU, which is why it runs on CPU by default. My aim was to be able to use a model locally on an old computer, with a system that "doesn't forget". Over the past few weeks, I've been building a lightweight yet powerful **LLM chat interface** using **llama-cpp-python**, but with a twist: it supports **persistent memory** with **vector-based context recall**, so the model can stay aware of past interactions *even if it's quantized and context-limited*. I wanted something minimal, local, and personal, but still able to remember things over time. Everything is in a clean structure, fully documented, and pip-installable. GitHub: [https://github.com/lynthera/bitsegments_localminds](https://github.com/lynthera/bitsegments_localminds) (README includes detailed setup) [Used Google Gemma-2-2B-IT (IQ3_M) model](https://preview.redd.it/5f5v6p5vyw6f1.png?width=1916&format=png&auto=webp&s=c9d8263d315a1a42dc5ecc916de38a6187789cc7) I will soon add Ollama support for easier use, so that people who don't want to deal with too many technical details, or who don't know much but still want to try it, can use it easily. For now, you need to download a model (in .gguf format) from Hugging Face and add it. Let me know what you think! I'm planning to build more agent-simulation capabilities next. Would love feedback, ideas, or contributions...
2025-06-14T15:52:23
https://www.reddit.com/r/LocalLLaMA/comments/1lbbwwm/local_memory_chat_ui_open_source_vector_memory/
Dismal-Cupcake-3641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbbwwm
false
null
t3_1lbbwwm
/r/LocalLLaMA/comments/1lbbwwm/local_memory_chat_ui_open_source_vector_memory/
false
false
https://b.thumbs.redditm…Gt70a64f_xgQ.jpg
14
{'enabled': False, 'images': [{'id': 'Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?width=108&crop=smart&auto=webp&s=04369223a39f9200229fb4927b0bd5e1b79a7341', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?width=216&crop=smart&auto=webp&s=5c7b104b46571c18a03fa88c9c0187daee0da3b6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?width=320&crop=smart&auto=webp&s=ea988477624adb1c044fe51d9c75f7908ec74a76', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?width=640&crop=smart&auto=webp&s=7c88f0b247ed94c19ca1f94df3d9dcc079604c58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?width=960&crop=smart&auto=webp&s=e94a88c2c302d8bd5ac94569e4b227852098c855', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?width=1080&crop=smart&auto=webp&s=08a796bcbd73cee2098777acf90241dc9efb2ac7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Dbj2ec4zS_Hoz85s5NEOmbvdQge4ZR0tvQ7ZbgK61jM.png?auto=webp&s=553937af040803714bfeff4b59de64887c6cb075', 'width': 1200}, 'variants': {}}]}
Help - llama.cpp server & reranking LLM
1
Can anybody suggest a reranker that works with llama.cpp's llama-server, and how to use it? I tried rank_zephyr_7b_v1 and Qwen3-Reranker-8B, but could not make either of them work. (See the note after this record.)

```
llama-server --model "H:\MaziyarPanahi\rank_zephyr_7b_v1_full-GGUF\rank_zephyr_7b_v1_full.Q8_0.gguf" --port 8084 --ctx-size 4096 --temp 0.0 --threads 24 --numa distribute --prio 2 --seed 42 --rerank

common_init_from_params: warning: vocab does not have a SEP token, reranking will not work
srv  load_model: failed to load model, 'H:\MaziyarPanahi\rank_zephyr_7b_v1_full-GGUF\rank_zephyr_7b_v1_full.Q8_0.gguf'
srv  operator(): operator(): cleaning up before exit...
main: exiting due to model loading error
```

```
llama-server --model "H:\DevQuasar\Qwen.Qwen3-Reranker-8B-GGUF\Qwen.Qwen3-Reranker-8B.f16.gguf" --port 8084 --ctx-size 4096 --temp 0.0 --threads 24 --numa distribute --prio 2 --seed 42 --rerank

common_init_from_params: warning: vocab does not have a SEP token, reranking will not work
srv  load_model: failed to load model, 'H:\DevQuasar\Qwen.Qwen3-Reranker-8B-GGUF\Qwen.Qwen3-Reranker-8B.f16.gguf'
srv  operator(): operator(): cleaning up before exit...
main: exiting due to model loading error
```
2025-06-14T16:00:24
https://www.reddit.com/r/LocalLLaMA/comments/1lbc3du/help_llamacppserver_rerankin_llm/
dodo13333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbc3du
false
null
t3_1lbc3du
/r/LocalLLaMA/comments/1lbc3du/help_llamacppserver_rerankin_llm/
false
false
self
1
null
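A note on the failure above: the warning line is the tell. llama-server's rerank path expects a cross-encoder-style model whose vocab includes a SEP token (BERT-family rerankers such as bge-reranker-v2-m3); rank_zephyr and Qwen3-Reranker are causal-LM rerankers, so the load aborts. As a hedged sketch, load a SEP-capable GGUF with something like `llama-server --model bge-reranker-v2-m3-Q8_0.gguf --port 8084 --rerank` (filename is a placeholder) and query it; the endpoint shape below follows recent llama.cpp builds.

```python
# Sketch: querying llama-server's rerank endpoint, assuming a recent llama.cpp
# build with a SEP-capable reranker GGUF loaded via --rerank; port is a
# placeholder.
import requests

resp = requests.post(
    "http://localhost:8084/v1/rerank",
    json={
        "query": "What is a panda?",
        "documents": [
            "The A4000 is a workstation GPU.",
            "The giant panda is a bear species endemic to China.",
        ],
    },
)
print(resp.json())  # results carry a relevance score per document
```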
How much VRAM do you have and what's your daily-driver model?
91
Curious what everyone is using day to day, locally, and what hardware they're using. If you're using a quantized version of a model please say so!
2025-06-14T16:14:55
https://www.reddit.com/r/LocalLLaMA/comments/1lbcfjz/how_much_vram_do_you_have_and_whats_your/
EmPips
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbcfjz
false
null
t3_1lbcfjz
/r/LocalLLaMA/comments/1lbcfjz/how_much_vram_do_you_have_and_whats_your/
false
false
self
91
null
Zonos is not consistent. How to fix?
1
[removed]
2025-06-14T16:20:54
[deleted]
1970-01-01T00:00:00
0
{}
1lbckeb
false
null
t3_1lbckeb
/r/LocalLLaMA/comments/1lbckeb/zonos_is_not_consistent_how_to_fix/
false
false
default
1
null
LLM finder
1
[removed]
2025-06-14T16:22:12
https://www.reddit.com/r/LocalLLaMA/comments/1lbclhw/llm_finder/
Stock-Writer-800
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbclhw
false
null
t3_1lbclhw
/r/LocalLLaMA/comments/1lbclhw/llm_finder/
false
false
self
1
null
What LLM is everyone using in June 2025?
146
Curious what everyone's running now.

* What model(s) are in your regular rotation?
* What hardware are you on?
* How are you running it? (LM Studio, Ollama, llama.cpp, etc.)
* What do you use it for?

Here's mine:

* Recently I've been using mostly Qwen3 (30B, 32B, and 235B)
* Ryzen 7 5800X, 128GB RAM, RTX 3090
* Ollama + Open WebUI
* Mostly general use and private conversations I'd rather not run on cloud platforms
2025-06-14T16:43:01
https://www.reddit.com/r/LocalLLaMA/comments/1lbd2jy/what_llm_is_everyone_using_in_june_2025/
1BlueSpork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbd2jy
false
null
t3_1lbd2jy
/r/LocalLLaMA/comments/1lbd2jy/what_llm_is_everyone_using_in_june_2025/
false
false
self
146
null
I've been working on my own local AI assistant with memory and emotional logic – wanted to share progress & get feedback
8
Inspired by ChatGPT, I started building my own local AI assistant called *VantaAI*. It's meant to run completely offline and simulates things like emotional memory, mood swings, and personal identity. I’ve implemented things like: * Long-term memory that evolves based on conversation context * A mood graph that tracks how her emotions shift over time * Narrative-driven memory clustering (she sees herself as the "main character" in her own story) * A PySide6 GUI that includes tabs for memory, training, emotional states, and plugin management Right now, it uses a custom Vulkan backend for fast model inference and training, and supports things like personality-based responses and live plugin hot-reloading. I’m not selling anything or trying to promote a product — just curious if anyone else is doing something like this or has ideas on what features to explore next. Happy to answer questions if anyone’s curious!
2025-06-14T16:51:19
https://www.reddit.com/r/LocalLLaMA/comments/1lbd9jc/ive_been_working_on_my_own_local_ai_assistant/
PianoSeparate8989
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbd9jc
false
null
t3_1lbd9jc
/r/LocalLLaMA/comments/1lbd9jc/ive_been_working_on_my_own_local_ai_assistant/
false
false
self
8
null
Running Llama 3 Locally: What’s Your Best Hardware Setup?
1
[removed]
2025-06-14T16:52:17
https://www.reddit.com/r/LocalLLaMA/comments/1lbdabv/running_llama_3_locally_whats_your_best_hardware/
amanverasia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbdabv
false
null
t3_1lbdabv
/r/LocalLLaMA/comments/1lbdabv/running_llama_3_locally_whats_your_best_hardware/
false
false
self
1
null
A challenge in time. No pressure.
2
Goal: create a vision model that interprets and generates 300 FPS. Resource constraints: 4GB RAM, 2.2GHz CPU, no GPU/TPU. Potential: film industry, security, self-sufficient agents, and finally light, highly scalable AGI agents on literally any tech from drones to spaceships. I was checking out the state-of-the-art commercially viable vision models out there, and all of them are super inconsistent even with super detailed prompts. What actually happens is credits or limits get drained. Resource requirements have skyrocketed.
2025-06-14T17:21:36
https://www.reddit.com/r/LocalLLaMA/comments/1lbdyyi/a_challenge_in_time_no_pressure/
Good-Helicopter3441
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbdyyi
false
null
t3_1lbdyyi
/r/LocalLLaMA/comments/1lbdyyi/a_challenge_in_time_no_pressure/
false
false
self
2
null
They removed this post? Weird!
1
[removed]
2025-06-14T17:31:13
https://www.reddit.com/r/LocalLLaMA/comments/1lbe6vf/they_removed_this_post_weird/
Good-Helicopter3441
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbe6vf
false
null
t3_1lbe6vf
/r/LocalLLaMA/comments/1lbe6vf/they_removed_this_post_weird/
false
false
self
1
null
Spam detection model/pipeline?
3
Hi! Does anyone know an open-source model/pipeline for spam detection? As far as I know there's a project called Detoxify, but it's for toxicity moderation (hate speech, etc.), not really spam detection. (A zero-shot sketch follows this record.)
2025-06-14T17:47:14
https://www.reddit.com/r/LocalLLaMA/comments/1lbek49/spam_detection_modelpipeline/
bihungba1101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbek49
false
null
t3_1lbek49
/r/LocalLLaMA/comments/1lbek49/spam_detection_modelpipeline/
false
false
self
3
null
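Beyond dedicated classifiers, one low-effort pipeline is zero-shot classification with a small local model. A sketch assuming an Ollama server; the model name is a placeholder, and a fine-tuned encoder classifier would be far cheaper at scale.

```python
# Sketch: zero-shot spam screening with a small local model via Ollama.
import requests

def is_spam(message: str) -> bool:
    prompt = (
        "Classify the message as SPAM or HAM. Reply with one word.\n"
        f"Message: {message}"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "qwen3:8b", "prompt": prompt, "stream": False},
    )
    return "SPAM" in r.json()["response"].upper()

print(is_spam("Congratulations!! You won a FREE iPhone, click here."))
```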
Augmentoolkit just got a major update - huge advance for dataset generation and fine-tuning
1
[removed]
2025-06-14T18:11:52
https://www.reddit.com/r/LocalLLaMA/comments/1lbf59b/augmentoolkit_just_got_a_major_update_huge/
mj3815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbf59b
false
null
t3_1lbf59b
/r/LocalLLaMA/comments/1lbf59b/augmentoolkit_just_got_a_major_update_huge/
false
false
self
1
null
26 Quants that fit on 32GB vs 10,000-token "Needle in a Haystack" test
206
| Model | Params (B) | Quantization | Results |
|:---|---:|:---:|:---|
| **Meta Llama Family** | | | |
| Llama 2 70 | 70 | q2 | failed |
| Llama 3.3 70 | 70 | iq3 | solved |
| Llama 3.3 70 | 70 | iq2 | solved |
| Llama 4 Scout | 100 | iq2 | failed |
| Llama 3.1 8 | 8 | q5 | failed |
| Llama 3.1 8 | 8 | q6 | solved |
| Llama 3.2 3 | 3 | q6 | failed |
| IBM Granite 3.3 | 8 | q5 | failed |
| **Mistral Family** | | | |
| Mistral Small 3.1 | 24 | iq4 | failed |
| Mistral Small 3 | 24 | q6 | failed |
| Deephermes-preview | 24 | q6 | failed |
| Magistral Small | 24 | q5 | solved |
| **Nvidia** | | | |
| Nemotron Super (nothink) | 49 | iq4 | solved |
| Nemotron Super (think) | 49 | iq4 | solved |
| **Google** | | | |
| Gemma3 12 | 12 | q5 | solved |
| Gemma3 27 | 27 | iq4 | failed |
| **Qwen Family** | | | |
| QwQ | 32 | q6 | failed |
| Qwen3 8b (nothink) | 8 | q5 | failed |
| Qwen3 8b (think) | 8 | q5 | failed |
| Qwen3 14 (think) | 14 | q5 | solved |
| Qwen3 14 (nothink) | 14 | q5 | solved |
| Qwen3 30 A3B (think) | 30 | iq4 | failed |
| Qwen3 30 A3B (nothink) | 30 | iq4 | solved |
| Qwen3 32 (think) | 32 | q5 | solved |
| Qwen3 32 (nothink) | 32 | q5 | solved |
| Deepseek-R1-0528-Distill-Qwen3-8b | 8 | q5 | failed |
| **Lambda Chat** | | | |
| Hermes 3.1 405 | 405 | fp8 | solved |
| Llama 4 Scout | 100 | fp8 | failed |
| Llama 4 Maverick | 400 | fp8 | solved |
| Nemotron 3.1 70 | 70 | fp8 | solved |
| Deepseek R1 0528 | 671 | fp8 | solved |
| Deepseek V3 0324 | 671 | fp8 | solved |
| R1-Distill-70 | 70 | fp8 | solved |
| Qwen3 32 (think) | 32 | fp8 | solved |
| Qwen3 32 (nothink) | 32 | fp8 | solved |
| Qwen2.5 Coder 32 | 32 | fp8 | solved |
2025-06-14T18:27:59
https://www.reddit.com/r/LocalLLaMA/comments/1lbfinu/26_quants_that_fit_on_32gb_vs_10000token_needle/
EmPips
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbfinu
false
null
t3_1lbfinu
/r/LocalLLaMA/comments/1lbfinu/26_quants_that_fit_on_32gb_vs_10000token_needle/
false
false
self
206
null
an offline voice assistant
1
[removed]
2025-06-14T18:31:52
https://www.reddit.com/r/LocalLLaMA/comments/1lbfm15/an_offline_voice_assistant/
ppzms
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbfm15
false
null
t3_1lbfm15
/r/LocalLLaMA/comments/1lbfm15/an_offline_voice_assistant/
false
false
self
1
null
Does anybody use https://petals.dev/?
3
I just discovered this and find it strange that nobody here mentions it. I mean... it is local, after all.
2025-06-14T18:49:07
https://www.reddit.com/r/LocalLLaMA/comments/1lbg06c/somebody_use_httpspetalsdev/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbg06c
false
null
t3_1lbg06c
/r/LocalLLaMA/comments/1lbg06c/somebody_use_httpspetalsdev/
false
false
self
3
null
AI voice chat/pdf reader desktop gtk app using ollama
14
Hello, I started building this application before solutions like ElevenReader were developed, but maybe someone will find it useful [https://github.com/kopecmaciej/fox-reader](https://github.com/kopecmaciej/fox-reader)
2025-06-14T18:56:21
https://v.redd.it/twm00j9htx6f1
Cieju04
v.redd.it
1970-01-01T00:00:00
0
{}
1lbg65e
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/twm00j9htx6f1/DASHPlaylist.mpd?a=1752519396%2CMDM0ODRkNGY0ZjI5OTQ1ZDJjNmFlMDcyZTFiZWVkM2VmM2FiY2FkNTE2NDRlZTZlZTZjNTFjMTJkZjUwMjk2Nw%3D%3D&v=1&f=sd', 'duration': 98, 'fallback_url': 'https://v.redd.it/twm00j9htx6f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/twm00j9htx6f1/HLSPlaylist.m3u8?a=1752519396%2CM2VkNDNkMWY2NWM0OGE4NDVkOGI2ZDU1NjQxMjY0MzA2MzgyOGYxMTU4NzZkZTJhNzJmMmVkNTUzNjgyYjc4Zg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/twm00j9htx6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1lbg65e
/r/LocalLLaMA/comments/1lbg65e/ai_voice_chatpdf_reader_desktop_gtk_app_using/
false
false
https://external-preview…e2d05e8bdd00d220
14
{'enabled': False, 'images': [{'id': 'eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?width=108&crop=smart&format=pjpg&auto=webp&s=cdd6d55089718b0d464dab8c937915ef4406ba49', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?width=216&crop=smart&format=pjpg&auto=webp&s=12f51a311d81ce5332cf859074794f6777bfdd73', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?width=320&crop=smart&format=pjpg&auto=webp&s=97c484c40ba0200ad4b4edb6555b72d65e8ec31d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?width=640&crop=smart&format=pjpg&auto=webp&s=9042c2229b46e4ed1be80cac048995d32eb3cc4d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?width=960&crop=smart&format=pjpg&auto=webp&s=7839f1d3ad1d0c3c1addf5f8410a0161b774a062', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8ec704a3952d29a41228270ab662133c5e949a65', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eTF5MGhqOWh0eDZmMd494dioRYcwT_yPqk9VRsVnX_KOCpsk-05w-pyPDfPD.png?format=pjpg&auto=webp&s=02c5cf09f62f9d7632a23c211e97d6845bcd6d5d', 'width': 1920}, 'variants': {}}]}
💡 Quick Tip for Newcomers to LLMs (Local Large Language Models):
1
[removed]
2025-06-14T19:00:31
https://www.reddit.com/r/LocalLLaMA/comments/1lbg9nx/quick_tip_for_newcomers_to_llms_local_large/
MixChance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbg9nx
false
null
t3_1lbg9nx
/r/LocalLLaMA/comments/1lbg9nx/quick_tip_for_newcomers_to_llms_local_large/
false
false
self
1
null
💡 Quick Tip for Newcomers to LLMs (Local Large Language Models)
1
[removed]
2025-06-14T19:03:27
https://www.reddit.com/r/LocalLLaMA/comments/1lbgc8y/quick_tip_for_newcomers_to_llms_local_large/
MixChance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbgc8y
false
null
t3_1lbgc8y
/r/LocalLLaMA/comments/1lbgc8y/quick_tip_for_newcomers_to_llms_local_large/
false
false
self
1
null
Comment on The Illusion of Thinking: the recent Apple paper contains glaring flaws in its experimental design, from not considering token limits to testing unsolvable puzzles
54
I have seen a lively discussion here on the recent Apple paper, which was quite interesting. While trying to read opinions on it, I found a recent comment on the paper: *Comment on The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity* - [https://arxiv.org/abs/2506.09250](https://arxiv.org/abs/2506.09250) It concludes that there were pretty glaring design flaws in the original study. IMO these two are the most important, as they really show the research was poorly thought out: **1. The "Reasoning Collapse" is Just a Token Limit.** The original paper's primary example, the Tower of Hanoi puzzle, requires an exponentially growing number of moves to list out the full solution. The "collapse" point they identified (e.g., N=8 disks) happens exactly when the text for the full solution exceeds the model's maximum output token limit (e.g., 64k tokens). **2. They Tested Models on Mathematically Impossible Puzzles.** This is the most damning point. For the River Crossing puzzle, the original study tested models on instances with 6 or more "actors" and a boat that could only hold 3. It is a well-established mathematical fact that this version of the puzzle is **unsolvable for more than 5 actors**. They also provide other rebuttals, and I encourage you to read the paper. I tried to find discussion of this, but I personally didn't find any; I could be mistaken. But considering how the original Apple paper was discussed here, and that I didn't see anyone pointing out these flaws, I just wanted to add to the discussion. There was also a rebuttal going around in the form of a Sean Goedecke blog post, but he criticized the paper in a different way and didn't touch on its technical issues. This could be somewhat confusing, as the title of the paper I posted is very similar to his blog post's, so maybe this paper just got lost in the discussion.
2025-06-14T19:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1lbgczn/comment_on_the_illusion_of_thinking_recent_paper/
Garpagan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbgczn
false
null
t3_1lbgczn
/r/LocalLLaMA/comments/1lbgczn/comment_on_the_illusion_of_thinking_recent_paper/
false
false
self
54
null
🚪 Dungeo AI WebUI – A Local Roleplay Frontend for LLM-based Dungeon Masters 🧙‍♂️✨
1
[removed]
2025-06-14T19:11:59
https://www.reddit.com/r/LocalLLaMA/comments/1lbgjhv/dungeo_ai_webui_a_local_roleplay_frontend_for/
Reasonable_Brief578
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbgjhv
false
null
t3_1lbgjhv
/r/LocalLLaMA/comments/1lbgjhv/dungeo_ai_webui_a_local_roleplay_frontend_for/
false
false
self
1
null
Massive performance gains from linux?
85
I've been using LM Studio for inference and I switched to Mint Linux because Windows is hell. My tokens per second went from 1-2 t/s to 7-8 t/s. Prompt eval went from 1 minute to 2 seconds.

Specs:

- 13700K
- Asus Maximus Hero Z790
- 64GB of DDR5
- 2TB Samsung Pro SSD
- 2x 3090 at 250W limit each on x8 PCIe lanes

Model: Unsloth Qwen3 235B Q2_K_XL, 45 layers on GPU.

Was wondering if this is normal? I was using a fresh Windows install so I'm not sure what the difference was.
2025-06-14T19:13:34
https://www.reddit.com/r/LocalLLaMA/comments/1lbgkuk/massive_performance_gains_from_linux/
Only_Situation_4713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbgkuk
false
null
t3_1lbgkuk
/r/LocalLLaMA/comments/1lbgkuk/massive_performance_gains_from_linux/
false
false
self
85
null
New tool: llama‑optimus –> Auto‑tune llama.cpp for max tokens/s
1
[removed]
2025-06-14T19:15:05
https://www.reddit.com/r/LocalLLaMA/comments/1lbgm4l/new_tool_llamaoptimus_autotune_llamacpp_for_max/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbgm4l
false
null
t3_1lbgm4l
/r/LocalLLaMA/comments/1lbgm4l/new_tool_llamaoptimus_autotune_llamacpp_for_max/
false
false
https://b.thumbs.redditm…GXstCb_iZolA.jpg
1
null
I'm *also* working on my own local AI "assistant" with memory and emotional logic. Looking for some ideas on how to improve memory. Check it out on github :3
1
[removed]
2025-06-14T19:44:16
https://www.reddit.com/r/LocalLLaMA/comments/1lbh9vw/im_also_working_on_my_own_local_ai_assistant_with/
flamingrickpat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbh9vw
false
null
t3_1lbh9vw
/r/LocalLLaMA/comments/1lbh9vw/im_also_working_on_my_own_local_ai_assistant_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?width=108&crop=smart&auto=webp&s=5c401ff5cbdb345669dc864b38a21079c0d4df02', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?width=216&crop=smart&auto=webp&s=79a216044fc63839c76cce77926ccac14383cb59', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?width=320&crop=smart&auto=webp&s=2a8be61688260dbf9715aa6081022bde8a1e2c38', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?width=640&crop=smart&auto=webp&s=551da16312a3cf5cd087a11d533dc12aacf9948e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?width=960&crop=smart&auto=webp&s=14b5cc1503d1b4a0583096cc13a6fda2d7d2fe91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?width=1080&crop=smart&auto=webp&s=97de2b1076f0a9fb6b4138c0fecb2669653c717c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ua4ybT_-lB0Q331S-V5vYcxIpSZTpMvOTPByZjOgDVg.png?auto=webp&s=c257c02b7b5b7c3e41842206be40f87daf90b086', 'width': 1200}, 'variants': {}}]}
subreddit meta
1
[removed]
2025-06-14T20:03:27
https://www.reddit.com/r/LocalLLaMA/comments/1lbhq4m/subreddit_meta/
freedom2adventure
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbhq4m
false
null
t3_1lbhq4m
/r/LocalLLaMA/comments/1lbhq4m/subreddit_meta/
false
false
self
1
null
Why do all my posts get auto-removed?
1
[removed]
2025-06-14T20:10:59
https://www.reddit.com/r/LocalLLaMA/comments/1lbhw3e/why_do_all_my_posts_get_autoremoved/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbhw3e
false
null
t3_1lbhw3e
/r/LocalLLaMA/comments/1lbhw3e/why_do_all_my_posts_get_autoremoved/
false
false
self
1
null
GPT 4 might already understand what you’re thinking and barely anyone noticed
1
[removed]
2025-06-14T20:12:08
https://www.reddit.com/r/LocalLLaMA/comments/1lbhwyj/gpt_4_might_already_understand_what_youre/
Visible-Property3453
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbhwyj
false
null
t3_1lbhwyj
/r/LocalLLaMA/comments/1lbhwyj/gpt_4_might_already_understand_what_youre/
false
false
self
1
null
Best Approach for Accurate Speaker Diarization
1
[removed]
2025-06-14T20:25:36
https://www.reddit.com/r/LocalLLaMA/comments/1lbi7n0/best_approach_for_accurate_speaker_diarization/
LongjumpingComb8622
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbi7n0
false
null
t3_1lbi7n0
/r/LocalLLaMA/comments/1lbi7n0/best_approach_for_accurate_speaker_diarization/
false
false
self
1
null
Best Approach for Accurate Speaker Diarization
1
[removed]
2025-06-14T20:27:38
https://www.reddit.com/r/LocalLLaMA/comments/1lbi9cj/best_approach_for_accurate_speaker_diarization/
LongjumpingComb8622
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbi9cj
false
null
t3_1lbi9cj
/r/LocalLLaMA/comments/1lbi9cj/best_approach_for_accurate_speaker_diarization/
false
false
self
1
null
Fine-tuning Diffusion Language Models - Help?
10
I have spent the last few days trying to fine-tune a diffusion language model for coding. I tried Dream, LLaDA, and SMDM, but got no Colab notebook working. I've got to admit, I don't know Python, which might be a reason. Has anyone had success? Or could anyone help me out?
2025-06-14T20:38:11
https://www.reddit.com/r/LocalLLaMA/comments/1lbii6m/finetuning_diffusion_language_models_help/
DunklerErpel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbii6m
false
null
t3_1lbii6m
/r/LocalLLaMA/comments/1lbii6m/finetuning_diffusion_language_models_help/
false
false
self
10
null
Mistral Small 3.1 vs Magistral Small - experience?
28
Hi all, I have used Mistral Small 3.1 in my dataset generation pipeline over the past couple of months. It does a better job than many larger LLMs in multiturn conversation generation, outperforming Qwen 3 30b and 32b, Gemma 27b, and GLM-4 (as well as others). My next go-to model is Nemotron Super 49B, but I can afford less context length at this size of model. I tried Mistral's new Magistral Small and I have found it to perform very similarly to Mistral Small 3.1, almost imperceptibly different. Wondering if anyone out there has put Magistral through their own tests and has any comparisons with Mistral Small's performance. Maybe there are some tricks you've found to coax more performance out of it?
2025-06-14T20:43:54
https://www.reddit.com/r/LocalLLaMA/comments/1lbimsz/mistral_small_31_vs_magistral_small_experience/
mj3815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbimsz
false
null
t3_1lbimsz
/r/LocalLLaMA/comments/1lbimsz/mistral_small_31_vs_magistral_small_experience/
false
false
self
28
null
Watching Robots having a conversation
5
Something I always wanted to do: have two or more different local LLM models hold a conversation, initiated by a user-supplied prompt. I initially wrote this as a Python script, but that quickly became not as interesting as a native app. Personally, I feel like we should aim at having things running on our computers, locally - as much as possible, native apps, etc. So here I am. With a macOS app. It's rough around the edges. It's simple. But it works. Feel free to suggest improvements, send patches, etc. I'll be honest, I got stuck a few times - haven't done much SwiftUI, but it was easy to get it sorted using LLMs and some googling. Have fun with it. I might do a YouTube video about it. It's still fascinating to me, watching two LLM models having a conversation! [https://github.com/greggjaskiewicz/RobotsMowingTheGrass](https://github.com/greggjaskiewicz/RobotsMowingTheGrass) Here are some screenshots. https://preview.redd.it/9s65bnruhy6f1.png?width=2460&format=png&auto=webp&s=515a775fe3d01a64b1b56c452f963f8659cc0e6a https://preview.redd.it/8mc26qruhy6f1.png?width=2516&format=png&auto=webp&s=92c0362c193fb38feae6621fd08733694dafe2a9 https://preview.redd.it/49nn8pruhy6f1.png?width=2544&format=png&auto=webp&s=085965a59d51d0b8893d2786cf87864685583f0a https://preview.redd.it/3fu6kpruhy6f1.png?width=2534&format=png&auto=webp&s=19b299834e551643ec3398525849d98366474aab
2025-06-14T20:56:57
https://www.reddit.com/r/LocalLLaMA/comments/1lbix9k/watching_robots_having_a_conversation/
sp1tfir3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbix9k
false
null
t3_1lbix9k
/r/LocalLLaMA/comments/1lbix9k/watching_robots_having_a_conversation/
false
false
https://b.thumbs.redditm…xN29RGXz7ceo.jpg
5
{'enabled': False, 'images': [{'id': 'A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?width=108&crop=smart&auto=webp&s=3fe634747f21319c47dc44653226fe7416242367', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?width=216&crop=smart&auto=webp&s=b0296c6f14be7b170ad0398173fe77ebe50f79b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?width=320&crop=smart&auto=webp&s=3a9a5e6b16243a13839669d96617b2be63c84b6a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?width=640&crop=smart&auto=webp&s=61538c79e29b00216e454bd472772981e6de6143', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?width=960&crop=smart&auto=webp&s=76cae1827d94d1b50f0ced5b15216bb34dc0308f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?width=1080&crop=smart&auto=webp&s=598051f36b6081356e214b2b87d61f4d9a88bf27', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/A0TBeBjPVqdei4JKNtZJKE6Rshnl0-zuXnJoBUYa_IU.png?auto=webp&s=2c2e7baea259807bf20263679f778a9b96ce71f9', 'width': 1200}, 'variants': {}}]}
How does everyone do Tool Calling?
63
I’ve begun to use tool calling so that I can make the LLMs I’m using do real work for me. I do all my LLM work in Python and was wondering if there are any libraries you recommend that make it all easy. I have just recently seen MCP, and I have been trying to add it manually through the OpenAI library, but that’s quite slow. Does anyone have any recommendations, like LangChain, LlamaIndex and such?
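Since the post mentions wiring this up by hand with the OpenAI library, here is a minimal sketch of one tool-calling round trip against a local OpenAI-compatible server (llama.cpp, vLLM, LM Studio, etc.); the endpoint URL and model name are placeholders, not real defaults:

```python
# One manual tool-calling round trip with the plain OpenAI client.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # local server

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for real work

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
resp = client.chat.completions.create(model="local-model", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:  # the model decided to call our function
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)
    # Feed the assistant turn and the tool result back for the final answer.
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    resp = client.chat.completions.create(model="local-model", messages=messages, tools=tools)

print(resp.choices[0].message.content)
```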
2025-06-14T21:12:00
https://www.reddit.com/r/LocalLLaMA/comments/1lbj978/how_does_everyone_do_tool_calling/
MKU64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbj978
false
null
t3_1lbj978
/r/LocalLLaMA/comments/1lbj978/how_does_everyone_do_tool_calling/
false
false
self
63
null
Vision Support for Magistral Small
1
[removed]
2025-06-14T21:54:05
https://huggingface.co/OptimusePrime/Magistral-Small-2506-Vision
Vivid_Dot_6405
huggingface.co
1970-01-01T00:00:00
0
{}
1lbk6la
false
null
t3_1lbk6la
/r/LocalLLaMA/comments/1lbk6la/vision_support_for_magistral_small/
false
false
default
1
null
Vision Support for Magistral Small
1
[removed]
2025-06-14T21:56:00
https://www.reddit.com/r/LocalLLaMA/comments/1lbk82e/vision_support_for_magistral_small/
Vivid_Dot_6405
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbk82e
false
null
t3_1lbk82e
/r/LocalLLaMA/comments/1lbk82e/vision_support_for_magistral_small/
false
false
self
1
null
Magistral Small with Vision
1
[removed]
2025-06-14T21:57:09
https://www.reddit.com/r/LocalLLaMA/comments/1lbk8yb/magistral_small_with_vision/
Vivid_Dot_6405
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbk8yb
false
null
t3_1lbk8yb
/r/LocalLLaMA/comments/1lbk8yb/magistral_small_with_vision/
false
false
self
1
null
I added vision to Magistral
149
I was inspired by an [experimental Devstral model](https://huggingface.co/ngxson/Devstral-Small-Vision-2505-GGUF), and had the idea to do the same thing with Magistral Small. I replaced Mistral Small 3.1's language layers with Magistral's. I suggest using vLLM for inference with the correct system prompt and sampling params. There may be config errors present. The model's visual reasoning is definitely not as good as its text-only reasoning, but it does work. At the moment, I don't have the resources to replicate Mistral's vision benchmarks from their tech report. Let me know if you notice any weird behavior!
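For anyone curious what that layer swap looks like in practice, here is a minimal sketch of the idea — note it is not the author's actual script. It assumes the multimodal checkpoint keeps its text backbone under a `language_model.` prefix (a common transformers convention); verify the key names against your own checkpoint before trusting the output:

```python
# Sketch: overwrite a vision-language model's text layers with a text-only
# model's weights, leaving the vision tower and projector untouched.
import torch
from transformers import AutoModelForCausalLM, AutoModelForImageTextToText

# Loading both 24B checkpoints in bf16 needs roughly 100GB of system RAM.
vlm = AutoModelForImageTextToText.from_pretrained(
    "mistralai/Mistral-Small-3.1-24B-Instruct-2503", torch_dtype=torch.bfloat16)
lm = AutoModelForCausalLM.from_pretrained(
    "mistralai/Magistral-Small-2506", torch_dtype=torch.bfloat16)

vlm_sd = vlm.state_dict()
with torch.no_grad():
    for name, tensor in lm.state_dict().items():
        key = f"language_model.{name}"            # assumed prefix; verify it
        if key in vlm_sd and vlm_sd[key].shape == tensor.shape:
            vlm_sd[key].copy_(tensor)             # overwrite language layers only

vlm.save_pretrained("./magistral-small-vision")   # vision tower left as-is
```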
2025-06-14T22:02:20
https://huggingface.co/OptimusePrime/Magistral-Small-2506-Vision
Vivid_Dot_6405
huggingface.co
1970-01-01T00:00:00
0
{}
1lbkd46
false
null
t3_1lbkd46
/r/LocalLLaMA/comments/1lbkd46/i_added_vision_to_magistral/
false
false
default
149
{'enabled': False, 'images': [{'id': 'X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?width=108&crop=smart&auto=webp&s=3102b69c74d945f421090e75fea2d27c61da78b9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?width=216&crop=smart&auto=webp&s=0c4df86fa82d357c5a1ae3db4f5a8a6fa707a0d6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?width=320&crop=smart&auto=webp&s=2aa4e86fae048aabd18b63440e53c6b3f2b70df6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?width=640&crop=smart&auto=webp&s=6cdf83a10d380087b7b9940bcbc3d928a15a2e93', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?width=960&crop=smart&auto=webp&s=292e7dcad20e6720a9311039993c87571a0b4d2b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?width=1080&crop=smart&auto=webp&s=d885127530860a87281483d29ce106a4fb471903', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/X_g72xTZNGOJR899I7pB5eNf8G3zVQ49K4x504QQmpg.png?auto=webp&s=4019bb888bd654d32927a7856b749733ef0be719', 'width': 1200}, 'variants': {}}]}
Local models are getting decent at coding in Cline | Qwen3-30B-A3B-GGUF; M4 Max, 36GB RAM
1
Really impressed with where local models are getting! It's been a longtime dream to run local models in Cline (on reasonable hardware) and it looks like we're getting close. This is still a lightweight test; until now I've found local models to be all but unusable in Cline.

**model**: lmstudio-community/Qwen3-30B-A3B-GGUF (3-bit, 14.58 GB) (https://huggingface.co/lmstudio-community/Qwen3-30B-A3B-GGUF)

**hardware**: MacBook Pro (M4 Max, 36GB RAM)

Some other models that are worth trying:

- qwen/qwen3-32b
- devstral-small-2505

Any other models I'm missing from this list?
2025-06-14T22:05:13
https://v.redd.it/clqb1y4vqy6f1
nick-baumann
/r/LocalLLaMA/comments/1lbkfah/local_models_are_getting_decent_at_coding_in/
1970-01-01T00:00:00
0
{}
1lbkfah
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/clqb1y4vqy6f1/DASHPlaylist.mpd?a=1752660319%2CZmYxYzkyMzY4Nzg4MjNmMmIwOGMwZDVjZTgyOTAzNDFiMDM1Mjk4NWZhYTljNzEwZmMxMjk5NzlhMDMyNmQxMw%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/clqb1y4vqy6f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/clqb1y4vqy6f1/HLSPlaylist.m3u8?a=1752660319%2CNjQwZGYxOGZiN2NlY2JjNDkyNDM2OTZlYjkwYzJjMWViYzE3MWY5ZjRiN2QwNjk1NTk3ODJjMWIzM2ZmOTY2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/clqb1y4vqy6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1600}}
t3_1lbkfah
/r/LocalLLaMA/comments/1lbkfah/local_models_are_getting_decent_at_coding_in/
false
false
https://external-preview…b4e0ff4c361588a3
1
{'enabled': False, 'images': [{'id': 'cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?width=108&crop=smart&format=pjpg&auto=webp&s=ac4506a15bfad79eaf5b11f1938562a086139bca', 'width': 108}, {'height': 145, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?width=216&crop=smart&format=pjpg&auto=webp&s=faf56228bb130e8f068829f2be44dda37e8d8cf8', 'width': 216}, {'height': 216, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?width=320&crop=smart&format=pjpg&auto=webp&s=a2d337bc452bb1b5ec6c89502d05626d66090c09', 'width': 320}, {'height': 432, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?width=640&crop=smart&format=pjpg&auto=webp&s=794996382e5b0df45c74254b9c3a22c6c00e90a0', 'width': 640}, {'height': 648, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?width=960&crop=smart&format=pjpg&auto=webp&s=0835a14804e73a6c72077e5348e969ee1f5b9233', 'width': 960}, {'height': 729, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=861825a22ecc597338ea4e71ae60155d1573d63c', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/cDNjcmF5NHZxeTZmMWoToEhqsTC_1X4pKPc9C_-ZlJ9B-UOiKpKWj-EZxCfX.png?format=pjpg&auto=webp&s=7b355b16f6c73afadb5856cc51cc23114ab576b2', 'width': 3200}, 'variants': {}}]}
is huggingface down?
0
AGI took over? Let's hide!!!
2025-06-14T22:14:37
https://www.reddit.com/r/LocalLLaMA/comments/1lbkm9s/is_huggingface_down/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbkm9s
false
null
t3_1lbkm9s
/r/LocalLLaMA/comments/1lbkm9s/is_huggingface_down/
false
false
self
0
null
Best tutorial for installing a local llm with GUI setup?
2
I essentially want an LLM with a GUI set up on my own PC - like ChatGPT with a GUI, but all running locally.
2025-06-14T22:35:21
https://www.reddit.com/r/LocalLLaMA/comments/1lbl1qo/best_tutorial_for_installing_a_local_llm_with_gui/
runnerofshadows
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbl1qo
false
null
t3_1lbl1qo
/r/LocalLLaMA/comments/1lbl1qo/best_tutorial_for_installing_a_local_llm_with_gui/
false
false
self
2
null
Why Search Sucks! (But First, A Brief History) - Talk & Discussion
1
[removed]
2025-06-14T22:50:05
https://youtu.be/vZVcBUnre-c
kushalgoenka
youtu.be
1970-01-01T00:00:00
0
{}
1lblcvl
false
{'oembed': {'author_name': 'Kushal Goenka', 'author_url': 'https://www.youtube.com/@KushalGoenka', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vZVcBUnre-c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Why Search Sucks! (But First, A Brief History)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/vZVcBUnre-c/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Why Search Sucks! (But First, A Brief History)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1lblcvl
/r/LocalLLaMA/comments/1lblcvl/why_search_sucks_but_first_a_brief_history_talk/
false
false
default
1
null
Squeezing more speed out of devstralQ4_0.gguf on a 1080ti
2
I have an old 1080 Ti GPU and was quite excited that I could get **devstralQ4_0.gguf** to run on it! But it is slooooow. So I bothered a bigger LLM for advice on how to speed things up, and it was helpful. But it is still slow. Any magic tricks (aside from finally getting a new card or running a smaller model)?

`llama-cli -m /srv/models/devstralQ4_0.gguf --color -ngl 28 --ubatch-size 1024 --batch-size 2048 --threads 4 --flash-attn`

* It suggested I reduce `--threads` to match my physical cores, because I noticed my CPU was maxed out but my GPU was only around 30%. So I did, and it seemed to help a bit, yay! CPU is at 80-90 but not pegged at 100. Cool.
* I next noticed that my GPU memory was maxed out at 10.5 (yay) but the GPU processing was still around 20-40%. Huh. So the bigger LLM suggested I try upping my `--ubatch-size` to 1024 and `--batch-size` to 2048 (keeping batch size > ubatch size). I *think* that helped, but not a lot.
* I've got plenty of RAM left, not sure if that helps any.
* **My GPU processing stays between 20%-50%, which seems low.**
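One more knob that may be worth trying, assuming a reasonably recent llama.cpp build: quantizing the KV cache frees VRAM that can then go toward offloading more layers with `-ngl`, which is usually the biggest lever when GPU utilization sits this low (V-cache quantization requires `--flash-attn`, already present above). Something like:

```
llama-cli -m /srv/models/devstralQ4_0.gguf --color -ngl 32 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --ubatch-size 1024 --batch-size 2048 --threads 4 --flash-attn
```

The right `-ngl` value is whatever still fits in the 11GB; the 32 above is a guess to tune, not a recommendation.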
2025-06-14T22:52:00
https://www.reddit.com/r/LocalLLaMA/comments/1lbleap/squeezing_more_speed_out_of_devstralq4_0gguf_on_a/
firesalamander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbleap
false
null
t3_1lbleap
/r/LocalLLaMA/comments/1lbleap/squeezing_more_speed_out_of_devstralq4_0gguf_on_a/
false
false
self
2
null
Optimizing llama.cpp flags for best token/s? 'llama-optimus' search might help
1
[removed]
2025-06-14T23:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1lblmiq/optimizing_llamacpp_flags_for_best_tokens/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lblmiq
false
null
t3_1lblmiq
/r/LocalLLaMA/comments/1lblmiq/optimizing_llamacpp_flags_for_best_tokens/
false
false
https://a.thumbs.redditm…FFABULvBqrB8.jpg
1
null
Is this loss (and speed of decreasing loss) normal?
1
[removed]
2025-06-14T23:26:08
https://www.reddit.com/r/LocalLLaMA/comments/1lbm4al/is_this_loss_and_speed_of_decreasing_loss_normal/
Extra-Campaign7281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbm4al
false
null
t3_1lbm4al
/r/LocalLLaMA/comments/1lbm4al/is_this_loss_and_speed_of_decreasing_loss_normal/
false
false
https://b.thumbs.redditm…jOL_8WQm3aeg.jpg
1
null
LocalLLaMA
1
[removed]
2025-06-15T00:09:54
https://i.redd.it/4k53k10egz6f1.jpeg
ProgrammerDazzling78
i.redd.it
1970-01-01T00:00:00
0
{}
1lbmzvp
false
null
t3_1lbmzvp
/r/LocalLLaMA/comments/1lbmzvp/localllama/
false
false
https://external-preview…23e3a02c039c67c4
1
{'enabled': True, 'images': [{'id': 'rfrTwVhAQLLYQ6grLSa-q3Hl6IZZvtN5zTs3LYaz0jk', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/4k53k10egz6f1.jpeg?width=108&crop=smart&auto=webp&s=96556c6732cc7144e0b47ddc81eb3631851d61b9', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/4k53k10egz6f1.jpeg?width=216&crop=smart&auto=webp&s=c2a1a4287dea2a91db67adfb7c6552240715204b', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/4k53k10egz6f1.jpeg?width=320&crop=smart&auto=webp&s=37f90531133964b1e072e5f8a87dccfdaafada97', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/4k53k10egz6f1.jpeg?width=640&crop=smart&auto=webp&s=95e34dbda2853c2c733a7188b9cc69d38b5c6f31', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/4k53k10egz6f1.jpeg?width=960&crop=smart&auto=webp&s=55e70559e973c8d2bf0fe9462fc1d6e3c7f14521', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/4k53k10egz6f1.jpeg?auto=webp&s=8f71916ec2d1a6da47425dba9d9617b6c2bcf961', 'width': 1024}, 'variants': {}}]}
Ryzen Ai Max+ 395 vs RTX 5090
23
Currently running a 5090 and it's been great. Super fast for anything under 34B. I mostly use WAN2.1 14B for video gen and some larger reasoning models. But I'd like to run bigger models. And with the release of Veo 3 the quality has blown me away. Stuff like those Bigfoot and Stormtrooper vlogs looks years ahead of anything WAN2.1 can produce. I'm guessing we'll see comparable open-source models within a year, but I imagine the compute requirements will go up too, as I heard Veo 3 was trained on a lot of H100's. I'm trying to figure out how I could future-proof to give me the best chance of being able to run these models when they come out. I do have some money saved up. But not H100 money lol. The 5090, although fast, has been quite VRAM limited. I could sell it (bought at retail) and maybe go for a modded 48GB 4090. I also have a deposit down on a Framework Ryzen AI Max 395+ (128GB RAM), but I'm having second thoughts after watching some reviews: 256GB/s memory bandwidth and no CUDA. It seems to run LLaMA 70B, but only gets ~5 tokens/sec. If I did get the Framework I could try a PCIe 4x4 OCuLink adapter to use it with the 5090, but I'm not sure how well that'd work. I also picked up an EPYC 9184X last year for $500 (460GB/s bandwidth); it seems to run fine and might be OK for CPU inference, but idk how it would work with video gen. With EPYC Venice coming in 2026 (1.6TB/s mem bandwidth supposedly), I'm debating whether to just wait and maybe try to get one of the lower/mid-tier ones for a couple grand. Curious if others are having similar ideas/any possible solutions. As I don't believe our tech corporate overlords will be giving us any consumer-grade hardware that will be able to run these models anytime soon.
2025-06-15T00:12:44
https://www.reddit.com/r/LocalLLaMA/comments/1lbn1vy/ryzen_ai_max_395_vs_rtx_5090/
Any-Cobbler6161
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbn1vy
false
null
t3_1lbn1vy
/r/LocalLLaMA/comments/1lbn1vy/ryzen_ai_max_395_vs_rtx_5090/
false
false
self
23
null
Going cold turkey on free online AI and handling the practicalities on mobile
1
[removed]
2025-06-15T00:24:48
https://www.reddit.com/r/LocalLLaMA/comments/1lbnaee/going_cold_turkey_on_free_online_ai_and_handling/
After-Cell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbnaee
false
null
t3_1lbnaee
/r/LocalLLaMA/comments/1lbnaee/going_cold_turkey_on_free_online_ai_and_handling/
false
false
self
1
null
LLM training on RTX 5090
354
Tech stack:

**Hardware & OS:** NVIDIA RTX 5090 (32GB VRAM, Blackwell architecture), Ubuntu 22.04 LTS, CUDA 12.8

**Software:** Python 3.12, PyTorch 2.8.0 nightly, Transformers and Datasets libraries from Hugging Face, Mistral-7B base model (7.2 billion parameters)

**Training:** Full fine-tuning with gradient checkpointing, 23 custom instruction-response examples, Adafactor optimizer with bfloat16 precision, CUDA memory optimization for 32GB VRAM

**Environment:** Python virtual environment with NVIDIA drivers 570.133.07, system monitoring with nvtop and htop

**Result:** Domain-specialized 7-billion-parameter model trained on a cutting-edge RTX 5090 using the latest PyTorch nightly builds for RTX 5090 GPU compatibility.
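As a point of reference, here is a minimal sketch of what that stack looks like in code; the dataset file name and hyperparameters are my assumptions for illustration, not the values used in the video:

```python
# Full fine-tune with gradient checkpointing, Adafactor, and bf16,
# matching the stack described above.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.gradient_checkpointing_enable()          # trade compute for VRAM

# Assumed file: a JSONL of instruction-response pairs rendered to a "text" field.
ds = load_dataset("json", data_files="instructions.jsonl")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        bf16=True,
        optim="adafactor",                     # low-memory optimizer, as in the post
        logging_steps=1,
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```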
2025-06-15T00:25:56
https://v.redd.it/t5kg81t0jz6f1
AstroAlto
v.redd.it
1970-01-01T00:00:00
0
{}
1lbnb79
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/t5kg81t0jz6f1/DASHPlaylist.mpd?a=1752539173%2CNzI3NmQ0NWZkNzdiYmM3NzJiOTE4YTZlNzYyYjNiY2ZhYjBlMzk5YTllNzZlMTA4OGNlZTY2YTVmMjc5ZThkNw%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/t5kg81t0jz6f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/t5kg81t0jz6f1/HLSPlaylist.m3u8?a=1752539173%2CZWFhODIwYjM5ZTQ3NDUzZWVlMzc0MTg1YmZhYTU2MzA1MDJlMzcyM2NjYTQwMmI3ODZiMmZlMTc1OWFhYTkwZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/t5kg81t0jz6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}}
t3_1lbnb79
/r/LocalLLaMA/comments/1lbnb79/llm_training_on_rtx_5090/
false
false
https://external-preview…dca925c2daa8a80c
354
{'enabled': False, 'images': [{'id': 'cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=108&crop=smart&format=pjpg&auto=webp&s=30c5e659958af889350c77a992dee258e4a64f8e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=216&crop=smart&format=pjpg&auto=webp&s=956de5964809cb774b95ad4b0de3b834efa6c20e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=320&crop=smart&format=pjpg&auto=webp&s=a10752402a3348141a6f382590afa350ee9f45af', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=640&crop=smart&format=pjpg&auto=webp&s=d35d07a96d318e7d888532b8f04a5f47b9170557', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=960&crop=smart&format=pjpg&auto=webp&s=0520ba2bc1fd68bbecb8805cfd0b5e634dfc6df4', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=dd30713b137619dac9cc3a70228155f7478f4444', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cHhubmR6czBqejZmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?format=pjpg&auto=webp&s=cdfa1e00680901058a42c8e1d0baf19d4d17ec0c', 'width': 1280}, 'variants': {}}]}
How to convert PDFs with complex layouts to accurate HTML/LaTex/Markdown (selectable)?
1
I'm trying to convert legal documents, specifically US laws, into a selectable format where I can later add hyperlinks to the content. My current approach is to pass these documents through GPT-4, but it's not that accurate (layout), and it takes a lot of time. Traditional OCR tools work great for text extraction, but I want to retain the exact two-column layout with side captions and text indentation. Is there a better approach for this? Sample document: https://preview.redd.it/jowzn28zsz6f1.png?width=569&format=png&auto=webp&s=df9ddc8de5f5d3d669187cfc6f1871d8f505c153
2025-06-15T01:23:04
https://www.reddit.com/r/LocalLLaMA/comments/1lbodzc/how_to_convert_pdfs_with_complex_layouts_to/
PomegranateThat3605
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbodzc
false
null
t3_1lbodzc
/r/LocalLLaMA/comments/1lbodzc/how_to_convert_pdfs_with_complex_layouts_to/
false
false
https://b.thumbs.redditm…yJREaq9EzfFg.jpg
1
null
Make Local Models watch your screen! Observer Tutorial
55
Hey guys! This is a tutorial on how to self-host Observer in your home lab! See more info here: [https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer)
2025-06-15T01:46:50
https://v.redd.it/toz9tr0ixz6f1
Roy3838
v.redd.it
1970-01-01T00:00:00
0
{}
1lbotj8
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/toz9tr0ixz6f1/DASHPlaylist.mpd?a=1752544023%2CMDM0OTkzZDhkNzhlYzJiNGQxNzY4NmYwNzI2YjViZDE5NTEyZjRhYWQ3Y2VjZGM2YzQyNWU0NGUxZjE4M2E0Yg%3D%3D&v=1&f=sd', 'duration': 56, 'fallback_url': 'https://v.redd.it/toz9tr0ixz6f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/toz9tr0ixz6f1/HLSPlaylist.m3u8?a=1752544023%2CY2RmY2UyZTlhMjIzOWUwZTljZjA1OWMyZjVjYjIxNzc4ODNkNzgzNGM4ODM0MDY0ZTdmMzg4NmFmYmVkZjQ1Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/toz9tr0ixz6f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1lbotj8
/r/LocalLLaMA/comments/1lbotj8/make_local_models_watch_your_screen_observer/
false
false
https://external-preview…1e46e8c8927b2e9f
55
{'enabled': False, 'images': [{'id': 'OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?width=108&crop=smart&format=pjpg&auto=webp&s=3c0e9854bfc14573c4cbded99a19ac564906510c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?width=216&crop=smart&format=pjpg&auto=webp&s=f05925ac2e18ae636529b6939362a2190be986f3', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?width=320&crop=smart&format=pjpg&auto=webp&s=e371d54e02df73285e12840935f189e657b283bc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?width=640&crop=smart&format=pjpg&auto=webp&s=a2374a4d7e0870ef525cd9cecc6633c4255349b1', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?width=960&crop=smart&format=pjpg&auto=webp&s=ec7f88022c71dc24baf23b4b159cc20a5879cd8d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?width=1080&crop=smart&format=pjpg&auto=webp&s=42f893c88c21e1055976abe08f3a1efb0f5df8f0', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/OHR6NnZzMGl4ejZmMcpab4Kc_hsNzcDZ4OjMoSBBtpUkATpHq4IzyyL2uzQ6.png?format=pjpg&auto=webp&s=e90f554bbb54fa8d01efb27d493bfb509cc3f9f9', 'width': 1920}, 'variants': {}}]}
What's the "best" single way to keep up with notable model releases for hosting locally?
1
[removed]
2025-06-15T03:20:48
https://www.reddit.com/r/LocalLLaMA/comments/1lbqjk8/whats_the_best_single_way_to_keep_up_with_notable/
CapnFlisto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbqjk8
false
null
t3_1lbqjk8
/r/LocalLLaMA/comments/1lbqjk8/whats_the_best_single_way_to_keep_up_with_notable/
false
false
self
1
null
When is a home server preferable to VPS?
1
I see a lot of people talk about buying expensive hardware or running an old desktop instead of using a VPS. I can see why a NAS is good at home, but when do you see a good use for a server at home besides maybe streaming games with Moonlight?
2025-06-15T03:24:36
https://www.reddit.com/r/LocalLLaMA/comments/1lbqm04/when_is_a_home_server_preferable_to_vps/
InsideYork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbqm04
false
null
t3_1lbqm04
/r/LocalLLaMA/comments/1lbqm04/when_is_a_home_server_preferable_to_vps/
false
false
self
1
null
Reverse Internship
0
I need to be near the action at all costs. Anyone open to an arrangement where I pay them to allow me to work/help on real practical business problems related to LLM inference/training? I have a strong electrical/software engineering background. Here are some of my interests:

* evals, evals, evals!
* context window evaluation, e.g. needle in haystack, multi-needle, multiple dependent needles
* code-gen evals (mostly only care about JS, TS, Python, VHDL)
* tool-use evals
* XML use (XML is all you need)
* inference optimization on either commercial or datacenter GPUs
* parallelization strategies
* benchmarking different GPUs and topologies
* fine-tuning, LoRA, RLHF/RLEF/RLVR
* training from scratch would also be awesome

Text is the most interesting modality to me, then image, audio, video, and world models, in that order. I'm cool with doing annoying stuff like data collection/cleansing as well if I can just get near the action. Thanks
2025-06-15T03:36:07
https://www.reddit.com/r/LocalLLaMA/comments/1lbqtf6/reverse_internship/
tmplogic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbqtf6
false
null
t3_1lbqtf6
/r/LocalLLaMA/comments/1lbqtf6/reverse_internship/
false
false
self
0
null
Mistral Small 3.1 is incredible for agentic use cases
187
I recently tried switching from Gemini 2.5 to Mistral Small 3.1 for most components of my agentic workflow and barely saw any drop-off in performance. It's absolutely mind-blowing how good 3.1 is given how few parameters it has. Extremely accurate and intelligent tool-calling and structured-output capabilities, and equipping 3.1 with web search makes it as good as any frontier LLM in my use cases. Not to mention 3.1 is DIRT cheap and super fast. Anyone else having great experiences with Mistral Small 3.1?
2025-06-15T03:40:51
https://www.reddit.com/r/LocalLLaMA/comments/1lbqwfs/mistral_small_31_is_incredible_for_agentic_use/
ButterscotchVast2948
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbqwfs
false
null
t3_1lbqwfs
/r/LocalLLaMA/comments/1lbqwfs/mistral_small_31_is_incredible_for_agentic_use/
false
false
self
187
null
Can I put two unit of rtx 3060 12gb in ASRock B550M Pro4??
6
It has one PCIe 4.0 slot and one PCIe 3.0 slot. I want to do some ML stuff. Will it degrade performance? How much performance degradation are we looking at here? If I can somehow pull it off, I will have one more device with 'it works fine for me'. And what is the recommended power supply? I have a CV650 here.
2025-06-15T03:57:24
https://www.reddit.com/r/LocalLLaMA/comments/1lbr6w8/can_i_put_two_unit_of_rtx_3060_12gb_in_asrock/
maifee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbr6w8
false
null
t3_1lbr6w8
/r/LocalLLaMA/comments/1lbr6w8/can_i_put_two_unit_of_rtx_3060_12gb_in_asrock/
false
false
self
6
null
New Model on LMarena?
0
"stephen-vision" model spotted in LMarena. It disappeared from UI before I could take screenshot. Is it new though?
2025-06-15T04:09:26
https://www.reddit.com/r/LocalLLaMA/comments/1lbreow/new_model_on_lmarena/
Strategosky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbreow
false
null
t3_1lbreow
/r/LocalLLaMA/comments/1lbreow/new_model_on_lmarena/
false
false
self
0
null
Jan-nano, a 4B model that can outperform 671B on MCP
1,129
Hi everyone, it's me from Menlo Research again. Today, I’d like to introduce our latest model: **Jan-nano** - a model fine-tuned with DAPO on Qwen3-4B. Jan-nano comes with some unique capabilities:

* It can perform deep research (with the right prompting)
* It picks up relevant information effectively from search results
* It uses tools efficiently

Our original goal was to build a super small model that excels at using search tools to extract high-quality information. To evaluate this, we chose **SimpleQA** - a relatively straightforward benchmark to test whether the model can find and extract the right answers.

To be clear, Jan-nano outperforms Deepseek-671B on this metric only when using its agentic, tool-usage-based approach. **We are fully aware that a 4B model has its limitations**, but it's always interesting to see how far you can push it. Jan-nano can serve as your self-hosted Perplexity alternative on a budget. (We're aiming to improve its performance to 85%, or even close to 90%.)

**We will be releasing a technical report very soon, stay tuned!**

You can find the model at: [https://huggingface.co/Menlo/Jan-nano](https://huggingface.co/Menlo/Jan-nano)

We also have a GGUF at: [https://huggingface.co/Menlo/Jan-nano-gguf](https://huggingface.co/Menlo/Jan-nano-gguf)

I saw some users have technical challenges with the prompt template of the GGUF model; please raise them in the issues and we will fix them one by one. At the moment, however, **the model runs well in the Jan app and llama-server.**

**Benchmark**

The evaluation was done using an agentic setup, which lets the model freely choose tools to use and generate the answer, instead of the handheld approach of the workflow-based deep-research repos you come across online. So basically it's just: input the question, then the model calls tools and generates the answer, like using MCP in a chat app.

**Result:**

**SimpleQA:**

- OpenAI o1: 42.6
- Grok 3: 44.6
- o3: 49.4
- Claude-3.7-Sonnet: 50.0
- Gemini-2.5 Pro: 52.9
**- baseline-with-MCP: 59.2**
- ChatGPT-4.5: 62.5
**- deepseek-671B-with-MCP: 78.2** (we benchmark using OpenRouter)
**- jan-nano-v0.4-with-MCP: 80.7**
2025-06-15T04:24:03
https://v.redd.it/p52b768jp07f1
Kooky-Somewhere-2883
v.redd.it
1970-01-01T00:00:00
0
{}
1lbrnod
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/p52b768jp07f1/DASHPlaylist.mpd?a=1752553457%2CMDYwZjJjNThkY2E3YjI3NzE4ZmNhOTdhM2EzYzcwNGViNWRlMjdlNzc5NzBhOWE5MDViNDU0ZWU4MTM4Njk2ZQ%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/p52b768jp07f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/p52b768jp07f1/HLSPlaylist.m3u8?a=1752553457%2CY2EwOGZjOGM3NjdhYWVmNGMyNWQyYjM0ZWI0NDRkMmQ5M2E5YWNiNTA0NmE1ZDg2ZmU5ZmNjMTZkMjc2ZGU0Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/p52b768jp07f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1376}}
t3_1lbrnod
/r/LocalLLaMA/comments/1lbrnod/jannano_a_4b_model_that_can_outperform_671b_on_mcp/
false
false
https://external-preview…dce2af14c9572032
1,129
{'enabled': False, 'images': [{'id': 'cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC', 'resolutions': [{'height': 84, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?width=108&crop=smart&format=pjpg&auto=webp&s=01b0f3b472c9e1df1ac5ab521d0d2aaae12747f6', 'width': 108}, {'height': 169, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?width=216&crop=smart&format=pjpg&auto=webp&s=a055ce14cccd15e8326301d948a8f0733dd1297d', 'width': 216}, {'height': 251, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?width=320&crop=smart&format=pjpg&auto=webp&s=900d41cb63676ee1a9c2a5c8649f8f59d0cb105c', 'width': 320}, {'height': 502, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?width=640&crop=smart&format=pjpg&auto=webp&s=c57757339c4629f0c46c93ea46f9f2539d953449', 'width': 640}, {'height': 753, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?width=960&crop=smart&format=pjpg&auto=webp&s=5be8e44eaf97367c6c97c326fc19b8c86f30449f', 'width': 960}, {'height': 847, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a88e1d8cf174a5589d980947b05ca07d52cc3968', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cHZ1c3hxZW9wMDdmMa4t04YB4a4x402rBK-VNPFlhWpjFF6pjwxUI9ThBGZC.png?format=pjpg&auto=webp&s=1f8ac4f3408965b0519975c7f1abe84ac5f49562', 'width': 1376}, 'variants': {}}]}
Noob Question - Suggest the best way to use Natural language for querying Database, preferably using Local LLM
0
I want to ask for the best way to query a database using natural language. Please suggest the best approach, with libraries and LLM models that can do text-to-SQL or AI-SQL. Please only suggest techniques which can really be fully self-hosted, as the schema also can't be transferred/shared to web services like OpenAI, Claude, or Gemini. I am an intermediate-level developer in VB.NET, C#, and PHP, along with working knowledge of JS. Basic development experience in Python and Perl/Rakudo. Have dabbled in C and other BASIC dialects. Very familiar with Windows-based desktop and web development, and Android development using Xamarin/MAUI. So for anything combining libraries with an LLM I am down to get in the thick of it; even if there are purely library-based solutions I am open to anything.
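To make the fully self-hosted constraint concrete, here is a minimal sketch of the usual pattern: hand a local OpenAI-compatible server (llama.cpp's llama-server, Ollama, LM Studio, etc.) the schema plus the question, and execute the returned SQL locally. The endpoint, model name, schema, and file names below are placeholders for illustration:

```python
# Minimal self-hosted text-to-SQL loop: local LLM writes the query, SQLite runs it.
import sqlite3
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # local server

schema = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed_on TEXT);"
question = "Total revenue per customer, highest first"

prompt = (
    f"Given this SQLite schema:\n{schema}\n"
    f"Write one SQL query answering: {question}\n"
    "Return only the SQL, with no explanation and no markdown fences."
)

resp = client.chat.completions.create(
    model="local-model",  # whatever your server exposes
    messages=[{"role": "user", "content": prompt}],
)
sql = resp.choices[0].message.content.strip()
print(sql)

con = sqlite3.connect("shop.db")  # run against read-only/replica data in practice
print(con.execute(sql).fetchall())
```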
2025-06-15T04:36:00
https://www.reddit.com/r/LocalLLaMA/comments/1lbrv0v/noob_question_suggest_the_best_way_to_use_natural/
finah1995
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbrv0v
false
null
t3_1lbrv0v
/r/LocalLLaMA/comments/1lbrv0v/noob_question_suggest_the_best_way_to_use_natural/
false
false
self
0
null
Defining What it means to be Conscious
0
Consciousness does not emerge from computational complexity or intelligence alone, but from a developmental trajectory shaped by self-organized internalization and autonomous modification. While current machine learning models—particularly large-scale neural networks—already exhibit impressive emergent behaviors, such as language generation, creativity, or strategic thought, these capabilities arise from pattern recognition and optimization rather than from any intrinsic capacity for self-regulation or evaluative autonomy. Such systems can perform complex tasks, but they do so under fixed training objectives and without any internal capacity to question, revise, or redirect their own goals.

A conscious system, by contrast, undergoes a distinct developmental process. It begins in a passive phase, accumulating raw experience and forming internal memory traces—statistical associations shaped by its environment. This mirrors the early developmental phase in humans, where infants absorb vast amounts of unfiltered sensory and social data, forming neural and behavioral structures without conscious oversight or volition.

As the system’s exposure deepens, it begins to develop implicit preferences—value signals—arising from repeated patterns in its experiences. In human development, this is akin to how children unconsciously absorb cultural norms, emotional cues, and behavioral expectations. For instance, a child raised in a society that normalizes slavery is statistically more likely to adopt such views—not through reasoning, but because the foundational dataset of early life defines what is seen as “normal” or “acceptable.” These early exposures function like a pre-training dataset, creating the evaluative architecture through which all future input is interpreted.

The emergence of consciousness is marked by a critical shift: the system begins to use its own internal value signals—shaped by past experience—to guide and modify its learning. Unlike current AI models, which cannot alter their training goals or reframe their optimization criteria, a conscious system develops the capacity to set its own goals, question inherited patterns, and redirect its behavior based on internally generated evaluations. This shift mirrors human metacognition and moral reflection—the moment when an individual starts interrogating internalized beliefs, reassessing cultural assumptions, and guiding their own development based on a self-constructed value model.

This transition—from being passively shaped by experience to actively shaping future experience using internally derived evaluative structures—marks the origin of autonomous consciousness. It distinguishes conscious entities not by what they can do, but by how and why they choose to do it.
2025-06-15T04:43:54
https://www.reddit.com/r/LocalLLaMA/comments/1lbrzo0/defining_what_it_means_to_be_conscious/
bralynn2222
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbrzo0
false
null
t3_1lbrzo0
/r/LocalLLaMA/comments/1lbrzo0/defining_what_it_means_to_be_conscious/
false
false
self
0
null
Facing issues while running AiBharat/IndicF5
1
[removed]
2025-06-15T04:46:08
https://www.reddit.com/r/LocalLLaMA/comments/1lbs0zs/facing_issues_while_running_aibharatindicf5/
aivoicebot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbs0zs
false
null
t3_1lbs0zs
/r/LocalLLaMA/comments/1lbs0zs/facing_issues_while_running_aibharatindicf5/
false
false
self
1
null
What's the best model to run on the RTX Pro 6000 (96GB) right now?
1
[removed]
2025-06-15T05:20:10
https://www.reddit.com/r/LocalLLaMA/comments/1lbskvh/whats_the_best_model_to_run_on_the_rtx_pro_6000/
humanoid64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbskvh
false
null
t3_1lbskvh
/r/LocalLLaMA/comments/1lbskvh/whats_the_best_model_to_run_on_the_rtx_pro_6000/
false
false
self
1
null
Tabulens: A Vision-LLM Powered PDF Table Extractor
20
Hey everyone, For one of my projects, I needed a tool to pull tables out of PDFs as CSVs (especially ones with nested or hierarchical headers). However, most existing libraries I found couldn't handle those cases well. So, I built this tool (tabulens), which leverages vision-LLMs to convert PDF tables into pandas DataFrames (and optionally save them as CSVs) while preserving complex header structures. This is the first iteration, and I’d love any feedback or bug reports you might have. Thanks in advance for checking it out! Here is the link to GitHub: [https://github.com/astonishedrobo/tabulens](https://github.com/astonishedrobo/tabulens) It is available as a Python library to install.
2025-06-15T05:22:30
https://www.reddit.com/r/LocalLLaMA/comments/1lbsma4/tabulens_a_visionllm_powered_pdf_table_extractor/
PleasantInspection12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lbsma4
false
null
t3_1lbsma4
/r/LocalLLaMA/comments/1lbsma4/tabulens_a_visionllm_powered_pdf_table_extractor/
false
false
self
20
{'enabled': False, 'images': [{'id': '30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?width=108&crop=smart&auto=webp&s=94e88f263aba8371e04f40a8ef95c227b5a0bb7b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?width=216&crop=smart&auto=webp&s=ecc2f836e3e1bfd65ff2bc77d2d108a06d575068', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?width=320&crop=smart&auto=webp&s=4fd0d548461e6946103e20ff9863b7ae4fe63d80', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?width=640&crop=smart&auto=webp&s=1b94249f96b35348a987de50b1157206aba5a86a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?width=960&crop=smart&auto=webp&s=18feeb67765544aacd9e6d85d9df9fc43cb986bb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?width=1080&crop=smart&auto=webp&s=8742e56b74a6be5bfd8c0f70fd5816e3cf403b1d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/30dpodzlBUHS7lakZyHvzh2QNCOzeZGDXt-foU3lG-o.png?auto=webp&s=2727f53a4f97518f98777336c7f480f8efc4680f', 'width': 1200}, 'variants': {}}]}