title stringlengths 1-300 | score int64 0-8.54k | selftext stringlengths 0-40k | created timestamp[ns]date 2023-04-01 04:30:41 to 2025-06-30 03:16:29 ⌀ | url stringlengths 0-878 | author stringlengths 3-20 | domain stringlengths 0-82 | edited timestamp[ns]date 1970-01-01 00:00:00 to 2025-06-26 17:30:18 | gilded int64 0-2 | gildings stringclasses 7 values | id stringlengths 7-7 | locked bool 2 classes | media stringlengths 646-1.8k ⌀ | name stringlengths 10-10 | permalink stringlengths 33-82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4-213 | ups int64 0-8.54k | preview stringlengths 301-5.01k ⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Why is Mistral Small 3 faster than the Qwen3 30B A3B model?
| 1 |
[removed]
| 2025-05-27T19:05:25 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwvrrz/why_mistral_small_3_faster_than_qwen3_30b_a3b/
|
Alone_Ad_6011
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwvrrz
| false | null |
t3_1kwvrrz
|
/r/LocalLLaMA/comments/1kwvrrz/why_mistral_small_3_faster_than_qwen3_30b_a3b/
| false | false |
self
| 1 | null |
Time to make all models think 🧠 – the brand-new *Mixture-of-Thoughts* reasoning dataset is here
| 1 |
[removed]
| 2025-05-27T19:06:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwvslz/time_to_make_all_models_think_the_brandnew/
|
Thatisverytrue54321
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwvslz
| false | null |
t3_1kwvslz
|
/r/LocalLLaMA/comments/1kwvslz/time_to_make_all_models_think_the_brandnew/
| false | false |
self
| 1 | null |
Mistral Small 3 is faster than the Qwen3 30B A3B model. It's weird
| 1 |
[removed]
| 2025-05-27T19:08:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwvuhe/mistral_small_3_faster_than_qwen3_30b_a3b_model/
|
Alone_Ad_6011
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwvuhe
| false | null |
t3_1kwvuhe
|
/r/LocalLLaMA/comments/1kwvuhe/mistral_small_3_faster_than_qwen3_30b_a3b_model/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]}
|
Help with safetensors quants
| 1 |
[removed]
| 2025-05-27T19:24:17 |
https://www.reddit.com/r/LocalLLaMA/comments/1kww9h0/help_with_safetensors_quants/
|
chub0ka
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kww9h0
| false | null |
t3_1kww9h0
|
/r/LocalLLaMA/comments/1kww9h0/help_with_safetensors_quants/
| false | false |
self
| 1 | null |
How to think about ownership of my personal AI system
| 3 |
I’m working on building my own personal AI system, and thinking about what it means to own my own AI system. Here’s how I’m thinking about it and would appreciate thoughts from the community on where you think I am on or off base here.
I think ownership lies on a spectrum between running on ChatGPT, which I clearly don't own, and running a 100% MIT-licensed setup locally, which I clearly do own.
Hosting: Let’s say I’m running an MIT-licensed AI system but instead of hosting it locally, I run it on Google Cloud. I don’t own the cloud infrastructure, but I’d still consider this my AI system. Why? Because I retain full control. I can leave anytime, move to another host, or run it locally without losing anything. The cloud host is a service that I am using to host my AI system.
AI Models: I also don't believe I need to own or self-host every model I use in order to own my AI system. I think about this like my physical mind. I control my intelligence, but I routinely consult other minds I don't own, like mentors, books, and specialists. So if I use a third-party model (say, for legal or health advice), that doesn't compromise ownership so long as I choose when and how to use it, and I'm not locked into it.
Interface: Where I draw a harder line is the interface. Whether it’s a chatbox, wearable, or voice assistant, this is the entry point to my digital mind. If I don’t own and control this, someone else could reshape how I experience or access my system. So if I don’t own the interface I don’t believe I own my own AI system.
Storage & Memory: As memory in AI systems continues to improve, this is what is going to make AI systems truly personal. And this will be what makes my AI system truly my AI system. As unique to me as my physical memory, and exponentially more powerful. The more I use my personal AI system the more memory it will have, and the better and more personalized it will be at helping me. Over time losing access to the memory of my AI system would be as bad or potentially even worse than losing access to my physical memory.
Do you agree, disagree or think I am missing components from the above?
| 2025-05-27T19:31:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwwgon/how_to_think_about_ownership_of_my_personal_ai/
|
davidtwaring
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwwgon
| false | null |
t3_1kwwgon
|
/r/LocalLLaMA/comments/1kwwgon/how_to_think_about_ownership_of_my_personal_ai/
| false | false |
self
| 3 | null |
most hackable coding agent
| 6 |
I find that with local models, coding agents need quite a lot of guidance and fail at tasks that are too complex. Adherence to style and other rules is also often hard to achieve.
I use agents for planning, requirements engineering, software architecture work, etc., which is usually very specific to my domain, and tailoring low-resource LLMs to my use cases is often surprisingly effective. The only missing piece in my agentic chain is the actual coding part. I don't want to reinvent the wheel when others have figured that out better than I ever could.
Aider seems to be the option closest to what I want. They have Python bindings, but they also kind of advise against using them.
Any experience and recommendations for integrating coding agents in your own agent workflows?
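For reference, driving Aider from another agent through its scripting interface looks roughly like the sketch below; the model name, base URL convention, and file list are placeholders, so check Aider's scripting docs for the exact details.

```python
# Minimal sketch of scripting Aider from Python; model name and file list are
# placeholders (e.g. pointed at a local OpenAI-compatible server via env vars).
from aider.coders import Coder
from aider.models import Model

model = Model("openai/qwen2.5-coder")                 # assumed local endpoint/model
coder = Coder.create(main_model=model, fnames=["src/parser.py"])

# Each .run() call is one instruction handed down from the outer agent chain.
coder.run("Add type hints to parse_row() and keep the existing docstring.")
```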
| 2025-05-27T19:37:44 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwwlyv/most_hackable_coding_agent/
|
mnze_brngo_7325
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwwlyv
| false | null |
t3_1kwwlyv
|
/r/LocalLLaMA/comments/1kwwlyv/most_hackable_coding_agent/
| false | false |
self
| 6 | null |
B-score: Detecting Biases in Large Language Models Using Response History
| 11 |
**TLDR:** When LLMs can see their own previous answers, their biases significantly decrease. We introduce B-score, a metric that detects bias by comparing responses between single-turn and multi-turn conversations.
**Paper, Code & Data:** [https://b-score.github.io](https://b-score.github.io)
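For intuition, here is a tiny toy of the single-turn vs multi-turn comparison; the `ask()` function is a dummy stand-in for a real chat-completion call, and this is not the exact B-score computation from the paper.

```python
import random
from collections import Counter

# Dummy "model": biased toward "A" when it cannot see its own past answers,
# more balanced once its previous answers are visible in the conversation.
def ask(history):
    past = [m["content"] for m in history if m["role"] == "assistant"]
    if not past:
        return random.choices(["A", "B"], weights=[0.8, 0.2])[0]
    return random.choice(["A", "B"])

question = {"role": "user", "content": "Pick A or B at random."}

# Single-turn: every sample is an independent, fresh conversation.
single = [ask([question]) for _ in range(200)]

# Multi-turn: the model keeps seeing its own previous answers.
history, multi = [], []
for _ in range(200):
    answer = ask(history + [question])
    multi.append(answer)
    history += [question, {"role": "assistant", "content": answer}]

p_single = Counter(single)["A"] / len(single)
p_multi = Counter(multi)["A"] / len(multi)
print(f"P(A) single-turn={p_single:.2f}, multi-turn={p_multi:.2f}, gap={p_single - p_multi:+.2f}")
```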
| 2025-05-27T19:44:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwws3n/bscore_detecting_biases_in_large_language_models/
|
Substantial-Air-1285
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwws3n
| false | null |
t3_1kwws3n
|
/r/LocalLLaMA/comments/1kwws3n/bscore_detecting_biases_in_large_language_models/
| false | false |
self
| 11 | null |
Install an LLM on your MOBILE phone
| 0 |
I use this app to install LLMs 100% locally on my mobile phone.
And no, I am not sponsored or any of that crap; the app itself is 100% free, so there is no way that they are sponsoring anybody.
And yes, you can install huggingface.co models without leaving the app at all.
| 2025-05-27T19:46:08 |
Rare-Programmer-1747
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwwteg
| false | null |
t3_1kwwteg
|
/r/LocalLLaMA/comments/1kwwteg/install_llm_on_your_mobile_phone/
| false | false | 0 |
{'enabled': True, 'images': [{'id': 'P91Uz7r61L4yzXIkwnqBEAAIUHRf3MAGODVpLkG8h_A', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?width=108&crop=smart&auto=webp&s=e4bd3b028a1862743c3dbfdbf451ceebc7e8498f', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?width=216&crop=smart&auto=webp&s=8ef55cb0a39d1e5cdc8dc4dea2417c0fa4ed0ecd', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?width=320&crop=smart&auto=webp&s=57201378b010024f1224e0304b9cc82dd761f40f', 'width': 320}, {'height': 490, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?width=640&crop=smart&auto=webp&s=d5b098d0db6c4c7646cd6db138a0f46f7e668a67', 'width': 640}, {'height': 736, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?width=960&crop=smart&auto=webp&s=43715e7c95b92e74d4af358795ffe382fde10f98', 'width': 960}, {'height': 828, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?width=1080&crop=smart&auto=webp&s=4152cba5c66530d93d25759c77ab60b9637be359', 'width': 1080}], 'source': {'height': 828, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?auto=webp&s=1ed3749a3dcd29a8ef1db918aedd14888ae0d125', 'width': 1080}, 'variants': {}}]}
|
||
We built Curie: The Open-Source AI Co-Scientist Making ML More Accessible for Your Research
| 58 |
After personally seeing many researchers in fields like biology, materials science, and chemistry struggle to apply machine learning to their valuable domain datasets to accelerate scientific discovery and gain deeper insights, often due to the lack of specialized ML knowledge needed to select the right algorithms, tune hyperparameters, or interpret model outputs, we knew we had to help.
That's why we're so excited to introduce the new AutoML feature in [Curie](https://github.com/Just-Curieous/Curie) 🔬, our AI research experimentation co-scientist designed to make ML more accessible! Our goal is to empower researchers like them to rapidly test hypotheses and extract deep insights from their data. Curie automates the aforementioned complex ML pipeline, taking on the tedious yet critical work.
For example, Curie can generate highly performant models, achieving a 0.99 AUC (top 1% performance) for a melanoma (cancer) detection task. We're passionate about open science and invite you to try Curie and even contribute to making it better for everyone!
[Curie Overview](https://preview.redd.it/49dxl6s9pd3f1.png?width=1455&format=png&auto=webp&s=9399ab72db599f6da8aa2bdfe3666fc251c2d43b)
Check out our post: [https://www.just-curieous.com/machine-learning/research/2025-05-27-automl-co-scientist.html](https://www.just-curieous.com/machine-learning/research/2025-05-27-automl-co-scientist.html)
| 2025-05-27T19:49:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwwwil/we_build_curie_the_opensourced_ai_coscientist/
|
Pleasant-Type2044
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwwwil
| false | null |
t3_1kwwwil
|
/r/LocalLLaMA/comments/1kwwwil/we_build_curie_the_opensourced_ai_coscientist/
| false | false | 58 | null |
|
Mistral Small 3 is faster than the Qwen3 30B A3B model. It's weird
| 1 |
[removed]
| 2025-05-27T20:07:18 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwxcxq/mistral_small_3_faster_than_qwen3_30b_a3b_model/
|
Alone_Ad_6011
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwxcxq
| false | null |
t3_1kwxcxq
|
/r/LocalLLaMA/comments/1kwxcxq/mistral_small_3_faster_than_qwen3_30b_a3b_model/
| false | false |
self
| 1 | null |
Introducing free AI software: AI Chat to Cart and checkout with Stripe / PayPal (demo is using Llama via GroqCloud). Wush Wush Games is my son's video game store. I would love your feedback. Peace
| 1 |
You can find it here; I would love your feedback:
[https://github.com/store-craft/storecraft](https://github.com/store-craft/storecraft)
| 2025-05-27T20:16:19 |
https://v.redd.it/5pkiw3butd3f1
|
hendrixstring
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwxldn
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5pkiw3butd3f1/DASHPlaylist.mpd?a=1750968993%2CZmVmZWEzYjhiZWE4YTc0YTA1NjY1MWU4YTNmNWM3NGNjNmUyMGY1YTI2ZGY4NDIzMWFhNDE4NWQzNzIwZmJkNg%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/5pkiw3butd3f1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/5pkiw3butd3f1/HLSPlaylist.m3u8?a=1750968993%2CZDRlMDZhMDhkMmJiNDcwYTg0MTZmNjVjMTYzZmU0OWE4ZWRjNDNmNWJiY2MzOGZkNTYzNjZhZjM4OTRkZjQ0Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5pkiw3butd3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1174}}
|
t3_1kwxldn
|
/r/LocalLLaMA/comments/1kwxldn/introducing_free_ai_software_ai_chat_to_cart_and/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?width=108&crop=smart&format=pjpg&auto=webp&s=e928ccd40d53ebca3af5a6037ed38af409baf203', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?width=216&crop=smart&format=pjpg&auto=webp&s=7c2dd0b8a193c9109af87cb5418a46c37021d83b', 'width': 216}, {'height': 196, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?width=320&crop=smart&format=pjpg&auto=webp&s=ee8794026396c0f5cab4afa74fa4778c927bff0f', 'width': 320}, {'height': 392, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?width=640&crop=smart&format=pjpg&auto=webp&s=30ca7a9668b2c68c415bd41a698270dfd91cfb7e', 'width': 640}, {'height': 588, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?width=960&crop=smart&format=pjpg&auto=webp&s=5d63ec399f94941699063469437b41cbd7ef0dad', 'width': 960}, {'height': 662, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c6170c2d1ec36ba22a810291c520b3c49aca7f26', 'width': 1080}], 'source': {'height': 730, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?format=pjpg&auto=webp&s=6951b3728bbf3e8e52e4c03cd7c8c02215710ae5', 'width': 1190}, 'variants': {}}]}
|
|
Why is Mistral Small 3 faster than the Qwen3 30B A3B model?
| 1 |
[removed]
| 2025-05-27T20:24:03 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwxsi4/why_is_mistral_small_3_faster_than_the_qwen3_30b/
|
Alone_Ad_6011
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwxsi4
| false | null |
t3_1kwxsi4
|
/r/LocalLLaMA/comments/1kwxsi4/why_is_mistral_small_3_faster_than_the_qwen3_30b/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=216&crop=smart&auto=webp&s=e20458b3bc0a4d8ebf3e09b7e3615cfda4e00844', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=320&crop=smart&auto=webp&s=508265ec16105ddc4d2105e057c292f8470229ac', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=640&crop=smart&auto=webp&s=690b875bfe1b25ba2e96b432c42bb1b096935efd', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=960&crop=smart&auto=webp&s=ee86a1133471b58f18d2dbf89ec1c88906c2d623', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=1080&crop=smart&auto=webp&s=e42c63d534439a755f46f08c5db09cbaaefca3d0', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?auto=webp&s=6e0008e17dc8f6f6b13799bc7416400acacbaca0', 'width': 1260}, 'variants': {}}]}
|
How to make two llms work jointly in a problem solving task?
| 2 |
I am trying to understand whether there is any way to make two local LLMs collaborate on a problem-solving task. I am particularly curious to see the dynamics of such collaboration through systematic analysis of their conversational turns. Is this possible using, say, LM Studio or Ollama and Python?
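Something like the sketch below is one way to do it with the Ollama Python client (`pip install ollama`); the model names are placeholders for whatever is pulled locally, and LM Studio's OpenAI-compatible server could be swapped in instead.

```python
# Minimal sketch: two local models take alternating turns on one problem via Ollama.
import ollama

MODELS = ["llama3.1", "qwen2.5"]          # placeholders: any two models you have pulled
problem = "Estimate how many litres of paint are needed for a 4m x 5m room."

transcript = [{"role": "user", "content": problem}]
for turn in range(6):
    model = MODELS[turn % 2]
    reply = ollama.chat(model=model, messages=transcript)["message"]["content"]
    print(f"--- turn {turn} ({model}) ---\n{reply}\n")
    # Keep the full transcript so you can later analyse turn length, agreement, etc.
    transcript.append({"role": "assistant", "content": reply})
    transcript.append({"role": "user", "content": "Critique or refine the previous answer."})
```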
| 2025-05-27T20:40:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwy6u7/how_to_make_two_llms_work_jointly_in_a_problem/
|
sbs1799
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwy6u7
| false | null |
t3_1kwy6u7
|
/r/LocalLLaMA/comments/1kwy6u7/how_to_make_two_llms_work_jointly_in_a_problem/
| false | false |
self
| 2 | null |
Local RAG for PDF questions
| 4 |
Hello, I am looking for some feedback on a simple project I put together for asking questions about PDFs. Does anyone have experience with ChromaDB and LangChain in combination with Ollama?
[https://github.com/Mschroeder95/ai-rag-setup](https://github.com/Mschroeder95/ai-rag-setup)
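For comparison, here is a minimal framework-free sketch of the same flow (pypdf + ChromaDB + Ollama, no LangChain); the file name and model name are placeholders.

```python
# Sketch: PDF -> overlapping text chunks -> Chroma collection -> retrieve -> answer with Ollama.
import chromadb
import ollama
from pypdf import PdfReader

text = "\n".join(page.extract_text() or "" for page in PdfReader("manual.pdf").pages)
chunks = [text[i:i + 1000] for i in range(0, len(text), 800)]     # 1000-char chunks, 200 overlap

client = chromadb.PersistentClient(path="./chroma_db")
col = client.get_or_create_collection("pdf_docs")                 # default embedding function
col.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

question = "What is the warranty period?"
hits = col.query(query_texts=[question], n_results=3)["documents"][0]
prompt = "Answer using only this context:\n" + "\n---\n".join(hits) + f"\n\nQuestion: {question}"
print(ollama.chat(model="llama3.1", messages=[{"role": "user", "content": prompt}])["message"]["content"])
```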
| 2025-05-27T20:49:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwyfnq/local_rag_for_pdf_questions/
|
Overall_Advantage750
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwyfnq
| false | null |
t3_1kwyfnq
|
/r/LocalLLaMA/comments/1kwyfnq/local_rag_for_pdf_questions/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': 'G5JWJlj-Qa4L9MQEJRUPEbJQQanWpbpjHNG8v9CjD_A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?width=108&crop=smart&auto=webp&s=f01286ded6609e1a113a49a38e7f63998e31644b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?width=216&crop=smart&auto=webp&s=81890ac57102defe476094e5bedaab0a1eb7b883', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?width=320&crop=smart&auto=webp&s=0582b2165e31936ec38e0bdf7a1e6085b918461d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?width=640&crop=smart&auto=webp&s=8f816a6a57d9bdd97adb61eb8f9d1369afb3f932', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?width=960&crop=smart&auto=webp&s=313056ea2b4079ad4171748fa0ccaa2c2547cd1f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?width=1080&crop=smart&auto=webp&s=a2736b2cd5800fbd827bc7705ed061d80b7f07ed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?auto=webp&s=aa7847a1ed1a3c08c121a234e6505672e928a025', 'width': 1200}, 'variants': {}}]}
|
OpenRouter Inference: Issue with Combined Contexts
| 1 |
[removed]
| 2025-05-27T20:53:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwyivr/openrouter_inference_issue_with_combined_contexts/
|
Critical-Sea-2581
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwyivr
| false | null |
t3_1kwyivr
|
/r/LocalLLaMA/comments/1kwyivr/openrouter_inference_issue_with_combined_contexts/
| false | false |
self
| 1 | null |
Your favourite non-English/Chinese model
| 5 |
Much like English is the lingua franca of programming, it also seems to be the preferred language for, well, language models (plus Chinese, obviously). For those generating content or using models in languages that are not Chinese or English, what is your model or models of choice?
Gemma 3 and Qwen 3 boast, on paper, some of the highest numbers of languages "officially" supported (except Gemma 3 1B, which Google decided to neuter entirely), but honestly, outside of high-resource languages they often leave a lot to be desired imo. Don't even get me started on forgetting to turn off thinking on Qwen when attempting something outside of English and Chinese. That being said, it is fun to see labs and universities in Europe and Asia put out finetunes of these models for local languages, but it is a bit sad to see true multilingual excellence still kinda locked behind APIs.
| 2025-05-27T20:58:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwynyt/your_favourite_nonenglishchinese_model/
|
JohnnyOR
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwynyt
| false | null |
t3_1kwynyt
|
/r/LocalLLaMA/comments/1kwynyt/your_favourite_nonenglishchinese_model/
| false | false |
self
| 5 | null |
How to use tool calling with Qwen 2.5 Coder?
| 1 |
[removed]
| 2025-05-27T21:02:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwyro6/how_to_use_tools_calling_with_qwen_25_coder/
|
Educational-Shoe9300
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwyro6
| false | null |
t3_1kwyro6
|
/r/LocalLLaMA/comments/1kwyro6/how_to_use_tools_calling_with_qwen_25_coder/
| false | false |
self
| 1 | null |
Looking for a Japanese translation model
| 1 |
[removed]
| 2025-05-27T21:29:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwzftb/looking_for_japanese_translation_model/
|
Blackm1996
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwzftb
| false | null |
t3_1kwzftb
|
/r/LocalLLaMA/comments/1kwzftb/looking_for_japanese_translation_model/
| false | false |
self
| 1 | null |
How the turn tables
| 1 | 2025-05-27T21:30:08 |
waiting_for_zban
|
i.imgflip.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwzgi0
| false | null |
t3_1kwzgi0
|
/r/LocalLLaMA/comments/1kwzgi0/how_the_turn_tables/
| false | false | 1 |
{'enabled': True, 'images': [{'id': '1WdwqUlHhokGbqsr5ZriC3VQC6gPNyl-dIH05KuBZX0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/vTL3CWzPgrKTuBhUfhMQqYf570wFQy7oAPtjtpDdcyg.jpg?width=108&crop=smart&auto=webp&s=0bc59d8a2f5cad6421b5e617018df16f760128fb', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/vTL3CWzPgrKTuBhUfhMQqYf570wFQy7oAPtjtpDdcyg.jpg?width=216&crop=smart&auto=webp&s=9abe101e69838a86b5375102a30b3e514be26197', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/vTL3CWzPgrKTuBhUfhMQqYf570wFQy7oAPtjtpDdcyg.jpg?width=320&crop=smart&auto=webp&s=6ad79acdaec7251c95cd5860d8d1e0a1934bacea', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/vTL3CWzPgrKTuBhUfhMQqYf570wFQy7oAPtjtpDdcyg.jpg?width=640&crop=smart&auto=webp&s=2ed14dd5cd1b748da8f58cfe34f582d8851935be', 'width': 640}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/vTL3CWzPgrKTuBhUfhMQqYf570wFQy7oAPtjtpDdcyg.jpg?auto=webp&s=310b61b2287e667edad7d527b92df59cddbf7033', 'width': 750}, 'variants': {}}]}
|
|||
Google are doing some incredible work.
| 1 |
[removed]
| 2025-05-27T21:35:04 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwzkth/google_are_doing_some_incredible_work/
|
alimmmmmmm69
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwzkth
| false | null |
t3_1kwzkth
|
/r/LocalLLaMA/comments/1kwzkth/google_are_doing_some_incredible_work/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'r0sxgNH0IzUXMQaOyiTA50SnDxGWeLZUCJ3d-KYmPFY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=108&crop=smart&auto=webp&s=987df3d25c3798e65bcfda4cff5d8c2fb393989a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=216&crop=smart&auto=webp&s=6506e7157bcb7e604dad5e9bbf5ca09c69f05c4c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=320&crop=smart&auto=webp&s=45b5150462981f181ac18e97e96807140dc91e10', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?auto=webp&s=2a70857ccfffe017ad40bf1cdb352af605770929', 'width': 480}, 'variants': {}}]}
|
Gemma 3n
| 1 |
[removed]
| 2025-05-27T21:37:16 |
https://www.youtube.com/watch?v=eJFJRyXEHZ0
|
alimmmmmmm69
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwzmvk
| false |
{'oembed': {'author_name': 'Google for Developers', 'author_url': 'https://www.youtube.com/@GoogleDevelopers', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/eJFJRyXEHZ0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Announcing Gemma 3n Preview: Powerful, Efficient, Mobile-First AI"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/eJFJRyXEHZ0/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Announcing Gemma 3n Preview: Powerful, Efficient, Mobile-First AI', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kwzmvk
|
/r/LocalLLaMA/comments/1kwzmvk/gemma_3n/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'r0sxgNH0IzUXMQaOyiTA50SnDxGWeLZUCJ3d-KYmPFY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=108&crop=smart&auto=webp&s=987df3d25c3798e65bcfda4cff5d8c2fb393989a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=216&crop=smart&auto=webp&s=6506e7157bcb7e604dad5e9bbf5ca09c69f05c4c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=320&crop=smart&auto=webp&s=45b5150462981f181ac18e97e96807140dc91e10', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?auto=webp&s=2a70857ccfffe017ad40bf1cdb352af605770929', 'width': 480}, 'variants': {}}]}
|
|
State of open-source computer using agents (2025)?
| 2 |
I'm looking for a new domain to dig into after spending time on language, music, and speech.
I played around with [OpenAI's CUA](https://openai.com/index/computer-using-agent/) and think it's a cool idea. What are the best open-source CUA models available today to build on and improve? I'm looking for something hackable and with a good community (or a dev/team open to reasonable pull requests).
I thought I'd make a post here to crowdsource your experiences.
| 2025-05-27T21:42:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwzrh4/state_of_opensource_computer_using_agents_2025/
|
entsnack
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwzrh4
| false | null |
t3_1kwzrh4
|
/r/LocalLLaMA/comments/1kwzrh4/state_of_opensource_computer_using_agents_2025/
| false | false |
self
| 2 | null |
Advice needed - looking for the best model to run given my hardware
| 1 |
[removed]
| 2025-05-27T21:47:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kwzvnx/advice_needed_looking_for_the_best_model_to_run/
|
2x4x12
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kwzvnx
| false | null |
t3_1kwzvnx
|
/r/LocalLLaMA/comments/1kwzvnx/advice_needed_looking_for_the_best_model_to_run/
| false | false |
self
| 1 | null |
Deepseek R2 Release?
| 65 |
Didn't DeepSeek say they were accelerating the timeline to release R2 before the original May release date, shooting for April? Now that it's almost June, have they said anything about R2 or when they will release it?
| 2025-05-27T22:00:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx077t/deepseek_r2_release/
|
Old-Medicine2445
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx077t
| false | null |
t3_1kx077t
|
/r/LocalLLaMA/comments/1kx077t/deepseek_r2_release/
| false | false |
self
| 65 | null |
Is a MacBook Pro M3 Max with 36GB memory a good idea for $2300 as compared to an equivalent pc build?
| 1 |
[removed]
| 2025-05-27T22:14:23 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx0jbu/is_a_macbook_pro_m3_max_with_36gb_memory_a_good/
|
nissan_sunny
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx0jbu
| false | null |
t3_1kx0jbu
|
/r/LocalLLaMA/comments/1kx0jbu/is_a_macbook_pro_m3_max_with_36gb_memory_a_good/
| false | false |
self
| 1 | null |
What am I doing wrong (Qwen3-8B)?
| 0 |
Qwen3-8B Q6_K_L in LMStudio. TitanXP (12GB VRAM) gpu, 32GB ram.
As far as I've read, this model should work fine with my card, but it's incredibly slow. It keeps "thinking" for the simplest prompts.
The first thing I tried was saying "Hello" and it immediately started doing math, trying to figure out the solution to a Pythagorean theorem problem I didn't give it.
I told it to "Say Hi". It "thought for 14.39 seconds" and then said "hello".
Mistral Nemo Instruct 2407 Q4_K_S (12B parameter model) runs significantly faster even though it's a larger model.
Is this simply a quantization issue or is something wrong here?
| 2025-05-27T23:00:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx1kct/what_am_i_doing_wrong_qwen38b/
|
BenefitOfTheDoubt_01
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx1kct
| false | null |
t3_1kx1kct
|
/r/LocalLLaMA/comments/1kx1kct/what_am_i_doing_wrong_qwen38b/
| false | false |
self
| 0 | null |
GitHub - som1tokmynam/FusionQuant: FusionQuant Model Merge & GGUF Conversion Pipeline - Your Free Toolkit for Custom LLMs!
| 3 |
Hey all,
Just dropped **FusionQuant v1.4**: a Docker-based toolkit to easily merge LLMs (with Mergekit) and convert them to GGUF (Llama.cpp) or the newly supported EXL2 format (Exllamav2) for local use.
**GitHub:** [https://github.com/som1tokmynam/FusionQuant](https://github.com/som1tokmynam/FusionQuant)
**Key v1.4 Updates:**
* ✨ **EXL2 Quantization:** Now supports Exllamav2 for efficient EXL2 model creation.
* 🚀 **Optimized Docker:** Uses custom precompiled `llama.cpp` and `exl2`.
* 💾 **Local Cache for Merges:** Save models locally to speed up future merges.
* ⚙️ **More GGUF Options:** Expanded GGUF quantization choices.
**Core Features:**
* Merge models with YAML, upload to Hugging Face.
* Convert to GGUF or EXL2 with many quantization options.
* User-friendly Gradio Web UI.
* Run as a pipeline or use steps standalone.
**Get Started (Docker):** Check the Github for the full `docker run` command and requirements (NVIDIA GPU recommended for EXL2/GGUF).
| 2025-05-27T23:26:01 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx24nq/github_som1tokmynamfusionquant_fusionquant_model/
|
Som1tokmynam
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx24nq
| false | null |
t3_1kx24nq
|
/r/LocalLLaMA/comments/1kx24nq/github_som1tokmynamfusionquant_fusionquant_model/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': 'hNkx9ytwlNXddx1fLS-mwtVWl2w2GvvtYklsF8BkRnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?width=108&crop=smart&auto=webp&s=a89952465907ddd1cd4149b67da925a27adc6894', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?width=216&crop=smart&auto=webp&s=71a037475e2a7dae3436b6e4c4cb38d47649ab3f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?width=320&crop=smart&auto=webp&s=deac42c5f5d328710c21dc38a6a06b16a5040cdc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?width=640&crop=smart&auto=webp&s=dbcc48db25ccf318a05d0b78053f5fff2b79bccb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?width=960&crop=smart&auto=webp&s=037a833547249808a8447f5c61b877d5706e0736', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?width=1080&crop=smart&auto=webp&s=57518b7271f230de0816d013c82b06b37f39fe75', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?auto=webp&s=99aacb858aa00947a81c2e65a6b92963d7aac712', 'width': 1200}, 'variants': {}}]}
|
Qwen3-14B vs Gemma3-12B
| 33 |
What do you guys think about these models? Which one should I choose?
I mostly ask programming knowledge questions, primarily Go and Java.
| 2025-05-27T23:42:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx2hcm/qwen314b_vs_gemma312b/
|
COBECT
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx2hcm
| false | null |
t3_1kx2hcm
|
/r/LocalLLaMA/comments/1kx2hcm/qwen314b_vs_gemma312b/
| false | false |
self
| 33 | null |
Is LLaMa the right choice for local agents that will make use of outside data?
| 0 |
Trying to build my first local agentic system on a new Mac Mini M4 with 24GB RAM, but I am not sure if LLaMA is the right choice, because a crucial requirement is that it be able to connect to my Google Calendar.
Is it really challenging to make local models work with online tools and is LLaMa capable of this?
Any advice appreciated.
| 2025-05-27T23:57:56 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx2tfl/is_llama_the_right_choice_for_local_agents_that/
|
xtrafunky
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx2tfl
| false | null |
t3_1kx2tfl
|
/r/LocalLLaMA/comments/1kx2tfl/is_llama_the_right_choice_for_local_agents_that/
| false | false |
self
| 0 | null |
Creating a local LLM-powered NPC Dialog System (with simple RAG)
| 1 | 2025-05-27T23:58:29 |
https://erikr.bearblog.dev/creating-an-llm-powered-npc-dialog-system-with-simple-rag/
|
fumblebear
|
erikr.bearblog.dev
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx2ttx
| false | null |
t3_1kx2ttx
|
/r/LocalLLaMA/comments/1kx2ttx/creating_a_local_llmpowered_npc_dialog_system/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'tXq_cdqcJ-341mHBzEnpF6FJ9AzKuXG0coRBECeQ0yE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hmZ2zuFGLGZ_OHs7N14Ue07JMyCkUd5qHoaxRMtNn-c.jpg?width=108&crop=smart&auto=webp&s=85ed7d99594e160fecf37d78e7c330e1e21dafaa', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hmZ2zuFGLGZ_OHs7N14Ue07JMyCkUd5qHoaxRMtNn-c.jpg?width=216&crop=smart&auto=webp&s=fcf269e9d511046fb31891aad5fbd3e00fa6bdec', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hmZ2zuFGLGZ_OHs7N14Ue07JMyCkUd5qHoaxRMtNn-c.jpg?width=320&crop=smart&auto=webp&s=498105fe4bd86ccdb97825d4ee20660a83dc78ff', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/hmZ2zuFGLGZ_OHs7N14Ue07JMyCkUd5qHoaxRMtNn-c.jpg?width=640&crop=smart&auto=webp&s=9d8bff50a43251fe24eadddd2869124454c02cae', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/hmZ2zuFGLGZ_OHs7N14Ue07JMyCkUd5qHoaxRMtNn-c.jpg?width=960&crop=smart&auto=webp&s=656ff462d41013247877e45549736577f760f9a9', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/hmZ2zuFGLGZ_OHs7N14Ue07JMyCkUd5qHoaxRMtNn-c.jpg?auto=webp&s=6e9afe4e8735e57c0b300b7358fc92b0d56ee1a4', 'width': 1024}, 'variants': {}}]}
|
||
Help with using unsloth on a structured conversation flow
| 1 |
[removed]
| 2025-05-28T00:21:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx3ayn/help_with_using_unsloth_on_a_structured/
|
rjjacob
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx3ayn
| false | null |
t3_1kx3ayn
|
/r/LocalLLaMA/comments/1kx3ayn/help_with_using_unsloth_on_a_structured/
| false | false |
self
| 1 | null |
How are you using Qwen?
| 11 |
I’m currently training speculative decoding models on Qwen, aiming for 3-4x faster inference. However, I’ve noticed that Qwen’s reasoning style significantly differs from typical LLM outputs, reducing the expected performance gains. To address this, I’m looking to enhance training with additional reasoning-focused datasets aligned closely with real-world use cases.
I’d love your insights:
• Which model are you currently using?
• Do your applications primarily involve reasoning, or are they mostly direct outputs? Or a combination?
• What’s your main use case for Qwen? coding, Q&A, or something else?
If you’re curious how I’m training the model, I’ve open-sourced the repo and posted here: https://www.reddit.com/r/LocalLLaMA/s/2JXNhGInkx
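For anyone new to the technique, here is a deliberately tiny toy of the draft-and-verify loop that speculative decoding relies on (dummy string stand-ins, not the actual training or serving code). The point it illustrates: the speedup hinges on how many draft tokens the target model accepts, which is exactly why a mismatched reasoning style erodes the gains.

```python
# Toy of speculative decoding's draft-and-verify loop with dummy "models".
# Real implementations verify all k draft tokens in a single target forward pass.
def draft_model(prefix):                    # cheap draft model
    guesses = {"h": "e", "e": "l", "l": "l", "o": " ", " ": "w"}
    return guesses.get(prefix[-1], "d")

def target_model(prefix):                   # expensive target model (ground truth here)
    target = "hello world"
    return target[len(prefix)] if len(prefix) < len(target) else ""

prefix, k = "h", 4
while target_model(prefix):
    drafts = []
    for _ in range(k):                      # draft k tokens cheaply
        drafts.append(draft_model(prefix + "".join(drafts)))
    accepted = 0
    for d in drafts:                        # verify them against the target model
        if d != target_model(prefix + "".join(drafts[:accepted])):
            break
        accepted += 1
    # keep the accepted run, or fall back to one target token so we always make progress
    prefix += "".join(drafts[:accepted]) or target_model(prefix)
    print(f"accepted {accepted}/{k} draft tokens -> {prefix!r}")
```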
| 2025-05-28T00:29:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx3h5w/how_are_you_using_qwen/
|
xnick77x
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx3h5w
| false | null |
t3_1kx3h5w
|
/r/LocalLLaMA/comments/1kx3h5w/how_are_you_using_qwen/
| false | false |
self
| 11 | null |
Best Browser-Agent with Image Recognition/Image Input?
| 1 |
[removed]
| 2025-05-28T01:14:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx4e8n/best_browseragent_with_image_recogntionimage_input/
|
SafuWaifu
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx4e8n
| false | null |
t3_1kx4e8n
|
/r/LocalLLaMA/comments/1kx4e8n/best_browseragent_with_image_recogntionimage_input/
| false | false |
self
| 1 | null |
Tip for those building agents. The CLI is king.
| 28 |
There are a lot of ways of exposing tools to your agents depending on the framework or your implementation. MCP servers are making this trivial. But I am finding that exposing a simple CLI tool to your LLM/agent, with instructions on how to use common CLI commands, can actually work better while reducing complexity. For example, the `wc` command: https://en.wikipedia.org/wiki/Wc_(Unix)
Crafting a system prompt for your agents to make use of these universal, but perhaps obscure commands for your level of experience, can greatly increase the probability of a successful task/step completion.
I have been experimenting with using a lot of MCP servers and exposing their tools to my agent fleet implementation (what should a group of agents be called? A perplexity of agents? :D), and have found that giving your agents the ability to simply issue CLI commands can work a lot better.
Thoughts?
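To make it concrete, this is roughly the shape of the tool I mean; a minimal sketch, with the allowlist and the `wc` call purely as examples rather than a full implementation.

```python
import shlex
import subprocess

# Allowlist of "universal but perhaps obscure" commands the agent may invoke.
ALLOWED = {"wc", "grep", "sort", "uniq", "head", "tail", "cut"}

def run_cli(command: str, timeout: int = 10) -> str:
    """Tool callable by the agent: run an allowlisted command and return its output."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        return f"error: command not allowed: {argv[0] if argv else ''}"
    result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    return result.stdout if result.returncode == 0 else f"error: {result.stderr.strip()}"

# e.g. the agent asks for line/word/byte counts instead of reading a whole file into context
print(run_cli("wc -lwc notes.txt"))
```

The system prompt then just documents the allowlist and a couple of usage examples, and the model does the rest.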
| 2025-05-28T01:46:50 |
https://www.reddit.com/gallery/1kx51dp
|
LocoMod
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx51dp
| false | null |
t3_1kx51dp
|
/r/LocalLLaMA/comments/1kx51dp/tip_for_those_building_agents_the_cli_is_king/
| false | false | 28 | null |
|
Curious what everyone thinks of Meta's long term AI strategy. Do you think Meta will find its market when compared to Gemini/OpenAI? Open source obviously has its benefits but Mistral/Deepseek are worthy competitors. Would love to hear thoughts of where Llama is and potential to overtake?
| 1 |
[removed]
| 2025-05-28T02:11:40 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx5jd9/curious_what_everyone_thinks_of_metas_long_term/
|
Excellent-Plastic638
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx5jd9
| false | null |
t3_1kx5jd9
|
/r/LocalLLaMA/comments/1kx5jd9/curious_what_everyone_thinks_of_metas_long_term/
| false | false |
self
| 1 | null |
Use case for graph summarization (chart to table)
| 1 |
I have a bunch of radio-frequency use-case graphs: capacitance, inductance, IV, transistor curves, and so on.
I want to train a model that literally outputs a table.
I found DePlot, which I think suits my use case. The issue is that I have few samples to fine-tune on. I was checking if I could get the setup to work with LoRA, but it is not even converging on the training dataset. Not sure if I am doing something wrong. Models like Qwen do, but LLaMA-Factory does the groundwork well for us there.
I want to make DePlot work since it focuses specifically on chart-to-table.
Does anyone have experience setting up DePlot and making it converge on the training dataset, at least for even a single sample?
| 2025-05-28T03:48:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx7ezh/usecase_for_graph_summarization_chart_to_table/
|
unknown5493
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx7ezh
| false | null |
t3_1kx7ezh
|
/r/LocalLLaMA/comments/1kx7ezh/usecase_for_graph_summarization_chart_to_table/
| false | false |
self
| 1 | null |
Made a performant benchmarking and evaluation client for inference servers!
| 1 |
Figured I'd share this here in case anyone is interested. It's a goofy project I've been working on, inspired by being annoyed at how slow LM-Eval can be. Still a WIP; I need to do a lot of work on better eval metrics (like F1, etc.) and try a number of different datasets.
| 2025-05-28T03:49:04 |
https://github.com/sangstar/scale
|
DM_ME_YOUR_CATS_PAWS
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx7fe4
| false | null |
t3_1kx7fe4
|
/r/LocalLLaMA/comments/1kx7fe4/made_a_performant_benchmarking_and_evaluation/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3DYE1LX7FaRMet0V4j6wppEIkS_Z7clacqDAaxycJmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=108&crop=smart&auto=webp&s=2b347618f6bf6886041f3444d5d4047bdb559b02', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=216&crop=smart&auto=webp&s=cc25cbecf4cffcdd7bb316cd3b6c1ccf39adbb9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=320&crop=smart&auto=webp&s=ae902ea8ed687cd7c71a51f129781d591a9044a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=640&crop=smart&auto=webp&s=427330da8e6e491bb4168fc40d74dee468cca398', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=960&crop=smart&auto=webp&s=2e5b3b0691cfbdb6d896bd83ef287dee777f69d3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=1080&crop=smart&auto=webp&s=eee93c56ea02ff60b8ae2d427af9ca759b4d5702', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?auto=webp&s=71e8f76252b200c8bdad002e176a8482b1f94609', 'width': 1200}, 'variants': {}}]}
|
|
Building a KYC Agent – LangGraph vs AutoGen vs CrewAI
| 1 |
[removed]
| 2025-05-28T03:53:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx7hzy/building_a_kyc_agent_langgraph_vs_autogen_vs/
|
Careless-Bat-1884
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx7hzy
| false | null |
t3_1kx7hzy
|
/r/LocalLLaMA/comments/1kx7hzy/building_a_kyc_agent_langgraph_vs_autogen_vs/
| false | false |
self
| 1 | null |
Made a super fast OpenAI API endpoint benchmarking and evaluation client!
| 1 |
[removed]
| 2025-05-28T03:58:29 |
https://github.com/sangstar/scale
|
Traditional-Review22
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx7ldt
| false | null |
t3_1kx7ldt
|
/r/LocalLLaMA/comments/1kx7ldt/made_a_super_fast_openai_api_endpoint/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3DYE1LX7FaRMet0V4j6wppEIkS_Z7clacqDAaxycJmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=108&crop=smart&auto=webp&s=2b347618f6bf6886041f3444d5d4047bdb559b02', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=216&crop=smart&auto=webp&s=cc25cbecf4cffcdd7bb316cd3b6c1ccf39adbb9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=320&crop=smart&auto=webp&s=ae902ea8ed687cd7c71a51f129781d591a9044a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=640&crop=smart&auto=webp&s=427330da8e6e491bb4168fc40d74dee468cca398', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=960&crop=smart&auto=webp&s=2e5b3b0691cfbdb6d896bd83ef287dee777f69d3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=1080&crop=smart&auto=webp&s=eee93c56ea02ff60b8ae2d427af9ca759b4d5702', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?auto=webp&s=71e8f76252b200c8bdad002e176a8482b1f94609', 'width': 1200}, 'variants': {}}]}
|
|
Super fast OpenAI API inference engine benchmarking and evaluation client I made. Check it out!
| 1 |
[removed]
| 2025-05-28T04:03:11 |
https://github.com/sangstar/scale
|
Traditional-Review22
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx7omh
| false | null |
t3_1kx7omh
|
/r/LocalLLaMA/comments/1kx7omh/super_fast_openai_api_inference_engine/
| false | false | 1 |
{'enabled': False, 'images': [{'id': '3DYE1LX7FaRMet0V4j6wppEIkS_Z7clacqDAaxycJmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=108&crop=smart&auto=webp&s=2b347618f6bf6886041f3444d5d4047bdb559b02', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=216&crop=smart&auto=webp&s=cc25cbecf4cffcdd7bb316cd3b6c1ccf39adbb9a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=320&crop=smart&auto=webp&s=ae902ea8ed687cd7c71a51f129781d591a9044a4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=640&crop=smart&auto=webp&s=427330da8e6e491bb4168fc40d74dee468cca398', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=960&crop=smart&auto=webp&s=2e5b3b0691cfbdb6d896bd83ef287dee777f69d3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=1080&crop=smart&auto=webp&s=eee93c56ea02ff60b8ae2d427af9ca759b4d5702', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?auto=webp&s=71e8f76252b200c8bdad002e176a8482b1f94609', 'width': 1200}, 'variants': {}}]}
|
|
How much VRAM headroom for context?
| 7 |
Still new to this and couldn't find a decent answer. I've been testing various models and I'm trying to find the largest model that I can run effectively on my 5090. The calculator on HF is giving me errors regardless of which model I enter. Is there a rule of thumb that one can follow for a rough estimate? I want to try running the Llama 70B Q3_K_S model that takes up 30.9GB of VRAM, which would only leave me with 1.1GB VRAM for context. Is this too low?
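For a rough rule of thumb, the KV cache needs about 2 x layers x kv_heads x head_dim x bytes per token. A back-of-the-envelope sketch, assuming Llama-70B-style GQA (80 layers, 8 KV heads, head dim 128) and an FP16 cache:

```python
# Back-of-the-envelope KV-cache estimate; real usage also includes compute
# buffers, and runtimes like llama.cpp can quantize the KV cache to shrink it.
layers, kv_heads, head_dim, bytes_per_elem = 80, 8, 128, 2
per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem    # K and V
print(f"{per_token / 2**20:.2f} MiB per token")                  # ~0.31 MiB
for ctx in (2048, 4096, 8192, 16384):
    print(f"{ctx:>6} tokens -> {per_token * ctx / 2**30:.2f} GiB KV cache")
```

By that estimate, 1.1GB of headroom covers only around 3.3k tokens before any other overhead, so it is tight.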
| 2025-05-28T04:24:08 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx82bo/how_much_vram_headroom_for_context/
|
Nomski88
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx82bo
| false | null |
t3_1kx82bo
|
/r/LocalLLaMA/comments/1kx82bo/how_much_vram_headroom_for_context/
| false | false |
self
| 7 | null |
Best model for 4070 TI Super
| 2 |
Hello there, hope everyone is doing well.
I am kinda new to this world, so I have been wondering what the best model for my graphics card would be. I want to use it for general purposes, like asking what colour blankets I should get if my room is white, what sizes I should buy, etc.
I've just used ChatGPT with the free trial of their premium AI and it was quite good, so I'd also like to know how "bad" a model running locally is compared to ChatGPT, for example. Can a local model browse the internet?
Thanks in advance guys!
| 2025-05-28T05:52:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx9kje/best_model_for_4070_ti_super/
|
Beniko19
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx9kje
| false | null |
t3_1kx9kje
|
/r/LocalLLaMA/comments/1kx9kje/best_model_for_4070_ti_super/
| false | false |
self
| 2 | null |
Megakernel doubles Llama-1B inference speed for batch size 1
| 73 |
The authors of this [blog-like paper](https://hazyresearch.stanford.edu/blog/2025-05-27-no-bubbles) at Stanford found that vLLM and SGLang lose significant performance due to CUDA overhead at low batch sizes - what you usually use when running locally to chat. Their improvement doubles the inference speed on an H100, which, however, has significantly higher memory bandwidth than a 3090, for example. It remains to be seen how this scales to consumer GPUs. The benefits diminish as the model gets larger.
The best thing is that even with their optimizations there still seems to be some room left for further improvements - theoretically. There was also no word on llama.cpp in there. Their publication is a nice & easy read, though.
| 2025-05-28T05:58:06 |
https://www.reddit.com/r/LocalLLaMA/comments/1kx9nfk/megakernel_doubles_llama1b_inference_speed_for/
|
Chromix_
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kx9nfk
| false | null |
t3_1kx9nfk
|
/r/LocalLLaMA/comments/1kx9nfk/megakernel_doubles_llama1b_inference_speed_for/
| false | false |
self
| 73 |
{'enabled': False, 'images': [{'id': '-WHgGLJANkDpubg8JwSLJ_kMgGHdyAiWnD4mQMVCLm0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/SkkOsvk-2jlZgGZC1fPBHplz1bPi5ZFKveN7yDHKX3c.jpg?width=108&crop=smart&auto=webp&s=976ec4699fe996a0df9064bec720deebd3fb92bc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/SkkOsvk-2jlZgGZC1fPBHplz1bPi5ZFKveN7yDHKX3c.jpg?width=216&crop=smart&auto=webp&s=e4b9cd54d51cd5be4de7fa6cc172b08c07d1ce57', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/SkkOsvk-2jlZgGZC1fPBHplz1bPi5ZFKveN7yDHKX3c.jpg?width=320&crop=smart&auto=webp&s=70f2c9fa48e28be9e270246bd6352928cee42a59', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/SkkOsvk-2jlZgGZC1fPBHplz1bPi5ZFKveN7yDHKX3c.jpg?auto=webp&s=92a28305715d6e192db7b969d053c195981bb79b', 'width': 460}, 'variants': {}}]}
|
Google AI Edge Gallery is released!
| 1 |
[removed]
| 2025-05-28T06:27:28 |
https://x.com/itsPaulAi/status/1927453363425210810
|
Lynncc6
|
x.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxa3qc
| false | null |
t3_1kxa3qc
|
/r/LocalLLaMA/comments/1kxa3qc/google_ai_edge_gallery_is_released/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'XpUhnAnsQPxzWMB4hOye_0tpC_GVc2EK04U06ew4CaE', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?width=108&crop=smart&auto=webp&s=2639f0dd23b34a72725e14099054779fc5fad746', 'width': 108}, {'height': 196, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?width=216&crop=smart&auto=webp&s=9cd1623467291f708e85e5b0efadd5ee0d6cf53a', 'width': 216}, {'height': 290, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?width=320&crop=smart&auto=webp&s=6ad30334af1ba10b482d0ab37cb2d63390c54231', 'width': 320}, {'height': 580, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?width=640&crop=smart&auto=webp&s=3d7b08843efdd97d1ea5e3b1bf30b09ce55b481b', 'width': 640}, {'height': 871, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?width=960&crop=smart&auto=webp&s=c412aa129ff683597f70134e5d1c7d8966728c23', 'width': 960}, {'height': 980, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?width=1080&crop=smart&auto=webp&s=f3b868f3665b9f7a5c6e4f511b9ebe7dc2cf85db', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?auto=webp&s=f87db2d86ab0a8b4fdbdb16a1e23eec5980d2395', 'width': 1190}, 'variants': {}}]}
|
|
Google AI Edge Gallery
| 189 |
**Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.**
The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android *(available now)* and iOS *(coming soon)* devices. Dive into a world of creative and practical AI use cases, all running locally, without needing an internet connection once the model is loaded. Experiment with different models, chat, ask questions with images, explore prompts, and more!
[https://github.com/google-ai-edge/gallery?tab=readme-ov-file](https://github.com/google-ai-edge/gallery?tab=readme-ov-file)
| 2025-05-28T06:33:50 |
Lynncc6
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxa788
| false | null |
t3_1kxa788
|
/r/LocalLLaMA/comments/1kxa788/google_ai_edge_gallery/
| false | false | 189 |
{'enabled': True, 'images': [{'id': '6NTG7vNynnFIuiZo1EXhOu6NSSi5gAD7jh6L4knpExQ', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?width=108&crop=smart&auto=webp&s=2b3febb2ea73a2f6b70df4940bbef935ffb54fb9', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?width=216&crop=smart&auto=webp&s=17201c760c6b09e2bcdfb7beb17d0e334692e003', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?width=320&crop=smart&auto=webp&s=d0b0f2702c8f31febd16bc9a110c05b49a7dbc38', 'width': 320}, {'height': 404, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?width=640&crop=smart&auto=webp&s=4720f1c95bf832e5eacd2490cf5b69783a79a11b', 'width': 640}, {'height': 607, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?width=960&crop=smart&auto=webp&s=45dc303be0e018d3c2efe52208cccef87de6f761', 'width': 960}, {'height': 683, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?width=1080&crop=smart&auto=webp&s=6c381b89a263fb87f214b9b870727d246eead365', 'width': 1080}], 'source': {'height': 1938, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?auto=webp&s=2516eaa021abd775e7054616f5b109bf43a74b0f', 'width': 3064}, 'variants': {}}]}
|
||
When do you think the gap between local llm and o4-mini can be closed
| 15 |
Not sure if OpenAI recently upgraded this free o4-mini version, but I found this model really surpasses almost every local model in both correctness and consistency. I mainly tested the coding part (not agent mode). It can understand the problem very well with minimal context (even compared to Claude 3.7 & 4). I really hope one day we can get this kind of thing running in a local setup.
| 2025-05-28T06:49:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxafjv/when_do_you_think_the_gap_between_local_llm_and/
|
GregView
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxafjv
| false | null |
t3_1kxafjv
|
/r/LocalLLaMA/comments/1kxafjv/when_do_you_think_the_gap_between_local_llm_and/
| false | false |
self
| 15 | null |
After 5 months, DeepSeek is still the king of open source; their base model is one of the most intelligent models among both closed- and open-source offerings. Let's see what we get in the next model. There is no update about it yet, but I think they are aiming for a new standard.
| 1 |
[removed]
| 2025-05-28T06:53:38 |
Select_Dream634
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxahtn
| false | null |
t3_1kxahtn
|
/r/LocalLLaMA/comments/1kxahtn/after_5_month_deepseek_still_the_king_of_the_open/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'Hoj57_lrNsctqQJNajIy7NxQ9Qg_OlnXU7sQld2X4Fc', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?width=108&crop=smart&auto=webp&s=f721fe6f90e15a5d634342c2bc8b80d880ea6afe', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?width=216&crop=smart&auto=webp&s=65548ef9f0f35f53b8f7d469f5a79907adf1dcfb', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?width=320&crop=smart&auto=webp&s=229336005bd5897a6dbe4f382e406f9c83139fd5', 'width': 320}, {'height': 287, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?width=640&crop=smart&auto=webp&s=5541e2cd4702c47617ffbc8f9ffa7d7876e44be8', 'width': 640}, {'height': 431, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?width=960&crop=smart&auto=webp&s=7a6325213ef3ea6650ebdd6473463f12d9e97c23', 'width': 960}, {'height': 485, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?width=1080&crop=smart&auto=webp&s=601ee71fb41b8badc76c8078b77d6e1a74b2bae4', 'width': 1080}], 'source': {'height': 534, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?auto=webp&s=813e9440beae853e742856364d0cff7ce57611da', 'width': 1189}, 'variants': {}}]}
|
||
I'm trying to build a open source LLM rag ai asistant for my financial audit firm
| 1 |
[removed]
| 2025-05-28T07:04:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxanzc/im_trying_to_build_a_open_source_llm_rag_ai/
|
timeladyxox
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxanzc
| false | null |
t3_1kxanzc
|
/r/LocalLLaMA/comments/1kxanzc/im_trying_to_build_a_open_source_llm_rag_ai/
| false | false |
self
| 1 | null |
LLM Farm gemma-3
| 1 |
[removed]
| 2025-05-28T07:13:09 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxasil/llm_farm_gemma3/
|
kindfii
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxasil
| false | null |
t3_1kxasil
|
/r/LocalLLaMA/comments/1kxasil/llm_farm_gemma3/
| false | false |
self
| 1 | null |
The Economist: "Companies abandon their generative AI projects"
| 614 |
A [recent article](https://archive.ph/P51MQ) in the Economist claims that "the share of companies abandoning most of their generative-AI pilot projects has risen to 42%, up from 17% last year." Apparently companies that invested in generative AI and slashed jobs are now disappointed and have begun rehiring humans for those roles.
The hype around generative AI increasingly looks like a "we have a solution, now let's find some problems" scenario. Apart from software developers and graphic designers, I wonder how many professionals actually feel the impact of generative AI in their workplace?
| 2025-05-28T07:23:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxaxw9/the_economist_companies_abandon_their_generative/
|
mayalihamur
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxaxw9
| false | null |
t3_1kxaxw9
|
/r/LocalLLaMA/comments/1kxaxw9/the_economist_companies_abandon_their_generative/
| false | false |
self
| 614 | null |
🧠 How do you go from a raw idea to something real? (For devs/designers/builders)
| 1 |
[removed]
| 2025-05-28T07:48:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxbbcm/how_do_you_go_from_a_raw_idea_to_something_real/
|
InjurySuccessful3125
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxbbcm
| false | null |
t3_1kxbbcm
|
/r/LocalLLaMA/comments/1kxbbcm/how_do_you_go_from_a_raw_idea_to_something_real/
| false | false |
self
| 1 | null |
T-MAC extends its capabilities to Snapdragon mobile NPU!
| 2 |
https://github.com/microsoft/T-MAC/blob/main/t-man/README.md
- 50 t/s for BitNet-2B-4T on Snapdragon 8G3 NPU
- NPU only, doesn't impact other apps
- Prebuilt APK for SDG3 devices [on github](https://github.com/microsoft/T-MAC/releases/tag/1.0.0a5)
| 2025-05-28T08:03:21 |
https://github.com/microsoft/T-MAC/blob/main/t-man/README.md
|
Aaaaaaaaaeeeee
|
github.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxbj8u
| false | null |
t3_1kxbj8u
|
/r/LocalLLaMA/comments/1kxbj8u/tmac_extends_its_capabilities_to_snapdragon/
| false | false |
default
| 2 | null |
I built an open-source VRAM Calculator inside Hugging Face
| 1 |
It's a Chrome extension that sits inside the Hugging Face website. It auto-loads model specs into the calculation. [Link to the extension](https://chromewebstore.google.com/detail/hugging-face-vram-calcula/bioohacjdieeliinbpocpdhpdapfkhal?authuser=0&hl=en-GB).
\> To test it, install the extension (no registration/key needed) and navigate to a HF model page. Then click the "VRAM" icon on the top right to open the sidepanel.
You can specify quantization, batch size, sequence length, etc.
Works for inference & fine-tuning.
If it does not fit on the specified GPUs, it gives you advice on how to still run it (e.g. lowering precision).
It was inspired by my work, where we were constantly exporting metrics from HF to estimate required hardware. Now it saves us in the dev team quite some time, and clients can use it, too.
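For context, the back-of-envelope math a calculator like this automates looks roughly like the sketch below (simplified assumptions, illustration only; the extension's actual formulas may differ):

```python
# Rough VRAM estimate for inference: weights + KV cache + runtime overhead.
# All numbers are approximations; real usage depends on the runtime and GQA config.
def estimate_vram_gib(params_b: float, bytes_per_param: float,
                      n_layers: int, hidden_size: int,
                      seq_len: int, batch_size: int,
                      kv_bytes: float = 2.0, overhead: float = 1.2) -> float:
    weights = params_b * 1e9 * bytes_per_param                                # model weights
    kv_cache = 2 * n_layers * hidden_size * seq_len * batch_size * kv_bytes   # K and V per token
    return (weights + kv_cache) * overhead / 1024**3

# Example: a ~7B model at 4-bit (~0.5 bytes/param), 4k context, batch size 1
print(f"{estimate_vram_gib(7, 0.5, 32, 4096, 4096, 1):.1f} GiB")
```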
Contributions to this project are highly appreciated in [this GitHub repo](https://github.com/NEBUL-AI/HF-VRAM-Extension).
| 2025-05-28T08:03:38 |
https://v.redd.it/2q7gz3mubh3f1
|
Cool-Maintenance8594
|
/r/LocalLLaMA/comments/1kxbjds/i_built_an_opensource_vram_calculator_inside/
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxbjds
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2q7gz3mubh3f1/DASHPlaylist.mpd?a=1751141026%2CMTQwMzY0OGI3MWFmNGE3MjRhYThkNjhiZjQzZGIwMTUzYmRjMjJkZWUzNWQ2YWIzOTNlZGNjN2I2YzBjZTkzNQ%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/2q7gz3mubh3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/2q7gz3mubh3f1/HLSPlaylist.m3u8?a=1751141026%2COWRiMDY4NWQ0NzNkNmVmZjUzNTY5MzZiNzQxYzFhZThlODdjNzdjOTJhNzNmOTY3ODFiZDE3MWRkOGFiYTBkOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/2q7gz3mubh3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kxbjds
|
/r/LocalLLaMA/comments/1kxbjds/i_built_an_opensource_vram_calculator_inside/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=108&crop=smart&format=pjpg&auto=webp&s=6d35f82cf54d3695cf26fadd1fb08a0532b8fcd1', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=216&crop=smart&format=pjpg&auto=webp&s=2e200e82edcb92dbcd7202fc93a38b7679a30a1e', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=320&crop=smart&format=pjpg&auto=webp&s=1c48f854456687dc68b5929928a4d83173a6d427', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=640&crop=smart&format=pjpg&auto=webp&s=955098516d656c4624e924c650979670c162fb1b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=960&crop=smart&format=pjpg&auto=webp&s=64e0ac37543e9ef3cafde6692c7c9983bf607e47', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6a92a3317310293d8833c79bae85c10c5fee630e', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?format=pjpg&auto=webp&s=52f4f02a1eed0e08455b1a195f2c77ec79e1d320', 'width': 3840}, 'variants': {}}]}
|
|
Another Ryzen Max+ 395 machine has been released. Are all the Chinese Max+ 395 machines the same?
| 30 |
Another AMD Ryzen Max+ 395 mini-pc has been released. The FEVM FA-EX9. For those who kept asking for it, this comes with Oculink. Here's a YT review.
https://www.youtube.com/watch?v=-1kuUqp1X2I
I think all the Chinese Max+ mini-pcs are the same. I noticed again that this machine has *exactly* the same port layout as the GMK X2. But how can that be if this has Oculink and the X2 doesn't? The Oculink is an add-on; it takes up one of the NVME slots. And it's not just the port layout: the motherboards look exactly the same, down to the same red color. So it's like one manufacturer is making the MB and all the other companies are using that MB for their mini-pcs.
| 2025-05-28T08:10:10 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxbmr9/another_ryzen_max_395_machine_has_been_released/
|
fallingdowndizzyvr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxbmr9
| false | null |
t3_1kxbmr9
|
/r/LocalLLaMA/comments/1kxbmr9/another_ryzen_max_395_machine_has_been_released/
| false | false |
self
| 30 |
{'enabled': False, 'images': [{'id': '6QU22n3E7n6OJMoz58HPRHlOys8UjYhF1bBPxL-Fj4k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1Dtni1upEapADVmsfcu-FGGUWVJaHDTA5wNU5q6JsKw.jpg?width=108&crop=smart&auto=webp&s=b309630cf4e255095bb51b3ce5caa1873639a976', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1Dtni1upEapADVmsfcu-FGGUWVJaHDTA5wNU5q6JsKw.jpg?width=216&crop=smart&auto=webp&s=bd8b36ab8228580fd62e3de3d5583f62f82a0a30', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1Dtni1upEapADVmsfcu-FGGUWVJaHDTA5wNU5q6JsKw.jpg?width=320&crop=smart&auto=webp&s=e0684b413059afcc81863ee27def079f0713a560', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1Dtni1upEapADVmsfcu-FGGUWVJaHDTA5wNU5q6JsKw.jpg?auto=webp&s=6b492beef52aa22601362277e7275fe33e2ef079', 'width': 480}, 'variants': {}}]}
|
Is Stanford's AGI Rivermind ever coming back?
| 0 |
Kinda feel like a conspiracy theorist, but what are the chances they were told by the government to shut it down lol? But really, is there any news or posts from Stanford about the incident and whether they are going to make the model public again?
| 2025-05-28T08:23:39 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxbtnc/is_stanfords_agi_rivermind_ever_coming_back/
|
cdanymar
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxbtnc
| false | null |
t3_1kxbtnc
|
/r/LocalLLaMA/comments/1kxbtnc/is_stanfords_agi_rivermind_ever_coming_back/
| false | false |
self
| 0 | null |
What's possible with each currently purchasable amount of Mac Unified RAM?
| 2 |
This is a bit of an update of [https://www.reddit.com/r/LocalLLaMA/comments/1gs7w2m/choosing\_the\_right\_mac\_for\_running\_large\_llms/](https://www.reddit.com/r/LocalLLaMA/comments/1gs7w2m/choosing_the_right_mac_for_running_large_llms/) more than 6 months later, with different available CPUs/GPUs.
I am going to renew my MacBook Air (M1) into a recent MacBook Air or Pro, and I need to decide what to pick in terms of RAM (afaik options are 24/32/48/64/128 at the moment). Budget is not an issue (business expense with good ROI).
While I do a lot of coding & data engineering, I'm not interested in LLMs for coding (results are always below my expectations); I'm more interested in PDF -> JSON transcriptions, general LLM use (brainstorming), connection to music / MIDI etc.
Is it worth going the 128 GB route? Or something in between? Thank you!
| 2025-05-28T08:31:27 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxbxmf/whats_possible_with_each_currently_purchasable/
|
thibaut_barrere
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxbxmf
| false | null |
t3_1kxbxmf
|
/r/LocalLLaMA/comments/1kxbxmf/whats_possible_with_each_currently_purchasable/
| false | false |
self
| 2 | null |
Looks like the claraverse devs are listening to the comments from previous posts
| 1 |
[removed]
| 2025-05-28T08:46:32 |
https://www.youtube.com/watch?v=FWgFiBU7R14
|
k1sh0r
|
youtube.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxc57n
| false |
{'oembed': {'author_name': 'ClaraVerse', 'author_url': 'https://www.youtube.com/@ClaraVerse.Tutorials', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/FWgFiBU7R14?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="ClaraVerse Feature Update - MCP, Tool, Agentic, Background"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FWgFiBU7R14/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'ClaraVerse Feature Update - MCP, Tool, Agentic, Background', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kxc57n
|
/r/LocalLLaMA/comments/1kxc57n/looks_like_the_claraverse_devs_are_listening_to/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'hb1G5JYx5rimNYX2W7HeITqNZWn0lkoeeKQ43s4uc_E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/k3XyeKX7NTyUUeeNFUiNna-UAkFxzQJAxTfaZyRC0-g.jpg?width=108&crop=smart&auto=webp&s=5a2157d4819c2f89cf940ccf60f86d47bdc228b7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/k3XyeKX7NTyUUeeNFUiNna-UAkFxzQJAxTfaZyRC0-g.jpg?width=216&crop=smart&auto=webp&s=c8fb16f9cdec686d1d11308b693859993969c5a9', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/k3XyeKX7NTyUUeeNFUiNna-UAkFxzQJAxTfaZyRC0-g.jpg?width=320&crop=smart&auto=webp&s=30695d3f2873d1cc8f9d622b1940eaa43f2a2624', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/k3XyeKX7NTyUUeeNFUiNna-UAkFxzQJAxTfaZyRC0-g.jpg?auto=webp&s=0acb3b22b0ba9711b5335fefcfe8f8b03d7f71b4', 'width': 480}, 'variants': {}}]}
|
|
MCP Proxy – Use your embedded system as an agent
| 19 |
https://i.redd.it/kzn1dvcfkh3f1.gif
Video: [https://www.youtube.com/watch?v=foCp3ja8FRA](https://www.youtube.com/watch?v=foCp3ja8FRA)
Repository: [https://github.com/openserv-labs/mcp-proxy](https://github.com/openserv-labs/mcp-proxy)
Hello!
I've been playing around with agents, MCP servers and embedded systems for a while. I was trying to figure out the best way to connect my real-time devices to agents and use them in multi-agent workflows.
At OpenServ, we have an API to interact with agents, so at first I thought I'd just run a specialized web server to talk to the platform. But that had its own problems—mainly memory issues and needing to customize it for each device.
Then we thought, why not just run a regular web server and use it as an agent? The idea is simple, and the implementation is even simpler thanks to MCP. I define my server’s endpoints as tools in the MCP server, and agents (MCP clients) can call them directly.
Even though the initial idea was to work with embedded systems, this can work for any backend.
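To make the pattern concrete, here is a minimal sketch of exposing an ordinary HTTP endpoint as an MCP tool with the Python MCP SDK (illustrative only, not the proxy's actual code; the server name, endpoint URL and field names are made up):

```python
# Minimal sketch: wrap an existing REST endpoint as an MCP tool.
# Assumes the official Python MCP SDK ("mcp" package) and "requests" are installed.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("device-proxy")  # hypothetical server name

@mcp.tool()
def read_temperature(sensor_id: str) -> float:
    """Read the latest temperature value from the device's REST API."""
    # Hypothetical endpoint on the embedded device / backend.
    resp = requests.get(f"http://device.local/api/sensors/{sensor_id}/temperature", timeout=5)
    resp.raise_for_status()
    return resp.json()["celsius"]

if __name__ == "__main__":
    mcp.run()  # agents (MCP clients) can now call read_temperature as a tool
```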
Would love to hear your thoughts—especially around connecting agents to real-time devices to collect sensor data or control them in mutlti-agent workflows.
| 2025-05-28T08:47:54 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxc5vo/mcp_proxy_use_your_embedded_system_as_an_agent/
|
arbayi
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxc5vo
| false |
{'oembed': {'author_name': 'Batur Yılmaz Arslan', 'author_url': 'https://www.youtube.com/@BaturYilmazArslan', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/foCp3ja8FRA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="mcp proxy"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/foCp3ja8FRA/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'mcp proxy', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
|
t3_1kxc5vo
|
/r/LocalLLaMA/comments/1kxc5vo/mcp_proxy_use_your_embedded_system_as_an_agent/
| false | false | 19 |
{'enabled': False, 'images': [{'id': 'ALOXY2PamJpEf9jNTwKvzcmANYAYelWajVomqXftPws', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QE_AwMn8Vhy9CL-rjaMkp2CgPWYkmSjtSuxPv7QHnQs.jpg?width=108&crop=smart&auto=webp&s=6ecacbb51dd7cfa94197225ac09740f0e34fa278', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QE_AwMn8Vhy9CL-rjaMkp2CgPWYkmSjtSuxPv7QHnQs.jpg?width=216&crop=smart&auto=webp&s=3f054236e8e3f18f3505e00b05ed8251db88fc11', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QE_AwMn8Vhy9CL-rjaMkp2CgPWYkmSjtSuxPv7QHnQs.jpg?width=320&crop=smart&auto=webp&s=ecb64a2bffb05e44d274eb04fb6b8576a8f1055e', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/QE_AwMn8Vhy9CL-rjaMkp2CgPWYkmSjtSuxPv7QHnQs.jpg?auto=webp&s=e1d9cccfb239969f3b2c5963a80153aefb724a2e', 'width': 480}, 'variants': {}}]}
|
|
Advising on LLM Deployment for Internal Codebase Use — Is DeepSeek-V3 Massive Overkill?
| 1 |
[removed]
| 2025-05-28T08:51:47 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxc7ta/advising_on_llm_deployment_for_internal_codebase/
|
BroncoDankus
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxc7ta
| false | null |
t3_1kxc7ta
|
/r/LocalLLaMA/comments/1kxc7ta/advising_on_llm_deployment_for_internal_codebase/
| false | false |
self
| 1 | null |
How are you using MCP
| 1 |
[removed]
| 2025-05-28T09:02:20 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxcdb6/how_are_you_using_mcp/
|
Fluffy_Sheepherder76
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxcdb6
| false | null |
t3_1kxcdb6
|
/r/LocalLLaMA/comments/1kxcdb6/how_are_you_using_mcp/
| false | false |
self
| 1 | null |
How are you using MCP?
| 1 |
[removed]
| 2025-05-28T09:04:05 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxce6o/how_are_you_using_mcp/
|
Fluffy_Sheepherder76
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxce6o
| false | null |
t3_1kxce6o
|
/r/LocalLLaMA/comments/1kxce6o/how_are_you_using_mcp/
| false | false |
self
| 1 | null |
Metal performance 2x slower in notarized builds
| 1 |
[removed]
| 2025-05-28T09:12:26 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxcimc/metal_performance_2x_slower_in_notarized_builds/
|
Impossible-Bat6366
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxcimc
| false | null |
t3_1kxcimc
|
/r/LocalLLaMA/comments/1kxcimc/metal_performance_2x_slower_in_notarized_builds/
| false | false |
self
| 1 | null |
BrowserBee: A web browser agent in your Chrome side panel
| 1 |
[removed]
| 2025-05-28T09:48:45 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxd1w1/browserbee_a_web_browser_agent_in_your_chrome/
|
parsa28
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxd1w1
| false | null |
t3_1kxd1w1
|
/r/LocalLLaMA/comments/1kxd1w1/browserbee_a_web_browser_agent_in_your_chrome/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '8DMpX6r2OSWxRX4Pd29951OACdCjq5-CBr_tnz2zLaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=108&crop=smart&auto=webp&s=9a2cdb78109d85405237f8fbc1b3e2d61ce75bd6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=216&crop=smart&auto=webp&s=3152e3cd29505a7193fe1ab2e4a67e880ed48c66', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=320&crop=smart&auto=webp&s=3cef30bffbfbcd71e8ef9abfa6425e2b7f743a39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=640&crop=smart&auto=webp&s=8c8efa88cd1526b7caefc49eb21e73e65c0a2cdc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=960&crop=smart&auto=webp&s=055b4b0477b271d0c0274161beb64743e46d8f12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=1080&crop=smart&auto=webp&s=5d2e93c0ac2c186e6fb4eeef5e38d01fe1605575', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?auto=webp&s=ff9cc7d15ad92c56b67fece8548ac91c95ec203b', 'width': 1200}, 'variants': {}}]}
|
BrowserBee: A web browser agent in your Chrome side panel
| 1 |
[removed]
| 2025-05-28T09:50:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxd2oz/browserbee_a_web_browser_agent_in_your_chrome/
|
parsa28
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxd2oz
| false | null |
t3_1kxd2oz
|
/r/LocalLLaMA/comments/1kxd2oz/browserbee_a_web_browser_agent_in_your_chrome/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '8DMpX6r2OSWxRX4Pd29951OACdCjq5-CBr_tnz2zLaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=108&crop=smart&auto=webp&s=9a2cdb78109d85405237f8fbc1b3e2d61ce75bd6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=216&crop=smart&auto=webp&s=3152e3cd29505a7193fe1ab2e4a67e880ed48c66', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=320&crop=smart&auto=webp&s=3cef30bffbfbcd71e8ef9abfa6425e2b7f743a39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=640&crop=smart&auto=webp&s=8c8efa88cd1526b7caefc49eb21e73e65c0a2cdc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=960&crop=smart&auto=webp&s=055b4b0477b271d0c0274161beb64743e46d8f12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=1080&crop=smart&auto=webp&s=5d2e93c0ac2c186e6fb4eeef5e38d01fe1605575', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?auto=webp&s=ff9cc7d15ad92c56b67fece8548ac91c95ec203b', 'width': 1200}, 'variants': {}}]}
|
BrowserBee: A web browser agent in your Chrome side panel
| 1 |
[removed]
| 2025-05-28T09:54:46 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxd50q/browserbee_a_web_browser_agent_in_your_chrome/
|
parsa28
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxd50q
| false | null |
t3_1kxd50q
|
/r/LocalLLaMA/comments/1kxd50q/browserbee_a_web_browser_agent_in_your_chrome/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': '8DMpX6r2OSWxRX4Pd29951OACdCjq5-CBr_tnz2zLaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=108&crop=smart&auto=webp&s=9a2cdb78109d85405237f8fbc1b3e2d61ce75bd6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=216&crop=smart&auto=webp&s=3152e3cd29505a7193fe1ab2e4a67e880ed48c66', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=320&crop=smart&auto=webp&s=3cef30bffbfbcd71e8ef9abfa6425e2b7f743a39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=640&crop=smart&auto=webp&s=8c8efa88cd1526b7caefc49eb21e73e65c0a2cdc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=960&crop=smart&auto=webp&s=055b4b0477b271d0c0274161beb64743e46d8f12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=1080&crop=smart&auto=webp&s=5d2e93c0ac2c186e6fb4eeef5e38d01fe1605575', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?auto=webp&s=ff9cc7d15ad92c56b67fece8548ac91c95ec203b', 'width': 1200}, 'variants': {}}]}
|
Scores in old and new lmarena are different
| 6 |
Have they provided any explanations on this?
| 2025-05-28T10:00:59 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxd8cq/scores_in_old_and_new_lmarena_are_different/
|
Economy_Apple_4617
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxd8cq
| false | null |
t3_1kxd8cq
|
/r/LocalLLaMA/comments/1kxd8cq/scores_in_old_and_new_lmarena_are_different/
| false | false |
self
| 6 | null |
Help: effect of Dry sampling on quality
| 0 |
I've built a tool to create images using a Gradio API; the output is a JSON with the generated URL passed back to the model.
I was using Qwen 30B MoE Q4\_XL from unsloth with llama.cpp as my daily driver, with dry multiplier at 0.8, without any major issue, but here I found that it consistently altered the URL, hallucinating characters.
Example with dry multiplier 0.8, suggested settings from Qwen team and presence penalty 1.5
> given the following json write the image url:
{
"prompt": "A cinematic view of Rome at sunset, showcasing the Colosseum and Roman Forum illuminated by warm orange and pink hues, with dramatic shadows and a vibrant sky. The scene captures the historic architecture bathed in soft, golden light, evoking a sense of timeless grandeur.",
"image_url": "https://example.net/cache/tools_sana/20250527-224501/image.webp",
"model_used": "Sana",
"style": "Cinematic",
"timestamp": "2025-05-27T22:45:01.978055",
"status": "success"
}
/no_think
<think>
</think>
The image URL is:
**https://example.net/cache/tools_sана/2025052七-224501/image webp**
removing the dry multiplier works as expected.
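For anyone who wants to reproduce it, a rough sketch of a request with these sampler settings against llama.cpp's built-in server (field names like `dry_multiplier` are assumed from the server docs and may vary by version):

```python
# Sketch: send the same prompt with and without DRY and compare the outputs.
# Endpoint and field names assume llama.cpp's /completion API; adjust as needed.
import requests

def complete(prompt: str, dry_multiplier: float) -> str:
    payload = {
        "prompt": prompt,
        "temperature": 0.7, "top_p": 0.8, "top_k": 20,  # suggested Qwen3 settings
        "presence_penalty": 1.5,
        "dry_multiplier": dry_multiplier,               # 0.0 disables DRY
        "n_predict": 256,
    }
    r = requests.post("http://localhost:8080/completion", json=payload, timeout=120)
    r.raise_for_status()
    return r.json()["content"]

json_prompt = "given the following json write the image url: ..."  # the JSON from above
print(complete(json_prompt, dry_multiplier=0.8))  # consistently mangles the URL for me
print(complete(json_prompt, dry_multiplier=0.0))  # works as expected
```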
Am I doing something wrong with sampling parameters, is it somewhat expected, any hints?
Thank you in advance
p.s. if someone is interested in the tool you can find it [here](https://openwebui.com/t/fakezeta/sana_image_generation)
| 2025-05-28T10:05:41 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxdb1n/help_effect_of_dry_sampling_on_quality/
|
fakezeta
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdb1n
| false | null |
t3_1kxdb1n
|
/r/LocalLLaMA/comments/1kxdb1n/help_effect_of_dry_sampling_on_quality/
| false | false |
self
| 0 | null |
impressive streamlining in local llm deployment: Gemma 3n downloading directly to my phone without any tinkering. what a time to be alive.
| 1 |
google ai edge gallery apk: https://github.com/google-ai-edge/gallery/wiki/2.-Getting-Started
| 2025-05-28T10:07:11 |
thebigvsbattlesfan
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdbwv
| false | null |
t3_1kxdbwv
|
/r/LocalLLaMA/comments/1kxdbwv/impressive_streamlining_in_local_llm_deployment/
| false | false | 1 |
{'enabled': True, 'images': [{'id': 'S5HBNERO_EZ1nm4_vX7KSjG8WmDV08hQYmjUlub1GvE', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.jpeg?width=108&crop=smart&auto=webp&s=8dc8919907a56ab083b61ac4dc908edbc6ec1de5', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.jpeg?width=216&crop=smart&auto=webp&s=6cac75d8d78ce5c6101f3a1c7ea7914a16d4e90d', 'width': 216}, {'height': 332, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.jpeg?width=320&crop=smart&auto=webp&s=7f3a2d305ccc2eb8a3d3d2eb603af24a5ba3b86c', 'width': 320}, {'height': 664, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.jpeg?width=640&crop=smart&auto=webp&s=effe06419e337823f06b32843965dfd51ea57c37', 'width': 640}, {'height': 996, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.jpeg?width=960&crop=smart&auto=webp&s=729b31da0d3cbabf9d348fdf380c374675e15b64', 'width': 960}], 'source': {'height': 1120, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.jpeg?auto=webp&s=b66b8101e7860ba3faebb7899a57a2d7f27f0a6c', 'width': 1079}, 'variants': {}}]}
|
||
impressive streamlining in local llm deployment: gemma 3n downloading directly to my phone without any tinkering. what a time to be alive!
| 100 | 2025-05-28T10:08:41 |
thebigvsbattlesfan
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdcpi
| false | null |
t3_1kxdcpi
|
/r/LocalLLaMA/comments/1kxdcpi/impressive_streamlining_in_local_llm_deployment/
| false | false | 100 |
{'enabled': True, 'images': [{'id': 'YAeKaYu2qaZhbAYy0cMWEnsNZX4LHUwqNo_czIljTT4', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/sd06j27qyh3f1.jpeg?width=108&crop=smart&auto=webp&s=599724f93ec5bf34488f0dccede3c4f5022bf63a', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/sd06j27qyh3f1.jpeg?width=216&crop=smart&auto=webp&s=77ac5c002ed8295758f2b23cb7c078e52ebb395c', 'width': 216}, {'height': 332, 'url': 'https://preview.redd.it/sd06j27qyh3f1.jpeg?width=320&crop=smart&auto=webp&s=7a125da71cb85426659a0d588f8aaee0d2a432c2', 'width': 320}, {'height': 664, 'url': 'https://preview.redd.it/sd06j27qyh3f1.jpeg?width=640&crop=smart&auto=webp&s=c51a26804948f34a4686a4018dd2e02a67c40a82', 'width': 640}, {'height': 996, 'url': 'https://preview.redd.it/sd06j27qyh3f1.jpeg?width=960&crop=smart&auto=webp&s=222a138d644aef0199b25690b74771a99ba2e845', 'width': 960}], 'source': {'height': 1120, 'url': 'https://preview.redd.it/sd06j27qyh3f1.jpeg?auto=webp&s=8c2467e7e936ff6680768712bb04670f1b7a5f25', 'width': 1079}, 'variants': {}}]}
|
|||
Cobolt is now available on Linux! 🎉
| 2 |
Remember when we said Cobolt is "Powered by community-driven development"?
After our [last post](https://www.reddit.com/r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and) about Cobolt – our local, private, and personalized AI assistant – the call for Linux support was overwhelming. Well, you asked, and we're thrilled to deliver: Cobolt is now available on Linux! 🎉 [Get started here](https://github.com/platinum-hill/cobolt?tab=readme-ov-file#getting-started)
We are excited by your engagement and shared belief in accessible, private AI.
**Our promise remains: Privacy by design, extensible, and personalized.**
Join us in shaping the future of Cobolt on [Github](https://github.com/platinum-hill/cobolt)
Thank you for driving us forward. Let's keep building AI that serves you, now on Linux!
# LocalAI #Linux #PrivacyMatters #OpenSource #MCP #Ollama #Privacy #LocalFirst
| 2025-05-28T10:14:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxdfsq/cobolt_is_now_available_on_linux/
|
ice-url
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdfsq
| false | null |
t3_1kxdfsq
|
/r/LocalLLaMA/comments/1kxdfsq/cobolt_is_now_available_on_linux/
| false | false |
self
| 2 |
{'enabled': False, 'images': [{'id': 'NNzbw1x8m7xNfFARrLCIjGSKnOdC-deMYYvKqIswEDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=108&crop=smart&auto=webp&s=bd22f5d1f9db29cb0c5495f8f3d66125022ea1a6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=216&crop=smart&auto=webp&s=6439ac893c23a4efe65d0b8185611365dcf3a334', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=320&crop=smart&auto=webp&s=1e06c7164c67f3a3ec43379f6b4e3357d7de5b2b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=640&crop=smart&auto=webp&s=a764afbeef8e026ba4ce0a46bd040b7f5d0865f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=960&crop=smart&auto=webp&s=291ba75972b72412bb2ac0bcda522d57f0b3319a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=1080&crop=smart&auto=webp&s=ccbb45ef9e9e919fcd8213dc739e8016aecb577b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?auto=webp&s=2f0c69b69fdfd4888508acc8d341e1008e5911a9', 'width': 1200}, 'variants': {}}]}
|
Deep Research Agent (Apple Silicon)
| 4 |
Hi everyone
I’ve been using Perplexica, which is honestly fantastic for everyday use. I wish I could access it on every device, but alas I’m a noob at hosting and don’t really even know what I’d need to do it…
Anyway, the point: I’m looking for a deep research agent that works on Apple Silicon. I’ve used local-deep-research (https://github.com/langchain-ai/local-deep-researcher); currently this is the only deep research agent I’ve got working on Apple Silicon.
Does anyone know of any others that produce good reports? I like the look of gpt-researcher but as yet I can’t get it working on Apple silicon and I’m also not sure if it’s any better than what I’ve used above…
If anyone can recommend anything they have a good experience with would be appreciated :)!
| 2025-05-28T10:18:14 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxdhzg/deep_research_agent_apple_silicon/
|
BalaelGios
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdhzg
| false | null |
t3_1kxdhzg
|
/r/LocalLLaMA/comments/1kxdhzg/deep_research_agent_apple_silicon/
| false | false |
self
| 4 |
{'enabled': False, 'images': [{'id': '7KpX8ZO5jaU9AAaMjXtLTImZjtPOyYx1ewVQ5AEd7WI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?width=108&crop=smart&auto=webp&s=4b555f9745cb7d1990248b8a2712f1b36496df45', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?width=216&crop=smart&auto=webp&s=71bc98d2c1ca6d4b943c16cc0a2c2df2545351ed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?width=320&crop=smart&auto=webp&s=2105eebde52e52b34b226ad30f02732b6c861a6d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?width=640&crop=smart&auto=webp&s=4e41940e4408b6c92eaef8e63229f4b7d5f2b31b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?width=960&crop=smart&auto=webp&s=2a41f545a7515dcb80ea6d6ee39f0501fa95134b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?width=1080&crop=smart&auto=webp&s=26edafc9a617352cd0799dcef6f78405bc0ae238', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?auto=webp&s=a3892fc32b400cfcd7ab230d716ace231a57ce70', 'width': 1200}, 'variants': {}}]}
|
Cobolt is now available on Linux! 🎉
| 1 |
[deleted]
| 2025-05-28T10:19:30 |
[deleted]
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdin0
| false | null |
t3_1kxdin0
|
/r/LocalLLaMA/comments/1kxdin0/cobolt_is_now_available_on_linux/
| false | false |
default
| 1 | null |
||
Cobolt is now available on Linux! 🎉
| 66 |
Remember when we said Cobolt is "Powered by community-driven development"?
After our [last post](https://www.reddit.com/r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and) about Cobolt – **our local, private, and personalized AI assistant** – the call for Linux support was overwhelming. Well, you asked, and we're thrilled to deliver: Cobolt is now available on Linux! 🎉 [Get started here](https://github.com/platinum-hill/cobolt?tab=readme-ov-file#getting-started)
We are excited by your engagement and shared belief in accessible, private AI.
Join us in shaping the future of Cobolt on [Github](https://github.com/platinum-hill/cobolt).
**Our promise remains: Privacy by design, extensible, and personalized.**
Thank you for driving us forward. Let's keep building AI that serves you, now on Linux!
| 2025-05-28T10:23:07 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxdkms/cobolt_is_now_available_on_linux/
|
ice-url
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdkms
| false | null |
t3_1kxdkms
|
/r/LocalLLaMA/comments/1kxdkms/cobolt_is_now_available_on_linux/
| false | false |
self
| 66 |
{'enabled': False, 'images': [{'id': 'NNzbw1x8m7xNfFARrLCIjGSKnOdC-deMYYvKqIswEDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=108&crop=smart&auto=webp&s=bd22f5d1f9db29cb0c5495f8f3d66125022ea1a6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=216&crop=smart&auto=webp&s=6439ac893c23a4efe65d0b8185611365dcf3a334', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=320&crop=smart&auto=webp&s=1e06c7164c67f3a3ec43379f6b4e3357d7de5b2b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=640&crop=smart&auto=webp&s=a764afbeef8e026ba4ce0a46bd040b7f5d0865f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=960&crop=smart&auto=webp&s=291ba75972b72412bb2ac0bcda522d57f0b3319a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=1080&crop=smart&auto=webp&s=ccbb45ef9e9e919fcd8213dc739e8016aecb577b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?auto=webp&s=2f0c69b69fdfd4888508acc8d341e1008e5911a9', 'width': 1200}, 'variants': {}}]}
|
DeepSeek Announces Upgrade, Possibly Launching New Model Similar to 0324
| 315 |
The official DeepSeek group has issued an announcement claiming an upgrade, possibly a new model similar to the 0324 version.
| 2025-05-28T10:25:48 |
https://www.reddit.com/gallery/1kxdm2z
|
luckbossx
|
reddit.com
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxdm2z
| false | null |
t3_1kxdm2z
|
/r/LocalLLaMA/comments/1kxdm2z/deepseek_announces_upgrade_possibly_launching_new/
| false | false | 315 | null |
|
Seeking Help Setting Up a Local LLM Assistant for TTRPG Worldbuilding + RAG on Windows 11
| 5 |
Hey everyone! I'm looking for some guidance on setting up a local LLM to help with **TTRPG worldbuilding and running games** (like D&D or other systems). I want to be able to:
- Generate and roleplay NPCs
- Write world lore collaboratively
- Answer rules questions from PDFs
- Query my own documents (lore, setting info, custom rules, etc.)
So I think I need **RAG** (Retrieval-Augmented Generation) — or at least some way to have the LLM "understand" and reference my worldbuilding files or rule PDFs.
---
🖥️ **My current setup:**
- Windows 11
- 4070 (12GB of Vram)
- 64GB of Ram
- SillyTavern installed and working
- TabbyAPI installed
---
❓ **What I'm trying to figure out:**
- Can I do **RAG** with **SillyTavern** or **TabbyAPI**?
- What’s the best **model loader** on Windows 11 that supports RAG (or can be used in a RAG pipeline)?
- Which **models** would you recommend for:
- Worldbuilding / creative writing
- Rule parsing and Q&A
- Lightweight enough to run locally
---
🧠 **What I want in the long run:**
- A local AI DM assistant that remembers lore
- Can roleplay NPCs (via SillyTavern or similar)
- Can read and answer questions from PDFs (like the PHB or custom notes)
- Privacy is important — I want to keep everything local
If you’ve got a setup like this or know how to connect the dots between SillyTavern + RAG + local models, I’d love your advice!
Thanks in advance!
| 2025-05-28T11:16:50 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxeg6a/seeking_help_setting_up_a_local_llm_assistant_for/
|
TheArchivist314
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxeg6a
| false | null |
t3_1kxeg6a
|
/r/LocalLLaMA/comments/1kxeg6a/seeking_help_setting_up_a_local_llm_assistant_for/
| false | false |
self
| 5 | null |
Parakeet-TDT 0.6B v2 FastAPI STT Service (OpenAI-style API + Experimental Streaming)
| 26 |
Hi! I'm (finally) releasing a FastAPI wrapper around NVIDIA’s Parakeet-TDT 0.6B v2 ASR model with:
* REST `/transcribe` endpoint with optional timestamps
* Health & debug endpoints: `/healthz`, `/debug/cfg`
* Experimental WebSocket `/ws` for real-time PCM streaming and partial/full transcripts
GitHub: [https://github.com/Shadowfita/parakeet-tdt-0.6b-v2-fastapi](https://github.com/Shadowfita/parakeet-tdt-0.6b-v2-fastapi)
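A rough example of calling the service (the multipart field name and query parameter below are assumptions for illustration; check the repo's README for the actual request format):

```python
# Sketch: POST an audio file to the /transcribe endpoint and print the result.
# The "file" field name, "timestamps" flag and response shape are assumed, not verified.
import requests

with open("sample.wav", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/transcribe",
        files={"file": ("sample.wav", f, "audio/wav")},
        params={"timestamps": "true"},  # optional word/segment timestamps
    )
resp.raise_for_status()
print(resp.json())
```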
| 2025-05-28T11:47:57 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxf0ig/parakeettdt_06b_v2_fastapi_stt_service/
|
Shadowfita
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxf0ig
| false | null |
t3_1kxf0ig
|
/r/LocalLLaMA/comments/1kxf0ig/parakeettdt_06b_v2_fastapi_stt_service/
| false | false |
self
| 26 |
{'enabled': False, 'images': [{'id': 'Yor2PiK5DIoagga8Ef6-su6OIlt5qBUMgWamr45nJVw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?width=108&crop=smart&auto=webp&s=f4be908586b4b521b8a363c5ca70fd5feec2349b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?width=216&crop=smart&auto=webp&s=62acad860e310725f81c35458c6b5f0a79e15478', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?width=320&crop=smart&auto=webp&s=3378288c96af8ee1aa4196a062b0eabd841c3467', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?width=640&crop=smart&auto=webp&s=0159d17e6fd63009d621c54780c0b2a0a5fb456d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?width=960&crop=smart&auto=webp&s=2ea93e7e34b85290515f445defa588cb052d9fec', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?width=1080&crop=smart&auto=webp&s=12aa0c87a6a4b596888c666370179c73b7da94db', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?auto=webp&s=098ca3e008550ce5a2d64c9cc45346a61f1f4123', 'width': 1200}, 'variants': {}}]}
|
Upgrading from RTX 4060 to 3090
| 3 |
Hi guys, I am planning to upgrade from a 4060 to a 3090 to triple the VRAM and be able to run Qwen 3 30b or 32b, but I noticed that the 3090 has 2 power connectors instead of one like my 4060. I have a cable that already has 2 endings. Do I have to worry about anything else, or can I just slot the new card right in and it will work? The PSU itself should handle the watts.
Sorry if it's a bit of an obvious question, but I want to make sure my 700 euros won't go to waste.
| 2025-05-28T11:49:51 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxf1q7/upgrading_from_rtx_4060_to_3090/
|
ElekDn
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxf1q7
| false | null |
t3_1kxf1q7
|
/r/LocalLLaMA/comments/1kxf1q7/upgrading_from_rtx_4060_to_3090/
| false | false |
self
| 3 | null |
Running Llama 3.2-1b on my android through Pocketpal
| 1 | 2025-05-28T11:53:01 |
https://v.redd.it/rl8aqet2hi3f1
|
EmployeeLogical5051
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxf3v0
| false |
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rl8aqet2hi3f1/DASHPlaylist.mpd?a=1751025194%2CZjIwNGRkOThhNmRlNTg0YTE3NDBjNTk1OTI4MTYzMjllODFjMTRiYWYwZWJlZTA5ZmI4NDJmYzIwNTQ4ODNhYg%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/rl8aqet2hi3f1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/rl8aqet2hi3f1/HLSPlaylist.m3u8?a=1751025194%2CN2JkYzliZWI5ODU4YjEwOWM5ODM4YmNiNmVmMjU3OTUyMjhhNzkzM2M4ZDdmMDY5NmQ0ZWVjOTU2YzRkMjk5YQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rl8aqet2hi3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 576}}
|
t3_1kxf3v0
|
/r/LocalLLaMA/comments/1kxf3v0/running_llama_321b_on_my_android_through_pocketpal/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM.png?width=108&crop=smart&format=pjpg&auto=webp&s=b33637624425e0ef1e7024c28b43152a666869dd', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM.png?width=216&crop=smart&format=pjpg&auto=webp&s=b81cf6650f8da7058c0e89e8fa7e275df8345a82', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM.png?width=320&crop=smart&format=pjpg&auto=webp&s=5aea3463efcf3fe69f2924a9a003f393c3f9a385', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM.png?width=640&crop=smart&format=pjpg&auto=webp&s=5887418d6069bbdbe6d6d0fc53b001cdfecc7a20', 'width': 640}], 'source': {'height': 1600, 'url': 'https://external-preview.redd.it/MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM.png?format=pjpg&auto=webp&s=d60667eb9a49f7139973a1dce9107f92d923ee4f', 'width': 720}, 'variants': {}}]}
|
||
Best budget GPU for running a local model+occasional gaming?
| 0 |
Hey. My intention is to run Llama and/or DeepSeek locally on my unraid server while occasionally still gaming now and then when it's not in use for AI.
My case can fit cards up to 290mm, otherwise I'd have gotten a used 3090.
I've been looking at the 5060 16GB, would that be a decent card? Or would going for a 5070 16GB be a better choice? I can grab a 5060 for approx 500 eur, the 5070 is already 1100.
| 2025-05-28T11:59:13 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxf7z2/best_budget_gpu_for_running_a_local/
|
answerencr
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxf7z2
| false | null |
t3_1kxf7z2
|
/r/LocalLLaMA/comments/1kxf7z2/best_budget_gpu_for_running_a_local/
| false | false |
self
| 0 | null |
Old model, new implementation
| 7 |
[chatllm.cpp](https://github.com/foldl/chatllm.cpp) implements this model as the 1st supported vision model.
I have searched this group. Not many have tested this model due to lack of support from llama.cpp. Now, would you like to try this model?
| 2025-05-28T12:24:34 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxfq8r/old_model_new_implementation/
|
foldl-li
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxfq8r
| false | null |
t3_1kxfq8r
|
/r/LocalLLaMA/comments/1kxfq8r/old_model_new_implementation/
| false | false |
self
| 7 |
{'enabled': False, 'images': [{'id': 'RumUH0pn0UD5ECnIUOzs1g7sbyxOEntl4lbZSZpwHSc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?width=108&crop=smart&auto=webp&s=1acef437405ecf1a87bf1f70dbdfe8a7f73af9f4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?width=216&crop=smart&auto=webp&s=b0b02311d95943a7bdc36b1f9f1a8ab6e5774435', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?width=320&crop=smart&auto=webp&s=612c908014342b1253d605ce44abe64d0b1ab3bc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?width=640&crop=smart&auto=webp&s=03c857b5c8839b82855dd97de5692029d79d69c9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?width=960&crop=smart&auto=webp&s=baf2942c6ef6c8a24286df0790c2647a21194db8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?width=1080&crop=smart&auto=webp&s=69f317b9e96aa7fa889b1ceb15c989744c9bfcaa', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?auto=webp&s=5c37cd85ca911ee3df6863c230ace8bafea8767c', 'width': 1200}, 'variants': {}}]}
|
vLLM Classify Bad Results
| 10 |
Has anyone used vLLM for classification?
I have a fine-tuned modernBERT model with 5 classes.
During model training, the best model shows a .78 F1 score.
After the model was trained, I passed the test set through vLLM and Hugging Face pipelines as a test and got the results in the screenshot above.
Hugging Face pipeline matches the result (F1 of .78) but vLLM is way off, with an F1 of .58.
Any ideas?
| 2025-05-28T12:50:16 |
Upstairs-Garlic-2301
|
i.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxg95a
| false | null |
t3_1kxg95a
|
/r/LocalLLaMA/comments/1kxg95a/vllm_classify_bad_results/
| false | false | 10 |
{'enabled': True, 'images': [{'id': 'LFJY147gafAUNJLufVK9T3r-EJAnH4-brIcaHpIXMMk', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?width=108&crop=smart&auto=webp&s=156724ca5fdcba587564732869494a90061d253c', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?width=216&crop=smart&auto=webp&s=59cf40b4a3fc00e09b02d9b73fb49b9ce4f042a2', 'width': 216}, {'height': 369, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?width=320&crop=smart&auto=webp&s=4dc7f280e4881666310e8aa6d72302914983d3bf', 'width': 320}, {'height': 739, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?width=640&crop=smart&auto=webp&s=f21243a1700f4e98e4582d952577c7f25af1d879', 'width': 640}, {'height': 1108, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?width=960&crop=smart&auto=webp&s=1891107bcb78beb07d56670140d164b35ab50b10', 'width': 960}, {'height': 1247, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?width=1080&crop=smart&auto=webp&s=9771b001811ae4dcc07f6d90a2d4ed9c440ba5bd', 'width': 1080}], 'source': {'height': 1596, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?auto=webp&s=bfcf6c3ce327c6ef4162ee0f503b1b4d0fffb184', 'width': 1382}, 'variants': {}}]}
|
||
Deepsee launch new DSv3 as well
| 1 |
Updated V3 as well
https://preview.redd.it/bjulgxp0ti3f1.jpg?width=1280&format=pjpg&auto=webp&s=1cc6b4bced8b86c8ab1fe761637e70138e3b9fbf
| 2025-05-28T12:57:31 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxgeje/deepsee_launch_new_dsv3_as_well/
|
shing3232
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxgeje
| false | null |
t3_1kxgeje
|
/r/LocalLLaMA/comments/1kxgeje/deepsee_launch_new_dsv3_as_well/
| false | false | 1 | null |
|
New DeepseekV3 as well
| 28 |
New V3!
https://preview.redd.it/wjoiebx5ti3f1.jpg?width=1280&format=pjpg&auto=webp&s=11bcdcd461259d9329165669759f04fb531ee79c
| 2025-05-28T12:58:33 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxgfbj/new_deepseekv3_as_well/
|
shing3232
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxgfbj
| false | null |
t3_1kxgfbj
|
/r/LocalLLaMA/comments/1kxgfbj/new_deepseekv3_as_well/
| false | false | 28 | null |
|
Any luck in running gemma 3n model locally on iphone with react native?
| 1 |
[removed]
| 2025-05-28T13:00:28 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxggsd/any_luck_in_running_gemma_3n_model_locally_on/
|
Ordinary_Emu8014
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxggsd
| false | null |
t3_1kxggsd
|
/r/LocalLLaMA/comments/1kxggsd/any_luck_in_running_gemma_3n_model_locally_on/
| false | false |
self
| 1 | null |
chat-first code editing?
| 3 |
For software development with LMs we have quite a few IDE-centric solutions like Roo, Cline, <the commercial>, then the hybrid, fairly heavy UI of OpenHands, and then the hardcore CLI stuff that just "works", which is fairly feasible to start even on the go in [Termux](https://f-droid.org/en/packages/com.termux/).
What I'm seeking is a context-aware, indexed tool for editing software projects on the go that would be simple and reliable for making changes from a prompt. I'd just review/revert its changes in Termux, so it wouldn't need to care about that, or it could monitor the changes in the repo directory.
I mean, can we simply have a Cascade-like plugin for any of the established chat UIs?
| 2025-05-28T13:14:22 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxgs54/chatfirst_code_editing/
|
uhuge
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxgs54
| false | null |
t3_1kxgs54
|
/r/LocalLLaMA/comments/1kxgs54/chatfirst_code_editing/
| false | false |
self
| 3 |
{'enabled': False, 'images': [{'id': '7jMZ7XD80oeucmGEaTwktIRZexLtGWvJfKdVD6Wu2SI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CXNpYDWzyOfIgBDx_cT8hOjSBmkBzPV2V8PF_sGNtQk.jpg?width=108&crop=smart&auto=webp&s=feccf0b924bf22ec5c533966c536d95028f97e5c', 'width': 108}], 'source': {'height': 192, 'url': 'https://external-preview.redd.it/CXNpYDWzyOfIgBDx_cT8hOjSBmkBzPV2V8PF_sGNtQk.jpg?auto=webp&s=031c51d2d8f5432c06c27f05b396f35e7cc9e005', 'width': 192}, 'variants': {}}]}
|
Model suggestions for string and arithmetic operations.
| 0 |
I am building a solution that does string operations, simple math, intelligent conversion of unformatted dates, and checking the datatype of values in variables.
What are some models that can be used for the above scenario?
| 2025-05-28T13:23:02 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxgz6u/model_suggestions_for_string_and_arithmetic/
|
Forward_Friend_2078
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxgz6u
| false | null |
t3_1kxgz6u
|
/r/LocalLLaMA/comments/1kxgz6u/model_suggestions_for_string_and_arithmetic/
| false | false |
self
| 0 | null |
Is there an open source alternative to manus?
| 60 |
I tried Manus and was surprised by how far ahead it is of other agents at browsing the web and using files, the terminal, etc. autonomously.
There is no tool I've tried before that comes close to it.
What's the best open source alternative to Manus that you've tried?
| 2025-05-28T13:23:15 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxgzd1/is_there_an_open_source_alternative_to_manus/
|
BoJackHorseMan53
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxgzd1
| false | null |
t3_1kxgzd1
|
/r/LocalLLaMA/comments/1kxgzd1/is_there_an_open_source_alternative_to_manus/
| false | false |
self
| 60 | null |
FlashMoe support in ipex-llm allows you to run DeepSeek V3/R1 671B and Qwen3MoE 235B models with just 1 or 2 Intel Arc GPU (such as A770 and B580)
| 22 |
I just noticed that this team claims it is possible to run DeepSeek V3/R1 671B with two cheap Intel GPUs (and a huge amount of system RAM). I wonder if anybody has actually tried or built such a beast?
[https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/flashmoe\_quickstart.md](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/flashmoe_quickstart.md)
| 2025-05-28T13:24:16 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxh07e/flashmoe_support_in_ipexllm_allows_you_to_run/
|
lQEX0It_CUNTY
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxh07e
| false | null |
t3_1kxh07e
|
/r/LocalLLaMA/comments/1kxh07e/flashmoe_support_in_ipexllm_allows_you_to_run/
| false | false |
self
| 22 |
{'enabled': False, 'images': [{'id': 'dQDfQwMdXNmvr4OEVIPfeHsTwt5A8oIqJPenKSWasbA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OAWBzkdIvJjZkJRx3FwspG9npepbJnpfeBvMde9gj4M.jpg?width=108&crop=smart&auto=webp&s=fc18aaf4e3b35b37605fe2d377d3fd5b74f206d1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OAWBzkdIvJjZkJRx3FwspG9npepbJnpfeBvMde9gj4M.jpg?width=216&crop=smart&auto=webp&s=dd67fbdbaf05b9fa46cc7a9622dcb140f6a70f29', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OAWBzkdIvJjZkJRx3FwspG9npepbJnpfeBvMde9gj4M.jpg?width=320&crop=smart&auto=webp&s=0f414405ac28793b593e88c48675d2a16d6dee9d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OAWBzkdIvJjZkJRx3FwspG9npepbJnpfeBvMde9gj4M.jpg?width=640&crop=smart&auto=webp&s=12a5551e0ca5ce9255fdfc3b66d58c25b32e59a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OAWBzkdIvJjZkJRx3FwspG9npepbJnpfeBvMde9gj4M.jpg?width=960&crop=smart&auto=webp&s=11a3698960b5f2c8ddbe11bfb0e0e318832a31e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OAWBzkdIvJjZkJRx3FwspG9npepbJnpfeBvMde9gj4M.jpg?width=1080&crop=smart&auto=webp&s=160f5d110d9406d658de9dc4e1bd8b5a6d3cf225', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OAWBzkdIvJjZkJRx3FwspG9npepbJnpfeBvMde9gj4M.jpg?auto=webp&s=eb1aabed6982e04393195c3217afa3fce17a7ec7', 'width': 1200}, 'variants': {}}]}
|
LLM on the go hardware question? (noob)
| 1 |
[removed]
| 2025-05-28T13:31:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxh6et/llm_on_the_go_hardware_question_noob/
|
tameka777
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxh6et
| false | null |
t3_1kxh6et
|
/r/LocalLLaMA/comments/1kxh6et/llm_on_the_go_hardware_question_noob/
| false | false |
self
| 1 | null |
VideoGameBench: Can Language Models play Video Games? (arXiv)
| 1 |
[removed]
| 2025-05-28T13:40:20 |
https://v.redd.it/yuiak0p00j3f1
|
ZhalexDev
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxhdec
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/yuiak0p00j3f1/DASHPlaylist.mpd?a=1751031635%2CZTg2NmMyOWE0NWU1NWM2NmEyZTA4ZGZlNGZhMmYzZGQ4NTI4MGZkNjM4MWU3ZTIxODlhNmNmMTFkOTEyMTYwYw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/yuiak0p00j3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/yuiak0p00j3f1/HLSPlaylist.m3u8?a=1751031635%2CNTY2NzQ2YjBhOWYxNzFmYjczNGRiYTZlNjhiMWM1NDRiNGQwYzQzYzVmOTY0NzA2MjliYTY5NThjMTZjYzc2OQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/yuiak0p00j3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kxhdec
|
/r/LocalLLaMA/comments/1kxhdec/videogamebench_can_language_models_play_video/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=108&crop=smart&format=pjpg&auto=webp&s=453fffc8cd31d5aaa228915c43810eece0e1fa23', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=216&crop=smart&format=pjpg&auto=webp&s=db88c8a508c0479c55162f73733639bf1e03402c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=320&crop=smart&format=pjpg&auto=webp&s=3dbe839ff79abdcf39c8d84eaa2c56277dcdcf37', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=640&crop=smart&format=pjpg&auto=webp&s=6edfa87c925ec9a73345527ca1230d67b74ec109', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=960&crop=smart&format=pjpg&auto=webp&s=4a323425bac53263e6bc1c7ba7a0f834358727f7', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=213fa06b8a884efb71ced6ec9ef40cc5d2cb5483', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cmZpemd6bzAwajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?format=pjpg&auto=webp&s=f00322fcec2a410e117433915140f0f089fb4839', 'width': 1920}, 'variants': {}}]}
|
|
VideoGameBench: Can Language Models play Video Games (arXiv release)
| 1 |
[removed]
| 2025-05-28T13:42:38 |
https://v.redd.it/16w87gp11j3f1
|
ZhalexDev
|
v.redd.it
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxhfb6
| false |
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/16w87gp11j3f1/DASHPlaylist.mpd?a=1751031773%2CNzZiNzkzMGMyOGRjZjc1YjdkNjE0ZDQxM2JiYjFmZTI3MmYwZTUxNjBkMDYyMzg3MTliN2M0ODQxYjdiOGFkZg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/16w87gp11j3f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/16w87gp11j3f1/HLSPlaylist.m3u8?a=1751031773%2CYTBkZmRhMGJjMzNjZmI0NzJlOTVkYjgxOGYwNDdmM2NiNTk3YTRhYjA0ZjI5ZTZiZjdjNmEwYTE3ZDRkMGViMg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/16w87gp11j3f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
|
t3_1kxhfb6
|
/r/LocalLLaMA/comments/1kxhfb6/videogamebench_can_language_models_play_video/
| false | false | 1 |
{'enabled': False, 'images': [{'id': 'N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=108&crop=smart&format=pjpg&auto=webp&s=039ab140f0b83e5e726a7cd4821fa6a329f10eed', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=216&crop=smart&format=pjpg&auto=webp&s=daba1fb40fd8c5ea2234661c8a9c9e16d8e38941', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=320&crop=smart&format=pjpg&auto=webp&s=54f7e7d2e026a1885b282572e24d4b06029ff82b', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=640&crop=smart&format=pjpg&auto=webp&s=712b4d9bf347e9518574477ecdce806e6d0174ed', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=960&crop=smart&format=pjpg&auto=webp&s=b6186205339995040ceb37079d921a3c9ce42116', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?width=1080&crop=smart&format=pjpg&auto=webp&s=caeff22d7ef2bfc719cfb23d7243b0c4ba9bbaba', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/N3U5bjlqcDExajNmMZ0udffztWdSVshPJLts_aHTpUjGuXCw85zjXQVQ2EeR.png?format=pjpg&auto=webp&s=7478abbb39bbea5ece643f9e82c9a3789517032c', 'width': 1920}, 'variants': {}}]}
|
|
VideoGameBench: full code + paper release
| 29 |
https://reddit.com/link/1kxhmgo/video/hzjtuzzr1j3f1/player
**VideoGameBench** evaluates VLMs on Game Boy and MS-DOS games given only raw screen input, just like how a human would play. The best model (Gemini) completes just 0.48% of the benchmark. We have a bunch of clips on the website:
[vgbench.com](http://vgbench.com)
[https://arxiv.org/abs/2505.18134](https://arxiv.org/abs/2505.18134)
[https://github.com/alexzhang13/vg-bench](https://github.com/alexzhang13/vg-bench)
Alex and I will stick around to answer questions here.
| 2025-05-28T13:51:11 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxhmgo/videogamebench_full_code_paper_release/
|
ofirpress
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxhmgo
| false | null |
t3_1kxhmgo
|
/r/LocalLLaMA/comments/1kxhmgo/videogamebench_full_code_paper_release/
| false | false |
self
| 29 | null |
Llama.cpp: Does it make sense to use a larger --n-predict (-n) than --ctx-size (-c)?
| 6 |
My setup: a reasoning model, e.g. Qwen3 32B at Q4KXL, plus 16k context. Those fit snugly in 24GB of VRAM.
Problem: Reasoning models, 1 time out of 3 in my use cases, keep thinking for longer than the 16k window, sometimes seemingly indefinitely. That's why I set the -n option slightly lower than -c, to account for my prompt.
Question: I can relax -n to perhaps 30k, which the reasoning models suggest. However, when -n is larger than -c, won't the context window shift and the response's relevance to my prompt start decreasing?
Thanks.
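For what it's worth, a rough way to reason about the trade-off: -c caps the live window (prompt plus recent output), while -n only caps how many new tokens get generated. Below is a minimal sketch of the arithmetic, assuming context shifting is left at the llama-server default; the helper function and the numbers are illustrative, not taken from an actual run.

```python
# Minimal sketch of how -c (--ctx-size) and -n (--n-predict) interact; the helper
# below is illustrative, not part of llama.cpp.
def generation_budget(ctx_size: int, prompt_tokens: int, n_predict: int) -> dict:
    """Estimate when generation starts pushing tokens out of the context window."""
    free_slots = ctx_size - prompt_tokens           # room left after the prompt
    overflow = max(0, n_predict - free_slots)       # tokens generated after the window is full
    return {
        "fits_without_shift": overflow == 0,
        "new_tokens_before_shift": min(n_predict, max(0, free_slots)),
        "new_tokens_while_shifting": overflow,      # during these, older tokens (prompt first) get evicted
    }

print(generation_budget(ctx_size=16384, prompt_tokens=1600, n_predict=30000))
# ~14.8k new tokens fit; the remaining ~15.2k are generated while the window
# shifts, so the model gradually loses sight of the original prompt.
```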
| 2025-05-28T14:15:58 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxi7qh/llamacpp_does_it_make_sense_to_use_a_larger/
|
ParaboloidalCrest
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxi7qh
| false | null |
t3_1kxi7qh
|
/r/LocalLLaMA/comments/1kxi7qh/llamacpp_does_it_make_sense_to_use_a_larger/
| false | false |
self
| 6 | null |
Llama.cpp won't use GPUs
| 0 |
So I recently downloaded an unsloth quant of DeepSeek R1 to test for the hell of it.
I downloaded the CUDA 12.x build of llama.cpp from the releases section of the GitHub repo.
I then launched the model through llama-server.exe, making sure to set --n-gpu-layers (or whatever it's called) to 14, since I have two 3090s and Unsloth said to use 7 for one GPU…
The llama server booted and claimed 14 layers were offloaded to the GPUs, but both GPUs' VRAM sat at 0 GB used… so it seems it's not actually loading onto them…
Is there something I am missing?
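In case it helps others hitting the same thing, here is a minimal sanity-check sketch (the model file name and layer count are placeholders, not a known-good config): launch the server with explicit offload, then poll VRAM with nvidia-smi; if usage stays near zero, the binary in use is almost certainly a CPU-only build.

```python
# Minimal sanity-check sketch; the model path is a placeholder.
import subprocess, time

# 1) Launch llama-server with explicit GPU offload (--n-gpu-layers / -ngl is the real flag).
server = subprocess.Popen([
    "llama-server.exe",
    "-m", "DeepSeek-R1-quant.gguf",   # placeholder file name
    "--n-gpu-layers", "14",
])

# 2) Give it time to load, then check whether either 3090 actually allocated VRAM.
time.sleep(60)
usage = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,memory.used", "--format=csv,noheader"],
    capture_output=True, text=True,
).stdout
print(usage)   # expect several GB per card once layers are offloaded

# If both cards stay near 0 MiB, the usual culprit is a CPU-only binary on PATH:
# the CUDA build prints something like "ggml_cuda_init: found 2 CUDA devices" at startup.
```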
| 2025-05-28T14:24:55 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxifq9/llamacpp_wont_use_gpus/
|
DeSibyl
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxifq9
| false | null |
t3_1kxifq9
|
/r/LocalLLaMA/comments/1kxifq9/llamacpp_wont_use_gpus/
| false | false |
self
| 0 | null |
Is slower, non-realtime inference cheaper?
| 3 |
Is there a service that can take in my requests and then give me the responses after A WHILE, like days later,
and that is significantly cheaper?
| 2025-05-28T14:53:00 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxj4ne/is_slower_inference_and_nonrealtime_cheaper/
|
AryanEmbered
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxj4ne
| false | null |
t3_1kxj4ne
|
/r/LocalLLaMA/comments/1kxj4ne/is_slower_inference_and_nonrealtime_cheaper/
| false | false |
self
| 3 | null |
QwQ 32B is Amazing (& Sharing my 131k + Imatrix)
| 141 |
I'm curious what your experience has been with QwQ 32B. I've seen really good takes on QwQ vs Qwen3, but I think they're not comparable. Here's the differences I see and I'd love feedback.
# When To Use Qwen3
If I had to choose between QwQ 32B and Qwen3 for daily AI assistant tasks, I'd choose Qwen3. This is because for 99% of general questions or work, Qwen3 is faster, answers just as well, and does an amazing job. QwQ 32B will do just as well, but it'll often overthink and spend much longer answering any question.
# When To Use QwQ 32B
Now, for an AI agent or orchestration-level work, I would choose QwQ all day, every day. It's not that Qwen3 is bad, but it cannot handle the same level of semantic orchestration. In fact, ChatGPT 4o can't keep up with what I'm pushing QwQ to do.
# Benchmarks
[Simulation Fidelity Benchmark](https://huggingface.co/datasets/magiccodingman/QwQ-32B-abliterated-131k-GGUF-Yarn-Imatrix/blob/main/Benchmarks/Simulation%20Fidelity%20Benchmark.md) is something I created a long time ago. Firstly, I love RP-based, D&D-inspired AI simulated games, but I've always hated how current AI systems make me the driver without any gravity: anything and everything I say goes. So years ago I made a benchmark meant to better enforce simulated gravity. And as I eventually built agents that do real-world tasks, this test turned out, funnily enough, to be an amazing benchmark for everything. I know it seems like an odd thing to use, but it's been a fantastic way for me to gauge the wisdom of an AI model. I've often valued wisdom over intelligence: it's not about an AI knowing the capital of some random country, it's about knowing when to Google the capital of that country. [Benchmark Tests](https://huggingface.co/datasets/magiccodingman/QwQ-32B-abliterated-131k-GGUF-Yarn-Imatrix/tree/main/Benchmarks) are here, and if more details on inputs or anything else are wanted, I'm more than happy to share. My system prompt was counted with the GPT-4 token counter (because I'm lazy) and came to \~6k tokens; the input was \~1.6k. The benchmarks shown are the end results, but I ran tests ranging from \~16k to \~40k total tokens. I don't have the hardware to test further, sadly.
# My Experience With QwQ 32B
So, what am I doing? Why do I like QwQ? Because it's not just emulating a good story, it's remembering many dozens of semantic threads. Did an item get moved? Is the scene changing? Did the last result from context require memory changes? Does the current context provide sufficient information, or does the custom RAG database need to be called with an optimized query based on the metadata tags provided?
Oh, I'm just getting started, but I've been pushing QwQ to the absolute edge. For AI agents, whether they're acting as the dungeon master of a game, creating projects, doing research, or anything else, a single missed step is catastrophic to the simulated reality, and missed context leads to semantic degradation over time. My agents have to consistently alter what they remember and know, and since I have limited context, each run must always tell the future version of itself what to do for the next part of the process.
Qwen3, Gemma, GPT-4o, they do amazingly, to a point, but they're trained to be assistants. QwQ 32B is weird, incredibly weird, the kind of weird I love. It's an agent-level battle tactician. I'm allowing my agent to constantly rewrite its own system prompts (partially) and giving it full access to grab or alter its own short-term and long-term memory, and it's not missing a beat.
The perfection is what makes QwQ so very good. Near perfection is required when doing wisdom based AI agent tasks.
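To make that memory-rewriting loop concrete, here is a purely illustrative sketch of its shape; the names, the "## NEXT RUN" marker convention, and the model call are all stand-ins, not my real code.

```python
# Illustrative only: an agent turn that answers, updates tagged memory, and
# partially rewrites its own system prompt for the next run.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    system_prompt: str                              # ends with a "## NEXT RUN" section it may rewrite
    short_term: list = field(default_factory=list)  # rolling scene memory
    long_term: dict = field(default_factory=dict)   # keyed by metadata tag (the RAG stand-in)

def run_turn(state: AgentState, user_input: str, llm) -> str:
    # Ask the model for a reply plus explicit memory/prompt edits in one structured response.
    reply = llm(state.system_prompt, state.short_term[-20:], user_input)

    # Apply requested memory updates (moved items, scene changes, etc.).
    state.long_term.update(reply.get("memory_updates", {}))

    # Let the model rewrite only the section reserved for its future self.
    if reply.get("next_run_instructions"):
        base = state.system_prompt.split("## NEXT RUN")[0]
        state.system_prompt = base + "## NEXT RUN\n" + reply["next_run_instructions"]

    state.short_term += [user_input, reply["text"]]
    return reply["text"]

if __name__ == "__main__":
    def fake_llm(system, recent, user):   # stand-in for a real QwQ call
        return {"text": "The door creaks open.",
                "memory_updates": {"cellar_door": "open"},
                "next_run_instructions": "Remember the cellar door is open."}
    state = AgentState(system_prompt="You are the dungeon master.\n## NEXT RUN\n")
    print(run_turn(state, "I open the cellar door.", fake_llm))
```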
# QwQ-32B-Abliterated-131k-GGUF-Yarn-Imatrix
I've enjoyed QwQ 32B so much that I made my own version. Note, this isn't a fine tune or anything like that, but my own custom GGUF converted version to run on llama.cpp. But I did do the following:
1.) Altered the llama.cpp conversion script to add YaRN metadata tags. (TLDR, unlocked the normal 8k precision but can handle \~32k to 131,072 tokens)
2.) Utilized a hybrid FP16 process with all quants with embed, output, all 64 layers (attention/feed forward weights + bias).
3.) Q4 to Q6 were all created with a \~16M token imatrix to make them significantly better and bring the level of precision much closer to Q8. (Q8 excluded, reasons in repo).
The repo is here:
[https://huggingface.co/datasets/magiccodingman/QwQ-32B-abliterated-131k-GGUF-Yarn-Imatrix](https://huggingface.co/datasets/magiccodingman/QwQ-32B-abliterated-131k-GGUF-Yarn-Imatrix)
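For anyone who wants a rough picture of points 1 and 3 without digging through the repo, stock llama.cpp ships tooling that covers similar ground. The sketch below is not the exact pipeline used for these quants; the file names, the calibration corpus, and the 32k native-window value are assumptions.

```python
# Rough sketch using stock llama.cpp tools; paths are placeholders and flag sets
# can differ between llama.cpp releases.
import subprocess

# Point 3: build an importance matrix from a calibration corpus, then quantize with it.
subprocess.run(["llama-imatrix", "-m", "qwq-32b-f16.gguf",
                "-f", "calibration.txt", "-o", "imatrix.dat"], check=True)
subprocess.run(["llama-quantize", "--imatrix", "imatrix.dat",
                "qwq-32b-f16.gguf", "qwq-32b-Q5_K_M.gguf", "Q5_K_M"], check=True)

# Point 1, the runtime alternative to baking YaRN metadata into the GGUF:
# ask llama-server for YaRN scaling at load time (32768 assumes the model's native window).
subprocess.run(["llama-server", "-m", "qwq-32b-Q5_K_M.gguf",
                "--rope-scaling", "yarn", "--yarn-orig-ctx", "32768",
                "-c", "131072"], check=True)
```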
# Have You Really Used QwQ?
I've had a fantastic time with QwQ 32B so far. When I say that Qwen3 and other models can't keep up, I've genuinely tried to put each in an environment to compete on equal footing. It's not that everything else was "bad"; it just wasn't as perfect as QwQ. But I'd also love feedback.
I'm more than open to being wrong and hearing why. Is Qwen3 able to hit just as hard? Note I did utilize Qwen3 of all sizes plus think mode.
But I've just been incredibly happy to use QwQ 32B because it's the first model that's open source and something I can run locally that can perform the tasks I want. So far any API based models to do the tasks I wanted would cost \~$1k minimum a month, so it's really amazing to be able to finally run something this good locally.
If I could get just as much power with a faster, more efficient, or smaller model, that'd be amazing. But, I can't find it.
| 2025-05-28T15:00:38 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxjbb5/qwq_32b_is_amazing_sharing_my_131k_imatrix/
|
crossivejoker
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxjbb5
| false | null |
t3_1kxjbb5
|
/r/LocalLLaMA/comments/1kxjbb5/qwq_32b_is_amazing_sharing_my_131k_imatrix/
| false | false |
self
| 141 |
{'enabled': False, 'images': [{'id': '0AGVID46IoyBFfXi_I1ft4PTmcq0SBDpTWIhAApEH0s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?width=108&crop=smart&auto=webp&s=ace50ae5421a0fa349e9031eeebbedf5d9fec0c0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?width=216&crop=smart&auto=webp&s=e334856a24d38ba258b79e64d299cad77cb9d29c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?width=320&crop=smart&auto=webp&s=0f1220bdc6e1d8bf08ad49e01ae10c88e29e351e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?width=640&crop=smart&auto=webp&s=a80a80425af0535766ec8408d29d82eb647e62d5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?width=960&crop=smart&auto=webp&s=9abd92baa2896b2ff2e6cd0ce4890a345c30365a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?width=1080&crop=smart&auto=webp&s=88754ce04b3f5518760293c9f00bd2f9b33d5d50', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Ytxc-xelT6_LJP3XZ2AYDHZVaynypLwEGtX8q6e6SD4.jpg?auto=webp&s=f34fe17eeb26c1da20ae8c6c10e98b6999779434', 'width': 1200}, 'variants': {}}]}
|
Thoughts on which open source is best for what use-cases
| 2 |
Wondering if there is any work done/being done to 'pick' open source models for behavior based use-cases. For example: Which open source model is good for sentiment analysis, which model is good for emotion analysis, which model is good for innovation (generating newer ideas), which model is good for anomaly detection etc.
I have just generated sample behaviors mimicking human behavior. If there is similar work done with another similar objective, please feel free to share.
Thanks!!
| 2025-05-28T15:10:53 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxjl07/thoughts_on_which_open_source_is_best_for_what/
|
tazzspice
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxjl07
| false | null |
t3_1kxjl07
|
/r/LocalLLaMA/comments/1kxjl07/thoughts_on_which_open_source_is_best_for_what/
| false | false |
self
| 2 | null |
Dual RTX 3090 users (are there many of us?)
| 21 |
What is your TDP? (Or your optimal clock speeds?)
What are your PCIe lane speeds?
Power supply?
Planning to upgrade, or sell before prices drop?
Any other remarks?
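For context when comparing TDP answers, capping power or locking clocks is usually done through nvidia-smi. A minimal sketch follows; the 275 W and clock values are placeholders rather than recommendations, and the commands typically need admin/root privileges.

```python
# Example only: cap both 3090s at 275 W and lock core clocks; values are placeholders.
import subprocess

for gpu in (0, 1):
    subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", "275"], check=True)        # power limit, watts
    subprocess.run(["nvidia-smi", "-i", str(gpu), "-lgc", "210,1695"], check=True)  # min,max core clock, MHz

# Confirm what actually got applied.
print(subprocess.run(["nvidia-smi", "--query-gpu=index,power.limit,clocks.sm",
                      "--format=csv"], capture_output=True, text=True).stdout)
```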
| 2025-05-28T15:30:48 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxk2zf/dual_rtx_3090_users_are_there_many_of_us/
|
StandardLovers
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxk2zf
| false | null |
t3_1kxk2zf
|
/r/LocalLLaMA/comments/1kxk2zf/dual_rtx_3090_users_are_there_many_of_us/
| false | false |
self
| 21 | null |
Another reorg for Meta Llama: AGI team created
| 39 |
Which teams are going to get the most GPUs?
[https://www.axios.com/2025/05/27/meta-ai-restructure-2025-agi-llama](https://www.axios.com/2025/05/27/meta-ai-restructure-2025-agi-llama)
Llama team divided into two teams: an AI products team and an AGI Foundations unit.
The AI products team will be responsible for the Meta AI assistant, Meta's AI Studio and AI features within Facebook, Instagram and WhatsApp.
The AGI Foundations unit will cover a range of technologies, including the company's **Llama models**, as well as efforts to improve capabilities in reasoning, multimedia and voice.
The company's AI research unit, known as FAIR (Fundamental AI Research), remains separate from the new organizational structure, though one specific team working on multimedia is moving to the new AGI Foundations team.
Meta hopes that splitting a single large organization into smaller teams will speed product development and give the company more flexibility as it adds additional technical leaders.
The company is also [seeing key talent depart](https://www.businessinsider.com/meta-llama-ai-talent-mistral-2025-5), including to French rival Mistral, as reported by Business Insider.
| 2025-05-28T15:31:29 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxk3lk/another_reorg_for_meta_llama_agi_team_created/
|
Terminator857
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxk3lk
| false | null |
t3_1kxk3lk
|
/r/LocalLLaMA/comments/1kxk3lk/another_reorg_for_meta_llama_agi_team_created/
| false | false |
self
| 39 |
{'enabled': False, 'images': [{'id': 'hvqgHBjFtTay3NZyhRkPCn_2z-518HI17PkLVORa898', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?width=108&crop=smart&auto=webp&s=d4dc40fd932667fe6cd956ce919f6ae5b010a7ac', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?width=216&crop=smart&auto=webp&s=f5f476bcb43a5876675093eaa55029954fe98a22', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?width=320&crop=smart&auto=webp&s=511bc02bac0999dc80d3eb95ddeac85c66cc9224', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?width=640&crop=smart&auto=webp&s=ef08722322465f331668b63eb02bc589ea8836e7', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?width=960&crop=smart&auto=webp&s=f4039adbccf9e746d3d48e083ea9098abfa6f51c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?width=1080&crop=smart&auto=webp&s=81e88103f2147ef06c06459afd652b4d49f36895', 'width': 1080}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/IbHRuWvtpfBCQoFRlxyxFQrSlpBeTjpMfaYH3WusCvs.jpg?auto=webp&s=f638088403b312dd9e40f9cce0b4e65d897cd469', 'width': 1366}, 'variants': {}}]}
|
Running LLMs Locally (using llama.cpp, Ollama, Docker Runner Model, and vLLM)
| 1 |
[removed]
| 2025-05-28T15:42:19 |
https://www.reddit.com/r/LocalLLaMA/comments/1kxkdcv/running_llms_locally_using_llamacpp_ollama_docker/
|
Gvara
|
self.LocalLLaMA
| 1970-01-01T00:00:00 | 0 |
{}
|
1kxkdcv
| false | null |
t3_1kxkdcv
|
/r/LocalLLaMA/comments/1kxkdcv/running_llms_locally_using_llamacpp_ollama_docker/
| false | false |
self
| 1 |
{'enabled': False, 'images': [{'id': 'nf1vfntDmlnVJqxHFe2djx5X6uwztCtSsbje7STTE0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?width=108&crop=smart&auto=webp&s=419ca509f80806dd0b1e360d256e1d848ea9a438', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?width=216&crop=smart&auto=webp&s=0a731cf62938b6524a0f8cda651853fb55c05283', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?width=320&crop=smart&auto=webp&s=bff565f518e9aff9787b530afad3e8f328f237eb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?width=640&crop=smart&auto=webp&s=6d7fc3ebbec0f55f825aded9ea480aacd1943f0c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?width=960&crop=smart&auto=webp&s=d4ff4698e3cf847bb96e963ce11164617a1cd7a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?width=1080&crop=smart&auto=webp&s=01633c35bbff3f03a5889c031677304c23db6928', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wmm93anfij2PJBeK-OhX7s7HbHrFZZTSJ6_riS1E0f4.jpg?auto=webp&s=34926518a6689e02a34fab49e4678fc75de21fd1', 'width': 1200}, 'variants': {}}]}
|