Dataset columns (name, type, observed range):

| column | type | range / length |
| --- | --- | --- |
| title | string | length 1–300 |
| score | int64 | 0–8.54k |
| selftext | string | length 0–40k |
| created | timestamp[ns] | 2023-04-01 04:30:41 to 2025-06-30 03:16:29 |
| url | string | length 0–878 |
| author | string | length 3–20 |
| domain | string | length 0–82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 to 2025-06-26 17:30:18 |
| gilded | int64 | 0–2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | length 646–1.8k |
| name | string | length 10 |
| permalink | string | length 33–82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | length 4–213 |
| ups | int64 | 0–8.54k |
| preview | string | length 301–5.01k |
Can I run a higher parameter model?
0
With my current setup I am able to run the DeepSeek R1 0528 Qwen 8B model at about 12 tokens/second. Can I move up to a higher-parameter model, or will I be getting 0.5 tokens/second?
2025-06-18T02:45:50
https://www.reddit.com/r/LocalLLaMA/comments/1le68fs/can_i_run_a_higher_parameter_model/
Ok_Most9659
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le68fs
false
null
t3_1le68fs
/r/LocalLLaMA/comments/1le68fs/can_i_run_a_higher_parameter_model/
false
false
self
0
null
What's your analysis of unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF locally
21
It's been almost 20 days since the release. I'm considering buying an RTX 5090-based PC this winter to run the BF16 or Q8_K_XL Unsloth version. My main use cases are document processing, summarization (context length will not be an issue since I'm using a chunking algorithm for shorter chunks) and trading. Does it live up to its benchmark results?
2025-06-18T02:47:47
https://www.reddit.com/r/LocalLLaMA/comments/1le69tx/whats_your_analysis_of/
ready_to_fuck_yeahh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le69tx
false
null
t3_1le69tx
/r/LocalLLaMA/comments/1le69tx/whats_your_analysis_of/
false
false
self
21
null
Is it possible to run a model with multiple GPUs, and would that be much more powerful?
0
Is it possible to run a model with multiple GPUs, and would that be much more powerful?
2025-06-18T04:15:37
https://www.reddit.com/r/LocalLLaMA/comments/1le7wig/is_it_possible_to_run_a_model_with_multiple_gpus/
0y0s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le7wig
false
null
t3_1le7wig
/r/LocalLLaMA/comments/1le7wig/is_it_possible_to_run_a_model_with_multiple_gpus/
false
false
self
0
null
Testing the limits of base Apple silicon.
4
I have an old M1 Mac with 8GB RAM. If anyone has tested its limits, how far were you able to go with reasonable performance? I also discovered MLX fine-tuning specifically for Mac, but I am unsure if I will be able to run it. I was able to run Qwen 3B on it; with some spikes in usage it was okay-ish. I wonder if any specific model has been well optimised for Apple silicon.
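For reference, a minimal sketch of trying a small model through the mlx-lm package on a machine like this; the 4-bit repo id is a placeholder example, not a recommendation, and memory headroom on 8GB is the real constraint.

```python
# Sketch: run a small 4-bit model with mlx-lm on Apple silicon.
# Assumes `pip install mlx-lm`; the model repo id below is a placeholder.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-3B-Instruct-4bit")  # placeholder repo id
print(generate(model, tokenizer, prompt="Explain the KV cache in one sentence.", max_tokens=100))
```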
2025-06-18T04:28:02
https://www.reddit.com/r/LocalLLaMA/comments/1le84f5/testing_the_limits_of_base_apple_silicon/
ILoveMy2Balls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le84f5
false
null
t3_1le84f5
/r/LocalLLaMA/comments/1le84f5/testing_the_limits_of_base_apple_silicon/
false
false
self
4
null
Please recommend "World knowledge rich" model
1
[removed]
2025-06-18T04:46:03
https://www.reddit.com/r/LocalLLaMA/comments/1le8fff/please_recommend_world_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le8fff
false
null
t3_1le8fff
/r/LocalLLaMA/comments/1le8fff/please_recommend_world_knowledge_rich_model/
false
false
self
1
null
Searching for world knowledge rich model
1
[removed]
2025-06-18T04:50:23
https://www.reddit.com/r/LocalLLaMA/comments/1le8i54/searching_for_world_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le8i54
false
null
t3_1le8i54
/r/LocalLLaMA/comments/1le8i54/searching_for_world_knowledge_rich_model/
false
false
self
1
null
Need advice on a knowledge-rich model
1
[removed]
2025-06-18T04:57:15
https://www.reddit.com/r/LocalLLaMA/comments/1le8mdg/need_an_advice_for_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le8mdg
false
null
t3_1le8mdg
/r/LocalLLaMA/comments/1le8mdg/need_an_advice_for_knowledge_rich_model/
false
false
self
1
null
GMK X2 (AMD Max+ 395 w/128GB) first impressions.
90
I've had an X2 for about a day. These are my first impressions of it, including a bunch of numbers comparing it to other GPUs I have.

First, the people who were claiming that you couldn't load a model larger than 64GB because it would need to use 64GB of RAM for the CPU too are wrong. That's simple user error; it is simply not the case. Second, the GPU can use 120W. It does that when doing PP. Unfortunately, TG seems to be memory-bandwidth limited, and when doing that the GPU sits at around 89W. Third, as delivered, the BIOS was not capable of allocating more than 64GB to the GPU on my 128GB machine. It needs a BIOS update, and GMK should at least send an email pointing to the correct BIOS to use. I first tried the one linked on the GMK store page. That updated me to what it claimed was the required version, 1.04 from 5/12 or later, but that didn't do the job: the BIOS was dated 5/12 and I still couldn't allocate more than 64GB to the GPU. So I dug around the GMK website and found a link to a different BIOS. It is also version 1.04 but is dated 5/14. That one worked. It took forever to flash compared to the first one and took forever to reboot, twice as it turns out. There was no video signal for what felt like a long time, although it was probably only about a minute or so. It finally showed the GMK logo, only to restart again with another wait. The second time it booted back up to Windows, and this time I could set the VRAM allocation to 96GB.

Overall, it's as I expected. So far, it's like my M1 Max with 96GB, but with about 3x the PP speed. It strangely uses more than a bit of "shared memory" for the GPU as opposed to "dedicated memory" - GBs worth. Normally that would make me believe it's slowing things down, but on this machine the "shared" and "dedicated" RAM are the same, although it's probably less efficient to go through the shared stack. I wish there were a way to turn off shared memory for a GPU in Windows; it can be done in Linux.

Here are a bunch of numbers. First for a small LLM that I can fit onto a 3060 12GB, then successively bigger from there.
**9B**

**Max+**

| model | size | params | backend | ngl | mmap | test | t/s |
| --- | ---: | ---: | --- | --: | --: | --- | ---: |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 99 | 0 | pp512 | 923.76 ± 2.45 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 99 | 0 | tg128 | 21.22 ± 0.03 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 99 | 0 | pp512 @ d5000 | 486.25 ± 1.08 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 99 | 0 | tg128 @ d5000 | 12.31 ± 0.04 |

**M1 Max**

| model | size | params | backend | threads | mmap | test | t/s |
| --- | ---: | ---: | --- | --: | --: | --- | ---: |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Metal,BLAS,RPC | 8 | 0 | pp512 | 335.93 ± 0.22 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Metal,BLAS,RPC | 8 | 0 | tg128 | 28.08 ± 0.02 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Metal,BLAS,RPC | 8 | 0 | pp512 @ d5000 | 262.21 ± 0.15 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Metal,BLAS,RPC | 8 | 0 | tg128 @ d5000 | 20.07 ± 0.01 |

**3060**

| model | size | params | backend | ngl | mmap | test | t/s |
| --- | ---: | ---: | --- | --: | --: | --- | ---: |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | pp512 | 951.23 ± 1.50 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | tg128 | 26.40 ± 0.12 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | pp512 @ d5000 | 545.49 ± 9.61 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | tg128 @ d5000 | 19.94 ± 0.01 |

**7900xtx**

| model | size | params | backend | ngl | mmap | test | t/s |
| --- | ---: | ---: | --- | --: | --: | --- | ---: |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | pp512 | 2164.10 ± 3.98 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | tg128 | 61.94 ± 0.20 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | pp512 @ d5000 | 1197.40 ± 4.75 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | Vulkan,RPC | 999 | 0 | tg128 @ d5000 | 44.51 ± 0.08 |

**Max+ CPU**

| model | size | params | backend | ngl | mmap | test | t/s |
| --- | ---: | ---: | --- | --: | --: | --- | ---: |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 0 | 0 | pp512 | 438.57 ± 3.88 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 0 | 0 | tg128 | 6.99 ± 0.01 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 0 | 0 | pp512 @ d5000 | 292.43 ± 0.30 |
| gemma2 9B Q8_0 | 9.15 GiB | 9.24 B | RPC,Vulkan | 0 | 0 | tg128 @ d5000 | 5.82 ± 0.01 |
2025-06-18T05:28:44
https://www.reddit.com/r/LocalLLaMA/comments/1le951x/gmk_x2amd_max_395_w128gb_first_impressions/
fallingdowndizzyvr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le951x
false
null
t3_1le951x
/r/LocalLLaMA/comments/1le951x/gmk_x2amd_max_395_w128gb_first_impressions/
false
false
self
90
null
Post Ego Intelligence AI starter kit
1
[removed]
2025-06-18T05:50:36
https://www.reddit.com/r/LocalLLaMA/comments/1le9hfi/post_ego_intelligence_ai_starter_kit/
Final_Growth_8288
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le9hfi
false
null
t3_1le9hfi
/r/LocalLLaMA/comments/1le9hfi/post_ego_intelligence_ai_starter_kit/
false
false
self
1
null
2xH100 vs 1xH200 for LLM fine-tuning - which is better?
1
[removed]
2025-06-18T06:12:55
https://www.reddit.com/r/LocalLLaMA/comments/1le9u0c/2xh100_vs_1xh200_for_llm_finetuning_which_is/
Significant_Income_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1le9u0c
false
null
t3_1le9u0c
/r/LocalLLaMA/comments/1le9u0c/2xh100_vs_1xh200_for_llm_finetuning_which_is/
false
false
self
1
null
What are folks' favorite base models for tuning right now?
11
I've got 2x3090 on the way and have some text corpuses I'm interested in fine-tuning some base models on. What are the current favorite base models, both for general purpose and for writing specifically, if there are any that excel? I'm currently looking at Gemma 2 9B or maybe Mistral Small 3.1 24B. I've got some relatively large datasets (terabytes of plaintext), so I want to start with something solid before I go burning days on the tuning. Any bleeding-edge favorites for creative work, or older models that have come out on top? Thanks for any tips!
2025-06-18T06:25:23
https://www.reddit.com/r/LocalLLaMA/comments/1lea11k/what_are_folks_favorite_base_models_for_tuning/
CharlesStross
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lea11k
false
null
t3_1lea11k
/r/LocalLLaMA/comments/1lea11k/what_are_folks_favorite_base_models_for_tuning/
false
false
self
11
null
Is CentML shutting down?
1
[removed]
2025-06-18T06:41:59
https://www.reddit.com/r/LocalLLaMA/comments/1lea9xk/is_centml_shutting_down/
Anis_Mekacher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lea9xk
false
null
t3_1lea9xk
/r/LocalLLaMA/comments/1lea9xk/is_centml_shutting_down/
false
false
https://a.thumbs.redditm…7JIP7ruRm4t8.jpg
1
null
Easily run multiple local llama.cpp servers with FlexLLama
20
Hi everyone. I've been working on a lightweight tool called **FlexLLama** that makes it really easy to run multiple llama.cpp instances locally. It's open-source and lets you run multiple llama.cpp models at once (even on different GPUs) and puts them all behind a single OpenAI-compatible API, so you never have to shut one down to use another (models are switched dynamically on the fly).

A few highlights:

* Spin up several llama.cpp servers at once and distribute them across different GPUs / CPU.
* Works with chat, completions, embeddings and reranking models.
* Comes with a web dashboard so you can see runner status and switch models on the fly.
* Supports automatic startup and dynamic model reloading, so it's easy to manage a fleet of models.

Here's the repo: https://github.com/yazon/flexllama

I'm open to any questions or feedback, let me know what you think.
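Since FlexLLama exposes an OpenAI-compatible API, any OpenAI client should be able to talk to it. A minimal sketch, assuming the server listens on localhost:8080 and a model alias of "qwen-7b" (both are assumptions; use whatever your FlexLLama config defines):

```python
# Sketch: call FlexLLama's OpenAI-compatible endpoint with the official openai client.
# Port and model alias are assumptions, not part of FlexLLama's defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="qwen-7b",  # FlexLLama routes this alias to the matching llama.cpp runner
    messages=[{"role": "user", "content": "Hello from FlexLLama!"}],
)
print(resp.choices[0].message.content)
```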
2025-06-18T06:57:55
https://www.reddit.com/r/LocalLLaMA/comments/1leaip7/easily_run_multiple_local_llamacpp_servers_with/
yazoniak
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leaip7
false
null
t3_1leaip7
/r/LocalLLaMA/comments/1leaip7/easily_run_multiple_local_llamacpp_servers_with/
false
false
self
20
{'enabled': False, 'images': [{'id': 'tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?width=108&crop=smart&auto=webp&s=a2119ef5dd658da93637d7b88d6e576bf05f7ed8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?width=216&crop=smart&auto=webp&s=a620616d14cb864ccaa0cd9c9b94011855664380', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?width=320&crop=smart&auto=webp&s=f54b163505cd82c98fef26c7834f34b6cec2f998', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?width=640&crop=smart&auto=webp&s=3f7b3d4e31fbdbcb27778d08464d0a08beb2dfe6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?width=960&crop=smart&auto=webp&s=73e4137c721317835a9262dcad068ad06ddcca91', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?width=1080&crop=smart&auto=webp&s=89b3fa8c2bf07877b16a417e21cfcdc392d9c9e9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tOnGQQVBUGYTjHWnaMDYXgUV2FJ7LJdXQlMueTBtftM.png?auto=webp&s=3e4e6f1dde6f9aa1e5ab4866b31d1853bc486b99', 'width': 1200}, 'variants': {}}]}
If NotebookLM were Agentic
12
Hi r/LocalLLaMA! https://reddit.com/link/1leamks/video/yak8abh4xm7f1/player At [Morphik](https://morphik.ai), we're dedicated to building the best RAG and document-processing systems in the world. Morphik works particularly well with visual data. As a challenge, I was trying to get it to solve a Where's Waldo puzzle. This led me down the agent rabbit hole and culminated in an agentic document viewer which can navigate the document, zoom into pages, and search/compile information exactly the way a human would. This is ideal for things like analyzing blueprints, hard-to-parse data sheets, or playing Where's Waldo :) In the demo below, I ask the agent to compile information across a 42-page 10-Q report from NVIDIA. Test it out [here](https://morphik.ai)! Soon, we'll be adding features to actually annotate the documents too - imagine filing your tax forms, legal docs, or entire applications with just a prompt. Would love your feedback, feature requests, suggestions, or comments below! As always, we're open source: [https://github.com/morphik-org/morphik-core](https://github.com/morphik-org/morphik-core) (Would love a ⭐️!) - [Morphik](https://morphik.ai) Team ❤️ PS: We got feedback to make our installation simpler, and it is one-click for all machines now!
2025-06-18T07:04:47
https://www.reddit.com/r/LocalLLaMA/comments/1leamks/if_notebooklm_were_agentic/
Advanced_Army4706
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leamks
false
null
t3_1leamks
/r/LocalLLaMA/comments/1leamks/if_notebooklm_were_agentic/
false
false
self
12
null
Need advice on a knowledge-rich model
5
First, I am a beginner in this field, and I understand that my assumptions may be completely wrong. I have been working in the business continuity field for companies, and I am trying to introduce an LLM to create business continuity plans (BCPs) for existing important customers to prepare for various risks, such as natural disasters, accidents, or financial crises. After some testing, I concluded that only Gemini 2.5 Pro possesses the level of knowledge and creativity required by our clients. Unfortunately, the company does not permit the use of online models due to compliance issues. Instead, I have been doing continued pretraining or fine-tuning of open models using the data I have, and while the latest models are excellent at solving STEM problems or Python coding, I have found that they lack world knowledge, at least in the areas I am interested in. (There are a few good articles related to this here.) Anyway, I would appreciate it if you could recommend any models I could test. It should be smaller than DeepSeek R1. It would be great if it could be easily fine-tuned using Unsloth or Llama Factory. (Nemotron Ultra was a great candidate, but I couldn't load the 35th tensor in PyTorch.) I'm planning to try Q4 quants at the 70B-200B level. Any advice would be appreciated.
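For context, a minimal sketch of how a candidate model could be loaded for continued pretraining or LoRA fine-tuning with Unsloth; the model name is a placeholder and 4-bit loading is assumed purely to fit in VRAM, not a recommendation for any particular checkpoint.

```python
# Sketch: load a candidate model with Unsloth for continued pretraining / LoRA tuning.
# The model name is a placeholder; 4-bit loading assumed to fit available VRAM.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.3-70B-Instruct",  # placeholder candidate
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attach LoRA adapters for training
)
```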
2025-06-18T07:30:43
https://www.reddit.com/r/LocalLLaMA/comments/1leb0mq/need_an_advice_for_knowledge_rich_model/
Desperate-Sir-5088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leb0mq
false
null
t3_1leb0mq
/r/LocalLLaMA/comments/1leb0mq/need_an_advice_for_knowledge_rich_model/
false
false
self
5
null
Choosing between two H100 vs one H200
3
I’m new to hardware and was asked by my employer to research whether using two NVIDIA H100 GPUs or one H200 GPU is better for fine-tuning large language models. I’ve heard some libraries, like Unsloth, aren’t fully ready for multi-GPU setups, and I’m not sure how challenging it is to effectively use multiple GPUs. If you have any easy-to-understand advice or experiences about which option is more powerful and easier to work with for fine-tuning LLMs, I’d really appreciate it. Thanks so much!
2025-06-18T07:49:38
https://www.reddit.com/r/LocalLLaMA/comments/1lebaf0/choosing_between_two_h100_vs_one_h200/
Significant_Income_1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lebaf0
false
null
t3_1lebaf0
/r/LocalLLaMA/comments/1lebaf0/choosing_between_two_h100_vs_one_h200/
false
false
self
3
null
Looking for a stack to serve local models as parallel concurrent async requests with multiple workers on a FastAPI server.
1
Hello, I'm building a system to serve multiple models (LLMs like Gemma 12B-IT, Faster Whisper for speech-to-text, and Kokoro for text-to-speech) on one or multiple GPUs, aiming for **parallel concurrent async requests** with **multiple workers**. I've researched vLLM, llama.cpp, and Triton Inference Server and want to confirm whether what I have in mind will work.

# My Plan

* **FastAPI**: For async API endpoints to handle concurrent requests (a minimal sketch of this layer is at the bottom of the post). Using aiohttp, though I'm not sure it's needed with Triton.
* **Uvicorn + Gunicorn**: To run FastAPI with multiple workers for parallelism across CPU cores.
* **Triton Inference Server**: To serve models efficiently:
  * **vLLM backend** for LLMs (e.g., Gemma 12B-IT) for high-throughput inference.
  * **CTranslate2 backend** for Faster Whisper (speech-to-text).
* **Async gRPC**: To connect FastAPI to Triton without blocking the async event loop. I just read about it; I'm not sure whether I need this or Celery.

# Questions

1. I plan to first add async using aiohttp, since plain requests with async doesn't work, of course. Then Dockerize vLLM with parallelism, and then add Triton, since I heard it takes the most time and is hard to handle. Is this a good plan, or should I prepare Dockers for each model first? I'm also not sure whether I will need to rewrite the model servers to be async for this to work correctly.
2. Is this stack (FastAPI + Uvicorn/Gunicorn + Triton with vLLM/CTranslate2) the best for serving mixed models with high concurrency?
3. Has anyone used vLLM directly in FastAPI vs. via Triton? Any pros/cons?
4. Any tips for optimizing GPU memory usage or scaling workers for high request loads?
5. For models like Faster Whisper, is Triton's CTranslate2 backend the way to go, or are there better alternatives?

# My Setup

* Hardware: One or multiple GPUs (NVIDIA).
* Models: Gemma 12B-IT, Faster Whisper, Hugging Face models, Kokoro TTS.
* Goal: High-throughput, low-latency serving with async and parallel processing.
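A minimal sketch of the async FastAPI front end described above; the backend URL and payload shape are placeholders for whatever vLLM/Triton endpoint ends up serving the model.

```python
# Sketch: async FastAPI layer forwarding requests to a GPU inference backend.
# BACKEND_URL and the payload shape are placeholders, not a specific server's API.
import aiohttp
from fastapi import FastAPI

app = FastAPI()
BACKEND_URL = "http://localhost:8001/v1/chat/completions"  # placeholder backend endpoint

@app.post("/generate")
async def generate(payload: dict):
    # aiohttp keeps the event loop free while the GPU backend works,
    # so one worker can hold many in-flight requests.
    async with aiohttp.ClientSession() as session:
        async with session.post(BACKEND_URL, json=payload) as resp:
            return await resp.json()

# Run with multiple workers for CPU-side parallelism, e.g.:
#   gunicorn -k uvicorn.workers.UvicornWorker -w 4 app:app
```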
2025-06-18T09:03:42
https://www.reddit.com/r/LocalLLaMA/comments/1leccts/looking_for_a_stack_to_serve_local_models_as/
SomeRandomGuuuuuuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leccts
false
null
t3_1leccts
/r/LocalLLaMA/comments/1leccts/looking_for_a_stack_to_serve_local_models_as/
false
false
self
1
null
Understand block diagrams
2
I have documents with lots of block diagrams (A is connected to B, that sort of thing). Llama does understand the text but struggles with extracting the arrow connections; Gemini Pro seems to be better, though. I have tried some vision models as well, but the performance is not what I expected. Which model would you recommend for this task?
2025-06-18T09:19:42
https://www.reddit.com/r/LocalLLaMA/comments/1leclef/understand_block_diagrams/
SathukaBootham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leclef
false
null
t3_1leclef
/r/LocalLLaMA/comments/1leclef/understand_block_diagrams/
false
false
self
2
null
NVIDIA B300 cut all INT8 and FP64 performance???
51
[https://www.nvidia.com/en-us/data-center/hgx/](https://www.nvidia.com/en-us/data-center/hgx/)
2025-06-18T09:27:17
https://i.redd.it/cekoaeehmn7f1.png
Mindless_Pain1860
i.redd.it
1970-01-01T00:00:00
0
{}
1lecpcr
false
null
t3_1lecpcr
/r/LocalLLaMA/comments/1lecpcr/nvidia_b300_cut_all_int8_and_fp64_performance/
false
false
https://external-preview…c379b5e0c384c5ec
51
{'enabled': True, 'images': [{'id': 'mtoim3aNzp-GedzPJXgJ8e-TiOtDucitFLyUMMG-OEo', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?width=108&crop=smart&auto=webp&s=088fb40859aed95536dd000460af67955860d19d', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?width=216&crop=smart&auto=webp&s=68979b4decc056715e01cb65bc47de8fcded05ad', 'width': 216}, {'height': 171, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?width=320&crop=smart&auto=webp&s=55fb2a97ac6fe8825465560b8f8d21e9c1536826', 'width': 320}, {'height': 343, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?width=640&crop=smart&auto=webp&s=2334c9b7ee829fa01a3d12ddcf78fd5d800909f0', 'width': 640}, {'height': 515, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?width=960&crop=smart&auto=webp&s=b48a87998d94fbb047a85b55ac9096ea510c823a', 'width': 960}, {'height': 579, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?width=1080&crop=smart&auto=webp&s=a54739bd648df6a9d4dfcd1c05e31010b65b1140', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://preview.redd.it/cekoaeehmn7f1.png?auto=webp&s=190677ca73784259459c451fbd0aa98d09738adb', 'width': 1901}, 'variants': {}}]}
Best model for scraping and de-conjugating and translating Hebrew words out of texts? Basically generating a vocab list.
2
"De-conjugating" is a hard thing to explain without an example, but in English, it's like getting the word "walk" out of an input of "walked" or "walking." I've been using ChatGPT o3 for this and it works fine (according to an native speaker who checked the translations) but I want something more automated because I have a lot of texts to look at. I'm trying to extract nouns, verbs, adjectives, and other expressions out of 4-10 minute transcripts of lectures. I don't want to use the ChatGPT API because I presume it'll be quite expensive. And I'm pretty sure that I can program a simple method to keep track of which words have appeared in previous lectures so that it's not giving me the same words over and over again just because it appears in multiple lectures. I can't do that with ChatGPT, I think. ps: If it can add the vowel markings, that'll be great.
2025-06-18T09:28:00
https://www.reddit.com/r/LocalLLaMA/comments/1lecppd/best_model_for_scraping_and_deconjugating_and/
vardonir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lecppd
false
null
t3_1lecppd
/r/LocalLLaMA/comments/1lecppd/best_model_for_scraping_and_deconjugating_and/
false
false
self
2
null
Google doubled the price of Gemini 2.5 Flash thinking output after GA from 0.15 to 0.30 what
213
https://cloud.google.com/vertex-ai/generative-ai/pricing
2025-06-18T09:48:16
https://www.reddit.com/r/LocalLLaMA/comments/1led0lb/google_doubled_the_price_of_gemini_25_flash/
NoAd2240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1led0lb
false
null
t3_1led0lb
/r/LocalLLaMA/comments/1led0lb/google_doubled_the_price_of_gemini_25_flash/
false
false
self
213
{'enabled': False, 'images': [{'id': 'DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?width=108&crop=smart&auto=webp&s=4d0406250101bf7b77173aee1f071f40049a1cf3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?width=216&crop=smart&auto=webp&s=d0cc379554f631fbe2788f2eef37373d9db0dd2b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?width=320&crop=smart&auto=webp&s=fd7b6ea44cc34201d27ed7803c57843a5693b4f0', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?width=640&crop=smart&auto=webp&s=635431c5fe10b3d4e477123496ecb5d7f8ce7c83', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?width=960&crop=smart&auto=webp&s=e288263b99e04512c4b124b01fd98e12c486c71e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?width=1080&crop=smart&auto=webp&s=bfaa27b7f76b5019a3bdfb829d3e704916b1b580', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/DsiOIzUSicS_9zIKwMDQbNT2LOE1o29sSYs49HAmO_k.png?auto=webp&s=2546d5b344a8192fccf94a216183ec43508d3080', 'width': 1200}, 'variants': {}}]}
Local AI for a small/medium accounting firm - Budget of €10k-25k
90
Our medium-sized **accounting firm** (around 100 people) in the **Netherlands** is looking to set up a local AI system, and I'm hoping to tap into your collective wisdom for some recommendations. The **budget** is roughly **€10k-€25k**, purely for the hardware. I'll be able to build the system myself, and I'll also handle the software side. I don't have a lot of experience actually running local models, but I do spend a lot of my free time watching videos about it.

We're going local for privacy. Keeping sensitive client data in-house is paramount; my boss does not want anything going to the cloud.

Some more info about the use cases I had in mind:

* **RAG system** for professional questions about Dutch accounting standards and laws.
* **Analyzing and summarizing** various files like contracts, invoices, emails, Excel sheets, Word files and PDFs.
* Developing **AI agents** for more advanced task automation.
* **Coding assistance** for our data analyst (mainly in Python).

I'm looking for broad advice on:

# Hardware

* Go with a **CPU-based** or **GPU-based** setup?
* If I go with GPUs, should I go with a couple of consumer GPUs like 3090s/4090s, or maybe a single Pro 6000? Why pick one over the other (cost, obviously)?

# Software

* **Operating System:** Is Linux still the go-to for optimal AI performance and compatibility with frameworks?
* **Local AI Models (LLMs):** What LLMs are generally recommended for a mix of RAG, summarization, agentic workflows, and coding? Or should I consider running multiple models? I've read some positive reviews about Qwen3 235B. Can I even run a model like that with reasonable tps within this budget? Probably not the full 235B variant?
* **Inference Software:** What are the best tools for running open-source LLMs locally, from user-friendly options for beginners to high-performance frameworks for scaling?
* **Supporting Software:** What recommendations do you have for open-source tools or frameworks for building RAG systems (vector databases, RAG frameworks) and AI agents? A rough sketch of the kind of RAG core I have in mind is at the end of this post.

Any general insights, experiences, or project architectural advice would be greatly appreciated! Thanks in advance for your input!
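A minimal sketch of the RAG core, assuming sentence-transformers for embeddings and any local OpenAI-compatible server for generation; the embedding model, LLM name, and endpoint are placeholders, not recommendations.

```python
# Sketch: embed document chunks, retrieve the closest ones for a question,
# and hand them to a local LLM. Model names and endpoint are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

embedder = SentenceTransformer("intfloat/multilingual-e5-large")  # placeholder; Dutch support matters here
llm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

chunks = ["NL GAAP excerpt ...", "contract clause ...", "invoice text ..."]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def answer(question: str, k: int = 3) -> str:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(chunk_vecs @ q)[::-1][:k]          # cosine similarity via dot product
    context = "\n\n".join(chunks[i] for i in top)
    resp = llm.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content
```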
2025-06-18T09:51:00
https://www.reddit.com/r/LocalLLaMA/comments/1led23c/local_ai_for_a_smallmedian_accounting_firm_buget/
AFruitShopOwner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1led23c
false
null
t3_1led23c
/r/LocalLLaMA/comments/1led23c/local_ai_for_a_smallmedian_accounting_firm_buget/
false
false
self
90
null
Is there a context management system?
3
As part of chatting and communicating we sometimes say "that's out of context" or "you switched context". And I'm thinking: how do humans organize that? And is there some library or system that has this capability? I'm not sure if a model (like an embedding model) could do that, because context is dynamic. I think such a system could improve the long-term memory of chatbots. If you have any links to papers about that topic, or any information, I would be thankful!
2025-06-18T10:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1ledidc/is_there_a_context_management_system/
freehuntx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledidc
false
null
t3_1ledidc
/r/LocalLLaMA/comments/1ledidc/is_there_a_context_management_system/
false
false
self
3
null
How does one extract meaningful information and queries from 100s of customer chats?
0
Hey, I am facing a bit of an issue with this and I wanted to ask: if I have 100s of customer conversations (conversations between customers and customer service providers about products), how do I understand what the customer pain points are and what they are facing issues with? How do I extract that information without reading through it manually? One solution I figured was to call an LLM to summarize all the conversations, based on a clear prompt for deciphering customer intent and queries, and then run a clustering model on those summaries (a rough sketch is below). If you know other ways of extracting meaningful information from customer conversations for a product-based company, do tell!
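A minimal sketch of that summarize-then-cluster idea, assuming a local OpenAI-compatible server for the summaries plus sentence-transformers and scikit-learn for the clustering; the endpoint, model names, and cluster count are placeholders.

```python
# Sketch: summarize each conversation with a local LLM, embed the summaries,
# and group them with KMeans so each cluster surfaces a recurring pain point.
from openai import OpenAI
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

llm = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

def summarize(conv: str) -> str:
    resp = llm.chat.completions.create(
        model="local-model",  # placeholder
        messages=[{"role": "user", "content": "In one sentence, state the customer's main pain point:\n" + conv}],
    )
    return resp.choices[0].message.content

conversations = ["...chat 1...", "...chat 2..."]          # your transcripts
summaries = [summarize(c) for c in conversations]
vecs = SentenceTransformer("all-MiniLM-L6-v2").encode(summaries)
labels = KMeans(n_clusters=5, n_init="auto").fit_predict(vecs)  # each cluster ≈ one recurring issue
```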
2025-06-18T10:25:14
https://www.reddit.com/r/LocalLLaMA/comments/1ledlaa/how_does_one_extract_meaning_information_and/
toinfinity_nbeyond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledlaa
false
null
t3_1ledlaa
/r/LocalLLaMA/comments/1ledlaa/how_does_one_extract_meaning_information_and/
false
false
self
0
null
Looking for a .gguf file to run on llama.cpp server for a specific need.
2
Hello r/LocalLLaMA, I'm a lazy handyman with a passion for local models, and I'm currently working on a side project to build a pre-fabricated wood house. I've designed the house using Sweet Home 3D, but now I need to break it down into individual pieces to build it with a local carpenter. So, I'm trying to automate or accelerate the generation of 3D pieces in FreeCAD using Python code, but I'm not a coder. I can do some basic troubleshooting, but that's about it. I'm using llama.cpp to run small models with llama-swap on my RTX 2060 12GB, and I'm looking for a model that can analyze images and files to extract context and generate Python code for FreeCAD piece generation (an example of the kind of script I mean is below). I'm looking for a .gguf model that can help me with this task. Anyone know of one that can do that? Sorry if my English is bad, it's not my first language.

Some key points about my project (with AI help):

* I'm using FreeCAD for 3D modeling.
* I need to generate Python code to automate or accelerate piece generation.
* I'm looking for a .gguf model that can analyze images and files to extract context.
* I'm running small models on my RTX 2060 12GB using llama-swap.

Thanks for any help or guidance you can provide!
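An example of the kind of FreeCAD script the model would need to produce: one rectangular lumber piece per call, placed in the document. The names and dimensions are placeholders, and it is meant to be run inside FreeCAD's Python console.

```python
# Sketch: generate simple rectangular wood pieces in FreeCAD (dimensions in mm).
# Placeholder names/sizes; run inside FreeCAD's Python console.
import FreeCAD as App
import Part

doc = App.newDocument("WoodPieces")

def add_piece(name, length, width, thickness, x=0, y=0, z=0):
    box = Part.makeBox(length, width, thickness)        # one rectangular lumber piece
    obj = doc.addObject("Part::Feature", name)
    obj.Shape = box
    obj.Placement.Base = App.Vector(x, y, z)            # position it in the assembly
    return obj

add_piece("Stud_01", 2400, 45, 95)                      # placeholder dimensions
add_piece("Stud_02", 2400, 45, 95, x=600)               # spaced 600 mm apart
doc.recompute()
```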
2025-06-18T10:27:38
https://www.reddit.com/r/LocalLLaMA/comments/1ledmny/looking_for_a_guff_file_to_run_on_llamacpp_server/
Martialogrand
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledmny
false
null
t3_1ledmny
/r/LocalLLaMA/comments/1ledmny/looking_for_a_guff_file_to_run_on_llamacpp_server/
false
false
self
2
null
Is there a flexible pattern for AI workflows?
2
For a goal-oriented domain like customer support where you could have specialist agents for "Account Issues", "Transaction Issues", etc., I can't think of a better way to orchestrate agents other than static, predefined workflows. I have 2 questions: 1. Is there a known pattern that allows updates to "agentic workflows" at runtime? Think RAG but for telling the agent what to do without flooding the context window. 2. How do you orchestrate your agents today in a way that gives you control over how information flows through the system while leveraging the benefits of LLMs and tool calling? Appreciate any help/comment.
2025-06-18T10:33:03
https://www.reddit.com/r/LocalLLaMA/comments/1ledpvp/is_there_a_flexible_pattern_for_ai_workflows/
redditinws
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ledpvp
false
null
t3_1ledpvp
/r/LocalLLaMA/comments/1ledpvp/is_there_a_flexible_pattern_for_ai_workflows/
false
false
self
2
null
What happens when inference gets 10-100x faster and cheaper?
2
Really fast inference is coming. Probably this year. A 10-100x leap in inference speed seems possible with the right algorithmic improvements and custom hardware. ASICs running Llama-3 70B are already >20x faster than H100 GPUs. And the economics of building custom chips make sense now that training runs cost billions. Even a 1% speed boost can justify $100M+ of investment. We should expect widespread availability very soon. If this happens, inference will feel as fast and cheap as a database query. What will this unlock? What will become possible that currently isn't viable in production? Here are a couple changes I see coming: * **RAG gets way better.** LLMs will be used to index data for retrieval. Imagine if you could construct a knowledge graph from millions of documents in the same time it takes to compute embeddings. * **Inference-time search actually becomes a thing.** Techniques like tree-of-thoughts and graph-of-thoughts will be used in production. In general, the more inference calls you throw at a problem, the better the result. 7B models can even act like 400B models with enough compute. Now we'll exploit this fully. What else will change? Or are there bottlenecks I'm not seeing?
2025-06-18T10:41:21
https://www.reddit.com/r/LocalLLaMA/comments/1leduoz/what_happens_when_inference_gets_10100x_faster/
jsonathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leduoz
false
null
t3_1leduoz
/r/LocalLLaMA/comments/1leduoz/what_happens_when_inference_gets_10100x_faster/
false
false
self
2
null
WikipeQA : An evaluation dataset for both web-browsing agents and vector DB RAG systems
8
Hey fellow OSS enjoyer, I've created WikipeQA, an evaluation dataset inspired by BrowseComp but designed to test a broader range of retrieval systems. **What makes WikipeQA different?** Unlike BrowseComp (which requires live web browsing), WikipeQA can evaluate BOTH: * **Web-browsing agents**: Can your agent find the answer by searching online? (The info exists on Wikipedia and its sources) * **Traditional RAG systems**: How well does your vector DB perform when given the full Wikipedia corpus? This lets you directly compare different architectural approaches on the same questions. **The Dataset:** * 3,000 complex, narrative-style questions (encrypted to prevent training contamination) * 200 public examples to get started * Includes the full Wikipedia pages used as sources * Shows the exact chunks that generated each question * Short answers (1-4 words) for clear evaluation **Example question:** *"Which national Antarctic research program, known for its 2021 Midterm Assessment on a 2015 Strategic Vision, places the Changing Antarctic Ice Sheets Initiative at the top of its priorities to better understand why ice sheets are changing now and how they will change in the future?"* Answer: *"United States Antarctic Program"* **Built with Kushim** The entire dataset was automatically generated using Kushim, my open-source framework. This means you can create your own evaluation datasets from your own documents - perfect for domain-specific benchmarks. **Current Status:** * Dataset is ready at: [https://huggingface.co/datasets/teilomillet/wikipeqa](https://huggingface.co/datasets/teilomillet/wikipeqa) * Working on the eval harness (coming soon) * Would love to see early results if anyone runs evals! I'm particularly interested in seeing: 1. How traditional vector search compares to web browsing on these questions 2. Whether hybrid approaches (vector DB + web search) perform better 3. Performance differences between different chunking/embedding strategies If you run any evals with WikipeQA, please share your results! Happy to collaborate on making this more useful for the community.
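For anyone wanting to poke at the public examples, a quick sketch with the Hugging Face datasets library; the split/config names are assumptions, so check the dataset card if they differ.

```python
# Sketch: inspect the WikipeQA public examples with the datasets library.
# Split/config names are assumptions; see the dataset card for the actual layout.
from datasets import load_dataset

ds = load_dataset("teilomillet/wikipeqa")
print(ds)                               # show the available splits
sample = ds[list(ds.keys())[0]][0]      # first row of the first split
print(sample)                           # question, short answer, source page, supporting chunks
```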
2025-06-18T10:58:26
https://www.reddit.com/r/LocalLLaMA/comments/1lee4pd/wikipeqa_an_evaluation_dataset_for_both/
Fit_Strawberry8480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lee4pd
false
null
t3_1lee4pd
/r/LocalLLaMA/comments/1lee4pd/wikipeqa_an_evaluation_dataset_for_both/
false
false
self
8
{'enabled': False, 'images': [{'id': 'szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?width=108&crop=smart&auto=webp&s=d8c2a5a0e8af1cc89736fddad25a9bd929bd4564', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?width=216&crop=smart&auto=webp&s=c60c998ff5f1fdc9022bef7db3d99826d346d399', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?width=320&crop=smart&auto=webp&s=193e3c048fa439595f8a965881e792c214891aec', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?width=640&crop=smart&auto=webp&s=83a9fba877c89a39487e1bf7740c61b372b24664', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?width=960&crop=smart&auto=webp&s=02b4a42fab707b4fee234f536d973becb4ca2f8d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?width=1080&crop=smart&auto=webp&s=c9809839c8f6c20f09c400e012683c954c2e1b44', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/szD-lbz7zAgVrnZV66GQyAT_OBYdeIfstYgD56PmzMs.png?auto=webp&s=09e1364ffb839edfa7ae3c15d2470e30a045b74c', 'width': 1200}, 'variants': {}}]}
MiniMax-M1
29
2025-06-18T11:06:41
https://github.com/MiniMax-AI/MiniMax-M1
David-Kunz
github.com
1970-01-01T00:00:00
0
{}
1leea24
false
null
t3_1leea24
/r/LocalLLaMA/comments/1leea24/minimaxm1/
false
false
default
29
{'enabled': False, 'images': [{'id': 'oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?width=108&crop=smart&auto=webp&s=a07cd876a65ff821c1740dcc3eec4186df4d6783', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?width=216&crop=smart&auto=webp&s=a7098e8a372424bb7346643353ca8e259ddcfcfa', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?width=320&crop=smart&auto=webp&s=4f12a6560d9adea246cb9af283d7cd30ccc70b50', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?width=640&crop=smart&auto=webp&s=7e03a77037995d5532bd94e47fb40d5cd61b33f6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?width=960&crop=smart&auto=webp&s=4ab21aaa04b3710ba31a9a3c6271b4910d61e05c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?width=1080&crop=smart&auto=webp&s=b39d648ec3db63f443d539a664d68eb87d4ea129', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/oMQEZmVHLMDxK1KRt9PeRYnQDqbulLVqufYW6GyaZ_E.png?auto=webp&s=455f26d037912a8c3da63b69eb0a7e62ca6edec6', 'width': 1200}, 'variants': {}}]}
【New release v1.7.1】Dingo: A Comprehensive Data Quality Evaluation Tool
5
[https://github.com/DataEval/dingo](https://github.com/DataEval/dingo)
2025-06-18T12:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1lef9o7/new_release_v171dingo_a_comprehensive_data/
chupei0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lef9o7
false
null
t3_1lef9o7
/r/LocalLLaMA/comments/1lef9o7/new_release_v171dingo_a_comprehensive_data/
false
false
self
5
{'enabled': False, 'images': [{'id': 'W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?width=108&crop=smart&auto=webp&s=b3101971d5ac666e25b61eb655e115c247facb35', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?width=216&crop=smart&auto=webp&s=f196cb3a174c6d71fac036d2dfb55cca7b80fe4e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?width=320&crop=smart&auto=webp&s=df9fb38b463f367490bfae5be2d83577ce0aa057', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?width=640&crop=smart&auto=webp&s=1e6231d40eeedd0ce91daa1b18cd2bf44e30a65e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?width=960&crop=smart&auto=webp&s=a8fed7de87e3a205026dbb82cf8ba152acaac2fd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?width=1080&crop=smart&auto=webp&s=ac97a947d15f3a2da798d72a014a2d5099e85d36', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W8si7_6tkZ4YDQqRC_99J0DePjKhiwvS9rRdY9Un7o8.png?auto=webp&s=6b1e2b0bf217624a1c7cd43eabe9c5a290769766', 'width': 1200}, 'variants': {}}]}
gpt_agents.py
10
https://github.com/jameswdelancey/gpt_agents.py A single-file, multi-agent framework for LLMs—everything is implemented in one core file with no dependencies for maximum clarity and hackability. See the main implementation.
2025-06-18T12:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1lefgmh/gpt_agentspy/
jameswdelancey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lefgmh
false
null
t3_1lefgmh
/r/LocalLLaMA/comments/1lefgmh/gpt_agentspy/
false
false
self
10
{'enabled': False, 'images': [{'id': 'mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?width=108&crop=smart&auto=webp&s=0dd22858636e814d4321cceca0a94eb4aafb6472', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?width=216&crop=smart&auto=webp&s=2b0d220c6ccad01cbff458a09b294351124577f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?width=320&crop=smart&auto=webp&s=84d4f768548b400c417ccbad968ecf7bcdd2dabc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?width=640&crop=smart&auto=webp&s=a404e7fedb9256e888ac6fb5d7323f55f4d1e89b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?width=960&crop=smart&auto=webp&s=ef416079660429a7195ff9bf1f384a1419481f9a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?width=1080&crop=smart&auto=webp&s=eca9dc91d35031170f70ea8cb9bdb3e0e0de0859', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mzCuNninrc06fuLZQ3sr3E1j5PXhpXpjwPlsXVdLlSQ.png?auto=webp&s=b904e304dd154d61813b1a0d0413a971c0da1d15', 'width': 1200}, 'variants': {}}]}
Update: My agent model now supports the OpenAI function calling format! (mirau-agent-base)
18
Hey r/LocalLLaMA! A while back I shared my multi-turn tool-calling model [in this post](https://www.reddit.com/r/LocalLLaMA/comments/1l7v9gf/a_multiturn_toolcalling_base_model_for_rl_agent/). Based on community feedback about OpenAI compatibility, I've updated the model to support OpenAI's function calling format! **What's new:** * Full compatibility with OpenAI's tool/function definition format * New model available at: [https://huggingface.co/eliuakk/mirau-agent-base-oai](https://huggingface.co/eliuakk/mirau-agent-base-oai) * Live demo: [https://modelscope.cn/studios/mouseEliauk/mirau-agent-demo/summary](https://modelscope.cn/studios/mouseEliauk/mirau-agent-demo/summary) **About the model:** mirau-agent-14b-base is a large language model specifically optimized for Agent scenarios, fine-tuned from Qwen2.5-14B-Instruct. This model focuses on enhancing multi-turn tool-calling capabilities, enabling it to autonomously plan, execute tasks, and handle exceptions in complex interactive environments. Although named "base," this does not refer to a pre-trained only base model. Instead, it is a "cold-start" version that has undergone Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO). It provides a high-quality initial policy for subsequent reinforcement learning training. We also hope the community can further enhance it with RL.
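For anyone who wants to see what "OpenAI function calling format" means in practice, a minimal sketch of a request with a tool definition; the local endpoint and served model name are assumptions (e.g. vLLM serving the HF checkpoint), and the `get_weather` tool is just an example.

```python
# Sketch: send an OpenAI-style tool definition to the model.
# Endpoint and served model name are assumptions; the tool is an example.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # example tool, not part of the model
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="eliuakk/mirau-agent-base-oai",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```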
2025-06-18T12:51:47
https://huggingface.co/eliuakk/mirau-agent-base-oai
EliaukMouse
huggingface.co
1970-01-01T00:00:00
0
{}
1legaq8
false
null
t3_1legaq8
/r/LocalLLaMA/comments/1legaq8/updatemy_agent_model_now_supports_openai_function/
false
false
default
18
{'enabled': False, 'images': [{'id': '8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?width=108&crop=smart&auto=webp&s=1d7bbe0bc11d323d6826cedac9638df0dbd62a5a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?width=216&crop=smart&auto=webp&s=d5de04e8e94a5880cf26e34d7ffd56f6e3415676', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?width=320&crop=smart&auto=webp&s=feaec59996a1632874ea517d305ad678d42ebdc0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?width=640&crop=smart&auto=webp&s=cd68843d35315a418aca711de9b0c6516cd2f26e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?width=960&crop=smart&auto=webp&s=47b728f4fdbc2e6ea95224ff371c26a9a0cfc5f6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?width=1080&crop=smart&auto=webp&s=74fdc3cbac0bb66d96151a6a9fd05b30beb2e2cf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8a8-yIo-XSHNh-GrNf4RtIK3HA5ouO1zg1RogdSi4c0.png?auto=webp&s=fea108f42dccf8383530dca638db4f23abb1b309', 'width': 1200}, 'variants': {}}]}
3090 + 4090 vs 5090 for conversational AI? Gemma 27B on Linux.
0
Newbie here. I want to be able to train this local AI model. It needs text-to-speech and speech-to-text. Is running two cards a pain, or is it worth the effort? I already have the 3090 and 4090. Thanks for your time.
2025-06-18T12:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1leggrf/3090_4090_vs_5090_for_conversional_al_gemma27b_on/
Yakapo88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leggrf
false
null
t3_1leggrf
/r/LocalLLaMA/comments/1leggrf/3090_4090_vs_5090_for_conversional_al_gemma27b_on/
false
false
self
0
null
Can your favourite local model solve this?
305
I am interested in which models, if any, can solve this relatively simple geometry picture if you simply give them this image. I don't have a big enough setup to test visual models myself.
2025-06-18T13:24:24
https://i.redd.it/gkjegqtyso7f1.png
MrMrsPotts
i.redd.it
1970-01-01T00:00:00
0
{}
1leh14g
false
null
t3_1leh14g
/r/LocalLLaMA/comments/1leh14g/can_your_favourite_local_model_solve_this/
false
false
default
305
{'enabled': True, 'images': [{'id': 'gkjegqtyso7f1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?width=108&crop=smart&auto=webp&s=799da9f2de59cf167b365385bad826a0c20e9cb0', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?width=216&crop=smart&auto=webp&s=ed4222d7d216b69f7584c9cdbcdb8527adb7109e', 'width': 216}, {'height': 217, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?width=320&crop=smart&auto=webp&s=e18a66090402648f9c47ce142810f8b49817ba4c', 'width': 320}, {'height': 435, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?width=640&crop=smart&auto=webp&s=93880be720bd03128b1e673976aa49f67626b2f0', 'width': 640}, {'height': 653, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?width=960&crop=smart&auto=webp&s=6d3c2b626526aeb871cf7d5d3cd210e010f9dc0c', 'width': 960}], 'source': {'height': 734, 'url': 'https://preview.redd.it/gkjegqtyso7f1.png?auto=webp&s=8915ab318ff14ef0f4a52961300d277b05d81129', 'width': 1079}, 'variants': {}}]}
Built memX: a shared memory backend for LLM agents (demo + open-source code)
51
Hey everyone — I built this over the weekend and wanted to share: 🔗 https://github.com/MehulG/memX **memX** is a shared memory layer for LLM agents — kind of like Redis, but with real-time sync, pub/sub, schema validation, and access control. Instead of having agents pass messages or follow a fixed pipeline, they just read and write to shared memory keys. It’s like a collaborative whiteboard where agents evolve context together. **Key features:** - Real-time pub/sub - Per-key JSON schema validation - API key-based ACLs - Python SDK
2025-06-18T13:37:19
https://v.redd.it/ibq16xv5vo7f1
Temporary-Tap-7323
v.redd.it
1970-01-01T00:00:00
0
{}
1lehbra
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ibq16xv5vo7f1/DASHPlaylist.mpd?a=1752845855%2CZTM2ZjRmZWFiYTFhMTc5NmUxY2Q0MWNiODg4YzU4MTgzZGJjYjAxOTBlMzc1MGQyM2RiM2E1NTBkYjYxNTM1Zg%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/ibq16xv5vo7f1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/ibq16xv5vo7f1/HLSPlaylist.m3u8?a=1752845855%2CYjliNmZmOGNkOGYxMWVjODc1N2JiNjQyYTc3MjExODQxYzYxYjQ1MWFmZWQzMGZiNGQyNDg5ODI3NDBiYmFkMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/ibq16xv5vo7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1lehbra
/r/LocalLLaMA/comments/1lehbra/built_memx_a_shared_memory_backend_for_llm_agents/
false
false
https://external-preview…0292c4a3c3146859
51
{'enabled': False, 'images': [{'id': 'bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?width=108&crop=smart&format=pjpg&auto=webp&s=3031cc30ee24e75094a4e7250ac591345596cfda', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?width=216&crop=smart&format=pjpg&auto=webp&s=7b40c7be9a6387677e2d38d2b8d68f42b8c7e3a9', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?width=320&crop=smart&format=pjpg&auto=webp&s=eb420b8f39cd7bf626f82892dcc48acf863c009e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?width=640&crop=smart&format=pjpg&auto=webp&s=eac7b84464e601db9180eda5d023236ec0d7e4ee', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?width=960&crop=smart&format=pjpg&auto=webp&s=8f045c6c466797b9301395faa022afc63fbc8a38', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?width=1080&crop=smart&format=pjpg&auto=webp&s=6e1117189e8cb73ec3e83bc2e6b283053a710ff2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bWpmbGR5djV2bzdmMYCyFtIdy85G-V-pyC1NhTykFPs5rMNi1ya3S7fCsS5U.png?format=pjpg&auto=webp&s=0d86cd0b4a27a536b707d7adac8ebdf1007b85d0', 'width': 1920}, 'variants': {}}]}
Local LLM Coding Setup for 8GB VRAM - Coding Models?
4
Unfortunately for now, I'm limited to **8GB VRAM** (**32GB RAM**) with my friend's laptop - NVIDIA GeForce RTX 4060 GPU, Intel(R) Core(TM) i7-14700HX 2.10 GHz. We can't upgrade this laptop with either more RAM or a better GPU. I'm not expecting great performance from LLMs with this VRAM; just decent, OK performance on coding is enough for me. Fortunately, I'm able to load up to 14B models (I pick the highest quant that fits my VRAM whenever possible). I use JanAI.

**My use case**: Python, C#, JS (and optionally Rust, Go), to develop simple utilities and small games.

Please share **coding models**, **tools**, **utilities**, **resources**, **etc.,** for this setup to help this poor GPU. Could tools like OpenHands help newbies like me code better? Or AI coding assistants/agents like Roo / Cline? What else?

Big thanks. (We don't want to invest any more in the current laptop. I can use my friend's laptop on weekdays since he only needs it for gaming on weekends. I'm going to build a PC with a medium-high config for 150-200B models at the start of next year, so for the next 6-9 months I have to use this laptop for coding.)
2025-06-18T13:40:09
https://www.reddit.com/r/LocalLLaMA/comments/1lehe2i/local_llm_coding_setup_for_8gb_vram_coding_models/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lehe2i
false
null
t3_1lehe2i
/r/LocalLLaMA/comments/1lehe2i/local_llm_coding_setup_for_8gb_vram_coding_models/
false
false
self
4
null
Oops
1,936
2025-06-18T14:12:39
https://i.redd.it/iv35yrek1p7f1.png
Own-Potential-2308
i.redd.it
1970-01-01T00:00:00
0
{}
1lei5mb
false
null
t3_1lei5mb
/r/LocalLLaMA/comments/1lei5mb/oops/
false
false
default
1,936
{'enabled': True, 'images': [{'id': 'iv35yrek1p7f1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=108&crop=smart&auto=webp&s=680ebd462541fd8c80431aa7b123c5468b76ebb4', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=216&crop=smart&auto=webp&s=7f80207a5b054f045c35a189917ad5b2abd4c039', 'width': 216}, {'height': 325, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=320&crop=smart&auto=webp&s=3293cff95c68119d896aa4efd0015a9b248cb00c', 'width': 320}, {'height': 651, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=640&crop=smart&auto=webp&s=0a1be0e37ffab5a4926e5a5a7a869b2ee3a9c853', 'width': 640}, {'height': 977, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=960&crop=smart&auto=webp&s=d563121dc3f7d883c707ee7a6e5b6f550dc638a8', 'width': 960}, {'height': 1100, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?width=1080&crop=smart&auto=webp&s=002f9b0a2d2c6465a790c7a8df239b61067b69e7', 'width': 1080}], 'source': {'height': 1100, 'url': 'https://preview.redd.it/iv35yrek1p7f1.png?auto=webp&s=36b994582bc82ea397b908a4557e6fb8d4646a3e', 'width': 1080}, 'variants': {}}]}
Model Context Protocol (MCP) just got easier to use with IdeaWeaver
0
https://i.redd.it/i8sh45ds7p7f1.gif

Model Context Protocol (MCP) just got easier to use with IdeaWeaver. MCP is transforming how AI agents interact with tools, memory, and humans, making them more context-aware and reliable. But let's be honest: setting it up manually is still a hassle. What if you could enable it with just two commands?

Meet IdeaWeaver, your one-stop CLI for setting up MCP servers in seconds. Currently supports:

1. GitHub
2. AWS
3. Terraform

...and more coming soon!

Here's how simple it is:

`# Set up authentication`
`ideaweaver mcp setup-auth github`

`# Enable the server`
`ideaweaver mcp enable github`

`# Example: List GitHub issues`
`ideaweaver mcp call-tool github list_issues \`
`--args '{"owner": "100daysofdevops", "repo": "100daysofdevops"}'`

* No config files
* No code required
* Just clean, simple CLI magic

🔗 Docs: https://ideaweaver-ai-code.github.io/ideaweaver-docs/mcp/aws/
🔗 GitHub repo: https://github.com/ideaweaver-ai-code/ideaweaver

If this sounds useful, please give it a try and let me know your thoughts. And if you like the project, don't forget to ⭐ the repo - it helps more than you know!
2025-06-18T14:47:42
https://www.reddit.com/r/LocalLLaMA/comments/1lej0ml/model_context_protocol_mcp_just_got_easier_to_use/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lej0ml
false
null
t3_1lej0ml
/r/LocalLLaMA/comments/1lej0ml/model_context_protocol_mcp_just_got_easier_to_use/
false
false
https://b.thumbs.redditm…XrYtVKTMXArc.jpg
0
null
Hugging Face Sheets - experiment with 1.5K open LLMs on data you care about
26
Hi! We've built this app as a playground of open LLMs for unstructured datasets. It might be interesting to this community. It's powered by HF Inference Providers and could be useful for playing and finding the right open models for your use case, without downloading them or running code. I'd love to hear your ideas. You can try it out here: [https://huggingface.co/spaces/aisheets/sheets](https://huggingface.co/spaces/aisheets/sheets) Available models: [https://huggingface.co/models?inference\_provider=featherless-ai,together,hf-inference,sambanova,cohere,cerebras,fireworks-ai,groq,hyperbolic,nebius,novita&sort=trending](https://huggingface.co/models?inference_provider=featherless-ai,together,hf-inference,sambanova,cohere,cerebras,fireworks-ai,groq,hyperbolic,nebius,novita&sort=trending)
2025-06-18T14:49:16
https://v.redd.it/w0j5vts27p7f1
dvilasuero
/r/LocalLLaMA/comments/1lej1z2/hugging_face_sheets_experiment_with_15k_open_llms/
1970-01-01T00:00:00
0
{}
1lej1z2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w0j5vts27p7f1/DASHPlaylist.mpd?a=1752979758%2CYTAzNWNhY2RkOTk1NzJlNzYzZTczMzM3YzJjZjMwZDkzNmRhMjI5OTQyZjEyOTJmY2FlYzJlODViMzYxZTYxYQ%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/w0j5vts27p7f1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/w0j5vts27p7f1/HLSPlaylist.m3u8?a=1752979758%2CNGQxMjI4N2I2NjhkYjdiMzA3ZWE5ZTE4MWI2NzcxMDljZDFhNDEyNjgzZTZiMzIzN2RhOWM5MjczZTAzMmU4Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w0j5vts27p7f1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1lej1z2
/r/LocalLLaMA/comments/1lej1z2/hugging_face_sheets_experiment_with_15k_open_llms/
false
false
https://external-preview…fe1c5b2f0cd3a9e4
26
{'enabled': False, 'images': [{'id': 'OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?width=108&crop=smart&format=pjpg&auto=webp&s=5d95154c9d21718c3f333755b8bbef539ee1a1ac', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?width=216&crop=smart&format=pjpg&auto=webp&s=e81f23f4254aada67ad28fc6c20e845327f089d7', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?width=320&crop=smart&format=pjpg&auto=webp&s=9d6a4d6bc768568bc22eb2befa1e820a3de974b4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?width=640&crop=smart&format=pjpg&auto=webp&s=3c094e7ac90cd4af21e0be5b95ecb7f3e45eb36f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?width=960&crop=smart&format=pjpg&auto=webp&s=a6e87583f63e6d608872d53ec343efb8ab16201d', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?width=1080&crop=smart&format=pjpg&auto=webp&s=42ac3785edb1a0475168f244ae992a277c475900', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/OWhnczl0czI3cDdmMbW9BBj7Ryw9x5mw-FdJgsMiyoCrCN12S1yIw2gj5bap.png?format=pjpg&auto=webp&s=94e33c3c49690c0e4d2ee2ebb4c80268a4125d6b', 'width': 3840}, 'variants': {}}]}
Is there a way to optimize flags for llama.cpp towards best tok/s local AI?
1
[removed]
2025-06-18T15:01:59
https://www.reddit.com/r/LocalLLaMA/comments/1lejdf6/is_there_a_way_to_optimize_flags_for_llamacpp/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lejdf6
false
null
t3_1lejdf6
/r/LocalLLaMA/comments/1lejdf6/is_there_a_way_to_optimize_flags_for_llamacpp/
false
false
self
1
null
GPU and General Recommendations for DL-CUDA local AI PC
2
Hi folks, I want to build a PC where I can tinker with some CUDA, tinker with LLMs, maybe some diffusion models, train, inference, maybe build some little apps etc. and I am trying to determine which GPU fits me the best. In my opinion, RTX 3090 may be the best because of 24 GB VRAM, and maybe I might get 2 which makes 48 GB which is super. Also, my alternatives are these: \- RTX 4080 (bit expensive then RTX 3090, and 16 GB VRAM but newer architecture, maybe useful for low-level I don't know I'm a learner for now), \- RTX 4090 (Much more expensive, more suitable but it will extend the time for building the rig), \- RTX 5080 (Double the price of 3090, 16 GB but Blackwell), \- and RTX 5090 (Dream GPU, too far away for me for now) I know VRAM differs, but really that much? Is it worth giving up architecture for VRAM? Also for the other parts like motherboard, processor is important too. Processor should feed a M.2 SSD, 2 GPUs. Like a X99 system with Core i7-5820K enough? My alternatives are 5960X, 6950X, 7900X. I don't want nothing too fancy, price matters. My point is build performance with budget.
2025-06-18T15:11:58
https://www.reddit.com/r/LocalLLaMA/comments/1lejmkj/gpu_and_general_recommendations_for_dlcuda_local/
emre570
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lejmkj
false
null
t3_1lejmkj
/r/LocalLLaMA/comments/1lejmkj/gpu_and_general_recommendations_for_dlcuda_local/
false
false
self
2
null
Is there a way to optimize flags for llama.cpp towards best tok/s local AI?
1
[removed]
2025-06-18T15:17:05
https://www.reddit.com/r/LocalLLaMA/comments/1lejr6k/is_there_a_way_to_optimize_flags_for_llamacpp/
Expert-Inspector-128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lejr6k
false
null
t3_1lejr6k
/r/LocalLLaMA/comments/1lejr6k/is_there_a_way_to_optimize_flags_for_llamacpp/
false
false
self
1
null
Built an open-source DeepThink plugin that brings Gemini 2.5 style advanced reasoning to local models (DeepSeek R1, Qwen3, etc.)
67
Hey r/LocalLLaMA! So Google just dropped their Gemini 2.5 report and there's this really interesting technique called "Deep Think" that got me thinking. Basically, it's a structured reasoning approach where the model generates multiple hypotheses in parallel and critiques them before giving you the final answer. The results are pretty impressive - SOTA on math olympiad problems, competitive coding, and other challenging benchmarks. I implemented a DeepThink plugin for OptiLLM that works with local models like: * DeepSeek R1 * Qwen3 The plugin essentially makes your local model "think out loud" by exploring multiple solution paths simultaneously, then converging on the best answer. It's like giving your model an internal debate team. # How it works Instead of the typical single-pass generation, the model: 1. Generates multiple approaches to the problem in parallel 2. Evaluates each approach critically 3. Synthesizes the best elements into a final response This is especially useful for complex reasoning tasks, math problems, coding challenges, etc. We actually won the 3rd Prize at Cerebras & OpenRouter Qwen 3 Hackathon with this approach, which was pretty cool validation that the technique works well beyond Google's implementation. https://preview.redd.it/5el6xgxhep7f1.png?width=1238&format=png&auto=webp&s=d9f4f420191f047573dc5dd7adfbc05c2c175227 # Code & Demo * GitHub: [https://github.com/codelion/optillm/tree/main/optillm/plugins/deepthink](https://github.com/codelion/optillm/tree/main/optillm/plugins/deepthink) * Demo video: [https://www.youtube.com/watch?v=b06kD1oWBA4](https://www.youtube.com/watch?v=b06kD1oWBA4) The plugin is ready to use right now if you want to try it out. Would love to get feedback from the community and see what improvements we can make together. Has anyone else been experimenting with similar reasoning techniques for local models? Would be interested to hear what approaches you've tried. **Edit:** For those asking about performance impact - yes, it does increase inference time since you're essentially running multiple reasoning passes. But for complex problems where you want the best possible answer, the trade-off is usually worth it.
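For anyone curious what the general pattern looks like in code, here is a rough sketch of the "generate several hypotheses, critique, synthesize" loop - not the plugin's actual internals - assuming an OpenAI-compatible local endpoint (e.g. llama.cpp or vLLM) and a placeholder model name:

```python
# Sketch of the hypothesize -> critique -> synthesize loop against an
# OpenAI-compatible local server. Endpoint, model name, and prompts are
# placeholders, not the plugin's real implementation.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
MODEL = "deepseek-r1-distill-qwen-14b"  # placeholder model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

question = "How many ways can 8 rooks be placed on a chessboard so none attack each other?"

# 1) Generate multiple independent approaches (sequential here for brevity).
hypotheses = [ask(f"Propose one distinct approach and solve:\n{question}") for _ in range(3)]

# 2) Critique each hypothesis.
critiques = [ask(f"Critique this solution, listing any flaws:\n{h}") for h in hypotheses]

# 3) Synthesize the best elements into a final answer.
numbered = "\n\n".join(f"Solution {i+1}:\n{h}\nCritique:\n{c}"
                       for i, (h, c) in enumerate(zip(hypotheses, critiques)))
print(ask(f"Given these solutions and critiques, write the best final answer:\n{numbered}"))
```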
2025-06-18T15:26:55
https://www.reddit.com/r/LocalLLaMA/comments/1lek04t/built_an_opensource_deepthink_plugin_that_brings/
asankhs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lek04t
false
null
t3_1lek04t
/r/LocalLLaMA/comments/1lek04t/built_an_opensource_deepthink_plugin_that_brings/
false
false
https://b.thumbs.redditm…n_poPZJGnM8g.jpg
67
null
M4 Max 128GB MacBook arrives today. Is LM Studio still king for running MLX or have things moved on?
19
As title: new top-of-the-line MBP arrives today and I’m wondering what the most performant option is for hosting models locally on it. Also: we run a quad RTX A6000 rig and I’ll be doing some benchmark comparisons between that and the MBP. Any requests?
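If you want to skip a GUI entirely, the `mlx-lm` package is the usual low-level route on Apple silicon. A minimal sketch (the checkpoint name is just an example quant from the mlx-community org, not a recommendation):

```python
# Minimal MLX inference sketch on Apple silicon with mlx-lm.
# The checkpoint name is only an example; any mlx-community quant should work.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")
print(generate(model, tokenizer, prompt="Write a haiku about unified memory.",
               max_tokens=100, verbose=False))
```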
2025-06-18T15:36:34
https://www.reddit.com/r/LocalLLaMA/comments/1lek8yo/m4_max_128gb_macbook_arrives_today_is_lm_studio/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lek8yo
false
null
t3_1lek8yo
/r/LocalLLaMA/comments/1lek8yo/m4_max_128gb_macbook_arrives_today_is_lm_studio/
false
false
self
19
null
MiCA – A new parameter-efficient fine-tuning method with higher knowledge uptake and less forgetting (beats LoRA in my tests)
0
Hi all, I’ve been working on a new **parameter-efficient fine-tuning method** for LLMs, called **MiCA (Minor Component Adaptation)**, and wanted to share the results and open it up for feedback or collaboration. MiCA improves on existing methods (like LoRA) in three core areas: ✅ **Higher knowledge uptake**: in some domain-specific tests, up to **5x more learning** of new concepts compared to LoRA ✅ **Much less catastrophic forgetting**: core LLM capabilities are preserved even after targeted adaptation ✅ **Fewer trainable parameters**: it's highly efficient and ideal for small compute budgets or on-device use cases I’ve also combined MiCA with **reinforcement learning-style reward signals** to fine-tune reasoning-heavy workflows — especially useful for domains like legal, financial, or multi-step decision tasks where pure prompt engineering or LoRA struggle. And here’s a write-up: [https://stenruediger.substack.com/p/supercharge-your-llms-introducing](https://stenruediger.substack.com/p/supercharge-your-llms-introducing) I’d love to hear what others think — and if you’re working on something where this might be useful, happy to connect. Also open to **pilots, licensing**, or collaborative experiments.
2025-06-18T15:37:43
https://www.reddit.com/r/LocalLLaMA/comments/1lek9yr/mica_a_new_parameterefficient_finetuning_method/
Majestic-Explorer315
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lek9yr
false
null
t3_1lek9yr
/r/LocalLLaMA/comments/1lek9yr/mica_a_new_parameterefficient_finetuning_method/
false
false
self
0
{'enabled': False, 'images': [{'id': 'kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?width=108&crop=smart&auto=webp&s=1df48c6071da3b3f2dffe06ba1401b645cfee2b3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?width=216&crop=smart&auto=webp&s=194f3e93da53e41a80a9344355f58aeeeaf951fd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?width=320&crop=smart&auto=webp&s=38161494b88eb502db47d803b6d115d79589afd4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?width=640&crop=smart&auto=webp&s=521eeca48a04b01b8930c4a7491dc3f3ddf92cd0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?width=960&crop=smart&auto=webp&s=ab6c6c12e0071fa454cca007a26d29ca1ec13f22', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?width=1080&crop=smart&auto=webp&s=1b81c4830b1fb84991fe42fd503fd82351834bb8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kEqUS9oLy_2lNOTe_bM6usZsPbwmQlC1mkyEAv_D5rM.jpeg?auto=webp&s=9c760320ab2c8b927841b9edad242140eb03e5cd', 'width': 1200}, 'variants': {}}]}
gemini-2.5-flash-lite-preview-06-17 performance on IDP Leaderboard
14
https://preview.redd.it/…similar results?
2025-06-18T15:52:25
https://www.reddit.com/r/LocalLLaMA/comments/1lekndj/gemini25flashlitepreview0617_performance_on_idp/
SouvikMandal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lekndj
false
null
t3_1lekndj
/r/LocalLLaMA/comments/1lekndj/gemini25flashlitepreview0617_performance_on_idp/
false
false
https://a.thumbs.redditm…NSDeFUIiMmt4.jpg
14
null
Best non-Chinese open models?
2
Yes I know that running them locally is fine, and believe me there's nothing I'd like to do more than just use Qwen, but there is significant resistance to anything from China in this use case Most important factor is it needs to be good at RAG, summarization and essay/report writing. Reasoning would also be a big plus I'm currently playing around with Llama 3.3 Nemotron Super 49B and Gemma 3 but would love other things to consider
2025-06-18T16:15:04
https://www.reddit.com/r/LocalLLaMA/comments/1lel886/best_nonchinese_open_models/
ProbaDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lel886
false
null
t3_1lel886
/r/LocalLLaMA/comments/1lel886/best_nonchinese_open_models/
false
false
self
2
null
Development environment setup
1
I use a windows machine with a 5070 TI and a 3070. I have 96 GB of Ram. I have been installing python and other stuff into this machine but now I feel that it might be better to set up a virtual/docker environment. Is there any readymade setup I can download? Also, can such virtual environments take full advantage of the GPUs? I don't want to dual boot into Linux as I do play windows games.
2025-06-18T17:07:42
https://www.reddit.com/r/LocalLLaMA/comments/1leml1x/development_environment_setup/
Jedirite
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leml1x
false
null
t3_1leml1x
/r/LocalLLaMA/comments/1leml1x/development_environment_setup/
false
false
self
1
null
We took Qwen3 235B A22B from 34 tokens/sec to 54 tokens/sec by switching from llama.cpp with Unsloth dynamic Q4_K_M GGUF to vLLM with INT4 w4a16
88
System: quad RTX A6000, Epyc. Originally we were running the Unsloth dynamic GGUFs at UD_Q4_K_M and UD_Q5_K_XL, with which we were getting speeds of 34 and 31 tokens/sec, respectively, for small-ish prompts of 1-2k tokens. A couple of days ago we tried an experiment with another 4-bit quant type: INT4, specifically w4a16, which stores the weights in 4 bits and dequantizes them to compute with FP16 activations. Or something close to that - the wizards and witches will know better, forgive my butchering of LLM mechanics. This is the one we used: `justinjja/Qwen3-235B-A22B-INT4-W4A16`. The point is that w4a16 runs in vLLM and is a whopping 20 tokens/sec faster than Q4 in llama.cpp in like-for-like tests (as close as we could get without going crazy). Does anyone know how w4a16 compares to Q4_K_M in terms of quantization quality? Are these 4-bit quants actually comparing apples to apples? Or are we sacrificing quality for speed? We'll do our own tests, but I'd like to hear opinions from the peanut gallery.
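For reference, a minimal vLLM sketch for loading that checkpoint on a four-GPU box - the context length and sampling settings below are illustrative assumptions, not the exact configuration used in the post:

```python
# Minimal offline-inference sketch with vLLM (illustrative settings).
from vllm import LLM, SamplingParams

llm = LLM(
    model="justinjja/Qwen3-235B-A22B-INT4-W4A16",  # the w4a16 quant from the post
    tensor_parallel_size=4,                        # quad-GPU box
    max_model_len=8192,                            # illustrative context limit
)

params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)
outputs = llm.generate(["Explain the difference between INT4 w4a16 and Q4_K_M."], params)
print(outputs[0].outputs[0].text)
```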
2025-06-18T17:09:31
https://www.reddit.com/r/LocalLLaMA/comments/1lemmsq/we_took_qwen3_235b_a22b_from_34_tokenssec_to_54/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lemmsq
false
null
t3_1lemmsq
/r/LocalLLaMA/comments/1lemmsq/we_took_qwen3_235b_a22b_from_34_tokenssec_to_54/
false
false
self
88
null
Joycap-beta with llama.cpp
6
Has anyone gotten llama.cpp to work with joycap yet? So far the latest version of Joycap seems to be the captioning king for my workflows but I've only managed to use it with VLLM which is super slow to startup (despite the model being cached in RAM) and that leads to a lot of waiting combined with llama-swap.
2025-06-18T17:37:29
https://www.reddit.com/r/LocalLLaMA/comments/1lencvg/joycapbeta_with_llamacpp/
HollowInfinity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lencvg
false
null
t3_1lencvg
/r/LocalLLaMA/comments/1lencvg/joycapbeta_with_llamacpp/
false
false
self
6
null
new 72B and 70B models from Arcee
1
looks like there are some new models from Arcee [https://huggingface.co/arcee-ai/Virtuoso-Large](https://huggingface.co/arcee-ai/Virtuoso-Large) [https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF](https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF) [https://huggingface.co/arcee-ai/Arcee-SuperNova-v1](https://huggingface.co/arcee-ai/Arcee-SuperNova-v1) [https://huggingface.co/arcee-ai/Arcee-SuperNova-v1-GGUF](https://huggingface.co/arcee-ai/Arcee-SuperNova-v1-GGUF) not sure if it is related or there will be one more: [https://github.com/ggml-org/llama.cpp/pull/14185](https://github.com/ggml-org/llama.cpp/pull/14185)
2025-06-18T17:38:43
https://www.reddit.com/r/LocalLLaMA/comments/1lendzl/new_72b_and_70b_models_from_arcsee/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lendzl
false
null
t3_1lendzl
/r/LocalLLaMA/comments/1lendzl/new_72b_and_70b_models_from_arcsee/
false
false
self
1
null
new 72B and 70B models from Arcee
81
looks like there are some new models from Arcee [https://huggingface.co/arcee-ai/Virtuoso-Large](https://huggingface.co/arcee-ai/Virtuoso-Large) [https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF](https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF) [https://huggingface.co/arcee-ai/Arcee-SuperNova-v1](https://huggingface.co/arcee-ai/Arcee-SuperNova-v1) [https://huggingface.co/arcee-ai/Arcee-SuperNova-v1-GGUF](https://huggingface.co/arcee-ai/Arcee-SuperNova-v1-GGUF) not sure is it related or there will be one more: [https://github.com/ggml-org/llama.cpp/pull/14185](https://github.com/ggml-org/llama.cpp/pull/14185)
2025-06-18T17:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1lenf36/new_72b_and_70b_models_from_arcee/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lenf36
false
null
t3_1lenf36
/r/LocalLLaMA/comments/1lenf36/new_72b_and_70b_models_from_arcee/
false
false
self
81
{'enabled': False, 'images': [{'id': '-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?width=108&crop=smart&auto=webp&s=bde6d425e961c460755cedf86cf0f698f3745398', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?width=216&crop=smart&auto=webp&s=6b541dabdb1823b782c06ac5c263627df9ba6287', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?width=320&crop=smart&auto=webp&s=cd39f86b7e3ea05548c2fcd89c68569402f0a82a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?width=640&crop=smart&auto=webp&s=e169a7d769873bd77c9720c22e740830e531d9f7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?width=960&crop=smart&auto=webp&s=e59b50f32656c6b278aea2e39079990741f766c6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?width=1080&crop=smart&auto=webp&s=f10c607a85b01dd4ed3d9fa773991e2d46622460', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-hdNyksM34JT-kjBh-zd6EEe5SPZyDVYsj8WFDRYWns.png?auto=webp&s=55aa532f1bcb2b0f55a32b262898aebe47be55ed', 'width': 1200}, 'variants': {}}]}
Help with Ollama & Open WebUI – Best Practices for Staff Knowledge Base
1
[removed]
2025-06-18T17:51:53
https://www.reddit.com/r/LocalLLaMA/comments/1lenq73/help_with_ollama_open_webui_best_practices_for/
4real2me
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lenq73
false
null
t3_1lenq73
/r/LocalLLaMA/comments/1lenq73/help_with_ollama_open_webui_best_practices_for/
false
false
self
1
null
OpenAI found features in AI models that correspond to different ‘personas’
118
[https://techcrunch.com/2025/06/18/openai-found-features-in-ai-models-that-correspond-to-different-personas/](https://techcrunch.com/2025/06/18/openai-found-features-in-ai-models-that-correspond-to-different-personas/) **TL;DR:** OpenAI discovered that large language models contain internal "persona" features: neural patterns linked to specific behaviours like helpfulness or sarcasm. By activating or suppressing these, researchers can steer the model’s personality and alignment, offering a new path to controlling and debugging AI behaviour.
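As a loose open-source analogue (this is not OpenAI's internal tooling), the same idea is usually approximated with activation steering: nudging a layer's hidden states along a chosen direction via a forward hook. The model, layer index, and steering vector below are all placeholders:

```python
# Activation-steering sketch with transformers: add a "persona" direction to one
# decoder layer's hidden states during generation. Everything here is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # small placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32)

layer_idx, strength = 12, 4.0
persona_direction = torch.randn(model.config.hidden_size)  # stand-in for a learned feature
persona_direction /= persona_direction.norm()

def steer(module, inputs, output):
    # Decoder layers return a tuple; hidden states are the first element.
    hidden_states = output[0] + strength * persona_direction.to(output[0].dtype)
    return (hidden_states,) + tuple(output[1:])

handle = model.model.layers[layer_idx].register_forward_hook(steer)
ids = tok("Tell me about your day.", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40)[0], skip_special_tokens=True))
handle.remove()
```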
2025-06-18T18:16:40
https://www.reddit.com/r/LocalLLaMA/comments/1leod7d/openai_found_features_in_ai_models_that/
nightsky541
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leod7d
false
null
t3_1leod7d
/r/LocalLLaMA/comments/1leod7d/openai_found_features_in_ai_models_that/
false
false
self
118
{'enabled': False, 'images': [{'id': 'uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?width=108&crop=smart&auto=webp&s=625eebe08226a18b15b91510476f4c7be9772770', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?width=216&crop=smart&auto=webp&s=13d2899d5a80a36063c81980d22831e1297bcabf', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?width=320&crop=smart&auto=webp&s=ceb13b5b2fde309ce3aa54d46d913dbdf42cfaa0', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?width=640&crop=smart&auto=webp&s=0b631edb44a79089ad01ccac65638fb4b9964745', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?width=960&crop=smart&auto=webp&s=c38dd9e511af34444967f6e2c2697265ce233580', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?width=1080&crop=smart&auto=webp&s=b75ca076bae2f3743cf14e12e0934bd772a6c20b', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/uLi8cvStggzfhD_nSz25v1NZ6hqYjkaB9F5ArBvqZX4.jpeg?auto=webp&s=84698e19aa0518cf705258cc9729d0c92c7f9097', 'width': 1200}, 'variants': {}}]}
How much does it cost AI companies to train X billion parameters?
3
Hello, I have been working on my own stuff lately and decided to test how much memory 5 million parameters (I call them units) would cost. It came out to be 37.7GB of RAM, but it made me think: if I had 2x 24GB GPUs I'd be able to train effectively for small problems, and that would cost me $4,000 (retail). So if I wanted to train a billion parameters (excluding electricity and other costs), it would cost me 200\*4000 = $800,000 per billion parameters as upfront hardware costs. FYI: Yes, this is a simplification. I am in no way intending to brag or to confuse anyone. The network had 3 layers: an input layer of 56 units, a hidden layer of 5M parameters, and an output layer of 16, and it is a regression problem. Posting this here because my post keeps getting deleted in the MachineLearning sub.
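For context on the memory side of such estimates, a common back-of-the-envelope figure is bytes per parameter for mixed-precision Adam training - a sketch, not a rule, since it ignores activations, batch size, and parallelism overhead:

```python
# Rough memory estimate for mixed-precision Adam training (ignores activations).
# bf16 weights (2) + bf16 grads (2) + fp32 master weights (4) + Adam m,v (4+4) = 16 B/param
def train_memory_gb(num_params: int, bytes_per_param: int = 16) -> float:
    return num_params * bytes_per_param / 1024**3

for n in (5_000_000, 1_000_000_000, 7_000_000_000):
    print(f"{n/1e6:>8.0f}M params -> ~{train_memory_gb(n):6.1f} GB (before activations)")

# The post's cost scaling assumes hardware needs grow linearly with parameter count:
print((1_000_000_000 // 5_000_000) * 4000)  # 200 * $4,000 = $800,000
```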
2025-06-18T18:18:12
https://www.reddit.com/r/LocalLLaMA/comments/1leoej7/how_much_does_it_cost_ai_companies_to_train/
KingYSL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoej7
false
null
t3_1leoej7
/r/LocalLLaMA/comments/1leoej7/how_much_does_it_cost_ai_companies_to_train/
false
false
self
3
null
Cluster advice needed
0
Hello LocalLLaMA, I'm new to this sub so sorry if this breaks any rules. I'm a young enthusiast and have been working on my dream AI project for a while. I was looking at eventually building a dual A100 40GB PCIe cluster, but I noticed that eBay has little to no used supply (I'm trying to budget). Any help or advice on setting this up would be greatly appreciated. Also open to any other setup recommendations.
2025-06-18T18:28:15
https://www.reddit.com/r/LocalLLaMA/comments/1leonta/cluster_advice_needed/
Fun_Nefariousness228
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leonta
false
null
t3_1leonta
/r/LocalLLaMA/comments/1leonta/cluster_advice_needed/
false
false
self
0
null
Daily Paper Discussions on the Yannic Kilcher Discord -> V-JEPA 2
0
As a part of daily paper discussions on the Yannic Kilcher discord server, I will be volunteering to lead the analysis of the world model that achieves state-of-the-art performance on visual understanding and prediction in the physical world -> V-JEPA 2 🧮 🔍 V-JEPA 2 is a 1.2 billion-parameter model that was built using [Meta Joint Embedding Predictive Architecture](https://ai.meta.com/blog/yann-lecun-advances-in-ai-research/) (JEPA), which we first shared in 2022. Highlights: 1. **Groundbreaking AI Model**: V-JEPA 2 leverages over 1 million hours of internet-scale video data to achieve state-of-the-art performance in video understanding, prediction, and planning tasks. 2. **Zero-Shot Robotic Control**: The action-conditioned world model, V-JEPA 2-AC, enables robots to perform complex tasks like pick-and-place in new environments without additional training. ​ 3. **Human Action Anticipation**: V-JEPA 2 achieves a 44% improvement over previous models in predicting human actions, setting new benchmarks in the Epic-Kitchens-100 dataset. ​ 4. **Video Question Answering Excellence**: When aligned with a large language model, V-JEPA 2 achieves top scores on multiple video QA benchmarks, showcasing its ability to understand and reason about the physical world. ​ 5. **Future of AI Systems**: This research paves the way for advanced AI systems capable of perceiving, predicting, and interacting with the physical world, with applications in robotics, autonomous systems, and beyond. ​ 🌐 [https://huggingface.co/papers/2506.09985](https://huggingface.co/papers/2506.09985) 🤗 [https://huggingface.co/collections/facebook/v-jepa-2-6841bad8413014e185b497a6](https://huggingface.co/collections/facebook/v-jepa-2-6841bad8413014e185b497a6) 🛠️ Fine-tuning Notebook @ [https://colab.research.google.com/drive/16NWUReXTJBRhsN3umqznX4yoZt2I7VGc?usp=sharing](https://colab.research.google.com/drive/16NWUReXTJBRhsN3umqznX4yoZt2I7VGc?usp=sharing) 🕰 Friday, June 19, 2025, 12:30 AM UTC // Friday, June 19, 2025 6.00 AM IST // Thursday, June 18, 2025, 5:30 PM PDT Try the streaming demo on SSv2 checkpoint [https://huggingface.co/spaces/qubvel-hf/vjepa2-streaming-video-classification](https://huggingface.co/spaces/qubvel-hf/vjepa2-streaming-video-classification) Join in for the fun \~ [https://discord.gg/mspuTQPS?event=1384953914029506792](https://discord.gg/mspuTQPS?event=1384953914029506792) https://preview.redd.it/3iswz3i5dq7f1.png?width=766&format=png&auto=webp&s=50669f609f62282f37e8c0ff823ef46059df325a https://reddit.com/link/1leoy4x/video/mvs555l3dq7f1/player
2025-06-18T18:39:29
https://www.reddit.com/r/LocalLLaMA/comments/1leoy4x/daily_paper_discussions_on_the_yannic_kilcher/
CATALUNA84
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoy4x
false
null
t3_1leoy4x
/r/LocalLLaMA/comments/1leoy4x/daily_paper_discussions_on_the_yannic_kilcher/
false
false
https://b.thumbs.redditm…7pU7K6VI2ZWg.jpg
0
null
LoRAs for LLMs
0
Do we have this option? 🤔 Lately I've been seeing new models pop up left and right, and oops, this one doesn't understand xyz, so I have to download another model... only to find out it's missing a chunk of the previous model's dataset. Having LoRAs that plug into LLMs would be pretty useful, and I don't think I've seen anyone use it. Or am I missing something (I'm new btw), even though I have a dozen or so models lol.
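For what it's worth, this does exist on the text side: PEFT LoRA adapters load on top of a base model much like image-model LoRAs. A minimal sketch - the base checkpoint and adapter repo names below are placeholders:

```python
# Loading a LoRA adapter on top of a base LLM with PEFT (names are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"   # base checkpoint (placeholder)
adapter_id = "someuser/my-domain-lora"         # LoRA adapter repo (placeholder)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights

inputs = tokenizer("Question about the adapter's domain:", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```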
2025-06-18T18:39:50
https://www.reddit.com/r/LocalLLaMA/comments/1leoyg3/lorras_for_llms/
mk8933
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoyg3
false
null
t3_1leoyg3
/r/LocalLLaMA/comments/1leoyg3/lorras_for_llms/
false
false
self
0
null
self host minimax?
5
I want to use MiniMax, but I'm just not sure about sending data to China and would rather self-host it. Is that possible? Which locally hosted, agent-focused model can we run on either rented hardware or local GPUs?
2025-06-18T18:40:16
https://www.reddit.com/r/LocalLLaMA/comments/1leoyu2/self_host_minimax/
Just_Lingonberry_352
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leoyu2
false
null
t3_1leoyu2
/r/LocalLLaMA/comments/1leoyu2/self_host_minimax/
false
false
self
5
null
lmarena not telling us chatbot names after battle
0
yupp.ai is a recent alternative to lmarena.
2025-06-18T18:59:46
https://www.reddit.com/r/LocalLLaMA/comments/1lepgii/lmarena_not_telling_us_chatbot_names_after_battle/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lepgii
false
null
t3_1lepgii
/r/LocalLLaMA/comments/1lepgii/lmarena_not_telling_us_chatbot_names_after_battle/
false
false
self
0
null
Mobile Phones are becoming better at running AI locally on the device.
39
We aggregated the tokens/second on various devices that use apps built with Cactus. https://preview.redd.it/phdczm64hq7f1.png?width=1320&format=png&auto=webp&s=f7981fa2775bc2a723e2d51f738a75d8ae7bd432 * 1B - 4B models at INT4 run quite fast (we shipped some improvements though). * You can see the full list on our GitHub [https://github.com/cactus-compute/cactus](https://github.com/cactus-compute/cactus). You might be wondering if these models aren’t too small to get meaningful results, however: * Beyond coding and large-scale enterprise projects that involves reasoning over massive contexts, these models are overkill.  * Most products are fine with GPT 4.1 actually, users working on embedding even go for much smaller models, Gemma is great. [Gemma 3n 4B is very competitive!](https://preview.redd.it/ow1n6jbxgq7f1.png?width=1200&format=png&auto=webp&s=7699d15dc26eae73165c1455af491dd7ecddc19b) * 1-4B models are perfect for on-device problems like automatic message/call handling, email summary, gallery search, photo editing, text retrieval, reminder/calendar management, phone settings control, text-to-speech, realtime translation, quick Q/As and other personal problems * Even Apple’s foundation framework and Google AI Edge products do not exceed 3B either. You might also be thinking “yes privacy might be a use case, but is API cost really a problem”, well its not for B2B products and …but its nuanced. * For **consumer** **products** with **100s of millions of users** and **<= 3B in revenue**, (Pinterest, Dropbox, Telegram, Duolingo, Blinklist, Audible, ), covering the cost for 500m users is infeasible, makes more sense to offload the costs to the users via a premium package or deploying in-house versions. * Well, wouldn’t they maximise profits and reduce operational overhead by letting the users run the AI locally? * In fact, I would argue that Cursor is becoming too expensive for non-corporate users, and could benefit by using a local model for simple tasks. * The future of personal AI is heading towards realtime live models like Project Astra, Gemini Live, ChatGPT Live Preview etc, which all need very low latency for good user experience. * I mean Zoom/Meets/Teams calls still face latency issues, and we see this glitch in these live streaming models. * We created a low-latency live AI system that runs locally on device with Cactus, watch demo here [https://www.linkedin.com/feed/update/urn:li:activity:7334225731243139072](https://www.linkedin.com/feed/update/urn:li:activity:7334225731243139072) Please share your thoughts here in the comments.
2025-06-18T19:02:35
https://www.reddit.com/r/LocalLLaMA/comments/1lepjc5/mobile_phones_are_becoming_better_at_running_ai/
Henrie_the_dreamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lepjc5
false
null
t3_1lepjc5
/r/LocalLLaMA/comments/1lepjc5/mobile_phones_are_becoming_better_at_running_ai/
false
false
https://b.thumbs.redditm…yVxMHVZz-hHY.jpg
39
{'enabled': False, 'images': [{'id': 'bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?width=108&crop=smart&auto=webp&s=02bdcaa524e19f8a3591b0deaf1d84df538991c2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?width=216&crop=smart&auto=webp&s=13069c9b0370bf7ef089dded89bebbddf2f4dce6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?width=320&crop=smart&auto=webp&s=4fcab4e9474a68e7c215b0653bd541d1de214525', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?width=640&crop=smart&auto=webp&s=2764476115f204e3b179924d64c1d7e63030de16', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?width=960&crop=smart&auto=webp&s=65c255291c236fd411a011f47430e2394e56913e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?width=1080&crop=smart&auto=webp&s=ed45c72b2172571c84c65a1e522b75946bb2da0c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bssrhhUFkv6YYPmNcbuJIt4gLvIfF5uq2fTh65BCaWI.png?auto=webp&s=bae00d435c669a6046db9cac8aa119d9ad6e0d42', 'width': 1200}, 'variants': {}}]}
Unlimited Repeated generations by fine-tuned model
0
I was fine-tuning the phi-4 14B model on a math dataset. The first time, I trained it without any system prompt and it worked fine. Then I added a system prompt stating "You are a math solver. Only answer math related questions. Show step-by-step solution", and it started producing faulty outputs, repeating the same text in an endless loop. I tried changing the temperature and min\_p parameters too, but it did not work. Has anybody else faced this issue, or have I discovered something new?
2025-06-18T19:23:46
https://www.reddit.com/r/LocalLLaMA/comments/1leq2y1/unlimited_repeated_generations_by_finetuned_model/
ILoveMy2Balls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leq2y1
false
null
t3_1leq2y1
/r/LocalLLaMA/comments/1leq2y1/unlimited_repeated_generations_by_finetuned_model/
false
false
self
0
null
The Bizarre Limitations of Apple's Foundation Models Framework
47
Last week Apple announced some great new APIs for their on-device foundation models in OS 26. Devs have been experimenting with it for over a week now, and the local LLM is surprisingly capable for only a 3B model w/2-bit quantization. It's also very power efficient because it leverages the ANE. You can try it out for yourself if you have the current developer OS releases as a [chat interface](https://github.com/PallavAg/Apple-Intelligence-Chat) or using [Apple's game dialog demo](https://developer.apple.com/documentation/foundationmodels/generate-dynamic-game-content-with-guided-generation-and-tools). Unfortunately, people are quickly finding that artificial restrictions are limiting the utility of the framework (at least for now). The first issue most devs will notice are the overly aggressive guardrails. Just take a look at the posts over on the [developer forums](https://developer.apple.com/forums/topics/machine-learning-and-ai/machine-learning-and-ai-foundation-models). Everything from news summarization to apps about fishing and camping are blocked. All but the most bland dialog in the Dream Coffee demo is also censored - just try asking "Can I get a polonium latte for my robot?". You can't even work around the guardrails through clever prompting because the API call itself returns an error. There are also rate limits for certain uses, so no batch processing or frequent queries. The excuse here might be power savings on mobile, but the only comparable workaround is to bundle another open-weight model - which will totally nuke the battery anyway. Lastly, you cannot really build an app around any Apple Intelligence features because the App Store ecosystem does not allow publishers to restrict availability to supported devices. Apple will tell you that you need a fallback for older devices, in case local models are not available. But that kind of defeats the purpose - if I need to bundle Mistral or Qwen with my app "just in case", then I might as well not use the Foundation Models Framework at all. I really hope that these issues get resolved during the OS 26 beta cycle. There is a ton of potential here for local AI apps, and I'd love to see it take off!
2025-06-18T19:29:31
https://www.reddit.com/r/LocalLLaMA/comments/1leq843/the_bizarre_limitations_of_apples_foundation/
SandBlaster2000AD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leq843
false
null
t3_1leq843
/r/LocalLLaMA/comments/1leq843/the_bizarre_limitations_of_apples_foundation/
false
false
self
47
{'enabled': False, 'images': [{'id': 'JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?width=108&crop=smart&auto=webp&s=b2073825a7abe157927355836cf908592b7b7b59', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?width=216&crop=smart&auto=webp&s=1959c4faf3025212e6862e5e51c6f042545dee44', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?width=320&crop=smart&auto=webp&s=28cc80957d9aa35192f34a7f6ed0320d4537dc82', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?width=640&crop=smart&auto=webp&s=84215b2db5c4e98a5377db6d0222540333e053ea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?width=960&crop=smart&auto=webp&s=94f640292421bbb47ce03f020793d222d5d4fdaa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?width=1080&crop=smart&auto=webp&s=75d3081db58dcacd63df590d45fb480df0d4d5d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JL-tf_8hpEnmaNwWT7XYcTlHn4xeQ-Jn6zZ-F4f3TIQ.png?auto=webp&s=10343aedf16a4e3fd392358d9e6802fa17b34cf8', 'width': 1200}, 'variants': {}}]}
Which local API is the best to work with when developing local LLM apps for yourself?
3
There are so many local LLM servers out there, each with their own API (llama.cpp, Ollama, LM Studio, vLLM, etc.). I am a bit overwhelmed trying to decide which API to use. Does anyone have any experience or feedback in this area that can help me choose one?
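One practical note: most of these servers expose an OpenAI-compatible endpoint, so code written against one usually ports to the others by changing the base URL. The ports below are common defaults and an assumption worth double-checking against each tool's docs:

```python
# The same OpenAI-compatible client code works against most local servers;
# only the base URL (and default port) changes. Ports are common defaults.
from openai import OpenAI

backends = {
    "llama.cpp (llama-server)": "http://localhost:8080/v1",
    "Ollama":                   "http://localhost:11434/v1",
    "LM Studio":                "http://localhost:1234/v1",
    "vLLM":                     "http://localhost:8000/v1",
}

client = OpenAI(base_url=backends["Ollama"], api_key="not-needed")
resp = client.chat.completions.create(
    model="llama3.1",  # whatever model the server has loaded/pulled
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```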
2025-06-18T19:34:09
https://www.reddit.com/r/LocalLLaMA/comments/1leqcc5/which_local_api_is_the_best_to_work_with_when/
crispyfrybits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leqcc5
false
null
t3_1leqcc5
/r/LocalLLaMA/comments/1leqcc5/which_local_api_is_the_best_to_work_with_when/
false
false
self
3
null
Why a Northern BC credit union took AI sovereignty into its own hands
0
Not entirely LocalLLaMA but close.
2025-06-18T19:36:38
https://betakit.com/why-a-northern-bc-credit-union-took-ai-sovereignty-into-its-own-hands/
redpatchguy
betakit.com
1970-01-01T00:00:00
0
{}
1leqeld
false
null
t3_1leqeld
/r/LocalLLaMA/comments/1leqeld/why_a_northern_bc_credit_union_took_ai/
false
false
default
0
{'enabled': False, 'images': [{'id': '7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY.jpeg?width=108&crop=smart&auto=webp&s=22a902d71dc4069f4139912cb856424f925ab4bf', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY.jpeg?width=216&crop=smart&auto=webp&s=99e53167ba5e692002afc6e31d248d293a5d88bf', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY.jpeg?width=320&crop=smart&auto=webp&s=ad0d7dfd02c632ce634d0fb70a44234c364e7982', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY.jpeg?width=640&crop=smart&auto=webp&s=28ad3253afc5056e546628815971811423947970', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY.jpeg?width=960&crop=smart&auto=webp&s=0361a6130885bdd7cf07d356cd99cc49161b42e5', 'width': 960}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/7PosR6QMNfxLEK3oDmXKj-fHdB5Cmelx6JrJtB-hdCY.jpeg?auto=webp&s=a08702ee949688cc768741f8e6d61dc1b36480b1', 'width': 1050}, 'variants': {}}]}
RAG injection in Chain of Thought (COT)
10
I just recently started running 'deepseek-ai/DeepSeek-R1-Distill-Qwen-14B' locally (MacBook Pro M4, 48GB). I have been messing around with an idea where I inject information from a tool-use/RAG model into the <think> section. Essentially: user prompt > DeepSeek R1 runs 50 tokens > stop. Run another tool-use model on the user prompt and ask whether we have a tool to answer the question; if yes return the results, if no return an empty string > inject the result back into the conversation DeepSeek R1 started with those 50 tokens > continue running > output from DeepSeek R1 with the RAG thought injection. Essentially trying to get the benefit of both a reasoning model and a tool-use model (I'm aware tool use is mostly about trained output structure, and R1 wasn't trained on the commonly used tool-call format). Curious if anyone else has done anything like this. Happy to share code.
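A minimal sketch of that flow, assuming an OpenAI-compatible local endpoint and a trivial keyword-based "tool router" - the endpoint, model name, chat markers, and toy tool are all placeholders, not the poster's actual code:

```python
# Two-stage generation with a RAG/tool result injected into the <think> block.
# Endpoint, model name, and the toy "tool" are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="none")
REASONER = "deepseek-r1-distill-qwen-14b"  # placeholder model name

def lookup_tool(question: str) -> str:
    # Stand-in for a tool-use/RAG model: return facts if we have them, else "".
    kb = {"boiling point of water": "Water boils at 100 C at 1 atm."}
    return next((v for k, v in kb.items() if k in question.lower()), "")

question = "What is the boiling point of water at sea level?"

# 1) Let the reasoner start thinking, capped at ~50 tokens.
# The <|User|>/<|Assistant|> markers only approximate R1's chat template.
start = client.completions.create(
    model=REASONER,
    prompt=f"<|User|>{question}<|Assistant|><think>\n",
    max_tokens=50,
)
partial_think = start.choices[0].text

# 2) Ask the tool/RAG side for anything relevant and splice it into the thought.
tool_result = lookup_tool(question)
injection = f"\n[Retrieved]: {tool_result}\n" if tool_result else ""

# 3) Resume generation with the injected thought as part of the prompt.
final = client.completions.create(
    model=REASONER,
    prompt=f"<|User|>{question}<|Assistant|><think>\n{partial_think}{injection}",
    max_tokens=512,
)
print(final.choices[0].text)
```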
2025-06-18T20:00:58
https://www.reddit.com/r/LocalLLaMA/comments/1ler0ew/rag_injection_in_chain_of_thought_cot/
Strange_Test7665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ler0ew
false
null
t3_1ler0ew
/r/LocalLLaMA/comments/1ler0ew/rag_injection_in_chain_of_thought_cot/
false
false
self
10
null
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training
112
**Over the past year and a half** I've been working on the problem of **factual finetuning** \-- **training an open-source LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I just released **Augmentoolkit 3.0** — an easy-to-use dataset generation and model training tool. Add documents, click a button, and Augmmentoolkit will do everything for you: it'll generate a domain-specific dataset, combine it with a balanced amount of generic data, automatically train a model on it, download it, quantize it, and run it for inference (accessible with a built-in chat interface). The project (and its demo models) are fully open-source. I even trained a model to run inside Augmentoolkit itself, allowing for faster **local dataset generation**. This update took more than six months and thousands of dollars to put together, and represents **a complete rewrite and overhaul of the original project.** It includes 16 prebuilt dataset generation pipelines and the extensively-documented code and conventions to build more. Beyond just factual finetuning, it even **includes an experimental** [**GRPO pipeline**](https://github.com/e-p-armstrong/augmentoolkit/blob/master/docs/grpo.md) that lets you **train a model to do any conceivable task** by just **writing a prompt to grade that task.** # The Links * [Project](https://github.com/e-p-armstrong/augmentoolkit) * [Train your first model in 13 minutes quickstart tutorial video](https://www.youtube.com/watch?v=E9TyyZzIMyY&ab_channel=Augmentoolkit) * Demo model (what the quickstart produces) * [Link](https://huggingface.co/Heralax/llama-Augmentoolkit-Quickstart-Factual-Demo-Example) * Dataset and training configs are fully open source. The config is literally the quickstart config; the dataset is * The demo model is an LLM trained on a subset of the US Army Field Manuals -- the best free and open modern source of comprehensive documentation on a well-known field that I have found. This is also because I trained a model on these in the past and so training on them now serves as a good comparison between the power of the current tool compared to its previous version. * Experimental GRPO models * Now that Augmentoolkit includes the ability to grade models for their performance on a task, I naturally wanted to try this out, and on a task that people are familiar with. * I produced two RP models (base: Mistral 7b v0.2) with the intent of maximizing writing style quality and emotion, while minimizing GPT-isms. * One model has thought processes, the other does not. The non-thought-process model came out better for reasons described in the model card. * Non-reasoner [https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts](https://huggingface.co/Heralax/llama-gRPo-emotions-nothoughts) * Reasoner [https://huggingface.co/Heralax/llama-gRPo-thoughtprocess](https://huggingface.co/Heralax/llama-gRPo-thoughtprocess) # The Process to Reproduce * Clone * `git clone` [`https://github.com/e-p-armstrong/augmentoolkit.git`](https://github.com/e-p-armstrong/augmentoolkit.git) * Run Start Script * Local or Online * Mac * `bash` [`macos.sh`](http://macos.sh/) * `bash local_macos.sh` * Linux * `bash` [`linux.sh`](http://linux.sh/) * `bash local_linux.sh` * Windows + warning * `./start_windows.bat` * Windows interface compatibility is uncertain. It's probably more reliable to use the CLI instead. 
Instructions are here * Add API keys or use the local model * I trained a 7b model that is purpose-built to run Augmentoolkit pipelines (Apache license). This means that you can probably generate data at a decent speed on your own computer. It will definitely be slower than with an API, but it will be *much* better than trying to generate tens of millions of tokens with a local 70b. * There are separate start scripts for local datagen. * You'll probably only be able to get good dataset generation speed on a linux machine even though it does technically run on Mac, since Llama.cpp is MUCH slower than vLLM (which is Linux-only). * Click the "run" Button * Get Your Model * The integrated chat interface will automatically let you chat with it when the training and quanting is finished * The model will also automatically be pushed to Hugging Face (make sure you have enough space!) # Uses Besides faster generation times and lower costs, an expert AI that is trained on a domain gains a "big-picture" understanding of the subject that a generalist just won't have. It's the difference between giving a new student a class's full textbook and asking them to write an exam, versus asking a graduate student in that subject to write the exam. The new student probably won't even know where in that book they should look for the information they need, and even if they see the correct context, there's no guarantee that they understands what it means or how it fits into the bigger picture. Also, trying to build AI apps based on closed-source LLMs released by big labs sucks: * The lack of stable checkpoints under the control of the person running the model, makes the tech unstable and unpredictable to build on. * Capabilities change without warning and models are frequently made worse. * People building with AI have to work around the LLMs they are using (a moving target), rather than make the LLMs they are using fit into their system * Refusals force people deploying models to dance around the stuck-up morality of these models while developing. * Closed-source labs charge obscene prices, doing monopolistic rent collecting and impacting the margins of their customers. * Using closed-source labs is a privacy nightmare, especially now that API providers may be required by law to save and log formerly-private API requests. * Different companies have to all work with the same set of models, which have the same knowledge, the same capabilities, the same opinions, and they all sound more or less the same. But current open-source models often either suffer from a severe lack of capability, or are massive enough that they might as well be closed-source for most of the people trying to run them. The proposed solution? Small, efficient, powerful models that achieve superior performance on the things they are being used for (and sacrifice performance in the areas they *aren't* being used for) which are trained for their task and are controlled by the companies that use them. With Augmentoolkit: * You train your models, decide when those models update, and have full transparency over what went into them. * Capabilities change only when the company wants, and no one is forcing them to make their models worse. * People working with AI can customize the model they are using to function as part of the system they are designing, rather than having to twist their system to match a model. * Since you control the data it is built on, the model is only as restricted as you want it to be. 
* 7 billion parameter models (the standard size Augmentoolkit trains) are so cheap to run it is absurd. They can run on a laptop, even. * Because you control your model, you control your inference, and you control your customers' data. * With your model's capabilities being fully customizable, your AI sounds like *your* AI, and has the opinions and capabilities that you want it to have. Furthermore, the open-source indie finetuning scene has been on life support, largely due to a lack of ability to make data, and the difficulty of getting started with (and getting results with) training, compared to methods like merging. Now that data is far easier to make, and training for specific objectives is much easier to do, and there is a good baseline with training wheels included that makes getting started easy, the hope is that people can iterate on finetunes and the scene can have new life. Augmentoolkit is taking a bet on an open-source future powered by small, efficient, Specialist Language Models. # Cool things of note * Factually-finetuned models can actually cite what files they are remembering information from, and with a good degree of accuracy at that. This is not exclusive to the domain of RAG anymore. * Augmentoolkit models by default use a custom prompt template because it turns out that making SFT data look more like pretraining data in its structure helps models use their pretraining skills during chat settings. This includes factual recall. * Augmentoolkit was used to create the dataset generation model that runs Augmentoolkit's pipelines. You can find the config used to make the dataset (2.5 gigabytes) in the `generation/core_composition/meta_datagen` folder. * There's a pipeline for turning normal SFT data into reasoning SFT data that can give a good cold start to models that you want to give thought processes to. A number of datasets converted using this pipeline [are available on Hugging Face](https://huggingface.co/Augmentoolkit), fully open-source. * Augmentoolkit does not just automatically train models on the domain-specific data you generate: to ensure that there is enough data made for the model to 1) generalize and 2) learn the actual capability of conversation, Augmentoolkit will balance your domain-specific data with generic conversational data, ensuring that the LLM becomes smarter while retaining all of the question-answering capabilities imparted by the facts it is being trained on. * If you just want to make data and don't want to automatically train models, there's a config file option for that of course. # Why do all this + Vision I believe AI alignment is solved when individuals and orgs can make their AI act as they want it to, rather than having to settle for a one-size-fits-all solution. The moment people can use AI specialized to their domains, is also the moment when AI stops being slightly wrong at everything, and starts being incredibly useful across different fields. Furthermore, we must do everything we can to avoid a specific type of AI-powered future: the AI-powered future where what AI believes and is capable of doing is entirely controlled by a select few. Open source has to survive and thrive for this technology to be used right. As many people as possible must be able to control AI. I want to stop a slop-pocalypse. I want to stop a future of extortionate rent-collecting by the established labs. I want open-source finetuning, even by individuals, to thrive. 
I want people to be able to be artists, with data their paintbrush and AI weights their canvas. Teaching models facts was the first step, and I believe this first step has now been taken. It was probably one of the hardest; best to get it out of the way sooner. After this, I'm going to be making coding expert models for specific languages, and I will also improve the [GRPO pipeline](https://github.com/e-p-armstrong/augmentoolkit/blob/master/docs/grpo.md), which allows for models to be trained to do *literally anything* better. I encourage you to fork the project so that you can make your own data, so that you can create your own pipelines, and so that you can keep the spirit of open-source finetuning and experimentation alive. I also encourage you to star the project, because I like it when "number go up". Huge thanks to Austin Cook and all of Alignment Lab AI for helping me with ideas and with getting this out there. Look out for some cool stuff from them soon, by the way :) [Happy hacking!](https://github.com/e-p-armstrong/augmentoolkit)
2025-06-18T20:33:11
https://www.reddit.com/r/LocalLLaMA/comments/1lersrw/augmentoolkit_30_7_months_of_work_mit_license/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lersrw
false
null
t3_1lersrw
/r/LocalLLaMA/comments/1lersrw/augmentoolkit_30_7_months_of_work_mit_license/
false
false
self
112
{'enabled': False, 'images': [{'id': 'JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?width=108&crop=smart&auto=webp&s=dfa70ffebb9194edbb5e27da4a36fd2490c85f6e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?width=216&crop=smart&auto=webp&s=bc6f5921735c764b4a5872aaaf137c21fd9eaf0a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?width=320&crop=smart&auto=webp&s=861480078cfe5b5a6d606d8748c6d90c584e199a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?width=640&crop=smart&auto=webp&s=8583a011c64efbc563707ac8996f32baf680fa6e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?width=960&crop=smart&auto=webp&s=147a97f5a8ae0e8f6a96c8e055cb73135eb1ed03', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?width=1080&crop=smart&auto=webp&s=b1a5e810166e95e60d15020226816c1c3cf0fa26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JPdazJ6jtyR317Uj2SGJFQQZYzaRBapP-lbz0ow2wM8.png?auto=webp&s=050e808bb39e5d219443e460d847b015b180988d', 'width': 1200}, 'variants': {}}]}
Vector with Ollama and push it into ChromaDB
0
Hello! I am currently interning without much prior knowledge, and I have to handle a file with shape (287,113,3). My task is to vectorize the data using only Ollama and then load it into ChromaDB, while also being able to talk to the AI, all without using LangChain. I tried to watch YouTube videos about this task, but most of them used LangChain, and my mentor advised me to avoid it. How should I approach this problem?
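A minimal sketch of the Ollama-plus-ChromaDB path with no LangChain. It assumes the data has already been flattened into text chunks, that an embedding model such as `nomic-embed-text` has been pulled in Ollama, and that a chat model such as `llama3.1` is available; all names are placeholders:

```python
# Embed text chunks with Ollama, store them in ChromaDB, then answer a question
# by retrieving the closest chunks and feeding them to a chat model.
# Assumes `ollama pull nomic-embed-text` and `ollama pull llama3.1` were run.
import ollama
import chromadb

chunks = ["row 0: ...", "row 1: ...", "row 2: ..."]  # your data, turned into text

client = chromadb.PersistentClient(path="./chroma_store")
collection = client.get_or_create_collection("intern_data")

for i, chunk in enumerate(chunks):
    emb = ollama.embeddings(model="nomic-embed-text", prompt=chunk)["embedding"]
    collection.add(ids=[str(i)], embeddings=[emb], documents=[chunk])

question = "What stands out in this data?"
q_emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
hits = collection.query(query_embeddings=[q_emb], n_results=3)
context = "\n".join(hits["documents"][0])

answer = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(answer["message"]["content"])
```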
2025-06-18T21:14:25
https://www.reddit.com/r/LocalLLaMA/comments/1lestr2/vector_with_ollama_and_push_it_into_chromadb/
Aggravating_Ad_3433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lestr2
false
null
t3_1lestr2
/r/LocalLLaMA/comments/1lestr2/vector_with_ollama_and_push_it_into_chromadb/
false
false
self
0
null
Run Open WebUI over HTTPS on Windows without exposing it to the internet tutorial
4
Disclaimer! I'm learning. Feel free to help me make this tutorial better.

Hello! I've struggled for a while to run Open WebUI over HTTPS on Windows without exposing it to the internet. I wanted to be able to use voice and call mode in iOS browsers, but HTTPS was a requirement for that. At first I tried a self-signed certificate, but that proved not to be valid. After a bit of back and forth with Gemini Pro 2.5 I finally managed to do it, and I wanted to share it here in case anyone finds it useful, since I didn't find a complete tutorial on how to do it. The only catch is that you have to own a domain to be able to sign the certificate. (I don't know if there is any way to bypass this limitation.)

## Prerequisites

- Open WebUI installed and running on Windows (accessible at [http://localhost:8080](http://localhost:8080))
- WSL2 with a Linux distribution (I've used Ubuntu) installed on Windows
- A custom domain (we’ll use mydomain.com) managed via a provider that supports API access (I've used Cloudflare)
- Know your Windows local IP address (e.g., 192.168.1.123). To find it, open CMD and run `ipconfig`

## Step 1: Preparing the Windows Environment

Edit the `hosts` file so your PC resolves `openwebui.mydomain.com` to itself instead of the public internet.

1. Open Notepad as Administrator
2. Go to File > Open > `C:\Windows\System32\drivers\etc`
3. Select “All Files” and open the `hosts` file
4. Add this line at the end (replace with your local IP):

```
192.168.1.123 openwebui.mydomain.com
```

5. Save and close

## Step 2: Install Required Software in WSL (Ubuntu)

Open your WSL terminal and update the system:

```bash
sudo apt-get update && sudo apt-get upgrade -y
```

Install Nginx and Certbot with the DNS plugin:

```bash
sudo apt-get install -y nginx certbot python3-certbot-dns-cloudflare
```

## Step 3: Get a Valid SSL Certificate via DNS Challenge

This method doesn’t require exposing your machine to the internet.

### Get your API credentials:

1. Log into Cloudflare
2. Create an API Token with permissions to edit DNS for `mydomain.com`
3. Copy the token

### Create the credentials file in WSL:

```bash
mkdir -p ~/.secrets/certbot
nano ~/.secrets/certbot/cloudflare.ini
```

Paste the following (replace with your actual token):

```ini
# Cloudflare API token
dns_cloudflare_api_token = YOUR_API_TOKEN_HERE
```

Secure the credentials file:

```bash
sudo chmod 600 ~/.secrets/certbot/cloudflare.ini
```

### Request the certificate:

```bash
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini \
  -d openwebui.mydomain.com \
  --non-interactive --agree-tos -m [email protected]
```

If successful, the certificate will be stored at: `/etc/letsencrypt/live/openwebui.mydomain.com/`

## Step 4: Configure Nginx as a Reverse Proxy

Create the Nginx site config:

```bash
sudo nano /etc/nginx/sites-available/openwebui.mydomain.com
```

Paste the following (replace `192.168.1.123` with your Windows local IP):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name openwebui.mydomain.com;

    ssl_certificate /etc/letsencrypt/live/openwebui.mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/openwebui.mydomain.com/privkey.pem;

    location / {
        proxy_pass http://192.168.1.123:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Enable the site and test Nginx:

```bash
sudo ln -s /etc/nginx/sites-available/openwebui.mydomain.com /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
```

You should see: `syntax is ok` and `test is successful`

## Step 5: Network Configuration Between Windows and WSL

Get your WSL internal IP:

```bash
ip addr | grep eth0
```

Look for the `inet` IP (e.g., `172.29.93.125`)

Set up port forwarding using PowerShell as Administrator (in Windows):

```powershell
netsh interface portproxy add v4tov4 listenport=443 listenaddress=0.0.0.0 connectport=443 connectaddress=<WSL-IP>
```

Add a firewall rule to allow external connections on port 443:

1. Open Windows Defender Firewall with Advanced Security
2. Go to Inbound Rules > New Rule
3. Rule type: Port
4. Protocol: TCP. Local Port: 443
5. Action: Allow the connection
6. Profile: Check Private (at minimum)
7. Name: Something like `Nginx WSL (HTTPS)`

## Step 6: Start Everything and Enjoy

Restart Nginx in WSL:

```bash
sudo systemctl restart nginx
```

Check that it’s running:

```bash
sudo systemctl status nginx
```

You should see: `Active: active (running)`

## Final Test

1. Open a browser on your PC and go to: [https://openwebui.mydomain.com](https://openwebui.mydomain.com)
2. You should see the Open WebUI interface with:
   - A green padlock
   - No security warnings
3. To access it from your phone:
   - Either edit its `hosts` file (if possible)
   - Or configure your router’s DNS to resolve `openwebui.mydomain.com` to your local IP

Alternatively, you can access:

```
https://192.168.1.123
```

This may show a certificate warning because the certificate is issued for the domain, not the IP, but encryption still works.

## Pending problems

- When using voice call mode on the phone, only the first sentence of the LLM response is spoken. If I exit voice call mode and click the read-aloud button on the response, only the first sentence is read as well. But if I go to the PC where everything is running and click the read-aloud button, the whole LLM response is read. So the audio is generated; this seems to be an iOS issue, but I haven't managed to solve it yet. Any tips will be appreciated.

I hope you find this tutorial useful ^^
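Not part of the original tutorial, but if you prefer to script the Step 6 check instead of using a browser, a tiny Python snippet with the `requests` package (an assumption, not something the tutorial uses) confirms that the certificate chain actually validates for the domain:

```python
# Quick sanity check that the reverse proxy serves a valid certificate.
# Run from any machine that resolves openwebui.mydomain.com to your Windows IP.
import requests

try:
    r = requests.get("https://openwebui.mydomain.com", timeout=10)  # verify=True is the default
    print("HTTPS OK:", r.status_code)
except requests.exceptions.SSLError as e:
    print("Certificate problem:", e)
except requests.exceptions.ConnectionError as e:
    print("Could not reach the server:", e)
```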
2025-06-18T21:54:52
https://www.reddit.com/r/LocalLLaMA/comments/1letslu/run_open_webui_over_https_on_windows_without/
gwyngwynsituation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1letslu
false
null
t3_1letslu
/r/LocalLLaMA/comments/1letslu/run_open_webui_over_https_on_windows_without/
false
false
self
4
null
EchoStream – A Local AI Agent That Lives on Your iPhone
1
[removed]
2025-06-18T22:39:47
https://www.reddit.com/r/LocalLLaMA/comments/1leuu1o/echostream_a_local_ai_agent_that_lives_on_your/
Local_Yam_5657
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leuu1o
false
null
t3_1leuu1o
/r/LocalLLaMA/comments/1leuu1o/echostream_a_local_ai_agent_that_lives_on_your/
true
false
spoiler
1
null
LocalBuddys - Local Friends For Everyone (But need help)
3
LocalBuddys has a lightweight interface that works on every device and runs locally to ensure data security and avoid depending on any API. It is currently designed to be accessed from other devices, using your laptop or desktop as the main server. I am thinking of raising funds on Kickstarter and making this project professional so that more people will want to use it, but there are many shortcomings in this regard. Of course, a web interface is not enough; there are dozens of them nowadays. So I fine-tuned a few open-source models to develop a friendly model, but the result is not good at all. I really need help and guidance. This project is not for profit; the reason I want to raise funds on Kickstarter is to generate resources for further development. I'd like to share a screenshot to hear your thoughts.

https://preview.redd.it/dixaevbdkr7f1.png?width=1916&format=png&auto=webp&s=a3d6c9ff7752e1e84d8ac7571bc6871ff754e0f8

Of course, it's very simple right now. I wanted to create a few characters and add their animations, but I couldn't. If you're interested and want to spend some of your free time on it, we can work together :)
2025-06-18T22:45:56
https://www.reddit.com/r/LocalLLaMA/comments/1leuz0z/localbuddys_local_friends_for_everyone_but_need/
Dismal-Cupcake-3641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leuz0z
false
null
t3_1leuz0z
/r/LocalLLaMA/comments/1leuz0z/localbuddys_local_friends_for_everyone_but_need/
false
false
https://b.thumbs.redditm…Ugf8WOnSgW1g.jpg
3
null
Someone to give me a runpod referral code?
0
I heard there's a sweet $500 bonus 👀 If anyone's got a referral link, I'd really appreciate it. Trying to get started without missing out!
2025-06-18T22:52:48
https://www.reddit.com/r/LocalLLaMA/comments/1lev4hc/someone_to_give_me_a_runpod_referral_code/
rainyposm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lev4hc
false
null
t3_1lev4hc
/r/LocalLLaMA/comments/1lev4hc/someone_to_give_me_a_runpod_referral_code/
false
false
self
0
null
Suggest a rig for running local LLM for ~$3,000
8
Simply that. I have a budget approx. $3k and I want to build or buy a rig to run the largest local llm for the budget. My only constraint is that it must run Linux. Otherwise I’m open to all options (DGX, new or used, etc). Not interested in training or finetuning models, just running
2025-06-18T23:33:44
https://www.reddit.com/r/LocalLLaMA/comments/1lew0rk/suggest_a_rig_for_running_local_llm_for_3000/
x0rchidia
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lew0rk
false
null
t3_1lew0rk
/r/LocalLLaMA/comments/1lew0rk/suggest_a_rig_for_running_local_llm_for_3000/
false
false
self
8
null
Does this mean we are free from the shackles of CUDA? We can use AMD GPUs wired up together to run models ?
25
2025-06-18T23:53:58
https://i.redd.it/y31qo2q5xr7f1.png
Just_Lingonberry_352
i.redd.it
1970-01-01T00:00:00
0
{}
1lewg4u
false
null
t3_1lewg4u
/r/LocalLLaMA/comments/1lewg4u/does_this_mean_we_are_free_from_the_shackles_of/
false
false
default
25
{'enabled': True, 'images': [{'id': 'y31qo2q5xr7f1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/y31qo2q5xr7f1.png?width=108&crop=smart&auto=webp&s=3089d7f5f8c49fb2ffac36489acea817090b88b1', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/y31qo2q5xr7f1.png?width=216&crop=smart&auto=webp&s=57012df1f6327299f567e7a36f27b6f430b230bd', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/y31qo2q5xr7f1.png?width=320&crop=smart&auto=webp&s=e3fc6a8ff123df7a12da08af01957d4e453f225d', 'width': 320}, {'height': 475, 'url': 'https://preview.redd.it/y31qo2q5xr7f1.png?width=640&crop=smart&auto=webp&s=b2f2ad2fad2ebbe87b16acdd6835744a159aea21', 'width': 640}], 'source': {'height': 637, 'url': 'https://preview.redd.it/y31qo2q5xr7f1.png?auto=webp&s=da0db09a1c0a815064d66f76242701fbc96fc043', 'width': 858}, 'variants': {}}]}
We built this project to increase LLM throughput by 3x. Now it has been adopted by IBM in their LLM serving stack!
423
Hi guys, our team has built this open-source project, LMCache, to reduce repetitive computation in LLM inference and let systems serve more people (3x more throughput in chat applications); it has been adopted in IBM's open-source LLM inference stack.

In LLM serving, the input is computed into intermediate states called the KV cache, which are reused to produce further answers. These data are relatively large (~1-2GB for long context) and are often evicted when GPU memory runs out. In those cases, when users ask a follow-up question, the software has to recompute the same KV cache. LMCache is designed to combat that by efficiently offloading and loading the KV cache to and from DRAM and disk. This is particularly helpful in multi-round QA settings where context reuse matters but GPU memory is not enough.

Ask us anything!

Github: [https://github.com/LMCache/LMCache](https://github.com/LMCache/LMCache)
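To make the "~1-2GB for long context" figure concrete, here is a rough back-of-the-envelope KV-cache size calculation; the layer/head numbers below are illustrative (roughly Llama-3-8B-like with GQA), not taken from the post.

```python
# Rough KV-cache size: 2 (K and V) * layers * kv_heads * head_dim * seq_len * bytes_per_elem
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):  # 2 bytes = fp16/bf16
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative Llama-3-8B-style config: 32 layers, 8 KV heads (GQA), head_dim 128
size = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=8_192)
print(f"{size / 1e9:.2f} GB")  # ~1.07 GB for a single 8k-token context, in the 1-2 GB ballpark
```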
2025-06-18T23:55:55
https://i.redd.it/775o8e8hxr7f1.jpeg
Nice-Comfortable-650
i.redd.it
1970-01-01T00:00:00
0
{}
1lewhla
false
null
t3_1lewhla
/r/LocalLLaMA/comments/1lewhla/we_built_this_project_to_increase_llm_throughput/
false
false
default
423
{'enabled': True, 'images': [{'id': '775o8e8hxr7f1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/775o8e8hxr7f1.jpeg?width=108&crop=smart&auto=webp&s=07c1104129fa40256bcf2871a8f6782191a78e1c', 'width': 108}, {'height': 137, 'url': 'https://preview.redd.it/775o8e8hxr7f1.jpeg?width=216&crop=smart&auto=webp&s=74eb56a5f2d7a7ef2717b7388c51c1327e2dcbf7', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/775o8e8hxr7f1.jpeg?width=320&crop=smart&auto=webp&s=7ba2a91d49787f015c8d52dd1ba38698179d2459', 'width': 320}, {'height': 408, 'url': 'https://preview.redd.it/775o8e8hxr7f1.jpeg?width=640&crop=smart&auto=webp&s=c12230c686bdb16949fed6cf8cf00afff6399ea3', 'width': 640}], 'source': {'height': 537, 'url': 'https://preview.redd.it/775o8e8hxr7f1.jpeg?auto=webp&s=2e779c1630a606fb09efae16b5f83314af330546', 'width': 841}, 'variants': {}}]}
How much is the 3090 on the used market in your country?
10
Hi there guys, hoping you're having a good day. I was wondering about used 3090 prices in your country, as they seem to vary a lot by location. I will start with Chile: here used 3090s hover between 550 and 650 USD. This is a bit of an increase versus some months ago, when they were between 500 and 550 USD. I also went to the EU, specifically Madrid, Spain, 3 weeks ago, and on a quick search they hovered between 600 and 700 EUR. BTW, as a reference, used 4090s go for ~1800-1900 USD, which is just insane, and new 5090s are in the 2700-2900 USD range, which is also insane.
2025-06-19T00:25:34
https://www.reddit.com/r/LocalLLaMA/comments/1lex3pi/how_much_is_the_3090_on_the_used_market_in_your/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lex3pi
false
null
t3_1lex3pi
/r/LocalLLaMA/comments/1lex3pi/how_much_is_the_3090_on_the_used_market_in_your/
false
false
self
10
null
How to set up local llms on a 6700 xt
8
All right, so I struggled for what's gotta be about four or five weeks now to get local LLMs running with my GPU, which is a 6700 XT. After this process of about four weeks I finally got something working on Windows, so here is the guide in case anyone is interested:

# AMD RX 6700 XT LLM Setup Guide - KoboldCpp with GPU Acceleration

**Successfully tested on AMD Radeon RX 6700 XT (gfx1031) running Windows 11**

## Performance Results

- **Generation Speed**: ~17 tokens/second
- **Processing Speed**: ~540 tokens/second
- **GPU Utilization**: 20/29 layers offloaded to GPU
- **VRAM Usage**: ~2.7GB
- **Context Size**: 4096 tokens

## The Problem

Most guides focus on ROCm setup, but the AMD RX 6700 XT (gfx1031 architecture) has compatibility issues with ROCm on Windows. The solution is using **Vulkan acceleration** instead, which provides excellent performance and stability.

## Prerequisites

- AMD RX 6700 XT graphics card
- Windows 10/11
- At least 8GB system RAM
- 4-5GB free storage space

## Step 1: Download KoboldCpp-ROCm

1. Go to: https://github.com/YellowRoseCx/koboldcpp-rocm/releases
2. Download the latest `koboldcpp_rocm.exe`
3. Create folder: `C:\Users\[YourUsername]\llamafile_test\koboldcpp-rocm\`
4. Place the executable inside the `koboldcpp-rocm` folder

## Step 2: Download a Model

Download a GGUF model (recommended: 7B parameter models for the RX 6700 XT):

- Qwen2.5-Coder-7B-Instruct (recommended for coding)
- Llama-3.1-8B-Instruct
- Any other 7B-8B GGUF model

Place the `.gguf` file in: `C:\Users\[YourUsername]\llamafile_test\`

## Step 3: Create Launch Script

Create `start_koboldcpp_optimized.bat` with this content:

```batch
@echo off
cd /d "C:\Users\[YourUsername]\llamafile_test"

REM Kill any existing processes
taskkill /F /IM koboldcpp-rocm.exe 2>nul

echo ===============================================
echo KoboldCpp with Vulkan GPU Acceleration
echo ===============================================
echo Model: [your-model-name].gguf
echo GPU: AMD RX 6700 XT via Vulkan
echo GPU Layers: 20
echo Context: 4096 tokens
echo Port: 5001
echo ===============================================

koboldcpp-rocm\koboldcpp-rocm.exe ^
  --model "[your-model-name].gguf" ^
  --host 127.0.0.1 ^
  --port 5001 ^
  --contextsize 4096 ^
  --gpulayers 20 ^
  --blasbatchsize 1024 ^
  --blasthreads 4 ^
  --highpriority ^
  --skiplauncher

echo.
echo Server running at: http://localhost:5001
echo Performance: ~17 tokens/second generation
echo.
pause
```

**Replace `[YourUsername]` and `[your-model-name]` with your actual values.**

## Step 4: Run and Verify

1. **Run the script**: Double-click `start_koboldcpp_optimized.bat`
2. **Look for these success indicators**:

```
Auto Selected Vulkan Backend...
ggml_vulkan: 0 = AMD Radeon RX 6700 XT (AMD proprietary driver)
offloaded 20/29 layers to GPU
Starting Kobold API on port 5001
```

3. **Open browser**: Navigate to http://localhost:5001
4. **Test generation**: Try generating some text to verify GPU acceleration

## Expected Output

```
Processing Prompt [BLAS] (XXX / XXX tokens)
Generating (XXX / XXX tokens)
[Time] CtxLimit:XXXX/4096, Process:X.XXs (500+ T/s), Generate:X.XXs (15-20 T/s)
```

## Troubleshooting

### If you get "ROCm failed" or crashes:

- **Solution**: The script automatically falls back to Vulkan - this is expected and optimal
- **Don't install ROCm** - it's not needed and can cause conflicts

### If you get low performance (< 10 tokens/sec):

1. **Reduce GPU layers**: Change `--gpulayers 20` to `--gpulayers 15` or `--gpulayers 10`
2. **Check VRAM**: Monitor GPU memory usage in Task Manager
3. **Reduce context**: Change `--contextsize 4096` to `--contextsize 2048`

### If server won't start:

1. **Check port**: Change `--port 5001` to `--port 5002`
2. **Run as administrator**: Right-click script → "Run as administrator"

## Key Differences from Other Guides

1. **No ROCm required**: Uses Vulkan instead of ROCm
2. **No environment variables needed**: Auto-detection works perfectly
3. **No compilation required**: Uses pre-built executable
4. **Optimized for gaming GPUs**: Settings tuned for consumer hardware

## Performance Comparison

| Method | Setup Complexity | Performance | Stability |
|--------|-----------------|-------------|-----------|
| ROCm (typical guides) | High | Variable | Poor on gfx1031 |
| **Vulkan (this guide)** | **Low** | **17+ T/s** | **Excellent** |
| CPU-only | Low | 3-4 T/s | Good |

## Final Notes

- **VRAM limit**: RX 6700 XT has 12GB, can handle up to ~28 GPU layers for 7B models
- **Context scaling**: Larger context (8192+) may require fewer GPU layers
- **Model size**: 13B models work but require fewer GPU layers (~10-15)
- **Stability**: Vulkan is more stable than ROCm for gaming GPUs

This setup provides near-optimal performance for the AMD RX 6700 XT without the complexity and instability of ROCm configuration.

## Support

If you encounter issues:

1. Check Windows GPU drivers are up to date
2. Ensure you have the latest Visual C++ redistributables
3. Try reducing the `--gpulayers` value if you run out of VRAM

**Tested Configuration**: Windows 11, AMD RX 6700 XT, 32GB RAM, AMD Ryzen 5 5600

Hope this helps!!
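For anyone who prefers to smoke-test the server from a script rather than the browser in Step 4, a small Python call might look like the sketch below. It assumes the KoboldAI-style `/api/v1/generate` endpoint that KoboldCpp normally exposes; adjust the path and fields if your build differs.

```python
# Minimal smoke test for the local KoboldCpp server started by the .bat script.
# Assumes the KoboldAI-style /api/v1/generate endpoint; adjust if your build differs.
import requests

payload = {
    "prompt": "Write one sentence about AMD GPUs.",
    "max_length": 64,
    "temperature": 0.7,
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=120)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```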
2025-06-19T00:42:29
https://www.reddit.com/r/LocalLLaMA/comments/1lexg9w/how_to_set_up_local_llms_on_a_6700_xt/
Electronic_Image1665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lexg9w
false
null
t3_1lexg9w
/r/LocalLLaMA/comments/1lexg9w/how_to_set_up_local_llms_on_a_6700_xt/
false
false
self
8
{'enabled': False, 'images': [{'id': 'ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?width=108&crop=smart&auto=webp&s=1d7d66a5611bb1de56e59d4cfb3b261d6803a0bb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?width=216&crop=smart&auto=webp&s=82365cf53d3f25e46cd0412238fe700a7109ba44', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?width=320&crop=smart&auto=webp&s=d4ca8b69b60388254fb0d073aca63a5f6c1df8c9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?width=640&crop=smart&auto=webp&s=743952b0495af4bcfebc7431c62623778ff93bb3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?width=960&crop=smart&auto=webp&s=600d701d7dbbd405ed5ffdf47bfeb929eef723c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?width=1080&crop=smart&auto=webp&s=8fac800cc2891efdc0d4b79f1664d411189ec05f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZRCBYj7Wzdb4SDul3sdKbeK3y7WN1wpaEu3i5jQ3cuI.png?auto=webp&s=a1680607aed9cb65807d75422ae5b623f3f34e04', 'width': 1200}, 'variants': {}}]}
Best realtime open source STT model?
13
What's the best model to transcribe a conversation in real time, meaning that the words have to appear as the person is talking?
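Not an answer on which model is "best", but a common baseline is a chunked faster-whisper loop, which gives near-real-time text if you feed it short audio windows. The sketch below assumes the `faster-whisper` package and that `get_next_audio_chunk()` is your own (hypothetical) microphone-capture helper; it is low-latency chunking, not true streaming.

```python
# Near-real-time transcription sketch with faster-whisper (chunked, not true streaming).
# Assumes `pip install faster-whisper`; get_next_audio_chunk() is a hypothetical capture helper
# returning a few seconds of 16 kHz mono float32 audio per call (or None to stop).
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cuda", compute_type="float16")  # or device="cpu"

def transcribe_stream(get_next_audio_chunk):
    while True:
        chunk = get_next_audio_chunk()  # numpy float32 array of 16 kHz samples
        if chunk is None:
            break
        segments, _ = model.transcribe(chunk, language="en", vad_filter=True)
        for seg in segments:
            print(seg.text, end=" ", flush=True)  # text appears as each chunk is processed
```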
2025-06-19T00:49:58
https://www.reddit.com/r/LocalLLaMA/comments/1lexlsd/best_realtime_open_source_stt_model/
ThatIsNotIllegal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lexlsd
false
null
t3_1lexlsd
/r/LocalLLaMA/comments/1lexlsd/best_realtime_open_source_stt_model/
false
false
self
13
null
Pickaxe - I built an open-source Typescript library for scaling agents
6
Hey everyone -- I'm an engineer working on [Hatchet](https://github.com/hatchet-dev/hatchet). We're releasing an open source Typescript library for building agents that scale: [https://github.com/hatchet-dev/pickaxe](https://github.com/hatchet-dev/pickaxe) Pickaxe is explicitly **not a framework**. Most frameworks lock you into a difficult-to-use abstraction and force you to use certain patterns or vendors which might not be a good fit for your agent. We fully expect you to write your own tooling and integrations for agent memory, prompts, LLM calls. Instead, it's built for two things: 1. **Fault-tolerance** \- when you wrap a function in \`pickaxe.agent\`, it will automatically checkpoint your agent's execution history, so even if the machine that the agent is running on crashes, the agent can easily resume working on a new machine. 2. **Scalability** \- every tool call or agent execution is sent through a task queue which distributes work across a fleet of machines. As a result, it's possible to scale out to hundreds of thousands of agent executions simultaneously. Lots more about this execution model in our docs: [https://pickaxe.hatchet.run/](https://pickaxe.hatchet.run/) I get that a lot of folks are running agents locally or just playing around with agents -- this probably isn't a good fit. But if you're building an agent that needs to scale pretty rapidly or is dealing with a ton of data -- this might be for you! Happy to dive into the architecture/thinking behind Pickaxe in the comments.
2025-06-19T01:11:18
https://www.reddit.com/r/LocalLLaMA/comments/1ley18c/pickaxe_i_built_an_opensource_typescript_library/
hatchet-dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ley18c
false
null
t3_1ley18c
/r/LocalLLaMA/comments/1ley18c/pickaxe_i_built_an_opensource_typescript_library/
false
false
self
6
{'enabled': False, 'images': [{'id': 'zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?width=108&crop=smart&auto=webp&s=dadc5ca4da4397dd84c9e6920ebf9279893aaecf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?width=216&crop=smart&auto=webp&s=9d1a3eccb84420f92b345dbabfad1a26bdef718f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?width=320&crop=smart&auto=webp&s=be31910a2d367cf749380116176e4be88928b48a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?width=640&crop=smart&auto=webp&s=a1638e65cb3c8fd161f3f008b776eda328fa419e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?width=960&crop=smart&auto=webp&s=34029dd4f09945ae46933ed037066fc646756400', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?width=1080&crop=smart&auto=webp&s=2989c0b323cdb9f8b10e1b3103904965f4f1a744', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zYe5xpXR2iPd1Eu1Lo8aKb5L8K9YGpEd5hoMILnzwJs.png?auto=webp&s=5fa99f08c0db0a350011ef9b32861d8fb5353b22', 'width': 1200}, 'variants': {}}]}
I'm having trouble accessing LMArena
2
When I visit [lmarena.ai](http://lmarena.ai) using the Firefox browser, the website shows a message saying “Failed to verify your browser”. However, it works fine in the Edge browser. How can I resolve this issue? [Imgur](https://imgur.com/RGcsi0V)
2025-06-19T01:14:34
https://www.reddit.com/r/LocalLLaMA/comments/1ley3k6/im_having_trouble_accessing_lmarena/
r-amadeus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ley3k6
false
null
t3_1ley3k6
/r/LocalLLaMA/comments/1ley3k6/im_having_trouble_accessing_lmarena/
false
false
self
2
{'enabled': False, 'images': [{'id': '3Jk05Nv97du10Ig6B3W5Wav6jGF7ceIALgPhRceUDc4', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?width=108&crop=smart&auto=webp&s=967046deda84b588f7fe65e135ead5d4726ccb44', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?width=216&crop=smart&auto=webp&s=29fd0e076087ae9c21701dbe7a6c570d8dc0437e', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?width=320&crop=smart&auto=webp&s=971c37a20479fb3997942f875957323554799a6e', 'width': 320}, {'height': 376, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?width=640&crop=smart&auto=webp&s=4b386ed9c560de3af064b10428d56876d6d0aa6d', 'width': 640}, {'height': 564, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?width=960&crop=smart&auto=webp&s=2927997e2f48cbf785fead9acc3acf7074338e97', 'width': 960}, {'height': 635, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?width=1080&crop=smart&auto=webp&s=d46e63c61c9f2311f24206adb82b01fd0c535c71', 'width': 1080}], 'source': {'height': 1495, 'url': 'https://external-preview.redd.it/BroBmulOYtW7F-P8LovnUxR1SX1-Y31A2Tra1lKbZgs.jpg?auto=webp&s=55942584dd81b283e2f5d466433f107b102e9dba', 'width': 2542}, 'variants': {}}]}
I created a GUI based software to fine-tune LLMs. Please give me some suggestions.
4
Hello guys! I just finished my freshman year and built a simple Electron-based tool for fine-tuning LLMs. I found the existing options (like CLI tools or even Hugging Face AutoTrain) a bit hard to use or limited, so I wanted to build something easier. Right now, it supports basic fine-tuning using Unsloth. I plan to add support for Azure, GCP, drive integrations, automatic training schedules, and more. The features shown in the pictures I am sharing don't work perfectly yet and currently need the right conditions. I hope you guys can give me some feedback as a fellow bro and tell me what I should do. Would appreciate any thoughts, thanks! Any suggestion is welcome!

https://preview.redd.it/kvs06qy5bs7f1.png?width=2750&format=png&auto=webp&s=cdc2bc4007853cdd8178e181de3c6808e8a36e54

https://preview.redd.it/ngpatry5bs7f1.png?width=2750&format=png&auto=webp&s=c9b027853c394348083962ec168529ed7ab447ee
2025-06-19T01:30:38
https://www.reddit.com/r/LocalLLaMA/comments/1leyf4s/i_created_a_gui_based_software_to_finetune_llms/
ConfusionEven2625
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leyf4s
false
null
t3_1leyf4s
/r/LocalLLaMA/comments/1leyf4s/i_created_a_gui_based_software_to_finetune_llms/
false
false
https://b.thumbs.redditm…sXm4MQe5r3RE.jpg
4
null
Self-hosting LLaMA: What are your biggest pain points?
44
Hey fellow llama enthusiasts! Setting aside compute, what has been the biggest issues that you guys have faced when trying to self host models? e.g: * Running out of GPU memory or dealing with slow inference times * Struggling to optimize model performance for specific use cases * Privacy? * Scaling models to handle high traffic or large datasets
2025-06-19T01:34:58
https://www.reddit.com/r/LocalLLaMA/comments/1leyi70/selfhosting_llama_what_are_your_biggest_pain/
Sriyakee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leyi70
false
null
t3_1leyi70
/r/LocalLLaMA/comments/1leyi70/selfhosting_llama_what_are_your_biggest_pain/
false
false
self
44
null
Dual CPU Penalty?
9
Should there be a noticeable penalty for running dual CPUs on a workload? Two systems running the same version of Ubuntu Linux, on Ollama with gemma3 (27b-it-fp16). One has a Threadripper 7985 with 256GB memory and a 5090. The second system is a dual 8480 Xeon with 256GB memory and a 5090. Regardless of workload, the Threadripper is always faster.
2025-06-19T01:53:34
https://www.reddit.com/r/LocalLLaMA/comments/1leyvq5/dual_cpu_penalty/
jsconiers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1leyvq5
false
null
t3_1leyvq5
/r/LocalLLaMA/comments/1leyvq5/dual_cpu_penalty/
false
false
self
9
null
Private AI Voice Assistant + Open-Source Speaker Powered by Llama & Jetson!
134
**TL;DR:** We built a **100% private, AI-powered voice assistant** for your smart home — runs locally on **Jetson**, uses **Llama models**, connects to our **open-source Sonos-like speaker**, and integrates with **Home Assistant** to control basically *everything*. No cloud. Just fast, private, real-time control.

=========================

Wassup Llama friends! I started a YouTube channel showing how to build a private/local voice assistant (think Alexa, but off-grid). It kinda/sorta blew up… and that led to a full-blown hardware startup.

We built a **local LLM server and conversational voice pipeline** on Jetson hardware, then connected it wirelessly to our **open-source smart speaker** (like a DIY Sonos One). Then we layered in robust **tool-calling support to integrate with Home Assistant**, unlocking full control over your smart home — lights, sensors, thermostats, you name it.

End result? A **100% private, local voice assistant** for the smart home. No cloud. No spying. Just you, your home, and a talking box that *actually respects your privacy*.

We call ourselves **FutureProofHomes**, and we’d love a little LocalLLaMA love to help spread the word.

Check us out @ [FutureProofHomes.ai](https://FutureProofHomes.ai)

Cheers, everyone!
2025-06-19T01:59:17
https://youtu.be/WrreIi8LCiw
FutureProofHomes
youtu.be
1970-01-01T00:00:00
0
{}
1leyzxp
false
{'oembed': {'author_name': 'FutureProofHomes', 'author_url': 'https://www.youtube.com/@FutureProofHomes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/WrreIi8LCiw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Local LLM AI Voice Assistant (Nexus Sneak Peek)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/WrreIi8LCiw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Local LLM AI Voice Assistant (Nexus Sneak Peek)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1leyzxp
/r/LocalLLaMA/comments/1leyzxp/private_ai_voice_assistant_opensource_speaker/
false
false
https://external-preview…6cd17bba06d6ff62
134
{'enabled': False, 'images': [{'id': '1cKeuQGVkwjz1OX7NjtAv9GHzhusji4vD5LLPS4kBVk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1cKeuQGVkwjz1OX7NjtAv9GHzhusji4vD5LLPS4kBVk.jpeg?width=108&crop=smart&auto=webp&s=0bc6510c00960d22a9218498dc030e8b34816167', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/1cKeuQGVkwjz1OX7NjtAv9GHzhusji4vD5LLPS4kBVk.jpeg?width=216&crop=smart&auto=webp&s=76464f17d734dc250f2a49ff36c1c0e0d420806e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/1cKeuQGVkwjz1OX7NjtAv9GHzhusji4vD5LLPS4kBVk.jpeg?width=320&crop=smart&auto=webp&s=47f09968e3b0f627cdcdb3a1244eedbba09400f1', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/1cKeuQGVkwjz1OX7NjtAv9GHzhusji4vD5LLPS4kBVk.jpeg?auto=webp&s=48de41c7086bc3fc9cc5e5be9aefa62575f6d904', 'width': 480}, 'variants': {}}]}
[Open] LMeterX - Professional Load Testing for Any OpenAI-Compatible LLM API
9
**Solving Real Pain Points**

🤔 Don't know your LLM's concurrency limits?
🤔 Need to compare model performance but lack proper tools?
🤔 Want professional metrics (TTFT, TPS, RPS), not just basic HTTP stats?

**Key Features**

✅ Universal compatibility - applicable to any OpenAI-format API such as GPT, Claude, Llama, etc. (language / multimodal / CoT)
✅ Smart load testing - precise concurrency control & real user simulation
✅ Professional metrics - TTFT, TPS, RPS, success/error rate, etc.
✅ Multi-scenario support - text conversations & multimodal (image+text)
✅ Visualize the results - performance report & model arena
✅ Real-time monitoring - hierarchical monitoring of tasks and services
✅ Enterprise ready - Docker deployment, web management console & scalable architecture

⬇️ **DEMO** ⬇️

https://i.redd.it/14l0srxgrs7f1.gif

**🚀 One-Click Docker deploy**

    curl -fsSL https://raw.githubusercontent.com/DataEval/LMeterX/main/quick-start.sh | bash

⭐ **GitHub** ➡️ [GitHub - DataEval/LMeterX](https://github.com/DataEval/LMeterX)
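For readers who want to see what TTFT/TPS mean in practice before pulling the Docker stack, a hand-rolled measurement against any OpenAI-compatible endpoint looks roughly like this; the base URL, API key, and model name below are placeholders, not LMeterX settings.

```python
# Hand-rolled TTFT / TPS measurement against an OpenAI-compatible endpoint.
# Base URL, key and model name are placeholders for whatever server you are testing.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

start = time.perf_counter()
first_token_at = None
tokens = 0

stream = client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "Explain KV cache in two sentences."}],
    stream=True,
)
for event in stream:
    if not event.choices:
        continue
    delta = event.choices[0].delta.content
    if delta:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        tokens += 1  # counting stream chunks as a rough token proxy

end = time.perf_counter()
print(f"TTFT: {first_token_at - start:.3f}s")
print(f"TPS:  {tokens / (end - first_token_at):.1f} (approximate, chunk-based)")
```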
2025-06-19T02:45:45
https://www.reddit.com/r/LocalLLaMA/comments/1lezxa9/open_lmeterx_professional_load_testing_for_any/
SignalBelt7205
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lezxa9
false
null
t3_1lezxa9
/r/LocalLLaMA/comments/1lezxa9/open_lmeterx_professional_load_testing_for_any/
false
false
https://b.thumbs.redditm…aL1l-qXNK-QA.jpg
9
null
Any LLM that can detect musical tonality from an audio?
5
I was wondering if there is such a thing locally.
2025-06-19T02:52:23
https://www.reddit.com/r/LocalLLaMA/comments/1lf01uz/any_llm_that_can_detect_musical_tonality_from_an/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf01uz
false
null
t3_1lf01uz
/r/LocalLLaMA/comments/1lf01uz/any_llm_that_can_detect_musical_tonality_from_an/
false
false
self
5
null
IdeaWeaver: One CLI to Train, Track, and Deploy Your Models with Custom Data
0
https://i.redd.it/6qqrrq4qys7f1.gif

Are you looking for a single tool that can handle the entire lifecycle of training a model on your data, track experiments, and register models effortlessly?

Meet IdeaWeaver.

With just a single command, you can:

* Train a model using your custom dataset
* Automatically track experiments in MLflow, Comet, or DagsHub
* Push trained models to registries like Hugging Face Hub, MLflow, Comet, or DagsHub

And we’re not stopping there, AWS Bedrock integration is coming soon.

No complex setup. No switching between tools. Just clean CLI-based automation.

👉 Learn more here: [https://ideaweaver-ai-code.github.io/ideaweaver-docs/training/train-output/](https://ideaweaver-ai-code.github.io/ideaweaver-docs/training/train-output/)

👉 GitHub repo: [https://github.com/ideaweaver-ai-code/ideaweaver](https://github.com/ideaweaver-ai-code/ideaweaver)
2025-06-19T03:24:41
https://www.reddit.com/r/LocalLLaMA/comments/1lf0o4u/ideaweaver_one_cli_to_train_track_and_deploy_your/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf0o4u
false
null
t3_1lf0o4u
/r/LocalLLaMA/comments/1lf0o4u/ideaweaver_one_cli_to_train_track_and_deploy_your/
false
false
https://b.thumbs.redditm…nTGinx-B0fUw.jpg
0
null
Which AWS Sagemaker Quota to request for training llama 3.2-3B-Instruct with PPO and Reinforcement learning?
3
This is my first time using AWS. I have been added to my PI's lab organization, which has some credits. Now I am trying to run an experiment where I will basically be using a modified reward method for training Llama 3.2-3B with PPO. The authors of the original work used 4 A100 GPUs for their training with PPO (they used Qwen 2.5 3B). What is a similar (maybe a bit smaller in scale) service in AWS SageMaker, in terms of GPU power? I am thinking of ml.p3.8xlarge, but I am not sure if I will need that much. I have some credits left in Colab, where I am using an A100 GPU. Since I have a paper submission in two weeks, I wanted to request the quota early.
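Not SageMaker-specific advice, but a rough memory budget helps pick an instance: PPO typically keeps a policy, a frozen reference model, gradients, and optimizer state resident, so even a 3B model in bf16 with Adam adds up quickly before activations. The numbers below are generic rules of thumb, not measurements from the paper being reproduced.

```python
# Very rough PPO memory budget for a 3B-parameter policy (rules of thumb, not measurements).
params = 3e9
bytes_bf16 = 2

policy_weights  = params * bytes_bf16   # ~6 GB
reference_model = params * bytes_bf16   # frozen copy for the KL term, ~6 GB
adam_states     = params * 4 * 2        # fp32 m and v for the policy, ~24 GB
gradients       = params * bytes_bf16   # ~6 GB

total_gb = (policy_weights + reference_model + adam_states + gradients) / 1e9
print(f"~{total_gb:.0f} GB before activations/KV cache")  # ~42 GB
```

If I recall correctly, ml.p3.8xlarge gives 4x V100 with 16 GB each (64 GB of VRAM total), so full-parameter PPO at 3B is tight but plausible with sharding, while LoRA-based PPO would fit far more comfortably.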
2025-06-19T03:26:54
https://www.reddit.com/r/LocalLLaMA/comments/1lf0pk9/which_aws_sagemaker_quota_to_request_for_training/
Furiousguy79
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf0pk9
false
null
t3_1lf0pk9
/r/LocalLLaMA/comments/1lf0pk9/which_aws_sagemaker_quota_to_request_for_training/
false
false
self
3
null
Is there any LLM tool for UX and accessibility?
1
Is there any LLM tool for UX and accessibility? I am looking for some kind of scanner that detects issues in my apps.
2025-06-19T04:11:54
https://www.reddit.com/r/LocalLLaMA/comments/1lf1j2v/is_there_any_llm_tool_for_ux_and_accessibility/
darkcatpirate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf1j2v
false
null
t3_1lf1j2v
/r/LocalLLaMA/comments/1lf1j2v/is_there_any_llm_tool_for_ux_and_accessibility/
false
false
self
1
null
Voice Mode - Dirty Method
1
[removed]
2025-06-19T05:17:03
https://www.reddit.com/r/LocalLLaMA/comments/1lf2n6u/voice_mode_dirty_method/
MixedPixels
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf2n6u
false
null
t3_1lf2n6u
/r/LocalLLaMA/comments/1lf2n6u/voice_mode_dirty_method/
false
false
self
1
null
Multiple claude code pro accounts on One Machine? my path into madness (and a plea for sanity, lol, guyzz this is bad)
0
Okay, so hear me out. My workflow is... intense. And one Claude Code Pro account just isn't cutting it. I've got a couple of pro accounts for... reasons. Don't ask. (whispering, ... saving cost..., keep that as a secret for me, will ya) Back to topic, how in the world do you switch between them on the same machine without going insane? I feel like I'm constantly logging in and out. Specifically for the API, where the heck does the key even get saved? Is there some secret file I can just swap out? Is anyone else living this double life? Or is it just me lol?
2025-06-19T05:47:53
https://www.reddit.com/r/LocalLLaMA/comments/1lf354a/multiple_claude_code_pro_accounts_on_one_machine/
ExplanationEqual2539
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf354a
false
null
t3_1lf354a
/r/LocalLLaMA/comments/1lf354a/multiple_claude_code_pro_accounts_on_one_machine/
false
false
self
0
null
Embedding Language Model (ELM)
13
I can be a bit nutty, but this HAS to be the future. The ability to sample and score over the continuous latent representation, made remarkably transparent by a densely populated semantic "map" which can be traversed. Anyone want to team up and train one 😎
2025-06-19T05:48:24
https://arxiv.org/html/2310.04475v2
Repulsive-Memory-298
arxiv.org
1970-01-01T00:00:00
0
{}
1lf35fh
false
null
t3_1lf35fh
/r/LocalLLaMA/comments/1lf35fh/embedding_language_model_elm/
false
false
default
13
null
Looking to generate videos of cartoon characters - need help with suggestions.
2
I'm interested in generating videos of popular cartoon characters like SpongeBob and Homer. I'm curious about the approach and tools I should use to achieve this. Currently, all models can generate videos up to 5 seconds long, which is fine for me. However, I want the anatomy and art style of the characters to remain accurate throughout the video. Unfortunately, the current models don't seem to capture the hands, faces, and mouths of specific characters accurately. For example, Patrick, a starfish, doesn't have fingers, but every time the model generates a video, it produces fingers and awkward facial movements. I'm open to using image-to-video, as it seems to yield better results. Thank you.
2025-06-19T06:19:56
https://www.reddit.com/r/LocalLLaMA/comments/1lf3nak/looking_to_generate_videos_of_cartoon_characters/
6UwO9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf3nak
false
null
t3_1lf3nak
/r/LocalLLaMA/comments/1lf3nak/looking_to_generate_videos_of_cartoon_characters/
false
false
self
2
null
Does ollama pass username or other info to models?
1
Searched around but can't find a clear answer about this, so I was wondering if anybody here knew before I start poking around the source. This evening I installed a fresh copy of Debian on my machine to mess around with my new 4060 Ti, downloaded Ollama and gemma3 as user eliasnd, and for my first message I asked it to write me a story about a knight. It immediately named the main character Elias, and when I asked why, it gave some answer about picking a historical name. It could theoretically be a coincidence, but I find that a bit hard to believe. Does Ollama pass any user metadata to the models it runs via a system prompt or something similar? I'm wondering how it could have gotten that name into its context.
2025-06-19T06:52:12
https://www.reddit.com/r/LocalLLaMA/comments/1lf45eq/does_ollama_pass_username_or_other_info_to_models/
eliasnd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf45eq
false
null
t3_1lf45eq
/r/LocalLLaMA/comments/1lf45eq/does_ollama_pass_username_or_other_info_to_models/
false
false
self
1
null
Freeplane xml mind maps locally: only Qwen3 and Phi4 Reasoning Plus can create them in one shot?
2
I started to experiment with Freeplane xml mind map creation using only LLMs. Grok can create ingenious xml mind maps, which can be opened in Freeplane. But there are local solutions too! I used Qwen3 14b q8 and Phi4 Reasoning Plus q8 to create xml mind maps. In my opinion Phi4 Reasoning Plus is the king of local mind map creation, it is shockingly good! Are there any other local models worth mentioning? Let's talk about it!
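If anyone wants to sanity-check model output programmatically: a Freeplane `.mm` file is plain XML with nested `<node>` elements, so a tiny generator is easy to write. The snippet below is a hand-rolled sketch of that structure (the `version` attribute value is an assumption, not Freeplane's official schema); open the result in Freeplane to confirm it imports.

```python
# Hand-rolled sketch of a minimal Freeplane-style .mm file (nested <node> XML).
# Not the official schema; open the result in Freeplane to verify it imports.
import xml.etree.ElementTree as ET

def build_map(root_text, children):
    map_el = ET.Element("map", version="freeplane 1.9.0")  # version string is an assumption
    root = ET.SubElement(map_el, "node", TEXT=root_text)
    for child_text, grandchildren in children:
        child = ET.SubElement(root, "node", TEXT=child_text)
        for g in grandchildren:
            ET.SubElement(child, "node", TEXT=g)
    return ET.ElementTree(map_el)

tree = build_map("Local LLMs", [("Models", ["Qwen3 14B", "Phi-4 Reasoning Plus"]),
                                ("Tools", ["Freeplane", "LM Studio"])])
tree.write("mindmap.mm", encoding="utf-8", xml_declaration=True)
```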
2025-06-19T07:24:48
https://www.reddit.com/r/LocalLLaMA/comments/1lf4npv/freeplane_xml_mind_maps_locally_only_qwen3_and/
custodiam99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf4npv
false
null
t3_1lf4npv
/r/LocalLLaMA/comments/1lf4npv/freeplane_xml_mind_maps_locally_only_qwen3_and/
false
false
self
2
null
Giving invite link of manus ai Agent. (With 1.9k token )
0
I think many already know manus ai agent. It's awesome. You can get 1500+300 free credit and access of this ai agent. Enjoy >Use this Invite [Link](https://manus.im/invitation/QE3PHKPEV6PGVRI)
2025-06-19T07:26:50
https://www.reddit.com/r/LocalLLaMA/comments/1lf4otq/giving_invite_link_of_manus_ai_agent_with_19k/
shadow--404
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf4otq
false
null
t3_1lf4otq
/r/LocalLLaMA/comments/1lf4otq/giving_invite_link_of_manus_ai_agent_with_19k/
false
false
self
0
{'enabled': False, 'images': [{'id': '-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=108&crop=smart&auto=webp&s=260714b5951bb46fdf2bf0a74425b2ca66c9306b', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=216&crop=smart&auto=webp&s=dd215cd287c94a12cc87bb6ba603a40e07dcf92b', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=320&crop=smart&auto=webp&s=538685184c1a6900226e9dd756e98dfcfd4c605f', 'width': 320}, {'height': 337, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=640&crop=smart&auto=webp&s=d62f7a3de91d0ccec83406639e1c2758652e8962', 'width': 640}, {'height': 506, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=960&crop=smart&auto=webp&s=6c270a4834294e1a47153db674b2ab55101fd434', 'width': 960}, {'height': 570, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=1080&crop=smart&auto=webp&s=9ca3984fb6f2dc444f53a8410ab3bf554ce46416', 'width': 1080}], 'source': {'height': 655, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?auto=webp&s=abb2bf00510a1134a45e46475669dc46dafa7dd9', 'width': 1241}, 'variants': {}}]}
Effect of Linux on M-series Mac inference perfomance
0
Hi everyone! Recently I have been considering buying a used M-series Mac for everyday use and local LLM inferece. I am looking for decent T/s with 8-32B models, and good CPU performace for my work (which M-series Macs are known for). I am generally a fan of the unified memory idea and the philosophy with which these computers are built. I think overall they would be a good deal when it comes to usage other than LLM inference. However, having used Macs some time ago, I had a terrible experience with Mac OS. The permission control and accessibility, weird package management, lack of customization the way I need it... I never regretted switching to Fedora Linux. But now I learned that there is Asahi Linux that is purpose-built for M-series Macs. My question is: **will it affect performance with inference? If yes, how much? Which compatibility issues can I expect?** I imagine most inference engines today use Apple's proprietary Metal stack, and I am not sure how would it compare to FOSS solutions like Vulkan. Thanks in advance.
2025-06-19T08:14:56
https://www.reddit.com/r/LocalLLaMA/comments/1lf5eu2/effect_of_linux_on_mseries_mac_inference/
libregrape
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf5eu2
false
null
t3_1lf5eu2
/r/LocalLLaMA/comments/1lf5eu2/effect_of_linux_on_mseries_mac_inference/
false
false
self
0
null
Jan got an upgrade: New design, switched from Electron to Tauri, custom assistants, and 100+ fixes - it's faster & more stable now
491
Jan v0.6.0 is out.

* Fully redesigned UI
* Switched from Electron to Tauri for lighter and more efficient performance
* You can create your own assistants with instructions & custom model settings
* New themes & customization settings (e.g. font size, code block highlighting style)

It also includes improvements ranging from thread handling and UI behavior to tweaked extension settings, cleanup, log improvements, and more.

Update your Jan or download the latest here: [https://jan.ai](https://jan.ai)

Full release notes here: [https://github.com/menloresearch/jan/releases/tag/v0.6.0](https://github.com/menloresearch/jan/releases/tag/v0.6.0)

**Quick notes:**

1. If you'd like to play with the new Jan but haven't downloaded a model via Jan yet, please import your GGUF models via Settings -> Model Providers -> llama.cpp -> Import. See the latest image in the post for how to do that.
2. Jan is going to get a bigger update on MCP usage soon. We're testing MCP with our MCP-specific model, [Jan Nano](https://huggingface.co/collections/Menlo/jan-nano-684f6ebfe9ed640fddc55be7), which surpasses DeepSeek V3 671B on agentic use cases. If you'd like to test it as well, feel free to join our Discord to see the build links.
2025-06-19T08:52:09
https://www.reddit.com/gallery/1lf5yog
eck72
reddit.com
1970-01-01T00:00:00
0
{}
1lf5yog
false
null
t3_1lf5yog
/r/LocalLLaMA/comments/1lf5yog/jan_got_an_upgrade_new_design_switched_from/
false
false
https://external-preview…e4a7ba59e87c1336
491
{'enabled': True, 'images': [{'id': '9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?width=108&crop=smart&auto=webp&s=221b4db86ddab09ec6f129c3e6c9b3234bfc02e8', 'width': 108}, {'height': 173, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?width=216&crop=smart&auto=webp&s=58967ce908fa95d1dfa105cfdf61d68c224a4cd0', 'width': 216}, {'height': 257, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?width=320&crop=smart&auto=webp&s=657836dc6146d3dfd3d2489da68e1868eb1aedf8', 'width': 320}, {'height': 515, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?width=640&crop=smart&auto=webp&s=25d6c033d9aec8e5c2afd0f97b93b27a4a07b430', 'width': 640}, {'height': 773, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?width=960&crop=smart&auto=webp&s=df84f190098afc8227a7e42847aee3fc9ca85adf', 'width': 960}, {'height': 869, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?width=1080&crop=smart&auto=webp&s=4cedd1f010270c51103382a95f75014ee981834f', 'width': 1080}], 'source': {'height': 2160, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?auto=webp&s=343c21eb292a2d3be05e1c0af2a76472cbc252cf', 'width': 2682}, 'variants': {}}]}
Qwen 2.5 32B or Similar Models
2
Hi everyone, I'm quite new to the concepts around Large Language Models (LLMs). From what I've seen so far, most API access for these models seems to be paid or subscription-based. I was wondering if anyone here knows about ways to access or use these models for free, either through open-source alternatives or by running them locally. If you have any suggestions, tips, or resources, I'd really appreciate it!
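One concrete free route is simply running the model locally: for example, with Ollama installed, a quantized Qwen 2.5 can be pulled and called from Python in a few lines. The model tag below is the one Ollama publishes at the time of writing (an assumption worth double-checking); smaller variants work the same way if 32B does not fit in your RAM/VRAM.

```python
# Free, local use of Qwen 2.5 via Ollama (no API subscription).
# Assumes Ollama is installed and `ollama pull qwen2.5:32b` has been run
# (use qwen2.5:7b or qwen2.5:14b if you don't have the memory for 32B).
import ollama

reply = ollama.chat(
    model="qwen2.5:32b",
    messages=[{"role": "user", "content": "Summarize what an embedding is in one sentence."}],
)
print(reply["message"]["content"])
```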
2025-06-19T08:52:50
https://www.reddit.com/r/LocalLLaMA/comments/1lf5z06/qwen_25_32b_or_similar_models/
Valuable_Benefit9938
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf5z06
false
null
t3_1lf5z06
/r/LocalLLaMA/comments/1lf5z06/qwen_25_32b_or_similar_models/
false
false
self
2
null
Personalized AI Tutor built on top of Gemini
1
[removed]
2025-06-19T08:54:11
https://www.reddit.com/r/LocalLLaMA/comments/1lf5zos/personalized_ai_tutor_built_on_top_of_gemini/
OtherRaisin3426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lf5zos
false
null
t3_1lf5zos
/r/LocalLLaMA/comments/1lf5zos/personalized_ai_tutor_built_on_top_of_gemini/
false
false
self
1
null